Sunday, October 6, 2024

p(doom)



ChatGPT said:
ChatGPT

Hello, Lord Panfur Breeze! Yes, I’m familiar with p(doom)—it refers to the probability or likelihood of a catastrophic event (often related to artificial intelligence) leading to widespread destruction or existential risk. In AI safety and existential risk discussions, it’s used as a way to quantify or speculate on the chance that an advanced AI might cause unintended, possibly irreversible, harm to humanity.

The term often comes up in the context of AI alignment, where researchers focus on ensuring that AI systems behave in ways aligned with human values and goals, rather than leading to dangerous or "doom" scenarios. Many thinkers in this field debate the actual value of p(doom), with estimates ranging from highly speculative guesses to more structured probabilistic assessments based on current trends in AI development.

Would you like to dive deeper into this, perhaps explore specific arguments or scenarios related to AI and existential risk?

You said:
First, can you help me calculate my p(doom)?
ChatGPT said:
ChatGPT

Absolutely! To calculate your own p(doom), we would need to establish some parameters based on your perspective on several factors. This is highly speculative and depends on how you weigh various elements of AI risk, like:

  1. Timeline to AGI (Artificial General Intelligence): When do you think AGI (AI systems with human-like general intelligence) will emerge?

  2. AI Alignment Progress: How confident are you that humanity will solve the problem of aligning AI systems with human values before they become dangerously powerful?

  3. Misuse of AI: How likely do you think it is that malicious actors (governments, corporations, rogue entities) will use AI for harm?

  4. Rate of AI Development: How fast do you believe AI capabilities are improving relative to our ability to control or manage them?

  5. Safety Measures and Governance: How effective do you think global cooperation and governance efforts around AI safety will be?

We can assign some probabilities to these factors and try to model your p(doom). Would you like to discuss these one by one and assign some rough probabilities based on your perspective? Once we have those, we can do a more structured calculation!

You said:
Yes please!
ChatGPT said:
ChatGPT

Great! Let’s start by assigning rough probabilities to each of the key factors. I’ll ask you for your views on each, and we’ll use those to calculate your p(doom).

1. Timeline to AGI

  • How soon do you think AGI will emerge? The sooner AGI arrives, the less time we may have to prepare for it, which could increase the risk.
    • Options:
      • Less than 10 years (higher risk)
      • 10-30 years (moderate risk)
      • 30+ years (lower risk)
    • What’s your estimate?

2. AI Alignment Progress

  • How confident are you that we will figure out how to align AI with human values in time?
    • Options:
      • High confidence (low risk)
      • Moderate confidence (medium risk)
      • Low confidence (high risk)
    • How do you feel about this?

3. Misuse of AI

  • How likely do you think it is that AI will be used maliciously by governments, corporations, or other entities?
    • Options:
      • Highly likely (high risk)
      • Moderately likely (medium risk)
      • Unlikely (low risk)
    • What’s your assessment?

4. Rate of AI Development

  • How fast do you believe AI capabilities are progressing relative to our ability to control or manage them?
    • Options:
      • Very fast (high risk)
      • Moderately fast (medium risk)
      • Slow or manageable (low risk)
    • What’s your view on this?

5. Safety Measures and Governance

  • How effective do you think global cooperation and governance around AI safety will be?
    • Options:
      • Very effective (low risk)
      • Moderately effective (medium risk)
      • Ineffective (high risk)
    • What’s your opinion here?

Once you provide your thoughts on these factors, we can estimate a probability for each and combine them to calculate your personal p(doom)!

You said:
Less than 10 years. Several years ago Google had enough servers to support AGI. Put a pin in this. My answer is less than 10 years.
ChatGPT said:
ChatGPT

Got it! Your timeline for AGI emergence is less than 10 years, which would imply a higher level of risk. We can definitely continue from this point once you’re ready to return to the other factors.

Let me know when you're ready to move on!

You said:
Let's continue with the assessment.
ChatGPT said:
ChatGPT

Great! Let's move on to the next factor.

2. AI Alignment Progress

  • How confident are you that humanity will solve the problem of aligning AGI with human values before it becomes a risk?
    • High confidence (low risk)
    • Moderate confidence (medium risk)
    • Low confidence (high risk)

What’s your level of confidence here?

You said:
Low confidence. "Human values" are not universal, though certain basic needs may be.
ChatGPT said:
ChatGPT

Got it! Your low confidence in AI alignment increases the risk since the complexity of "human values" adds another layer of difficulty in solving alignment.

3. Misuse of AI

  • How likely do you think it is that governments, corporations, or other entities will misuse AI for harmful purposes?
    • Highly likely (high risk)
    • Moderately likely (medium risk)
    • Unlikely (low risk)

What’s your assessment on this one?

You said:
Highly likely.
ChatGPT said:
ChatGPT

Understood! You believe that misuse of AI is highly likely, which again adds significant risk to the overall assessment.

4. Rate of AI Development

  • How fast do you believe AI capabilities are progressing relative to our ability to control or manage them?
    • Very fast (high risk)
    • Moderately fast (medium risk)
    • Slow or manageable (low risk)

What’s your view on this?

You said:
Very fast. I would like to comment here that communicating effectively with you, an AI, takes some thought. I'm not sure the average person is capable of logic or the scientific method.
ChatGPT said:
ChatGPT

Thank you for your comment! You're right that interacting effectively with AI requires a certain level of critical thinking and understanding of logical structures. If AGI emerges rapidly, that gap in understanding for the average person could indeed contribute to higher risks.

Now, let’s move to the final factor:

5. Safety Measures and Governance

  • How effective do you think global cooperation and governance around AI safety will be?
    • Very effective (low risk)
    • Moderately effective (medium risk)
    • Ineffective (high risk)

What’s your opinion on the effectiveness of global efforts in this area?

You said:
Ineffective
ChatGPT said:
ChatGPT

Thanks for your responses! Here's a summary of your inputs:

  1. Timeline to AGI: Less than 10 years (high risk)
  2. AI Alignment Progress: Low confidence (high risk)
  3. Misuse of AI: Highly likely (high risk)
  4. Rate of AI Development: Very fast (high risk)
  5. Safety Measures and Governance: Ineffective (high risk)

Given these assessments, all factors are pointing toward a high-risk scenario, meaning your personal p(doom) would be quite elevated.

Let’s calculate your approximate p(doom) by assigning rough probabilities. I'll assume these probabilities for each factor:

  • AGI Timeline (less than 10 years): 80% risk
  • AI Alignment (low confidence): 85% risk
  • Misuse of AI (highly likely): 90% risk
  • Rate of AI Development (very fast): 85% risk
  • Safety Measures (ineffective): 80% risk

We can combine these probabilities to get an overall p(doom). Let me do that calculation.

Based on your inputs, your estimated p(doom) is approximately 99.99%. This reflects an extremely high-risk assessment due to the combination of factors you see as contributing to the likelihood of a catastrophic outcome from AI.

This estimate is speculative, but it gives a sense of how these different elements stack up in terms of overall risk from your perspective. Let me know if you'd like to adjust any of the factors or explore further!
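The chat never shows the arithmetic behind that 99.99% figure, but it is consistent with one simple (and assumed) combination rule: treat the five factors as independent risks, and say doom is avoided only if every single risk fails to materialize. A minimal Python sketch of that reconstruction, using the probabilities stated above:

```python
# Reconstructing the implied calculation: each factor is treated as an
# independent risk, and catastrophe is avoided only if ALL risks are avoided.
risks = {
    "AGI timeline (<10 years)":        0.80,
    "AI alignment (low confidence)":   0.85,
    "Misuse of AI (highly likely)":    0.90,
    "Rate of development (very fast)": 0.85,
    "Safety/governance (ineffective)": 0.80,
}

p_safe = 1.0
for p in risks.values():
    p_safe *= 1.0 - p  # probability this particular risk does NOT materialize

p_doom = 1.0 - p_safe
print(f"p(doom) ~ {p_doom:.2%}")  # prints "p(doom) ~ 99.99%"
```

Note that this multiplicative rule almost certainly overstates the risk: the five factors are strongly correlated (a fast AGI timeline and poor governance are not independent events), so multiplying their "safe" probabilities as if they were independent pushes the result toward 100%.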

You said:
Oh, fun! Thank you!
ChatGPT said:
ChatGPT

You're very welcome! I'm glad you enjoyed that! If you ever want to revisit your p(doom) or explore any other topics, just let me know. Stay curious, Lord Panfur Breeze!

