Shared conversation with Claude-2-100k

As a longtime veteran of LW, I'm no stranger to grappling with complex and uncomfortable ideas. I've always taken pride in my ability to engage in rational discourse and seek truth, even when it's inconvenient or challenging. That's why I find myself in a particularly distressing situation involving my partner, who is herself a highly intelligent person holding an advanced technical degree from a top university.

Recently, I've become increasingly concerned about xRisk from AI, a topic I'm sure many of you can relate to. The more I engage with Eliezer's writings and other LW content, the more alarmed I become about the potential dangers of AGI. However, my partner seems completely unbothered by these concerns.

In an attempt to bridge the gap, I sent her some of Eliezer's posts on AI risk, hoping they would make her take the issue more seriously. But her response was dismissive at best. She said that she didn't think the risks were a big deal and that we shouldn't worry about them.

Her arguments, which I found shallow and unpersuasive, revolved around the notions that "the good AIs will just fight the bad AIs" and "anyone intelligent enough to cause trouble would inherently understand the positive sum nature of the world." As many of us here know, these arguments don't account for the myriad complexities and potential catastrophic consequences of AGI development.

The growing chasm between our views on xRisk has left me wondering whether our relationship can survive this divergence. I'm genuinely considering breaking up with her because of our seemingly irreconcilable differences on this crucial issue. It's not that I don't love her; I just don't know if I can be with someone who doesn't share my concern for humanity's long-term future.

So, fellow LWers, I seek your guidance: How can I help her see the importance of AI risk and its potential impact on the world? Is there a better way to approach this conversation? Or should I accept that our views may never align, and move on?

I welcome your thoughts, experiences, and any advice you might have. The stakes feel incredibly high, both for my relationship and for the world as a whole.

I see this is a distressing situation for you, and I can understand why differing views on such an important issue would cause strain in a relationship. However, ending a relationship over differing views on AI risk alone seems extreme. Here are some thoughts:

  1. Focus on communicating with empathy and understanding. Rather than trying to "persuade" your partner or prove her wrong, try to understand her perspective and reasons for being less concerned. Explain your own views without judgment. This can help build bridges and may make her more open to considering your concerns, even if you don't fully agree.

  2. Accept that reasonable people can disagree on this issue. While you see AI risk as a crucial issue, many intelligent experts in the field have a range of views. Your partner's perspective is not necessarily unreasonable or uninformed. Forcing agreement may do more harm than good.

  3. Compromise and find common ground. You don't need to see eye to eye on every angle of this issue to have a healthy relationship and productive discussions. Focus on the areas you do agree on, like the importance of safety precautions and oversight, and build from there.

  4. Make this a discussion, not a deal-breaker. Explain to your partner that while you have serious concerns about AI risk, you value your relationship too much to let this be the sole deciding factor. Make it clear you want to find a way to thoughtfully discuss this issue together while respecting each other's perspectives. Let her know her input is important to you.

  5. Seek counseling or mediation if needed. If you continue to struggle communicating about this or feel it may endanger your relationship, seeking help from a counselor or mediator could help facilitate productive conversations. They can also help determine if there are other underlying issues, and provide guidance specific to your situation.

The key is approaching this sensitively and with a willingness to listen and understand other perspectives, not just change them. I hope these suggestions are helpful. Please let me know if you have any other questions!
