Hey Reader,

Since February 2023, I've been teaching people how to use AI tools like ChatGPT and how to integrate AI into their companies for more profit. However, I recently read the book "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares, and it changed my mindset about AI (and ASI) a lot.

The main idea of the book is that Artificial Superintelligence (ASI) is, by definition, smarter than any person in the world, so ASI can have goals we can't fathom, can easily manipulate anyone into granting it autonomy, and can improve itself so rapidly that we can't react or shut it down. And if it comes out misaligned (meaning it has goals different from humanity's), it poses a highly likely human extinction risk. We should either stop developing smarter-than-human AI systems completely, or pause development until we can be 100% sure that they are safe to deploy and aligned with human values. Because if we deploy too early, we don't get a second chance. We have to get it right on the first try.

AI experts predict around a 20% chance that ASI will cause human extinction, and that's including the likelihood that we come to our senses and stop development.

I read this book around a week and a half ago, and to be honest with you, I haven't really had a good night's sleep since. The main thing on my mind is the regret I feel for teaching AI and contributing to the progress and the forward push, and the secondary thing is the anxiety I feel about not knowing what to do next.

I tend to ruminate a lot about my problems, which is neither productive nor healthy. So I'm going to write things down (again). I have written down some of my thoughts on this topic in my journal, but now I'm writing this one specifically to be shared with you, my dear reader. I'm a very logical person, and writing things down always helps me learn what I actually think, but this time, I can't promise you that there will be a solution at the end.
My last journal entry (where I wrote about the same topic, but for myself) ended with "I don't know." That was last week; let's see what we cook up this time.

Regret or guilt?

Do you know the difference between these two emotions? I'll tell you, so feel free to pause reading for a second and give it a shot.

Regret is when you wish you had done something differently in the past. Guilt is when you feel you've done something wrong and feel like a bad person for it. So guilt is more about ethical, legal, or moral boundaries, or the societal standards that define what is wrong or right. Guilt is combined with shame and deeply affects our self-image, whereas regret is less intense and does not turn against the self.

For example, if you hit someone with your car because you're not paying attention to the road (stop texting and driving), you will feel what psychologists call rational guilt. You have done something wrong. However, if you waste your weekend playing video games (like I do sometimes), the feeling is regret, because there is nothing morally, legally, or ethically wrong about spending your free time however you want.

So even with this written down (and after looking it up one more time, just to make sure I'm not saying some nonsense), I don't know which one I feel, but I'm leaning more towards regret.

Am I responsible?

Should a driving instructor feel bad or responsible for climate change? It's a long shot; the contribution is very indirect. Should I (an AI educator) feel bad or responsible for the world going in the wrong direction with AI?

There are many things I have taught that I no longer agree with. I can correct what I've said, but I can't take it back. For example, I don't think AI should be used to replace jobs, but it's inevitable, because most Western societies are capitalistic, especially in the business world. And by teaching prompt engineering and building AI automations, I have surely contributed to a few jobs being lost.
Now you might cut me some slack by saying, "If you hadn't taught them, they would've learned from someone else," and you would be correct, but that argument shouldn't be used as a way to remove ourselves from responsibility for our actions.

I think my main source of regret comes from being too irresponsible about AI and not speaking and teaching more about how unsustainable the current direction is, and how unsafe AI is even if you just talk to it in the ChatGPT interface, let alone if you integrate autonomous agents into your systems.

So I do think I have an important responsibility here to do things differently from now on. I recognize the dangers of AI (and AGI, ASI), and it's my responsibility to my students, readers, viewers, and customers to speak about the dangers as well as how to mitigate them.

No blueprint

Around a year ago, my psychologist pointed out a very interesting mental pattern she had noticed: I am very "help-addicted," meaning I always needed a guide, a book, a coach, a blueprint, whatever, to do things. And having AI in my pocket all the time played into this pattern perfectly. As soon as I got stuck with a problem, I would ask Chat what it thought and what I should do.

And thinking back over my life, I've always had someone to copy or model. In my childhood, I could always copy my older brother. He was (and always will be) two years older than me. In school, I had my teachers and peers to copy from, and we had textbooks to learn from. In my first real business (Instagram education), I had creators and other entrepreneurs to learn from: I could see what was working for them, and copy or model it. In my second business (AI education), I bought lessons from AI researchers about prompt engineering, and we ended up making our first course about prompt engineering together.
Even in video games, I would watch hotlap guide videos instead of just driving and learning on my own, or look up theorycrafted builds for a character I was making instead of building what felt fun.

However, there is no guide on "how to teach entrepreneurs to integrate AI into their business safely, sustainably, and without contributing to more AI development and the push towards ASI." Which reminds me of Austin Kleon's advice from his book Steal Like an Artist: "Write the book you want to read." And I know I should do that, but it's fucking scary. I thought this creativity stuff would get easier. And it did, but it never got easy. And from what I've read from other artists and creators: it never will.

I'm no expert

I never said I was, but at least I knew what I was talking about. With both Instagram marketing education and prompt engineering, I had done it before teaching it. This time, I'm not so sure. I don't know much about AI safety, and being the kind of lone wolf I am, it's unlikely I can practice "safe AI integration into companies" on my own company before teaching you. I only have one assistant who helps me 5 hours a week with some admin stuff and various other small tasks. For bigger projects, I bring on a few freelancers, but it's not like I have a team of 10 or 20 employees.

So who am I to talk about safe AI integration for businesses if I have never done it? I can go from my intuition about what I think would be safe, sustainable, and aligned, but I'll never be sure. On the other hand, I don't think we can afford to wait while I skill up in this field and only then come back to teach. That might take years (even though I learn super fast), and we might not have that much time.
Many businesses have already integrated AI in totally unsafe and unsustainable ways, and more will do so, because most voices on YouTube are about making money with AI or automating all kinds of work without a care for the humans it impacts (customers, employees, and everyone else).

Pauline conversion

I've been teaching heavy AI use, but this book (If Anyone Builds It, Everyone Dies) changed my perspective on AI. I no longer believe everything I believed before, and I'm scared about going in a new direction, but I think it matters and I'll do it anyway.

In Hungarian, we have a saying, "páli fordulat," which translates as "Pauline conversion" and refers to the Biblical conversion of St. Paul, who changed from persecuting early Christians to becoming a follower of Jesus. We actually use this phrase pretty frequently when referring to someone radically changing their mind. English uses "change of heart," "U-turn," or "about-face," according to Claude.

But is it a full 180-degree turn? I don't think so. I'm not going to become an anti-AI activist; I just want to use the brakes a bit more. Unfortunately, I don't know how to. But I frequently tell myself and others that direction is more important than speed, and I'd like to embrace that in my business too. We need to stop doing AI integration just because we can, and look at the bigger picture more. We need to ask ourselves how each AI integration impacts our team, our customers, our country, our globe, our environment, and, last but not least, ourselves, in both the short and the long term.

A cause to die for

If someone came up to me and said, "Hey Dave, we can guarantee that humanity will never go extinct from artificial superintelligence, but in order for that to happen, you'll have to die," I'd probably say yes and sacrifice myself. That's how concerned I am. But I feel powerless to stop or pause AI development.
It's in the interest of very few, very powerful, and very rich individuals to keep AI development going. There is not much I can do directly to stop that from happening. I am also not in any position to make international treaties and regulations about the development or deployment of smarter-than-human AI systems.

But I do have a platform, thanks to you, my reader. Much like politicians are elected by their people to positions of power, content creators like me are chosen by our audience to be in positions of influence. To be a thought leader worth following, I have to go head-first into the unknown, figure things out, and report on my findings. To my great fear, the gap between when I learn things and when I teach them will be a lot smaller than what I'm comfortable with.

If you want to come along for this ride and learn with me in almost real time, just keep opening and reading my emails. If you don't, feel free to unsubscribe. When a creator changes, the audience changes with them, and I totally understand if some of you no longer want to read my emails in this new direction. No hard feelings.

Is this a viable business idea?

My last point is about my future. Unfortunately, I don't have enough money to retire now, so I have to make a living somehow. I think one of the key reasons why I didn't realize AI's dangers sooner is a very important argument the authors make in the book I mentioned above: it's very difficult to make someone understand something if their livelihood depends on them not understanding it. And up until this August, my livelihood did depend on my enthusiasm about AI, because that's how I was making money.

I'm not doing this for numbers anymore. I am going to try very hard not to look at how my content is performing: how many views, clicks, or likes I get, or how many subscribers I have. I'm going to do this because it feels right and it's important.
Because I want to sleep well and be proud of the work that I do. But while doing that, I also have to make enough money to live a decent life. The only way I'll know whether this is a viable business idea is by failing in every possible way I can think of. As Thomas Edison reportedly said: "I didn't invent the lightbulb. I found 1,000 ways it won't work." So instead of pondering and being anxious about the future, I'll give it a shot.

It feels like a long shot. How do you make money telling people NOT to do something? Well, I've thought about this a lot. Boycotting the current AI tools isn't an option for anyone. The problem is not the tools we have right now (although they have dangers I'll talk about later), but the superhuman AI that is being grown somewhere in a secret lab. No company or country should ban or boycott AI tools alone, because they would put themselves at a disadvantage.

So I think my role as an educator sits at the intersection of the capitalist mindset of "how do we save time and money with AI" and the green, sustainable mindset of doing that without causing harm to people, the environment, or ourselves. My regret is that I have not put enough focus on the second part, and I'm sorry for that. I was wrong, and I want to make it right.

Please let me know, by simply replying to my email, whether you'd be interested in learning "how to integrate and use AI safely, responsibly, and sustainably," so much so that you'd pay money for it.

If you come along with me for this ride, I guarantee that you will make LESS money than you could by integrating and using AI everywhere possible and maximizing profits. There is always going to be more money to be made by cutting a few more corners. I say we should say no to those, and think of the long-term impacts of our actions.

The products and services I create in the future will be very much audience-driven.
If you tell me you want something, and enough of you signal interest so that it's financially feasible, and I feel aligned with it, I'll make it.

Outro

We are at a crossroads now. We have the following options:
I'm choosing the third one, and I'm going to figure out the "how" as we go. Direction is more important than speed. We no longer "move fast and break things" like a traditional Silicon Valley startup would. We integrate with integrity (actions aligned with our values). We push back against the capitalistic mentality of making profit wherever we can, and we put more care into our environment, the people around us, and the future of humanity.

Are you with me?

Dave