It’s mailbag day here at The Daily Cut.
And we’re keeping the spotlight on the world’s most powerful megatrend – the rise of artificial intelligence (“AI”).
AIs are “thinking machines.”
Using complex algorithms, they can drive cars… outsmart human pilots at flying an F-16 fighter jet… and discover new drug candidates faster than any human can.
And as I predicted in these pages four years ago, that makes AI the greatest investment opportunity of the next decade.
The scope of this opportunity is mind-boggling. As I wrote back then…
AI can help solve just about any problem you can think of… So, as AIs get more powerful, we’ll see a quantum leap in technological progress unlike anything we’ve seen before.
AIs excel at crunching through vast amounts of data to figure out patterns that predict certain outcomes. This gives them a superhuman ability to solve the problems humans struggle with.
For instance, an AI could help us figure out how to use nuclear fusion to generate electricity. This would give us an unlimited zero-carbon, non-combustion energy source… and radically transform the energy industry.
But AI didn’t become top of mind for most folks until the release of the AI chatbot ChatGPT last November.
It reached 100 million users faster than any consumer app in the history of the internet. It hit that milestone in just 64 days.
The next-fastest-growing app, the Chinese social media platform TikTok, took nine months to reach the 100 million-user mark.
And it’s easy to see why. ChatGPT was trained on a vast swath of the text on the internet up until 2021. And you can have a conversation with it much like you’d have with a real person. (To see what I mean, catch up on Part I and Part II of my interview with ChatGPT.)
Naturally, your fellow readers have lots of questions. And standing by with answers is tech investing expert Jeff Brown. He’s been tracking the rise of AI since he joined the team at Legacy in 2015…
First up, a reader wants to know if AI chatbots like ChatGPT could be used to spread misinformation…
Reader comment: Hello Jeff, I’ve been following your AI commentaries with great interest and can certainly see the terrific investment potential as well as the benefits to companies, people, and society in general, but I also see some very dark aspects.
The one that most concerns me is the potential for the AIs to be trained on data sets and information that strongly favors or supports the aims of certain groups. There is no way that I or any individual I know would ever be able to own or control an AI – they will only be available to governments and mega corporations, which means the data sets will certainly be slanted so as to ensure the AI acts in the interest of the owner or controller.
This raises the question of whether those organizations are going to be more interested in controlling me to my detriment for their own profit, or whether they will be focused on benefiting society in general. I’m quite sure we know the answer: The control will be to our detriment.
– Alfred R.
Jeff’s response: Hi, Alfred. We’re about to embark on a decades-long struggle between folks who want to control what we think and folks who want to think for themselves.
We’ve already seen some concrete examples of problems and bias in AIs.
ChatGPT is a good example. It’s trained mostly on information it reads online. This includes information produced by the mainstream media… tech giants such as Google… and the online encyclopedia Wikipedia, to name three examples.
So that’s one major problem. An AI can be trained on biased information rather than truthful inputs. That means its outputs will also be heavily biased.
ChatGPT also contains code written by humans with biases. Sometimes, developers refer to this as programming in “safety rails.” But this human programming is designed to manipulate us and how we think.
That’s why, when we’re using generative AIs, we should ask ourselves, “Where does this AI come from?” If it has been developed by OpenAI, Google, Microsoft, or some government agency, we should assume it has these institutions’ biases.
The problem is these systems perform better with larger training sets. And it’s too time-consuming – and perhaps impossible – to curate a training set large enough for these multibillion-parameter models while eliminating bias.
The other important question to ask is, “What’s the business model of the company or organization behind the AI system I’m using?”
If the AI is free to use, you’re the product. You can be sure the AI is collecting your data, developing a profile on you, and influencing you through advertising.
And if an AI comes from a government, you should be even more skeptical.
In the last few months, we learned that U.S. government agencies were pushing Microsoft, Google, Facebook, and Twitter to suppress opinions and scientific research that didn’t fit their chosen narrative.
I hope we’ll do our best to keep our wits about us and make efforts to keep the system honest. It won’t be easy. But it’s a worthy fight to ensure freedom of thought and speech.
But not everyone sees the glass as half empty. Another reader wants to know if AIs will make our lives better and easier…
Reader question: Your remarkable reports on AI being positioned to be competent doctors, lawyers, and judges lead me to wonder if AI could be turned loose on some of the largest issues of the day: social and environmental sustainability. Would AI be able to guide us to less pollution and less war and conflict?
Thank you for your thoughts on such matters!
– Richard S.
Jeff’s response: Hi, Richard. Thanks for being a reader and writing in with your question. It’s a good one.
Let’s tackle the topic of environmental sustainability and pollution reduction first. This is an area where AI can have a near-term positive impact.
A simple example is nuclear fusion.
We’ll use AI systems to control the magnetic fields needed to contain a fusion plasma for extended periods of time. This requires extremely complicated calculations. And right now, it’s a limiting factor on successful fusion.
Just imagine if our entire electricity production infrastructure were powered by 100% clean, limitless fusion energy. No carbon emissions, no pollution whatsoever.
We’re also using AIs to develop new materials and new molecular compounds we can use to cut pollution.
AIs are also helping discover new battery chemistries that could replace standard lithium-ion batteries, whose production damages the environment due to the mining of the metals they require.
The issue of war and conflict is more complex.
AIs may not be able to stop wars once they’re started. But they may reduce the chance of conflict by helping us create a world of abundance.
In a world where nuclear fusion can produce limitless clean energy that is almost free, we remove the need for conflict over oil and natural gas.
That alone would be transformational.
And when we apply AI to robotics, we get an immensely useful labor force that can perform dangerous and laborious jobs that are generally less desirable for human workers. Robotic assistants will also transform businesses and homes with a high-tech version of assisted living.
We’re also using AI to develop new drugs. This is revolutionizing therapeutic development. It’s allowing us to tailor therapies based on our genetic data.
When people have an improved quality of life and aren’t in desperate need of anything, they’re less likely to start wars and conflicts.
That’s all we have for today’s mailbag. We’re still in the early stages of this massive trend. So, keep your eye on your inbox for more from me and my team as this trend picks up speed.
And remember, you can write Jeff, or any of the experts here at Legacy, at [email protected].
Have a great weekend.
Regards,
Chris Lowe
Editor, The Daily Cut