New Users of Copula AI, and on Unconscious Intelligence
How folks are using Copula AI, and a take on AGI
Copula AI has a new homepage that showcases some of our users' new My Copula AI-powered websites. Our mission is to make expert knowledge accessible to the curious layperson. These users have each contributed to this mission in their own domain, curating the reference texts for their websites.
After mentioning a couple of other new features rolled out this past month, I examine some artificial intelligence terminology. My goal here is to contribute some clarity to the debate around AGI. I conclude with Copula AI's near-term plans.
My Copula AI: Use Cases
Users from organizations of all sizes have signed up for My Copula AI. Each of them shared a Google Drive folder containing docs pertinent to their group, and obtained a website offering AI-powered Q&A on those docs. Here is the Don Bosco Africa website that was created in this manner.
The Salesians of Don Bosco, also known as the Salesian Society, are a religious congregation founded by Saint John Bosco, commonly known as Don Bosco. The congregation was established in the 19th century and is dedicated to the education and spiritual development of young people, especially those who are poor and marginalized.
The above paragraph is from our AI's answer to "Who are the Salesians of Don Bosco?", as asked on Don Bosco Africa's My Copula AI website.
Academic groups are also using My Copula AI. The Institute of Gender Studies at the Cyril and Methodius University is interrogating the seminal texts of their field as they go about their work. Here is their My Copula AI site. Prof. Bobi Badarevski, who set it up, has thanked me on his group's website.
Lastly, our free single-document service, Free Copula AI, has also had some upgrades. Users can now create a share link that preserves the document they uploaded. For example, here's such a link for Q&A on the New York Times lawsuit against OpenAI. A specific Q&A exchange is also shareable.
OpenAI's Q* and AGI
In the news surrounding last month's leadership struggle at OpenAI, a letter of concern written to OpenAI's board surfaced. The exact contents of the letter were not revealed in the Reuters story; it said only that the letter contained a warning from some OpenAI staff about a breakthrough in their AI research. The story suggests that the research relates to an internal OpenAI project called Q*, and that its advances could lead to AGI (artificial general intelligence).
Let's unpack the term AGI, to help clarify discussions on this hot-button topic. The term Artificial is easy: we understand it to mean not naturally occurring, but man-made. So far, the hardware has been electronics- and silicon-based, though one can imagine other substrates (biotech?) that might effect computation.
General in AGI refers to a general-purpose ability to work across multiple domains, planning, learning, and strategizing as needed. IBM's Deep Blue was great at chess, but couldn't play tic-tac-toe (let alone write a poem), so it wouldn't qualify as generally intelligent, as impressive as its human-defeating ability was.
Intelligence relates to the ability to think. Traditionally, computers were seen as general-purpose machines that can compute. They also worked with data, which could serve as memory. But intelligence is a higher-level concept. It includes reasoning (not just computing), having a world model (not just memory), being able to run simulations ("thought experiments"), etc.
Unconscious Intelligence
So that's "AGI". You may have noticed that our discussion of it lacked humanistic considerations. The two aspects of the human condition philosophers have continuously argued about since the invention of armchairs—consciousness and free will—did not figure.
But it seems we had expected some humanism to accompany AGI. Why else did we find ChatGPT so uncanny? We were bewildered that a system so generally intelligent was nevertheless as conscious as a cuckoo clock, with no desires, self-identity, understanding of others, sense of justice, aesthetic interest, etc.
In 2023, we are realizing that strong, general intelligence need not be accompanied by consciousness. No one knows if a sufficient increase of intelligence in an AI system would by itself tip the AI into consciousness. The consensus appears to be that some additional breakthrough is needed. We are left to wonder whether the breakthrough the Q* researchers warned about in their letter is one such.
An AI system need not be conscious to be dangerous. Just like chemical weapons, which aren't intelligent at all, its abilities can be leveraged by bad actors to cause massive harm. So discussions about the dangers of AI can be clarified by separating concerns about
a) an AI system becoming conscious
from
b) AI's increasing general intelligence.
The former is a worry over the emergence of artificial consciousness; the latter, over the power of unconscious intelligence.
Conclusion; Copula AI Next Steps
If you enjoyed the exploration of AGI above, consider subscribing to my AI and philosophy substack Autonomy. Its next issue should come out in a week or two, and will discuss Anthropic/Claude's Constitutional AI.
As for Copula AI, we will be rolling out payment plans to support larger user accounts that need more computational resources. We will make progress on the laundry list of small feature requests and usability improvements for our service. Lastly, we will embark on a couple of marketing initiatives.
I look forward to writing you again in 2024. All the best!