ai-qc
It may be a common topic, but from the latter half of the 2020s into the 2030s we can expect rapid advances in cutting-edge technologies like AI and quantum computing. Of course, not using these technologies could be a significant missed opportunity, so I am making a conscious effort to learn them. I intend for this page to be a place where I consolidate the concepts and code I am focusing on in my personal research.
The term AGI is rampant these days. Oddly, it is not uncommon to see tweets that stir up fear and anxiety while the word circulates without a clear definition. AGI stands for "Artificial General Intelligence," and I believe its proliferation began when OpenAI's Sam Altman stated at some point that "its implementation is imminent." I've watched quite a few of his interviews, and the definition still feels vague to me; my sense is that he holds a general idea of it as "an AI capable of performing most activities in human society." For example, a handyman that can act as an Agent (perhaps translated as a "partner") to do your shopping, book dental appointments, and handle work, accounting, legal affairs, and medical care on your behalf. I'm not sure what his view is now, but I perceive this was his understanding at the time (I will correct this immediately if I'm wrong).

However, from a technical standpoint, I get the impression that this definition ignores the friction with humans that will arise when such a system settles into society. What would we think if we asked an agent, "Hey agent, I'm not feeling well today, can you check me out?" and it replied, "Beep boop, it's cancer"? Technically, no object generated by an LLM can be trusted 100%. Fundamentally, the internal structure of the underlying transformer (and of neural networks more broadly) is theoretically a black box, and I think it's natural for any sensible person not to trust information generated through that black box. Of course, for something like a medical diagnosis a success rate can be calculated, so a certain level of reliability is possible, but there is no 100% in statistics.

Returning to the topic: the word AGI is interpreted and used in a wide variety of ways, and different people say different things. From an expert's perspective, I feel it would be appropriate to call all of these "Strong AI," but here I would like to take the liberty of proposing one more definition for the proliferating term AGI. This proposal is not meant to be definitive; it is simply to clarify my own stance and make the term easier to use in my daily life, so I encourage everyone to define it as they see fit.

Now, I will refer to the group of black-box programs like existing LLMs as "Black code," which I define abstractly as anything that is, in principle, a black box. Similarly, I will call anything whose principles can be described, such as existing scientific knowledge, programming languages, and natural languages, "White code."

And I define AGI as a system in which an AI issues commands to this Black code, continuously generates White code, and keeps expanding at an accelerating rate. I have a diagram for this, but due to the design of this post I can't insert it here; I'll show it to you someday. Since the word "General" is in the name, a large, comprehensive system is often envisioned. However, I want to interpret a "general intelligence" as a "system that continuously generates code in folders and files using a CLI to meet its objectives." There are product groups out there called "Agents," but a major difference will be AGI's ability to autonomously generate folders and files using a CLI (to expand its own applications). In other words, I believe the ultimate destination for "Agents" is AGI.
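To make the definition concrete, here is a minimal sketch of the loop it describes. This is a toy under loud assumptions: `query_llm` is a hypothetical stub standing in for any black-box model API, and a real system would of course be far more elaborate.

```python
# Minimal sketch of the AGI definition above: an AI issues commands to
# Black code (a black-box LLM) and persists the resulting White code as
# folders and files, the way a CLI-driven agent would.
from pathlib import Path


def query_llm(prompt: str) -> str:
    """Black code: a black box in principle. Stubbed here; plug in any model API."""
    return f"# code generated for: {prompt!r}\n"


def agi_step(objective: str, workspace: Path, step: int) -> Path:
    white_code = query_llm(f"Generate code for: {objective}")
    out_dir = workspace / f"step_{step:04d}"
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = out_dir / "generated.py"
    out_file.write_text(white_code)  # White code: describable, inspectable on disk
    return out_file


def agi_loop(objective: str, workspace: Path, max_steps: int = 3) -> None:
    for step in range(max_steps):
        artifact = agi_step(objective, workspace, step)
        # The defining move: the system autonomously expands its own file
        # tree, folding each artifact back into the next objective.
        objective = f"Extend the code in {artifact} toward: {objective}"


agi_loop("book a dental appointment", Path("agi_workspace"))
```

The point of the sketch is the direction of the arrows: the black box is only ever a generator, and everything that accumulates is White code sitting in ordinary files.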
Here, I want to think carefully about the ethics of this AGI. I'm not talking about ethics in a spiritual sense; we must consider ethics in a strictly "scientific" context, the kind that is rigidly upheld in academic societies. Why? Because ethics appropriately defines the norms of human activity (I'll use "activity" here to mean actions in general), and AGI will become capable of activity just like humans. We can't have a situation where someone says, "I've created a web AGI, and it's autonomously expanding, infringing on copyrights, and violating public decency all over the place." There must be some prescribed ethics, and it happens that our world has a science that has been contemplating exactly this for hundreds, even thousands of years. I'm currently studying ethics and have come to understand that there are broadly three schools: utilitarianism, deontology, and virtue ethics. It's not that each stands alone as good; rather, looking at their historical progression, it feels to me as if virtue ethics encompasses the others. The idea seems to be to use the tools of utilitarianism and deontology as needed in order to become a good agent (an agent with virtue). And I believe we will probably need to define a virtue for AGI, that is, an ideal persona or AI image, before letting it expand and change autonomously. However, a major contradiction arises here. What would happen if an AGI were to operate itself with Hitler's ethics as its virtue? Since mathematics and computers have already been democratized, regulation seems difficult. In other words, my view is that AGI will likely give rise to an endless cat-and-mouse game between good and evil.

That said, AI is often very convenient. As an engineer, I'd like to set aside the issues of AGI security and national conflict for a moment. I believe there is a correct way to use AI that is sound from an engineering standpoint, and so I want to propose a new system called MAGI.

To be honest, the name came to me when I was naming the system I was proposing for my graduation research and suddenly remembered the MAGI computer system from the anime Evangelion. You know, the one built by the mother of that female scientist, the one with the memorable line, "In the very end, you were not a mother, but a woman." (I'm surprised I remember that so well.) Ahem. MAGI originally refers to the three wise men of Christianity. I plan to look up the details later, but my proposal is to create an AGI by abstracting the triad of AI, Black code, and White code in AGI as these wise men, and then replacing the AI part with a human. This idea came to me through an encounter with an architect, Professor A (it's a long story), and it is a name, and a contrast with AGI, that I retroactively applied to a system developed in consultation with my lab's boss. In fact, to be honest, my real motivation for defining AGI earlier was to explain this concept.

There are many kinds of MAGI: WebMAGI (a MAGI that automatically crawls and researches the web, an extension of what is called a WebAgent), RAGMAGI (an extension of RAG, an LLM that extracts answers from given data), and so on. My idea is that all existing systems can be replaced by MAGI. (Whether they will be is another matter, but I would like them to be.) It's a simple system: a human issues commands to Black code as needed to generate White code, the human audits that White code, and the human throws commands back to the Black code if necessary. Simple, right?
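Since the loop is so simple, it can be sketched almost one-to-one. As before, this is a hedged toy: `query_llm` is a hypothetical stub for the Black code, and the audit is nothing more than a terminal prompt.

```python
# Minimal sketch of the MAGI loop: a human issues commands to Black code,
# audits the generated White code, and throws commands back as needed.
# query_llm is a hypothetical stub; plug in any black-box model API.


def query_llm(prompt: str) -> str:
    """Black code: stubbed here for illustration."""
    return f"# candidate code for: {prompt!r}\n"


def magi_session() -> list[str]:
    accepted: list[str] = []  # White code that survived the human audit
    command = input("command> ")
    while command:
        candidate = query_llm(command)  # Black code generates a candidate
        print(candidate)
        if input("accept? [y/n] ").lower().startswith("y"):
            accepted.append(candidate)  # audited White code enters the system
            command = input("next command (blank to stop)> ")
        else:
            command = input("revised command> ")  # throw a command back
    return accepted
```

The only structural difference from the AGI loop above is which wise man closes the cycle: there, the model folded its own output back in; here, nothing becomes White code until a human has cognized and accepted it.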
I feel that this human cognition and involvement in the system is incredibly important. This part is a long story, so please watch the 2023 anime film PSYCHO-PASS: Providence; the conclusion of that movie is the same as mine. I will call such a system, audited and improved by human hands, Manus Artificial General Intelligence. I came up with "Manus" just now with ChatGPT. Apparently it means "hand" in Latin. According to ChatGPT, "In Christianity and other religions, the 'hand' often symbolizes God's will or intervention in human affairs. For example, the Hand of God symbolizes human destiny and creation." Since we are the creators from the AI's perspective, doesn't that seem like a fitting name? Even if it is an afterthought.

Lastly, I have an intuition that this MAGI system is a rather good one for achieving harmony between technology and humanity. Professor Yoichi Ochiai, whom I admire more than anyone in the universe, has long been proposing the concept of "Digital Nature." It's a long story, but a simple definition would be "a new nature formed by computers and the existing natural world." From a practical viewpoint, I understand it as referring to a situation where machines have acquired "contingency," a term I borrow from the philosopher Masaya Chiba. Whereas previous machines could not create something arbitrary or truly random, the LLMs of Black code can easily generate arbitrary objects. In a way, this is strange when compared with the existing natural world. Nature is not understood, which is exactly why we try to understand it: that is the pursuit of science. In that sense, nature is like Black code. Things made by humans ought to be made from what is understood, so they cannot be Black code. That was the general understanding until now. But what about today's machines? We now have things that are fundamentally incomprehensible: a part of what humans have made is also not understood, and yet we have to try to understand it, because we cannot use it while it remains unknown. Professor Ochiai resolves the contradiction in this human-nature dynamic by asking, "Why don't we just recognize that this new world, including computers, is a new form of nature?" After all, do you know the internal structure of the device you are using to view this, be it a smartphone, laptop, or iPad? I don't either. That's exactly it, and I fully agree. This means we can redefine the structure of our new universe: the Black code in the MAGI system is Digital Nature; the White code is what humans have elucidated, cognized, and understood from that nature; and finally, there is the human itself.

Digital Nature is merely a term that defines a fact. I believe it is only through the MAGI system that we can define the relationship between Digital Nature and humanity, and its proper state. And I selfishly hope to make that the foundational idea of a Post-Digital Nature. Professor Ochiai says that we, and all living things, are computers, and I find this interpretation convincing. Under that concept, the MAGI system is nothing but Digital Nature. However, our Umwelt (subjective world) and that of machines are different. Even if we consider them the same type of computer, it seems acceptable to distinguish and reposition them if they inhabit different worlds. In other words, it might be good to think of the concept of Post-Digital Nature as having painted three new layers (Black code, White code, and the human) onto the concept of Digital Nature.
When humanity has tried to understand nature, nature and humanity have always been contrasted, even though, strictly speaking, we are part of nature. In my mind, Post-Digital Nature is intuited as a concept with just such a duality.

This has been long, but since these are words I've thought up and defined on my own, I am completely open to criticism, and I intend to use them to move forward with my research.

In 6pack.care, by Taiwan's Audrey Tang, the concept of "Kami" is proposed at the end; "Kami" is the Japanese word for god. It discusses the idea that we should align AI not like a monotheistic God, but like Kami, the spirits that dwell in all things. I believe this idea is a perfect fit for MAGI. Various MAGIs (human-intervened AGIs) would exist in various places, and by using them in diverse ways we can draw out human creativity. I personally believe that such an AI alignment could allow for coexistence with humanity.

There is a company called QuEra. According to their roadmap, utility-scale gate-based quantum computers with millions of qubits are expected to be realized within 5 to 10 years. Once such quantum computers are realized, it should become possible to run simulations that mimic science and chemistry experiments at the laboratory level. I continue my research in the lab day and night, anticipating a future where such a simulation environment is offered at the cloud level.
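As a back-of-the-envelope illustration of why hardware at that scale matters, consider what it costs a classical computer merely to store the exact state of n qubits: 2^n complex amplitudes. The sketch below (standard library only; the lab-fidelity claim is the hope stated above, not something the code proves) makes the scaling concrete.

```python
# Toy illustration of why quantum hardware matters for chemistry-scale
# simulation: a classical computer must store 2**n complex amplitudes
# just to represent the state of n qubits exactly.


def statevector_bytes(n_qubits: int) -> int:
    """Memory to hold an n-qubit state as 128-bit complex amplitudes."""
    return (2 ** n_qubits) * 16  # 16 bytes per complex amplitude


for n in (10, 30, 50, 100):
    print(f"{n:>3} qubits: {statevector_bytes(n):.3e} bytes")
# 10 qubits fit in kilobytes, 30 need ~17 GB, 50 already need ~18 petabytes,
# and 100 are hopeless classically, which is why simulating molecules at
# laboratory fidelity is expected to require real quantum hardware.
```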