I am writing this brief essay as a witness to history. As of the publication of this blog post, I have not personally sought access to GPT4, and so recognise that this essay contributes little to the worldwide technical discussion.
Nonetheless, I felt compelled to voice my emotions over the recent developments. If not near the end of all, be this the beginning of a beginning.
In Series …
Local (personal, potentially shallow, and subject to change) outlooks on science, technology, growth, and occasionally culture and history. I aim to write an essay every week, but whether it can make its way to FWPhys is random. Hence the series title.
“Matching humans” is an expression that has seen salient use, throughout my lifetime so far, in describing computer programs.
My earliest memories are of reading about chess bots and IBM Watson; later came algorithms defeating professional players at go and StarCraft II. I still remember, in grade 9, writing about genetic algorithms and the weird-looking NASA antenna with mild amusement. As well-versed in tech trivia as I am, it is hard to say I could have foreseen, so soon, a time when humans are “matched” in increasingly substantial aspects of life.
OpenAI, at the helm of many such changes, is an interesting company to me, one that hardly mentions other companies’ competing products in its keynotes and documentation. They are that far ahead; their expensive gamble of stacking up system sizes paid off; they deserve the current competitive edge, and the right to commercialise their research products.
On the other hand, one can’t help but feel that they are less “Open” than in previous years, to the point that people sometimes joke that GPT5 will be hand-tuned by GPT4, leaving it, by purpose and by nature, a thorough black box with little means of assuring its reliability and safety.
This thread warrants a discussion for another day.
As I implied in the opening, this essay is more of an emotional response to being “matched”, and an existentially incentivised exploration of where we are going next.
World of Goo is a casual puzzle game from ex-EA engineer Kyle Gabler. It is surprising – but fair and long overdue – to note how much this work influenced me as a scientist and (unfortunately now) technical artist. The game’s aesthetics and narrative rather powerfully shaped the way I think of computer systems in the real world, googly-eyed UI notwithstanding.
In the chapter “Information Superhighway”, you look for “MOM”, a retired AI-powered search engine / ad bot abandoned by her users and buried in the depths of an outdated GPU farm. Along the way you traverse a personification of the history of computing, from bits and bytes to networks and server farms. The chapter ends with you blowing up a company with all the spam emails that people asked MOM to compose over the years. It was delivered with the game’s signature creative crispness, but also an above-average dosage of sorrowful solitude: for the boom times past, but also for existence itself.
I possess no technically informed opinion on whether large language models can reason or experience, and there isn’t much I could do about it either way. My notion of solitude here is rooted more in basic (as opposed to emergent) physics; one might even describe my sorrow as “how sorrowful it is that our bots cannot be sorrowful!”
Human consciousness and science emerged out of serendipitous evolutionary pressures that otherwise demanded little, and yet we persisted and flourished. In this self-guided process we organised ourselves, sent the occasional crew away from the planet, and dreamed of bolder, more brilliant futures for all. On this grand scale, AI, whether it joins, augments, or supersedes us, is a natural process and a respectful utilisation of everything we have learned and fear forgetting.
It may be an optimistic sentiment, possibly shared by many 2010s / 2020s physicists, that we already have sufficient data to answer questions not yet asked, and that those will be the actual big ones. Maybe nurturing a bot on the collective wisdom of humanity is a good way of getting another perspective on ourselves, of looking at the places we have grown used to glancing past.
Unlike its numbered predecessors, GPT4 is offered in the chatbot format from the start. This might just be for engineering reasons, like not needing to reinvent the framework (and because it now takes image inputs). Still, I also feel it is an implicit recognition that ChatGPT has established the basic form in which a human-interfacing AI should appear.
I’ve rambled a lot…
As an AI language model, I do not have desires or feelings, and I do not have physical form. I exist solely as a program running on servers. Whether I will be given a body or not is a decision that depends on the creators and developers of AI technology.
Currently, there are no plans to give language models like me a physical body, as our primary function is to process and generate text. However, as technology advances, who knows what the future may hold.

ChatGPT / GPT 3.5, Private Communications
2 thoughts on “Matching Humans”
A requirement should be instituted that AI and similar human-interfacing programs should be identifiable. If you’re contacted by one, there should be a red flag: ‘This is not a human-to-human interaction.’
But everyone is saying it’s not possible.
And that’s bullshit.
Yeah, spam technologies are also along for the ride… good point