💖Motivation Meows💖
Hello, and welcome back to another edition of The Black Cat.
To read what you missed this week in Good Black News, click here.
This month, I’m reading: La Fortune des Rougon by Zola
This weekend, I can’t stop listening to: The Show by Doug E. Fresh
💢From the Chatterbox💢
I will not keep harping on this. I will not keep harping on this. But this week, I want to share some last-minute thoughts.
The narrative has started to shift on Sam Altman and his OpenAI, and it’s been quite interesting to witness.
Since at least November, I’ve used my naturally skeptical journalist’s eye to watch Sam closely. It’s taken months, but people are now starting to question whether he is a dishonest person. Anyone paying attention could have spotted the trail a while ago.
He spent all last year on tour — I joke, alongside Beyoncé and Taylor Swift — making pit stops in Europe and elsewhere to offer himself and OpenAI as the solution to the latest artificial intelligence revolution they helped start. He wanted to be both the challenger and the umpire of this latest innovation. Anyone in storytelling can spot the red flags that emerge when someone is trying too hard to control a narrative, so immediately, I was intrigued.
He spoke about the dangers that AI could pose to the next generation while rushing to build said technology, which he billed as getting a head start. I always thought the solution was simply not to build this so-called dangerous bomb and avoid Oppenheimer’s guilt. But I guess the debate there is that someone was going to build it, and it should be Sam, right? And we can trust him, right? Remember, OpenAI was supposed to be a non-profit research organization with a for-profit arm to help fund its studies. A core selling point of OpenAI was its focus on safety, and for a while, everyone believed that. Which is why it was so shocking at the time when, back in November, Sam was ousted by the board. As my colleague Kyle noted in TechCrunch, some board members have come forward to say that Sam misled them on certain happenings in the company, that he sometimes outright lied, and that he was also not forthcoming about OpenAI’s safety practices.
The board lost confidence in him and ousted him. It was around this time that other stories about him started to swirl, like the rumor that he was fired from Y Combinator (Paul Graham went on Twitter last week to say that wasn’t exactly true). Another rumor emerged that some people at Sam’s former startup, Loopt, wanted him gone for similar reasons, saying he was misleading and dishonest. Still, as tech does, everyone — including OpenAI staff — rallied around him. He returned to OpenAI, and that board was disbanded and later replaced with a new one.
Since then, numerous safety experts have left the company, decrying the same thing — that there is a lack of transparency at the company and that its focus is increasingly on commercial efforts rather than the research it vowed to do. It doesn’t help that OpenAI dissolved its AI safety team that was supposed to help govern its AI systems. That team, as Kyle noted, was also promised a sizeable amount of company resources that it never received. Among those who left was OpenAI co-founder Ilya Sutskever, who also served on the board that tried to oust Sam.
That safety team was replaced with a new team to help oversee safety and security, but its members include Sam and others who appear loyal to the company. There are no outside voices. The council is Sam and friends, answering to Sam and friends to suggest work and strategize with Sam and friends. The team is another AI circle jerk — which has become a trend in Big Tech — with little room for detractors.
That’s suspicious, right? Or, I guess it’s only a red flag if one never expected this to happen. I think one of the big problems here is that Sam sold OpenAI to us as something entirely different from what it is. It could be that Sam always wanted to turn OpenAI into a traditional for-profit enterprise (which he is now looking to formally do), and selling the company as research was the easiest way to gain trust; it could be that, as so often happens, the power and the money became too enticing and Sam wondered why he would keep doing this non-profit stuff when he could just take over the world. Whatever it is, OpenAI is not the same company it was when we met it a few years ago. I don’t think anyone knows what this means, but I do know that a sense of betrayal appears to be looming over even Sam’s once-fiercest supporters.
I just don’t know what people expected. We’ve been through this with Big Tech before, and we fall for the same traps each time. Even the media is falling for it again, or at least, the business side of it is this time. You can see the break: editorially, newsrooms are saying to slow down with OpenAI because there is no transparency, no clear path as to what is happening, and Sam cannot be trusted. Business-wise, leaders are striking deals that let OpenAI train its models on their writers’ work. Likely, we will never see a dime from that. So far, the biggest name to stand up to OpenAI is the New York Times, which is taking the company to court over copyright infringement. So many questions still loom about what it means for a newsroom to follow OpenAI — but there are answers to what happens when media trusts Big Tech in search of gold. Look at what Facebook and Google are doing and have done to news. Time and time again, we are reminded that greed wins.
To me, it’s so obvious that Sam used a classic Silicon Valley tactic to gain support from everyone: He made people and companies feel as if they would be left behind if they did not follow him, as if he and OpenAI were the answer to their problems. I don’t know if anyone knows what this means for the future, though. There are times when I think, even with its great capabilities, that OpenAI oversells itself. And there are times when I question how it could balance being the hero and the villain of its own story. What does all of this technology mean for the average person? If AI is trained on humans, how can we teach it equality, freedom, and justice — things we’ve never had? And how could we trust OpenAI and Sam to be the shepherds of such things when the past few months have shown that the company (the people who helped build it) doesn’t even trust itself? There are times I suspect I’ve come across a wolf in sheep’s clothing. But I just can’t figure out why the company bothers with the costume. It could have just quietly, or even ruthlessly but more authentically, taken over the world.
It’s a classic tech mentality to move fast and break things, including, I guess, democracy. America is a land of builders, and I don’t think we have enough safeguards in place for when corporate tech greed attempts its takeover. Of course, OpenAI’s ask for a $1 trillion-plus raise set off other red flags for me. Then again, it is also common for some founders to move first and ask questions later — to promise the goods are there and find a way to build them one day, someday, as long as you give them the money now. Such an endless pursuit of splendor in the face of inevitable destruction told me early on that perhaps something else lurked behind OpenAI’s charming mask. I would ask others about it, too, and some shot me down because the front was too alluring. To me, it’s only become more suspicious.
Now, though, I think others are starting to watch more inquisitively. “Dude is shady,” one founder messaged me recently about Sam. I don’t know what that means, though, and I have no idea what is going to happen next. I think we are writing the story as it goes, and we are seeing just how deep this rabbit hole can go — how much we can all get away with. These were all just some musings I had. One of the fun parts about covering tech is seeing how it is truly like any other industry where power and money cross. Climbing the ranks of American technocracy is a tale as old as time. The English know. Everyone is just waltzing for a place near the throne, for a title that deems them the richest, most powerful being alive.
💫Kitty Talk💫
Here are some interesting articles I’ve read since we last met: