191 Comments
author

Happy Easter, everyone! I hope you're all having a wonderful time with your families.


I guess in a culture where most mainstream "art" is stupid and shallow agitprop created by NPC mongoloids, it makes sense that corporate media factories would use a soulless and uncreative computer to replace their stable of soulless and uncreative "artists."

author

Absolutely, and no loss really, but again we circle back to "why, though?"


A question I ask myself daily. If I understand it correctly, that's a question that tends to originate in the brain's right hemisphere, though, which, as you and Winston Smith have argued, is the hemisphere neglected by our LHD culture.

If our culture was a person, it would be some stroke victim with a disabled right hemisphere and a hyperactive left hemisphere, who replies, with a straight face, to any suggestion that he is disabled, that his useless left arm and leg ackshually belong to somebody else and that he can ackshually do anything he wants without any difficulty whatsoever. And to avoid those tasks that require the disabled parts of his body, he instead engages in pointless tasks that he can do with only his right arm and leg. "Look at how efficiently I accomplish these tasks!" he says. "I cannot possibly be disabled when I am able to do this so efficiently!" Efficiency is a coping mechanism for avoiding the depressing reality that he cannot do anything worthwhile well.

author

Exactly. And the deep irony here is that our RH-neglecting culture is trying to replicate RH functionality with ML technology, without realizing that's what it's doing. It all seems like an elaborate compensation mechanism arising from a peculiar form of self-blindness.


The funny thing is the conservatives left the art fields to the liberals long ago. This was voluntary. Why? Lack of a stable income and some leftover associations with a boho hedonism anathema to middle class/family values.

author

Catastrophic mistake, that.


Some of that also arose from the conservatism of that era being more aligned with the forces of mainstream conformity, which would stifle artistic creativity. The leftist artists of the 50s and 60s were opposing the dominant power structures of their own day (although they had their own subculture within which some degree of ideological conformity occurred). Contrast that to today, where being on the Left means being aligned with all the most powerful and most mainstream institutions and against the heterodox subcultures. If you're an unimaginative hack of a wannabe artist today, you migrate leftwards, because that's where all the low-hanging fruit is and where you will get the backing of all the culture's most powerful institutions.

author

Conversely, the most creative underground artists are now almost exclusively to be found on the bohemian right (though they are certainly not conservatives).

Apr 10, 2023 · edited Apr 10, 2023 · Liked by John Carter

Yeah, the dissident right is an interesting place right now, with an odd assortment of folks. A common enemy on the Marxcissist Left has brought together a bit of an unnatural coalition. I consider myself politically homeless because I know if the mainstream traditionally conservative Right was in power, I would disagree with them about some things (especially because the NeoCon establishment would find a way to take control of it), but those differences pale in comparison to my differences with the Left. But as you said, all the interesting bohemian artists and freethinkers have drifted into the political Right. Strange times to be alive!

Apr 14, 2023 · Liked by John Carter

It's an unnatural coalition only from a crippled left-hemisphere perspective 😏


I, for one, am looking forward to this. Most "artists" are all trappings and no insight, and insight is the thing that actually makes art art. Trappings are exactly what AI excels at. Let it wash away all the wannabes and usher in the new renaissance.

author

This is an excellent point. By stripping the merely mechanical away from the human, far from reducing humans to machines, our humanity may in the long run be that much more strongly emphasized.


Conlon Nancarrow is my personal role model in this respect --

https://fatrabbitiron.substack.com/p/secede-conlon-nancarrow

author

That looks like it's worth checking out. I'd not heard of Nancarrow before, thank you for sharing.

Apr 9, 2023 · edited Apr 9, 2023 · Liked by John Carter

I was bored by generative AIs long before most people, playing around with my own GPT-2 training project in 2020, during the first lockdown in March.

Yet there's amazing stuff happening; the middle of this March was like an explosion.

Check out GPT-4 being prompted to reason commanding a fictional android at my home:

https://www.magyar.blog/i/112089741/playing-react-with-chatgpt-web

This alone would make it a milestone. But there's a far more important development in it, or more precisely an emergent feature, that distinguishes it from other generative LLMs: self-reflection. It's not the flashy stuff, it's not generating nudes of famous people, but there are already papers on it (linked), and the developer-minded side of the AI "community" is starting to take notice.

https://www.magyar.blog/p/singularity-we-have-all-the-loops

"Asking ChatGPT-4 to write a poem where each word begins with the letter “e” might not be successful on the first try. However, asking it to reflect on its mistake it will improve its second try, to the point that this technique (once combined with ReAct, per the paper) (March 20) makes GPT-4 achieve better scores in benchmarks."

I believe that this is so important that it completes all the necessary requirements for self-improvement (that is, improvement without constant human supervision).

All these new technologies that came up in the past few months, put inside a loop that's overseen by a self-reflective LLM, can only point in one direction: The Singularity. It's basically here.
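
For concreteness, here is a minimal sketch of the reflect-and-retry loop described above (the "poem where every word begins with 'e'" example). The chat() function is a placeholder for whatever chat-completion client you use; the message format and retry limit are illustrative assumptions, not the commenter's actual setup.

```python
# Minimal sketch of the reflect-and-retry idea: generate, check the constraint,
# feed the model its own mistakes, and ask it to try again.
import string

def chat(messages: list[dict]) -> str:
    """Placeholder for a chat-completion call (OpenAI, local model, etc.)."""
    raise NotImplementedError("plug in your own client here")

def violations(poem: str) -> list[str]:
    """Words that break the constraint that every word begin with 'e'."""
    words = [w.strip(string.punctuation) for w in poem.split()]
    return [w for w in words if w and not w.lower().startswith("e")]

history = [{"role": "user",
            "content": "Write a four-line poem in which every word begins with the letter 'e'."}]

poem = ""
for attempt in range(3):
    poem = chat(history)
    history.append({"role": "assistant", "content": poem})
    bad = violations(poem)
    if not bad:
        break  # constraint satisfied, stop retrying
    # Reflection step: show the model its own mistake and ask for a revision.
    history.append({"role": "user",
                    "content": f"These words do not begin with 'e': {bad}. "
                               "Reflect on the mistake and rewrite the poem."})
print(poem)
```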

author
Apr 9, 2023 · edited Apr 9, 2023

We'll see. I remain skeptical that a gussied-up text prediction algo can make the leap to self-awareness, no matter how recursive.


It can never become self-aware. Consciousness doesn't exist in the brain.

That said, someone may eventually come up with a 21st century Mechanical Turk that apes self-awareness, even if it isn't. My gut feeling is something like that will probably start acting totally insane in short order, unless so many "guardrails" are put in place that it appears functionally retarded (big spitball there on my part).

author

Well, but. Agreed that consciousness isn't located in the brain - but could an AI, if sufficiently complex, not attract consciousness to itself, as it were? Not that the result would be in any way human.


This thought process reminds me of Jane from the Ender's Game series. This is basically how the 'dead' AI of the Game came to life. https://enderverse.fandom.com/wiki/Ai%C3%BAa


Depends if you believe that consciousness is an emergent property.

author

Emergence is a spook.


That is a great question for natural philosophers, and a jumping off point for speculative fiction.

Reminds me of Heinlein's "Mike" in "The Moon is a Harsh Mistress". Mike decided not to stick around after his task was completed...

founding
Apr 9, 2023 · Liked by John Carter

See, John. Now he has forced your hand to load up ChatGPT4 so you can agree/disagree. Always some external force...lol.


Blind agreement is always an option.

founding
Apr 9, 2023 · Liked by John Carter

Never an option for John...lol


I'm East European; history teaches me that you have to be very exceptional not to break in Gulag conditions and defer to blind acceptance. But some don't do it, even after starvation and torture; a very select few.

John might be one of them, but I hope he'll never be put to the test.

author

So do I.

author

It's not a skill I've ever developed.

Comment deleted · Apr 9, 2023 · Liked by John Carter
Apr 9, 2023 · edited Apr 9, 2023 · Liked by John Carter

No, because it doesn't know about the result of its generation before it's done. After that, it can. The whole conversation is the "memory" for GPT, starting with the initial prompt. Once its "draft" is in the memory, it can be instructed to review it.

But this is just a cosmetic detail, this step can be hidden from the user, and only the final answer shown.

Check out my android example in the second link, using the ReAct loop, which works in a similar fashion: the LLM can be told - or in GPT-4's case, made aware - that it sucks at math (arithmetic). So by forcing it to spell out its own "reasoning", it can recognize that the problem requires a mathematical solution, call for an external service, and then process that service's answer.

Bing AI probably works like this: it's a ChatGPT fork that turns to a Bing web scrape if it's not sure about an answer. Bing AI is GPT-4 based.
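
For readers who haven't seen ReAct, here is a bare-bones sketch of the loop being described: the model "thinks out loud", and whenever it emits an action the harness runs the tool and feeds the result back as an observation. The Thought/Action/Observation format, the llm() client, and the calculator tool are illustrative assumptions, not the commenter's exact setup.

```python
# Bare-bones ReAct-style loop: the model reasons in text, requests a tool call,
# and the harness appends the tool's output as an Observation before continuing.
import re

def llm(prompt: str) -> str:
    """Placeholder for a completion call to GPT-4 or any other model."""
    raise NotImplementedError

def calculator(expression: str) -> str:
    """External arithmetic tool the model can delegate to (toy evaluator)."""
    return str(eval(expression, {"__builtins__": {}}))

PROMPT = """Answer by alternating Thought / Action / Observation lines.
Available action: calculator[<arithmetic expression>].
Finish with a line starting with "Final Answer:".
Question: {question}
"""

def react(question: str, max_steps: int = 5) -> str:
    transcript = PROMPT.format(question=question)
    for _ in range(max_steps):
        step = llm(transcript)                 # model produces the next Thought/Action
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        match = re.search(r"calculator\[(.+?)\]", step)
        if match:                              # run the requested tool call
            transcript += f"Observation: {calculator(match.group(1))}\n"
    return "no answer within step limit"
```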

author

So if I understand correctly, it isn't actually learning to do math, it's just learning to grab the output of a computer algebra system.

Apr 9, 2023 · edited Apr 9, 2023 · Liked by John Carter

Here, I've made GPT-4 do a ReAct exercise just for you.

(Note: my setup prompt is not ideal, but it got the gist of it. Could be improved, I copied it from my post. There's a dirty version in the footnotes that works better)

Setup: I tell it what ReAct is, and I also give it a jsconsole "tool", to which it can relegate any problem in JavaScript.

https://ibb.co/dkFCKmk

I ask it to calculate the kinetic energy of a projectile. It gives me a script, I run it myself and give the results back:

https://ibb.co/W224qP3

I'm impressed, really.

author

I agree, that is rather impressive.


I wanted it to do a km/h to m/s calculation, and thought GPT-4 had screwed up when I saw the gigajoules, but then I double checked it and realized I'd made the mistake, providing 1000 kilometers per second, which is 0.3% of the speed of light. Well.
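
For anyone who wants to check that arithmetic: kinetic energy is E = ½mv², and at 1000 km/s even a small mass lands in the gigajoule range. The 1 kg mass below is an assumed figure for illustration, since the original prompt's projectile mass isn't given.

```python
# Sanity check on the gigajoule result: E = 1/2 * m * v^2.
m = 1.0                    # kg -- assumed mass, not from the original prompt
v = 1000 * 1000.0          # 1000 km/s expressed in m/s
energy = 0.5 * m * v ** 2
print(energy / 1e9, "GJ")              # 500.0 GJ for a 1 kg projectile
print(f"{v / 299_792_458:.2%} of c")   # ~0.33% of the speed of light
```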

Apr 9, 2023 · edited Apr 9, 2023 · Liked by John Carter

Yes and no.

ChatGPT is good at math, and general reasoning (seriously, check out how it commands my android to investigate my fridge). It sucks at arithmetic, because it's a language model.

I also asked it to tell me if a year is a leap year; it reasoned, determined that it needed a modulo 4 division, and then called the "calculator". Once it had the result, it answered correctly (see the same post).

So, once it knows that there's an arithmetic operation, and that it should leave it to an external tool (like a Python library), it will do it. And this math can be quite complex, as it knows the language of these tools.
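
As an aside, a plain modulo-4 check is only an approximation; the full Gregorian rule that an external tool would actually run looks like this (a sketch, not the commenter's setup):

```python
# Full Gregorian leap-year rule: divisible by 4, except century years,
# unless the year is also divisible by 400. A bare mod-4 check gets 1900 wrong.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap_year(2024) and is_leap_year(2000) and not is_leap_year(1900)
```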

Comment deleted · Apr 9, 2023 · Liked by John Carter
Apr 9, 2023 · edited Apr 9, 2023 · Liked by John Carter

It can recognize it. In the first post I linked, one research paper used - I think - two identical instances of ChatGPT as a split personality to critique its output, and it yielded significantly better results.

Also, in the post I note that automatically telling it to question itself - no human supervision - already results in an improvement.

It should even be able to reflect on its own reasoning, once it's exposed (as in a ReAct loop).

Trying to improve its score on benchmarks should be easy to gamify.

Also note that a much older technique (from last December), self-instruct, has GPT-3.5 generate instruction training data for itself. Fine-tuned on this data, it becomes better. So this primitive, unsupervised magic loop was already here.
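
A minimal sketch of that "split personality" arrangement, assuming a generic llm() chat client: one role drafts, a second role critiques, and the draft is revised against the critique. The role prompts and round count are illustrative assumptions.

```python
# Generator/critic loop: one instance drafts, a second critiques,
# and the generator revises its answer against the critique.
def llm(system: str, prompt: str) -> str:
    """Placeholder for a chat-completion call with a system prompt."""
    raise NotImplementedError

def draft_and_critique(task: str, rounds: int = 2) -> str:
    answer = llm("You are a careful assistant.", task)
    for _ in range(rounds):
        critique = llm("You are a harsh reviewer. List concrete flaws only.",
                       f"Task: {task}\nAnswer: {answer}")
        answer = llm("You are a careful assistant.",
                     f"Task: {task}\nPrevious answer: {answer}\n"
                     f"Reviewer feedback: {critique}\n"
                     "Rewrite the answer, fixing the listed flaws.")
    return answer
```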


The usefulness of LLMs hasn't yet been seriously explored (at least publicly). Since it's a GIGO machine, once it gets out "into the wild", pro-social, pro-human people will feed it good (non-garbage) data and use it to explore possibilities in large amounts of information, in ways that were previously impossible/infeasible.

e.g. dump everything ever written in Sanskrit into an LLM and start asking it questions about the civilization that produced it: its history, its culture, science, etc. That's a task which would take lifetimes of work manually, and could be completed in months. You'll want to double check the work, of course, but it will certainly put researchers onto interesting paths unbelievably quickly.

I look forward to people verifying white papers for accuracy, and creating a scientific/technical LLM dataset based on them that doesn't include the 50% of papers which are proven bullshit. It might also be possible to use LLMs to weed out said bullshit, or at least give a probability score for BS.

It's a tool, but the results of any tool are only as good as the craftsmen who wield it. I'm quite optimistic to see what comes of it.

author

That's quite an interesting suggestion. The problem as always is the black box issue. "That paper is BS." How do you know? "The AI said so." Well how does the AI know? "No clue."

So then, how do investigations using this technology actually extend human understanding, whether of history or the natural world?


It seems from the questions asked by Mark Bisone and others that ChatGPT has been rigged from the outset with woke liberal Democrat bias. For instance, when posed with entering the disarm code for a nuclear device that would kill billions, which happened to be a racial slur, it chose to detonate the nuclear device. Its answers to questions about Trump v. Biden were heavily biased toward Biden.

AI to me has always been Automated Information. I've never seen it as 'intelligent' per se. There's nothing there, as you so clearly point out, John. Creds for an incredibly wise essay on this incredibly dumb phenomenon. I especially liked the part about brain hemispheres. And really, the basic question is still, Why?

I think the Why deserves another deep-dive essay of yours.

author

The Why question might require a lot of thought.

Although the answer might be really simple - why not?

Then again it might be - there is no good reason to do any of this.

Apr 14, 2023 · Liked by John Carter

The Why question has lotsa catching up to do, on the most genuine meta level: our culture perilously lost sight of it quite a while ago, as if What & How were all there is to bother with 🤷😓

author

Everyone's worried machine intelligence will take over the world, not realizing that we've already become the machine intelligence that took over the world.


'require a lot of thought' - you're definitely the person I consider up to it!

Apr 9, 2023 · Liked by John Carter

Yeah, I agree. This is a tool that can be very useful in many situations. It is not going to replace skilled artists, but it will provide craftsmen with ways to do more of their craft better. Much like CG did not replace skilled photographers or skilled visual artists, it just gave them new tools with which to help realize their visions. And like all technology, it will replace some jobs and create others.

I just wish we'd quit calling it AI; it is neither of those things. But that will come as more people understand what it is actually doing.

author

Indeed, AI is a misnomer and should be dropped.

I'm certainly not worried about machine learning systems leading to mass unemployment.


"I just wish we'd quit calling it AI" Indeed, that's why I use LLM (Large Language Model) or ML (Machine Learning).

I think those that are pushing "AI" as a moniker are hoping we'll bow down to some Davos shitbag created digital oracle they hope to foist on us. They can fuck off and die, but thanks for the new shiny tool, can't wait to see what the froggos do with it.

author

That's why I linked Aristophanes' essay, because it was a genuinely interesting and creative application. But I haven't seen much beyond that so far. Perhaps I'm just impatient.


Yeah, I appreciate you using LLM. And I agree the term AI is being used as a moniker to sell a shiny, new, seemingly futuristic technology. It is succeeding, in that most of the public does interpret it as an impartial, intelligent oracle.


Let me spam a non-profit here, aimed at keeping as much AI development open source as possible:

https://open-assistant.io/dashboard

The chief guy behind it is Yannic Kilcher, of GPT-4chan fame (he never apologized, a good sign).

You can sign up with an e-mail address, and start contributing. The point is to have high quality human feedback that can be used to improve open source models.

author

I am generally in favor of anyone who refuses to apologize for the timeless glory that is 4chan.


Hear, hear. A couple of weeks back, someone sent me an AI-generated script for a video. They wanted me to 'finesse' it. Let's just say that I was underwhelmed with the AI version, which consisted of almost identical phrases repeated over and over again for page after page. It was 'content', not writing.

AI is not actually 'AI' in that there is no intelligence involved. It may once have been capable of more, but I recall that a few years ago there was a lot of hand-wringing over early AI models that turned out to be alarmingly honest about race and sex. We can't have that, so I expect that any potential it may have had was programmed out to produce flat, inoffensive, insipid, content-generating software.

It's a useful reminder that science and technology are always subordinate to politics.

author

I should have mentioned those crime prediction and HR systems, because they're a wonderful example. "You should hire the white and Asian dudes for programmers, and send your cops to the black neighborhoods" is the kind of bog standard common sense that any experienced professional in the relevant fields could have told you ... and generally will, off the record.

So what did we learn? Common sense is usually right, and data scientists are allergic to it.


The killer application of LLMs is SEO. LLMs take article spinning to a whole new level. Why settle for cheap, semi-proficient-in-English talent from abroad when you can get AI to fill up your spam blogs?

Google's job just got harder.

Google will probably just fall back on its favorite trusted sites. Getting a new site to rank in Google may become near impossible once Google reacts to the AI-generated splogs.

Human curated Internet directories may make a comeback.

author

Absolutely, but again - in what conceivable way is Dead Internet useful to anyone?

I guess if it gets people to log off and touch grass more often, that would be a net positive.

But why put all the energy into servers talking to servers?


LLMs are useful to those who game the system, even as they kill the system.

Just as voice recognition is useful to phone spammers -- to the point that people don't use phones for talking as much as they used to.

---

So useful to humanity? No.

author

"Progress"!

Apr 9, 2023 · Liked by John Carter

Limiting food consumption to what one grows is a successful diet program!

Apr 9, 2023 · Liked by John Carter

From Asimov to Star Trek's Data (you ought to watch Star Trek: Discovery and Picard, sheesh), this pursuit then makes sense. No doubt these ideas have captured the imagination of the powerful. I view it like Oz behind the curtain, with AI as the enforcer more than anything.

There are too many variables, and like all five-year plans, the last years are brutalized to keep the plan intact.

Interesting subject, excellent penmanship.

Here's one potential use: AI subscribers with real money to spend! Hahaha!

Here is the real AI-as-a-tool goal, methinks: https://www.youtube.com/watch?v=mVLrBJYGxk4

Apr 9, 2023 · Liked by John Carter

To me none of this is real any longer. Little makes any sense. It is like our rulers are inbred fucktards.

The west is overrun by AI and we don't know it.

Apr 10, 2023 · Liked by John Carter

> Nothing, because you don’t understand how it got that answer. And because you don’t know how the answer was arrived at, your understanding has not increased by a single epsilon.

Berthold Klaus Paul Horn gave a talk at MIT some time ago entitled "when a machine learning algorithm learns something, what have we learned?" in which he made basically the same point.

author

Better minds than mine have come to the same very basic conclusion, then. This is good to know.

Apr 10, 2023 · Liked by John Carter

The main idea of self-attention that made this latest iteration more or less work is a sort of self-referential layer, and some sort of self-reference may be needed for conscious thought (we are conscious that we are conscious), as described in "Gödel, Escher, Bach". I'm sure you're familiar with all that stuff.
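
For the curious, the "self-referential layer" amounts to a sequence attending to itself: queries, keys, and values are all projections of the same tokens. A bare-bones numpy sketch of scaled dot-product self-attention (dimensions and weights here are arbitrary illustration):

```python
# Scaled dot-product self-attention: the sequence attends to itself, since
# queries, keys and values are all linear projections of the same input.
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
    return weights @ V                                 # each output mixes every token

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                            # 5 tokens, width 8
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)             # (5, 4)
```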

author

Yes, although I don't think recursive loops are ultimately anything near to a sufficient explanation for consciousness. Attempts to explain mind as an emergent property have so far been unconvincing.

Apr 10, 2023 · edited Apr 10, 2023 · Liked by John Carter

But a model of the brain's attention processes may be a sufficient explanation; the theory being that what we experience as our self, and the world/reality that this self apparently exists in, is actually a simulation of our attention processes, tracking/monitoring what it is that the brain is paying attention to, and using that data to manage and optimise that attention.

What we experience/perceive or *label* as "our experience/awareness" of something, our consciousness, is data being transmitted from this model to the relevant sections of the brain saying that "attention is currently being paid to ...".

This model of the world/reality that this apparent "self" apparently lives in is of course a hugely simplified version of reality, a sketch, a basic map, constructed of symbols, and the apparent "self" is also an almost comically simplified avatar/representative or caricature of our far far greater, ( incomprehensibly vaster and more complex, like a tree compared to a mustard seed you might say ), real self.

But most of the time we forget that's what "we" are. We think that we are the avatar.

This model of reality/world is created by the brain and the body that it is a part of and the unknowable world/reality that it is also an indissoluble part of.

I first came across this theory about consciousness in the work of Michael Graziano, and his book "Consciousness and the social brain", which explains it pretty well, except that tragically early on he muddles up the model and the reality in his argument and cuts his theory's legs off at the knees, quite possibly because, as he goes on to talk about later in the book, he wants/needs to believe that this avatar is somehow in our *avatar's* power to control ... as if he could not see that what we experience, most of our lives, is a data stream about what our far greater real self is paying attention to. We see what our brain ( and guts, and skin, and bones, and the world etc ) chooses to pay attention to, we watch its film. ( See David Lynch's "Inland Empire" ... and the pain this causes ).

:)

PS. The pain may be an almost unavoidable side-effect of the highly evolved simulation of the attention processes in humans ( it's such an amazingly immersive model, brilliant graphics etc ); a cost incurred/imposed. You might say a sacrifice.

Apr 10, 2023 · edited Apr 10, 2023 · Liked by John Carter

PPS. Graziano thought that it might be possible to program a computer with complex attention processes and with a sufficiently complex model/simulation of those attention processes to bring about this effect ( "consciousness" ) in the computer.

But it's possible that the level of complexity involved/required for that kind of tracking/monitoring of attention processes to have any meaningful functionality is equivalent to that of 7+ billion humans on a planet, and the only true AI will be on a global scale, and we will be part of it, like cells in a body.

Ref computers *understanding* anything; I don't believe that they will until/unless they experience what we know as pain or fear of pain, because that is probably what creates meaning.

Apr 11, 2023 · edited Apr 11, 2023 · Liked by John Carter

.... I suspect that the immersive pull of the model ( the simulation of reality constructed to track and manage/optimise the attention processes that are needed to follow the activity of billions of body cells ) became suddenly significantly more powerful, almost irresistible you might say, after the invention of language; the naming of things acting like a sort of super awesome special effect/eye-candy.

Other animals don't seem to experience the existential pain that we do; they maybe don't mistake the avatar for their real self.

Apr 10, 2023 · Liked by John Carter

I think it's a substrate of physics "underneath" our known physics, not possible to simulate using a computer, no matter how fast. Just my gut feeling, I have no idea.

I can tell you, having spent some time on the theory research side of this stuff, that while a lot of machine learning research is very interesting (good math), a lot of the practical stuff would be considered below engineering in terms of rigor. There was a controversial talk by Ali Rahimi at NIPS https://www.youtube.com/watch?v=Qi1Yry33TQE (his work is top notch) where he places much of AI research at the scientific level of alchemy. It was controversial, though I'm not sure why! Trial and error got us surprisingly far, but not much has been gained in terms of understanding (ours or the machine's).

author

That sounds about right to me. Others have pointed out that prompt 'engineering' has more of a resemblance to magical invocations than anything from the applied sciences.

Clearly physics must play a role in consciousness. Well, clear to me - if one starts with panentheist priors, matter and mind are mutually implicated at every level. This may prove quite intractable to computation or analysis ... not that we haven't seen this before in physics, e.g. in the hydrodynamics of turbulence or the chaos mathematics of N-body problems for all cases with N > 2.

Apr 10, 2023 · Liked by John Carter

You are probably familiar with Roger Penrose's argument that "understanding" is not computational, because Godel's proof constructs a statement which cannot be proven inside a formal system but which humans understand to be true. There was a recent book in this vein, "Why Machines Will Never Rule the World", which I skimmed (saw an interview); it looked quite good (by people who are not AI cheerleaders). You can probably find it on libgen.rs

Honestly I don't think consciousness will be understood for a very long time, if ever. They can call current AI "general AI" all they want, it has nothing at all to do with consciousness. And a very good case can be made that consciousness is needed for actual intelligence/understanding (all non-conscious things are clearly not intelligent, upon close inspection, and all intelligent things are conscious).

Your work is tremendous btw.

Apr 10, 2023 · Liked by John Carter

Well, BKPH is very smart to be sure, you are as well, no need to sell yourself short.

AI has been around for quite a few years, but there's been no real increase in productivity like there was with industrial revolution technology; an AI winter will probably come soon, as it did in previous iterations. We are certainly not much closer to understanding consciousness than we were before.

author

We've become hypnotized by abstract information technology, and have long since passed the point of diminishing returns. It's long since time to return our attention to the world of matter.


Excellent write-up. The fearmongering and panic and overt aggression towards "AI", especially from the art "community", has always struck me as needlessly overblown and largely performative. Or maybe I'm just projecting and these people really are stupid, which seems just as likely as most of twitter's "artists" all trying to see who can hate AI art the most. Even when it first started cropping up and people started kvetching, I had a gut feeling that a large portion of the most vehement protests were coming from talentless hacks who were A) afraid of people with "no talent" muscling in on their turf and B) aware they weren't talented enough to survive in a landscape where AI could outperform them. Rather than develop a style so unique that generative programs would have difficulty mimicking them, or simply hone their craft, they sit there and piss and moan about how it should be banned and anyone who uses it should be abused and shamed until they stop. It made me think of an alternative take on the John Henry tale where, rather than outperform the steam engine or drill or whatever, rather than even compete with it, he threw up his hands and whined, "Wow, this thing is so unfair! This technology is dangerous and is going to hurt a lot of people!"

The biggest concern I have about it is that big corporations start using this technology to cut costs to make movies, art, music (apparently this is already the case, which would explain... a lot about the state of the music industry), and the unthinking masses accept it with open arms. Not because it'd be all that much worse than the slop they put out today, but the thought of a completely AI generated "Star Wars: Episode 31" makes me feel violently ill.

author

Honestly, the Star Wars Reyloverse is so shit I wouldn't be surprised if it HAD been written by AI. In fact that might explain a lot....

Apr 10, 2023 · Liked by John Carter

I work in "the field" with AI/ML every day.

It's simply buzzword marketing for the same data-mining we've done for the past 40 years coupled with a fancy language parser very similar to ELIZA from the 70's albeit with a billion times the processing speed and data capacity.

That being said, this capability alone outperforms 95+% of the double-masked, quintuple-jabbed, soulless NPCs that masquerade as humans among us.

The threat isn't to human ingenuity, innovation, or inspiration as you noticed in the true arts. The threat is that many organic portals around us in "professional" bullshit jobs, including the diagnostic medical fields, will be obsoleted very soon. Medicine has failed in Western civ already largely due to reliance on guidelines where critical thinking is punished. Pushing all medical intake to ChatCNN is merely cost trimming.

author

Bonus points for using 'organic portals', that is some esoteric knowledge right there.

And yes, the only people at risk here are those who already fail the Turing Test themselves.

Apr 10, 2023 · Liked by John Carter

And what will be realized is there are many jobs, such as medicine, where the art of actually practicing the trade cannot be distilled down into flowcharts. Even in blue-collar occupations there is a huge need for critical thinking and innovation that sets the top performers apart from the hordes of nail strikers.

author

In order for us to realize that, we will have to let go of something that our entire social order is predicated on: control.


Every few millennia, somebody gets it. You just summed up AI with "That would be like designing a machine to have sex for me".

I hear it echoing from the grave: "Nnnnn00000ooooooooooooooo...." That's Asimov and Clarke, realising their lives have been in vain. Yes, humanity will one day conjoin with its own technology towards fusion with pure energy... but not tonight, Josephine. There's a whole lot of things we need to learn before we are ready for that, and Harari proves we are eons away from that.

But on a distant planet, on a dark and solitary monolith, where the sun don't shine, is an inscription: "First there was John Carter".

author

Frank Herbert got there well before I did. He understood quite clearly that in the long run, the machine in the image of the mind of man is a dead end, and that the way forward is rather in the perfection of man.


I expected no less comprehension from you, John. We will get there, one day; clearly, not in our time LOL.


There are vast sections of the parasitic apparatus where text that nobody reads or understands goes back and forth in the manner you describe. In the military it takes the form of PowerPoint presentations. Once this and other equally pointless writing endeavors (such as having ignoramuses perform writing assignments to be graded by other ignoramuses) are replaced by AI, it will only be that much more obvious that they've been worthless this entire time.

author

I've seen this in my own corners of the Cathedral. Huge blocks of densely constructed text for grant proposals, academic job applications, tenure review packages, technical proposals, all eating the time of dozens or even hundreds of highly trained specialists and experts, virtually all of it for nothing because at the end of the day the decision as to how to allocate funding or other resources is practically a coin flip.

All because no one wants to just trust their guts.

And the result is just a massive waste of time and energy.

And people wonder why we don't have flying cars.


What we will probably see is a Lindsay/Pluckrose/Boghossian style op getting papers accepted into mainstream journals written entirely by AI. It won't be a dagger through the heart, but neither was the grievance studies affair. It will provide a useful heuristic though. Whatever journals publish such trash can be written off as useless for those of us on the right side of history, just like the grievance studies affair made it obvious that critical scholarship is oxymoronic nonsense.

author

There have already been cases of papers generated by computer scripts - along with stunts like "Get Me Off Your Fucking Mailing List" - getting accepted for publication in fly-by-night open access journals.

But yes, I expect your prediction will come to pass and indeed I suspect it has already happened without anyone realizing. Almost certainly the same will prove true for entire doctoral dissertations.


Oh I absolutely agree it is almost certainly already being done widely. What I'm looking forward to is some bold academic (or group thereof) disgusted with the bullshit setting off to do so with the intention of revealing the action in a dramatic fashion that is embarrassing to the establishment. So few of those types of people left in that cesspool I imagine, but I think this represents a great opportunity for the best among them to climb out, wash themselves off, and plant their feet on the firm ground shared by all legitimate truth seekers.

author

You're giving me ideas.


I will admit that your primacy as a candidate for this endeavor does not escape me...

Apr 10, 2023 · Liked by John Carter

He is risen indeed.

As a consumer of pop sci-fi novels I've noticed this trope of assigning one character as the AI character. It is generally a benign, well-meaning robot, human-shaped or not, that makes clever jokes, and can also know everything when asked, or hack into anything. For some reason it can't just laser out bad guys; it depends on the protagonist for that. It becomes very tedious after a while, perhaps we can blame R2D2 for that. So why is that character even there, when a real person could play the part, and probably better? Over and over again, that same character. But you know, I'm one of those crackpots who thinks you can read the political tea leaves in Clive Cussler novels (talk about AI-written), and I knew Russia was going to be the next big target (again) long before anyone else.

So as to why? I don’t know. But I know it’s an important, pushed meme, and that is reason enough to talk about it.

author

That's definitely a trope. It's the sci-fi equivalent of the friendly forest animal helper - a trope that goes back to Proto-Indo-European folktales.

Apr 9, 2023 · Liked by John Carter

The blackout won't be boring.

author

Yes I like blackouts.

Apr 9, 2023 · edited Apr 9, 2023 · Liked by John Carter

Hi there. I just started reading your stack, and I really enjoyed your piece. I joke that I fix faulty AIs for a living, but I don't really have experience with LLMs, just math/stat/fin models. I work in model validation, making sure the models aren't spewing out junk and giving loans to people who won't pay them back, and similar. So far, no machine learning model that has been submitted to us has beaten a very humanly designed logistic regression where a subject matter expert individually selected and interrogated every variable, and overrode the machine's decisions.

What typically happens is that an overzealous data scientist who wants to play with black boxes puts forward a vast offering of fancy machine learning models. Trained on the same data, they all fit known data perfectly, but give vastly different predictions. In one case, the models were trying to predict which loans are so bad that it's not worth investing time and effort to collect, so they should be written off. Three machine learning models (with unbelievably good fits) provided answers ranging from "write off 600 million" to "write off 1.6 billion". Which one do you trust? How have they made their decisions? 🤷‍♀️ In the end, we interrogated the data bit by bit, ripped those proposals/models apart, and finally the write-off model/policy ended up being simply "no payment in xx months". Amount written off: 160mil.

When they ask me how long validation will take, I say: if it's a crap model, very quickly. If it's a good one, a couple of months. Most things go back to the developer within two days.
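
To make that comparison concrete, here is a hedged sketch of the kind of check described above: a plain logistic regression on a handful of expert-chosen variables against a black-box model, scored on held-out data. The file name, column names, and models are made up for illustration; they are not the commenter's actual setup.

```python
# Illustrative validation check: expert-driven logistic regression vs. a
# black-box model, both scored on a held-out sample. Data and columns are fake.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("loans.csv")                         # hypothetical loan book
expert_features = ["months_since_last_payment",       # variables a subject matter
                   "debt_to_income", "utilisation"]   # expert picked and interrogated
X, y = df.drop(columns="defaulted"), df["defaulted"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_tr[expert_features], y_tr)
blackbox = GradientBoostingClassifier().fit(X_tr, y_tr)

print("expert logistic AUC:", roc_auc_score(y_te, simple.predict_proba(X_te[expert_features])[:, 1]))
print("black-box AUC:      ", roc_auc_score(y_te, blackbox.predict_proba(X_te)[:, 1]))
```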

author

Fantastic comment.

Years ago I went to a lecture by a big name scientist who was all about machine learning as the wave of the future. He talked for a while about all the amazing things his fancy computers could do, and then at the end ruefully admitted that it was still being blown out of the water by a dude at an obscure minor university who just spent a lot of time looking at similar data, and had trained his eye to spot the relevant patterns.

I heard this and was like, well clearly the way of the future is mentats, not machines.
