AI and its effect on your music, your job and the future

  • Thread starter M3CHK1LLA
  • This site may earn a commission from merchant affiliate links like Ebay, Amazon, and others.

M3CHK1LLA

angel sword guardian
Joined
Mar 18, 2010
Messages
6,453
Reaction score
2,134
Location
orbiting caprica
I didn't see a thread on this, so I thought I'd start one. Mods feel free to close or merge with any existing thread.

So I've been seeing these infomercials about AI and the effect it will have on our jobs and personal lives.

I know during the actors' and writers' guild strikes, one of their stipulations was that they could not be replaced with AI, which got me thinking: how many of us have a job where we could be replaced?

I've seen where people have been using AI to create art, videos and music. Are any of you guys using it?
 


AwakenTheSkies

Life is like a box of chocolates
Joined
Nov 13, 2018
Messages
1,111
Reaction score
1,179
Location
The unemployment office
I have mixed feelings about it. I love its potential to make our lives better, but I don't think it belongs in art. It has the potential to help develop cures for diseases we haven't been able to beat yet, and so much more.

I hate how it's being used to copy a human's voice, for example. Imagine you're that actor or singer. How would it feel to hear your voice on something that you would never agree to do? That aaaanyone can do whatever they want with your voice.
Music is a way for humans to express themselves, maybe a peek into the soul of the artist or maybe just a front of lies made to build an image. But what's the point of a machine doing it? Or AI help for songwriting or mixing? Let talent be talent!
But I doubt those who aren't affected will give a shit. If it sounds good and catchy then who cares right?
 

Yul Brynner

Custom title
Joined
Sep 12, 2008
Messages
7,466
Reaction score
9,014
Location
Mongolia
It's gonna take a lot of pressure off of songwriting. If you come down with a good case of writer's block, just get the AI to spit out ideas until you get something you can work with.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,698
Reaction score
12,590
Location
Gatineau, Quebec
I continue to think that a lot of people, intentionally or not, are ignorant of the limitations of AI, or any software for that matter. It's not like jobs being lost or changed to automation is a new thing, this is just a more clever version of it. Creativity is not obsolete, and human intervention is still going to be needed to get quality results in a lot of fields, and I don't expect that to change any time soon.

I'm not worried about AI music. Both because software still sucks at writing music, and because people can just... choose not to. In the same way that so many people push back against AI art, I expect a lot of people would look at AI music as a novelty without "integrity" and just not partake. And it's not like AI can put on shows and set up a merch table.

Realistically, the kinds of people I see jumping on stuff like AI art for practical purposes are the same kinds of people who would otherwise be getting their art from Fiverr or stealing it off of Google image search anyway, so there wasn't much in the way of integrity there to begin with.

AI has an impact on my work, for sure, but not in the "it's going to replace me any time soon" kind of way. AI can't do my job. Someone I work with is doing some AI-centered research, and it's interesting to see the amount of effort that goes into getting something meaningful out of it when "working with AI" isn't just crafting prompts for ChatGPT.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,476
Reaction score
30,128
Location
Tokyo
I didn't see a thread on this, so I thought I'd start one. Mods feel free to close or merge with any existing thread.
I started an AI thread years ago, but it would probably count as a necrobump to discuss there.
So I've been seeing these infomercials about AI and the future it will have on our jobs, and personal lives.
Naturally it depends on the job. If your job involves relaying information or serving as basically a human layer between an internal computer system and some outward client, then yea, that's likely going to be gone. And that's a ton of people.

Within the creative fields it's important to define some strata...
- artists who really think outside the box and create something that establishes them as a unique voice
- artists who are employed by industry
- artists who are freelance

I think the first two categories are pretty safe. I know many in-house artists who are quite into bringing AI into their workflow, though the culture is a bit different in Japan, and we'll probably see some downsizing as some design tasks are replaced with generate->select->regenerate loops run by people who can't create.

For the most part, though, I think the creative jobs are a bit overrated. The unique voices are few, and the wannabes are plentiful. I mean, I live in Japan. Consider how many unique art styles there are within manga vs. how many completely derivative styles there are. Those guys are gone. In the world of AI, there's no room for the derivative artists who could otherwise have made a living as a more cost-effective choice vs. more established artists.

I've seen where people have been using AI to create art, videos and music. Are any of you guys using it?
I always wanted to do some cyberpunk/prog synth+guitar stuff with AI, but I just never get around to it. Would still like to revisit it, maybe during xmas break.

But yea, I expect that within 5 years we can just spit out AI songs in genres like djent that pass for the top 5% of releases in the genre. Regarding AI art and integrity, when the music is as good as human music, and better than most, I don't think people will care. If you think of some banging track you like now, and imagine it was just in some pool of AI-generated songs you were listening through and not especially loving, and then that came along, I think you'd like it just as much regardless of how it was made. You can try not to participate in it, but when you hear a song made by AI that you really like, you really like it -- you don't really have any choice in the matter.
 

M3CHK1LLA

angel sword guardian
Joined
Mar 18, 2010
Messages
6,453
Reaction score
2,134
Location
orbiting caprica
I have mixed feelings about it. I love its potential to make our lives better but I don't think it belongs in art.
I've seen some AI art, and at the very least it looks interesting; some of it looks pretty good.

The use of human voices is very interesting. I can see why actors, musicians and other famous individuals would hate the use of their voices without permission. I can also see, though, where something like that could bring a voice back from the dead for, say, movies and music.

AI being used to help in the cure and prevention of diseases is something that would really benefit humanity. It's almost unbelievable how long man has been working on curing some of the diseases that have been around for millennia...
 

Randy

✝✝✝
Super Moderator
Joined
Apr 23, 2006
Messages
25,533
Reaction score
17,793
Location
The Electric City, NY
The problem with AI for art (music, visual, etc.) is that the program generating it has its own algorithm that's also only fed by existing pieces. So the art will always be limited by the programmer or the palette it has to work with.

By comparison, all or most people have the capacity to create art. It might not all be great but the capacity is there and the influence or palette they have to work from is nearly unlimited based on anything a person can see or think.

Considering the amount of work it takes to program AI, you'll never have the volume or feedback to the degree a living artist does.

I kind of think of AI art the same way I think of something like EZ Mix vs. a living producer. It has its place if you want something finished-ish, but you will get something somewhat cookie-cutter versus having it produced by a person.
 

M3CHK1LLA

angel sword guardian
Joined
Mar 18, 2010
Messages
6,453
Reaction score
2,134
Location
orbiting caprica
I want to respond to some more of these comments, but I'm at work now. You guys have brought up some interesting perspectives.

@narad do you have a link to your old thread? I'd like to do some reading
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,698
Reaction score
12,590
Location
Gatineau, Quebec
And of course, as soon as this thread pops up, another Slack thread starts with people posting their weird AI art creations. And a thread about how AI-generated feet still often have the wrong number of toes.

And that's a large part of why I don't think AI can do music. Music is so insanely subjective, and the criteria to evaluate what makes music, let alone "good" music, can barely be pinned down by people, so I can't imagine how you'd train a machine to recognize it. It's one thing to feed a machine a bunch of data and say "identify the genre" or "spit out a new version of this with different chord progressions", but it's always going to be weird garbled derivative stuff. Maybe people will like that, I dunno.

You could use the tools to generate the melody, maybe. Feed a machine a bunch of sheet music and have it spit out a new score, but a score vs. the interpretation and performance of that score are very different things. And I'm sure that computerized music-writing tools exist already - things that generate and randomize drum loops, for example.
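That kind of "dumb" generative tool can be sketched in a few lines. This is a purely illustrative Python toy (the instrument names and hit probabilities are my own assumptions, not any real product's algorithm):

```python
import random

def generate_drum_loop(steps=16, seed=None):
    """Randomize a one-bar drum pattern as a step grid (1 = hit, 0 = rest)."""
    rng = random.Random(seed)
    # Kick always lands on the downbeats, with occasional random extras.
    kick = [1 if i % 4 == 0 else int(rng.random() < 0.15) for i in range(steps)]
    # Snare sits on the backbeat (beats 2 and 4 of a 16-step bar).
    snare = [1 if i % 8 == 4 else 0 for i in range(steps)]
    # Hats fill most steps, dropped at random for variation.
    hats = [int(rng.random() < 0.9) for i in range(steps)]
    return {"kick": kick, "snare": snare, "hats": hats}

loop = generate_drum_loop(seed=42)
for name, row in loop.items():
    print(name.ljust(5), "".join("x" if hit else "." for hit in row))
```

Tools like this randomize within a fixed grammar (kick on downbeats, snare on the backbeat), which is exactly why the output tends to sound generic rather than composed.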
 

Strobe

Well-Known Member
Joined
May 30, 2011
Messages
852
Reaction score
856
Location
St. Paul, MN
AI is already pretty good at doing a first pass of something. Need to write a letter of recommendation? Prompt it to get the structure, then fix the overly generic nature of AI writing. I have used it to break through a bit of writer's block when I was trying to write a challenging message in a work context. I never just use the output without changes - but editing can be faster than starting from scratch. It is not a normal part of my workflow, but I want to be aware of its current state and potential, so I make an effort to familiarize myself.

I think it's similar for music. Let's say you want to harmonize a part in thirds. That's formulaic - a computer could do that. Writing a bass line can be a little formulaic as well, and I have heard artificially generated drum and bass lines that sounded good. If it saves time without being relied on for the truly creative decisions, that is a good thing. It's a little odd when it's actually in the driver's seat for what would be artistic decisions. I think it is a ways off from doing a good job there without a lot of actual human hand-holding. That said, if AI could generate massive quantities of high-quality emotive music - that is a weird place to be, but there are worse dystopias out there.
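The thirds example really is that formulaic: stay in the key and move every note up two scale degrees. A toy Python sketch (the scale and note names are illustrative, ignoring octaves and voice leading):

```python
# Diatonic harmonization: shift each note up two scale degrees (a third).
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def harmonize_in_thirds(melody, scale=C_MAJOR):
    """Return the melody harmonized a diatonic third above, within the scale."""
    return [scale[(scale.index(note) + 2) % len(scale)] for note in melody]

print(harmonize_in_thirds(["C", "D", "E", "G"]))  # ['E', 'F', 'G', 'B']
```

A real tool would also track octaves, accidentals, and key changes, but the core rule is just an index shift - which is why it's a job a computer can safely take over.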
 

Drew

Forum MVP
Joined
Aug 17, 2004
Messages
33,591
Reaction score
11,132
Location
Somerville, MA
I continue to think that a lot of people, intentionally or not, are ignorant of the limitations of AI, or any software for that matter.
I think the fastest way to see this is take a subject that you have undisputed expertise in, and ask ChatGPT to do something in that subject. Doubly so if it's a fairly fast-evolving field.

At present, AI is a pretty useful tool. It can streamline mundane tasks - I think some of the generative fill tools in Photoshop are a great example of the potential, where it can do a much better job removing objects or changing elements than the old "dumb" tools. It'll allow people with little PS experience to do relatively simple things that used to take a lot of work, and it will allow PS experts to work much faster than they used to.

But, ask ChatGPT to do something fairly specialized, and you start to see its limits. For example, I just gave it this prompt:

"despite a lower than expected inflation report last week, why might the Federal Reserve still raise interest rates in their December meeting?"

...and received this answer:

"The Federal Reserve's decision to raise interest rates isn't solely determined by a single data point, such as the recent lower-than-expected inflation report. Instead, it considers a range of economic indicators and trends. Even with a lower inflation report, the Fed might still opt to raise interest rates in their December meeting due to a broader assessment of the economy. If other indicators, like strong employment numbers, robust consumer spending, or signs of potential overheating in certain sectors, point towards economic resilience or risks of inflationary pressures building up over time, the Fed might choose to preemptively adjust rates. Additionally, the Fed's decisions often consider the longer-term trajectory of inflation and the overall health of the economy rather than reacting solely to short-term fluctuations in specific data points. Therefore, despite the recent inflation report, the Fed might raise rates as part of a proactive strategy to ensure stable and sustainable economic growth."

This is... not even wrong, really, it's just useless. It speaks in broad hypothetical generalities. If I were asked this question, I'd be talking about how the November decision to hold rates steady was in part predicated on the rise in long-term yields being restrictive enough that the Fed thought they didn't need to hike, and most of October's increase has now reversed. I'd be talking about how most of the decline in core inflation was due to elements like rent, where the Fed has been expecting drops anyway, and how the so-called "supercore" measure, core less shelter, hasn't actually fallen all that much. I'd be talking about how the relatively weak October employment report was impacted by the UAW strike and was fairly normal in the establishment survey if you backed out the estimated 30,000-job impact from that. And I'd be talking about how important the upcoming personal consumption expenditures report will be, after a less-weak-than-feared retail spending report today - but retail spending is heavily geared towards goods, and I'm more concerned about services inflation, and in turn services demand, now. I'd ALSO be talking about how I don't think they WILL hike in December, and that the futures market has priced out any possibility of a hike in both December AND January now.

Still, as a broad general discussion of why the Fed might or might not hike in any particular meeting given a particular input, that really isn't bad. It reads like a textbook, but it hits the established salient points. It's just worthless if you want some nuance and color on WHY they might or might not act in this particular instance. It's worthless for an investor trying to figure out what to do today and wondering what to watch.

Another issue - the last time we had this discussion was the first time I actually bothered to sign up for ChatGPT. As a proof of concept I asked it to write me a training plan to raise my cycling FTP - don't worry if that doesn't make any sense, but the gist is I'm a pretty serious cyclist, am self-coached, know a decent amount about contemporary theory on training, and FTP is a very important predictor of ability, basically the power you can sustain for a long period of time. ChatGPT delivered me a 6-week plan, and again, it was totally fine... but it also reflects a very dated understanding of what drives training adaptations, likely as a result of the fact that ChatGPT is "taught" by a snapshot of the internet that predates it by several years, so it can't create feedback loops, as I understand it. It has a heavy focus on off-bike strength training as well, which is more appropriate for burst power than it is for sustained power, so I also kind of question how well it really "understands" the purpose of the training plans it was likely basing this on.

I mean, it's an incredibly powerful tool, and can generate copy FAR faster than I can. But it's not smart.
 
Last edited:

Asdrael

Green and Blue stuff.
Joined
Apr 6, 2008
Messages
514
Reaction score
557
Location
Germany
In my general field - publicly funded scientific research - AI is both a very useful tool and a threat. The fact is also that a lot of the decision makers for us (politicians and should-already-be-retired-if-not-dead scientists) have no fucking clue what it is.

- AI as a research field in itself is interesting but very young. Most people, even at a professor level, are around 40 years old max. There is fascinating stuff coming up, in particular in my field, where very difficult-to-interpret medical images can almost instantaneously lead to an accurate diagnosis.
- We see a MAJOR influx of ChatGPT-written shit being published. This is bad because the output quality of ChatGPT is still terrible for us (poor citations, poor train of thought, etc.), so we have to sort through even more shit before finding good information - and I'm including stuff where "researchers" from not-so-outstanding research countries don't even bother removing the "this was written by an AI" line. This will soon hit the general public with an even bigger layer of mistrust in scientific research, sadly.
- In teaching, students are becoming even lazier than they were and just AI their theses. The detection tools we have for that are bad, and at a lower science level (bachelor, master) it gets difficult to distinguish between actual reasoning and AI. That's on top of student quality decreasing strongly in the last 15 years at the Master+ level. Most can't think on their own and have very poor reasoning and projection skills. It's very rare now that I teach a class and think "that person is PhD material", and now they know they can just AI the shit out of everything (plus other pressures outside the frame of this discussion making it a joke).

Overall, I am not afraid that AI is going to steal my job or whatever, since my job by definition requires inventing new things, something AI can't do and won't be able to do in the foreseeable future. However, I have a strong suspicion that it is going to widen the gap between people who can understand science and people who are just going to misunderstand and misuse it - the risk being that they will be taken advantage of, if not worse.

I miss the Microsoft chatbot that basically turned into Hitler in a week. Would have been fun to have its input in some other off-topic threads ;)
 

nightflameauto

Well-Known Member
Joined
Sep 25, 2010
Messages
3,076
Reaction score
3,862
Location
Sioux Falls, SD
AI at the moment isn't really all that great at anything more than helping pare down shitty search results to make them slightly less shitty. It can take massive amounts of data and turn it into easily digestible nuggets. It's helpful if you have a very specific thing you need to ask it, and it has that particular thing within its range already.

In programming it's helpful because what used to be a half-hour of digging through crap results on Google can now be about three to four minutes on something like ChatGPT. So it's increasing productivity for people already versed in that particular field. What it's not currently capable of, and probably won't be for a long time, is taking obscure bullshit concepts like what a marketing director or a sales droid would tell you they need and turning them into something real, usable, testable, and eventually releasable. That's where humans are going to have the edge for a very, VERY long time. But it can help us take our "vision", if you will, and turn it around faster.

Now, if we ever get to the point where AI systems can self-repair and self-refactor? That may be a different story. But where we are today? The major disruption is going to come from the hype machine and the fantasy-fetish CEOs who are absolutely convinced that they can replace half their workforce with AI and get quality output. I know we'll see it start happening, regardless of the feasibility. I expect to see a lot of companies that fall for the bullshit fail quickly. I also expect the real, long-term impact of where AI might be headed to be a slow removal of desk jobs that are necessary today.

Basically, if you're a skillset worker that's well versed in your field? You've got a good long while before you truly have to worry about the machines taking over. If you're mediocre and just barely able to keep up? Prepare to be run over slightly sooner.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,476
Reaction score
30,128
Location
Tokyo
- AI as a research field in itself is interesting but very young. Most people, even at a professor level, are around 40 years old max. There is fascinating stuff coming up, in particular in my field where very difficult to interpret medical images can almost instantaneously accurately lead to a diagnosis.

Just addressing this point really quick -- AI is a comparatively young field, but it's not a young field. Many would say AI as a field of academic pursuit started in the late 1950s, with guys like Marvin Minsky, John McCarthy, and Claude Shannon laying down the fundamentals. Even a lot of what Turing was doing in his later career was very AI. And in the 70s/80s you had the birth of connectionist models, which are the direct forebears of modern AI. Getting these models to work well at scale for practical problems is a relatively recent thing, but AI as a research field precedes that. All of the so-called "godfathers of AI" are deep learning researchers, all older than 60 I think (Hinton counts as ancient for sure).
 

Asdrael

Green and Blue stuff.
Joined
Apr 6, 2008
Messages
514
Reaction score
557
Location
Germany
That's right, but the buzz around it - leading to more funding, more positions, etc. - is around 10 to 15 years old. The amount of open chairs for "AI in whatever" is crazy. This matches the translation of AI to other fields. As usual, the theoretical research on something is niche, but as soon as there is a breakthrough in applications it explodes (and then generally deflates within 20 years once the hype is gone, e.g. nanotech in medicine lulz).

Interestingly, my wife is part of a research cluster for artificial intelligence, and there is now a major research component on the ethics of AI. Not only whether an autonomous car should preferentially run over a baby or a grandma, but intrinsic bias and stuff like this.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,476
Reaction score
30,128
Location
Tokyo
I want to get back to some earlier points, but you picked the right week, OP lol.

 

p0ke

7-string guitard
Joined
Jul 25, 2007
Messages
3,692
Reaction score
2,560
Location
Salo, Finland
In my line of work I feel like AI is mostly just another tool in the toolbox. I'm a software developer, and I'm aware that code can be generated by AI if told specifically what to do, but I don't feel threatened by it at all. For example, a junior web dev at my company had a problem with his code and tried to solve it with the help of ChatGPT. It just didn't find a working solution, so then he called me and we had it solved within minutes. There's just no replacing years of experience after all.

But at the same time, we're developing an AI-based system for the corporation that my little company is part of, which is going to automate recruitment (automatically writing job descriptions, analyzing people's CVs and application letters, selecting the most suitable candidates, contacting them by email or even by some deepfake calling robot, and so on), and I get why the people doing that manually now are scared of losing their jobs. When we perfect that system, the only thing they'll need to do is oversee that the AI makes the right decisions and, at the end of the recruitment process, contact the chosen applicant(s).

I just hope the result will be that the same people can handle a lot more recruitment, instead of the same amount of recruitment being handled by fewer people...
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,476
Reaction score
30,128
Location
Tokyo
I think the fastest way to see this is take a subject that you have undisputed expertise in, and ask ChatGPT to do something in that subject. Doubly so if it's a fairly fast-evolving field.

Back with more procrastination :D I said this in the other thread, but this is such a weird take to me. I guess, succinctly, the mistakes are:
1.) looking at today's models and making very conservative claims about the future just because this exact model isn't delivering exactly the type of response you want,
2.) looking for a certain sort of depth and analysis in a model that is aligned specifically to be helpful. So it has nothing to do with the model's capacity for analysis, but with the type of analysis you want not matching the type of analysis it is designed for.
3.) asking the model about some niche topic that it has never seen in its training data, and being surprised that the response doesn't reflect the knowledge it never saw.

If you consider the things the model does know, and reflect on how it knows them and how it generalizes on those topics, future models will both be similarly suited to more niche things (by virtue of more data and capacity) and be better at going out, fetching contemporary discussion (go grab 300 pages on the topic of FTP, for instance), synthesizing that information, and spitting out the report. If a human without any prior knowledge of cycling FTP plans can go out, read a couple of books on the topic, and synthesize a decent training plan, so will the AI. That's pretty much a certainty. To see it any other way would be like looking at the Wright brothers' first flight and being like, "Nope, no way that thing's going any higher than that".

Superficial criticism of "it doesn't do this right now" is fine, but rarely is it useful IMO to worry about this exact current set of models taking anyone's jobs. Just considering the amount of time it takes to roll out institutional changes to leverage AI means it's not going to happen in the super short term, but the newer and better models will simply be plug-and-play -- a one-line change in your API call. So I think a valid criticism of capabilities has to come from a deep understanding of how the models work, and their abilities vs. capacity and training strategy, historically.
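The "one-line change" point is concrete: in chat-style APIs the model is typically just a string field in the request, so swapping today's model for a successor touches nothing else in the pipeline. A hypothetical sketch (the function and the "gpt-6" model name are illustrative, not any real client library):

```python
def build_chat_request(prompt, model="gpt-4"):
    """Assemble the JSON payload for a hypothetical chat-completion endpoint."""
    return {
        "model": model,  # the only field that changes when you upgrade
        "messages": [{"role": "user", "content": prompt}],
    }

# Today's integration...
request_now = build_chat_request("Draft a training plan to raise my FTP.")
# ...and the plug-and-play upgrade: one argument changes, nothing around it does.
request_later = build_chat_request("Draft a training plan to raise my FTP.",
                                   model="gpt-6")
```

Everything institutionally expensive (safeguards, legal review, workflow integration) lives around that call, which is why the rollout delay, not the current model, is the bottleneck.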
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
12,698
Reaction score
12,590
Location
Gatineau, Quebec
but rarely is it useful IMO to worry about this exact current set of models taking anyone's jobs.
Except that plenty of people are claiming that this is exactly what's about to happen, or is already happening. Most of the non-technical people who have insisted on talking to me about AI are convinced that ChatGPT can do my job right now. I forget who it was in the previous thread who was trying to make the argument that ChatGPT is preeeeeetty close to AGI already, which I think is pretty off the mark.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,476
Reaction score
30,128
Location
Tokyo
Except that plenty of people are claiming that this is exactly what's about to happen, or is already happening. Most of the non-technical people who have insisted on talking to me about AI are convinced that ChatGPT can do my job right now. I forget who it was in the previous thread who was trying to make the argument that ChatGPT is preeeeeetty close to AGI already, which I think is pretty off the mark.
Without trying to repeat too much of the above, let's pretend that ChatGPT could replace your job, today, in terms of having roughly the same input/output given the right prompt for the context. The delay in integrating that into your company, putting the appropriate safeguards in place, and addressing the legal concerns and accountability would already put that many months, if not years, into the future. And at that point it's not GPT-4 that's taking your job, it's GPT-6+. So looking at literally today's models as an existential job threat is pointless.
 