AI and its effect on your music, your job and the future

  • Thread starter M3CHK1LLA

wankerness

Well-Known Member
Joined
Feb 26, 2012
Messages
8,718
Reaction score
2,679
Location
WI
Maybe some, but I can vouch that at least some companies are being very cautious around AI for a whole host of reasons. A lot of companies feel like their value is in the team they've cultivated, not so much lines of code or some other arbitrary measure of productivity. Some companies recognize that using AI could put you in hot water for IP and copyright given that you can only train an AI to "be a programmer" by feeding it someone else's code. I think smart companies will be evaluating for themselves what they can and can't get out of AI before jumping to any conclusions about who or what it will replace.

I certainly think some will try to replace their programmers with AI, and there will be some successes, but I expect even more failures. Because, as Demiurge said:

And I don't mean that as a jab to say that anyone who believes AI can replace a programmer must be a bad programmer, but not all software jobs are equivalent. It's not like code generation or automation are new concepts in the world of software. Automated testing and CI etc. didn't replace QA, for example.
That's the optimistic view. I suspect companies that know what they're doing and aren't beholden to shareholders will do what you say. But I expect plenty more to go all-in on it and face consequences down the road somewhere.

I think AI is kind of like Blockchain in that it's currently a stupid piece of technology that is basically the technological equivalent of a buzzword which everyone's falling all over themselves to loudly say they have in their product, and in a few months that fad will be over. I think it's far more dangerous than Blockchain in that it actually is causing bad management to try and remove real workers, is already costing thousands of people their jobs, and that some uses of it that are actually effective will continue being used. Especially with idiot execs that make AI-generated posters for their movies, or get it to write copy, or whatever. Those are considered "good enough" by the vast majority of people that look at them, so I'm afraid it's here to stay unless there are some stunning pro-labor, anti-exec Supreme Court decisions (yeah right, with the conservative wackos in charge) that basically make all AI-generated art/writing trained on real art/writing a copyright infringement.

I saw a news article yesterday about how OpenAI had developed a tool for teachers to detect with 99.9% accuracy whether their student had used ChatGPT, but they'd been sitting on it for a year because "30% of ChatGPT users are opposed to being able to be caught," basically. So there's plenty of "customers" out there that are the problem, too.
 


TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,077
Reaction score
13,534
Location
Gatineau, Quebec
aren't beholden to shareholders
Any company I was thinking of is VERY beholden to shareholders, because a large part of serving shareholders is shielding them from the risks that come from adopting new, controversial technologies and the legal implications attached to them. That's not to say they (by which I mean we - I'm mostly thinking about the offices I've worked for and our corporate overlords in the gaming space) aren't using it at all, but there's a much thicker layer of caution than it might seem from the outside.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,828
Reaction score
31,338
Location
Tokyo
It's weird to me to watch these discussions about "if" AI can replace a human {programmer, analyst, doctor, etc}. If we were to step back and look at the capabilities of these models over the past 10 years, we would be looking at some hugely exponentially rising curve, at the end of a string of countless "machines can't do that"s which are now long broken, and saying, "yea, yea, I know that happened but surely X could never be replaced". The question isn't if programmers will be replaced, it's when will programmers be replaced. In my opinion, the answer for many is rather soon.
 

wankerness

Well-Known Member
Joined
Feb 26, 2012
Messages
8,718
Reaction score
2,679
Location
WI
It's weird to me to watch these discussions about "if" AI can replace a human {programmer, analyst, doctor, etc}. If we were to step back and look at the capabilities of these models over the past 10 years, we would be looking at some hugely exponentially rising curve, at the end of a string of countless "machines can't do that"s which are now long broken, and saying, "yea, yea, I know that happened but surely X could never be replaced". The question isn't if programmers will be replaced, it's when will programmers be replaced. In my opinion, the answer for many is rather soon.
I think that the thing with generative AI is that it's been an exponentially rising curve in ability so far, but due to the nature of it, it also is capped at a certain level of competency that we're probably going to be close to soon. Like, it won't just keep going up until it replaces everyone. It would take an entirely new kind of AI to actually get past the limitations with these systems. It will get better at making deepfakes, sure.
 

crushingpetal

SS.org Regular
Joined
Nov 11, 2022
Messages
1,373
Reaction score
1,930
Even if AI reduces a human team of 10 programmers to 5 programmers, that is a significant "disruption" to the industry.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,828
Reaction score
31,338
Location
Tokyo
I think that the thing with generative AI is that it's been an exponentially rising curve in ability so far, but due to the nature of it, it also is capped at a certain level of competency that we're probably going to be close to soon. Like, it won't just keep going up until it replaces everyone. It would take an entirely new kind of AI to actually get past the limitations with these systems. It will get better at making deepfakes, sure.

I'm not exactly sure which generative AI models you're referring to, but I think there's no reason to believe that. On the language side, there are lots of avenues of research happening to both scale up these models and address the limitations of current AI, and we're a long way from learning what the hard limitations actually are.
 

TedEH

Cromulent
Joined
Jun 8, 2007
Messages
13,077
Reaction score
13,534
Location
Gatineau, Quebec
Time for lureGPT to change their passwords for everything.

it's when will programmers be replaced
I know we've gone back and forth on this, and we need not rehash the whole thing, but I remain unconvinced, even after having learned a fair bit more about how AI works since those conversations. We have just as much evidence to show that people are bad at predicting the future as far as technology goes, though. In my mind, at this moment, asserting that programmers will never be replaced and asserting that they'll inevitably be replaced are equally absurd predictions to make. As it stands, it's my professional opinion that the current state of AI cannot match a human programmer at all - just like in every other domain AI is used, it will excel and beat humans in some specific contexts and measurements, but will fail in the bigger context of what you really hire a human to accomplish.

Unrelated - It would be such an easier conversation to have if so much of the attention and hype being brought to it came from people who had any idea what they were talking about, and it's not an easy subject to understand. Just like crypto and blockchains are nebulous to the layman, AI is even more so. Whenever AI comes up it feels like this to me:
[Image: xkcd "Tasks" comic (tasks.png)]


And before anyone says it, yes, I see the irony that AI can do the thing in the comic now, but the point is about the barrier to understanding what technology can or can't do and why. I don't think that the way AI works has really made its way into the public consciousness yet - which makes it very difficult to evaluate the difference between a meaningful tech breakthrough and just a clever trick or calculation.

Also, I'm not accusing narad of not knowing what he's talking about. I'm well aware he does. I'm speaking very generally. No shade on narad's perspective, it's a reasonable one.

it also is capped at a certain level of competency that we're probably going to be close to soon
IMO we're already sort of reaching a point where we're understanding the limits of what some of the AI products out there can actually do. Enough so that the word "slop" has found a home in conversations about generative AI content. Enough so that it's becoming part of internet literacy to be able to pick out when something is AI or not.

I'm sure someone might argue that we've reached the "slop" phase so quickly that something non-discernible from "true" intelligence and "true" art must be just around the corner, and I'd say that coming from the gaming space, I find that hard to believe at face value. I say that because every time a new visual technology comes out, we inevitably say "wow, we're so close to photo-realistic!" We've been saying it since PS1 and N64. But we've never been close to photorealism. Every new generation just demonstrated how much farther we really have to go. Someone will inevitably say "yeah, but with today's ray-tracing, some games already look convincingly real", but in the year 2035, with the advantage of hindsight, we'll wonder how we ever put up with such ugly approximations of the world. For all we know, at least. As I said - we're bad at predicting the future.
 

Moongrum

Well-Known Member
Joined
Sep 7, 2013
Messages
711
Reaction score
911
Location
Pacific NW
It's weird to me to watch these discussions about "if" AI can replace a human {programmer, analyst, doctor, etc}. If we were to step back and look at the capabilities of these models over the past 10 years, we would be looking at some hugely exponentially rising curve, at the end of a string of countless "machines can't do that"s which are now long broken, and saying, "yea, yea, I know that happened but surely X could never be replaced". The question isn't if programmers will be replaced, it's when will programmers be replaced. In my opinion, the answer for many is rather soon.
May I remind you all, this is the man that had no moral issue writing a program that turns knobs automatically, putting all the knob-turner technicians out of work.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,828
Reaction score
31,338
Location
Tokyo
I know we've gone back and forth on this, and we need not rehash the whole thing, but I remain unconvinced, even after having learned a fair bit more about how AI works since those conversations. We have just as much evidence to show that people are bad at predicting the future as far as technology goes, though. In my mind, at this moment, asserting that programmers will never be replaced and asserting that they'll inevitably be replaced are equally absurd predictions to make. As it stands, it's my professional opinion that the current state of AI cannot match a human programmer at all - just like in every other domain AI is used, it will excel and beat humans in some specific contexts and measurements, but will fail in the bigger context of what you really hire a human to accomplish.
I don't know how to make a very fact-based argument about what will and will not happen either. But I think we can look at the trend lines. Was AI a better programmer than an undergrad CS student 5 years ago? I don't think so. I think it is now, though. Claude can sometimes solve project-level programming tasks outright. You kind of either look at that and see the writing on the wall, or you don't.

And before anyone says it, yes, I see the irony that AI can do the thing in the comic now, but the point is about the barrier to understanding what technology can or can't do and why. I don't think that the way AI works has really made its way into the public consciousness yet - which makes it very difficult to evaluate the difference between a meaningful tech breakthrough and just a clever trick or calculation.
The public definitely isn't going to understand the model mechanisms for this stuff basically ever, but I read this and feel like I'm supposed to read it as: if you know how it works, you can understand that it's just calculation and not a meaningful tech breakthrough. But (a) it can both be just (millions of) clever tricks -and- also be a tech breakthrough, and (b) I think the more people know the models and how they work, the more scared they are of the potential for them to cause disruption. Look at Hinton -- he's way up on both of these metrics.

Yes, the general idiot AI influencers hyping this up are in this category too, but that shouldn't overshadow the number of experts that believe similarly. It's like crypto in that sense too -- I think the Bitcoin whitepaper is absolutely genius. That YouTube is filled with asshats trying to hawk their own NFTs and shitcoins and ape images shouldn't take away from the goodness and importance of that original vision.
 

PuckishGuitar

Wife has Dior, I have ESP
Joined
Jul 14, 2022
Messages
228
Reaction score
324
Location
Clutch City
re: AI taking over the roles of engineers, just imagine how much worse this comic would be with current AI interpreting client demands. I see some MC Escher-type designs getting made and production just hanging up trying to interpret it :lol:

[Image: FgXmnMgXoAI4NF1.jpeg]


Over the past few months and seeing what AI is actually doing, I haven't changed my opinion much. It's going to be a tool for knowledge workers initially, and I see it being a culling device like computers were; either you adapt it into your workflow and progress, or you miss out and suddenly your skillset starts looking out of date, all other things being equal. So far I've mainly just used AI to whip up some quick graphics for presentations, but since I'm in the middle of rewriting standard procedures into a different format, it has been helpful to generate outlines out of source material that I can then start editing and revising. I'll be thrilled once we start getting conversational AI that I can describe a model to, with parameters to vary and ultimate design goals, and have it create the data files and sensitivity runs across multiple software packages and Excel sheets and give me a summary. I'd still have to check that the inputs and outputs make sense, but it would save a ton of time.

edit: eventually I do see AI getting to the point where those checks become unnecessary, and I could give it a more open-ended prompt and let it generate a solution on its own. But until it becomes truly "Data from ST:TNG" there will need to be some hand guiding it to what we humans need.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
16,828
Reaction score
31,338
Location
Tokyo
re: AI taking over the roles of engineers, just imagine how much worse this comic would be with current AI interpreting client demands. I see some MC Escher-type designs getting made and production just hanging up trying to interpret it

Misinterpreting client demands is a huge problem when the result is hundreds of man-hours working towards the wrong solution, but you also have to consider that with AI you receive your solution almost immediately. How many times do you have to botch the interpretation of the client's demands for it to be a net loss, when you can generate a potential solution 500 times within the original timeframe?
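A rough back-of-envelope sketch of that break-even point (the 500x figure is from the post above; the hour costs are made up purely for illustration):

```python
# Break-even sketch: one human pass vs. repeated AI attempts.
# All numbers are illustrative assumptions, not figures from the thread.
human_hours = 500         # hypothetical cost of one human pass at the problem
ai_hours_per_attempt = 1  # hypothetical cost of one AI-generated attempt

# The AI route stays a net win as long as the total number of attempts
# (botched interpretations included) costs less than the single human pass.
break_even_attempts = human_hours / ai_hours_per_attempt
print(f"AI remains a net win for up to ~{break_even_attempts:.0f} attempts")
```

In other words, at those made-up rates you would have to misread the client's demands hundreds of times before the faster iteration stopped paying for itself.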
 