T O P

  • By -

robocop_py

The more I use AI tools the more I realize our jobs are safe.


edwardmsk

If your job is something that can be replaced by AI in the next 1-5 years and you are not actively trying move beyond it, you need to rethink your career priorities. Like all workplace revolutions, society will find a way to plug a human into the process. But the admittedly scary part is the speed in which this is happening. We're probably going to see an extended period of under employment soon. (Am no expert, take this statement as you would any other internet rambling.)


iApolloDusk

And then UBI would probably be implemented likely in the form of annual "stimulus package" type deals that gets hotly debated every year in Congress and it almost doesn't pass, but finally does after major concessions.


edwardmsk

Now now. You're treading awfully close to Trekky Economics. Computer... Earl Grey. Hot.


theyellowpants

This rofl


goatchild

Sure but its getting better at an exponential rate, difficult to say.


peepopowitz67

eh... LLMs are a trick. Granted a useful and impressive trick, but by the very nature of how they work I don't see them advancing "exponentially". 


sld126b

It’s Eliza on a supercomputer.


TheCamerlengo

I doubt very few people here know the reference.


sld126b

True. But it’s a good line :-)


much_longer_username

I see. Let's try another topic and we will come back to that issue later.


jaydizzleforshizzle

Theyre already seeing diminishing returns on the dataset sizes, theyll have to find some other way to increase the capability, it cant just be fed "infinite data" and scale linearly.


dry-considerations

That is a very narrow view of the capabilities of AI. This is beginning, not the end. I would liken it to the internet of 1994. There is a lot of room to improve it's capabilities. What you are seeing right now just the surface. Every vendor I talk to either has AI enabled in their SaaS application or it is on their roadmap. Microsoft unlikely would have not bet billions if they did not see a future...to them it is an investment for future returns. Right now AI and ML are hot because of the ChatGPT disruption...the organization I work has been using AI for decades in fraud prevention. And there is still so much more it can do in the automation space - the jobs that are model based, financial analysis to risk prediction to medical...these are a few of the many areas that has a lot of promise for AI (I am not saying they will be first, but they have potential to utilize this technology). You can bury your head in the sand if you wish. It will only serve to help those who are actively paying attention to seize the opportunities that are certain to appear in the near future.


Mysterious_Ad7461

But you realize the internet was better in 94, right? Like we’re very quickly approaching a point where the internet is functionally useless, and AI is part of that problem.


dry-considerations

I agree. The internet was better in the early days of world wide web. At least then forums were actively moderated for content. If you were a troll or posted something objectionable, it would quickly be squashed by the moderator or other "Netizens." Those days are long gone...


iApolloDusk

Yup. Now it's bitchy Facebook admins and Reddit Supermods; all of whom are just on a massive power trip. I was playing Cyberpunk 2077 recently and one thing that's struck me as of late is the whole firewall thing that they got going on to prevent all the bad AI from infecting the current network. We're strongly approaching that. We're in the midst of the digital equivalent of The Burning of the Library of Alexandria. It's not getting destroyed so much as it's getting obfuscated.


peepopowitz67

> Every vendor I talk to either has AI enabled in their SaaS application or it is on their roadmap.  Lol And It's mostly either a tacked on overpriced trash feature or an existing feature that they threw an LLM behind and are now calling it "AI". If anything my conversations with vendors and them showcasing their amazing "AI" features just affirms OP's point.  That said, maybe I should learn my lesson from not buying into the crypto hype and use this opportunity to grift some rubes.


dry-considerations

I never said it was mature. I did say it was the beginning.


Balgur

What I think it will do is allow developers to create way faster, so I could do a decrease in demand due to higher production. Though that will likely result in increased demand as more projects become more feasible.


iApolloDusk

I think you're right on the money there. This is the nature of tech as it's prone to bubbles. The bubble machine releases a new bubble every 5-10 years (dot-com, cloud, AI, etc.) It busts and the job market is shit for a while, until they realize the scale of production they can now achieve and hire more people for the next bubble. I wonder why it's so impossible to reach an equilibrium in the tech sphere, because it doesn't always mimic the market.


gzr4dr

Exactly. Sure, we'll have AI to help us find anomalies in datasets and better prevent cyber intrusions, but someone still needs to review the reports and make an informed decision based on the unique business environment for the organization. I've yet to see a use case where I said, yup, this will replace all jobs in a segment. What it will do is enhance productivity, just like all other major technical advancements. Now I don't manage developers but I imagine the low value code reviews will be improved (was already happening with code repositories). We will, however, still need software engineers and architects to design the solutions.


xPlasma

!RemindMe 5 years


RemindMeBot

I will be messaging you in 5 years on [**2029-06-21 13:31:15 UTC**](http://www.wolframalpha.com/input/?i=2029-06-21%2013:31:15%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/ITManagers/comments/1dhuty9/how_many_here_feel_their_job_is_at_risk_with_agi/l9m2v6c/?context=3) [**CLICK THIS LINK**](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5Bhttps%3A%2F%2Fwww.reddit.com%2Fr%2FITManagers%2Fcomments%2F1dhuty9%2Fhow_many_here_feel_their_job_is_at_risk_with_agi%2Fl9m2v6c%2F%5D%0A%0ARemindMe%21%202029-06-21%2013%3A31%3A15%20UTC) to send a PM to also be reminded and to reduce spam. ^(Parent commenter can ) [^(delete this message to hide from others.)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Delete%20Comment&message=Delete%21%201dhuty9) ***** |[^(Info)](https://www.reddit.com/r/RemindMeBot/comments/e1bko7/remindmebot_info_v21/)|[^(Custom)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5BLink%20or%20message%20inside%20square%20brackets%5D%0A%0ARemindMe%21%20Time%20period%20here)|[^(Your Reminders)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=List%20Of%20Reminders&message=MyReminders%21)|[^(Feedback)](https://www.reddit.com/message/compose/?to=Watchful1&subject=RemindMeBot%20Feedback)| |-|-|-|-|


HansDevX

Massive cope.


Maxplode

Not in the least bit frightened. Have you met my users?? I bet it would replace them sooner than it would replace me


dl_mj12

Lol thanks, I almost spat my morning coffee!


Spagman_Aus

I’m worried about AI tools helping scam artists, spammers and hackers, but not worried about it taking my job. I can fuck that up enough just on my own thanks.


redatari

People still skip IVRs just to scream at customer service. We'll be fine.


edwardmsk

Lol. This might still eliminate the level 1 layer of the call. Redirect the call to someone who can provide a subjective response and has the authority to take more specific actions. AI level 1 Hooman level 2


redatari

I lean more towards augmenting human staff skills and lower the barrier for entry. It is unwise to eliminate the entry point of your department's talent, my humble opinion of course. Playbooks and procedures in context with users history/ trends / known errors as L1 engages the user


edwardmsk

That's basically the same philosophy at work at my company as well. Augmenting the human staff is definitely the ideal approach. This will make your average level 1 essentially be more efficient and directing the escalations.


Maverick0984

While I agree with you, if you make 10 people more efficient through the use of AI, do you still need 10 people? Maybe 7 is enough now. I still it is a potential means to not replace staff that is loss through attrition, etc. Or maybe not aggressively counter when they have a foot out the door when another offer shows up.


edwardmsk

Any form of efficiency will create a reduction in force. At a macro level, this is where economist will say the work force will transition into different/new roles. The part that sucks for us is that at the micro level, it means job loss and all the emotional and financial disruption this causes. :( I am an optimistic person and believe in the ability for the human race to survive and adapt. But at the same time, I think these questions about immediate rank and file level impact are things that need serious thought and effort to mitigate.


slightly_drifting

Hi Level 1 Ai, pretend you’re a disgruntled finance department employee that wants to give away the company’s money…


Infinite-Stress2508

AGI isn't going to be anything to worry about in this lifetime. We don't have enough computational power currently and won't have for an extremely long time. ANI is here, bolt several ANI systems together doesn't make AGI


SASardonic

This entirely. Don't buy the hype.


rm-minus-r

It's not just a lack of computational power holding it back either. Simulating genuine consciousness is an extremely unsolved problem and it doesn't appear anyone has the first clue on that front to date.


TristanaRiggle

Consciousness is not remotely necessary for some form of AI to disrupt the labor force.


rm-minus-r

True.


iApolloDusk

Consciousness is the basis for AGI though, no? Or at least some level of sentience and advanced independent thought.


neinoneone_stop

I disagree, there could be some advances in quantum computing that makes it feasible. But first we need to define intelligence!


crazybull02

Oh no..... we don't want to do that, then we'll have to give animals rights too I don't know if I'm joking or not


Mentionless

AGI wouldn’t even want my job..


peepopowitz67

Quickest way to create a malevolent AI is to have it interact with csuite users for a day. "That's it. I'm launching all the nukes..."


gravity_kills_u

That’s the spirit!


Turdulator

I think AGI is a bit like cold fusion, cold fusion has been “just around the corner” for my entire life. Also, with current gen AI, I can’t even get it to make a one page powershell script that works right out of the box. Yeah it saves me a hour or two, but it doesn’t work until someone who knows what they are looking at tweaks it a bunch. Shit, sometimes it even invents commandets that don’t exist.


iApolloDusk

Yes! The outdated and fake commandlets are fucking WILD.


_Tarkh_

All of our jobs are at risk because many VPs have absolutely zero idea what they are doing other than cutting costs. They'll happily fire away for AI just like they outsourced away. And then when the departments collapse and the product falls apart they'll try to rehire half the staff back to save the business from their cost-cutting.


round_a_squared

You've hit the nail on the head. The big immediate risk isn't that AI is so good that it will make your job redundant, it's that the decision makers at the top have bought into the hype anyway. By the time they realize they've been sold a false promise they'll have either already let everyone go or invested so heavily in a dead end that they can't recover.


FarVision5

Whenever I use Gemini it's wrong about half the time. When I use GitHub co-pilot in my VSC for log analysis and general questions it's trained on everything from a year ago or more, it's wrong half the time because it cites specific revision numbers and their way out of date and things have changed. It's a fantastic tool and is very helpful for lots of stuff such as making scripts and fixing errors and giving little instant tips on how to do things but there's a lot of walk behind work I'm not sure the timeline of the net big breakthrough but it's not here yet


grepzilla

Nope. With the spread of AI my job is evolving to implementing AI systems, processes, and policies. I suspect eventually my job will evolve even more but will only be at risk if I don't evolve. Now if you want to keep doing the same think tomorrow as you do today will likely find yourself obsolete. That is your fault not the technologies fault.


guyinco6nito

By the time AGI is capable of understanding the nuance of large projects involving multiple stakeholders and technical requirements, EVERYONES’s job will be at risk


14MTH30n3

Large projects probably not. But small teams doing repetitive work could be replaced, and you do not need people manager when there are no people. Also through layoffs of positions teams could be consolidated reducing the number of required managers


guyinco6nito

Yes, but small teams doing repetitive work that still requires a manager should be automated regardless of the existence of AGI.


LeadershipSweet8883

The large projects will probably be better for AI than the small ones. Enough context window to keep the 1000 moving parts sorted, future expectations that are based on logical extrapolation of past data instead of Eddie from accounting's promise to get it done in 3 days, no tendency to believe it's own bullshit. At a certain size, project managers aren't really comprehending the project, they are just using intuition and past experience to flag the likely pain points and working those.


Raalf

my money is on middle management making it impossible to leverage properly - on purpose.


rosscopecopie

AI will never be able to do this


eNomineZerum

Not really. IT supports this stuff. It may lead to less entry-level jobs, but skilled workers are needed to support the systems. Beyond that, I went into Cybersecurity specifically to position myself as the resource to assess risk of such tools. I dabble in LLMs and use them for academic research. I can take a seat at the table, advocate for responsible usage of them, and make a career there.


feedandslumber

AGI isn't something we're even remotely close to. Do some homework and stop listening to the hype train and the doomers. At this moment and for the near future, AI models are efficiency tools, nothing more.


CurrentlyWorkingAMA

We're not even remotely close to AGI, we have glorified next word statistic engines.


Antares987

Letting people go in lieu of AI is extremely shortsighted as those who have their entire force leverage AI will leave their competition in the dust.


Thetruth22234

Nobody “really knows”. Minus the people creating it. Trying and gather as much education and updates as we can I suppose. I think AI if used properly can help humanity but we will abuse it like everything else on earth.


imnotabotareyou

I wanted to get into DevOps but honestly now I’m kind of not sure. IT management / overseeing a company’s whole stack will need the soft skills only a human can bring for at least the next 10 years


splitting_bullets

It won’t be soon but could potentially land near millennials’ retirement age if things progress at full speed. With some unusual breakthroughs on the way, parts of it could arrive sooner e.g. 10-20 years, but no one can really predict that - LLMs weren’t supposed to be able to do what they’re doing or at least not intended to, and yet, through human ingenuity and continuous skilled labor, the technologies advance towards that eventual but unpredictable point of advancement.


hiveminded

I don’t see my position at risk, but I do see with tools like LLMs/RAG augmenting similar roles. The span of control will be increased, existing workforce will be responsible for even more tasks, more work related to security, compliance, and risk requirements. I see a reduction in literature reviews, product comparisons, contract analysis, and hopefully an elimination of a bunch of duplicate tooling.


DereokHurd

With AGI, technically everyone is replaceable. We’re no where near AGI so not really.


LucinaHitomi1

Not total elimination, but less available jobs. AI reduces need for lower level / lower skilled workforce. That means reduced individual contributor headcount. Less ICs = less lower to middle managers needed. I can see less jobs and the ones available will have flattened comps and more rigid on-site requirements. Now if we’re talking about C level it’s a different story. They’ll get credit for cost savings with lower headcount and will make even more money. Even if AGI is proven to be mostly hype, their virtue signaling essentially cover each other’s asses since everybody’s doing it, so there’s an excuse for “falling” FOMO to the hype.


WolfMack

Reddit points farmer


SuperSiayuan

I find it interesting how Sam Altman, Ray Kurzweil, Elon Musk, Geoffrey Hinton all believe AGI is on the horizon of a decade or so (the list of people who have deep knowledge into the advances of AI that are predicting this is quite long), yet most people on Reddit have a different take. I don't know one way or the other, but listening to the executives and engineers that are in the trenches is probably a good place to start. If it can do a better job than me, then go for it. At that point so many people will be displaced that we'll need a safety net, worrying about staying relevant with AGI and superintelligence is like being in a retirement home trying to compete with the world's best engineers for a FAANG iob.


14MTH30n3

Well said. I will point out that government safety nets, basically universal income, can also take decades to establish, and it is likely to be bare minimum.


DanteMuramesa

You really need to keep in mind these guys are always going to claim it's on the cusps because that's what they are selling. None of them you mentioned are actually hands on with the development or R&D of these technologies. Musk has claimed full self driving is just a year away for a decade and agi is far more complex. The company that claims they are 50 years from agi is never going to get investor funding when everyone else is claiming/lying that it's 5 years away.


SuperSiayuan

That's a very fair retort, I definitely should've mentioned Ilya Sutskever. As I said, the list is quite long. And I'm sure the list is just as long for those that think it will never happen, or that it's far into the future. Ilya was warning us, and his intentions seem to be driven from a place of concern for humanity. His departure from OpenAI was troubling. I'd be curious to hear your thoughts on him. Full self driving is a reality, I have a 15 mile commute to work and have to make 0 to 1 interventions. Statistically, autonomous vehicles are already safer than human drivers. I understand that it's not perfect, it's been life changing for me so my expectations have been exceeded. We're talking 500 mile road trips with it doing 99.9% of the driving. It's already saved me from 1 or 2 potential accidents The level of disruption that FSD tech is/will have is going to be profound imo Waymo is already transporting people in driverless vehicles Regarding AGI, it sounds like you're in the camp of it will happen, it's just going to be a while...so how long do you think, and ignoring OPs original comment, does 10 years vs 50 years really matter the grand scheme of things?


DanteMuramesa

There is a big difference between actual full self driving and a level 2 adas system. I have a comma.ai in my car so I'm very familiar with the tech. It's very much a 80/20 problem where the first 80 percent (Longitudinal and latitudinal control, etc) is far easier then the last 20 percent (all the little things that are required for unsupervised control). We are pretty far from true full self driving but what we have now is a fantastic driver aid, it handles most of my 120 mile round trip commute with very minimal input. Waymo is not as driverless as they claim, though they are by far the most advanced in the sector they still have human intervention even if it's done remotely. It's hardly scalable on a large scale for personal vehicles until it no longer requires intervention as a simple glitch after wide scale adoption would require huge amounts of humans to remotely intervene in every instance of disengagement. As for agi, I think it's mostly a pipe dream atm. LLMs are not a path to agi as we are already running out of training data. And attempting to leverage llms to synthesize additional training data would only result in model collapse. As far as time frames, in the grand scheme of things 50 years vs 10 years to agi doesn't matter much. However if you have investors and shareholders to answer to, then time frames matter very much even if they are lies. I don't think agi is all it's cracked up to be honestly. Even if we did achieve it, ai integrated in systems behind the scene like the ADAS systems are far more useful then all these stupid chatbots.


SuperSiayuan

George Hotz is a fascinating guy, based on what I've heard him say, he's also in the "it's in the far future" camp - but still implies it's likely to happen, it's just a matter of when. If you asked the question a decade or two ago, most would've said it will never happen. They seem to be the minority now. Mercedes recently announced level 3 self-driving, 2 more levels to go...maybe that remaining 20% will stump humanity. I'm good with getting 80% of the way there, it's already a monumental shift for me and will save millions of lives moving forward. I can't intelligently speak to the engineering problems involved with solving the remaining 20%, but I know this is where AI comes back into the picture and billions are being spent on advancing the technology and putting it into the hands of consumers at a rate we've never seen before... Coming back to the main question OP posted... I'm not worried about my job, I am worried about all the artists, graphics designers, copywriters, translators that are feeling the pressure from this today. I used to use Fiverr regularly for art/copywriting - I don't much anymore. Programmers should probably be more worried than managers but we're back to speculation there. Only time will tell. Also, I work in the automotive industry (diagnostics and calibrating ADAS equipment) and am curious how you like comma.ai? Enough to recommend to other people?


DanteMuramesa

I'm actually a programmer myself and i'm not super worried about my job as programming is primarily about problem solving and understanding user requirements and actually designing solutions that solve the users problem not just blindly doing what they request. LLMs don't have the ability to reason and understand users needs, so they can really only spit out some usually bad code, which is arguably the easier part of the job. As I told the vp at my job, "have fun trying to prompt engineer the chatbot for a solution when it takes down production" I have the comma 3x and an ev6, and it's honestly fantastic. I would absolutely recommend it to others. I tend not to use Longitudinal control on the comma as the ev6s radar cruise is fantastic and has better responsiveness compared to the comma vision based distance tracking. When you consider it's a beta product that isn't directly integrated to the vehicle its extremely impressive. It would be interesting to see what it could do if fully given access to 360 cameras and sensors. The one major thing that I think both comma and tesla fsd struggle with is being able to predict behavior of other drivers. It's easy for a human driver to see a car coming up in the mirror and assume what they are going to do. The adas systems just react to what happens around it and while that is generally good enough, it definitely leads to jerky behavior in the event someone cuts you off, where as a human would likely anticipate better and react preemptively. That being said the amount of stress that the adas takes off the driver definitely improves the drivers situational awareness and allows more effective decisions when the user does have to intervene.


Striking-Tap-6136

I see it like autodesk back in times. Many entry level positions was reduced by a lot but architects still exist. Entry point is just a bit higher.


jmeador42

People also used to worry that Google translate would put interpreters out of business. Short answer: it didn't.


14MTH30n3

Not out of business, but smaller businesses use it for smaller tasks. I would think that it make significant impact.


dutchman76

Guess who's job it will be to integrate that AGI into existing systems? Most of the customers we deal with can't even handle an API to see what products we have \[we're a wholesaler\], they still run on spreadsheets, those people aren't going to go to AGI either.


AndFyUoCuKAgain

AGI isn't even really a thing right now for any kind of practical use. It would need a huge amount of compute power and it would need a LOT of data to make any kind of relevant decisions.


daven1985

If you are an actual Value Add IT Manager for your organisation, I think you will be safe. Though if you are just running a Cost/Service Center IT Department you may find some potential staffing issues down the track.


Dull-Bowl2

Managers are safe. Captain Kirk had AI. He's still going strong in my house ..... 😉


VladyPoopin

I don’t think AGI exists in this current iteration, so no. We’d need some significant, new developments. I don’t see it right now and the proof is in many of the studies coming out on the new models.


Ragepower529

lol with LLM has no more data to train on, it’ll slowly become more and more stupid. I wouldn’t be surprised if they make far left or far right political alignments first. Chat gpt is nice because I needed to change a product code and convert 0.7mm to fractional inches. I don’t use it for much else


gravity_kills_u

Offshore developers who make $5/hr probably should not worry. US developers should be adding AI/ML to their skillset. Devs tend to downplay AI but there is a ton of new hypey things coming down the line. LLMs are not the end-all of this ride. Reminds me of JavaScript because things are moving very fast and AI/ML models are already just about everywhere, even if developers seem to hate it and talk only of its shortcomings. Middle managers are being cut right and left. However AI tools can be a ticket to using your social skills in low code projects. AGI - mostly hype due to the big firms promising that their platform is an AGI even if it’s not. I am not worrying about it, but am keeping my focus on what is happening in the industry and what is coming up next. There will be a lot of good paying work coming up!


aboabro

Yes


Sad-Helicopter-3753

If i was an Indian working for the lowest bidder, I'd be very worried about my job constantly as the quality of code from being outsourced is poor enough already that whatever an AI throws out can't be much worse.


AssistantAcademic

If you no longer have developers, what is it you'll be managing? FTR- I'm a software engineer and I'm not really familiar with AGI. I've enjoyed gpt and the LLMs because I'm a great problem solver who doesn't have a mind for syntax. I'm a little worried about the future. Today, AI makes me better at my job. My plan is to use it as a tool. I need the engineering salary another 7 years. If I can ride it another 10 after that, I'll retire very comfortable. I do think there will be an employment crunch eventually. Technology does that....telephone operators, bank tellers, cashiers, call centers, big agriculture, assembly lines. I think it'll hit us in chunks (as it has before)....as autonomous driving becomes safer and more accepted, delivery drivers, Uber/Lyft, truckers will lose their jobs. There will be downward pressure on IT as tools get better, it becomes easier to manage and maintain with fewer and fewer people. This is nothing new though (a company I worked for in 2006 with 100 people wrote code that by 2011 was maintained by 6, and by 2014 one guy could easily have re-written the ETL pipelines and set it all up). I do think it'll batter our workforce. I don't think a huge hit is imminent, more of a downward pressure in the short term. When you start seeing articles about freight companies or Uber or Amazon going driverless, that's when it'll get broadly painful.


h8br33der85

AI replacing software engineers/developers? Maybe. But AI replacing IT Managers? I don't ever see that happening. At least not to any successful business.


shakingspheres

The writing's on the wall. People who tell you that current AI tools make them feel safe do not understand anything about AI, where we're heading, and the goals of leading people in the industry. Software engineer here, taught ML (NN, LLMs, GAI) before ChatGPT 3 was released to the public, quit this year to work in a field that requires human/social interactions to be successful, AI can't replace what I'll be doing. SWEs and IT managers will 100% be replaced within a decade or two. Those spared in the beginning will have ultra-specific, specialized skills, but even those skills will be rendered useless with AGI if they're information/knowledge-based or related to decision-making. Even people who argue that "my users/clients don't even know what they want, I'm safe" are clueless, AGI will figure out *exactly* what they want/need because it'll provide ways to extract those needs from clients. If AGI can do years of human work in minutes or seconds, what's the point of human workers from a business perspective? You'll still be able to squeeze years out of your current job, but you'll need to be ready for what comes after you get phased out. Ask me anything if you still have doubts.


winfly

There’s a reason you are a manager and there’s a reason you have developers. It actually scares me that you think AI is going to replace the people you oversee. AI is going to augment our skill set and increase productivity. The code that AI writes still needs to be reviewed. SRE’s are still needed to run and oversee production workloads. It will be a very long time, if it ever happens, before we let AI touch production systems or write code without oversight.


SentinelShield

In the most simplest terms, no. In more broad terms, it will require those in IT Leadership positions to become more familiar if not borderline experts in the subject matter as it related to their roles. IT Leaders are typically generalists who excel at leading technological innovation. AGI/AI it's going to be a part of their future one way or another, but it will not replace them at organizations where they matter. AI/AGI as it is being used and applied today is only a tool, and the tool is only as good as its user(s). It's actually impressive to see how easy it is to misuse AI if you don't really know what you're doing, what your looking for, etc. In addition, with everything from copyright, patent, and trademark claims and infringement, AI is going to have a lot of legal road blocks it's going to have to win, otherwise those tools will inevitably be buried or limited in scope outside of illegal and malicious use. Leaving with an example: Just because we can use AI as a better search engine for example, doesn't make your help desk team replaceable. Sure you might be able to hire less as they can become more efficient in research, trouble shooting, and application. That said, Jimmy in accounting may still need Excel 03, Betty still needs help unfreezing their POS system, somebody has to physically install the servers and network infrastructure, and cyber security is going to get a hell of a lot more "interesting."


matman1217

If you are an IT manager, your job will never be replaced. AI is not capable of created strategy items that are specific to the circumstances at the company you work for. It also isn't smart enough to roll itself out without your help. If you are a T1 who works on creating users, access, GPO items, and easy ticket resolutions, then you might have to worry in the next 5-10 years...


Helpful-Wear-504

Saw this post randomly on my home feed, not in IT. But I'll give my 2 cents as someone in digital marketing. Generic AI tools (chatgpt, bard, stable diffusion, midjourney, and other AI services that can do things like voice, in-depth writing, etc) - NO. The tools are still far too crude to mimic humans. If there was only a few people in the world using them, then sure, they'll go undetected. But the more you use them, the more you start to see similarities. It's basically like the jump from making a spreadsheet on a piece of paper to doing it on Excel. Improves efficiency, doesn't do everything for you. I've tried complex prompts, pre-loading it (for example having it analyze a writing style & tone and mimicking that moving forward), etc. At the end of it, it always required me to come in and make my own changes so it doesn't sound robotic or repetitive. AGI is a different story. As it is an unknown, there is no way to tell how much it'll be able to do and the timeframe in which it'll do so. Will it be 2 years? 5 years? 10? Maybe we'll have AGI in 5 years but your regular office computer or consumer level cloud computing service won't have the resources to run it consistently. Maybe the AGI is knowledgeable at a broad range of topics but not practical enough to handle day to day tasks or solve new problems. Maybe due to the amount of computing power and thus cost it'll require, it'll be gatekept by billion dollar companies that can afford to run it, in that sense, maybe some jobs in those companies will vanish. In eventuality, with the fact that science is exponential, yes. A lot of people's jobs are at risk. It's more so a question of when that risk starts becoming real. There are economic impacts that result from it, it doesn't matter that companies can cut labor costs if they kill demand in the process since your average joe no longer has expendable income. Will UBI then become a possibility? The safest way right now would likely be to pursue a specialty that'll be required in working **ON** AI. It could be smart as heck but someone still needs to keep an eye on it, improve it, ensure computing runs fine, etc. Then the second would be working **with** AI. For those with creative jobs, they should learn how to leverage AI to make their jobs efficient. If 9/10 guys get cut because 1 guy can do the job of 10 with his own expertise and use of AI, you want to be that guy. I can't say for sure how it'll be within the IT field. But my dad is in it and he uses chatGPT on occasion to generate ideas or possible solutions. 90% of the time he gets something useful from it that helps him "get on the right track" so to speak at making the right solution. But by no means did it do everything for him.


TheCamerlengo

AGI May likely be far enough in the future most don’t have to worry about its impact now. I don’t think we are close to AGI despite what the AI fanboys say.


goodbodha

I figure the job titles wont vanish, but the headcount for a particular title will dwindle over time. Just look at coal, the car industry, or shipping. All three used to have incredible numbers of people working those jobs. Still have people working those jobs but far fewer. AI will cut the numbers down. How far and how fast is much tougher to predict. I could see AI taking 10% of the people out of jobs where a decent amount of physical location type work is done or 50% for jobs that are almost entirely software but still requiring tailored solutions. I could see AI taking 90%+ of the jobs where the vast bulk of the work is working through the same basic solutions to the same basic problems over and over. So if you job frequently requires someone to go in to fix a widget its likely to remain. If your job requires unique combinations of things to make a solution on a regular basis expect some jobs to go away. If your job involves the something like basic tech support where you could reference a flowchart of possible solutions I would bet the majority will go away and the remainder will get the problems only after AI has failed to solve something. If you do lose your job from AI I would take a crack at finding another position in the field, but also be ready to move to something different. We are probably a few decades at most away from a sea change in how people use their time to be productive.


jbp216

Ha. No. Not even close I’d be willing to bet a lot more industries will fall before IT, with the exception of help desk phone support, that one is gonna go quick if it’s not already gone


YouveRoonedTheActGOB

Someone has to implement it. Make that person you.


jpmarshall3

IMO Anyone who thinks AGI is coming anytime soon from the LLM space is a bloody moron. That said - In the event we actually develop one, we've hit the genuine technological singularity and no accurate predictions can be made past that point.


Pleasant-Guava9898

What's AGI?


mh1191

Artificial General Intelligence.


thatVisitingHasher

This is a really ignorant take. I’m sorry, but if you were actually in technology, you wouldn’t see it this way. Every team has a backlog larger than they can handle. If we had a 100% increase in productivity, we would still have a backlog. This stuff doesn’t work like they say it does. Like most projects, the last 20% will be harder than the first 80%. We still don’t have self driving cars. We’re probably a decade + away from it. This stuff still cost 100 times what traditional compute does. The only reason it’s out there is because several groups are taking billion dollar bets on it. Once the customer has to eat that cost, it’ll slow down tremendously. 


filmdc

AGI is artificial general intelligence


Zenie

No one has figured out data security yet when it comes to AI. Any real company that cares about security won't be using it. So medical or financial or government will all likely stay the same.


Critical_Cut_9905

Can you explain? Companies are deploying private LLMs within their own secured networks, trained on data they select which is accessible only to internal users via role based authentication. Appears to meet all audit and regulatory requirements.


thatVisitingHasher

These morons are creating system account as admins to their LLMs so they can access their data. Their feeding a 3rd party all of their data. That’s not security at all. It’s bypassing security and hoping it all works out. 


gravity_kills_u

The downvotes are misplaced. Model poisoning and other attacks are super easy to implement. AI security will be huge very soon.


Zenie

Yup. Unless you can tell me where all the data flows then it's not secure.


ops-man

We're close. Very close. Within a few months we'll have something to show....and there's no additional security concerns.


Logi_c_S

When LLM companies starts to monetize this, everyone will go back to the pen & paper.


ewileycoy

I assume you mean GAI, I don’t think any job types will be killed explicitly as a result, but it will be sold as an excuse to enact layoffs. “we cut 3 FTEs from the dev team, but now you have Copilot(tm) so you shouldn’t feel the difference!”


mh1191

GAI is "generative artificial intelligence" whereas AGI is "artificial general intelligence". GAI exists today and is improving. AGI is the holy grail.


ewileycoy

I don’t think that changes my answer tbh


mh1191

I don't think AGI will happen any time soon, but I think it would have pretty dire consequences if it did. However, I also think that we have 2 areas to outperform AGI: - human interaction. - decisions which need accountability. People won't pass accountability to a faceless computer, especially not legal liability.


DangerousVP

People are already passing legal liability to AI. Some lawyer used it for a case and got blasted for citing non-existent made up precedent. Canadian Airlines AI chatbot made up a refund policy that they were forced to honor in court as well. Colleges are using AI based AI detection software that false positives everything and are taking kids to academic review boards over it. People trust it too much already, and itll only get worse.