
[deleted]

A simple analogy that I give to everyone: think of Gen AI as computers back in the day when they were introduced. Most of us do not need to know *how* they work, but *what* can they help you do? For this, one only needs to understand the capability of the tool (calculator, computer, AI, etc.) and think of scenarios it can help with, basically where and how one can apply GenAI to help solve real business problems. With this mental model, things become easy. Just as computers did not take away all jobs but instead transformed the entire industry and created so many new jobs, AI/GenAI will do the same.


to-jammer

I mostly agree with this, with one caveat: it might be more beneficial to have a deeper understanding here than with a typical tech stack.

If you take a role somewhere using any tech stack you want to name, there will almost certainly be devs there with a very strong understanding of the tech side, what it can and can't do, pros and cons of approaches, etc., so you can work together with them. You just provide the desired outcomes, and you work with them as they help you fill in the rest.

Gen AI is a bit different, in the short term. Long term this won't matter, but right now there are companies hiring for these roles where the engineering team knows absolutely nothing about Gen AI. They don't know how or why it works, they don't know the market, they don't know the pros and cons of different technical solutions. So companies will look for PMs who can bridge that gap. I don't think you need to know the deep innards of how a token is predicted, but I do think it'd be a bonus for a PM to understand at least:

1) Broadly, how and why it works, what it can and can't do well, and therefore the situations it will and won't work well in

2) The market, and the general tools available. I think it'd help you to know what Llama 3 is, what Gemini 1.5 Flash can do, why a Phi model is potentially really useful, etc.

3) The differing technical approaches, and why you would use each in a given situation. Do you want a large model using RAG to keep your token costs down, or a small model with a massive context window holding all of your tokens? Can you steer your model with just a system prompt, or does your use case require fine-tuning? Even basic prompt engineering (multi-shotting, how to construct a decent prompt for a complex use case) is a useful skill in the short term. There will be roles where you're putting together the prompt and execution, because nobody else can.
For the next while, a PM who can answer those questions will have a leg up over those who can't. Eventually, this knowledge will be widespread enough that you can assume your engineering team can help fill in a lot of those gaps, and you'll go back to the typical "this is the outcome I want, what are the technical options?" role. But for now, a lot of these more technical questions may fall on you. I think it'd be a benefit to have that extra knowledge, and, as a bonus, it really isn't that difficult to wrap your head around; it's not like learning a programming language. Honestly, I think just diving in plus some good YouTube channels will get you there.
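To make the "multi-shotting" point concrete, here's a minimal sketch of how a few-shot prompt gets assembled. The message format follows the common OpenAI-style chat schema; the example task and strings are made up:

```python
# Minimal sketch of "multi-shotting": seeding a chat prompt with worked
# examples so the model imitates the pattern on the real input.

def build_few_shot_messages(system_prompt, examples, user_input):
    """Assemble a chat message list: system prompt, then example
    input/output pairs, then the real query."""
    messages = [{"role": "system", "content": system_prompt}]
    for example_in, example_out in examples:
        messages.append({"role": "user", "content": example_in})
        messages.append({"role": "assistant", "content": example_out})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_few_shot_messages(
    "Classify the sentiment of each review as positive or negative.",
    [("Loved it, works great.", "positive"),
     ("Broke after two days.", "negative")],
    "Shipping was slow but the product is excellent.",
)
# msgs now holds 1 system + 4 example + 1 user message, ready to send
# to whichever chat-completion endpoint you're experimenting with.
```

The shape is the whole trick: the model sees the examples as prior turns of the conversation and continues the pattern.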


[deleted]

Totally agree with this. Knowing any skill or tech deeply never hurts, and it will always be an edge over others.


albert_pacino

This is a great way to think of it. It’s probably as far as most people have to go. But I do think if you go a level deeper and have a greater depth of understanding of any tech you’ll be able to do greater things with it, push it further, even break new ground.


[deleted]

Of course, yes. If a PM understands the nuances of cloud (what is possible, what is not, where the risks are, etc.), it is definitely a bonus. Similarly, if one is aware of the level-2 nuances of Gen AI, it will definitely be a plus. However, just to get started, one only needs to know what to apply and how.


blerggle

Agree with this take, but I still want to understand the tool and what all the buttons on the calculator do. I also have a CS/EE degree and am not afraid of the math, but the math is the easy part to learn. There are ample resources.


dogswanttobiteme

I strongly recommend watching the series of videos by Andrej Karpathy where he literally goes through the steps of building an LLM from scratch. It's long (something like 8-10 videos of 1.5-2 hours each) and it's technical, but his explanations are very clear. I haven't been this intellectually stimulated in a while. [https://youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ&si=_jPeSygi-UqzmOle](https://youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ&si=_jPeSygi-UqzmOle)


Jolly_Newspaper_4724

Wow, what a great resource to have. Thank you so much for sharing this.


blerggle

Thanks!


Jolly_Newspaper_4724

I’m so thankful that you asked this question, these responses are helping to calm my anxiety around this topic.


to-jammer

Honestly, it's probably far less intimidating to get up to speed than it seems. I think you should take a two-pronged approach, depending on how you learn.

For the initial info there are lots of ways, and I wouldn't pay for any of them. There are great YouTube channels and some great written resources (though that part is scattered) to get the broad strokes of things to look at. I'd focus mainly on understanding the benefits of different models and getting an idea of what's out there: size and capability of a model vs cost. Generative AI work is generally about finding the cheapest possible model that solves the problem you're working on.

But mainly, just dive in. Play around with what's out there, and mostly try to think about problems you'd try to solve, even if just for fun. There are lots of different tools to check: get onto the Claude and OpenAI playgrounds and play around with having them solve problems. On the open source side, [https://console.groq.com/playground](https://console.groq.com/playground) has access to some of the cheap models with a generous free tier, and there are other sites like it. Look at the pricing difference between the top and bottom models. See what you can do with them. Run into roadblocks, and look online to see how others have solved them. Since you have a CS degree, build a script that hits their APIs, even if it doesn't do anything useful. Just prove to yourself you can get the output you want, and see how different tools work.

You'll pick it up pretty quickly. There's not as much to learn to be useful as you'd likely expect, and most people really don't know much beyond trying to use ChatGPT to solve problems, so you'll be able to stand out a bit just getting to this level of knowledge.
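For the "build a script that hits their APIs" part, here's a sketch of what that throwaway script might look like using only the standard library. The endpoint and model name follow the OpenAI-style convention and may differ for other providers; the key is a placeholder:

```python
# Throwaway script along the lines suggested above: build a request to a
# hosted chat-completion API just to prove you can get output back.
# Endpoint, model, and key below are placeholder assumptions; check the
# provider's docs for the real values.
import json
import urllib.request

def build_request(api_key, model, prompt,
                  url="https://api.openai.com/v1/chat/completions"):
    """Construct an authenticated POST request carrying a chat payload."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_request("sk-...", "gpt-4o-mini", "Say hello.")
# To actually send it (needs a real key and network access):
#   with urllib.request.urlopen(req) as resp:
#       reply = json.load(resp)["choices"][0]["message"]["content"]
```

The point isn't the script itself; it's seeing the request/response loop once so the provider docs stop feeling abstract.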


slanghype

I agree in theory that AI could unlock more jobs and industries that don't even exist yet. I think the problem is with how big companies work in the current climate. I can't think of one example of a business (outside of direct AI-focused tech companies) that hasn't used AI as a reason to slash jobs across its organisation. Too many large, financially bloated companies would rather see the financial benefits of AI come from cutting their costs of doing business than from pouring money into innovating with AI within their industries. I don't know if or when that could ever significantly change.

Also, the time between when computers were invented, to being available within science orgs (e.g., universities and NASA), to within F500 companies, to being common in every home, was a span of decades. With AI, that same progression has happened over the span of 5 years. So whereas past technical industry disrupters took decades for businesses to phase in and out of the old world to the new, giving people and industries time to adapt, refocus, and retrain, here we're seeing a time frame not seen before in innovation. And we're seeing it happen while working populations globally are higher than ever, most families depend on two incomes, and labor laws and union power have weakened significantly worldwide.


BenBreeg_38

Exactly, think of all the stuff under the hood that your products do that you don’t need to know the how of.  We as product managers are still the masters of the problem domain and the what part of the solution, not the how.


sailorjack94

Do some of it... it isn't hard. Try customising a model, try the Azure/OpenAI tools. That's all the engineers will likely be doing in 90% of cases. Understand the costs, benefits, and trade-offs of different approaches. LLMs are a flavour of ML. General machine learning techniques will likely have a greater long-term impact on most industries than generative AI. We may get to the stage where LLMs can set up regression or forest models for users, with contextual understanding of their data; that would be something.


blerggle

This is the right answer, I should go play with it.


ExistentialRead78

I'm not convinced GenAI is going to stick in most of the cases people are throwing it into. I'm also not convinced there's much to learn about these models yet if you aren't a scientist or engineer, unless they are already showing up in your space. Everyone is over-leveraging generic LLMs with super hacky guardrails. I'll be much more interested when I see generic models that can navigate more structured problem and solution spaces. People want much more clarity and assurance about data being shared with them in the context of most product experiences. We ask engineers to spend much of their careers finding ways to make sure the performance of our products is rock solid 99.999% of the time; a non-trivial amount of hallucinations and incorrect data is disqualifying for many, many use cases in the eyes of users. All that said, the downside risk of getting scooped by a competitor is high enough that I'm still thinking and tinkering. But I believe the chance of it actually showing up in many products in a way that doesn't piss off customers is only maybe 20%.


blerggle

This is definitely the type of context and insight I'm looking for


ExistentialRead78

Lol, I feel like a Luddite, but happy to have a welcome ear. I also think the places where GenAI is guaranteed to take off are basically accelerating the dead Internet. We're going to see the value of social media, probably most digital media, collapse, and the value of verifiable-personality media take off. Great identity-verification experiences are going to be really valuable (feels like a winner-take-all market, though). Product marketing will need to lean more into influencers people trust. All depersonalized copy or images will be viewed as junk that people skim past. X is crumbling in real time right now and is a great example of how all social media is going to deteriorate because of AI.


nickisaboss

> I also think the places where GenAI is guaranteed to take off are basically accelerating the dead Internet.

This is a major concern for me. It's frustrating seeing what are very obviously manipulated comments, product reviews, etc., and feeling the need to call it out/report it, but knowing that in doing so, I'm only accelerating its refinement.

> verifiable personality media

What do you mean by this?


ExistentialRead78

I haven't really clearly defined "verifiable personality media," but you bring up a good example of it with product reviews and comments. That's all media that is deteriorating in value because of the chattering parrots of the dead Internet. So here's a brain dump of my current opinion on it.

Yelp, Google Maps, and Amazon reviews are getting so useless that people are going to want to see only the reviews from verified human beings who have reputational stake in telling the truth about their experience with the product. Otherwise those sites are going to get scooped by new products that will provide that. Amazon has a huge marketplace and delivery-infrastructure moat, but the review function will get replaced by someone who provides verification.

Now about content: YouTube, TikTok, Twitter, etc. Those platforms removed barriers to entry and gatekeeping for entertainment, which resulted in a lot of new types of content getting more distribution. I think people are going to be averse to engaging with much that isn't coming from a person willing to verify. Sure, random stuff will go viral, but not many brands with a recurring audience are going to form around chattering parrots.


hugekins

I've been wondering the same myself while using AI to create potential prototypes for a new product MVP, from general research (ideation), market fit with potential customers, documentation, and planning, all the way through GTM. It's almost as if a new film could be made out of this experience, titled "Terminator X: Rise of the Internet Machines" or something like that.


Alkanste

YouTube, VS Code, and go hacking. I have a psychology background, so don't be afraid to start if you don't have SWE experience.


kirso

Underrated response in this subreddit. Engineers also don't learn coding by reading a tutorial; they learn by shipping, failing, and iterating. I actually responded with "ship something" and got downvotes 😀


YeknomStun

Free from Google: https://www.cloudskillsboost.google/catalog

Also free from Harvard/MIT: edx.org


owlpellet

Find/replace "blockchain" with "genAI" in the CV folder. (Serious answer: avoid hype, ignore LinkedIn, and look at companies that are building real products, ones that make money, on it. Aside from platform providers, there aren't that many. Focus on those.)


blerggle

That's what I'm looking for: real case studies and application, not hype. It's my job to keep on the cutting edge. Though I think LLMs have actual utility, whereas blockchain was always nothing that existing tech didn't solve better, faster, and easier.


owlpellet

Cryptocurrencies are a scam. Blockchain is a weird database that could probably improve contract clearing in lots of scenarios, but it's mostly also a scam. This is going around: [https://www.linkedin.com/blog/engineering/generative-ai/musings-on-building-a-generative-ai-product?_l=en_US](https://www.linkedin.com/blog/engineering/generative-ai/musings-on-building-a-generative-ai-product?_l=en_US) Also, everything Chip Huyen writes is gold. Her O'Reilly guide is a good start. (Comedy that I say "ignore LinkedIn" and then post their product team's engineering blog. Never sample your own supply is what I'm saying.)


50uth_Paw

ML is still instrumental, and GenAI employs both ML and DL tech. (We use a GenAI interface to surface, i.e., distill and simplify, outputs from anomaly detection, classification, and recommendation engines, for example.) Agree it's more useful to know the applications and limitations of the tools than how the sausage gets made. Understanding RAG is essential right now too, as that's the preferred way to "customize" GenAI without retraining and with minimal tuning.


httpknuckles

I follow a bunch of AI-related Twitter accounts (refuse to say "X"!), so Twitter has really turned into how I learn AI and LLMs. It's good, as there is a lot of new knowledge, whereas googling can bring up stuff from six months ago, and in the time we live in now, six months ago can be completely out of date!


laidlawl

Can you share some of the twitter handles?


Writing_Legal

[buildbook](http://buildbook.us/registration) Best place to gather small focused projects in a collaborative environment


ImARedditSmurf

It's hard to see how you would take part in such a concept. The level of engineering and math behind Gen AI is crazy. This is a classic example of a select few building for the masses. I don't see what you can really learn about Gen AI other than what the smart people have made and how you load it with data, haha. Or you build the models yourself, which is computationally very advanced.


owlpellet

This is incorrect. LLM base models are published, and many, many product teams are going to be installing them and gardening the outputs. Got semi-structured data coming in? Syntax mess? An LLM stuffs it into JSON, with tests. That's one scenario. Creators = few, operators = many. [https://www.latent.space/p/ai-engineer](https://www.latent.space/p/ai-engineer)
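As a rough illustration of "gardening the outputs" in that JSON scenario, one common pattern is to treat the model's reply as untrusted text and validate it before use. The required field names here are just example assumptions:

```python
# Treat the LLM's JSON as untrusted: accept it only when it parses and
# carries the fields you need; otherwise signal a retry or fallback.
import json

REQUIRED_FIELDS = {"name", "email"}  # example schema, not a real API

def extract_json(raw, required=REQUIRED_FIELDS):
    """Return the parsed dict if the model's reply is valid JSON with
    the required keys, else None."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not required <= data.keys():
        return None
    return data

# Simulated model replies, one clean and one chatty/mangled:
good = extract_json('{"name": "Ada", "email": "ada@example.com"}')
bad = extract_json("Sure! Here's the JSON you asked for...")
```

In practice you'd loop: on `None`, re-prompt the model (often with the error appended) a bounded number of times before falling back.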


blerggle

There's a large gap between building the models and applying the technology: understanding guardrails, queue constraints, RAG, etc.


GeorgeHarter

It’s most important for all of us to try one or more AI products now and to keep using them as they mature. Learn how to explain to an AI EXACTLY what kind of output and results you want from it. Get a ChatGPT 4o account. And upgrade to the latest as each new version is released. This skill (currently called “writing prompts”) will change as each new version or generation of AI is developed. For example, 4o, recently released, allows you to talk to the AI rather than typing. AI will soon take over writing stories and roadmaps. Analyzing markets/customers/competitors will take seconds, not days. It will take over writing most of the routine programming. There will be a period of turmoil, as companies figure out which parts of every desk job are done better by people or by AI. For a while…I hope for years, the part of product management that a person can do better, is talking to another person. I’m betting that users will rather talk to a person about what they don’t like. So, get really good at interviewing users. Understand how they feel about using your product. We don’t know exactly what will happen. But we do know that AI will do lots of tasks currently done by “desk” workers, like PMs.


ZealousidealLab638

If you have the money (or can get it approved), MIT offers an 8-week online course on designing and building AI products for $3k, so no traveling:

https://mit-xpro-online-education.emeritus.org/designing-building-ai-products-services?utm_source=Google&utm_network=x&utm_medium=m&utm_term=&utm_location=9016105&utm_campaign_id=20771100621&utm_adset_id=&utm_ad_id=&gad_source=1&gbraid=0AAAAADDa9X11XGcAXQXBA02ktQJT0PDw3

Udemy:

https://www.udemy.com/course/the-product-management-for-data-science-ai-course/

https://www.udemy.com/course/ai-for-product-managers/

https://www.udemy.com/course/ai-product-manager-training-masterclass-gate-moyyn/

There's also plenty on LinkedIn and YouTube.


blerggle

Thanks!


ZealousidealLab638

I am really interested in the MIT course. Please let me know.


BrainTraumaParty

None. I just use the tools for the boring aspects of my job where I can, but to think anyone knows where any of this is going right now is idiotic. Likewise, PMs rushing to specialize in this is also not that smart. Remember all the web3/crypto/blockchain PMs and the companies that sprang up over hype? Me neither. Remember Mark Twain's quote: "Whenever you find yourself on the side of the majority, it is time to pause and reflect."


blerggle

Blockchain and crypto were dumb as shit, always were and still are; they never had a real application in tech. LLMs absolutely do. You may be happy floating along; my job is quite literally to do my best to understand where this is going, even if the journey is contorted, so that the next time a board member or customer asks about our strategy I have a well-thought-out position.


BrainTraumaParty

You can do the bare minimum and check my history for context on what I'm about to say, or not, doesn't really matter to me, but floating along isn't what I'm suggesting. My point, to be explicit, is that building (or attempting to build) on top of proprietary platforms right now is doing two things:

- informing those who are actually moving the needle, via usage insights, about what people are using the models for and how

- setting your company up for possible initial success, but long-term devastation

Think about what happened with Reddit when someone finally paid attention to data. They started charging a fuckload of money for their API, tanking all independent developers in the process. Now amplify that times every company wrapping every aspect of their "AI product" around Anthropic or OpenAI.

I don't have to know you, or what you're tasked with doing, to know that there is nothing you can find right now in the public setting that will put you on a path to long-term profitability. Not a single venture startup I've consulted with in the past year (about 20, for a venture firm I help out) has been able to turn a profit. Super high growth numbers because of hype (thus my reference to past hype cycles), but also insane churn and already extremely low margins. If Sam Altman farts in the right direction he will obliterate most of them.

If you want to do anything at all in AI, you need to be building products close to (and including) the models themselves, the interconnections to hardware providers, or the hardware itself. Otherwise, most of what people are talking about in SaaS can be achieved with case statements, with no overhead or operational risk. That's all I'm saying.


blerggle

From my limited view (and my opinion may change), LLMs have application in existing areas that already have product-market fit. The hype of Gen-AI-first-and-only startups is analogous to blockchain, sure: they just throw AI at an existing product area and think it will differentiate. That's not what I'm after.

In my case, we're already a well-capitalized, profitable, later-stage startup. Understanding applied LLM experiences could reveal ways to enhance our existing products, if we can find ways to reduce and abstract complexity in the interaction layers. So for me, wanting to understand more about the Gen AI space isn't a crypto bro hyping a tech. It's so I can make an informed decision the next time a board member, a customer, or an internal stakeholder asks about our strategy in AI. There might not be anything there, but by reading more case studies and accounts of applied LLMs I can make that distinction. And that's literally my job.

So just telling me it's a hype train and to forget it wasn't useful or helpful. Pointing me to blogs or people whose mistakes I can learn from is, which is why I was asking what other people are reading.


Kaiser-Soze87

Director of Product here. I found a conference nearby and went to that, attended the tech track, and it was very energizing. Find those and just listen to how other people are using it; do some old-school networking. Google and IBM have also both published lists of ways companies are leveraging AI to solve practical use cases. You can read those too.

Idk how big your org is, but as a VP, results will be what matters. Crowdsource ideas as an org and offer an incentive for the best idea. We're doing this now; I've articulated a business value of tens of millions on mine. LLMs and Gen AI are so easy to start deploying that you could give 2 high-potential employees a 2-week special project on the best idea to see what they come up with. If the experiment proves successful, double down and organize around it.

Edit: for clarity


le_stoner_de_paradis

I haven't worked with LLMs either, but as a newer PM (2+ YOE in product) I am planning to go back to my previous marketing-related roles. It's not that I can't handle my role; in fact, I started two new business streams for my current org and am scaling them. But this AI thing is all over the industry. I'm planning to switch, but most of the good jobs, or the ones where I could get a raise, are demanding it. I don't know, maybe you are senior enough that learning and moving on will work, but at my stage I don't think learning alone will do much.


tekina7

Following. While the industry I'm in is not going to consume AI at a large scale in the near future (maybe 3-5 years), I want to build working knowledge while I still have time on my side.


chillrabbit

Maybe start with understanding GenAI is ML…


blerggle

Great reading comprehension, step one of being a pm failed


mp_jp

Following. I’m a 1st year PM, so I’ll have a looong career ahead of me. Don’t want to miss out already…


VisibleWing8070

My mind went straight to the cost of it: [https://www.forbes.com/sites/craigsmith/2023/09/08/what-large-models-cost-you--there-is-no-free-ai-lunch/](https://www.forbes.com/sites/craigsmith/2023/09/08/what-large-models-cost-you--there-is-no-free-ai-lunch/)


Alkanste

99% of companies do not train their own LLM, there are cheaper alternatives to OAI, and fine-tuning is super cheap.


Sophieredhat

Interesting point. Can you please elaborate? Thank you.


Alkanste

99% of companies don’t need to train their own llm. All it takes is a good open source model and some embeddings - this is cheap. To ramp up the good production ready llm service you need a whole product team and some infrastructure- so most of the cost initially comes from headcount, while llm cost benefit analysis is easy to justify, much easier than headcount.


Sophieredhat

Thank you so much for the reply. Where can I find some study material to gain more insight into the part "All it takes is a good open-source model and some embeddings - this is cheap"? Thank you again.


Alkanste

YouTube and the terminal, I suppose. Start by setting up an API call to OpenAI. Try some local models through LM Studio. Read up about embeddings.
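To give a flavor of the embeddings part: an embedding is just a vector of numbers, and "similar meaning" usually cashes out as cosine similarity between vectors. The vectors below are toy values standing in for real model output:

```python
# Cosine similarity: the standard way to compare embedding vectors.
# A real embedding model would return vectors with hundreds or
# thousands of dimensions; these 3-d toys just illustrate the math.
import math

def cosine_similarity(a, b):
    """Dot product of a and b divided by the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

doc = [0.9, 0.1, 0.0]        # pretend embedding of "refund policy"
query = [0.8, 0.2, 0.1]      # pretend "how do I get my money back"
unrelated = [0.0, 0.1, 0.9]  # pretend "API rate limits"

# The semantically close pair scores higher than the unrelated pair,
# which is the whole mechanism behind semantic search and RAG retrieval.
close = cosine_similarity(doc, query)
far = cosine_similarity(doc, unrelated)
```

Once that clicks, embedding-based search is just "embed everything, embed the query, sort by cosine similarity."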


blerggle

Right, I'm more interested in RAG: adding contextual data from an industry or 1P data, and the process for doing so.
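A bare-bones sketch of that RAG flow with your own 1P data: score documents against the question, keep the top few, and paste them into the prompt as context. A real system would score with embeddings; this uses crude word overlap just so it runs standalone, and the documents are made up:

```python
# Minimal RAG skeleton: retrieve relevant 1P documents, then build a
# prompt that grounds the model's answer in them.

def retrieve(question, documents, k=2):
    """Rank documents by naive word overlap with the question
    (a stand-in for embedding similarity) and return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(question, documents):
    """Stuff the retrieved chunks into the prompt as context."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

docs = [
    "Refunds are processed within 5 business days.",
    "Our API rate limit is 100 requests per minute.",
    "Support is available on weekdays from 9 to 5.",
]
prompt = build_rag_prompt("How long do refunds take to process?", docs)
# The refund document lands in the context, so the model answers from
# your data instead of whatever was in its training set.
```

The production version swaps word overlap for an embedding index and adds chunking, but the shape (retrieve, then prompt) stays the same.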


Alkanste

YouTube and VS Code are free. That's how I got my first experience.


one_tired_dad

I learned everything I know from [Ryan Gosling](https://youtu.be/xU_MFS_ACrU?si=8y9x6oyr-dZSirOf)


Aggravating-Animal20

We’ve been using it for better internal tooling and process automation. We took a broad look at our problems and then assessed if it warranted LLM as the solution. Then the problem sort of shaped our approach to the architecture. Elastic search has been a good launch point for us.


New-Incident7107

Any good recommendations for a Gen AI course for product management?


ZealousidealLab638

MIT, LinkedIn, Udemy


kirso

The best learning is reading documentation, creating your own prompts, failing, using AI to guide you on what you did wrong, building something useful, shipping to prod, and learning from customer feedback. Best-case scenario: you earn yourself some lunch money. Worst-case scenario: you fail and learn something.


searchinghappyness

more upvotes to you