
Using AI to evaluate employee performance with Rippling’s COO Matt MacInnis

2024/9/25

No Priors: Artificial Intelligence | Technology | Startups

People
Matt MacInnis
Topics
Matt MacInnis introduces Rippling's new AI product, Talent Signal, which evaluates employee performance by analyzing work output (for example, code and customer interactions) and generates one of three signals ("high potential," "typical," or "pay attention") to assist managers with performance evaluation and employee conversations. Talent Signal does not rely on demographic data, avoiding potential bias. Because AI evaluation of employee performance is still at an early stage and carries some risk, Rippling has adopted an early access program and limited the product's scope of application, building trust gradually and ensuring its safety and reliability. Talent Signal's ultimate goal is to provide an objective signal, independent of a manager's subjective biases, that more accurately reflects an employee's work performance, helping companies better identify high-potential employees and employees who need support, and thereby improving overall team performance. In developing Talent Signal, Rippling has emphasized employee feedback and established policies to ensure the AI tool is not misused; managers must still form their own holistic judgment. Talent Signal initially focuses on individual contributors, such as salespeople, support agents, and engineers, and may later extend to managers. Rippling chose to build Talent Signal because it holds large amounts of employee data and AI can effectively process and analyze that data, addressing problems in traditional performance management such as managers' subjective bias. Sarah and Elad discuss Rippling's product strategy with Matt MacInnis, along with Talent Signal's advantages and potential risks, how to incorporate AI assessment into real-world decisions, and how to balance AI tooling with human judgment.


Chapters
Rippling is an all-in-one platform for HR, IT, and finance, aiming to streamline company operations. It offers around 25 unique products, adding new ones frequently, and benefits from a compound startup strategy. This approach enhances efficiency and cross-selling opportunities.
  • Rippling unifies HR, IT, and finance.
  • It employs a compound startup strategy with about 25 products.
  • The company has a strong financial outlook due to efficient cross-selling.

Transcript


Hi, listeners. Welcome back to No Priors. Today, Elad and I have a spicy one. We're here with Matt MacInnis, the COO of Rippling, the juggernaut workforce management platform that unifies HR, IT, finance, and more. They're launching a new AI product that looks at the work output of employees and generates performance management signals. Sound terrifying? Let's discuss.

It's so good to have you. Thank you for having me. So I think a lot of our audience will know Rippling or use Rippling. Yep. But for anybody who's missing it, what does the company do? Yeah, it's an all-in-one platform for HR, IT, and finance. We do all the boring stuff, but the important stuff to help you run your company. So we want to like eliminate the administrative burden of running a company. That's all the like official language, but most

People come to us and say they need payroll and we have payroll. They need a device management solution. We have one of those too. So we do all that stuff. The rumor is, you know, many hundreds of millions of dollars in revenue growing fast. Anything else you can say about scale? It's going well. We've got about 3,500 employees. We've got tens of thousands of customers using the platform. So I'd say we're doing something right. I guess also one of the things that you all have really pioneered is this

notion of reintroducing compound startups or bundled products across a suite of different things. How many different products do you offer now and what's the velocity in terms of adding new ones? We have on the order of like 25 unique SKUs that a customer can buy from us. Products come in different shapes and sizes and so like we ship small new things every quarter and then we definitely do like big things

every couple of quarters or so. We're about to ship scheduling, which again sounds unsexy, but is actually really cool. We shipped an applicant tracking system for recruiting, tacking these sorts of things onto our HCM suite. We do a lot of this partially because we have so many founders in the business. We have over 150 people who have started companies that now work at Rippling. It's like an explicit strategy to go out

and try to either give talented entrepreneurs whose business ideas didn't quite work out like, "Hey, hand raised, I've been there." A safe place to land and continue either pursuing what they were interested in or do something new at Rippling. And so that's worked out really well for us on the velocity front for shipping new products. The compound startup thing obviously has been in the zeitgeist a little bit in the valley. It's just obviously a huge tailwind for us that businesses generally want to consolidate as much of their software onto a single platform as they can. So we're going to keep pursuing this and keep recruiting awesome, talented entrepreneurs and

ship new stuff all the time. Makes sense. And I guess one way to think about your business is almost like instead of one company growing at a certain rate, you're like 25 startups all compounding from a smaller base, which I think is very exciting in terms of the potential upside. It makes like if you're... One of the things that I wished I had learned way earlier in my career as an entrepreneur was just like basic corporate finance. Like understanding an income statement and a balance sheet and how those things play together and what investors look at in that context.

And I get it now, for the record. I think I've mostly figured that stuff out. But when you look at the income statement for Rippling and you think about the 25 businesses or just like the major product suites like IT and finance and spend, they're sort of subscale businesses at some level on their own. And then in aggregate, you have this beautiful top line picture. But from an efficiency standpoint, like...

It's okay today, but there's clearly just going to be this blossoming of efficiency over time for us as these different suites all play to one another. In SaaS software, most people don't totally understand that for scaled businesses, the unit economics of your business converge

at the cross-sell motion. Like your new logo sales motion is super important, but as you sell your products into your ever-growing customer base, like your economics start to look like that more than the new logo sales motion because you always have a lot more in the cross-sell bucket. And so for us, the compound startup thing also has these beautiful financial dynamics that we have a lot of things we can sell to our existing customer base over time and that's helped us a lot.

And one of the things that you're here to talk about actually is a new product that kind of ties together a lot of the other ones in some ways. Do you want to talk a bit more about what that is and how you all are starting to move into AI? Yeah, I mean, the pendulum swinging toward consolidation has these obvious surface level benefits, right, of better sales efficiency and customers being able to save money by not paying multiple sales teams to acquire them. But that's like super...

super basic, like it's really surface level. Where the magic really comes is where there's something common underneath all of these different applications that you're building that provides you with either a scale advantage or what I like to call kind of like your vibranium advantage.

So you have some sort of superpower at the core of your platform that lets you do things that other companies just sort of look at and think like, how the hell did they do that? Like, why are they able to do that? And we can't. And for us, it's like our deep understanding of the employee graph and about employee data. So everything that we build runs on these common rails of like a deep understanding of data about the employees in your business. And so the question is like, what happens when you start to marry all of this data in a single platform? Like, what's some cool stuff you could do with it? And then you toss in the question of AI and like, what could a large company

language model accomplish with this data and this like real understanding of its structure and its history? And that was one of the big questions we started asking ourselves a few years ago and started investing in this new thing. So there's a new product that we're just releasing called Talent Signal.

It's the ability of this system to read the work product of employees and, like, marry the data that we have on whom you've hired into your company, at what job level, with all the basic data about their job history, married together with the actual work product that they produce, to yield an insight into how those employees are doing, and

you know, this is obviously going to be super powerful and useful. And it's sort of this thing that I think everyone knows is coming at some level: that AI is going to contribute in some way to evaluating human performance. And so we knew there was an opportunity here, and that's what Talent Signal is going to deliver. It actually feels like a pretty big break to be looking at what you describe as work product, because traditional HR and IT systems don't necessarily have that work product data in them. Whether you've had

a job as an individual contributor, you know, for some period of time reporting to a middle manager. And like, um, I did that for a while early in my career at Apple and the real

sort of crunch point in your relationship with the manager comes around performance review time, where you have some opinion on how you've done and your peers and others around you have an opinion on how you've done and your manager has an opinion on how you've done. And everybody gets in a room and after they've written the feedback, you know, they do this thing called calibration, where managers try to hold themselves to a common standard and they all try to hold one another accountable to a standard way of evaluating against

the rubric. But the truth is, the manager hasn't really sat there and, like, looked at everything you've done, particularly if this is over like a, you know, six-month or 12-month time horizon. They just don't have enough time to do that. And there's a bunch of really interesting articles written about this in many different sources. I recently read one in HBR where they talk about the manager vibe. So

So like if the manager has a good vibe about an employee and there's ambiguity about their performance in the review process, then that opens up this massive gaping hole for the vibe to be the basis of the performance review. And likewise, if you have a negative vibe on an employee and there's some ambiguity about their performance, like then they're going to drive that negativity through that crack like a Mack truck.

And the question is, how do you get around this tendency in these fundamentally human processes? And the answer is, we go to the source. You bring the facts to the discussion. And so Talent Signal works by reasoning from the work product only. It doesn't have access to demographic data. It doesn't know your race, ethnicity, your age, your work location. It just knows...

this is the source code you wrote or these are the customer interactions that you had as a support agent. Then it generates this thing called a signal,

that is a stamp, basically, that says this person is high potential, this person is typical, or this person is in need of attention. We call it "pay attention," but they're effectively at risk, and it directs the manager to go and spend time with them. But it surfaces all of these concrete work product examples that the manager can use to go and have, like, a good coaching conversation with the employee. Do ICs get to see it? The ICs can see it when the manager lets them. And this is actually a thing that we've debated

like quite a bit. Talent signal is not making employment decisions. It's just giving this independent signal to the manager about how the employee is doing. A calibrated signal. Yeah, well, it is calibrated and that's actually really important because one of the pieces of data that we do feed the model is someone's job level. And then we try to calibrate that actually across all the companies that the data is trained on. Does it end up showing calibration relative to both the individual company and overall pool? We don't separate it out. We give you only the localized data

version. And so you tend to see like a pseudo normalized distribution. So you see like in a population of like 50 engineers, you'll always see some people who are flagged as being high potential and you'll always see some that need attention, even if in the global model, you know, they were all really good company. Yeah, exactly. You know, because it's not particularly useful otherwise. And this is all stuff that like is part of this early access program that we're doing. Like a couple of things that I should share about this, because I think

your listeners are going to clue in now, like, wow, the stakes on this are pretty high

like getting this right is awesome, but getting it wrong sounds kind of dangerous. We are doing this as part of an early access program. And the way that the product works is that it generates one signal one time for one employee at their 90 day mark. Even people who have been at your company for like three years, we can generate a signal, but we're only going to base it on the first 90 days of their work product. And the reason that we're doing it this way is because companies that look at this can see, okay, this thing actually made like a pretty darn good assessment at day 90. And it took us like,

12 months to figure out that this person was not a fit for our company or that this person was going to be an exceptional member of the team. That builds trust in the model over time. And we think like, I don't know if you guys have ever heard the Overton window concept, right? Like this idea that people are only ready for a certain amount of change in how they think about a certain problem. And for us, it was actually really important to contemplate in the design of the product that we not stretch the Overton window too far, like,

And also, by limiting it to the first 90 days, we get to build trust with the employees, with the managers, and have them kind of understand the implications of this thing and whether it's accurate for their particular circumstances. And over time, we can expand how it's applied. These are all the different issues that we've contemplated as we've gone along. But if somebody's been around them for three years and we have a 90-day signal, is that still relevant to that person who's been around for three years? Nope.

highly not likely to be like incrementally useful information at that point. The idea there is like, hey, here's what we would have said at day 90 for this person. It's backtesting. Yeah, it's backtesting and establishing some level of credibility for the model, because we've obviously done a bunch of testing with this and thought it was quite accurate, and it certainly instilled confidence in us that, like, it's a useful signal. So a lot of the use of it is actually for new employees versus people who've been around at a company for a while.

This version of the product, V1, as we step into this baby steps, is to do the 90-day signal for new hires. And so the more people you hire, the more useful it is. So high growth companies, obviously, are going to get more value out of this initially. But the sky is the limit, obviously, as this thing evolves and we all gain more trust in the model. I want to talk about risk too, but what is the aspiration for how this changes performance management?
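The early-access backtesting setup described above (generate one signal from only the first 90 days of work product, then check it against what actually happened over the following months) can be sketched roughly as follows. The record fields and the signal-to-outcome agreement rule are invented for illustration; nothing here is Rippling's actual methodology.

```python
# Rough sketch of the backtest: a signal derived from only the first 90 days
# is compared with the employee's eventual career outcome. All names, fields,
# and the agreement rule below are hypothetical.

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    day90_signal: str   # "high potential" | "typical" | "pay attention"
    outcome: str        # "promoted" | "stayed" | "managed out"

# A signal "agrees" with history if high potential preceded a promotion,
# pay attention preceded being managed out, and typical preceded staying.
AGREES = {"high potential": "promoted",
          "typical": "stayed",
          "pay attention": "managed out"}

def backtest(employees: list[Employee]) -> float:
    """Fraction of day-90 signals that matched the eventual outcome."""
    hits = sum(1 for e in employees if AGREES[e.day90_signal] == e.outcome)
    return hits / len(employees)

history = [
    Employee("a", "high potential", "promoted"),
    Employee("b", "typical", "stayed"),
    Employee("c", "pay attention", "managed out"),
    Employee("d", "high potential", "stayed"),
]
print(backtest(history))  # 3 of 4 signals matched: 0.75
```

A high agreement rate on historical hires is what would let a company trust the day-90 signal going forward, which is the credibility-building argument Matt makes here.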

For me, the sort of motivating factor here, honestly, it's the bad manager. If you're an employee and you are working in the bowels of the organization on hard problems, your manager, a little lazy, doesn't sort of recognize the quality of your contributions, shows up at that calibration meeting with a better vibe on somebody else and they get the promotion. Talent Signal walks into that environment

and slams your work product down on the table and says like, what about this? I can give you a concrete example of an underrepresented profile at Rippling when we were building this product. She was an engineer in India who was working on one of our toughest problems and she was singled out as a high potential employee. And she was in fact pretty early in her tenure at the company.

And we paid attention to that and we talked to the manager about it. It was sort of an eyebrow raising moment where she was kind of lifted from obscurity by the model that was like, I don't know what your vibe is on this person, but like, man, they seem to be contributing at a high level. And here are concrete examples of how they've done so. So the lazy manager who doesn't like represent things the way they ought to is held accountable by their manager when they look at the total organization through this tool.

and it does a better job of representing the employee. And obviously, I can talk all day about lifting people from obscurity, but it also has the team performance impact of signaling that someone needs support. If they're not performing well, if they haven't ramped well,

giving that signal to the manager and the manager's manager knowing about it is hugely valuable to overall team performance too. So the vision here is like to have an independent, when I say independent, it's independent of the biases of the manager, it's independent of all the noise that sits in the company and just cuts at the heart of one vector on this employee, which is their work product.

and gives them a chance to shine. You just imagine this is the first time in kind of the recent history of the concept of performance management in companies where there is an orthogonal input that can really upset

with facts how people are doing this. What did you learn from dogfooding at Rippling? We started talking about this internally quite some time ago. And as the product has gotten more mature and as we've talked about it more and more with employees, the feedback from employees has been super useful to informing the policies that we set up. You know, I'll give you a couple of examples. Like, no one's allowed to make

any significant decision using the model alone. So anytime you talk about employment decisions, promotions, that kind of thing, you're not allowed to just point at Talent Signal and say, "It said X." You've got to have your own independent assessment of

the inputs. So it's like manager reasoning about work product Talent Signal pointed them to. Talent Signal is like a cheat sheet, but the manager has to do what is fundamentally a human process, which is to evaluate the whole person. The policies that we've set up internally prohibit blind following of Talent Signal and require

the manager to express judgment around what they saw in the study. And like, look, we dogfood the heck out of everything at Rippling. Parker, our CEO, he runs payroll for the company. Like, every pay run goes through him. He also approves every expense above 10 bucks. It's a lot. We could talk all day about that. Sometimes I just eat the 10 bucks, you know, like, what's the point? Of fighting Parker on the expense policy? Yeah, exactly. Amazing. It was Uber Comfort and not Uber X. But anyway, the...

AI stuff, he's obviously very close to the development of this. And the employees have been, I would say, really thoughtfully engaged in balancing being good sports as dogfooders, but also sort of making sure that their own rights are represented in the development of this technology. One of the biggest objections I can imagine, especially as you get to evaluating people whose job might be like, you know,

Classic middle manager, I make other people successful. It's about collaboration or focusing people on the right tasks that is not captured in a concrete work product. What's your response to that? I mean, so first of all, Talent Signal focuses on individual contributors in terms of developing signals. So for salespeople and support agents and individual contributor engineers, it only does a signal for them.
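Earlier, Matt described the signal distribution as localized and pseudo-normalized within a company's own population, so that a team of 50 engineers always surfaces some "high potential" and some "pay attention" signals. A toy sketch of that local bucketing, where the percentile thresholds are assumptions for illustration rather than Rippling's actual cutoffs:

```python
# Hypothetical sketch of a "localized" signal distribution: bucket employees
# within one company's population by percentile rank of a raw model score.
# The 15% thresholds are invented for illustration.

def bucket_signals(scores: dict[str, float]) -> dict[str, str]:
    """Map each employee's raw score to a signal by local percentile."""
    ranked = sorted(scores, key=scores.get)  # ascending by score
    n = len(ranked)
    signals = {}
    for rank, employee in enumerate(ranked):
        percentile = (rank + 1) / n
        if percentile <= 0.15:       # bottom of the local pool
            signals[employee] = "pay attention"
        elif percentile > 0.85:      # top of the local pool
            signals[employee] = "high potential"
        else:
            signals[employee] = "typical"
    return signals

team = {"ana": 0.91, "bo": 0.55, "cy": 0.62, "di": 0.12, "ed": 0.70,
        "fi": 0.40, "gil": 0.88, "hal": 0.51, "ivy": 0.30, "jo": 0.77}
print(bucket_signals(team))
```

Because the bucketing is relative to the local population, even a uniformly strong team gets a spread of signals, which is exactly the trade-off discussed above: the localized view is more actionable for a manager, but it is not an absolute, cross-company grade.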

We haven't gotten into the game of managers yet. That's going to be interesting for us or for someone to dig into. But there is this question of like, what's it looking at? And is it sort of like, you know, this overlord looking at everything that I'm doing? What we needed to do in the development of the product was find the highest correlate, like find the best R squared. Like what is the input you can give the model that is most predictive of the output, which is were they promoted?

Were they terminated for performance? You know, did they stay at the same level for a long period of time? Just in general, what was their career outcome in the period studied? When we did these preliminary studies, the screaming signal was work product. You know, like, if you want to know if someone's a good engineer, look at their contributions, like, look at their source code. And, you know, definitely don't just look at

how much they do, but like really reason about the quality of the code contributions. Think about security issues. Look at pull requests. Look at comments on pull requests. These foundational models do an excellent job of thinking about source code and writing source code. And so they're actually really excellent engines for assessing the quality. That was one of the coolest things I thought about seeing the demo, like when it was looking at assessment of, for example, like maintainability, extensibility, right? Because that requires like code reasoning.
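The "find the best R squared" exercise Matt describes (screening candidate inputs for the one most predictive of a career outcome) can be sketched as a toy correlation screen. The features and numbers below are invented; the point is only the shape of the analysis, in which a quality-of-work feature beats a raw-volume feature like commit count:

```python
# Toy feature screen: for each candidate input, compute R^2 against a 0/1
# career outcome (e.g., promoted within the period studied) and keep the
# strongest predictor. Features and data are invented for illustration.

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

promoted = [1, 1, 0, 0, 1, 0]  # outcome per employee

features = {
    # reasoned work-product quality score vs. raw volume of output
    "work_product_quality": [0.9, 0.8, 0.3, 0.2, 0.85, 0.4],
    "commit_count":         [120, 40, 150, 90, 60, 130],
}

r2 = {name: pearson_r(vals, promoted) ** 2 for name, vals in features.items()}
best = max(r2, key=r2.get)
print(best)
```

In this made-up data, quality of contributions tracks promotions far better than sheer volume does, mirroring the point that the model should reason about the code rather than count it.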

Yeah, it has an opinion on this and it's able to express it really eloquently. And then the manager has to go in and use their own judgment. I'll give you another example. This is an example from a customer who's been using the product. So the CTO of one of the like alpha test companies went in and saw that somebody he didn't think was a very strong engineer was flagged as high potential. And he was like,

Okay, like that does not jive with my priors. He had priors. It doesn't jive with his vibe, right? Like it's not my vibe about this employee. So he goes in and looks at the source code and he goes,

I see what's happening. He's like, I wrote all this source code. And we're like, huh? Like, tell us more. And he's like, well, this employee has been struggling. And so I've been spending time with them shoulder to shoulder, like writing code and coaching them through this stuff. And like what the model has picked up on is this really high quality contribution that only happens when I'm sitting next to this person. And it was like, aha, okay, cool. So it's sort of like...

an unknowable misattribution. How do you think more generally about managers? You mentioned that you don't currently assess them. Andy Grove used to always talk about how the output of a manager is the output of their team, and that's how you're supposed to assess them. So to some extent, you could argue you have some signal you can aggregate up. So when you look in the product, it does aggregate at the manager level to show you the sort of distribution of high potential, typical, and needs attention employees. That part, we're sort of saying to customers, like, use it informationally to sort of spot where there might be hotspots.

but don't totally judge the manager on the basis like- So the question is, is that a reflection of hiring or is that a reflection of execution? So I guess it's hard to sometimes tease those things out. But now you kind of get why we call it talent signal because it's a signal. It's like, "Uh-huh, okay, the little yellow light bulb going off over here." So I was just curious, how did you converge on this as a thing that you're gonna do for AI?

And was it a big exploration? Was it more like, hey, we actually have something here: we've aggregated data, and this AI seems to be good at interpreting certain types of data? Yeah. I'm just kind of curious how you landed here, of all the things that you could do with foundation models. We thought about a lot of the obvious AI use cases. I'm going to zoom out for a sec and maybe toot the company's horn a bit. You have a dollar.

And there's a bunch of things you can do with it. Back to this corporate finance topic. Like, there's a bunch of things you can do with that dollar. If you invest that dollar back in the front and out the back of the machine comes two, like, don't take the two. Put the two in the front. Get four out the back. Put four on the front. Get eight out the back. This is why SaaS software businesses run at such a deep cash deficit over the course of their

early years. Now, if you can't do that because you don't know what technology you're going to build next, if you don't know how to invest it in sales and marketing to go and acquire the next customer, if you don't know how to invest it in R&D to go and build the next product that's going to generate incremental revenue, then you might do something like stock buyback. And that means that the most creative idea that you could come up with with this cash that your business is generating

is to just like juice the share price. And like even worse is a dividend. 'Cause like now I can't even do that. I'm just gonna like literally just gonna give it, I don't know what to do with this money. I'm just gonna give it back to you. Like what would I do with this money? This is like such a bad signal on a company if the best thing they can think of is a dividend. Now by contrast, companies like Rippling and many companies in Silicon Valley

not only know what to do or think they know what to do with the next incremental dollar, but they want even more dollars than they have access to and so they use equity capital to go out and get a bunch more cash that they can use to pump in the front of that machine and get even more dollars out the back. You look at some of the highest performing companies in Silicon Valley and they reach profitability or that, you know, some of them do at least.

It's still centered around one idea or one product that they have done a really good job of scaling up. And one of the like super unique things about Rippling, and it's like so easy for us as a team to take this for granted, is that we have this massive list of projects that if we were to go build them, they would turn into revenue.

Like we know the next product we want to build and the one after that and the one after that. And the only challenge is like, can we hire enough engineers and not run out of money? You know, because we know that in the long run, this is all going to work. Oh, wait. I think a common objection in, like, classic, not-Rippling, do-one-thing-well Silicon Valley companies is that it's really hard to focus on that many things. It's really hard to do that many things well. It's hard to keep it cohesive. How do you teach the sales team that? How do you think about cohesion? You just work harder.

You know, like you just get the right people into the right jobs and get enough leaders into the business who can deal with sort of the fractal of complexity. And this is also the traditional enterprise sales playbook from the 90s, right? And I think it's almost like we had an era of 10 years in the 2000s where we forgot about this and everybody became single point products. And then there's you guys, there's HubSpot, there's Datadog, like a variety of people have built out these sort of bundled products and the cross-sell motion around a single sort of core product.

either system of record or type of identity or something else. So, I mean, there's the old saying from Netscape, from the Netscape days where

All of innovation is either bundling or unbundling or some variation of that. So now we're in an era of bundling again. Yeah, history repeats itself. History doesn't repeat itself, but it rhymes. And like we're definitely in the rhyming phase of like the big platform stories from the 90s. But talent signal doesn't look like bundling. It looks like something like pretty different. This is why it doesn't repeat itself, but it rhymes. Because the technology that emerges in these new situations offers new opportunity. And so for us...

We have all of these things we want to build, but the guiding principle is always what can we alone do? What can we uniquely do with this new tool? Vibranium. Vibranium. We have vibranium in this underlying platform. AI plus vibranium equals what? What does it yield? And what I would say about other companies that are doing AI products is that for the longest time, their roadmap sucked. They didn't know what their next proximal feature was going to be that was going to generate revenue.

They didn't have another SKU idea with 100% chance of generating incremental business. And they kept filling in additional features that made existing customers happy and may have given them sort of marginal cross-sell opportunities, but they didn't have the next big thing that they could tack on. AI comes storming into the scene and now all of a sudden everybody's a freaking AI company because it's offered them this opportunity to at least masquerade as a company that knows what to do with the next proximal R&D dollar. We've never had that problem.

And so guess what we didn't do? We didn't build a chatbot. We didn't build a co-pilot. We didn't build any of these surface-level obvious capabilities. We're going to build them. They'll be in there at some point. Who cares? It's not going to sell a single extra subscription of software. We said, we're going to skip that. We're going to fast forward. We're going to take these super expensive AI engineers who are really hard to recruit, easy to retain because it's such a great place to work, but hard to recruit, have them build something that has the chance to be

the opportunity cost of which is like for sure worth it. Because the opportunity cost of putting them on the chatbot thing

ain't there relative to what we could otherwise be going and building. Are there any other types of new AI products that are in the pipeline for you all? We've got a bunch of stuff we're working on in the AI world, but we're not, like, peeling people off of this project to go work on project number two. We really want to get this one right out of the gates. There is some new stuff coming from the company that's not directly AI related, but is about really scaled data, like super high scale data. We've already built this like really

really beautiful data platform underneath Rippling. It's kind of like our AWS; like, we're going to have our AWS moment at some point in the next quarter or so. But we're here to talk about Talent Signal. So it sits on this data platform. And when you want to install Talent Signal for your engineering team, what you do is you just install the GitHub app

on Rippling, and it replicates your source code repository into this secure, well-guarded environment, where it's going to do the analysis on the source code.

You know, when you plug in Salesforce, we're replicating a lot of the data out of your Salesforce instance, and that's heavy duty. I mean, the size of our Salesforce instance is massive. And so really it was about how do we marry the HRIS data with the scaled data platform where everything is really beautifully structured and in particular, all the employee data is dereferenced elegantly. In other words, we always know who's who in all of these other systems.

and then say, okay, now what business problems can we solve with that? And like, it was so obvious that this was the opportunity because we were seeing inside of these workflow products

And GitHub can't do this because GitHub doesn't know who you've promoted. They don't know who did well. They don't know who you've had to let go of for performance reasons. Salesforce doesn't know that either. And so I'm sure there's going to be really cool code quality evaluation tools built into many of these workflow systems, but ain't none of them going to know what happened from a human perspective in the way that we do. And that's why this is kind of our magic talent.

Do you think you need a particular type of culture or leadership to be an early adopter of Talent Signal? Or maybe is there a ready signal on that in terms of your alpha partners? Oh, for sure. I mean, look, there are companies that we've engaged on this who have looked at it and said, we're going to not be an early adopter on this one. And of course, we totally respect that. I have the sense that AI in the conversation about human performance is 0.1% of the way there.

There's a lot more to come on this. I'm mostly comfortable saying that it's an inevitability that LLMs are going to be involved in assessing human performance in many different contexts. Yeah, you know, it's interesting. A friend of mine who's a CEO of a public company told me that he sometimes uses some of the chat-related products to talk through employee issues, where he'll say, hey, I'm trying to work through this thing with an employee. What are some of the things that I should be doing? How should I think about it?

And so you already start to see glimmers of that future emerging. What do you think are some of the principles that people should be building against in order to make sure that they're approaching it in a thoughtful way, or, to your point, that they're not just deferring the decision to AI? I do think job number one is to understand that you can become numb to the impact that this kind of stuff can have on people's lives. If you're not in the AI world

and you hear people like me talking about the risks, or AI safety, or ethics, it sounds weird. You're like, why are they talking about ethics and risk? It answered my question about how to convert half a cup of oil to ounces. That's most normal people's experience of AI: a very benign, friendly, approachable thing.

But it doesn't take you long, when you contemplate it in this context, to realize that if a manager were to run off and make decisions purely on this, any hallucinations or misattributions could be really consequential to people's lives. And this is why Rippling is approaching this as an early access program: we're constraining it to the first 90 days. The signal is awesome; it looks like it's going to be super useful for people. And also, we're very conscious of the risk of bias that might be amplified or introduced later

through the whole thing. And so when you ask what advice I'd give people who want to start using these tools in these kinds of contexts: number one, you've got to understand the stakes. Number two, even if the system is arguably bulletproof, you still have to go to ground and do your job as a manager. You've got to go inspect the context. Don't

let the misattribution that I described earlier, around somebody who got a lot of coaching from their manager, influence your thinking about them. Just see it for what it is. And not just the Talent Signal thing, but AI more broadly: see it for what it is. You have to have some understanding of the underpinnings of these systems in order to be able to judge the quality of their output. And I think it's probably too high a bar to say that

everybody out there who's potentially a user of these kinds of tools is ready for that. So who is, in terms of CEOs or HR leaders or whoever else is choosing to do it now? It's pretty clear that the companies that have chosen to partner with us already on this are reasonably performance-oriented.

like very interested in finding new tools to compete. It's easy to go back to a sports analogy: say you're a coach, you've got a team, and you're going for Olympic gold,

and there's a beta version of something that assesses your form on the court, or whatever, depending on what sport we're talking about. You're pretty keen to give it a shot and see if it can help you juice team performance. And if you're careful and you mitigate the downside risk, it could give you a leg up in what is a very competitive environment. There are a lot of business people interested: CTOs on the engineering side of this.

The sales leaders are interested in this; I mean, sales is hyper, hyper competitive. And so if they can get a leg up, this is just part of the arms race for sales. And then support teams are so coaching-oriented already. A support team is generally so focused on rubric adherence and weekly air checks with their employees, to make sure that they're communicating the right way about the new product or using the right tone. They already have these cultures. And so

really, I suppose one of the things we've gotten right about this is that when we selected sales, engineering, and support as the areas to build the first version for, those are already organizations that have a culture of competitiveness and of looking for the next incremental advantage. And then I think it rolls up

to the company culture: some companies have said they're going to wait this round out, and some of the more hard-edged or competitive environments have said they want to play ball. I think those are also three disciplines where there's a lot of coaching. There are products like Gong where I've seen people share

calls so that people can learn from each other. For sure. Customer support, obviously, there's a lot of training. Yeah. And with code, sometimes people pair program. So it does feel like these are places where the coaching aspect of what you talked about can become really valuable. Yes, 100%. It seems like an obvious trigger for the AI pitchfork crowd to come at you. Yeah. I will say that

I'm thankful for the pitchforkers. I'm thankful for the people who are going to hold us accountable and criticize or critique the quality of the work that we're putting out with this product, because it's really easy to inhale your own exhaust and get excited about the potential without necessarily understanding the full picture. And so when someone comes at us and asks hard questions about bias, or asks hard questions about

the unintended consequences of involving AI in decisions this important, we're going to listen and we're going to learn. Feedback is a gift; it's a real thing. I know that there'll be some people who raise an eyebrow at what we're doing, and all I can say is that we're really committed to learning from them and to making this a tool that works for everybody. Great. Thanks so much for joining us today. I'm really glad you guys let me do it. Thanks, Matt.

Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.