WEBVTT
00:00:00.281 --> 00:00:01.264
Hey, what is up?
00:00:01.264 --> 00:00:04.411
Welcome to this episode of the Wantrepreneur to Entrepreneur podcast.
00:00:04.411 --> 00:00:24.551
As always, I'm your host, Brian LoFermento. I feel like, just like last week, and so frequently lately here on the show, and just in life in general, and certainly in business, we have so many interesting conversations that kind of revolve around AI, and that's why I'm so excited to welcome today's guest to the show.
00:00:24.551 --> 00:00:35.570
Because this is someone who thinks about AI, I'm pretty sure, all the time, and someone who has such interesting thoughts and, most importantly, more than thinking about it, it's someone who's doing something about it with a really cool company that's addressing future things that we're all going to have to confront when it comes to AI.
00:00:35.570 --> 00:00:38.289
So let me tell you all about today's guest and entrepreneur.
00:00:38.289 --> 00:00:39.642
His name is Sean Ayrton.
00:00:39.962 --> 00:00:49.174
Sean is the CEO and founder of Galini, which is a Y Combinator and General Catalyst-backed startup helping enterprises deploy AI responsibly.
00:00:49.174 --> 00:00:52.060
That's the key word in this conversation.
00:00:52.060 --> 00:01:05.379
His partner in crime is his former college roommate and dear friend, Raul Zabla, who has built and managed complex systems at Morgan Stanley, Bridgewater, and Ridgeline.
00:01:05.379 --> 00:01:12.421
Prior to founding Galini, Sean was a leader in McKinsey's software and telecom practice in New York, where he drove $500 million in revenue growth for Fortune 500 software and telecom companies.
00:01:12.421 --> 00:01:23.643
He has a track record of helping his clients achieve breakout revenue growth, two of which were acquired for just a casual $20 billion, with a B, $20 billion each.
00:01:23.643 --> 00:01:25.347
Outside of work, Sean is an avid sports fan and player.
00:01:25.347 --> 00:01:33.751
He grew up in Dubai playing cricket for the junior national team and captained the UPenn cricket team in college, and we're so grateful for Sean's connections to UPenn as well.
00:01:33.751 --> 00:01:38.989
Obviously, that is an educational institution that is near and dear to our show as well through our partnership.
00:01:38.989 --> 00:01:41.061
So we are all going to learn a lot from Sean.
00:01:41.061 --> 00:01:44.108
I know he's going to make us think a lot, so I'm not going to say anything else.
00:01:44.409 --> 00:01:46.953
Let's dive straight into my interview with Sean Ayrton.
00:01:46.953 --> 00:01:53.429
All right, Sean, I am so very excited that you're here with us today.
00:01:53.429 --> 00:01:55.245
First things first, welcome to the show.
00:01:55.245 --> 00:01:57.075
Thank you, Brian.
00:01:57.075 --> 00:01:57.759
Thanks for having me.
00:01:57.759 --> 00:02:04.129
Heck, yes, I'm excited to hear all the ways that your mind thinks about and works through this world of AI.
00:02:04.129 --> 00:02:06.954
But before we get there, take us beyond the bio.
00:02:06.954 --> 00:02:07.602
Who's Sean?
00:02:07.602 --> 00:02:09.147
How'd you start doing all these cool things?
00:02:10.419 --> 00:02:13.289
Yeah, of course. I'm Sean Ayrton here.
00:02:13.340 --> 00:02:23.430
I grew up in Dubai, so abroad. You know, it was an incredible sort of journey watching the city grow from a desert to the city it is today, in a very short amount of time.
00:02:23.961 --> 00:02:42.854
It sort of left me with the takeaway, the learning, that you can actually dream big and achieve things, and not everything has to go as planned, but overall it's sort of an endless path there, and so I came to the US very excited to sort of be at the heart of technology and innovation in the world.
00:02:42.854 --> 00:03:03.110
Studied at UPenn, always had a mixed interest between wanting to be a sportsperson, maybe not being good enough to do it, and really enjoying the intersection of technology and business and figuring out what could happen there and the potential, and so those have been sort of the two things that have driven me through most of my life.
00:03:03.110 --> 00:03:19.231
I spent a bunch of time at McKinsey before this working with some incredible clients and colleagues, you know, at that intersection of software and technology and, you know, always had the itch to do something entrepreneurial, finally had the opportunity to do so and have jumped in and done it.
00:03:19.840 --> 00:03:21.443
Yeah, I love that overview, Sean.
00:03:21.443 --> 00:03:30.724
What I'm so fascinated by, and I'm excited to hear you go a little bit deeper here, is the stark contrast, because it stands out to me: the difference between McKinsey and now being an entrepreneur yourself.
00:03:30.724 --> 00:03:52.365
Because I will confess this to you while we're here on the air, Sean: a lot of entrepreneurs I know, behind closed doors, kind of view the McKinseys of the world as the kings of corporate jargon, whereas obviously for us as entrepreneurs, we're making the most of all the resources, however limited they may be, for us to further the world, and we kind of go counter to the traditional ways that things have been done.
00:03:52.365 --> 00:03:57.788
Talk to me about that difference, because now I would imagine you're moving so fast in a very rapidly evolving field.
00:03:57.788 --> 00:03:59.280
What does that difference look like?
00:04:00.301 --> 00:04:01.723
It is funny you bring that up, Brian.
00:04:01.723 --> 00:04:10.873
I think one of the first things, and this is a fairly new part of my life: we kicked this off in September of last year and just sort of wrapped Y Combinator, went through
00:04:10.873 --> 00:04:12.656
the Y Combinator accelerator on the West Coast.
00:04:12.656 --> 00:04:18.425
I have to say there's a good amount of unlearning that has happened in those 10 weeks at a very accelerated pace.
00:04:18.425 --> 00:04:20.327
For exactly the reason you're referring to.
00:04:20.668 --> 00:04:27.896
I do think there are some incredible things I learned from McKinsey that have actually given me a sort of a competitive advantage in the entrepreneurial game.
00:04:27.896 --> 00:04:31.110
A few of those things are how to navigate a corporation.
00:04:31.110 --> 00:04:39.189
It sounds easy, but knowing who to sell to, what type of conversation to have, when to engage different stakeholders to sort of drive decision making.
00:04:39.189 --> 00:04:39.910
You know how to engage them.
00:04:41.432 --> 00:04:50.923
I think those were key. Having a seat at that table for seven, eight years before
00:04:50.923 --> 00:04:52.990
this has really, you know, almost given me the insider perspective of what it takes.
00:04:52.990 --> 00:04:57.925
But that said, there is a lot that I've had to unlearn as well, predominantly around the appetite for making mistakes.
00:04:57.925 --> 00:05:12.100
I think in most corporate jobs, and consulting is no exception, you know, making mistakes is not really kosher, but in entrepreneurship, if you're not making at least one blunder every day, you're not moving fast enough and not learning fast enough.
00:05:12.100 --> 00:05:15.391
So that was probably the biggest unlearning that had to happen in this journey.
00:05:16.040 --> 00:05:17.442
Yeah, so well said, Sean.
00:05:17.442 --> 00:05:29.827
I love the fact that you call that out because it is an inevitable part of any and every entrepreneurial journey, which, of course, I'm going to use that as a segue to talk about your entrepreneurial journey, because you're doing really cool things in the world of AI.
00:05:29.827 --> 00:05:34.271
We obviously talk about AI quite frequently here on this show, but talk to us about Galini.
00:05:34.271 --> 00:05:34.992
What is it?
00:05:34.992 --> 00:05:36.144
What made you start it?
00:05:36.144 --> 00:05:37.245
Where did that idea come from?
00:05:37.245 --> 00:05:38.605
And why now, Sean?
00:05:38.605 --> 00:05:47.595
Because it's changing every single week, and I would imagine that you dove, or jumped, headfirst into an industry that you had to make sense of and you continuously have to stay ahead of the curve on.
00:05:48.939 --> 00:05:58.321
Yeah. Well, firstly, I don't think there's ever been a better time to start a company, so we can come back to that at the end, but I would encourage anyone listening to this podcast: take the jump.
00:05:58.321 --> 00:06:03.629
It has never been cheaper, it's never been easier and the sort of investment appetite has never been better.
00:06:03.629 --> 00:06:08.858
Uh, and, more importantly, your ability to bring an idea to reality has never been faster.
00:06:08.858 --> 00:06:10.863
Um, so I would recommend that strongly.
00:06:10.863 --> 00:06:12.773
What drove my co-founder and I?
00:06:12.773 --> 00:06:14.880
I mean, we've known each other for 15 years at this point.
00:06:14.880 --> 00:06:37.596
We were roommates in college, uh, you know, from the time we were babies almost, uh, and we sort of missed two or three of the seminal waves of technology, either because we were too young or we were international, so we needed to play the visa game in the US, including the cloud wave, the internet wave; I guess we were a little bit too young for the mobile wave.
00:06:37.596 --> 00:06:53.351
We both felt that this is a paradigm-shifting sort of platform of technology that's going to redefine the way everyone works, everyone sort of conducts life and everyone lives in the next five to seven years, and sitting on the sidelines just didn't sit well with us.
00:06:53.351 --> 00:07:01.401
So that's sort of, you know, the core motivation. I think, practically, both of us, me from an advisory capacity and him from an actual building capacity,
00:07:01.440 --> 00:07:15.192
since he was making a lot of the systems in many of the top institutions, realized both the potential of the latest form of AI, I mean, AI has been around a while, the generative AI and the transformer architecture, but also the risk that it has.
00:07:15.192 --> 00:07:32.875
Fundamentally, it's a stochastic system, which means it gives a probabilistic response, and there are a lot of industries where, for very good reasons, there's regulation around what can and cannot happen, what can and cannot be said by systems, like, broadly, the financial sector, the healthcare sector, government services.
00:07:32.875 --> 00:07:43.394
And what we saw very viscerally is many enterprises are stuck in sort of the pilot mode and unable to get to enterprise scale because they're unable to manage this risk.
00:07:43.394 --> 00:07:49.684
So that is really the problem that we left to help solve, as we are both very pro-AI, but we want it to be responsible.
00:07:49.684 --> 00:08:00.346
We want folks to do it in a way that they can control, and control the customer's experience as well, and that's what led us to start Galini, which is essentially guardrails for AI applications.
00:08:00.809 --> 00:08:14.690
We work very closely with product leaders and engineering leaders to help them accelerate their AI deployment journey. Yeah, I love that overview, Sean, especially because there are so many considerations, and obviously we're going to go deeper into quite a few of those avenues during our conversation today.
00:08:14.690 --> 00:08:18.627
But the first place that I want to start is those guardrails, because I think it's fascinating.
00:08:18.627 --> 00:08:21.963
I'll confess here, while we're on the air together, that I love scrolling through Reddit.
00:08:21.963 --> 00:08:33.168
I really love seeing what the public is talking about, and right now it's kind of, it's almost a meme at this point: how far can we push these large language models like ChatGPT, how far can we push them?
00:08:33.168 --> 00:08:35.561
At what point are they going to say, nope, I can't go there?
00:08:35.561 --> 00:08:37.683
At what point are those guardrails going to kick in?
00:08:37.745 --> 00:08:40.668
And some of those guardrails, Sean, people don't like.
00:08:40.668 --> 00:08:41.870
Some of them, people can see.
00:08:41.870 --> 00:08:46.082
Okay, it doesn't make sense for AI to be sharing this type of information with me.
00:08:46.082 --> 00:08:49.321
Talk to me about those guardrails, because obviously there are good guardrails.
00:08:49.321 --> 00:08:50.664
There are not so great guardrails.
00:08:50.664 --> 00:08:53.552
How do you distinguish the two from each other?
00:08:54.760 --> 00:09:03.591
Yeah, it's a very astute question, Brian, and, to be honest, when you talk to five or ten different people about what a guardrail is, everyone has a different interpretation of what it means.
00:09:03.591 --> 00:09:10.967
When we are referring to guardrails, we are actually referring to the enterprise use of guardrails.
00:09:10.967 --> 00:09:18.110
So this is sort of: most enterprises have corporate policies that govern the way employees interact with each other, access to data, privacy and controls.
00:09:18.110 --> 00:09:27.312
In a post-AI world, things are becoming more and more agentic, and you have systems that, honestly, will very soon, if they're not already, behave like employees.
00:09:27.312 --> 00:09:29.548
They have access to employee databases.
00:09:29.548 --> 00:09:35.644
They have reasoning modules where they can decide what to access, when, and how to string together sort of pieces of information.
00:09:35.644 --> 00:09:46.746
That, in particular, opens up a risk vector for many enterprises, where the potential is obvious but the risk is also pretty large, and so that is what we mean by guardrails.
00:09:47.139 --> 00:09:58.128
Now there's been a lot of discourse on more of the consumer-facing applications, whether it's OpenAI or Grok or name your application these days, and whether there should and should not be guardrails.
00:09:58.128 --> 00:10:03.067
I know that this also sort of moves towards a political issue, so it's not something we have a strong stance on.
00:10:03.067 --> 00:10:11.604
I think overall, we're very pro-AI and we want it to be done responsibly.
00:10:11.604 --> 00:10:12.285
We're also pro-open source.
00:10:12.285 --> 00:10:23.365
I think recently, with DeepSeek and some of the other innovations that have happened, it's very clear that those models are catching up to some of the cutting-edge proprietary models, and we're very excited to sort of leverage that trend as well
00:10:23.385 --> 00:10:26.653
going forward. Yeah, I love the fact that you really make that distinction.
00:10:26.653 --> 00:10:42.575
You talk about those enterprise considerations because there are so many and, quite frankly, obviously you and I are going to be talking enterprise because you work within that realm, but I would argue all of us as business owners, every single person that's tuning into this conversation, we all have to think about the ways that we use it, because we have our own data.
00:10:42.575 --> 00:10:44.524
We have our own customer data.
00:10:44.524 --> 00:10:50.041
There's a lot of sensitive stuff that a lot of us are feeding through AI these days, and free plans versus paid plans.
00:10:50.041 --> 00:10:51.605
There's so many considerations there.
00:10:51.605 --> 00:11:00.440
But one thing that I really appreciate is that you call out so distinctly that there are guardrails, not just on AI outputs, but also the inputs.
00:11:00.440 --> 00:11:02.143
What is it that we're feeding into it?
00:11:02.143 --> 00:11:13.350
Talk to us about those considerations, because I feel like a lot of us are just freely using it without thinking about both sides of that equation of not just what's the AI giving to me, but what am I giving to it?
00:11:14.639 --> 00:11:16.285
Yeah, that's a fantastic point.
00:11:16.285 --> 00:11:20.988
I mean, the first thing I'll mention, whether you're, you know, consumer use cases or even SMB use cases:
00:11:20.988 --> 00:11:26.525
Look at the terms and conditions on the different websites for how they handle data, what they do about it.
00:11:26.525 --> 00:11:28.407
I think that is very critical.
00:11:28.407 --> 00:11:29.386
You could be.
00:11:29.386 --> 00:11:30.559
I'll give you an example.
00:11:30.679 --> 00:11:46.586
Last week I was at the T3 conference, which is a wealth management technology conference, and it was very clear, for better or worse, that many advisors were using some of the tools like ChatGPT and sort of uploading PII and sensitive client information because regulation hasn't caught up there.
00:11:46.586 --> 00:11:55.871
It's probably fine, but it's at least something you need to disclose to your customers because it is going to an open source model or an openly accessible model.
00:11:55.871 --> 00:12:01.241
So my sort of takeaway is: definitely look at the terms and conditions and just think through.
00:12:01.241 --> 00:12:02.687
Do the what do they call it?
00:12:02.687 --> 00:12:04.565
The PR test or the public test?
00:12:04.625 --> 00:12:10.019
If you were in the newspaper, would you feel comfortable or not comfortable saying a particular statement?
00:12:10.019 --> 00:12:12.248
And if you're not comfortable, it's good to explore solutions.
00:12:12.248 --> 00:12:32.986
That said, you know, I'm not a proponent of adding costs to your business model, so I'm not suggesting that. There are solutions that you can use that are, you know, a very low cost, but at least be aware of the risks there. Yeah, I love the fact that you're also introducing us to so many different players, Sean.
00:12:32.895 --> 00:12:34.111
We're talking about potential government regulation.
00:12:34.111 --> 00:12:35.296
We're talking about enterprise level.
00:12:35.296 --> 00:12:38.403
We're even talking about our own responsibilities as consumers.
00:12:38.403 --> 00:12:47.772
If I go to a dental office, of course I'd like to know what the heck they're doing with my data, and it was much simpler 50 years ago when everything was just pen and paper, but it's much more complex today.
00:12:47.772 --> 00:12:49.221
So I want to ask you this question.
00:12:49.221 --> 00:12:51.145
Obviously, there's no one answer to it.
00:12:51.145 --> 00:12:54.123
I'm sure it's a mix of everything, but who's responsible here?
00:12:54.123 --> 00:12:56.390
Who's going to drive that change?
00:12:56.390 --> 00:13:15.322
Because, when I think about it, part of our value add as entrepreneurs is that we drive a lot of change when it comes to technology, when it comes to innovation. To the point that you said earlier, Sean, there's a lower penalty for you and I taking risks and failing than there is for a McKinsey, so we get to drive some of those experiments and that change.
00:13:15.322 --> 00:13:23.567
Who's going to be driving that change in this world of AI and those guardrails and use cases and privacy and data and all these things we're talking about?
00:13:25.437 --> 00:13:28.871
Yeah, that's a million-dollar question, Brian, I think you know.
00:13:28.871 --> 00:13:37.985
Ideally we'd want the government to play a part in this, but we know that they're usually a little bit slow in the adoption and understanding of the latest technology.
00:13:37.985 --> 00:13:51.301
We also know that historically, a lot of the innovation has come from some of the larger companies, but this is a very, very unique sort of thing that we're seeing in the market, where the vast majority of innovation here is actually coming from the startups of the world.
00:13:51.301 --> 00:13:56.852
You know, sort of our fellow founders, for lack of a better word.
00:13:56.852 --> 00:14:04.921
So I think the onus from a, you know, an ethical standpoint is on us to make sure that we are using sort of our technology responsibly.
00:14:04.921 --> 00:14:08.094
I hate to draw the metaphor, but you know what's that Spider-Man or Superman quote of like?
00:14:08.094 --> 00:14:09.275
With power comes responsibility.
00:14:09.275 --> 00:14:22.297
I do think that really falls on founders and sort of early technology experimenters with AI to take this onus on themselves until these become standards and norms that are applicable in our space.
00:14:22.918 --> 00:14:27.796
Yeah, it was totally a softball question, Sean, because obviously Galini is plugging a lot of those gaps.
00:14:27.796 --> 00:14:31.402
I want to put you on the spot a little bit because I obviously can see your website.
00:14:31.402 --> 00:14:43.964
Most listeners can't see you and I right now and they definitely can't see your website as we're talking, but you have a graphic on your website that I feel like describes so succinctly where Galini fits into this mix.
00:14:43.964 --> 00:14:50.629
I love the fact that you've got the user's input feeding not straight into an output, but first into Galini.
00:14:50.629 --> 00:14:59.743
That makes sense of all of the things that we're talking about, regulations, company policy, all of those, then feeds it into the AI and gets that output.
00:14:59.743 --> 00:15:05.691
Walk us through how the heck that works, because the visual is worth a thousand words for me, truly, but I want you to explain it for listeners.
00:15:06.692 --> 00:15:07.235
Absolutely.
00:15:07.235 --> 00:15:13.876
I mean the metaphor here for technologists out there is a firewall, but instead of being a security firewall, it's an AI compliance firewall.
00:15:13.876 --> 00:15:15.635
That's exactly how it works, Brian.
00:15:15.635 --> 00:15:21.775
So, as a listener, or someone wondering how you would use something like this:
00:15:21.775 --> 00:15:24.169
There are two parts to the solution.
00:15:24.169 --> 00:15:33.482
The first one is a user would type in a query, a prompt, a voice note, an agentic instruction, whatever the input format is.
00:15:33.482 --> 00:15:36.839
It would hit the Galini API and they would get a response.
00:15:36.839 --> 00:15:44.355
The developer team, based on the response, could do something about it, and the same thing happens on the output side.
00:15:44.355 --> 00:15:54.506
You know, you could be masking PII, you could be keeping certain conversations off-topic, or, you know, topics that are off limits for the purpose of your application.
00:15:54.506 --> 00:16:02.083
You could control what the agents can and cannot access as you're building sort of more agentic applications.
00:16:02.083 --> 00:16:03.471
So, essentially,
00:16:03.471 --> 00:16:11.931
you can think of it like your safety blanket or safety layer between, you know, users and a model that is not fully in your control.
00:16:12.572 --> 00:16:15.941
Yeah, I love the way that you use analogies to illustrate that point.
00:16:15.941 --> 00:16:25.241
I think it's so important for all of us to understand where things are living, because now we're sending data all across the world these days with any and every query that we're sending out there.
00:16:25.241 --> 00:16:42.399
Sean, when I think about the work that you're doing, what really excites me is the fact that you guys are taking that responsibility to make sense of company policy, to make sense of all the things that at the enterprise level, they hope is happening, but you then build and deploy that solution within their environment.
00:16:42.399 --> 00:16:45.092
My question to you is what's the spark for them?
00:16:45.092 --> 00:16:46.195
What's that catalyst?
00:16:46.195 --> 00:16:55.134
Is it a pain point where enterprise-level companies are already saying, wait, we recognize that this is something we want to get on top of? Or, and I'm going to throw the insurance industry under the bus here,
00:16:55.134 --> 00:17:02.381
is it the case of, like insurance, where none of us want it but, unfortunately, when something happens, we're really glad that we have it?
00:17:02.381 --> 00:17:06.914
Where's that catalyst or that spark for them to say, wait, let's prioritize this.
00:17:08.175 --> 00:17:09.438
It's a great question, Brian.
00:17:09.438 --> 00:17:10.278
I think we're.
00:17:10.278 --> 00:17:11.140
I think two things.
00:17:11.140 --> 00:17:12.801
One is we're a little bit early in the market.
00:17:12.801 --> 00:17:16.326
I think people are still early in their AI journeys, AI adoptions.
00:17:16.326 --> 00:17:19.353
They're figuring out what AI means for their company beyond,
00:17:19.353 --> 00:17:23.563
you know, the Microsoft Copilots of the world or, you know, the consumer-facing applications.
00:17:23.563 --> 00:17:25.979
So I do think this is an evolving conversation.
00:17:25.979 --> 00:17:32.088
If you ask me in a couple of years, my answer will probably change. As for the motivations that we've discussed so far,
00:17:32.088 --> 00:17:35.480
I'd say it's 50-50.
00:17:35.851 --> 00:17:44.935
There are some sort of proactive leaders that are, like, taking a very proactive approach around how to manage something like this, and so they'd fall in your first camp.
00:17:44.935 --> 00:17:52.642
But I'd say maybe like 60, 65% of the folks that we've spoken to are almost taking the insurance angle of, gosh,
00:17:52.642 --> 00:17:57.500
is the upside of launching this capability to our customers worth the risk of the downside?
00:17:57.500 --> 00:18:02.561
And unfortunately, almost every week there's a new news article of somebody butchering this.
00:18:02.561 --> 00:18:05.478
I mean, I don't want to put any names out there, but
00:18:05.478 --> 00:18:09.940
a quick news search will give you a sense of what these are.
00:18:09.940 --> 00:18:16.002
So, you know, it is more the insurance angle as a driver today for enterprise adoption of guardrails.
00:18:16.002 --> 00:18:18.074
But, you know, our hope is that in the future,
00:18:18.074 --> 00:18:21.122
folks will take more and more of a proactive lens here.
00:18:22.090 --> 00:18:24.986
Yeah, I really, even not just speaking from a technical perspective,
00:18:24.986 --> 00:18:39.056
I love getting inside your executive mind and hearing you call out the fact that, yeah, we're early in the market, and I feel like that's how everybody kind of feels about AI right now. It's unbelievable saying we're early, because it is obviously incredible already, but it's just evolving at such a rapid rate.
00:18:39.056 --> 00:18:42.750
So I'll put your entrepreneurial and executive mind on the spot here.
00:18:42.750 --> 00:18:44.492
How do you make sense of that?
00:18:44.492 --> 00:18:48.999
Are there some times where you go to bed at night and you think are we too early here?
00:18:48.999 --> 00:18:51.962
Are we fixing a problem that other people aren't aware of just yet?
00:18:51.962 --> 00:18:57.771
Or what are those conversations like when you're in the marketplace, when you're talking to your potential clients, your existing clients?
00:18:57.771 --> 00:19:02.691
How do you make sense of the timeline, of where you're fitting in into the more macro landscape?
00:19:04.574 --> 00:19:04.874
I do.
00:19:04.874 --> 00:19:07.278
I spend time on it all the time, Brian, all the time.
00:19:07.278 --> 00:19:12.026
It's, I guess, the fun and the not fun parts about being an entrepreneur.
00:19:12.026 --> 00:19:14.977
You constantly have to question yourself, your business model, kind of,
00:19:14.977 --> 00:19:16.981
you know, your timing, what you offer in the market.
00:19:16.981 --> 00:19:33.435
You know, one of the incredible things we learned from YC on this was they try to simplify a very, very ambiguous, you know, thing, which is entrepreneurship, into sort of different pieces of advice at different stages of your journey, and for most of us that are early,
00:19:33.435 --> 00:19:35.260
it is: build something people want.
00:19:35.260 --> 00:19:40.911
That is sort of the mandate, and they've sort of distilled years of very successful founders into that.
00:19:40.911 --> 00:19:44.804
So the way I sort of handle that is I keep talking to customers.
00:19:44.804 --> 00:19:54.954
I keep sort of uh, you know, going to conferences, speaking to leaders, um, speaking at ISAF, which is a large audit conference later this week.
00:19:54.954 --> 00:20:03.304
Being right front and center with customers in their workflows in the discussion is the nearest way to figure out.
00:20:03.685 --> 00:20:04.589
Are we too early or not?
00:20:04.589 --> 00:20:08.856
I think we are on the earlier side here, but we're not going to be for too long.
00:20:08.856 --> 00:20:15.512
I don't think there's been a technology that has evolved as quickly at a global scale as we are seeing it evolve.
00:20:15.512 --> 00:20:33.023
Gen AI, even the open source Manus release a week or two ago, or I think it was last week, is essentially taking OpenAI's Operator and bringing it to the world for a much lower price point, and that will completely change the way folks interact with technology.
00:20:33.023 --> 00:20:37.201
The actual UI with which even consumers use technology will completely change.
00:20:37.201 --> 00:20:38.637
Imagine if you don't have to.
00:20:38.637 --> 00:20:43.220
You want to use a piece of software and you don't have to figure out how the software works.
00:20:43.220 --> 00:20:47.121
You just have a need and you express the need and it is solved.
00:20:47.121 --> 00:20:51.954
That is a world we're heading into very, very soon, just to pick on one example.
00:20:52.455 --> 00:20:53.921
Yeah, I love that, Sean, I'll tell you.
00:20:53.921 --> 00:21:13.412
Obviously, as someone who talks to business owners for a living, I completely agree with you that, even though it feels early right now, that next stage is going to come so quickly, and that's why I really appreciate the fact that you and your co-founder are so clear on the fact that there are more regulated industries that make more sense for you to dip into: the government industry, for example, the financial sector.
00:21:13.412 --> 00:21:19.675
Obviously, those are industries that have very sensitive data, that have a lot of data, that are leveraging AI in different ways.
00:21:19.675 --> 00:21:31.156
Talk to us about that industry-specific targeting, those conversations, how you've identified those, and really why those industries are going to be forced to be the leaders in this because of the sensitivity of their data.
00:21:32.259 --> 00:21:35.230
Yeah, I mean, that is where the pain is most felt, right?
00:21:35.230 --> 00:21:43.385
I think that's the reason why we're starting there: as you said, it's regulated, and the penalties are already established and large for, you know, violating those.
00:21:43.385 --> 00:21:45.270
I can give you a couple examples.
00:21:45.270 --> 00:22:03.017
We're working with a public safety provider who's essentially bringing the next version of their video technology to market to help with crime prevention and mitigation and we're helping them put guardrails around how that technology is used.
00:22:03.017 --> 00:22:19.642
We're working with another global government, in early discussions with them, around amazing internal capabilities that they're trying to deploy for their citizens that would just bring access to information and resources to their fingertips, but they need to do it in a responsible way, and sort of,
00:22:19.642 --> 00:22:22.559
there's a big data challenge around that, as you can imagine.
00:22:22.559 --> 00:22:25.490
So you know those are some examples there.
00:22:25.771 --> 00:22:44.924
We're having a lot of conversations with folks all across the financial services landscape, particularly investment advisors, around, honestly, the pain of compliance and how to use technology to help ease that from a delivery standpoint in a model that's pretty tough.
00:22:44.924 --> 00:22:46.236
There are pretty thin margins there.
00:22:46.236 --> 00:22:48.213
We're talking to banks.
00:22:51.104 --> 00:23:00.655
And then, the last thing I'll mention, we're also talking to chronic care and other healthcare delivery practices, as they think about using AI both in their operations and in customer service.
00:23:00.655 --> 00:23:13.799
And the last thing you want is to launch a 24/7, always-available AI chatbot where you ask, hey, my back's hurting, I think I injured it in some way, and it starts giving you medical advice and opens the practice up to getting sued.
00:23:13.799 --> 00:23:28.040
So those are some of the low-hanging fruit, but the vision we have in our head is, as AI becomes more human-like: we have a set of social norms or regulations for how we interact in companies with each other.
00:23:28.040 --> 00:23:29.604
What is that protocol?
00:23:29.604 --> 00:23:32.717
What does that look like for AI?
00:23:32.717 --> 00:23:41.282
And it makes sense that this is not going to be solved by an individual provider, and so we want to ideally be that protocol in five years.
00:23:42.069 --> 00:23:47.814
Yeah, Sean, I love the real-life examples because they immediately show us how much we should all support some level of guardrails.
00:23:47.814 --> 00:23:50.943
Because the back pain example, yeah, none of us want AI to do that.
00:23:50.943 --> 00:23:56.558
We already have WebMD to terrify us when anything is wrong with our bodies, so we don't need AI to be tacking onto that.
00:23:56.558 --> 00:23:58.171
I want to switch gears a little bit, Sean.
00:23:58.171 --> 00:24:03.053
I've been so excited not only to talk to you with regards to AI, but also to get inside of your entrepreneurial mind.
00:24:03.053 --> 00:24:07.019
It's such a fun part of these conversations for me because I think it's fascinating.