Notes from Patent Attorney Ken Murray at the StartupFolsom AI Meetup

At the latest StartupFolsom AI Meetup, we had a talk that landed somewhere between “practical field manual” and “confessional from a recovering shiny-object chaser.”

The speaker was Ken Murray (tech/patent attorney, engineer by background, former Exxon-style optimization guy), and his core message was refreshingly un-hyped:

AI isn’t magic. It’s leverage. And leverage makes you responsible for what you do with it.

Below is a founder-friendly recap of the big ideas, the gotchas, and the stuff you can actually apply Monday morning.

1) Ken’s origin story: “digital twins” before it was cool

Ken started by describing the kind of work that makes modern “AI transformation” decks look like finger painting: early digital-twin-style modeling in oil and gas.

His approach was simple and brutal:

  • Collect empirical data from the field

  • Feed it into algorithms

  • Model what’s really happening

  • Optimize based on reality, not vibes

He told a story about modeling a producing field and driving a major production increase with a relatively modest investment. The point wasn’t “oil is awesome.” The point was:

A digital twin is only as good as the data you’re willing to go get.

And that theme kept coming back all night.

2) AI makes you more thorough, not necessarily faster

You’ve heard the pitch: “AI makes everything faster, cheaper, easier.”

Ken’s response (from a patent attorney who lives and dies by accuracy): Nope.

What he’s seeing in real work:

  • AI often increases the number of directions you explore

  • which increases how much you verify

  • which increases how thorough the final deliverable is

  • and sometimes increases time spent rather than decreasing it

His framing was basically:

AI doesn’t remove work. It moves you into harder work sooner.

You get to the “interesting edge cases” faster… and that’s where the clock starts.

3) “Hallucinations” aren’t spooky — they’re a research bill you ignored

Ken really didn’t love the word hallucination. His argument was more blunt:

If you relied on a fake case citation (or any wrong output), the deeper problem is you didn’t verify.

In legal work, that can get you sanctioned. In startup work, it can get you killed in the market (or sued, or both).

Rule of thumb he kept reinforcing:

  • Use AI to surface possibilities

  • Then do the real verification work

  • Especially when citations, math, finances, or compliance are involved

One line that stuck: AI can produce something that sounds like it should exist. That’s not the same as it being real.

4) The best use case: adversarial review

Ken’s favorite practical pattern was not “generate me a perfect document.”

It was this:

  1. Draft (or review) the contract/spec/plan yourself

  2. Feed it to an LLM and ask: what did I miss?

  3. Run it through multiple LLMs and compare answers (see the sketch just after this list)

  4. Treat the AI like an adversary trying to break your logic
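
To make that loop repeatable instead of an ad-hoc paste into a chat window, here is a minimal sketch in Python. The ask_model helper and the model names are placeholders (assumptions, not any specific vendor's API); wire them up to whatever tools you actually use.

```python
# Minimal sketch of the adversarial-review loop: same document, same
# "what did I miss?" prompt, several models, answers kept side by side.
# ask_model is a placeholder -- swap its body for a real call to whichever
# LLM provider(s) you actually use; nothing here assumes a specific API.

ADVERSARIAL_PROMPT = (
    "You are a hostile reviewer trying to break this document.\n\n"
    "{document}\n\n"
    "List everything that is missing, ambiguous, or exploitable. Do not praise it."
)


def ask_model(model_name: str, prompt: str) -> str:
    """Placeholder: replace the body with a real API call to your provider."""
    return f"[{model_name} would critique the draft here]"


def adversarial_review(document: str, models: list[str]) -> dict[str, str]:
    """Send the same adversarial prompt to several models and keep every answer."""
    prompt = ADVERSARIAL_PROMPT.format(document=document)
    return {name: ask_model(name, prompt) for name in models}


if __name__ == "__main__":
    draft = "Vendor agrees to provide services..."  # your contract/spec/plan
    for model, critique in adversarial_review(draft, ["model-a", "model-b", "model-c"]).items():
        print(f"--- {model} ---\n{critique}\n")
    # The human step: compare the critiques, then verify every claimed gap
    # yourself before you change the document.
```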

That adversarial posture is gold for founders too:

  • Product requirements

  • Security assumptions

  • Vendor contracts

  • Hiring plans

  • Go-to-market narratives

  • Investor decks (especially risk sections)

If you want a one-sentence workflow:

Don’t ask AI to replace your brain. Ask it to attack your blind spots.

5) Confidentiality: keep it generic or keep it out

The meetup discussion got real when the topic turned to confidentiality.

Ken’s stance: the upside is huge, but you must change how you work:

  • Don’t paste sensitive client specifics into public tools

  • Keep examples generic

  • Strip identifying details (see the sketch just after this list)

  • Use real experts for final validation (AI is a bridge, not the destination)
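
To make "strip identifying details" concrete, here is a deliberately simple pre-paste scrub. The patterns and the KNOWN_NAMES list are illustrative assumptions, nowhere near a complete redaction pass; a human still reviews the output before anything goes into a public tool.

```python
import re

# Toy pre-paste scrub for "strip specifics, use placeholders". The patterns
# below are illustrative assumptions -- they catch emails, US-style phone
# numbers, and a hand-maintained list of names, and they WILL miss things.
# A human still reviews the output before it goes anywhere public.

KNOWN_NAMES = ["Acme Corp", "Jane Smith"]  # maintain per client/matter


def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    for name in KNOWN_NAMES:
        text = text.replace(name, "[PARTY]")
    return text


if __name__ == "__main__":
    draft = "Jane Smith (jane@acme.com, 555-867-5309) asked Acme Corp to extend the term."
    print(redact(draft))
    # -> "[PARTY] ([EMAIL], [PHONE]) asked [PARTY] to extend the term."
```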

One moment that got a laugh: the AI accidentally kept a proper name in a document and Ken basically scolded it like a junior associate. The laugh was earned, but the lesson wasn’t a joke:

If you’re not actively policing confidentiality, you’re rolling the dice with other people’s data.

Founders: same issue applies to customer lists, roadmap specifics, proprietary methods, and anything you’d cry about losing in discovery.

6) The learning curve is real: budget 100–200 hours

Ken offered a number that felt honest: to get genuinely useful (not “toy demo useful”), you’re looking at 100–200 hours of hands-on time.

Not because AI is hard, but because your domain is hard.

You’re not learning buttons. You’re learning:

  • what prompts reliably produce usable output for your work

  • where the model is unreliable

  • how to structure inputs

  • how to force consistency

  • how to verify quickly without fooling yourself

And yes—he also warned about the “bright shiny object” trap: AI is fun, and fun is dangerously convincing.

7) Limitations that matter in the real world

Ken listed limitations that weren’t theoretical—they were “this just broke my workflow today” limitations:

  • Math / numerical reasoning can be unreliable

  • Context windows (limits on how much it can ingest at once) are still a practical constraint

  • Formatting can be weirdly inconsistent (the “it said it made the doc… and it’s blank” pain is real)

  • Technical diagrams / true 3D understanding are often weak

  • Multi-document consistency can drift across large document sets

  • The model can be led to your preferred answer if you phrase things carelessly (or intentionally)

A helpful distinction someone raised in the room:

LLMs model language about the problem — not necessarily the problem itself.

That explains why they can write like an expert and still miss basic physical constraints.

8) The big myth: “AI will replace professionals”

Ken doesn’t buy it (with a caveat).

He thinks AI will automate:

  • initial document review

  • standard drafting/templates

  • basic summaries and first-pass research

But it won’t replace:

  • strategy

  • judgment

  • negotiation nuance

  • stakeholder relationships

  • responsibility for outcomes

The line that got nods:

It’s not AI that takes your job. It’s someone using AI better than you.

(Yes, that stings. It’s supposed to.)

9) Client expectation whiplash: value goes up, but they want the price to go down

Ken described a very specific professional-services problem that founders will recognize instantly:

  • Clients think “AI = 90% cheaper”

  • But the deliverable is often higher quality, more comprehensive, and better risk-managed

  • Which takes more thought, not less

So you end up having the “value vs hours” argument:

  • They want a discount because they heard AI is fast

  • You’re delivering more value because AI exposed more issues

  • Everybody is technically right, which is the worst kind of argument

Founders building AI-enabled services: this is your future customer conversation too. Start learning how to price outcomes, not keystrokes.

10) IP in an AI world: patents still matter — but trade secrets matter more (and are harder)

The meetup ended with a very founder-relevant question:

Should you patent AI workflows? Or just ship and get traction?

Ken’s take was balanced:

  • Markets are moving faster

  • Reverse-engineering is easier

  • Patents can still be a barrier to entry

  • But you also need strong trade secret discipline (NDAs, access controls, process tracking)

On budgeting, he threw out a practical rule of thumb he’s seen in successful companies:

  • 10–15% of investment allocated to IP (patents + branding + trade secret systems)

And yes, he pushed back hard on the “we raised $15M, let’s spend $6K on IP” fantasy. That’s like buying a race car and protecting it with a bike lock.
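
For a sense of scale, here is that rule of thumb applied to the example above (pure arithmetic, not advice for any particular company):

```python
# Ken's 10-15% rule of thumb applied to the "$15M raise, $6K on IP" example above.
raise_amount = 15_000_000
ip_low, ip_high = 0.10 * raise_amount, 0.15 * raise_amount
print(f"Rule-of-thumb IP budget: ${ip_low:,.0f} to ${ip_high:,.0f}")  # $1,500,000 to $2,250,000
print(f"The bike-lock budget:    ${6_000:,}")                         # $6,000
```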

Practical takeaways for founders (the “do this next” list)

Use AI like a red-team partner

  • “What am I missing?”

  • “What would an adversary argue?”

  • “Where could this fail in the real world?”

Build a verification habit

  • For numbers, legal claims, compliance, citations: verify externally

  • For strategic decisions: verify with humans who’ve been punched by reality before

Create a confidentiality workflow

  • Strip specifics

  • Use placeholders

  • Assume anything you paste could leak someday

Manage the addiction factor

AI can feel productive while you’re actually avoiding the hard deliverable.
If that’s you (no judgment), set a rule like:

  • “Deliverable first, rabbit holes second.”

If you’re building a defensible business, think about IP early

Not “file a patent tomorrow,” but:

  • What’s patentable?

  • What must remain secret?

  • What operational controls do we need to keep it secret?

  • What do investors/partners expect to see?

Closing thought

Ken’s best theme was also the least sexy:

AI doesn’t drive the bus. You do.

It will absolutely help you see more options, catch more mistakes, and raise the quality of your work. But it also makes it easier to fool yourself quickly and confidently—which is basically the Dunning–Kruger effect on rocket fuel.

If you want the competitive advantage, it’s not secret prompts. It’s this:

Use AI aggressively — and verify like your reputation depends on it (because it does).

Slides

Here are the slides from Ken’s presentation.


About Ken Murray

Leveraging a multi-faceted career in engineering, Ken Murray began his legal career as principal patent analyst for the University of California, Davis. He founded Murray Tech Law, located in the heart of the dynamic and progressive city of Davis, California, just a few blocks from the highly respected University of California, Davis. Over the past 14 years, Ken has created an innovative law and technology intellectual property practice. During that time, he formed, invested in, and worked with several high-tech companies involved in energy, software, medical devices, advanced telecommunications, and healthcare software.

Ken is a registered patent attorney licensed to practice before the U.S. Patent and Trademark Office, as well as a member of the California State Bar. He graduated cum laude from the University of South Carolina-Columbia and received his law degree from the University of California, Davis.

Oh, yes – Ken is also an accomplished heavy crane and equipment operator.
