How I Buy – Ben Zittlau, Distinguished Architect, Jobber

By Steve Woods in #HowIBuy

Selling to product organizations can be difficult.  If the technology stack is deeply entrenched, a new option would need to be at least an order of magnitude better than the incumbent to have a reasonable chance of displacing it.  In the #HowIBuy interviews we’ve done to date, this challenge of the product roadmap has appeared multiple times.

This week, I managed to catch up with Ben Zittlau, Distinguished Architect at Jobber (they provide web-based business management software specifically for home service companies), and we took the conversation further upstream, into the experimentation and education that take place as ideas are explored and potentially added to the product roadmap.

Here are Ben’s insights:

 

Major new projects and new architectures are a point in time when there’s a great opportunity for new vendors to become part of your stack.  Can you tell us how those projects are approached?

Let’s start with an example: we recently began the process of using machine learning approaches to identify and mitigate fraud in our customer base.  It was truly an experiment; we started with no idea whether it would work.  The challenge was to take a broad, overall problem set – fraud mitigation – and find a way to tackle it.

 

How does the thinking and experimentation with new ideas play out?

My role is as an architect, and as such I try not to be a gatekeeper.  I’d rather ask thoughtful, incisive questions about constraints and ramifications than tell a person not to go in a certain direction.  In looking at fraud, there are lots of directions the potential solution could go in; the first question is whether it’s a people process or something we’re going to solve with technology.  I don’t start with a bias towards technology as a component of the solution, even though by nature and title I’m a technologist.

We had a team member with a passion for machine learning and a lot of experience with it, so he was keen to get involved.  That, in combination with some early experimentation, led us down a path whereby the machine learning approach became the favoured one.

 

Something this novel doesn’t seem like it would be driven by a classic business use case and product specification.  How do these kinds of projects take shape?

It starts with a business challenge, but one that is defined quite loosely.  We had a payment system as part of our product.  At the start, we put some very basic fraud detection capabilities in front of it, but we needed to see it operate in the real world to understand the shape of the problem better.  What would fraudulent accounts look like?  What could we do to mitigate their impact?

The role of the business was to say “hey, there’s a problem here”.  We had set a benchmark at the business level for what we could tolerate, and a few events had pushed us outside of that tolerance zone.  From that point, the problem was very open ended.  I’m happy to solve this kind of problem with no technology at all if we can do it through a people process.

Projects like this are always uncertain.  There are unknowns in the technology, in the approach, and in the results.  In order to manage the risk and the structure, we will time-box the experimentation as there’s no way to tell if you’re going to succeed.  

Within the organization, we push hard for problem-centric thinking as it removes the constraints of how a problem will be solved.  If all you know is the problem space, and the time-box for how long to invest in the experiment, it opens up a lot of creative thinking on how you want to tackle the problem.

 

When you’re stepping into a new area, it’s all unknowns.  How do you understand whether you and your team know enough?  How do you know when to start building and when to do more research?

That’s where the time-boxing becomes quite useful.  We’ll explore for two weeks and see if we’re making progress.  We don’t need to solve the problem in that period, but we’ll definitely learn a lot, and we’ve capped our investment.

As we get in deeper, the requirements of a particular solution become clearer.  As we dug in on the machine learning approach, the need for specific data signals became obvious.  We looked manually at fraudulent accounts and compared them to regular accounts to see what kinds of signals we might be able to observe.

In our case, we were able to look at fraudulent accounts side-by-side with regular accounts and notice a few odd discrepancies.  It turned out that some of those quirks of fraudulent accounts could be identified algorithmically, so that gave our machine learning systems a starting point in terms of what to look for.
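To make that concrete, here is a minimal, hypothetical sketch of that kind of starting point: a handful of hand-picked account signals fed into an off-the-shelf classifier with scikit-learn.  The feature names and the labelled dataset are purely illustrative assumptions, not Jobber’s actual signals or approach.

```python
# Hypothetical sketch: hand-picked account signals fed into a simple classifier.
# Feature names and the labelled file are illustrative, not Jobber's real data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Each row is an account; "is_fraud" marks accounts flagged during manual review.
accounts = pd.read_csv("labelled_accounts.csv")

# Signals of the sort a manual side-by-side review might surface.
features = ["account_age_days", "payments_in_first_week",
            "avg_invoice_amount", "profile_fields_completed"]

X_train, X_test, y_train, y_test = train_test_split(
    accounts[features], accounts["is_fraud"], test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Check performance on held-out accounts before trusting it with real decisions.
print(classification_report(y_test, model.predict(X_test)))
```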

When we were a bit deeper, we were able to lean on the team member with the most significant expertise in machine learning.  He was able to help us map problems to the most likely available technology approach within the machine learning space.  A lot of engineering is like that: you need to understand the tools you have at your disposal and know how to map them to the specific problems you are trying to solve.

 

How do you look at solutions in a time-boxed environment and cut through the marketing hype?

There’s a balance between velocity and cost in any engineering environment.  For example, running an Elasticsearch cluster in a production environment is something we could do, but may not want to do.  If we can pay someone to take that off our plates, it increases our team’s velocity, and the cost we incur may be worth it.

I don’t want the team to throw money at problems they don’t understand, as there’s a very high chance the money will not be tied to the problem at hand.  If we understand a problem set and its risk profile is clear, like hosting an Elasticsearch cluster, we don’t need to understand everything under the hood.  But if we lack understanding of the core problem, I’m not comfortable throwing money at it.

 

How do you evaluate architectures that are new and unknown?  One area of the new architecture might be better, but another area might be worse.  How do you evaluate at scale?

We’ll often do that evaluation in a partial deploy scenario.  For example, with search, we built the new architecture to be feature-compatible with the existing structure and rolled it out to a small percentage of the user base to see how it scaled.  At 20% of the user base we started to see early indicators of performance issues, far earlier than we had anticipated.

In some cases you can see these issues using load testing, but in many cases production has nuances that are very difficult to understand and replicate, so it’s valuable to be able to do a partial deploy and pull back at the first indication that performance is not what it should be.
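As an illustration of what a partial deploy can look like, here is a minimal sketch (an assumption, not Jobber’s implementation) of percentage-based routing: users are bucketed deterministically by ID, so the same accounts stay on the new path until the rollout percentage is dialed up or pulled back to zero.  The search backends are hypothetical stand-ins.

```python
# Minimal sketch of a percentage-based rollout (assumed, not Jobber's code).
# Users are bucketed deterministically by ID, so the cohort on the new path is
# stable until ROLLOUT_PERCENT changes; setting it to 0 pulls everyone back.
import hashlib

ROLLOUT_PERCENT = 20  # dial up gradually; set to 0 to roll back instantly


def use_new_search(user_id: str) -> bool:
    """Route this user to the new architecture if their bucket is in range."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < ROLLOUT_PERCENT


def legacy_search(query: str) -> list:
    return []  # hypothetical stand-in for the existing implementation


def new_search(query: str) -> list:
    return []  # hypothetical stand-in for the new architecture


def search(user_id: str, query: str) -> list:
    backend = new_search if use_new_search(user_id) else legacy_search
    return backend(query)
```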

 

How do you move forward with a project? What happens once you’re out of the “time-box” experimentation phase?

We’re still a very agile environment, and that’s something I’m working hard to maintain as part of the culture.  When we start into a project we know that we don’t know everything about it.  Sometimes there will be something that we learn that causes us to question whether the original idea is still worth pursuing.  That’s okay.  We might have learned that the technology doesn’t deliver, the UX is too complicated, or the business value is less than we had anticipated.  Those are moments to reconsider and perhaps stop the project.

Obviously it’s always a balance, and we don’t want to be second-guessing our decisions every hour.  That’s where sprints are a useful construct.  Once we’ve committed to a sprint, we’ll push to the end of it, but that does not guarantee there will be another sprint afterwards if the context of the original decision has changed.

 

When you’re looking at adding a technology to your architecture, success is determined not only by what the technology is capable of but also by how well you’ve got it configured.  How does that deep knowledge get into your organization?

We’ll happily work with vendors – the good ones are happy to connect our engineering team to their engineering team.  The best vendors can deal with a situation where we’ll say “we’re seeing this problem” and respond with “here’s a better architectural approach for you to consider”.  Those are the kind of vendors we like to partner with.

I put quite a bit of time into being connected with the community.  It can be invaluable when you’re dealing with a technology challenge, as there’s a significant chance that if you’re having a problem, someone else will have dealt with a similar challenge.  Often you can find a key person with years of experience in an area and find a short-cut to the solution.

Sometimes we’ll just get stuck and we have to level up our understanding.  We had to go right to the source code for PostgreSQL for one challenge that we were having.  It’s something that I expect of my senior engineering team; they should be able to go to that level and not get stuck.

 

Thanks for your insights on how these types of major architectural transformations happen, Ben!  It was tremendously valuable!

Steve Woods
CTO and Co-Founder