Engineering vs. science vs. commercial in carbon dioxide removal
Erica Dorr and Samara Vantil of Rainbow on where the lines break
This is a summary of episode #398 of the Reversing Climate Change podcast. You can listen to it on Apple Podcasts, Spotify, YouTube, or wherever you get your podcasts. Subscribing, rating, and reviewing Reversing Climate Change on any platform is very much appreciated and makes a huge difference for the show!
Quick Takeaways
The science-versus-engineering binary is a useful stereotype but a bad map of reality. In applied fields like carbon removal, most good scientists use engineering mindsets and most good engineers do science work. The real gap may be between the technical teams and the commercial side.
Rainbow’s certification engineers sit at the center of everything: they take the methodology that the science team writes and implement it with real projects, while staying in constant contact with the commercial team about timelines, costs, and project developer needs. They’re the bridge.
Erica deliberately keeps some distance from commercial pressures when writing methodologies. She doesn’t want to know that loosening a requirement would close a deal. But she does ask project developers what things cost, because feasibility has to factor in somewhere.
The hardest decisions are the ones where the outcome is highly sensitive to data that is also extremely expensive or difficult to obtain. That’s where most of the uncomfortable time gets spent, and where conservative discounts become a practical compromise.
There is a race-to-the-bottom dynamic in carbon markets, and every registry faces the pressure of project developers shopping around for whoever will give them the most credits with the least friction. Knowing whether you’re cutting scope for the right reasons or to avoid losing a deal is, as Erica put it, the ultimate question.
Where Scientists End and Engineers Begin (and Why It Doesn’t Matter as Much as I Thought)
Earlier this year I wrote a piece for Rainbow arguing that carbon markets need field engineers, not just scientists. It was a provocation. I was making the case that too much of the CDR world treats its work as a science problem when it’s really an engineering problem: messy, iterative, bound by cost and feasibility constraints that no paper can anticipate.
Erica Dorr, Rainbow’s head of science, read it and called me out. Not publicly or harshly, but clearly. She thought I’d drawn the lines too neatly, and I wrote a follow-up piece making the case for what scientists actually do in carbon removal, which turns out to be much more applied and much less detached from reality than the caricature I’d inadvertently painted.
Today I brought both subjects of those essays onto the show: Erica representing science, and Samara Vantil, one of Rainbow’s environmental engineers on the certification team, representing engineering. The goal was synthesis.
The stereotype and what’s behind it
We started with the most basic question possible: what is science and what is engineering? Erica gave the textbook answer, which she immediately labeled a cliche. Science creates knowledge. Engineering turns knowledge into solutions. Then both of them spent the next ten minutes explaining why that’s not really how it works.
Samara described doing what she called “mini applied science” when the methodology is vague and the certification team has to figure out how to implement leakage assessments for biochar projects. Erica described using an engineering mindset every day while writing methodologies, thinking about what’s practical, what project developers can actually do, what the certification team will be able to implement. If she doesn’t think that way at step one, she creates a mess for everyone downstream.
The picture that emerged is two disciplines that are much closer than they are distant. Both grounded in technical foundations. Both doing a mix of the other’s work. Both, frankly, more comfortable with each other than with the commercial side of the house.
The real gap
This was the turn I didn’t expect. I asked whether the bigger distance might be between the technical teams and the commercial team, and Samara confirmed it immediately. She sits at the center: taking what science builds, applying it to real projects, and staying in constant contact with commercial about timelines and deliverables. She understands that sending a lab sample from Africa to Europe costs at least 600 euros. She knows when a requirement is going to break a project developer’s economics. She goes back to science and says: this won’t work, we need to change it.
But Erica described her own relationship to commercial differently. She deliberately keeps some distance. She doesn’t want to hear that loosening a requirement would close a deal. She doesn’t want that information in her head when she’s setting the bar, because once you know it, you can’t unknow it, and the temptation to accommodate becomes structural.
And yet she also asks project developers what things cost. She wants to understand feasibility. She just wants to encounter that information in the context of methodology development, not in the context of a pending sale. It’s a subtle but important distinction about where the firewall belongs.
The quadrant
Erica described her decision framework as a quadrant: how sensitive is the outcome to this requirement, and how difficult is the necessary information to obtain? Low sensitivity, low difficulty? Require it or recommend it; it doesn’t matter much either way. High sensitivity, low difficulty? Obviously require it. Low sensitivity, high difficulty? Probably exclude it. High sensitivity, high difficulty? That’s where the real work happens. That’s where you’re making judgment calls about conservative discounts, about asking for the next-best data point, about accepting some uncertainty in exchange for actually being able to certify real projects.
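If it helps to see the framework as pseudo-logic, here’s a minimal sketch of the quadrant. Everything in it — the scores, the thresholds, the action labels — is my own illustrative invention, not Rainbow’s actual certification process:

```python
# Hypothetical sketch of the sensitivity-vs-difficulty quadrant.
# Scores, thresholds, and action labels are illustrative inventions,
# not Rainbow's actual certification logic.

def quadrant_call(sensitivity: float, difficulty: float) -> str:
    """Classify a methodology requirement.

    sensitivity: how much the certified outcome moves if this data is wrong (0-1).
    difficulty:  how costly or slow the data is to obtain (0-1).
    """
    high_sens = sensitivity >= 0.5
    high_diff = difficulty >= 0.5
    if high_sens and not high_diff:
        return "require"                    # important and cheap to get: obvious
    if not high_sens and not high_diff:
        return "require or recommend"       # low stakes either way
    if not high_sens and high_diff:
        return "exclude"                    # small marginal change, heavy burden
    return "judgment call: next-best data plus a conservative discount"

# A data point the outcome hinges on, but that is expensive to obtain:
print(quadrant_call(sensitivity=0.9, difficulty=0.8))
```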
Samara added the risk dimension: what are the consequences of getting it wrong? And she gave a concrete example. They conducted a leakage assessment and determined that a discount factor needed to be applied to a project. The project developer would get fewer credits. They wouldn’t be happy. But it was the right call, and the team made it anyway, knowing that the quality of the resulting credits justified the friction.
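To make the arithmetic of that call concrete, here’s a toy version of what a conservative discount does to issuance. The numbers are invented for illustration and don’t come from the actual project:

```python
# Toy example of a conservative discount factor; all numbers are invented.
gross_removal_tco2 = 1_000        # project's estimated gross removal (tCO2)
leakage_discount = 0.15           # discount applied after the leakage assessment
issued_credits = gross_removal_tco2 * (1 - leakage_discount)
print(issued_credits)             # 850.0 -- fewer credits, but higher-confidence credits
```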
The race to the bottom question
I brought up Charm’s blog post about reducing sampling it had judged unnecessary, and the unimpressed reaction that post drew from a friend of mine. This is the tension that everyone in carbon markets lives with. On one side: the climate urgency, the need to move fast, the recognition that the last marginal percent of accuracy costs far more than it’s worth. On the other side: trust. Scandal. The knowledge that if everyone optimizes for speed and cost, eventually the whole system comes down.
I used my favorite uncomfortable analogy: the optimal number of travel deaths is non-zero. We could all drive at 10 miles per hour, but we’ve collectively decided the cost isn’t worth it. Carbon removal has an equivalent trade-off lurking in every requirement, every sampling protocol, every decision about what precision is enough. And the honest answer is that the line is arbitrary. It’s a judgment call. It sits on a curve where at some point the next unit of quality costs 20 units of economics, and reasonable people can disagree about where to draw it.
Erica’s response was that she wished these trade-offs could be made more transparently, with real cost data from project developers about the diminishing returns of additional sampling. In practice, she doesn’t always get great answers, because developers don’t always know their own cost structures at that level of detail. But the aspiration is a data-driven approach to the rigor-feasibility balance, rather than one driven by vibes, competition, or whoever yells loudest.
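Here’s one way that aspiration could be sketched in code, assuming a toy model where uncertainty shrinks like 1/√n with n samples and each sample has a flat cost. The model, the cost figures, and the stopping rule are all my assumptions, not anything Erica laid out:

```python
import math

# Toy model: relative uncertainty falls like 1/sqrt(n) with n samples,
# each costing a flat fee. Stop when the marginal uncertainty reduction
# is worth less than the next sample. All numbers are illustrative.

COST_PER_SAMPLE_EUR = 600.0   # e.g. shipping one lab sample from Africa to Europe
BASE_UNCERTAINTY = 0.30       # relative uncertainty with a single sample

def uncertainty(n: int) -> float:
    return BASE_UNCERTAINTY / math.sqrt(n)

def samples_worth_taking(eur_per_point_of_uncertainty: float) -> int:
    """Return n at which the next sample stops paying for itself."""
    n = 1
    while True:
        points_removed = (uncertainty(n) - uncertainty(n + 1)) * 100
        if points_removed * eur_per_point_of_uncertainty < COST_PER_SAMPLE_EUR:
            return n
        n += 1

# If each percentage point of uncertainty removed is worth 2,000 EUR of credit value:
print(samples_worth_taking(2_000.0))  # -> 14 under these toy assumptions
```

The point isn’t the specific answer; it’s that once you have real cost and uncertainty numbers, the rigor-feasibility line becomes a calculation you can argue about instead of a vibe.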
How do you know you’re doing it for the right reasons?
This was the question I ended on, and Erica called it the ultimate question. Project developers can shop their projects around. A different registry might not require that extra test. The incentive to accommodate exists for everyone, and it doesn’t go away just because you’re aware of it. How do you distinguish between “this requirement doesn’t add enough value to justify its cost” and “we’re about to lose this deal”?
I don’t think there’s a clean answer, and I don’t think anyone pretended there was one. What I appreciated about this conversation is that Samara and Erica engaged with the discomfort of it directly. They described making calls that cost projects money and knowing it was right. They described the daily struggle of placing requirements somewhere other than at the maximum possible rigor. They acknowledged that the balance is a moving target, and that the commercial pressures exist whether or not you choose to listen to them.
These are the questions that everyone in carbon markets faces. The fact that Rainbow let its science lead and its certification engineer talk about them on a podcast, openly, without spin, is exactly the kind of institutional behavior I want to see more of.
Full Transcript
Ross Kenyon: Samara and Erica, thank you so much for being here.
Erica Dorr: Yeah, thanks for having us.
Samara Vantil: Thanks for having us. Always a pleasure to talk to you.
Ross Kenyon: Yes, thanks. We had a lot of fun working on these articles. Started off working on one about putting scientists back in their place and just cutting them down to size. And then getting hit back by the science team here and trying to find appropriate balance. These are like the most basic questions of all, but perhaps the most difficult. What is science and what is engineering and what actually is the difference between these two disciplines?
Erica Dorr: Oh man. I would say the most high-level difference that maybe is a cliche is science is the pursuit of knowledge or creating knowledge. And engineering is all about taking that knowledge and turning it into solutions or making real-world physical change, like creating things and building things with it. That’s the most high-level definition I think of. What do you think, Samara?
Samara Vantil: I totally agree with exactly what Erica mentioned. Usually science is the one that is building all the knowledge foundations and the engineers are the ones that take this foundation and apply it into real-world case scenarios to solve problems, basically.
Ross Kenyon: Okay. I think, what’s the word that you used for this, Erica? You said all models are lies, but some are useful. And maybe this way of understanding the difference between science and engineering is a case of this. Why isn’t this the whole story, or why is that a useful starting point but maybe not where you should conclude?
Erica Dorr: Well, it’s a stereotype and a cliche, and stereotypes are useful as a shortcut or a fast forward to understanding the big picture. They’re like a shorthand for us, but of course there’s more nuance to that. I think science and engineering, it’s not so much about a role or an inherent characteristic of one person. They’re approaches, really. And most people, most good scientists or good engineers, exhibit characteristics of the opposite one. There’s probably very few of us who are 100% pure science or pure engineering. There’s a lot of crossover, especially in rather applied or solutions-oriented fields like CDR.
Ross Kenyon: Samara, do you feel like you oftentimes end up doing science even though you’re an engineer?
Samara Vantil: Well, I was actually thinking about this when we got the invitation for the podcast. Sometimes when we encounter some very vague definitions in the methodologies, for instance, and that leaves a space for the certification team to define what should be done by the project, is when we actually need to do some sort of research. Very much a mini applied science. And one example that I could give about that is that now we’re having very long discussions about how we should implement leakage assessment in all our biochar projects. The leakage assessment is so broad, and it should be because it can vary depending on where the project is located, what they’re doing, and so on. And on the operational side, we need to provide guidance to the developers on how to do that assessment well. And because of that, we also need to do the assessment to provide guidelines and guidance. So it’s something that we really apply science when analyzing those leakage assessments. And one thing that I found particularly interesting about this science application is that we had very long discussions with the science team about how we should conduct it. So it was a super nice way of sharing how we could apply the scientific knowledge to the certification of those projects.
Ross Kenyon: Erica, do you end up doing some engineering or not really?
Erica Dorr: I mean, doing engineering, what does that mean? I think I use an engineering mindset or approach sometimes. No, more than sometimes. Quite often. I think that’s one of the unique things we try to make sure we do at Rainbow, is that when we, on the science side, on the science team, a lot of the day-to-day work means working on our methodologies, making methodologies, revising them, creating requirements. And I think I use this engineering mindset or solutions-oriented, practical, moving-things-forward mindset. Finding the balance in where to set our requirements. It’s a long process from tackling a new technology, learning about it, writing the methodology, making decisions, setting requirements, and then starting to work with Samara and the certification team full of engineers to implement it and get the methodology live, set up questions for project developers to respond to. And if I, on the science side at step one creating these requirements, am not thinking in a little bit of the engineering mindset, and I’m not thinking about how to make this practical, or what’s this going to look like, how are the project developers going to use it, how is the certification team going to use it, even is this going to be feasible, then I will probably end up making a mess for Samara and the certification team and the engineers. You have to have those constraints and concerns in mind and balance those with the scientific needs when making the methodologies. Even though that’s usually put in the role of the science team and science work.
Ross Kenyon: Why do we have this idea that scientists don’t really care about real-world application and they’re detached from operationalizing anything? I feel like I work with a lot of scientists. Granted, it’s mostly in carbon removal, and the kinds of jobs that are available and the kinds of people that it attracts are probably not the purely theoretical scientists, but people who actually want to have some more immediate impact in the world rather than waiting for the downstream effects of the article that you write that gets cited a thousand times and leads to a new fusion reactor or something. But that isn’t everyone, and certainly not everyone in science. So why do we just have this idea in our head that there’s one way to be a scientist and it basically maps to nothing in carbon removal? And also, I’m very open to the fact that I am projecting out from my own experience and false understanding of what it means to be a scientist as a non-scientist, and that I’m creating the myth that I’m trying to debunk at the same time.
Erica Dorr: Yeah, I think it comes from the caricature that we have of scientists or just the common image that we have of scientists in society. Scientists in TV shows are always the stereotypical person in a lab coat, detached.
Ross Kenyon: I thought of this, this is like the worst example possible, but it’s Gene Wilder in Young Frankenstein.
Erica Dorr: Oh gosh.
Ross Kenyon: Is that who you want to be? I could have chosen Einstein or someone. Thank you. Yeah, Frankenstein. Sorry. But yeah, continue.
Erica Dorr: Yeah. No, I think it’s just from the scientist side. I would argue that the imprecise stereotype is also true on the other side. You say engineer, and I think someone who drives a train. Okay, that’s what you think when you’re a child, but you grow up and then you think, oh yeah, engineers are the ones designing bridges.
Ross Kenyon: That trains go over. Yes, that’s true.
Samara Vantil: To my family I’m a calculator.
Ross Kenyon: Calculator. Okay. Please continue though, Erica.
Erica Dorr: Yeah. I think these stereotypes just come from people who are not familiar with the field needing to make a simple mental model of what does a scientist look like, what does an engineer look like, what do they do? I would argue that most people who are scientists, which if we’re defining that as people who have PhDs, which is a decent shorthand for how we could define a scientist, most people who have PhDs are not actively working in academia. I don’t know how many of the scientists out there meet this stereotypical image of people in theoretical, fundamental research trying to understand knowledge that’s really upstream from a solution, in the long chain of research from making a first discovery, getting an idea, testing it all the way down to a solution. We probably think of scientists as only being the ones who are really far upstream in that chain of events, and that is reserved for something I would call more the fundamental scientists. And there’s a huge category of applied scientists, and then the spectrum gets a little blurry from there as science merges with engineering and it gets deployed.
Ross Kenyon: What do you think, Miss Calculator? You agree with that? And what are you working with as an engineer too? I imagine you have your own stereotypes. People probably think you’re a boring nerd. And a different kind of boring nerd too.
Samara Vantil: Yeah.
Ross Kenyon: You’re not. But they probably think that.
Samara Vantil: No, yeah, exactly. I think that people see engineers as highly logical, very detail-oriented people with great math skills, but often not really good at creative skills or at interpersonal skills. Maybe someone who can build a bridge, who can repair anything, but who would struggle to communicate. I think that this is the biggest stereotype that we could see. But in reality, engineers are so diverse and can work with teams, clients, managing projects, which is what we do at Rainbow in the operations team, and use our creativity as much as our logic to communicate with people in a way that actually makes things easier and allows us to implement the knowledge we have to solve real problems.
Ross Kenyon: I imagine a fair amount of innovation comes out of engineering as well. Taking an idea that is a bit more abstract and then trying to build it in the real world. I don’t know how you could possibly measure the amount of innovation that takes place at which layer, but I imagine a huge amount gets learned just from trying to operationalize literally anything. Engineering must be driving a lot of the learning as much as the science too.
Samara Vantil: Yeah, exactly. It could be a huge innovation, but it could also make all the difference in a very, very small thing. A very detailed thing is knowing when to ask the right question, having a very good picture of the overall processes but also understanding each and every piece of a flow in detail. And asking the right question to make things work.
Erica Dorr: And I would add maybe that definitely engineers through their iteration and on-the-ground experience come to new innovations and solutions. But maybe the science role would be the only one, I hesitate to say only because we’re going to think of exceptions, but probably the only one that would come up with new, deeper understanding of why those innovations work, maybe. And so the engineers will try things, figure out what works, what doesn’t, but maybe not discover the underlying mechanisms of what is making this work or not.
Ross Kenyon: Yeah, I was thinking that maybe one incorrect but useful model to understand the difference here is Thomas Kuhn’s The Structure of Scientific Revolutions, where he talks about the difference between people who make paradigm-reorienting kinds of discoveries, something like Newtonian physics going into relativity. It doesn’t happen that often, but when you do it totally reorganizes the systems of knowledge that exist. But most science also takes place at what he calls normal science, which is just the small stepwise improvements of what we know within that paradigm. And if you wanted to apply it to science versus engineering, you would say something like science is working at the paradigm-defining level, and engineering is normal science. That’s too fine a point to put on it though, because they are informing each other and plenty of science takes place much below that. Erica, have you redefined the system of world knowledge that exists in your scientific work? My guess is you probably did more work on the normal science level, because there’s not that many people who reinvent the world.
Erica Dorr: I wish I made a paradigm-shifting discovery. No way. No.
Ross Kenyon: There’d be no Nobel probably if you did.
Erica Dorr: No, I made a little inch or centimeter forward in my field of study.
Ross Kenyon: Yeah, which is totally important work. It’s hard to even say that without it sounding pejorative. A little bit inherently hierarchical where you’re like, well, there’s paradigm science and then there’s normal science, and something about that just sounds a little inherently insulting. I don’t mean it that way. The world is built of marginal improvements. That is often what is making our world work. Has your world been improved that much by general and special relativity being discovered? My guess is maybe in some ways. Maybe cool, quantum computing is now a thing. Our nature of time, maybe the movie Contact makes a little bit more sense. But besides that, I think a lot of the smaller incremental changes, eking out an extra 4% efficiency on a motherboard, those kinds of changes are really driving a lot of world innovation. Disagree if you want.
Erica Dorr: No, that’s totally true. And I think a lot of scientists are probably happy with their small contribution to incremental change because small contributions are modest and safe. And we would be kind of skeptical or critical of big changes. I mean, as they happen in real time, it’s pretty rare that you’re witnessing such a paradigm change, but as they happen, the scientists will be like the last ones to accept it maybe, because we want to be so critical and hesitant of such changes. So no, I’m cool with it. Can I speak for scientists when I say we’re cool with incremental small change?
Ross Kenyon: You may speak for the entire class of people. I did a show, gosh, it’s been more than a year ago, on philosophy of science and we read that book and then we read a Feyerabend book about anarchistic epistemology in science. And scientists do resist changes in paradigm, actually quite vociferously. They do not want them because they’ve built their whole lives oftentimes working within a paradigm that if you upend it, it’s like, cool, that knowledge is now irrelevant and outmoded. One of the examples that I remember so strongly is that the Ptolemaic system of how the solar system operates allowed people to make fairly accurate predictions of planetary movements except for when things would go into retrograde for a small period of time. But with that one exception, it still worked pretty well. So obviously geocentrism is still true. There’s a couple things that are weird with the retrograde cycling, but besides that it’s fine. And it took a long time for that to actually change. So it’s funny to think of scientists as people who are guided purely by the pursuit of knowledge who also face their own egoic personal resistance to change. It doesn’t even work like that. Scientists are also bought into a mental map of the world that when you challenge it, they react probably as strongly as anyone else when you challenge their model of the world.
Erica Dorr: Yeah. And if you’re an expert in that domain and it’s challenged, then it’s fair. You’re probably one of the people who are best positioned to challenge such paradigm shifts, and we need people to hold their ground and challenge such large changes. But it probably makes it even more difficult to come around to the changes and accept them when you have such deep knowledge in the other way or the conventional way.
Ross Kenyon: I’m trying to think if something like this even happens in engineering to the same extent. One example I can think of is maybe how concrete changed architecture. That seems like more of an engineering kind of example where you’re like, cool, we can do rounded shapes now. It doesn’t have to be rectilinear everything. And there were people who resisted that and fought back. You think back to something like John Ruskin and the Arts and Crafts movement being like, no on concrete, we like natural materials. We’re not going to do it this way. But are there examples of big paradigm-challenging, people-resistant-to-change ideas within engineering that I just don’t have the expertise to know about?
Samara Vantil: Well, I don’t think I can say something straightforward like that, but I’d say that engineers are usually so focused on the logic of things and the math of how things work that they’re kind of hard to make them change their ideas, I’d say. Because it’s a more objective way of seeing things in general, like the stereotype of an engineer. But at the end of the day, as you mentioned, like changing the shape of buildings, it’s like, okay, that’s the problem. We need to solve that problem. How will we solve it? At the end of the day it ends like that. I’d say engineering is less of how science would work, because things are more fixed. We can work with improvements. Yeah, there is a new material, how are we going to use that, how are we implementing it? But then it’s more, as we say in French, like it’s more in a box.
Ross Kenyon: I think of the personality types or professions that I mix with the least successfully, I think engineering is the most. Everything I just said, I think there are many engineers that would be like, who gives a crap about everything you just said? Force equals mass times acceleration. I need to calculate whether this pillar in a bridge can support the weight distributed over the top of it. Why are you talking about epistemology? What does this have to do with me? Is that a stereotype? Is that even a true thing? Are engineers wanting to hang out on that freaky theoretical level?
Samara Vantil: I think it can be both, but it would be more like a stereotype, I think. Because we, as I said, engineers have such a variety of backgrounds and personalities. We could use our creativity to change things as well. Like, why are we going to make a square shape foundation where we can make it circular?
Ross Kenyon: Okay. I’m going to summarize a little bit. Much of what you’re saying sounds like your training and your professions are closer than they are distant, and many of these skills overlap and much of the training does, and they’re often quite complementary. Is there a bigger distance between engineering and science than between commercial? I feel like maybe the gap is more on that side and you two are maybe two peas in a pod. And I’ve certainly been in cases where I’ve seen product teams, science teams come up with a way of doing things that were very logical and would likely work, but also commercial is like, who wants this thing? This is engineered for a customer which does not exist. Why have we done it this way? This hurts the unit cost. You added this much precision to it, but it also raises the cost of verifying this by some amount that hurts the margin. This does not hit the targets correctly. I’ve seen conflict on that direction too. Is the gap there maybe bigger than it is between each of your two professions?
Samara Vantil: I would say that this gap might be bigger between science and commercial than between operations and commercial.
Ross Kenyon: Oh, okay. Tell me.
Samara Vantil: I’d say that we, working in the operations team, we are at the center of everything that happens at the company, at Rainbow. So we take everything that science made and we apply it, but we are also in constant touch with the commercial team to deliver the credits, to deliver the projects, and in constant contact with them to understand their needs. Like if there is an offtake agreement and then we need to go faster within a given project, and so on. And because of that, and because we deal a lot with project developers, we understand their challenges. And when, for instance, it is way better to have as many laboratory tests for a given biochar production batch as we can to improve the certainty of it. But at the end of the day, it increases costs for the developer and I know that. So then if science defines a given requirement to provide a certain amount of samples, of lab samples, we would know that this wouldn’t work. Because it would be more expensive on the project developer side. So then we would go back to science and say, hey, this will not work because it costs like at least 600 euros to send a sample from Africa to Europe, to a lab. So then we need to go back. We need to change this. So I think that we have more of this commercial mindset as well. Erica, what do you think?
Ross Kenyon: Wait, I want the counterattack here, Erica.
Erica Dorr: Oh, Samara and I are usually aligned, unfortunately for you. But yeah. I think Samara changed my mind. I was going to say that no, of course, we’re so aligned. We’re much closer between science and engineering than with commercial. That’s why we’re all lumped together in STEM. Which is a huge and useful category. We clearly share some deep foundations there, which is just technical topics, I guess. So yeah, we definitely in our discipline and approach come from roots that are more similar. But then maybe you throw us into the real world and we start to diverge a bit. And our engineers who are more close to the field, close to the real-world solutions, are more sympathetic to the same concerns that the commercial team would have. And on the science side, from my perspective of methodology development, on the one hand I kind of try to keep a bit of distance from commercial. I don’t want to be too influenced by them. I don’t want to hear them talking about, ooh, we could sign this project if only we loosen this requirement. I don’t want to know that, and we don’t have those conversations. So in the operation of a standard, science definitely is a bit more distant from commercial. And maybe engineering, operations, Samara, is our go-between. When we really need to get down to figuring out, is this requirement really not usable, for example. I mean, we try to investigate that upfront when we are developing a methodology. We have lots of interviews with project developers asking them what kind of requirements sound feasible or not. But we maybe can’t anticipate everything. We discover a lot of things through iterations, through project certification as Samara works on. So we yeah, definitely interface then with operations and engineers to get a better idea of what is feasible or what is maybe desirable, rather than getting that information from commercial.
Ross Kenyon: Samara, is that pretty much how you see it? Are you a little bit of a bridge while trying to protect science’s separation?
Samara Vantil: I think we are really in between. We are literally the bridge that bridges this entire gap. But it’s not because we have a shorter gap with the commercial side that we leave the science behind, if you know what I mean. We are always trying to keep this balance between keeping the scientific rigor while being agile in whatever we do.
Ross Kenyon: Seems like a good balance. And commercial has its own stereotype here too, and I’ve seen it be both true and false. The stereotype of commercial that’s true is having a very aggressive salesperson who will get off a call and then go into leadership or the product team and be like, hey, I just got off a call, I told them that we could do this feature or make this kind of credit, when can we do that? And the engineering science product team is like, yeah, that’s not what we do. Also, that violates several things that we just do not do. And why did you tell this person that you were going to do it? And they say, well, I could make the sale in the room, I was going to get us to the next thing. And you’re way far out ahead of your skis on either scientific and/or ethical grounds. I’ve seen that be true, by the way. That is not just a stereotype. And I guess that’s the thing about stereotypes is sometimes there’s enough truth in there where they can be useful models. But some of the best salespeople I’ve seen have also been extremely talented listeners, very patient, very good at understanding people, making sure they get what they want. It isn’t just smile and dial and you’re trying to force a product down their throat that they don’t want, or lying or just trying to get something done even though it doesn’t make any sense.
Erica Dorr: And I think we could also maybe flip it the other way around of how would commercial leverage us to make decisions. For example, sometimes we’re talking about kind of strategy decisions. Where should we go? What’s the market like, what do we see? We get the privilege of talking to lots of different people in the market. So we have these discussions sometimes, kind of big picture, where to go. And the science in me, the scientist in me says, well, why are we just talking about it? Let’s go run some surveys, let’s get some data, let’s make a data-driven decision. Which is not necessarily how commercial or strategy works. So there’s probably friction going in both ways.
Samara Vantil: Yeah, no, I was just going to complement that. Usually the commercial side, they want things very much fast. Like hey, we just have this project within this given methodology, when is it going to be ready? Erica, please let us know, we need it for yesterday. And yeah, that’s tough to balance sometimes.
Ross Kenyon: Without a doubt it is. I think the example especially that you gave, Erica, is how it should work at a well-functioning team. Because all of these ways of seeing the world and interacting with it have their things that they’re superpowered at and their blind spots. And you need to have some amount of teamwork here to balance between the best and the worst parts of your professional abilities. And I think when teams are good at that, like for instance, I work primarily on the commercial side: commercial strategy, storytelling, product marketing and marketing. The bad version of marketing is the smart people of the company build something and then the marketers figure out how to tell it to people. That’s literally the worst way to use a marketer. That’s the lowest-level way to use a marketer. One of the better ways to use a marketer is to have them at the product level, embedded, and figure out what are features that we could create or strategic changes we could make to the company that would make what we’re doing much more marketable, much more attractive naturally for organic storytelling, that would make the product more saleable and that would improve our margins on the product. That being said, I can even think of a counter example of this where there was a blog post from Charm that came out a few months ago. Peter Reinhardt’s thing is often like, the best way to improve is to cut scope and to cut the amount of things that you’re doing. They’re always trying to simplify at Charm. That’s a big thing that he’s been on the thought leadership rounds for a long time on. I think it’s really smart advice because people tend to put a hat on a hat, as the expression goes, and keep adding stuff and it doesn’t necessarily help as much as cutting can. I was talking to someone else who works in this space and they were like, I hated this blog post. All they said is that they’re going to sample less. Why is that cool cutting of scope? What if we needed that? Why is it just a race to the bottom? And so who’s right? The person that’s worried that this is a race to the bottom, or the fact that we need to operationalize lower-cost ways of getting to a similar amount of quality? And I’m sure that there are good reasons for Charm to think this extra sample doesn’t help as much as it costs and therefore there are good reasons to cut. But also you have to know on a macro level, there is a really strong race to the bottom dynamic within carbon markets and carbon crediting that we are all at risk of. Cutting good science and good quality because it makes the commercial case better. And by doing that, you’re creating a very large risk at some point that there’s a bank run and things collapse because everyone has done this to such an extent that eventually the whole house of cards comes down. And how do you know which of those it is? Holy crap. How are you going to answer that? I don’t even know if there’s a question in there. There’s like 16 observations, so good luck to each of you.
Erica Dorr: Oh man. Small observation. I hope that they made such a decision with a data-driven approach looking at—
Ross Kenyon: I have no reason to think that they didn’t, by the way.
Erica Dorr: Like looking at the diminishing returns of each sample. I mean, that’s a valid conversation that I wish we could be more open about. When for me, setting requirements, writing a methodology, I often ask project developers as we’re developing the requirements, how much does this sample cost? What is your break-even, what margin do you really have?
Ross Kenyon: Is that really your job though, Erica? Is that really your place to do that?
Erica Dorr: Well, I’m not going to lie, I don’t always get great answers, and so I don’t always rigorously account for that in the methodology. But that’s what I’m saying. I wish that’s something that we could take an honest look at and be more transparent about. It’s difficult because it gets into the project developer’s operational data, their business plan, cost structure. Maybe they don’t even know at that point what is the cost of all their sampling. But that would be the ideal way, in my opinion, to balance on the one side what the scientific evidence and underlying knowledge of the topic says we need, which usually is a very, very high bar. And oftentimes we can implement that in projects, in MRV. But sometimes we really struggle with that. For example, enhanced rock weathering, where we’re having a tough time figuring out what exactly is going to work, what’s going to be commercially viable, how much can we ask for when you’re looking at massive deployments across thousands of hectares. How many samples do you need? We always talk about it’s a balance of the scientific rigor and the operational feasibility. And I wish that when talking about operational feasibility, we could have a more data-driven approach to these diminishing returns on, for example, taking more samples. Cost-benefit of reducing uncertainty versus how much it costs to get those extra samples.
Samara Vantil: Yeah. But analyzing the uncertainty is also something that you do when developing the methodology. So it’s something that you kind of already know a little bit beforehand, how this will impact and how we can give up on a certain given precise data point based on the returns that it will provide us based on this uncertainty. So I think this is actually very useful, something that science does that’s very useful on the operational side as well. And when we certify our first projects, for instance, is also when we can refine the type of information that they can provide us. If we can ask for more, if we should ask for less because it doesn’t really make much difference at the end of the day.
Ross Kenyon: Yes. I sided more with Charm after that happened. I thought it was a really courageous thing to say, because one of the things I think we’re all sick of hearing in carbon removal is how important quality is. And everyone’s saying it all the time. You’re like, table stakes. Yeah, and okay, fine. But not to keep referencing podcasts that I’ve made in the past. Everyone should go listen to every podcast I’ve ever done, so I never have to say this ever again. But there’s a show I did last year about how an economist friend, like a decade ago, told me that the optimal number of travel deaths is non-zero. We could all drive cars that go 10 miles an hour, but we’ve all collectively decided that that is just not worth it. We’re willing to accept some amount of risk to get where we’re going much more quickly. And carbon removal has a hard time grappling with this because what actually is the correct line? There’s no non-arbitrary line. It’s going to be mapped on a curve somewhere of, this amount of risk, this amount of trust in this credit is worth this price. And you can keep adding, you can go to one mile an hour if you want, and basically nothing bad is ever going to happen. But is it worth the price that you pay for it? And I think with quality, people forget how much quality costs, and at some point the curve starts pointing much more vertically and you’re like, cool, for an extra unit of quality it costs you 20 units of economics, as opposed to getting you to the 80/20 in the first place. Might be good enough, or might do a lot of the work.
Erica Dorr: And sometimes it’s probably where it’s just taking a cut. If you’re talking about the last percent of accuracy, if it results in underestimating credits by taking a conservative cut, I would say probably a lot of project developers would rather take that route. And I would feel okay with it to a certain extent, as long as we’re not over-issuing credits. Maybe a bit of an engineering take on it.
Ross Kenyon: Precisely. I’m leaving space to hear more of that too. Yeah, tell me.
Samara Vantil: Yeah, I totally agree. Sometimes, and it actually happens, not quite often, but sometimes if the developer doesn’t have a precise data point that we really need and we need to make a decision, it’s either you get certified using a very conservative assumption, or you don’t get certified at all. We will probably go for the first option, where we define what is the best assumption to take, analyze the uncertainties of it, apply a discount factor for instance, and probably underestimate the amount of credits that they will get at the end of the day. So it will cost them money anyway, but it will probably cost less.
Erica Dorr: Yeah. It’s hard because in carbon removal and in carbon markets, I feel like we’re balancing two extreme and really valuable constraints. On the one hand, the climate urgency and the fact that we all really want to get things going and figure this out and start tackling these existential issues. But on the other side, we’re constrained by trust and scandal issues in carbon markets. And it’s hard to thread the needle. I don’t know if you could call it threading the needle when it’s two such far-away extremes. But how to balance both of those existential threats to carbon removal. If we go too far on the move fast and take conservative discounts and just make it work, then we risk shooting ourselves in the foot and at least risk weakening carbon markets as such a promising way to finance and grow carbon removal. Which nobody wants that. So it’s really hard to balance between those two constraints.
Ross Kenyon: That’s what I was going to ask about. How do you know you’re doing this for the right reasons? Because there’s also competitive pressures where project developers might shop their projects around and be like, who’s going to give me the most credits? Who’s going to be the easiest to work with? And that doesn’t necessarily always lead to higher quality. There’s a tendency to be like, oh, well this expensive kind of annoying process, a different registry is not making them do. So they’re going to go over there. And that is an incentive that one doesn’t have to respond to, but someone might, given enough time, given enough registries. Someone might offer them the exact kind of package that they’re looking for. And how do you make sure you’re doing it for a reason of, oh, this last marginal test we’re asking for only improves accuracy a very small amount, and it costs way too much and therefore isn’t worth it? And how do you know you’re not doing it because, oh, we’re going to lose this deal if we don’t do something that’s going to harm quality or increase uncertainty by a lot?
Erica Dorr: That’s a hard question. That’s like the ultimate question. Decisions on requirements. Where do we want to place our level of rigor? Again, we talk so much about rigor in carbon markets, quality. If we were being the most rigorous, then nothing would ever happen. So we all take into account reality in most of these requirements and are not 100% the most rigorous. So it’s a daily struggle deciding where to place it. It’s informed by, for example, what is the sensitivity of the outcome to this decision. Which is just a way of saying how important is it. Some things, if we’re adding this requirement, it would make a very small marginal change, then it probably could be excluded. We then weigh that with how difficult is it to meet this requirement or provide this extra data point or provide this proof. And I often see it as a quadrant of how important the thing is, how sensitive is the outcome to it, and also how difficult is it to obtain more information. So if it’s low sensitivity but low difficulty, easy to get, then we’ll say yeah, sure, okay, we can require that or we can recommend it, but it doesn’t really matter too much either way. The worst case scenario is the things that are really sensitive, really important, but also really hard to get. And that’s where we spend most of the uncomfortable time making the tough decisions, where we probably just go talk to more people and get more information and continue working through it.
Samara Vantil: Yeah, to what Erica mentioned, I would also add, what are the risks of the choice that we are making as well. And one thing that I really like about how we do things is that we are not really, of course sales team please don’t hear me saying that, but we are not really concerned about saying, oh, this project is not really eligible anymore, if we keep the highest bar that we can. One example that I could give is, coming back to the leakage again. We made this leakage assessment where we decided, based on the research that we made, that a discount factor should be applied to that project. So then they would have less credits. They would not be happy. And then we were like, okay, are we actually going to do this? But we know it’s the right thing to do. We know it’s what the methodology says. We know that this will increase by a lot the quality of the credits. So okay, let’s do this. Even if we knew that the project would not be happy, it was also good for them because their credits would meet a higher bar.
Ross Kenyon: Samara and Erica, thanks so much for being on. Really respect how transparent you are about showing your work and your thinking around these. These are questions that everyone in the space faces. These are tensions that exist in basically every company. And all of these personality types and professions probably also exist at all these companies. So I’m hoping this is a fairly universal show and people can catch glimpses of themselves and their own companies in doing this. But I respect that you were able to engage with some of these ideas so openly, because they are, what did you say, Erica? The ultimate question? Yeah. They’re right to the fundament. That is the work, I guess.
Erica Dorr: Yeah. Thank you so much for having us. Super interesting stuff.
Samara Vantil: Thank you so much, Ross. Very nice.