Tesla and Vicor discuss the crucial trade-offs made in automotive power systems design
Podcast with Paul Yeaman, Director of Applications Engineering at Vicor, and Tesla engineer Milovan Kovacevic
As the electrification of automobiles increases, power system design has never been more scrutinized. An electric vehicle can use up to 20X more electrical power than a traditional internal combustion engine vehicle. That power increase requires a proportional increase in the size and weight of the power electronics, which limits vehicle range. Vicor's compact, power-dense modules enable downsizing the power electronics by 60-70% compared to competing approaches, saving space and weight in the power delivery network.
In this Moore's Lobby podcast, host Dave Finch dives deep into power system design considerations with Paul Yeaman, Director of Applications Engineering at Vicor and with Tesla engineer Milovan Kovacevic about the challenges he has faced with automotive electrification.
DAVE FINCH: Welcome back to The Lobby. If successful power electronics design for automotive applications comes down to power quality, thermal performance, noise characteristics, and size, then it might be worthwhile to look at the pros and cons of power modules. Paul Yeaman, director of applications engineering at Vicor, sheds some light on these highly efficient regulatory-compliant solutions for high- and low-voltage automobile electronics.
When I start thinking about – let’s say we’re starting from a very high-voltage battery – we’re going to down-convert to 48 volts, then a second stage down to 12 volts, and then on and on. But immediately I start thinking about, boy, every time you do that, you’re introducing losses. For electric vehicle applications, I start thinking about a shorter range on the vehicle. I also start thinking about additional weight. Something I have noticed is that your switching converters are achieving upwards of 96%, 97% efficiency.
PAUL YEAMAN: Yeah. First of all, one of the reasons behind going with a higher voltage – 48 volts as opposed to 12 volts – is to limit distribution losses, which are the resistive losses that come about by pumping high currents through an automobile chassis or any kind of a system.
The tradeoff going to a higher voltage is you now essentially need to down-convert to that voltage, so the important thing is that your conversion losses need to be less than the losses that you would incur by distributing that higher current at that lower voltage. That represents, essentially, the tradeoff between do I pump 12 volts throughout the entire system, or do I store energy at 48 volts and down-convert to 12 volts? That represents kind of where does it make sense to do one versus the other?
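The tradeoff Paul describes can be sketched numerically. The load power, harness resistance, and converter efficiency below are illustrative assumptions, not figures from the conversation; the point is that quadrupling the bus voltage cuts distribution current by 4x and I²R loss by 16x, which usually more than pays for the conversion loss.

```python
# Sketch: compare distributing power at 12 V vs. storing at 48 V and
# down-converting. All numbers are illustrative assumptions.

def distribution_loss(power_w, bus_voltage_v, harness_resistance_ohm):
    """I^2 * R loss in the wiring for a given delivered power and bus voltage."""
    current_a = power_w / bus_voltage_v
    return current_a ** 2 * harness_resistance_ohm

P_LOAD = 2000.0      # W delivered to the loads (assumed)
R_HARNESS = 0.01     # ohms of cable/chassis resistance (assumed)
ETA_CONV = 0.97      # efficiency of the 48 V -> 12 V converter (assumed)

loss_12v = distribution_loss(P_LOAD, 12.0, R_HARNESS)

# At 48 V we pay a conversion loss but carry 4x less current in the harness.
conversion_loss = P_LOAD * (1.0 / ETA_CONV - 1.0)
loss_48v = distribution_loss(P_LOAD, 48.0, R_HARNESS) + conversion_loss

print(f"12 V distribution loss: {loss_12v:.1f} W")
print(f"48 V distribution + conversion loss: {loss_48v:.1f} W")
```

With these assumed numbers the 48 V approach wins by a wide margin; the crossover point depends on how much current the system carries and how efficient the converter is, which is exactly the tradeoff being described.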
To talk a little bit more about the 97-98% efficient converter, one of the ways to make that tradeoff more favorable toward converting, as opposed to distributing at the lower voltage, is to push your technology to higher and higher efficiency. Now, in some respects, there are some things about power conversion technology that are never going to change. One of those things is copper. There’s no such thing as more efficient copper.
There are some things that are changing quite rapidly. And in that category, I would put MOSFETs. If you take, for example, silicon MOSFETs and you look at some subsequent generations of silicon MOSFETs, they continue to get lower Rds (on), lower gate charge, which means higher figure of merit in terms of being able to switch them more efficiently.
If you look at emerging technologies, like silicon carbide, again, you have a step change in terms of performance with some of these emerging wide-bandgap devices, which I’m sure you’ve heard a lot about, and I’m sure you’ve even had podcast guests who talk a lot about the advantages of wide-bandgap technology. So, on the MOSFET side of things, there is a lot of innovation and there’s a lot of development.
Where Vicor fits into that is – where Vicor looks at innovating is kind of at the topology level. In other words, we take advantage of emerging and better MOSFETs with each subsequent generation. But what we also do is we look at how do we more efficiently convert or regulate the electricity from one voltage to another? We’re looking at things like control for power topologies, things like zero-voltage switching and zero-current switching, which are control techniques which allow you to turn on and off a MOSFET at a point where you minimize the switching losses.
Magnetics kind of have a little bit of the copper and the MOSFET dilemma. There are certain things about magnetics that aren’t going to change. They’re inherent to the material itself. But at the same time, there’s a lot of room for innovating in terms of materials and coming up with better magnetic materials that represent lower losses or better magnetic materials that can provide lower losses at higher switching frequencies and things like that.
DAVE FINCH: Yeah, and it’s interesting you mention higher switching frequencies, because it also makes me wonder how we’re manipulating our modulation approaches, for example – the kind of feedback from the circuit that we’re collecting to make very timely reactions and adjustments. Do these sorts of things play in, or are they negligible when you consider just the raw physics of the components?
PAUL YEAMAN: Yeah, that’s actually a great question. The thing about high switching frequency is high switching frequency actually goes all the way back to our copper dilemma, right? Again, we’re not going to make copper more efficient. The way that we can essentially address that challenge is can we make a power converter that uses less copper to move electricity from input to output? Can we make a converter smaller and more efficient?
Because if you think about, for example, a winding around an inductor or a transformer, the length of that winding is copper. The length of that winding is directly in proportion to the resistance that that will have. And the resistance is essentially a physical element that we can’t change. We can’t change the resistivity of copper. But if we can make that inductor or that transformer smaller, we can make that length – the path length of that winding – shorter. And by doing so, we can make a system that has less resistive loss, even though we’re still using the same material to carry the electricity.
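The geometry argument above can be made concrete with the resistance formula R = ρL/A: copper's resistivity ρ is fixed, so the only lever left is shortening the winding. The wire gauge and lengths below are illustrative assumptions.

```python
# Sketch: shrinking a magnetic shortens its winding, which lowers resistance
# even though the copper itself cannot be made "more efficient".
RHO_COPPER = 1.68e-8  # ohm-meters, resistivity of copper at 20 C

def winding_resistance(length_m, cross_section_m2):
    """R = rho * L / A for a winding of given path length and wire gauge."""
    return RHO_COPPER * length_m / cross_section_m2

area = 1e-6  # 1 mm^2 wire cross-section (assumed)
r_large = winding_resistance(0.50, area)  # 50 cm winding in a big transformer
r_small = winding_resistance(0.25, area)  # 25 cm winding in a smaller one

# Half the path length -> half the resistance -> half the I^2*R loss
# at the same current, with the same copper material.
print(r_large, r_small)
```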
High switching frequency is key to being able to make a smaller converter, because if you switch at a higher switching frequency, your energy storage elements like your transformers and inductors and anything that holds kind of inductive energy can shrink – and capacitors factor in there as well. Higher switching frequency and being able to shrink things means being able to make them smaller, which means being able to use less copper and to have less resistive losses.
The tradeoff to higher switching frequency is that the higher that you switch in frequency, the more switching losses you incur. By turning on and off your MOSFETs at faster rates, any gains that you have in terms of smaller parts can quickly be given back if you’re pumping a lot of switching losses into the system.
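A toy loss model shows the shape of this tradeoff. Here switching loss grows linearly with frequency (energy lost per cycle times cycles per second), while conduction loss is modeled as falling with frequency as the magnetics, and hence the windings, shrink. The constants are illustrative assumptions, and the 1/f conduction term is a rough proxy, not a rigorous scaling law.

```python
# Sketch of the frequency tradeoff: hard-switching loss rises with frequency,
# copper loss falls as smaller magnetics shorten the windings. Eliminating the
# switching term (as zero-voltage switching aims to do) removes the penalty
# for going faster. All constants are illustrative assumptions.

E_SW = 20e-6        # J lost per hard-switched cycle (assumed)
K_COPPER = 2.0e5    # rough proxy: conduction loss ~ K / f as magnetics shrink

def total_loss(freq_hz, zvs=False):
    switching = 0.0 if zvs else E_SW * freq_hz
    conduction = K_COPPER / freq_hz
    return switching + conduction

for f in (100e3, 500e3, 1e6):
    print(f"{f/1e3:.0f} kHz: hard={total_loss(f):.1f} W, "
          f"soft={total_loss(f, zvs=True):.1f} W")
```

Under hard switching, raising frequency eventually makes things worse; with the switching term suppressed, higher frequency keeps paying off, which is the motivation for the soft-switching control techniques discussed next.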
That’s why Vicor looks at control technologies and topologies that promote zero-voltage and zero-current switching, because if you can limit or eliminate switching losses in your power converter, now you’ve solved this tradeoff between switching frequency and switching losses. Now you’re able to make things much smaller and get the benefits of efficiency. That’s a perfect example of the kind of tradeoffs that you face in the power electronics world.
DAVE FINCH: Absolutely. With different frequencies, do you find that noise either is exacerbated or minimized at certain frequency ranges, or is it just switching noise that exists irrespective of frequency?
PAUL YEAMAN: So certain types of noise – for example, another term for zero-voltage and zero-current switching is soft switching, right? I guess part of the physics behind that is that if you’re hard switching, the way that you keep your switching losses as low as possible is you essentially turn your switch on and off as quickly as possible.
In other words, if you think about a MOSFET as a variable resistance, when the MOSFET is fully on, the resistance is extremely low, you have very low loss. When the MOSFET is off, there’s no current flowing, resistance is very high. There’s zero loss there. It’s that in-between time, where the MOSFET is transitioning from one resistance to another, that most of your losses occur. So, if you’re hard switching a device, if you’re not zero-voltage or zero-current switching, you want to transition through that region as quickly as possible.
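That "in-between time" loss has a classic back-of-envelope estimate: during each transition the device briefly carries current while supporting voltage, so the loss is roughly half of V times I times the total transition time, times the switching frequency. The device ratings and edge times below are illustrative assumptions.

```python
# Sketch: the standard hard-switching overlap-loss estimate,
# P_sw ~ 0.5 * V * I * (t_rise + t_fall) * f. Values are illustrative
# assumptions, not any specific device's datasheet numbers.

def hard_switching_loss(v_ds, i_d, t_rise_s, t_fall_s, freq_hz):
    """Approximate power lost in the voltage/current overlap of each edge."""
    return 0.5 * v_ds * i_d * (t_rise_s + t_fall_s) * freq_hz

# 48 V, 20 A device switched at 500 kHz (assumed):
p_slow_edges = hard_switching_loss(48, 20, 20e-9, 20e-9, 500e3)  # 20 ns edges
p_fast_edges = hard_switching_loss(48, 20, 5e-9, 5e-9, 500e3)    # 5 ns edges

# Faster edges cut the overlap loss, but the sharper dV/dt and dI/dt are
# exactly what couples noise into the rest of the system.
print(p_slow_edges, p_fast_edges)
```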
Anytime you take something electrical and you shoot the voltage or the current very rapidly from one point to another through inductance and parasitics within the circuit, you essentially shoot energy into other areas of the system. That’s essentially the genesis of electromagnetic interference or conducted noise.
By definition, if you’re hard switching something, you’re going to have more noise associated with it. When you’re soft switching something, first of all, if there’s zero volts across it, you don’t need to very quickly change the voltage or the currents, because essentially, at that point – at that zero-volt point – there’s no voltage, there’s no current that’s being forced through that variable resistance. That’s probably a slightly simplified way of looking at it, but I think it’s actually quite accurate.
So, by soft switching, what we’re really talking about is slow switching something. And when you slow switch something, you don’t have that parasitic spike of voltage or current that squirts into other areas of the system and causes noise.
That’s not to say that any soft-switch converter has absolutely no noise. I mean, there are other sources of noise. But if you take a hard-switch converter and a zero-voltage-switch converter and you look at the noise signature from the two converters, you’re going to see a big difference just by what results from soft switching versus hard switching.
DAVE FINCH: One thing you mentioned earlier in the conversation is power quality. What sorts of power quality issues are you solving for, whether at Vicor or just in general as a power management engineer?
PAUL YEAMAN: Yeah, so I think there are a number of different things that apply across a number of different industries. When I refer to power quality, I’m really referring to a few different things.
One of those things is EMI – how clean is the power? When your power converter is working, how much noise is coupled into your radio or how much noise is interfering with the Wi-Fi signal or things like that – you know, just really important aspects for functionality of the rest of the system.
But what I’m also referring to is basically the load’s ability to handle variations in voltage. Going back to our earlier conversation about where the line is drawn between converting down from 48 to 12 versus simply having your 12 volts distributed throughout the system: as you increase the amount of current that you’re distributing through a system, the power quality gets worse, so there are a lot of benefits to having a stiff voltage source. One of the focuses in power quality is being able to develop a system that provides a very stiff voltage source that can handle whatever kind of slew rate the loads put on it.
For example, just stepping away from automotive for a second and looking a little bit at the computing side of things, microprocessors require a voltage source that can handle tens of thousands of amperes change in microseconds. That kind of a slew rate requires an extremely low impedance at extremely high frequencies. The difference between having a voltage deviation of 70 millivolts to a load step versus 75 millivolts may be the difference between having a working processor and having the blue screen of death. Power quality is very important in those kinds of situations – that you can have a power source that can power whatever load or whatever transient the load might throw at it.
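The stiffness requirement can be expressed as a maximum source impedance: the allowed voltage deviation divided by the load current step. The step size below is an illustrative assumption, not a Tesla or Vicor specification, but it shows why processor rails end up needing impedances measured in micro-ohms.

```python
# Sketch: turning an allowed transient deviation into a required source
# impedance, Z_max = dV_allowed / dI_step. Numbers are illustrative.

def max_source_impedance(dv_allowed_v, di_step_a):
    """Largest source impedance that keeps droop within dv_allowed_v."""
    return dv_allowed_v / di_step_a

# A 500 A load step (assumed) with only 70 mV of allowed droop:
z_max = max_source_impedance(0.070, 500.0)
print(f"Required source impedance: {z_max * 1e6:.0f} micro-ohms")
```

And because the step happens in microseconds, that impedance has to hold at very high frequencies, which is why decoupling networks and converter bandwidth matter as much as the DC regulation.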
DAVE FINCH: My follow-up question to that would be the thermal management that has to go along with being able to manage such a dynamic load. So, in the case of the modules that you’re producing, that’s again another great example of like, man, how do they squeeze that much performance into something so compact and lightweight and not have sort of the thermal issues that plague so many systems?
PAUL YEAMAN: Yeah, that’s a good question. So, I think there are probably two elements to that. One element goes back to size. The smaller you can make something, the closer you can put the source of the heat to a surface that can allow that heat to be conducted away from the device. So, think about a power supply that’s housed in a gigantic box, with the MOSFETs buried inside that box on a printed circuit board. You need to attach them to a heatsink, and then you need to have a fan that blows air across that heatsink to get the heat from those MOSFETs out. If you look at how we at Vicor have shrunk our power components to the smallest possible size, we now have fractions of a millimeter between the MOSFETs that are processing the power and the surface of our packages or our components.
The second one is, I guess you could say, motherhood and apple pie: if you make a more efficient power converter, there’s less heat that you have to remove in the first place. That’s the other, more straightforward way of solving a thermal problem – reducing your losses to begin with.
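The efficiency-to-heat relationship is worth writing down: for a given output power, the heat to be removed is P_out times (1/η - 1), so a few points of efficiency translate into a disproportionately large reduction in dissipation. The 2 kW figure below is an illustrative assumption.

```python
# Sketch: heat to remove as a function of converter efficiency,
# P_loss = P_out * (1/eta - 1). Output power is an illustrative assumption.

def heat_to_remove(p_out_w, efficiency):
    """Watts dissipated inside the converter for a given delivered power."""
    return p_out_w * (1.0 / efficiency - 1.0)

print(heat_to_remove(2000, 0.90))  # ~222 W to dissipate at 90% efficiency
print(heat_to_remove(2000, 0.97))  # ~62 W at 97% -- several times less heat
```

Going from 90% to 97% efficiency doesn't just save 7% of the power; it cuts the thermal problem by more than a factor of three.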
DAVE FINCH: Right. You know, it strikes me that the real strong case for going for something like a module seems like you no longer have to be somebody with a lifetime of expertise in order to get a very stiff, responsive, and very clean power source.
The second piece of that, though, is the fact that this thing has to get through certain certifications. It seems like that would be an advantage, too. If you’re a design engineer, you buy a module, and all of those lifetimes of design expertise have been kind of baked into the module, and it’s certified. (laughter) So it starts to look like a very attractive proposition.
PAUL YEAMAN: Yeah. I mean, the way I like to look at it is that we’re not trying to take away the job of the engineers at our customer. What we’re trying to do is solve – and very effectively solve – power challenges that they would otherwise be on the hook to solve on their own. Rather than reinventing the wheel, the customer can focus on the unique challenges in their system and not have to worry about the power conversion challenges that can be solved, essentially, at a global level and apply universally across systems.
And you’re absolutely right – an automotive power supply is going to have different standards that need to be met compared to a microprocessor power supply or an industrial robot power supply. Those standards are going to be different, and those standards are going to be industry-unique. But to the extent that all three of those industries require a voltage, a current, and a certain level of power quality, cleanliness, and stiffness, let’s solve those global challenges across the board and then allow the customer to focus on challenges that are unique to their system and their industry.
DAVE FINCH: Once again, a sincere thanks to our guests, Milovan Kovacevic and Paul Yeaman, who joined us this week from their home offices to make me a little smarter about power electronics.
Paul Yeaman has worked extensively with technology leaders across the industry to develop and implement leading power solutions for systems with the most demanding power requirements. Through frequent exposure to the power challenges posed by new technologies, Paul understands the broad trends in the power industry and is committed to ensuring that innovators can integrate power solutions to meet those needs. Paul has more than 20 years of design and applications engineering experience in the power electronics industry.
Paul Yeaman, Senior Director of Applications Engineering
This podcast was originally published by EDN.