The mkiv Supra Owners Club

Compression, Boost pressure and Det


Scooter


Generalising, but the higher the compression and the higher the boost, the more likely det is to occur, yeah?...

 

Hence an NA-to-NA-T conversion can only support (say) 0.35 bar (5 psi) before you seriously need to look at lowering the compression via the head gasket or pistons.

 

Let's assume it's safe to do the above. People always talk about how a single feels so different at 0.8 bar compared to the stockers at the same boost. I understand this, but what I'm curious about is whether a T88 at 0.35 bar is more of a det risk than a T57 at 0.35 bar. It can't just be the boost level on its own that's the potential problem, can it?

 

 

Also, if NAs can't cope with 0.6 bar, how come TTs designed to run at 0.8 bar are OK at 1.4 bar? Or do all 'built' singles have an even lower compression than stock TTs? (If they do, it's not frequently quoted, from what I've seen.) If a stock-compression engine is running a single, is it on the same ragged edge an NA supposedly is at 0.5-0.6 bar above its designed boost? (Or does a TT have more headroom, as at least it was designed from the outset to actually see boost?)

 

To summarise......

 

Should NA-T owners consider boost pressure AND turbo choice at any given compression?

 

Should TT guys going single look to decrease compression if running high boost? (Again, I say 'look to' because, from all the threads I see, compression ratio is not mentioned very often.)


Det is caused by a combination of factors, but it's basically uncontrolled ignition in the combustion chamber. Too high a compression ratio or too much boost can compress the charge to the point that it ignites spontaneously.

 

Likewise with poor engine cooling, poor charge cooling, and sharp edges in poorly designed or poorly fettled combustion chambers acting as localised hot spots. These can also cause the mixture to go bang before you want it to, more than likely in more than one place at once. It is these multiple ignition sources that give rise to the high pressures and stresses in the engine when the multiple flame fronts collide.

 

Oil contamination can also cause det. Not strictly through charge dilution, but oil in the fuel/air mixture acts as a barrier to the flame front, causing the combustion to stop and start rather than progressing smoothly from the spark plug outwards.

 

I'm not sure about actual figures but as I understand it a big single is more adiabatically efficient than the stock twins: i.e. it will heat up the charge less as it compresses it - thereby reducing the det risk. I believe this has been covered on here in the past in some detail. The TT engine will also have the lower geometric compression ratio, so a TT converted to big single should be the lowest det risk of all. An NA with a single will be a much higher det risk because of its higher geometric compression ratio and the stock TT will be somewhere in between.
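For anyone who wants to see the temperature side of that written down, here's a rough sketch of the standard compressor outlet temperature estimate. It isn't from this thread, and the two efficiency figures are invented placeholders rather than numbers for any real turbo; the only point is that the same boost from a less efficient compressor comes out hotter.

```python
# Rough sketch: compressor outlet temperature from boost and adiabatic
# (isentropic) efficiency, using ideal-gas relations. The efficiency
# values below are illustrative guesses, not measured figures.

GAMMA = 1.4  # ratio of specific heats for air

def compressor_outlet_temp_c(inlet_temp_c, boost_bar, efficiency, ambient_bar=1.0):
    """Estimate compressor outlet temperature in deg C."""
    t_in_k = inlet_temp_c + 273.15
    pressure_ratio = (ambient_bar + boost_bar) / ambient_bar
    ideal_rise_k = t_in_k * (pressure_ratio ** ((GAMMA - 1) / GAMMA) - 1)
    return t_in_k + ideal_rise_k / efficiency - 273.15

# Same 0.8 bar of boost on a 20 degC day, two hypothetical efficiencies:
for label, eff in [("less efficient compressor (guess: 60%)", 0.60),
                   ("more efficient compressor (guess: 75%)", 0.75)]:
    print(f"{label}: ~{compressor_outlet_temp_c(20, 0.8, eff):.0f} degC out of the turbo")
```

On those made-up numbers the difference is roughly 110 degC vs 90 degC before the intercooler, which is the sort of gap that moves you closer to or further from det at the same boost.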

 

I would think that the risk of det would be the most important thing to consider when going NA-T, so the most efficient turbo and the best charge cooling for the guesstimated mass airflow would be paramount. Lowering the geometric compression ratio using a thick HG might tend to lessen the effect of the squish lands in the combustion chamber, so the in-cylinder motion would not be as good and the fuel-air mixture not as even (or homogeneous, as it is properly called). How much of a real-world effect this might have on det is anyone's guess.

 

I would have thought that the TT guy's better option would be to run an EMU and pull spark if you want to run silly boost, but that's IanC territory :)


 

I'm not sure about actual figures but as I understand it a big single is more adiabatically efficient than the stock twins: i.e. it will heat up the charge less as it compresses it - thereby reducing the det risk. I believe this has been covered on here in the past in some detail.

 

I see, that seems reasonable. So increased efficiency allows higher boost for the same compression (other factors being equal)?

 

Does this mean, in my T88 vs T57 comparison, that if the T88 was more efficient it would be less risky (further from det) than the T57 (again, all other things being equal), despite potentially making a fair bit more BHP? I'm assuming a larger turbo is generally more adiabatically efficient?

 

I'm just thinking that if you had an NA-T that was always going to stay on stock compression and only run 0.35 bar of boost, then a large laggy turbo at low boost might be ideal, rather than the smaller T57/T60s that seem to come with the kits that are available?


Also, if NAs can't cope with 0.6 bar, how come TTs designed to run at 0.8 bar are OK at 1.4 bar? Or do all 'built' singles have an even lower compression than stock TTs? (If they do, it's not frequently quoted, from what I've seen.) If a stock-compression engine is running a single, is it on the same ragged edge an NA supposedly is at 0.5-0.6 bar above its designed boost? (Or does a TT have more headroom, as at least it was designed from the outset to actually see boost?)

 

Does it help to think in terms of absolute pressure instead? Then you'd have, say, 2.4 / 1.8 = 133% for a TT vs. 1.5 / 1 = 150% for an NA-T. In fact, the NA wouldn't be at 1 bar but at some amount of vacuum (so less than 1 bar), but even ignoring that, the NA-T sees the greater increase in pressure.
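Written out as a quick sketch (assuming roughly 1 bar ambient; this is only the arithmetic from the post above, nothing new added):

```python
# Minimal sketch of the absolute-pressure comparison above.
# The NA manifold would actually sit slightly below 1 bar off boost,
# which only makes the NA-T jump bigger.

def pressure_increase(stock_boost_bar, new_boost_bar, ambient_bar=1.0):
    """New absolute manifold pressure as a fraction of the stock one."""
    return (ambient_bar + new_boost_bar) / (ambient_bar + stock_boost_bar)

print(f"TT, 0.8 -> 1.4 bar boost: {pressure_increase(0.8, 1.4):.0%}")   # 2.4 / 1.8 = 133%
print(f"NA-T, 0 -> 0.5 bar boost: {pressure_increase(0.0, 0.5):.0%}")   # 1.5 / 1.0 = 150%
```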

 

Despite all I've read on this forum and elsewhere, I still can't fully get my head around the fact that big singles seem to give a lot more power for a given level of boost. I'm really not sure what I'm missing there - is adiabatic efficiency the only major factor in that?



Despite all I've read on this forum and elsewhere, I still can't fully get my head around the fact that big singles seem to give a lot more power for a given level of boost. I'm really not sure what I'm missing there - is adiabatic efficiency the only major factor in that?

 

Isn't a large part of it that there's more air being compressed (with a bigger turbo), i.e. a higher flow rate? In a stock sequential TT, boost below 3500 rpm on turbo 1 is (pretty much) the same as at 6000 rpm on both turbos; it just makes so much more power up there because there are two turbos pushing/flowing twice the air, albeit at the same pressure.


Pretty simple: a big laggy turbo pushes more air for less pressure and heats the air less in the process, so for any given intercooling along the way it has less work to do and is more efficient. Cooler charge + less pressure = less chance of det; just reverse this for smaller turbos.


A case in point is/was Smarty's TTC conversion. Advancing the timing on a sequential system below 4000 rpm, when one small turbo is caning away, is a hopeless task: detonation starts almost immediately with any advance. In TTC mode, with two turbos running, I could add 15 deg of timing. This is because the two turbos are each flowing half as much air as the one; speeds are down and compression is much more efficient, so charge temps are much lower. This staves off detonation quite nicely.

 

It's mostly about charge temps and heat in the cylinder head. You run 11:1 AFRs because the extra fuel gives you charge cooling, not because it gives more power. That's why water injection is good, and even nitrous, as it has a huge cooling effect.

 

As for the power outputs, I've got a recent case study on that front - we just swapped JB's SMIC from a buggered one to a new one, and with the same boost controller settings the overall pressure dropped by 0.1 bar. Same turbo speed, same amount of air sucked in, but different pressure. That means the temperature of the air coming out of the intercooler is much lower than it used to be, so the pressure drops. The upshot from this is: wind the boost controller up so it goes back to the original boost levels :) The intake air will still be cooler than it was, but more oxygen will get into the combustion chamber. Marvellous :thumbs: More power for the same pressure levels.
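For anyone who wants the ideal-gas version of that SMIC anecdote, here's a minimal sketch. The temperatures are invented purely for illustration (the post only says the pressure dropped by 0.1 bar); the point is just that at the same gauge pressure, a cooler charge carries more air.

```python
# Sketch of the 'same pressure, cooler charge = more oxygen' point above.
# Boost level and temperatures are hypothetical placeholders.

R_AIR = 287.05  # specific gas constant for air, J/(kg*K)

def charge_density(boost_bar, temp_c, ambient_bar=1.0):
    """Charge density in kg/m^3 from gauge boost pressure and charge temperature."""
    pressure_pa = (ambient_bar + boost_bar) * 1e5
    return pressure_pa / (R_AIR * (temp_c + 273.15))

hot_charge  = charge_density(boost_bar=1.0, temp_c=70)  # old, buggered intercooler (guess)
cool_charge = charge_density(boost_bar=1.0, temp_c=45)  # new SMIC, boost wound back up (guess)
print(f"{hot_charge:.2f} vs {cool_charge:.2f} kg/m^3: "
      f"about {100 * (cool_charge / hot_charge - 1):.0f}% more air at the same boost")
```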

 

Turbos have efficiency islands in those horribly complex compressor maps you see now and again. This efficiency is how much the turbo heats the air charge up when it outputs a certain boost pressure. I can't recall the maths and have no desire to, but that isn't relevant here anyway. You'll find the moderate single turbos reach their maximum efficiency at roughly 1.4 bar of boost, whereas the dinky stockers are probably at peak efficiency at 0.6 bar on their own, 0.8 bar in a pair.

 

By the time they get to 1.2 bar they are flowing more air, yes, but pro rata they are heating it up far more than at 0.8 bar. Just beyond this there is a tipping point where even higher boost pressure heats the air charge up so much that it counteracts the extra flow and actually starts lowering the amount of oxygen in the charge.
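A crude way to see that tipping point is to combine the compressor temperature rise with the ideal-gas density, sweeping boost while the efficiency falls away. The efficiency-vs-boost figures below are made-up placeholders, not a real compressor map, and no intercooler is modelled; the shape of the result is the only point.

```python
# Crude illustration of the tipping point: past the efficiency island,
# extra boost heats the charge enough to cancel out the extra pressure.
# The efficiency values are invented, not from any real compressor map.

GAMMA = 1.4     # ratio of specific heats for air
R_AIR = 287.05  # specific gas constant for air, J/(kg*K)

def charge_density(boost_bar, inlet_temp_c, efficiency, ambient_bar=1.0):
    """Density straight out of the compressor (no intercooler), ideal gas."""
    t_in_k = inlet_temp_c + 273.15
    pr = (ambient_bar + boost_bar) / ambient_bar
    t_out_k = t_in_k * (1 + (pr ** ((GAMMA - 1) / GAMMA) - 1) / efficiency)
    return (ambient_bar + boost_bar) * 1e5 / (R_AIR * t_out_k)

# Hypothetical small turbo being pushed past its island:
assumed_efficiency = {0.8: 0.70, 1.0: 0.62, 1.2: 0.52, 1.4: 0.42}

for boost, eff in assumed_efficiency.items():
    rho = charge_density(boost, inlet_temp_c=20, efficiency=eff)
    print(f"{boost:.1f} bar at {eff:.0%} efficiency -> {rho:.2f} kg/m^3 of charge")
```

On those assumed figures the charge density climbs from 0.8 bar to about 1.0-1.2 bar and then starts falling again by 1.4 bar, even though the gauge pressure keeps rising.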

 

So 1.2 bar on a T67 is close to peak efficiency, and it's a mahoosive turbo anyway, so the air charge is far denser than from a pair of stockers caning away at the same pressure output. Hence, more than 50% more power for the same boost pressure.

 

-Ian


By the time they get to 1.2 bar they are flowing more air, yes, but pro rata they are heating it up far more than at 0.8 bar. Just beyond this there is a tipping point where even higher boost pressure heats the air charge up so much that it counteracts the extra flow and actually starts lowering the amount of oxygen in the charge.

 

So 1.2 bar on a T67 is close to peak efficiency, and it's a mahoosive turbo anyway, so the air charge is far denser than from a pair of stockers caning away at the same pressure output. Hence, more than 50% more power for the same boost pressure.

 

:goodpost:

 

For that last example, what would the intake temps be for each case, roughly?

If the comparison was done at 0.8 bar, would the power difference be much less?


 

So 1.2 bar on a T67 is close to peak efficiency, and it's a mahoosive turbo anyway, so the air charge is far denser than from a pair of stockers caning away at the same pressure output. Hence, more than 50% more power for the same boost pressure.

 

-Ian

 

So, if I've taken in the information correctly: at a given boost pressure, larger turbos will heat the air less and flow more air (a larger volume compressed to the same pressure).

More and cooler air into the engine = more BHP and a lower chance of det.

 

So why do the stage 1 NA-T kits that are destined for stock-compression engines come with T57/61s? Wouldn't they make more power, and maybe even run a smidge more boost, with larger turbos?


From one of the best turbo sites I know.

 

In turbocharged engines there is a fine balancing act when it comes to making a lot of power on low octane fuel. In most cases, ignition timing must be retarded as the boost pressure rises above a critical point, and eventually a further point is reached where the engine simply loses power. If the timing was not retarded with increasing boost, destructive preignition or detonation would occur. Normal combustion is characterized by smooth, even burning of the fuel/air mixture. Detonation is characterized by rapid, uncontrolled temperature and pressure rises more closely akin to an explosion. Its effects are similar to taking a hammer to the top of your pistons.

 

Most engines make maximum power when peak cylinder pressures are obtained with the crankshaft around 15 degrees after TDC. Experimentation with increasing boost and decreasing timing basically alters where and how much force is produced on the crankshaft. Severely retarded timing causes high exhaust gas temperatures which can lead to preignition and exhaust valve and turbo damage.

 

We have a hypothetical engine: a 2.0L, 4-valve-per-cylinder, 4-cylinder type with a 9.0:1 compression ratio, and it's turbocharged. On the dyno, the motor puts out 200 hp at 4 psi boost with the timing at the stock setting of 35 degrees, on 92 octane pump gas, with an air/fuel ratio of 14:1. We retard the timing to 30 degrees and can now run 7 psi and make 225 hp before detonation occurs. Next we richen the mixture to a 12:1 AFR and find we can get 8 psi and 235 hp before detonation occurs. The last thing we can consider is lowering the compression ratio to 7:1. Back on the dyno, we can now run 10 psi with 33 degrees of timing at a 12:1 AFR, and we get 270 hp on the best pull.

 

We decide to do a test with the 9:1 compression ratio engine using some 118 octane leaded race gas. The best pull is 490 hp with 35 degrees of timing at 21 psi. On the 7:1 engine, we manage 560 hp with 35 degrees of timing at 25 psi. To get totally stupid, we fit some larger injectors and remap the EFI system for 126 octane methanol. At 30 psi we get 700 hp with 35 degrees of timing!

 

While all of these figures are hypothetical, they are very representative of the gains to be had from using high octane fuel. Simply by changing fuel we took the 7:1 engine from 270 to 700 hp.
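To make the relative gains in that hypothetical easier to compare, here's a small tally of the quoted figures (following the 7:1 engine once the race fuel comes in; the 9:1 engine made 490 hp on the same race gas). Only the percentage arithmetic is added here.

```python
# Tally of the hypothetical dyno figures quoted above; the hp numbers are
# the article's, only the percentage sums are new.

steps = [
    ("Baseline: 9:1 CR, 4 psi, 35 deg timing, 92 octane, 14:1 AFR", 200),
    ("Retard timing to 30 deg, 7 psi",                              225),
    ("Richen to 12:1 AFR, 8 psi",                                   235),
    ("Lower CR to 7:1, 10 psi, 33 deg",                             270),
    ("7:1 CR on 118 octane race gas, 25 psi",                       560),
    ("7:1 CR on 126 octane methanol, 30 psi",                       700),
]

baseline_label, baseline_hp = steps[0]
print(f"{baseline_label}: {baseline_hp} hp")
previous_hp = baseline_hp
for label, hp in steps[1:]:
    print(f"{label}: {hp} hp "
          f"(+{100 * (hp / previous_hp - 1):.1f}% on the previous step, "
          f"+{100 * (hp / baseline_hp - 1):.1f}% over baseline)")
    previous_hp = hp
```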

 

From all of the changes made, we can deduce the effect of each change on hp:

 

Retarding the ignition timing allows slightly more boost to be run, for a gain of 12.5%.

 

Richening the mixture allows slightly more boost to be run for a small hp gain; however, past about an 11.5:1 AFR most engines will start to lose power and even encounter rich misfire.

 

Lowering the compression ratio allows more boost to be run with less retard for a substantial hp gain.

 

Increasing the octane rating of the fuel has a massive effect on maximum obtainable hp.

 

We have seen that there are limits to what can be done running pump gas on an engine with a relatively high compression ratio. High compression engines are therefore poor candidates for high boost pressures on pump fuel. On high octane fuels, the compression ratio becomes relatively unimportant. Ultimate hp levels on high octane fuel are mainly determined by the physical strength of the engine. This was clearly demonstrated in the turbo Formula 1 era of a decade ago, where 1.5L engines were producing up to 1100 hp at 60 psi on a witches' brew of aromatics. Most fully prepared street engines of this displacement would have trouble producing half of this power even for a short time, even with many racing parts fitted.

 

Most factory turbocharged engines rely on a mix of relatively low compression ratios, mild boost and a dose of ignition retard under boost to avoid detonation. Power outputs on these engines are not stellar but these motors can usually be seriously thrashed without damage. Trying to exceed the factory outputs by any appreciable margins without higher octane fuel usually results in some type of engine failure. Remember, the factory spent many millions engineering a reasonable compromise in power, emissions, fuel economy and reliability for the readily available pump fuel. Despite what many people think, they probably don't know as much about this topic as the engineers do.

 

One last method of increasing power on turbo engines running on low octane fuel is water injection. This method was evaluated scientifically by H. Ricardo in the 1930s on a dyno and showed considerable promise. He was able to double power output on the same fuel with the aid of water injection.

 

The first widespread use of water injection was in WW2, on supercharged and turbocharged aircraft engines, for takeoff and emergency power increases. The water was usually mixed with 50% methanol, and enough was carried for 10-20 minutes of use. Water/methanol injection was widely used on the mighty turbo-compound engines of the '50s and '60s before the advent of the jet engine. In the automotive world, it was used in the '70s and '80s, when turbos suddenly became cool again and EFI and computer-controlled ignitions were still a bit crude. Some Formula 1 teams experimented with water injection for qualifying with success, until it was banned.


So, if I've taken in the information correctly: at a given boost pressure, larger turbos will heat the air less and flow more air (a larger volume compressed to the same pressure).

More and cooler air into the engine = more BHP and a lower chance of det.

 

So why do the stage 1 NA-T kits that are destined for stock-compression engines come with T57/61s? Wouldn't they make more power, and maybe even run a smidge more boost, with larger turbos?

 

I'd assume 1) drivability - a small turbo will come on boost fast, especially with a higher compression engine running it. A big one may never really get going. 2) cost - smaller turbos are cheaper.

 

Turbo sizing is a bit of a dark art, in fact running a big turbo at low boost levels can cause stall. You can also end up outside the best efficiency zone by running too little pressure, so a 57/61 size intended for a high compression low boost application is probably a much better choice than a honking great T71.

 

I'm at the edge of what I know now so I'll have to shut up :D

 

As for outlet temperatures, I've no idea what they are with a big single or the stock twins in either configuration. I know Corky Bell reports some temperatures of over 100degC when testing a small turbo setup.

