Re: Alternative Aircraft Ranking System

Craiglxviii
Posts: 2041
Joined: Thu Nov 17, 2022 7:25 am

Re: Alternative Aircraft Ranking System

Post by Craiglxviii »

How are you on factoring ammunition supply? One of the big criticisms of the 4-to-6 x .50” change in the Wildcat, for instance, was that although two guns were added, the same overall ammunition capacity was kept. So fire duration was reduced (with the pilots maintaining that four guns were adequate to kill Japanese aircraft).

The ratings should reward higher ammunition capacity; more bursts on target, more firing opportunities, etc. all make for a more effective fighter.
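
Just to put rough numbers on the fire-duration point, here is a minimal sketch. The loadouts (roughly 450 rounds per gun on the four-gun Wildcat against 240 per gun on the six-gun fit) and the ~800 rpm cyclic rate are commonly quoted ballpark figures, not data from this thread:

# Rough fire-duration comparison for the Wildcat's 4-gun vs 6-gun fits.
# Rounds per gun and cyclic rate are ballpark assumptions, not verified data.
CYCLIC_RATE_RPM = 800  # assumed rate of fire for a single .50" gun

def firing_time_seconds(rounds_per_gun, rate_rpm=CYCLIC_RATE_RPM):
    # Seconds of continuous fire before the ammunition runs out.
    return rounds_per_gun / (rate_rpm / 60.0)

print(firing_time_seconds(450))  # 4-gun fit: ~34 s of fire
print(firing_time_seconds(240))  # 6-gun fit: ~18 s, despite throwing 50% more lead per second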

Edit.

Also, have you:

Factored for interrupted guns? The Bf.109 & Fw.190 both had interrupted guns, as did all of the Italian machines and the P-39 & P-63, and I think most Soviet fighters too; all of them fired something through the propeller disc. From memory Stuart had worked the penalisation out as 0.5 of an uninterrupted gun.

Factored for convergence? Wing-mounted guns require convergence, and that maximises their effectiveness at one particular point in space; beyond that their effectiveness drops off slightly. Again from memory, Stuart had this as either 0.7 or 0.8 that of a centreline gun.

Once these corrections are made you should see the twin-engined fighters climb back up the rankings somewhat, with their nose-mounted guns.
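
If both corrections were applied, the arithmetic would look something like the sketch below; the 0.5 and 0.7 multipliers are my from-memory recollections of Stuart's figures, not verified values:

# Effective gun count with penalties for synchronised and wing-mounted guns.
# The multipliers are from-memory figures, not verified values.
SYNCHRONISED = 0.5   # fires through the propeller disc
WING_MOUNTED = 0.7   # needs convergence; 0.7 or 0.8 depending on recollection
CENTRELINE = 1.0     # unsynchronised nose or hub gun

def effective_guns(centreline=0, synchronised=0, wing=0):
    return centreline * CENTRELINE + synchronised * SYNCHRONISED + wing * WING_MOUNTED

print(effective_guns(centreline=4))             # 4.0 - e.g. a twin with four nose guns
print(effective_guns(wing=4))                   # 2.8 - the same four guns out in the wings
print(effective_guns(synchronised=2, wing=2))   # 2.4 - two through the prop, two in the wings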
Vendetta
Posts: 288
Joined: Tue Feb 28, 2023 8:11 pm

Re: Alternative Aircraft Ranking System

Post by Vendetta »

It’ll be a long ways off since it involves looking up several hundred different planes again. I was going through the comparisons first to see if anything else stood out as being in need of adjustment.
Johnnie Lyle wrote: Mon Mar 13, 2023 9:35 pm How are you vs Stuart calculating range?

I am very leery about range differences not being properly accounted for, especially in a WWII context.
RANGE FACTOR is the range of the plane in km, divided by 25, with the result raised to the power of 0.95. The range figure plugged into this equation is an average of combat radius with ferry range and, if I can find it, combat range with drop tanks.
Stuart’s 1.0 model assigned one point to every 50 miles of range on internal fuel, plus drop tanks.

I give one point per kilometer per hour of speed; Stuart gives one for every 10 mph over 200 mph.

Let’s take an imaginary plane with a speed of 600 km/h that has a normal range of 900 km, but a range on drop tanks of 1300 km and a ferry range of 2000 km.

That speed converts to 373 mph, so it gets 17.3 points for speed in Stuart’s system. For range, Stuart would use the drop-tank figure (807.8 miles), which gives it 16.2 points. Speed and range are almost equally weighted.

In mine, we’d take the mean of normal and drop-tank range (1100 km) and then average that with the ferry range to get 1550 km. Dividing 1550 by 25 gives 62, and 62^0.95 = 50.4, so range contributes around 8.4% of the points the plane gets from speed. Range is not insignificant, but it’s more of a tie-breaking factor between planes of similar speeds.
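
For anyone who wants to check the arithmetic, here is the worked example as a quick sketch. The constants are as described above; the function names are just mine, and Stuart's side is my reading of his 1.0 model:

KM_TO_MILES = 0.621371

def stuart_points(speed_kmh, range_with_tanks_km):
    # My reading of Stuart's 1.0 model: 1 point per 10 mph over 200 mph,
    # 1 point per 50 miles of range on internal fuel plus drop tanks.
    speed_mph = speed_kmh * KM_TO_MILES
    range_miles = range_with_tanks_km * KM_TO_MILES
    return (speed_mph - 200) / 10, range_miles / 50

def my_points(speed_kmh, normal_km, tanks_km, ferry_km):
    # My model: 1 point per km/h of speed; the range factor averages normal
    # and drop-tank range, averages that with ferry range, divides by 25,
    # and raises the result to the power 0.95, as in the worked example.
    averaged_range = ((normal_km + tanks_km) / 2 + ferry_km) / 2
    return speed_kmh, (averaged_range / 25) ** 0.95

print(stuart_points(600, 1300))          # (~17.3, ~16.2): speed and range nearly equal
print(my_points(600, 900, 1300, 2000))   # (600, ~50.4): range is ~8.4% of the speed score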

What these scores are meant to represent is a very debatable question. If it’s which of the two would be more likely to win in a head-to-head fight, range is relatively unimportant; if it’s which one is operationally capable of undertaking a greater variety of missions, range is very important.

My thinking fell more towards the former. I saw too many cases for my liking of plane A being rated higher than plane B in spite of having little hope of defeating plane B if they actually crossed paths for real.
kdahm
Posts: 897
Joined: Thu Feb 02, 2023 3:08 pm

Re: Alternative Aircraft Ranking System

Post by kdahm »

I think you're fundamentally misunderstanding the uses and reasons for the methodology in Stuart's model. It's not intended for single plane use, and there's significant fuzziness associated with evaluating the numbers that come out of the formula.

Stuart was very clear that differences in the pilot, how the plane was working on a given day, the tactical situation, and other such factors drown out moderate differences in the numbers. For one thing, the range number is nice, but a plane that's fully loaded will not maneuver as well as a plane down to a quarter of a tank, even though the longer-ranged plane can reach fights at a greater combat radius than a short-ranged one. A razorback versus a bubble canopy on a P-47 is another example.

Small differences in the numbers are also not meaningful. Let's say a plane has a rating of 157.1. Comparing it to other planes with ratings of, say, 150 to 165 isn't particularly significant, because the reasons why each plane's rating differs from the baseline, and the methodology of the force using it, aren't captured in the final number. For example, defending your own airbases vs. flying fighter CAP over bombers hitting the other country's capital.

It's meant for evaluating entire units or air forces. A base filled with fighters rating 185 is likely better off than one with fighters rated 170, as long as the fighters match the doctrine of the force. Is a carrier loaded with Plane A better than one loaded with Plane B?

Also, Stuart did say, and I agree with him wholeheartedly, that fictional planes, concept studies, and even one- or two-off prototypes are very limited in the accuracy of the numbers that go into the rating. The figures for speed, range, armament, and everything else are simply too optimistic, and real service tends to bring the marketing hype down a lot.

Finally, this is not the be-all and end-all set of rating numbers for planes. There are other ways of calculating them, other formulas, and other ways of weighting the numbers. Stuart's model was done the way it was because of its simplicity. The hard part is collecting all of the data to be plugged into the equation, and there's plenty of room for discussion about which set of data to use. When I look at your model, I see a lot of logarithms, averaging, and other adjustments, which has the effect of regression to the mean.

I wish we still had those discussions available so I could point to the exact posts that address your concerns. They should be in the 2020 download, so there's hope they can be recovered.
Vendetta
Posts: 288
Joined: Tue Feb 28, 2023 8:11 pm

Re: Alternative Aircraft Ranking System

Post by Vendetta »

kdahm wrote: Tue Mar 14, 2023 4:42 am I think you're fundamentally misunderstanding the uses and reasons for the methodology in Stuart's model. It's not intended for single plane use, and there's significant fuzziness associated with evaluating the numbers that come out of the formula. […]

I did lurk on the old forum, so I read every word of discussion on the aircraft library forum at some point. In fact, I believe I have the full set of the old ratings saved somewhere. I understand the logic of the old system; there are simply cases where I believe it produced bad results. The Hurricane IIC scores competitively with any model of the Bf 109 or Fw 190, and is downright better than a lot of them. Is there anyone who would rather have a fleet of Hurricanes than a fleet of Focke-Wulfs, or thinks they would be evenly matched?

Stuart’s logic tended to be prescriptivist; if historical experience or pilot feedback contradicted the results, then the pilots were wrong or they were using the plane wrong. There is some validity in that approach, since that was in fact true in some cases. But there are others where I’m not satisfied with that answer. My approach has been more descriptivist; if the historical record disagrees with my results, then there’s probably something wrong with my model.

The complex math is there to bring factors that vary more widely into closer balance with one another. Top speeds, for instance, vary by a factor of about three between the slowest biplanes and the fastest jets. Armaments, however, vary much more widely: a plane with four Hispanos is throwing over 20 times as much lead per second as a plane with a pair of .30 cals. If you assign even weight to speed and firepower, the difference in firepower is going to be a much more important deciding factor than the difference in speed. And historically, we know that isn't true. If your plane is 100 km/h faster than your opponent's and climbs 50% faster, it matters relatively little whether he has six machine guns or two - most of the time, he won't be able to get into a position where he can hit you at all.
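
Purely as an illustration of the principle (these are not my model's actual constants, just an example of why a logarithm helps), compressing throw weight logarithmically keeps a twenty-fold spread in firepower from swamping a modest spread in speed:

import math

# Illustration only: compress a widely-varying factor (weight of fire, kg/s)
# with a logarithm so it stays in proportion with narrowly-varying factors
# like speed. The 0.1 kg/s baseline is arbitrary, not a constant from my model.
def firepower_factor(throw_weight_kg_per_s):
    return math.log10(throw_weight_kg_per_s / 0.1)

print(firepower_factor(0.25))  # ~0.4 - a light rifle-calibre battery (ballpark figure)
print(firepower_factor(5.0))   # ~1.7 - a heavy cannon battery: 20x the lead, ~4x the factor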

I agree that fine differences in scores don't matter too much - what's important is sorting planes of comparable ability into roughly the right place. But when the system is telling me that a Hurricane from 1941 and a Bf 109 K-4 are in the same neighborhood, then it is just flat out wrong. My interest is in producing a system for the same overall purpose as Stuart's that produces fewer errors of that nature.

I agree also on rating paper planes, prototypes, etc. That's why I went out of my way to clearly label them all so they can be taken with the appropriate grain of salt. I've done it anyway for my own personal interest in devising or evaluating alternate history works, game modding, etc. In some cases, plugging their data into my charts has been helpful for highlighting obvious errors. Nick Sumner's Drake's Drum, for instance, featured the He 351, a German superprop derived from the P.1076 draft project. The charts showed that he (or Heinkel) was underestimating the weight of the plane by a huge margin - it was producing an outrageous power-to-weight ratio completely out of line with even the most powerful prop planes.
Calder
Posts: 1002
Joined: Fri Dec 09, 2022 10:03 pm

Re: Alternative Aircraft Ranking System

Post by Calder »

Shrug, part of the difference between the two systems is that two people look at the problem in different ways. Stuart looked at the algorithm strategically; in his formula, range was a much bigger part of the score. He was more interested in what roles a plane could be used for.

Vendetta, meanwhile, is much more interested in how planes do tactically: if two planes met in a dogfight over the front lines, which was more likely to win? I don't think either approach is wrong. They are just two different ways of approaching the issue.