It's been a long time since my astrophysics, but I think the contradiction you're running into comes from treating lux (illuminance) as a measure of emitted energy, when it's actually a measure of received energy.
The Sun's (or any star's) emitted energy is measured in terms of solar luminosity.[1] The nominal value of solar luminosity is 3.83×10^26 watts. At twenty-five times as luminous, Sirius's luminosity is about 9.57×10^27 watts. Divide that by your 296 billion (the inverse-square factor for Sirius's distance) and you get roughly 3.2×10^16 watts, i.e. the luminosity a source at 1 AU would need in order to appear as bright as Sirius does from Earth. Converting that back into solar luminosities (to gauge the apparent brightness at Earth), it's roughly 8.4×10^-11.
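If anyone wants to check that arithmetic, here's a quick sketch in Python. It assumes the 296 billion is the (544,000 AU)^2 inverse-square factor used elsewhere in this thread, and all inputs are rounded:

```python
# Rough check of the numbers above (rounded inputs).
L_SUN = 3.83e26            # nominal solar luminosity, watts
d_sirius_au = 544_000      # distance to Sirius in AU (~8.6 light-years)

L_sirius = 25 * L_SUN                 # ~9.57e27 W
dimming = d_sirius_au ** 2            # ~2.96e11, the "296 billion" factor
L_equivalent = L_sirius / dimming     # luminosity a source at 1 AU would need
                                      # to look as bright as Sirius does here

print(f"{L_equivalent:.2e} W ~= {L_equivalent / L_SUN:.2e} solar luminosities")
# -> roughly 3.2e16 W ~= 8.4e-11 solar luminosities
```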
Now, if we look up at the sky and check how bright the Sun and Sirius appear from Earth on the magnitude scale, where each step is ~2.512 times brighter than the one below it, the Sun has an apparent magnitude of about -26.7 while Sirius's is -1.46. That gap of ~25.3 magnitudes means the Sun in the sky appears roughly 1.3×10^10 (about thirteen billion) times brighter than Sirius, which lines up with the ratio implied by the luminosity calculation above (1 / 8.4×10^-11 ≈ 1.2×10^10). Again, it seems about right.
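And here is the magnitude cross-check as a one-liner, using -26.74 for the Sun and -1.46 for Sirius (the 100^(1/5) factor is just the definition of a magnitude step):

```python
# Brightness ratio implied by the apparent magnitudes.
m_sun, m_sirius = -26.74, -1.46
ratio = 10 ** (0.4 * (m_sirius - m_sun))   # one magnitude step = 100**(1/5) ~= 2.512x

print(f"The Sun appears {ratio:.2e} times brighter than Sirius")
# -> ~1.3e10, versus 1 / 8.4e-11 ~= 1.2e10 from the inverse-square estimate above
```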
You're comparing the Sun's illuminance at Earth (10^5 lux at 1 AU) to all starlight combined (10^-4 lux), then trying to work backward to what a single star should provide. That's not how this works.
The question isn't "what's the ratio between sunlight and all starlight?" The question is: what happens when you move the Sun to stellar distances using the inverse square law?
At 1 AU: ~10^5 lux
At 544,000 AU: 10^5 / (544,000)^2 ≈ 10^5 / (3×10^11) ≈ 3×10^-7 lux
That's the Sun at Sirius's distance. Multiply by 25 for Sirius's actual luminosity: ~7.5×10^-6 lux.
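The same arithmetic as a short script, in case anyone wants to vary the assumptions (1×10^5 lux for direct sunlight, ~544,000 AU to Sirius, and 25× the Sun's luminosity are the inputs used above; any difference in the final digit is just rounding):

```python
# Move the Sun out to Sirius's distance with the inverse square law,
# then scale by Sirius's higher luminosity.
sun_lux_at_1au = 1.0e5       # direct sunlight, order of magnitude
d_sirius_au = 544_000        # ~8.6 light-years expressed in AU
luminosity_ratio = 25        # Sirius relative to the Sun

sun_at_sirius_distance = sun_lux_at_1au / d_sirius_au ** 2   # ~3.4e-7 lux
sirius_lux = luminosity_ratio * sun_at_sirius_distance       # ~8.4e-6 lux

print(f"Sun at Sirius's distance: {sun_at_sirius_distance:.1e} lux")
print(f"With the 25x factor:      {sirius_lux:.1e} lux")
```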
Your own Wikipedia source says the faintest stars visible to the naked eye are around 10^-5 to 10^-4 lux. So we're borderline at best, and that's with the 25× boost.
Moreover, you said "the difference between all starlight and just Sirius would be around 10^2." There are ~5,000-9,000 stars visible to the naked eye. If Sirius provides 1/100th of all visible starlight and thousands of other stars supply the rest, the math doesn't work: you can't have one star account for 1% of the total while thousands of others make up the remainder, unless most of those stars are providing almost nothing, which contradicts the "slightly brighter" compensation model.
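To put rough numbers on that, using only figures already in this thread (10^-4 lux of total starlight, Sirius at 1/100th of it, and ~9,000 naked-eye stars as the upper bound):

```python
# Spread the remaining starlight over the other naked-eye stars.
total_starlight_lux = 1e-4
sirius_lux = total_starlight_lux / 100    # the claimed 10^2 ratio -> 1e-6 lux
other_stars = 9_000                       # upper end of the naked-eye count

average_other_star = (total_starlight_lux - sirius_lux) / other_stars
print(f"Average remaining star: {average_other_star:.1e} lux")
# -> ~1.1e-8 lux, far below the ~1e-5 "faintest visible" figure cited above
```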
Address the core issue: the inverse square law predicts invisibility. The 25× luminosity factor is not enough to compensate, and citing aggregate starlight illuminance doesn't resolve this.