Signal reflections in electronic circuits (lcamtuf.substack.com)
185 points by zdw on Nov 27, 2023 | 75 comments


One time we got a new batch of assembled boards that would randomly quit working. This was for an older product that we had been selling for years without issue. Turns out an IC supplier had moved their chip to a smaller process. This smaller chip had less gate capacitance and was making much sharper corners on its square waves.

It was slow data, like 10 MHz or something. We were safely under the 1/10 wavelength rule with the old, softer corners. With the new sharp corners we were over the 1/10 wavelength rule and the circuit was ringing so hard it would cause nearby chips to latch up. We added some termination resistors and it was fixed.
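
A rough way to see how a die shrink can push the same board over the line: the fraction-of-a-wavelength rule is often restated in terms of rise time. This is only a sketch with assumed edge rates and an assumed propagation velocity on FR-4, not numbers from the board above.

    # Rough sketch, not from the comment above: a trace starts to need
    # termination once it is longer than ~1/10 of the edge's spatial length.
    # All numbers here are illustrative assumptions.
    c = 3e8                   # speed of light, m/s
    v = 0.5 * c               # assume roughly half of c on FR-4

    def critical_length_m(rise_time_s, fraction=0.1):
        return fraction * rise_time_s * v

    print(critical_length_m(10e-9))  # ~10 ns edges: ~0.15 m of margin
    print(critical_length_m(1e-9))   # ~1 ns edges after a die shrink: ~0.015 m, easy to exceed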

Also note that this kind of thing can be hard to diagnose, because when you put the scope probes on, some of the ringing energy goes into the scope and it tends to improve the situation as long as the probe is there.


>Also note that this kind of thing can be hard to diagnose, because when you put the scope probes on, some of the ringing energy goes into the scope and it tends to improve the situation as long as the probe is there.

When you're in the thick of it trying to debug, these absolutely make you want to scream. But once you figure it out and have a minute to catch your breath, they are oh so satisfying that you did figure it out.


No electronic component is ideal, not even wire.

There are properties like resistance, effective capacitance, impedance and reactance that would not intuitively be associated with a pure conductor, but neglecting little things like these can contribute to behavior that rears its ugly head in unforeseen ways if you are not aware.

What about back-emf in the antique world of vacuum tube audio working at hundreds of volts where you are driving what's known to be a very reactive but low-impedance load?

As the signal from the amp into the voice coil displaces the speaker cone from its resting position, the spring action of the cone surround works to return the cone to its non-energized point. This happens constantly, and it always has.

But with tubes the impedance and voltage are so high that a major step-down audio output transformer was used to isolate the high voltage from the speakers as well as properly match the impedance.

No audio transformer is needed for solid state amplifiers since the transistors work at relatively safe voltages and impedance is not a big issue.

Either way as the speakers spring back, their voice coil moving on its own across the magnetic field, it generates a pulse of electricity from the speaker itself that appears at the output of the amplifier but did not actually come from the amp.

At high power and especially with square-wave type distortion often seen in musical instrument amps, this back-emf from the speakers can be stepped up to over 1000V on its reversed way back to the tubes through the "step-down" output transformer. When the tubes are only rated for a few hundred volts this is not ideal, and if the tube does not suffer internally, it can still cause a spark to jump between two adjacent pins on the tube socket. Once this happens both the tube and the socket can be ruined due to conductive carbon formation within the bakelite tube base and/or tube socket.
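
To get a feel for the numbers, here is a sketch with assumed impedances and an assumed back-EMF spike, not figures from any particular amp: the turns ratio that matches a several-kilo-ohm plate load to an 8-ohm speaker also multiplies whatever the speaker generates on its way back up.

    import math

    # Sketch with assumed numbers: how a "step-down" output transformer
    # steps the speaker's back-EMF up on its way back to the tubes.
    z_primary = 5000.0    # ohms, assumed plate load
    z_secondary = 8.0     # ohms, speaker

    turns_ratio = math.sqrt(z_primary / z_secondary)   # ~25:1
    back_emf_v = 50.0     # volts, assumed spike generated by the voice coil

    print(turns_ratio)                 # ~25
    print(back_emf_v * turns_ratio)    # ~1250 V appearing at the plates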

This may be an extreme example, but wires are not perfect conductors and circuit boards are even less perfect as insulators.


I learned the hard way recently that my attempt at visualizing pin states on prototype boards using LEDs without current limiting resistors was a bad idea: it caused the LED to light up when the output pin was turned HIGH, but the downstream receiver of the pin didn't register the HIGH because the LED stole too many volts.

I only figured this out after building and tearing down my prototype and reassembling each of the modular bits, verifying they worked, and then noticing it didn't work after integration until I removed the LEDs. I have since learned I also could have used a buffer driver or LED driver, although that adds more complexity to my simple prototype.

(in case you're wondering, yes, I know you're supposed to use a current limiting resistor, but I've also observed that my 5mm LEDs work just fine when given regulated 5V, they end up dropping 4.6V and consuming 40mA, which is about double the current they are rated for.)
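
For anyone following along, the usual sizing arithmetic is just Ohm's law across the resistor. The Vf and target current below are assumptions for a generic 5 mm white LED, not the exact part above.

    # Sketch of the usual resistor sizing; check the datasheet for the real Vf.
    v_supply = 5.0     # volts at the pin
    v_forward = 3.0    # volts, typical white LED forward drop
    i_target = 0.003   # amps; a few mA is plenty for an indicator

    r = (v_supply - v_forward) / i_target
    print(r)           # ~667 ohms; the next standard value (680R or 1k) works fine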


How bright do you need the LEDs to be? Even 20 mA is a huge amount for modern electronics. On my boards I have 100 kohm resistors with green LEDs and they are very visible even at some 20 uA of current.


Not really bright at all (sitting next to the device in a room where I control the lighting). My current LEDs are 5mm white, Vf 3V, max 20mA and I just hooked one up to my variable power supply and it looks like I can run it at 3mA and it's still super bright.

At times I have run them much lower, to the point where the light is just barely visible. If I set my power supply to cap out at 2.6V instead of 3, the current reading is 0.000, which I think must be below 1mA, and it's still quite visible.

(i'm not an ee expert so I frequently make thinkos related to voltage and current, but I think I've mostly got LEDs down).


> when you put the scope probes on, some of the ringing energy goes into the scope and it tends to improve the situation as long as the probe is there.

The software equivalent is so-called "load-bearing printf"s, where for example `printf("broken_var: %i\n",broken_var);` causes broken_var to not get optimized out and so start working correctly. But in hardware that happens even when you (effectively) inspect broken_var in an attached debugger, because the closest thing you have to a debugger (e.g., an oscilloscope) effectively is a load-bearing printf.


Generally, a scope probe is best represented as a very small capacitance to ground, followed by a rather large resistance. That capacitance tends to shunt most high frequencies to ground.

Of course, as things are with the black arts of RF engineering, there are possible situations where that same probe would make things worse, or appear inductive.


I wonder if the quantum physics equivalent is Heisenberg's uncertainty principle. Probably not, but I have observed that "observing a detail of a system often interferes with the running of the system".


In a past life, I worked in a video post house with a very competent engineering department. Without fail, if we had a problem that required an engineer, they would come in, see the problem, and fix it. Unless the only engineer available was the director of the group. The problems were never repeatable when he was present, and everything worked fine. This is why there was a constant request to have a life-sized cutout of him to leave in the room to ensure the equipment behaved properly.


I usually refer to these as Heisenbugs.


Ahhh yes, the scope probe loading "Heisen Bug"!


It is just a pet peeve of mine (disclosure, I am a terminally degreed engineer), but it is something a lot of non-EE people forget: circuits do not "push" current, your wall outlet does not "push amps", and this applies to micro-electronics (as well as pico, nano, femto, etc.)... the circuit is always a combination of the source and load.

So, if the load is not equal to the source (or vice versa, does not matter), by conservation of energy, we have some "leftover"... in signals, this leftover is going to travel back from whence it came, causing (possibly) attenuation or amplification depending on timing, which some might call distortion.

It can be tricky to keep this in mind when you are working with complex circuits, and it gets more difficult as speeds increase and signal sizes go down.
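
One way to put a number on that "leftover" is the reflection coefficient, the fraction of the wave that turns around at a mismatch. A minimal sketch with assumed impedances (a 50-ohm line into various loads):

    # Sketch: the fraction of an incident wave that comes back at a mismatch.
    def reflection_coefficient(z_load, z_line=50.0):
        return (z_load - z_line) / (z_load + z_line)

    print(reflection_coefficient(50.0))    #  0.0: matched, nothing left over
    print(reflection_coefficient(1e6))     # ~+1.0: open end, full reflection
    print(reflection_coefficient(1e-3))    # ~-1.0: short, full inverted reflection
    print(reflection_coefficient(75.0))    # +0.2: 75-ohm load on a 50-ohm line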


A good real world example is fluid flow, such as a gas compressed through a pipe, say a trumpet or an exhaust. If the standing wave in the pipe isn't optimal at the exit of the pipe, part of the energy will end up reflected, which causes a very real obstruction and higher pressure. In a trumpet this manifests as a resonant filter that leaves only a couple of frequencies without significant damping, and that is the note (+ overtones) that you hear. In an exhaust, such tuning will result in the system performing optimally at certain RPMs, and if you match that to when you want the engine to produce maximum power you can create a peak by getting rid of the exhaust gases more effectively at that point. Hence resonant exhaust systems tuned to the theoretical peak of an engine.


> real world example

Electric/electronic circuits operate in the real world. A mechanical example is what you're after.


That's a cool analogy!

I wonder if the exhaust could be dynamic, i.e. change length depending on the RPM.


The answer is: yes. The results: 6% gain or thereabouts with a fairly impressive amount of extra complexity. But it can be done.

https://www.researchgate.net/publication/322390752_Continuou...


Way back when Jet Skis had 2-stroke engines, you could buy an aftermarket exhaust pipe with water injection in a couple key spots. This allowed the pipe to be dynamically tuned to different RPMs by changing the speed of sound within the pipe.

The stock pipe was tuned to a specific RPM, allowing the reflected exhaust pulse to return to the cylinder just as the port was closing and “slam the door” on the fresh air/fuel charge.

With variable water injection, you could have more than one optimal RPM.

(All this is from memory as I read the ads in Splash magazine.)


Oh that's so clever, to use the water that is there anyway as a means of affecting the dynamics of the pipe. Whoever came up with that was on another level.


Many antennas do this (screwdriver antennas come to mind), so I’d expect some similar ability to do this with exhaust. The only issue I can think of is turbulence from the seams of segments that extend.


Here is a YouTuber who is making a physically based audio engine using fluid dynamics (demos so far for various engines, a trumpet, a steam whistle, dev videos)

https://www.youtube.com/channel/UCV0t1y4h_6-2SqEpXBXgwFQ

A lot of what you wrote reminded me of what he is working on - the trumpet sim is not entirely accurate yet but watching those reflections bounce back and forth like you describe is very interesting.

Trumpet Video: https://www.youtube.com/watch?v=rGNUHigqUBM&t=87s


That's amazing, incredible programming and analysis skills on display there.


> circuits do not "push" current

Except for inductors. See: Tesla coils.

If you forget a snubbing diode on a large motor (which is very inductive), you'll likely see some fires on your PCB. Inductors (and inductive loads) will push current even if the other side doesn't want it (ex: even if the other side is a 10 Gig-ohm resistance, the current will continue and possibly spike the voltage to millions-of-volts and shoot lightning out to allow the inductor to keep pushing the current)
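
The reason the inductor "wins" is v = L di/dt: if you try to stop the current quickly, the voltage does whatever it must. A sketch with assumed motor numbers, purely for illustration:

    # Sketch: flyback voltage when an inductive load is switched off abruptly.
    L = 0.01        # henries of winding inductance (assumed)
    i = 2.0         # amps flowing just before the switch opens (assumed)
    dt = 1e-6       # seconds in which the current is interrupted (assumed)

    print(L * i / dt)   # ~20,000 V unless a flyback diode gives the current a path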


I didn’t get a terminal EE degree (it is more fun to tell the magic smoke what to do, than try and keep it in the chip), but you can of course design a circuit that will adjust voltage to keep current constant (within some bounds, etc etc). Could that not reasonably be said to be “pushing current”?


If there's no inductor (and inductors are dangerous: see my other comment), then nothing on that PCB is pushing any current. It's all illusions created by "pulling" current.

> but you can of course design a circuit that will adjust voltage to keep current constant

So there's two designs and they're different in important ways. But first: the common part of _both_ designs is that the transistor is working as a "controlled resistor". The question is where you place this special resistor. The other commonality is that "negative-feedback" can configure this transistor to reach the appropriate resistance very easily.

So with the common stuff out of the way: we have two designs. "Series Regulator" and "Shunt Regulators" (traditionally voltage-regulators, but they could be current in practice. I'll discuss as if they're current regulators).

1. Series Regulator -- The transistor is treated as an adjustable resistor "in series" with the rest of the circuit. This "pinches down" the voltage/current to the level deemed acceptable to the engineer. Ex: If "downstream", you sense a 100-Ohm load and you have a target-current of 10mA, and your source voltage is 5V, you set the transistor so that its equivalent to 400-Ohms (total a 500-ohm system, so 10mA goes through).

But if the downstream circuit changes (a button was pressed and a motor is now being driven), and the downstream circuit now looks like a 10-Ohm load, to keep the constant 10mA current your Series-Regulator will automatically set the transistor to act like a 490-Ohm resistor (keeping the 500-ohm system, so 10mA remains constant).

2. Shunt Regulator -- The transistor is treated as an adjustable resistor "in parallel" with the rest of the circuit. This "diverts" excess energy to ground, causing the rest of the circuit to effectively function within its specifications. Ex: If "downstream", you sense a 100-Ohm load and you have a target-current of 10mA and your source current is 50mA, you set the transistor so that it is equivalent to 25-Ohms. This shunts 40mA to ground, and the remaining 10mA goes to the 100-Ohm load.

But if the downstream circuit changes (a button was pressed and a motor is now driven), and the downstream circuit now looks like a 10-Ohm load... to keep the constant 10mA current your Shunt-regulator will automatically set the transistor to act like a 2.5-Ohm load. This shunts 40mA to ground and the remaining 10mA goes to the 10-Ohm load.
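
A quick sketch checking the arithmetic of both examples above, using the same assumed numbers (5V source, 10mA target, 50mA source current for the shunt case):

    # Sketch verifying the series and shunt examples.
    v_source = 5.0

    def series_resistance(r_load, i_target=0.010):
        # Pass transistor resistance needed to hold the target current.
        return v_source / i_target - r_load

    print(series_resistance(100.0))   # 400 ohms
    print(series_resistance(10.0))    # 490 ohms

    def shunt_resistance(r_load, i_target=0.010, i_source=0.050):
        # Shunt transistor resistance needed to divert the excess current.
        v_load = i_target * r_load
        return v_load / (i_source - i_target)

    print(shunt_resistance(100.0))    # 25 ohms
    print(shunt_resistance(10.0))     # 2.5 ohms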

-----------

Traditionally, series and shunt regulators sense voltage (not current), but it's not very difficult to turn a voltage-regulator into a current-regulator instead.

Series regulators are your typical 7905 or whatever. They are more efficient (as you can tell by their obvious operation) and simpler to use.

Shunt regulators are traditionally Zener Diodes, or other circuits that behave like a Zener Diode. They can generate constant voltage offsets reliably (ex: if you have a 9V line from a series regulator, and you need a 7V reference, you can use a shunt-regulator to very accurately create -2V).

--------------

As you can see, it's all "pulling tricks".

Of course, the switching regulator (ex: MC34063. Don't use, this is an old chip lol. But maybe TI's Simple Switcher series, or similar) truly "pushes" current thanks to an externally supplied inductor... and as a result leads to far superior efficiency specs.

Another "pushing" trick is a charge-pump. You can turn on capacitors in such a way that they double the voltage. That's the thing about "pushing", you need an ability to increase voltage until the "downstream" circuit acts the way you like.

Inductors (and capacitors, to a lesser extent) _can_ push. But it's dangerous and somewhat difficult to design well. (Fortunately, we have pre-made modules like TI's Simple Switcher or Microchip's MCP1640, etc. etc. that do the job for us automatically... as well as pre-made power supplies).


I like your explanation and examples.


It's easier to think of everything as a transmission line, with everything being propagation with finite impedances, rather than thinking of everything as an idealized element and figuring out what the exceptions are.


These are some nice measurements (note that the first scope capture is measuring something totally different from the other scope captures), but there are a couple errors. First, the correct term for the apparent impedance of a long transmission line is "characteristic impedance", not "specific impedance". "Specific" means "per unit of mass". Second, the usual equivalent circuit for a transmission line is parallel capacitors separated by series inductors. The article tries a hybrid explanation with a propagating wave charging capacitors, which doesn't really work.

I've been getting into microwave circuit design this year for work. One "fun" thing I've learned is that even at low gigahertz frequencies, if you want to make accurate impedance measurements with a network analyzer you have to adjust for signal propagation delays with picosecond precision. On the boards I've made, every millimeter of transmission line is about 5.56 picoseconds of delay, which is about 10 degrees of phase shift at 5 gigahertz.
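
Those figures check out with a one-liner, using the ~5.56 ps/mm delay and the 5 GHz frequency quoted above:

    # Sketch: phase shift per millimetre of trace at 5 GHz.
    delay_per_mm_s = 5.56e-12
    f_hz = 5e9

    print(360.0 * f_hz * delay_per_mm_s)   # ~10 degrees of phase per mm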

Parasitic inductance and capacitance also make those nice, simple terminating resistors look a lot less simple, but that's another story...


Correct me if I am wrong, but if impedance is the ratio of capacitance to inductance (so for 50-ohm coax, Z = sqrt(L/C), then C/L = 1/50^2), the inductance is 2500 times less. For a short length, he may be choosing to ignore the L as it is probably not significant for the exercise.


You have it backwards. The inductance is much greater than the capacitance. But the important thing is that neither can be neglected. Without inductance, you just have a bunch of capacitors in parallel. Without capacitance, you have a bunch of inductors in series. Both cases simplify to one big component. You need alternating L and C to get wavelike propagation.
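
For concrete numbers, here is a sketch assuming ordinary 50-ohm coax with a velocity factor of about 0.66 (assumed values, not from the article); the per-metre L and C fall out of Z0 = sqrt(L/C) and v = 1/sqrt(LC):

    # Sketch: per-unit-length L and C of a line from Z0 and velocity.
    z0 = 50.0
    v = 0.66 * 3e8             # m/s, assumed propagation velocity

    c_per_m = 1.0 / (z0 * v)   # farads per metre
    l_per_m = z0 / v           # henries per metre

    print(c_per_m * 1e12)      # ~101 pF/m
    print(l_per_m * 1e9)       # ~253 nH/m
    print(l_per_m / c_per_m)   # = z0**2 = 2500, the ratio under discussion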


Yes, I stand corrected. It then seems to make less sense that he says he is ignoring the inductance when considering the capacitance.


What's amazing to me is that you can buy a device called a NanoVNA for $300[1], which will test transmission lines from 50 kHz to 4 GHz with a remarkable degree of precision. This used to be the domain of instruments costing $20,000 and up.

I myself have a knock-off that goes to about 1 GHz and was even cheaper, and it was quite instructive in helping me get a feel for reflections, etc... and came in handy installing some SMD capacitors in the right places on a 100 Watt UHF amplifier for the 440 MHz band. A few mm of movement made all the difference in the world in terms of matching.

[1] https://nanorfe.com/nanovna-v2.html


Can it test USB cables (with some rig of course)? There seems to be quite some demand for that.


It could test one pair of wires in the cable, but not all of them at the same time. There are likely much better testing setups out there.


For anyone who cares enough to watch a 2 hour video on it and hasn’t seen it yet, this Rick Hartley video is the recent gold standard of explaining the issues at play:

https://www.youtube.com/live/ySuUZEjARPY

The mindblowing thing to internalize is that you don’t route energy as electrons through copper traces (they move incredibly slow) but merely use the copper to guide a bunch of waves traveling outside of them, where they are up to all kinds of shenanigans like coupling, reflecting etc.


All I know about this is that during grad school, I had to crawl under a multi-ton supercooled superconducting magnet (an NMR machine) and turn two knobs to make a number get as small as possible - this was adjusting the impedance of the RF (NMR works by putting a sample in a huge magnetic field and then bombarding it with RF, then switching from transmitter to receiver and detecting the faint echoes of energy leaving your system).

The magnetic field is so strong you can't have any digital electronics within meters of the NMR- but you can put some impedance matching analog network along with a couple potentiometers and 7-segment displays so a human brain and hands (which are much less affected by magnetism) can reduce radio reflections.

RF (and all AC) still seems like spooky magic to me; fortunately I don't deal with signals above 100 kHz.


There's a reason one of the go-to books[1] in this area is subtitled "A Handbook of Black Magic".

[1]https://www.amazon.com/High-Speed-Digital-Design-Handbook/dp...


When DDR SDRAM came out a magazine illustrated their article with light pulses traveling between the bricks and the core. From my electronics classes it was clear to me that the line would be either up or down, and these pulses were just for illustration. What would an illustrator know about electronics, right?

The article then calculated how many bits must be in-flight on the board to achieve the higher data rate given the bus width and the distance :-)
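
A rough version of that calculation, with assumed numbers rather than the magazine's (trace length, data rate and bus width below are illustrative guesses):

    # Sketch: how many bits are physically in flight between controller and DRAM.
    trace_length_m = 0.10     # assumed controller-to-DRAM trace length
    v = 1.5e8                 # m/s, roughly half of c on FR-4 (assumed)
    rate_per_pin = 400e6      # transfers per second per pin (DDR-400-ish, assumed)
    bus_width = 64            # bits

    flight_time = trace_length_m / v
    print(flight_time * rate_per_pin)              # ~0.27 bit in flight per trace
    print(flight_time * rate_per_pin * bus_width)  # ~17 bits in flight across the bus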


I did that with the electronics hobby group kids a couple weeks ago. Hahaha.


This StackExchange answer is a really great explanation of characteristic impedance:

https://electronics.stackexchange.com/questions/59208/transm...

This one is OK too:

https://electronics.stackexchange.com/questions/150222/why-i...

The Art of Electronics 3rd Ed has a great explanation of series and parallel circuit terminations that I also highly recommend for understanding this, I think in chapter 15.


This video from Bell Labs showing various behaviors of waves using a nifty mechanical wave machine to make everything visible might be enlightening.

The whole thing is useful, but those who are just interested in impedance and reflections can skip to chapters 4 and 5.

https://www.youtube.com/watch?v=DovunOxlY1k


I just started watching this, and I'm immediately impressed with the production values when compared to today's "creator" content. All of the edits are matched. There are no jump cuts. The difference between well thought out material and random thoughts cut together into something vaguely coherent is totally lost now, and yet we reward those who smash-cut stuff together just as much as those who do it well. Generational differences, and yet another example of me getting old (er, I am old).


An easy way to understand what is going on with signal reflections is to consider a very long wire -- say, to the moon. When you put a voltage on the wire current starts flowing, but that current has nothing to do with what is on the other end of the wire, because it is far away and there is this thing called the speed of light. So the current has to do not with the load but with characteristics of the wire. Eventually the voltage/current will get to the other end of the wire -- the reflection is what happens when the voltage/current in the wire does not match what is required by the load connected to the end of the wire. The problem, especially at high frequencies, is that this doesn't require a wire to the moon to see, it can happen in inches. Try debugging a 400G Ethernet link composed of 16 traces about 12 inches long on a PC board -- can be very hairy.
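
A sketch putting times on both ends of that thought experiment. The velocities and per-lane rate are assumptions (roughly 0.66c in a cable, 0.5c on a PCB trace, ~25 Gb/s per lane of the 16-trace 400G link), not measured values:

    # Sketch: propagation delay for the two cases in the comment.
    d_moon_m = 384_400e3
    d_trace_m = 12 * 0.0254          # a 12-inch trace

    print(d_moon_m / (0.66 * 3e8))   # ~1.9 s one way on the wire to the moon
    t_trace = d_trace_m / (0.5 * 3e8)
    print(t_trace * 1e9)             # ~2 ns down the 12-inch trace
    print(t_trace * 25e9)            # ~50 bits already in flight on each lane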


Bless you hardware people in your dealings with high voltage wizardry. I have worked with hardware engineers in both high-wattage PoE++ and high-voltage pacemaker applications, and feel like dealing with these effects is the nearest thing I have ever seen to a dark art that no one truly understands. They appear to have the most confidence in a board design one spin, and despair-filled agony the next when some resonance builds and blows away our EMF testing. I'm not sure I've ever witnessed such engineering frustration. At least to some degree software issues are based on something you can grasp and see. I feel like these types of hardware issues are like staring into a crystal ball. My opinion as an outsider of course.


It seems a little less magical when you do it for a long time, but yes, it is kind of magic.

I wish, as a practicing EE, that it was easier to transmit the knowledge in a faster way than doing it for a long time. Unfortunately, nobody seems to have figured out how to teach it.

It bums me out a little bit. I really love what I do, and I think it's magic too. I wish I could share that with more people.


Well, cheaper/better simulators would help. There's software that can simulate these EM fields, but it's very expensive.


Anyone who knows enough to write an EMF simulator probably knows there’s only a handful of people in the world who need it.

Sadly, but justifiably, this means that they go get paying jobs.


Yah I'm hopeful that one of the CERN physicists will add even a basic one to KiCad. That seems like the best chance of it becoming available to us lesser mortals. ;)


Computer simulation has been an absolute game changer for these things.


Does that handle the enclosure the board goes into at all? I haven't been in the game for a while and am curious. We always had different behavior with our enclosure, and of course when the sometimes poorly designed enclosure caused strain on the board, that added a whole new layer.


> Does that handle the enclosure the board goes into at all?

It might. For example, Lukas Henkel is designing an open source laptop and heavily relying on simulation for that. In the post ([1]), he provides visualization of antenna performance for different placements inside the laptop case.

I quote, in case LinkedIn wants people to log in to view the post:

>I want to optimize the antenna positioning in my laptop design using open-source tools.

>The shown simulation is a 3D electromagnetic field simulation performed with the open-source tool Elmer FEM. The aluminum laptop case will have a large impact on the antenna gain and directionality. For correctly iterating on a good antenna positioning, it is necessary to integrate the complex 3D geometry of the laptop case into the simulation.

>I´m currently exploring three possible locations for the antenna. There may also be one option to make the laptop case itself as a part of a cavity antenna.

>The tools used for the shown simulation are all free and open source:

>Mesh generation: Salome_Meca

>Solver: Elmer FEM

>Visualization: Paraview

1. https://www.linkedin.com/posts/lukas-henkel-ovt_opensource-d...


That's a very interesting question. There are a ton of specialized tools for EMC/EMI simulation; I think they all work on the basic principle of assuming that a portion of your board will be exposed, that that part will be causing some radiation, and that it will receive some from outside. After that, decisions that affect cost will likely drive how far you want to strive for perfection: you could encase everything in grounded copper and it would be as close to perfection as you can manage, or you can stick your board in plastic and live with the consequences. Usually, depending on how critical the application is, some interference is acceptable as long as the device continues to operate. But on radiated power there are some very strict limits, and for, say, a motor controller any malfunction could cause serious trouble, so you will want to be extra careful there. The hardest things to control, for me: wiring, connectors, density of carbon sprayed inside plastic, and anything using inductors (including what orientation to put them in; you want them at right angles to the surface of the board for mechanical stability and lack of coupling between inductors on the same board, but you want them parallel to the board to minimize radiated power and susceptibility to EMI).

Apart from shielding and carefully modeling your board (especially the ground plane, supply and any connections that carry a significant fraction of inbound power), you will always end up testing for compliance, and that's the gold standard. I think simulation is very useful and can cut down on the number of physical test runs significantly, but I've yet to see a design that did exactly what was predicted. Wiring, what happens just outside of the board enclosure, environmental factors: it all adds up. I see simulation as a way to be more efficient, not as a silver bullet to guarantee certification in one shot, but possibly others have better experience.

There are a couple of very simple tricks to test for EMI sensitivity (a handful of coins, an old fashioned piezo based stove lighter and the oldest cell phone you can get in close proximity to the board), as well as a simple field strength meter. Between those you can probably identify and eliminate the worst and after that it's trial run time at the certification authorities if you are making a device that is to be used in a regulated market.

Especially the piezo lighter is interesting, I've had circuits that required substantial redesign just to get them to the point that they would not lock up hard.

What's interesting about this stuff is how un-intuitive some of it is. Note that this field is continuously in development and that new tools and techniques are brought to market all the time. Look the other way for a few years and you feel like a dinosaur.


> When PCB reflections start getting in the way of digital signaling, the usual culprit is a low-impedance source driving a comparatively high-impedance load (e.g., a MOSFET gate).

Depends on the MOSFET. Most discrete MOSFETs have pretty hefty capacitance at their gate, looking like a short to high-frequency inputs.

> The simplest remedy may be adding a “sink” resistor on the receiving end, connected to the signal’s return path. This is usually paired with a series resistor on the driving side, both to limit peak current and to at least roughly match the specific impedance of the trace.

The simplest remedy would be to use series resistor matched to the trace impedance to prevent the reflection bouncing multiple times, probe at the gate and only add parallel termination resistor if it is actually needed. Because then you need to work around the voltage drop.
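
The usual back-of-envelope for that series option, sketched with an assumed driver output impedance (the 20-ohm figure is a hypothetical typical value, not from the article):

    # Sketch: sizing a source-series termination resistor. Driver impedance
    # plus the resistor should equal the trace impedance, so the reflection
    # returning from the far end is absorbed at the source.
    z_trace = 50.0     # ohms, assumed trace characteristic impedance
    z_driver = 20.0    # ohms, assumed driver output impedance

    print(z_trace - z_driver)   # ~30 ohms; hence the familiar 22-33 ohm series resistors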


Even the significantly smaller capacitance of some kind of CMOS input will appear as a short for some amount of time. In combination with the series termination this will low-pass filter the signal, which is (maybe somewhat counterintuitively) often desirable for high-speed logic: it reduces EMC issues, and the cost of the reduced slew rate (somewhat higher power consumption) is mostly limited to the first input buffer on the chip, which cleans up the edges for subsequent circuitry. Within reason, obviously.

When driving some kind of power MOSFET you want it to switch as fast as possible because of the power loss in the linear region.


In RF electronics, wires end up looking like transmission lines.

Here’s one of the classic lectures on wave propagation and reflections, where it looks at similarities in wave behaviour across mechanical, electrical, acoustic and optical systems.

AT&T Similarities of Wave Behaviour (1959): https://youtu.be/DovunOxlY1k


>In RF electronics, wires end up looking like transmission lines.

This might be needlessly pedantic, but you've got this a bit wrong from a philosophical standpoint. Wires don't end up looking like transmission lines, they always ARE transmission lines. At low frequencies, transmission lines end up looking like the idealized cartoon wires that we use in circuit analysis.


Correct me if I'm wrong, but this sounds like the electrical version of "water hammer" in pipes. The electrons have velocity. When you suddenly change the impedance of the medium, the extra electrons already on the medium that can no longer fit into the transition will bounce back.


Thinking of an electron as a thing that travels from the start of the circuit to the end results in a bad mental model.

Think of it as a wave, and then watch this video https://youtu.be/DovunOxlY1k. Then you will have a strong mental model.


Fun fact, the average speed of an electron in a circuit is typically that of a snail, unless you're really, REALLY heating up the wire.


>the electrical version of "water hammer" in pipes

God that's so annoying to hear in your apartment at night. I wish building developers weren't so stingy and would install more insulation or a water hammer damper.


I think a better analogy for water hammer would be inductance. Trying to suddenly open an inductive circuit (close a valve) produces a voltage spike (pressure spike).


The Commodore 128 had a bodge wire that was added very late in the development cycle, and it was there to get rid of an instance of signal reflection. Bil Herd mentions it in his book, and I think this video of his goes into more depth: https://youtube.com/watch?v=kQXdEdsT5qw


> actually, these had nowhere to go, so here’s your energy back

Best way to describe reflection of signals in a wire I've ever seen. It's not accurate in the sense that the source does not push energy down the wire, but still a good way to see it. Maybe I'll finally remember if you get a positive or negative reflection when the termination resistor is too low or too high.
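
For the record, the sign falls straight out of the reflection coefficient; here is a minimal sketch assuming a 50-ohm line:

    # Sketch: sign of the reflection vs. termination value.
    def gamma(r_termination, z0=50.0):
        return (r_termination - z0) / (r_termination + z0)

    print(gamma(25.0))    # negative: termination too low, inverted reflection
    print(gamma(50.0))    # zero: matched, nothing comes back
    print(gamma(100.0))   # positive: termination too high, same-polarity reflection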


Some computer BIOSes had this feature where they would analyze the plugged in Ethernet cable using this signal reflection to determine if there is a break in the cable, and would tell you approximately where (in meters).


Still common in the networking world; as an example, Ciena optical gear can do Optical Time Domain Reflectometer(try) (OTDR) periodically to measure the length of the physical circuit, and the nature of the test also reveals where cables are joined based on reflections. It's also how you measure where a break is, at which point you dig out the geo-coords for the path the cable takes to work out where someone just ran an optical cable finder (backhoe/JCB digger/whatever you call it where you are).
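
The distance estimate itself is simple once you know the propagation velocity in the medium. A sketch with assumed example numbers (the velocity factor and round-trip time are illustrative, not from any real instrument):

    # Sketch: time-domain reflectometry distance-to-fault estimate.
    v = 0.66 * 3e8         # m/s in the cable/fibre (assumed velocity factor)
    t_round_trip = 500e-9  # seconds between pulse and echo (assumed)

    print(v * t_round_trip / 2.0)   # ~49.5 m to the fault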


The feature is in the Ethernet NIC, not the BIOS. It's been present in pretty much all hardware implementations since the 1980s. There's a fun story of TDR on "thick-wire" Ethernet being used to characterize nuclear detonations[1]

[1] https://web.archive.org/web/20160507110751/http://www.csd.uw...


In the thin-coax ethernet days, a pretty standard tool in the bag was a Fluke[1] portable Time Domain Reflectometer to do this. Much more precise than the NIC-based measurements. Very useful for finding exactly where someone's 'cable management' ended up nailing through the coax, or where someone ignored bend radius limits when installing.

[1] Other vendors made them...I just always had a Fluke


Here is a nice animation of reflections I found useful: https://www.youtube.com/watch?v=ozeYaikI11g


The team at Antmicro put together this pretty cool visualisation flow for this stuff using only open source tools -> https://antmicro.com/blog/2023/11/open-source-signal-integri...


I think EE folks should learn about sheaves. It would help make better sense of how electricity works. There is obviously topological elements they are ignoring.


I don't think anything is missing. RF folks just look at things for what they actually are, from traces to resistors: reactive components. These complexities are fully embraced during FEA.

This is only an issue for people who are crossing into the realm where the "parasitics" they were taught to ignore have become proper components of their circuits.


There are no parasitics, it's a topological phenomenon. AGI will solve this problem though and I know how to do it for $80B. Tell your friends and maybe we can get VCs to finance AI for electrical engineering. Solving electrical engineering requires a lot of money. "Electromagnetic Theory and Computation" is a good reference on topological properties of electromagnetism but AGI would already have the relevant knowledge and agree with me. Homology and Cohomology would be standard training for all future AGI dealing with electricity.


> There are no parasitics, it's a topological phenomenon.

This is why my only use of the word was in quotes. I'm not disagreeing. RF people, where these effects are non-negligible/primary, embrace this topological reality completely, with their tooling. What you're saying isn't new, it is known. The only people calling it "parasitics" are coming from the simplified models, where these effects were negligible or mitigated by some rule-of-thumb to make them negligible.

There has been work on what you're saying for many decades, as can be seen in the tooling. The AI aspect of it has been attempted through the decades, without success, yet. But, this is a very active area of research.


Which company in the a16z portfolio is working on this?


Interesting how similar the reaction is to an equivalent gas dynamics problem.



