<azonenberg>
So i think the easiest way would be to add that as metadata to the waveform object
<azonenberg>
we have x and y axis units
<azonenberg>
so we could add uncertainty headers to each as well
<azonenberg>
i'm thinking for a first order implementation, uncertainties are probably gain and offset
<azonenberg>
i.e. there is an error term that varies linearly with the magnitude of the sample
<azonenberg>
and an error term whose magnitude is constant
<azonenberg>
so basically INL and DNL
<azonenberg>
given the magnitude of a sample you could then calculate the total uncertainty as errorA + value*errorB
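A minimal C++ sketch of that gain/offset model; the struct and member names are hypothetical and do not exist in scopehal today:

```cpp
#include <cmath>

// Hypothetical sketch of the gain + offset error model described above.
// Names are illustrative only, not part of the existing scopehal waveform classes.
struct UncertaintyModel
{
	float m_offsetError;	// constant error term, in the same units as the samples
	float m_gainError;		// fractional error that scales with sample magnitude

	// Worst-case bound on a single sample: errorA + |value|*errorB
	float GetUncertainty(float value) const
	{ return m_offsetError + std::fabs(value) * m_gainError; }
};
```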
<d1b2>
<Hardkrash> the uncertainties should linearly compute themselves.
<azonenberg>
then we'd probably want another value indicating the type of uncertainty too
<azonenberg>
is this RMS error? peak to peak? 95% confidence?
<azonenberg>
anyway, file a ticket
<azonenberg>
i think it would be a useful feature but we need to spend a bit more time thinking about how to design it
<azonenberg>
in particular, should it be a property of the unit object or a separate attribute?
<azonenberg>
and is there any reason we could ever need a per-sample error (i.e. some measurements have different uncertainties than others in the same waveform)? That would be much more resource-intensive than a fixed uncertainty object attached to the entire waveform
<azonenberg>
Maybe the best option is for the uncertainty to be part of the unit object?
<azonenberg>
so that way we can appropriately handle linear combinations of different units with different error terms
<azonenberg>
We'd also have to figure out how to properly propagate uncertainties through filters
<azonenberg>
for example if you add or subtract two channels the uncertainty of the result is now the sum of the errors of the inputs, right?
<d1b2>
<Hardkrash> I can't recall the specifics, long time ago i knew from college lab classes. now I let that library compute it for me.
<azonenberg>
in fact, if we wanted to be nice and rigorous
<d1b2>
<Hardkrash> but what you said sounds about right.
<azonenberg>
we could even include default error terms in instrument driver classes based on datasheet specs
<d1b2>
<Hardkrash> Ohh, Ahh!
<azonenberg>
and have the option for the user to include actual error terms based on calibrations
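A rough sketch of that idea, reusing the hypothetical UncertaintyModel struct above; no scopehal driver has methods like these today:

```cpp
#include <cstddef>
#include <map>

// Hypothetical sketch: a driver ships datasheet-derived default error terms,
// which a user-supplied calibration can override per channel.
class ExampleScopeDriver
{
public:
	// Datasheet spec, e.g. +/-(1.5% of reading + 2 mV)
	virtual UncertaintyModel GetDefaultUncertainty(size_t /*channel*/)
	{ return UncertaintyModel{ 0.002f, 0.015f }; }

	// Replace the datasheet defaults with measured calibration data
	void SetCalibratedUncertainty(size_t channel, const UncertaintyModel& cal)
	{ m_calibration[channel] = cal; }

	// Calibration wins if present, otherwise fall back to the datasheet spec
	UncertaintyModel GetUncertainty(size_t channel)
	{
		auto it = m_calibration.find(channel);
		return (it != m_calibration.end()) ? it->second : GetDefaultUncertainty(channel);
	}

protected:
	std::map<size_t, UncertaintyModel> m_calibration;
};
```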
<d1b2>
<Hardkrash> that's getting fancy!
<d1b2>
<Hardkrash> i think that this really only applies to output units used to set up the scales, not per sample.
<azonenberg>
Yeah i agree, i can't think of any situation where per-sample errors would be useful
<azonenberg>
So i think this should be an extension of the unit class
<d1b2>
<Hardkrash> so should keep the computations minimal.
<azonenberg>
so a unit could hypothetically be "volts with a 95% confidence interval of +/- 5 mV offset and 1% magnitude"
<d1b2>
<Hardkrash> yep, and when divided by 0.1mΩ 1% resistor -> Amps at xyz
<azonenberg>
Exactly
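For illustration, extending the unit class might look something like the sketch below. It combines relative errors with a simple worst-case linear sum, which (as discussed further down) is only strictly correct for correlated errors; the types and members are hypothetical, not the real scopehal Unit class:

```cpp
#include <string>

// Hypothetical sketch of a unit carrying its own uncertainty spec, e.g.
// "volts, 95% CI of +/- 5 mV offset and 1% of reading".
struct UnitWithUncertainty
{
	std::string	m_name;				// "V", "A", ...
	float		m_offsetError;		// absolute term, in this unit
	float		m_relativeError;	// fractional term (0.01 == 1%)

	// Dividing volts by ohms yields amps; the relative errors combine.
	UnitWithUncertainty operator/(const UnitWithUncertainty& rhs) const
	{
		UnitWithUncertainty ret;
		ret.m_name			= m_name + "/" + rhs.m_name;
		ret.m_relativeError	= m_relativeError + rhs.m_relativeError;	// worst-case linear sum
		ret.m_offsetError	= 0;	// offset terms need actual sample values to propagate
		return ret;
	}
};

// Example: volts measured at 1% divided by a 1% shunt resistor
//   -> amps with roughly 2% worst-case relative error
```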
<azonenberg>
if we wanted to be *really* fancy we could add a rendering mode with error bars :p
<d1b2>
<Hardkrash> and we now found the next herd of yaks...
<sorear>
the problem with sticking an uncertainty on each data point is that people are going to assume the uncertainties are uncorrelated, but most of the causes you just mentioned are systematic
<d1b2>
<Hardkrash> interesting point
<azonenberg>
it doesn't matter if you just sum the extrema, right?
<sorear>
heteroskedasticity seems like more of a thing to deal with _after_ filters, especially division...
<azonenberg>
And yes. propagating uncertainty through filters will be its own adventure
<sorear>
if they're uncorrelated you add them in quadrature, if they're correlated you add them linearly
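In code form, that combination rule might look like this sketch (not an existing scopehal helper):

```cpp
#include <cmath>

// Sketch of the rule above: uncorrelated uncertainties add in quadrature
// (root-sum-square), fully correlated ones add linearly.
float CombineUncertainties(float sigmaA, float sigmaB, bool correlated)
{
	if(correlated)
		return sigmaA + sigmaB;
	return std::sqrt(sigmaA*sigmaA + sigmaB*sigmaB);
}

// e.g. summing two uncorrelated +/- 1 mV channels gives about +/- 1.41 mV on
// the result, not +/- 2 mV; the linear sum is the conservative correlated bound.
```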
<azonenberg>
Hmmm
<azonenberg>
i think to start, we'll have uncertainty be a property of an instrument channel only, supported on just a handful of drivers until we figure out a good way to present it to the user
<azonenberg>
the unit object's default uncertainty will be "unspecified" meaning accuracy of the measurement is unknown
<azonenberg>
and we can gradually propagate it more and more as we improve handling of it
<azonenberg>
make sense?
<azonenberg>
i don't want to ever imply a measurement is more accurate than it is
<azonenberg>
So we'll start by having all measurements be unknown error, then gradually pin down bounds
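One possible shape for that unknown-by-default policy, purely as a hypothetical sketch:

```cpp
// Hypothetical sketch: the channel-level uncertainty defaults to UNSPECIFIED,
// so nothing ever implies a measurement is more accurate than it actually is.
enum class UncertaintyType
{
	UNSPECIFIED,	// default: accuracy of the measurement is unknown
	PEAK_TO_PEAK,
	RMS,
	CONFIDENCE_95
};

struct ChannelUncertainty
{
	UncertaintyType	m_type			= UncertaintyType::UNSPECIFIED;
	float			m_offsetError	= 0;	// constant term
	float			m_gainError		= 0;	// fractional term
};
```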
<d1b2>
<Hardkrash> sounds like a reasonable approach, might be good to get some input from research lab folks.
<d1b2>
<Hardkrash> How they would want their data rendered from oddball things.
<azonenberg>
Yeah the xdevs people would be good to talk to about this
<azonenberg>
File a ticket against the scopehal repo first, we need to define the object model
<azonenberg>
then we'll think separately about how to render it in scopehal-apps
<azonenberg>
hardkrash: If you file the ticket now i'll tweet out a link and see if i can get some commentary from folks on how best to handle it
<azonenberg>
that should get more eyes on it
<d1b2>
<Hardkrash> Working on it.
ericonr has joined #scopehal
<d1b2>
<Hardkrash> how is this copy? """ Uncertainties with data channels collected either from test equipment or test setups could provide insight into how much you can trust collected and presented data. Starting at a base level of test equipment capabilities and then combinations of other parameters, how do we store this metadata and how does it pass from raw data through filters and up to the user for presentation? One example from IRC conversations would be: >
<d1b2>
a unit could hypothetically be "volts with a 95% confidence interval of +/- 5 mV offset and 1% magnitude" This would be measured by a scope channel, then a math filter might divide that channel with a 100mΩ 1% resistor to yield Amp units with the proper combination of error propagation. Linear error propagation may not be the correct option for errors, further discussion is needed. """
<azonenberg>
Yeah
<_whitenotifier-4>
[scopehal] hardkrash opened issue #399: RFC: Uncertainties associated with channel data. - https://git.io/JYkYZ
<azonenberg>
ok i need to do a full writeup on xdevs
<azonenberg>
But my D1330-PS probe system came in
<azonenberg>
this is hilarious
<azonenberg>
Do you know what the mounting bracket they use for the handheld browser (Dxx30-PT) is?
<azonenberg>
Lego Technic parts
<azonenberg>
All of the Dxx0-PT-INTERLOCK parts are unmodified lego pieces. I had my suspicions just from looking at them, then popped one under the microscope and found a LEGO(R) stamp on the plastic
<azonenberg>
The handpiece Dxx0-PT-WAND and Dxx0-PT-XYZ-POSITIONER, as well as the ball joint Dxx0-PT-SWIVEL, appear to be custom devices including a mix of custom injection molded plastic, custom machined metal, and lego pieces
<azonenberg>
i mean i'm not complaining, it works well
<azonenberg>
i may have to look into lego parts for my own probe holding purposes lol
<monochroma>
XD
<monochroma>
azonenberg: any photos?
<azonenberg>
Not yet, it just came in a couple minutes ago. i'll text you a quick phone pic
<azonenberg>
the gray piece has an actual lego stamp on it, several of the others are obvious lego technic parts as well
<azonenberg>
The wand and Dxx30-PT look to be technic parts with custom plastic around them
<azonenberg>
curious if the lego group knows or if they just went to the closest toys-r-us and ordered a bunch lol
<miek>
reminds me of that company making near-field probes that uses ballpoint pen parts
<azonenberg>
nearfield probes?
<azonenberg>
the only probe i know of that's built into a ballpoint pen is the auburn instruments transmission line probe
<azonenberg>
the rather sloppy AKL-PT1 competitor that claims 12 GHz of absurdly flat bandwidth
<azonenberg>
I talked to an engineer at signalhound who uses it, says it's a solid probe for narrowband usage but definitely is not flat and needs to be calibrated with a signal generator to get an accurate response
<azonenberg>
i.e. you have to know what the attenuation is at your freq of interest and it's not great for broadband stuff like fast digital
<azonenberg>
has an 8+ dB S21 range across the band
<azonenberg>
Anyway i love the idea of using legos for probe positioning and i may have to experiment a bit lol
<d1b2>
<mubes> This SDS2104X+ is likely to take a while. I've made a new driver but there's a lot of borkedness in the SCPI....like no INR command and :TRIGGER:STATUS? stops responding to you when the scope is in stop mode 😦 You can also make it fall over in a big heap if you look at it wrong.
<d1b2>
<mubes> ...hoping some of this is stuff I'm doing wrong and/or can work around as opposed to real issues.