Embedded portable device with a teeny ARM processor. Sadly, no room for linux anything or even an RTC. Every time it connected to a phone, the phone would set its clock so the timestamps were somewhat close to being accurate.
However, if you swapped out the AAA battery and DIDN’T connect it to the phone at least once, all your subsequent readings would go back to zero epoch and would be forgotten 🤷🏻♂️
Good times.
Some absolute and utter legend of a man made a Unix kernel for the fucking ZILOG Z80, you have no excuses
(It’s called UZI and it’s written in K&R C for some obscure CP/M compiler)
If it had been up to me, I would have included a proper real-time-clock in the design and done things a lot differently.
But the device was designed by one company and the BLE and processor module by another. For some ungodly reason neither trusted the other, so nobody was given access to the firmware source on either side. I worked for a third company that was their customer paying the bill. I was allowed to see the firmware for both sides, but only read-only, on laptops provided by each company, one at a time, in a conference room with their own people watching everything. Yeah, it was strange.
I was there because the MCU and the BLE processor sometimes glitched and introduced random noise. Turned out the connection between the two parts was an unshielded UART with no error detection/correction 🤦🏻♂️
It was coincidental that we hit the date glitch. Took all our effort just to get them to add a checksum and retry. The tiny MCU was maxed out on code space - no way to fit in any more code for date math.
God I’m sorry you had to go through that much middle management bullshit
Thanks. On the plus side, I got to try ‘soup dumpling’ – still the best I’ve ever had. And Kaoliang, the most gut-busting distilled beverage known to mankind. OTOH, the product shipped, won lots of awards, and got national coverage for the company.
Nothing to do with timezones, but still, fun times.
too bad Unix time only has 14 years of life left in it.
Edit: this only applies to 32-bit Unix time. The 64-bit lifespan is a little longer, at 584 billion years. Whoops lol.
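If you want to see exactly where the signed 32-bit limit lands, here's a quick Python sketch (just illustrative):

```python
from datetime import datetime, timezone

# Largest value a signed 32-bit time_t can hold.
limit = 2**31 - 1
print(datetime.fromtimestamp(limit, timezone.utc))  # 2038-01-19 03:14:07+00:00
```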
No
Unix time doesn’t help with timezones… It’s always in UTC.
Unix timestamps also get a bit weird because of leap seconds. Unix timestamps have no support for leap seconds (the POSIX spec says a Unix day is always exactly 86400 seconds), so a positive leap second is usually handled by repeating a timestamp. This means the timestamp is ambiguous for that repeated second - one timestamp actually refers to two different moments in time. To quote the example from Wikipedia:

Unix time numbers are repeated in the second immediately following a positive leap second. The Unix time number 1483228800 is thus ambiguous: it can refer either to the start of the leap second (2016-12-31 23:59:60) or the end of it, one second later (2017-01-01 00:00:00). In the theoretical case when a negative leap second occurs, no ambiguity is caused, but instead there is a range of Unix time numbers that do not refer to any point in UTC time at all.
Some systems instead spread a positive leap second across the entire day (making each second a very very tiny bit longer) but technically this violates POSIX since it’s modifying the length of a second.
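You can see the ambiguity with a couple of lines of Python - just a sketch using calendar.timegm, which does the plain POSIX 86400-seconds-per-day arithmetic, so it behaves the same way a Unix timestamp does:

```python
import calendar
import time

# POSIX arithmetic: a day is always 86400 seconds, so the leap second
# 2016-12-31 23:59:60 and the following 2017-01-01 00:00:00 collapse
# to the same Unix time number.
leap  = calendar.timegm((2016, 12, 31, 23, 59, 60, 0, 0, 0))
after = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))
print(leap, after)  # 1483228800 1483228800

# Going back the other way can only ever give you one of the two moments;
# gmtime() will never show a :60 second for this value.
print(time.gmtime(1483228800))  # 2017-01-01 00:00:00
```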
Aren’t timestamps fun?
Luckily, the standards body that deals with leap seconds has said they’ll be discontinued by 2035, so at least it’s one less thing that developers dealing with timestamps will have to worry about.
Don’t try to write your own date/time code. Just don’t. Use something built by someone else.
Did they figure out a way of making the earth spin more reliably per how the humans want it to?
If I remember correctly, they’re updating the standards to allow for more deviation between UTC time and “actual time”. They’ll likely replace leap seconds with a leap minute that happens much less frequently, implemented by spreading it across the whole day, similar to the leap second workaround I mentioned.
Unix timestamp is always in UTC, which is why it's helpful. It's seconds since Jan 1st 1970 UTC. Libraries usually let you specify a timezone if you need to convert to/from a human-readable string.
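For example, in Python (zoneinfo is in the stdlib from 3.9; the zone name is just for illustration, and on some platforms you may need the tzdata package for the timezone database):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts = 1483228800  # seconds since 1970-01-01 00:00:00 UTC, no timezone attached

# The timestamp itself is timezone-free; the zone only matters when you render it.
print(datetime.fromtimestamp(ts, timezone.utc))             # 2017-01-01 00:00:00+00:00
print(datetime.fromtimestamp(ts, ZoneInfo("Asia/Taipei")))  # 2017-01-01 08:00:00+08:00
```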
…yes that’s why UNIX timestamps are helpful, because it’s a constant standard across all the libraries.
Then that system should be trashed.
Any time you show the time to a user, you have to use a timezone. That's why the Unix timestamp has limited usefulness - it doesn't do a lot on its own, and practically all use cases for times require the timezone to be known (unless you're dealing with a system that can both store and display dates in UTC). Even for things like "add one week to this timestamp", you can't do that without being timezone-aware, since a week isn't always an exact number of seconds - you need to take Daylight Saving transitions and leap seconds into account.
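A minimal sketch of the "add one week" trap in Python, using the 2024 US spring-forward as the example date (assumes stdlib zoneinfo plus available tz data; the zone and dates are just for illustration):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")

# A wall-clock time two days before the 2024-03-10 DST jump (02:00 -> 03:00).
start = datetime(2024, 3, 8, 12, 0, tzinfo=tz)

# Timestamp-only approach: add exactly 7 * 86400 seconds.
plus_604800s = datetime.fromtimestamp(start.timestamp() + 7 * 86400, tz)

# Calendar-aware approach: "same wall-clock time, one week later".
# (In Python, timedelta arithmetic on an aware datetime moves the wall clock;
# other libraries and languages differ, which is exactly why this bites people.)
plus_1_week = start + timedelta(days=7)

print(plus_604800s)  # 2024-03-15 13:00:00-04:00  <- shifted an hour by the DST change
print(plus_1_week)   # 2024-03-15 12:00:00-04:00
```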
A lot of systems just don't handle leap seconds well. Many years ago, Reddit was down for four hours because their systems couldn't deal with leap seconds. Smearing the extra second across the whole day causes fewer issues as software doesn't have to be built to handle an extra second in the day.

This is why we have pre-built libraries and Unix time.
Careful with the exact phrasing here - while the epoch was at midnight in GMT, the time from which time_t is measured also exists in other timezones.
Then the library that does it should be trashed.
https://en.wikipedia.org/wiki/Unix_time
UNIX time is trash.