
The Year 2038 Problem


posted on Jan, 17 2018 @ 04:21 PM

originally posted by: Tempter
Who runs a 32-bit Unix platform? Anyone?



The major OSs use the LP64 data model, meaning that plain ints are still 32 bits, so the problem can persist even on a 64-bit OS.



posted on Jan, 17 2018 @ 10:49 PM

originally posted by: GetHyped

originally posted by: Tempter
Who runs a 32-bit Unix platform? Anyone?



The major OSs use the LP64 data model, meaning that plain ints are still 32 bits, so the problem can persist even on a 64-bit OS.

That's what I was trying to get at: in most programming languages on most systems, when you write "int" it means a 32-bit integer, and it's very common for inexperienced coders to write software that stores the time in that format.
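To put that in concrete terms, here's a minimal C sketch (my own illustration, assuming a typical LP64 desktop where time_t is 64 bits but int is 32): storing the result of time() in a plain int compiles cleanly today and silently breaks in 2038.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* On most modern platforms time_t is 64 bits wide... */
    time_t now = time(NULL);

    /* ...but a plain int is still 32 bits, so this common habit
       silently narrows the timestamp. */
    int narrow = (int)now;

    printf("full timestamp : %lld\n", (long long)now);
    printf("stored in int  : %d\n", narrow);

    /* One second past the 2038 limit no longer fits in 32 bits;
       on typical two's-complement machines it wraps negative. */
    long long past_limit = 2147483648LL;   /* 2^31 seconds after the epoch */
    printf("2^31 forced into an int: %d\n", (int)past_limit);
    return 0;
}
```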



posted on Jan, 21 2018 @ 10:05 PM

originally posted by: Blue Shift
In 20 years, I'll get right on it!


In 20 years you'll go to jail for breaking into someone else's technology.



posted on Jan, 24 2018 @ 02:17 PM

originally posted by: Tempter
Who runs a 32-bit Unix platform? Anyone?



Lots of smaller consumer hardware is still 32-bit (though by 2038 it won't be). However, there's also a lot of legacy hardware, such as industrial controllers, that still uses 32-bit Unix platforms. It's quite common actually, and when some of these systems have predicted lifespans of 50-75 years, they settle in place and it's a big hassle for businesses to revamp them. In many cases this is going to mean replacing a lot of components, and possibly retooling entire industrial processes, because the hardware won't support changes.



posted on Jan, 24 2018 @ 02:20 PM

originally posted by: TrueBrit
a reply to: wtfatta

Well, I have no idea how to fix this short term issue, between now and 2038, but I will say that from now, all computers and software ought to be built in such a way that the time and date fields can run till the heat death of the universe, without any problems like this arising again.


This problem actually predates the Y2K problem; it was known about in the 70's. The hardware simply wasn't in a place to fix it at the time because memory was at a premium. That's less true these days, though it's still a factor in many embedded systems.

Dealing with large numbers is a rather interesting problem actually.



posted on Jan, 24 2018 @ 05:45 PM
Funnily enough it is a massive problem which is already starting to bite now. For obvious reasons I won't name any names, but several large global banks have already seen failures in systems that calculate mortgage interest over 25 year periods (and longer on commercial mortgages).

Similarly, almost every single core router on the Internet uses a signed 32-bit integer for date-stamping data packets for routing.

Is it really serious? Yes, it absolutely is. Does a 64-bit (or 128-bit, or whatever gets invented in the future) system solve this problem? No.

The real problem is that when Unix was created - and indeed most of the operating systems in use today - it was assumed that in 50-odd years' time something new would replace them.

This was correct of course, but two issues remained: nobody changed how date and time are handled because they didn't want to break compatibility, and every other OS was written to be loosely compatible with things like date standards.

There are several ways of fixing it, but the problem is that there is no "one right way" to do it. Changing the date to an unsigned integer fixes the issue on *nix, but breaks everything that interconnects with it. Changing values to a 64-bit integer (signed or unsigned) means many systems - even some "64-bit" systems - can't then handle the long word in one cycle, which breaks interrupt timings. And simply setting a new epoch means nothing before the new epoch is considered valid data.
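As a rough illustration of those trade-offs (a C sketch of my own, not anything from the systems being described): the same 32 bits read as unsigned buy time until 2106 but turn every pre-1970 date into garbage, while a 64-bit counter removes the limit for practical purposes.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t  signed_max   = INT32_MAX;   /* runs out 19 Jan 2038 */
    uint32_t unsigned_max = UINT32_MAX;  /* runs out 7 Feb 2106 */
    int64_t  wide_max     = INT64_MAX;   /* roughly 292 billion years of seconds */

    printf("signed 32-bit limit   : %d\n",   signed_max);
    printf("unsigned 32-bit limit : %u\n",   unsigned_max);
    printf("signed 64-bit limit   : %lld\n", (long long)wide_max);

    /* The catch with the unsigned trick: timestamps before 1970 are
       negative, and reinterpreting them as unsigned makes them look
       like dates far in the future. */
    int32_t day_before_epoch = -86400;   /* 31 Dec 1969 */
    printf("pre-epoch value reread as unsigned: %u\n",
           (uint32_t)day_before_epoch);
    return 0;
}
```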

So it is a very real challenge - however many people are genuinely working on the problem.

The Y2k problem was really only a 'non-problem' because many industries (especially my industry - telecoms) started locking developers into rooms to 'fix' the issues about 6 years before it. By the time the mass media was reporting the Y2k bug and predicting the end of the world, practically all essential systems had already been patched or workarounds were in place - the biggest risk was smaller 'custom' systems, generally used for accounting and the like, which were written by small independent outfits, many of whom ceased to exist long before any issues were known about.

So yes, a very real problem, much greater than the Y2k bug and a lot more technically challenging (the signed 32-bit date format was how we fixed Y2k, as before that the date was always an unsigned 32-bit format, so we can't use that trick again) but...

...no doom porn - too many of us are working on the problem. The worst-case scenario is generally considered to be embedded devices that are dependent on date and time information. From a consumer point of view, the worst anyone will see is garbage dates in log files on unpatched systems.


(Source : I was one of the monkeys at British Telecom fixing Y2K issues way before Y2k... and it was possibly the most peaceful shift I ever worked on 31st Dec 2000 because the fixes worked and were tested at least 2 years before!)



posted on Jan, 24 2018 @ 06:39 PM

originally posted by: AdKiller
In 20 years you'll go to jail for breaking into someone else's technology.

Oh, I'm just kidding. I'll probably be dead before then.



posted on Jan, 24 2018 @ 07:01 PM
I knew this sounded familiar!

ATS: John Titor's Y2K.

Basically, you need to know the John Titor story a bit: JT was a time traveler from 2036 who goes back to find the holy grail, an IBM 5100, to fix the Y2038 bug!!

Debunked (???, I think so), ATS: John Titor and the Secrect capablities of the IBM 5100.

The real problem will be with the NTP bug that hits in 2036 (Wikipedia).

Also ATS: Reference For John Titor Threads.

The Y2038 bug has been known about for a while (see Wikipedia's entry about AOL crashing). Any 64-bit *NIX OS running on a 64-bit machine is already beyond this one.

Thanks OP for the trip down memory lane!!




posted on Jan, 25 2018 @ 06:24 AM
a reply to: Aazadan

See, time and date data should be the SMALLEST piece of information, no matter how long it runs or how old it gets.

There is WAY more data in a photograph or a video, than should EVER be generated by simply remembering what bloody time of day or night it is, and in which year one happens to find oneself!



posted on Jan, 25 2018 @ 09:02 AM

originally posted by: TrueBrit
a reply to: Aazadan

See, time and date data should be the SMALLEST piece of information, no matter how long it runs or how old it gets.

There is WAY more data in a photograph or a video, than should EVER be generated by simply remembering what bloody time of day or night it is, and in which year one happens to find oneself!


Ok, so basically, the date is stored as seconds from the epoch, which is Jan 1st 1970. It's stored as a 32-bit integer, which means 32 1's and 0's. This has a maximum signed value of 2^31 - 1 (one bit is used for positive vs negative), or 2,147,483,647. So 2,147,483,647 seconds after the epoch the counter overflows: the sign bit flips and the value wraps around to -2,147,483,648, which corresponds to a date in December 1901.

Many programs, as part of requesting the date and time, only reserve 4 bytes in memory (32 bits) for the return value. This will all need to be changed to 8 bytes (64 bits) so that the number received can be larger.

A photograph doesn't have this issue because the memory set aside for it is equal to the size of the photo, and the size of the data for that photo never changes. This problem is related to the amount of memory required slowly incrementing over time.
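A small C sketch of that rollover (assuming a build where time_t is 64 bits, so gmtime can render both values; the numbers come straight from the 32-bit limit described above):

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Print a seconds-since-epoch value as a calendar date (UTC). */
static void show(int64_t seconds)
{
    time_t t = (time_t)seconds;          /* assumes a 64-bit time_t */
    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
    printf("%12lld  ->  %s\n", (long long)seconds, buf);
}

int main(void)
{
    int32_t last_good = INT32_MAX;   /* 2,147,483,647 */

    /* One more second overflows a signed 32-bit counter; on
       two's-complement hardware it wraps to the most negative value. */
    int32_t wrapped = (int32_t)((uint32_t)last_good + 1u);

    show(last_good);   /* 2038-01-19 03:14:07 UTC */
    show(wrapped);     /* 1901-12-13 20:45:52 UTC */
    return 0;
}
```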



posted on Jan, 25 2018 @ 09:44 AM
a reply to: Aazadan


But surely the computer does not need to remember all that time. It only needs to remember which day it actually IS on any day of the week, so that things it stores can be examined in chronological order... but as to keeping a record of days gone by in years past, in many cases this is surely unnecessary, since files created will have their creation date and last alteration dates assigned to them specifically, making all further effort on the computer's part to remember past time absolutely meaningless?

Hell, as long as files within the computer are correctly marked, a computer should only need to remember MUCH less of the time passed while it has been in operation than is currently the case. Sure, computers whose purpose is specifically to record the passage of time may require more memory in this regard, but no general-use computer benefits from remembering how many seconds it has been operational for in its lifetime!



posted on Jan, 25 2018 @ 10:05 AM
a reply to: TrueBrit

It's no more remembering dates than the number 2018 is remembering 2017. It's simply the written expression of what's current. I don't know the exact reason we use milliseconds from a specific date, but I assume it has to do with a combination of performance and memory saving (memory isn't much of an issue on desktops anymore, but it's a big concern in embedded devices). If your datetime operation had to update separate year, month, day, hour, minute, second, and millisecond fields, that's seven operations to retrieve all the information, whereas the current system only requires one; the difference would significantly impact the performance of lower-end devices, again embedded systems.
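For example, a minimal sketch using the standard C library (which counts whole seconds; other APIs count milliseconds, but the idea is the same): the machine stores one running counter and only expands it into the familiar calendar fields when something actually needs them.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* What the system actually stores: a single count of seconds
       since 1 Jan 1970. */
    time_t now = time(NULL);

    /* The calendar fields are derived on demand, not stored. */
    struct tm *cal = gmtime(&now);

    printf("raw counter : %lld\n", (long long)now);
    printf("expanded    : %04d-%02d-%02d %02d:%02d:%02d UTC\n",
           cal->tm_year + 1900, cal->tm_mon + 1, cal->tm_mday,
           cal->tm_hour, cal->tm_min, cal->tm_sec);
    return 0;
}
```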

Basically, it's an issue with writing out dates. In a 32-bit format you can only write out about 4.3 billion values (you may remember this issue with computers a decade ago, where you couldn't functionally go beyond 4 GB of memory). Those values for time end in 2038. There are fixes available, such as massive hardware upgrades to go to 64-bit, or lots of software patches to change the epoch. The problem isn't unsolvable, it's just something that requires attention to fix.



posted on Jan, 25 2018 @ 12:44 PM

originally posted by: Aazadan
a reply to: TrueBrit

I don't know the exact reason we use milliseconds from a specific date,


It's a legacy from the days when data was transferred from machine to machine by tape on batch systems - so the dispatcher on each system knew when to write or read data. For example, Usenet until the mid 90s was distributed by tape. Large organisations used to get a tape nightly, whereas smaller operators may have only received a tape a couple of times a month. The use of millisecond date stamps meant that posts were in the correct threaded order.

Similarly, in transaction processing we use nanoseconds (on GNU systems you'd call $(date +%N) for the nanoseconds within the current second), as this allows a very high-resolution timer for synchronous events, and it previously stopped race conditions because there was no chance of more than one write per nanosecond; of course, now we are pushing that limit.
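For what it's worth, here is a minimal sketch of how a program reads that kind of high-resolution timestamp on a modern POSIX system (using clock_gettime; I'm not claiming this is what the transaction systems described above actually call):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* CLOCK_REALTIME gives whole seconds since the epoch plus a
       separate nanoseconds field. */
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }

    printf("seconds since epoch : %lld\n", (long long)ts.tv_sec);
    printf("nanoseconds part    : %09ld\n", ts.tv_nsec);
    return 0;
}
```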

That is why in Unix we have the system timer in seconds, the event timer in milliseconds and the dispatcher in milliseconds plus nanoseconds.

It is also why, if you compile your own kernel, you can select the timer tick frequency: typically 100 Hz (often used for servers), 250 Hz or 300 Hz (common for desktops) and 1000 Hz for latency-critical systems.



posted on Jan, 25 2018 @ 12:55 PM
a reply to: Mikeapollo

tl;dr

Unix was from the days when computers performed tens of operations per second; it was never designed for systems that run millions of operations per second.

The reason we have these date issues is simply because Unix worked too well - there has never been any reason to change it, and of course every generation of programmers has assumed that creaky old Unix will be replaced by something new that won't have these issues. The reality is that nothing 'better' has come along because we keep patching and improving Unix, which, despite the Windows and Apple fans' views, practically runs every autonomous system you use daily, including telephone exchanges, gateway/edge routers, etc. All 'invisible' tech.



posted on Jul, 25 2018 @ 05:01 AM

The latest time that can be represented in Unix's signed 32-bit integer time format is 03:14:07 UTC on Tuesday, 19 January 2038 (2^31 - 1 = 2,147,483,647 seconds after 1 January 1970).[1] Times beyond that will wrap around and be stored internally as a negative number, which these systems will interpret as having occurred on 13 December 1901 rather than 19 January 2038.


We will travel back in time ... lol

I guess this sounds like a fairly serious issue if it's not fixed. I'm sure our smarties are on top of it, and because they have predicted what will happen in the future, they can prevent us getting stuck in the past.

If a computer reverts back to 1901... will years of prior data be gone from all systems, or just irretrievable? Sorry, I'm not techy enough to know. What about all those data storage facilities... I wonder how they will be affected.

leolady



posted on Jul, 25 2018 @ 05:21 AM
a reply to: wtfatta

This problem will solve itself as Unix upgrades to 64-bit (and eventually larger) operating systems. In fact most already have; for example, Red Hat is a 64-bit operating system that no longer supports 32-bit installs - it will tell you to upgrade.
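If you want to check where your own machine stands, a trivial C sketch (just my illustration) prints the width of time_t; 8 bytes means the platform's clock itself is already past the 2038 limit, though individual programs can still get it wrong.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* 4 bytes: the classic 32-bit time_t that overflows in 2038.
       8 bytes: a 64-bit time_t, good for billions of years. */
    printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
    return 0;
}
```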


