Unix Timestamps Explained: A Developer's Complete Reference
Everything developers need to know about Unix timestamps — what they are, how to convert them, common pitfalls, and code examples in JavaScript, Python, SQL, and more.
January 14, 2025
Time is surprisingly hard to handle in software. Time zones, daylight saving transitions, leap seconds, calendar differences between locales — all of these make date/time programming treacherous. The Unix timestamp was invented to solve this problem with a single, universal number. Every developer should understand how it works.
What is a Unix Timestamp?
A Unix timestamp is the number of seconds that have elapsed since midnight UTC on January 1, 1970 — a moment known as the Unix Epoch. It is timezone-independent, unambiguous, and a single integer that machines can easily sort, compare, and do arithmetic on.
For example, the Unix timestamp 1700000000 represents November 14, 2023, 22:13:20 UTC. No matter where in the world a server or client is located, this integer unambiguously identifies the same moment in time.
Seconds vs. Milliseconds
The original Unix timestamp is in seconds. A 10-digit number like 1700000000 is a seconds-based Unix timestamp.
JavaScript (and many modern APIs) uses milliseconds — 1000× the seconds value. Date.now() returns milliseconds. A millisecond timestamp has 13 digits: 1700000000000.
This is a common source of bugs. If you pass a milliseconds timestamp to a function that expects seconds (or vice versa), the result will be wildly wrong: either a date tens of thousands of years in the future, or one stuck near January 1970. Always check which unit your system expects.
Quick rule of thumb: 10 digits = seconds, 13 digits = milliseconds.
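To make that rule concrete, here is a minimal Python sketch of a unit-normalizing helper; the digit-count heuristic and the to_seconds name are illustrative, not a standard library API:
def to_seconds(ts):
    # Heuristic from the rule of thumb above: ~13 digits means milliseconds.
    if len(str(abs(int(ts)))) >= 13:
        return int(ts) // 1000
    return int(ts)

print(to_seconds(1700000000))     # 1700000000 (already seconds)
print(to_seconds(1700000000000))  # 1700000000 (milliseconds, divided down)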
Why January 1, 1970?
The Unix Epoch was chosen somewhat arbitrarily when Unix was being developed at Bell Labs in the early 1970s. The engineers needed a recent, memorable date that was before any of their existing data. January 1, 1970 was a round, recent date that fit the 32-bit integer constraints of early Unix systems. It stuck.
The Year 2038 Problem
On 32-bit systems that store Unix time as a signed 32-bit integer, the maximum value is 2,147,483,647 — which corresponds to January 19, 2038, 03:14:07 UTC. After that moment, the counter overflows to a negative number, causing systems to think the date has rolled back to 1901.
This is the "Y2K38 problem." Modern systems use 64-bit integers, which can represent timestamps for hundreds of billions of years, so this is only a concern for legacy embedded systems, old database schemas using 32-bit integer columns, and some older firmware.
Getting the Current Timestamp in Code
Every major language has a straightforward way to get the current Unix timestamp:
- JavaScript: Math.floor(Date.now() / 1000) (seconds) or Date.now() (milliseconds)
- Python: import time; int(time.time())
- PHP: time()
- Ruby: Time.now.to_i
- Go: time.Now().Unix()
- MySQL: UNIX_TIMESTAMP()
- PostgreSQL: EXTRACT(EPOCH FROM NOW())::INTEGER
- Bash/Linux: date +%s
Converting Timestamps to Human-Readable Dates
JavaScript:
const ts = 1700000000;
const date = new Date(ts * 1000); // multiply by 1000 for ms
console.log(date.toISOString()); // "2023-11-14T22:13:20.000Z"
console.log(date.toLocaleString()); // local timezone
Python:
from datetime import datetime, timezone
ts = 1700000000
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2023-11-14T22:13:20+00:00
Common Use Cases for Unix Timestamps
- Database storage — Storing timestamps as integers is compact, timezone-neutral, and makes range queries fast. Convert to local time only when displaying to users.
- API communication — REST APIs often return timestamps as Unix time to avoid timezone ambiguity and parsing differences across languages.
- JWT expiry — JSON Web Tokens use iat (issued at) and exp (expiry) claims as Unix timestamps (see the sketch after this list).
- Log analysis — Server logs with Unix timestamps can be sorted numerically without any date parsing.
- Cache expiry — Set TTL as a Unix timestamp so cache invalidation is a simple integer comparison.
- Scheduling — Store scheduled job times as Unix timestamps for timezone-independent execution.
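As a sketch of the expiry pattern in the JWT and cache items above (assuming expiry is stored in seconds; the is_expired helper is illustrative), the check reduces to a single integer comparison:
import time

def is_expired(exp_ts, now=None):
    # exp_ts: expiry as a Unix timestamp in seconds, e.g. a JWT "exp" claim
    # or a cache entry's TTL deadline. No timezone handling needed.
    now = int(time.time()) if now is None else now
    return now >= exp_ts

claims = {"iat": 1700000000, "exp": 1700000000 + 3600}   # issued, expires 1 hour later
print(is_expired(claims["exp"], now=1700000000 + 1800))  # False: 30 minutes in
print(is_expired(claims["exp"], now=1700000000 + 7200))  # True: 2 hours in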
Quick Reference
- 1 minute = 60 seconds
- 1 hour = 3,600 seconds
- 1 day = 86,400 seconds
- 1 week = 604,800 seconds
- 1 month ≈ 2,592,000 seconds (30 days)
- 1 year ≈ 31,536,000 seconds (365 days)
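Because everything is counted in seconds, arithmetic with these constants is plain integer math. A small Python sketch (the constant names are just for readability):
import time
from datetime import datetime, timezone

MINUTE, HOUR, DAY, WEEK = 60, 3600, 86400, 604800

now = int(time.time())
one_week_out = now + WEEK             # still just an integer
print(one_week_out - now == 7 * DAY)  # True

# Convert to a human-readable form only at the display boundary:
print(datetime.fromtimestamp(one_week_out, tz=timezone.utc).isoformat())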
Summary
Unix timestamps are the standard way to represent moments in time in software — a single integer counting seconds since January 1, 1970 UTC. They are timezone-neutral, easy to sort and compare, and supported in every programming language. The main pitfalls to watch for are the seconds vs. milliseconds distinction and the Y2K38 issue on 32-bit legacy systems. Use a timestamp converter whenever you need to translate between Unix time and human-readable dates.