Unix Epoch Timestamps: What They Are and How to Use Them
If you've ever looked at a database column full of numbers like 1711459200 and wondered what on earth they meant, you've encountered Unix epoch timestamps. They're one of the most fundamental concepts in computing, and once you understand them, you'll see them everywhere — in APIs, log files, JWTs, and more.
This guide explains what epoch timestamps are, why they're so widely used, and how to work with them across different programming languages and tools.
What Is Unix Epoch Time?
A Unix epoch timestamp is simply the number of seconds that have elapsed since January 1, 1970, at 00:00:00 UTC. This specific moment in time is called the Unix epoch, and it serves as the universal reference point for timekeeping in computing.
For example, the timestamp 1711459200 represents March 26, 2024, at 13:20:00 UTC. The timestamp 0 is midnight on January 1, 1970. Negative timestamps represent dates before the epoch — -86400 is midnight on December 31, 1969.
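You can verify the reference point directly in Python with nothing but the standard library:

```python
from datetime import datetime, timezone

# Timestamp 0 is the epoch itself: midnight, January 1, 1970 UTC
print(datetime.fromtimestamp(0, tz=timezone.utc))
# 1970-01-01 00:00:00+00:00

# Negative timestamps reach back before the epoch
print(datetime.fromtimestamp(-86400, tz=timezone.utc))
# 1969-12-31 00:00:00+00:00
```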
Why January 1, 1970? When Ken Thompson and Dennis Ritchie were building the original Unix operating system at Bell Labs in the early 1970s, they needed an arbitrary starting point for their time system, and they picked a round, recent date that was easy to work with. The very first implementation actually counted sixtieths of a second, which would overflow a 32-bit counter in under three years, so the unit was soon changed to whole seconds; a signed 32-bit count of seconds spans roughly 136 years, about 68 years on either side of the epoch. The choice stuck, and nearly every operating system and programming language adopted it as the standard.
Why Developers Use Timestamps
There are several compelling reasons why epoch timestamps are the preferred way to represent time in software systems:
- Timezone-independent — A timestamp always represents the same instant in time, no matter where in the world you read it. The number `1711459200` means the same thing in Tokyo, London, and New York. There's no ambiguity about which timezone is implied.
- Trivially sortable — Because timestamps are plain integers, sorting events chronologically is as simple as sorting numbers. No date parsing required.
- Compact storage — A 32-bit integer takes up just 4 bytes. Even a 64-bit timestamp is only 8 bytes. Compare that to a string like `"2024-03-26T16:00:00+00:00"`, which takes 25 bytes.
- Easy arithmetic — Want to know what time it will be in 24 hours? Add `86400` (the number of seconds in a day). Need the difference between two events? Subtract one timestamp from the other.
- Universal across systems — Whether you're working with a PostgreSQL database, a REST API, a Unix shell, or an embedded device, epoch timestamps work the same way. They're the lingua franca of time.
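A few lines of Python make the sortability and arithmetic points concrete (the timestamps below are illustrative values, not anything special):

```python
# Sorting events chronologically is just integer sorting
events = [1711459200, 1704067200, 1774483200]
print(sorted(events))  # [1704067200, 1711459200, 1774483200]

SECONDS_PER_DAY = 86400

# One day later is plain addition
tomorrow = 1711459200 + SECONDS_PER_DAY

# Whole days between two events is plain subtraction and division
days_apart = (1774483200 - 1711459200) // SECONDS_PER_DAY
print(days_apart)  # 729
```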
Converting Timestamps
The most common task with epoch timestamps is converting between a human-readable date and the raw number. Here's how the maths works.
Epoch to Human Date
Take the timestamp 1774540800. To understand what date this represents, you divide by the number of seconds in progressively larger time units:
1774540800 seconds
÷ 86400 seconds/day = 20538.66... days since epoch
÷ 365.25 days/year ≈ 56.23 years after 1970
→ Approximately March 2026
In practice, you'll use a library or tool rather than doing this by hand, because you need to account for leap years and varying month lengths. (Unix time itself simply ignores leap seconds — every day is counted as exactly 86,400 seconds.) But the concept is straightforward: it's just counting seconds from a fixed point.
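The rough arithmetic above, and the proper library call, side by side in Python:

```python
from datetime import datetime, timezone

ts = 1774540800

# Back-of-the-envelope version: divide into larger units
days = ts / 86400      # days since the epoch
years = days / 365.25  # rough years, averaging in leap days
print(round(days, 2))   # 20538.67
print(round(years, 2))  # 56.23  -> early 2026

# The standard library handles leap years and month lengths for you
print(datetime.fromtimestamp(ts, tz=timezone.utc))
# 2026-03-26 16:00:00+00:00
```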
Human Date to Epoch
Going the other direction, to convert March 26, 2026 00:00:00 UTC to a timestamp, you'd count every second from January 1, 1970 to that moment. From 1970 to 2026 is 56 years, which includes 14 leap years (1972, 1976, ..., 2024), giving us:
(56 × 365 + 14) days × 86400 seconds/day
= (20440 + 14) × 86400
= 20454 × 86400
= 1767225600 (January 1, 2026)
+ (84 days from January 1 to March 26) × 86400
= 1767225600 + 7257600
= 1774483200
Tip: Never try to compute timestamps by hand in production code. Every language has built-in functions that handle leap years, month lengths, and edge cases correctly. Use them.
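In Python, for example, the entire hand calculation above collapses to a single call:

```python
from datetime import datetime, timezone

# Midnight, March 26, 2026, explicitly in UTC
dt = datetime(2026, 3, 26, 0, 0, 0, tzinfo=timezone.utc)
ts = int(dt.timestamp())
print(ts)  # 1774483200 — matches the hand calculation
```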
Timestamps in Different Languages
Every major programming language provides a way to get the current Unix timestamp. Here's a quick reference:
| Language | Get Current Timestamp | Unit |
|---|---|---|
| JavaScript | Math.floor(Date.now() / 1000) | Seconds |
| Python | time.time() | Seconds (float) |
| Swift | Date().timeIntervalSince1970 | Seconds (Double) |
| Go | time.Now().Unix() | Seconds |
| Ruby | Time.now.to_i | Seconds |
And here's how to convert a timestamp back to a date object in each language:
// JavaScript
new Date(1774540800 * 1000) // Note: JS Date() expects milliseconds
# Python
from datetime import datetime, timezone
datetime.fromtimestamp(1774540800, tz=timezone.utc)
// Swift
let date = Date(timeIntervalSince1970: 1774540800)
// Go
t := time.Unix(1774540800, 0)
# Ruby
Time.at(1774540800).utc
Tip: In JavaScript, Date.now() returns milliseconds, not seconds. Always divide by 1000 when you need a standard Unix timestamp, and multiply by 1000 when creating a Date from one.
Milliseconds vs Seconds
One of the most common sources of bugs when working with timestamps is mixing up seconds and milliseconds. Some platforms use milliseconds since the epoch (1000x larger numbers), while most use seconds.
| Unit | Used By | Example (March 2026) |
|---|---|---|
| Seconds | Python, Go, Ruby, PHP, Unix shell, JWT, most APIs | 1774540800 |
| Milliseconds | JavaScript, Java, Dart, Elasticsearch | 1774540800000 |
The easiest way to tell which you're looking at: count the digits. In the current era, a seconds-based timestamp has 10 digits (e.g., 1774540800), while a milliseconds-based timestamp has 13 digits (e.g., 1774540800000).
Warning: If you accidentally interpret a millisecond timestamp as seconds, you'll get a date roughly 56,000 years in the future. If you interpret seconds as milliseconds, you'll get a date in January 1970. Both are telltale signs of a unit mismatch.
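The digit-count heuristic can be turned into a defensive check at system boundaries. This is a sketch of the idea, not a guarantee — a value that genuinely represents seconds in the far future would be misclassified:

```python
def normalize_to_seconds(ts: int) -> int:
    """Heuristic: treat 13-digit timestamps as milliseconds."""
    if ts >= 10**12:       # 13+ digits: almost certainly milliseconds
        return ts // 1000
    return ts              # assume it's already in seconds

print(normalize_to_seconds(1774540800000))  # 1774540800
print(normalize_to_seconds(1774540800))     # 1774540800
```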
The Year 2038 Problem
The Year 2038 problem is the epoch timestamp equivalent of the Y2K bug. Here's why it matters.
The original Unix timestamp was stored as a 32-bit signed integer. A signed 32-bit integer can hold a maximum value of 2,147,483,647. That number of seconds after the epoch corresponds to:
January 19, 2038, at 03:14:07 UTC
One second later, the integer overflows. On systems that haven't been updated, the timestamp wraps around to -2,147,483,648, which the system interprets as December 13, 1901. This could cause crashes, data corruption, or wildly incorrect date calculations.
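Python integers never overflow, but the 32-bit wraparound can be simulated with two's-complement arithmetic. A sketch of what a legacy 32-bit system would compute:

```python
def as_int32(n: int) -> int:
    """Reinterpret n as a signed 32-bit integer (two's complement)."""
    return (n + 2**31) % 2**32 - 2**31

MAX_INT32 = 2**31 - 1          # 2147483647 -> 2038-01-19 03:14:07 UTC
print(as_int32(MAX_INT32))      # 2147483647
print(as_int32(MAX_INT32 + 1))  # -2147483648 -> back to December 1901
```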
The fix is straightforward: use a 64-bit integer instead. A signed 64-bit integer can count up to approximately 292 billion years into the future, which should be sufficient. Most modern operating systems, databases, and programming languages have already made this transition:
- Linux migrated to 64-bit `time_t` on 64-bit systems years ago, and added 64-bit support for 32-bit ARM in kernel 5.6 (2020)
- macOS and iOS use 64-bit timestamps on all current hardware
- Windows uses 64-bit FILETIME internally
- Python, Go, Swift, Rust all use 64-bit (or larger) time representations
Warning: The risk isn't in your application code — it's in embedded systems, legacy databases with 32-bit integer columns, and old file formats. If you maintain any system that stores timestamps as 32-bit integers, plan your migration before 2038.
Timezones and UTC
A Unix timestamp is always in UTC. There is no such thing as a "local" epoch timestamp. The number 1774540800 represents the same absolute moment regardless of where you are in the world. Timezone only comes into play when you display that timestamp to a human.
This is actually one of the great strengths of timestamps. You store a single integer, and each user's device converts it to their local time for display. No need to store timezone information alongside the timestamp.
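Python's zoneinfo module (in the standard library since 3.9) makes the store-once, render-locally pattern easy to see; the zone names are IANA timezone identifiers:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

ts = 1774483200  # one instant, stored as a single integer

# Each user's device renders the same instant in their local time
for zone in ("UTC", "America/New_York", "Asia/Tokyo"):
    local = datetime.fromtimestamp(ts, tz=ZoneInfo(zone))
    print(f"{zone:20} {local.isoformat()}")
```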
Common Pitfalls
- Generating timestamps in local time — Some poorly written code converts a local datetime to a timestamp without first converting to UTC. The resulting number is wrong by the UTC offset. Always use UTC-aware functions when generating timestamps.
- Daylight Saving Time gaps — When converting a timestamp to a local time string, be aware that some local times don't exist (when clocks spring forward) and some exist twice (when clocks fall back). The timestamp itself is unambiguous, but the local representation may not be.
- Assuming fixed UTC offsets — A timezone like "US/Eastern" is not always UTC-5. It's UTC-5 in winter and UTC-4 in summer. Never hardcode offsets — use proper timezone databases (like IANA/Olson).
# Wrong: creates a timestamp based on local time
import time
wrong = int(time.mktime(time.strptime("2026-03-26", "%Y-%m-%d")))
# Right: explicitly use UTC
import calendar, time
right = calendar.timegm(time.strptime("2026-03-26", "%Y-%m-%d"))
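The same fix reads more naturally with the datetime module — parse the string, then explicitly declare it to be UTC before converting:

```python
from datetime import datetime, timezone

# Attach UTC to the parsed (naive) datetime, then convert
dt = datetime.strptime("2026-03-26", "%Y-%m-%d").replace(tzinfo=timezone.utc)
print(int(dt.timestamp()))  # 1774483200
```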
Command-Line Tricks
The terminal is one of the fastest ways to work with epoch timestamps. Both Linux and macOS have built-in tools.
Get the Current Timestamp
# Works on both Linux and macOS
date +%s
This prints the current Unix timestamp in seconds, for example 1774540800.
Convert a Timestamp to a Date
# Linux (GNU date)
date -d @1774540800
# macOS (BSD date)
date -r 1774540800
Both commands output something like Thu Mar 26 16:00:00 UTC 2026 when TZ=UTC is set (1774540800 falls at 16:00 UTC, not midnight); by default the date is printed in your local timezone, and the exact format depends on your locale settings.
Convert a Date to a Timestamp
# Linux (GNU date)
date -d "2026-03-26 00:00:00 UTC" +%s
# macOS (BSD date)
date -j -u -f "%Y-%m-%d %H:%M:%S" "2026-03-26 00:00:00" +%s
Quick Arithmetic
# What time is 24 hours from now?
echo $(( $(date +%s) + 86400 )) | xargs date -r # macOS
echo $(( $(date +%s) + 86400 )) | xargs -I{} date -d @{} # Linux
# How many days between two timestamps?
echo $(( (1774540800 - 1711459200) / 86400 )) days
Tip: Remember the magic number 86400 — that's the number of seconds in a day (60 × 60 × 24). Other useful constants: 3600 (one hour) and 604800 (one week).
Real-World Uses
Epoch timestamps are everywhere in modern software. Here are the most common places you'll encounter them:
- API responses — Many REST and GraphQL APIs return timestamps as epoch integers. Stripe, for example, uses this format for `created` and similar fields; others, such as GitHub, return ISO 8601 strings instead, so always check the docs.
- Log files — Syslog, application logs, and structured logging formats (like JSON logs) commonly use epoch timestamps because they're unambiguous and easy to parse programmatically.
- Database columns — Storing dates as integer timestamps in SQLite, PostgreSQL, or MySQL is a common pattern. It avoids timezone confusion and makes range queries fast (just compare integers).
- JWT tokens — JSON Web Tokens use epoch timestamps for the `exp` (expiration), `iat` (issued at), and `nbf` (not before) claims. If you inspect a JWT, these fields are always in seconds since the epoch.
- Cache expiry — HTTP cache headers like `max-age` use seconds: `Cache-Control: max-age=3600` means the resource is fresh for 3600 seconds (one hour). Memcached and Redis use epoch timestamps for key expiration.
- Cron and scheduling — Task schedulers often record last-run and next-run times as epoch timestamps, making it easy to calculate intervals and detect missed runs.
- Build systems and CI/CD — Build timestamps, deployment markers, and artifact versioning frequently use epoch time for unique, sortable identifiers.
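For instance, producing the JWT-style expiry claims mentioned above is plain integer arithmetic. A sketch — the claim names come from the JWT specification, and the one-hour lifetime is an arbitrary choice:

```python
import time

now = int(time.time())
claims = {
    "iat": now,         # issued at
    "nbf": now,         # not valid before
    "exp": now + 3600,  # expires one hour from now
}
print(claims["exp"] - claims["iat"])  # 3600
```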
Convert Timestamps Instantly
BoltKit's EpochTime tool lets you convert between timestamps and human-readable dates in real time, with a live ticker, timezone picker, and bidirectional conversion. Free on iPhone and iPad.
Get BoltKit Free