Unix Timestamps: Epoch Time & Date Conversions
Unix timestamps power every log file, database record, and API call. Understand how epoch time works, why it matters, and how to convert, compare, and debug timestamps like a pro.
What Is a Unix Timestamp?
A Unix timestamp — also known as epoch time, POSIX time, or Unix time — is a system for tracking time as a single running total of seconds. Specifically, it counts the number of seconds that have elapsed since January 1, 1970, 00:00:00 UTC, a moment known as the Unix epoch. This deceptively simple concept underpins virtually every modern computer system.
The beauty of Unix timestamps lies in their universality. While human-readable dates vary by timezone, locale, and format (is it MM/DD/YYYY or DD/MM/YYYY?), a Unix timestamp is a single integer that means the same thing everywhere. The timestamp 1709510400 represents March 4, 2024, 00:00:00 UTC regardless of whether you're in Tokyo, New York, or London.
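This universality is easy to verify. The sketch below (Python, standard library only) decodes the timestamp from the paragraph above; the same integer yields the same UTC instant on any machine, and only the local rendering differs:

```python
from datetime import datetime, timezone

# The same integer decodes to the same UTC instant everywhere;
# only the *local* rendering depends on the viewer's timezone.
ts = 1709510400
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
print(utc.isoformat())  # 2024-03-04T00:00:00+00:00
```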
Unix timestamps are used in database records, log files, API responses, JWT tokens, file systems, cron jobs, cache expiration headers, and countless other places where a compact, unambiguous, timezone-neutral time representation is needed.
Seconds vs Milliseconds: The Precision Question
The original Unix timestamp counts seconds since the epoch, producing a 10-digit number for current dates (e.g., 1709510400). However, many modern systems use millisecond precision — 13-digit numbers (e.g., 1709510400000) — to capture sub-second accuracy. This distinction is one of the most common sources of time-related bugs in software.
JavaScript's Date.now() and Java's System.currentTimeMillis() return milliseconds. Python's time.time() returns seconds as a float. PostgreSQL's EXTRACT(EPOCH FROM timestamp) returns seconds. Knowing which precision your system expects is critical: dividing when you should multiply (or vice versa) produces dates stuck in early 1970, or tens of thousands of years in the future.
A practical rule: if the number has 10 digits, it's seconds. If it has 13 digits, it's milliseconds. If it has 16 digits, it's microseconds (used by some databases and high-frequency systems). Zutily's converter auto-detects the precision for you, but understanding this distinction will save you hours of debugging.
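The digit-count rule above can be sketched as a small classifier. The function name `detect_precision` is illustrative (this is not how any particular converter implements it), and the heuristic only holds for dates roughly between 2001 and 2286, where second-precision epochs have 10 digits:

```python
def detect_precision(ts: int) -> str:
    """Heuristic: classify an epoch value by its digit count.
    Only reliable for dates roughly between 2001 and 2286."""
    digits = len(str(abs(ts)))
    if digits <= 10:
        return "seconds"
    if digits <= 13:
        return "milliseconds"
    if digits <= 16:
        return "microseconds"
    return "nanoseconds"

print(detect_precision(1709510400))     # seconds
print(detect_precision(1709510400000))  # milliseconds
```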
Converting Timestamps: The Core Operations
The two fundamental operations are timestamp-to-date and date-to-timestamp. To convert a Unix timestamp to a date in JavaScript: new Date(timestampInMs). To go the other direction: Math.floor(date.getTime() / 1000) for seconds or date.getTime() for milliseconds.
In Python: datetime.fromtimestamp(ts, tz=timezone.utc) converts a timestamp to a timezone-aware UTC datetime object (the older datetime.utcfromtimestamp(ts) is deprecated as of Python 3.12 because it returns a naive datetime). In Java: Instant.ofEpochSecond(ts) or Instant.ofEpochMilli(ts). In SQL (PostgreSQL): TO_TIMESTAMP(ts) converts epoch seconds to a timestamp with time zone. In PHP: date('Y-m-d H:i:s', $ts).
When converting dates to timestamps, always be explicit about the timezone. The same calendar date and time represent different epoch values in different timezones. 'March 4, 2024, 12:00:00' in UTC is 1709553600, but in EST (UTC-5) it's 1709571600 — a difference of 18,000 seconds (5 hours).
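The 18,000-second gap described above can be reproduced directly. In this Python sketch, the same wall-clock time is attached to two different offsets before converting to an epoch value (a fixed UTC-5 offset stands in for EST here; real applications should use a named zone):

```python
from datetime import datetime, timezone, timedelta

naive = datetime(2024, 3, 4, 12, 0, 0)  # 'March 4, 2024, 12:00:00', no timezone yet
as_utc = naive.replace(tzinfo=timezone.utc)
as_est = naive.replace(tzinfo=timezone(timedelta(hours=-5)))  # fixed UTC-5 offset

print(int(as_utc.timestamp()))  # 1709553600
print(int(as_est.timestamp()))  # 1709571600
print(int(as_est.timestamp() - as_utc.timestamp()))  # 18000 seconds = 5 hours
```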
Our online converter handles both directions and outputs results in multiple formats simultaneously — UTC string, ISO 8601, local time, relative time, day of week, day of year, and week number — so you can quickly verify and cross-reference.
The Year 2038 Problem (Y2K38)
The Year 2038 problem is the time-domain equivalent of the Y2K bug. Systems that store Unix timestamps as 32-bit signed integers can only represent dates up to January 19, 2038, at 03:14:07 UTC — the timestamp 2147483647 (the maximum value of a signed 32-bit integer). One second later, the integer overflows and wraps to -2147483648, which corresponds to December 13, 1901.
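The wraparound can be simulated in Python by forcing the overflowed value back through a signed 32-bit representation (Python integers don't overflow on their own, so `struct` is used here purely to mimic 32-bit behavior):

```python
import struct
from datetime import datetime, timezone

MAX_INT32 = 2**31 - 1  # 2147483647, the last representable second

# Mimic a signed 32-bit overflow: pack MAX_INT32 + 1 as unsigned, unpack as signed
wrapped, = struct.unpack("<i", struct.pack("<I", (MAX_INT32 + 1) & 0xFFFFFFFF))
print(wrapped)  # -2147483648

print(datetime.fromtimestamp(MAX_INT32, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))    # 1901-12-13 20:45:52+00:00
```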
While most modern 64-bit systems are unaffected (a 64-bit signed integer can represent dates until the year 292,277,026,596), embedded systems, legacy databases, and older file formats still use 32-bit timestamps. IoT devices, industrial controllers, and financial systems with long planning horizons are particularly at risk.
The fix is straightforward: migrate to 64-bit timestamps. Most modern operating systems, databases, and programming languages have already made this transition. Linux kernels since 5.6 use 64-bit timestamps even on 32-bit hardware. However, data stored in legacy 32-bit formats still needs to be migrated before 2038.
Timezones, UTC, and Common Pitfalls
Unix timestamps are inherently UTC — they represent a specific instant in time without any timezone offset. This is both their greatest strength and a common source of confusion. When you see 'Monday, March 4, 2024' as the output of a timestamp conversion, that date is in UTC unless your converter explicitly applies a timezone offset.
The most common timezone mistake is creating a Date object from a timestamp and then displaying it without specifying UTC. In JavaScript, new Date(ts * 1000).toString() shows the date in the browser's local timezone, while new Date(ts * 1000).toUTCString() shows UTC. This difference catches countless developers.
Daylight Saving Time (DST) adds another layer of complexity. Some UTC offsets change twice a year — EST (UTC-5) becomes EDT (UTC-4) in summer. If your application stores local times instead of UTC timestamps, you can end up with ambiguous or non-existent times during DST transitions. The golden rule: always store time as UTC timestamps, and convert to local time only for display.
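The "store UTC, convert for display" rule looks like this in Python, using the standard-library `zoneinfo` module (available since Python 3.9). Note that March 4, 2024 falls before the US DST switch on March 10, so New York is still on EST (UTC-5):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

ts = 1709553600  # stored once, as a UTC epoch value

utc = datetime.fromtimestamp(ts, tz=timezone.utc)
local = utc.astimezone(ZoneInfo("America/New_York"))  # convert only for display

print(utc.isoformat())    # 2024-03-04T12:00:00+00:00
print(local.isoformat())  # 2024-03-04T07:00:00-05:00 (EST; DST hasn't started yet)
```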
Our converter shows both UTC and your local time simultaneously, making it easy to verify timezone conversions and catch offset errors.
Timestamps in Databases
Different databases handle timestamps differently, and understanding these differences prevents subtle data bugs. PostgreSQL's TIMESTAMP WITH TIME ZONE (timestamptz) stores timestamps internally as a UTC microsecond count (measured from PostgreSQL's own epoch of 2000-01-01) and converts to the session timezone on output. MySQL's TIMESTAMP type also stores UTC but converts based on the server's time_zone setting.
MongoDB stores dates as 64-bit millisecond epoch values. SQLite has no native date type — dates are stored as text, real (Julian day), or integer (Unix epoch). Redis uses Unix timestamps for key expiration (EXPIREAT). Understanding your database's internal representation helps you avoid precision loss and timezone errors.
Best practice: always use your database's native timestamp type rather than storing epoch integers as plain integers. Native types enable date arithmetic, range queries, timezone conversion, and proper indexing. If you must store raw epoch values, document whether they're seconds or milliseconds.
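As a minimal illustration of documenting raw epoch storage, the SQLite sketch below names the column with a `_s` suffix to record that it holds seconds, and uses SQLite's `datetime(..., 'unixepoch')` modifier to convert on read (the table and column names are made up for this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The '_s' suffix documents the unit: epoch *seconds*, not milliseconds
conn.execute("CREATE TABLE events (name TEXT, created_at_s INTEGER)")
conn.execute("INSERT INTO events VALUES (?, ?)", ("deploy", 1709510400))

# SQLite's datetime() interprets integer values as epoch seconds with 'unixepoch'
row = conn.execute(
    "SELECT datetime(created_at_s, 'unixepoch') FROM events"
).fetchone()
print(row[0])  # 2024-03-04 00:00:00
```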
Negative Timestamps and Historical Dates
Negative Unix timestamps represent dates before the Unix epoch (January 1, 1970). The timestamp -86400 corresponds to December 31, 1969. The timestamp -2208988800 represents January 1, 1900. This makes Unix timestamps suitable for representing any date in modern history.
However, support for negative timestamps varies by system. JavaScript's Date object handles them correctly, but some databases and APIs may reject or mishandle negative values. If your application needs to store historical dates, verify that your entire stack — from API to database to frontend — supports negative epochs.
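Both negative examples from above can be checked with plain timedelta arithmetic from the epoch, which also sidesteps platforms (notably Windows) where `fromtimestamp()` rejects negative inputs:

```python
from datetime import datetime, timezone, timedelta

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# timedelta arithmetic works for pre-epoch dates on every platform
print(epoch + timedelta(seconds=-86400))       # 1969-12-31 00:00:00+00:00
print(epoch + timedelta(seconds=-2208988800))  # 1900-01-01 00:00:00+00:00
```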
For dates far in the past (before the Gregorian calendar reform of 1582) or far in the future, consider using a different time representation. Unix timestamps are best suited for dates within a few centuries of the present.
Best Practices for Working with Timestamps
Store time as UTC Unix timestamps wherever possible. This eliminates timezone ambiguity, simplifies arithmetic (duration = end - start), makes data portable across systems, and avoids DST-related bugs. Convert to local time only at the display layer.
Use millisecond precision by default in new systems. The storage overhead is negligible (a 64-bit integer instead of a 32-bit one), and millisecond precision prevents rounding errors in duration calculations while providing sufficient accuracy for most applications.
Validate timestamps before processing. Check that the value falls within a reasonable range for your application. A common defensive check: if the timestamp represents a date before your application was created or more than 100 years in the future, it's likely invalid or in the wrong precision.
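One way to sketch that defensive check in Python (the function name `plausible_epoch_seconds` and its bounds are illustrative choices, not a standard API — tune them to your application's actual lifespan):

```python
import time

def plausible_epoch_seconds(
    ts: float,
    earliest: float = 1.0e9,                    # ~Sept 2001; illustrative floor
    horizon_s: float = 100 * 365.25 * 86400,    # 100 years into the future
) -> bool:
    """Reject values outside a sane window.
    A 13-digit millisecond value mistakenly passed as seconds also fails."""
    return earliest <= ts <= time.time() + horizon_s

print(plausible_epoch_seconds(1709510400))     # True  (seconds)
print(plausible_epoch_seconds(1709510400000))  # False (milliseconds, wrong unit)
```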
Use monotonic clocks for measuring durations. System clocks can jump forward or backward due to NTP synchronization, DST changes, or manual adjustments. For measuring elapsed time, use performance.now() in JavaScript, time.monotonic() in Python, or System.nanoTime() in Java.
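In Python that pattern looks like the sketch below: `time.monotonic()` is guaranteed never to go backwards, so the difference of two readings is a safe elapsed-time measurement even if the wall clock is adjusted mid-run:

```python
import time

start = time.monotonic()   # immune to NTP adjustments and wall-clock jumps
time.sleep(0.01)           # stand-in for the work being timed
elapsed = time.monotonic() - start

print(f"elapsed: {elapsed:.3f}s")  # roughly 0.010s, never negative
```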
Zutily's free Unix Timestamp Converter handles both directions — timestamp to date and date to timestamp — with auto-detection, multiple output formats, live clock, and one-click copy. All processing happens entirely in your browser. No data is ever sent to a server.
Try the Tools Mentioned
Free, instant, and private — right in your browser.