We serve a lot of video. In fact, just last month, our viewers watched over 15 terabytes of it—and with several new courses planned, our monthly bandwidth will more than double next year.
Aside from making a rather large portion of our courses completely free to view, a big part of the reason our bandwidth is so high is that we offer our videos in full 4K resolution. Creating high-quality videos for programmers is our core business, and serving 4K video was non-negotiable for us.
As a testament to our high bar for quality (and to make things incredibly difficult on ourselves), we also decided to create two versions of each video: a light version and a dark version for when the user changes the UI color scheme. What good is a dark UI if the videos don't match? Unfortunately, this meant doubling the number of videos we needed to encode and store in the cloud.
Huge file sizes, lengthy encoding processes, and hundreds (thousands?) of possible combinations of resolutions, aspect ratios, codecs, bitrates, color spaces, and framerates all make video incredibly complicated, and that's to say nothing of serving it from the cloud in an efficient and reliable manner.
When we built the platform that hosts all of our video courses, we set out to find a video hosting solution that could handle all of that complexity.
There are tons of paid services that handle the hard parts of hosting and delivering video. We looked into several of these options—and while many of them met our needs, they fell short on one critical requirement: price.
In our search for a video hosting solution, we found that most companies scale their pricing on a combination of three things: encoding time, storage (disk space), and bandwidth. We quickly realized that our bandwidth (mostly egress) needs were going to be the main cost driver for us.
That's when I started looking at cloud storage options, and more specifically at cloud storage providers with affordable egress bandwidth rates.
When Cloudflare's R2 Object Storage launched in 2022, it made a splash with its claim of completely free egress bandwidth. That sounds too good to be true, but the economics make sense: Cloudflare already pays for a set amount of data to move in and out of its network, and since its website-protection services mostly consume inbound data, it can afford to let data go out (egress) without charging for it.
R2's free tier offers 10 GB storage / month and 10 million (Class B) requests / month. While we're well under the 10 million requests per month, we are over the 10 GB mark for storage, so we knew we'd have to pay a nominal fee there ($0.015 / GB).
After some digging around on pricing pages and a few sales emails, we were able to compile how R2's storage and delivery costs would stack up against the other providers we were talking to.
Here's what we found:
| Provider | Monthly Cost | How it's calculated |
| --- | --- | --- |
| Cloudflare R2 | $2.18 / mo | free egress + $0.015 / GB storage |
| Vimeo | $1,500 / mo | 15,000 GB × $0.10 / GB bandwidth |
| Wistia | $5,019 / mo | $399 / mo base + 14,000 GB × $0.33 / GB bandwidth |
| Mux | $1,271.88 / mo | 350,000 min × $0.0036 / min bandwidth |
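To sanity-check those figures, here's the arithmetic behind the table as a quick Python sketch. The bandwidth and pricing inputs come straight from the table; the ~155 GB storage figure for R2 is my own assumption, inferred from the $2.18/mo number rather than stated in the source:

```python
# Back-of-the-envelope monthly cost formulas from the pricing table above.

def vimeo_cost(bandwidth_gb, per_gb=0.10):
    # Pure bandwidth pricing.
    return bandwidth_gb * per_gb

def wistia_cost(bandwidth_gb, base=399, per_gb=0.33):
    # Flat plan fee plus overage bandwidth.
    return base + bandwidth_gb * per_gb

def r2_cost(storage_gb, per_gb=0.015):
    # Egress is free; only storage beyond the 10 GB free tier is billed.
    return max(storage_gb - 10, 0) * per_gb

print(f"Vimeo:  ${vimeo_cost(15_000):,.2f}/mo")   # $1,500.00/mo
print(f"Wistia: ${wistia_cost(14_000):,.2f}/mo")  # $5,019.00/mo
print(f"R2:     ${r2_cost(155.3):,.2f}/mo")       # assumed ~155 GB stored
```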
Now, one thing that can't be ignored is that Cloudflare R2 is simply an object storage provider. Vimeo, Wistia, and Mux don't just offer storage; they also handle all of the other complicated parts of hosting and serving online video: encoding, custom web players, analytics, marketing tools, integrations with other services, and the list goes on.
Given our use case and my experience with video encoding (via FFmpeg) and web video players, I saw these prices as paying for a whole lot more than we really needed.
If we were going to host our videos on Cloudflare, the primary thing I needed to solve at this point was how to properly encode and transfer over 20 hours of 4K video to our R2 bucket.
I totally could have just encoded 4K mp4 files, stuck them in a `<video>` element, and been done with it, but our users on slower connections would experience slow initial load times and lots of buffering. To solve this, we needed a way to serve a lower-resolution version of the video based on the user's connection speed.
A little bit of research brought me to adaptive bitrate streaming using HTTP Live Streaming (HLS). This approach allows us to efficiently deliver high-quality video content, adapting to each viewer's internet speed in real-time.
When a user loads one of our videos, their connection speed is automatically detected, and the player selects the best video quality their connection can handle. If their internet speed fluctuates, HLS seamlessly switches to higher or lower bitrates to maintain smooth playback without interruptions. This dynamic adjustment ensures that all our viewers, whether on high-speed broadband or slower mobile networks, get a consistently smooth viewing experience.
HTTP Live Streaming (HLS) is a streaming protocol developed by Apple. It works by breaking a video into a sequence of small HTTP-based video downloads, each typically lasting around 3-8 seconds. These files can be played one after another, allowing the video player to adjust the quality of the stream in real-time based on the viewer's network conditions.
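To make that concrete, the glue holding HLS together is a "master" playlist that points the player at each quality rendition; the player then picks between them based on measured bandwidth. Here's an illustrative example of what one might look like (the bandwidth figures and file names are hypothetical):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1920x1080
1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=8800000,RESOLUTION=3840x2160
2160p.m3u8
```

Each of those variant playlists (`720p.m3u8`, and so on) in turn lists the short `.ts` segments for that rendition.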
When it comes to video encoding, FFmpeg is our go-to tool. FFmpeg is a powerful, open-source multimedia framework that can decode, encode, transcode, mux, demux, stream, filter, and play almost anything that humans and machines have created. It’s a staple in the video processing industry, trusted by many of the companies we’ve mentioned, such as Vimeo and Mux, to handle their encoding needs on their servers.
When you use a cloud video service like Vimeo, Wistia or Mux, they completely handle the encoding for you on their servers. Encoding on the cloud is a must for certain apps and use cases, but our encoding needs are somewhat sporadic and 'lumpy'—happening mostly prior to major course launches a few times per year. So encoding locally (especially on my beefy Mac Studio) was something that worked well for us. By doing so, we get a lot of control over our encoding processes and save some compute costs while we're at it.
ffprobe
To make sure our videos meet the highest quality standards while remaining as efficient as possible, I took a look at the encoding settings used by Vimeo and Mux using the following `ffprobe` command:
```shell
ffprobe -v quiet -print_format json -show_streams -show_format https://stream.mux.com/F3tggXWeen7Dy00coMp1n4j3KHP6ZGhVM.m3u8
```
`ffprobe` inspects multimedia streams and returns super detailed information about them. By analyzing their codecs, formats, and bitrate configurations, we were able to fine-tune our own encoding parameters to strike the right balance between quality and performance.
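As a sketch of what working with that output looks like, here's how you might pull the interesting fields out of `ffprobe`'s JSON with Python. The sample data below is a trimmed, hypothetical `ffprobe` result for illustration, not Mux's actual output:

```python
import json

# Trimmed, hypothetical sample of ffprobe's JSON output (the real output has
# many more fields). In practice you'd capture this from:
#   ffprobe -v quiet -print_format json -show_streams -show_format <url>
sample = json.loads("""
{
  "streams": [
    {"codec_type": "video", "codec_name": "h264", "profile": "High",
     "width": 1920, "height": 1080, "avg_frame_rate": "30/1"},
    {"codec_type": "audio", "codec_name": "aac",
     "sample_rate": "48000", "channels": 2}
  ],
  "format": {"format_name": "hls", "bit_rate": "2500000"}
}
""")

# Find the video stream and report the settings we care about.
video = next(s for s in sample["streams"] if s["codec_type"] == "video")
print(video["codec_name"], video["profile"],
      f'{video["width"]}x{video["height"]}')  # h264 High 1920x1080
```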
We've crafted a custom FFmpeg script to streamline the encoding process. This script automates the conversion of high-resolution videos into various formats suitable for HLS.
Here's a sample of the FFmpeg command we use:
```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder inputs: point INPUT_FILE at your source video. With 4-second
# segments, GOP_SIZE should be 4x the framerate (e.g. 96 at 24 fps) so each
# segment starts on a keyframe.
INPUT_FILE="input.mp4"
OUTPUT_DIR="$(basename "$INPUT_FILE" .mp4)"
GOP_SIZE=96
mkdir -p "$OUTPUT_DIR"

# Define resolutions, bitrates, and output names
RESOLUTIONS=("1280x720" "1920x1080" "3840x2160")
BITRATES=("1200k" "2500k" "8000k")
OUTPUTS=("720p" "1080p" "2160p")
PLAYLISTS=()

# Loop over the variants
for i in "${!RESOLUTIONS[@]}"; do
  RES="${RESOLUTIONS[$i]}"
  BITRATE="${BITRATES[$i]}"
  OUTPUT_NAME="${OUTPUTS[$i]}"
  PLAYLIST="${OUTPUT_NAME}.m3u8"
  PLAYLISTS+=("$PLAYLIST")

  # Set profile and level based on resolution
  if [ "$OUTPUT_NAME" == "2160p" ]; then
    PROFILE="high"
    LEVEL="5.1"
  elif [ "$OUTPUT_NAME" == "1080p" ]; then
    PROFILE="high"
    LEVEL="4.2"
  else
    PROFILE="main"
    LEVEL="3.1"
  fi

  echo "Processing $OUTPUT_NAME..."
  if ! ffmpeg -y -i "$INPUT_FILE" \
    -c:v libx264 -preset veryfast -profile:v "$PROFILE" -level:v "$LEVEL" -b:v "$BITRATE" -s "$RES" \
    -c:a aac -b:a 128k -ac 2 \
    -g $GOP_SIZE -keyint_min $GOP_SIZE -sc_threshold 0 \
    -force_key_frames "expr:gte(t,n_forced*4)" \
    -hls_time 4 -hls_list_size 0 -hls_flags independent_segments \
    -hls_segment_filename "$OUTPUT_DIR/${OUTPUT_NAME}_%03d.ts" \
    "$OUTPUT_DIR/$PLAYLIST"; then
    echo "Error: Failed to process $OUTPUT_NAME."
    exit 1
  fi
done
```
You can view the full script here.
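One thing worth noting: the script collects the variant playlist names in `PLAYLISTS`, but as shown it never writes the master playlist that ties the renditions together. A minimal Python sketch of that last step might look like this (the `BANDWIDTH` values are rough estimates of video bitrate plus audio and container overhead, not measured peaks):

```python
# Generate a master playlist referencing the variant playlists produced by
# the encoding step. BANDWIDTH values here are rough estimates, not
# measured peak bitrates.
variants = [
    ("720p.m3u8",  1280, 720,  1_400_000),
    ("1080p.m3u8", 1920, 1080, 2_800_000),
    ("2160p.m3u8", 3840, 2160, 8_800_000),
]

lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
for playlist, width, height, bandwidth in variants:
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={width}x{height}")
    lines.append(playlist)

master = "\n".join(lines) + "\n"
with open("master.m3u8", "w") as f:
    f.write(master)
```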
rclone
If you use the FFmpeg script above, you'll find that I tell it to put all the HLS files into a parent folder named after the input video file. This is a pretty easy way to stay organized and construct stream URLs later on in the process.
Another great thing about R2 is that it's fully compatible with AWS's S3 API. This means that any tool or utility that works with S3 should work seamlessly with R2. That's why I'm able to use `rclone`, a CLI utility that can copy and sync files between your local machine and an R2 bucket.
To upload our encoded videos to Cloudflare R2, we use `rclone`'s `copy` and `sync` commands. The `copy` command is perfect for transferring new files, while `sync` ensures that our local and remote directories always match, deleting any files in the destination that no longer exist in the source.
```shell
# Copy files to R2
rclone copy /local/path r2:bucket-name/path

# Sync files with R2 (deletes remote files missing locally)
rclone sync /local/path r2:bucket-name/path
```
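For reference, the `r2:` remote those commands use is just an S3-style entry in `rclone`'s config file. Here's a sketch of what it looks like, based on rclone's Cloudflare R2 provider support (the account ID and keys are placeholders you'd fill in from the Cloudflare dashboard):

```
[r2]
type = s3
provider = Cloudflare
access_key_id = <your-access-key-id>
secret_access_key = <your-secret-access-key>
endpoint = https://<account-id>.r2.cloudflarestorage.com
acl = private
```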
Once your files are up on R2, they are ready to be played through any HLS-compatible web player. I found this tool to be a lot of help as I was testing my streams on various browsers and devices.
In each of your HLS subfolders, you'll find a `.m3u8` file. This is essentially a manifest file that describes the structure and location of the video segments. Simply paste the URL of your `.m3u8` file into the tool, and it will start playing your video stream.
Here's a video from our course, Mastering Postgres.
Currently, Safari is the only major browser that can play HLS streams natively in a `<video>` element. You'll likely want broader support than that, and that's why HLS.js exists. HLS.js is a JavaScript library that brings HLS support to browsers that don't natively support it. By integrating HLS.js into your web application, you can ensure that your videos play smoothly across all major browsers, including Chrome, Firefox, and Edge.
Here's a basic example of how to use HLS.js to embed an HLS stream in your web application:
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>HLS Video Example</title>
    <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
  </head>
  <body>
    <video id="video" controls></video>
    <script>
      // Grab the element up front so both branches below can use it
      var video = document.getElementById("video");

      if (Hls.isSupported()) {
        var hls = new Hls();
        hls.loadSource("https://your-cdn-url.com/path/to/your/playlist.m3u8");
        hls.attachMedia(video);
        hls.on(Hls.Events.MANIFEST_PARSED, function () {
          video.play();
        });
      } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
        // Safari: native HLS playback
        video.src = "https://your-cdn-url.com/path/to/your/playlist.m3u8";
        video.addEventListener("canplay", function () {
          video.play();
        });
      }
    </script>
  </body>
</html>
```
This script checks if HLS.js is supported in the browser. If it is, it uses HLS.js to load and play the HLS stream. If not, it falls back to the native HLS support available in Safari.
Now, if you don't want to fuss with HLS.js, the good folks at Mux have abstracted that away with their custom web element package called hls-video-element.
After installing the package into your project, their `<hls-video>` element makes playing streams in your app incredibly simple.
```html
<hls-video controls src="https://stream.mux.com/r4rOE02cc95tbe3I00302nlrHfT023Q3IedFJW029w018KxZA.m3u8"></hls-video>
```
You can now interact with and build upon the `<hls-video>` element just as you would a normal `<video>` element, as it matches the HTML5 `<video>` API.
By following these steps, you can ensure that your video content is accessible and performs well across a wide range of devices and network conditions.
Delivering high-quality 4K video content doesn't have to break the bank. By leveraging adaptive bitrate streaming with HLS, encoding with FFmpeg, utilizing Cloudflare R2 for storage, and ensuring seamless playback with HLS.js, we've built a robust and cost-effective video delivery pipeline.
Beyond the drastic cost savings, another huge benefit is completely owning our entire video pipeline. Want to automatically create 5-second GIFs for each video? Add it to the script. Want OpenAI's newest model to create chapters for you? Add it to the script!
Feel free to reach out or connect with me on Twitter with any questions, or share your own experiences with video streaming solutions in the comments below!