I tried looking for that thread again, but I only found the exact opposite comment, from the Cloudflare founder:
>Not abuse. Thanks for being a customer. Bandwidth at scale is effectively free.[0]
I distinctly remember such a thread though.
Edit: I did find these, but neither is what I remember:
https://news.ycombinator.com/item?id=42263554
What kind of latency/throughput are people getting from R2? Does it benefit from parallelism in the same way s3 does?
[0]: https://developers.cloudflare.com/r2/pricing/#class-a-operat...
Not sure about now, but upload speeds were very inconsistent when we tested it a year or so ago.
# Download ClickHouse:
curl https://clickhouse.com/ | sh
./clickhouse local
# Attach the table:
CREATE TABLE hackernews_history UUID '66491946-56e3-4790-a112-d2dc3963e68a'
(
    update_time DateTime DEFAULT now(),
    id UInt32,
    deleted UInt8,
    type Enum8('story' = 1, 'comment' = 2, 'poll' = 3, 'pollopt' = 4, 'job' = 5),
    by LowCardinality(String),
    time DateTime,
    text String,
    dead UInt8,
    parent UInt32,
    poll UInt32,
    kids Array(UInt32),
    url String,
    score Int32,
    title String,
    parts Array(UInt32),
    descendants Int32
)
ENGINE = ReplacingMergeTree(update_time)
ORDER BY id
SETTINGS
    refresh_parts_interval = 60,
    disk = disk(
        readonly = true,
        type = 's3_plain_rewritable',
        endpoint = 'https://clicklake-test-2.s3.eu-central-1.amazonaws.com/',
        use_environment_credentials = false);
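For context on the engine choice: ReplacingMergeTree(update_time) deduplicates rows sharing the ORDER BY key (id here), keeping the row with the largest update_time when parts are merged. A minimal Python sketch of that collapse logic (illustrative only, not ClickHouse code):

```python
# Sketch of ReplacingMergeTree(update_time) collapse semantics:
# among rows sharing the ORDER BY key (id), keep the row with the
# largest version column (update_time). Illustrative only.

def collapse(rows):
    """rows: iterable of dicts with at least 'id' and 'update_time'."""
    latest = {}
    for row in rows:
        key = row["id"]
        # Later (or equal) update_time wins for the same id.
        if key not in latest or row["update_time"] >= latest[key]["update_time"]:
            latest[key] = row
    return sorted(latest.values(), key=lambda r: r["id"])

rows = [
    {"id": 1, "update_time": 100, "score": 5},
    {"id": 1, "update_time": 200, "score": 9},  # newer version of item 1
    {"id": 2, "update_time": 150, "score": 3},
]
print(collapse(rows))
```

Note that ClickHouse only performs this collapse at merge time, so a plain SELECT can still see duplicate ids until parts merge (SELECT ... FINAL forces deduplication at query time).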
# Run queries:
SELECT time, decodeHTMLComponent(extractTextFromHTML(text)) AS t
FROM hackernews_history ORDER BY time DESC LIMIT 10 \G
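extractTextFromHTML and decodeHTMLComponent are ClickHouse's tag-stripping and entity-decoding functions; a rough stdlib-Python analogue of that cleanup step (a hypothetical helper, not what ClickHouse runs internally):

```python
# Rough analogue of extractTextFromHTML + decodeHTMLComponent:
# strip markup, keep only text content, resolve HTML entities.
import html
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    # convert_charrefs=True (the default) already resolves most
    # entities before handle_data is called.
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def extract_text(raw):
    p = TextExtractor()
    p.feed(raw)
    # html.unescape is a harmless safety net for anything left over.
    return html.unescape("".join(p.parts))

print(extract_text("I <i>love</i> R2 &amp; S3"))  # I love R2 & S3
```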
# Download everything as Parquet/JSON/CSV...
SELECT * FROM hackernews_history INTO OUTFILE 'dump.parquet'
Also available on the public Playground: https://play.clickhouse.com/