10 comments

  • rozenmd 12 hours ago
    Great write-up!

    I had a similar project back in August when I realised my DB's performance (Postgres) was blocking me from implementing features users commonly ask for (querying out to 30 days of historical uptime data).

    I was already blown away by the performance (200ms to query what Postgres was doing in 500-600ms), but then I realized I hadn't put an index on the ClickHouse table. Now the query returns in 50-70ms, and that includes network time.
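
    (For anyone wondering what "an index" means in ClickHouse terms: the primary index is the table's ORDER BY key, and anything on top of that is a data-skipping index. Rough sketch of the latter, with made-up table/column names:)

      -- Hypothetical uptime-checks table; queries filter by monitor and time window.
      ALTER TABLE uptime_checks
          ADD INDEX idx_monitor monitor_id TYPE set(0) GRANULARITY 4;

      -- Build the index for data that was already inserted.
      ALTER TABLE uptime_checks MATERIALIZE INDEX idx_monitor;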

  • saisrirampur 10 hours ago
    Sai from ClickHouse here. Very compelling story! Really love your emphasis on using the right tool for the right job - power of row vs column stores.

    We recently added a MySQL/MariaDB CDC connector in ClickPipes on ClickHouse Cloud. This would have simplified your migration from MariaDB.

    https://clickhouse.com/docs/integrations/clickpipes/mysql https://clickhouse.com/docs/integrations/clickpipes/mysql/so...

  • est 2 hours ago
    Can you just buffer some writes in Vector and eliminate Kafka?

    I set up Vector to buffer Elasticsearch writes years ago, also for logs. It ran so well, without any problems, that I almost forgot about it.
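
    Roughly what I mean, from memory (source/sink names and sizes are placeholders, not from the post):

      # Kafka-less sketch: the app posts events to Vector, which batches them
      # and buffers to disk before inserting into ClickHouse.
      [sources.app_events]
      type = "http_server"
      address = "0.0.0.0:8686"
      decoding.codec = "json"

      [sinks.clickhouse_out]
      type = "clickhouse"
      inputs = ["app_events"]
      endpoint = "http://clickhouse:8123"
      database = "analytics"
      table = "events"

      # Batch so ClickHouse sees a few large inserts instead of many tiny ones.
      [sinks.clickhouse_out.batch]
      max_events = 10000
      timeout_secs = 10

      # On-disk buffer absorbs ClickHouse slowdowns or restarts.
      [sinks.clickhouse_out.buffer]
      type = "disk"
      max_size = 536870912  # 512 MiB
      when_full = "block"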

  • nasretdinov 11 hours ago
    BTW you could've used e.g. kittenhouse (https://github.com/YuriyNasretdinov/kittenhouse, my fork) or just a simpler buffer table, with 2 layers and a larger aggregation period than in the example.
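
    Something like this, with made-up table names and thresholds that are just for illustration:

      -- Writes go to the Buffer table; ClickHouse flushes to the underlying
      -- MergeTree table once the time/row/byte thresholds are hit.
      -- Signature: Buffer(db, table, num_layers, min_time, max_time,
      --                    min_rows, max_rows, min_bytes, max_bytes)
      CREATE TABLE logs.requests_buffer AS logs.requests
      ENGINE = Buffer(logs, requests, 2, 10, 120, 100000, 1000000, 10000000, 100000000);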

    Alternatively, you could've used async insert functionality built into ClickHouse: https://clickhouse.com/docs/optimize/asynchronous-inserts . All of these solutions are operationally simpler than Kafka + Vector, although obviously it's all tradeoffs.
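
    e.g. something like this (table and columns are made up):

      -- ClickHouse itself collects the small inserts and writes them as one part,
      -- so the application can keep doing per-request INSERTs.
      INSERT INTO logs.requests (timestamp, site_id, status, duration_ms)
      SETTINGS
          async_insert = 1,
          wait_for_async_insert = 1,           -- ack only once the buffer is flushed
          async_insert_busy_timeout_ms = 1000  -- dump the buffer within ~1s of the first insert
      VALUES (now(), 42, 200, 123);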

    • devmor 11 hours ago
      There were a lot of simpler options that came to mind while reading through this, frankly.

      But I imagine the writeup glosses over myriad future concerns and does not entirely illustrate the pressure and stress of trying to solve such a high-scale problem.

      Ultimately, going with a somewhat more complex solution that involves additional architecture but has been tried and tested by a 3rd party that you trust can sometimes be the more fitting end result. Assurance often weighs more than simplicity, I think.

      • nasretdinov 11 hours ago
        While kittenhouse is, unfortunately, abandonware (even though you can still use it and it works), you can't say the same about e.g. async inserts in ClickHouse: it's a very simple and robust solution to exactly the problem that PHP (and some other languages') backends often face when trying to use ClickHouse.
      • ajayvk 7 hours ago
        Yes, had similar questions. Wouldn't tuning the settings for the buffer table have helped avoid the TOO_MANY_LINKS error?
  • tlaverdure 11 hours ago
    Thanks for sharing. I really enjoyed the breakdown, and great to see small tech companies helping each other out!
  • frenchmajesty 11 hours ago
    Thanks for sharing. I enjoyed reading this.
  • mperham 10 hours ago
    Seems weird not to use Redis as the buffering layer plus a minutely cron job. That seems a lot simpler than installing Kafka + Vector.
    • SteveNuts 8 hours ago
      Vector is very simple to operate and (mostly) stateless, and can handle buffering if you choose.

      Kafka vs. Redis is a "pick your poison" choice IMO; scaling and operating either one has its own headaches.

  • albertgoeswoof 10 hours ago
    Currently at the millions stage with https://mailpace.com, relying mostly on Postgres.

    Tbh this terrifies me! We don’t just have to log the requests but also store the full emails for a few days, and they can be up to 50 MiB in total size.

    But it will be exciting when we get there!

  • ch2026 6 hours ago
    1) ClickHouse async_insert would have solved all your issues: https://clickhouse.com/docs/optimize/asynchronous-inserts

    1a) If you’re still getting too many files/parts, then fix your PARTITION BY and MergeTree primary key.

    2) Why are you writing to Kafka when Vector (vector.dev) does buffering/batching?

    3) If you insist on Kafka, the Kafka table engine (https://clickhouse.com/docs/engines/table-engines/integratio...) consumes directly from Kafka (or since you’re on CHC, use ClickPipes) — what’s the point of Vector here?

    Your current solution is unnecessarily complex. I’m guessing the core problem is that your MergeTree primary key is wrong; a rough sketch of what I mean is below.
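
    Roughly the shape I'd expect, guessing at the schema since the post doesn't show it:

      -- Kafka engine table reads straight off the topic (no Vector in between).
      CREATE TABLE logs.requests_queue
      (
          timestamp   DateTime,
          site_id     UInt64,
          status      UInt16,
          duration_ms UInt32
      )
      ENGINE = Kafka('kafka:9092', 'requests', 'clickhouse-consumer', 'JSONEachRow');

      -- Destination: partition coarsely (by month, not by anything high-cardinality)
      -- and order by the columns the queries actually filter on.
      CREATE TABLE logs.requests
      (
          timestamp   DateTime,
          site_id     UInt64,
          status      UInt16,
          duration_ms UInt32
      )
      ENGINE = MergeTree
      PARTITION BY toYYYYMM(timestamp)
      ORDER BY (site_id, timestamp);

      -- Materialized view shovels rows from the queue into the MergeTree table.
      CREATE MATERIALIZED VIEW logs.requests_mv TO logs.requests AS
      SELECT * FROM logs.requests_queue;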

    • momothereal 6 hours ago
      Writing to Kafka allowed them to continue their current ingestion process into MariaDB at the same time as ClickHouse. Kafka consumer groups allow the same data to be consumed independently by different consumer pools with different throughput, without introducing bottlenecks.

      From experience, the Kafka tables in ClickHouse are not stable at high volumes, and are harder to debug when things go sideways. It is also easier to transform your data before ingestion using Vector's VRL scripting language vs. ClickHouse table views (SQL) when dealing with complex data that needs to be denormalized into a flat table.

      • ch2026 5 hours ago
        > Writing to Kafka allowed them to continue their current ingestion process into MariaDB at the same time as ClickHouse.

        The one they're going to shut down as soon as this works? Yeah, great reason to make a permanent tech choice for a temporary need. Versus just keeping the MariaDB stuff exactly the same on the PHP side and writing to 2 destinations until cutover is achieved. Kafka is wholly unnecessary here. Vector is great tech but likely not needed. Kafka + Vector is absolutely the incorrect solution.

        Their core problem is the destination table schema (which they did not provide) and a very poorly chosen primary key + partition.

  • fnord77 10 hours ago
    How does ClickHouse compare to Druid, Pinot, or StarTree?