Unlocking high-performance PostgreSQL with key memory optimizations (stormatics.tech)
qeternity 70 days ago [-]
Does not instill confidence when the queries they provide don't work.

For anyone curious, the corrected query:

SELECT sum(blks_hit)::numeric / nullif(sum(blks_hit + blks_read), 0) AS cache_hit_ratio FROM pg_stat_database;
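The NULLIF wrapper is what makes this version safe: on a fresh cluster the counters can sum to zero, and a bare division would error out. A minimal sketch of that behavior, using Python's sqlite3 as a stand-in engine (the table name and counter values here are made up for illustration; NULLIF and NULL-propagating division work the same way in PostgreSQL):

```python
# Demonstrates why the corrected query divides by NULLIF(sum, 0):
# NULLIF(x, 0) returns NULL when x = 0, and dividing by NULL yields
# NULL instead of a division-by-zero error.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stats (blks_hit INTEGER, blks_read INTEGER)")

# Case 1: no rows yet -> SUMs are NULL, ratio is NULL, no error raised.
ratio = con.execute(
    "SELECT SUM(blks_hit) * 1.0 / NULLIF(SUM(blks_hit + blks_read), 0) "
    "FROM stats"
).fetchone()[0]
print(ratio)  # None

# Case 2: realistic counters -> ratio near 1.0 means most block lookups
# were served from the buffer cache rather than from disk.
con.executemany("INSERT INTO stats VALUES (?, ?)", [(9900, 100), (4950, 50)])
ratio = con.execute(
    "SELECT SUM(blks_hit) * 1.0 / NULLIF(SUM(blks_hit + blks_read), 0) "
    "FROM stats"
).fetchone()[0]
print(round(ratio, 2))  # 0.99
```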

itiswatitis 70 days ago [-]
I just ran their query, and it works.
qeternity 69 days ago [-]
It does not in PG17.
bdd 68 days ago [-]
Works on this Postgres 17.7:

    postgres=# show server_version;
            server_version
    -------------------------------
     17.7 (Debian 17.7-3.pgdg13+1)
    (1 row)

    postgres=# SELECT sum(blks_hit)/nullif(sum(blks_hit+blks_read),0) AS cache_hit_ratio FROM Pg_stat_database;
        cache_hit_ratio
    ------------------------
     0.99448341937558994728
    (1 row)

tehlike 70 days ago [-]
I generally use this: https://pgtune.leopard.in.ua/
arkh 70 days ago [-]
Yup, I was expecting pgtune to be mentioned in the article.

And maybe something like HammerDB to check performance.

fix4fun 70 days ago [-]
Nice tool. Do you know of a similar tool for MySQL?
iberator 70 days ago [-]
[flagged]
HackerThemAll 68 days ago [-]
"How do we size shared_buffers? A common starting rule of thumb is:"

I have an improved rule of thumb:

Give it a lot of RAM. Give it all the RAM you can. Then buy some more and give that to shared_buffers too. Keeping data buffered in RAM and the CPU cache is crucial for performance.
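For reference, the rule of thumb the article alludes to (and which the PostgreSQL documentation also suggests as a starting point on a dedicated server) is roughly 25% of system RAM for shared_buffers. A sketch of a postgresql.conf fragment under that rule, for a hypothetical 64 GB machine (the sizes are illustrative assumptions, not values from the thread):

```
# Hypothetical dedicated 64 GB PostgreSQL server
shared_buffers = 16GB         # ~25% of RAM: common starting point, tune from here
effective_cache_size = 48GB   # ~75% of RAM: planner hint only, allocates nothing
```

Note that shared_buffers is not the whole story: pages evicted from it are often still in the OS page cache, which is part of why "all the RAM you can" into shared_buffers alone is not a universal win.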
