Everyone needs a small toy to play with, so I reasoned: why not buy a toy that will lower my energy costs? For my home, I therefore ordered a 10.5 kWp photovoltaic system. The system came with a Kostal Pico inverter, which makes sure the electricity can be fed directly into the grid.
Kostal also sells a tool that lets you monitor your electricity production over time. However, why spend money when you can do it yourself with a quick shell script and, most importantly, PostgreSQL?
I chose to publish this code because there is so little material on the internet that demonstrates how to access the Kostal Pico:
#!/bin/sh

# Kostal Pico web interface: credentials and address
KUSER=pvserver
KPASS=pvwr
KIP=192.168.0.201

# PostgreSQL connection settings
PGUSER=hs
PGDATABASE=test

# hourly aggregation; the production start time is hardcoded on purpose
SQL="WITH x AS (SELECT date_trunc('hour', tstamp) AS hour, \
        round(avg(v1), 2) AS source_1, \
        round(avg(v2), 2) AS source_2, \
        round(avg(v3), 2) AS source_3, \
        round(avg(v1 + v2 + v3), 2) AS total \
    FROM (SELECT *, '2013-04-10 05:00:00+02'::timestamptz + (t || ' seconds')::interval AS tstamp \
          FROM kostal) AS a \
    GROUP BY 1) \
SELECT y AS time, source_1, source_2, source_3, total \
FROM generate_series((SELECT min(hour) FROM x), (SELECT max(hour) FROM x), '1 hour') AS y \
    LEFT JOIN (SELECT * FROM x) AS z \
    ON (z.hour = y) \
ORDER BY 1;"

# fetch the log, drop the header lines, normalize whitespace to semicolons,
# filter out junk rows, keep the four relevant columns and merge the result
# into the persistent kostal table
wget http://$KUSER:$KPASS@$KIP/LogDaten.dat -O - 2> /dev/null | \
    sed '1,7d' | \
    sed -e 's/[ \t]\+/;/gi' -e 's/^;//g' | \
    grep -v 'h;' | \
    grep -v 'POR' | \
    cut -f1,4,9,14 -d ';' - | \
    awk 'BEGIN { print "CREATE TEMPORARY TABLE tmp_kostal (t int8, v1 int4, v2 int4, v3 int4); \
                 COPY tmp_kostal FROM stdin DELIMITER \x27;\x27 ;" } \
         { print } \
         END { print "\\.\n ; \
               INSERT INTO kostal SELECT * FROM tmp_kostal EXCEPT SELECT * FROM kostal; " }' | \
    psql $PGDATABASE -U $PGUSER

echo "running analysis ..."
psql $PGDATABASE -U $PGUSER -c "$SQL"
echo “running analysis …”
psql $PGDATABASE -U $PGUSER -c “$SQL”
The Kostal Pico provides textual data through an unpleasant web interface. Yes, the interface is ugly, and it took some time to understand what those columns mean. The unfortunate thing about it is that the data stream only contains the number of seconds since the system went into production rather than a real timestamp. Note that this counter does NOT advance while the system is shut down for maintenance; I have left this out because taking it into account would be too complicated for a prototype.
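Once the start time is known, turning this counter into a real timestamp is a one-liner; it is exactly what the subquery in the script above does. In isolation, it looks like this:
-- reconstruct real timestamps from the seconds counter;
-- '2013-04-10 05:00:00+02' is the hardcoded time my system went into production
SELECT t,
       '2013-04-10 05:00:00+02'::timestamptz + (t || ' seconds')::interval AS tstamp
FROM kostal
ORDER BY t DESC
LIMIT 3;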
We simply fetch the data using wget and pipe it through various processing steps. The crucial point here is that some lines and columns have to be removed to convert the download into a PostgreSQL-friendly format (in this case, semicolon-separated).
In my case, the data ends up having four columns:
test=# SELECT * FROM kostal ORDER BY t DESC LIMIT 10;
    t    | v1  | v2  | v3
---------+-----+-----+-----
2435059 | 793 | 548 | 651
2434159 | 412 | 285 | 317
2433259 | 309 | 213 | 255
2432359 | 561 | 388 | 454
2431459 | 476 | 330 | 341
2430559 | 423 | 293 | 303
2429659 | 449 | 310 | 348
2428759 | 236 | 163 | 188
2427859 | 136 | 94 | 106
2426959 | 105 | 73 | 83
(10 rows)
The first column in our PostgreSQL table is the timestamp we have already discussed. The following three columns represent my three solar fields; each field reports its own production data. To make sure the script can be run repeatedly without destroying anything, we merge the downloaded data into the already-existing table. Kostal may not always send all the data it has, but that doesn't matter: PostgreSQL persists what has already been stored, so all we have to do is fill in the gaps. Because we only receive one row every 15 minutes (making the amount of data in our PostgreSQL table almost irrelevant), the merging process can be somewhat rudimentary.
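To spell out the merging logic generated by the awk block: assuming the persistent kostal table was created with the same columns as the temporary one (a sketch, since the original table definition is not shown in this post), the merge boils down to this:
-- assumed table definition, mirroring the temporary table in the script
CREATE TABLE kostal (t int8, v1 int4, v2 int4, v3 int4);

-- rudimentary merge: insert only those downloaded rows
-- that are not yet present in the persistent table
INSERT INTO kostal
SELECT * FROM tmp_kostal
EXCEPT
SELECT * FROM kostal;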
Investigating a timeseries
Now that we have the timeseries in our database, we can analyze it using some SQL. To make the code a little more robust, I hardcoded the time the system went into production directly into the SQL. We could certainly compute that value instead, for example with a window function, but if the HTTP request itself failed, the resulting chart would then be incorrect.
One requirement was that I wanted a complete timeseries. Kostal Pico does not report data for periods with no production (during the night, for example), so we have to fill in the blanks ourselves. We accomplish this by outer joining generate_series against our aggregated data.
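To make the pattern easier to see, here is a self-contained toy version with two made-up sample rows. Every hour of the generated series appears in the result, and hours without data simply come back as NULL:
-- sparse sample data (values invented for illustration)
WITH sparse(hour, val) AS (
    VALUES ('2013-04-26 06:00:00+02'::timestamptz, 257.50),
           ('2013-04-26 08:00:00+02'::timestamptz, 5951.50)
)
SELECT y AS time, s.val
FROM generate_series('2013-04-26 05:00:00+02'::timestamptz,
                     '2013-04-26 09:00:00+02'::timestamptz,
                     '1 hour') AS y
     LEFT JOIN sparse AS s ON (s.hour = y)
ORDER BY 1;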
The end result is as follows:
time | source_1 | source_2 | source_3 | total
------------------------+----------+----------+----------+--------
2013-04-26 04:00:00+02 | | | |
2013-04-26 05:00:00+02 | 0.00 | 0.00 | 0.00 | 0.00
2013-04-26 06:00:00+02 | 113.00 | 84.50 | 60.00 | 257.50
2013-04-26 07:00:00+02 | 1687.75 | 1173.75 | 349.75 | 3211.25
2013-04-26 08:00:00+02 | 2873.00 | 1980.00 | 1098.50 | 5951.50
2013-04-26 09:00:00+02 | 3353.50 | 2306.00 | 1672.75 | 7332.25
2013-04-26 10:00:00+02 | 3539.75 | 2429.00 | 2097.75 | 8066.50
2013-04-26 11:00:00+02 | 3469.50 | 2385.75 | 2377.00 | 8232.25
2013-04-26 12:00:00+02 | 3250.50 | 2233.50 | 2526.50 | 8010.50
2013-04-26 13:00:00+02 | 2823.00 | 1938.50 | 2517.50 | 7279.00
2013-04-26 14:00:00+02 | 2179.00 | 1491.75 | 2346.75 | 6017.50
2013-04-26 15:00:00+02 | 1322.75 | 868.00 | 2041.00 | 4231.75
2013-04-26 16:00:00+02 | 481.25 | 311.25 | 967.50 | 1760.00
2013-04-26 17:00:00+02 | 357.50 | 242.00 | 407.50 | 1007.00
2013-04-26 18:00:00+02 | 438.00 | 301.75 | 408.50 | 1148.25
2013-04-26 19:00:00+02 | 121.50 | 83.25 | 110.50 | 315.25
2013-04-26 20:00:00+02 | 9.50 | 5.00 | 7.00 | 21.50
2013-04-26 21:00:00+02 | | | |
2013-04-26 22:00:00+02 | | | |
Production ramps up nicely in the early morning. Given that roughly one third of the system's panels face south and one third face east, this is to be expected. The data appears accurate; a production peak of 8.2 kilowatt hours (kWh) just before noon is entirely reasonable for a 10.5 kWp system.
The nice thing about this is that you can easily pipe the data to gnuplot or any other program to create a chart.
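For example (a sketch, not part of the script above), psql can turn any query into CSV on stdout by wrapping it in COPY; the aggregation query stored in $SQL can be wrapped in exactly the same way:
-- export the raw data as CSV, ready to be piped into a plotting tool
COPY (SELECT * FROM kostal ORDER BY t) TO STDOUT WITH (FORMAT csv);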