
Commit 21f2469 (1 parent: 7d657af)

change: MeetingStats now have consistent timestamps per poll interval.

3 files changed: 43 additions & 13 deletions


bbblb/model.py (3 additions & 0 deletions)

@@ -426,6 +426,9 @@ class MeetingStats(Base):
     __tablename__ = "meeting_stats"
 
     id: Mapped[int] = mapped_column(primary_key=True)
+    #: Timestamp of the poll run. This SHOULD be identical for all
+    #: entries created during the same poll interval, so we can group
+    #: over the timestamp later.
     ts: Mapped[datetime.datetime] = mapped_column(
         DateTime(timezone=True), insert_default=utcnow, nullable=False
     )
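Grouping "over the timestamp", as the new docstring puts it, can be sketched with plain Python. This is an illustrative stand-in only: the `Stat` dataclass below is a toy substitute for the real SQLAlchemy `MeetingStats` model, not code from the repository.

```python
import datetime
from dataclasses import dataclass
from itertools import groupby
from operator import attrgetter

@dataclass
class Stat:
    # Toy stand-in for a MeetingStats row (illustrative only)
    ts: datetime.datetime
    users: int

poll_a = datetime.datetime(2024, 1, 1, 12, 0, tzinfo=datetime.timezone.utc)
poll_b = poll_a + datetime.timedelta(seconds=60)
rows = [Stat(poll_a, 5), Stat(poll_a, 7), Stat(poll_b, 6)]

# Because every row written during one poll run shares the same ts,
# grouping by ts yields one cluster-wide snapshot per poll interval.
totals = {
    ts: sum(s.users for s in group)
    for ts, group in groupby(sorted(rows, key=attrgetter("ts")), key=attrgetter("ts"))
}
```

With per-row timestamps, the two rows from the first poll run would land in separate groups; with a shared `ts` they collapse into one snapshot per interval.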

bbblb/services/poller.py (8 additions & 1 deletion)

@@ -37,6 +37,11 @@ def __init__(self, config: BBBLBConfig):
         self.minsuccess = config.POLL_RECOVER
         self.stats_enabled = config.POLL_STATS
 
+        #: Start of a poll interval. Used as a common timestamp for all
+        #: MeetingStats entries created during a single poll run. This
+        #: allows later grouping by poll interval.
+        self._poll_start = 0.0
+
     async def on_start(self, db: DBContext, locks: LockManager, bbb: BBBHelper):
         self.db = db
         self.lock = locks.create(
@@ -67,12 +72,13 @@ async def poll_loop(self):
 
             if not await self.lock.check():
                 LOG.warning(f"We lost the {self.lock.name!r} lock!?")
-                break
+                return
 
             async with self.db.session() as session:
                 result = await session.execute(model.Server.select())
                 servers = result.scalars()
 
+            self._poll_start = model.utcnow()
             futures = [
                 asyncio.ensure_future(self.poll_one(server.id)) for server in servers
             ]
@@ -160,6 +166,7 @@ async def poll_one(self, server_id):
             meeting = meetings[meeting_id]
             meeting_stats.append(
                 model.MeetingStats(
+                    ts=self._poll_start,
                     uuid=meeting.uuid,
                     meeting_id=meeting.external_id,
                     tenant_fk=meeting.tenant_fk,
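The pattern the diff introduces — capture one timestamp before fanning out, then reuse it for every row — can be condensed into a minimal synchronous sketch. This is not the real async `Poller` class; the names below mirror the diff but everything else is simplified for illustration.

```python
import datetime

def utcnow():
    # Stand-in for model.utcnow()
    return datetime.datetime.now(datetime.timezone.utc)

class Poller:
    """Simplified sketch of the poll_loop/poll_one relationship."""

    def __init__(self):
        self._poll_start = None

    def poll_loop(self, servers):
        # One timestamp per poll run, taken before fanning out ...
        self._poll_start = utcnow()
        return [self.poll_one(s) for s in servers]

    def poll_one(self, server):
        # ... so every row created during this run shares the same ts,
        # instead of each call stamping its own utcnow().
        return {"server": server, "ts": self._poll_start}

rows = Poller().poll_loop(["s1", "s2", "s3"])
```

Had each `poll_one` call stamped its own `utcnow()`, rows from the same run would carry slightly different timestamps and could no longer be grouped by `ts`.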

docs/accounting.rst (32 additions & 12 deletions)

@@ -63,27 +63,47 @@ a timestamp (`ts`), the `uuid` of the meeting, the reuseable external `meeting_i
 was used to create the meeting, the owning tenant (`tenant_fk`), and three metric values
 named `users`, `voice` and `video`.
 
-Here is an (untested) example PostgreSQL query returning some useful aggregations. It
-fetches all rows in a certain time range, calculates min/max/avg values per meeting
-(per `uuid`), then groups those together by `tenant_fk` to get meaningfull aggregated
-values per tenant:
+The timestamp (`ts`) will be the exact same for all measurements taken during a single
+poll interval. It marks the start of the poll interval, not the exact time of an
+individual measurement. This is done on purpose so you can group by the timestamp to
+get a consistent view of the entire cluster at a specific time.
+
+Here is an example that calculates user counts for the entire cluster over time.
+It uses the fact mentioned above that all measurements taken during a single poll
+interval will have the exact same timestamp.
+
+.. code:: sql
+
+    SELECT
+        ts,
+        COUNT(*) AS meetings,
+        SUM(users) AS users
+    FROM meeting_stats
+    GROUP BY ts
+    ORDER BY ts
+
+Here is a more complex PostgreSQL example. It fetches all rows in a certain time range,
+calculates min/max/avg values per meeting (per `uuid`), then groups those together by
+`tenant_fk` to get meaningful aggregated values per tenant.
 
 .. code:: sql
 
     SELECT
-        tenants.name,
+        tenants.name AS tenant,
+        /* Number of meetings */
+        COUNT(*) AS meetings,
         /* Total number of meeting minutes spent by all users combined */
-        SUM(users_avg * EXTRACT(epoch FROM duration)) / 60,
+        SUM(users_avg * EXTRACT(epoch FROM duration)) / 60 AS meeting_minutes,
         /* Average meeting duration in minutes */
-        AVG(EXTRACT(epoch FROM duration)) / 60,
+        AVG(EXTRACT(epoch FROM duration)) / 60 AS duration_avg,
         /* Average meeting size */
-        AVG(users_avg),
+        AVG(users_avg) AS users_avg,
         /* Maximum meeting size */
-        MAX(users_max),
+        MAX(users_max) AS users_max,
+        /* Number of meetings with more than 25 users peak */
+        COUNT(CASE WHEN users_max > 25 THEN 1 END) AS large_25,
         /* Number of meetings with more than 100 users peak */
-        COUNT(CASE WHEN users_max > 100 THEN 1 END),
-        /* Number of meetings */
-        COUNT(*)
+        COUNT(CASE WHEN users_max > 100 THEN 1 END) AS large_100
     FROM (
         SELECT
             tenant_fk,
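The simple cluster-wide query added to the docs can be exercised end to end with a toy dataset. The schema below is a deliberately minimal stand-in for the real `meeting_stats` table, run against SQLite instead of PostgreSQL, since this particular query is standard SQL.

```python
import sqlite3

# Tiny in-memory stand-in for the meeting_stats table (illustrative schema only)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE meeting_stats (ts TEXT, uuid TEXT, users INT)")
con.executemany(
    "INSERT INTO meeting_stats VALUES (?, ?, ?)",
    [
        ("2024-01-01T12:00:00Z", "a", 5),   # same poll run ...
        ("2024-01-01T12:00:00Z", "b", 7),   # ... shares one ts
        ("2024-01-01T12:01:00Z", "a", 6),   # next poll run
    ],
)
# The cluster-wide query from the docs: one result row per poll interval.
rows = con.execute(
    "SELECT ts, COUNT(*) AS meetings, SUM(users) AS users "
    "FROM meeting_stats GROUP BY ts ORDER BY ts"
).fetchall()
```

Each result row is one snapshot of the whole cluster: two meetings totalling 12 users in the first interval, one meeting with 6 users in the next. This only works because the poller now stamps all rows of a run with the same `ts`.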
