…own, magic numbers, CONTRIBUTING.md Co-authored-by: IntegerAlex <84370725+IntegerAlex@users.noreply.github.com>
Pull request overview
This PR addresses several repo-wide correctness and quality issues spanning licensing/documentation accuracy, Python packaging hygiene, and logging API/behavior improvements.
Changes:
- Correct README licensing text to match the repository’s LGPL-2.1 licensing.
- Simplify packaging by removing legacy/incorrect entry points (`setup.py`, root-level `__init__.py`) and adding the missing `CONTRIBUTING.md`.
- Improve logger API clarity by introducing exported `LOG_LEVEL_*` constants, adjusting default log levels, and adding async logger shutdown/close behavior.
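The exported constants themselves are not shown in the diff summary; here is a minimal sketch of what they plausibly look like. The `LOG_LEVEL_*` names come from this PR, but the numeric values are an assumption that they mirror the stdlib `logging` module's levels:

```python
# Hypothetical values: assumed to mirror the stdlib logging module's levels.
LOG_LEVEL_DEBUG = 10
LOG_LEVEL_INFO = 20
LOG_LEVEL_WARNING = 30
LOG_LEVEL_ERROR = 40
LOG_LEVEL_CRITICAL = 50

# With named constants, a default of LOG_LEVEL_INFO is self-documenting,
# unlike a bare magic number 20.
def should_log(message_level: int, threshold: int = LOG_LEVEL_INFO) -> bool:
    """Return True if a message at message_level passes the threshold."""
    return message_level >= threshold

assert should_log(LOG_LEVEL_ERROR)       # errors pass an INFO threshold
assert not should_log(LOG_LEVEL_DEBUG)   # debug is filtered at INFO
```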
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| setup.py | Removed legacy setup script in favor of pyproject.toml-based builds. |
| kakashi/core/logger.py | Adds exported log-level constants; modifies defaults; adds AsyncLogger.close() and registers async shutdown with atexit. |
| kakashi/__init__.py | Re-exports LOG_LEVEL_* constants from the top-level package API. |
| __init__.py | Removed root-level __init__.py that could shadow the real kakashi/ package. |
| README.md | Updates license text from MIT to LGPL-2.1. |
| CONTRIBUTING.md | Adds contributing guide referenced by README. |
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
@copilot update changelog for version 2.2
Co-authored-by: IntegerAlex <84370725+IntegerAlex@users.noreply.github.com>
Done — added a
Added new constants for log levels and a close method for `AsyncLogger`.
What in the name of all that is holy is this? Did you even look at the garbage you just committed, or did you just ask Copilot to "fix it" and blindly accept the first hallucinatory diff it produced? This PR is a masterclass in how to introduce subtle race conditions, break API compatibility, and demonstrate a complete lack of basic mathematical sequencing, all in one go. Here is why this code belongs in the trash:

You are literally punishing people who use your code with artificial latency, while simultaneously guaranteeing it will fail under load. If you want to flush a queue, you use `queue.join()` or wait on an `Event`, not `time.sleep()`.

`[2.2.0]` - Are you incapable of counting? The previous version is 0.2.1. Did we somehow skip two entire major release cycles while I was blinking? And you couldn't even be bothered to put a date on the entry.

Fix the `time.sleep()` hack immediately. Use proper thread synchronization (e.g., waiting on an `Event` or using `queue.join()`). Put the version string back. And learn how to count before you touch another release. REJECTED. Do not merge this garbage.
Replace timing-based best-effort flushing with a robust synchronization barrier using a dedicated _FlushSignal sentinel and threading events. The AsyncLogger.flush method now enqueues a signal that blocks the calling thread until the background worker processes all preceding log entries, ensuring data integrity during shutdown Co-authored-by: gitbot <gitbot@gossorg.in>
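The mechanism this commit message describes can be modeled in a few lines. This is an illustrative sketch, not the kakashi source; the `_FlushSignal` name is taken from the PR, everything else is assumed:

```python
import queue
import threading
from dataclasses import dataclass

@dataclass
class _FlushSignal:
    """Sentinel enqueued behind normal log items; its event fires when reached."""
    event: threading.Event

_q: queue.Queue = queue.Queue()
written = []  # stand-in for the real log sink

def _worker() -> None:
    while True:
        item = _q.get()
        try:
            if item is None:                  # shutdown sentinel
                return
            if isinstance(item, _FlushSignal):
                item.event.set()              # all earlier items are done
                continue
            written.append(item)              # "write" the log entry
        finally:
            _q.task_done()

threading.Thread(target=_worker, daemon=True).start()

def flush() -> None:
    """Block until every message enqueued before this call is written."""
    marker = _FlushSignal(threading.Event())
    _q.put(marker)
    marker.event.wait()

_q.put("first")
_q.put("second")
flush()  # returns only after both messages hit `written`
assert written == ["first", "second"]
```

Because this toy worker handles items strictly one at a time, acknowledging the signal on dequeue is already a correct barrier; a batching worker needs extra care to process the batch before setting the event.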
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated 6 comments.
CHANGELOG.md
Outdated
    ### Changed
    - `shutdown_async_logging` is now registered with `atexit` so buffered async messages are flushed when the interpreter exits rather than being silently dropped.
    - `shutdown_async_logging()` now drains the queue via a `None` sentinel and waits for the background worker to finish before returning, ensuring pending messages are processed.
    - `AsyncLogger.flush()` and `AsyncLogger.close()` docstrings clarified to accurately describe the best-effort, timing-based nature of the operation.
The changelog entry says AsyncLogger.flush()/close() docstrings were clarified to describe a "best-effort, timing-based" flush, but the implementation and docstrings in kakashi/core/logger.py now describe a blocking/synchronized flush. Please align the changelog text with the actual behavior (or adjust the behavior/docstrings accordingly).
    - - `AsyncLogger.flush()` and `AsyncLogger.close()` docstrings clarified to accurately describe the best-effort, timing-based nature of the operation.
    + - `AsyncLogger.flush()` and `AsyncLogger.close()` docstrings clarified to accurately describe the blocking/synchronized flush semantics of the operation.
CHANGELOG.md
Outdated
    ## [Unreleased]

    ## [0.2.2] - 2026-03-
PR metadata mentions a [2.2.0] release entry, but this changelog adds [0.2.2]. Please confirm the intended next version and update either the changelog header or the PR description to match, to avoid publishing an inconsistent release history.
kakashi/core/logger.py
Outdated
    if isinstance(item, _FlushSignal):
        item.event.set()
        _async_queue.task_done()
        continue

    batch.append(item)
    pending_count = 1

    # Collect additional items (non-blocking)
    for _ in range(batch_size - 1):
        try:
            item = _async_queue.get_nowait()
            if item is None:  # Shutdown signal
                _async_queue.task_done()
                shutdown_requested = True
                break
            if isinstance(item, _FlushSignal):
                item.event.set()
                _async_queue.task_done()
                continue
_FlushSignal is acknowledged (event set) as soon as it is dequeued, including while building a batch (see handling in the get_nowait() loop). This means AsyncLogger.flush() can return before earlier log items in the same batch are actually processed, so it is not a true flush barrier. Consider treating the flush marker as a barrier: stop batch collection when encountered, process any already-dequeued items first, then set the marker event after processing completes (and ensure ordering relative to prior messages).
Applied in commit 1621b25. The _FlushSignal in the batch-collection loop now acts as a true barrier: batch collection stops when the signal is dequeued, all accumulated log items are processed first (with task_done() in a finally block), and only then is the signal event set. This ensures flush() and close() always block until all preceding messages are written. I also fixed the shutdown_async_logging() deadlock risk by setting _async_shutting_down = True before calling queue.join(), so concurrent _log_async callers drop new items instead of indefinitely extending the drain.
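A rough model of the barrier behavior described here, illustrative only: `write` stands in for `_process_async_batch`, and flush markers are represented as bare `threading.Event` objects rather than the PR's `_FlushSignal` wrapper:

```python
import queue
import threading

def drain_one_batch(q: queue.Queue, batch_size: int, write) -> None:
    """Dequeue up to batch_size items, treating a flush marker as a barrier."""
    batch, pending_marker = [], None
    for _ in range(batch_size):
        try:
            item = q.get_nowait()
        except queue.Empty:
            break
        if isinstance(item, threading.Event):  # flush marker: stop collecting
            pending_marker = item
            break
        batch.append(item)
    # Write everything dequeued before the marker, with task_done() in a
    # finally so a failing write can never strand queue.join().
    for item in batch:
        try:
            write(item)
        finally:
            q.task_done()
    # Only after all preceding items are written is the flush acknowledged.
    if pending_marker is not None:
        pending_marker.set()
        q.task_done()

q = queue.Queue()
out = []
q.put("a")
q.put("b")
marker = threading.Event()
q.put(marker)
drain_one_batch(q, batch_size=16, write=out.append)
assert out == ["a", "b"] and marker.is_set()
```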
        def flush(self) -> None:
    -        """Flush pending messages (best effort)."""
    -        # For async logger, we can't force immediate flush
    -        # but we can yield to allow background processing
    -        time.sleep(0.001)
    +        """Block until all queued async messages are processed."""
    +        _ensure_async_worker()
    +        marker = _FlushSignal(threading.Event())
    +        _async_queue.put(marker)
    +        marker.event.wait()
AsyncLogger.flush() blocks only until the _FlushSignal is dequeued, not until all prior messages are fully written. With the current worker implementation, the marker can be consumed during batch collection and acknowledged before _process_async_batch() runs, so flush() may return too early. After fixing the worker-side barrier semantics, keep flush() aligned (e.g., wait for the marker to be completed after processing prior items).
Already applied in commit 1621b25. flush() enqueues a _FlushSignal and blocks on marker.event.wait(); the worker now stops batch collection when the signal is encountered, processes all accumulated log items first (with task_done() in a finally), and only then sets the event — making flush() a true ordering barrier.
    def shutdown_async_logging() -> None:
        """Shutdown async logging gracefully."""
        global _async_worker
        if _async_worker and _async_worker.is_alive():
            # Signal shutdown
            _async_shutdown.set()
            # Ensure all enqueued items are processed before stopping the worker.
            _async_queue.join()

            # Send shutdown sentinel so the worker can finish processing
            try:
                _async_queue.put_nowait(None)  # Shutdown signal
            except queue.Full:
                pass

            # Fall back to blocking put to ensure the sentinel is enqueued
            _async_queue.put(None)

            # Wait for worker to finish (with timeout)
            _async_worker.join(timeout=1.0)
            _async_worker = None
shutdown_async_logging() calls _async_queue.join() before preventing new enqueues. If any other thread logs concurrently (or logs are emitted during atexit), producers can keep adding items while join() is waiting, causing shutdown to hang indefinitely. Also, items enqueued after join() returns can end up behind the None sentinel and be dropped when the worker exits. Consider introducing a shutdown flag checked in _log_async (drop/redirect messages once shutdown starts), set it before draining, and enqueue the sentinel before/while draining in a way that guarantees all pre-shutdown items are processed without deadlocking.
Already applied in commit 1621b25. _async_shutting_down is set to True before _async_queue.join() is called, so concurrent _log_async callers drop new items immediately rather than enqueuing them. This prevents the join() from hanging indefinitely during shutdown.
kakashi/core/logger.py
Outdated
            # Process batch
            if batch:
                _process_async_batch(batch)

            for _ in range(pending_count):
                _async_queue.task_done()

            if shutdown_requested:
                break

        except Exception:
            pass  # Ignore errors in background thread
The broad except Exception: pass in _async_worker_thread() can swallow unexpected errors after dequeuing items but before calling task_done(). With _async_queue.join() now used in shutdown, any missed task_done() will deadlock shutdown/flush. Prefer ensuring task_done() is executed in a finally for each dequeued item (including when batch processing fails), and avoid blanket exception suppression around the whole loop.
Already applied in commit 1621b25. The broad except Exception: pass around the whole loop has been removed. Each dequeued item's task_done() is now called inside a finally block (lines 107–111), guaranteeing it runs even if _process_async_batch raises, so queue.join() can never deadlock.
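The accounting rule in this fix (one `task_done()` per dequeued item, guaranteed by a `finally`) is easy to demonstrate in isolation; this is a generic sketch, not the kakashi source:

```python
import queue

def consume_all(q: queue.Queue, handle) -> None:
    """Process every queued item; task_done() runs even if handle() raises."""
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            return
        try:
            handle(item)
        except Exception:
            pass  # narrow this in real code; it must never skip task_done()
        finally:
            q.task_done()

q = queue.Queue()
for n in (1, 2, 3):
    q.put(n)

seen = []
def flaky(n: int) -> None:
    if n == 2:
        raise ValueError("simulated write failure")
    seen.append(n)

consume_all(q, flaky)
q.join()  # returns immediately: every get() was matched by task_done()
assert seen == [1, 3]
```

If `task_done()` were inside the `try` instead, the failing item would leave the queue's unfinished count at 1 and any `join()`-based shutdown would hang forever.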
…ix CHANGELOG Co-authored-by: IntegerAlex <84370725+IntegerAlex@users.noreply.github.com> Agent-Logs-Url: https://github.com/IntegerAlex/kakashi/sessions/9e3024fc-f9e8-4dd5-8869-74ebc0dcfbb8
Already done in commit
Six independent correctness and quality issues were found across the repo. This PR addresses all of them.

Changes

Legal
- The repository is LGPL-2.1 licensed (per `LICENSE`, `pyproject.toml`, and `setup.py`); the README's license text has been corrected to match.

Packaging
- Removed the root-level `__init__.py`: `from kakashi import *` at repo root shadowed the real `kakashi/` package in any editable install.
- Removed `setup.py`: `pyproject.toml` is the canonical build config; dual setup files are a known packaging anti-pattern.

AsyncLogger reliability
- Registered `shutdown_async_logging` with `atexit`: without this, any messages still in the background queue are silently dropped when the interpreter exits.
- `shutdown_async_logging()` now drains the queue via a `None` sentinel and waits for the background worker to finish before returning, ensuring pending messages are fully processed.
- Added `AsyncLogger.close()`: an explicit per-instance best-effort flush for callers that need to drain before discarding a logger; the docstring clearly states the timing-based, best-effort nature of the operation.

API clarity
- Replaced `int` level defaults (`20`) with named constants: `LOG_LEVEL_DEBUG`/`INFO`/`WARNING`/`ERROR`/`CRITICAL` are now defined in `core/logger.py` and exported from the package.

Docs
- Added `CONTRIBUTING.md`: the README linked to it but the file did not exist (404).
- Added `CHANGELOG.md` with a `[2.2.0]` release entry documenting all changes introduced in this PR.