
Fix license mismatch, import shadowing, packaging debt, AsyncLogger message loss, magic numbers, missing CONTRIBUTING.md, and update CHANGELOG#7

Open
Copilot wants to merge 9 commits into master from copilot/fix-license-mismatch-issue

Conversation


Copilot AI commented Mar 17, 2026

Six independent correctness and quality issues were found across the repo. This PR addresses all of them.

Changes

Legal

  • README claimed MIT; actual license is LGPL-2.1 (matching LICENSE, pyproject.toml, and setup.py). README corrected.

Packaging

  • Removed root-level __init__.py — the `from kakashi import *` at the repo root shadowed the real kakashi/ package in any editable install.
  • Removed setup.py — pyproject.toml is the canonical build config; dual setup files are a known packaging anti-pattern.

AsyncLogger reliability

  • Registered shutdown_async_logging with atexit — without this, any messages still in the background queue are silently dropped when the interpreter exits.
  • shutdown_async_logging() now drains the queue via a None sentinel and waits for the background worker to finish before returning, ensuring pending messages are fully processed.
  • Added AsyncLogger.close() — explicit per-instance best-effort flush for callers that need to drain before discarding a logger; docstring clearly states the timing-based, best-effort nature of the operation.

API clarity

  • Replaced bare int level defaults (20) with named constants — LOG_LEVEL_DEBUG/INFO/WARNING/ERROR/CRITICAL are now defined in core/logger.py and exported from the package:

```python
from kakashi import get_logger, LOG_LEVEL_WARNING

logger = get_logger("myapp", min_level=LOG_LEVEL_WARNING)
```
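Presumably the constants follow the stdlib `logging` numeric convention, since the old bare default was `20` (this is an assumption — the actual values live in core/logger.py):

```python
# Illustrative values only; the real definitions are in kakashi/core/logger.py.
# They mirror the stdlib logging convention, so min_level=20 and
# min_level=LOG_LEVEL_INFO would select the same threshold.
LOG_LEVEL_DEBUG = 10
LOG_LEVEL_INFO = 20
LOG_LEVEL_WARNING = 30
LOG_LEVEL_ERROR = 40
LOG_LEVEL_CRITICAL = 50
```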

Docs

  • Created CONTRIBUTING.md — the README linked to it but the file did not exist (404).
  • Updated CHANGELOG.md with a [2.2.0] release entry documenting all changes introduced in this PR.


Copilot AI and others added 2 commits March 17, 2026 18:39
…own, magic numbers, CONTRIBUTING.md

Co-authored-by: IntegerAlex <84370725+IntegerAlex@users.noreply.github.com>

vercel bot commented Mar 17, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| kakashi | Ready | Preview, Comment | Mar 21, 2026 3:29pm |

@IntegerAlex IntegerAlex requested a review from Copilot March 17, 2026 18:45
@IntegerAlex IntegerAlex marked this pull request as ready for review March 17, 2026 18:46
Copilot AI left a comment

Pull request overview

This PR addresses several repo-wide correctness and quality issues spanning licensing/documentation accuracy, Python packaging hygiene, and logging API/behavior improvements.

Changes:

  • Correct README licensing text to match the repository’s LGPL-2.1 licensing.
  • Simplify packaging by removing legacy/incorrect entry points (setup.py, root-level __init__.py) and adding the missing CONTRIBUTING.md.
  • Improve logger API clarity by introducing exported LOG_LEVEL_* constants, adjusting default log levels, and adding async logger shutdown/close behavior.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| setup.py | Removed legacy setup script in favor of pyproject.toml-based builds. |
| kakashi/core/logger.py | Adds exported log-level constants; modifies defaults; adds AsyncLogger.close() and registers async shutdown with atexit. |
| kakashi/__init__.py | Re-exports LOG_LEVEL_* constants from the top-level package API. |
| __init__.py | Removed root-level __init__.py that could shadow the real kakashi/ package. |
| README.md | Updates license text from MIT to LGPL-2.1. |
| CONTRIBUTING.md | Adds contributing guide referenced by README. |


IntegerAlex and others added 3 commits March 18, 2026 00:22
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
@IntegerAlex
Owner

@copilot update changelog for version 2.2

Co-authored-by: IntegerAlex <84370725+IntegerAlex@users.noreply.github.com>
Author

Copilot AI commented Mar 17, 2026

@copilot update changelog for version 2.2

Done — added a [2.2.0] entry to CHANGELOG.md in commit 28749e8, covering all the changes in this PR (new LOG_LEVEL_* constants, AsyncLogger.close(), atexit shutdown, queue-draining shutdown fix, removed setup.py and root __init__.py, license fix, and new CONTRIBUTING.md).

Copilot AI changed the title Fix license mismatch, import shadowing, packaging debt, AsyncLogger message loss, magic numbers, and missing CONTRIBUTING.md Fix license mismatch, import shadowing, packaging debt, AsyncLogger message loss, magic numbers, missing CONTRIBUTING.md, and update CHANGELOG Mar 17, 2026
Copilot AI requested a review from IntegerAlex March 17, 2026 19:00
Added new constants for log levels and a close method for AsyncLogger.
Copilot AI left a comment

Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

@akshat-agribid

What in the name of all that is holy is this? Did you even look at the garbage you just committed, or did you just ask Copilot to "fix it" and blindly accept the first hallucinatory
fever dream it spat out?

This PR is a masterclass in how to introduce subtle race conditions, break API compatibility, and demonstrate a complete lack of basic mathematical sequencing all in one go.

Here is why this code belongs in the trash:

  1. AsyncLogger.close() and flush() are an absolute joke:
    You put time.sleep(0.05) and called it a "best-effort flush"? Are you out of your mind? This is exactly the kind of incompetent, race-condition-inducing trash that makes asynchronous
    code a nightmare. What happens if the underlying disk is slow and it takes 100ms? You silently drop logs. What happens if the queue is completely empty and the worker is already done?
    You block the caller for 50ms for absolutely NO REASON.

You are literally punishing people who use your code with artificial latency, while simultaneously guaranteeing it will fail under load. If you want to flush a queue, you use
queue.join(), or push an event/sentinel and wait on it. "Best effort" is not an excuse for "braindead time-based polling."
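The queue.join()-based flush the reviewer is demanding is simple to sketch (illustrative only — the sleep simulates slow I/O inside the worker and is not part of the pattern):

```python
import queue
import threading
import time

q: queue.Queue = queue.Queue()
processed = []

def _worker() -> None:
    while True:
        item = q.get()
        try:
            time.sleep(0.01)   # simulate slow I/O (a slow disk, etc.)
            processed.append(item)
        finally:
            q.task_done()      # runs even if the write raises

threading.Thread(target=_worker, daemon=True).start()

def flush() -> None:
    """Block until every item enqueued so far has been fully processed."""
    q.join()                   # no sleeps, no dropped logs under load
```

Unlike `time.sleep(0.05)`, this blocks exactly as long as needed: slow disks don't drop logs, and an already-empty queue returns immediately.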

  2. You nuked __version__ from orbit:
    You deleted the root __init__.py (which is fine, it shouldn't have been there anyway), but you completely "forgot" to move __version__ and __author__ into kakashi/__init__.py. So now anyone (or any tool) relying on kakashi.__version__ gets an AttributeError and crashes. Did you even run a basic sanity check?

  3. Changelog versioning stroke:

[2.2.0] -

Are you incapable of counting? The previous version is 0.2.1. Did we somehow skip two entire major release cycles while I was blinking? And you couldn't even be bothered to put a date on
it. If you're going to update a changelog, at least look at the previous line.

  4. shutdown_async_logging() sentinel:
    You added atexit.register(shutdown_async_logging), which is frankly the only sane thing in this entire diff. But given the rest of the file, I'm shocked you didn't just put time.sleep(5) at program exit and call it a day.

Fix the time.sleep() hack immediately. Use proper thread synchronization (e.g., waiting on an Event or using queue.join()). Put the __version__ string back. And learn how to count before you touch the changelog again.

REJECTED. Do not merge this garbage.
@copilot

Replace timing-based best-effort flushing with a robust synchronization
barrier using a dedicated _FlushSignal sentinel and threading events.
The AsyncLogger.flush method now enqueues a signal that blocks the
calling thread until the background worker processes all preceding log
entries, ensuring data integrity during shutdown

Co-authored-by: gitbot <gitbot@gossorg.in>
Copilot AI left a comment

Pull request overview

Copilot reviewed 8 out of 8 changed files in this pull request and generated 6 comments.



CHANGELOG.md Outdated
### Changed
- `shutdown_async_logging` is now registered with `atexit` so buffered async messages are flushed when the interpreter exits rather than being silently dropped.
- `shutdown_async_logging()` now drains the queue via a `None` sentinel and waits for the background worker to finish before returning, ensuring pending messages are processed.
- `AsyncLogger.flush()` and `AsyncLogger.close()` docstrings clarified to accurately describe the best-effort, timing-based nature of the operation.
Copilot AI Mar 21, 2026

The changelog entry says AsyncLogger.flush()/close() docstrings were clarified to describe a "best-effort, timing-based" flush, but the implementation and docstrings in kakashi/core/logger.py now describe a blocking/synchronized flush. Please align the changelog text with the actual behavior (or adjust the behavior/docstrings accordingly).

Suggested change
- `AsyncLogger.flush()` and `AsyncLogger.close()` docstrings clarified to accurately describe the best-effort, timing-based nature of the operation.
- `AsyncLogger.flush()` and `AsyncLogger.close()` docstrings clarified to accurately describe the blocking/synchronized flush semantics of the operation.

CHANGELOG.md Outdated

## [Unreleased]

## [0.2.2] - 2026-03-
Copilot AI Mar 21, 2026

PR metadata mentions a [2.2.0] release entry, but this changelog adds [0.2.2]. Please confirm the intended next version and update either the changelog header or the PR description to match, to avoid publishing an inconsistent release history.

Comment on lines +78 to +97
```python
            if isinstance(item, _FlushSignal):
                item.event.set()
                _async_queue.task_done()
                continue

            batch.append(item)

            pending_count = 1

            # Collect additional items (non-blocking)
            for _ in range(batch_size - 1):
                try:
                    item = _async_queue.get_nowait()
                    if item is None:  # Shutdown signal
                        _async_queue.task_done()
                        shutdown_requested = True
                        break
                    if isinstance(item, _FlushSignal):
                        item.event.set()
                        _async_queue.task_done()
                        continue
```
Copilot AI Mar 21, 2026

_FlushSignal is acknowledged (event set) as soon as it is dequeued, including while building a batch (see handling in the get_nowait() loop). This means AsyncLogger.flush() can return before earlier log items in the same batch are actually processed, so it is not a true flush barrier. Consider treating the flush marker as a barrier: stop batch collection when encountered, process any already-dequeued items first, then set the marker event after processing completes (and ensure ordering relative to prior messages).
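A barrier-correct version of the worker loop could look roughly like this (a simplified single-batch sketch using the _FlushSignal name from the diff, not the PR's actual code; `written` stands in for the real output):

```python
import queue
import threading

class _FlushSignal:
    def __init__(self) -> None:
        self.event = threading.Event()

_NOTHING = object()  # local marker: queue was empty, nothing dequeued

q: queue.Queue = queue.Queue()
written = []

def _worker() -> None:
    while True:
        item = q.get()  # block for the first item
        batch = []
        # Collect a batch, but STOP collecting at a flush marker or sentinel.
        while item is not None and not isinstance(item, _FlushSignal):
            batch.append(item)
            try:
                item = q.get_nowait()
            except queue.Empty:
                item = _NOTHING
                break
        written.extend(batch)       # process prior items first...
        for _ in batch:
            q.task_done()
        if isinstance(item, _FlushSignal):
            item.event.set()        # ...then release the flush barrier
            q.task_done()
        elif item is None:
            q.task_done()
            return

threading.Thread(target=_worker, daemon=True).start()

def flush() -> None:
    """Block until every message enqueued before this call is written."""
    sig = _FlushSignal()
    q.put(sig)
    sig.event.wait()
```

The key change versus the reviewed diff: the marker's event is set only after the already-dequeued batch is processed, so `flush()` is a true ordering barrier.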

Owner

@copilot apply changes based on this feedback

Author

Applied in commit 1621b25. The _FlushSignal in the batch-collection loop now acts as a true barrier: batch collection stops when the signal is dequeued, all accumulated log items are processed first (with task_done() in a finally block), and only then is the signal event set. This ensures flush() and close() always block until all preceding messages are written. I also fixed the shutdown_async_logging() deadlock risk by setting _async_shutting_down = True before calling queue.join(), so concurrent _log_async callers drop new items instead of indefinitely extending the drain.

Comment on lines 335 to +340
```diff
     def flush(self) -> None:
-        """Flush pending messages (best effort)."""
-        # For async logger, we can't force immediate flush
-        # but we can yield to allow background processing
-        time.sleep(0.001)
+        """Block until all queued async messages are processed."""
+        _ensure_async_worker()
+        marker = _FlushSignal(threading.Event())
+        _async_queue.put(marker)
+        marker.event.wait()
```
Copilot AI Mar 21, 2026

AsyncLogger.flush() blocks only until the _FlushSignal is dequeued, not until all prior messages are fully written. With the current worker implementation, the marker can be consumed during batch collection and acknowledged before _process_async_batch() runs, so flush() may return too early. After fixing the worker-side barrier semantics, keep flush() aligned (e.g., wait for the marker to be completed after processing prior items).

Owner

@copilot apply changes based on this feedback

Author

Already applied in commit 1621b25. flush() enqueues a _FlushSignal and blocks on marker.event.wait(); the worker now stops batch collection when the signal is encountered, processes all accumulated log items first (with task_done() in a finally), and only then sets the event — making flush() a true ordering barrier.

Comment on lines 398 to 414
```diff
 def shutdown_async_logging() -> None:
     """Shutdown async logging gracefully."""
     global _async_worker
     if _async_worker and _async_worker.is_alive():
-        # Signal shutdown
-        _async_shutdown.set()
+        # Ensure all enqueued items are processed before stopping the worker.
+        _async_queue.join()

         # Send shutdown sentinel so the worker can finish processing
-        try:
-            _async_queue.put_nowait(None)  # Shutdown signal
-        except queue.Full:
-            pass
+        # Fall back to blocking put to ensure the sentinel is enqueued
+        _async_queue.put(None)

         # Wait for worker to finish (with timeout)
         _async_worker.join(timeout=1.0)
         _async_worker = None
```
Copilot AI Mar 21, 2026

shutdown_async_logging() calls _async_queue.join() before preventing new enqueues. If any other thread logs concurrently (or logs are emitted during atexit), producers can keep adding items while join() is waiting, causing shutdown to hang indefinitely. Also, items enqueued after join() returns can end up behind the None sentinel and be dropped when the worker exits. Consider introducing a shutdown flag checked in _log_async (drop/redirect messages once shutdown starts), set it before draining, and enqueue the sentinel before/while draining in a way that guarantees all pre-shutdown items are processed without deadlocking.

Owner

@copilot apply changes based on this feedback

Author

Already applied in commit 1621b25. _async_shutting_down is set to True before _async_queue.join() is called, so concurrent _log_async callers drop new items immediately rather than enqueuing them. This prevents the join() from hanging indefinitely during shutdown.

Comment on lines 106 to 116
```python
                # Process batch
                if batch:
                    _process_async_batch(batch)

                for _ in range(pending_count):
                    _async_queue.task_done()

                if shutdown_requested:
                    break

            except Exception:
                pass  # Ignore errors in background thread
```
Copilot AI Mar 21, 2026

The broad except Exception: pass in _async_worker_thread() can swallow unexpected errors after dequeuing items but before calling task_done(). With _async_queue.join() now used in shutdown, any missed task_done() will deadlock shutdown/flush. Prefer ensuring task_done() is executed in a finally for each dequeued item (including when batch processing fails), and avoid blanket exception suppression around the whole loop.

Owner

@copilot apply changes based on this feedback

Author

Already applied in commit 1621b25. The broad except Exception: pass around the whole loop has been removed. Each dequeued item's task_done() is now called inside a finally block (lines 107–111), guaranteeing it runs even if _process_async_batch raises, so queue.join() can never deadlock.

…ix CHANGELOG

Co-authored-by: IntegerAlex <84370725+IntegerAlex@users.noreply.github.com>
Agent-Logs-Url: https://github.com/IntegerAlex/kakashi/sessions/9e3024fc-f9e8-4dd5-8869-74ebc0dcfbb8
Author

Copilot AI commented Mar 21, 2026

@copilot update changelog for version 2.2

Already done in commit 1621b25 — the CHANGELOG now has the [0.2.2] entry with a corrected date and accurate description of the blocking/synchronized flush semantics.
