Overview
Hello, I have a random comment here that's probably not worth implementing, but it was just a thought I had.
I noticed that when I call load_dotenvx, it takes 50 to 60 milliseconds to load my envs.
This didn't quite make sense to me; I would have assumed it'd be faster. But after looking at the code, I think it makes sense because it's actually shelling out to the dotenvx CLI (not 100 percent sure on this, but I think so).
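For what it's worth, here's a rough way to sanity-check whether the process spawn is what dominates (the import path and the --version flag are assumptions on my part, not necessarily the SDK's real API):

```python
# Rough timing sketch -- adjust the import to however the SDK actually exposes it.
import subprocess
import time

from dotenvx import load_dotenvx  # assumed import path

start = time.perf_counter()
load_dotenvx()  # loads/decrypts the .env via the SDK
print(f"load_dotenvx: {(time.perf_counter() - start) * 1000:.1f} ms")

# Compare against the cost of just spawning the CLI process for a trivial command;
# if the two numbers are close, the overhead is mostly process startup, not decryption.
start = time.perf_counter()
subprocess.run(["dotenvx", "--version"], capture_output=True)
print(f"dotenvx --version: {(time.perf_counter() - start) * 1000:.1f} ms")
```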
Obviously this approach makes sense when the goal is to get things up and working fast and to keep maintenance low (you only have to maintain the CLI, and the Python SDK picks up CLI improvements immediately). But I'm curious whether we could make this a lot faster by not shelling out to the CLI and instead replicating the dotenvx logic within Python itself.
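Just to illustrate what I mean, here's a purely hypothetical sketch of an in-process loader. The decrypt_value function is a placeholder, and the "encrypted:" prefix and DOTENV_PRIVATE_KEY lookup are just my understanding of the format, so please don't take this as how dotenvx actually works internally:

```python
# Purely illustrative sketch of a native-Python loader; the real dotenvx
# parsing/decryption would need to be ported for this to actually work.
import os

def decrypt_value(ciphertext: str, private_key: str) -> str:
    """Placeholder for a Python port of dotenvx's decryption (hypothetical)."""
    raise NotImplementedError

def load_dotenvx_native(path: str = ".env") -> None:
    # Assumed key source; dotenvx may also read it from .env.keys.
    private_key = os.environ.get("DOTENV_PRIVATE_KEY", "")
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            value = value.strip().strip('"')
            if value.startswith("encrypted:"):  # my understanding of the value prefix
                value = decrypt_value(value.removeprefix("encrypted:"), private_key)
            os.environ.setdefault(key.strip(), value)
```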
Now the thing is, I have no idea how complicated the dotenvx logic actually is. And duplicating that logic in both the CLI and the Python SDK seems like it could be a maintainability nightmare, especially since I know you guys aren't a company and have limited resources.
Maybe there are other benefits to shelling out to the dotenvx CLI that I don't understand.
Use Case
Here's my use case so you have a better understanding:
I'm using Lambda with a Dockerfile. Before, I was just loading the environment variables directly through the Lambda console. Now I want to switch to dotenvx, include the .env file directly in my Docker build, and decrypt at runtime. This follows a similar pattern to your Dockerfile tutorial.
Now the problem is that to do this with Lambda, I need to call the load_dotenvx method in my app code (because Lambda owns the CMD in the Dockerfile, so I cannot "dotenvx run" my app), and that call takes 50 to 60 milliseconds to load my envs. This may not seem like a big deal at first, but with Lambda it adds that 50-60 ms to the cold start of every single Lambda I spin up. So it actually ends up being kind of significant.
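For context, this is roughly how I'm calling it (the import path is assumed, and the handler name is just for illustration):

```python
# Minimal sketch of the Lambda handler module.
from dotenvx import load_dotenvx  # assumed import path

# Runs at import time, i.e. once per cold start -- this is the ~50-60 ms hit.
load_dotenvx()

def handler(event, context):
    # Decrypted env vars are available here on every invocation.
    ...
```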
I really wanted to use dotenvx here for consistency between local/dev/prod, and it cleaned up my env logic a lot. But unfortunately in this scenario, because of the performance hit on cold starts, it kind of neutralizes the benefits and I don't know if it's worth it.
TL;DR
- How difficult would it be to make this Python SDK faster? My guess is that building the logic directly into the Python SDK, as opposed to shelling out to the dotenvx CLI, could make it significantly faster, but I am unsure.
Thank You!
By the way, I absolutely love dotenvx; I'm a huge fan. Although it's not working exactly as I want in this niche Lambda case, I have been using it all over my code base, and it's been game-changing for me.
I think I've found some really unique use cases for dotenvx outside of the typical status quo. One is that, since I can now push my env files, my AI agents can view them and make code changes to them without sacrificing security. This is absolutely massive. I ran into many coding problems before because my agents didn't have access to my env files, leaving them with an always-partial understanding of my code base. I don't have that problem anymore.
It's also been a massive productivity boost when coding with other developers. I don't have to share local .env files anymore; it saves my team hours upon hours a week.