2023-12-02
Goal
To make sure that my updated changes to the Taskmaster App are recognized in Production. Also to finish up working on the Taskmaster App (I got the whooooooole Saturday). Some possible ideas include:
- Implement auto-reset on tasks based on timezone
- Add functionality to alter other parts of an individual task (whether it's private or public, whether it has been retired, etc.)
- Add the ability to create new tasks to the UI
- Add the ability for users to download their tasks from the site
Notes
Task App - Push Arrow Shake to Production
I wanted to kick off today by finding out why I'm not seeing the changes I need reflected in the Taskmaster App's frontend. First I double-checked the static files being served to the web page and found that we were using old CSS files. I decided to use the Heroku CLI bash access I learned about on 2023-11-30 to check whether my files were updated:
heroku run bash --app dimmin
Which I could break out of using the command
exit
I tried deleting the taskmaster folder inside the staticfiles folder
rm -r taskmaster
and re-running collectstatic to see if it could be regenerated internally:
heroku run python manage.py collectstatic
which replaced the taskmaster static folder as I was hoping it would. Unfortunately, I saw no changes. I wanted to try using a text editor to see if I could make changes to the site that way and to check whether the style.css for the Task App was actually updated. If it was the correct file, then I'd know this was another cache issue. Otherwise, the site wasn't getting the right files and they would need to be updated.
I found this Stack Overflow post which talked about installing vim for the Heroku CLI, but it failed. Damn! I found out it failed because I needed to add a 'name' field to the C:\Users\Billy\AppData\Local\heroku\package.json file of my app ('dimmin'). Then the install worked perfectly:
heroku plugins:install https://github.com/naaman/heroku-vim
But for some reason it wasn't recognized by the Heroku CLI. Thankfully, since the file is short, I was able to use a different Bash command:
cat style.css
to show me the actual contents of the file. Here I saw that it wasn't actually the updated version of the taskmaster style.css file I needed. The reason the file wasn't updated (even though I got the "everything is up to date" message from the CLI when I tried to push to Production) was that it hadn't been updated in the static folder. Even though the file was correct in my local taskmaster/static/taskmaster/css/style.css, the file in static/taskmaster/css/style.css was unchanged. I should really just delete the static folder, because collectstatic already runs at server startup and will regenerate the static files from the project.
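For my own reference, here's roughly the static-file layout this behavior implies (the setting names below are Django defaults plus my assumptions, not copied from the project):
# config/settings.py - sketch of the assumed static file settings
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"    # where collectstatic writes the served copies
STATICFILES_DIRS = [BASE_DIR / "static"]  # the project-level folder that had gone stale
# App-level sources live at <app>/static/<app>/..., e.g. taskmaster/static/taskmaster/css/style.css
If the layout really is like this, the project-level static/ copy wins during collectstatic (the filesystem finder runs before the app-directories finder), which would explain why the stale CSS kept getting collected.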
Anyways, it worked! Yay! I also now have an extra tool in my toolbelt to debug the site; it was very useful for confirming that the error was that the correct style.css file was not being pushed to Production. Yay again!
Auto-Reset Based on User's Timezone
Since we've already implemented the reset functionality in the app, all we need to do is call it for each user at midnight in their timezone. Thankfully, the user's timezone is refreshed every time they enter the site via the Accounts App. We can therefore check every hour, based on each user's timezone, whether it's midnight for them, then reset their completed tasks back to their original states. Time to finally learn more about Celery!
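Roughly, the check I have in mind looks like this (just a sketch; the Profile.timezone field and the reset_tasks helper below are stand-ins for however the Accounts App actually stores the timezone and whatever the real reset function is called):
# Sketch of the hourly midnight check (hypothetical Profile model with a
# `timezone` string field, plus a reset_tasks(user) helper for the existing reset logic)
from datetime import datetime, timezone as dt_timezone
from zoneinfo import ZoneInfo

def reset_tasks_for_users_at_midnight(profiles, reset_tasks):
    now_utc = datetime.now(dt_timezone.utc)
    for profile in profiles:
        local_now = now_utc.astimezone(ZoneInfo(profile.timezone))
        if local_now.hour == 0:  # it's the midnight hour for this user
            reset_tasks(profile.user)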
Scheduled Jobs on my Web App
Sometimes we're gonna want to run some kind of script on a scheduled basis. In this case we want to check when it's midnight for all of our users. To do this we'll need to use Celery, a Python task queue that (together with its Beat scheduler) can run jobs for us on a schedule.
Looks like I'll need to use Redis for this, so based on this help page I installed django-redis
via
pip install django-redis
Then I re-froze my installed package requirements via
pip freeze > requirements.txt
I followed the tutorial to establish Redis as my cache system in the config/settings.py file with the following code:
# Establish Redis as the Caching System (requires `import os` near the top of settings.py)
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": os.environ.get('REDIS_URL'),
    }
}
Which of course means I'll have to find out what the heck my REDIS_URL will be.
I logged into Heroku and added the Heroku Data for Redis resource. Currently, this costs about $3 / month, which I'm a little disappointed about, but it's low enough that I'm not too bothered. It looks like this means Heroku Resources will total $8 / month and my site will cost ~$30 / month to run this way. Not too crazy, but I can see it getting expensive quickly, so I'll have to keep an eye on this. Either way I'd say it's worth it to have scheduled tasks. There's a lot of stuff I could do with that.
Then I was able to navigate to my DIMMiN Heroku Data for Redis Settings page on the Heroku platform and find the REDIS_URL environment variable I needed under the URI setting.
I was feeling like a cowboy, so I decided to add the REDIS_URL to my Heroku environment variables with the command
heroku config:set REDIS_URL=redis_uri
then pushed it to Production. Ideally I should get into the habit of pushing to a Staging environment first so I don't constantly break my site, but hey, that's the beauty of doing this as a hobby... And it crashed!
After consulting with ChatGPT, I think it was crashing because I hadn't added the necessary changes to my Procfile. I updated my Procfile to:
web: gunicorn config.wsgi --log-file -
worker: celery -A config worker --loglevel=info
Also, in the config/celery.py code I had to change the app name from 'dimmin' to 'config'. Not sure why I called my main app config and not dimmin, but hey, it solved the issue and I'm back online yeeeeeee boiiiiiiii. I can also see that it's being called periodically in my heroku logs, which is interesting. Now that it's set up and working in my DIMMiN App, it's time to see what kind of tasks I can really do with Celery.
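For context, config/celery.py now looks roughly like the standard Django + Celery wiring (a sketch from memory, not a verbatim copy of my file):
# config/celery.py - standard setup, with the app named after the project module
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

app = Celery("config")  # named 'config' to match the project module (was 'dimmin')
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()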
Originally this task was located in the taskmaster/views.py file. I decided to create a new file called tasks.py in the Taskmaster App (fitting for an app called taskmaster) so I could separate periodically scheduled tasks from the other code. I think that the code in a Django view should only be related to what content to surface to the user, and I'd like to keep it isolated that way.
ChatGPT gave me some good guidance on setting up a test run by adding Celery Beat. The process looked like this (a rough sketch of steps 1 and 2 follows the list):
1) Add the shared task to the app/tasks.py file as a function with the @shared_task decorator
2) Add a CELERY_BEAT_SCHEDULE variable to the config/settings.py file that tells the scheduler to run that specific function every minute
3) Update my Procfile to include a new worker
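Here's roughly what steps 1) and 2) look like, using a made-up task name (hello_from_beat) instead of my real one:
# taskmaster/tasks.py - step 1), hypothetical task name
from celery import shared_task

@shared_task
def hello_from_beat():
    print("Celery Beat fired this task.")

# config/settings.py - step 2), run it every 60 seconds
CELERY_BEAT_SCHEDULE = {
    "hello-every-minute": {
        "task": "taskmaster.tasks.hello_from_beat",
        "schedule": 60.0,
    },
}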
I don't see anything happening when I run the server locally. I'm gonna try and push it to Production and see what happens YOLO ahaaa...
Looks like the workers are counted as their own Heroku Dynos (which of course Heroku charges for). My Heroku Resources are now up to $22 / month. Damn! This is what I was worried about. Oh well I can always remove it if the cost becomes too high. I just don't wanna end up spending $1k a year on my damn website.
Anyways, I needed to add the CELERY_BROKER_URL to the config/settings.py file to get the worker to actually do something. Now the worker is working! But it's not actually sending me the message in the logs. The worker was also complaining that I wasn't using an SSL certificate / SSL encryption in general.
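The setting itself is only a line or two (a sketch, assuming the broker is the same Heroku Redis instance behind REDIS_URL):
# config/settings.py - point Celery at the Redis add-on (needs `import os`)
CELERY_BROKER_URL = os.environ.get('REDIS_URL')
# Heroku Redis uses the rediss:// scheme, so the worker may also want SSL options
# (e.g. CELERY_BROKER_USE_SSL), which is probably what those SSL complaints were about.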
I followed this guide to fix it by pip installing kombu-fernet-serializers and adding that to my Celery app's configuration. For some reason it kept asking for a KOMBU_FERNET_KEY even though I had already set it like 3 times. I think it's because I needed to set it in my Anaconda environment. I set my KOMBU_FERNET_KEY via the following command:
conda env config vars set KOMBU_FERNET_KEY=your_generated_key
which solved the issue! Now I can keep messing around with it on my Local Version! Yay!
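For future me: the key is just a Fernet key, so if I ever need to regenerate it, something like this should work (a sketch using the cryptography package, which I believe kombu-fernet-serializers uses under the hood):
# generate_fernet_key.py - print a fresh key to use as KOMBU_FERNET_KEY
from cryptography.fernet import Fernet

print(Fernet.generate_key().decode())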
Now I wanted to try making actual changes to the database via my test task. I created another dummy task called test_celery and had it reset the state of my test task. Using pgAdmin I was able to see that the test task has the unique ID of 9. Therefore, I was able to use the following function to reset the task every 10 seconds:
from celery import shared_task
from taskmaster.models import Task  # assuming the Task model lives in the Taskmaster App's models

@shared_task
def test_celery():
    Task.objects.filter(id=9).update(is_complete=False, is_checked_in=False)
If this works, I should see my test task (now completed) return to its original state every 10 seconds. Then I can simply route the command to execute scheduled_reset_task_status every hour.
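When I get there, the hourly routing should just be another beat entry, something like this (assuming scheduled_reset_task_status ends up in taskmaster/tasks.py):
# config/settings.py - run the reset check at the top of every hour
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    "reset-task-status-hourly": {
        "task": "taskmaster.tasks.scheduled_reset_task_status",
        "schedule": crontab(minute=0),
    },
}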
Interestingly, when I ran the command
heroku run celery -A config worker --loglevel=info
I got a working version of both my tasks being executed. I could see my log printing out that Celery was working AND I saw that the state of my task was changed in my database! Then I got an error. I've been working at this for a while, so I'm taking a break for today; we'll see if I come back later.
Results
- Updated Taskmaster App Static Files
- Also found the source of frequent staticfile issues
- Added Celery Beat to schedule asynchronous tasks
Next Time
- Actually get Celery Beat commands to execute properly so I can FINALLY get the timezone functionality