2024-12-10-Tuesday
created: 2024-12-10 05:15
tags:
  - daily-notes
# Tuesday, December 10, 2024
<< [[Timestamps/2024/12-December/2024-12-09-Monday|Yesterday]] | [[Timestamps/2024/12-December/2024-12-11-Wednesday|Tomorrow]] >>
## 🎯 Goal
- [x] Figure out how to extract data from my Obsidian notes and load it into my Digital Garden within my BigBrain App.
## 🌟 Results
- Figured out how to extract data from my Obsidian vault and load it into my Digital Garden within my BigBrain App
## 🌱 Next Time
- Automate the ETL pipeline so that one command takes the local version of my notes and updates the production version of my Obsidian notes
- Start working on a devlog page where I can publish my notes from my DIMMiN App development (hello!)
## 📝 Notes
It looks like uniqueness wasn't enforced at the `Note` level in the BigBrain App because it was meant to be enforced at the slug level:
```python
# Make notes unique to one digital garden (within the `Note` model)
class Meta:
    unique_together = ('vault', 'slug')
    indexes = [
        models.Index(fields=['slug']),
        models.Index(fields=['title']),
    ]
```
I ran some tests to validate this behavior: notes can now share a title across different vaults, but they still need different slugs within a vault. I tried the following:
```python
print(slugify(f"{self.vault.name} {self.note1.slug}"))
```
which produced the following slug:
```
test-vault-note-1
```
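To make the "same title, different vault" guarantee concrete, a test along these lines is a minimal sketch of what I ran; it assumes `Note` has no other required fields:
```python
from django.db import IntegrityError
from django.test import TestCase

from apps.bigbrain.models import Note, Vault


class NoteUniquenessTests(TestCase):
    def test_same_slug_allowed_across_vaults(self):
        vault_a = Vault.objects.create(name="Test Vault")
        vault_b = Vault.objects.create(name="Other Vault")
        Note.objects.create(vault=vault_a, title="Note 1", slug="note-1")
        # The same slug in a different vault is fine
        Note.objects.create(vault=vault_b, title="Note 1", slug="note-1")

    def test_duplicate_slug_rejected_within_vault(self):
        vault = Vault.objects.create(name="Test Vault")
        Note.objects.create(vault=vault, title="Note 1", slug="note-1")
        # unique_together ('vault', 'slug') makes this violate the constraint
        with self.assertRaises(IntegrityError):
            Note.objects.create(vault=vault, title="Note 1 Copy", slug="note-1")
```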
I'd like to be able to access individual notes at routes such as:
```
dimmin.com/digital-garden/test-vault/note-1
```
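A URLconf along these lines would support that pattern. This is a minimal sketch; the view and template names are placeholders, not the actual BigBrain code:
```python
# urls.py (sketch)
from django.urls import path

from . import views

urlpatterns = [
    path(
        "digital-garden/<slug:vault_slug>/<slug:note_slug>/",
        views.note_detail,
        name="note-detail",
    ),
]
```
```python
# views.py (sketch)
from django.shortcuts import get_object_or_404, render

from apps.bigbrain.models import Note


def note_detail(request, vault_slug, note_slug):
    # unique_together ('vault', 'slug') guarantees this lookup is unambiguous
    note = get_object_or_404(Note, vault__slug=vault_slug, slug=note_slug)
    return render(request, "bigbrain/note_detail.html", {"note": note})
```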
I deleted all of my `Vault`s in my local database using the Django shell:
```bash
python manage.py shell
```
```python
from apps.bigbrain.models import Vault
Vault.objects.all().delete()
```
I was then able to create `Vault`s just fine, with slugs auto-generated (and excluded from the admin view). This auto-generation of saved fields may be useful in the future...
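For reference, a minimal sketch of how that auto-generation can work on the `Vault` model, assuming the slug derives from `name` (the actual BigBrain implementation may differ):
```python
from django.db import models
from django.utils.text import slugify


class Vault(models.Model):
    name = models.CharField(max_length=255, unique=True)
    # editable=False keeps the slug out of admin and model forms
    slug = models.SlugField(max_length=255, unique=True, editable=False)

    def save(self, *args, **kwargs):
        # Derive the slug from the name once, on first save
        if not self.slug:
            self.slug = slugify(self.name)
        super().save(*args, **kwargs)
```
Using `editable=False` is what hides the field in the admin without needing to customize the `ModelAdmin`.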
Anyways, I updated the Django Management Command, and it successfully uploaded the data from my Obsidian notebook into my BigBrain App. I also added the `relative_path` field to notes so that I could track where in the directory a note came from. There were just a few more things I needed to check. After that I was able to successfully load all of the Markdown files from my vaults (and use file metadata so that the stored timestamps reflect the version on my computer instead of when I bulk-uploaded the data).
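In broad strokes, the command walks a vault directory and upserts each Markdown file, stamping it with the file's modification time instead of the import time. This is a sketch under those assumptions; the `content` and `updated_at` field names and the `load_vault` command name are placeholders, not the actual BigBrain schema:
```python
# apps/bigbrain/management/commands/load_vault.py (sketch)
from datetime import datetime, timezone
from pathlib import Path

from django.core.management.base import BaseCommand
from django.utils.text import slugify

from apps.bigbrain.models import Note, Vault


class Command(BaseCommand):
    help = "Load Markdown notes from an Obsidian vault directory"

    def add_arguments(self, parser):
        parser.add_argument("vault_name")
        parser.add_argument("vault_path")

    def handle(self, *args, **options):
        vault, _ = Vault.objects.get_or_create(name=options["vault_name"])
        root = Path(options["vault_path"])
        for md_file in root.rglob("*.md"):
            # Use the file's mtime so the record reflects the version on
            # disk, not the moment of the bulk upload
            modified = datetime.fromtimestamp(
                md_file.stat().st_mtime, tz=timezone.utc
            )
            Note.objects.update_or_create(
                vault=vault,
                slug=slugify(md_file.stem),
                defaults={
                    "title": md_file.stem,
                    "content": md_file.read_text(encoding="utf-8"),
                    # Track where in the vault the note lives
                    "relative_path": str(md_file.relative_to(root)),
                    "updated_at": modified,  # assumed field name
                },
            )
            self.stdout.write(f"Loaded {md_file.relative_to(root)}")
```
Running it would then look like `python manage.py load_vault "Test Vault" ~/ObsidianVault`.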
The next step here is to build out the ETL pipeline that executes the following steps (a rough sketch follows the list):
- Uploads the vault to S3
- Copies the data from the S3 bucket to the DIMMiN App hosted on Heroku
- Executes the Django Management Command
- Deletes the S3 bucket (optional; keeping it may save time, since only changed files would need re-uploading)
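Something like the following could drive those steps from my machine. It's a sketch assuming the AWS and Heroku CLIs are installed locally (and the AWS CLI is available on the dyno), with placeholder bucket, app, and command names:
```python
# etl_pipeline.py (sketch; bucket, app, and command names are placeholders)
import subprocess

VAULT_DIR = "/path/to/ObsidianVault"
BUCKET = "s3://my-vault-bucket"
APP = "dimmin-app"

# 1. Upload the vault to S3 (`sync` only transfers new or changed files)
subprocess.run(["aws", "s3", "sync", VAULT_DIR, BUCKET], check=True)

# 2 & 3. Pull the data down on a Heroku dyno, then run the management command
remote_cmd = (
    f"aws s3 sync {BUCKET} /tmp/vault && "
    "python manage.py load_vault MainVault /tmp/vault"
)
subprocess.run(
    ["heroku", "run", "--app", APP, "--", "bash", "-c", remote_cmd],
    check=True,
)

# 4. Optionally empty the bucket (skipping this keeps future syncs incremental)
# subprocess.run(["aws", "s3", "rm", BUCKET, "--recursive"], check=True)
```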
Also, I'm not sure if the metadata of the files will be preserved when they are uploaded to S3. Thankfully there appears to be a process for manually adjusting the S3 metadata, which is nice.
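On that front, boto3 can attach custom metadata to an object at upload time, which could carry the original modification time along with the file. A minimal sketch, with a placeholder bucket name:
```python
from pathlib import Path

import boto3

s3 = boto3.client("s3")
path = Path("/path/to/ObsidianVault/note-1.md")

# Custom metadata is stored with the object as x-amz-meta-* headers
s3.upload_file(
    str(path),
    "my-vault-bucket",  # placeholder bucket name
    f"vault/{path.name}",
    ExtraArgs={"Metadata": {"source-mtime": str(path.stat().st_mtime)}},
)
```
I'll work on building out this pipeline tomorrow.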
## Notes created today
```dataview
List FROM "" WHERE file.cday = date("2024-12-10") SORT file.ctime asc
```
## Notes last touched today
```dataview
List FROM "" WHERE file.mday = date("2024-12-10") SORT file.mtime asc
```