https://www.joshholtz.com/blog/2021/06/23/automating-ios-shortcuts-the-cron-job-way
sandbox for the other blog to practise posting and try out ideas. build up that muscle memory.
Sunday, July 30, 2023
Saturday, July 29, 2023
scale of the day: modded byz
some kind of modified byzantine
1st | 2nd | 3rd | 4th | 5th | 6th |
---|---|---|---|---|---|
1 | b2 | 3 | 5 | b6 | 7 |
1 | b3 | b5 | 5 | b7 | 7 |
1 | b3 | 3 | 5 | b6 | 6 |
1 | b2 | 3 | 4 | b5 | 6 |
1 | b3 | 3 | 4 | b6 | 7 |
1 | b2 | 2 | 4 | b6 | 6 |
Monday, July 24, 2023
First time deep frying
How to get crispy fries
Disclaimer: I don’t know what I’m talking about; I just saw some stuff on the internet.
hot tips
- Brine?
- baking powder?
- baking soda?
- double fry?
Double fry
- On lower heat
- Again on higher
First fry pulls the moisture out and the second one crisps
Baking powder
Or baking soda? Baking soda is roughly 3x as strong as baking powder. Supposed to make it crispier.
Results
I had fun.
Good luck.
Friday, July 21, 2023
Thursday, July 20, 2023
Tuesday, July 18, 2023
Saturday, July 15, 2023
Thursday, July 13, 2023
Monday, July 10, 2023
Text mining projects
Data science / text mining project ideas:
- sentiment analysis
- topic modeling
- text classification
- named entity recognition
- text summarization
- fake news detection
Sentiment Analysis of Product Reviews: Build a sentiment analysis model to analyze customer reviews of products or services. Use a dataset of reviews (e.g., from e-commerce websites) and apply machine learning techniques to classify the sentiment as positive, negative, or neutral.
Topic Modeling of News Articles: Use a collection of news articles from different sources and apply topic modeling techniques to uncover the dominant themes or topics within the dataset. Use algorithms like Latent Dirichlet Allocation (LDA) to identify key topics and analyze the distribution of topics across the documents.
Text Classification for Document Categorization: Build a text classification model to automatically categorize documents into predefined categories. Use a dataset of labeled documents and train a machine learning model (e.g., Naive Bayes, Support Vector Machines) to predict the category of new, unseen documents.
Named Entity Recognition (NER) in Biomedical Text: Work with text data from biomedical literature or clinical notes and develop a named entity recognition system to identify and classify entities like genes, diseases, drugs, or medical procedures mentioned in the text.
Text Summarization of News Articles: Create a text summarization model that takes a news article as input and generates a concise summary of the article. Explore extractive or abstractive approaches to generate summaries and evaluate the quality of the generated summaries against human-created summaries.
Fake News Detection: Develop a machine learning model to detect fake news or misinformation. Use a dataset of news articles labeled as fake or real news and build a classifier to predict the authenticity of news articles based on their content.
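A from-scratch toy sketch of the sentiment-analysis idea (the text-classification and fake-news ideas work the same way): a tiny multinomial Naive Bayes over bag-of-words counts, stdlib Python only. The four hand-made "reviews" here are made-up stand-ins for a real labeled dataset.

```python
# Minimal Naive Bayes sentiment classifier: count words per label,
# then score new text by log prior + smoothed log likelihoods.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(docs):
    """docs: list of (text, label) pairs -> per-label word counts, label counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in docs:
        word_counts[label].update(tokenize(text))
        label_counts[label] += 1
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    scores = {}
    for label, counts in word_counts.items():
        total_words = sum(counts.values())
        # log prior for the label
        score = math.log(label_counts[label] / total_docs)
        # add-one-smoothed log likelihood of each word under this label
        for word in tokenize(text):
            score += math.log((counts[word] + 1) / (total_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

docs = [
    ("great product works perfectly", "positive"),
    ("absolutely love it highly recommend", "positive"),
    ("terrible quality broke after a day", "negative"),
    ("waste of money very disappointed", "negative"),
]
wc, lc = train(docs)
print(predict("love it great product", wc, lc))  # → positive
```

For a real project you'd swap the toy list for thousands of reviews and likely use scikit-learn (TF-IDF features plus `MultinomialNB`, as mentioned above), but the math is the same.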
Latent Dirichlet Allocation (LDA)
A popular probabilistic topic modeling technique used for analyzing large collections of documents. It is a statistical model that uncovers latent (hidden) topics within a corpus of text. LDA assumes that each document in the corpus is a mixture of various topics, and each topic is a distribution over words.
Here's a high-level overview of how LDA works:
Data Representation: The input to LDA is a collection of text documents. The documents are typically preprocessed by removing stopwords, stemming words, and converting them to a numerical representation such as a bag-of-words or TF-IDF matrix.
Model Building:
- Initialization: LDA randomly assigns each word in each document to a topic.
- Iterative Process: LDA iterates through multiple steps to refine the topic assignments and estimate the topic-word and document-topic distributions.
- For each word in each document, LDA calculates the probability of the word belonging to each topic based on the current topic-word and document-topic distributions.
- The word is then re-assigned to a topic based on these probabilities.
- This process is repeated for all words in all documents, updating the topic assignments.
- After multiple iterations, the algorithm converges, and the topic-word and document-topic distributions stabilize.
Topic Inference: Once the model is trained, you can infer the underlying topic distributions of new, unseen documents. The model calculates the probability of each topic in the new document based on the learned distributions.
Interpretation: After training, you can interpret the discovered topics by examining the most probable words associated with each topic. These word distributions help identify the main themes or topics within the corpus.
LDA assumes that documents are generated based on a probabilistic process involving a finite mixture of topics. The goal of LDA is to estimate the topic-word and document-topic distributions that best explain the observed document collection. It allows you to uncover the latent structure in the text corpus and identify the underlying themes or topics without requiring pre-defined categories.
LDA has various applications, including document clustering, text categorization, recommendation systems, and information retrieval. It provides a valuable tool for exploring and understanding large textual datasets by revealing the hidden topics that characterize the documents.
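The iterative process described above can be made concrete with a from-scratch sketch: collapsed Gibbs sampling on a toy four-document corpus. A real project would use a library such as gensim; the topic count, hyperparameters, and tiny corpus here are all made up for illustration.

```python
# Collapsed Gibbs sampling for LDA: randomly assign topics, then
# repeatedly re-sample each word's topic from its conditional distribution.
import random
from collections import defaultdict

random.seed(0)

docs = [
    "apple banana fruit banana apple".split(),
    "fruit apple banana juice".split(),
    "dog cat pet dog".split(),
    "cat pet dog bark".split(),
]
K = 2                   # number of topics (chosen arbitrarily)
alpha, beta = 0.5, 0.5  # Dirichlet hyperparameters
vocab = sorted({w for d in docs for w in d})

# Initialization: randomly assign each word in each document to a topic
assignments = [[random.randrange(K) for _ in d] for d in docs]
doc_topic = [[0] * K for _ in docs]                 # document-topic counts
topic_word = [defaultdict(int) for _ in range(K)]   # topic-word counts
topic_total = [0] * K
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        t = assignments[d][i]
        doc_topic[d][t] += 1
        topic_word[t][w] += 1
        topic_total[t] += 1

# Iterative process: remove the word's current assignment, compute the
# probability of each topic given the other assignments, and re-sample
for _ in range(200):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = assignments[d][i]
            doc_topic[d][t] -= 1; topic_word[t][w] -= 1; topic_total[t] -= 1
            weights = [
                (doc_topic[d][k] + alpha)
                * (topic_word[k][w] + beta) / (topic_total[k] + beta * len(vocab))
                for k in range(K)
            ]
            t = random.choices(range(K), weights=weights)[0]
            assignments[d][i] = t
            doc_topic[d][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1

# Interpretation: the most probable words per topic reveal the themes
for k in range(K):
    top = sorted(topic_word[k], key=topic_word[k].get, reverse=True)[:3]
    print(f"topic {k}: {top}")
```

With any luck the fruit words and pet words end up dominating different topics, which is exactly the "examine the most probable words per topic" interpretation step.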
Friday, July 7, 2023
E minor w the byzantine sauce
E minor from 3 Byzantine Perspectives
root | degree | chords |
---|---|---|
c | 3 | em6 |
eb | 2 | e7, emΔ |
b | 4 | emΔ9, edim |
HOW TO GET THE BYZANTINE SCALE
major scale but instead b2 b6 = byzantine
take the harmonic minor but instead sharp the 4
this gives you the hungarian minor scale
A.K.A “the double harmonic minor scale.”
4th mode of byzantine = hungarian minor
5th mode of hungarian minor = byzantine scale
in c byzantine = F Hungarian minor
(it puts the fun in fhungarian)
in c the chord on the 1st degree is C Δ
the 5 chord G is a 7 flat 5 no d, an e though
more like an inversion of em6
“so thats if you had 3 different byz-scales starting on c, eb, and b. running each scale over a basic em gives you those tensions”
HUNGARIAN MINOR
Mode | Name of scale | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th | 8th |
---|---|---|---|---|---|---|---|---|---|
1 | Double Harmonic Minor | 1 | 2 | ♭3 | ♯4 | 5 | ♭6 | 7 | 8 |
2 | Oriental | 1 | ♭2 | 3 | 4 | ♭5 | 6 | ♭7 | 8 |
3 | Ionian ♯2 ♯5 | 1 | ♯2 | 3 | 4 | ♯5 | 6 | 7 | 8 |
4 | Locrian ♭♭3 ♭♭7 | 1 | ♭2 | ♭♭3 | 4 | ♭5 | ♭6 | ♭♭7 | 8 |
5 | Double harmonic major or Phrygian Dominant ♯7 | 1 | ♭2 | 3 | 4 | 5 | ♭6 | 7 | 8 |
6 | Lydian ♯2 ♯6 | 1 | ♯2 | 3 | ♯4 | 5 | ♯6 | 7 | 8 |
7 | Ultraphrygian or Phrygian ♭4 ♭♭7 | 1 | ♭2 | ♭3 | ♭4 | 5 | ♭6 | ♭♭7 | 8 |
BYZANTINE SCALE
Mode | Name of scale | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th | 8th |
---|---|---|---|---|---|---|---|---|---|
1 | BYZANTINE | 1 | ♭2 | 3 | 4 | 5 | ♭6 | 7 | 8 |
2 | Lydian ♯2 ♯6 | 1 | ♯2 | 3 | ♯4 | 5 | ♯6 | 7 | 8 |
3 | Ultraphrygian | 1 | ♭2 | ♭3 | ♭4 | 5 | ♭6 | ♭♭7 | 8 |
4 | Hungarian/Gypsy minor | 1 | 2 | ♭3 | ♯4 | 5 | ♭6 | 7 | 8 |
5 | Oriental | 1 | ♭2 | 3 | 4 | ♭5 | 6 | ♭7 | 8 |
6 | Ionian ♯2 ♯5 | 1 | ♯2 | 3 | 4 | ♯5 | 6 | 7 | 8 |
7 | Locrian ♭♭3 ♭♭7 | 1 | ♭2 | ♭♭3 | 4 | ♭5 | ♭6 | ♭♭7 | 8 |
newsboat RSS config and setup
getting newsboat up on a raspberry pi was breezy once i realized how to import feeds
there's a brick wall you hit as soon as you install it: try to run it and you get an error saying it can't start until you add at least one feed. i already had a bunch of feeds saved in an OPML file, just needed to AirDrop it over. BUT THE BIGGEST HEADACHE was that the import kept failing. but here's the deal..
you first need to create an empty file called urls inside .newsboat (dot newsboat), then you can enter the command
$ newsboat -i file.opml
***
THE CONFIG
inside .newsboat make a file called config and put this in it:
auto-reload yes
browser "open -a Google\\ Chrome %u"
macro y set browser "mpv %u" ; open-in-browser ; set browser "elinks %u"
***
that's my basic config for newsboat. it opens links in chrome by default with o; if you want mpv, press comma then y.
obv install mpv if you don't have it.
Thursday, July 6, 2023
getting wiki tables into markdown
How to get tables from wikipedia into markdown without losing the formatting
OCR is cool; the thing on iPhone can preserve some formatting, but I was losing the formatting on my Mac. if you mess with my tables that's a deal breaker for me. here's my solution:
of course you could use pandoc:
$ pandoc input.csv -t markdown -o output.md
there's a python package called csvtomd:
$ pip install csvtomd
$ csvtomd input.csv > output.md
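and if you don't want to install anything, a few lines of stdlib python do the same csv → markdown-table conversion (the sample rows here are just made up for illustration):

```python
import csv

# write a tiny sample csv so the example is self-contained
with open("input.csv", "w", newline="") as f:
    csv.writer(f).writerows([
        ["Mode", "Name of scale"],
        ["1", "Byzantine"],
        ["4", "Hungarian minor"],
    ])

def csv_to_markdown(path):
    """Read a csv file and return a markdown pipe table as a string."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    lines = [" | ".join(header), " | ".join("---" for _ in header)]
    lines += [" | ".join(row) for row in body]
    return "\n".join(lines)

md = csv_to_markdown("input.csv")
print(md)
```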