Saturday, July 29, 2023

scale of the day: modded byz

some kind of modified byzantine: it's the byzantine scale with the 4th dropped (down to six notes), and the rows below are its modes

1st  2nd  3rd  4th  5th  6th
1    b2   3    5    b6   7
1    b3   b5   5    b7   7
1    b3   3    5    b6   6
1    b2   3    4    b5   6
1    b3   3    4    b6   7
1    b2   2    4    b6   6
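
quick Python check (mine, just a sketch) that the lower five rows really are the modes (rotations) of the top row, with scales written as semitone offsets from the root:

SCALE = [0, 1, 4, 7, 8, 11]  # 1 b2 3 5 b6 7 = byzantine minus the 4

def mode(scale, n):
    # rotate to the nth mode (1-indexed) and re-root at 0
    rotated = scale[n - 1:] + [s + 12 for s in scale[:n - 1]]
    return [s - rotated[0] for s in rotated]

for n in range(1, 7):
    print(n, mode(SCALE, n))
# prints the six rows above, e.g. mode 2 -> [0, 3, 6, 7, 10, 11] = 1 b3 b5 5 b7 7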

Monday, July 24, 2023

First time deep frying



How to get crispy fries

Disclaimer: I don't know what I'm talking about, I just saw some stuff on the internet.

hot tips

  • brine?
  • baking powder?
  • baking soda?
  • double fry?

Double fry

  1. First fry at a lower temperature
  2. Fry again at a higher temperature

The first fry pulls the moisture out and the second one crisps.

Baking powder

Or baking soda? Baking soda is about 3x as alkaline as baking powder. It's supposed to make them crispier.

Results

I had fun.
Good luck.

Thursday, July 13, 2023

Monday, July 10, 2023

Text mining projects


Data science / text mining project ideas:

  • sentiment analysis
  • topic modeling
  • text classification
  • named entity recognition
  • text summarization
  • fake news detection

  1. Sentiment Analysis of Product Reviews: Build a sentiment analysis model to analyze customer reviews of products or services. Use a dataset of reviews (e.g., from e-commerce websites) and apply machine learning techniques to classify the sentiment as positive, negative, or neutral. (A rough sketch follows this list.)

  2. Topic Modeling of News Articles: Use a collection of news articles from different sources and apply topic modeling techniques to uncover the dominant themes or topics within the dataset. Use algorithms like Latent Dirichlet Allocation (LDA) to identify key topics and analyze the distribution of topics across the documents.

  3. Text Classification for Document Categorization: Build a text classification model to automatically categorize documents into predefined categories. Use a dataset of labeled documents and train a machine learning model (e.g., Naive Bayes, Support Vector Machines) to predict the category of new, unseen documents.

  4. Named Entity Recognition (NER) in Biomedical Text: Work with text data from biomedical literature or clinical notes and develop a named entity recognition system to identify and classify entities like genes, diseases, drugs, or medical procedures mentioned in the text.

  5. Text Summarization of News Articles: Create a text summarization model that takes a news article as input and generates a concise summary of the article. Explore extractive or abstractive approaches to generate summaries and evaluate the quality of the generated summaries against human-created summaries.

  6. Fake News Detection: Develop a machine learning model to detect fake news or misinformation. Use a dataset of news articles labeled as fake or real news and build a classifier to predict the authenticity of news articles based on their content.
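
None of these ideas come with code, so here's a rough sketch of idea 1 in Python with scikit-learn. The tiny review list and labels are made-up stand-ins for a real dataset, and the same pipeline shape also covers idea 3's Naive Bayes suggestion:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# made-up stand-in for a real review dataset (e.g., scraped product reviews)
reviews = [
    "absolutely loved it, works great",
    "terrible, broke in a week",
    "decent for the price, works fine",
    "would not buy again, total waste of money",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features feeding a Naive Bayes classifier
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(reviews, labels)

# classify a new, unseen review
print(model.predict(["loved it, works great"]))  # ['positive']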

Latent Dirichlet Allocation

A popular probabilistic topic modeling technique used for analyzing large collections of documents. It is a statistical model that uncovers latent (hidden) topics within a corpus of text. LDA assumes that each document in the corpus is a mixture of various topics, and each topic is a distribution over words.

Here's a high-level overview of how LDA works:

  1. Data Representation: The input to LDA is a collection of text documents. The documents are typically preprocessed by removing stopwords, stemming words, and converting them to a numerical representation such as a bag-of-words or TF-IDF matrix.

  2. Model Building:
    • Initialization: LDA randomly assigns each word in each document to a topic.
    • Iterative Process: LDA iterates through multiple steps to refine the topic assignments and estimate the topic-word and document-topic distributions.
      • For each word in each document, LDA calculates the probability of the word belonging to each topic based on the current topic-word and document-topic distributions.
      • The word is then re-assigned to a topic based on these probabilities.
      • This process is repeated for all words in all documents, updating the topic assignments.
    • After multiple iterations, the algorithm converges, and the topic-word and document-topic distributions stabilize.

  3. Topic Inference: Once the model is trained, you can infer the underlying topic distributions of new, unseen documents. The model calculates the probability of each topic in the new document based on the learned distributions.

  4. Interpretation: After training, you can interpret the discovered topics by examining the most probable words associated with each topic. These word distributions help identify the main themes or topics within the corpus.

LDA assumes that documents are generated based on a probabilistic process involving a finite mixture of topics. The goal of LDA is to estimate the topic-word and document-topic distributions that best explain the observed document collection. It allows you to uncover the latent structure in the text corpus and identify the underlying themes or topics without requiring pre-defined categories.

LDA has various applications, including document clustering, text categorization, recommendation systems, and information retrieval. It provides a valuable tool for exploring and understanding large textual datasets by revealing the hidden topics that characterize the documents.
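
All of that is prose, so here's a minimal sketch of the same workflow in Python with scikit-learn. The four-document corpus is made up, and sklearn's LatentDirichletAllocation fits by online variational inference rather than the word-by-word reassignment loop described above, but the steps map one-to-one:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# made-up toy corpus; in practice this would be news articles etc.
docs = [
    "the team won the game in the final minutes",
    "the senate passed the budget bill on friday",
    "the striker scored twice and the team won again",
    "lawmakers debated the new tax bill in the senate",
]

# step 1: bag-of-words representation, dropping english stopwords
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# step 2: fit a 2-topic model
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # document-topic distribution, one row per doc

# step 4: interpret topics via their most probable words
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[::-1][:5]]
    print(f"topic {i}: {top}")

# step 3: infer the topic mix of a new, unseen document
new_doc = vectorizer.transform(["the goalkeeper saved the game"])
print(lda.transform(new_doc))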



Friday, July 7, 2023

E minor w the byzantine sauce

E minor from 3 Byzantine Perspectives

root (byz scale)   degree E lands on   chords over Em
C                  3rd                 em6
Eb                 2nd                 e7, emΔ
B                  4th                 emΔ9, edim

HOW TO GET THE BYZANTINE SCALE
take the major scale but flat the 2 and the 6 = byzantine
or take the harmonic minor but sharp the 4
this gives you the hungarian minor scale,
A.K.A. “the double harmonic minor scale.”

4th mode of byzantine = hungarian minor
5th mode of hungarian minor = byzantine scale
in c byzantine = F Hungarian minor
(it puts the fun in fhungarian)
in C the chord on the 1st degree is CΔ
the 5 chord on G is a 7♭5 with no D (there is an E in the scale though)
so it's more like an inversion of Em6
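
here's a quick Python sketch (mine, nothing standard) that sanity-checks all of that, with scales written as semitone offsets from the root:

# major scale in semitones, then flat the 2 and the 6 to get byzantine
MAJOR = [0, 2, 4, 5, 7, 9, 11]
byzantine = list(MAJOR)
byzantine[1] -= 1  # b2
byzantine[5] -= 1  # b6
print(byzantine)   # [0, 1, 4, 5, 7, 8, 11] = 1 b2 3 4 5 b6 7

def mode(scale, n):
    # rotate to the nth mode (1-indexed) and re-root at 0
    rotated = scale[n - 1:] + [s + 12 for s in scale[:n - 1]]
    return [s - rotated[0] for s in rotated]

# harmonic minor with a sharp 4 = hungarian minor
harmonic_minor = [0, 2, 3, 5, 7, 8, 11]
hungarian_minor = list(harmonic_minor)
hungarian_minor[3] += 1  # #4

print(mode(byzantine, 4) == hungarian_minor)  # True: 4th mode of byzantine
print(mode(hungarian_minor, 5) == byzantine)  # True: 5th mode of hungarian minor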



“so that's if you had 3 different byz scales starting on C, Eb, and B. running each scale over a basic Em gives you those tensions”



HUNGARIAN MINOR

Mode  Name of scale                                  Degrees
1     Double Harmonic Minor                          1 2 ♭3 ♯4 5 ♭6 7 8
2     Oriental                                       1 ♭2 3 4 ♭5 6 ♭7 8
3     Ionian ♯2 ♯5                                   1 ♯2 3 4 ♯5 6 7 8
4     Locrian ♭♭3 ♭♭7                                1 ♭2 ♭♭3 4 ♭5 ♭6 ♭♭7 8
5     Double harmonic major or Phrygian Dominant ♯7  1 ♭2 3 4 5 ♭6 7 8
6     Lydian ♯2 ♯6                                   1 ♯2 3 ♯4 5 ♯6 7 8
7     Ultraphrygian or Phrygian ♭4 ♭♭7               1 ♭2 ♭3 ♭4 5 ♭6 ♭♭7 8


BYZANTINE SCALE

Mode  Name of scale          Degrees
1     Byzantine              1 ♭2 3 4 5 ♭6 7 8
2     Lydian ♯2 ♯6           1 ♯2 3 ♯4 5 ♯6 7 8
3     Ultraphrygian          1 ♭2 ♭3 ♭4 5 ♭6 ♭♭7 8
4     Hungarian/Gypsy minor  1 2 ♭3 ♯4 5 ♭6 7 8
5     Oriental               1 ♭2 3 4 ♭5 6 ♭7 8
6     Ionian ♯2 ♯5           1 ♯2 3 4 ♯5 6 7 8
7     Locrian ♭♭3 ♭♭7        1 ♭2 ♭♭3 4 ♭5 ♭6 ♭♭7 8

newsboat RSS config and setup

getting newsboat up on a raspberry pi was breezy once i realized how to import feeds

there's a brick wall you hit as soon as you get it installed: try to run it and you get an error that it can't open until you add at least one feed. I already had a bunch of feeds saved in an OPML file and just needed to AirDrop it over. BUT THE BIGGEST HEADACHE was that the import kept failing. here's the deal..

you first need to create a file called urls inside .newsboat (dot newsboat), then you can run the import:

$ touch ~/.newsboat/urls
$ newsboat -i file.opml

***

THE CONFIG

inside .newsboat make a file called config and put this in it:

auto-reload yes

browser "open -a Google\\ Chrome %u"

macro y set browser "mpv %u" ; open-in-browser ; set browser "elinks %u"


***

that's my basic config for newsboat. it opens links in Chrome by default with o; if you want mpv, press comma then y.

obv install mpv if you don't have it

Thursday, July 6, 2023

getting wiki tables into markdown

How to get tables from Wikipedia into markdown without losing the formatting

OCR is cool, and the thing on iPhone can preserve some formatting, but I was losing the formatting on my Mac. if you mess with my tables that's a deal breaker for me. here's my web-tools solution:

wikitable2csv 

csv2markdown

of course you could use pandoc

$ pandoc input.csv -t markdown -o output.md

Tables Generator is OK too

there's a Python package called csvtomd

$ pip install csvtomd

$ csvtomd input.csv > output.md


Capture text from Screen on mac with shortcuts



I need to get more into using Shortcuts.
I feel like I've barely scratched the surface of what you can do with a Mac.
I completely glossed over some really basic stuff like using Mission Control and four-finger swiping.
I disabled Mission Control because it was interfering with MilkyTracker hotkeys.
cmd + left/right to go to the start and end of the line
alt + left/right to go forward/back a word
learned those from Ali Abdaal.

I picked up a bunch of tricks from Linux land.
a bunch of times I've installed some 3rd party app to do something basic that was built in all along.
Amethyst is a tiling window manager that's kind of like dwm or i3.
I don't like how I can't get to my desktop to drag something onto it when it's on though.
it's better just to three-finger swipe up and have all the workspaces.
dragging a window on top of another window splits it, which is what I wanted all along from a tiling window manager.
I got a lot from learning vim motions:
speedy text editing. when I made it a priority to look up shortcuts, it was annoying to not have them.
but having cmd and alt arrows with shift is good enough, and then I don't have to deal with getting in and out of normal mode in vim.

saw Photopea had an extension for OCR (optical character recognition).
turns out that's stock in iOS 15 (too bad my SE only goes up to 14).
I kind of like how my phone is dumb enough to find pictures of my notes illegible.
it gives me some peace of mind.
but OCR is a huge feature, like being able to auto-translate stuff.
I've tried some OCR and it ruined the formatting.
I just learned in the Notes app you can preserve it as a block. that's huuuge.

I haven't been using the Notes app to its fullest potential.
I got into MWeb, Textastic, Collections, vimwiki, and a general DB before it could make sense.
database features, tags, and categories: I learned about those from Hugo.
Obsidian and Notion really got me thinking about categorizing things.

messing around with vimwiki and installing things from scratch taught me to appreciate things.
when all the features are there but you don't understand them because you didn't put them there yourself,
that makes it easy to overlook the value. it's camouflaged in plain sight.
 
iOS constantly filling up on storage pushed me back to basics. when I had 5 apps that kind of did the same thing, I was forced to think: how can I simplify this?