Commit 084d615 (parent: 32f8bf6)

Note: this view is limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.
- audio/.claude/settings.local.json +11 -0
- audio/106.mp3 +3 -0
- audio/107.mp3 +3 -0
- audio/108.mp3 +3 -0
- audio/109.mp3 +3 -0
- audio/110.mp3 +3 -0
- audio/111.mp3 +3 -0
- audio/112.mp3 +3 -0
- audio/113.mp3 +3 -0
- audio/114.mp3 +3 -0
- audio/115.mp3 +3 -0
- audio/116.mp3 +3 -0
- audio/117.mp3 +3 -0
- audio/118.mp3 +3 -0
- audio/119.mp3 +3 -0
- audio/120.mp3 +3 -0
- audio/121.mp3 +3 -0
- audio/122.mp3 +3 -0
- audio/123.mp3 +3 -0
- audio/124.mp3 +3 -0
- audio/125.mp3 +3 -0
- audio/126.mp3 +3 -0
- audio/127.mp3 +3 -0
- audio/128.mp3 +3 -0
- audio/129.mp3 +3 -0
- audio/130.mp3 +3 -0
- audio/131.mp3 +3 -0
- audio/132.mp3 +3 -0
- audio/133.mp3 +3 -0
- audio/134.mp3 +3 -0
- audio/135.mp3 +3 -0
- audio/136.mp3 +3 -0
- audio/137.mp3 +3 -0
- audio/138.mp3 +3 -0
- transcripts/uncorrected/106.txt +15 -0
- transcripts/uncorrected/107.txt +7 -0
- transcripts/uncorrected/108.txt +11 -0
- transcripts/uncorrected/109.txt +17 -0
- transcripts/uncorrected/110.txt +13 -0
- transcripts/uncorrected/111.txt +5 -0
- transcripts/uncorrected/112.txt +5 -0
- transcripts/uncorrected/113.txt +7 -0
- transcripts/uncorrected/114.txt +9 -0
- transcripts/uncorrected/115.txt +13 -0
- transcripts/uncorrected/116.txt +3 -0
- transcripts/uncorrected/117.txt +9 -0
- transcripts/uncorrected/118.txt +3 -0
- transcripts/uncorrected/119.txt +7 -0
- transcripts/uncorrected/120.txt +7 -0
- transcripts/uncorrected/121.txt +3 -0
audio/.claude/settings.local.json ADDED
@@ -0,0 +1,11 @@
+{
+  "permissions": {
+    "allow": [
+      "Bash(git checkout:*)",
+      "Bash(git branch:*)",
+      "Bash(git push:*)"
+    ],
+    "deny": [],
+    "ask": []
+  }
+}
audio/106.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc730bd3778d23585dc29230711c9b457fe03bc850048bebe2b64326e2fec867
+size 1195244
audio/107.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01eed4864f25b24ba884d7301e65f70c79a3976bc15ef802851771a2378d6900
+size 3082796
audio/108.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d13d628f866e3b919557a038c086e50ec833178d0ebdace0f82856f89648c55
+size 3555116
audio/109.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6850c4f107264ce5b76483343ab4b483f2efabe45e74007d1f93e616e61c8244
+size 3589676
audio/110.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d00599d3919bb5ae0cade9a3c0466f5c0a4be5c88e063872ab653f41c559586
+size 1840235
audio/111.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36ab597df831bfd434e6b36268a85bf28577953802e3bedd5eb6c78370eb4d83
+size 1408364
audio/112.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:407436b368702db090420ede39de07a412427a8744e85bfa153489aa0f184afc
+size 2026436
audio/113.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e79f2efe462ce355e52353ada0ec48f2f38a95152b8fc5a2f6d7de9b71be141c
+size 1009196
audio/114.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8837d43c94fae93338ba684eeea8ca34ea2651739175a08540a8e95d7c69f827
+size 1843244
audio/115.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44081118b29aab3a48fc05ba9bfeee35c7bdfbc0bc1febaa265f0f8fe527df38
+size 1025324
audio/116.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:002da85d30a0a1c3b1170f71bc5accda4048cb340f90b9259e69e57b5dac66a5
+size 940076
audio/117.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:163a494deeaf2756086bd0a74d5b4cca53084fc2c49bff7c8fdc71cb9f98190a
+size 2008556
audio/118.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:92d935e6ad7efd69d53eda1811e58f8665003891445d2a2c21655857e65c1f79
+size 1143404
audio/119.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f0a3053de45eb3af3f550b5537f328eb5eaab17631b86d0a2134a3b4c8417fd
+size 5435756
audio/120.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb83a519381068b4f94814af61d5faf9cab6b1d938fc0ccb82569450124fd0a7
+size 3452721
audio/121.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f00ce052c299be61c91fff279e9a5eb53b98d5ae38106e07b5e50c0438b0cb13
+size 1584044
audio/122.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9de9bf2dbe8d202d27535e943fbf363b6dd1384679f447c8b85aadc0befe18ae
+size 3131756
audio/123.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61d1579b37799ad2973d83f25ed219ace0dfd917af59c28cf6b731ba42f4dd02
+size 2294139
audio/124.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f3e8ca6ddc29ffbb4e12e91fec4a62f8d9f10445f07663002827cc3b522dcbd5
+size 2507298
audio/125.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:acbdb672d974c2a97f6bd22ce9d279cbb0993160925b8a265b178e582e1690c9
+size 1057188
audio/126.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d7d6f0fc438386e1899903d615a17029d2f94ae60ff55de5551c4f52c17483b
+size 1047784
audio/127.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0ea8fd0238dc9832f379d59de2d7f3f772c5589dbaa2773d9aefd0941427e29
+size 888236
audio/128.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7373d80239298dbd1734034c6698ded0bb532511b0abb4e8cea33bd3c973f01d
+size 3327596
audio/129.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e419c132f8fc0ec652a1ddf879dae4e245208e602d577d9fab9624ecd76abcc0
+size 780524
audio/130.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a24ef88f2f688b4389215c9a121112c8113f7350983183402f8d6236429b720c
+size 1520684
audio/131.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2872f14a162c2e9a40527401d68f42a41ed24a3854fe7f8188607daa1f07c8b4
+size 1428524
audio/132.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7d4c58c17a01dbd7d79b43c198930e145da384869ededa76fba45fd1a71ccd5
+size 2604473
audio/133.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2271ff918630384ac932f1ba1ca4d35d72cf123ecdcc1cc9179d07965273aecf
+size 1147468
audio/134.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63a76711da5b953ebdd78a57925367a16dcbe29a3cef3882b391026dbdfbb229
+size 2319843
audio/135.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5db2fa9f1422c65201eaf1b0452f4dd266d9627b72dd92c496e296c7a9d2ea1c
+size 2573036
audio/136.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e49c4f843350c56ccb7736eefc09ed71d4e5dfb914845b083df8b3aba7943a4
+size 1386331
audio/137.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27231d2eba0821695e0d991cbabd5579529d6bb87fe31058f6a5d2243ac1ed43
+size 2991295
audio/138.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ca8d96a533a7c93fff1a138553866f54356257d0f279830306577722888ce7c
+size 1706156
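The `audio/*.mp3` entries above are Git LFS pointer files rather than raw audio: each records the SHA-256 digest and byte size of the real content, which LFS fetches separately. A minimal shell sketch of the invariant behind those pointers (the file and paths here are illustrative stand-ins, not taken from this commit):

```shell
# Build a demo file and an LFS-style pointer for it, then verify the
# pointer against the file -- the same check `git lfs fsck` performs at scale.
printf 'hello audio bytes' > /tmp/demo.bin
oid=$(sha256sum /tmp/demo.bin | cut -d' ' -f1)
size=$(stat -c%s /tmp/demo.bin)
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:%s\nsize %s\n' \
  "$oid" "$size" > /tmp/demo.ptr

# Re-read the pointer fields and compare them with the actual file.
p_oid=$(sed -n 's/^oid sha256://p' /tmp/demo.ptr)
p_size=$(sed -n 's/^size //p' /tmp/demo.ptr)
a_oid=$(sha256sum /tmp/demo.bin | cut -d' ' -f1)
a_size=$(stat -c%s /tmp/demo.bin)
[ "$p_oid" = "$a_oid" ] && [ "$p_size" = "$a_size" ] && echo "pointer matches"
```

If either field disagrees, the checkout has a stale or corrupted object and the pointer should not be trusted.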
transcripts/uncorrected/106.txt ADDED
@@ -0,0 +1,15 @@
+Generate one for door opening event detected.
+
+Door closing event detected.
+
+Bathroom usage detected.
+
+All air conditioners are now turning on.
+
+All air conditioners will now turn off.
+
+Current temperature in the nursery is, just a placeholder.
+
+Current humidity in the nursery is.
+
+Air conditioning advised.
transcripts/uncorrected/107.txt ADDED
@@ -0,0 +1,7 @@
+So a very powerful agent to add to the library. For the email drafting one, I will gather these in a tag called agent plans and they can be there together. This would be a very big one if it could be pulled off. And again, I come back to the question of if there's a front load transcription that doesn't need to route everything through voice notes. For example, if the N8N form element had a voice capture node, it would solve all of this very elegantly.
+
+So if I dictate an email and I use a transformation in voice notes, I need to say every time it's for me. It gets the names wrong all the time; it gets the style wrong. So it saves a lot of time but still a lot of nudging. System prompt to an email sending agent is like done very successfully in the past; it works very well. So the note comes in tagged, it goes to this, and the output should be basically ready to send every time. So that's not worth creating just to get to that step.
+
+What it would be worth creating to do is a contact matching and have to be saved into and so on. So, I'm going to go ahead and start the draft probably initially until it's validated or whatever human in the loop thing they have now. In other words, if I say send this to Ronnie or an email to Ronnie or an email to Stephanie or the people that I'm sending to frequently with the Google contacts integration which exists, I imagine they have set up an MCP.
+
+It can retrieve the person's email based on the entity match, put them in the to field, and that way I could just dictate emails that basically would be hopefully a queue of emails ready to go with one button of a push. So that would be a really cool one to try depending on if the contacts MCP is mature enough to support this.
transcripts/uncorrected/108.txt ADDED
@@ -0,0 +1,11 @@
+Okay, so I have the email header images done. I'm going to add in green invoice. So I just cleaned up the workflows I had created previously just to standardize on the structure. And in the JSON payload for the invoice and for the receipt, you get a detailed breakdown of the MAM capture, which can be very helpful because assuming I write this data out to a table, I can at any point in time see exactly what my position and so on.
+
+So, the question is against MAM based on the amount. So each time if, let's say my test, tests invoice for one Shackle and 18 Agurot of MAM were written, so that gets written as the MAM owed and then you can top that up or check the position of it. That's number one for invoices and for receipts, I'm just going to create separate workflows for the sake of it, although they could be branched into one.
+
+I'm going to suppress and Sibghts Billing in green invoice so that they can get a custom email with the pay URL which is also provided. There are four URLs provided in the payload. One is the payment URL for the client. You got the document URL in English, Hebrew, and Source. So you can actually download both and put them into your object storage.
+
+And then for expenses, the question remains if for Hanna at this point is it worth, probably not worth editing. Check the API docs again. It still seems to me as if you can't create, you can create an expense in Green Invoice, and many more. The only problem with this is that you can't have them put a document in your database and then have them do their document parsing.
+
+You can retrieve stuff, but if you're not using it for actual expense logging, there's no value in retrieving the expenses stored there because there won't be any stored. As far as I know, there's no ingress to the expense, although I should probably clarify with Green Invoice. There's no ingress where you can provide, upload an expense that you got and then collect the information.
+
+In any event, you have to go in and edit it by hand so it's probably not worth, it's probably worth just using this other system but the subscription is back in order with you because it's certainly a lot more, just better to use basically than the other one.
transcripts/uncorrected/109.txt ADDED
@@ -0,0 +1,17 @@
+So I have a thought that I look into and which I thought I'd share with a friend working in SEO because it occurred to me that this is I feel what will this is would be a very productive way forward.
+
+So given the huge rise of AI, people are naturally concerned and aggrieved about IP protection and specifically large language models ingesting their blog content and websites into their training data without their consent, which is very reasonable.
+
+On the other hand, there's a huge opportunity for building thought leadership and branding by actually making it easy, as easy as possible for bots to scrape up your content.
+
+On the blocking side, you have companies like Cloudflare which are rolling out very quickly AI blocking features which are basically targeted denialists.
+
+I am curious to know and to see whether on the other side there are actually companies saying AI traffic is massive. It's a very legitimate referral source.
+
+When we're dealing with search engine traffic, we don't try to put up walls to make it hard for Google and others to index their sites. Why is that the approach you want to take with AI?
+
+I mean maybe you want to block your real IP; it could be your image galleries, but there's a big potential advantage in actually making it easy.
+
+So what does AI like? It likes structured data, it likes very clean metadata, and I'm curious to see if any companies and technologies are targeting this.
+
+Consultants explicitly branding themselves as optimize your site for AI readability, and if not, I predict that this will be a very big demand for this as people instinctively rush to block and then realize that, hang on, our competitors are getting user referral traffic from ChatGPT, etc. Let's undo that and make it easy.
transcripts/uncorrected/110.txt ADDED
@@ -0,0 +1,13 @@
+I'd be interested to know, looking into AI agent workflows at the moment and beginning to gradually embrace the concept that multi-agent workflows have an important place.
+
+I say that because I tend to think always that I tried to consolidate, simplify, consolidate, get one agent to do many things.
+
+But I'm seeing examples where I have in a recent workflow an agent just for optimizing the user's voicemail transcription as a prompt.
+
+Then one for cleaning it up for a Gmail complaint HTML.
+
+I'd be curious to know in production use cases, can you give me some credible examples where multi-agent workflows are common and where you might actually have a significant amount of them in a chain?
+
+Give me a couple of examples of, let's say, workflows where you might need three agents like I have and one where you might credibly need eight or even more AI agents all in a sequential chain.
+
+Thank you.
transcripts/uncorrected/111.txt ADDED
@@ -0,0 +1,5 @@
+Okay, so I would say that the URLs being gathered are correct. They are valid. But I'm really more interested in coverage from the last three years. And there's a lot more than this.
+
+So what would you recommend as the most, what's going to give us the most effective results to just get this done? A paid service, a paid API that can get from what we have now, which is a basic selection, to a much more nuanced list of maybe 50 or even 60 stratified really by types, podcast, press coverage, etc.
+
+The structured output is very good, that part is excellent. It's just a discovery that it's falling down on.
transcripts/uncorrected/112.txt ADDED
@@ -0,0 +1,5 @@
+So a note to self, a reminder that I still, so the voice node export was that I did yesterday had about 1700 notes and it, I put it up in Hugging Set privately of course. This is a private data set, I have it locally and I didn't, my plan was to run Olam over it and say this is a really good and representative data set of all the type of notes that I take over about 3 months.
+
+What I wanted to do was entity recognition. This is a to-do list, this is a shopping list and then if we could firstly determine the top 50 recurring entities and then come up with a schema of let's say here are the top 20, here are the top 50 with the purpose of creating the most effective tagging system and on the back of the tagging system then creating the automations.
+
+So I need to identify the most recurring entities. Some exactitude. Problem for this workflow is that it's a substantial body of text. So I'm not sure that the way to go would be asking the LLM to iterate over the entire dataset. Or maybe it would just sample a little bit of text from the notes and then it would kind of work gradually.
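Transcript 112 above sketches an entity-recognition pass over roughly 1700 exported notes, with sampling floated as an alternative to iterating the whole dataset. A hypothetical Python sketch of that sampling-and-counting step (the notes and the capitalized-token heuristic are illustrative stand-ins, not the author's actual pipeline or dataset):

```python
import random
import re
from collections import Counter

# Toy stand-ins for the exported notes; the real dataset is private.
notes = [
    "Remind me to email Ronnie about the Green Invoice workflow",
    "Shopping list: milk, eggs; also check the N8N server",
    "Call Stephanie re the Green Invoice receipt",
]

# Sample a subset instead of feeding all ~1700 notes to an LLM at once.
random.seed(0)
sample = random.sample(notes, k=min(len(notes), 2))

# Crude entity candidates: recurring capitalized tokens. A real pass would
# use an NER model or an LLM; the counting/ranking step stays the same.
counts = Counter(
    tok
    for note in sample
    for tok in re.findall(r"\b[A-Z][A-Za-z0-9]+\b", note)
)
top = counts.most_common(50)  # feed these into the top-20 / top-50 schema
```

The ranked `top` list is what a tagging schema could then be built from; swapping the regex for a proper NER model changes only the token source, not the aggregation.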
transcripts/uncorrected/113.txt ADDED
@@ -0,0 +1,7 @@
+I'm doing a lot of Python development on this computer.
+
+I want to do a bit of Android development too and who knows what else will come in the future.
+
+I'd like you to evaluate the packages I have on this computer, the environments I'm set up for in terms of development packages specifically, and see if you can identify any obvious gaps for what I have on the computer.
+
+And this is Ubuntu of course and install anything that would be primarily better for development including any SDKs that I might be missing, packaging files, that kind of thing.
transcripts/uncorrected/114.txt ADDED
@@ -0,0 +1,9 @@
+So I'm going to try to do a kind of evaluation, not a scientific one, but just a kind of back of the hand one, looking at the respective merits of Sonnet, Nano 4.1 or whatever the latest OpenAI one is.
+
+The contender is the increasingly diverse range of tools that Windsurf is bringing in, and it would be good to pay close attention and see what else is emerging in this landscape.
+
+Sonnet 4 is the most expensive and seems to be kind of the go-to but frequently seems to really struggle on context.
+
+I feel like Google are going to eventually catch up with it in the model.
+
+They don't have the same sort of competitive product as they do, and just seeing really which is probably the least frustrating in turning out stuff actually working.
transcripts/uncorrected/115.txt ADDED
@@ -0,0 +1,13 @@
+One experiment that I think would be very useful to do would be to try with the N8N API.
+
+I don't know if there is yet an MCP for N8N.
+
+That would be incredible for a self-hosted N8N server.
+
+Then I could really connect with Windsurf to either the local or the remote one.
+
+And say, you know, I want to create this workflow, can you start it?
+
+And actually develop it, execute the commands to build the workflow basically.
+
+So that's worth really looking into.
transcripts/uncorrected/116.txt ADDED
@@ -0,0 +1,3 @@
+So a DevOps agent would be really interesting to try. I'm going to see if something's been made for this. The way I'm using Windsurf at the moment, which is, you know, have it on my local, then connect to the 1.2 VM, then connect to this environment, and then have it just run stuff on the command line.
+
+I've seen if anything's actually been made to do this, as in intentionally that's the purpose. And if so, it might be. If not, it might be an idea, but we just have to create a proof of concept firstly and to validate it. But, you know, it could be nice.
transcripts/uncorrected/117.txt ADDED
@@ -0,0 +1,9 @@
+So I'm currently using windsurf IDE in order to work on a large variety of projects, particularly including using it to actually do repairs on the local file system, which is a sort of unofficial off-label application for an agentic code editor, but I find this highly, highly effective.
+
+And the other agent code IDEs which can be used with a cloud LLM might be more cost effective. That's the only problem with Windsurf; especially if you use Sonnet 4, it becomes very expensive very quickly.
+
+I don't think agentic on a local, with a locally run LLM is an option. But it struck me that VS Code can be paired with just about any Cloud LLM, any extension.
+
+Among what's currently out there for agentic assistance, is there any that offers a truly different value proposition to Windsurf in the sense of being really affordable for almost truly unlimited usage?
+
+With Windsurf, it's just the usage caps, as well as the APIs they use frequently seem to run into exhaustion due to the sheer volume of users that they have.
transcripts/uncorrected/118.txt ADDED
@@ -0,0 +1,3 @@
+I want to work today find out some for Contentful if there's Sanity Studio the pros and cons of that exactly but it would be good to see what they because the only thing that troubles me about this one is that the they're pushing the hire account and it's unaffordable for some people.
+
+I could I've asked them if there's for private users if they do a deal or something but that's the only thing is vendor lock if they ever pull this free tier I don't find myself screwed basically.
transcripts/uncorrected/119.txt ADDED
@@ -0,0 +1,7 @@
+So I'm currently using N8N quite extensively for automations. It is an excellent platform, very powerful, learning curve for sure. The issue I find is that, so I'm using Windsurf IDE for a lot of things in general, especially for automation. What I find with N8N is that it can take a very long time to configure a workflow when you're creating each step manually in an automation chain. But if I can, on the other hand, prompt an AI agent in Windsurf to generate a Python script to achieve the same thing, it could take me minutes rather than potentially even hours. So it's a lot more efficient to do it at the code level just in Python.
+
+What I'm thinking is that as basically all I need is a server with Python script running, which is the core of what probably N8N is under the hood, it might be more efficient to begin migrating some of these scripts or creating some of these scripts deployed directly on a server in a code environment. This would give me the ability to have an AI agent connect directly to the server, edit my workflows, edit the, etc.
+
+So my question is as follows: if I wanted to take that approach, one of the useful things in N8N, of course, is the ability to save credentials which can be used across your scripts for the different integrations. Is there any platform? I'm always trying to avoid reinventing the wheel. Is there any platform that is intended for this? I keep thinking to myself that if a business comes along and wants to create different workflows for integrating different services, whether it's, you know, it could be relatively mundane back office operations, I can't imagine that they're going to go to N8N. Maybe they are. But I wonder if there's a platform that is intended to provide the overall framework for holding together a bunch of automation.
+
+There might be a GUI for actually managing the Python scripts, managing the environment variables, and that would provide the code frontend to N8N. It would still, however, be important to be able to edit them locally, in other words, to edit or deploy scripts that then get synced up to the deployment environment. But what would you say is the sort of code-first approach here? Is there a framework that is the equivalent of N8N for this? Or would most people just deploy their own Python script library to a server, have it run, and that's how they manage automation scripts in production?
transcripts/uncorrected/120.txt ADDED
@@ -0,0 +1,7 @@
+So I came across recently there's a I didn't realize that there's already the first class of sensor processing units that are available at a consumer price point and in a consumer form factor. I'm referring to the Google Coral series for about $60 etc. They're most popular for computer vision workflows.
+
+I came across them looking at motion detection for IP cameras, and what I'm unclear about, I'm not sure about, is to what extent, because they're so small and relatively cheap, they actually going to make, let's say you had a computer that wasn't really very well equipped for running stuff with AI. Let's just say a very basic GPU. If running one of these or doing the more conventional upgrade of putting in a better GPU, would you expect something significant from this small addition?
+
+It seems to me that it's doubtful that it would really bring that much more inference versus a more traditional hardware upgrade. But the other question that I have is for speech to text, which is something that I'm looking at for a long time. I know that Google's Pixel phones for on-device STT are considered the best in class.
+
+So from a hardware standpoint, is there anything like, why is it, if you wanted to get something, if the Coral is an example of something that you can achieve big things for relatively cheap by focusing on the right hardware for the task, is there anything comparable for speech to text?
transcripts/uncorrected/121.txt ADDED
@@ -0,0 +1,3 @@
+So if we're struggling with Gemini, it seems to me that this may be a good task for a model like Gemini in conjunction with a search engine API of some kind. Try it one more time with the fix. And if you can think of an external help, maybe Perplexity I was thinking. Really, whatever could have a good pool of historical news mentions or just general scraping, to be honest. Firecrawl, if that's an option.
+
+If the next attempt fails, try to think of more diverse approaches to gather this list. I think it should definitely be feasible. There's also a lot of interviews of him on YouTube and on podcasts, so something that could scrape YouTube and add those to the array would be helpful as well.
|