danielrosehill committed
Commit f6dee47 · 1 Parent(s): 1fc1cf3
audio/41.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ef66f3eb86ac58048e128a2a8e20b6d62374ee374f118100e44fdd134a8828c
+ size 1371285
audio/42.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f32d5ea2bef37843dfe1ac284585f784269f565964f264ed67efa8eae2ede69f
+ size 4571180
audio/43.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:400875109515ee06b5a962a58efd2762721add9a41129543222f39ba47879e57
+ size 1683116
audio/44.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ae9be4d87082ca30db4a8d844c5b565794ee8729779d172e87f1282cfc22798
+ size 2840829
audio/45.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc2099cc010f32250551240dedcdd0369297abe98834fa7d31aece96be61a199
+ size 136842
audio/46.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:838c67fed83b1a7448aca5cf21e5c5e5b2a44b547031e8fd92c8bfaabd0c00d4
+ size 559916
audio/47.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc9b64cb10cee41d94efc66bd1d09ff0e910d48df4c88951ea0a4193ef62de2e
+ size 499436
audio/48.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cce277f27051ec29efec8539b3041defa9e93f07fac99c36d9416158ce797372
+ size 798956
audio/49.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4d2c330b9f4d9920c6b8f4ad2b4bbd324e76f86c96fd965e830ee3e7b36907b
+ size 1941164
audio/50.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c4117fec7378f5cdd0bee3b0bc2cba57bb422a2f1a7aeeac92885efe5b12ef6
+ size 572565
audio/51.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a6e0fdddd0e8a31903440ed3b9c5ebd7769eceec3e72585cabfe2c0385ea99e
+ size 441535
transcripts/uncorrected/41.txt ADDED
@@ -0,0 +1,15 @@
+ I have a question for you, AI. So, okay, let's say I'm taking photographs on my phone, and I want my Android, and I want to rotate, like I want to take a photo in landscape.
+
+ So I really dislike the auto-rotate feature on the phone in general, because I find it's always too sensitive, and it ends up turning my phone.
+
+ So in my many years using Androids, I always have it off, so my phone's always in portrait.
+
+ What I would like is to have a camera app that doesn't require me to leave the portrait mode on the whole phone in order to take photos.
+
+ Because what happens to me all the time is I take a landscape and I think the camera app has rotated, but it hasn't.
+
+ And then I end up having to rotate them all afterwards.
+
+ And it's very, as you can imagine, a very frustrating endeavor.
+
+ Do any of them have reliable in-app rotation so one doesn't need to make this compromise?
transcripts/uncorrected/42.txt ADDED
@@ -0,0 +1,7 @@
+ For my automation, that's putting notes in N8N. I want to make sure that I have the prompts captured. Also for that workflow, I just realized that the prompts, I'm not sure it really matters to have them linked. I'll probably have them right now. Like it was just a prompts table, because the prompts are scratched anyway in the combined note. So, just having the prompts by themselves isn't actually a bad idea at all, because without the linkage to the outputs that were generated from them, because if I really wanted to know what output did that generate, it's kind of irrelevant actually that question.
+
+ I think that storing the prompts, especially when they contain contextual data, may actually be, they're building up a prompt library. The prompts could be cleaned afterwards once they captured in the first place. But once you build up that prompt library, in other words, the prompts could be mined for context, then those can be stored and that can actually save repetitive prompting down the line. Whereas the outputs is a separate data store. Having the prompts that came from it is important in the sense of if you want to read something and say, okay, well, what did I ask that generated this question? But that data is kind of first entry a lot of the times. It's a raw AI output.
+
+ So I think that capturing the prompts may actually even be better. I would need to just do it completely separately. And then, so that's just the prompts database basically. It was used, user executed prompts, and such and such a data around this. I could search my previous prompts and run them again.
+
+ So really the whole AI database I was working on for such a long time, storing prompts, outputs, system prompts, and contextual data, kind of remains actually an important idea. Those kind of remain the constituent elements, notwithstanding the rise of agents because that's really just a list of credentials and integrations. I think that basic data structure I was working on last year, or began last year, should be, make sure that that's still in N8N. I don't think I've contexted it there, because that's still pretty good.
transcripts/uncorrected/43.txt ADDED
@@ -0,0 +1,7 @@
+ Could you check the logic here for capturing voice recordings in this app? I got an error saying it couldn't fetch the recording, and I'm not sure if that was an API error trying to pull the transcript in from OpenAI or that was an error in the recording.
+
+ In any event, it's important that the app should have good handling for longer recordings, let's say potentially up to five minutes or even longer than five minutes, but at least three to five minutes duration.
+
+ To be able to capture, record that efficiently within the browser, upload that to OpenAI, and then pass the data on and the rest of the workflow.
+
+ Check the robustness of that and make any improvements needed to get it working properly.
transcripts/uncorrected/44.txt ADDED
@@ -0,0 +1,7 @@
+ I have a question regarding speech-to-text which I'm using a lot at the moment. One of my goals is to, over time, create a personal fine-tuned SDT model. But at the moment, Whisper is pretty good. I don't have any immediate motivation or need to do this.
+
+ What is a bigger challenge is that currently STT APIs or models rely upon a language. The user specifies the language and it uses the appropriate model for the language. The issue I find is that I live in Israel and frequently I'll transcribe notes in which I use Hebrew words. And the English model obviously will not work. It doesn't work, and many more. Because it's expecting English and there are some Hebrew intermixed, and it's giving botched transliterations of them.
+
+ My question is, has anybody developed a speech to text model for populations that might mix languages? Let's take the case of maybe a Hispanic community where people use English and Spanish. A lot of the times in these worlds, a sort of creole develops in which people mix different languages, their mother tongue and their adopted language kind of fluidly.
+
+ So we're very interested to know if anyone has thought about this use case. Probably you'd need a speech-to-text model that knows the speaker will be speaking in predominantly one language, but maybe some words in a second. And then some ability to identify which parts of the speech were in that other language. Anyone has looked into this challenge?
transcripts/uncorrected/45.txt ADDED
@@ -0,0 +1 @@
+ I need to charge up the Nothing Plus earplugs, they're in my satchel, to make sure they have some battery life.
transcripts/uncorrected/46.txt ADDED
@@ -0,0 +1,7 @@
+ I want to check on this computer if possible to next clean up tasks to identify any large files that might be there accidentally.
+
+ Directly, particularly large files that may be duplicates or I don't need anymore.
+
+ I can delete them.
+
+ Redundant packages, overlapping packages, that kind of thing.
transcripts/uncorrected/47.txt ADDED
@@ -0,0 +1 @@
+ See if there is an MCP for Olam itself, which would be interesting to know. And I'll see in VS Code where the MCPs are added, and I'll use that, I think, for the updated ones.
transcripts/uncorrected/48.txt ADDED
@@ -0,0 +1,5 @@
+ Can you check to see for social previewing if all the requisite elements are in place?
+
+ I should have an OG image and an OG title description and as part of the content model I should probably consider having a Twitter image as well though I'm never too sure what the dimensions for that should be.
+
+ I want to make sure firstly that if those elements are there, even if they're currently empty, and many others.
transcripts/uncorrected/49.txt ADDED
@@ -0,0 +1,3 @@
+ Yeah, I'd like you to see if any two models that might be. Firstly, check what I have in Olama. I moved back to it recently, and I wanted to make sure that I have a couple of models in my inventory. The first is a good embedding model. I like to use Nomic, but there's light, so a couple more might be useful. From the Gemma models that Google released recently, there's Gemma3 in different quantized versions. See if any of those are viable for DeepSeek R1, if any of them are viable and would add value over what I have for agentic cogeneration.
+
+ And finally, Mistral, I'm almost certain that I already have it, and so on. So, that's the first thing that I wanted to talk about. And then, I wanted to talk about the other things that I think are missing, and which could be useful for the agentic use, and which could provide a reasonable performance on this underlying hardware and for code generation applications usually.
transcripts/uncorrected/50.txt ADDED
@@ -0,0 +1,3 @@
+ I want to see where my Nimbot printers are, where the cartridges are. Actually proved quite a successful purchase, but I have one more round of tape.
+
+ And I want to as well see what they do with Thermal Printer. Thermal Label Printer, see what they can offer for that.
transcripts/uncorrected/51.txt ADDED
@@ -0,0 +1,3 @@
+ I should call my startup name today to see if they have any estimate for when the Shaleach will be delivering the Teodat Z'Hud so that I can make sure that I'm home.
+
+ I didn't get any notification. I got one missed call yesterday. I don't think it was connected but I just want to make sure that I haven't missed it and need to collect it.