Commit 29ad0f5
1 Parent(s): 2385c13
Commit message: commit

Files changed:
- audio/79.mp3 +3 -0
- audio/80.mp3 +3 -0
- audio/81.mp3 +3 -0
- audio/82.mp3 +3 -0
- audio/83.mp3 +3 -0
- audio/84.mp3 +3 -0
- audio/85.mp3 +3 -0
- audio/86.mp3 +3 -0
- audio/87.mp3 +3 -0
- audio/88.mp3 +3 -0
- audio/89.mp3 +3 -0
- audio/90.mp3 +3 -0
- audio/91.mp3 +3 -0
- audio/92.mp3 +3 -0
- audio/93.mp3 +3 -0
- audio/94.mp3 +3 -0
- audio/95.mp3 +3 -0
- audio/96.mp3 +3 -0
- transcripts/uncorrected/79.txt +5 -0
- transcripts/uncorrected/80.txt +9 -0
- transcripts/uncorrected/81.txt +5 -0
- transcripts/uncorrected/82.txt +1 -0
- transcripts/uncorrected/83.txt +5 -0
- transcripts/uncorrected/84.txt +3 -0
- transcripts/uncorrected/85.txt +5 -0
- transcripts/uncorrected/86.txt +5 -0
- transcripts/uncorrected/87.txt +3 -0
- transcripts/uncorrected/88.txt +1 -0
- transcripts/uncorrected/89.txt +13 -0
- transcripts/uncorrected/90.txt +1 -0
- transcripts/uncorrected/91.txt +7 -0
- transcripts/uncorrected/92.txt +9 -0
- transcripts/uncorrected/93.txt +3 -0
- transcripts/uncorrected/94.txt +3 -0
- transcripts/uncorrected/95.txt +9 -0
- transcripts/uncorrected/96.txt +7 -0
audio/79.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d52dbbaffa21b21fa1ee0b94be9191e2cc574bf1ed80604ba16424968cd02254
+size 1337516
audio/80.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a641a243e249bb4940ef00f4d055b3f5d1c7e1271fe1bf0da372e32a925df5d0
+size 656684
audio/81.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee46633a5f3d38886dcce679f585840f8c27162601be85b9a8e6c6ab07cd847e
+size 3729644
audio/82.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:086084c4c4e7561e2381dac072a4a39bf8432a6bee771753784e4ea40372ad1d
+size 940076
audio/83.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29a6869dc6cdbba014b0c4c194eedc983a4bdcb27e993ba26618e06cf8fb6121
+size 2329964
audio/84.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07b0e0b833c476a2c6ef06c35dcf48ba7d8e9f6dea46b9f44b189430e686541e
+size 1484396
audio/85.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7da4adaba16eaf61bd14c4801d96d513aa91d7ac786992c3c6b4a5a0237bc21
+size 1929644
audio/86.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db49a4df25e2eb73431d2fda6b4896dad25067df14e1ba9a2e8a476bd5659649
+size 1589804
audio/87.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30ea8041c4ef626496ce44d9c0c66ff690f02bc42e656e5114b94dcc2b60f4da
+size 1297196
audio/88.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:edcd66796e0d2b72f775b0baad7020e145873a3dd648f5a6d3a24fe6088e5366
+size 490796
audio/89.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:747f72b934aed5ffec3d47f7dc50c5f638ce0b5f100c4f794bc9ac6bbb1b0347
+size 1110478
audio/90.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69afde628026ef7de089049fda122289e9f9e1d0a553edf23ed69a1432d66ae3
+size 187624
audio/91.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aef8882600a9b779d030eb9ee217e419028fd07596e632629bea608f2822d592
+size 683756
audio/92.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa8995265fe1d2ab5faf912864e5822f7c7b9c33d8f6c479daf3e3bd5600b80e
+size 1478636
audio/93.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33552a0207d92f780d6e8981933ba0c79968778c3c17cda1d47eb2951acbfd69
+size 1017836
audio/94.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8698562141112c46d2d126be8e5e9c6085fa960d907da740ca060915c3c268a
+size 563161
audio/95.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:995bad67f8b1b6ccdee535cf7ba191af611f4df1ba9d38dacfb48023e830b573
+size 745964
audio/96.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a6838c60aceea3cfc13fe966abd4b46a41bd1ec3722486fecca216963e53534
+size 1648556
transcripts/uncorrected/79.txt ADDED
@@ -0,0 +1,5 @@
+I'm going to add a new repository today called prompt snippets. Prompt snippets, I think, are the only things that I see as helpful in prompt engineering, quote unquote, which is that if the little things you find effective, an example of a snippet that I've kind of always wanted to have and so on.
+
+So, the first thing I've figured out is getting the model to state its name and having the corresponding JSON element for that. Ones I've kind of figured out are stuff like you want to format the output in CSV, output the user's prompts in the output.
+
+So these are all things that I, elements, blocks that I've built into many system prompts and which would be useful to have as a library of more than maybe the actual prompts themselves.
transcripts/uncorrected/80.txt ADDED
@@ -0,0 +1,9 @@
+Well, I could do with my main website and the agent site and the prompt site to try to make them into a kind of more function as one website.
+
+I would be looking at something like an element that could be interjected above them.
+
+Basically navigation elements and then each one could be just kind of wrapped underneath it like microsites or something.
+
+Mini sites, I don't know.
+
+But that feels like a good way to go.
transcripts/uncorrected/81.txt ADDED
@@ -0,0 +1,5 @@
+Let's say that I wanted to create an Android app for recording a voice note. This would be to work with my AI agent for answering parenting questions, which currently has a... Currently, I'm using an app called Voice Notes, which is really good. It does transcription through probably Whisper, one of the AI speech-to-text algorithms, and many more. It then delivers its, it sends a webhook with the transcription and with a title which just summarizes the note basically. Works very well and then I put that into an N8N workflow which sends that over to an AI agent who answers a question with my system prompt and then delivers that through email and a note which is really important because it allows me to retain the information.
+
+What I'd like to do is I'm thinking if it would be possible to create a private Android app so that I don't need to use the external voice notes app in this flow. I have API keys for all of the speech-to-text providers: Deepgram, OpenAI, etc. So all I would need is some mechanism, an app that would record a note from the user, a voice note as a recording on their phone. Send that voice note up to, let's say, Whisper or Deepgram for transcription. And then I would need to replicate my current workflow which is sending the transcript and the title, if it can, if it does that to the webhook that actually drives the rest of the automation.
+
+Can you think of the best way to achieve this? And many more. So, what is it that I just do the audio sending to speech-to-text or I could do it as like a little stack? In general, my approach to all this technology is to avoid self-hosting what doesn't need to be self-hosted. So although this could be a stack where you deploy your own instance of OpenAI Whisper, I much prefer to use the commercial components and integrate them that way. Can you think of the best way to achieve this?
transcripts/uncorrected/82.txt ADDED
@@ -0,0 +1 @@
+Suno is very good for creating funny audio with soundtracks. I'm looking for creating an AI generated background soundtrack for a... Video, No Lyrics, and more like a score, an AI generated score. Any good tools for that?
transcripts/uncorrected/83.txt ADDED
@@ -0,0 +1,5 @@
+I'd like you to, in this repository, connect to Home Assistant via the API, the local API, the token, and you will hopefully be able to see from the API, identify that the Alarmo integration is installed with various entities and various switches which are controlling the alarm state.
+
+My objective is to create something like an alarm panel app because the one that's built into Home Assistant is really not very good. It should be able to alarm, disarm, and visually display the current state of the alarm with the armed or disarmed status indicated by a green or red tally light. Additionally, the mode that can be set should include night time, away, vacation, and of course disarmed.
+
+If it's not possible to expose these entities to construct such a control dashboard locally, it still needs to be exposed via CloudClear tunnel so that these can be remotely contacted. If that will not work, then we could look into doing it for MQTT events.
transcripts/uncorrected/84.txt ADDED
@@ -0,0 +1,3 @@
+I would like to create a number of scripts for my desktop that I can just click and point to. Am I sure the best way to do that? I'll give you an example of a script that I would like to have. One is a desktop cleanup script, which I've written many times. It basically is just a script that puts different file types in different folders, like it would put PDFs into a PDF folder.
+
+One that I would need right now for temporary use is one that would put all the mp3s on my desktop into a target folder. And I might need that script for the next couple of weeks for populating some training data. So what's the best way that you'd recommend doing this? Is it just to have a repository on my desktop that I control, and a few other people with my scripts, or is there like a little program you can put together, a UI for actually running your bioscripts?
transcripts/uncorrected/85.txt ADDED
@@ -0,0 +1,5 @@
+so in N8N I'm struggling a lot with there's loads and loads of automations that I want to have created. Google Doc from as an example, there's one where I value is a formatter to format meeting minutes that works perfectly. The agent works very nicely and now I want to I get the output in markdown and gmail h Gmail Compliant HTML, two different versions. And I want to send this by email, that'll be easy.
+
+I want to create a Google Doc from it in a specific folder, which is hard. The hard part is not uploading a file per se or creating a Google Doc. It's how to feed a Google and so on.
+
+So I'm going to go ahead and create a Google Doc into text, or how to create a Google Doc in a specific folder and a specific drive from text. I'll provide the output of a note and I'd like to get your take on what the very easiest way would be to go from this to a Google Doc with the formatting and content that is available here and in the requested format and maybe a code note I think might work better than trying to wrestle with N8N too much on this.
transcripts/uncorrected/86.txt ADDED
@@ -0,0 +1,5 @@
+The task in this repository is to create a M2TT listener on this computer which can run as a background service. Its objective is to do the following commands and the others. Initially, turn the computer off and put the computer into suspend mode.
+
+Create the background server, set it up as an automatic boot process, and write documentation in the repository as to how to create a script in Home Assistant which will send these MQTT and others to the computer which will in turn perform the actions.
+
+The MQTT topic should be referred to the computer specifically as the desktop to differentiate it from other computers.
transcripts/uncorrected/87.txt ADDED
@@ -0,0 +1,3 @@
+My task is to connect to this Google Drive which is a shared drive. There should be within the top level folder year folders for 2024, 2025, and 2026. Each folder should have subfolders 01 through to 12, one per calendar month.
+
+What I'd like you to do then is, after these folders have been created, generate three CSVs with the folder IDs and the month. The one, 01, corresponds to January and so on and so forth. And many more.
transcripts/uncorrected/88.txt ADDED
@@ -0,0 +1 @@
+So I'm sitting with the Slack integrations now and what I want to do is then create the corresponding ones for the, let's see, it's going to be the residence one. Yes, they're all done.
transcripts/uncorrected/89.txt ADDED
@@ -0,0 +1,13 @@
+I'm wondering if it's possible in Alarmo to create custom arming modes.
+
+There is two types of arming that I'd like to do in our current apartment.
+
+Number one is arming both the patio doors and the front door.
+
+Number two is just arming the front door and nothing else.
+
+I know they have different profiles.
+
+I'd only want to use the vacation home away profiles for these specific configurations.
+
+How can I configure those?
transcripts/uncorrected/90.txt ADDED
@@ -0,0 +1 @@
+Home Assistant, I want to check out the custom sidebar project today and create one.
transcripts/uncorrected/91.txt ADDED
@@ -0,0 +1,7 @@
+I have an idea for a custom Windsurf project launcher as they don't seem to want you to move out of Cascade projects.
+
+I just realized this could be really possible by just appending some command line values.
+
+You might say this is the describe your project name, public or private, and it will then create that.
+
+Do basically what GitKraken does, initiate it, and then open that up in Windsurf for you ready to go.
transcripts/uncorrected/92.txt ADDED
@@ -0,0 +1,9 @@
+Let's take a look on this server please and this should be my daily automation job running for pushing up Homebox to the cloud, pulling down GitHub repositories, doing Cloud to Cloud to Wasabi and just check the status.
+
+I want to make sure that those are running, the latest runs are okay and everything's working fine.
+
+And if they are in the repository, it's delivering a webhook notification locally.
+
+And we could do it to a remote. If that would make more sense, I can provide a remote webhook URL.
+
+We would need Cloudflare secrets headers to be added. We can test that.
transcripts/uncorrected/93.txt ADDED
@@ -0,0 +1,3 @@
+We're going to have to do the daily backups on the Ubuntu VM. I don't think we're getting a notification still. We should check on their health. Maybe it's just easiest to do that periodically.
+
+And but if I could get the notifications I can do if still new that's on the local local actually but it probably doesn't need to be or shouldn't be anymore just be on the try to keep everything on the remote and keep on top of the payments from Hesner about that.
transcripts/uncorrected/94.txt ADDED
@@ -0,0 +1,3 @@
+DC extenders are fantastic, works really really well out of the box and what I actually want to do now is get a couple more of the 1 meter ones, because the 3 meter ones are very long and I just use it for a for the little TV monitor thingy.
+
+Immensely useful these things I have to say. So that was a good purchase and I just got ones that are a bit shorter now.
transcripts/uncorrected/95.txt ADDED
@@ -0,0 +1,9 @@
+I tried to debug the Android app. I'm pretty sure it was working perfectly.
+
+What's happening now is when I go to it on Android, firstly, I don't get the... It doesn't expose itself as a true PWA.
+
+And secondly, the record button doesn't work.
+
+And the text... In fact, all of the buttons don't work.
+
+So it's like there's something in the way of the styling that's broken it.
transcripts/uncorrected/96.txt ADDED
@@ -0,0 +1,7 @@
+So let's just a couple more things. I think the default view should actually be the list view. And the default load is most recent first.
+
+So the sorting, the type of sorting being used should display beneath the showing prompts thing. So showing from most recent, etc. The date could be like a little badge as well maybe.
+
+And in this list view I think it would look better to have instead of the modals to have it as a drop down element as in when you click on it it just drops down the modal instead of having that being a clickable element.
+
+and ClickUp.