Commit 1fc1cf3
Parent(s): 9be95f1
commit
This view is limited to 50 files because it contains too many changes.
- audio/15.mp3 +3 -0
- audio/16.mp3 +3 -0
- audio/17.mp3 +3 -0
- audio/18.mp3 +3 -0
- audio/19.mp3 +3 -0
- audio/20.mp3 +3 -0
- audio/21.mp3 +3 -0
- audio/22.mp3 +3 -0
- audio/23.mp3 +3 -0
- audio/24.mp3 +3 -0
- audio/25.mp3 +3 -0
- audio/26.mp3 +3 -0
- audio/27.mp3 +3 -0
- audio/28.mp3 +3 -0
- audio/29.mp3 +3 -0
- audio/30.mp3 +3 -0
- audio/31.mp3 +3 -0
- audio/32.mp3 +3 -0
- audio/33.mp3 +3 -0
- audio/34.mp3 +3 -0
- audio/35.mp3 +3 -0
- audio/36.mp3 +3 -0
- audio/37.mp3 +3 -0
- audio/38.mp3 +3 -0
- audio/39.mp3 +3 -0
- audio/40.mp3 +3 -0
- transcripts/uncorrected/15.txt +3 -0
- transcripts/uncorrected/16.txt +5 -0
- transcripts/uncorrected/17.txt +1 -0
- transcripts/uncorrected/18.txt +7 -0
- transcripts/uncorrected/19.txt +1 -0
- transcripts/uncorrected/20.txt +5 -0
- transcripts/uncorrected/21.txt +5 -0
- transcripts/uncorrected/22.txt +3 -0
- transcripts/uncorrected/23.txt +13 -0
- transcripts/uncorrected/24.txt +17 -0
- transcripts/uncorrected/25.txt +17 -0
- transcripts/uncorrected/26.txt +3 -0
- transcripts/uncorrected/27.txt +1 -0
- transcripts/uncorrected/28.txt +3 -0
- transcripts/uncorrected/29.txt +5 -0
- transcripts/uncorrected/30.txt +7 -0
- transcripts/uncorrected/31.txt +3 -0
- transcripts/uncorrected/32.txt +7 -0
- transcripts/uncorrected/33.txt +15 -0
- transcripts/uncorrected/34.txt +9 -0
- transcripts/uncorrected/35.txt +5 -0
- transcripts/uncorrected/36.txt +15 -0
- transcripts/uncorrected/37.txt +7 -0
- transcripts/uncorrected/38.txt +5 -0
audio/15.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c838b0d42c901039cb8e734a86d011ff9cbbc03c6316e0e77caf2f792ebd99b2
+size 1437164

audio/16.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1687bfa5e6e7b24994c4d303f985f89dc4f22a2740b39ee2cc2729209ec3e981
+size 2255084

audio/17.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bdba01dc5037bd1bb2c0aaf9d33ff4fbfa0fdde0dbf8c8ba56fdd97e68022b60
+size 407276

audio/18.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b31623057936093301e0f0f9222b0c8007c130d8f4eb93e99f6355c511fca9c6
+size 1602476

audio/19.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:daa56a8417c5a9ec3df48225927d677cdd002d53dac0ad665184db68d79c8735
+size 335276

audio/20.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28003cd503a1ba1b714af7e4d4da361b24e32dd642ff441b53458b0b6716763e
+size 1395108

audio/21.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f5fbab7bfddc148f937b34f66e708203b5a7738332a5111226a41f52ed37968
+size 2828204

audio/22.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f0d3b82ba3ced15c8064e4cfea6f076d12d7e2f1f24c0b5df35fff9d625be38
+size 1781036

audio/23.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d71431d79049e74ca5377d2cdc2a35f4daf267bc52f65d67a1d02609b0c724f6
+size 1736684

audio/24.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf14286f6a48f315eb16b404cfd361fecae61d3ba3b58f3c6bc1ae0159dd828d
+size 3625757

audio/25.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ea5c3b11d3daa0c52d3ce67efbf363cb9226cfc7202f454d21ebab18ccb365c
+size 1501061

audio/26.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6bd7ab1b2541cb869c5a4bf9ab85b1801fa16a27818a73e7190712cde47d79c
+size 774764

audio/27.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:681b3d360d4fec740eb570f9f07b48233b5df94a5babe202600df5a551e41df7
+size 173996

audio/28.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:58d24d96e15926bf1481b0735671ce36b42211e3974d8de695a90f38eb0b6642
+size 1205036

audio/29.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b27ddb7c383e018c9e8e2981bbf0ccac751fb4fd9285c3adbd4c466fee736c7
+size 2324859

audio/30.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b01a4dc27ecaa25875e3d4be2bc3839e782e9b5e7e0b6940bee8a0f1988e121
+size 3737979

audio/31.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:adea043ebdd94dfd3a605cf89aa0c37b7ebe01741f803c24a4f4591de2ff8aab
+size 805786

audio/32.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63bd0b45242927d7e5200b5b38a2ac9c3474539545a8265fa460466d6e64c963
+size 1112359

audio/33.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0cc2987d79150b49b4b24d95be82e7ac8f3e6ca3fee664125ed42191700854b
+size 1691756

audio/34.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c34f111f0c78650a5be003ccbc305f9fdcff83311fc03d71db3f73edf0099725
+size 1791404

audio/35.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:864412652eb3e9bc77fec3e18e04c6f0c4ecb7647e1b001ab2f9b0e971b5cb72
+size 1688876

audio/36.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f8ecfa9501726b7e86eeebce966221b73599df7fb678d5975192d1f8227add7
+size 2796524

audio/37.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:962281f20db1dba125461e16f00c4dc9951811e15741fecba6a70c5a5759c021
+size 3919724

audio/38.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a26aafc8fac46a3b033b0a5937d4217dc02317f1176a960bdaad463aadd4723f
+size 1127276

audio/39.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d285ac7e1aa19444c8c8437a18d6f220d012c91c3ab1ecda40dab53bda165390
+size 1647404

audio/40.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a7b943821f4f74e7200782b771effcf73a2b38b290d59355a51a8609ec02e20
+size 2267756
transcripts/uncorrected/15.txt ADDED
@@ -0,0 +1,3 @@
+Alright so just a couple of what's the Android app it's looking overall very good. There was I just moved to the base of the repository there was a multimedia folder that I have with all the different icons that I created for it. For the Android app specifically there is one called icon.png which I'd like to have if possible as the icon for the app.
+
+I had a video as well. It'd be pretty funny if that when you launch the app if that loaded it's the greeting video and then the thumbs up one was for submitting one. It should be called Ask Dr. Herman and maybe Herman's headshot can be in place of that logo like we don't need that icon that one of the headshot as much and refer to him in the body text as a medical horse.
transcripts/uncorrected/16.txt ADDED
@@ -0,0 +1,5 @@
+Could we make a couple of changes to the application here? Firstly, I've added some images I'd like to be used in the app. When it's loading up, the loading circle can display. That's the third one. The headshot should be integrated. Second image. And finally, when the user submits a question and it's successfully captured, the pop-up can appear briefly.
+
+Functionality I'd like to add is the ability for the user to send the data through a form response, i.e., textually. And then the recording could be called voice mode. So there's text mode where you type it in, voice mode where you use your voice. And either way it should send to the same webhook.
+
+For the moment, let's continue using the testing webhook with the but one thing in the payload where it says user methods and one is text and one is one will add voice input to that to the payload to tag it. Other than that, it should be the AI speech to text key should be the hardcoded, the pay code, the webhook should be initially we can either have a toggle, but eventually it will just be the production one. But I'd like to try it with those changes implemented.
transcripts/uncorrected/17.txt ADDED
@@ -0,0 +1 @@
+Are newborns naturally biphasic sleepers? They say humans used to sleep for two periods until relatively recently. Do newborns have that concept?
transcripts/uncorrected/18.txt ADDED
@@ -0,0 +1,7 @@
+okay we still have the black because you can still scroll out to you can start the feed at far below zero so again it should start at 100.
+
+secondly we don't need the two the duplicate PTZ panels the one the overlay one is very good so we don't need on below we can get rid of that.
+
+finally in the behavior that I would like in general is that by default everything is muted and maybe there's a button that says play audio so when you click when you click on click that it goes live.
+
+so in other words the cameras are all muted and if you want to listen to an audio stream you do that.
transcripts/uncorrected/19.txt ADDED
@@ -0,0 +1 @@
+See if an MPD endpoint can be authenticated with a Cloudflare tunnel such that it could be local or remote that they could deliver into it music and it would be authenticated with headers.
transcripts/uncorrected/20.txt ADDED
@@ -0,0 +1,5 @@
+So what I like to do when I get home is I'll do a test. I'll arm an event testing the automated army. I'll set a time period that will cover the test window and then I can just disable it after validation.
+
+So during this time, you know, at a certain hour, automatically arm the system, then two minutes later with the configuration, whatever, it will catch the doors. Then two minutes later I'll open one of the doors and hope that it triggers off the alarm. That'll be validation one.
+
+Validation two will be that the system is armed and it automatically disarms at a certain time and then test after the disarmed time when it says it's been disarmed and validate that the thing doesn't trigger an alarm.
transcripts/uncorrected/21.txt ADDED
@@ -0,0 +1,5 @@
+So I recently migrated two blogs over to Contentful. And I'm using the defaults, the built-in CDN for the media uploads. The reason that I did this was I moved over to static site generators from WordPress a few years ago. I always really disliked WordPress and having to manage servers just to stand up a blog. But I always wanted control over whenever I write something, a post, and let's say I add images. I tend to like to write on the internet, so I like to write in these content editors, and I don't want to have to go to any trouble really to create a backup of my stuff, which I always want to do.
+
+So the headless static site builders work very well. I could create my content and locally push it through GitHub, publish it. But writing in a code editor was, of course, very inconvenient and made it not fun, basically. I think Contentful and headless CMS is the way forward. They're very... I moved over to them in order to just learn how it works and to be with that technology for the long term.
+
+Now I need to just work in the reverse. I need to get my... let's say I create a post in Contentful. I want to firstly automate the deployment, and secondly, I want to automate a daily backup that pulls in the content and pulls in the CDN images. It has to happen automatically. Can you think of the best way to do this? Is it server to server, backing up to Wasabi? Can I do a local pull down? Can I extract the posts and the media all from the API and use that to take my local backup?
transcripts/uncorrected/22.txt ADDED
@@ -0,0 +1,3 @@
+So, I have to run... I thought I have unattended upgrades installed on this computer, but it seems that I have to run... If I do sudo apt-get upgrade I'm able to see packages that have been updated and a lot of them are actually the ones that I'm most interested in, namely the IDE.
+
+Windsurf is really important; I always want to keep on top of those upgrades. So I'm wondering if there's any way to do like an automated whatever would catch the sudo apt upgrade to automatically accept it. That would be useful.
transcripts/uncorrected/23.txt ADDED
@@ -0,0 +1,13 @@
+Could you suggest a way to automate the following? I'll describe the task I'm currently doing.
+
+So I am creating a training data set for an AI project. What I'm doing is downloading voice notes I previously recorded into an app. It has a web UI, which I'm currently on, voicenotes.com.app, and each note is listed from top to bottom.
+
+So I'm working backwards from my newest note all the way back. I have about 1800 notes, so it's really going to be a very tedious process to do manually.
+
+I have the start text, my current position, where I am on the page, which begins migrating websites to Netlify. And then there's a note there, August 3rd.
+
+What I'm doing is there is a button on the far right of each card. I click on it and then I click on the download button in the menu. That downloads the mp3, and then I move on to the next one.
+
+I scroll down just a little bit, do it, continue. That's the whole process.
+
+I think for browserless, it could be doable, but I would need to authenticate the session. How can that be done?
transcripts/uncorrected/24.txt ADDED
@@ -0,0 +1,17 @@
+So a question for you here. So I'm learning automation at the moment. I'm using N8N and it's very interesting. I have a question really regarding what would be the main automation drivers, the automation builders.
+
+So most of the automation platforms I've used so far, Node-RED, N8N, ActivePieces, Zapier, Make, etc. all follow a somewhat similar design in the sense that they're kind of driven around really integrations and pipelines.
+
+With N8N, I think it's almost kind of a hybrid platform in the sense that you can do it fully code or you can just have some code nodes.
+
+So, and then on the other end of the scale, you of course have fully code written automations where there's no UI and everything is just tested and then deployed to a server for production.
+
+My question is business usage. Let consider two levels of scale. First would be let say a medium company that is using automations for something like let just say accounts processing.
+
+Let's say accounts and payments are using it to process, detect entities in invoices with a document detection pipeline.
+
+And then let's take a big company, an enterprise company, let's say Microsoft or a multinational which might have massive needs for automating.
+
+So both those levels of scale, the first smaller level of scale, what do you think people are using to actually build, manage, and version control the automations and who's managing them?
+
+And then at the top at the enterprise level of scale, the Microsofts of the world or banks for example which have huge volumes of concurrent transactions, credit card providers that they're continuously processing, and in the case of banks and heavily regulated industries, what might they be using to deploy automations and control them?
transcripts/uncorrected/25.txt ADDED
@@ -0,0 +1,17 @@
+I'd like to create an automation at home in which when we come into the house and you come home the air conditioning turns on.
+
+So we have a climate device in the living room and the summer we want it on cold so all that part is good.
+
+We have a front door sensor for security.
+
+What I realized is that there's no way from that to distinguish a person leaving from a person coming in.
+
+This little world is gonna look like the same event from the sensor.
+
+So what do you think of this idea?
+
+If we had something like a smart lock it would be pretty easy.
+
+When the door is unlocked, coming in, you turn on the AC.
+
+But pending that, can you think of any sensor that we could have or a way to leverage our existing sensors that would allow this to work?
transcripts/uncorrected/26.txt ADDED
@@ -0,0 +1,3 @@
+I have an idea. A new public repository is created in GitHub that feeds into the indexer, an agent running in the index repository, which autonomously adds that repository to the right indexing page, gives it a description, and then commits and pushes all without any supervision.
+
+Thereby keeping the index continuously updated and the user just has to create repositories.
transcripts/uncorrected/27.txt ADDED
@@ -0,0 +1 @@
+What is the average temperature in Jerusalem during the month of July?
transcripts/uncorrected/28.txt ADDED
@@ -0,0 +1,3 @@
+So this home server, yesterday I was looking into getting a dev container on it. The general workflow that I do is that I'll develop stuff locally and then deploy onto the server. And you can see the structure, there's a folder called repos, and then I'll deploy the folders within the repositories.
+
+My question is for, it seems maybe that if I want to clone a number of repositories, it's better to do this kind of stuff in a batch. So let's say, is there a batch manager that I can say the end of a CICD pipeline that I can set up on the server saying that the repos are here and I can periodically update the list that's pulled in and they'll populate.
transcripts/uncorrected/29.txt ADDED
@@ -0,0 +1,5 @@
+A quick question there regarding stickers. So this is a battery case and it's an example of... I'd like to be able to make little labels. When I got my label maker, the Q700, and I printed the paper labels.
+
+So the paper labels, I guess they really don't look the best on... This is a battery case and it has a, I'm not sure what that material is called, but it's a very common type of material, neoprene.
+
+Neoprene case anyway, this is one where the paper label sometimes have a hard time adhering to, and I'd like to have a little label that says batteries. But it would be, I think, a white label like white text on transparent, which I can get in the! They are all better than many other PC episodes.
transcripts/uncorrected/30.txt ADDED
@@ -0,0 +1,7 @@
+One question, so there's a great app called Alfred that basically lets you use spare phones as IP cameras and it works quite nicely. But the issue that I have with that approach is that it's a little bit awkward; you need to find an old phone to keep it charged up all the time. You know, what? But the approach is very good. I like the idea of saying we need a camera for, let's say, our son. We're on traveling or something and we're going to use a device for that. It's battery powered and you might get a few hours of use out of it and then you can keep it on the charge.
+
+In other words, I think it makes much more sense to have a standalone device that is for this purpose and just has the app on it rather than trying to reuse a phone for this. So I'm wondering if there's anything like my requirements or my preferences for IP monitoring or I don't like using the apps that the cheaper Chinese manufacturers try to push you into. I like doing at home; I have RTSP and even when we out there, very good apps for that. So it makes much more sense to me to just have a camera that can do a local stream.
+
+So the requirements to sum them all up together are a battery powered piece of hardware that can last for a few hours and it can be, I would think, something that has a quarter inch that it can be put up on a little monopod so that it can see into bassinet, etc. That kind of mounting is really important. And does RTSP camera stream that when it's up you can check it? Sound monitoring as well? If you can think of anything that's been actually designed for this purpose.
+
+You certainly have battery powered IP cameras, stuff like doorbell cameras, outdoor cameras, but they're more geared towards periodic alerting, like there's motion detection than they do a stream for a while. I'm talking about something that you might charge it up and expect to get four or five hours of continuous streaming from it. Can you think of anything appropriate for that or which has even been maybe designed for this use in mind with connectivity to the network over Wi-Fi?
transcripts/uncorrected/31.txt ADDED
@@ -0,0 +1,3 @@
+I'm wondering, are you aware of any battery powered rechargeable ZigBee lights which could be turned on for a reasonable amount of time, chargeable via, let's say, USB-C?
+
+Can be activated, deactivated from ZigBee and which are like kind of an LCD display like different colors, let's say?
transcripts/uncorrected/32.txt ADDED
@@ -0,0 +1,7 @@
+I'm looking for a battery powered Zigbee light. I'll be capable of staying on a battery charge for a decent amount of time. Specifically, it would be for notifying of an alarm state.
+
+So ideally it would be people displaying in red and green according to the state. I can do that via the automations of course.
+
+But it shouldn't be a big sort of conspicuous light that attracts attention. I'm thinking about something quite low key.
+
+Anything that matches a fit.
transcripts/uncorrected/33.txt ADDED
@@ -0,0 +1,15 @@
+So let's say we've established that home assistant is not the way to go for this.
+
+So what would be the best way?
+
+Let's say I have the four RTSP streams on my network, and I'd like to create, rebroadcast in whatever format is going to be the most compatible for directly streaming in Home Assistant.
+
+But the actual work here would be done locally on a lightweight server.
+
+So whether that's going from RTSP to whatever is best optimized for streaming in a web browser and that also can be tunneled remotely.
+
+For simplification, it might be nice to be able to actually have one stream address and then just different substreams for each of the four cameras.
+
+So that if we bring more cameras online, I can just integrate them into this and then add downstream.
+
+What would be the best way to do that?
transcripts/uncorrected/34.txt ADDED
@@ -0,0 +1,9 @@
+So regarding the design of this blog, the initial thing I had in mind was that the blog logo, the sloth, would appear on every page, and many more.
+
+I think the menu layout could be a lot better and probably makes more sense of the posts; bars come before the internal pages and for continuity and similarity of branding.
+
+I like the background color in the sloth photo; it's a certain type of navy.
+
+Probably if we had that blue as the kind of blue standardized in the CSS for the top bar and then maybe a little kind of strip of the footer as well.
+
+And the sloth somewhere in the footer component too.
transcripts/uncorrected/35.txt ADDED
@@ -0,0 +1,5 @@
+And then I could do an experimentary configuration. For example, I'm thinking about a blog drafting one, a blog drafting agent. And I'm going to start that off in a repository.
+
+Now what I can do, what I'd like to do as a, just as a thing, is to, I have an API key for N8N in a repository called a model repository and the goal is I talk to the agent in Cascade. It has access to the N8N API and it will then create a workflow according to what I need.
+
+And the eventual objective is if this proof of concept is successful, that we'll be able to iterate upon it and create ones that I actually need. But it's just a proof of concept. Firstly, you see, firstly, cloud authentication, etc. Secondly, can it work?
transcripts/uncorrected/36.txt ADDED
@@ -0,0 +1,15 @@
+I'd like to give your take on what might be a drop-in component for probably the most important part of the workflows for the past while.
+
+What I'm beginning to work on now, which is using this app called Voice Notes, which does transcription through, I think it's Whisper probably.
+
+You record with your microphone through a browser or through an app. It sends it off for speech to text transcription and then you get your transcription.
+
+It does webhook delivery as well. So that's the initial way that I'm getting a lot of information in for workflows ideas to do notes.
+
+And I have a whole plan for what I can do with the notes on the other side of the chain.
+
+Let's say that I wanted to stand up a self-hosted tool, a component, which only had the task of capturing the recording, capturing the stream from the user, sending it off for transcription and saying, and so on.
+
+So if I'm getting transcription sent, that goes off to whatever the transcription tools that we're going to use is, let's say OpenAI, and then it's going to deliver when it gets it back from the speech-to-text model, it's going to send off a webhook and from there I can do what I'm doing currently.
+
+So the question is, does it make sense to build that from scratch, a little lightweight voice capture interface, or does it make sense, or is there a component that would be very much primed and ready to do this?
transcripts/uncorrected/37.txt ADDED
@@ -0,0 +1,7 @@
+What I would like to do in this website is to build a static website. It's going to deploy to a subdomain on my personal website and the objective is to create a single and the local navigation for the different repositories that I've set up. There's two entities that I'd like to gather. The first of these is GitHub repositories. The second of these is Hugging Face datasets.
+
+So what I like to do is have at the top just a little brief message: "This is an index of my repositories." I click up to the index of my repositories and a tab index to do, or maybe a tick you can tick GitHub hugging face or buttons for GitHub repos hugging face datasets because those are the two things that I'm adding or highlighting here.
+
+And the ability to source by recently added or by alphabetical, and each one can just be a short whatever entity can be pulled from the API, like a short description, creation date, and then a nicely displayed link to it. The information for both to populate both of these for GitHub obviously it's only going to be my public repositories, not the private ones, and likewise for Hugging Face, it'll be both.
+
+So whatever can be pulled in from the API, write a script that will be an import script and then a folder for the actual website will deploy through Netlify, and the Netlify MCP server can be used for that. But firstly, let's get the site set up.
transcripts/uncorrected/38.txt ADDED
@@ -0,0 +1,5 @@
+Give me a list of topics, discrete topics that would be very useful in building out multi-agent workflows in N8N, including using sub-agent workflows, using agents for actually managing your inbox, calendar, task lists, processing voice notes, a list of very practical applications for AI agents.
+
+Generate a list of search terms to use, all containing search terms and then N8N, building a playlist on YouTube and then as well for the code nodes where you need to use JavaScript, Python and also HTTP requests, creating them in N8N.
+
+Give me an extensive list of search terms that would cover all those topics too.