Commit 207d0a6
Parent(s): d448106
commit

Files changed:
- audio/157.mp3 +3 -0
- audio/158.mp3 +3 -0
- audio/159.mp3 +3 -0
- audio/160.mp3 +3 -0
- audio/161.mp3 +3 -0
- audio/162.mp3 +3 -0
- audio/163.mp3 +3 -0
- audio/164.mp3 +3 -0
- audio/165.mp3 +3 -0
- transcripts/uncorrected/139.txt +5 -0
- transcripts/uncorrected/140.txt +7 -0
- transcripts/uncorrected/141.txt +11 -0
- transcripts/uncorrected/142.txt +9 -0
- transcripts/uncorrected/143.txt +5 -0
- transcripts/uncorrected/144.txt +9 -0
- transcripts/uncorrected/145.txt +9 -0
- transcripts/uncorrected/146.txt +5 -0
- transcripts/uncorrected/147.txt +1 -0
- transcripts/uncorrected/148.txt +1 -0
- transcripts/uncorrected/149.txt +5 -0
- transcripts/uncorrected/150.txt +5 -0
- transcripts/uncorrected/151.txt +3 -0
- transcripts/uncorrected/152.txt +7 -0
- transcripts/uncorrected/153.txt +9 -0
- transcripts/uncorrected/154.txt +7 -0
- transcripts/uncorrected/155.txt +7 -0
- transcripts/uncorrected/156.txt +9 -0
- transcripts/uncorrected/157.txt +11 -0
- transcripts/uncorrected/158.txt +7 -0
- transcripts/uncorrected/159.txt +9 -0
- transcripts/uncorrected/160.txt +13 -0
- transcripts/uncorrected/161.txt +5 -0
- transcripts/uncorrected/162.txt +7 -0
- transcripts/uncorrected/163.txt +5 -0
- transcripts/uncorrected/164.txt +3 -0
- transcripts/uncorrected/165.txt +7 -0
audio/157.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e8b5af9cc12f43792780c12fab9a53b90c921dcd5e6c3affb46ae21c5b3a7b9
+size 2759084
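The three-line files above are Git LFS pointer files, not the audio itself: each records the LFS spec version, the SHA-256 of the real blob, and its byte size, while the blob is uploaded to LFS storage. A minimal sketch of how such a pointer is derived from file contents (the in-memory bytes stand in for a real MP3):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS pointer for a blob: spec version, sha256 oid, size."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

# Illustrative blob standing in for e.g. audio/157.mp3
print(lfs_pointer(b"fake mp3 bytes"))
```

This is why every audio file in the diff shows as exactly `+3 -0`: only the pointer is committed.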
audio/158.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:093ad7d1f9baa6dd297fcf2d7a4573b521d85e89873e6038981e20680b922f20
+size 937196
audio/159.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d5c8a8107f9c7b9a39f38105177ac00d4179a38461eb29da89d0b394a87384e
+size 3958316
audio/160.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e93c894c30b1b61e6d894cda30e9d1d93142f6c67571fe14e19a96a25e938243
+size 1636480
audio/161.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a72f4466d73cbff6d1a529b2ca27e9d1051a9ad57395e07f805043fea3c85d82
+size 1209644
audio/162.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0684c74b90e82652ac2fb45c6538060c507068f4b2826711631dbaaa8bfbb61
+size 905516
audio/163.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2219846c832ed6861302b39d34129a9cc23ee32a08bdb9978601712d528ebed6
+size 991916
audio/164.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cda0a81e3c82a3cdbbf21588233c694514b98dbeabd0c23b606b8ec05ae92cd1
+size 1002284
audio/165.mp3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d504fd570de0cbe739fd8754c12e1f578fec8b683dbef96f7d8712724e3b27ae
+size 1252844
transcripts/uncorrected/139.txt ADDED
@@ -0,0 +1,5 @@
+So the other thing that occurred to me is for the local cameras we're looking at. I think it actually makes more sense. I'm trying to figure where the latency is coming from. And I think I figured it out. I couldn't get TP-Link to, or I looked at TP-Link, I looked at the network traffic, and it didn't seem to me as if it was actually calling home at all. Maybe there was metadata, but I didn't see any evidence that the video stream was actually being relayed out of the network, which is a good thing.
+
+So it must be using some kind of a proxy. The issue, if I'm looking at it in Home Assistant to camera streams, is I'm actually connecting via CloudFlare. So it's a hairpin, essentially.
+
+So that's why the local RTSP, if the streams are actually staying within the network as they should, should be much, much more stable and reliable.
transcripts/uncorrected/140.txt ADDED
@@ -0,0 +1,7 @@
+So, I need to add the fourth camera today, fourth and final camera and clean out that section of my office and then update the Ezra Cam appliance to have that working at least.
+
+And then finally for the MQTT controllers, I'm going to add one for my desktop instead as the safety seems to be more reliable, so you have a background job running on the desktop itself and then you can send sleep, wake up, screen on, screen off events just via MQTT.
+
+And then for the MQTT client on Raspberry Pi as the screen on and off events didn't work out, what I'm going to do instead is add a ZigBee switch and for the one on the display screen, the toggle doesn't work, it doesn't work that well.
+
+I think that might actually be easier just to create a program from scratch that can run on maybe different versions of both. It has the camera URLs baked into it, has the MQTT topics and then just does the switching that way.
transcripts/uncorrected/141.txt ADDED
@@ -0,0 +1,11 @@
+Okay, one more question.
+
+So I've been for some time looking for a good organizer or asking for a good organizer in Ubuntu Linux for media organization and browsing, usually in the context of photographs.
+
+So let's say it's a video library, and I have maybe 200 clips or something.
+
+And I want just an easy way to go through that folder, preview the ones, preview them, create some subdirectories and move them in just to organize my sheet folders.
+
+What would you recommend?
+
+Is there a standalone tool for doing this on Ubuntu?
transcripts/uncorrected/142.txt ADDED
@@ -0,0 +1,9 @@
+So I have a question for you. My immediate need is I'm looking for a document processing API for entity extraction from documents. And what I discovered very quickly is that it's very hard to find one that is either not very complicated, i.e. Google Cloud, or which is intended for enterprise and the rest of the world.
+
+There's lots and lots of small developers on the internet and Reddit always trying to find different APIs for different needs and frequently running into this issue. It seems as well to me that when I want to try out something in API and see if it'll work, I might put in $10 of a balance to the account, but it's a very inefficient way to operate.
+
+In general, I think the idea of finding APIs one by one for specific needs is quite inefficient. I've heard and come across Eden AI quite a lot and their model seems to be that they aggregate APIs, especially for AI uses, which seems like a very good idea for me.
+
+It's a bit like I use Open Router for LLM APIs and I like the idea a lot. So what I'm looking for is something that's one, trustworthy, has a good reputation, and B, has a good library.
+
+This one is mostly AI APIs, but I think it actually would be useful to have even a broader library than that, but it's a good start. Does Eden have a good reputation, and is there anyone else you'd recommend that would do what I'm looking for?
transcripts/uncorrected/143.txt ADDED
@@ -0,0 +1,5 @@
+Here's something that would be useful. So, when I generally put my phone on to do not disturb mode overnight, sometimes I want to put it on to DND generally. Like, just around the clock, because I hate getting pinged by unnecessary notifications. And I know that I can add this phone as a DND exemption so any phone call will make it through.
+
+My question is, let's say I did this. Inevitably, especially being somewhere I live, there's a good degree of spam and unwanted phone calls. So what I'd really like would be probably the best way to do this. Like, let's say that I wanted, I think through caller advertise, but I'm not sure if it really has, you know, how robust it really is in terms of accurately identifying spam calls.
+
+N4 09 ! . G AMP lemme check if I can spot something here.
transcripts/uncorrected/144.txt ADDED
@@ -0,0 +1,9 @@
+So I've been looking for quite some time for a blog structure that really made sense to me, that I can post to regularly and keep the data. I did WordPress for a number of years and then I sort of just gave up on that. I got tired of hosting servers and the backup seemed really overkill.
+
+So what I wanted to do was... The static site generators appealed a lot to me; I could create my content, but it was and many others. So it was very hard to, it wasn't intuitive, it wasn't conducive to actually writing, updating any of the blogs because it was just so hard to write code, write in an IDE. It doesn't make sense really for creating content.
+
+So then I was thinking, well, if I could have somewhere on the internet to create the SaaS tool and then just kind of deploy it, capturing the code and the images, that would work. So this got me onto Sanity Studio and then Contentful.
+
+The issue that I'm having at the moment is that there's a lot of work involved in getting Contentful onto your design in terms of rendering the content it provides. I think it's a very clever idea; it's just a little difficult to execute.
+
+The other concern I have is because I've gone in this direction, I've gone from having full control of my content, but it's hard to write, to having it's easier to create and much more fluid in this manner. But now I'm not sure if all the images are going into the CDN of Contentful and the text, and I now have to think about how to back it up. Contentful seems very much pivoted towards enterprise markets. I'm wondering, is there anything that's kind of maybe a bit more suitable for my needs?
transcripts/uncorrected/145.txt ADDED
@@ -0,0 +1,9 @@
+I want to finish off the import of the personal blog images and all the ones in.
+
+Next, maybe the beauty of this model for the content delivery is that I don't need to actually deal with the question of how to do content for right now.
+
+In fact, I could just say I'll start a fresh Astro project and probably might be easier than this painful migration and saying when it's ready.
+
+When the agentico generations are a little bit better, then I can populate.
+
+Right now, I'll just continue to work with this.
transcripts/uncorrected/146.txt ADDED
@@ -0,0 +1,5 @@
+To fix up the AI recognition parameter for detecting prompts, it's going to start firing off far too many into the funnel.
+
+It could be that I have the automation that's driven by a prompt run, so a little bit more specific than just what I have currently.
+
+Just to prevent a huge amount of these from being sent off to both workloads.
transcripts/uncorrected/147.txt ADDED
@@ -0,0 +1 @@
+There's some guy, I think it's a philosopher who has the, he would say something like, flow is the key to happiness. I think it's a book, in fact. What is that name of the guy in the book?
transcripts/uncorrected/148.txt ADDED
@@ -0,0 +1 @@
+Two things to follow up on today. Teodat the Hoot ID card and the ink that never arrived. And I gotta get a refund for my protein bars.
transcripts/uncorrected/149.txt ADDED
@@ -0,0 +1,5 @@
+Okay, for reference, the key to getting the merge entities to work in N8N is if you have multiple points in the automation, each with feeding different data. This is where it got tricky, and this is the resolution.
+
+If you try to do a set field, you can't do a set field that's going to be cutting across different outputs. It doesn't work. So you do need to use a merge. And what you need to do specifically is a firstly a combined merge and you just add them by position. That means that you don't need to match any fields. Combined by position means that the order of the inputs will dictate the order of the JSON that is the merge.
+
+So for example, if I start and many more.
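The merge-by-position behavior this transcript describes can be sketched outside n8n: the two input streams are paired item by item by index, and each output item is the union of the paired JSON objects, with no field matching involved (the branch data here is purely illustrative, not n8n's API):

```python
from itertools import zip_longest

def merge_by_position(input_a, input_b):
    """Combine two item streams the way a 'combine by position' merge does:
    item i of each input is unioned into one output item, no field matching."""
    merged = []
    for a, b in zip_longest(input_a, input_b, fillvalue={}):
        item = dict(a)
        item.update(b)  # on a key collision, the second input wins
        merged.append(item)
    return merged

# Two branches feeding different data into the same merge step
branch_1 = [{"file": "157.mp3"}, {"file": "158.mp3"}]
branch_2 = [{"size": 2759084}, {"size": 937196}]
print(merge_by_position(branch_1, branch_2))
# → [{'file': '157.mp3', 'size': 2759084}, {'file': '158.mp3', 'size': 937196}]
```

This also shows why input order matters: swapping the branches changes which values survive a collision, which is exactly the "order of the inputs will dictate the order of the JSON" point above.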
transcripts/uncorrected/150.txt ADDED
@@ -0,0 +1,5 @@
+Okay, so basically, I have moved the finished clips into a done folder, so we've successfully concatenated and compressed to the first two hours of footage up to 3am, and we still have the rest to do.
+
+Now the issue is that using the CPU processing for concatenation etc is still very slow. I'm not really in a rush to finish this.
+
+What I'd like to do is after I get all the individual clips to put them together, so keep trying to figure out any way to use the GPU to speed up this or to just also the workload so that the rest of the computer doesn't slow down.
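One common way to offload the concatenate-and-compress step this note describes is ffmpeg's concat demuxer combined with a hardware encoder. A sketch that only assembles the command line, assuming an NVIDIA GPU (the `h264_nvenc` encoder) and illustrative file paths; `nice` throttles the job so the rest of the machine stays responsive:

```python
from pathlib import Path

def build_concat_command(clips, output, use_gpu=True):
    """Write an ffmpeg concat list file and return the argv that would
    join the clips and re-encode them in one pass."""
    list_file = Path("concat.txt")
    # concat demuxer input: one "file '<path>'" line per clip, in order
    list_file.write_text("".join(f"file '{c}'\n" for c in clips))
    # hand encoding to NVENC on the GPU, or fall back to libx264 on the CPU
    encoder = ["-c:v", "h264_nvenc"] if use_gpu else ["-c:v", "libx264"]
    return ["nice", "-n", "10", "ffmpeg", "-f", "concat", "-safe", "0",
            "-i", str(list_file)] + encoder + [str(output)]

cmd = build_concat_command(["done/clip1.mp4", "done/clip2.mp4"], "night.mp4")
print(" ".join(cmd))
```

The command itself is not executed here; whether NVENC is available depends on the GPU and the ffmpeg build, so this is a sketch of the approach rather than a drop-in script.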
transcripts/uncorrected/151.txt ADDED
@@ -0,0 +1,3 @@
+Great recommendation from ChatGPT that I will note here. I asked it if there was anything like a sorting file organizer, like there are for photographs, but for video clips because it's horrible to sort to try to go through all these tiny thumbnails. It directed me to this thing called Tag Spaces, which I never heard about.
+
+It only does local, I think, but it does exactly what I was looking for. It has a little video preview tab on the side, and it's really nice to organize in. So this is going to be a great, great find for organizing videos and shoots into subfolders. It's available as an app image, and I will put it on my laptop as well. Very, and very, very handy.
transcripts/uncorrected/152.txt ADDED
@@ -0,0 +1,7 @@
+Now it's green invoice, I'm about to deal with regarding the sandbox, just for expense processing as well. I mentioned before that it's totally feasible to do it.
+
+Green invoice, I suspect it might be now that I'm a lot better with N8N to actually download the expense processing one, but there's definitely a sandbox you can use.
+
+So we don't need to actually have to do the expenses manually. Like you can get an existing expense, there's a get request and you can do even a URL, actually should be there as well.
+
+The download URL is there, so it should be doable.
transcripts/uncorrected/153.txt ADDED
@@ -0,0 +1,9 @@
+Could you give me the intended workflow for a headless blog configuration with Contentful as the CMS and deploying into Netlify with a Next.js frontend?
+
+And so the point I'd like to get at is that the only time we need to push through the repository is for changes to the design or trying out new content types.
+
+But that for content updates, they're just pulled directly from the CMS live.
+
+Or if it's directly as in dynamically from the API or that there's a webhook-based process, like if you update the content, that pushes a webhook and that triggers a rebuild.
+
+And that will be what is the best. Can those be set up?
transcripts/uncorrected/154.txt ADDED
@@ -0,0 +1,7 @@
+I'm looking for home assistant widgets for on Android that will create a, open up a specific Lovelace dashboard that has a camera configured on it.
+
+But I'm struggling to find in the default app the widget options.
+
+I don't see one actually for jump to a dashboard or is it an action one that will jump to a dashboard?
+
+How can I set one up for this?
transcripts/uncorrected/155.txt ADDED
@@ -0,0 +1,7 @@
+Okay, so for Home Assistant, if you wanted to use a device for home control, so I think for Thalassa and Google Nest, it's typically a speaker. And in fact, a lot of it's on Google Home, but I'll only get part of the package if I do it that way.
+
+So let's just say I want to do totally home assistant and I have a circular speaker. It's Bluetooth. So I guess that speaker needs to be connected. We need the wake word to be running on the speaker's microphone. And then we need the assistant to speak back through the Bluetooth microphone.
+
+So what I'm looking for is really an always-on interface that you can use for speaking to your smart home and getting messages back reliably. I think, if I'm not mistaken, OpenAI is reason, like if you use a cloud LLM. I used it before the assist and the performance was really terrible. It couldn't understand me; it was saying the wrong thing.
+
+So I'm wondering if any of these actually are reliable, and if so, what hardware people are typically using. The other option would be to go for a Wi-Fi connected speaker, which would be easier than Bluetooth, I imagine. And if so, what do people go for that's reliably going to connect? Is it the Sonos speakers that they use or something else?
transcripts/uncorrected/156.txt ADDED
@@ -0,0 +1,9 @@
+Home Networking question. So I have some services at home: Home Assistant, Homebox, which is an inventory platform, and they are available through a Cloudflare tunnel so that we can access externally. Using the OPNsense as the firewall and writer on the network.
+
+So what I would like to have configured is that my policy is that I don't want to have to remember local IPs and external IPs because no one has time to do that and you have to create different bookmarks. I think it's a very dumb way to do it.
+
+And then with Tailscale, for example, where you can have smart routing, I don't either want to have that connected all of the time. What I think is the much more clean solution would be to have this at the network level whereby if you, let's say, put in a Homebox URL, it will know that that doesn't need to go out to come back in through the tunnel.
+
+We're just going to send that directly to the local server. So basically a mapping right between external URLs and internal URLs. They just will rewrite the queries internally before they even leave the network.
+
+What is such a thing called and how do I set it up?
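What this transcript is describing is usually called split-horizon DNS (with NAT reflection, or hairpin NAT, as the alternative at the routing layer). On OPNsense, the Unbound resolver can answer the public hostname with the LAN address before the query ever leaves the network, via host overrides. A sketch of the equivalent raw Unbound configuration, with made-up hostnames and addresses:

```
# unbound.conf fragment: split-horizon DNS via local overrides.
# Clients on the LAN asking for the public names get the LAN IPs,
# so traffic never hairpins out through the Cloudflare tunnel.
server:
  local-zone: "example.net." transparent
  local-data: "homebox.example.net. IN A 192.168.1.10"
  local-data: "ha.example.net. IN A 192.168.1.11"
```

The `transparent` zone type keeps every other name under the domain resolving normally upstream; only the overridden records are answered locally.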
transcripts/uncorrected/157.txt ADDED
@@ -0,0 +1,11 @@
+So for the backup jobs on the local server, the script seems to be troublesome, but the individual scripts work. For example, the Homebox, which is the key one that I always want to make sure is running, works just fine. You can see it currently syncing all of the attachments, new attachments, photos mostly, up to the cloud.
+
+And GitHub now is done server to server and the pull down is done through the NAS. So really the ones that are in that list, Hugging Face is cloud to cloud, but Hashnode is no longer relevant.
+
+So really the only one on this at the moment that's backing up a self-hosted service that I say, if this is a backup that absolutely needs to work 100%, is the Homebox backup.
+
+So my question is, I can get the webhook and notification running assuming that I get it running. Then we've decided on a frequency, but it might be for peace of mind easier just to have it run manually. Like, is it once a week or be able to do that manually?
+
+So the question is, this is a, you know, it's a bash script on the server. And what is the actual best way to do this? Let's say it's ideally a button on some website; it could be tunneled or not, and it just says run Homebox backup and shows output.
+
+And that sends, if it's on the different server for our webhook event, which the server picks up and begins running the backup job and feeding the output. So that is one thing that could be achieved.
transcripts/uncorrected/158.txt ADDED
@@ -0,0 +1,7 @@
+I have to look into still a human in the loop for the sender.
+
+There's human in the loop as a general option in available here where you can do show them the draft of a Gmail but it's not quite simple.
+
+You need to create the draft email first, then add the human loop step, then wait for the response saying like, yes, send it, then send.
+
+So it can be done, but it's not quite the simplistic, here's the draft, can I send it? Yes, that has been integrated into, let's say, a relay app under the hood.
transcripts/uncorrected/159.txt ADDED
@@ -0,0 +1,9 @@
+Okay, I would like you to create a separate script. Its purpose should be to do the following. So there is OLAMA as a local LLM running on this computer. And look at the list of models and choose the most suitable for this task. It's probably going to be LAMA 3.2.
+
+The CSV contains 1800 voice notes, voice note transcripts. My objective is to identify the 20 most recurrent note types. An example of a note type is a simple description describing the purpose of the note. An example of a note type is a to-do list, a dictated email, a calendar appointment.
+
+The 20 should be unique and LAMA can either write a list of the detected types or calculate the detected number for each entity, but the challenge is the volume of text that we need to process. So if it would be very slow to calculate the recurrence, that's less important. The most important is the list of the tags.
+
+Once it identifies the list, it should write it out in either a simple text file ordered by number from most to least, and then it should write a short description for what that entity is. For example, a to-do list would be a list of items to be completed. A shopping list would be a list of groceries to be purchased.
+
+And each of these detected entities can be a JSON file with the entity type, entity description, and either the calculated or the approximated recurrence volume.
transcripts/uncorrected/160.txt ADDED
@@ -0,0 +1,13 @@
+I have an idea for an IKEA hack, which is that if you look at the way on my Ivar that the storage buckets are spaced out, there's room for another row of buckets between each one.
+
+And my hack idea is firstly, I think it's, we don't need to be clipping onto the shelves.
+
+The outer thing of the Ivar, I need to figure out what the diameter is, which I can get from the dowels.
+
+And then I need to figure out the length and I think better figure out if someone's already done this hack.
+
+And then it's getting a piece of wood that is the right length or which can extend a little bit so you can fix it in.
+
+Then you put those across and so you use the outer pins of the Ivar in order to affix your own little horizontal strats to get the cups onto.
+
+And then you would just need to make sure that the diameter of the wood is thick enough so that the storage cups can clip onto them.
transcripts/uncorrected/161.txt ADDED
@@ -0,0 +1,5 @@
+Okay, let's keep them to, I think two per row would be a good way to do it. And I think the structured output one is not working. So let's actually, just for the statistics, let's just have a total prompts number.
+
+And what might be more interesting would be number over time. But for the moment, we don't need to add that.
+
+Maybe just a short legend above the search bar where it says image generation search for prompts that are image generation and structured output is prompts that have a defined JSON schema for use in automation and any prompting scenario where you need to control the output precisely.
transcripts/uncorrected/162.txt ADDED
@@ -0,0 +1,7 @@
+Before I forget, GBT said earlier you'll probably want to put dev containers in the environments we're operating in.
+
+I said even if you're just SSH-ing to fix some stuff, I said you want to have these because it makes it, it's just standardized.
+
+And of course it's a best practice too, which we're always eager to follow.
+
+So, let's do it, I think.
transcripts/uncorrected/163.txt ADDED
@@ -0,0 +1,5 @@
+So what I want to start doing is that clearly when you're creating Google Cloud projects each time you're going to create an OAuth credential etc. has been wasting a huge amount of time.
+
+So what I'm going to do is in 1Password you can import the secrets and now that I've connected that to the agent it should be able to do it much more efficiently.
+
+So I think the approach for prototyping, even though it's not the maybe the strictest security out there, is to create one service account that you have and then that's adjacent stored in your password manager and then you can just reuse that whenever you need access to that cloud.
transcripts/uncorrected/164.txt ADDED
@@ -0,0 +1,3 @@
+Have a look at the formatting of this application. It's intended as an easy interface to allow parents to send questions into an AI agent which is configured as an N8N workflow. It's intended primarily to be used on mobile devices, which is why the installed PWA button is there, but feel free to think of any changes that would make it look better and be easier to use on mobile.
+
+Perhaps you might use a settings menu or anything really that would mean that when it's installed as intended as a PWA, that it would be as easy to use and performant as possible and display well on your typical smartphone.
transcripts/uncorrected/165.txt ADDED
@@ -0,0 +1,7 @@
+Hello Dr Herman, a question regarding the screens and newborns and we've probably talked about this before but how careful one has to be regarding what could be called incidental exposure.
+
+So during the day when I'm working from home I have my screens in front of me on my computer and I'm doing all sorts of things. Right now mostly like a code editor so it's just kind of text. Probably most of the day it's like that or documents.
+
+My question is really if I'm holding my son, to what extent do I need to make sure he's not looking at the screens? Do I need to like turn him away every time? Like how careful does one have to be? Or is the concern more with them watching stuff on screens where they can recognize the entities?
+
+And that came out of the news recently that it's potentially very dangerous to have children. Or it's more harmful than we thought having children on smartphones.