Commit 32f8bf6
Parent(s): 29ad0f5
Commit message: commit

Files changed:
- audio/100.mp3 +3 -0
- audio/101.mp3 +3 -0
- audio/102.mp3 +3 -0
- audio/103.mp3 +3 -0
- audio/104.mp3 +3 -0
- audio/97.mp3 +3 -0
- audio/98.mp3 +3 -0
- audio/99.mp3 +3 -0
- transcripts/uncorrected/100.txt +3 -0
- transcripts/uncorrected/101.txt +7 -0
- transcripts/uncorrected/102.txt +5 -0
- transcripts/uncorrected/103.txt +11 -0
- transcripts/uncorrected/104.txt +17 -0
- transcripts/uncorrected/97.txt +5 -0
- transcripts/uncorrected/98.txt +7 -0
- transcripts/uncorrected/99.txt +7 -0
audio/100.mp3
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:088aa146dd2897548affba3f363a38800587f27be10a6e9dc8be78b1ed1d98ca
+size 2587436
audio/101.mp3
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f87f3d4b5cb317735310cd461f98781c9ad4b32a8186563ed46d82ab726ef13a
+size 692937
audio/102.mp3
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:116190283d216c3e457183e2bf915c61acc5457fbd56e4fc749dcff70c78b5b4
+size 2198217
audio/103.mp3
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44d060a6ad8f3df93d03fdeb3f927407c3495f6eb03627119b14597be04d13bc
+size 6726596
audio/104.mp3
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba0c577b485c5a98e32dc77aa4ed7523a21787accaa10f305f54c32974689910
+size 2342636
audio/97.mp3
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b11c4fefa623fa810f4e4dba5341caa53bc3bfa2c136105fa629d3f9c3f55a5
+size 1137644
audio/98.mp3
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6620fddb93d84abdf816243eb4f71f4cbd661a4f4e5eccd305301fb1c60660b
+size 1619756
audio/99.mp3
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3dcfbbe66126a02579fd51663cbf86397378950569c163743a1429a3b38f928d
+size 907244
transcripts/uncorrected/100.txt
ADDED
@@ -0,0 +1,3 @@
+Create a desktop IP camera viewer that will run on Linux. The four IP camera RTSP streams are in their repository. The first page should be a quadrant display maintaining a fixed aspect ratio, so that when opened in full screen or smaller, it should maintain the correct aspect ratio so that the individual feeds are viewable. Then, you should be able to toggle between the four. If you click on a camera from the grid view, it should take you to that specific feed.
+
+By default, all the feeds should be muted; that is to say, the audio feed should not be streaming. You can unmute, which will stream the audio feed. The other thing should be digital zoom, so you should be able to zoom in and out within each feed viewer using plus and minus on the keyboard. I'll use the arrow keys to navigate between, to pan around the cropped digital view. So, for example, if I start by zooming in a bit, I can then use the arrow keys to move within the parameters of the zoomed-in frame. Zoom should only be increaseable from 100%.
transcripts/uncorrected/101.txt
ADDED
@@ -0,0 +1,7 @@
+Probably the most useful prompts to keep, many of which are still waiting to run, would be the detailed spec definitions.
+
+For example, the smartwatch spec, the lav mic spec, that were quite detailed and most of those are still probably relevant.
+
+And they're just in the slush pile somewhere.
+
+I gathered those up, I would say.
transcripts/uncorrected/102.txt
ADDED
@@ -0,0 +1,5 @@
+We have to diagnose the cause of why the Zigbee bulbs are a little bit sluggish to react. They take about 10 or 15 seconds. And check with moving back to Mosquito if there's any reason why there might be that latency in them getting the messages from the coordinator.
+
+And for the red alert automation, for the green, orange, and red, I need to make sure that the, I guess for green only, no, for orange and for green, it needs to go as the cycling count is unreliable. Just at the end of those scripts, it should turn the RGB lights off. So it'll go orange for two minutes. Oh, it smells so smelly too. Then turn off.
+
+And likewise for the green one, it'll go green for two minutes. And then there'll be just a single command to turn those, turn the RGB lights off.
transcripts/uncorrected/103.txt
ADDED
@@ -0,0 +1,11 @@
+So the notes for a blog post to share targets audience. In fact, it'd be actually quite good and give out the briefs I used to get from web pals, and those guys that would actually be a great format for an AI agent. The idea I was planning yesterday of like this, where I capture a thought or a briefer notes, and I might add to that these target market SEO keywords, title, target length, and the services I would like to offer. They better off as two different topics.
+
+The first is agents and assistants. I feel like there is traffic for that search, then it would be a good one to have a piece for because people are wondering, well, what the difference? And that could be quite fairly clean to write, but also making the point that there's actually a lot of overlap. A point that doesn't get made enough is that a lot of things are actually better off as assistants and not as agents. That as MCP comes online and matures, it doesn't mean that everything needs to use it.
+
+The next one is within the world of agents. I don't know if I want to make a controversial take. Maybe this might ignite some thought among people, which is that we're calling two completely different things agents at the moment: conversational and instructional. The case in point for builders would be Mind Studio for workflow-based agents, and ChatGPT has recently launched Agent Mode for chatbots that have MCP and AI.
+
+And right now, my personal opinion is that we need one... Users want and need one platform for both of them. But also that it's when people are trying to... We need actually a name. We need a name for... We need vocabulary at the moment actually because right now they're both calling... There isn't really a good word for a single term workflow of an agent that's created by a form and it puts it through a pipeline. People would just call that an agent, and they'd call a chatbot that can send your emails or return your inbox. They'd call that an agent too, and they're completely different user experiences.
+
+So I think the point would be to delineate between the two, firstly, and secondly to... I think the bottom line that people might want is if I'm writing this for as a prospective sales hook, people will say, well, we don't want to just hear this. We don't want to hear pontification or musing. We want to hear what are the takeaways.
+
+And I think the takeaways from this might be from you have to ask what does a business, why does a business need to care about this? And the point really is that different, you might need different platforms. Here's what's out there in the market; it could be a prediction as well that I predict that these will fuse in 2025. But for now, that's why there's kind of a couple of different tools. I think a lot of people are wondering if they have a name because there are just such incredible... We could use Crew AI as an example of a very established code-first agentic framework, and then we could use Claude code or ChatGPT agent mode. So we now have a few good examples to draw upon in both categories of what they are.
transcripts/uncorrected/104.txt
ADDED
@@ -0,0 +1,17 @@
+Could you give me a guide on how a typical Docker development and deployment process works exactly?
+
+So let's say I'm working on a project locally and I'm using Docker.
+
+So I presume when I work on my codebase, I'll then build a Docker image for testing.
+
+All of these are the ones that we like to use for testing.
+
+But you generally want something, if I'm not mistaken, that is hot loading.
+
+Or does that happen before you dockerize something?
+
+And let's say that I have a database as well in the project and I want to have a volume.
+
+Let's say that I want to do Docker deployment whereby I actually start off the data in the app as well as the app.
+
+To give an example, it might be an inventory manager and some other developers, and many more.
transcripts/uncorrected/97.txt
ADDED
@@ -0,0 +1,5 @@
+I'd like to deploy this as a local website, a mini site. It's just going to consist of this bot. I can install it through an Android SDK, or I can host it locally on my server, which is 10.0.0.4 as a Docker environment.
+
+I think it actually makes the most sense to firstly start with a simple one-page website. Ideally, it'll look perfectly good on mobile that can be installed as a responsive app. If we need to add later a dedicated Android app, we can do so.
+
+It's made with type, form, bot, so choose the best embed architecture and we'll deploy it then.
transcripts/uncorrected/98.txt
ADDED
@@ -0,0 +1,7 @@
+To consider the case of an AI agent that is deployed as an N8N workflow, by which there is a frontend sending to a webhook and then the backend automation is configured in N8N.
+
+Let's say instead the backend automation consists of sending the prompt to an agent and then sending it to a couple of integrated services via API calls.
+
+Let's say one wanted to move this to a single deployment in which the whole logic, including the AI component, were deployed on a single server.
+
+Could this be deployed on Netlify, Rassel, etc? And if so, what would my grading be? Agent Intel.
transcripts/uncorrected/99.txt
ADDED
@@ -0,0 +1,7 @@
+Okay, I think the app is good enough for deployment. Could we deploy it now to the Ubuntu VM at 10.0.0.4?
+
+There is a Cloudflare tunnel in the Docker network that we would need to be attached to it so we can reach it from the external network.
+
+So it could be Dockerized as part of the deployment, but that's on the local network.
+
+So let's try to get it deployed onto the server, ideally in some kind of pipeline that could be replicated through this GitHub repository.