danielrosehill committed on
Commit eefc50f · 1 Parent(s): 616e22b

New transcripts
annotations/167.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6bf0802a11fea8d48fcb20cef2e757b3de6ea46d6821cedfea2a7b3b2dece4cc
+ size 932
annotations/168.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:11977717133a41a5807880402bd2267a729dd4d32c39be52b0b9dc67259f632f
+ size 812
annotations/169.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4874781cbb02c348ae94ed13898e3666031f1ab74e09805adeefd215fc8c540
+ size 799
annotations/170.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:369d19b68ed80c164a11333552dd9948b1d08ea93b6060d3a3fc6f568c3463a4
+ size 792
annotations/171.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:62479fccec01a482e5325d4c0935c5e82aac07c0c0741a1fd0d62099a5c724b5
+ size 801
annotations/172.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5bfa4699bc624522e74ff292c56f2106dff9b2dada2f9a72387c4a142ff1318
+ size 822
annotations/173.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2834671168f89f741dec155b43cd013803b766a318cb6ff356d47f0d9389bf6d
+ size 801
annotations/174.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc8a92f5c93c0dccc77d83e5667d08dc179838171182db0536d4a03763948d6a
+ size 804
annotations/175.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec6c8a0f1e3cef6d1b84415f9192a6ae24e83b1d7059bcab639aeca149b93c5d
+ size 789
annotations/176.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25817316075d9c51a129bafc23e9236e921402bae3fff964364912bf4b4f447a
+ size 810
annotations/177.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24703e3c7f25b30ca6e7a43f56c6d7bf28b1473892c7ad55cd4a512861a2d47b
+ size 804
annotations/178.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bda811f72ad2836f7c752270f2ebc3903e2f879e8a759cccc5a64a31ee53069
+ size 810
annotations/179.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f680c18a79f7809c25416554ca7acbc9d724ef7d9458e264e59fd08ce1cd484
+ size 801
annotations/180.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c746536c56fbcfa188b3cf31fdfdeb2dfe3deb5f9d6e6f4bf0faabe7ac53c60
+ size 802
annotations/181.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d61751614e0fd949a36ec3d4155f380a1bc7a0ae3fa36d3c982fa25da50c0343
+ size 806
annotations/182.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:750f76dadfb1879500ec6425eca7a2d8f05e70e399365662402e23cd60a9981f
+ size 809
annotations/183.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd0595b3c42a33923bb3ddf105cc16dd2161f6c4e42825de91055ecd7c45497f
+ size 802
annotations/184.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75215c15ccd0003c21f0896a2b5cd4314058affce222f589768339a2fc3f06c2
+ size 795
audio/167.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c8e8df866e77ea3f601a14a164d7308c7b7b817402993634f56f6aad3b54a11
+ size 927404
audio/168.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7ed9e2ba97d3231457b3e699f67130488af59df2827599cecbaa4f054e1ccf1
+ size 1524716
audio/169.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf26ac73ec8ab43d38c87570b6b4b31dd11f583fba15221f524a6e7917ccaace
+ size 887084
audio/170.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e47a89f43f0de0b54b995ac790cf0041677f8944e208b009dc1caa6029fcf414
+ size 888236
audio/171.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9142439d4fafb1423c518b6aaca61ae77e40a257f7f61be2c4167585a5f02f72
+ size 3235436
audio/172.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e831b32171884e6ac24dd23e3e54973da861f48f44a47a5e8fbecf2bc6720438
+ size 4359902
audio/173.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4a345b9c3eca6edbb9d3c84f69910b206b6a93f453daf1309b41c7ec56fb2f7
+ size 1121763
audio/174.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de35c8502abe369ee5876eea1d354a292d135675764ea7606e835a146c7b191c
+ size 8816183
audio/175.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:def7f2c6b46dcd8c9283c0b0bf7a9d992780a2dca4a1e67177d9f153c8fe3599
+ size 845283
audio/176.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:11f1df546ed67b3505a8c3b2df20009b897062b7aea869bb01ccc8b873583bdf
+ size 1950956
audio/177.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:706082dc880f42aa397f9aee429f2f8a4d62fa19417e106e646d6031f91e4f11
+ size 7845164
audio/178.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4abd166ccade3f644d84f0b394d7c64f90c5b8adee074b6b0bcfb53b95b8e07d
+ size 2814956
audio/179.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55a4246d9bf9bfdfc1b28b4add8b4a8c746b7473f10e28ff3f909709017b04eb
+ size 2070764
audio/180.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a692a5efca0e78afd528ef4edcff7d69e64cbb5989ae20f31ad01cf2faeb271
+ size 2577644
audio/181.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc2e20543f4f7eada7275e0c7b9fc256023320eb7f067edc41e84ff81c5f633c
+ size 3235436
audio/182.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4249ca4f032cf3a438c3f004a48ed2da00c563e603ef284c892302689999bb96
+ size 2980844
audio/183.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5df94fb343c121e973b335952996dfee728aa804f5adfae688651654e30a1c1b
+ size 2566124
audio/184.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:736e83bcc12261e6d83c46b915a26d9c4fc4fbffdf441a4b6a1bc896300acf83
+ size 649051
transcripts/uncorrected/167.txt ADDED
@@ -0,0 +1 @@
+ I'm on to the docs index page, I've just updated it now. And our last night I added it, and I think the microsite, or the website, the docs repository site, actually still works. I'm not sure if it was migrated to Vercel yet.
+ In any case, I can do so if it wasn't. And then I can add basically update it with all the sub modules updated sub modules based on the ones that I have and then link to it from my main website.
transcripts/uncorrected/168.txt ADDED
@@ -0,0 +1 @@
+ I'd like to consider a wee factor and then just give me your thoughts about this so currently it's a file based backend what I was wondering is would it make more sense to have a lightweight database backend SQLite let's say and and the important part of the utility which is the Hugging Face dataset push is what I'm using for the classification model would actually be a job whereby locally it will create the dataset from the local backend.
+ In other words, rather than having this sit in place as files, it's going to be constructed periodically. Basically when I say okay I've uploaded another batch, let's push, would that be easier and more logical to integrate with the front end?
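The batch job described in this transcript (read the local SQLite backend, construct the dataset, push on demand) can be sketched roughly as below. The table name, column names, and repository id are all hypothetical; the `push_to_hub` step from the Hugging Face `datasets` library is shown only as a comment, since it needs credentials and a network connection.

```python
import sqlite3

def build_dataset_records(conn):
    """Collect annotation rows from the local SQLite backend into plain
    dicts, the shape a Hugging Face dataset push job could consume."""
    rows = conn.execute(
        "SELECT id, audio_path, transcript, annotation FROM annotations ORDER BY id"
    ).fetchall()
    return [
        {"id": r[0], "audio": r[1], "transcript": r[2], "annotation": r[3]}
        for r in rows
    ]

# Demo with an in-memory database standing in for the local backend.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE annotations "
    "(id INTEGER PRIMARY KEY, audio_path TEXT, transcript TEXT, annotation TEXT)"
)
conn.execute("INSERT INTO annotations VALUES (167, 'audio/167.mp3', '...', '{}')")
records = build_dataset_records(conn)

# The periodic "let's push" step would then be something like:
#   Dataset.from_list(records).push_to_hub("user/voice-notes")  # datasets library
```

Because the dataset is reconstructed from the database each time, the front end only ever talks to SQLite, which is the integration simplification the transcript is asking about.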
transcripts/uncorrected/169.txt ADDED
@@ -0,0 +1 @@
+ Okay, so just some changes that I'd like to make to the UI. When I upload a voice note, I'd like to capture as metadata the upload time and date, the original file name, the original file format, which in most cases will be MP3.
+ In addition to transcript, I'd like to have, so that should actually be called uncorrected transcript. I'd like to also have a text field for corrected transcript. And again this is captured as metadata.
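The metadata fields requested here (upload timestamp, original file name and format, uncorrected and corrected transcript) could be captured in a single record type; this is only a sketch, and the class and field names are my own.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VoiceNoteRecord:
    """Metadata captured when a voice note is uploaded through the UI."""
    original_filename: str
    original_format: str                 # "mp3" in most cases
    uncorrected_transcript: str
    corrected_transcript: str = ""       # separate text field, filled in later
    uploaded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

rec = VoiceNoteRecord("note.mp3", "mp3", "raw transcript text")
```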
transcripts/uncorrected/170.txt ADDED
@@ -0,0 +1 @@
+ The purpose of the repository basically is to model or suggest the idea of using AI agents to scope out gap filling and extending multi-agent networks based on their inferred understanding of the purpose of a multi-agent network.
+ I think iterative workflow is the best. It suggests to the user what about this agent the user says yes or no, rather than the batch system. Although it could do both, but let's make the defaults the kind of individual review system.
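The default "individual review" loop described above (suggest one agent at a time, user answers yes or no) might look like this sketch; the agent names and the `approve` callback are stand-ins for whatever the real UI provides.

```python
def review_suggestions(suggestions, approve):
    """Iterative review: each suggested agent is presented one at a time,
    and the `approve` callback returns the user's yes/no answer."""
    accepted = []
    for agent in suggestions:
        if approve(agent):
            accepted.append(agent)
    return accepted

# Stand-in for a user who rejects one of three suggested agents.
picked = review_suggestions(
    ["summarizer-agent", "router-agent", "qa-agent"],
    approve=lambda name: name != "router-agent",
)
```

A batch mode would simply be the same function with an `approve` that always returns True, which is why the iterative version makes a reasonable default.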
transcripts/uncorrected/171.txt ADDED
@@ -0,0 +1 @@
+ The intended functionality of the user interface is that I'll upload the voice note and the automatically generated transcript, which came from the voice note transcription. I'm not going to manually correct the transcript but I would like to record some annotations and the UI would save these to a folder which will actually serve as the dataset itself and what I do want is that the link is preserved so that each either there a TXT file for the raw transcript and the audio or it's recorded at the metadata level.
+ It should be sequential. So starting with one. and what would be the most useful way for the UI would be that there's a drag and drop interface for uploading the audio because I'll be populating these from voice notes. So I'll be downloading an mp3 from their website and then the two things I do are upload to my UI and copy and paste in the text.
+ So what I'd like to happen on the backend is that the first file is renamed 1, saved as 1.mp3 let's say, and the text corresponding to it is 1.txt, or they're just linked at the metadata level.
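The sequential backend naming described here (first upload becomes `1.mp3` with a paired `1.txt`, starting from one) can be sketched as follows; function names and the directory layout are assumptions, not the project's actual code.

```python
import tempfile
from pathlib import Path

def next_sequence_number(dataset_dir):
    """Next sequential id, based on existing <n>.mp3 files; starts at 1."""
    existing = [
        int(p.stem) for p in Path(dataset_dir).glob("*.mp3") if p.stem.isdigit()
    ]
    return max(existing, default=0) + 1

def save_pair(dataset_dir, audio_bytes, transcript):
    """Save the uploaded audio and its pasted transcript under the same id,
    preserving the audio-transcript link by file name."""
    n = next_sequence_number(dataset_dir)
    d = Path(dataset_dir)
    (d / f"{n}.mp3").write_bytes(audio_bytes)
    (d / f"{n}.txt").write_text(transcript, encoding="utf-8")
    return n

tmp = tempfile.mkdtemp()
first = save_pair(tmp, b"fake-mp3-bytes", "first transcript")
second = save_pair(tmp, b"fake-mp3-bytes", "second transcript")
```

Linking by shared file stem keeps the folder itself usable as the dataset; the alternative the transcript mentions, linking at the metadata level, would replace the stem convention with an explicit index file.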
transcripts/uncorrected/172.txt ADDED
@@ -0,0 +1 @@
+ Building a Reporting Disclosure. I have a few thoughts. One, I can create a model. A model is actually quite feasible. It would be, but it's a data annotation project. It's saying, here's a PDF, here are the actual variables. In other words, here's the scope 3, scope 2, scope 1, here are the units, train it like that.
+ Second thought is if I did want to put together a dataset of sustainability disclosure reports, I think you could argue a public fair use clause for the PDFs being there.
+ And then the one I did with Gemini the other day which was basically a parsing AI tool seemed to work and could probably be used in production and which works even maybe as a way of trying to get in touch with Google is they have They have definitely an AI for good division who may let's say provide Gemini credits for the actual deployment of it on Cloud Run. Because from my first run of it, it was very, very promising for the task of parsing the reports.
+ And that would greatly the feature would be when it extracts the data human human in the loop is done by seeing what it is matching it to a company in the database or to a known company Let's take Google itself as an example. Detects its stock ticker, detects its stock exchange. And then you click like add to database meaning that you're adding the validated data and it could even pull out the metadata from the document pull out the source and that would be a great way of building up a human validated database in other words you take the reports you say either everything everything looks good to me or this is wrong either way you add it then of course you've got the missing financials and the rest of the world.
+ But that would probably be because there is thousands of sustainability disclosures, especially when you consider I think beyond the US globally, and it's beyond. So certainly it's a task for a model, but it's also human in the loop. The ultimate question is if Gemini stock performs 99% sufficiently well in the task of extracting this data from the sustainability reports. A model might actually not even be necessary because out of the box it's almost perfect. That is, I suspect, what the case would be.
transcripts/uncorrected/173.txt ADDED
@@ -0,0 +1 @@
+ Is there an app for two people on Android phones wanting to do karaoke? So what I'm thinking is, especially for Android TV, the karaoke track is up on the TV. And then the two people participating in the karaoke, their phones, the Android devices are the karaoke microphones.
+ And it's live so they're going on to the... so it's a karaoke experience but using your phones as karaoke microphones. I guess it would work with the website. Does this exist?
transcripts/uncorrected/174.txt ADDED
@@ -0,0 +1 @@
+ Alright, so the plan is for this repository, I want to create an audio media streaming interface for my home network. And there's a few things I want to roll into this one too.
+ Number 1 is media playback. So I have a volume on the NAS called AudioShare. The NAS is 10.0.0.50. So connect to the NAS, you'll find the AudioShare volume and let's mount that as the media library. It'll have a lot of tracks already populated.
+ Second thing is a soundboard. So I'll create a folder within that audio share volume called soundboard. And in the soundboard I just upload some stupid sound effects I do one to start it off Like laughing sound.
+ And then I also want to create a intercom system. and the functionality for the intercom is that from this computer, sorry from the interface which will be audio.residence.jlm.com I'd like to have the push to talk and the start and stop. PUSH TO TALK
+ So for the speaker networking this is where I would like you to give me your thoughts on what makes the most sense So I've used before MPD. I've installed MPD clients on... So the devices are, there is a device called Nursery Pi in SSH. Bedroom Pi, R-Pi and Smart TV. Each one is connected to a speaker. That's the network.
+ I tried MPD, putting an MPD client on each device. MPD has been the most reliable But it seems kind of a pity to use this when there are protocols like SnapServer that are designed specifically for this use case. However, using Home Assistant, I found SnapServer to be very buggy. I could never really get it to work and many more and the system that's reliable.
+ I find with MPD, because you need to select the speaker on the client devices, those bindings frequently broke. So I'd like to have something that kind of, the speakers are really never going to change. In the sense that I'm going to, I have a sound card for the Raspberry Pi. That's the speaker. and for as long as I use this system that's gonna be the configuration. So I want to set up something that once it's in place it's pretty much just gonna work.
+ So I leave that call up to you and please create a... Create a folder in the repository providing your recommendations just before you begin and what you suggest as the best implementation for the multi-speaker network whether it is broadcasting to a bunch of MCD clients from the Web UI or whether it's creating a single Snap server or something else that manages the networking I don't envision much of a need to select individual speakers by which I mean, I think that for the most part the occasions I'm using this I'll just play media to the pool but of course it would be nice to be able to select that !
transcripts/uncorrected/175.txt ADDED
@@ -0,0 +1 @@
+ Okay, when I return to the byte, the DAC, I want to download the user manual or I'll just photograph the one. I want to connect the reference speakers with RCA.
+ And I also said to Hannah that I would reconnect her turntable because I took away the switch and the connection. So I will reconnect that. And then hopefully it will just work and that will be it.
+ And I want to also take down the serial number for the inventory. And that would be it added into the into the system.
transcripts/uncorrected/176.txt ADDED
@@ -0,0 +1 @@
+ Okay, so this is a good start. In the dashboard, the developmental milestones and all this data. So currently, if I change tab, if I go into the measurement tab or the report tab, it loses the data in the dashboard. I'm guessing because it's running the prompts again. So that's a bad design. It should load it once per user session and then hold that data so that when the user, when they navigate across the app, they're not going to force the prompts to run all over again as they do now.
+ The second thing is that the design of the elements is a bit bad. It's quite bad, I would say. Let's have them as accordions that are nested by default or will only show the first paragraph of text and the user can click down to expand them. And maybe for each one like psychosocial development if the AI could generate a subtitle so that there's a headline under that and then the user can choose to go into it.
+ In measurements, when you add a measurement, sometimes we don't have either figures for height or weight, so you shouldn't be prevented from saving the measurement if you're lacking one of those data points.
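The "load once per user session" fix requested above is ordinary per-session memoization; a minimal sketch (class and key names are mine, not the app's) looks like this.

```python
class SessionCache:
    """Run each expensive prompt at most once per user session, so tab
    navigation reuses the held result instead of re-running the prompts."""
    def __init__(self):
        self._store = {}

    def get(self, session_id, key, compute):
        if (session_id, key) not in self._store:
            self._store[(session_id, key)] = compute()
        return self._store[(session_id, key)]

calls = []
def fake_prompt():
    """Stand-in for the dashboard's prompt run."""
    calls.append(1)
    return "milestones report"

cache = SessionCache()
a = cache.get("user-1", "dashboard", fake_prompt)  # computed once
b = cache.get("user-1", "dashboard", fake_prompt)  # served from cache
```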
transcripts/uncorrected/177.txt ADDED
@@ -0,0 +1 @@
+ Okay, I'd like to create an app which does the following. The purpose of the app is to visualize how different countries, ideologies, systems approach common policy challenges. An example of a policy challenge that I'm just providing for explaining how I could see this working is second-hand smoke control. Some countries have very strict regulations, some countries have very lax enforcement. And probably there is not really much distinction by system of government but the user prompts it called policy visualizer and the user enters a policy challenge. So another example might be minimum alcohol purchasing laws.
+ Once Gemini receives this prompt, its task will be to research how different countries in the first instance approach this topic. And from that analysis, it can identify commonalities or clusters. The research process happens in the back end. And the user is shown some kind of progress indicators like researching what it's doing basically. Not a huge amount of verbosity but just a few cues so the user knows that it's not stuck or it's actually doing something.
+ Once Gemini concludes its first pass it will have grouped not necessarily every country in the world but based on the clusters it identifies it found groups. Each group is given a label. The label might be laissez-faire, permissive. These may be either recognized labels or what Gemini feels it's best to describe them as. And the countries are displayed with their national flags in alphabetical order.
+ The next functionality is that the user can click on the cluster and Gemini will describe what it is about this law that it considered them to be a cluster. In other words, the way in which they approach the challenge. That's a modal. Then the user can click on any country and it can see how that country approaches it. So I might click on the flag of Germany and either an accordion or a modal it show how Germany approaches in this case gun control and its cluster.
+ Country level is always a tab and only if there's other taxonomies. By taxonomy I mean that we think there's a very, Gemini says there's a very big difference and how different right-wing versus left-wing approaches we're going to do. We're going to create one more tab with that. But that should be kind of only if there's very compelling reason to do so. Or if it has significant data to share. So if it feels like there's enough data about how US states approach an issue at the state level, it might create a tab called US States and then follow the same pattern in which it groups them into clusters.
+ The objective is to, rather than searching through Google to see how different countries do different things, to start with your question and then get this visualisation. And I think the icing on the cake would be an analysis. So this is a visual presentation and then there may be analysis showing significant differences, some similarities. So there's like a report, a textual report, but the main tab, because I think it's the most interesting one, is the visualization, the policy visualizer.
transcripts/uncorrected/178.txt ADDED
@@ -0,0 +1 @@
+ Okay, I'd like to create a sustainability report parser which will operate as follows. The user will provide a link to a sustainability disclosure or better they will upload a PDF. That's the expectation.
+ Upon receiving the PDF from the user the app will load the PDF in a frame. Gemini will identify on which page sustainability, The disclosure data for Scope 321 emissions is reported. And the PDF will load up in the frame, the viewer, with that page skipped to that page, and the data highlighted with a yellow overlay, slight highlight.
+ And beneath it Gemini will output the table for the top level in other words the summary of the scope 321 emissions with a short text description of what they were in summary the units detected scope 321 itemize then a disclaimer under that that this detection is based on automated processing may be incorrect and so on.
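The page-identification step above would be done by Gemini in the real app; as a stand-in, a naive keyword-scoring pass over per-page text shows the shape of the result the viewer needs (a page number to jump to). The keyword list is an assumption.

```python
def find_disclosure_page(pages):
    """Return the 1-based page number on which the Scope 1/2/3 emissions
    table most likely appears, by simple keyword scoring. In the real app
    this decision would come from the Gemini call instead."""
    keywords = ("scope 1", "scope 2", "scope 3", "emissions")
    best_page, best_score = None, 0
    for i, text in enumerate(pages, start=1):
        score = sum(text.lower().count(k) for k in keywords)
        if score > best_score:
            best_page, best_score = i, score
    return best_page

page = find_disclosure_page([
    "About this report",
    "Scope 1 emissions: 12 tCO2e. Scope 2 emissions: 30 tCO2e.",
])
```

The returned page number is what the PDF frame would skip to before drawing the yellow highlight overlay.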
transcripts/uncorrected/179.txt ADDED
@@ -0,0 +1 @@
+ This is called Impact Report Finder. The objective is that the user will provide the name of a company and the AI tool, Gemini, will attempt to find any voluntary sustainability disclosures, impact disclosures that they've written from the internet and it will send them by year. If they include data about their GSD admissions there will be a tick symbol and there will be a link to the result and there will be a direct link to the PDF. and Jeff.
+ So after the user provides the name of the company, there can be a... if Gemini needs to disambiguate, it will ask the user in a text box below, can you clarify and then the user can hit submit again, otherwise it's more than an interactive chat app, it just provides those search results in that specific format with the reports chronologically from by year, if there's multiple ones by year, by date of release, and then if they have GSG data, a link to the data sheet if it's separate, or just the PDF, but basically annotated table of links.
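The annotated table described above (reports ordered by year, then by release date within a year, with a tick for GHG data and a PDF link) could be assembled like this; the dict keys and sample URLs are hypothetical.

```python
def format_results(reports):
    """Order found disclosures newest-first (year, then release date) and
    mark the ones containing GHG emissions data with a tick."""
    ordered = sorted(
        reports,
        key=lambda r: (r["year"], r.get("released", "")),
        reverse=True,
    )
    return [
        (r["year"], r["title"], "✓" if r["has_ghg_data"] else "", r["pdf_url"])
        for r in ordered
    ]

rows = format_results([
    {"year": 2021, "title": "Impact Report 2021", "has_ghg_data": False,
     "pdf_url": "https://example.com/2021.pdf"},
    {"year": 2023, "title": "Impact Report 2023", "has_ghg_data": True,
     "pdf_url": "https://example.com/2023.pdf"},
])
```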
transcripts/uncorrected/180.txt ADDED
@@ -0,0 +1 @@
+ I'd like to create an app which will do the following. It's a voice-to-voice app. The user will record a voice message. The voice recording in the app. The voice recording gets sent to Gemini with a transcript. Gemini's task is to create an abbreviated version of the Voice Message, as short as possible. Essentially cleaning it up. This stage is not shown to the user.
+ But what happens next is that it gets text to speech, it gets synthesized, the user can choose between a male or a female voice. Yeah, and once that, once the generated audio is created, it presents to the user, the user can download it. So it's essentially taking audio from the user, cleaning it, condensing it, synthesizing it, and then download.
+ Come up with an imaginative name for this use case.
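The record → transcribe → condense (hidden) → synthesize → download flow can be sketched as a three-stage pipeline; the stage callables are injected stand-ins here, where the real app would plug in the transcription, Gemini, and text-to-speech calls.

```python
def condense_pipeline(audio, transcribe, condense, synthesize, voice="female"):
    """Voice-to-voice flow: transcribe the recording, produce the
    abbreviated clean version (an intermediate never shown to the user),
    then synthesize it in the chosen male or female voice for download."""
    transcript = transcribe(audio)
    short_text = condense(transcript)       # hidden intermediate stage
    return synthesize(short_text, voice=voice)

# Stub stages stand in for the transcription, Gemini, and TTS calls.
out = condense_pipeline(
    b"raw-recording",
    transcribe=lambda a: "um, so, basically the meeting moved to three pm",
    condense=lambda t: "The meeting moved to 3 pm.",
    synthesize=lambda text, voice: (voice, text),
)
```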