What should the FCT entry defining an AIX in VSAM look like?
I searched online but couldn't find any explanation. |
I want to restrict my application to only support "string" type values in its JSON configuration.
JSON Schema supports a set of data types by default (integer, boolean, etc.), but the parsing logic in my application only supports string values.
How do I make sure the schema does not define a property of any type except string?
Allow - { "key1": "A", "key2": "B" }
Reject - { "key1": "A", "key2": true} or { "key1": "A", "key2": ["1","2"]} |
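For illustration, the rule you want is what JSON Schema expresses with a string-typed `additionalProperties` subschema. A minimal Python sketch of the behavior (the `only_string_values` helper is mine and stands in for a real schema validator such as `jsonschema`, which would be given the `schema` dict directly):

```python
import json

# JSON Schema shape that only allows string-valued properties:
# any property not otherwise declared must match {"type": "string"}
schema = {"type": "object", "additionalProperties": {"type": "string"}}

def only_string_values(doc):
    """Hand-rolled check mirroring what the schema above enforces."""
    obj = json.loads(doc)
    return isinstance(obj, dict) and all(isinstance(v, str) for v in obj.values())

print(only_string_values('{"key1": "A", "key2": "B"}'))        # allowed
print(only_string_values('{"key1": "A", "key2": true}'))       # rejected
print(only_string_values('{"key1": "A", "key2": ["1","2"]}'))  # rejected
```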
How to restrict json schema to not support a particular data type |
I know how to style a checkbox using the `:checked + label` method. But this only applies if the label is independent of the input:
<label>Label</label>
<input type="checkbox" />
Is it possible to do the same level of styling if the input is wrapped in the label?
<label>
label
<input type="checkbox" />
</label>
I'd prefer a purely CSS method for doing this. No JS please.
|
Styling checkbox when input is inside the label |
|css|input| |
I have a dataframe with two integer columns:
```python
data = [("A",1, 5), ("B",3, 8), ("C",2, 7)]
df = spark.createDataFrame(data, ["type","min", "max"])
```
I'm trying to use `random.randrange(start,stop,step)` to generate random numbers within the min/max range provided by the columns for each type and (obviously) failing.
I'm stumped! I think I'm trying to use a function designed to be used on explicit data, on columns of data - but I'm not sure how to get around that.
I tried:
```python
df = df.withColumn("rand",randrange(col("min"),col("max")))
```
> TypeError: int() argument must be a string, a bytes-like object or a real number, not 'Column'
I also tried:
```python
def rando(start,stop):
    return randrange(start,stop)
randoUDF = F.udf(rando,IntegerType())
df = df.withColumn("rand",randoUDF("min","max"))
```
>'ValueError: empty range for randrange() (0,0,0)'
Same outcome for `df = df.withColumn("rand",randoUDF(col("min"),col("max")))`
The min,max columns are definitely fully populated...
Questions I've looked at (not exhaustive!):
- https://stackoverflow.com/questions/22842289/generate-n-unique-random-numbers-within-a-range
Not quite right
- https://stackoverflow.com/questions/62701828/pyspark-how-to-generate-random-numbers-within-a-certain-range-of-a-column-valu
Theoretically I could use the way the score column is generated but there's millions of rows in the dataset so this feels like a bad solution |
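For reference, the per-row computation can be sketched in plain Python (data hard-coded here to mirror the DataFrame above); the same scaled-uniform formula translates to Spark column arithmetic, e.g. with `F.rand()`, rather than `random.randrange`:

```python
import random

# Rows mirroring the example DataFrame
rows = [{"type": "A", "min": 1, "max": 5},
        {"type": "B", "min": 3, "max": 8},
        {"type": "C", "min": 2, "max": 7}]

for row in rows:
    # min + uniform[0,1) * (max - min), truncated to int: always in [min, max)
    row["rand"] = row["min"] + int(random.random() * (row["max"] - row["min"]))

print(rows)
```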
How to generate random numbers within a range defined by other columns |
gulp-pug doesn't work and says unexpected token "indent".
see the images below
[gulp code and nodejs command prompt](https://i.stack.imgur.com/E8H8P.png)
[index.pug and nodejs command prompt](https://i.stack.imgur.com/EoYtr.png)
[package.json and nodejs command prompt](https://i.stack.imgur.com/PNPf9.png)
I use gulp v3.9.1, node v8.11.3, and npm v6.9.0.
I use these versions because the course I'm watching uses them. The newer versions of gulp, node, and npm only work with ECMAScript 6, and even if I use the latest versions of gulp, node, and npm and try to use ECMAScript 6, I get the same problem. |
The core problem is that **you cannot *directly* capture an *elevated* process' output streams from a *non-elevated* process**. This constraint, as reflected in the .NET APIs and as detailed below, *may* come down to a security-minded constraint at the _system_ level.
In .NET terms, setting `.UseShellExecute = true` on a [`System.Diagnostics.ProcessStartInfo`](https://learn.microsoft.com/en-US/dotnet/api/System.Diagnostics.ProcessStartInfo) instance - which is the prerequisite for being able to use `.Verb = "RunAs"` in order to run *elevated* - means that you cannot use the `.RedirectStandardOutput` and `.RedirectStandardError` properties for capturing the launched process' output.
* In fact, trying to start a process with `.RedirectStandardOutput` and/or `.RedirectStandardError` set to `true` _and_ `.UseShellExecute` set to `true` causes an _exception_ that specifically says that these properties cannot be used together.
* The fact that this exception occurs whether or not the `.Verb` property is set to `.Verb = "RunAs"` or even set at all actually suggests that it isn't _elevation_ per se that prevents output capturing, but _any_ [Windows shell](https://en.wikipedia.org/wiki/Windows_shell) operation.
* Conversely, unfortunately, filling in the `.Verb` property without also setting `.UseShellExecute` to `true` causes the `.Verb` property to be _quietly ignored_, meaning that _no_ elevation happens with `.Verb = "RunAs"` - which is what would happen with your code.
The **workaround** is to **perform the output capturing _as part of the elevated process_ by sending the output to _files_**, which in turn requires you to **call _via a shell_ and use its redirection features, namely `>`** (short of the specific target process itself offering a way to save its output to a file).
Since you're calling PowerShell (i.e. a shell) anyway, you can just incorporate redirections to (temporary) files and read them on the C# side afterwards; e.g.:
```csharp
var tmpFile = System.IO.Path.GetTempFileName();
await run.RunCommand(
$"Stop-Process -Name \\\"Docker Desktop\\\" *>\\\"{tmpFile}\\\"",
"powershell",
true); //Command #1
// Now examine the contents of tmpFile
```
Note:
* `*>` is used as the [redirection](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Redirection) so as to capture _all_ of PowerShell's [output streams](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Output_Streams) in the target file (you could capture error output separately, however, with `2>`).
* Also note that you're better off using just _one_ PowerShell process to run all your PowerShell commands, and that there's a [`Restart-Service`](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/restart-service) command that combines `Stop-Service` and `Start-Service`; simply sequence the commands with `;`
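For what it's worth, the same capture-via-file pattern, stripped of the Windows elevation aspect, can be sketched in Python (illustration only; the child command here is a stand-in for the elevated process):

```python
import os
import subprocess
import sys
import tempfile

# Create a temp file to receive the child's output
fd, tmp_path = tempfile.mkstemp(suffix=".log")
os.close(fd)

# The child's output goes to the file via the shell's > redirection,
# so the parent never needs a pipe to the child process
subprocess.run(f'"{sys.executable}" -c "print(42)" > "{tmp_path}"', shell=True)

# Now examine the contents of the file, just as in the C# snippet above
with open(tmp_path) as f:
    captured = f.read().strip()
os.remove(tmp_path)
```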
|
When running the following code (trying to learn from this post: https://thetechbuffet.substack.com/p/evaluate-llms-with-trulens?utm_source=profile&utm_medium=reader2):
```python
from trulens_eval import TruCustomApp

tru_rag = TruCustomApp(
    rag,
    app_id="RAG-einstein:v1",
    feedbacks=[
        f_groundedness,
        f_qa_relevance,
        f_context_relevance,
    ],
)
```
I get the following:
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Input In [22], in <module>
1 from trulens_eval import TruCustomApp
----> 3 tru_rag = TruCustomApp(
4 rag,
5 app_id="RAG-einstein:v1",
6 feedbacks=[
7 f_groundedness,
8 f_qa_relevance,
9 f_context_relevance,
10 ],
11 )
ValidationError: 1 validation error for TruCustomApp records_with_pending_feedback_results
Input should be an instance of Queue [type=is_instance_of, input_value=<queue.Queue object at 0x000001310D6CDF40>, input_type=Queue]
For further information visit https://errors.pydantic.dev/2.6/v/is_instance_of
I have tried several versions of the trulens_eval package, which bring other problems.
The current `trulens_eval.__version__` is '0.23.0'.
Using Python 3.8. |
I am trying to create a k8s cluster using Google Compute Engine, Terraform, and Ansible. I created three VMs through Terraform and installed Docker and Kubernetes on them through Ansible. I want to use Calico as a network add-on. I receive a connection-refused error on port 6443 every time. After some debugging I found the problem in this part:
```yaml
- name: Kubeadmin init - only master
  shell: |
    kubeadm init --service-cidr 10.96.0.0/12 --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address 0.0.0.0
  when:
    - ansible_facts['hostname'] == 'master'

- name: Copy kubeconfig
  shell: |
    mkdir -p /root/.kube
    cp /etc/kubernetes/admin.conf /root/.kube/config
    chown $(id -u):$(id -g) /root/.kube/config
    kubeadm token create --print-join-command > /tmp/.token
  when:
    - ansible_facts['hostname'] == 'master'
```
After I connected to the VM through Google Cloud Platform and ran `kubectl get nodes`, I got a connection-refused error. Then I ran this part again with my own user on Google Cloud Platform:
```shell
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
When I then try `kubectl get nodes` again, it shows me the nodes. So what should I do to solve this in Ansible? I cannot add the Calico add-on because of this error. I think the problem comes from the users (root vs. my own user)?
|
Kubernetes cluster on GCE connection refused error |
|kubernetes|google-cloud-platform|ansible|google-compute-engine|calico| |
Ruby newbtard here...
I have a CSV file (logevents.csv) that has a "message" column.
The "message" column contains rows of json data.
Using Ruby, I'd like to convert the json data's name:value pairs to columnname:rowvalue in a 2nd csv file.
Here's the 1st row of the csv file:
message
"{""version"":""0"",""id"":""fdd11d8a-ef17-75ae-cf50-077285bb7e15"",""detail-type"":""Auth0 log"",""source"":""aws.partner/auth0.com/website-dev-c36bb924-cf05-4a5b-8400-7bdfbfe0806c/auth0.logs"",""account"":""654654277766"",""time"":""2024-03-27T12:30:51Z"",""region"":""us-east-2"",""resources"":\[\],""detail"":{""log_id"":""90020240327123051583073000000000000001223372067726119722"",""data"":{""date"":""2024-03-27T12:30:51.531Z"",""type"":""seacft"",""description"":"""",""connection_id"":"""",""client_id"":""v00a8B5f1sgCDjVhneXMbMmwxlsbYoHq"",""client_name"":""Website Dev"",""ip"":""32.174.36.217"",""user_agent"":""Someday v1.10.3"",""details"":{""code"":""\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*5kW""},""hostname"":""website-dev.us.auth0.com"",""user_id"":""auth0|648a230ee5ad48ee2ebfb212"",""user_name"":""don.collins+dev@website.com"",""auth0_client"":{""name"":""omniauth-auth0"",""version"":""2.6.0"",""env"":{""ruby"":""2.6.5"",""rails"":""6.1.7.4""}},""$event_schema"":{""version"":""1.0.0""},""log_id"":""90020240327123051583073000000000000001223372067726119722""}}}"
For each row, I'd like above to be written to another csv file but with the name:value pairs pivoted into column:rowvalue with a "," (comma) as the delimiter for the column names and row values, ala:
version,id,detail-type,source,account ....etc
0,fdd11d8a-ef17-75ae-cf50-077285bb7e15,Auth0 log,aws.partner/auth0.com/website-dev-c36bb924-cf05-4a5b-8400-7bdfbfe0806c/auth0.logs,654654277766 ....etc
I have been trying to accomplish this via this ruby script (runtimetest.rb):
```
require 'csv'
require 'json'
CSV.open("C:/Ruby/dev/logevents2.csv", "w") do |csv| # open new file for write
  JSON.parse(File.open("C:/Ruby/dev/logevents.csv").read).each do |hash| # open json to parse
    csv << hash.values # write value to file
  end
end
```
But at runtime the CSV file contents (logevents.csv) are written to the screen with an "unexpected token" message:
C:\Users\dclad>runtimetest.rb
C:/Ruby32-x64/lib/ruby/3.2.0/json/common.rb:216:in `parse': unexpected token at '"version"":""0"",""id"":""fdd11d8a-ef17-75ae-cf50-077285bb7e15"",""detail-type"":""Auth0 log"",""source"":""aws.partner/auth0.com/trulab-dev-c36bb924-cf05-4a5b-8400-7bdfbfe0806c/auth0.logs"",""account"":""654654277766"", ........
Was expecting output to be column, row table in 2nd csv:
version,id,detail-type,source,account ....etc
0,fdd11d8a-ef17-75ae-cf50-077285bb7e15,Auth0 log,aws.partner/auth0.com/trulab-dev-c36bb924-cf05-4a5b-8400-7bdfbfe0806c/auth0.logs,654654277766 ....etc
I may be going about this all wrong.
Any suggestions would be greatly appreciated!
Best Regards,
Donald
|
discord.py discord-slash import |
|python|discord|discord.py| |
# Initialization Process Requires Forking
systemd waits for a daemon to initialize itself if the daemon forks. In your situation, that's pretty much the only way you have to do this.
The daemon offering the HTTP service must do all of its initialization in the main process. Once that initialization is done and the socket is listening for connections, it calls `fork()` and the main process exits. At that point systemd knows that your process was successfully initialized (exit 0) or not (exit 1).
Such a service receives the [Type=...][1] value of `forking` as follows:
[Service]
Type=forking
...
_**Note:** If you are writing new code, consider not using fork. systemd already creates a new process for you so you do not have to fork. That was an old System V boot requirement for services._
# "Requires" will make sure the process waits
The other services have to wait, so they have to require the first one to be started. Say your first service is called A; you would have a [Requires][2] entry like this:
[Unit]
...
Requires=A
...
# Program with Patience in Mind
Of course, there is always another way, which is for the other services to be patient themselves. That means: try to connect to the HTTP port; if it fails, sleep for a bit (in your case, 1 or 2 seconds would be just fine), then try again until it works.
I have developed both methods and they both work very well.
_**Note:** A powerful aspect of this method: if service **A** gets restarted, you'd get a new socket. The dependent service can then auto-reconnect to the new socket when it detects that the old one went down. This means you don't have to restart the other services when restarting service **A**. I like this method, but it's a bit more work to make sure it's all properly implemented._
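The retry-until-connected loop is simple to implement in whatever language the dependent service is written in; a sketch in Python (function name and timings are illustrative, not prescribed):

```python
import socket
import time

def wait_for_port(host, port, attempts=30, delay=1.0):
    """Try to connect to (host, port); sleep and retry until it works."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True  # the service is up and accepting connections
        except OSError:
            time.sleep(delay)
    return False  # gave up after `attempts` tries
```

The dependent service would call something like `wait_for_port("127.0.0.1", 8080)` before running its own startup logic.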
# Use the systemd Auto-Restart Feature?
Another way, maybe, would be to use the [restart on failure][3]. So if the child attempts to connect to that HTTP service and fails, it should fail, right? systemd can automatically restart your process over and over again until it succeeds. It's sucky, but if you have no control over the code of those daemons, it's probably the easiest way.
[Service]
...
Restart=on-failure
RestartSec=10
#SuccessExitStatus=3 7 # if success is not always just 0
...
This example waits 10 seconds after a failure before attempting to restart.
# Hack (last resort, not recommended)
You could attempt a hack, although I never recommend such things because something could happen that breaks it... In the services, change the files so that they sleep 60 seconds and then start the main process. For that, just write a script like so:
#!/bin/sh
sleep 60
"$@"
Then in the .service files, call that script as in:
ExecStart=/path/to/script /path/to/service args to service
This will run the script instead of your code directly. The script will first sleep for 60 seconds and then try to run your service. So if for some reason the HTTP service takes 90 seconds this time... it will still fail.
Still, this can be useful to know since that script could do all sorts of things, such as use the [`nc`][4] tool to probe the port before actually starting the service process. You could even write your own probing tool.
#!/bin/sh
while true
do
    sleep 1
    if probe
    then
        break
    fi
done
"$@"
However, notice that such a loop is blocking until `probe` returns with exit code 0.
[1]: https://www.freedesktop.org/software/systemd/man/systemd.service.html#Type=
[2]: https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Requires=
[3]: https://www.freedesktop.org/software/systemd/man/systemd.service.html#Restart=
[4]: http://linux.die.net/man/1/nc |
How can I optimize this transposition table for connect 4 AI? |
|go|hashmap|bitboard| |
This is the script to run the TypeScript project:
```shell
node --require ts-node/register --loader=ts-node/esm --trace-warnings ./src/home-client.ts
```
With pm2, I tried:
```shell
pm2 start ./src/home-client.ts --node-args="--require ts-node/register --loader=ts-node/esm --trace-warnings"
```
The logs display an error: `0|home-cli | error: Script not found "ts-node/register"`
What's the correct way to run a TypeScript project with pm2 and node?
|
**You are trying to add a class to an element based on the value of a range slider. There are a couple of issues in your code.**
1. The `mousemove` event is not suitable for detecting changes in the value of a range slider. You should use the `input` event.
2. The condition `if (this.value == "1", "2", "3", "4", "5")` is incorrect. You can't use multiple values like that in an equality check. You should use a range check instead.
Here is the corrected code:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
$(document).ready(function() {
  $("#range-slider").on('input', function() {
    var value = parseInt($(this).val());
    if (value >= 1 && value <= 5) {
      $('.other-element').addClass('is-active').html(`Slider Active Value is: ${value}`);
    } else {
      $('.other-element').removeClass('is-active').html(`Slider Value is: ${value}`);
    }
  });
});
<!-- language: lang-css -->
.other-element {
  margin-top: 20px;
  padding: 10px;
  background-color: #f1f1f1;
  transition: all 200ms ease;
}
.other-element.is-active {
  background-color: lightblue;
}
<!-- language: lang-html -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script>
<input type="range" id="range-slider" min="0" value="7" max="9">
<div class="other-element">value</div>
<!-- end snippet -->
|
Java activity:
```
public class MainActivity extends AppCompatActivity {
    ActivityMainBinding binding;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        binding = ActivityMainBinding.inflate(getLayoutInflater());
        setContentView(binding.getRoot());
        replaceFragment(new MainFragment());
        NavigationBar();
        ImageButton search = (ImageButton) findViewById(R.id.search);
        search.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Toast.makeText(MainActivity.this, "running", Toast.LENGTH_LONG).show();
            }
        });
    }
}
```
When I'm assigning a listener to "search", the app crashes.
XML fragment:
```
<FrameLayout
    android:layout_width="match_parent"
    android:layout_height="70dp"
    android:orientation="horizontal"
    android:background="@color/white"
    android:paddingLeft="24dp"
    android:paddingRight="24dp">

    <ImageButton
        android:id="@+id/search"
        android:layout_width="40dp"
        android:layout_height="40dp"
        android:src="@drawable/search_normal"
        android:layout_gravity="right|center"
        android:scaleType="center"
        android:background="@android:color/transparent"/>
</FrameLayout>
```
Please help.
I've tried a lot of things, watched a lot of videos, and googled, but I didn't find the answer. |
Can defer recover prevent mutex Unlock? |
|go|mutex| |
I've just installed WordPress 6.4.3 and MAMP, and managed to edit functions.php and save my changes.
Twenty Twenty-Four: Theme Functions (functions.php)
When I tried to make more changes to the file, Wordpress started displaying the following error message:
> Something went wrong. Your change may not have been saved. Please try again. There is also a chance that you may need to manually fix and upload the file over FTP.
When I try to edit my website pages, I now get the following error message:
> Updating failed. The response is not a valid JSON response.
This is what I added to functions.php successfully; now I cannot edit it anymore:
```php
if ($_SERVER["REQUEST_METHOD"] === "POST") {
    $name = sanitize_text_field($_POST["name"]);
    $email = sanitize_email($_POST["email"]);
    $message = sanitize_textarea_field($_POST["description"]);
    // Add code to save the form data to the database
    global $wpdb;
    $table_name = $wpdb->prefix . 'request_form';
    $data = array(
        'name' => $name,
        'email' => $email,
        'description' => $message,
        'submission_time' => current_time('mysql')
    );
    $insert_result = $wpdb->insert($table_name, $data);
    if ($insert_result === false) {
        $response = array(
            'success' => false,
            'message' => 'Error saving the form data.',
        );
    } else {
        $response = array(
            'success' => true,
            'message' => 'Form data saved successfully.'
        );
    }
    // Return the JSON response
    header('Content-Type: application/json');
    echo json_encode($response);
    exit;
}
```
What I have done:
- I've checked permissions of the file and gave read and write access to everyone on my localhost.
- Restarted the server
- When I changed http to https for WordPress Address (URL) and Site Address (URL) under Settings > General and saved the changes, I was redirected to http://localhost:8888/wp-admin/options.php and can see the following:
<br />
<b>Notice</b>: Undefined index: name in
<b>/Applications/MAMP/htdocs/wp-
content/themes/twentytwentyfour/functions.php</b> on line
<b>209</b><br />
<br />
<b>Notice</b>: Undefined index: email in
<b>/Applications/MAMP/htdocs/wp-
content/themes/twentytwentyfour/functions.php</b> on line
<b>210</b><br />
<br />
<b>Notice</b>: Undefined index: description in
<b>/Applications/MAMP/htdocs/wp-
content/themes/twentytwentyfour/functions.php</b> on line
<b>211</b><br />
<div id="error"><p class="wpdberror"><strong>WordPress database
error:</strong> [Unknown column 'submission_time' in
'field list']<br /><code>INSERT INTO `wp_request_form`
(`name`, `email`, `description`, `submission_time`) VALUES
('', '', '', '2024-03-30
07:10:51')</code></p></div>{"success":false,"message":"Error
saving the form data."}
|
You need to run at the end of each day a function that protects that day's sheet. To automate that, add a [time-driven trigger](https://developers.google.com/apps-script/guides/triggers/installable#time-driven_triggers) and set it to fire at 23:00 hours.
The code to protect the current day's sheet could look like this:
```lang-js
function protectTodaysSheet() {
const ss = SpreadsheetApp.getActive();
  const sheetName = String(new Date().getDate()); // getSheetByName() expects a string
const sheet = ss.getSheetByName(sheetName);
protectSheet_(sheet);
}
```
You can find my implementation of `protectSheet_()` (and its friends) at
[Protecting Entire Sheet using apps script](https://stackoverflow.com/a/78068531/13045193). |
I am confused about whether I need to set environment variables after installing OpenJDK via ***sudo dnf install java-latest-openjdk-devel.x86_64***, as the Fedora documentation doesn't provide any clarification about it. Please help!
I installed OpenJDK from its official repository, and now I am unsure whether to add its path to the environment variables or not. |
About installing openjdk on Fedora workstation |
|java-11| |
This bug was [fixed in CU1][1]:
> 2081891 : Fixes an exception that occurs when `JSON_ARRAY`/`JSON_OBJECT` return values are used in a parameter in functions that take strings. After you apply this fix, return values of `JSON_ARRAY` and `JSON_OBJECT` are made coercible and can be used as string parameters.
So I ***strongly*** suggest you upgrade to [the latest CU][2], for this and many other reasons (security, performance, other bug fixes).
[1]: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/sqlserver-2022/cumulativeupdate1#2081891
[2]: https://www.sqlserverversions.com/2021/07/sql-server-2022-versions.html |
I am trying to display my post on my home page, but I get the error "NameError in Home#index". Here is the code. I am new to coding, so the code may look weird. I have tried to use ChatGPT and nothing seems to help. Does anyone have any suggestions?
```
before_action :set_stories
```
Home Controller:
```
def set_stories
  @stories = List.where(user: [current_user])
end
```
Index View (index.html.erb):
```
<div class="d-flex flex-column gap-3">
  <!-- Story Section Start -->
  <% @stories.each do |stories_list| %>
    <%= render 'story/stories_list', stories_list: stories_list %>
  <% end %>
```
Partial for Story List (_stories_list.html.erb):
```
<div class="card d-flex flex-row align-items-center gap-3 px-3" style="width: 25rem; height: 7rem; overflow-x:scroll;">
  <% @stories.each do |story| %>
    <% (0...1).each do %>
      <%= render 'story/story', story: story %>
    <% end %>
  <% end %>
</div>
```
Partial for Story (_story.html.erb):
```
<div class="d-flex flex-column justify-content-center align-items-center">
  <div class="col-lg-4 col-md-6 col-sm-8 p-3 mb-4 profile-post" style="height:20rem;position: relative">
    <%= link_to @list do %>
      <div class="row">
        <% if list.file.attached? %>
          <% if list.file.image? %>
            <div class="img-fluid">
              <%= image_tag(list.file, class: "img-thumbnail rounded-circle border border-2 border-primary", style: "width: 3.5rem") %>
            </div>
          <% elsif list.file.video? %>
            <%= video_tag(url_for(list.file), class: "img-thumbnail rounded-circle border border-2 border-primary", style: "width: 3.5rem", autoplay: false, loop: false, muted: false, controls: true) %>
          <% end %>
        <% end %>
      </div>
    <% end %>
  </div>
</div>
```
I was able to get the listings to display; however, the layout is wrong: it is showing the data twice. I think this is the problem:
```
<% @stories.each do |story| %>
  <% (0...1).each do %>
    <%= render 'story/story', story: story %>
  <% end %>
<% end %>
```
|
I noticed that ffmpeg's automatic stream selection does not always pick the variant with the best video *and audio* quality.
For example, consider following HLS playlist (based on a real-world example I encountered):
```
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-MEDIA:URI="audio_64.m3u8",TYPE=AUDIO,GROUP-ID="6",NAME="audio 0",DEFAULT=YES,AUTOSELECT=YES
#EXT-X-MEDIA:URI="audio_128.m3u8",TYPE=AUDIO,GROUP-ID="7",NAME="audio 0",DEFAULT=YES,AUTOSELECT=YES
#EXT-X-STREAM-INF:PROGRAM-ID=0,CLOSED-CAPTIONS=NONE,BANDWIDTH=2966645,AVERAGE-BANDWIDTH=2586708,RESOLUTION=1080x1920,FRAME-RATE=30,CODECS="avc1.640028,mp4a.40.2",AUDIO="6"
video_1080.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=0,CLOSED-CAPTIONS=NONE,BANDWIDTH=3030821,AVERAGE-BANDWIDTH=2650778,RESOLUTION=1080x1920,FRAME-RATE=30,CODECS="avc1.640028,mp4a.40.2",AUDIO="7"
video_1080.m3u8
```
From the bitrates you can see that both variants have the same video quality but different sizes, as indicated by `BANDWIDTH`. Each video has `AUDIO=<id>`
that references the audio rendition with the corresponding `GROUP-ID=<id>`.
Judging by the value of `BANDWIDTH`, the video with `AUDIO="7"` has better audio quality. I have also verified that manually.
ffmpeg seems to look only at `RESOLUTION`. Since both videos have same resolution, it picks the first video, as noted in [docs](https://ffmpeg.org/ffmpeg.html#Automatic-stream-selection):
> It will select that stream based upon the following criteria:
> - **for video, it is the stream with the highest resolution**,
> - for audio, it is the stream with the most channels,
> - ...
>
> **In the case where several streams of the same type rate equally, the stream with the lowest index is chosen**.
# Fix
To fix that you can manually select the best streams, and then instruct ffmpeg to use them when downloading
## Finding best streams
Use
```
ffprobe -i <path-to.m3u8> -print_format json -show_streams
```
to show information about programs and streams in the playlist files, and select the best streams (JSON is used as example, see `-print_format` option [here](https://www.ffmpeg.org/ffprobe.html))
Continuing with JSON as example, to find the best audio stream, check the
streams with `codec_type: "audio"` and compare them by `bit_rate`
For best video, check streams with `codec_type: "video"` and compare them by `tags.variant_bitrate`
## Telling ffmpeg to use those streams
Once you've found the best streams, note the `program_id` where each stream is located, and also the index of the stream in the `streams` array.
Next, pass `-map` option for each stream you want to download:
```
ffmpeg -i <path-to.m3u8> -map 0:p:<program-id>:<stream-index> ...
```
For explanation of the value format, see [stream specifiers](https://ffmpeg.org/ffmpeg-all.html#toc-Stream-specifiers-1) |
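The stream-picking step can also be scripted; here's a sketch in Python over a trimmed, hypothetical `ffprobe -print_format json` output (real output has many more fields, and the exact fields present can vary by input):

```python
import json

# Trimmed, hypothetical ffprobe output for a playlist like the one above
ffprobe_out = """{"streams": [
  {"index": 0, "codec_type": "video", "tags": {"variant_bitrate": "2586708"}},
  {"index": 1, "codec_type": "audio", "bit_rate": "64000"},
  {"index": 2, "codec_type": "audio", "bit_rate": "128000"}
]}"""

streams = json.loads(ffprobe_out)["streams"]

# Best audio: highest bit_rate among codec_type == "audio"
best_audio = max((s for s in streams if s["codec_type"] == "audio"),
                 key=lambda s: int(s["bit_rate"]))

# Best video: highest tags.variant_bitrate among codec_type == "video"
best_video = max((s for s in streams if s["codec_type"] == "video"),
                 key=lambda s: int(s["tags"]["variant_bitrate"]))

print(best_audio["index"], best_video["index"])  # indices to pass to -map
```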
I am creating a memoization example with a function that adds up / averages the elements of an array and compares it with the cached ones to retrieve them in case they are already stored.
In addition, I want to store only if the result of the function differs considerably (passes a threshold e.g. 5000 below).
I created an example using a decorator to do so. The version using the decorator is slightly slower than without the memoization, which is not OK. Also, is the logic of the decorator correct?
My code is attached below:
```python
import time
import random
from collections import OrderedDict

def memoize(f):
    cache = {}
    def g(*args):
        sum_key_arr = sum(args[0])
        print(sum_key_arr)
        if sum_key_arr not in cache:
            # key in dict cannot be an array so I use the sum of the array as the key
            for key, value in OrderedDict(sorted(cache.items())).items():
                if abs(sum_key_arr - key) <= 5000:  # threshold is great here so that all values are approximated!
                    #print('approximated')
                    return cache[key]
            else:
                #print('not approximated')
                cache[sum_key_arr] = f(args[0], args[1])
        return cache[sum_key_arr]
    return g

@memoize
def aggregate(dict_list_arr, operation):
    if operation == 'avg':
        return sum(dict_list_arr) / len(list(dict_list_arr))
    if operation == 'sum':
        return sum(dict_list_arr)
    return None

t = time.time()
for i in range(200, 150000):
    res = aggregate(list(range(i)), 'avg')
elapsed = time.time() - t
print(res)
print(elapsed)
```
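For comparison, a sketch of the same threshold idea with a cheaper nearest-key lookup: keep the cache keys in a sorted list and binary-search them with `bisect`, instead of re-sorting the whole cache on every miss (names and structure here are mine, not from the code above):

```python
import bisect

def approx_memoize(threshold=5000):
    """Memoize on sum(arr); reuse a cached result whose key is within `threshold`."""
    def decorator(f):
        cache = {}          # key -> result
        sorted_keys = []    # kept sorted for O(log n) nearest-key lookup
        def wrapper(arr, operation):
            key = sum(arr)
            # the nearest cached key is adjacent to the insertion point
            i = bisect.bisect_left(sorted_keys, key)
            for j in (i - 1, i):
                if 0 <= j < len(sorted_keys) and abs(sorted_keys[j] - key) <= threshold:
                    return cache[sorted_keys[j]]
            result = f(arr, operation)
            cache[key] = result
            bisect.insort(sorted_keys, key)
            return result
        return wrapper
    return decorator

@approx_memoize(threshold=5000)
def aggregate(arr, operation):
    if operation == 'avg':
        return sum(arr) / len(arr)
    if operation == 'sum':
        return sum(arr)
    return None
```

Whether approximating by sum of elements is acceptable at all is a separate question; this only removes the per-call sort.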
|
crash app when lisnener setOnClickListener |
|java|android| |
My script adds fields to a point layer. This happens while iterating over a polygon layer. For each feature I look for the closest point from a dataset. This point gets written to a point layer (REL_LAWIS_profiles). Code:
```
## D.4 Write LAWIS snowprofile layer
lawislayer = QgsVectorLayer('Point?crs = epsg:4326', 'lawisprofiles', 'memory')
lawislayer_path = str(outpath_REL + "lawis_layer.shp")
_writer = QgsVectorFileWriter.writeAsVectorFormat(lawislayer,lawislayer_path,'utf-8',driverName='ESRI Shapefile')
REL_LAWIS_profiles = iface.addVectorLayer(lawislayer_path,"REL_LAWIS_profiles", "ogr")
## D.5 LAWIS snowprofile layer write attribute fields
lawisprovider = REL_LAWIS_profiles.dataProvider()
lawisprovider.addAttributes([QgsField("REL_ID", QVariant.String),#QVariant.String
QgsField("ID", QVariant.Int),
QgsField("NAME", QVariant.String),
QgsField("DATE", QVariant.String),
QgsField("ALTIDUDE", QVariant.Int),
QgsField("ASPECT", QVariant.String),
QgsField("SLOPE", QVariant.Int),
QgsField("SD", QVariant.Int),
QgsField("ECTN1", QVariant.Int),
QgsField("ECTN2", QVariant.Int),
QgsField("ECTN3", QVariant.Int),
QgsField("ECTN4", QVariant.Int),
QgsField("COMMENTS", QVariant.String),
QgsField("PDF", QVariant.String)])
REL_LAWIS_profiles.updateFields()
## get layer for data collection
lawis_Pts = QgsProject.instance().mapLayersByName('REL_LAWIS_profiles')[0]
## look for closest point and get data for fields....
```
In a second step features are added and values get assigned to the fields:
```
## GET FIELD ID FROM lawis_pts
rel_id_idx = feat.fields().lookupField('REL_ID') # feat because inside a for loop of another layer
id_idx = lawis_Pts.fields().lookupField('ID')
name_idx = lawis_Pts.fields().lookupField('NAME')
date_idx = lawis_Pts.fields().lookupField('DATE')
l_alti_idx = lawis_Pts.fields().lookupField('ALTIDUDE')
l_aspect_idx = lawis_Pts.fields().lookupField('ASPECT')
l_slo_idx = lawis_Pts.fields().lookupField('SLOPE')
l_sd_idx = lawis_Pts.fields().lookupField('SD')
l_ectn_idx1 = lawis_Pts.fields().lookupField('ECTN1')
l_ectn_idx2 = lawis_Pts.fields().lookupField('ECTN2')
l_ectn_idx3 = lawis_Pts.fields().lookupField('ECTN3')
l_ectn_idx4 = lawis_Pts.fields().lookupField('ECTN4')
com_idx = lawis_Pts.fields().lookupField('COMMENTS')
pdf_idx = lawis_Pts.fields().lookupField('PDF')
## ADD FEATURES TO lawis_Pts.
lawis_Pts.startEditing()
lawisfeat = QgsFeature()
lawisfeat.setGeometry( QgsGeometry.fromPointXY(QgsPointXY(lawisprofile_long,lawisprofile_lat)))
lawisprovider.addFeatures([lawisfeat])
lawis_Pts.commitChanges()
## CHANGE VALUES OF SELECTED FEATURE
lawis_Pts.startEditing()
for lfeat in selection:
lawis_Pts.changeAttributeValue(lfeat.id(), rel_id_idx, REL_LAWIS_ID)
lawis_Pts.changeAttributeValue(lfeat.id(), id_idx, LAWIS_id)
lawis_Pts.changeAttributeValue(lfeat.id(), name_idx, LAWIS_NAME)
lawis_Pts.changeAttributeValue(lfeat.id(), date_idx, LAWIS_DATE)
lawis_Pts.changeAttributeValue(lfeat.id(), l_alti_idx, LAWIS_ALTIDUDE)
lawis_Pts.changeAttributeValue(lfeat.id(), l_aspect_idx, LAWIS_ASPECT)
lawis_Pts.changeAttributeValue(lfeat.id(), l_slo_idx, LAWIS_SLOPE)
lawis_Pts.changeAttributeValue(lfeat.id(), l_sd_idx, LAWIS_SD)
lawis_Pts.changeAttributeValue(lfeat.id(), l_ectn_idx1, LAWIS_ECTN1)
lawis_Pts.changeAttributeValue(lfeat.id(), l_ectn_idx2, LAWIS_ECTN2)
lawis_Pts.changeAttributeValue(lfeat.id(), l_ectn_idx3, LAWIS_ECTN3)
lawis_Pts.changeAttributeValue(lfeat.id(), l_ectn_idx4, LAWIS_ECTN4)
lawis_Pts.changeAttributeValue(lfeat.id(), com_idx, LAWIS_COMMENTS)
    lawis_Pts.changeAttributeValue(lfeat.id(), pdf_idx, LAWIS_PDFlink)
lawis_Pts.commitChanges()
```
In the attribute table the layer has 14 fields, but the values are not written to the fields.
I checked the values, but there was no issue there. Then I checked whether all fields exist, and field no. 12 (at least for Python) does not exist. With this:
```
fields = lawis_Pts.fields()
for field in fields:
print(field.name())
```
I checked right after adding the fields whether all fields get added. But there are only 13 fields (REL_ID,ID,NAME,DATE,ALTIDUDE,ASPECT,SLOPE,SD,ECTN1,ECTN2,ECTN3,COMMENTS,PDF). So I found out the problem is ECTN4. I also checked the index of ECTN4 with
```
print(l_ectn_idx4)
```
which gave me -1, which means it does not exist. But if I remove the layer that was added by the script, then add the layer manually and look for the field, it is there, also when using code. I assume there has to be a problem with how I add the layer, but I just can't find the reason for this behavior. I'm thankful for any ideas!
I just created my first Next.js 14 app with App Router and NextAuth v4 authentication and authorization.
I was so proud, too, everything works great!
There is just one caveat I cannot seem to solve: Server Actions.
They are supposed to replace API routes, making things a lot smoother and easier.
They do work perfectly, but the problem is: I cannot protect them.
App setup:
App Router folder structure with an "actions" folder alongside the app folder. It contains all server actions.
One of the files is db.js containing all database related logic, including this piece of code:
```
export async function executeQuery(query, values = []) {
// Ensure that the connection pool is initialized
await connectDatabase();
// Execute the query using the connection pool
const connection = await pool.getConnection();
try {
const [results] = await connection.execute(query, values);
return results;
} finally {
connection.release(); // Release the connection back to the pool
}
}
```
This basically does all database queries with a normal mysql2 database, I am not using an adapter.
All relevant routes are protected in the middleware (here just an example):
```
export { default } from "next-auth/middleware";
export const config = { matcher: ["/profile", "/dashboard"] };
```
Now, when I call "executeQuery" on the public route "home" like this (did it just to test this problem):
```
import {useSession} from "next-auth/react";
import { executeQuery } from "../actions/db";
import { useState } from "react";
export default /*async*/ function Home() {
const session = useSession();
const [data, setData] = useState(null);
return (
<div className="base-container" style={{ textAlign: "center", flexDirection: "column" }}>
<h1>Home</h1>
{session?.user?.firstname}, {session?.user?.lastname}
<input type="text" name="id" className="base-input" style={{border: "1px solid white"}}></input>
<button onClick={async () => {
let id = document.querySelector('input[name="id"]').value;
const result = await executeQuery("SELECT * FROM users WHERE id = ?", [id]);
setData(JSON.stringify(result[0]));
}}> Get email for input user</button>
<label style={{border: "1px solid white"}}>{data}</label>
</div>
);
}
```
Here, the firstname/lastname only show up when logged in, all good.
I was expecting NextAuth to at least prohibit the use of the server action when not logged in, but that was not the case. And since the server functions are also used before logging in, and should only be used in a certain way, that alone would not help completely.
Putting in a random ID into the input field (which matches an ID in the database; and yes I could use UUIDs, but I don't want to rely on only that, or do people do that?) I can pull all data related to "users" in the database; I could even put in more elaborate queries and get everything I want.
That is of course an unacceptable security issue.
The only way I can think of right now is to manually check in every single server action whether the call is OK to make. But this cannot be the right way, as it is prone to errors, too.
How do I correctly secure server actions (this one but also others that may be sensitive)? Or do I have to use APIs after all, as API routes can be protected with Next Auth easily...?
I am a little stumped here, so I would really appreciate some input on how this is supposed to work. |
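One pattern that could help (a sketch, not the canonical NextAuth API for this; `getSession` below is a hypothetical stand-in for `getServerSession(authOptions)`) is to wrap sensitive server actions in a guard that checks the session on the server before running the action:

```javascript
// Hypothetical guard: wraps a server action so it only runs with a valid session.
// `getSession` is injected so the wrapper itself stays framework-agnostic.
function withAuth(action, getSession) {
  return async (...args) => {
    const session = await getSession();
    if (!session || !session.user) {
      // reject before any database work happens
      throw new Error("Unauthorized");
    }
    return action(...args);
  };
}

// Usage sketch (names assumed):
// export const safeQuery = withAuth(executeQuery, () =>
//   getServerSession(authOptions)
// );
```

The design idea is that the check lives in one place, so forgetting it in an individual action becomes impossible for any action exported through the wrapper.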
Include additional options when initializing the app. You can find the necessary keys in the Firebase console under project settings.
Future<void> main() async {
WidgetsFlutterBinding.ensureInitialized();
await Firebase.initializeApp(
options: FirebaseOptions(
apiKey: 'your_api_key',
appId: 'your_app_id',
messagingSenderId: 'your_messaging_sender_id',
projectId: 'your_project_id',
storageBucket: 'your_storage_bucket',
)
);
runApp(MyApp());
} |
I have a table in Supabase which has a reported_location column. I want to listen to inserts in Flutter within a given radius.
I am using the code below, but it isn't working.
late RealtimeChannel _violationsListener;
void subscribeToViolations(LatLng userLocation) {
const radius = 10000; // 10 km in meters
final userLocationPoint =
'ST_SetSRID(ST_MakePoint(${userLocation.longitude}, ${userLocation.latitude}), 4326)';
_violationsListener = supabase
.channel('violations-channel')
.onPostgresChanges(
event: PostgresChangeEvent.insert,
schema: 'public',
table: 'violations',
filter: PostgresChangeFilter(
type: PostgresChangeFilterType.lt,
column: 'ST_Distance(reporting_location, $userLocationPoint)',
value: radius.toString(),
),
callback: (payload) async {
final json = payload.newRecord;
debugPrint(
'Printing subscribed message================================');
debugPrint(json.toString());
},
)
.subscribe();
} |
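As far as I can tell, `PostgresChangeFilter` only accepts a plain column name, not a SQL expression such as `ST_Distance(...)`, so a filter like the one above would not be applied as intended. One workaround (an assumption, not an official recommendation) is to subscribe to all inserts and filter client-side by great-circle distance. The distance check, shown in Python for brevity (the formula ports directly to Dart), could look like:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_radius(user, point, radius_m=10000):
    """True if `point` (lat, lon) is within `radius_m` meters of `user`."""
    return haversine_m(user[0], user[1], point[0], point[1]) <= radius_m
```

In the `callback`, the equivalent Dart check would read the coordinates from `payload.newRecord` and simply ignore records where the distance exceeds the radius.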
Listening to table inserts in supabase within radius of user location in Flutter |
|supabase|supabase-flutter|supabase-realtime| |
I have implemented up to Statements and State in the Tree Walk Interpreter, and I'm stuck on an error. Can you please help me fix it?
```
print 5 + 4;
[line ] Error at 'print': Expect expression.
Exception in thread "main" com.piscan.Zealot.Parser$ParseError
        at com.piscan.Zealot.Parser.error(Parser.java:131)
        at com.piscan.Zealot.Parser.primary(Parser.java:227)
        at com.piscan.Zealot.Parser.unary(Parser.java:202)
        at com.piscan.Zealot.Parser.factor(Parser.java:185)
        at com.piscan.Zealot.Parser.term(Parser.java:173)
        at com.piscan.Zealot.Parser.comparison(Parser.java:160)
        at com.piscan.Zealot.Parser.equality(Parser.java:50)
        at com.piscan.Zealot.Parser.expression(Parser.java:45)
        at com.piscan.Zealot.Parser.expressionStatement(Parser.java:78)
        at com.piscan.Zealot.Parser.statement(Parser.java:66)
        at com.piscan.Zealot.Parser.parse(Parser.java:37)
        at com.piscan.Zealot.Zealot.run(Zealot.java:85)
        at com.piscan.Zealot.Zealot.runPrompt(Zealot.java:66)
        at com.piscan.Zealot.Zealot.main(Zealot.java:25)
```
here's the code [Github](https://github.com/kmr-ankitt/Zealot)
This program is meant to evaluate
> print 5 + 5;
> as 10
but it shows the error mentioned above.
I would go for a **lookup array** instead of programming edge-cases.
The reason for this is that when there are edge cases in data handling, you should see whether you can solve them dynamically instead of hard-coding them. Should (multiple) edge-cases be necessary in the future, you could also save the lookup array as data somewhere _(a data-file)_ and adjust it without having to change the program. _(so no recompile is needed)_
So, create a lookup array which contains the **valid** indices. This way you can easily do a normal random on a consecutive array. The selected value is the index you should use on the original array.
public class Program
{
private static Random _rnd = new Random();
public static void Main()
{
// Some example array containing all the values.
var myArray = "abcdefghijklmnop".ToArray();
// The lookup array containing the indices which are valid.
var rndLookup = new[] { 2, 3, 4, 5, 6, 7, 8, 15 };
// Choose a random index of the lookup-array and use
// the length as maximum.
var rndIndex = _rnd.Next(rndLookup.Length);
// Select the value from the original array, via the lookup-array.
// It would be wise to check if there is no index out of bounds
// On the original array.
Console.WriteLine("The random value is: " +myArray[rndLookup[rndIndex]]);
}
} |
i have this page component for each product information:
```
export default function Product() {
const { data } = useContext(ShopContext);
const { id } = useParams();
if (!data) {
return <div>Loading product...</div>;
}
const product = data.find((item) => item.id === Number(id));
return (
<div>
{product.title}
</div>
);
}
```
somehow product is undefined even though data and id can be logged to the console and their values are available. I made sure of it like this:
```
export default function Product() {
const { data } = useContext(ShopContext);
const { id } = useParams();
if (!data) {
return <div>Loading product...</div>;
}
const product = data.find((item) => item.id === Number(id));
return (
<div>
<div>{id}</div> {/*this div is displayed as expected*/}
<div> {/*this div is displayed as expected*/}
{data.map((item) => (
<div key={item.id}>{item.title}</div>
))}
</div>
<div>{product?.title}</div> {/*this div is empty*/}
</div>
);
}
```
I really can't find a solution when I don't even understand what's going on, but I tried it this way too and still nothing shows on the page (the URL is correct and the loading div is displayed):
```
export default function Product() {
const { data } = useContext(ShopContext);
const { id } = useParams();
if (!data) {
return <div>Loading product...</div>;
}
return (
<div>
{data.map(item => (
<div key={item.id}>
{item.id === id ? item.title : null}
</div>
))}
</div>
);
}
```
For additional information, I'm fetching the data from [fakestoreapi.com](fakestoreapi.com) in the App component, and it works fine in other components. Here's the fetching piece:
```
useEffect(() => {
async function FetchData() {
try {
const response = await fetch("https://fakestoreapi.com/products");
if (!response.ok) {
throw new Error(`HTTP error: Status ${response.status}`);
}
let postsData = await response.json();
postsData.sort((a, b) => {
const nameA = a.title.toLowerCase();
const nameB = b.title.toLowerCase();
return nameA.localeCompare(nameB);
});
setData(postsData);
setError(null);
} catch (err) {
setData(null);
setError(err.message);
} finally {
setLoading(false);
}
}
FetchData();
}, []);
```
this is the context:
```
import { createContext } from "react";
export const ShopContext = createContext({
data: [],
loading: true,
error: "",
setData: () => {},
cart: [],
addToCart: () => {},
removeFromCart: () => {},
});
```
and this is its states in app component:
```
const [data, setData] = useState(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState(null);
const [cart, setCart] = useState([]);
const addToCart = (productId) => {
....
};
const removeFromCart = (productId) => {
.....
};
return (
<ShopContext.Provider
value={{
data,
loading,
error,
setData,
cart,
addToCart,
removeFromCart,
}}
>
<Header />
<Body />
<Footer />
</ShopContext.Provider>
);
```
I really don't know what the problem is or what to search for.
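One classic pitfall worth ruling out here (an assumption, not a confirmed diagnosis for this code): `useParams()` always returns strings, the API ids are numbers, and strict equality never coerces types. A minimal illustration:

```javascript
// Strict equality between a string id (from the URL) and a numeric id
// (from the API) always fails, so find() returns undefined.
const data = [{ id: 3, title: "Shirt" }];
const id = "3"; // what useParams() would yield

console.log(data.find((item) => item.id === id));         // undefined
console.log(data.find((item) => item.id === Number(id))); // the product object
```

This matches the third snippet above (`item.id === id` without `Number`); for the first snippet, it may still be worth logging `typeof id` and `typeof item.id` to confirm both sides really are what you expect.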
|
Protect Server Actions with Next Auth in Next JS 14 |
|next.js|server|authorization|action|next-auth| |
null |
I am trying to create custom video controls for my React Native app, but it seems like I am missing something. Currently only the pause/play functionality works. The fast-forward/rewind and slider don't work. Here is the code for the pause and play functionality that works.
```
const togglePlayPause = async () => {
if (videoRef.current) {
if (isPlaying) {
await videoRef.current.pauseAsync();
} else {
await videoRef.current.playAsync();
}
setIsPlaying(!isPlaying);
}
};
<View style={styles.controls}>
<TouchableOpacity onPress={togglePlayPause} style={styles.controlButton}>
{isPlaying?
<FontAwesome5 name="pause" size={30} color="white" />
:
<FontAwesome5 name="play" size={30} color="white" />
}
</TouchableOpacity>
</View>
```
I think I did the right thing, but it's not working, so probably not. Could someone tell me what I am doing wrong? This is the rest of the code, including the pause and play functionality I mentioned earlier:
```
import { View, Text,Dimensions, ImageBackground,StyleSheet, StatusBar, FlatList,Image, TouchableOpacity } from 'react-native'
import { TapGestureHandler, State } from 'react-native-gesture-handler';
import { FontAwesome5 } from '@expo/vector-icons';
import Slider from '@react-native-community/slider';
import React,{useState,useEffect,useRef} from 'react'
import EchoIcon from './Echo';
import ThumbsDownIcon from './ThumbsDown';
import ThumbsUpIcon from './ThumbsUp';
import { Video,ResizeMode } from 'expo-av';
import { supabase } from '../../../../supabase1';
import CommentThumbsUp from '../../../../animations/CommentThumbsUp';
import CommentThumbsDown from '../../../../animations/CommentThumbsDown';
export default function ViewPostMedia({navigation,route}) {
const screenWidth = Dimensions.get('window').width;
const SCREEN_HEIGHT = (Dimensions.get('window').height)
const {media} =route.params;
const {postID} = route.params;
const {initialLikes} = route.params
const {initialDislikes} = route.params
const {initialEchoes} = route.params
const {mediaType} = route.params
const [Comments, setComments] = useState([])
const videoRef = useRef(null);
const [isPlaying, setIsPlaying] = useState(false);
const [sliderValue, setSliderValue] = useState(0);
const [videoDuration, setVideoDuration] = useState(0);
const [sliderBeingDragged, setSliderBeingDragged] = useState(false);
const [doubleTapRight, setDoubleTapRight] = useState(false);
const [doubleTapLeft, setDoubleTapLeft] = useState(false);
useEffect(() => {
const getVideoDuration = async () => {
if (videoRef.current) {
const { durationMillis } = await videoRef.current.getStatusAsync();
console.log('Video duration:', durationMillis / 1000);
setVideoDuration(durationMillis / 1000);
}
};
getVideoDuration();
}, [videoRef.current]);
// Update slider position based on video progress
useEffect(() => {
const updateSliderPosition = () => {
if (videoRef.current) {
videoRef.current.getStatusAsync().then((status) => {
const { positionMillis, durationMillis } = status;
const currentPosition = positionMillis / 1000;
const progress = currentPosition / (durationMillis / 1000);
setSliderValue(progress);
});
}
};
const intervalId = setInterval(updateSliderPosition, 1000); // Update every second
return () => clearInterval(intervalId); // Clean up interval
}, [videoRef.current]);
const onSlidingStart = () => {
setSliderBeingDragged(true);
};
const onSlidingComplete = async (value) => {
setSliderBeingDragged(false);
setSliderValue(value);
const newPosition = value * videoDuration;
await videoRef.current.setPositionAsync(newPosition);
};
// Function to handle slider value change
const onSliderValueChange = (value) => {
setSliderValue(value);
};
const togglePlayPause = async () => {
if (videoRef.current) {
if (isPlaying) {
await videoRef.current.pauseAsync();
} else {
await videoRef.current.playAsync();
}
setIsPlaying(!isPlaying);
}
};
useEffect(() => {
const handleDoubleTap = async () => {
if (doubleTapRight) {
// Move video 5 seconds ahead
const newPosition = Math.min(videoDuration, await videoRef.current.getStatusAsync().then((status) => {
const { positionMillis } = status;
return (positionMillis / 1000) + 5; // Convert to seconds
}));
console.log("New position to the right:",newPosition)
await videoRef.current.setPositionAsync(newPosition);
console.log('Position set successfully.',newPosition);
await videoRef.current.playAsync();
console.log('Video playback started.');
} else if (doubleTapLeft) {
// Move video 5 seconds behind
const newPosition = Math.max(0, await videoRef.current.getStatusAsync().then((status) => {
const { positionMillis } = status;
return (positionMillis / 1000) - 5; // Convert to seconds
}));
console.log("New position to the left:",newPosition)
await videoRef.current.setPositionAsync(newPosition);
console.log('Position set successfully.',newPosition);
await videoRef.current.playAsync();
console.log('Video playback started.');
}
};
handleDoubleTap();
}, [doubleTapRight, doubleTapLeft]);
useEffect(() => {
fetchComments();
}, [postID]);
const formatTimestamp = (timestamp) => {
const currentDate = new Date();
const postDate = new Date(timestamp);
const timeDifference = currentDate - postDate;
const secondsDifference = Math.floor(timeDifference / 1000);
const minutesDifference = Math.floor(secondsDifference / 60);
const hoursDifference = Math.floor(minutesDifference / 60);
const daysDifference = Math.floor(hoursDifference / 24);
if (secondsDifference < 60) {
return `${secondsDifference}s ago`;
} else if (minutesDifference < 60) {
return `${minutesDifference}m ago`;
} else if (hoursDifference < 24) {
return `${hoursDifference}h ago`;
} else {
return `${daysDifference}d ago`;
}
};
const fetchComments = async () => {
try {
const { data: commentsData, error: commentsError } = await supabase
.from('comments')
.select('*')
.eq('post_id', postID);
if (commentsError) {
console.error('Error fetching comments:', commentsError);
return;
}
// Fetch user details for each comment
const commentsWithUserDetails = await Promise.all(
commentsData.map(async (comment) => {
const { data: userData, error: userError } = await supabase
.from('profiles')
.select('*')
.eq('id', comment.user_id)
.single();
if (userError) {
console.error(`Error fetching user details for comment ${comment.comment_id}:`, userError);
return comment;
}
return {
...comment,
userDetails: userData || {},
};
})
);
setComments(commentsWithUserDetails || []);
} catch (error) {
console.error('Error in fetchComments function:', error);
}
};
const renderItem = ({ item: comment }) => (
<View style={{borderTopWidth:1, borderTopColor:'#B3B3B3'}}>
<View style={styles.commentcontainer}>
<View style={styles.profileanduser}>
<View style={styles.postprofilepicturecontainer}>
<Image style={styles.postprofilepicture} source={{ uri: comment.userDetails?.profile_picture_url }} />
</View>
<Text style={styles.usernameposttext}>@{comment.userDetails?.username}</Text>
<Text style={styles.timetext}>{formatTimestamp(comment.created_at)}</Text>
</View>
<View style={styles.commentviewtextcontainer}>
<Text style={styles.commentsviewtext}>{comment.text}</Text>
<View style={styles.commentreactioncontainer}>
<CommentThumbsUp postId={comment.comment_id} initialLikes={comment.likes} />
<Text style={styles.userpostreactionstext}>{comment.likes}</Text>
<CommentThumbsDown postID={comment.comment_id} initialDislikes={comment.dislikes} />
<Text style={styles.userpostreactionstext}>{comment.dislikes}</Text>
</View>
</View>
</View>
</View>
);
return (
<View >
<StatusBar backgroundColor={'transparent'}/>
{ mediaType === 'image' ?
<ImageBackground style={{width:screenWidth,height:SCREEN_HEIGHT}} source={{uri:media}}>
<View style={styles.bottomTab}>
<ThumbsUpIcon postID={postID} initialLikes={initialLikes} />
<ThumbsDownIcon postID={postID} initialDislikes={initialDislikes} />
<EchoIcon postID={postID} initialEchoes={initialEchoes}/>
</View>
</ImageBackground>
:
<View>
<View style={styles.mediacontainer}>
<Video
ref={videoRef}
style={styles.media}
source={{uri:media}}
useNativeControls={false}
resizeMode={ResizeMode.COVER}
onPlaybackStatusUpdate={(status) => {
setIsPlaying(status.isPlaying);
}}
isLooping={true}
/>
</View>
<View style={styles.controls}>
<TouchableOpacity onPress={togglePlayPause} style={styles.controlButton}>
{isPlaying?
<FontAwesome5 name="pause" size={30} color="white" />
:
<FontAwesome5 name="play" size={30} color="white" />
}
</TouchableOpacity>
</View>
<View style={styles.sliderContainer}>
<Slider
style={{ width: 300, height: 40 }}
minimumValue={0}
maximumValue={1}
value={sliderValue}
onValueChange={onSliderValueChange}
onSlidingStart={onSlidingStart}
onSlidingComplete={onSlidingComplete}
minimumTrackTintColor="#784EF8"
maximumTrackTintColor="white"
thumbTintColor="#784EF8"
/>
</View>
<TapGestureHandler
onHandlerStateChange={({ nativeEvent }) => {
if (nativeEvent.state === State.ACTIVE) {
setDoubleTapRight(true);
setTimeout(() => {
setDoubleTapRight(false);
}, 300); // Adjust timeout as needed
}
}}
numberOfTaps={2}
maxDelayMs={300}
>
<View style={styles.rightDoubleTapArea}></View>
</TapGestureHandler>
<TapGestureHandler
onHandlerStateChange={({ nativeEvent }) => {
if (nativeEvent.state === State.ACTIVE) {
setDoubleTapLeft(true);
setTimeout(() => {
setDoubleTapLeft(false);
}, 300); // Adjust timeout as needed
}
}}
numberOfTaps={2}
maxDelayMs={300}
>
<View style={styles.leftDoubleTapArea}></View>
</TapGestureHandler>
<View >
<FlatList
data={Comments}
keyExtractor={(comment) => comment.comment_id.toString()}
renderItem={renderItem}
/>
</View>
</View>
}
</View>
)
}
```
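One detail that may explain the seeking failures (an assumption based on the expo-av API, not a confirmed diagnosis): `setPositionAsync` takes a position in milliseconds, while the handlers above divide `positionMillis` by 1000 and pass seconds. A clamped helper that stays in milliseconds might look like:

```javascript
// Compute a relative seek target clamped to [0, duration].
// All values are in milliseconds, the unit expo-av's setPositionAsync expects.
function computeSeekMillis(positionMillis, durationMillis, deltaSeconds) {
  const target = positionMillis + deltaSeconds * 1000;
  return Math.min(Math.max(target, 0), durationMillis);
}

// Usage sketch with the refs from the component above:
// const { positionMillis, durationMillis } = await videoRef.current.getStatusAsync();
// await videoRef.current.setPositionAsync(
//   computeSeekMillis(positionMillis, durationMillis, 5) // +5s forward, -5 for rewind
// );
```

The same unit mismatch would also apply to `onSlidingComplete`, where `value * videoDuration` is in seconds rather than milliseconds.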
|
What am I doing wrong with my custom video controls
|javascript|react-native|video-capture| |
null |
Just wait a little bit; it generally takes time, as there will be many things to analyze.
Currently I am building a mobile app using Flutter as the frontend and Laravel (PHP) as the backend. Usually I need to serve the app and open XAMPP to use the local API and database. What if I don't want to run it locally anymore and buy a server instead? Any server suggestions for this kind of situation, and how do I configure it? This is my first time putting the app online.
I tried a Linode host server; the URL is accessible through the browser and shows the Apache web page, but when I replace my local IPv4 address with the server IP to call the API online, it always says the URL is not working.
Deploy Flutter and Laravel php mobile app on the host server |
|php|laravel|flutter|server|host| |
null |
I'm very new to the ways of Minecraft modding for Forge 1.20.1, and I've been using the tutorial by Kaupenjo for guidance. I've created 1 item and 1 creative tab, and they both do in fact appear in the game when I run the test client; however, I encountered bugs such as the item being untextured and the tab still showing the directory name rather than the name I established for it in the language file (en_us).
After some examination of my code I found silly errors relating to the item's model file (refined_emerald.json). Turns out I forgot how to write JSON for a minute there and accidentally put it like `"parent:item/generated"` instead of `"parent" : "item/generated"`. The second bug related to the model file was that it was named refined_emeral.json instead of refined_emerald.json. So after changing those, and then discovering that I had also forgotten to put the name for the creative mode tab in the language file, I fixed that and ran the test client.
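For reference, a minimal corrected item model would look roughly like this (the mod id and texture path are placeholders, not taken from the repository):

```json
{
  "parent": "item/generated",
  "textures": {
    "layer0": "modid:item/refined_emerald"
  }
}
```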
Well, as it turns out, the test client never actually loaded my changes and is stuck at the state before I changed anything. Is there something I should do after every change before running the client that I'm not aware of? Like changing the version number? (I just started development yesterday.)
For reference, here is a link to the GitHub repository:
https://github.com/In0ctScuirrle/Scuirrles-Starforge-1.20.1 |
Changes made to forge minecraft mod not appearing in test client |
|java|minecraft-forge| |
null |
This is the original data that I want to edit -
```
"Languages" : [
"English",
"French"
],
```
And this is what I want to achieve: when inserting the data at position 5, the gap should be filled with 2 null values if there are 2 empty slots (or only 1 null if there is only 1 empty slot), and then "Tamil" should be added to the array -
```
"Languages" : [
"English",
"French",
null,
null,
"Tamil"
],
```
This is what I tried -
```
$push: {
"Languages" : {
$each: ["Tamil"],
$position: 4
}
}
``` |
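As far as I know, `$push` with `$position` does not pad the array with nulls when the position is beyond the current length, so the padding has to be built explicitly, e.g. with a read-modify-write. A sketch of the idea in Python (collection and field names assumed; the `pymongo` calls are commented out so only the padding logic runs):

```python
def pad_to_position(languages, value, position):
    """Return a copy of `languages` with nulls appended so `value` lands at `position`."""
    padded = list(languages)
    while len(padded) < position:
        padded.append(None)  # None is stored as null in MongoDB
    padded.insert(position, value)
    return padded

# Usage sketch (names assumed; read-modify-write via update_one):
# doc = coll.find_one({"_id": some_id})
# coll.update_one(
#     {"_id": some_id},
#     {"$set": {"Languages": pad_to_position(doc["Languages"], "Tamil", 4)}},
# )
print(pad_to_position(["English", "French"], "Tamil", 4))
# → ['English', 'French', None, None, 'Tamil']
```

When fewer slots are empty, the `while` loop simply adds fewer nulls, matching the "only add 1 null" case described above.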
I have implemented up to Statements and State in a Tree Walk Interpreter and I'm stuck on an error
|java|compiler-construction|crafting-interpreters| |
null |
I am trying to use an EMR Studio workspace (notebooks) with an EMR Serverless application, but it gives me this error when I go to select the kernel (like python3). I have explored all the docs on the policies and trust policies, but I don't understand why I am getting this error as a root user.
[Error when selecting kernel][1]
[1]: https://i.stack.imgur.com/8gjIp.png
My trust policy for the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "elasticmapreduce.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceAccount": "123"
},
"ArnLike": {
"aws:SourceArn": "arn:aws:elasticmapreduce:us-east-2:123:*"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "emr-serverless.amazonaws.com"
},
"Action": [
"sts:AssumeRole",
"sts:SetContext"
]
}
]
}
My policies attached to the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "EMRServerlessInteractiveAccess",
"Effect": "Allow",
"Action": "emr-serverless:AccessInteractiveEndpoints",
"Resource": "arn:aws:emr-serverless:us-east-2:123:/applications/*"
},
{
"Sid": "ReadAccessForEMRSamples",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::*.elasticmapreduce",
"arn:aws:s3:::*.elasticmapreduce/*"
]
},
{
"Sid": "EMRServerlessRuntimeRoleAccess",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "*"
},
{
"Sid": "FullAccessToOutputBucket",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetEncryptionConfiguration",
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::s3bu",
"arn:aws:s3:::s3bu/*"
]
},
{
"Sid": "GlueCreateAndReadDataCatalog",
"Effect": "Allow",
"Action": [
"glue:GetDatabase",
"glue:CreateDatabase",
"glue:GetDataBases",
"glue:CreateTable",
"glue:GetTable",
"glue:UpdateTable",
"glue:DeleteTable",
"glue:GetTables",
"glue:GetPartition",
"glue:GetPartitions",
"glue:CreatePartition",
"glue:BatchCreatePartition",
"glue:GetUserDefinedFunctions"
],
"Resource": [
"*"
]
},
{
"Sid": "AllowEMRReadOnlyActions",
"Effect": "Allow",
"Action": [
"elasticmapreduce:ListInstances",
"elasticmapreduce:DescribeCluster",
"elasticmapreduce:ListSteps"
],
"Resource": "*"
},
{
"Sid": "AllowEC2ENIActionsWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterfacePermission",
"ec2:DeleteNetworkInterface"
],
"Resource": [
"arn:aws:ec2:*:*:network-interface/*"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowEC2ENIAttributeAction",
"Effect": "Allow",
"Action": [
"ec2:ModifyNetworkInterfaceAttribute"
],
"Resource": [
"arn:aws:ec2:*:*:instance/*",
"arn:aws:ec2:*:*:network-interface/*",
"arn:aws:ec2:*:*:security-group/*"
]
},
{
"Sid": "AllowEC2SecurityGroupActionsWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupEgress",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:RevokeSecurityGroupEgress",
"ec2:RevokeSecurityGroupIngress",
"ec2:DeleteNetworkInterfacePermission"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowDefaultEC2SecurityGroupsCreationWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateSecurityGroup"
],
"Resource": [
"arn:aws:ec2:*:*:security-group/*"
],
"Condition": {
"StringEquals": {
"aws:RequestTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowDefaultEC2SecurityGroupsCreationInVPCWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateSecurityGroup"
],
"Resource": [
"arn:aws:ec2:*:*:vpc/*"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowAddingEMRTagsDuringDefaultSecurityGroupCreation",
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*:*:security-group/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/for-use-with-amazon-emr-managed-policies": "true",
"ec2:CreateAction": "CreateSecurityGroup"
}
}
},
{
"Sid": "AllowEC2ENICreationWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface"
],
"Resource": [
"arn:aws:ec2:*:*:network-interface/*"
],
"Condition": {
"StringEquals": {
"aws:RequestTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowEC2ENICreationInSubnetAndSecurityGroupWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface"
],
"Resource": [
"arn:aws:ec2:*:*:subnet/*",
"arn:aws:ec2:*:*:security-group/*"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowAddingTagsDuringEC2ENICreation",
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*:*:network-interface/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction": "CreateNetworkInterface"
}
}
},
{
"Sid": "AllowEC2ReadOnlyActions",
"Effect": "Allow",
"Action": [
"ec2:DescribeSecurityGroups",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeTags",
"ec2:DescribeInstances",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs"
],
"Resource": "*"
},
{
"Sid": "AllowSecretsManagerReadOnlyActionsWithEMRTags",
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue"
],
"Resource": "arn:aws:secretsmanager:*:*:secret:*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowWorkspaceCollaboration",
"Effect": "Allow",
"Action": [
"iam:GetUser",
"iam:GetRole",
"iam:ListUsers",
"iam:ListRoles",
"sso:GetManagedApplicationInstance",
"sso-directory:SearchUsers"
],
"Resource": "*"
}
]
}
|
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowAddingEMRTagsDuringDefaultSecurityGroupCreation",
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*:*:security-group/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/for-use-with-amazon-emr-managed-policies": "true",
"ec2:CreateAction": "CreateSecurityGroup"
}
}
},
{
"Sid": "AllowEC2ENICreationWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface"
],
"Resource": [
"arn:aws:ec2:*:*:network-interface/*"
],
"Condition": {
"StringEquals": {
"aws:RequestTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowEC2ENICreationInSubnetAndSecurityGroupWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface"
],
"Resource": [
"arn:aws:ec2:*:*:subnet/*",
"arn:aws:ec2:*:*:security-group/*"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowAddingTagsDuringEC2ENICreation",
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*:*:network-interface/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction": "CreateNetworkInterface"
}
}
},
{
"Sid": "AllowEC2ReadOnlyActions",
"Effect": "Allow",
"Action": [
"ec2:DescribeSecurityGroups",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeTags",
"ec2:DescribeInstances",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs"
],
"Resource": "*"
},
{
"Sid": "AllowSecretsManagerReadOnlyActionsWithEMRTags",
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue"
],
"Resource": "arn:aws:secretsmanager:*:*:secret:*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowWorkspaceCollaboration",
"Effect": "Allow",
"Action": [
"iam:GetUser",
"iam:GetRole",
"iam:ListUsers",
"iam:ListRoles",
"sso:GetManagedApplicationInstance",
"sso-directory:SearchUsers"
],
"Resource": "*"
}
]
}
|
**TL;DR**
```python
pl_series = [pl.Series(name, values)
for name, values in zip(features, shuffle_arr.T)]
X_train_permuted = (
X_train_permuted.with_columns(
pl_series
)
)
```
-----
Let's work with a simple example.
**X_train_permuted**
```python
import polars as pl
import numpy as np
np.random.seed(0)
data = {f'feature_{i}': np.random.rand(4) for i in range(0,3)}
X_train_permuted = pl.DataFrame(data)
X_train_permuted
shape: (4, 3)
βββββββββββββ¬ββββββββββββ¬ββββββββββββ
β feature_0 β feature_1 β feature_2 β
β --- β --- β --- β
β f64 β f64 β f64 β
βββββββββββββͺββββββββββββͺββββββββββββ‘
β 0.548814 β 0.423655 β 0.963663 β
β 0.715189 β 0.645894 β 0.383442 β
β 0.602763 β 0.437587 β 0.791725 β
β 0.544883 β 0.891773 β 0.528895 β
βββββββββββββ΄ββββββββββββ΄ββββββββββββ
```
**Shuffle `feature_0` and `feature_1`**
Use a list to keep track of the features you are shuffling: `features = ["feature_0", "feature_1"]`
```python
features = ["feature_0", "feature_1"]
shuffle_arr = np.array(X_train_permuted[:, features])
from sklearn.utils import check_random_state
random_state = check_random_state(42)
random_seed = random_state.randint(np.iinfo(np.int32).max + 1)
random_state.shuffle(shuffle_arr)
shuffle_arr
array([[0.71518937, 0.64589411],
[0.60276338, 0.43758721],
[0.5488135 , 0.4236548 ],
[0.54488318, 0.891773 ]])
```
**Replace associated columns in `X_train_permuted` with `shuffle_arr` values**
* Use [`pl.DataFrame.with_columns`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.with_columns.html).
* Pass a list (here: `pl_series`) with a [`pl.Series`](https://docs.pola.rs/py-polars/html/reference/series/index.html) for each shuffled feature, using a list comprehension (applying [`zip`](https://docs.python.org/3.3/library/functions.html#zip)). Make sure to transpose `shuffle_arr` to access its columns (see [`.T`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.T.html#numpy.ndarray.T)).
```python
pl_series = [pl.Series(name, values)
for name, values in zip(features, shuffle_arr.T)]
X_train_permuted = (
X_train_permuted.with_columns(
pl_series
)
)
X_train_permuted
shape: (4, 3)
βββββββββββββ¬ββββββββββββ¬ββββββββββββ
β feature_0 β feature_1 β feature_2 β
β --- β --- β --- β
β f64 β f64 β f64 β
βββββββββββββͺββββββββββββͺββββββββββββ‘
β 0.715189 β 0.645894 β 0.963663 β
β 0.602763 β 0.437587 β 0.383442 β
β 0.548814 β 0.423655 β 0.791725 β
β 0.544883 β 0.891773 β 0.528895 β
βββββββββββββ΄ββββββββββββ΄ββββββββββββ
```
|
Steps to fix **error 2147942402 (0x80070002) when launching `ubuntu.exe`**:
Step 1: Open the command prompt (Windows Terminal). Then click on the down-arrow button and click on Settings.
[enter image description here][1]
Step 2: Select Ubuntu from the left panel and click on the Command line option.
[enter image description here][2]
Step 3: Change the command line from `ubuntu.exe` to **wsl.exe** and save.
[enter image description here][3]
[1]: https://i.stack.imgur.com/4EVyT.png
[2]: https://i.stack.imgur.com/Cr5aT.png
[3]: https://i.stack.imgur.com/T6Ldk.png
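If you prefer editing the configuration file directly, the same change can be made in Windows Terminal's `settings.json`. The profile name and surrounding fields will differ on your machine; this is only a sketch of the relevant entry:

```json
{
    "profiles": {
        "list": [
            {
                "name": "Ubuntu",
                "commandline": "wsl.exe"
            }
        ]
    }
}
```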
Step 4: Close the command prompt and reopen it. Now Ubuntu can be opened in the command prompt. |
I've been working on a Python Discord bot and wanted to containerize it, which has worked pretty well, but while testing one of the features (bot -> open API) via HTTPS I'm getting the following error:
`ssl.SSLError: Cannot create a client socket with a PROTOCOL_TLS_SERVER context (_ssl.c:811)`
I've read various articles and tutorials online but they either seem to half answer my question or partially relate to other applications altogether, such as configuring Nginx which I think is just muddying the water a little.
So far I've encountered people mentioning creating and moving some certs, and one answer saying to include "--network host" in the Dockerfile, but it doesn't seem like there is any issue with the network connectivity itself.
I was tempted to just change the request URL to use HTTP instead, as there are no credentials or sensitive data being transmitted, but I would feel a lot more comfortable knowing it's using HTTPS instead.
My Dockerfile is as below (note: I added the `RUN apt-get update`... block after my investigations, hoping that would generate a certificate and the error would magically clear up, but that's not the case).
```dockerfile
FROM python:3.10-bullseye
COPY requirements.txt /app/
COPY ./bot/ /app
RUN apt-get update \
    && apt-get install openssl \
    && apt-get install ca-certificates
RUN update-ca-certificates
WORKDIR /app
RUN pip install -r requirements.txt
COPY . .
CMD ["python3", "-u", "v1.py"]
```
I tried a little basic diagnostics through the container, like checking the directories for certs and trying to curl an HTTPS URL, but being brand new to Docker I'm not really sure what I'm looking for or how to progress any further, so any help would be appreciated.
- Googling tutorials
- Googling Stack Overflow + Reddit questions
- Basic (networking) diagnostics |
Docker container unable to make HTTPS requests to external API |
|python|linux|docker| |
null |
I am stuck. I'm new to Go and client-go. As an exercise I am trying to emulate this script, which summarizes the statuses of pods that are not running, grouped by namespace and reason.
```sh
kubectl get po -A --no-headers |
awk '
BEGIN {
SUBSEP=" "
format = "%-20s %20s %5s\n"
printf format, "NAMESPACE", "STATUS", "COUNT"
}
!/Running/ {a[$1,$4]++}
END {
for (i in a) {split(i,t); printf format, t[1],t[2],a[i]}
}
' | sort
```
This script produces output similar to this:
```sh
$ notrunning
NAMESPACE STATUS COUNT
namespace-01 InvalidImageName 2
namespace-02 InvalidImageName 1
namespace-02 Init:ImagePullBackOff 1
namespace-03 CrashLoopBackOff 2
namespace-03 InvalidImageName 9
namespace-04 Init:ErrImagePull 1
```
I can't find where kubectl is getting the status or reason. I'm trying code similar to this (leaving out some error checking for brevity). I am not getting the results I expect. Any help or suggestions would be appreciated!
```go
type PodSummary struct {
	NotRunning int
	Summary    map[PodKey]int // Map of namespace+status to count
}

type PodKey struct {
	Namespace string
	Status    string
}

func getPodSummary(kubeconfig, cluster string) PodSummary {
	clientset, _ := getClientsetForContext(kubeconfig, cluster)
	pods, _ := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	summary := PodSummary{Summary: make(map[PodKey]int)}
	for _, pod := range pods.Items {
		if pod.Status.Phase != "Running" && pod.Status.Phase != "Succeeded" { // need to check "Completed" also?
			podNS := pod.Namespace
			summary.NotRunning++
			var pk PodKey
			for _, containerStatus := range pod.Status.ContainerStatuses {
				if containerStatus.State.Waiting != nil {
					pk = PodKey{podNS, string(containerStatus.State.Waiting.Reason)}
					break
				} else { // cannot find it, use this instead.
					pk = PodKey{podNS, string(pod.Status.Phase)}
				}
			}
			summary.Summary[pk]++
		}
	}
	return summary
}
```
I was expecting to get detailed reasons why the pods are failing. Instead I got results like "Pending" which isn't helpful or what I wanted. |
How to define an AIX (alternate index) in a CICS FCT entry? |
|aix|cics|vsam| |
I need a Brazilian programmer to develop a circular supply chain model using system dynamics in AnyLogic for my TCC. If anyone knows someone who already does this kind of work at a good price, please let me know.
I am no good at programming, and as I am close to my due date, I need help on the programming side so I can conclude my final dissertation (my supervisor is aware and agrees with getting paid help to support this). |
How to simulate a circular supply chain using SD in anylogic |
|anylogic|logistics|systemdynamics| |
null |
I came across this problem. The question was: Will this code compile successfully or will it return an error?
```
#include <stdio.h>
int main(void)
{
int first = 10;
int second = 20;
int third = 30;
{
int third = second - first;
printf("%d\n", third);
}
printf("%d\n", third);
return 0;
}
```
I personally think that this code should give an error, as we are redeclaring the variable `third` in the main function, whereas the answer to this problem was that this code will run successfully with output 10 and 30.
Then I compiled this code in VS Code and it gave an error, but on some online compilers it ran successfully with no errors. Can somebody please explain?
I don't think there can be two variables with the same name inside the curly braces inside `main`. If `third` were initialized after the curly braces instead of before, it would work completely fine, like this:
```
#include <stdio.h>
int main(void)
{
int first = 10;
int second = 20;
{
int third = second - first;
printf("%d\n", third);
}
int third = 30;
printf("%d\n", third);
return 0;
}
``` |
Why will this code compile although it defines two variables with the same name? |
I am trying to do Kafka and Spark Streaming in Colab, using the MovieLens 1M dataset. ([Download here](https://files.grouplens.org/datasets/movielens/ml-1m.zip))
I am having trouble with the `.readStream` function when reading data from Kafka into my Spark session, particularly at the `.load()` line:
```
# Read data from Kafka as a DataFrame
df = spark \
.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", kafka_bootstrap_servers) \
.option("subscribe", kafka_topic_name) \
.load()
```
I am getting this error:
```
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-55-97fbf29e82ce> in <cell line: 14>()
17 .option("kafka.bootstrap.servers", kafka_bootstrap_servers) \
18 .option("subscribe", kafka_topic_name) \
---> 19 .load()
20
21 # Convert value column from Kafka to string
3 frames
/usr/local/lib/python3.10/dist-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o179.load.
: java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.sql.kafka010.KafkaSourceProvider$
at org.apache.spark.sql.kafka010.KafkaSourceProvider.org$apache$spark$sql$kafka010$KafkaSourceProvider$$validateStreamOptions(KafkaSourceProvider.scala:338)
at org.apache.spark.sql.kafka010.KafkaSourceProvider.sourceSchema(KafkaSourceProvider.scala:71)
at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:233)
at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:118)
at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:118)
at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:36)
at org.apache.spark.sql.streaming.DataStreamReader.loadInternal(DataStreamReader.scala:169)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:750)
```
To solve the issue, I tried using versions of Kafka and Spark built for Scala 2.13, and nothing changed. The current versions of Kafka and Spark I'm using:
Kafka: [kafka_2.12-3.5.1](https://archive.apache.org/dist/kafka/3.5.1/kafka_2.12-3.5.1.tgz) (for scala 2.12)
Spark: [spark-3.5.1-bin-hadoop3](https://dlcdn.apache.org/spark/spark-3.5.1/spark-3.5.1-bin-hadoop3.tgz)
Maven: ["https://repo1.maven.org/maven2/org/apache/spark/spark-sql-kafka-0-10_2.12/3.5.1/spark-sql-kafka-0-10_2.12-3.5.1.jar"]("https://repo1.maven.org/maven2/org/apache/spark/spark-sql-kafka-0-10_2.12/3.5.1/spark-sql-kafka-0-10_2.12-3.5.1.jar")
I also examined the environment variables which have been set correctly, so that isn't the issue.
I also tried to include `kafka-clients` dependencies as follows:
```
spark = SparkSession.builder \
.appName("StructuredStreamingExample") \
.config("spark.jars.packages", "org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.1,org.apache.kafka:kafka-clients:3.5.1") \
.getOrCreate()
```
No luck.
What can I try to get this to work?
Link to the full colab notebook: [kafka-spark-streaming.colab](https://colab.research.google.com/drive/1cx5JlKUDKJ3V3illyDrdiSItumIjpM-7?usp=sharing)
|
Troubleshoot .readStream function not working in Kafka-Spark streaming (PySpark in Colab notebook) |
|pyspark|apache-kafka|bigdata|google-colaboratory|spark-structured-streaming| |
null |
I'm attempting to display a list of movies on a website using Jinja2 (a template engine for Python) and Bootstrap (a front-end framework), but I'm having difficulty getting the movie cards to display correctly. The cards aren't rendered as expected: I'm struggling to display the card's background image correctly and to keep the movie information clear and organized.
```
<!--{% extends 'base.html' %}
{% block conteudo %}
<h2 style="text-align: center;">Teste de filmes</h2>
<hr>
<ul class="list-group">
{% for filme in filmes %}
<li>{{filme.title}}</li>
<p>{{ filme.overview }}</p>
<p>Release Date: {{ filme.release_date }}</p>
<p>Vote Average: {{ filme.vote_average }}</p>
<p>Vote Count: {{ filme.vote_count }}</p>
<hr>
{% endfor %}
</ul>
{% endblock conteudo %}-->
{% extends 'base.html' %}
{% block conteudo %}
<h2 style="text-align:center;">Lista de Filmes</h2>
<hr>
<div class="row">
{% for filme in filmes %}
<div class="col-md-4">
<div class="card" style="width: 18rem;">
<img src="http://image.tmdb.org/t/p/w500{{filme.backdrop_path}}" class="card-img-top" alt="...">
<div class="card-body">
<h5 class="card-title">{{filme.title}}</h5>
<p class="card-text">{{filme.overview}}</p>
<hr>
<h4>Nota mΓ©dia<span class="badge bg-secondary">{{filme.vote_average}}</span></h4>
</div>
</div>
</div>
{% if loop.index % 3 == 0 %}
</div><div class="row">
{% endif %}
{% endfor %}
</div>
{% endblock %}
```
What I've tried:

- Checking if the URL of the movie's background image is correct and accessible.
- Ensuring that all Bootstrap classes are being applied correctly.
- Verifying that the `filmes` variable is being passed correctly to the template.
Any help or suggestions would be greatly appreciated! Thank you!
[click to see project image][1]
[1]: https://i.stack.imgur.com/1rPAV.png
Here are more files from the project. base.html:
```
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>AppPython</title>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css" integrity="sha384-T3c6CoIi6uLrA9TneNEoa7RxnatzjcDSCmG1MXxSR1GAsXEV/Dwwykc2MPK8M2HN" crossorigin="anonymous">
</head>
<body>
<nav class="navbar navbar-expand-lg navbar-dark bg-dark">
<div class="container-fluid">
<a class="navbar-brand" href="#">AppPython</a>
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav me-auto mb-2 mb-lg-0">
<li class="nav-item">
<a class="nav-link active" aria-current="page" href="{{url_for('principal')}}">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" href="{{url_for('filmes')}}">Filmes</a>
</li>
<li class="nav-item">
<a class="nav-link" href="{{url_for('sobre')}}">Sobre</a>
</li>
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button" data-bs-toggle="dropdown" aria-expanded="false">
Dropdown
</a>
<ul class="dropdown-menu" aria-labelledby="navbarDropdown">
<li><a class="dropdown-item" href="#">Action</a></li>
<li><a class="dropdown-item" href="#">Another action</a></li>
<li><hr class="dropdown-divider"></li>
<li><a class="dropdown-item" href="#">Something else here</a></li>
</ul>
</li>
<li class="nav-item">
<a class="nav-link disabled">Disabled</a>
</li>
</ul>
</div>
</div>
</nav>
<div class="container">
{% block conteudo %}
{% endblock conteudo %}
</div>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/js/bootstrap.bundle.min.js" integrity="sha384-C6RzsynM9kWDrMNeT87bh95OGNyZPhcTNXj1NW7RuBCsyN/o0jlpcV8Qyq46cDfL" crossorigin="anonymous"></script>
</body>
</html>
```
sobre.html:
```
{% extends 'base.html' %}
{% block conteudo %}
<h2 style="text-align: center;">DiΓ‘rio do Professor</h2>
<hr>
{% for registro in registros %}
<!-- <details>
<summary>{{registro.aluno}}</summary>
<p>{{registro.nota}}</p>
</details> -->
<p>
<a class="btn btn-secondary" data-bs-toggle="collapse" href="#collapse_{{ registro.aluno }}" role="button" aria-expanded="false" aria-controls="collapse_{{ registro.aluno }}">
{{registro.aluno}}
</a>
</p>
<div class="collapse" id="collapse_{{ registro.aluno }}">
<div class="card card-body">
{{registro.nota}}
</div>
</div>
{% endfor %}
<form action="{{url_for('sobre')}}" method="POST">
<div class="form-group">
<label>Aluno</label>
<input type="text" name="aluno" class="form-control" placeholder="Digite o nome do aluno" required>
</div>
<div class="form-group">
<label>Nota</label>
<input type="text" class="form-control" name="nota" placeholder="Digite uma nota" required>
</div>
<button class="btn btn-success">Adicionar</button>
</form>
{% endblock conteudo %}
```
index.html:
```
{% extends 'base.html' %}
{% block conteudo %}
<h2 style="text-align: center;">Lista de Frutas</h2>
<hr>
<ul class="list-group">
{% for fruta in frutas %}
<li class="list-group-item">{{fruta}}</li>
{% endfor %}
<form action="{{url_for('principal')}}" method="POST">
<div class="form-group">
<label for="exemploFruta">Fruta</label>
<input class="form-control" id="exemploFruta" type="text" name="fruta" placeholder="Digita uma fruta:">
</div>
<button class="btn btn-success">Adicionar</button>
</form>
</ul>
{% endblock conteudo %}
``` |
To animate anything in `::after` you can simply use:
.loading::after{
content:'';
display:inline-block;
width:50px;
height:50px;
background-image:url('https://picsum.photos/id/63/50/50');
}
.loading.rotate::after{
animation: rotation 2s infinite linear;
}
@keyframes rotation {
from {
transform: rotate(0deg);
}
to {
transform: rotate(359deg);
}
}
And use, e.g.
<div (click)="toogle=!toogle" class="loading" [class.rotate]="toogle">
here
</div>
|
|c|scope| |
How do I add WebSocket middleware with database connection to ASP.NET application? |
|asp.net|.net|websocket|middleware| |
The `wrapper` returns functionality that you need to use first before you can access the store. You are passing `wrapper` directly as the `store`.
See [wrapper usage][1].
* Using the `useWrappedStore` hook:
```jsx
import { Provider } from 'react-redux';
import { wrapper } from '../store/store';
function MyApp({ Component, pageProps }) {
const { store, props } = wrapper.useWrappedStore(pageProps);
return (
<Provider store={store}>
<Component {...props} />
</Provider>
);
}
export default MyApp;
```
* Using the `withRedux` Higher Order Component:
```jsx
import { Provider } from 'react-redux';
import { wrapper } from '../store/store';
function MyApp({ Component, pageProps }) {
return (
    <Component {...pageProps} />
);
}
export default wrapper.withRedux(MyApp);
```
[1]: https://github.com/kirill-konshin/next-redux-wrapper?tab=readme-ov-file#usage |
Hello fellow programmer.
I think this recursive function could work in your favor:
function Convert-XmlElementToList {
param (
[System.Xml.XmlElement]$xmlElement
)
$result = @()
# Recursively process child elements
foreach ($childNode in $xmlElement.ChildNodes) {
if ($childNode -is [System.Xml.XmlElement]) {
$result += Convert-XmlElementToList -xmlElement $childNode
}
}
# Convert the current element to a PSObject
$elementProperties = @{
'Name' = $xmlElement.Name
'InnerText' = $xmlElement.InnerText
'Attributes' = @{}
}
foreach ($attribute in $xmlElement.Attributes) {
$elementProperties['Attributes'][$attribute.Name] = $attribute.Value
}
$psObject = New-Object -TypeName PSObject -Property $elementProperties
$result += $psObject
return $result
}
This is how you could use it:
$xml = @"
<root>
<parent>
<child1 attribute1="value1">text1</child1>
<child2 attribute2="value2">text2</child2>
</parent>
</root>
"@
$xmlDocument = [xml]$xml
$rootElement = $xmlDocument.DocumentElement
$resultList = Convert-XmlElementToList -xmlElement $rootElement
$resultList | Format-Table
|
I think the first question you should answer is:
> *Am I going to do this myself or am I going to hire somebody to help me?*
This choice will, to some extent, determine the other choices you have.
If you do hire a professional, then I would suggest discussing these things with that person. It can be hard to find someone you can work with for a longer time. The choices you make will have to be compatible with their capabilities.
If you're going to do this by yourself, you can only do what you know and feel comfortable with. WordPress is fantastic to get you started quickly, and can do a lot, but also has some clear disadvantages (bloated, slow, vulnerable, SEO, costly, etc.). Going with something like Laravel is more complicated, but still gives you a nice head start. Creating forms yourself means you need to be a web designer, PHP programmer, database manager, etc. It's a full-stack job. **Not easy** by any means.
Just keep in mind that these questions never stop coming. Every few years you need to radically update your code, and sometimes you need to shift to something completely new. Don't regard this as a one off, but more as a continuous evolving matter. |
```none
/Users/macbookair/apps/ussd/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:55
throw new Error(`Please install ${moduleName} package manually`);
^
Error: Please install mysql2 package manually
at ConnectionManager._loadDialectModule (/Users/macbookair/apps/ussd/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:55:15)
at new ConnectionManager (/Users/macbookair/apps/ussd/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:30:21)
at new MysqlDialect (/Users/macbookair/apps/ussd/node_modules/sequelize/lib/dialects/mysql/index.js:13:30)
at new Sequelize (/Users/macbookair/apps/ussd/node_modules/sequelize/lib/sequelize.js:194:20)
at Object.<anonymous> (/Users/macbookair/apps/ussd/model/db.js:30:15)
at Module._compile (node:internal/modules/cjs/loader:1256:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
at Module.load (node:internal/modules/cjs/loader:1119:32)
at Module._load (node:internal/modules/cjs/loader:960:12)
at Module.require (node:internal/modules/cjs/loader:1143:19)
Node.js v18.17.1
[nodemon] app crashed - waiting for file changes before starting...
rs
```
Tried `npm i mysql2` but it fails to run |