From within a Python script, I'm launching an executable in a subprocess (for anyone interested: it's the terraria server).
```
# Python
server_process = subprocess.Popen("my_executable arg1 arg2 <&0", shell=True, stdin=subprocess.PIPE)
```
------------------
**Side note 1**: this actually spawns *two* processes: one shell to run the command, and another actually running `my_executable`. That's why I have the `<&0`, to make sure that both will get the inputs I'll be writing.
```
$ ps aux | grep my_executable
# I've removed everything but the PIDs and CLIs below
14699 /bin/sh -c my_executable arg1 arg2 <&0
14701 my_executable arg1 arg2
```
---------------
Anyway, things seem to work: the expected output from `my_executable` starts appearing in `journalctl` (the script is part of a `systemd` service). However, for some reason, no matter what I try, I can't seem to *write* to stdin from Python:
```
# Python
server_process.stdin.write(b'exit\n')
server_process.stdin.flush()
```
`my_executable` terminates itself upon reading `exit`, so that should kill it: but nothing happens, and I don't see it logging a request to terminate.
But here's the wrinkle: I can write to stdin via `/proc` just fine!
```
# Python
print(server_process.pid) # Just as a sanity check: gives 14699, which matches the "ps" output
# Shell
$ sudo echo exit > /proc/14699/fd/0 # The process sees the termination request and proceeds appropriately!
```
Maybe there's some extra buffering going on of which I'm unaware? Why does the write via shell work, and the write via python not work?
--------------------------
**Side note 2**: What's *really* strange is that, from that Python script, running the shell command via `subprocess.run()` doesn't work either!
```
# Python
subprocess.run(f"echo exit > /proc/{server_process.pid}/fd/0", shell=True, capture_output=True, text=True)
# my_executable doesn't do anything
```
I'm not that interested in making this approach work anyway, but perhaps that's a relevant bit of information.
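For comparison, here is a minimal shell-free sketch (with `cat` standing in for the real server binary): passing an argv list with the default `shell=False` connects the pipe directly to the executable, so there is no intermediate `/bin/sh` at all.

```python
import subprocess

# Shell-free launch: argv list, no shell, so the stdin pipe goes straight
# to the child process ('cat' stands in for the real executable here).
proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = proc.communicate(b'exit\n')  # write, flush, close stdin, and wait
print(out)  # b'exit\n'
```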
|
Writes via Python's subprocess.Popen.stdin aren't consumed but using shell to write to /proc/pid/fd/0 works? |
|python|python-3.x|subprocess|stdin| |
Got the problem solved. It seems that in Rails 7.1 the default is `config.force_ssl = true`; older versions defaulted to `false`.
Since I was running my server without SSL, that was the problem.
Just set it to `false` and it will work.
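For reference, in a generated Rails app the setting lives in the environment config; a minimal sketch (file path per Rails convention):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Rails 7.1 enables this by default; turn it off when the server
  # deliberately runs without SSL.
  config.force_ssl = false
end
```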
|
I am using the `thrust::copy_if` function of the Thrust library coupled with counting iterators to get the indices of nonzero elements in an array. I also need to get the number of copied elements.
I am using the code from the 'counting_iterator.cu' example, except that in my application I need to reuse pre-allocated arrays, so I wrap them with `thrust::device_ptr` and then pass them to the `thrust::copy_if` function. This is the code:
```
using namespace thrust;

int output[5];
thrust::device_ptr<int> tp_output = device_pointer_cast(output);

float stencil[5];
stencil[0] = 0;
stencil[1] = 0;
stencil[2] = 1;
stencil[3] = 0;
stencil[4] = 1;
device_ptr<float> tp_stencil = device_pointer_cast(stencil);

device_vector<int>::iterator output_end = copy_if(make_counting_iterator<int>(0),
                                                  make_counting_iterator<int>(5),
                                                  tp_stencil,
                                                  tp_output,
                                                  _1 == 1);
int number_of_ones = output_end - tp_output;
```
If I comment out the last line of code, the function correctly fills the output array. However, when I uncomment it, I get the following compilation error:
```
1>C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.5\include\thrust/iterator/iterator_adaptor.h(223): error : no operator "-" matches these operands
1> operand types are: int *const - const thrust::device_ptr<int>
1> detected during:
1> instantiation of "thrust::iterator_adaptor<Derived, Base, Value, System, Traversal, Reference, Difference>::difference_type thrust::iterator_adaptor<Derived, Base, Value, System, Traversal, Reference, Difference>::distance_to(const thrust::iterator_adaptor<OtherDerived, OtherIterator, V, S, T, R, D> &) const [with Derived=thrust::detail::normal_iterator<thrust::device_ptr<int>>, Base=thrust::device_ptr<int>, Value=thrust::use_default, System=thrust::use_default, Traversal=thrust::use_default, Reference=thrust::use_default, Difference=thrust::use_default, OtherDerived=thrust::device_ptr<int>, OtherIterator=int *, V=signed int, S=thrust::device_system_tag, T=thrust::random_access_traversal_tag, R=thrust::device_reference<int>, D=ptrdiff_t]"
1> C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.5\include\thrust/iterator/iterator_facade.h(181): here
1> instantiation of "Facade1::difference_type thrust::iterator_core_access::distance_from(const Facade1 &, const Facade2 &, thrust::detail::true_type) [with Facade1=thrust::detail::normal_iterator<thrust::device_ptr<int>>, Facade2=thrust::device_ptr<int>]"
1> C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.5\include\thrust/iterator/iterator_facade.h(202): here
1> instantiation of "thrust::detail::distance_from_result<Facade1, Facade2>::type thrust::iterator_core_access::distance_from(const Facade1 &, const Facade2 &) [with Facade1=thrust::detail::normal_iterator<thrust::device_ptr<int>>, Facade2=thrust::device_ptr<int>]"
1> C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.5\include\thrust/iterator/iterator_facade.h(506): here
1> instantiation of "thrust::detail::distance_from_result<thrust::iterator_facade<Derived1, Value1, System1, Traversal1, Reference1, Difference1>, thrust::iterator_facade<Derived2, Value2, System2, Traversal2, Reference2, Difference2>>::type thrust::operator-(const thrust::iterator_facade<Derived1, Value1, System1, Traversal1, Reference1, Difference1> &, const thrust::iterator_facade<Derived2, Value2, System2, Traversal2, Reference2, Difference2> &) [with Derived1=thrust::detail::normal_iterator<thrust::device_ptr<int>>, Value1=signed int, System1=thrust::device_system_tag, Traversal1=thrust::random_access_traversal_tag, Reference1=thrust::device_reference<int>, Difference1=signed int, Derived2=thrust::device_ptr<int>, Value2=signed int, System2=thrust::device_system_tag, Traversal2=thrust::random_access_traversal_tag, Reference2=thrust::device_reference<int>, Difference2=signed int]"
1> C:/ProgramData/NVIDIA Corporation/CUDA Samples/v5.5/7_CUDALibraries/nsgaIIparallelo_23ott/rank_cuda.cu(70): here
```
If I use `thrust::device_vector` for the output array instead, everything is okay:
```
using namespace thrust;

thrust::device_vector<int> output(5);

float stencil[5];
stencil[0] = 0;
stencil[1] = 0;
stencil[2] = 1;
stencil[3] = 0;
stencil[4] = 1;
device_ptr<float> tp_stencil = device_pointer_cast(stencil);

device_vector<int>::iterator output_end = copy_if(make_counting_iterator<int>(0),
                                                  make_counting_iterator<int>(5),
                                                  tp_stencil,
                                                  output.begin(),
                                                  _1 == 1);
int number_of_ones = output_end - output.begin();
```
Can you suggest any solution to this problem? Thank you.
|
I posted this question a few years ago and forgot about it, although I have been using a solution that seems quite straightforward.
For any deeplink we can create a method like `func processDeeplink(for url: URL)`, which takes the URL and processes it according to its path.
Depending on the code architecture, we can place it in either a shared class or the AppDelegate instance.
Now in our app, when we need to trigger a deeplink of our own app, we just detect the domain and call our own method to process the deeplink.
> In a nutshell: No need to add any specific configuration in Associated
> Domains. Just call your own method to process the deeplink. |
[https://my-show-rj.vercel.app/](https://my-show-rj.vercel.app/)
Click on the menu options (plays, categories, and tvshow), then the error occurs: Vercel | 404: NOT_FOUND
**Vercel.json file**
```
{"rewrites":[{"source":"/(.*)","destination":"/"}]}
```
**And I also tried**
```
{"routes":[{"src":"/(.*)","dest":"/"}]}
```
**And I tried this too**
```
{"routes":[{"handle":"filesystem"},{"src":"/.*","dest":"/index.html"}]}
```
[Vercel problem](https://i.stack.imgur.com/Gk9Le.png) |
We're using Cypress for E2E testing and are about to embark on the task of migrating away from tag and classname selectors to data attributes in order to make the selectors less fragile.
My question is around best practices. [Cypress recommends][1] using `data-cy` or `data-test` or `data-testid`.
Some of the most difficult selectors involve picking a row and column from a table. Example:
```html
<!-- example of hard-to-test markup -->
<table class='users-table'>
<thead>
<th>Name</th>
<th>Email</th>
<th>Phone</th>
  </thead>
<tbody>
<tr>
<td>Bob Fish</td>
<td>bob@fish.co</td>
<td>123-123-1234</td>
</tr>
    <tr>
      <td>Shaggy Rogers</td>
<td>shag@mysteryinc.com</td>
<td>509-123-1235</td>
</tr>
  </tbody>
</table>
```
Now if we use `data-test` as recommended, I would do something like this:
```html
<table data-test='users-table'>
...
<tbody>
<tr data-test='user-id-1'>
<td data-test='name-col'>...
<td data-test='email-col'>...
<td data-test='phone-col'>...
```
Now I can find some `td` with a certain value like
```javascript
cy.contains('[data-test="users-table"] [data-test="name-col"]', user.name).should('be.visible')
```
or better:
```javascript
cy.get(`[data-test="users-table"] [data-test="user-id-${user.id}"]`).within(() => {
cy.get('[data-test="name-col"]').should('have.text', user.name)
cy.get('[data-test="email-col"]').should('have.text', user.email)
cy.get('[data-test="phone-col"]').should('have.text', user.phone)
})
```
But in the spirit of "semantic markup", I feel like I want to do something like this:
```html
<table data-entity='users'>
...
<tbody>
<tr data-entity-id='1'>
<td data-col='name'>...
<td data-col='email'>...
<td data-col='phone'>...
</tr>
<tr data-entity-id='2'> ...
```
This would allow me to not munge together `data-test="[noun]-[value]"` attribute values like `user-id-1`, at the expense of having to come up with my own consistent set of `data-` attributes (`data-entity`, `data-col`, etc.)
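For what it's worth, the semantic scheme composes nicely into a tiny helper (purely illustrative, not from any library):

```javascript
// Builds a cell selector from the semantic data attributes sketched above.
function entityCell(entity, id, col) {
  return `[data-entity="${entity}"] [data-entity-id="${id}"] [data-col="${col}"]`;
}

// e.g. cy.get(entityCell('users', user.id, 'name')).should('have.text', user.name)
```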
So I'm curious whether there are established best practices, or whether this is specifically recommended against. Does anyone vary the `data-` attributes to be more semantically meaningful, as in my last example? Or is the most common practice to use the same `data-test` attribute everywhere and cram more information into the attribute value? (For which I'd hopefully come up with a consistent way of constructing those values!) Is it a wash one way vs. the other, or are there benefits either way?
(Please provide references in your answer to sources!)
As an aside, I've also started reading up on [Cypress Testing Library][2] which seems like it could somewhat help by retrieving some elements in a semantically meaningful way (like role or label) *but* there would still be tons of markup that would not be covered, unless maybe I started throwing `role=` on everything which seems like a dirty hack and probably against ARIA or some other w3c standard.
[1]: https://docs.cypress.io/guides/references/best-practices#Selecting-Elements
[2]: https://github.com/testing-library/cypress-testing-library?tab=readme-ov-file#usage |
Account checker in python |
|python| |
null |
I got the same error while working on Streamlit.
To solve this, you have to change the encoding of config.toml to UTF-8:
- go to File Explorer
and search for config.toml
- open the config file with Notepad, change the encoding to UTF-8, and save.
![IMAGE of utf-8 changing][1]
[1]: https://i.stack.imgur.com/sVLDp.png |
I was given an assignment to find an algorithm that takes an array A such that for every x < y, the first appearance of x comes before the first appearance of y,
and sorts it in an average time of O(n).
An example array could be 1, 2, 1, 30, 1, 1, 2, 1, 40, 30, 1, 40, 2, 50, 40, 50, 30
and the output should be 1, 1, 1, 1, 1, 1, 2, 2, 2, 30, 30, 30, 40, 40, 40, 50, 50.
I should show that the suggested algorithm runs in O(n) on average.
I have no clue where to start.
The only thing I know is how to theoretically analyse the average case of a loop or nested loops,
and only for very specific cases.
Any help would be really appreciated.
Given partially sorted array of type x<y => first apperance of x comes before first of y, sort in average O(n) |
|algorithm|complexity-theory| |
null |
After googling for a while, I found a workaround to handle the double render in my useEffect logic. There may be other ways to solve my specific problem, but the workaround I found will do fine for me.
React 18 runs each effect twice in development mode when Strict Mode is enabled; rather than disabling Strict Mode, we can use a boolean flag so that only the latest invocation takes effect.
```
useEffect(() => {
  let ignore = false;
  fetchStuff().then(res => {
    if (!ignore) setResult(res)
  })
  return () => { ignore = true }
}, [])
```
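The same flag logic can be exercised outside React (with stand-ins for `fetchStuff` and `setResult`, both hypothetical here): cleaning up the first invocation means only the second one lands.

```javascript
// Framework-free sketch of the ignore-flag pattern above.
function runEffect(fetchStuff, setResult) {
  let ignore = false;
  const done = fetchStuff().then(res => { if (!ignore) setResult(res); });
  return { cleanup: () => { ignore = true; }, done };
}

const results = [];
const first = runEffect(() => Promise.resolve('stale'), r => results.push(r));
first.cleanup(); // what React does before the second dev-mode invocation
const second = runEffect(() => Promise.resolve('fresh'), r => results.push(r));
Promise.all([first.done, second.done]).then(() => console.log(results)); // ['fresh']
```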
For more details, check out [this][1] blog post.
**Note:** The main problem was not about mutation; the updating function works totally fine. Rendering the effect twice is what caused the error to occur. :)
[1]: https://medium.com/geekculture/the-tricky-behavior-of-useeffect-hook-in-react-18-282ef4fb570a |
You only need one set of rules. (Provided you are OK having an "empty" `taal` URL parameter when the language code is omitted - you should be.)
`([^/]*)` - Rather than check for _anything_ in the first (language code) path segment, you need to be more specific and either check for the specific (2-char?) language codes you are expecting, eg. `(nl|en|id)`, or check for any lowercase 2-char path segment, eg. `([a-z]{2})`, providing that does not conflict with a `page`? And then make the first path segment _optional_.
For example, a single set of rules to handle both cases:
```
RewriteRule ^(?:([a-z]{2})/)?([^/]+)/([^/]+)/([^/]+)/$ /index.php?taal=$1&page=$2&title=$3&beginnenbij=$4 [L]
RewriteRule ^(?:([a-z]{2})/)?([^/]+)/([^/]+)/$ /index.php?taal=$1&page=$2&title=$3 [L]
RewriteRule ^(?:([a-z]{2})/)?([^/]+)/$ /index.php?taal=$1&title=$2 [L]
```
Note that the first path segment (including the first slash) is made _optional_ with the trailing `?`, and this group is non-capturing (as denoted by the `(?:` prefix). However, the inner group that captures the language code _is_ capturing. When the language code is omitted (ie. the first path segment is not lowercase 2-chars) then the `$1` backreference is simply _empty_, but is still present, so the following backreferences are not renumbered.
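That behavior is easy to verify with an equivalent PCRE-style pattern; here is a quick check (illustrated in Python, since its `re` syntax matches closely):

```python
import re

# Same shape as the two-segment rule above: optional 2-char language prefix.
pattern = re.compile(r'^(?:([a-z]{2})/)?([^/]+)/([^/]+)/$')

print(pattern.match('nl/page/title/').groups())  # ('nl', 'page', 'title')
print(pattern.match('page/title/').groups())     # (None, 'page', 'title')
```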
Note also that I changed the quantifier from `*` to `+` for subsequent path segments. It doesn't make much sense to use `*` here, which makes it "look-like" it allows an _empty_ path-segment, eg. `/nl/foo//baz` - which would never be matched by the first rule anyway since the double slash in the middle is "resolved away" in the URL-path that the `RewriteRule` matches against so the URL would fall-through to the rule that follows anyway. |
I had the same error; for me it was because another .cs file was accidentally selected to be built as GoogleServicesJson. |
I am following this tutorial to run Oracle in a Docker image:
https://oralytics.com/2021/10/01/oracle-21c-xe-database-and-docker-setup/
I already have the image, and now I should run
```
sqlplus system/SysPassword1@//localhost/XEPDB1
```
But this command triggers a username and password prompt (as in the image below), and when I try to enter the information as:
- username: system
- password: SysPassword1@ (also tried "SysPassword1@" and SysPassword1)
it does not work, failing with the following error:
```
SP2-0306: Invalid option.
Usage: CONN[ECT] [{logon|/|proxy} [AS {SYSDBA|SYSOPER|SYSASM|SYSBACKUP|SYSDG|SYSKM|SYSRAC}] [edition=value]]
where <logon> ::= <username>[/<password>][@<connect_identifier>]
      <proxy> ::= <proxyuser>[<username>][/<password>][@<connect_identifier>]
Enter user-name: system
Enter password:
ERROR:
ORA-12162: TNS:net service name is incorrectly specified
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
```
Don't know what to do here.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/eTKjw.png |
sqlplus myusername/mypassword@ORCL not working with Oracle on Docker |
|oracle|docker|sqlplus| |
I changed the permissions to READ_MEDIA_IMAGES, READ_MEDIA_VIDEO and READ_MEDIA_AUDIO, wrote the code below, and granted all the permissions, but it is still not working.
[enter image description here](https://i.stack.imgur.com/T1KEy.jpg)
[enter image description here](https://i.stack.imgur.com/z18mQ.jpg)
```java
@RequiresApi(api = Build.VERSION_CODES.TIRAMISU)
private boolean checkPermission() {
private boolean checkPermission() {
int result = ContextCompat.checkSelfPermission(getApplicationContext(), WRITE_EXTERNAL_STORAGE);
int result1 = ContextCompat.checkSelfPermission(getApplicationContext(), READ_MEDIA_AUDIO);
int result2 = ContextCompat.checkSelfPermission(getApplicationContext(), MANAGE_EXTERNAL_STORAGE);
int result3 = ContextCompat.checkSelfPermission(getApplicationContext(), CAMERA);
int result4 = ContextCompat.checkSelfPermission(getApplicationContext(), READ_MEDIA_IMAGES);
int result5 = ContextCompat.checkSelfPermission(getApplicationContext(), READ_MEDIA_VIDEO);
int result6 = ContextCompat.checkSelfPermission(getApplicationContext(), ACCESS_MEDIA_LOCATION);
int result7
= ContextCompat.checkSelfPermission(getApplicationContext(), MOUNT_UNMOUNT_FILESYSTEMS);
return result == PackageManager.PERMISSION_GRANTED && result1 == PackageManager.PERMISSION_GRANTED;
}
@RequiresApi(api = Build.VERSION_CODES.TIRAMISU)
private void requestPermission() {
ActivityCompat.requestPermissions(this, new String[]{WRITE_EXTERNAL_STORAGE, CAMERA,MANAGE_EXTERNAL_STORAGE,READ_MEDIA_IMAGES,READ_MEDIA_VIDEO,READ_MEDIA_AUDIO,MOUNT_UNMOUNT_FILESYSTEMS,ACCESS_MEDIA_LOCATION}, PERMISSION_REQUEST_CODE);
}
@Override
public void onRequestPermissionsResult(int requestCode, String permissions[], int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults); // Call super method
switch (requestCode) {
case PERMISSION_REQUEST_CODE:
if (grantResults.length > 0) {
boolean locationAccepted = grantResults[0] == PackageManager.PERMISSION_GRANTED;
boolean cameraAccepted = grantResults[1] == PackageManager.PERMISSION_GRANTED;
if (locationAccepted && cameraAccepted)
Toast.makeText(this, "Permission Granted, Now you can access this app.", Toast.LENGTH_LONG).show();
else {
Toast.makeText(this, "Permission Denied, You cannot use this app.", Toast.LENGTH_LONG).show();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
if (shouldShowRequestPermissionRationale(WRITE_EXTERNAL_STORAGE)) {
showMessageOKCancel("You need to allow access to both the permissions",
new DialogInterface.OnClickListener() {
@RequiresApi(api = Build.VERSION_CODES.TIRAMISU)
@Override
public void onClick(DialogInterface dialog, int which) {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
requestPermissions(new String[]{WRITE_EXTERNAL_STORAGE, CAMERA,MANAGE_EXTERNAL_STORAGE,READ_MEDIA_IMAGES,READ_MEDIA_VIDEO,READ_MEDIA_AUDIO,MOUNT_UNMOUNT_FILESYSTEMS,ACCESS_MEDIA_LOCATION},
PERMISSION_REQUEST_CODE);
}
}
});
return;
}
}
}
}
break;
}
}
private void showMessageOKCancel(String message, DialogInterface.OnClickListener okListener) {
new AlertDialog.Builder(MainActivity.this)
.setMessage(message)
.setPositiveButton("OK", okListener)
.setNegativeButton("Cancel", null)
.create()
.show();
}
}
```
|
Sdk 34 MANAGE_EXTERNAL_STORAGE not working |
|android|android-14| |
null |
I have a table with two attributes, 'first_interval' and 'second_interval', and I need to add a new column 'date', but I can't manage to do it.
This is how I connect to my table:
```
database = boto3.resource(
    'dynamodb',
    endpoint_url = config.USER_STORAGE_URL,
    region_name = 'ru-central1',
    aws_access_key_id = config.AWS_PUBLIC_KEY,
    aws_secret_access_key = config.AWS_SECRET_KEY
)
table = database.Table('table231')
```
I used `update_item`, but I got the error: 'botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateItem operation: Missing value for required parameter "first_interval"' |
I had the same error; for me it was because another .cs file was accidentally selected to be built as GoogleServicesJson. |
I am trying to make a simple API request to fetch some data for the trending movies of the week, but I don't see anything in the web browser console. Any suggestions? This is the first API I have tried on my own. I am a coding bootcamp graduate. Anything helps!
Here is the repository: https://github.com/JoelGetzke/JoelsMovies
Here is the code:
```
const apiKey = '535b82486819425b363ecd51e605db3c';
const apiUrl = 'https://api.themoviedb.org';
// Example endpoint to get a list of movies
const endpoint = `${apiUrl}/3/trending/movie/week`;
// Constructing the request URL with API key
const requestUrl = `${endpoint}?api_key=${apiKey}`;
// Making a GET request using fetch API
fetch(requestUrl)
.then(response => {
// Check if the response is successful (status code 200)
if (response.ok) {
// Parse the JSON response
return response.json();
}
// If response is not successful, throw an error
throw new Error('Network response was not ok.');
})
.then(data => {
// Do something with the data
console.log(data);
})
.catch(error => {
// Handle errors
console.error('There was a problem with the fetch operation:', error);
});
```
I tried refreshing the page and opening the browser console. It currently reads:
```
index.html:89 GET http://127.0.0.1:5500/main.js net::ERR_ABORTED 404 (Not Found)
index.html:1 Refused to execute script from 'http://127.0.0.1:5500/main.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
```
|
In our company, we have a private repo on Github where we have our internal nuget packages.
On my laptop, on another API solution, I can restore the nuget packages using my github credentials.
I'm configuring GitHub Actions to check the PR and deploy the API, but when I try to restore the nugets, I get a 403 error:
> Your request could not be authenticated by the GitHub Packages
> service. Please ensure your access token is valid and has the
> appropriate scopes configured.
I have read much documentation and many posts on Stack Overflow explaining how to use a PAT. But this is not the solution I want: I don't want the workflow linked to my account. If I'm no longer at the company, I want the workflow to keep working.
So I decided to use a GitHub App, and I'm able to generate the token.
```
- uses: actions/create-github-app-token@v1
  id: app-token
  with:
    app-id: ${{ vars.APP_ID }}
    private-key: ${{ secrets.PRIVATE_KEY }}
```
Then I add the nuget source, but I don't know how to reference the token generated previously, and I'm not even sure that it would work.
```
- name: Restore .NET project Dependencies
  run: dotnet nuget update source SKDotNetPackages --source "https://nuget.pkg.github.com/sidekick-interactive/index.json" --username ${{ github.event.pull_request.user.login }} --password ${{ secrets.GITHUB_TOKEN }} --store-password-in-clear-text
```
Do you have any idea how to restore nuget from private github feed using Github App and no PAT?
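One avenue I've been considering (to be verified against the action's documentation): `actions/create-github-app-token` exposes the generated token as a step output, so the restore step could reference `steps.app-token.outputs.token` instead of `secrets.GITHUB_TOKEN`. Note the username value below is a guess on my part:

```yaml
- name: Restore .NET project Dependencies
  run: >
    dotnet nuget update source SKDotNetPackages
    --source "https://nuget.pkg.github.com/sidekick-interactive/index.json"
    --username x-access-token
    --password ${{ steps.app-token.outputs.token }}
    --store-password-in-clear-text
```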
|
How to restore nuget from private github feed using Github App and no PAT? |
|github-actions| |
It seems that pydoc has a bug when it comes to Python files with DB models.
I ended up using pdoc:
```
pip install pdoc
pdoc objects.py
```
|
I have the following action before the 'Install files' step:
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/Lulth.png
I want to delete all jar files in the application directory (if any are present),
but as a result the jar files are not deleted.
In the logs:
```
[INFO] com.install4j.runtime.beans.actions.files.DeleteFileAction [ID 270]: Execute action
Property directoryFilter: null
Property fileFilter: com.install4j.script.I4jScript_Internal_115
Property files: [.]
Property filesRoot: null
Property backupForRollback: false
Property recursive: false
Property rollbackSupported: false
Property showFileNames: true
Property showProgress: false
Execute action successful after 2 ms
```
P.S. Tested on Linux; "Try to obtain max privileges" is enabled for the Linux build. |
Install4j does not delete files |
|install4j| |
Am I able to specify that a parameter is to be a list of lists (of e.g. strings) in my function description?
Here is what I have, followed by the error:
```
FUNCS_DESC_create_pptx = {  # create_pptx(new_file_name, title, subtitle, slide_titles, slide_bullets, filepath_to_save=None)
    "name": "create_pptx",
    "description": "Creates a (powerpoint) presentation for the user.",
    "parameters": {
        "type": "object",
        "properties": {
            "new_file_name": {
                "type": "string",
                "description": "The name of the new text file to be created; should be descriptive and concise. It will be saved as {new_file_name}.pptx automatically.",
            },
            "title": {
                "type": "string",
                "description": "The title of the presentation, for the first slide.",
            },
            "subtitle": {
                "type": "string",
                "description": "The subtitle of the presentation, this also goes on the first slide.",
            },
            "slide_titles": {
                "type": "list[string]",
                "description": "The list of slide titles for the presentation, for each slide.",
            },
            "slide_bullets": {
                "type": "list[list[string]]",
                "description": "The list of bullet points for each slide in the presentation; each item is a list of strings, each of those strings is a bullet point.",
            },
        },
        "required": ["new_file_name", "title", "subtitle", "slide_titles", "slide_bullets"],
    },
}
```
Error:
```
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python312\Lib\site-packages\openai\resources\chat\completions.py", line 645, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python312\Lib\site-packages\openai\_base_client.py", line 1088, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python312\Lib\site-packages\openai\_base_client.py", line 853, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python312\Lib\site-packages\openai\_base_client.py", line 930, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'create_pptx':
'list[list[string]]' is not valid under any of the given schemas.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
Interestingly it mentions the `list[list[string]]` rather than the `list[string]` before it, so maybe the former is pushing it a little too far?
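For reference, the `parameters` object follows JSON Schema, which has no `list[...]` type; arrays (including nested ones) are written as `"type": "array"` with an `items` schema. A sketch of how those two properties might look (descriptions shortened):

```python
# JSON-Schema-style declarations for the two list parameters (illustrative).
slide_titles_schema = {
    "type": "array",
    "items": {"type": "string"},
    "description": "The list of slide titles, one per slide.",
}
slide_bullets_schema = {
    "type": "array",
    "items": {"type": "array", "items": {"type": "string"}},
    "description": "Per-slide lists of bullet-point strings.",
}
```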
|
Why can a method reference be used instead of a PropertyChangeListener? |
|java|javabeans|propertychangelistener| |
If you just change all the `actionSheet`s in your code to [`confirmationDialog`][1], it works totally fine. `actionSheet` has been deprecated since iOS 15 in favor of `confirmationDialog`.
Instead of using factory methods from `Alert.Button`, you can just create regular `Button`s with different `role`s.
```
struct ContentView: View {
@State private var showFirstActionSheet = false
@State private var showSecondActionSheet = false
var body: some View {
VStack{
VStack{
Button("Show First Action Sheet"){
showFirstActionSheet = true
}
.confirmationDialog("First Action Sheet", isPresented: $showFirstActionSheet, titleVisibility: .visible) {
Button("Cancel", role: .destructive) {
}
Button("Yes") {
doSomething()
}
Button("Cancel", role: .cancel) {
}
} message: {
Text("Some message!")
}
}
}
.confirmationDialog("Second Action Sheet", isPresented: $showSecondActionSheet, titleVisibility: .visible) {
Button("Cancel", role: .destructive) {
}
Button("Yes") {
}
Button("Cancel", role: .cancel) {
}
} message: {
Text("Some message!")
}
}
func doSomething(){
showSecondActionSheet = true
}
}
```
[1]: https://developer.apple.com/documentation/swiftui/view/confirmationdialog(_:ispresented:titlevisibility:actions:message:)-2s7pz |
I doubt your program is "crashing" at that particular line of code, because you have exception handling there, which would catch any error thrown in that method.
The only notable difference between the old and the new variant is that the new one is asynchronous. And somewhere in the call stack that leads to this particular method being called, you don't properly `await` the result of the asynchronous method. I.e., you have something like this:
```
public static void Main(string[] args) {
    ...
    methodA();
}

void methodA() {
    ...
    methodB();
}

async Task methodB() {
    ...
    await methodC();
}

async Task<...> methodC() {
    try {
        HttpResponseMessage responseMessage = await client.GetAsync(requestUrl);
        ...
        return ...
    } catch (Exception ex) {
        ...
    }
}
```
So when you now call `methodA()` from a synchronous context, it calls `methodB()` without awaiting its result. That's perfectly valid (even though it will generate a warning in Visual Studio), but at this point you lose your asynchronous context. This means that `methodA()` will finish *before* `methodB()`, and this bubbles up the call stack to the `Main()` method, which will also reach its end before `methodB()` is finished.
And when the `Main` method ends, the program terminates, regardless of whether there are still asynchronous operations pending.
This probably happened when you refactored `JiraRequest(string api)` from being synchronous to asynchronous. |
I am trying to build an account checker in Python.
When I run the script to check whether an account works (i.e. whether you can sign in to the website with that username and password), I always get "failed to login", and even when I put my real account in combo.json it still can't log in to the website, and I don't know why.
I want my script to log in to the website using combo.json, check the account status, and report "login successful" or "login failed" accordingly.
```
import sys
import requests
import json
def check_credentials(username, password):
req = requests.session()
param = {"username": username, "password": password}
response = req.post("https://accounts.spotify.com/en/login", data=param)
if response.status_code == 200 and "incorrect" not in response.text:
return True
else:
return False
def main():
try:
with open("combo.json", "r") as file:
combos = json.load(file)
for combo in combos:
username = combo["username"]
password = combo["password"]
if check_credentials(username, password):
print(f"Login successful for: {username}:{password}")
else:
print(f"Login failed for: {username}:{password}")
except FileNotFoundError:
print("combo.json file not found.")
if __name__ == "__main__":
main()
```
```
[
{"username": "user1", "password": "password1"},
{"username": "user2", "password": "password2"},
]
``` |
I am trying to build an account checker in Python.
When I run the script to check whether an account works (i.e. whether you can sign in to the website with that username and password), I always get "failed to login", and even when I put my real account in combo.json it still can't log in to the website, and I don't know why.
I want my script to log in to the website using combo.json, check the account status, and report "login successful" or "login failed" accordingly.
```
import sys
import requests
import json
def check_credentials(username, password):
req = requests.session()
param = {"username": username, "password": password}
response = req.post("https://accounts.spotify.com/en/login", data=param)
if response.status_code == 200 and "incorrect" not in response.text:
return True
else:
return False
def main():
try:
with open("combo.json", "r") as file:
combos = json.load(file)
for combo in combos:
username = combo["username"]
password = combo["password"]
if check_credentials(username, password):
print(f"Login successful for: {username}:{password}")
else:
print(f"Login failed for: {username}:{password}")
except FileNotFoundError:
print("combo.json file not found.")
if __name__ == "__main__":
main()
```
```
[
{"username": "user1", "password": "password1"},
{"username": "user2", "password": "password2"}
]
``` |
[enter image description here][1]
I have some problems with how to start writing the code. The problem is how to write Fortran code to input the data of river cross-section coordinates, e.g. using 'READ' as the command to input many coordinates for sections with different shapes. Maybe you can help me with this problem?
I've tried writing the code, but it doesn't work. I was expecting the program to work by just inputting the coordinates of many different river cross sections while it runs.
[1]: https://i.stack.imgur.com/XYCIs.png |
Why do my pics and CSS disappear on different pages? |
|html|spring-boot| |
null |
Pre-compile the expression and its Jacobian to function objects, then pass them into `minimize`. Note that it attempts to be somewhat smarter than your intended gradient descent, which may or may not be helpful; with bounds and a Jacobian supplied, `minimize` defaults to the L-BFGS-B method, as the output below shows.
```python
import string
import numpy as np
import scipy.optimize
import sympy
expr_str = (
'0.2*a**3 - 0.1*a**2*b - 0.9*a**2*c + 0.5*a**2 - 0.1*a*b**2 - 0.4*a*b*c '
'+ 0.5*a*b - 0.6*a*c**2 - 0.5*a*c - 0.6*a - 0.1*b**3 + 0.4*b**2*c '
'- 0.4*b**2 + 0.7*b*c**2 - 0.4*b*c + 0.9*b + 0.09*c**3 - 0.6*c**2 + 0.4*c + 0.22'
)
symbols = {
c: sympy.Symbol(name=c, real=True, finite=True)
for c in string.ascii_lowercase
if c in expr_str
}
expr = sympy.parse_expr(s=expr_str, local_dict=symbols)
expr_callable = sympy.lambdify(
args=[symbols.values()], expr=expr,
)
jac = [
sympy.diff(expr, sym)
for sym in symbols.values()
]
jac_callable = sympy.lambdify(
args=[symbols.values()], expr=jac,
)
result = scipy.optimize.minimize(
fun=expr_callable,
jac=jac_callable,
x0=np.zeros(len(symbols)),
bounds=scipy.optimize.Bounds(lb=-1, ub=1),
)
assert result.success
print(result)
```
```none
message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
success: True
status: 0
fun: -4.020346688237437
x: [ 5.139e-01 -1.000e+00 -1.000e+00]
nit: 5
jac: [-1.222e-08 3.839e+00 4.398e+00]
nfev: 7
njev: 7
hess_inv: <3x3 LbfgsInvHessProduct with dtype=float64>
```
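A sketch of that back-substitution: differentiate with respect to `a`, substitute `b = c = -1`, and solve the resulting quadratic (reusing the same expression string); the root inside the bounds matches the `x[0]` of about 0.5139 in the result above.

```python
import string
import sympy

expr_str = (
    '0.2*a**3 - 0.1*a**2*b - 0.9*a**2*c + 0.5*a**2 - 0.1*a*b**2 - 0.4*a*b*c '
    '+ 0.5*a*b - 0.6*a*c**2 - 0.5*a*c - 0.6*a - 0.1*b**3 + 0.4*b**2*c '
    '- 0.4*b**2 + 0.7*b*c**2 - 0.4*b*c + 0.9*b + 0.09*c**3 - 0.6*c**2 + 0.4*c + 0.22'
)
symbols = {
    c: sympy.Symbol(name=c, real=True, finite=True)
    for c in string.ascii_lowercase
    if c in expr_str
}
expr = sympy.parse_expr(s=expr_str, local_dict=symbols)
a, b, c = symbols['a'], symbols['b'], symbols['c']

# Substitute b = c = -1 into d(expr)/da; the result is the quadratic
# 0.6*a**2 + 3.0*a - 1.7, whose roots can be found symbolically.
da = sympy.diff(expr, a).subs({b: -1, c: -1})
roots = sympy.solve(da, a)
print(roots)  # one root near 0.5139 (in bounds), one near -5.5139 (outside)
```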
If you really want a symbolic solution to `a` where `b` and `c` are equal to -1, you can then work backward, substituting `b` and `c` into the first Jacobian component and solving for `a`. |
***EDIT 4***
Ok, I hope this additional info seals it.
Firefox DOES bubble the synthetic CLICK that I generate on SPACEBAR / ENTER, but it DOES NOT bubble the mouse CLICK.
CHROME, OPERA, and EDGE work as expected. (both events bubble)
All Browsers bubble INPUT events for both mouse and synthetic clicks.
Both events are COMPOSED, CANCELLABLE, and BUBBLE.
The only difference I have seen is the synthetic CLICK event has had its TARGET set to ABC-TOGGLE#myToggle whereas the TARGET for the mouse CLICK is still input checkbox.
Note: I do understand that SPACEBAR is normally a first class CLICK event for checkboxes but I have stopPropagation() for uniformity with CR.
Why won't the mouse CLICKs traverse the DOMs?
***END EDIT 4***
**EDIT 3**
Ok, I'm an idiot! Never knew spacebar *is* an intrinsic click :-(
**END EDIT 3**
**EDIT 2**
More info: Chrome triggers both a Click and an Input event in that order. Firefox agrees with spacebar/synthetic event but won't surface a Click to the light DOM when the mouse is clicked.
[This][1] now works for Firefox and Chrome for both TickBox and Toggle. But the SpaceBar / CarriageReturn functionality does not work. Still on it.
**END EDIT 2**
**EDIT 1**
Firefox does in fact bubble an event through to the light DOM but it is an INPUT event; not a CLICK event. Why?
**END EDIT 1**
The documentation says events triggered on elements in the shadow DOM that make up a web component should pass from shadow to light DOM, with the target, currentTarget, originalTarget et al. retargeted to the web component's tagName, id, and so on.
This is not happening with Firefox in this [example][1]
in test_toggle.html I wish to be notified of a "click" event on my "disTick" tickbox so that I can disable the Toggle: -
document.getElementById("disTick").addEventListener("click", (e) => {
if (e.target.checked)
myToggle.disabled=true;
else
myToggle.disabled=false;
});
document.body.addEventListener("change", (e) => {console.log("****"+e.target.tagName);});
It gets called on Chrome, Edge, Opera, but not FireFox.
(I have tried to trap click, input, and change events on BODY but nothing bubbles.)
Curiously enough, my synthetic event does bubble up (from toggle.js)
this.#toggle.addEventListener('keyup', (e) => {
if (e.altKey || e.ctrlKey || e.isComposing || e.metaKey || e.shiftKey)
return;
if (e.keyCode == 32 || e.keyCode == 13)
{
console.log("key = " + e.keyCode);
e.preventDefault();
this.#toggle.click();
}
});
What am I missing? Why isn't the original click bubbling through the light DOM on Firefox?
[1]: https://richardmaher.github.io/CustElements/test_toggle.html |
Simple movie API request not showing up in the console log |
|javascript|api|error-handling|fetch|console.log| |
null |
I am developing a project that involves processing text data. My goal is to correct errors specifically related to unnecessary characters and spaces in texts. I'm looking for recommendations on suitable Python libraries and tools that could help address these issues.
Extraneous spaces:
- Correct: "We boug ht a new car yesterday." to "We bought a new car yesterday."
- Correct: "Today was a ve ry goo d da y." to "Today was a very good day."
- Correct: "Hel lo! Ho w are you do ing?" to "Hello! How are you doing?"
I have explored several existing solutions, but most of them were either too basic for our needs or demanded significant computational resources. Additionally, it's crucial for my project to handle data processing internally to ensure data privacy and security. Therefore, I need a tool that allows for easy customization, can be integrated into an existing project without substantial additional hardware investments, and operates without relying on external API calls.
What I expect from the solution:
- Easy customization and integration capabilities.
- Should not require significant computational resources.
- Must operate locally and not rely on external API calls for data processing.
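To make the expected behavior concrete, here is a dependency-free sketch of the kind of correction I mean: drop the existing (broken) spaces and re-segment against a vocabulary. The tiny word list is purely illustrative; a real solution would use a proper frequency dictionary and pick the most probable segmentation.

```python
# Toy vocabulary, purely illustrative; real use needs a frequency dictionary.
VOCAB = {"we", "bought", "a", "new", "car", "yesterday",
         "today", "was", "very", "good", "day"}

def segment(text):
    """Split a lowercase, letters-only string into vocabulary words, if possible."""
    if not text:
        return []
    # Prefer the longest matching prefix, backtracking if the rest fails.
    for i in range(len(text), 0, -1):
        if text[:i] in VOCAB:
            rest = segment(text[i:])
            if rest is not None:
                return [text[:i]] + rest
    return None

def fix_spaces(sentence):
    # Remove all (possibly misplaced) spaces, then re-segment.
    squashed = "".join(sentence.lower().split())
    words = segment(squashed)
    # Fall back to the original sentence if no segmentation exists.
    return " ".join(words) if words is not None else sentence

print(fix_spaces("We boug ht a new car yesterday"))
# -> we bought a new car yesterday (case is not preserved in this toy version)
```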
I would appreciate any suggestions on suitable Python libraries, tools, or open-source projects that can help solve the mentioned issues with extraneous characters and spaces, in line with these requirements. |
Seeking Python Libraries for Removing Extraneous Characters and Spaces in Text |
|python|text|nlp|text-processing| |
null |
From the "works for me" department:
```
In [30]: from itertools import product; import pickle
In [31]: a = product("abcd", repeat=3)
In [32]: next(a), next(a)
Out[32]: (('a', 'a', 'a'), ('a', 'a', 'b'))
In [33]: b = pickle.loads(pickle.dumps(a))
In [34]: next(b)
Out[34]: ('a', 'a', 'c')
In [35]: next(a)
Out[35]: ('a', 'a', 'c')
```
|
I am reading Mastering STM32 by Carmine Noviello and got to the chapter where I am about to work on the hello-nucelo project. I followed everything to the letter and still got the following error (see attachment):[error screenshot][1]
*23:33:25 **** Incremental Build of configuration Debug for project hello-nucleo ****
make all
process_begin: CreateProcess(NULL, echo "Building file: ../system/src/stm32f7-hal/stm32f7xx_hal.c", ...) failed.
make (e=2): Le fichier spécifié est introuvable. ("The specified file could not be found.")
make: *** [system/src/stm32f7-hal/subdir.mk:57: system/src/stm32f7-hal/stm32f7xx_hal.o] Error 2
"make all" terminated with exit code 2. Build might be incomplete.
23:33:26 Build Failed. 1 errors, 0 warnings. (took 522ms)*
What can be the problem in this case?
I tried playing around with the settings and tool chain editor tabs based on things I have seen online but nothing worked.
[1]: https://i.stack.imgur.com/ns7ck.png |
I am working on a school project where I need to obtain the analyst price target estimates of certain stocks from the Yahoo Finance website (this is mandatory).
When I tried to scrape it with Beautiful Soup I couldn't get the data (I believe JS is building the page as it loads), so I turned to Selenium. However, when I try to locate the elements through their XPath, it returns an error as if they don't exist. I am using expected conditions (EC) in case the page needs to load, but it's not working. I've tried raising the wait time to as much as 2 minutes, so that is not the issue.
Code below:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument("--headless")
chrome_options.add_argument(f'user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36')
chrome_options.add_argument("window-size=1920,1080")
driver = webdriver.Chrome(options=chrome_options)
driver.get("https://finance.yahoo.com/quote/BBAJIOO.MX?.tsrc=fin-srch")
driver.delete_all_cookies()
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id="Col2-11-QuoteModule-Proxy"]/div/section/div')))
Does anyone have an idea why this is happening, and how I can obtain these ratings?
Image below of desired ratings
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/mBp3Q.png
This is the example of the HTML Code:
<div aria-label="Low 60 Current 64.59 Average 69.25 High 76.8" class="Px(10px)">
<div class="Pos(r) Pb(30px) H(1em)">
<div class="Start(75%) W(25%) Pos(a) H(1em) Bdbc($seperatorColor) Bdbw(2px) Bdbs(d) T(30px)"></div>
<div class="Pos(a) H(1em) Bdbc($seperatorColor) Bdbw(2px) Bdbs(s) T(30px) W(100%)"></div>
<div class="Pos(a) D(ib) T(35px)" data-test="analyst-cur-tg" style="left: 27.3214%;">
<div class="W(7px) H(7px) Bgc($linkActiveColor) Bdrs(50%) Z(1) B(-5px) Translate3d($half3dTranslate) Pos(r)"></div>
<div class="Bgc($linkActiveColor) Start(0) T(5px) W(1px) H(17px) Z(0) Pos(r)"></div>
<div class="Miw(100px) T(6px) C($linkActiveColor) Pos(r) Fz(s) Fw(500) D(ib) Ta(c) Translate3d($half3dTranslate)"><span>Current</span> <span>64.59</span></div>
</div>
<div class="Pos(a) D(ib) T(-1px)" data-test="analyst-avg-tg" style="left: 55.0595%;">
<div class="Pos(r) T(5px) Miw(100px) Fz(s) Fw(500) D(ib) C($primaryColor)Ta(c) Translate3d($half3dTranslate)"><span>Average</span> <span>69.25</span></div>
<div class="Pos(r) Bgc($tertiaryColor) W(1px) H(17px) Z(0) T(6px) Start(-1px)"></div>
<div class="W(8px) H(8px) Bgc(t) Bd Bdc($seperatorColor) Bdrs(50%) Z(1) B(-6px) Pos(r) Translate3d($half3dTranslate)"></div>
</div><span class="W(6px) H(6px) Bgc($tertiaryColor) Bdrs(50%) Z(0) B(-5px) Start(0) Pos(a) Translate3d($half153dTranslate)"></span><span class="W(6px) H(6px) Bgc($tertiaryColor) Bdrs(50%) Z(0) B(-5px) Pos(a) Translate3d($zero153dTranslate) Start(100%)"></span></div>
<div class="Ov(a) Fz(xs) Mt(10px) C($tertiaryColor)">
<div class="Pos(r) Fl(start) Fz(xs) C($tertiaryColor) "><span>Low</span> <span>60.00</span></div>
<div class="Pos(r) Fl(end) Fz(xs) C($tertiaryColor) "><span>High</span> <span>76.80</span></div>
</div>
</div> |
I need help with this problem.
The problem occurred in the table-switching circuit of the inverter. Looking forward to your answers.
Error in port widths or dimensions. Output port 1 of 'DPC/powergui/EquivalentModel1/Gates/From27' is a [1*2]matrix
Error in port widths or dimensions. Invalid dimension has been specified for input port 27 of 'DPC/powergui/EquivalentModel1/Gates/Mux'.
[[enter image description here](https://i.stack.imgur.com/nkGnH.png)](https://i.stack.imgur.com/Ly56J.png)
I tried to simulate DPC with an NPC inverter in Simulink. |
"Error in port widths or dimensions" while producting 27 |
|matlab| |
null |
I have a class:
```
#include <array>
#include <ranges>
#include <algorithm>
struct VectorUnitIndexInitTag {};
struct VectorFillInitTag {};
template<class T, std::size_t N> requires std::is_arithmetic_v<T> && (N >= 1)
class Vector
{
public:
static constexpr std::size_t Dimension = N;
using iterator = typename std::array<T, Dimension>::iterator;
using const_iterator = typename std::array<T, Dimension>::const_iterator;
constexpr Vector() noexcept : Elements{} {}
private:
constexpr Vector(VectorUnitIndexInitTag, size_t Index) : Vector()
{
Elements[Index] = static_cast<T>(1);
}
constexpr Vector(VectorFillInitTag, T Value) : Vector()
{
std::ranges::copy(std::ranges::repeat_view(Value, Dimension), begin());
}
public:
constexpr ~Vector() = default;
inline constexpr iterator begin() noexcept
{
return Elements.begin();
}
inline constexpr iterator end() noexcept
{
return Elements.end();
}
static const constinit Vector ZeroVector;//{};
template<size_t I> requires (I < N)
static const constinit Vector UnitVector;//{ VectorUnitIndexInitTag{}, I };
static const constinit Vector OneVector;//{ VectorFillInitTag{}, 1 };
private:
std::array<T, Dimension> Elements;
};
template<class T, size_t N> requires std::is_arithmetic_v<T> && (N >= 1)
const constinit Vector<T, N> Vector<T, N>::ZeroVector{};
template<class T, size_t N> requires std::is_arithmetic_v<T> && (N >= 1)
template<size_t I> requires (I < N)
const constinit Vector<T, N> Vector<T, N>::UnitVector{ VectorUnitIndexInitTag{}, I };
template<class T, size_t N> requires std::is_arithmetic_v<T> && (N >= 1)
const constinit Vector<T, N> Vector<T, N>::OneVector{ VectorFillInitTag{}, 1 };
int main()
{
Vector<float, 7> boo = Vector<float, 7>::ZeroVector;
Vector<float, 7> boo2 = Vector<float, 7>::UnitVector<3>;
Vector<float, 7> boo3 = Vector<float, 7>::OneVector;
}
```
The above compiles on compiler explorer with MSVC, GCC (trunk), and Clang (trunk) with C++23 compiler flags.
When I change `const constinit` to `constexpr` for the three static member variables, this fails to compile, except on GCC (trunk) for all 3 and on MSVC for `UnitVector`, with the redefinitions removed and the initializers moved to the declarations inside the class. MSVC (for the other 2) and Clang (for all 3) seem to suggest that upon declaration of the variables, the Vector class is an incomplete type. But then why does it compile on GCC (for all 3) and MSVC (for only `UnitVector`) and why does `const constinit` compile? |
First, to be clear: Vim cannot execute Python code by itself. You need a Python interpreter for this, for example an IPython shell.
Second, to execute Python code written in the Vim text editor you can use IPython and a Vim plugin, [vim-ipython-cell][1].
This plugin allows Jupyter-Notebook-style cell execution using a custom-defined IPython shell. I have been working with this for a while and I can recommend it.
It builds on a more basic vim plugin [vim-slime][2], which technically can also do what you want.
Another option, which I haven't tried, is [nvim-ipy][3].
[1]: https://github.com/hanschen/vim-ipython-cell
[2]: https://github.com/jpalardy/vim-slime
[3]: https://github.com/bfredl/nvim-ipy |
|vba|filtering| |
I used `torch.jit.script` to save a model and am using C++ to load it:
```
#include <torch/script.h>

#include <iostream>
#include <string>

int main() {
    std::string model_path = "G:/modelscriptcuda.pt";
    try {
        torch::jit::script::Module module = torch::jit::load(model_path, torch::kCUDA);
    } catch (const c10::Error& e) {
        std::cerr << "error loading the model: " << e.what() << std::endl;
        return -1;
    }
    return 0;
}
```
And I got this error:
```
error loading the model: [enforce fail at ..\..\caffe2\serialize\inline_container.cc:197] . file not found: modelscriptcuda/version
(no backtrace available)
```
|
Can not load scripted model using torch::jit::load |
|torch|jit|caffe2| |
null |
If something works in debug mode but not in a normal run, it is usually down to the speed at which things happen in Playwright.
To confirm, try adding the delay below before asserting on the email validation message:

    await Task.Delay(10000);
    await Expect(emailValidationErrorMessage).ToBeVisibleAsync();

Alternatively, you can pass a timeout to `ToBeVisibleAsync` so it keeps retrying the check for up to the given time:

    await Expect(emailValidationErrorMessage).ToBeVisibleAsync(new() { Timeout = 10000 });

This will keep retrying the assertion for up to 10 seconds.
|
Try wrapping your `SuperTextIconButton` with a `FittedBox`:
Container(
alignment: Alignment.center,
// back icon
height: Get.height * 0.0494,
width: Get.width * 0.1,
decoration: BoxDecoration(
color: const Color(0xff2a2a2a),
borderRadius: BorderRadius.circular(10),
),
// edit these lines
child: FittedBox(
fit: BoxFit.cover,
child: SuperTextIconButton(
'Back',
onPressed: () => Get.back(),
getIcon: Icons.arrow_back_ios_new,
buttonColor: const Color(0xff7ED550),
),
)
)
If you want to know more details, refer to this [documentation][1] about `FittedBox`.
[1]: https://api.flutter.dev/flutter/widgets/FittedBox-class.html |
I recently ran into an issue with one of my DB tables where it shows the error:
"Warning: A form on this page has more than 2000 fields. On submission, some of the fields might be ignored, due to PHP's max_input_vars configuration."
So I created a test.php file with:
<?php
phpinfo();
?>
After loading the page in my browser I could see the `max_input_vars` was set as 1000
So I checked where the `Loaded Configuration File` was located and went into the terminal to edit it. At first it wasn't updating; then I found out the leading semicolon needed to be removed, so I did that as well... still no effect. I contacted IONOS (the company that owns the server) and they were no help.
Eventually, I realised I could just go into `Plesk > Tools & Settings > PHP Settings > 8.2.17 FPM application` and change the php.ini file directly there. After hitting "OK" it actually updated and applied, and when I go back to my test.php file I can see the new value. But when I refresh my table in the database it still throws the same warning message, so now I'm completely lost and have no idea what to do.
Also, my table only has around 500 rows and 10 columns, so it's not even that big. I'm not sure if something else is going wrong here or what.
Does anyone know if there are any other steps I can take?
Thanks. |
I use MediaWiki version 1.41.0. Notifications do not work, and I cannot find settings where this can be controlled. I have an Admin account, and an account under my own name.
I add a page notification with the star at the top right. When clicking it I see the message "Adding the page to your notification list.", so the page notification is set. I set the notification on my own account and then made an edit on that same page with the Admin account; the other way around does not work either.
The e-mail system does work: an e-mail is sent when creating a new account, and I can also send an e-mail to another user. Only the notifications do not work.
What can be the problem, and how do I solve it? |
MediaWiki notifications do not work, but e-mails for account creation do. How can this be solved? |
|mediawiki| |
I have a nested list whose elements always have 5 values. The elements of the list can differ.
I would like to copy the list into two separate lists and change two values in each of the new lists.
The code does this, but during the second loop the first list gets overwritten again.
I don't know why.
Example:
test = [['1', '1', '1', '2024-03-03', "100"], ['1', '1', '2', '2024-05-03', "200"], ['1', '2', '2', '2024-05-03', "200"], ['1', '3', '3', '2024-01-03', "200"]]
print(test) # Okay
lst_01 = test.copy() # Okay
lst_02 = test.copy() # Okay
for item in lst_01:
item[3] = "2024-01-01"
item[4] = ""
print(lst_01) # Okay
print(lst_02) # Not Okay: Overwritten
for item in lst_02:
item[3] = "2024-12-31"
item[4] = ""
print(lst_01) # Mistake: First list again overwritten
print(lst_02) # Okay
sum = lst_01 + lst_02 # List lst_01 and lst_02 equal. This is not correct
print(sum)
-----------------------------------------------------------------
Result of the code:
list(sum)
[['1', '1', '1', '2024-12-31', ''], ['1', '1', '2', '2024-12-31', ''], ['1', '2', '2', '2024-12-31', ''], ['1', '3', '3', '2024-12-31', ''], ['1', '1', '1', '2024-12-31', ''], ['1', '1', '2', '2024-12-31', ''], ['1', '2', '2', '2024-12-31', ''], ['1', '3', '3', '2024-12-31', '']]
The date is always "2024-12-31" for the first list "lst_01"
Where is my mistake?
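To illustrate what I am seeing, here is a small reproduction: the two copies appear to share the same inner lists, so changing one changes the other:

```python
test = [['1', '1', '1', '2024-03-03', "100"]]
lst_01 = test.copy()
lst_02 = test.copy()

# .copy() makes a shallow copy: both new lists hold references
# to the very same inner list objects.
print(lst_01[0] is lst_02[0])  # True
lst_01[0][3] = "2024-01-01"
print(lst_02[0][3])  # 2024-01-01, changed in the "other" list too
```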
Thanks for a feedback
I expect new lists based on the content of the "test" list with changed values; the combined list should have exactly double the number of elements, with the new values. |
Python, Exchange specific values in a nested list |
|python|list|nested| |
null |
It seems that pydoc has a bug when it comes to Python files with DB models.
I ended up using pdoc and it generated the documentation with no issues.
pip install pdoc
pdoc objects.py |
I have 12 bytes being sent over a serial connection, representing 3 float values. The other program does this:
float m_floatArray[3];
...
Serial.write((byte*) m_floatArray, 12); //12 b/c 3 floats at 4 bytes each
The data I receive in my python program looks like this:
#the data is printed from: received = ser.read(12)
received = '\xe4C\x82\xbd-\xe4-\xbe&\x00\x12\xc0'
I want to essentially do this in my python program:
x = getFirstFloat(received)
y = getSecondFloat(received)
z = getThirdFloat(received)
How can I parse my data?
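I suspect the standard `struct` module is what I need, something like this sketch (assuming the sending side is little-endian; in Python 3, `ser.read` returns `bytes`):

```python
import struct

# The 12 bytes from ser.read(12): three 4-byte IEEE-754 floats.
received = b'\xe4C\x82\xbd-\xe4-\xbe&\x00\x12\xc0'

# '<' = little-endian, '3f' = three 4-byte floats.
x, y, z = struct.unpack('<3f', received)
print(x, y, z)
```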
|
For the sake of completeness, here is a solution using the configuration file samconfig.toml
You can add parameter overrides like this:
```
[prod.deploy.parameters]
parameter_overrides="ParameterKey=Environment,ParameterValue=prod"
```
In the template.yaml (as mentioned in other answers):
```
Parameters:
Environment:
Type: String
AllowedValues:
- staging
- prod
```
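If resources in the template need to vary by environment, the parameter can then be referenced with `!Sub` or `!Ref`, for example (hypothetical function name):

```yaml
Resources:
  ApiFunction:                      # hypothetical resource
    Type: AWS::Serverless::Function
    Properties:
      # Name the function per environment, e.g. api-handler-prod
      FunctionName: !Sub "api-handler-${Environment}"
```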
And the deploy cmd is:
```sam deploy --config-env prod```
or
```sam deploy --config-env staging```
substitute prod with staging/dev etc. |
[FATAL] [DBT-05509] Failed to connect to the specified database (ORCLCDB).
CAUSE: OS Authentication might be disabled for this database (ORCLCDB).
ACTION: Specify a valid sysdba user name and password to connect to the database.
cat: /opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB.log: No such file or directory
cat: /opt/oracle/cfgtoollogs/dbca/ORCLCDB.log: No such file or directory
I'm facing this error when setting up Oracle on a k8s cluster using Helm charts.
I'm not sure what to do exactly. I tried accessing the /opt directory but didn't find the oracle directory there, and I cannot enter the pod since it's not running. |
Setting up Oracle enterprise edition on a k8s cluster using Helm charts |
|oracle|kubernetes|kubernetes-helm|enterprise| |
null |
I started working with the Moodle CMS not long ago. I am trying to create a theme that matches the expected design, but I ran into the problem that I do not understand how the theme system works in this CMS, and the documentation is not very helpful. How can I create my own navigation template and have full control over the styles?
This is also a general question, since I do not understand where the templates come from. Some come from the parent theme and some from Moodle core, and what exactly is rendering? Can you turn off Bootstrap and replace it with Tailwind?
### Basic theme architecture:
βββ config.php
βββ lang
β βββ en
β β βββ theme_dpu.php
βββ lib.php
βββ pix
β βββ favicon.ico
β βββ screenshot.png
βββ scss
β βββ post.scss
βββ settings.php
βββ templates
βββ version.php
|
General questions about creating a custom theme in the Moodle CMS |
|php|moodle|moodle-theme|moodle-boost| |
null |
In Normal Operation, Maven Calculates and Downloads Transitive Dependencies
---
There's a presumption in the question that it was _normal and expected_ by Wiremock's development team that you would manually add jetty-util as an explicit dependency when not using the standalone artifact. That's not true: *This was a bug.*
The alternative to standalone JARs is not needing to add dependencies yourself: the POM for a project specifies compatibility constraints on all of its dependencies, so that Maven can combine the constraints of all the dependencies and calculate a dependency set that satisfies every one of them.
---
### Vendoring Dependencies Sets Up Conflicts
<sub>(A quick terminology note: To "vendor" a dependency is to redistribute it yourself; this is what "standalone" JARs do).</sub>
Let's say that you're using library A, which requires C version 1.2.3 or newer (but not past 1.4.99); and library B, which requires C version 1.3.0 or newer (but not 2.0+).
When Maven is calculating a dependency set, this is easy: it knows that the overall project requires a version of C that's at least 1.3.0, and not 1.5+. It thus might pick, say, 1.4.15.
By contrast, if library A bundled in C==1.2.3 to a standalone jar, _that version of C is on the overall project's classpath_, so library B can end up getting C 1.2.3 even though it doesn't work with any version of C older than 1.3.0.
There are mechanisms like [OSGi](https://en.wikipedia.org/wiki/OSGi) to automate setup of multiple classloaders with different dependency chains active within the same JVM at the same time to allow concurrent use of conflicting library versions, but these add a great deal of complexity on their own -- it's very much best avoided.
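In POM terms, library B's constraint from the example above could be expressed as a Maven version range (hypothetical coordinates):

```xml
<!-- In library B's pom.xml: accept any C from 1.3.0 (inclusive) up to 2.0.0 (exclusive) -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>C</artifactId>
  <version>[1.3.0,2.0.0)</version>
</dependency>
```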
---
Vendoring Dependencies Prevents Security Updates
---
You're using Library A in standalone mode, which _vendors_ C 1.2.3.
There's a major vulnerability discovered in C releases older than 1.4.13. Now what?
If you weren't operating in standalone mode, Maven could resolve to the newest available version that was compatible with everything in your dependency chain; but because you're letting your dependencies pull in versions of transitive dependencies _they_ packaged at build time, now you need to wait until library A publishes a new release.
If they only publish that new release on a branch you aren't compatible with -- too bad, so sad, nothing you can do about it without a bunch of build engineering work to pull content out of the jar they distributed at build time. |
I'm currently trying to implement basic Google OAuth2 with Passport.js; this is the function I'm trying to rewrite/adapt to Mongoose:
function verify(accessToken, refreshToken, profile, cb) {
db.get('SELECT * FROM federated_credentials WHERE provider = ? AND subject = ?', [
'https://accounts.google.com',
profile.id
], function(err, cred) {
if (err) { return cb(err); }
if (!cred) {
// The account at Google has not logged in to this app before. Create a
// new user record and associate it with the Google account.
db.run('INSERT INTO users (name) VALUES (?)', [
profile.displayName
], function(err) {
if (err) { return cb(err); }
var id = this.lastID;
db.run('INSERT INTO federated_credentials (user_id, provider, subject) VALUES (?, ?, ?)', [
id,
'https://accounts.google.com',
profile.id
], function(err) {
if (err) { return cb(err); }
var user = {
id: id,
name: profile.displayName
};
return cb(null, user);
});
});
} else {
// The account at Google has previously logged in to the app. Get the
// user record associated with the Google account and log the user in.
db.get('SELECT * FROM users WHERE id = ?', [ cred.user_id ], function(err, user) {
if (err) { return cb(err); }
if (!user) { return cb(null, false); }
return cb(null, user);
});
}
});
And this is what I rewrote:
import passport from "passport";
import { Strategy as GoogleStrategy } from "passport-google-oauth20";
import User from "./models/User";
import FederatedCredentials from "./models/FederatedCredentials";
async function verify(_accessToken, _refreshToken, profile, cb) {
let user = null;
try {
const result = await FederatedCredentials.findOne({
provider: "https://accounts.google.com",
subject: profile.id,
}).exec();
if (result instanceof Error) {
cb(result);
return;
} else if (result === undefined || result === null) {
{
try {
user = await User.create({
username: profile.displayName,
email: profile.emails![0].value,
});
await user.save();
} catch (err) {
return cb(err as Error);
}
}
if (user) {
try {
const fc = await FederatedCredentials.create({
userId: user._id,
provider: "https://accounts.google.com",
subject: profile.id,
});
await fc.save();
} catch (err) {
return cb(err as Error);
}
} else {
cb(null, user);
return;
}
} else {
try {
const userToFind = await User.findById(result.userId).exec();
if (userToFind instanceof Error) return cb(userToFind);
else if (!userToFind) return cb(null, false);
else return cb(null, userToFind);
} catch (err) {
return cb(err as Error);
}
}
} catch (err) {
return cb(err as Error);
}
},
I'm getting the "Promise returned in function argument where a void return was expected", tried looking up the explanation of the error online but didn't get it. What am I doing wrong? |
The conflict in the package tree can be resolved by making sure that the divergent versions are always in the `require-dev` sections of the composer.json for the packages that use them. Then downstream packages will not try to load them.
In this case that makes sense because tests for each package should be scoped to the package itself. |
For me, updating goRouter to the latest version fixed it. |
|powershell|voip|linphone| |
Or, a way to obtain the current window/tab/pane index numbers for a Windows Terminal "viewport" from within a WSL2 Ubuntu bash shell?
Background and Use Case:
I would like to use Windows Terminal as my primary terminal multiplexer and use Gnu Screen only for session persistence in my WSL2 + Ubuntu dev environment. That is, I don't want open up multiple tabs and panes within Gnu Screen itself, but instead want to use Windows Terminal to open multiple bash sessions in different windows, tabs, and panes to my WSL2 Ubuntu instance, arranged in whatever dynamic layout I desire.
This works, but the problem is when I manually close Windows Terminal or I have to reboot my computer, upon restart Windows Terminal will faithfully recreate its layout and reopen fresh bash shells, but I lose the state of each previous bash session.
I've used Gnu Screen for a long time and would like to use it to reconnect my detached screen sessions. But I don't want to just reattach the 'next available' detached screen session randomly to each recreated viewport. Rather, each screen session should be reconnected to the same corresponding bash shell associated with the original window/tab/pane viewport in the recreated Windows Terminal layout after it restarts. This way, no matter what order I create my windows, tabs, and panes in Windows Terminal, or close/reopen or move them around, my bash screen sessions will be preserved in the same original viewport after a restart.
In theory this should be easy. As Windows Terminal recreates its layout, the .bashrc script of each reopened bash shell in each viewport could be instructed to reattach to its former screen session, thus restoring all bash viewports to their matching bash screen sessions exactly as before.
To do this, I would need to name each screen session as it is created with a unique identifier associated with the specific Windows Terminal viewport ID that it is tied to. Or worst case, construct a unique ID from the window/tab/panel index numbers associated with the bash viewport in Windows Terminal. And if needed, I could even run a background bash process to monitor these WT values for each bash shell and rename its associated screen session automatically if they change.
I did notice the "WT_SESSION" and WSL_INTEROP bash shell environment variables in my WSL2 Ubuntu bash environment. Unfortunately those values change after restarting Windows Terminal, even for the same recreated viewport and bash shell. And I have not been able to find a way to obtain the current Windows Terminal window/tab/pane index values from within the bash shell to construct my own unique viewport identifier to name my screen sessions with.
Does anyone know how I could get the window/tab/pane index numbers of the current Windows Terminal viewport from within the Ubuntu bash environment? Or obtain a unique layout "viewport" identifier that persists between restarts of Windows Terminal? |
Promise returned in function argument where a void return was expected, passportjs |
|typescript|mongoose|passport.js| |