Node.js Server + Socket.IO + Android Mobile Application XHR Polling Error...? |
|android|node.js|sockets|io-socket| |
I would like to see a three-way diff of three files, with every difference output as a conflict in the git style. That is, common lines are shown verbatim, and differing sections of a file are shown with conflict markers "<<<<<<<", "|||||||", "=======", and ">>>>>>>" (also called "conflict brackets"):
```
common line 1
common line 2
<<<<<<<
text from mine.txt
|||||||
text from base.txt
more text from base.txt
=======
text from yours.txt
>>>>>>>
common line 3
common line 4
<<<<<<<
same text in mine.txt and yours.txt, none in base.txt
|||||||
=======
same text in mine.txt and yours.txt, none in base.txt
>>>>>>>
common line 5
common line 6
```
Crucially, I would like **every difference** to be marked with conflict brackets, including differences that are mergeable.
Here are some options that do not work:
* `git diff` only takes two files as input; that is, it compares two things rather than three.
* `git merge-file` does not show mergeable differences (it merges them).
* `diff3 -m` is close to what I want: like `git merge-file`, it shows the whole file, with conflict markers for
conflicts, but it does not show conflict markers for mergeable differences.
* `diff3` shows all the differences, even mergeable ones, but not in the given format.
* `diff3 -A` does not show all the differences, and mergeable ones are not output with conflict markers.
I can write a program that takes the `diff3` output and the original files, and outputs the conflicted file in git style. However, I would prefer to avoid that if I can.
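For what it's worth, a minimal Python sketch of that kind of program, computed directly with `difflib` instead of post-processing `diff3` output: it aligns mine and yours against base and emits every out-of-sync region as a conflict block. The hunk boundaries here are whatever `SequenceMatcher` produces, so treat this as an illustration of the idea rather than a vetted merge tool:

```python
from difflib import SequenceMatcher

def line_map(base, other):
    """Map each matched base line index to its index in `other`."""
    m = {}
    for i, j, n in SequenceMatcher(None, base, other).get_matching_blocks():
        for k in range(n):
            m[i + k] = j + k
    return m

def merge3(base, mine, yours):
    """Three-way diff where *every* difference becomes a conflict block."""
    mmap, ymap = line_map(base, mine), line_map(base, yours)
    out, i, j, k = [], 0, 0, 0
    while i < len(base) or j < len(mine) or k < len(yours):
        # A line is common only if all three cursors are in sync on it.
        if i < len(base) and mmap.get(i) == j and ymap.get(i) == k:
            out.append(base[i])
            i, j, k = i + 1, j + 1, k + 1
            continue
        # Otherwise scan forward to the next base line matched in both files
        # and emit everything before it as one conflict block.
        i2 = i
        while i2 < len(base) and not (i2 in mmap and i2 in ymap):
            i2 += 1
        j2 = mmap.get(i2, len(mine))
        k2 = ymap.get(i2, len(yours))
        out += ["<<<<<<<", *mine[j:j2], "|||||||", *base[i:i2],
                "=======", *yours[k:k2], ">>>>>>>"]
        i, j, k = i2, j2, k2
    return out

base  = ["common line 1", "common line 2", "text from base.txt",
         "more text from base.txt", "common line 3", "common line 4",
         "common line 5", "common line 6"]
mine  = ["common line 1", "common line 2", "text from mine.txt",
         "common line 3", "common line 4", "same text",
         "common line 5", "common line 6"]
yours = ["common line 1", "common line 2", "text from yours.txt",
         "common line 3", "common line 4", "same text",
         "common line 5", "common line 6"]
print("\n".join(merge3(base, mine, yours)))
```

On the example above this reproduces the conflict layout shown, including the mine/yours-identical insertion as a conflict with an empty base section.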
|
Historically, the Visual Studio project files weren't XML files, and were gradually migrated to the now common XML MSBuild format. (This is discussed [here](https://github.com/dotnet/msbuild/issues/1730#issuecomment-671344494).) In particular, the VDPROJ project (the old-style MSI installer project) is not an XML project at all, but this project type is no longer supported in Visual Studio, and WIXPROJ (a WiX Toolset project - a third-party project type for creating MSI installers) is recommended instead. (The project type can also be supported using a Visual Studio extension.) A leftover of that era is the solution (SLN) file, which still hasn't been migrated to XML; see [this question](https://stackoverflow.com/questions/6656016/why-doesnt-sln-files-use-xml-format-in-net). |
Somehow I am passing a nil value to an NSDictionary when I select an image from the media library or take a photo with the camera (using Expo ImagePicker):
```
Exception '*** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]:
attempt to insert nil object from objects[0]' was thrown while invoking
sendRequest on target Networking with params (
{
    data = {
        blob = {
            "__collector" = {
            };
            blobId = "0961d8b6-cdc7-4d9b-9ef0-772c0f9141d2";
            lastModified = 1710514486398;
            offset = 0;
            size = 3735997;
            type = "";
        };
        trackingName = unknown;
    };
```
Here is the sending function with the blob in it. Let me know if you need to see anything else.
```js
const sendToBackend = async (uri) => {
  setImageLoading(true);
  const blob = await new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.onload = function () {
      resolve(xhr.response);
    };
    xhr.onerror = function (e) {
      console.error(e);
      reject(new TypeError("Network request failed"));
    };
    xhr.responseType = "blob";
    xhr.open("GET", uri, true);
    xhr.send(null);
  });
  const filename = new Date().getTime() + ".jpg";
  const storageRef = ref(storage, 'profileimages/' + filename);
  const uploadTask = uploadBytes(storageRef, blob);
  try {
    await uploadTask.then((snapshot) => {
      console.log('Uploaded a blob or file!');
      getDownloadURL(snapshot.ref)
        .then((downloadURL) => {
          console.log('File available at', downloadURL);
          updateDoc(docRef, {
            image: downloadURL
          })
            .then(() => {
              console.log("Document successfully updated!");
              getUser();
              setImageLoading(false);
            })
            .catch((error) => {
              console.error("Error updating document: ", error);
              setImageLoading(true);
            });
        });
    });
  } catch (error) {
    console.error(error);
    setImageLoading(true);
  }
  blob.close();
};
```
|
[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0] |
|javascript|react-native| |
Yes, see rules 9 and 10 for `std430` in the [OpenGL 4.6 core specification](https://registry.khronos.org/OpenGL/specs/gl/glspec46.core.pdf#page=166&zoom=100,168,308): if the member is an array of S structures, the base alignment of the structure is N, where
N is the largest base alignment value of any of its members. In the case of `std140`, this would be rounded up to the base alignment of a `vec4`; that rounding does not apply to `std430`. However, since the largest element in your case is a `vec4`, it makes no difference here. |
I installed Visual Studio 2022 and created a new ASP.NET Core web app project with the .NET 8.0 framework, but after creating the project, Solution Explorer shows no ASP.NET application files, and I am confused.
I installed .NET 8.0 and Visual Studio 2022; in Visual Studio I installed 3 workloads - ASP.NET & web development, Azure development, and .NET desktop development.
1. I created a new project in Visual Studio 2022 by choosing "ASP.NET Core web app (razor pages)"
[creating project](https://i.stack.imgur.com/bp3ro.png)
2. and gave the project a name and location [configuration](https://i.stack.imgur.com/UhL5g.png)
3. and then I chose the .NET 8.0 framework and created the project [enter image description here](https://i.stack.imgur.com/WNIXu.png)
4. this is the result; it says solution 0 projects [enter image description here](https://i.stack.imgur.com/Dq0cT.png)
But the templates, .sln and .csproj are created in the specified folder, so I right-click on the solution [enter image description here](https://i.stack.imgur.com/w2MIq.png) and try to add the existing .csproj file,
but the final result is this [enter image description here](https://i.stack.imgur.com/OPYRh.png) |
ASP.NET Core web app not getting created in Visual Studio 2022 |
|asp.net|visual-studio|asp.net-core|azure-devops|.net-8.0| |
null |
I am trying to open and run a model that was created in AnyLogic 7 in AnyLogic 8 (8.8.6, specifically), but I get the following error: [error message for opening model](https://i.stack.imgur.com/EKONC.png). Please advise. This model was created by my advisor and published on GitHub, with access via the Data Availability Statement in McEwan et al. 2015, PLoS One. The source code of the AnyLogic model can be downloaded from https://github.com/gmcewan/SalmonFarmRefugia.
|
I am trying to open/run a model that was created in AnyLogic 7 in the AnyLogic 8 software and I am unable to open the model |
|anylogic| |
null |
I am a Python developer and I am completely new to Java; is there a way in Java to do something like this?
```python
array[start:end] = [True] * (end - start)
```
and something like this?
```python
array[start:end:2] = [True] * ((end - start) // 2)
```
I found something like this:
```java
Arrays.copyOfRange(oldArray, startIndex, endIndex);
```
but I do not know how to use it in my case.
|
The problem is that my current implementation of the custom error component uses type casting, which I would like to get rid of.
Here is what I tried. I have following form values:
```ts
type Item = {
type: string;
code: string;
color?: string;
value: number;
}
type MyFormValues = { items: Item[] }
```
From `react-hook-form` I receive e.g. following errors for `items`:
```ts
"items": [
{
"type": {
"message": "validation_message_required_value",
...
},
"value": {
"message": "validation_message_required_value",
...
},
}
]
```
And then I want to display all the errors for each item within one component. So in my case there are four fields next to each other and below them there would be the two error messages like this:
```ts
Type: "Field is required."
Value: "Field is required."
```
Currently I have this implementation:
```ts
import React from 'react';
import { FieldError, FieldErrorsImpl, Merge } from 'react-hook-form';
import { FieldValues } from 'react-hook-form/dist/types';
type ItemErrorProps<T extends FieldValues> = {
error?: Merge<FieldError, FieldErrorsImpl<T>>;
};
const CustomFieldError = <T extends FieldValues>({
error,
}: ItemErrorProps<T>) => {
if (!error) {
return null;
}
return (
<>
{Object.keys(error).map(fieldErrorKey => {
const fieldError = (error[fieldErrorKey] ?? {}) as FieldError;
const message = fieldError?.message;
const messageKey = ['form_field', fieldErrorKey]
.filter(x => x)
.join('_');
return (
// not included in this example to keep it short
<Message
key={fieldErrorKey}
messageKey={messageKey}
message={message}
/>
);
})}
</>
);
};
```
It works, but it uses a type cast (`as FieldError`). Is it possible to make it work without the type casting, and also for items with other properties? |
My website incorporates multilingual functionality, allowing content to be displayed in various languages. To facilitate this, language-specific data is stored in the MySQL database within JSON fields. This approach enables flexible storage of language-specific information, with each JSON field containing data in different languages.
Example of the table:
articles - title - json - example:
`{"en": "English Title", "es": "Título en Español", "id": "Judul dalam Bahasa Indonesia"}`
And this is my Article model:
```php
protected function casts(): array
{
return [
'title' => 'json',
'resume' => 'json',
'content' => 'json',
];
}
```
I tried to retrieve only the desired language value from the JSON attribute in my Laravel model using the following approach:
```php
protected function title(): Attribute
{
return Attribute::make(
get: fn ($value) => $value['en'],
set: fn ($value) => $value['en'],
);
}
```
Despite various attempts, I'm still unable to retrieve only the desired language value. When attempting to access the specific language using syntax like `$value->en`, I encounter errors such as "Cannot access offset of type string on string".
My objective is to receive JSON data from articles in a format like `articles[{ "id" : 1, "title" : "english title"}]`, containing only the necessary language data. However, I'm currently receiving the entire JSON object instead.
I've explored using Laravel's JSON.php class as documented, but the issue persists.
Additionally, I tried the following alternative approach:
`protected function getTitleAttribute() { return $this->title['en']; }`
However, the problem remains unresolved.
In summary, I'm struggling to access a single value from the JSON attribute in my model, and even defining a cast hasn't resolved the issue. This limitation hampers my ability to efficiently transmit only the necessary data to my React frontend. |
How to make Laravel attribute cast get only one value? |
|php|laravel|laravel-9|laravel-10| |
null |
I have a list of collapsible divs that is populated with data from a geoJson file and updated dynamically when you zoom in and out of a map, to reflect the markers that are within the bounds of the map. When you click on the Collapsible div it opens to show details about the feature. I would like to add a link/button underneath the information shown that says 'Zoom To Marker', that zooms the map to the marker location (no surprise there!) when it is clicked. I have tried many ways of doing this but whatever I do the ZoomTo() function I call is always undefined. The ZoomTo() function is above the function in the external javascript file that creates the list and link. Driving me crazy, it seems like it should be so simple.
The link shows OK, but when it is clicked the following error occurs:
```
Uncaught ReferenceError: ZoomTo is not defined
    <anonymous> javascript:ZoomTo(extent);:1
```
I have tried a variety of suggestions from Stack Overflow but to no avail: making it global, having the function above, using buttons and appending them. The functions are within an initMap function that is called when the window loads.
The FeatureType function is called from event handlers when the map changes.
```js
function ZoomTo(extent)
{
    //map.getView().fit(extent,{padding: [100, 100, 100, 100],maxZoom:15, duration:500});
    alert(extent);
}

function FeatureType(mapfeaturetype, mapExtent)
{
    htmlStr = "<div class='accordion' id='accordianExample'>";
    // iterate through the feature array
    for (var i = 0, ii = mapfeaturetype.length; i < ii; ++i)
    {
        var featuretemp = mapfeaturetype[i];
        // get the geometry for each feature point
        var geometry = featuretemp.getGeometry();
        var extent = geometry.getExtent();
        extent = ol.proj.transformExtent(extent, 'EPSG:3857', 'EPSG:4326');
        // If the feature is within the map view bounds display its details
        // in a collapsible div in the side menu
        var inExtent = (ol.extent.containsExtent(mapExtent, extent));
        if (inExtent)
        {
            htmlStr += "<div class='accordion-item'><div class='accordion-header' id='oneline'>";
            htmlStr += "<span class='image'><img src='" + imgType + "'></span><span class='text'>";
            htmlStr += "<a class='btn' data-bs-toggle='collapse' data-bs-target='#collapse" + i + "' href='#collapse" + i + "'";
            htmlStr += " aria-expanded='false' aria-controls='collapse" + i + "'>" + featuretemp.get('Name') + "</a></span>";
            htmlStr += "</div><div id='collapse" + i + "' class='accordion-collapse collapse' data-bs-parent='#" + accordionid + "'>";
            htmlStr += "<div class='accordion-body'><h3>" + featuretemp.get('Address') + "</h3><h3>" + featuretemp.get('ContactNo') + "</h3>";
            htmlStr += "<h5><a href='" + featuretemp.get('Website') + "' target='_blank'> " + featuretemp.get('Website') + " </a></h5>";
            htmlStr += "<h5>" + featuretemp.get('Email') + "</h5><h5>" + featuretemp.get('Descriptio') + "</h5>";
            htmlStr += "<div id='zoom'><a href='javascript:ZoomTo(extent);'>Zoom to Marker</a></div>";
            htmlStr += "</div></div></div>";
        } // end if
    } // end loop
    htmlStr += "</div>";
    document.getElementById("jsoncontent").innerHTML += htmlStr;
}
```
|
|python|python-3.x|django|django-models| |
null |
This is likely a bug in MAUI; as a workaround, you can apply the style below in your "Platforms/Windows/App.xaml":
```xaml
<maui:MauiWinUIApplication.Resources>
<SolidColorBrush x:Key="ComboBoxForeground" Color="{StaticResource ComboBoxDropDownGlyphForeground}" />
<Color x:Key="ComboBoxDropDownGlyphForeground">Black</Color>
<Thickness x:Key="ComboBoxDropdownBorderThickness">5</Thickness>
<SolidColorBrush x:Key="ComboBoxBorderBrush" Color="LightGray" />
<SolidColorBrush x:Key="ComboBoxBorderBrushPointerOver" Color="{StaticResource ComboBoxDropDownGlyphForeground}" />
<SolidColorBrush x:Key="ComboBoxForegroundPointerOver" Color="{StaticResource ComboBoxDropDownGlyphForeground}" />
<SolidColorBrush x:Key="ComboBoxForegroundPressed" Color="{StaticResource ComboBoxDropDownGlyphForeground}" />
<SolidColorBrush x:Key="ComboBoxDropDownBackground" Color="White" />
<SolidColorBrush x:Key="ComboBoxItemForeground" Color="{StaticResource ComboBoxDropDownGlyphForeground}" />
<SolidColorBrush x:Key="ComboBoxItemForegroundPointerOver" Color="Blue" />
</maui:MauiWinUIApplication.Resources>
```
[Win UI reference](https://github.com/microsoft/microsoft-ui-xaml/blob/winui3/release/1.4-stable/controls/dev/ComboBox/ComboBox_themeresources.xaml) |
I want to implement a set of classes, that can be used both with UInt16 and UInt32 type parameters.
The first one represents the header of a telegram, that can be used both with UInt32 and UInt64 data types.
*Any non-essential fields have been omitted!*
Currently I am working with .NET Framework 4.7.2 and VS 2017.
Upgrading to another framework may be an option, but could break other things, like dependencies...
```csharp
public class Header<T> // where T : struct, IConvertible
{
    public Header(T off, T len)
    {
        Offset = off;
        LenData = len;
    }

    public T Offset;
    public T LenData;

    public byte[] ToByteArray()
    {
        byte[] bTemp = new byte[2 * Marshal.SizeOf(typeof(T))];
        if (Offset is UInt16)
        {
            Array.ConstrainedCopy(System.BitConverter.GetBytes(Convert.ToUInt16(Offset)), 0, bTemp, 0, Marshal.SizeOf(typeof(T)));
            Array.ConstrainedCopy(System.BitConverter.GetBytes(Convert.ToUInt16(LenData)), 0, bTemp, Marshal.SizeOf(typeof(T)), Marshal.SizeOf(typeof(T)));
        }
        else if (Offset is UInt32)
        {
            Array.ConstrainedCopy(System.BitConverter.GetBytes(Convert.ToUInt32(Offset)), 0, bTemp, 0, Marshal.SizeOf(typeof(T)));
            Array.ConstrainedCopy(System.BitConverter.GetBytes(Convert.ToUInt32(LenData)), 0, bTemp, Marshal.SizeOf(typeof(T)), Marshal.SizeOf(typeof(T)));
        }
        else
        {
            throw new NotSupportedException("Type " + Offset.GetType() + " not supported");
        }
        return bTemp;
    }
}
```
This class gives me a byte array for the telegram header (2 + 2 bytes for T = UInt16 and 4 + 4 for T = UInt32).
Now I need another class, which uses the header class to build the byte array for the telegram:
```csharp
class Telegram<T>
{
    public Header<T> header;

    public Telegram(T offset, byte[] bData)
    {
        header = new Header<T>(offset, bData.Length);
    }
}
```
On its own, the class works as expected:
```csharp
static void Main(string[] args)
{
    byte[] bData = new byte[16];

    // This call works
    Header<UInt16> ui16Header = new Header<ushort>(12345, (ushort)bData.Length);
    byte[] b0 = ui16Header.ToByteArray();
```
`b0` contains the desired telegram content...
Then I want to create two telegrams (for both supported data types):
```csharp
    // These calls don't work (because of a compile error, or
    // because of a thrown exception - see below)
    Telegram<UInt16> telegram16 = new Telegram<ushort>(7000, bData);
    Telegram<UInt32> telegram32 = new Telegram<uint>(70000, bData);
}
```
The constructor of the class *Telegram* from above does not compile (can't convert `int` to `T`).
After changing the constructor as follows, it compiles:
```csharp
public Telegram(T offset, byte[] bData)
{
    object ot = bData.Length;
    T len = (T)ot;
    header = new Header<T>(offset, len);
}
```
But this leads to a runtime exception.
Can this problem be solved? I know that I could simply write two classes that each contain all the type-specific methods, but only the fields in the header differ between UInt16 and UInt32.
Or would it be easier to work with inheritance and interfaces?
|
How to separate polygon and point legends in map ggplot2 |
|r|ggplot2| |
null |
|javascript|post|request|playwright| |
As the title says, I am working on an Android app that has already been published to the Google Play Store (and is therefore signed). I have been given full permissions (admin) to the app through the Google Play Console. I assume that this way the client doesn't have to share his signing key, but I am having trouble figuring out the required steps. As I understand it, the dev signing key won't do, because the app needs to actually be associated with the correct account in order for in-app purchases to work.
Indeed when I run the app with the dev signing key the store doesn't recognize the products.
In the 'App Signing' section of the app I see three options:
1. Use existing app signing key from Java KeyStore
2. Use existing app signing key from another repository
3. Use a new app signing key (this requires ongoing dual releases)
On the Apple/iOS side of things, as soon as I was given permissions I was able to directly use the client's signing keys to sign the app through Xcode. Is there a way to achieve similar results here?
Of course I would like to avoid disrupting the app distribution as well as making the client depend on my keys, so I can't really play around and see what works. |
How do I sign a client's Android app in order to test in-app purchases? |
|android|google-play|google-play-console|android-app-signing| |
null |
When writing async code, you need to code to the principle, not the implementation.
> Would the following program be safe to run without locks?
No; the operation is non-atomic, so it is a race condition. A task could read the variable, any number of other tasks could read and write the value in the meantime, and then the original task could update the variable. Protect the data.
In practice, if you are using a single thread for async, you'll never see the race condition. If you're using multiple threads, you still will not see the race condition, because CPython has the GIL, which a thread would have to release between reading and writing the variable. That won't happen, *but it is not guaranteed*. For example, you could use a different Python implementation, or interface with C code that releases the GIL.
> Does eviction need to be user-defined?
I am taking eviction to mean the OS will switch context and let something else run. There is no guarantee. If you're running a CPU-bound task, then Python offers no guarantee that it will be interrupted from time to time to check the async event loop. So start long-running, expensive tasks with that expectation. A user-defined check could be appropriate, or something similar to the answer I linked in the comment (https://stackoverflow.com/a/71971903/2067492): create a future that you can include in your asynchronous workflow.
Consider that you have two tasks that you've submitted asynchronously. Each task has actions A1, A2, ..., AN and B1, B2, ..., BN.
When thinking of real-time execution, you have to consider that any order is possible, e.g.:
a1, a2, a3, ... aN b1, b2, ..., bN
That is a common execution order. But it could be:
a1, b1, a2, b2, a3, ... bN, aN.
Or even:
b1, b2, b3, ... bN, a1, a2, ... aN
The thing is, async gives you the tools to make sure these tasks execute how you want them to. You can have `a3` be an action that waits for `b3`, and then our ordering possibilities are greatly reduced.
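The lost update described above can be forced deterministically in a minimal sketch by putting an `await` between the read and the write of the shared variable (the `asyncio.sleep(0)` here is an artificial suspension point standing in for any real await):

```python
import asyncio

shared_counter = 0

async def increment():
    global shared_counter
    value = shared_counter      # read the shared value
    await asyncio.sleep(0)      # suspension point: every other task runs here
    shared_counter = value + 1  # write back a now-stale value

async def main():
    # ten concurrent increments of the same counter
    await asyncio.gather(*(increment() for _ in range(10)))

asyncio.run(main())
print(shared_counter)  # 1, not 10: all ten tasks read 0 before any wrote
```

Without the `await` between read and write, single-threaded asyncio would run each increment atomically and print 10; the suspension point is what exposes the race.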
In your example, `shared_counter += 1` would be 3 actions: a1 is the read, a2 is value + 1, and a3 is the write. |
Can't call javascript function from an href tag in innerHTML - undefined |
|javascript|undefined|innerhtml|dynamic-list| |
null |
Given the prerequisites of your original problem description, including the following, we consider the status quo not something to touch (thanks for being that honest in sharing them):
> Since setting up a CI/CD environment is (of course depending on the whole tech-stack) quite a lot of work, someone like a DevOps is needed to be the "Godfather" of the system, but this doesn't mean that he is the one using this system, he is the one who takes care of it.
When encountering the following problem:
> * The Developer has quite often the attitude "i'm done as soon i have pushed the code".
> * The DevOps takes care of the CI/CD pipeline, extends it, fixes it, ... you name it. But doesn't care about what the developed application does.
Then this could be a sign that there are potential benefits in gaining a better understanding of the CI/CD systems.
Luckily, you already have a person who signs themselves responsible for keeping the concrete CI/CD systems operating, and you have developers pushing code.
Good preconditions, but it looks like no one is responsible for the deployments.
Name it and remove the impediment.
Happy deployments.
---
Ah, and your concrete questions:
> Who triggers (clicks on the button) the release to the Quality Environment and who triggers the release to the Productive Environment?
If you don't know, then who should be able to say? As I wrote above, it looks like no one is responsible for that, and you let it happen by fortune/accident/sheer luck or by throwing dice. We don't know.
> and why?
Well, this depends on the project; some projects are under the requirement to actually deploy software. That is then the reason.
This is normally known upfront. The short answer is: to install it. |
null |
You may consider this solution, which would work with any version of `awk`:
```sh
printf '%s\n' "${arr[@]}" |
awk -F, '
{
for(i=1; i<NF; ++i) {
row[$i] = (row[$i] == "" ? "" : row[$i] ", ") $NF
++fq[$i]
}
}
END {
for (k in fq) print fq[k], k ":", row[k]
}' | sort -rn -k1
3 test: meta, amazon, google
2 my: amazon, google
1 this: meta
1 hello: microsoft
```
Note that I have used `sort` to get output as per your shown expected output. If you don't care about ordering, you can remove the `sort` command. |
{"Voters":[{"Id":11002,"DisplayName":"tgdavies"},{"Id":17795888,"DisplayName":"Chaosfire"},{"Id":9214357,"DisplayName":"Zephyr"}],"SiteSpecificCloseReasonIds":[11]} |
Check if you have added a context path.
If yes, you need to try:
>http://localhost:8080/{{serviceName}}/actuator/health
And check the following in either the `.properties` or `.yaml` file:
`management.endpoints.web.exposure.include="*"` to include all endpoints on the actuator, or `management.endpoints.web.exposure.include=health` if you need only the health endpoint
|
null |
Check if you have added a context path.
If yes, you need to try:
>http://localhost:8080/{{serviceName}}/actuator/health
And check the following in either `.properties` or `.yaml` file :
- `management.endpoints.web.exposure.include = * ` : if you want to include all endpoints on actuator
- `management.endpoints.web.exposure.include = health` : if you need only health endpoint
|
I just started to look at the documentation of Angular Signals and I am not able to make heads or tails of how to use it in the Authentication Service that I have made previously.
```ts
import { Injectable } from '@angular/core';
import {
User,
UserCredential,
createUserWithEmailAndPassword,
getAuth,
signInWithEmailAndPassword,
} from '@angular/fire/auth';
import { LoginUser, SignUpUser } from '../models/user.model';
import { catchError, from, map, of, switchMap, tap, throwError } from 'rxjs';
import { HttpErrorResponse } from '@angular/common/http';
import { Auth, authState } from '@angular/fire/auth';
import { Router } from '@angular/router';
import { ToastrService } from 'ngx-toastr';
@Injectable({
providedIn: 'root',
})
export class AuthService {
authState$ = authState(this.angularAuth);
constructor(
private angularAuth: Auth,
private router: Router,
private toastrService: ToastrService
) {}
signUp(user: SignUpUser) {
let auth = getAuth();
return from(
createUserWithEmailAndPassword(auth, user.email, user.password)
).pipe(
catchError((error: HttpErrorResponse) => {
this.handleError(error);
return of({});
}),
tap((userCredential: UserCredential) => {
if (Object.keys(userCredential).length !== 0) {
console.log(userCredential);
this.toastrService.success(
'Account created successfully',
'Success',
{
timeOut: 5000,
}
);
this.router.navigate(['login']);
}
})
);
}
logIn(user: LoginUser) {
let auth = getAuth();
return from(
signInWithEmailAndPassword(auth, user.email, user.password)
).pipe(
catchError((error: HttpErrorResponse) => {
this.handleError(error);
return throwError(() => error);
}),
switchMap(() => {
return this.authState$.pipe(
map((user: User) => {
if (user) {
this.setUser(user);
this.toastrService.success('Successfully logged in', 'Success', {
timeOut: 2000,
});
this.router.navigate(['/leaderboards']);
}
})
);
})
);
}
setUser(user: User) {
localStorage.setItem('user', JSON.stringify(user!.toJSON()));
}
get isLoggedIn(): boolean {
const user = localStorage.getItem('user');
return !!user;
}
logout() {
return from(this.angularAuth.signOut()).pipe(
catchError((error: HttpErrorResponse) => {
this.handleError(error);
return throwError(() => error);
}),
tap(() => {
localStorage.removeItem('user');
this.toastrService.success('Successfully logged out', 'Success', {
timeOut: 3000,
});
this.router.navigate(['/login']);
})
);
}
handleError(errorMsg: HttpErrorResponse) {
let errorMessage;
// console.log(errorMsg);
if (errorMsg.message.includes('(auth/email-already-in-use)')) {
errorMessage = 'Email already exists. Enter a new Email';
} else if (errorMsg.message.includes(' (auth/invalid-credential)')) {
errorMessage = 'Invalid Email or Password';
} else {
errorMessage = 'An Unknown error has occurred. Try again later';
}
// Alternate Method (For Reference )
// console.log(errorMsg.message);
// switch (errorMsg.message) {
// case 'Firebase: The email address is already in use by another account. (auth/email-already-in-use).':
// errorMessage = 'Email already exists. Enter a new Email';
// break;
// case 'Firebase: Error (auth/invalid-credential).':
// errorMessage = 'Invalid Email or Password';
// break;
// default:
// errorMessage = 'An Unknown error has occurred. Try again later';
// break;
// }
this.toastrService.error(errorMessage, 'Error', {
timeOut: 4000,
});
}
}
```
How can I use Angular Signals for anything in this service, even if it is a small change? Any sort of help or hint will be appreciated.
So far I have not been able to implement any changes. |
Understanding how to apply Angular Signals from beginning on an existing service |
|angular|angularjs|angular-signals| |
null |
I'm defining the following flag for my CLI in Golang:
```go
var flags flag.FlagSet
phoneRegexp := flags.String("phone_regexp", "", "custom regex for phone checking.")
```
But, it fails when I'm passing the following argument:
```
./cli --phone_regexp='^\d{1,5}$'
cli: no such flag -5}$
```
While I understand that this is a parsing problem (it encounters the comma and thinks it's a new flag), I cannot figure out how to escape it (I tried adding a \ before the comma) or how to better define the Go flag. Does anyone know how to solve this problem?
## Edit
As I tried to make the question a little bit more generic, we couldn't reproduce the error. I'm creating a protoc plugin, thus the options are parsed like so:
```go
import (
"flag"
"google.golang.org/protobuf/compiler/protogen"
)
func main() {
var flags flag.FlagSet
phoneRegexp := flags.String("phone_regexp", "", "custom regex for phone checking.")
opts := protogen.Options{
ParamFunc: flags.Set,
}
//...
}
```
and then the flag is set like the following (check is the plugin name):
```bash
protoc ... --check_opt=phone_regexp='^\d{1,5}$' my.proto
```
## Edit 2
I found the culprit [here][1]. They are splitting by comma without checking if it's inside a string value.
[1]: https://github.com/protocolbuffers/protobuf-go/blob/ec47fd138f9221b19a2afd6570b3c39ede9df3dc/compiler/protogen/protogen.go#L167 |
This is my models.py:
```python
class Person(models.Model):
    surname = models.CharField(max_length=100, blank=True, null=True)
    forename = models.CharField(max_length=100, blank=True, null=True)

    def __str__(self):
        return '{}, {}'.format(self.surname, self.forename)


class PersonRole(models.Model):
    ROLE_CHOICES = [
        ("Principal investigator", "Principal investigator"),
        [etc...]
    ]
    title = models.CharField(choices=TITLE_CHOICES, max_length=9)
    project = models.ForeignKey('Project', on_delete=models.CASCADE)
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
    person_role = models.CharField(choices=ROLE_CHOICES, max_length=30)

    def __str__(self):
        return '{}: {} as {}.'.format(self.project, self.person, self.person_role)


class Project(models.Model):
    title = models.CharField(max_length=200)
    person = models.ManyToManyField(Person, through=PersonRole)

    def __str__(self):
        return self.title

    def get_PI(self, obj):
        return [p.person for p in self.person.all()]  # I'll then need to filter where person_role is 'Principal investigator', which should be the easy bit.
```
In my Admin back-end I'd like to display the person (principal investigator) in the main table:
```python
class ProjectAdmin(ImportExportModelAdmin):
    list_filter = [PersonFilter, FunderFilter]
    list_display = ("title", "get_PI")
    ordering = ('title',)
```
What I want: display the person with the role 'Principal investigator' in the Projects table in the Admin.
You can see that I created my `get_PI()` in my `models.py` and referenced it in my `list_display`. I'm getting `Project.get_PI() missing 1 required positional argument: 'obj'`. What am I doing wrong? |
My task is to create combinations, more like a Cartesian product, for some attribute lines of a library file. I am currently facing the problem of grouping the same attributes (the adjacent parameters differ, of course) as sublists of a list. Remember, my input may contain a thousand lines of attributes, which I need to extract from a library file.
######################
Example input:
```
attr1 apple 1
attr1 banana 2
attr2 grapes 1
attr2 oranges 2
attr3 watermelon 0
```
######################
Example output:
```
[['attr1 apple 1','attr1 banana 2'], ['attr2 grapes 1','attr2 oranges 2'], ['attr3 watermelon 0']]
```
The result I am getting:
```
['attr1 apple 1','attr1 banana 2', 'attr2 grapes 1','attr2 oranges 2', 'attr3 watermelon 0']
```
Below is the code:
```
import re

# regex pattern definition
pattern = re.compile(r'attr\d+')

# Open the file for reading
with open(r"file path") as file:
    # Initialize an empty list to store matching lines
    matching_lines = []
    # reading each line
    for line in file:
        # regex pattern match
        if pattern.search(line):
            # matching line append to the list
            matching_lines.append(line.strip())

# Grouping the elements based on the regex pattern
# The required list
grouped_elements = []
# Temporary list for sublist grouping
current_group = []
for sentence in matching_lines:
    if pattern.search(sentence):
        current_group.append(sentence)
    else:
        if current_group:
            grouped_elements.append(current_group)
        current_group = [sentence]
if current_group:
    grouped_elements.append(current_group)

# Print the grouped elements
for group in grouped_elements:
    print(group)
```
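For what it's worth, I can produce the exact output I want on a hard-coded sample with `itertools.groupby` (grouping consecutive lines by their leading `attrN` token), so the grouping itself is clearly possible; my problem is only in the loop above:

```python
import itertools

# hard-coded sample standing in for the lines extracted from the library file
lines = [
    "attr1 apple 1",
    "attr1 banana 2",
    "attr2 grapes 1",
    "attr2 oranges 2",
    "attr3 watermelon 0",
]

# group consecutive lines that share the same leading "attrN" token
grouped = [list(g) for _, g in itertools.groupby(lines, key=lambda s: s.split()[0])]
print(grouped)
# → [['attr1 apple 1', 'attr1 banana 2'], ['attr2 grapes 1', 'attr2 oranges 2'], ['attr3 watermelon 0']]
```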
|
I'm writing a program that records from my speaker output using `pyaudio`. I am on a Raspberry Pi. I built the program while using the audio jack to play audio through some speakers, but recently have switched to using the speakers in my monitor, through HDMI. Suddenly, the program records silence.
```
from pyaudio import PyAudio
p = PyAudio()
print(p.get_default_input_device_info()['index'], '\n')
print(*[p.get_device_info_by_index(i) for i in range(p.get_device_count())], sep='\n\n')
```
The above code outputs first the index of the default input device of `pyaudio`, then the available devices. See the results below.
**Case A:**
```
2
{'index': 0, 'structVersion': 2, 'name': 'bcm2835 Headphones: - (hw:2,0)', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 8, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.0016099773242630386, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0}
{'index': 1, 'structVersion': 2, 'name': 'pulse', 'hostApi': 0, 'maxInputChannels': 32, 'maxOutputChannels': 32, 'defaultLowInputLatency': 0.008684807256235827, 'defaultLowOutputLatency': 0.008684807256235827, 'defaultHighInputLatency': 0.034807256235827665, 'defaultHighOutputLatency': 0.034807256235827665, 'defaultSampleRate': 44100.0}
{'index': 2, 'structVersion': 2, 'name': 'default', 'hostApi': 0, 'maxInputChannels': 32, 'maxOutputChannels': 32, 'defaultLowInputLatency': 0.008684807256235827, 'defaultLowOutputLatency': 0.008684807256235827, 'defaultHighInputLatency': 0.034807256235827665, 'defaultHighOutputLatency': 0.034807256235827665, 'defaultSampleRate': 44100.0}
```
If I then go into the terminal, enter `sudo raspi-config` and change the audio output to the headphone jack, I get an actual recording, not silence, and receive a different output from the above code.
**Case B:**
```
5
{'index': 0, 'structVersion': 2, 'name': 'vc4-hdmi-0: MAI PCM i2s-hifi-0 (hw:0,0)', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 2, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.005804988662131519, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0}
{'index': 1, 'structVersion': 2, 'name': 'bcm2835 Headphones: - (hw:2,0)', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 8, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.0016099773242630386, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0}
{'index': 2, 'structVersion': 2, 'name': 'sysdefault', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 128, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.005804988662131519, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0}
{'index': 3, 'structVersion': 2, 'name': 'hdmi', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 2, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.005804988662131519, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0}
{'index': 4, 'structVersion': 2, 'name': 'pulse', 'hostApi': 0, 'maxInputChannels': 32, 'maxOutputChannels': 32, 'defaultLowInputLatency': 0.008684807256235827, 'defaultLowOutputLatency': 0.008684807256235827, 'defaultHighInputLatency': 0.034807256235827665, 'defaultHighOutputLatency': 0.034807256235827665, 'defaultSampleRate': 44100.0}
{'index': 5, 'structVersion': 2, 'name': 'default', 'hostApi': 0, 'maxInputChannels': 32, 'maxOutputChannels': 32, 'defaultLowInputLatency': 0.008684807256235827, 'defaultLowOutputLatency': 0.008684807256235827, 'defaultHighInputLatency': 0.034807256235827665, 'defaultHighOutputLatency': 0.034807256235827665, 'defaultSampleRate': 44100.0}
```
You can see in case B that I now have access to many different devices. I've attempted recording from all three available inputs in case A, and both #0 and #1 fail. #1 also records silence, and #0 returns `OSError: [Errno -9998] Invalid number of channels`. If you look closely at case A, you'll see that #0 has `['maxInputChannels'] = 0`, so that's why.
I've attempted to create loopback devices that read from the sound output and introduce another input to pass the audio back in. I would then record from that input, as it would have input channels. I've researched on this thread [here][1], but the only solution is for Windows.
I have also attempted to create a loopback device using the `pulseaudio` utility `pactl`. This link [here][2] demonstrates what I have tried. Upon successfully creating a loopback, I'm unable to plug into it using `pyaudio`; it doesn't show up in the list of devices.
Does anybody know...
- How to record from a `pulseaudio` loopback using `pyaudio`?
- An alternative way of creating a loopback on Linux?
- An alternative way of using `pyaudio` to solve my problem?
Thanks very much.
[1]: https://stackoverflow.com/questions/23295920/loopback-what-u-hear-recording-in-python-using-pyaudio
[2]: https://askubuntu.com/questions/1295430/how-do-i-mix-together-a-real-microphone-input-and-a-virtual-microphone-using-pul |
Read from audio output in PyAudio through loopbacks |
|python|pyaudio|loopback|portaudio|pulseaudio| |
I need help. I have a dialog screen in MFC. The screen has an "Add" button which displays a member (up to 8) each time it is clicked. I want to enable a scrollbar once Member 6 is added. How can I implement this?
The code is C++ using MFC. Kindly advise. Thank you.
![DialogBox[1]][1]
[1]: https://i.stack.imgur.com/siGni.png |
How to enable scrollbar in a specific group member added |
|c++|mfc| |
I distribute a ready-to-run software for Windows written in Python by:
* shipping the content of an embedded version of Python, say `python-3.8.10-embed-amd64.zip`
* adding a `myprogram\` package folder (= the program itself)
* simply run `pythonw.exe -m myprogram` to start the program
It works well (a possible simple alternative to cx_Freeze, etc., but off topic here).
The tree structure is:
```
main\
_asyncio.pyd
_bz2.pyd
... + other .pyd files
libcrypto-1_1.dll
... + other .dll files
python.exe
pythonw.exe
python38._pth
python38.zip
...
myprogram\ # here my main program as a module
PIL\ # dependencies
win32\
numpy\
... # many other folders for dependencies
```
**Is there a way to move all the dependencies folders to a subfolder and still have Python (embedded version) work? How to do so?** More precisely, like this:
```
main\
python.exe
pythonw.exe
python38._pth
python38.zip
...
myprogram\ # here my main program as a module
dependencies\
PIL\ # all required modules
win32\
numpy\
...
_asyncio.pyd # and also .pyd files
...
```
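For context on what I have been poking at: the embedded distribution resolves its import paths through the `python38._pth` file, where each non-comment line becomes a `sys.path` entry. A hedged sketch of the `._pth` I have been experimenting with (the `dependencies` line is my hypothetical target folder; I have not confirmed this is sufficient for `.pyd` extension modules and their DLL dependencies):

```
python38.zip
.
dependencies
# Uncomment to run site.main() automatically
#import site
```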
NB: the goal is here to use an embedded Python, which is *totally independent* from the system's global Python install. So this is independent from any environment variable such as `PYTHONPATH` etc. |
I'm making a Discord bot to send a simple button. I've created a class called "MyView" which just says "You clicked me!" upon clicking. I don't have much knowledge of the 2024 updates to discord.py, which invalidate most tutorials. So if anyone could tell me what's happening and an easy fix, it would be much appreciated.
Also, if anyone has a link to a better and more exact guide for discord.py, that would be helpful.
Here's my code
```
import discord
from discord import app_commands
from discord.ext import commands
from discord.ui import Item
from discord import utils

intents = discord.Intents.default()
intents.message_content = True
client = commands.Bot(command_prefix="!", intents=intents.all())

@client.event
async def on_ready():
    print(f'We have logged in as {client.user}')
    try:
        synced = await client.tree.sync()
        print(f'Synced {len(synced)} command(s)!')
    except Exception as e:
        print(e)

class MyView(discord.ui.View):  # Create a class called MyView that subclasses discord.ui.View
    @discord.ui.button(label="Click me!", style=discord.ButtonStyle.primary, emoji="")  # Create a button with the label "Click me!" with color Blurple
    async def button_callback(self, button, interaction):
        await interaction.response.send_message("You clicked the button!")

view = MyView()

@client.hybrid_command(name="button")
async def button(ctx):
    await ctx.send(view=view)

client.run(TOKEN)
```
Output:
```
Traceback (most recent call last):
  File "C:\Users\abhyu\Desktop\python\main2.py", line 31, in <module>
    view = MyView()
  File "C:\Users\abhyu\Desktop\python\main2.py", line 27, in __init__
    super().__init__()
  File "C:\Users\abhyu\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ui\view.py", line 192, in __init__
    self.__stopped: asyncio.Future[bool] = asyncio.get_running_loop().create_future()
RuntimeError: no running event loop

Process finished with exit code 1
```
|
I'm getting a "RunTimeError: No running event loop" exception in my discord.py python script. I don't know why it's happening |
|python|discord.py| |
null |
One potential solution to your problem is to convert the background image to Base64 and embed it in your CSS.
You can use many tools for the conversion; in most cases I use [this online tool](https://www.base64-image.de/).
Usage:
1. Download the image file.
2. Upload it on this site
3. In the *Encoding* section you can see the results. Click on *copy css* button.
4. Insert this in your CSS file in the following format: `background-image: <copied-code>`. The result looks like this `background-image: url('data:image/png;base64,<a-very-long-hexadecimal-string>')`
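If you prefer the command line, the same encoding can be done with the `base64` utility (a sketch; `bg.png` is a placeholder file name, and the `-w0` flag to disable line wrapping is GNU coreutils — on macOS/BSD the output is already unwrapped):

```shell
# Create a tiny placeholder file so the example is runnable end-to-end;
# in practice you would point this at your real background image.
printf 'placeholder-image-bytes' > bg.png

# Encode the file and write a CSS rule embedding it as a data URI
b64=$(base64 -w0 bg.png)
printf "background-image: url('data:image/png;base64,%s');\n" "$b64" > bg.css

cat bg.css
```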
|
You are probably experiencing an issue with a firewall. The 'problem' is that the port you specify is not the only port used, it uses 1 or maybe even 2 more ports for RMI, and those are probably blocked by a firewall.
One of the extra ports will not be known up front if you use the default RMI configuration, so you have to open up a big range of ports - which might not amuse the server administrator.
There is a solution that does not require opening up a lot of ports however, I've gotten it to work using the combined source snippets and tips from
<s>http://forums.sun.com/thread.jspa?threadID=5267091</s> - link doesn't work anymore
http://blogs.oracle.com/jmxetc/entry/connecting_through_firewall_using_jmx - link doesn't work anymore
http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
It's even possible to set up an SSH tunnel and still get it to work :-)
|
We have an EKS cluster for production with K8s version 1.28. We have to upgrade the version to 1.29 but are stuck because of the following error showing in the AWS Console in the Upgrade insights section:
[AWS Console][1]
I can see all the API versions are already up to date, but I don't understand why this error is showing in the console. We risk downtime if we upgrade while ignoring the error.
[1]: https://i.stack.imgur.com/UTQjN.png |
I am using different numerical methods to understand the results yielded from different types of integrators at different time steps. I am comparing the performance of each integration method by calculating the Mean Absolute Error (MAE) of the predicted energy against the analytical solution:
$$ MAE = \frac{1}{n} \sum_{i=0}^{n}\left | y_{analytical} - y_{numerical}\right |. $$
Then, for different time steps, I calculate the resulting MAE and plot the results on a log-log plot as shown below.
[enter image description here](https://i.stack.imgur.com/ozMkL.png)
The relation between MAE and time step matches my expectations (the Verlet method scales quadratically and the Euler-Cromer method scales linearly), but I notice that the Verlet method has a turning point at about 10^(-4) s. This seems slightly too large; I was expecting a turning point at time steps closer to 10^(-8) s, as I am using numpy's float64, which has about 15 to 17 decimal digits of precision.
I went on to plot the maximum and minimum errors obtained for each time step (excluding iteration 0, as those are the initial conditions, which are the same for both numerical and analytical methods) and these are the results: [[enter image description here](https://i.stack.imgur.com/Thyjd.png)](https://i.stack.imgur.com/dXxpA.png)
Again, when plotting the maximum error I obtain a minimum of similar value to the previous plot, but when plotting the minimum obtained error (these always happened in the first few iterations after the initial conditions) I find that the errors flatten out at 10^(-4) s and approach errors of about 10^(-15) J in the energy.
Because of this flattening of the minimum errors, it makes sense that going below 10^(-4) s does not increase the precision of the Verlet method, but I can't explain why the maximum errors grow after this point.
An explanation that comes to mind is the round-off error caused by float64, which should appear when values reach about 10^(-15) to 10^(-17). I have manually checked the position, velocity and acceleration that result from running the Verlet method, but their lowest values are of order 10^(-9), very far from 10^(-15).
(1) Is it possible that I am introducing a round-off error when I calculate the residual error between the analytical solution and the Verlet method?
(2) Are there other, more appropriate ways of calculating the error? (I thought MAE was a good fit because the Verlet method oscillates about the true system values.)
(3) Are there tweaks that could be done to expose possible flaws in my analysis? I have looked at my code extensively and am not able to find any bugs; furthermore, the Verlet method I coded does have an error which scales quadratically with time step, which makes me think the code itself is fine. (Maybe a possible attempt would be to use float128 throughout all calculations and then see if the above plots differ?)
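To make the scaling behaviour reproducible outside my full code, here is a minimal, self-contained velocity-Verlet sketch for an undamped oscillator (illustrative parameters, not my actual system) showing the expected second-order energy error before round-off dominates:

```python
def verlet_max_energy_error(dt, n_steps, k=1.0, m=1.0):
    """Velocity Verlet for an undamped harmonic oscillator; returns the
    maximum absolute energy error over the run (illustrative sketch)."""
    x, v = 1.0, 0.0
    a = -k / m * x
    e0 = 0.5 * m * v * v + 0.5 * k * x * x  # initial total energy
    max_err = 0.0
    for _ in range(n_steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -k / m * x
        v += 0.5 * (a + a_new) * dt
        a = a_new
        e = 0.5 * m * v * v + 0.5 * k * x * x
        max_err = max(max_err, abs(e - e0))
    return max_err

# Same total time T = 10 s at two time steps: halving dt should cut the
# energy error by roughly a factor of 4 for a second-order method.
e_coarse = verlet_max_energy_error(0.01, 1000)
e_fine = verlet_max_energy_error(0.005, 2000)
print(e_coarse / e_fine)  # ~4 while truncation error dominates
```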
Thanks in advance for any help with the above questions |
Optimum time step for Verlet's method to solve the damped simple harmonic motion ODE |
null |
You are probably experiencing an issue with a firewall. The 'problem' is that the port you specify is not the only port used, it uses 1 or maybe even 2 more ports for RMI, and those are probably blocked by a firewall.
One of the extra ports will not be known up front if you use the default RMI configuration, so you have to open up a big range of ports - which might not amuse the server administrator.
There is a solution that does not require opening up a lot of ports however, I've gotten it to work using the combined source snippets and tips from
<s>http://forums.sun.com/thread.jspa?threadID=5267091</s> - link doesn't work anymore
http://blogs.oracle.com/jmxetc/entry/connecting_through_firewall_using_jmx - link doesn't work anymore, see [Wayback Machine](https://web.archive.org/web/20161223171343/http://blogs.oracle.com/jmxetc/entry/connecting_through_firewall_using_jmx)
http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
It's even possible to set up an SSH tunnel and still get it to work :-)
|
Interestingly, under RHEL 8.9 and Node.js v12.22.0, I use npm to start my client and server instances. npm has a `stop` command, but it fails with a missing-script error each time. Based on your blog here, npx does work normally with `npx stop server`, for one example. However, nodemon is still resident in memory and ports are still held by nodemon. |
In this case, Jackson is actually behaving as intended and there is an issue in your deserialization logic. Ultimately, you want bad enum values to throw an error and return that error to the user. This is in fact the default behaviour of Spring and Jackson, and will result in an `HTTP 400 BAD REQUEST` error. IMO this is the appropriate error to return (not 404), since the user has supplied bad input.
Unless there is a specific reason for you to implement a custom `@JsonCreator` in your enum class, I would get rid of it. What is happening here is that Jackson is being told to use this method for converting a string into an enum value instead of the default mechanism. When a text is passed that is not a valid value of your enum, you return `null`, which results in those values deserializing to null.
A quick fix would be to delete the `@JsonCreator` and allow Jackson to use its default behaviour for handling enums. The extra property methods you have added are unnecessary in most cases:
```java
public enum CouponOddType {
    BACK("back"),
    LAY("lay");

    private String value;

    CouponOddType(String value) {
        this.value = value;
    }
}
```
If you need to preserve the creator for some other reason, then you will need to add business logic to determine if any of the `enum` values in the arrays evaluated to `null`.
```java
private Response someSpringRestEndpoint(@RequestBody CouponQueryFilter filter) {
    if (filter.getOddTypes() != null && filter.getOddTypes().contains(null)) {
        throw new CustomException();
    }
    if (filter.getStatuses() != null && filter.getStatuses().contains(null)) {
        throw new CustomException();
    }
    // ... other business logic
}
``` |
null |
I am plotting covariates against predicted occupancy models using the unmarked package. Three of my covariates are continuous, so I have plotted them using the predict function and ggplot's geom_ribbon. However, this obviously doesn't work with categorical/factor variables; I would like to plot, as a boxplot, the predicted occupancy for the two discrete categories within my factor covariate.
The dataset umf is the unmarked frame with site covariates, obs covariates and the capture history of the individual species. I have included the code for the null model, a continuous covariate model (path_dist) and the point I am at with the categorical covariate (fox_presence). The categorical covariate has two levels, present and absent, and is treated as a factor in the dataset. I have tried to use the same predict function as with the continuous and null models but changed the type to "response"; however, this produces an error.
Is there any way I can model and plot categorical covariates against individual species occupancy in the unmarked package? I have cut out the modelling of the other continuous variables as it's just repetition, but that is why the model numbering jumps from m2 to m5.
    m1 <- occu(formula = ~1
                         ~1,
               data = umf)

    m2 <- occu(formula = ~1          # detection formula first
                         ~path_dist, # occupancy formula second
               data = umf)

    newDat <- cbind(expand.grid(path_dist = seq(min(cov$path_dist), max(cov$path_dist),
                                                length.out = 100)))

    # predict psi (type = "state") and confidence intervals based on our
    # model for these road distances
    newDat <- predict(m2, type = "state", newdata = newDat, appendData = TRUE)

    p1 <- ggplot(newDat, aes(x = path_dist, y = Predicted)) +
      geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.5, linetype = "dashed") +
      geom_path(size = 1) +
      labs(x = "Distance to path", y = "Occupancy probability") +
      theme_classic() +
      coord_cartesian(ylim = c(0, 1))

    # fox_presence model / categorical covariate
    cov$Fox_Presence <- factor(cov$Fox_presence)

    m5 <- occu(formula = ~1
                         ~Fox_presence, data = umf)

    newDat4 <- cbind(expand.grid(Fox_presence = seq(cov$Fox_presence),
                                 (cov$Fox_presence), length.out = 100))

    newDat4 <- predict(m5, type = "response", newdata = newDat4,
                       appendData = TRUE)
    # error: valid types are state, det
|python|numpy|numerical-integration|verlet-integration| |
null |
|c#|generics|.net-4.7.2| |
I've implemented something similar a while ago.
For a nice type to help with declaring your default config:

    type Primitive = undefined | null | boolean | string | number;

    // Deeply make all fields optional
    type DeepOptional<T> =
        T extends Primitive | any[] ? T           // If primitive or array then return the type as-is
        : {[P in keyof T]?: DeepOptional<T[P]>};  // Otherwise make each key optional and recurse

Then you can define your default config with IntelliSense like:

    const a: DeepOptional<Config> = {...};

For the resulting type to be correct: what you want to achieve can be done with a merge function returning an intersection of the types, like:

    function merge<A extends Object, B extends Object>(a: A, b: B): A & B

Since the result is `A & B`, B will overwrite A's optional marking if the same key is present without it.
You can check this out for a working example: https://github.com/Aderinom/typedconf/blob/master/src/config.builder.ts#L118
----------
*Edit: changed incorrect wording (union to intersection) based on jcalz's comment* |
I'm new to coding and want to write a program in Wolfram Mathematica.
What I want to do is decompose a perfect number into fractions for each digit.
Example for the 2nd perfect number:
I want 28 to be written as 20/100 and 8/100. I want to do this for the bigger perfect numbers as well, so I need a general approach.
Next I want to find the number of common divisors of 20 and 100 (which is 6) and the number of common divisors of 8 and 100 (which is 3).
This seems simple, but for bigger numbers I get discrepancies between Wolfram Alpha calculations and Wolfram Mathematica calculations.
I can't really find the problem, but the list I found was:
1st (x): 3
2nd (y): 6, 3
3rd (z): 12, 4, 2
4th (g): 16, 9, 6, 4
5th (t): 64, 49, 42, 30, 0, 9, 4, 2
6th (r): 110, 90, 88, 49, 54, 30, 16, 0, 6, 2
But I get a different output in Mathematica. It seems that Mathematica doubles some numbers and sometimes just adds a few (for the 3rd number I expect 12, 4 and 2, but Mathematica gives me 15, 12 and 4).
The code I have made up until now is:
    (* Define the perfect number you want to analyse *)
    perfectNumber = PerfectNumber[6];

    (* Convert the perfect number to a list of its digits *)
    digits = IntegerDigits[perfectNumber];

    (* Determine the number of digits in the perfect number *)
    numDigits = Length[digits];

    (* Position the digits based on their place in the perfect number *)
    positionedDigits = MapIndexed[#1 10^(numDigits - #2[[1]]) &, digits];

    (* Divide each positioned digit by 10^(number of digits) *)
    dividedDigits = Map[#/10^numDigits &, positionedDigits];

    (* Determine the divisors of each positioned digit *)
    numeratorDivisors = Map[Divisors, positionedDigits];

    (* Determine the number of divisors of each positioned digit *)
    numDivisors = Map[Length, numeratorDivisors];
|
Wolfram Mathematica vs. Wolfram Alpha discrepancy in calculating common divisors |
|wolfram-mathematica| |
I am encountering a 'System.Text.Json.JsonException' with the message "'O' is an invalid start of a value" while attempting to deserialize MongoDB BSON documents using the ASP.NET Core MongoDB driver. The issue arises when using the JsonSerializer.DeserializeAsync method. I have verified the JSON structure and the MongoDB documents but am still facing this problem. Any insights or suggestions on resolving this issue would be greatly appreciated.
PS: I'm currently saving JSON documents with a dynamic schema; that's why I followed this approach.
[enter image description here][1]
```
[HttpGet("{id}")]
public async Task<ActionResult<ProjectIdentification>> GetProjectById(string id)
{
    var project = await _projectService.GetProjectById(id);
    if (project == null)
    {
        return NotFound();
    }
    return Ok(project.ToJson());
}
```
```
public class ProjectIdentification
{
    [BsonId]
    [BsonRepresentation(BsonType.ObjectId)]
    public string? Id { get; set; } = ObjectId.GenerateNewId().ToString();
    public string? version { get; set; }
    public Object[] content { get; set; }
    public Object[] columns { get; set; }
    public Theme theme { get; set; }
}
```
```
public async Task AddProject(ProjectIdentification project)
{
    string serializedJson = JsonSerializer.Serialize(project, new JsonSerializerOptions
    {
        DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
    });
    BsonDocument bsonDocument = BsonDocument.Parse(serializedJson);
    await _collection.InsertOneAsync(bsonDocument);
}

public async Task<ProjectIdentification> GetProjectById(string id)
{
    var filter = Builders<BsonDocument>.Filter.Eq("_id", ObjectId.Parse(id));
    var result = await _collection.Find(filter).FirstOrDefaultAsync();
    Console.Write(result);
    if (result != null)
    {
        return JsonSerializer.Deserialize<ProjectIdentification>(result.ToJson());
    }
    return null;
}
I attempted to deserialize MongoDB BSON documents
[1]: https://i.stack.imgur.com/hoq9w.png |
Yes, it is possible to perform load testing using the simulator. You can record and develop PVT scripts with the simulator and execute them in the Controller. |
I'm struggling to implement dynamic role-based routing with the latest version of react-router-dom (v6.22.3). I have some ideas, but I'm finding it difficult to implement them fully.
I am familiar with the examples described here: [link][1]
Goal description:
1. Define roles in the router
2. Consume them in the outlet
Here's what I've tried:
I've set up a mapping for my pages and defined roles for each page:
```
export const paths = {
  boo: { path: "/boo", roles: [] },
  foo: { path: "/foo", roles: ["admin", "manager"] },
};
```
And here's how I've set up my router (defining the roles inside the id prop is the best option I found):
```
const router = createBrowserRouter([
  {
    path: "/",
    element: <App />,
    children: [
      { path: paths.details.boo, element: <Boo/>, id: createRouteId("boo") },
      { path: paths.details.foo, element: <Foo/>, id: createRouteId("Foo") },
    ],
    errorElement: <ErrorPage />,
  },
  {
    path: "/login",
    element: <Login />,
  },
]);

export default router;
```
Now, I'm wondering if there's any way to use a property other than "id" to uniquely identify routes. However, let's assume "id" is the best option.
In my App.tsx, I'm using a custom hook to get the user object, which looks like this:
```
{userName:string,roles:roles[],...}
```
I'm also attempting to check if the user has valid roles for the current route:
```
const isValidRole = (userRoles: string[], outletRole: string[]) => {
  return userRoles.some((role) => outletRole.includes(role));
};
```
Here's my App.tsx component (here we use the outlet, and I try to consume the roles based on the location via the outlet):
```
function App() {
  const { user } = useAuth();
  const getRouteIdFromOutlet = useOutlet();
  const [isValidRole] = useState(isRoleValid(user.roles, JSON.parse(getRouteIdFromOutlet.props.id)));

  if (!user) return <Login />;
  if (!isValidRole) throw new Error("Unauthorized"); // the error page will know what to render

  return (
    <Layout>
      <NavbarDesktop />
      <Content style={{ padding: "0 48px" }}>
        <Outlet />
      </Content>
    </Layout>
  );
}
```
Now, `const getRouteIdFromOutlet = useOutlet();` is the closest option I found to read the id from the route and utilize it here.
[1]: https://stackoverflow.com/questions/66289122/how-to-create-a-protected-route-with-react-router-dom
I ran into the same issue and have found a workaround.
In my viewmodel I have an ObservableCollection Projects which is displayed using a CollectionView on my ContentPage.
Then I have the following method:
    public async Task ReloadProjects()
    {
        var temp = Projects.ToList();
        Projects.Clear();
        await Task.Delay(1);
        foreach (var project in temp)
        {
            Projects.Add(project);
        }
    }
This method gets called from the code-behind of my ContentPage each time the page is resized:
    private async void OnPageSizeChanged(object sender, EventArgs e)
    {
        await viewModel.ReloadProjects();
    }
This asynchronous reloading of my Projects correctly rescales the width of my grid columns. I'm wondering if there's a better fix for this issue though and would love to hear if you found something else. |
You have incorrect usage of `PUTS`. FYI, `PUTS` is shorthand for `TRAP x22`.
`PUTS` requires the address of a null-terminated string in `R0`; that's the only register used to pass a parameter to `PUTS`. (By passing 0 or 1 to `PUTS` (i.e. in `R0`) as you're doing now, it will fail in some manner.)
Since `R0` is necessarily repurposed to pass the parameter (the address of the string to print) to `PUTS`, then if the intent is to leave the program (at `HALT`) with `R0` holding 0 or 1 (depending on the prime result), you will obviously need to set `R0` *after* the `PUTS` trap.
|
I agree with Phys: you have to have the entire disk space of the dataset available to use tfds.load. The tfds.load command runs three commands, as stated in the [tfds.load docs][1]:
1. This does not take disk space:
> builder = tfds.builder('open_images_challenge2019_detection')
2. This downloads the data to disk:
> builder.download_and_prepare()
3. This loads the portion of the data that the user wants from what was downloaded, so the split does not reduce the download size:
> ds = builder.as_dataset(split='train[0:100]', shuffle_files=True)
The only option is to find a smaller dataset or use https://docs.voxel51.com/user_guide/dataset_zoo/index.html, but I think it would be more convenient to just use TensorFlow without the FiftyOne Dataset Zoo.
[1]: https://www.tensorflow.org/datasets/api_docs/python/tfds/load |
The main issue in your code is that you're nesting event handlers, i.e. the native JS one and a jQuery one. This causes odd behaviour where you need to click multiple times for the logic to have any effect. You can avoid the issue by using a single event handler to hide/show the element on successive clicks.
Also note that it's not good practice to use `onclick` attributes. Event handlers should be bound unobtrusively in JS, not in HTML.
Here's a working example with the above changes applied:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const game = { Name: 'foo', Thumbnail: 'foo.jpg', Description: 'description' };
const newDiv = document.createElement("div");
newDiv.className = "game-info";
newDiv.innerHTML = `
  <p>${game.Name}</p>
  <div class="info hide">
    <img src="${game.Thumbnail}">
    <p>${game.Description}</p>
  </div>`;
document.body.appendChild(newDiv);
// bind event handler
const toggler = e => e.target.nextElementSibling.classList.toggle('hide');
document.querySelectorAll('div > p').forEach(p => p.addEventListener('click', toggler));
<!-- language: lang-css -->
.hide {
display: none;
}
<!-- language: lang-html -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script>
<!-- end snippet -->
|
null |
I created a virtual machine scale set and followed the documentation below to configure it as an agent pool in Azure DevOps:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops#troubleshooting-issues
After configuring, I can't find the agents under the agent pool created in DevOps.
When I checked the instances in the VMSS, they are updated with the latest model and are up and running. On further analysis I found that there was an issue installing the DevOps extension on the instance.
Below are the logs from the faulty VM:
```
[2024-03-26T12:45:11.409Z] Executing: C:\Packages\Plugins\Microsoft.VisualStudio.Services.TeamServicesAgent\1.31.0.0\enable.cmd
[2024-03-26T12:46:09.198Z] Execution Complete.
######
Execution Output:
[2024-03-26T12:45:14] Sequence Number: 1
[2024-03-26T12:46:08] Error occured during Enable Pipelines Agent Error
[2024-03-26T12:46:08] System.Management.Automation.RuntimeException: .agent file not found. The agent was not set up correctly.
Execution Error:
######
Number of Tries: 1
Command C:\Packages\Plugins\Microsoft.VisualStudio.Services.TeamServicesAgent\1.31.0.0\enable.cmd of Microsoft.VisualStudio.Services.TeamServicesAgent has exited with Exit code: -1
```
 |
Virtual Machine Scale Sets - Error Installing Microsoft.VisualStudio.Services.TeamServicesAgent |
|azure|azure-devops|azure-vm-scale-set| |
null |
I need to plot a geographical distribution of five different instruments. Some locations can have only 1 instrument whereas other locations can have a combination of more than 1 instrument.
I want to use a marker which would indicate which combination of instruments is present at each location. My idea is to use a pentagon equally divided into 5 triangles. Each instrument will have its assigned position and colour in one of the constituting triangles. Hence, if there is only one instrument present, only one triangle would be filled and the others would be blank/transparent, whereas if all 5 instruments are present at one location, the full multi-coloured pentagon would be plotted.
After making a coloured pentagon in draw.io and saving it as an SVG, I tried using this [Tutorial (Making custom matplotlib markers)][1], but found out that it only plots in a single colour.
[1]: https://petercbsmith.github.io/marker-tutorial.html
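To make the goal concrete, here is a minimal sketch of the pentagon-of-triangles idea in plain `matplotlib` (no Basemap). The helper name `pentagon_triangles` and the sample station data are made up for illustration:

```python
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line for interactive use
import matplotlib.pyplot as plt
from matplotlib.path import Path

def pentagon_triangles():
    """Split a unit pentagon into 5 triangular Path markers sharing the centre."""
    angles = np.pi / 2 + 2 * np.pi * np.arange(6) / 5  # 6th vertex closes the ring
    verts = np.column_stack([np.cos(angles), np.sin(angles)])
    paths = []
    for i in range(5):
        # scatter() rescales each custom Path by its own extent; the detached
        # leading MOVETO at (1, 1) gives all five paths the same extent, so the
        # triangles stay aligned when drawn on top of each other.
        v = [(1, 1), (0, 0), tuple(verts[i]), tuple(verts[i + 1]), (0, 0)]
        c = [Path.MOVETO, Path.MOVETO, Path.LINETO, Path.LINETO, Path.CLOSEPOLY]
        paths.append(Path(v, c))
    return paths

colors = ["tab:red", "tab:blue", "tab:green", "tab:orange", "tab:purple"]
triangles = pentagon_triangles()

# (lon, lat, indices of the instruments present at that location)
stations = [(0.0, 0.0, {0, 2, 4}), (2.0, 1.0, {1}), (1.0, -1.0, {0, 1, 2, 3, 4})]

fig, ax = plt.subplots()
for lon, lat, present in stations:
    for i in present:
        ax.scatter(lon, lat, marker=triangles[i], s=900,
                   color=colors[i], edgecolor="k", linewidth=0.5)
ax.set_xlim(-1, 3)
ax.set_ylim(-2, 2)
fig.savefig("stations.png")
```

Plotting each instrument's triangle as a separate `scatter` call means absent instruments simply aren't drawn, leaving that wedge of the pentagon blank.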
I would appreciate any advice or solutions, and I'm also open to exploring other methods to achieve this goal. |
Custom multi-coloured markers in python |
|python|matplotlib|matplotlib-basemap| |
This doesn't have much to do with web3. It's more of a TypeScript question. See the fixes below, with comments:
import { BrowserProvider, ethers, JsonRpcSigner } from "ethers";
import { useToast } from "@chakra-ui/react";
import { useCallback, useEffect, useState } from "react";
import { MetaMaskInpageProvider } from "@metamask/providers";
declare global {
interface Window {
ethereum?: MetaMaskInpageProvider;
}
}
export interface IWeb3State {
address: unknown;
currentChain: unknown;
signer: JsonRpcSigner | null;
provider: BrowserProvider | null;
isAuthenticated: boolean;
}
const useWeb3Provider = () => {
const initialWeb3State = {
address: null,
currentChain: null,
signer: null,
provider: null,
isAuthenticated: false,
};
const toast = useToast();
const [state, setState] = useState<IWeb3State>(initialWeb3State);
const connectWallet = useCallback(async () => {
if (state.isAuthenticated) return;
try {
const { ethereum } = window;
if (!ethereum) {
return toast({
status: "error",
position: "top-right",
title: "Error",
description: "No ethereum wallet found",
});
}
const provider = new ethers.BrowserProvider(ethereum);
const accounts: string[] = (await provider.send("eth_requestAccounts", [])) as string[];
if (accounts.length > 0) {
const signer = await provider.getSigner();
const chain = Number(await (await provider.getNetwork()).chainId);
//once we have the wallet, are we exporting the variables "provider" & accounts outside this try?
setState({
...state,
address: accounts[0],
signer,
currentChain: chain,
provider,
isAuthenticated: true,
});
localStorage.setItem("isAuthenticated", "true");
}
// The original empty catch block was also flagged; log the error instead
} catch (e) {
console.error(e);
}
}, [state, toast]);
const disconnect = () => {
setState(initialWeb3State);
localStorage.removeItem("isAuthenticated");
};
useEffect(() => {
if (window == null) return;
// Issue one: do not access the Object.prototype method 'hasOwnProperty' directly on the target object. (Fix below)
if (Object.prototype.hasOwnProperty.call(localStorage, "isAuthenticated")) {
connectWallet();
}
}, [connectWallet, state.isAuthenticated]);
useEffect(() => {
if (typeof window.ethereum === "undefined") return;
// Issue two: the handler must be assignable to '(...args: unknown[]) => void'. (Fix below and in IWeb3State above; the error message literally tells you the expected type)
window.ethereum.on("accountsChanged", (...accounts: unknown[]) => {
setState({ ...state, address: accounts[0] });
});
// Same here: '(...args: unknown[]) => void'. (Fix below and in IWeb3State above)
window.ethereum.on("networkChanged", (network: unknown) => {
setState({ ...state, currentChain: network });
});
return () => {
window.ethereum?.removeAllListeners();
};
}, [state]);
return {
connectWallet,
disconnect,
state,
};
};
export default useWeb3Provider;
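For the two `window.ethereum.on(...)` fixes, the underlying point is plain TypeScript assignability, illustrated standalone below (the `Listener` alias and sample values are mine, not MetaMask's API):

```typescript
// The listener type MetaMask's typings expect, reduced to a plain alias:
type Listener = (...args: unknown[]) => void;

let lastSeen: unknown;

// Variadic handler matching the expected type exactly (the accountsChanged fix):
const onAccountsChanged: Listener = (...accounts: unknown[]) => {
  lastSeen = accounts[0];
};

// A single parameter typed `unknown` is also assignable, since it accepts
// whatever the caller passes (the networkChanged fix):
const onNetworkChanged: Listener = (network: unknown) => {
  lastSeen = network;
};

onAccountsChanged("0xabc123");
onNetworkChanged(5);
// A handler like (accounts: string[]) => void would NOT compile here,
// because string[] is narrower than unknown.
```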
|
You need to pass `user_id` in the `useEffect`'s dependency array and check whether it's defined, because on the first render it's `undefined` since Redux hasn't provided it yet.
```
useEffect(() => {
if(user_id){
setLoading(true);
PendingServices.getPendingTransaction(
`localhost:8000/user/transaction/pending/get-transaction/${params.id}?user_id=${user_id}`
)
.then((response) => {})
.catch((error) => {})
.finally(() => {
setLoading(false);
});
}
}, [user_id]);
``` |