C++ error: no matching member function for call to 'enqueue' futures.emplace_back(TP.enqueue(sum_plus_one, x, &M)); |
|c++|multithreading|std|threadpool| |
It is probably too late for an answer, but the solution for me was pretty simple.
My problem was that I first created a project in Jupyter and then added a helpers.py with my functions. Your Jupyter notebook won't see it until you restart the kernel. Make sure you do that, and once you do, your imports start working with no problems. |
I have checked almost all previous questions related to my query but did not find a solution.
I'm facing an issue with Full Keyboard Access when integrating it with a ScrollView. Inside the ScrollView, the text field and secure text field are accessible with the Tab key, but other components like buttons are not accessible using the Tab key,
but when I remove the ScrollView all elements are accessible with the Tab key.
Even the system-installed iOS apps don't support tabbing through a ScrollView. I have
reviewed the Apple documentation regarding this but did not find anything specific.
https://fileport.io/WymhhSPWyZH5 : this video is with the ScrollView added; not all buttons can be reached with the Tab key.
https://fileport.io/DkbeCussK7LP : this video is without the ScrollView; all buttons can be accessed with the Tab key.
Does anyone have an idea about this kind of behaviour?
Below is the code with the ScrollView:
```swift
import SwiftUI

struct ContentView: View {
    @State private var name: String = ""

    var body: some View {
        ScrollView {
            VStack {
                TextField("Enter your name", text: $name)
                    .frame(height: 50)
                    .background(Color.white)
                    .foregroundColor(.black)
                TextField("Enter your name", text: $name)
                    .frame(height: 50)
                    .background(Color.white)
                    .foregroundColor(.black)
                TextField("Enter your name", text: $name)
                    .frame(height: 50)
                    .background(Color.white)
                    .foregroundColor(.black)
                TextField("Enter your name", text: $name)
                    .frame(height: 50)
                    .background(Color.white)
                    .foregroundColor(.black)
                TextField("Enter your name", text: $name)
                    .frame(height: 50)
                    .background(Color.white)
                    .foregroundColor(.black)
                Button("Button example") {
                    print("Button tapped!")
                }
                .frame(height: 50)
                Button("Button example") {
                    print("Button tapped!")
                }
                .frame(height: 50)
                Button("Button example") {
                    print("Button tapped!")
                }
                .frame(height: 50)
                Button("Button example") {
                    print("Button tapped!")
                }
                .frame(height: 50)
                Button("Button example") {
                    print("Button tapped!")
                }
                .frame(height: 50)
            }
            .padding()
        }
    }
}

#Preview {
    ContentView()
}
```
|
Getting an IllegalAnnotationException after upgrading to Jakarta and Java 17 |
|spring-boot|jaxb|java-17|spring-boot-3| |
Check whether this code executes correctly. I made some changes: data formatting, field initialization, and fixed the column name mismatch.
```python
import sqlite3
import psycopg2
from contextlib import contextmanager
from dataclasses import dataclass, field, astuple
from datetime import datetime
from psycopg2 import extras
from uuid import UUID, uuid4

psycopg2.extras.register_uuid()

db_path = 'db.sqlite'


@contextmanager
def conn_context(db_path: str):
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    try:
        yield conn
    finally:
        conn.close()


@dataclass
class FilmWork:
    created_at: datetime = None
    updated_at: datetime = None
    id: UUID = field(default_factory=uuid4)
    title: str = ''
    description: str = ''
    creation_date: datetime = None
    rating: float = 0.0
    type: str = ''
    file_path: str = ''

    def __post_init__(self):
        if self.creation_date is None:
            self.creation_date = datetime.now()
        if self.created_at is None:
            self.created_at = datetime.now()
        if self.updated_at is None:
            self.updated_at = datetime.now()
        if self.description is None:
            self.description = 'No description'
        if self.rating is None:
            self.rating = 0.0


def copy_from_sqlite():
    with conn_context(db_path) as connection:
        cursor = connection.cursor()
        cursor.execute("SELECT * FROM film_work;")
        result = cursor.fetchall()
        films = [FilmWork(**dict(film)) for film in result]
        save_film_work_to_postgres(films)


def save_film_work_to_postgres(films: list):
    dsn = {
        'dbname': 'movies_database',
        'user': 'app',
        'password': '123qwe',
        'host': 'localhost',
        'port': 5432,
        'options': '-c search_path=content',
    }
    conn = None
    try:
        conn = psycopg2.connect(**dsn)
        print("Successful connection!")
        with conn.cursor() as cursor:
            cursor.execute("SELECT column_name FROM information_schema.columns WHERE table_name = 'film_work' ORDER BY ordinal_position;")
            column_names_list = [row[0] for row in cursor.fetchall()]
            column_names_str = ','.join(column_names_list)
            placeholders = ', '.join(['%s'] * len(column_names_list))
            # astuple() follows the dataclass declaration order; it must match the column order
            bind_values = ','.join(cursor.mogrify(f"({placeholders})", astuple(film)).decode('utf-8') for film in films)
            cursor.execute(f"""INSERT INTO content.film_work ({column_names_str}) VALUES {bind_values}""")
            conn.commit()  # Don't forget to commit changes
    except psycopg2.Error as _e:
        print("Error:", _e)
    finally:
        if conn is not None:
            conn.close()


copy_from_sqlite()
```
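One thing to double-check in the code above: `mogrify` binds values positionally, so the dataclass declaration order has to line up with the column list used in the INSERT. A trimmed, stdlib-only illustration (this `FilmWork` is cut down for the example):

```python
from dataclasses import dataclass, field, fields, astuple
from uuid import UUID, uuid4

@dataclass
class FilmWork:  # trimmed version, for illustration only
    created_at: str = ''
    updated_at: str = ''
    id: UUID = field(default_factory=uuid4)
    title: str = ''

# fields()/astuple() follow declaration order, not the table's column order,
# so the INSERT column list must be arranged to match (or the values reordered).
print([f.name for f in fields(FilmWork)])  # ['created_at', 'updated_at', 'id', 'title']
print(astuple(FilmWork(title='Example'))[-1])  # Example
```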
|
Copying a specific number of files |
|python|google-colaboratory| |
I'm working on a CLI tool. I'm using macOS. I was using `node index.js <command>` when running it locally. I decided to publish it as an npm package, but the command I've defined in the `bin` section of my `package.json` won't work.
In my `package.json`, I've defined `bin` as follows:
```JSON
"bin": {
"scaffold": "index.js"
},
```
I'm really not sure how to navigate this, as I'm quite unfamiliar with publishing packages. I would appreciate any help.
I've installed the package globally.
Tried to troubleshoot by running a few commands from posts, here are the results if they help:
Running `npm bin -g` results in:
```bash
Unknown command: "bin"
To see a list of supported npm commands, run:
npm help
```
Running `npm root -g` results in:
```bash
/opt/homebrew/lib/node_modules
```
Running `which scaffold` results in:
```bash
/opt/homebrew/bin/scaffold
```
Running `npm list -g project-scaffold` results in:
```bash
/opt/homebrew/lib
└── project-scaffold@1.1.1
```
Here's the package with the link to the repo and everything: https://www.npmjs.com/package/project-scaffold
Thank you.
|
I don't really understand why you are messing around with central directories and building the Zip manually. None of this is necessary as you can use `ZipArchive` to do this in one go.
Furthermore, you can't compress chunks of bytes like that and then just concatenate them. The Deflate algorithm doesn't work that way.
Your concern about flushing is misplaced: if the `ZipArchive` is closed then everything is flushed. You just need to make it leave the stream open once you dispose it.
I would advise you to only work with `Stream`, but you could use `byte[]` or `Memory<byte>` if absolutely necessary.
```cs
public async Task<Memory<byte>> Compress(string fileName, IAsyncEnumerable<Memory<byte>> data)
{
    var ms = new MemoryStream();
    using (var zip = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
    {
        var entry = zip.CreateEntry(fileName);
        using var zipStream = entry.Open();
        await foreach (var bytes in data)
        {
            zipStream.Write(bytes.Span);
        }
    }
    return ms.GetBuffer().AsMemory(0, (int)ms.Length);
}
```
If you want to avoid even the `MemoryStream` and upload directly to `HttpClient` then you can use a custom `HttpContent` that "pulls" the data as and when needed.
This example is taken from [the documentation][1].
```cs
public class ZipUploadContent : HttpContent
{
    private readonly string _fileName;
    private readonly IAsyncEnumerable<Stream> _data;

    public ZipUploadContent(string fileName, IAsyncEnumerable<Stream> data)
    {
        _fileName = fileName;
        _data = data;
    }

    protected override bool TryComputeLength(out long length)
    {
        length = 0;
        return false;
    }

    protected override Task SerializeToStreamAsync(Stream stream, TransportContext? context)
        => SerializeToStreamAsync(stream, context, CancellationToken.None);

    protected override async Task SerializeToStreamAsync(Stream stream, TransportContext? context, CancellationToken cancellationToken)
    {
        using var zip = new ZipArchive(stream, ZipArchiveMode.Create, leaveOpen: true);
        var entry = zip.CreateEntry(_fileName);
        await using var zipStream = entry.Open();
        await foreach (var inputStream in _data.WithCancellation(cancellationToken))
        {
            await inputStream.CopyToAsync(zipStream, cancellationToken);
        }
    }

    protected override void SerializeToStream(Stream stream, TransportContext? context, CancellationToken cancellationToken)
        => Task.Run(() => SerializeToStreamAsync(stream, context, cancellationToken)).Wait();
}
```
[1]: https://learn.microsoft.com/en-us/dotnet/api/system.net.http.httpcontent?view=net-8.0#examples |
Getting an error:
```
FAILED: com.testingvasitum.testcases.LoginpageTests.login
org.testng.internal.reflect.MethodMatcherException:
[public void com.testingvasitum.testcases.LoginpageTests.login(java.lang.String,java.lang.String) throws java.lang.InterruptedException] has no parameters defined but was found to be using a data provider (either explicitly specified or inherited from class level annotation).
Data provider mismatch
```
I try to get the data from an Excel sheet using a DataProvider, to log in with multiple credentials, with the script mentioned below:
```java
public final class LoginpageTests extends BaseTest {

    @Test(dataProvider = "loginData")
    public void login(String email, String pass, String scenario) throws InterruptedException {
        //Uninterruptibles.sleepUninterruptibly(3, TimeUnit.SECONDS);
        //WebDriverWait wait = new WebDriverWait(DriverManager.getDriver(), Duration.ofSeconds(30));
        new VasitumLoginPage()
            .enterEmail(email)
            .enterPass(pass)
            .clickOnLoginButton();
        String actualDashboardTitle = DriverManager.getDriver().getCurrentUrl();
        String expectedDashboardTitle = FrameworkConstants.getRecruiterDashboardTitle();
        Assert.assertSame(actualDashboardTitle, expectedDashboardTitle, scenario);
    }

    @DataProvider(name = "loginData")
    public Object[][] getData() throws IOException, InterruptedException {
        FileInputStream fs = new FileInputStream("C:\\Users\\maven\\Desktop\\TestData.xlsx");
        XSSFWorkbook workbook = new XSSFWorkbook(fs);
        XSSFSheet sheet = workbook.getSheet("Login");
        int row = sheet.getLastRowNum() + 1;
        System.out.println("Row: " + row);
        int column = sheet.getRow(0).getLastCellNum();
        System.out.println("Column: " + column);
        Object[][] data = new Object[row - 1][column];
        for (int r = 0; r < row - 1; r++) {
            for (int c = 0; c < column; c++) {
                data[r][c] = sheet.getRow(r + 1).getCell(c).getStringCellValue();
                //System.out.println("Value of row and column " + r + c + ": " + data[r][c]);
            }
        }
        return data;
    }
}
``` |
I am currently working on a project titled "Build-a-Website-using-an-API-with-HTML-JavaScript-and-JSON" available on GitHub at [GitHub Repository Link](https://github.com/zwanski2019/Build-a-Website-using-an-API-with-HTML-JavaScript-and-JSON.git). This project involves creating a simple weather forecast web app using HTML, JavaScript, and JSON with API integration to display real-time weather data for various locations in an easy-to-use and informative manner.
While I have made progress on the project and have established the fundamental structure, including essential files like HTML, CSS, and JavaScript, I am encountering challenges in implementing certain functionalities to enhance user experience and improve the overall performance of the application.
Specifically, I am seeking assistance with the following aspects:
1. Enhancing the visual design and layout of the web app using CSS.
2. Adding interactive features to make the weather data more engaging for users.
3. Optimizing the codebase for better performance and efficiency.
If you have expertise in web development, API integration, or working with weather data, I would greatly appreciate any guidance, suggestions, or code snippets that could help me address these challenges and successfully complete the project.
Thank you in advance for your valuable input and support. Any advice or recommendations will be highly appreciated.
Warm regards,
zwanski
I attempted to implement a new feature in my web application by adding a weather forecast module using an API. I expected the module to display real-time weather data for different locations. However, after integrating the API and updating the code, the weather information did not load correctly,
showing blank results instead. I need assistance in troubleshooting this issue and ensuring the weather data is displayed accurately. |
Skip level in nested JSON and convert to Pandas dataframe |
|python|python-3.x|pandas|dataframe| |
How can I sort a CSV file using pandas so that the names are sorted the same way Finder does it?
In Finder I have millions of files named like "-2odhDKSZ22302_000.jpg". These file names are also in a CSV. I'd like to be able to go through the images via Finder and the CSV file in the same order.
If there are differences in how other operating systems (e.g. Ubuntu) sort file names, could you please point this out.
Update: `sort_values()` has a `key` argument; if I knew what the default sorting method was for each operating system, I could match them up.
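For example, something like the following, which approximates a case-insensitive ordering via `key` (an assumption on my part: Finder uses a localized natural sort, so lowercasing alone will not match it exactly):

```python
import pandas as pd

df = pd.DataFrame({"name": ["b.jpg", "A.jpg", "-2odhDKSZ22302_000.jpg"]})
# `key` receives the whole column and must return a like-indexed Series;
# lowercasing makes the comparison case-insensitive.
df_sorted = df.sort_values("name", key=lambda s: s.str.lower())
print(df_sorted["name"].tolist())  # ['-2odhDKSZ22302_000.jpg', 'A.jpg', 'b.jpg']
```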
I've tried searching on stack sites but can't find what I'm looking for. |
I'm currently in the process of transitioning from using Terraform for managing my GitLab CI/CD pipelines to using OpenTofu. In this migration, I also need to integrate OIDC (OpenID Connect) authentication into my GitLab pipelines.
Previously, my .gitlab-ci.yml file looked something like this with Terraform:
(Note: you should upgrade to the latest version of the OIDC modules; releases are listed at https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security-public/oidc-modules/-/releases)

```yaml
include:
  - remote: 'https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security-public/oidc-modules/-/raw/3.1.2/templates/gcp_auth.yaml'
  - template: "Terraform/Base.gitlab-ci.yml"

variables:
  WI_POOL_PROVIDER: //iam.googleapis.com/projects/$GCP_PROJECT_NUMBER/locations/global/workloadIdentityPools/$WORKLOAD_IDENTITY_POOL/providers/$WORKLOAD_IDENTITY_POOL_PROVIDER
  SERVICE_ACCOUNT: $SERVICE_ACCOUNT
  TF_ROOT: infrastructure
  TF_STATE_NAME: tfstate

stages:
  - validate
  - test
  - build
  - deploy

validate:
  extends: .terraform:validate
  needs: []

build:
  extends:
    - .google-oidc:auth
    - .terraform:build

deploy:
  extends:
    - .google-oidc:auth
    - .terraform:deploy
  dependencies:
    - build
```
Now, I want to replace the Terraform-based setup with OpenTofu, while also incorporating OIDC authentication into my pipeline. However, I'm unsure about how to structure the .gitlab-ci.yml file and configure OpenTofu to achieve this.
Could someone provide guidance on how to migrate from Terraform to OpenTofu for GitLab CI/CD pipelines, particularly focusing on integrating OIDC authentication into the pipeline setup? Any examples, tips, or resources would be greatly appreciated. Thank you! |
I have Python code that finds and replaces an image in PDFs, but I'm having a hard time adjusting the size of the new image. What the code does is find the old image and use its size and position to place the new image. I want to be able to set the new image to whatever height or width I want, but still place it in the same spot as the old one. The goal is to replace a logo on multiple PDFs at the same time. Thanks for any suggestions.
```python
from pikepdf import Pdf, PdfImage, Name
from PIL import Image
import zlib
import os

# Path to the folder containing the input PDF files
input_folder = r'C:\input folder'
# Path to the folder where the modified PDF files will be saved
output_folder = r'C:\output folder'
# Path to the replacement image
replacement_image_path = r'C:\new image to replace'

def replace_images_in_pdf(input_pdf_path, output_pdf_path, image_path):
    pdf = Pdf.open(input_pdf_path, allow_overwriting_input=True)
    replacement_image = Image.open(image_path)
    image_replaced = False  # Track if an image has been replaced
    for page in pdf.pages:
        if image_replaced:  # If an image has already been replaced, stop processing further pages
            break
        for image_key in list(page.images.keys()):
            raw_image = page.images[image_key]
            pdf_image = PdfImage(raw_image)
            raw_image = pdf_image.obj
            pillow_image = pdf_image.as_pil_image()
            # Resize the replacement image to match the original image's dimensions
            replacement_image_resized = replacement_image.resize((pillow_image.width, pillow_image.height))
            # Replace the original image
            raw_image.write(zlib.compress(replacement_image_resized.tobytes()), filter=Name("/FlateDecode"))
            raw_image.ColorSpace = Name("/DeviceRGB")
            raw_image.Width, raw_image.Height = pillow_image.width, pillow_image.height
            image_replaced = True  # Mark that an image has been replaced
            break  # Exit the loop after replacing the first image
    pdf.save(output_pdf_path)
    pdf.close()

def process_folder(input_folder, output_folder, replacement_image_path):
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)
    for input_file in os.listdir(input_folder):
        if input_file.lower().endswith('.pdf'):
            input_pdf_path = os.path.join(input_folder, input_file)
            output_pdf_path = os.path.join(output_folder, input_file)
            replace_images_in_pdf(input_pdf_path, output_pdf_path, replacement_image_path)

# Run the process
process_folder(input_folder, output_folder, replacement_image_path)
``` |
I have a grid of images in my app which opens a scroll view of the images when one is clicked, but no matter what image I click, the scroll view always opens on image 1 with 2, 3, 4, 5, 6, 7, 8, 9 below it. I want to make it so that if a user clicks on image 6, the scroll view opens on image 6 with images 1-5 above it and 7-9 below it. Here is my code:
```swift
LazyVGrid(columns: threeColumnGrid, alignment: .center) {
    ForEach(viewModel.posts) { post in
        NavigationLink(destination: ScrollPostView(user: user)) {
            KFImage(URL(string: post.imageurl))
                .resizable()
                .aspectRatio(1, contentMode: .fit)
                .cornerRadius(15)
        }
    }
    .overlay(RoundedRectangle(cornerRadius: 14)
        .stroke(Color.black, lineWidth: 2))
}
``` |
How to make a scroll view of 9 images in a ForEach loop open on image 6 when image 6 is clicked in a grid? |
|swift|swiftui|swift3| |
**_Originally asked on Swift Forums: https://forums.swift.org/t/using-bindable-with-a-observable-type/70993_**
I'm using SwiftUI environments in my app to hold a preferences object, which is an @Observable object.
But I want to be able to inject different instances of the preferences object for previews vs the production code, so I've abstracted my production object into a `Preferences` protocol and updated my Environment key's type to:
```swift
protocol Preferences { }
@Observable
final class MyPreferencesObject: Preferences { }
@Observable
final class MyPreviewsObject: Preferences { }
// Environment key
private struct PreferencesKey: EnvironmentKey {
    static let defaultValue: Preferences & Observable = MyPreferencesObject()
}

extension EnvironmentValues {
    var preferences: Preferences & Observable {
        get { self[PreferencesKey.self] }
        set { self[PreferencesKey.self] = newValue }
    }
}
```
The compiler is happy with this until I go to use `@Bindable` in my code where the compiler explodes with a generic error,
eg:
```swift
@Environment(\.preferences) private var preferences
// ... code
@Bindable var preferences = preferences
```
If I change the environment object back to a conforming type eg:
```swift
@Observable
final class MyPreferencesObject { }
private struct PreferencesKey: EnvironmentKey {
    static let defaultValue: MyPreferencesObject = MyPreferencesObject()
}

extension EnvironmentValues {
    var preferences: MyPreferencesObject {
        get { self[PreferencesKey.self] }
        set { self[PreferencesKey.self] = newValue }
    }
}
```
Then `@Bindable` is happy again and things compile.
Is this a known issue/limitation? Or am I missing something here? |
Using @Bindable with an Observable type |
|swift|observation| |
I'm no linux expert but I'm trying to learn.
I wrote a simple bash script to run my vpn connection automatically upon start up.
#!/bin/bash
cyberghostvpn --country-code PL --connect
It works fine when I run it through my CLI:
sudo ./cyberghostvpn-launcher.sh
I tried to wrap it in a service like this:
[Unit]
Description=VPN start
After=network-online.target
[Service]
ExecStart=/home/xxxx/Documents/cyberghostvpn-launcher.sh
[Install]
WantedBy=multi-user.target
I ran the enable command (OK)
`$ sudo systemctl enable vpn-launch.service`
But I get an error when trying to start the service:
julien@uproxy:~$ sudo systemctl status vpn-launch.service
× vpn-launch.service - VPN start
Loaded: loaded (/etc/systemd/system/vpn-launch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2024-03-31 17:00:12 UTC; 16s ago
Process: 95180 ExecStart=/home/julien/Documents/cyberghostvpn-launcher.sh (code=exited, status=1/FAILURE)
Main PID: 95180 (code=exited, status=1/FAILURE)
CPU: 173ms
Mar 31 17:00:13 uproxy cyberghostvpn-launcher.sh[95182]: File "cyberghostvpn.py", line 6, in <module>
Mar 31 17:00:13 uproxy cyberghostvpn-launcher.sh[95182]: File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
Mar 31 17:00:13 uproxy cyberghostvpn-launcher.sh[95182]: File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
Mar 31 17:00:13 uproxy cyberghostvpn-launcher.sh[95182]: File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
Mar 31 17:00:13 uproxy cyberghostvpn-launcher.sh[95182]: File "PyInstaller/loader/pyimod02_importers.py", line 352, in exec_module
Mar 31 17:00:13 uproxy cyberghostvpn-launcher.sh[95182]: File "configs/base.py", line 3, in <module>
Mar 31 17:00:13 uproxy cyberghostvpn-launcher.sh[95182]: File "configs/base.py", line 12, in BaseConfiguration
Mar 31 17:00:13 uproxy cyberghostvpn-launcher.sh[95182]: TypeError: can only concatenate str (not "NoneType") to str
Mar 31 17:00:13 uproxy cyberghostvpn-launcher.sh[95182]: [95182] Failed to execute script 'cyberghostvpn' due to unhandled exception!
Mar 31 17:00:12 uproxy systemd[1]: vpn-launch.service: Failed with result 'exit-code'.
I can't understand what's wrong.
Any ideas, folks?
Thanks for your insights.
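Update: the TypeError at the end of the log can be reproduced when a script concatenates an environment variable that is unset. systemd units start with a much smaller environment than an interactive shell, so this is my guess at the cause (the variable name below is hypothetical):

```python
import os

# An interactive shell exports many variables (HOME, USER, XDG_*, ...);
# a bare systemd unit starts with a much smaller environment, so a lookup
# that succeeded in the shell can return None under systemd.
value = os.environ.get("SOME_VAR_SET_ONLY_IN_MY_SHELL")  # hypothetical variable name
try:
    path = "prefix-" + value  # fails when the variable is missing
except TypeError as error:
    print(error)  # can only concatenate str (not "NoneType") to str
```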
|
Seeking Assistance to Enhance Weather Forecast Web App Project |
|javascript|html|css| |
I've been trying to overcome the issue of my website's HTML document size being too large at 68 KB, well above the average of 33 KB. I've attempted various strategies such as code optimization and resource management to bring it down to the recommended size, but unfortunately I haven't seen any improvement in site speed. Any suggestions on what else I could try for optimal website performance?
I attempted various methods to reduce my HTML document size from 68 Kb to the recommended average of 33 Kb. This includes code optimization, minification, image compression, CSS and JavaScript optimization, and implementing caching and compression techniques. Despite these efforts, I haven't seen significant improvements in reducing the size. |
How to reduce the size of the HTML document from 68 KB to the average of 33 KB? |
|java|html|optimization|seo| |
ggeffects has a "margin" argument in predict_response, which controls how non-focal terms are handled when estimating predicted values; this mostly matters when non-focal terms are categorical. In test_predictions, however, there is no such argument.
1. How does test_predictions deal with categorical non-focal terms by default?
2. Is there a way to make it treat them as predict_response does when margin = "marginalmeans"?
In my case, I am fitting a logistic regression with an interaction between a numeric (year) and a categorical (parfam) variable, with another categorical variable (countryname) as a control. Then I estimate predicted probabilities, and then I want to examine pairwise comparisons between these probabilities:
```
# Set seed for reproducibility
set.seed(123)

## create and combine three datasets, to ensure unbalanced data
# Create first dataset
data1 <- data.frame(
  childcare = sample(c(0, 1), 5000, replace = TRUE, prob = c(0.8, 0.2)),
  parfam = sample(c("agr", "con", "cd", "lib", "sd", "green", "left", "rr"), 5000, replace = TRUE),
  year = sample(1970:2022, 5000, replace = TRUE),
  countryname = sample(c("Austria", "Belgium", "Finland", "France", "Germany", "UK"), 5000, replace = TRUE)
)

# Create second dataset
data2 <- data.frame(
  childcare = sample(c(0, 1), 5000, replace = TRUE, prob = c(0.9, 0.1)),
  parfam = sample(c("agr", "con", "cd", "rr"), 5000, replace = TRUE),
  year = sample(2000:2022, 5000, replace = TRUE),
  countryname = sample(c("France", "Germany", "UK"), 5000, replace = TRUE)
)

# Create third dataset
data3 <- data.frame(
  childcare = sample(c(0, 1), 5000, replace = TRUE, prob = c(0.5, 0.5)),
  parfam = sample(c("lib", "sd", "green", "left"), 5000, replace = TRUE),
  year = sample(2000:2022, 5000, replace = TRUE),
  countryname = sample(c("Austria", "Belgium", "Finland"), 5000, replace = TRUE)
)

# Combine datasets
data <- rbind(data1, data2, data3)

# run a logistic regression
m <- glm(childcare ~ parfam * year + countryname,
         data = data,
         family = binomial)

# get predicted probabilities using ggeffects, where margin = marginalmeans
pred_prob <- predict_response(m, terms = c("year [1980, 2000, 2020]", "parfam"), margin = "marginalmeans")

# test pairwise comparisons
test_predictions(m, terms = c("parfam", "year [1980, 2000, 2020]"), collapse_levels = TRUE)
```
It seems that some of the differences reported in test_predictions do not match the differences between predicted probabilities in predict_response. See for example: the predicted value in pred_prob for "left" in 1980 is 0.2097297, and the predicted value for "cd" in 1980 is 0.1639278; the difference between the two is 0.0458019. However, in the results from test_predictions, the contrast for left-cd in 1980 is 0.06.
If I drop the "margin" argument, the results do seem to match:
```
# get predicted probabilities using ggeffects
pred_prob <- predict_response(m, terms = c("year [1980, 2000, 2020]", "parfam"))
# test pairwise comparisons
test_predictions(m, terms = c("parfam", "year [1980, 2000, 2020]"), collapse_levels = TRUE)
```
Moreover, when I ask for three levels of "year", it works well, but when I ask for five, I get this error:
```
# test pairwise comparisons
test_predictions(m, terms = c("parfam", "year [1980, 1990, 2000, 2010, 2020]"), collapse_levels = TRUE)
```
Error: The "pairwise", "reference", and "sequential" options of the `hypotheses` argument are not supported for `marginaleffects` commands which generate more than 25 rows of results. Use the `newdata`, `by`, and/or `variables` arguments to compute a smaller set of results on which to conduct hypothesis tests. |
I would like to produce a file consisting of the three-way diff of three files, with every difference represented as a conflict in the git style. That is, common lines are shown verbatim, and differing sections of a file are shown with the conflict markers "<<<<<<<", "|||||||", "=======", and ">>>>>>>" (also called "conflict brackets"):
```
common line 1
common line 2
<<<<<<<
text from mine.txt
|||||||
text from base.txt
more text from base.txt
=======
text from yours.txt
>>>>>>>
common line 3
common line 4
<<<<<<<
same text in mine.txt and yours.txt, none in base.txt
|||||||
=======
same text in mine.txt and yours.txt, none in base.txt
>>>>>>>
common line 5
common line 6
```
Crucially, I would like **every difference** to be marked with conflict brackets, including differences that are mergeable.
Here are some options that do not work:
* `git diff` only takes two files as input; that is, it compares two things rather than three.
* `git merge-file` does not show mergeable differences (it merges them).
* `diff3 -m` is like `git merge-file` and is close: it shows the whole file, with conflict markers for
conflicts, but it does not show conflict markers for mergeable differences.
* `diff3` shows all the differences, even mergeable ones, but not in the given format.
* `diff3 -A` does not show all the differences, and mergeable ones are not output with conflict markers.
I can write a program that takes the `diff3` output and the original files, and outputs the conflicted file in git style. However, I would prefer to avoid that if I can.
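In case it helps anyone, here is a rough Python sketch of that program, using difflib to keep only lines stable across all three files and bracketing everything else. This is an approximation of diff3's chunking, not a polished tool, so chunk boundaries may differ from diff3's on pathological inputs:

```python
from difflib import SequenceMatcher

def _matches(a, b):
    """Map each line index in `a` to the index of its matched (equal) line in `b`."""
    m = {}
    for blk in SequenceMatcher(None, a, b, autojunk=False).get_matching_blocks():
        for off in range(blk.size):
            m[blk.a + off] = blk.b + off
    return m

def diff3_all_conflicts(mine, base, yours):
    """Three-way diff (lists of lines) where every differing region gets conflict brackets."""
    to_mine = _matches(base, mine)
    to_yours = _matches(base, yours)
    out = []
    im = ib = iy = 0  # cursors into mine, base, yours

    def emit(chunk_m, chunk_b, chunk_y):
        if chunk_m == chunk_b == chunk_y:
            out.extend(chunk_b)  # identical (possibly empty) region: no brackets
        else:
            out.append("<<<<<<<")
            out.extend(chunk_m)
            out.append("|||||||")
            out.extend(chunk_b)
            out.append("=======")
            out.extend(chunk_y)
            out.append(">>>>>>>")

    # base lines matched in BOTH derived files are "stable"; everything between
    # two stable lines is one chunk, bracketed whenever the three sides differ
    for i in sorted(k for k in to_mine if k in to_yours):
        j, k = to_mine[i], to_yours[i]
        emit(mine[im:j], base[ib:i], yours[iy:k])
        out.append(base[i])
        im, ib, iy = j + 1, i + 1, k + 1
    emit(mine[im:], base[ib:], yours[iy:])
    return out
```

Note that this also brackets regions where mine and yours made the same insertion (empty base side), which is exactly the "mark every difference" behaviour described above.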
|
In my WordPress site I want to change my currency from USD to AUD with a conversion rate. By that I mean 1 USD = 1.53 AUD, and it should update automatically. I have 600+ products, so it is not possible to change them one by one. I have just decided to sell in Australia.
Can you please help me write an SQL query or something like that? |
How to change woocomerce or full wordpress currency with value from USD to AUD |
|php|sql|mysql|wordpress|woocommerce| |
I was following a tutorial on Web API and saw the creator declare his service method as nullable (`Task<Comment?> Update(CommentUpdateDto comment)`), and he later used it like the following, which made sense:
```
[HttpPut("{id:int}")]
public async Task<IActionResult> Update([FromRoute] int id, [FromBody] CommentUpdateDto commentUpdateDto)
{
    if (!ModelState.IsValid)
        return BadRequest(ModelState);

    var comment = await _commentRepo.UpdateAsync(id, commentUpdateDto.ToCommentFromUpdate());
    if (comment == null)
    {
        return NotFound("Comment not found!");
    }
    return Ok(comment);
}
```
Now I want to use the same approach in my ASP.NET Core MVC app, but for some reason it doesn't make sense. It is probably because I am using FluentValidation with AutoMapper.
My `CategoryService`:
```
public async Task<Category?> UpdateCategoryAsync(UpdateCategoryDto updateCategoryDto)
{
    var category = await GetCategoryByIdAsync(updateCategoryDto.Id);
    if (category == null)
        return null;

    mapper.Map(updateCategoryDto, category);
    await unitOfWork.GetRepository<Category>().UpdateAsync(category);
    await unitOfWork.SaveChangesAsync();
    return category;
}
```
My controller action:
```
[HttpPost]
public async Task<IActionResult> Update(UpdateCategoryDto updateCategoryDto)
{
    var category = mapper.Map<Category>(updateCategoryDto);
    var result = validator.Validate(category);
    var exists = await categoryService.Exists(category);
    if (exists)
        result.Errors.Add(new ValidationFailure("Name", "This category name already exists"));

    if (result.IsValid)
    {
        await categoryService.UpdateCategoryAsync(updateCategoryDto);
        return RedirectToAction("Index", "Category", new { Area = "Admin" });
    }
    result.AddToModelState(ModelState);
    return View(updateCategoryDto);
}
```
I tried to modify it like this:
```
public async Task<IActionResult> Update(UpdateCategoryDto updateCategoryDto)
{
    var category = mapper.Map<Category>(updateCategoryDto);
    var result = validator.Validate(category);
    var exists = await categoryService.Exists(category);
    if (exists)
        result.Errors.Add(new ValidationFailure("Name", "This category name already exists"));

    if (result.IsValid)
    {
        var value = await categoryService.UpdateCategoryAsync(updateCategoryDto);
        if (value == null)
            return NotFound();
        return RedirectToAction("Index", "Category", new { Area = "Admin" });
    }
    result.AddToModelState(ModelState);
    return View(updateCategoryDto);
}
```
The thing is, if my `updateCategoryDto` is null, I cannot even pass the validation, as my category will be null, so my modification doesn't change anything. I want to know what changes I should make in order to have a logical flow. Should I just declare my service method as `Task<Category>` instead of `Task<Category?>`, or do I have to make changes in my controller action?
Note that I am a self-taught beginner, so any suggestions or advice are valuable to me. If you think I can change my code for the better, please share it with me. Thanks in advance! |
The `kruskalmc()` function from the pgirmess package does not output p-values for each comparison, and I'm wondering whether there is a different function that performs a similar post hoc test after `kruskal.test()` and does output the p-values for each comparison.
If not, is there any way to calculate the p-values from the data given by `kruskalmc()`? |
null |
How can I load a Bootstrap modal only after form validation passes? |
I need help/guidance creating a Query that will re-organize a table. It has been over 5 years since I heavily used Query and Arrayformulas in Google Sheets, and I am not getting the results I need.
Example Google Sheet: https://docs.google.com/spreadsheets/d/1YI1mPJqL8x0GfxMSE9K2xCNzwiUvo0RLhd5DQ0O1mC4/edit?usp=sharing
Original format:

Format needed:
 |
`findall()` returns a list, so your `child` is also a list. If you want to remove all the children, you have to add another loop over the children:
```
for parent in root.findall('parent'):
    for child in parent.findall('child'):
        parent.remove(child)
```
According to the [19.7.1.3. of the xml package docs][1]
> Element.findall() finds only elements with a tag which are direct
> children of the current element. Element.find() finds the first child
> with a particular tag
Thus, if you only have a single child, you can use `find` instead of `findall`.
The following snippet would then be valid:
```
for parent in root.findall('parent'):
    child = parent.find('child')
    parent.remove(child)
```
**Update: a fully working example that also writes the result back to a file**
```
import xml.etree.ElementTree as ET

tree = ET.parse("test.xml")
root = tree.getroot()
for parent in root.findall('parent'):
    for child in parent.findall('child'):
        parent.remove(child)
tree.write("test1.xml")
```
This snippet would turn
```
<foo>
  <parent>
    <child>
      <grandchild>
      </grandchild>
    </child>
    <child>
      <grandchild>
      </grandchild>
    </child>
    <child>
      <grandchild>
      </grandchild>
    </child>
  </parent>
  ...
</foo>
```
into
```
<foo>
  <parent>
  </parent>
  ...
</foo>
```
[1]: https://docs.python.org/2/library/xml.etree.elementtree.html |
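If you want to try this without creating a `test.xml` file on disk first, the same approach works on an in-memory tree; the XML string below is just a condensed version of the example above:

```python
import xml.etree.ElementTree as ET

# Build the tree from a string instead of a file, then remove all <child>
# elements of each <parent>, exactly as in the file-based example.
xml_src = "<foo><parent><child><grandchild/></child><child/></parent></foo>"
root = ET.fromstring(xml_src)
for parent in root.findall('parent'):
    for child in parent.findall('child'):
        parent.remove(child)
print(ET.tostring(root).decode())
```

Because `findall()` returns a plain list (not a live view of the tree), removing elements while iterating over it is safe here.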
null |
I am trying to pop up a modal window with the terms & conditions; only when the user accepts the terms and conditions should the data be submitted to the database. I want to validate the user input before the modal pops up. Below is my code.
**individual_account.html**
```
<form method="post" class="bg-white shadow-md rounded px-8 pt-6 pb-8 mb-4" id="individualForm">
    {% csrf_token %}
    {% for hidden_field in form.hidden_fields %} {{ hidden_field.errors }} {{ hidden_field }} {% endfor %}
    <div class="flex flex-wrap -mx-3 mb-6">
        <div class="w-full md:w-1/2 px-3 mb-6 md:mb-0">
            <label class="block uppercase tracking-wide text-gray-700 text-xs font-bold mb-2" for="{{form.full_name.id_for_label}}">
                {{ form.full_name.label }}
            </label>
            {{ form.full_name }}
            {% if form.full_name.errors %}
                {% for error in form.full_name.errors %}
                    <p class="text-red-600 text-sm italic pb-2">{{ error }}</p>
                {% endfor %}
            {% endif %}
        </div>
        <div class="w-full md:w-1/2 px-3">
            <label class="block uppercase tracking-wide text-gray-700 text-xs font-bold mb-2" for="{{form.id_passport_no.id_for_label}}">
                {{ form.id_passport_no.label}}
            </label>
            {{ form.id_passport_no }}
            {% if form.id_passport_no.errors %}
                {% for error in form.id_passport_no.errors %}
                    <p class="text-red-600 text-sm italic pb-2">{{ error }}</p>
                {% endfor %}
            {% endif %}
        </div>
    </div>
    <!-- Button trigger modal -->
    <button type="button" class="bg-yellow-700 text-white rounded-none hover:bg-white hover:text-blue-900 hover:border-blue-900 shadow hover:shadow-lg py-2 px-4 border border-gray-900 hover:border-transparent" data-bs-toggle="modal" data-bs-target="#exampleModal">
        Create Account
    </button>
    <!-- Modal -->
    <div class="modal fade" id="exampleModal" tabindex="-1" aria-labelledby="exampleModalLabel" aria-hidden="true">
        <div class="modal-dialog">
            <div class="modal-content">
                <div class="modal-header">
                    <h5 class="modal-title" id="exampleModalLabel">Terms & Conditions</h5>
                    <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
                </div>
                <div class="modal-body">
                    <p>Terms & Conditions here!</p>
                    <label><input type="checkbox" id="acceptTerms"> I accept the terms and conditions</label>
                </div>
                <div class="modal-footer">
                    <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button>
                    <button type="button" class="btn btn-primary">Save changes</button>
                </div>
            </div>
        </div>
    </div>
</form>
```
**views.py**
```
def individual_account(request):
    if request.method == 'POST':
        form = IndividualForm(request.POST)
        if form.is_valid():
            form.save()
            fullname = form.cleaned_data['full_name']
            messages.success(
                request,
                f'Thank You {fullname} For Creating An Account.'
            )
            return HttpResponseRedirect(
                reverse_lazy('accounts:individual-account')
            )
    else:
        form = IndividualForm()
    return render(request, 'accounts/individual_account.html', {'form': form})
```
How can I achieve this?
|
I'm trying to validate an input array of file types, and Laravel simply ignores it. See the HTML and the validation in the controller below. What am I doing wrong that I don't see? The other fields' validation works perfectly.
```
<input type="file" accept="image/png, image/jpg, image/jpeg, application/pdf" name="comprovante[]" id="comprovante-1"/>
```
```
$request->validate([
    'curso.*' => 'required|string|max:250',
    'tipo.*' => 'required|string|max:250',
    'instituicao.*' => 'required|string|max:250',
    'cidade.*' => 'required|string|max:250',
    'estado.*' => 'required|string|max:250',
    'anoinicio.*' => 'required|string|max:4',
    'anofinal.*' => 'required|string|max:4',
    'comprovante.*' => 'required|file',
]);
``` |
Find, Replace and adjust image in PDF's using python |
|python|image|pdf|replace|find| |
|file-upload|struts2|struts-action| |
I'm trying to make a weather app and I'm using the geolocator package, but after adding the package the project is not building.
[![][1]][1]
```
* What went wrong:
Execution failed for task ':app:checkDebugDuplicateClasses'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.CheckDuplicatesRunnable
> Duplicate class kotlin.collections.jdk8.CollectionsJDK8Kt found in modules jetified-kotlin-stdlib-1.9.0 (org.jetbrains.kotlin:kotlin-stdlib:1.9.0) and jetified-kotlin-stdlib-jdk8-1.7.10 (org.jetbrains.kotlin:kotlin-stdlib-jdk8:1.7.10)
```
[1]: https://i.stack.imgur.com/EFzKZ.png |
null |
I am running `kafka-topics.sh <brokers>:9098 --describe --topic __consumer_offsets --command-config /etc/client.properties`
and it's throwing the error below:
```
Failed to create new KafkaAdminClient
org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:541)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:492)
at org.apache.kafka.clients.admin.Admin.create(Admin.java:137)
at org.apache.kafka.tools.TopicCommand$TopicService.createAdminClient(TopicCommand.java:437)
at org.apache.kafka.tools.TopicCommand$TopicService.<init>(TopicCommand.java:426)
at org.apache.kafka.tools.TopicCommand.execute(TopicCommand.java:98)
at org.apache.kafka.tools.TopicCommand.mainNoExit(TopicCommand.java:87)
at org.apache.kafka.tools.TopicCommand.main(TopicCommand.java:82)
Caused by: org.apache.kafka.common.KafkaException: Failed to create new NetworkClient
at org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:252)
at org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:189)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:525)
... 7 more
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /etc/client/certs/keystore.bcfks of type BCFKS
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:184)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:119)
at org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:223)
... 9 more
Caused by: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /etc/client/certs/keystore.bcfks of type BCFKS
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:382)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.<init>(DefaultSslEngineFactory.java:354)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.createKeystore(DefaultSslEngineFactory.java:304)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.configure(DefaultSslEngineFactory.java:164)
at org.apache.kafka.common.security.ssl.SslFactory.instantiateSslEngineFactory(SslFactory.java:141)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:98)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:180)
... 13 more
Caused by: java.security.KeyStoreException: BCFKS not found
at java.base/java.security.KeyStore.getInstance(KeyStore.java:878)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:376)
... 19 more
Caused by: java.security.NoSuchAlgorithmException: BCFKS KeyStore not available
at java.base/sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
at java.base/java.security.Security.getImpl(Security.java:656)
at java.base/java.security.KeyStore.getInstance(KeyStore.java:875)
... 20 more
```
my client.properties file contains
```
cat client.properties
# Kafka client configuration
bootstrap.servers=xxxx.amazonaws.com
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
# SSL configurations for BouncyCastle
ssl.truststore.type=BCFKS
ssl.truststore.location=/etc/client/certs/truststore.bcfks
ssl.truststore.password=<redacted>
ssl.keystore.type=BCFKS
ssl.keystore.location=/etc/client/certs/keystore.bcfks
ssl.keystore.password=<redacted>
# Configure the BouncyCastle provider
ssl.security.provider=BouncyCastleProvider
```
I have also set the `java.security` file as follows:
```
cat java.security | grep security.provider
# security.provider.<n>=<provName | className>
security.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS
security.provider.3=SUN
security.provider.4=SunRsaSign
security.provider.5=SunEC
security.provider.6=SunJSSE
security.provider.7=SunJCE
security.provider.8=SunJGSS
security.provider.9=SunSASL
security.provider.10=XMLDSig
security.provider.11=SunPCSC
security.provider.12=JdkLDAP
security.provider.13=JdkSASL
security.provider.14=SunPKCS11
# jdk.security.provider.preferred=AES/GCM/NoPadding:SunJCE, \
#jdk.security.provider.preferred=
login.configuration.provider=sun.security.provider.ConfigFile
policy.provider=sun.security.provider.PolicyFile
# provider (sun.security.provider.PolicyFile) does not support this property.
```
```
root@kafka-lag-dp-report-5254-7459f94c7d-xpjxl:/opt/java/openjdk/lib/security# cat java.security | grep fips
security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS
fips.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
fips.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS
```
I'm scratching my head over what else I am missing. Please help me identify the issue causing this error.
Kafka-topics.sh error with Failed to load SSL keystore /keystore.bcfks of type BCFKS / BCFKS not found / BCFKS KeyStore not available |
|apache-kafka|aws-msk| |
null |
I was trying to set up the EmailOperator of Airflow to finish my pipeline with a notification. Following the various steps, I found this problem:
Airflow is not actually showing the variables I passed through the `environment` key in Docker. It seems to update them, but there is no way to print them (for checking/verification) except by testing the task directly.
Indeed, the `airflow config list` command is not useful and retrieves the default values.
The problem extends to variables like `AIRFLOW__CORE__EXECUTOR: CeleryExecutor` that the default Docker setup (created by the Airflow team) is built with.
Is there any solution to this?
Given the airflow docker compose from the official api [here](https://airflow.apache.org/docs/apache-airflow/2.8.4/docker-compose.yaml)
I tried to add a few variables for a simple mail send:
- AIRFLOW__SMTP__SMTP_HOST: smtp.gmail.com
- AIRFLOW__SMTP__SMTP_USER: xxx@gmail.com
- AIRFLOW__SMTP__SMTP_PASSWORD: xxx
- AIRFLOW__SMTP__SMTP_PORT: 587
- AIRFLOW__SMTP__SMTP_MAIL_FROM: xxx@gmail.com
I also changed the property `AIRFLOW__CORE__LOAD_EXAMPLES` to `false` to see a difference in the GUI.
Here the code:
```
x-airflow-common: &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.8.4}
  environment: &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ""
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: "true"
    AIRFLOW__CORE__LOAD_EXAMPLES: "false"
    AIRFLOW__API__AUTH_BACKENDS: "airflow.api.auth.backend.basic_auth,airflow.api.auth.backend.session"
    # OTHER VARIABLES HERE
    AIRFLOW__SMTP__SMTP_HOST: smtp.gmail.com
    AIRFLOW__SMTP__SMTP_USER: xxx@gmail.com
    AIRFLOW__SMTP__SMTP_PASSWORD: xxx
    AIRFLOW__SMTP__SMTP_PORT: 587
    AIRFLOW__SMTP__SMTP_MAIL_FROM: xxx@gmail.com
    AIRFLOW__SCHEDULER__ENABLE_HEALTH_CHECK: "true"
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags
    - ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs
    - ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config
    - ${AIRFLOW_PROJ_DIR:-.}/plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on: &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
```
After running `docker compose up --build`, I'm trying to list the properties to check the result with the command `airflow config list`.
The result is (always):
```
[core]
load_examples = True
[smtp]
smtp_host = localhost
smtp_starttls = True
smtp_ssl = False
# smtp_user =
# smtp_password =
smtp_port = 25
smtp_mail_from = airflow@example.com
smtp_timeout = 30
smtp_retry_limit = 5
```
The fact is that if I go into the GUI I actually don't see the examples, and I can send the mail too. So everything works, but `airflow config list` and `airflow config get-value` return the old values. Why is that? How can I actually check the properties?
There are multiple problems in your code:
- `strcmp(req, "GET")` returns `0` if the strings have the same characters, so you should write:
```
if (strcmp(req, "GET") == 0) {
    return GET;
}
```
or
```
if (!strcmp(req, "GET")) {
    return GET;
}
```
- you should reverse the order of the tests in `while((*(req + size) != '\n') && (size < req_size))` to avoid accessing `req[req_size]`.
- `line = strncpy(line, req, size);` has undefined behavior: `line` is an uninitialized pointer, so you cannot copy anything to it.
Furthermore, [**you should never use `strncpy`**][1]: it does not do what you think.
In your code, you should instead use `char *line = strndup(req, size);` which allocates memory and copies the string fragment to it.
`strndup()` is part of POSIX so it is available on most systems, but if your target does not have it, it can be defined this way:
```
#include <stdlib.h>
#include <string.h>

char *strndup(const char *s, size_t n) {
    char *p;
    size_t i;

    for (i = 0; i < n && s[i] != '\0'; i++)
        continue;
    p = malloc(i + 1);
    if (p != NULL) {
        memcpy(p, s, i);
        p[i] = '\0';
    }
    return p;
}
```
[1]: https://randomascii.wordpress.com/2013/04/03/stop-using-strncpy-already/ |
I am trying to write a personal project in JavaScript.
So I started with an empty folder with two files:
card.js
constants.js
and in `card.js`, I used on the first line:
import { SUIT } from "./constants.js";
and run it using `node card.js`. It doesn't work, giving the error:
> (node:91577) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
First,
> set "type": "module" in the package.json
I don't even have a package.json. Must I use it? Can I not use one? If I set "type": "module", does that make my own project become a module?
Second, I have never seen any `.mjs` file ever before. What is it about? I can use that and forget everything about NodeJS or package.json?
If I look at the [docs for import][1], it has:
import { myExport } from "/modules/my-module.js";
So I used `mkdir modules` and moved that `constants.js` in there.
And then I used:
import { SUIT } from "/modules/constants.js"
and it still has the same error. That doc doesn't mention `package.json` whatsoever.
I am wondering if this `import` is the same as `import` in the context of NodeJS (and so it may be different in Deno).
How is it done?
**P.S.** I renamed `constants.js` to `constants.mjs` and got the same error.
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import |
I'm using selenium to scrape off a table from the tracking page of a logistics company. Here is my code:
```python
driver.find_element(By.CSS_SELECTOR, "#trackingNo").send_keys(tracking_id)
WebDriverWait(driver, 5).until(EC.text_to_be_present_in_element_value((By.CSS_SELECTOR, "#trackingNo"), str(tracking_id)))
element = driver.find_element(By.CSS_SELECTOR, "#goBtn")
element.click()
WebDriverWait(driver, 5).until(EC.staleness_of(element)) # waiting for the page to reload
table = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CSS_SELECTOR, f"#SHIP{tracking_id} table"))) # waiting for the table to load
ths = [th.text for th in table.find_elements(By.CSS_SELECTOR, "th")]
tds = [td.text for td in table.find_elements(By.CSS_SELECTOR, "td")]
src = table.get_attribute("innerHTML")
```
Inspecting the page, I can see that the table loads the data; however, `ths` and `tds` capture only a subset of the rows in the table.
To investigate the problem, I'm capturing the raw HTML source of the table in the `src` variable, and if `status` (one of the `th`s present in the table) is not present I output the raw HTML itself.
Here is the code
```python
data = {th.lower().strip(): td for th, td in zip(ths, tds)}
if "status" not in data:
print(data)
print(src)
return -1
```
And here is the output (redacting some personal information):
```html
<tbody>
<tr>
<th class="track-details-bg track-details-bg bg-success text-right">Waybill
No</th>
<td>XXXXXXXXXX</td>
</tr>
<tr>
<th class="track-details-bg track-details-bg bg-success text-right">Pickup Date </th>
<td>03 May 2023</td>
</tr>
<tr>
<th class="track-details-bg track-details-bg bg-success text-right">From </th>
<td>Mumbai</td>
</tr>
<tr>
<th class="track-details-bg track-details-bg bg-success text-right">To </th>
<td>Surat</td>
</tr>
<tr>
<th class="track-details-bg bg-success text-right">Status </th>
<td>Shipment Delivered
</td>
</tr>
<tr>
<th class="track-details-bg bg-success text-right">Date of Delivery </th>
<td>DD M YYYY</td>
</tr>
<tr>
<th class="track-details-bg bg-success text-right">Time of Delivery </th>
<td>HH:MM</td>
</tr>
<tr>
<th class="track-details-bg bg-success text-right">Recipient </th>
<td>XXXXXXXXXX</td>
</tr>
<tr>
<th class="track-details-bg bg-success text-right">Reference No </th>
<td>XXXXXXXXXX</td>
</tr>
</tbody>
```
As can be seen from this, Selenium does load the table, but `find_element` cannot access the `th` and `td` elements from it.
NOTE: At the moment I'm using BS4 to read the page_source and parse my data. But I'm unsure why Selenium is not able to find the elements. Thanks!
|
Before executing commands the shell first expands them. There are 7 consecutive expansions: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion (see [this section of the bash manual](https://www.gnu.org/software/bash/manual/html_node/Shell-Expansions.html)).
In your case the command substitution replaces the `find` command with `"./? ./*"`. So the `for` command becomes:
```
for f in "./? ./*"; do...
```
The loop iterates only once with `f="./? ./*"`. In `echo "$f"`, `$f` is expanded (parameter expansion) and replaced with the value of variable `f`. Because of the double quotes the result is treated as a single word and printed without further expansion (see [this section of the bash manual](https://www.gnu.org/software/bash/manual/html_node/Quoting.html)). And you see `./? ./*`.
In `echo $f`, `$f` is also expanded and replaced with the value of `f` but the expansion continues with command substitution, arithmetic expansion, word splitting, and pathname expansion. Command substitution and arithmetic expansion have no effect here. Word splitting separates the two words `./?` and `./*` such that they are treated separately in the next steps.
Pathname expansion replaces `./?` with all files in current directory with single character names (in your case `./?` and `./*`) and `./*` is replaced with all files in current directory (in your case `./?` and `./*`). So, finally what is echoed is `./? ./* ./? ./*`. |
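To see the difference concretely, here is a minimal sketch (it assumes nothing beyond a POSIX shell with `mktemp`; it creates two throwaway files literally named `?` and `*` in a temporary directory):

```shell
# Reproduce the situation: a directory containing files named '?' and '*'.
dir=$(mktemp -d)
cd "$dir"
touch './?' './*'

f='./? ./*'   # what the command substitution produced

echo "$f"     # quoted: one word, printed literally
echo $f       # unquoted: word splitting, then pathname expansion per word
```

With the quotes you get the literal `./? ./*`; without them, each word is glob-matched against the two files, so every file name appears twice.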
I need to print the bounding box coordinates of a walking person in a video. Using YOLOv5 I detect the persons in the video, and each person is tracked. I need to print each person's bounding box coordinates together with the frame number. How can I do this using Python?
The following is the code to detect and track persons and display their coordinates in a video.
```
# display bounding box coordinates
import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
cap = cv2.VideoCapture("Shoplifting001_x264_15.mp4")

# get total frames
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"Frames count: {frame_count}")

# Initialize the frame id
frame_id = 0

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()
    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True, classes=[0])
        # Visualize the results on the frame
        annotated_frame = results[0].plot()
        # Print the bounding box coordinates of each person in the frame
        print(f"Frame id: {frame_id}")
        for result in results:
            for r in result.boxes.data.tolist():
                if len(r) == 7:
                    x1, y1, x2, y2, person_id, score, class_id = r
                    print(r)
                else:
                    print(r)
        # Display the annotated frame
        cv2.imshow("YOLOv5 Tracking", annotated_frame)
        # Increment the frame id
        frame_id += 1
        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
The above code works and displays the coordinates of tracked persons:
```
0: 384x640 6 persons, 1292.9ms
Speed: 370.7ms preprocess, 1292.9ms inference, 20.8ms postprocess per image at shape (1, 3, 384, 640)
Frame id: 0
[849.5707397460938, 103.34817504882812, 996.0990600585938, 371.2213439941406, 1.0, 0.9133888483047485, 0.0]
[106.60043334960938, 74.8958740234375, 286.6423645019531, 562.144287109375, 2.0, 0.8527513742446899, 0.0]
[221.3446044921875, 60.8421630859375, 354.4775390625, 513.18017578125, 3.0, 0.7955091595649719, 0.0]
[472.7821044921875, 92.33056640625, 725.2569580078125, 632.264404296875, 4.0, 0.7659056782722473, 0.0]
[722.457763671875, 222.010986328125, 885.9102783203125, 496.00372314453125, 5.0, 0.7482866644859314, 0.0]
[371.93310546875, 46.2138671875, 599.2041625976562, 437.1387939453125, 6.0, 0.7454277873039246, 0.0]
```
This output is correct. But for another video there are only three people in the video, yet at the first frame it identifies 6 persons:
```
0: 480x640 6 persons, 810.5ms
Speed: 8.0ms preprocess, 810.5ms inference, 8.9ms postprocess per image at shape (1, 3, 480, 640)
Frame id: 0
[0.0, 10.708396911621094, 37.77726745605469, 123.68929290771484, 0.36418795585632324, 0.0]
[183.0453338623047, 82.82539367675781, 231.1952667236328, 151.8341522216797, 0.2975049912929535, 0.0]
[154.15158081054688, 74.86528778076172, 231.10934448242188, 186.2017822265625, 0.23649221658706665, 0.0]
[145.61187744140625, 69.76246643066406, 194.42532348632812, 150.91973876953125, 0.16918501257896423, 0.0]
[177.25042724609375, 82.43289947509766, 266.5430908203125, 182.33889770507812, 0.131477952003479, 0.0]
[145.285400390625, 69.32669067382812, 214.907470703125, 184.0771026611328, 0.12087596207857132, 0.0]
```
Also, the second output does not show the person ID; it displays only coordinates, confidence score, and class ID. What is the reason for that?
|
Error:
> {"name":"INVALID_REQUEST","message":"Request is not well-formed, syntactically incorrect, or violates schema.","debug_id":"7bf4f312d5676","details":[{"field":"/amount/value","value":"60.00","location":"BODY","issue":"calculation_error","description":"Amount is invalid."}],"links":[{"href":"https://developer.paypal.com/docs/api/invoicing/#errors","method":"GET"}]}
I tried to create an invoice template. Code:
```
import requests
import json

headers = {
    'Authorization': 'Bearer zekwhYgsYYI0zDg0p_Nf5v78VelCfYR0',
    'Content-Type': 'application/json',
    'Prefer': 'return=representation',
}

auth = ('AZ_4WD_n4iksfwbF********rzHkraVIYQ0ATLfsZW2DzeRC5jjF-va5o2uVQK-n', 'ENkQHmneW******CVZRr7xADBikY6QowcE0')

data = {
    "default_template": True,
    "template_info": {
        "configuration": {
            "tax_calculated_after_discount": True,
            "tax_inclusive": False,
            "allow_tip": True,
            "partial_payment": {
                "allow_partial_payment": True,
                "minimum_amount_due": {
                    "currency_code": "USD",
                    "value": "20.00"
                }
            }
        },
        "detail": {
            "reference": "deal-ref",
            "note": "Thank you for your business.",
            "currency_code": "USD",
            "terms_and_conditions": "No refunds after 30 days.",
            "memo": "This is a long contract",
            "attachments": [
                {
                    "id": "Screen Shot 2018-11-23 at 16.45.01.png",
                    "reference_url": "https://api-m.paypal.com/invoice/payerView/attachments/RkG9ggQbd4Mwm1tYdcF6uuixfFTFq32bBdbE1VbtQLdKSoS2ZOYpfjw9gPp7eTrZmVaFaDWzixHXm-OXWHbmigHigHzURDxJs8IIKqcqP8jawnBEZcraEAPVMULxf5iTyOSpAUc2ugW0PWdwDbM6mg-guFAUyj3Z98H7htWNjQY95jb9heOlcSXUe.sbDUR9smAszzzJoA1NXT6rEEegwQ",
                    "version": "1",
                    "sig": "JNODB0xEayW8txMQm6ZsIwDnd4eh3hd6ijiRLi4ipHE"
                }
            ],
            "payment_term": {
                "term_type": "NET_10"
            }
        },
        "invoicer": {
            "name": {
                "given_name": "David",
                "surname": "Larusso"
            },
            "address": {
                "address_line_1": "1234 First Street",
                "address_line_2": "337673 Hillside Court",
                "admin_area_2": "Anytown",
                "admin_area_1": "CA",
                "postal_code": "98765",
                "country_code": "US"
            },
            "email_address": "merchant@example.com",
            "phones": [
                {
                    "country_code": "001",
                    "national_number": "4085551234",
                    "phone_type": "MOBILE"
                }
            ],
            "website": "www.test.com",
            "tax_id": "ABcNkWSfb5ICTt73nD3QON1fnnpgNKBy-Jb5SeuGj185MNNw6g",
            "logo_url": "https://example.com/logo.PNG",
            "additional_notes": "2-4"
        },
        "primary_recipients": [
            {
                "billing_info": {
                    "name": {
                        "given_name": "Stephanie",
                        "surname": "Meyers"
                    },
                    "address": {
                        "address_line_1": "1234 Main Street",
                        "admin_area_2": "Anytown",
                        "admin_area_1": "CA",
                        "postal_code": "98765",
                        "country_code": "US"
                    },
                    "email_address": "bill-me@example.com",
                    "phones": [
                        {
                            "country_code": "001",
                            "national_number": "4884551234",
                            "phone_type": "MOBILE"
                        }
                    ],
                    "additional_info": "add-info"
                },
                "shipping_info": {
                    "name": {
                        "given_name": "Stephanie",
                        "surname": "Meyers"
                    },
                    "address": {
                        "address_line_1": "1234 Main Street",
                        "admin_area_2": "Anytown",
                        "admin_area_1": "CA",
                        "postal_code": "98765",
                        "country_code": "US"
                    }
                }
            }
        ],
        "additional_recipients": [
            "inform-me@example.com"
        ],
        "items": [
            {
                "name": "Yoga Mat",
                "description": "new watch",
                "quantity": "1",
                "unit_amount": {
                    "currency_code": "USD",
                    "value": "50.00"
                },
                "tax": {
                    "name": "Sales Tax",
                    "percent": "7.25"
                },
                "discount": {
                    "percent": "5"
                },
                "unit_of_measure": "QUANTITY"
            },
            {
                "name": "Yoga T Shirt",
                "quantity": "1",
                "unit_amount": {
                    "currency_code": "USD",
                    "value": "10.00"
                },
                "tax": {
                    "name": "Sales Tax",
                    "percent": "7.25",
                    "tax_note": "Reduced tax rate"
                },
                "discount": {
                    "amount": {
                        "currency_code": "USD",
                        "value": "57.12"
                    }
                },
                "unit_of_measure": "QUANTITY"
            }
        ],
        "amount": {
            "currency_code": "USD",
            "value": "60.00"
        }
    },
    "settings": {
        "template_item_settings": [
            {
                "field_name": "items.date",
                "display_preference": {
                    "hidden": True
                }
            },
            {
                "field_name": "items.discount",
                "display_preference": {
                    "hidden": False
                }
            },
            {
                "field_name": "items.tax",
                "display_preference": {
                    "hidden": False
                }
            },
            {
                "field_name": "items.description",
                "display_preference": {
                    "hidden": False
                }
            },
            {
                "field_name": "items.quantity",
                "display_preference": {
                    "hidden": True
                }
            }
        ],
        "template_subtotal_settings": [
            {
                "field_name": "custom",
                "display_preference": {
                    "hidden": False
                }
            },
            {
                "field_name": "discount",
                "display_preference": {
                    "hidden": False
                }
            },
            {
                "field_name": "shipping",
                "display_preference": {
                    "hidden": False
                }
            }
        ]
    },
    "unit_of_measure": "QUANTITY",
    "standard_template": False
}

data["name"] = "Template"
json_data = json.dumps(data)

response = requests.post('https://api-m.sandbox.paypal.com/v2/invoicing/templates', headers=headers, data=json_data, auth=auth)
print(response.text)
```
|
PayPal v2 invoicing API "INVALID_REQUEST" from incorrect amount fields |
{"Voters":[{"Id":14853083,"DisplayName":"Tangentially Perpendicular"},{"Id":476,"DisplayName":"deceze"}],"SiteSpecificCloseReasonIds":[13]} |
This is the original data that I want to edit -
```
"Languages" : [
"English",
"French",
"Tamil"
],
```
And this is what I want to achieve by inserting the data: if inserting at position 5 leaves two empty slots, the array should be padded with two null values; if there is only one empty slot, it should add only one null, and then it should add "Tamil" to the array:
```
"Languages" : [
"English",
"French",
null,
null,
"Tamil"
],
```
This is what I tried -
```
$push: {
    "Languages": {
        $each: ["Tamil"],
        $position: 4
    }
}
``` |
How to enter data in mongodb array at specific position such that if there is only 2 data in array and I want to insert at 5, then rest data is null |
|arrays|database|mongodb|mongoose| |
null |
I will also post an answer here, as the original ones do not fit if you are working with transparency or an EmptyView. With this, you can click anywhere outside of a TextField to remove focus.
This does:
- Add a transparent view as responder on top
- As transparent views have no contentShape by default, we assign one
```swift
ZStack {
    // Make a general clickable area available to respond to clicks outside of the TextField
    Color.clear
        .contentShape(Rectangle())
        .edgesIgnoringSafeArea(.all)
        .onTapGesture {
            DispatchQueue.main.async {
                NSApp.keyWindow?.makeFirstResponder(nil)
            }
        }

    // Your ContentView() goes here
}
``` |
|c#|asp.net-core|asp.net-core-mvc|.net-8.0| |
I want to subdivide the number of scroll-wheel steps it takes to reach max zoom, but `zoomDelta` is not working. No matter what the `zoomDelta` value is, it always zooms by 1.
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Custom Zoom Control</title>
<!-- Include Leaflet CSS -->
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css" />
<!-- Include Leaflet JavaScript -->
<script src="https://unpkg.com/leaflet@1.9.4/dist/leaflet.js"></script>
</head>
<body>
<!-- Create a div element with ID 'map' to contain the map -->
<div id="map" style="width: 600px; height: 400px;"></div>
<script>
// Initialize the map with a center, zoom level, and other options
var map = L.map('map', {
center: [500, 500], // Initial center latitude and longitude
zoom: 1, // Initial zoom level
minZoom: 1, // Minimum zoom level
maxZoom: 3, // Maximum zoom level
crs: L.CRS.Simple // Use Simple CRS for image overlay
});
// Define bounds for the image overlay
var bounds = [[0,0], [1000,1000]];
// Add an image overlay using your image
var image = L.imageOverlay('Texture/test map.png', bounds).addTo(map);
// Remove the default zoom control
map.zoomControl.remove();
// Add event listener for zoomstart event
map.on('zoom', function(event) {
console.log('Current Zoom Level: ' + map.getZoom());
});
</script>
</body>
</html>
```
Console log:
Current Zoom Level: 2 Text6.html:37
Current Zoom Level: 3 Text6.html:37
Current Zoom Level: 2 Text6.html:37
Current Zoom Level: 1 Text6.html:37
|
With non-graphical maps in Leaflet, zoomDelta doesn't work |
|javascript|html|leaflet| |
null |
Listview - Getting error while linking the items correctly in Android Java |
null |
null |
null |
null |
null |
```python
from bs4 import BeautifulSoup

# Assuming you have your HTML content in 'html_content'
soup = BeautifulSoup(html_content, 'html.parser')

# Find the parent span and extract its direct text, excluding the nested span's text
rain_forecast = soup.find("span", {"class": "Column--precip--3JCDO"}).contents[-1].strip()
print(rain_forecast)
```
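The snippet assumes `html_content` already holds the page's HTML. Here is a self-contained sketch with made-up markup (the class names and forecast text are placeholders, not the real site's) showing why `.contents[-1]` skips the nested span's text:

```python
from bs4 import BeautifulSoup

# Hypothetical markup: the outer span holds a nested label span plus the bare value text
html_content = (
    '<span class="Column--precip--3JCDO">'
    '<span class="Column--hideOnMobile-x">Chance of Rain</span>'
    '15%'
    '</span>'
)

soup = BeautifulSoup(html_content, "html.parser")
# .contents lists the nested <span> first, then the bare "15%" string,
# so taking the last element skips the nested span's text
rain_forecast = soup.find("span", {"class": "Column--precip--3JCDO"}).contents[-1].strip()
print(rain_forecast)  # 15%
```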
|
I have been working on a project that uses a **`database`**. It lets the user choose from the commands in the program. The program asks you to add your **password, e-mail, user id, and birthday**, and every user has his own **user id**. He can create a **new user**, **delete a password**, **delete a user**, **change a password, and change a user id**.
Every option has its own `command`, but a problem appeared while I was working.
I made an input variable named **(Command option)** that prints a message telling the user to write the command he wants to use. I also made an if condition that creates a new user when the variable **(Command option)** == "N" or "n", and an else-if condition that prints
("Sorry, we don't have this command")
if the variable **(Command option)** isn't equal to "N" or "n".
[My code](https://i.stack.imgur.com/V9FuT.png)
The problem is: if I put the condition to make a new user before the condition to print, the print doesn't work even when the variable **(Command option)** isn't equal to "N" or "n"; and if I do the opposite and put the print condition first, the condition to make a new user doesn't work even when the variable **(Command option)** equals "N" or "n". So, what's the problem?
File array validation in Laravel 10 |
|arrays|laravel|laravel-10| |
> How can I find a case in which the program fails to output the right answer?
Try
int n = 2;
int*arr=(int*)malloc(n*sizeof(int));
arr[0] = 56;
arr[1] = 561;
Your code will give
56156
but you could have formed
56561
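The failing case comes from comparing the numbers themselves instead of their concatenations. A common fix (not part of the original code, just a Python sketch of the idea) is to order the values by which pairwise concatenation yields the larger result:

```python
from functools import cmp_to_key

def largest_concat(nums):
    # Order the strings so that for any adjacent pair a, b the
    # concatenation a + b is the larger of the two possible orderings
    strs = sorted(
        map(str, nums),
        key=cmp_to_key(lambda a, b: (a + b < b + a) - (a + b > b + a)),
    )
    return "".join(strs)

print(largest_concat([56, 561]))  # 56561, not 56156
```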
|
|python|numpy|tensor|tensordot|einsum| |
I think the first question you should answer is:
> *Am I going to do this myself or am I going to hire somebody to help me?*
This choice will, to some extent, determine the other choices you have.
If you do hire a professional, I would suggest discussing these things with that person. It can be hard to find someone you can work with for a longer time. The choices you make will have to be compatible with their capabilities.
If you're going to do this by yourself, you can only do what you know and feel comfortable with. Wordpress is fantastic to get you started quickly, and can do a lot, but it also has some clear disadvantages (bloated, slow, vulnerable, costly, etc). Going with something like Laravel is more complicated, but still gives you a nice head start. Creating forms yourself means you need to be a web designer, PHP programmer, database manager, etc. It's a full-stack job. **Not easy** by any means.
Just keep in mind that these questions never stop coming. Every few years you need to radically update your code, and sometimes you need to shift to something completely new. Don't regard this as a one off, but more as a continuous evolving matter. |
    select avg(columnname) from tablename;
This will average all rows. To average a subset, use a `where` clause. To average for each group (of something), use a `group by` clause. |
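To make the `where` vs `group by` distinction concrete, here is a small sketch using Python's built-in sqlite3 with a throwaway in-memory table (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (student TEXT, score INTEGER)")
conn.executemany(
    "INSERT INTO scores VALUES (?, ?)",
    [("ann", 80), ("ann", 90), ("bob", 60), ("bob", 70)],
)

# Average of all rows
print(conn.execute("SELECT AVG(score) FROM scores").fetchone()[0])  # 75.0

# Average of a subset, via WHERE
print(conn.execute(
    "SELECT AVG(score) FROM scores WHERE student = 'ann'"
).fetchone()[0])  # 85.0

# One average per group, via GROUP BY
for row in conn.execute(
    "SELECT student, AVG(score) FROM scores GROUP BY student ORDER BY student"
):
    print(row)  # ('ann', 85.0) then ('bob', 65.0)
```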
I'd say your best bet is to do a simple assert that heading is not None:
```python
try:
    with open("input.csv", newline="") as f:
reader = csv.DictReader(f, skipinitialspace=True)
headings = reader.fieldnames
assert headings is not None
assert len(headings) == 3
menu: list[list[str]] = []
for row in reader:
menu.append([row[headings[0]], row[headings[1]], row[headings[2]]])
except IOError as e:
print(f"couldn't open CSV: {e}")
except AssertionError:
print(
f"couldn't get fieldnames, check CSV and ensure first line is a properly encoded row"
)
else:
print(menu)
```
If it fails, you'll know to look at the CSV file for any kind of issues.
- Before the assert, VSCode shows that headings could be `Sequence[str] | None`.
- After the assert, the type checker realizes that it cannot have been None, so headings must be `Sequence[str]`.
Also, look at Barmar's and Mark Tolonen's comments: you can do what you want much more simply by just reading the CSV as a list of lists:
```python
reader = csv.reader(f)
menu = list(reader)[1:]
print(menu)
```
```python
[
["foo", "1", "a"],
["bar", "2", "b"],
["baz", "3", "c"],
]
```
|
I'm reading in serial data from a serial USB device with python.
I want to compare the received data against a predefined string to check whether it matches.
Reading the data works fine; the problem is this if-statement:
```
if serialInst.in_waiting:
startPacket = serialInst.readline()
z = "iohioowwcncuewqrte"
k = startPacket.decode('utf').rstrip('\n')
if (k == z):
print("right passkey")
else:
print("Error")
exit()
```
Thank you in advance. |
**You are trying to add a class to an element based on the value of a range slider, but there are a couple of issues in your code.**
1. The `mousemove` event is not suitable for detecting changes in the value of a range slider. You should use the `input` event instead.
2. The condition `if (this.value == "1", "2", "3", "4", "5")` is incorrect. You can't chain multiple values like that in an equality check; the comma operator means only the last value is evaluated. Use a range check instead.
Here is the corrected code:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
$(document).ready(function() {
$("#range-slider").on('input', function() {
var value = parseInt($(this).val());
if (value >= 1 && value <= 5) {
$('.other-element').addClass('is-active').html(`Slider Value is: ${value}`);
} else {
$('.other-element').removeClass('is-active');
}
});
});
<!-- language: lang-css -->
.other-element {
margin-top: 20px;
padding: 10px;
background-color: lightblue;
display: none;
}
.other-element.is-active {
display: block;
}
<!-- language: lang-html -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script>
<input type="range" id="range-slider" min="0" max="9">
<div class="other-element"></div>
<!-- end snippet -->
|
Within the function `sumAtBis`
int sumAtBis(tree a, int n, int i){
if(i==n){
if(isEmpty(a))
return 0;
else
return root(a);
}
return sumAtBis(left(a),n,i+1)+sumAtBis(right(a),n,i+1);
}
there is no check whether `left( a )` or `right( a )` are null pointers. So the function can invoke undefined behavior.
Actually, the function `sumAtBis` is redundant. It is enough to define the function `sumAt` as, for example:
long long int sumAt( tree a, size_t n )
{
if ( isEmpty( a ) )
{
return 0;
}
else
{
return n == 0 ?
root( a ) :
sumAt( left( a ), n - 1 ) + sumAt( right( a ), n - 1 );
}
}
Also, using the typedef name `tree`
    typedef node * tree;
is not a good idea, because you cannot express the type `const node *` through this typedef name: `const tree` is equivalent to `node * const`, not the required `const node *`. The latter type should be used in the parameter declaration of the function, because the function does not change the nodes of the tree. |