How to update an item inside a TabView with .tabViewStyle(.page) in iOS lower than 17? |
I'm trying to implement some basic auth for a NextJS app. I have a FastAPI backend which sends a JWT as a cookie on a successful login; I can see the cookie being set in Chrome dev tools, but I can't access it in NextJS.
The FastAPI code for the token request is:
```
@router.post('/users/token')
async def get_token(response: Response, form_data: Annotated[OAuth2PasswordRequestForm, Depends()]):
    print('Token requested')
    if users_collection.find_one({'email': form_data.username}) is None:
        raise HTTPException(status_code=404, detail='email_not_found')
    try:
        user = authenticate_user(form_data.username, form_data.password)
    except Exception as exception:
        print("exception in get_token calling authenticate_user")
        print("get_token exception =")
        print(exception)
        return exception
    if not user:
        raise HTTPException(status_code=401, detail='incorrect_credentials')
    access_token_expiration_delta = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    access_token = create_access_token(data={'sub': user.email}, expires_delta=access_token_expiration_delta)
    response.set_cookie(key='access_token', value=access_token, samesite='lax', httponly=True)
    response.status_code = 200
    print('cookie set')
    return response
```
This sets a cookie in Chrome:
[](https://i.stack.imgur.com/bRqQz.png)
However, subsequent requests do not include the cookie:
```
'use client'
import { useCookies } from 'next-client-cookies';
// further code...
const clientCookies = useCookies();
async function onSubmit(event: FormEvent<HTMLFormElement>) {
  event.preventDefault();
  const formData = new FormData(event.currentTarget);
  console.log(formData);
  const rawFormData = {
    username: formData.get('username'),
    password: formData.get('password'),
  };
  console.log(rawFormData);
  const response = await fetch(apiAddress + 'users/token', {
    method: 'POST',
    body: formData,
    credentials: 'include',
  });
  console.log(response.status);
  console.log(typeof response.status);
  if (response.status === 404) {
    handleUserNotFoundOpen();
  }
  if (response.status == 401) {
    console.log('Password incorrect');
    handlePasswordIncorrectOpen();
  }
  if (response.status == 200) {
    console.log('client Cookie:');
    const tokenCookie = clientCookies.get('access_token');
    console.log(tokenCookie); // Returns undefined
    const login_check = await fetch(apiAddress + 'users/get_user/', {
      credentials: 'include',
    });
    console.log(login_check);
    router.push('/dashboard');
  }
```
The get_user endpoint is:
```
@router.get('/users/get_user/')
async def get_user(request: Request):
    print('get current active user called')
    try:
        header = request.headers.get("Cookie")
        print(header)
    except Exception as e:
        print('cookie exception')
        print(e)
    try:
        access_token = request.cookies.get('access_token')
        print('received access token = ' + str(access_token))
        user = await get_current_user(request.cookies.get('access_token'))
        print('user =')
        print(user)
    except Exception as e:
        print('error encountered')
        print(e)
        return {"error": "yes"}
    print("all good")
    return {"Authenticated": "Yes"}
```
CORS (should) be set up to deal with this:
Route FastAPI file:
```
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"]
)
```
Origins include http://localhost:3000
The backend shows no cookie being provided in the request (`received access token = None`). The dev tools console also shows no cookie being retrieved. My understanding is that the cookie should be sent automatically with all requests to the backend, but this doesn't seem to be happening.
Requesting the token in Postman and then making a GET request to another API endpoint works just fine, so I think I am doing something wrong in NextJS.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/zzzkH.png
Is there a fix for this or is there a simpler way of implementing auth with NextJS? I'd prefer to use JWT as I plan on implementing middleware to check for valid JWTs for protected routes. |
I just encountered the same issue. For me, the problem was an incompatibility between the **Target framework** of the application (old .NET Framework 4.5) and a more recent version of the Selenium NuGet package (4.18.1).
A cheap **solution that worked for me**: upgrading the Target framework of the application to .NET Framework 4.8.
I guess it would also have worked to use an older version of Selenium.
|
I would use the more general type `Txn` and then check whether it is an `AssetTransferTxn` or a `PayTxn` within the method.
```
addLiquidity(
  aXfer: Txn,
  bXfer: Txn,
  poolAsset: AssetID,
  aAsset: AssetID,
  bAsset: AssetID
): void {
``` |
Well, I have been trying to get into web scraping with Selenium, and it works well on one of my devices with Chrome. If possible, I would like to use it on a different device of mine with the Brave browser. I have tried some suggestions mentioned here, but sadly none of them work anymore, either because Selenium has changed over time or because I am missing something important.
These are my browser's specs:
1.63.169 Chromium: 122.0.6261.111 (Official Build) (64-bit) |
I have a problem rasterizing and, mainly, plotting the result.
We have a DEM with these features:
```
class : SpatRaster
dimensions : 1600, 1600, 1 (nrow, ncol, nlyr)
resolution : 100, 100 (x, y)
extent : 605998.7, 765998.7, 5059499, 5219499 (xmin, xmax, ymin, ymax)
coord. ref. : WGS 84 / UTM zone 32N (EPSG:32632)
source(s) : memory
name : w48055_s10
min value : 65.000
max value : 3872.241
```
Furthermore, I have a dataset from which I have created a SpatVector, as below:
``` r
pt_1 <- vect(dati, geom = c("Longitude","Latitude"), crs = "epsg:4326")
pt_1 <- project(pt_1, "epsg:32632")
```
```
class : SpatVector
geometry : points
dimensions : 1301540, 5 (geometries, attributes)
extent : 606000.9, 765847.5, 5059702, 5219496 (xmin, xmax, ymin, ymax)
coord. ref. : WGS 84 / UTM zone 32N (EPSG:32632)
names : ID1 bio01 bio04 bio12 bio15
type : <int> <num> <num> <int> <num>
values : 1 -1.375 590.7 1663 46.46
2 -1.342 589.6 1658 46.4
3 -1.333 589.9 1657 46.48
```
Once I try to rasterize them, the process seems to work well, but the resulting plot has some problems: it is not smooth and shows a strange texture, even when I open the raster in QGIS. Initially I thought it was a graphical problem, but I don't think so.
``` r
cli <- rasterize(pt_1, demta, field = c("bio01", "bio04", "bio12", "bio15"))
```

What is the problem? The different resolution, or did I forget something?
|
I found a solution that is not based on importing tensorflow (which I don't use) but allows muting [through environment variables][1]:
Simply add this line to the beginning of your script.
```python
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # FATAL
```
This should work for any `tensorflow > 1.14`
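As a sanity check, the level values are cumulative filters, and the variable must be set before the first import that loads the TensorFlow C++ runtime, otherwise the logger is already initialised. A minimal standalone sketch (the actual tensorflow import is left commented out, since only the ordering matters here):

```python
import os

# Must run before anything imports tensorflow, or the setting has no effect.
# '0' = all messages, '1' = hide INFO, '2' = also hide WARNING, '3' = FATAL only
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

# import tensorflow as tf  # would now start without INFO/WARNING spam
```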
[1]: https://stackoverflow.com/a/40871012/9659620 |
```
var j = schedule.scheduleJob(unique_name, timeNow, async function () {
  session10.startTransaction();
  try {
    await readmodel.findOneAndUpdate({}, {
      $set: {
        readonlyMode: true,
        yearEndClosure: true
      }
    }, { session10 });
    var archivalActivity = {
      activity_id: 4,
      activityName: 'Current year Data Back',
      activity_Status: 'In Progress',
      err_occured: 'None',
      remarks: 'None'
    }
    await yearmodel.findOneAndUpdate({ year: MCpayload.year, month: MCpayload.month, 'activities.activity_id': 3 }, {
      $set: {
        'activities.$.activity_Status': 'Completed',
      },
    }, { session10 });
    await yearmodel.findOneAndUpdate({ year: MCpayload.year, month: MCpayload.month }, {
      $push: {
        activities: archivalActivity
      }
    }, { session10 })
    const etdataCurrentYear = await etmodel.find({}, { _id: 0 }, { session10 })
    const etBackupData = await emt_archival_yearEnd.insertMany(etdataCurrentYear, { session10 });
    console.log(etdataCurrentYear.length, etBackupData.length, "inserted Succesfully")
    const etproofCurrentYear = await etproofmodel.find({}, { _id: 0 }, { session10 });
    const etproofBackupData = await proof_archival_yearEnd.insertMany(etproofCurrentYear, { session10 });
    console.log(etproofCurrentYear.length, etproofBackupData.length, "inserted Succesfully")
    const cmnproofCurrentYear = await etcommonproofmodel.find({}, { _id: 0 }, { session10 });
    const cmnproofBackupData = await common_proof_archival_yearEnd.insertMany(cmnproofCurrentYear, { session10 });
    console.log(cmnproofCurrentYear.length, cmnproofBackupData.length, "inserted Succesfully")
    const etcaseCurrentYear = await etcasemodel.find({}, { _id: 0 }, { session10 });
    const etcaseBackupData = await caseStudy_proof_archival_yearEnd.insertMany(etcaseCurrentYear, { session10 });
    console.log(etcaseCurrentYear.length, etcaseBackupData.length, "inserted Succesfully")
    const umtSummaryCurrentYear = await umt_summary_model.find({}, { _id: 0 }, { session10 });
    const umtSummaryBackupData = await UmtSummary_archive_yearEnd.insertMany(umtSummaryCurrentYear, { session10 });
    console.log(umtSummaryCurrentYear.length, umtSummaryBackupData.length, "inserted Succesfully")
    const umtDetailsCurrentYear = await umt_data_model.find({}, { _id: 0 }, { session10 });
    const umtDetailsBackupData = await UmtDetail_archive_yearEnd.insertMany(umtetailsCurrentYear, { session10 });
    console.log(umtDetailsCurrentYear.length, umtDetailsBackupData.length, "inserted Succesfully")
    const umtExpCurrentYear = await umt_exception_model.find({}, { _id: 0 }, { session10 });
    const umtExpBackupData = await UmtExceptional_archive_yearEnd.insertMany(umtExpCurrentYear, { session10 });
    console.log(umtExpCurrentYear.length, umtExpBackupData.length, "inserted Succesfully")
    var archivalTablebackup = {
      activity_id: 5,
      activityName: 'Archival Tables Backup',
      activity_Status: 'In Progress',
      err_occured: 'None',
      remarks: 'None'
    }
    await year_end_closure_model.findOneAndUpdate({ year: MCpayload.year, month: MCpayload.month, 'activities.activity_id': 4 }, {
      $set: {
        'activities.$.activity_Status': 'Completed',
      },
    }, { session10 });
    await year_end_closure_model.findOneAndUpdate({ year: MCpayload.year, month: MCpayload.month }, {
      $push: {
        activities: archivalTablebackup
      }
    }, { session10 })
    console.log("current year tables backup done")
    const etarchival = await etmodel_archive.find({}, { _id: 0 }, { session10 });
    const etarchivalBackup = await emt_archive_backup.insertMany(etarchival, { session10 });
    console.log(etarchival.length, etarchivalBackup.length, "inserted Succesfully")
    const etProofArchival = await etproofmodel_archive.find({}, { _id: 0 }, { session10 });
    const etProofArchivalBackup = await proof_archive_backup.insertMany(etProofArchival, { session10 });
    console.log(etProofArchival.length, etProofArchivalBackup.length, "inserted Succesfully")
    const cmnProofArchival = await etcommonproof_archive_model.find({}, { _id: 0 }, { session10 });
    const cmnProofArchivalBackup = await cmn_proof_archive_backup.insertMany(cmnProofArchival, { session10 });
    console.log(cmnProofArchival.length, cmnProofArchivalBackup.length, "inserted Succesfully")
    const etcaseArchvial = await etcasemodel_archive.find({}, { _id: 0 }, { session: session10 });
    const etcaseArchivalBackup = await caseStudy_archive_backup.insertMany(etcaseArchvial, { session10 });
    console.log(etcaseArchvial.length, etcaseArchivalBackup.length, "inserted Succesfully")
    session10.commitTransaction();
  } catch (err) {
  }
```
If anything in the transaction fails, it's not reverting back. Please let me know where I have made a mistake. |
the transactions are not rolling back in mongodb |
|node.js|mongodb|express|mongoose|mongodb-query| |
null |
I have already solved the problem: the permissions of a Power BI workspace user had to be changed for the changes to be applied and for the API in Power BI to have access. |
Wagtail-CRX installs with a pre-defined StreamField ImageGalleryBlock that allows a user to select a Collection of images that are then output to the page along with a modal pop-up structure.
In *models.py* of my app I have created the image_gallery variable like this
```
image_gallery = StreamField([
        ('image_gallery', ImageGalleryBlock()),
    ],
    verbose_name="Choose images for the gallery",
    null=True,
    blank=True,
    default="",
    use_json_field=True)

FieldPanel("image_gallery"),
```
This works fine. The FieldPanel adds the Collection choice block to the page edit form. However, the images in the chosen Collection never appear on the page, using any of the possible methods for calling the block into the page template, e.g.
```
{% for block in page.image_gallery %}
<section>{% include_block block %}</section>
{% endfor %}
```
The *include* here calls in the block using the template *image_gallery_block.html*; the structure for the modal pop-up is rendered on the page, but no images appear to populate it.
Inside the *image_gallery_block.html* template the first line is
```
{% get_pictures self.collection.id as pictures %}
```
where *get_pictures* is a function that should pass the data from the Collection objects into the variable *pictures* and they should be iterated over in the subsequent template html thus
```
{% if pictures %}
{% for picture in pictures %}
{% image picture fill-800x450 format-jpeg preserve-svg as picture_image %}
{% image picture max-1600x1600 format-webp preserve-svg as original_image %}
<div class="col-sm-6 col-md-4 col-lg-3 my-3">
<a href="#" class="lightbox-preview" data-bs-toggle="modal" data-bs-target="#modal-{{modal_id}}">
<img class="img-thumbnail w-100" src="{{picture_image.url}}" data-original-src="{{original_image.url}}"
alt="{{picture_image.image.title}}" title="{{picture_image.image.title}}">
</a>
</div>
{% endfor %} etc.
```
Adding `{{ self.collection.id }}` to the template outputs the correct Collection number, so the id is being passed, but `{{ pictures }}` returns an empty `ImageQuerySet[]`.
*get_pictures* is referenced from the *coderedcms_tags.py* file and is this
```
@register.simple_tag
def get_pictures(collection_id):
    collection = Collection.objects.get(id=collection_id)
    return Image.objects.filter(collection=collection)
```
The tags are being correctly loaded at the top of the image_gallery_block.html template with `{% load wagtailcore_tags wagtailimages_tags coderedcms_tags %}`.
I don't yet have enough Python experience to work out how, but it seems the *get_pictures* function is misfiring. |
Worth it to access data by blocks on modern OS/hardware? |
|database|file|operating-system|storage|data-access-layer| |
If someone is still looking for an answer, this configuration in `luukvbaal/statuscol.nvim` solves the issue. I tested it myself.
{
  'luukvbaal/statuscol.nvim',
  opts = function()
    local builtin = require('statuscol.builtin')
    return {
      setopt = true,
      -- override the default list of segments with:
      -- number-less fold indicator, then signs, then line number & separator
      segments = {
        { text = { builtin.foldfunc }, click = 'v:lua.ScFa' },
        { text = { '%s' }, click = 'v:lua.ScSa' },
        {
          text = { builtin.lnumfunc, ' ' },
          condition = { true, builtin.not_empty },
          click = 'v:lua.ScLa',
        },
      },
    }
  end,
} |
>How to make Postgres GIN index work with jsonb_* functions?
You can't*. PostgreSQL [indexes][1] are tied to operators in specific operator classes:
>In general, PostgreSQL indexes can be used to optimize queries that contain one or more WHERE or JOIN clauses of the form
>
>>*`indexed-column`* ***`indexable-operator`*** *`comparison-value`*
>
>Here, the *`indexed-column`* is whatever column or expression the index has been defined on. The ***`indexable-operator`*** is an operator that is a member of the index's operator class for the indexed column. And the *`comparison-value`* can be any expression that is not volatile and does not reference the index's table.
[GIN][2] will help you only if you use the operators in the opclass you used when you defined the index (`jsonb_ops` by default):
>The default GIN operator class for `jsonb` supports queries with the key-exists operators `?`, `?|` and `?&`, the containment operator `@>`, and the `jsonpath` match operators `@?` and `@@`.
Even though there are equivalent `jsonb_path_X()` functions that do the exact same thing those operators do, the index will only kick in if you use the operator and not the function.
***
\*Except you *kind of* can
There are cases like [PostGIS][3] where functions do in fact use the index, but that's because they [wrap an operator][4] or add an operator-based condition that's using the index, then use the actual function to just re-check pre-filtered rows. You can mimic that if you want: [demo][5]
```pgsql
CREATE OR REPLACE FUNCTION my_jsonb_path_exists(arg1 jsonb,arg2 jsonpath)
RETURNS boolean AS 'SELECT $1 @? $2' LANGUAGE 'sql' IMMUTABLE;
EXPLAIN ANALYZE
SELECT * FROM applications
WHERE my_jsonb_path_exists(
applications.application,
'$.persons[*] ? (@.type_code == 3)'
);
```
| QUERY PLAN |
|:-----------|
| Bitmap Heap Scan on applications (cost=165.51..5277.31 rows=21984 width=163) (actual time=15.650..83.960 rows=22219 loops=1) |
| Recheck Cond: (application @? '$."persons"\[\*]?(@."type\_code" == 3)'::jsonpath) |
| Heap Blocks: exact=4798 |
| -> **Bitmap Index Scan** on gin\_idx (cost=0.00..160.01 rows=21984 width=0) (actual time=14.891..14.892 rows=22219 loops=1) |
| Index Cond: (application **@?** '$."persons"\[\*]?(@."type\_code" == 3)'::jsonpath) |
| Planning Time: 0.231 ms |
| Execution Time: 85.092 ms |
You can see now it uses the index because the condition got rewritten as the operator it was wrapping. It finds 22219 matches because I increased the sample set to 200k and randomised the rows.
[1]: https://www.postgresql.org/docs/current/indexes-intro.html
[2]: https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING
[3]: https://postgis.net/workshops/postgis-intro/indexing.html
[4]: https://gis.stackexchange.com/a/188420/127552
[5]: https://dbfiddle.uk/zI0k5iHc |
{"OriginalQuestionIds":[41265266],"Voters":[{"Id":1746118,"DisplayName":"Naman","BindingReason":{"GoldTagBadge":"java"}}]} |
null |
null |
null |
null |
Update -
@Frank van Puffelen
Here are the screenshots on the data I am expecting from firestore from my webpage -
**Screenshot 1**
This is the data I am expecting from
```.where('questionState', 'array-contains-any', [1,2])```
[![enter image description here][1]][1]
**ScreenShot 2**
This is the data I am expecting from
```.where('qS', 'array-contains-any', [2])```
[![enter image description here][2]][2]
If I was not clear before -
The operand value fV (returned by getFV) changes based on clicks of three buttons, setting it to `[1,2]`, `[1]`, or `[2]`. I created a method to manage this dynamic change, avoiding the need for multiple conditional statements for each button click scenario.
FV from console -
[![fv from console][3]][3]
Even though the getFV method returns the exact array needed for the operand in this where clause `.where('qS', 'array-contains-any', fV)` (attached screenshot tagged "FV from console"), it consistently retrieves data from ***Screenshot 1***, regardless of the buttons clicked.
However, when I manually input the same array that was returned from the ***Screenshot 2*** directly into the code, like this .where('qS', 'array-contains-any', [1]), I get the desired data.
[![fv from console single][4]][4]
I am using Firestore to fetch data based on a filter that utilizes the 'array-contains-any' operator.
Here is the code I am using to fetch the data -
```
const fV = this.getFV(this.selectedButton);
return qnaCollectionRef
.where('qS', 'array-contains-any' , fV)
.onSnapshot((querySnapshot) => {
```
Here is the code for getFV:
```
getFV(qF: QF): number[] {
  debugger;
  switch (qF) {
    case QF.All:
      return [1, 2];
    case QF.O:
      return [1];
    case QF.A:
      return [2];
    default:
      throw new Error('Unsupported filter option');
  }
}
```
Assume that my firestore db contains data like
```
for QF.All - [abc, dce, efg, hij]
for QF.O - [abc, dce]
for QF.A - [efg, hij]
```
Now I have a button which changes this fV to [1,2], [1], [2] which gets called in below query
```
.where('qS', 'array-contains-any' , fV)
```
The issue I'm facing is that regardless of the filter values I pass in fV, the query always returns data as if fV were [1,2]. However, when I modify the query and hardcode the operand to [1] or [2], it correctly returns the desired data, e.g.
```
.where('qS', 'array-contains-any', [2])
```
[1]: https://i.stack.imgur.com/qmaxJ.png
[2]: https://i.stack.imgur.com/y2Zo3.png
[3]: https://i.stack.imgur.com/cpY2G.png
[4]: https://i.stack.imgur.com/ubTkD.png |
I am trying to create a Network Load Balancer in OCI, but I am getting the error below:
```
404-NotAuthorizedOrNotFound, Authorization failed or requested resource not found.
│ Suggestion: Either the resource has been deleted or service Network Load Balancer need policy to access this resource. Policy reference: https://docs.oracle.com/en-us/iaas/Content/Identity/Reference/policyreference.htm
│ Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/network_load_balancer_network_load_balancer
│ API Reference: https://docs.oracle.com/iaas/api/#/en/networkloadbalancer/20200501/NetworkLoadBalancer/CreateNetworkLoadBalancer
│ Request Target: POST https://network-load-balancer-api.af-johannesburg-1.oci.oraclecloud.com/20200501/networkLoadBalancers
│ Provider version: 5.31.0, released on 2024-02-29. This provider is 4 Update(s) behind to current.
│ Service: Network Load Balancer
│ Operation Name: CreateNetworkLoadBalancer
│ OPC request ID: 393f50d1bc243450e5d99f5d35b2633a/6BE524CE75B807B16CE03B3CCC3EFF51/F18A5909F19EBA97E64AACB30AECFE36
```
If I create the NLB manually and import it into my tfstate, I now get this error:
```
404-NotAuthorizedOrNotFound, Authorization failed or requested resource not found.
│ Suggestion: Either the resource has been deleted or service Network Load Balancer need policy to access this resource. Policy reference: https://docs.oracle.com/en-us/iaas/Content/Identity/Reference/policyreference.htm
│ Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/network_load_balancer_network_load_balancer
│ API Reference: https://docs.oracle.com/iaas/api/#/en/networkloadbalancer/20200501/NetworkLoadBalancer/CreateNetworkLoadBalancer
│ Request Target: POST https://network-load-balancer-api.af-johannesburg-1.oci.oraclecloud.com/20200501/networkLoadBalancers
│ Provider version: 5.31.0, released on 2024-02-29. This provider is 4 Update(s) behind to current.
│ Service: Network Load Balancer
│ Operation Name: CreateNetworkLoadBalancer
│ OPC request ID: 393f50d1bc243450e5d99f5d35b2633a/6BE524CE75B807B16CE03B3CCC3EFF51/F18A5909F19EBA97E64AACB30AECFE36
```
Below is the code for my NLB:
resource "oci_network_load_balancer_network_load_balancer" "web" {
  compartment_id                 = data.doppler_secrets.prod_main.map.OCI_GAIA_COMPARTMENT_PRODUCTION_ID
  display_name                   = "web"
  subnet_id                      = oci_core_subnet.web_public_01.id
  freeform_tags                  = local.tags.defaults
  is_preserve_source_destination = true
  is_private                     = false
  network_security_group_ids = [
    oci_core_network_security_group_security_rule.web.id
  ]
}
Am I missing a policy or something else that will enable me to create NLB resources? |
Rather than generating new points and finding a closest neighbor in `df.x`, define the probability that each point should be sampled according to your target distribution. You can use `np.random.choice`. A million points are sampled from `df.x` in a second or so for a gaussian target distribution like this:
x = np.sort(df.x)
f_x = np.gradient(x)*np.exp(-x**2/2)
sample_probs = f_x/np.sum(f_x)
samples = np.random.choice(x, p=sample_probs, size=1000000)
`sample_probs` is the key quantity, as it can be joined back to the dataframe or used as an argument to `df.sample`, e.g.:
# sample df rows without replacement
df_samples = df["x"].sort_values().sample(
n=1000,
weights=sample_probs,
replace=False,
)
The result of `plt.hist(samples, bins=100, density=True)`:
[![corrected image][1]][1]
We can also try gaussian-distributed x with a uniform target distribution:
x = np.sort(np.random.normal(size=100000))
f_x = np.gradient(x)*np.ones(len(x))
sample_probs = f_x/np.sum(f_x)
samples = np.random.choice(x, p=sample_probs, size=1000000)
[![sample to uniform distribution from gaussian distributed points][2]][2]
The tails would look more uniform if we increased the bin size; this is an artifact that `D` is sparse at the edges.
This solution calculates approximate probabilities for `x` in the form:
prob(x_i) ~ delta_x*rho(x_i)
where `rho(x_i)` is the density function and `np.gradient(x)` is used as a differential value. If the differential weight is ignored, `f_x` will over-represent close points and under-represent sparse points in the resampling. I made this mistake initially; the effect is small if x is uniformly distributed (but generally can be significant):
[![un-corrected version][3]][3]
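Putting the weighting above together end to end, here is a self-contained sketch (synthetic gaussian-distributed `x` and a uniform target, mirroring the names used in the answer):

```python
import numpy as np

rng = np.random.default_rng(0)

# gaussian-distributed source points, uniform target density rho(x) = const
x = np.sort(rng.normal(size=100_000))

# prob(x_i) ~ delta_x * rho(x_i); np.gradient(x) supplies the delta_x term
f_x = np.gradient(x) * np.ones(len(x))
sample_probs = f_x / np.sum(f_x)

# resample with replacement according to the target distribution
samples = rng.choice(x, p=sample_probs, size=10_000)
```

A histogram of `samples` should then look roughly flat over the support of `x`, apart from the sparse tails noted above.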
[1]: https://i.stack.imgur.com/9L3tBm.png
[2]: https://i.stack.imgur.com/wqhj0m.png
[3]: https://i.stack.imgur.com/pxCHvm.png |
My WPF app won't start for two new users. It's ClickOnce deployed and runs from the company network (not installed). The error is "**WPF.Themes.dll has a different computed hash than specified in manifest.**" Another DLL name appears in the error if I remove WPF.Themes.dll. The error seems accurate when I compare the DLL hash to the manifest, so why does the app still run for other users (as it has for 10 years)? It fails on two Win 11 PCs that have the pre-requisite .NET 4.0 client installed (and .NET 4.0 full). It runs on other Win 11 computers. It is not signed. "Enable ClickOnce security settings" is checked/enabled, and when I uncheck that box, save, and publish, the box is automatically re-checked. I tried changing Publish->Application Files->Hash from Include to Exclude for the DLL, but the error still occurs. I have deleted the users' cache at `%LocalAppData%\Apps\2.0` and the error still occurs. I've been able to run other WPF applications on the problem PC. This is a real struggle and any help/suggestions are greatly appreciated. |
Only two users get error "...dll has different computed hash than manifest" |
|c#|wpf|clickonce| |
I am not sure why this happens. It might be related to [this bug](https://github.com/apple/swift/issues/66450), which is fixed in Swift 5.10 (though I don't have 5.10 yet to check).
In any case, it seems like `Lazify` would add a declaration with the name of whatever string is passed to its `name` parameter. This is not a good design, since, at least, it would be difficult to rename this using the "Refactor -> Rename" option in Xcode.
Instead of generating whatever name the user wants, you can use the `suffixed` or `prefixed` options in `@attached(peer, names: ...)`, and always generate a name from the name of the declaration to which the macro is attached. For example, use:
```
@attached(peer, names: suffixed(_lazify))
// you can write multiple "suffixed(...)" here, if you need
```
Then,
@Lazify(lock: SomeClass.classLock)
func createLazyVariable() -> String { ... }
should be implemented to generate `createLazyVariable_lazify`.
<hr>
You can't use instance properties here for a similar reason to why something like this doesn't work:
```
class Foo {
    let x = 1
    let y = x // "self" is not available here!
}
```
I think this is checked before the macro expansion, so this is not a problem of your macro, per se.
To allow instance members, you can add an overload like this:
```
public macro Lazify<T, P: Protectable>(lock: KeyPath<T, P>) = ...
```
Usage:
```
@Lazify(lock: \SomeClass._internal_lock)
// in the macro implementation, you would generate something like this:
self[keyPath: \SomeClass._internal_lock]
// to access the lock.
```
<hr>
From your [previous question][1], it seems like a better design would be to just use a property wrapper:
```
@propertyWrapper
struct Lazify<T> {
    let lock: Protectable
    let initialiser: () -> T

    var wrappedValue: T {
        mutating get {
            if let x = lazy {
                return x
            }
            return lock.around {
                if let x = self.lazy {
                    return x
                }
                let temp = initialiser()
                self.lazy = temp
                return temp
            }
        }
        set { lazy = newValue }
    }

    private(set) var lazy: T?
}
```
[1]: https://stackoverflow.com/q/78204619/5133585 |
Issues with selenium (Python) and Brave broser |
|python|selenium-webdriver|brave| |
null |
I chose named pipes over TCP because I needed two applications on the same machine to talk to each other. I wanted something exceedingly simple, quick, and easy; unfortunately it has proven to be the opposite.
The server sends two messages to the client, and the client sends one message to the server. However, the client stalls on the second ReadLineAsync, despite two messages being there, and it also stalls upon calling WriteLineAsync.
Here's my complete wrapper class used by both the client and server:
public class CNamedPipe : IDisposable
{
    private PipeStream _stream;
    private StreamReader _reader;
    private StreamWriter _writer;

    public void Host(string pipeName)
    {
        _stream = new NamedPipeServerStream(pipeName, PipeDirection.InOut);
        Task.Run(ServerWaitForConnectionsAync);
    }

    public void Connect(string pipeName, string serverName = ".")
    {
        _stream = new NamedPipeClientStream(serverName, pipeName, PipeDirection.InOut);
        Task.Run(ClientConnectAsync);
    }

    private async Task ServerWaitForConnectionsAync()
    {
        while (!IsClosed)
        {
            await Server.WaitForConnectionAsync();
            CreateReaderWriter();
            Console.WriteLine("New connection!");
        }
        Console.WriteLine("Stopped waiting for new connections.");
    }

    private async Task ClientConnectAsync()
    {
        await Client.ConnectAsync();
        CreateReaderWriter();
    }

    private void CreateReaderWriter()
    {
        _reader = new StreamReader(_stream);
        _writer = new StreamWriter(_stream);
        _writer.AutoFlush = true;
        Task.Run(ReadDataAsync);
        Task.Run(WriteDataAsync);
    }

    private NamedPipeServerStream Server
    {
        get
        {
            return _stream as NamedPipeServerStream;
        }
    }

    public NamedPipeClientStream Client
    {
        get
        {
            return _stream as NamedPipeClientStream;
        }
    }

    private ConcurrentQueue<string> _readQueue = new ConcurrentQueue<string>();

    private async Task ReadDataAsync()
    {
        while (!IsClosed && _stream.IsConnected)
        {
            string message = await _reader.ReadLineAsync();
            Console.WriteLine($"Received: {message}");
            _readQueue.Enqueue(message);
        }
        Console.WriteLine("ReadDataAsync completed.");
    }

    private ConcurrentQueue<string> _writeQueue = new ConcurrentQueue<string>();

    private async Task WriteDataAsync()
    {
        while (!IsClosed && _stream.IsConnected)
        {
            bool wroteAny = false;
            if (_writeQueue.TryDequeue(out string message))
            {
                try
                {
                    await _writer.WriteLineAsync(message);
                    await _writer.FlushAsync();
                }
                catch (Exception ex)
                {
                    Console.WriteLine($"Exception on write: {ex}");
                    throw;
                }
                wroteAny = true;
            }
            if (!wroteAny)
                await Task.Delay(100);
        }
        Console.WriteLine("WriteDataAsync completed.");
    }

    public IEnumerable<string> TryRead()
    {
        while (_readQueue.Count > 0)
        {
            if (_readQueue.TryDequeue(out string ret) && !string.IsNullOrEmpty(ret))
                yield return ret;
        }
    }

    public void Write(string message)
    {
        _writeQueue.Enqueue(message);
    }

    private bool _isClosed = false;

    public bool IsClosed
    {
        get
        {
            return _isClosed || _stream == null;
        }
    }

    public bool IsConnected
    {
        get
        {
            return !IsClosed && _stream.IsConnected;
        }
    }

    public void Close()
    {
        _isClosed = true;
        if (_stream != null)
        {
            _stream.Close();
        }
    }

    public void Dispose()
    {
        Close();
        if (_stream != null)
        {
            _stream.Dispose();
            _stream = null;
        }
    }
}
They both say they're connected. EDIT: after some fiddling, manually try/catching and printing errors to the console, it appears that calling FlushAsync triggers a "Pipe is broken" exception. I appreciate your assistance. |
How can I remove already-registered images/cameras from a COLMAP 3D point cloud? When I run COLMAP to reconstruct a 3D scene I get the following folder output: `database.db, sparse >> 0 >> cameras.bin, images.bin and points3D.bin`. I tried to remove an incorrectly registered camera by deleting it inside COLMAP and then exporting the model, but this doesn't work. I also tried to remove the image's index inside the database.db file, which didn't work either. I think the camera/image data is stored inside the images.bin file. How can I remove specific camera views inside this file? |
How to remove specific images / cameras from colmap 3D Pointcloud | Clean Colmap
|binary|bin|colmap| |
I'm currently working on implementing AES encryption in the backend using Python, but I'm encountering some issues in ensuring compatibility between the frontend and backend. I need help integrating the frontend JavaScript code to work with it.
My backend Python code:
<!-- language: lang-py -->
class Crypt():
def pad(self, data):
BLOCK_SIZE = 16
length = BLOCK_SIZE - (len(data) % BLOCK_SIZE)
return data + (chr(length)*length)
def unpad(self, data):
return data[:-(data[-1] if type(data[-1]) == int else ord(data[-1]))]
def bytes_to_key(self, data, salt, output=48):
assert len(salt) == 8, len(salt)
data += salt
key = sha256(data).digest()
final_key = key
while len(final_key) < output:
key = sha256(key + data).digest()
final_key += key
return final_key[:output]
def bytes_to_key_md5(self, data, salt, output=48):
assert len(salt) == 8, len(salt)
data += salt
key = md5(data).digest()
final_key = key
while len(final_key) < output:
key = md5(key + data).digest()
final_key += key
return final_key[:output]
def encrypt(self, message):
passphrase = "<secret passpharse value>".encode()
salt = Random.new().read(8)
key_iv = self.bytes_to_key_md5(passphrase, salt, 32+16)
key = key_iv[:32]
iv = key_iv[32:]
aes = AES.new(key, AES.MODE_CBC, iv)
return base64.b64encode(b"Salted__" + salt + aes.encrypt(self.pad(message).encode()))
def decrypt(self, encrypted):
passphrase ="<secret passpharse value>".encode()
encrypted = base64.b64decode(encrypted)
assert encrypted[0:8] == b"Salted__"
salt = encrypted[8:16]
key_iv = self.bytes_to_key_md5(passphrase, salt, 32+16)
key = key_iv[:32]
iv = key_iv[32:]
aes = AES.new(key, AES.MODE_CBC, iv)
return self.unpad(aes.decrypt(encrypted[16:])).decode().strip('"')
def base64_decoding(self, encoded):
base64decode = base64.b64decode(encoded)
return base64decode.decode()
crypt = Crypt()
test = "secret message to be send over network"
encrypted_message = crypt.encrypt(test)
print("Encryp msg:", encrypted_message)
decrypted_message = crypt.decrypt(encrypted_message)
print("Decryp:", decrypted_message)
here's what I've tried so far on the frontend with React and CryptoJS:
<!-- language: lang-js -->
import React from "react";
import CryptoJS from 'crypto-js';
const DecryptEncrypt = () => {
function bytesToKey(passphrase, salt, output = 48) {
if (salt.length !== 8) {
throw new Error('Salt must be 8 characters long.');
}
let data = CryptoJS.enc.Latin1.parse(passphrase + salt);
let key = CryptoJS.SHA256(data).toString(CryptoJS.enc.Latin1);
let finalKey = key;
while (finalKey.length < output) {
data = CryptoJS.enc.Latin1.parse(key + passphrase + salt);
key = CryptoJS.SHA256(data).toString(CryptoJS.enc.Latin1);
finalKey += key;
}
return finalKey.slice(0, output);
}
const decryptData = (encryptedData, key) => {
const decodedEncryptedData = atob(encryptedData);
const salt = CryptoJS.enc.Hex.parse(decodedEncryptedData.substring(8, 16));
const ciphertext = CryptoJS.enc.Hex.parse(decodedEncryptedData.substring(16));
const keyIv = bytesToKey(key, salt.toString(), 32 + 16);
const keyBytes = CryptoJS.enc.Hex.parse(keyIv.substring(0, 32));
const iv = CryptoJS.enc.Hex.parse(keyIv.substring(32));
const decrypted = CryptoJS.AES.decrypt(
{ ciphertext: ciphertext },
keyBytes,
{ iv: iv, mode: CryptoJS.mode.CBC, padding: CryptoJS.pad.Pkcs7 }
);
return decrypted.toString(CryptoJS.enc.Utf8);
};
const encryptData = (data, key) => {
const salt = CryptoJS.lib.WordArray.random(8); // Generate random salt
const keyIv = bytesToKey(key, salt.toString(), 32 + 16);
const keyBytes = CryptoJS.enc.Hex.parse(keyIv.substring(0, 32));
const iv = CryptoJS.enc.Hex.parse(keyIv.substring(32));
const encrypted = CryptoJS.AES.encrypt(data, keyBytes, {
iv: iv,
mode: CryptoJS.mode.CBC,
padding: CryptoJS.pad.Pkcs7
});
const ciphertext = encrypted.ciphertext.toString(CryptoJS.enc.Hex);
const saltedCiphertext = "Salted__" + salt.toString(CryptoJS.enc.Hex) + ciphertext;
return btoa(saltedCiphertext);
};
const dataToEncrypt = 'Data to be sent over network';
const encryptionKey = "<secret passpharse value>";
const encryptedData = encryptData(dataToEncrypt, encryptionKey);
console.log("Encrypted data:", encryptedData);
const decryptedData = decryptData(encryptedData, encryptionKey);
console.log("Decrypted data:", decryptedData);
return (<>
Check
</>);
}
export default DecryptEncrypt;
I'm encountering some issues in ensuring compatibility between the frontend and backend. Specifically, I'm struggling with properly deriving the key and IV, and encrypting/decrypting the data in a way that matches the backend implementation. When I try to send encrypted text to the backend, it throws the following error while decrypting:
<!-- language: lang-none -->
packages\Crypto\Cipher\_mode_cbc.py", line 246, in decrypt
raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size)
ValueError: Data must be padded to 16 byte boundary in CBC mode
I'm a bit new to implementing AES in a full-stack app, so I'm learning as I go, but I'm still stuck on this issue. Could someone who has encountered a similar issue, or implemented encryption/decryption in JavaScript, offer some guidance or suggestions on how to modify my frontend code to achieve compatibility with the backend?
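One way to narrow down a frontend/backend mismatch like this is to compare the derived key/IV bytes in isolation, before even touching the AES calls. A minimal, dependency-free sketch of the backend's MD5-based derivation (the OpenSSL `EVP_BytesToKey` scheme, same as `bytes_to_key_md5` above), using illustrative passphrase/salt values rather than the real secret:

```python
from hashlib import md5

def bytes_to_key_md5(data: bytes, salt: bytes, output: int = 48) -> bytes:
    # OpenSSL EVP_BytesToKey with MD5, mirroring the backend's bytes_to_key_md5()
    assert len(salt) == 8
    data += salt
    key = md5(data).digest()
    final_key = key
    while len(final_key) < output:
        key = md5(key + data).digest()
        final_key += key
    return final_key[:output]

# Illustrative values only; the real passphrase and salt differ.
key_iv = bytes_to_key_md5(b"example-passphrase", b"12345678", 32 + 16)
key, iv = key_iv[:32], key_iv[32:]
print(key.hex())
print(iv.hex())
```

Printing `key.hex()` and `iv.hex()` for a fixed salt on both sides makes it easy to see whether the CryptoJS `bytesToKey` port produces the same bytes; a mismatch there (for instance from round-tripping binary digests through Latin1 strings) would explain the CBC padding error on decrypt.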
Is there a way of calculating commutators involving abstract sums in Cadabra? E.g., is it possible to show that
```[a_i, \sum_{j} a_j^\dagger a_j] = a_i```, where ```[a_i, a_j^\dagger] = \delta_{ij}```?
I found a similar post here: <https://stackoverflow.com/questions/62254012/sympy-how-to-get-simplified-commutators-using-the-second-quantization-module>, but I couldn’t figure out how to do this for abstract sums...
If this is not possible with Cadabra, can I do this with some other software?
Calculate commutator with abstract sum using Cadabra |
|symbolic-math|symbolic-references| |
ThisWorkbook.FollowHyperlink ("folder path goes here")
|
I have a device running a modified Modbus protocol. The device sends messages to the RPi3 serial port. The message is 14 bytes long: it starts with a sync byte, followed by 11 data bytes, then two Modbus CRC-16 bytes.
In order to check the validity of the message (by way of a CRC check), I can only send the 11 data bytes to the CRC check function. The problem is, I just can't figure out how to extract those 11 bytes and put them into a new bytes object (list??) acceptable to the CRC function.
The program is shown below with its output underneath it (the error shown is understood, since newData has not been created yet; that's what I'd like some help with!).
```
import serial
from time import sleep
from modbus_crc import check_crc
ser = serial.Serial("/dev/ttyS0", 9600)
print("waiting for message from the serial port ......\n")
rxData = ser.read()
sleep(0.03)
data_left = ser.inWaiting()
rxData += ser.read(data_left)
print("Message has been received\n")
print("The 'rxData' type from ser.read() is ",type(rxData), " and length is ", len(rxData))
print("'rxData - ", [hex(i) for i in rxData], "\n")
print("Now show only bytes 1 to 11 of rxData\n")
x = range(1,12,1)
for i in x:
print((hex(rxData[i])), end=" ")
print("\n")
#####################
#### Missing code to make newData with only the bytes (1 to 11 in rxData
#####################
print("\nThe 'newData' type is ",type(newData), " and length is ", len(newData))
print("'newData' - ", [hex(i) for i in newData], "\n")
print("\n")
print("check if newData CRC is OK\n")
if not check_crc(newData):
print("CRC is NOT OK")
else:
print("CRC is OK!")
```
Output after running the program:
```
waiting for message from the serial port ......
Message has been received
The 'rxData' type from ser.read() is <class 'bytes'> and length is 14
'rxData - ['0xff', '0xd', '0x77', '0x2', '0x1', '0x1', '0x12', '0x33', '0x30', '0x2e', '0x38', '0x39', '0xfd', '0x78']
Now show only bytes 1 to 11 of rxData
0xd 0x77 0x2 0x1 0x1 0x12 0x33 0x30 0x2e 0x38 0x39
Traceback (most recent call last):
File "/home/stevev/Projects/TKinter/20240309-operation on Bytes class.py", line 29, in <module>
print("\nThe 'newData' type is ",type(newData), " and length is ", len(newData))
NameError: name 'newData' is not defined
```
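For what it's worth, a slice of a `bytes` object is itself a `bytes` object, so the extraction can be sketched directly on the example message from the output above:

```python
# The 14-byte example message from the output above:
# byte 0 = sync, bytes 1-11 = data, bytes 12-13 = CRC
rxData = bytes([0xff, 0x0d, 0x77, 0x02, 0x01, 0x01, 0x12,
                0x33, 0x30, 0x2e, 0x38, 0x39, 0xfd, 0x78])

newData = rxData[1:12]  # indices 1..11 inclusive -> a new bytes object
print(type(newData), len(newData))
```

`newData` could then be passed to `check_crc()` as-is, assuming that function accepts a `bytes` object of the 11 data bytes; if it instead expects the data plus the two CRC bytes, `rxData[1:14]` would include them.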
|
I'm performing data analysis on a dataset whose categorical labels are interrelated.
My labels track experimental conditions.
In my case, labels track concentrations of combinations of two chemicals that produce an output measured by n features.
Is it best practice to use the categorical labels in place of the concentrations of the combinations of chemicals, or is there a better method?
Here's a sample of the translation between categorical label and real life condition it represents.
| Condition | Chemical1 | Chemical2 |
| --------- | --------- | -------- |
| 1 | 1 | 0 |
| 2 | 2 | 0 |
| 3 | 0 | 1 |
| 4 | 0 | 2 |
| 5 | 1 | 1 |
| 6 | 1 | 2 |
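One option worth considering can be sketched as follows: rather than feeding the model the opaque condition label, map each condition back to its two underlying concentrations and use those as numeric features (the mapping follows the translation table above; variable names are illustrative):

```python
# Map each categorical condition to its (Chemical1, Chemical2) concentrations,
# following the translation table above.
condition_map = {
    1: (1, 0), 2: (2, 0), 3: (0, 1),
    4: (0, 2), 5: (1, 1), 6: (1, 2),
}

conditions = [1, 3, 5, 6]  # example observed condition labels
X_chem = [condition_map[c] for c in conditions]
print(X_chem)  # [(1, 0), (0, 1), (1, 1), (1, 2)]
```

These two columns can then be concatenated with the n measured features before fitting; unlike a one-hot-encoded condition label, this representation preserves the ordinal relationship between concentrations (condition 2 really is "twice as much Chemical1" as condition 1).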
|
Using sklearn where the label is a combination of multiple inputs
|python|machine-learning|scikit-learn|data-preprocessing| |
null |
{"OriginalQuestionIds":[9251117],"Voters":[{"Id":4342498,"DisplayName":"NathanOliver","BindingReason":{"GoldTagBadge":"c++"}}]} |
If I understand correctly, you are looking for [geopandas.total_bounds](https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoSeries.total_bounds.html). This property returns the minimum bounding box for the entire GeoDataFrame.
To get the center of it, you can use e.g. [shapely.centroid](https://shapely.readthedocs.io/en/stable/reference/shapely.centroid.html). E.g. something like this:
import geopandas as gpd
import shapely
p1 = shapely.Polygon([[0, 0], [0, 1], [1, 1], [0, 0]])
p2 = shapely.Polygon([[99, 99], [99, 100], [100, 100], [99, 99]])
df = gpd.GeoDataFrame(data={"desc": ["p1", "p2"]}, geometry=[p1, p2])
center = shapely.box(*df.total_bounds).centroid
print(center)
# POINT (50 50)
|
{"OriginalQuestionIds":[2081640],"Voters":[{"Id":523612,"DisplayName":"Karl Knechtel","BindingReason":{"GoldTagBadge":"python"}}]} |
I have downloaded a .csv file from the World Bank's databank containing USA's gdp per capita growth (annual %) from 1964 to 2022. After reading the file and saving the values in a variable, R seems to believe that the data is non-numeric, as per the following errors:
> mean(gdp)
[1] NA
Warning message:
In mean.default(gdp) : argument is not numeric or logical: returning NA
> round(gdp)
Error in Math.data.frame(list(4.34054896320299, 5.07809761043293, 5.27711385836413, :
non-numeric-alike variable(s) in data frame: 38, 39, 59
However, the variable clearly has numeric data:
> gdp
1 4.340549 5.078098 5.277114 1.389951 3.758819 2.09737 -1.438451 1.9956
1 4.138097 4.642156 -1.445134 -1.184581 4.391463 3.577147 4.422985 2.033887
1 -1.209298 1.53632 -2.73457 3.631979 6.312168 3.250656 2.510886 2.538624
1 3.235416 2.698167 0.7414861 -1.4342 2.096613 1.405709 2.760882 1.468718
1 2.572259 3.197212 3.270511 3.597985 2.925441 -0.0399197287893003
1 0.756774050858581 1.91648 2.895848 2.533784 1.796486 1.04493 -0.8203679
1 -3.450016 1.860292 0.8145194 1.533102 1.138692 1.540381 1.953004 0.9333754
1 1.597136 2.404868 1.829668 -3.700953 5.779548 1.55148744876492
What is causing this error?
---
**How the data was downloaded:** by accessing the [WB's databank][1] and selecting the following options:
* Country: USA
* Series: GDP per capita growth (annual %)
* Time: from 1964 to 2022 (both inclusive)
Then the data can be downloaded by clicking on "Download options" and "CSV". The file should look like so:
[![enter image description here][2]][2]
---
**How the data was read:** after deleting the B, C, and D columns and renaming the file to 'gdp.csv', I open R in the corresponding directory and run
> data = read.csv("gdp.csv")
> gdp = unname(data[1,-1])
---
**Output of some commands asked for in the comments:**
> dput(gdp)
structure(list(4.34054896320299, 5.07809761043293, 5.27711385836413,
1.38995128628369, 3.75881936763187, 2.09736970647879, -1.43845053330176,
1.99560023463674, 4.13809677493917, 4.64215574445912, -1.44513434368933,
-1.18458141215473, 4.39146286737468, 3.57714670648173, 4.42298460692567,
2.03388706463849, -1.20929826319077, 1.53632028085542, -2.73456973098722,
3.631979295881, 6.31216765588185, 3.25065642338708, 2.51088596744542,
2.53862353656405, 3.23541610899424, 2.69816667236526, 0.74148609960973,
-1.43420012525736, 2.09661276602233, 1.40570856500528, 2.76088229716902,
1.46871823407399, 2.57225920399024, 3.1972120547039, 3.27051107297336,
3.59798501816873, 2.92544098347776, "-0.0399197287893003",
"0.756774050858581", 1.91648045091225, 2.89584777850045,
2.53378411366741, 1.79648632582968, 1.04493013677586, -0.820367898524154,
-3.45001592321435, 1.86029167805893, 0.814519357932042, 1.53310203539129,
1.13869234666606, 1.54038064866397, 1.95300411790625, 0.933375361665711,
1.59713559028371, 2.40486787201, 1.82966838809384, -3.70095252825833,
5.77954841835778, "1.55148744876492"), row.names = 4L, class = "data.frame")
> str(gdp)
'data.frame': 1 obs. of 59 variables:
$ : num 4.34
$ : num 5.08
$ : num 5.28
[...]
$ : num 2.93
$ : chr "-0.0399197287893003"
$ : chr "0.756774050858581"
$ : num 1.92
[...]
$ : num 5.78
$ : chr "1.55148744876492"
where I have omitted most lines starting with `num`, as I'm guessing the ones beginning with `chr` are the issue.
[1]: https://databank.worldbank.org/reports.aspx?source=2&series=NY.GDP.PCAP.KD.ZG&country=USA#
[2]: https://i.stack.imgur.com/flBgz.png |
The presence of `DialogContent` in your posted code suggests the use of Material-UI's Dialog component.
The Dialog component has an accessibility feature that moves the browser's focus to the modal and adds an event listener to keep it there:
[![enter image description here][1]][1]
This is preventing focus on the overlaying iframe that Stripe opens. To prevent this, use the `disableEnforceFocus` property on your containing `Dialog` component:
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/cQgpY.png
[2]: https://i.stack.imgur.com/oLnlX.png |
I am trying to generate a signature programmatically for the body of my SOAP Messages
Looking at the [spec](https://www.w3.org/TR/xmldsig-core2/#sec-KeyInfo) it should be possible to have
```
<KeyInfo>
<X509Data>
<X509Certificate/>
<X509IssuerSerial>
<X509IssuerName>
...
</X509IssuerName>
<X509SerialNumber>...</X509SerialNumber>
</X509IssuerSerial>
</X509Data>
</KeyInfo>
```
I am using WSS4J 3.0.3
```
KeyPair keyPair = generateKeys();
Certificate certificate = generateCertificate(new KeyPair(getPublicKey(), getPrivateKey()));
String alias = "alias";
KeyStore keyStore = saveKeyStore(temporaryFolder.newFile(KEY_STORE_FILENAME), PASSWORD, certificate, keyPair, alias);
try(InputStream inputStream = TestSOAPSignatureValidationServiceWSSJ.class.getResourceAsStream("UnsignedDocument.xml")){
SOAPMessage message = MessageFactory.newInstance().createMessage(null, inputStream);
SOAPBody soapBody = message.getSOAPBody();
Document document = soapBody.getOwnerDocument();
WSSecHeader secHeader = new WSSecHeader(document);
secHeader.setMustUnderstand(true);
Element securityHeaderElement = document.createElementNS("http://schemas.xmlsoap.org/soap/security/2000-12", "SOAP-SEC:Signature");
message.getSOAPHeader().appendChild(securityHeaderElement);
secHeader.setSecurityHeaderElement(securityHeaderElement);
secHeader.insertSecurityHeader();
WSSecSignature signature = new WSSecSignature(secHeader);
signature.setX509Certificate((X509Certificate) certificate);
Properties properties = new Properties();
properties.setProperty("org.apache.ws.security.crypto.provider", "org.apache.ws.security.components.crypto.Merlin");
Crypto crypto = CryptoFactory.getInstance(properties);
crypto.loadCertificate(new ByteArrayInputStream(certificate.getEncoded()));
((Merlin) crypto).setKeyStore(keyStore);
signature.setUserInfo(alias, PASSWORD);
WSDocInfo wsDocInfo = new WSDocInfo(document);
signature.setWsDocInfo(wsDocInfo);
signature.setAddInclusivePrefixes(false);
org.apache.xml.security.Init.init();
WSEncryptionPart wsEncryptionPart = new WSEncryptionPart(soapBody.getLocalName(), soapBody.getNamespaceURI(), "Content");
wsEncryptionPart.setElement(soapBody);
wsEncryptionPart.setId("Body");
signature.addReferencesToSign(List.of(wsEncryptionPart));
signature.setKeyIdentifierType(WSConstants.ISSUER_SERIAL);
Document signed = signature.build(crypto);
LOG.info(XMLUtils.prettyDocumentToString(signed));
}
}
```
but I am getting
```
<ds:KeyInfo Id="KI-c26f3a7c-ddf2-4889-9eef-55b541cb458f">
**<wsse:SecurityTokenReference** wsu:Id="STR-d5bbb99f-f861-4512-af56-f5e697c939ba" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<ds:X509Data>
<ds:X509IssuerSerial>
<ds:X509IssuerName>C=CA,ST=UK,L=London,O=*** Crypto.,OU=Security&Defense,CN=***.com</ds:X509IssuerName>
<ds:X509SerialNumber>236</ds:X509SerialNumber>
</ds:X509IssuerSerial>
</ds:X509Data>
**</wsse:SecurityTokenReference>**
</ds:KeyInfo>
```
Is it possible to instruct WSS4J to avoid the `SecurityTokenReference` tag and have the `X509Data` tag as a direct child of `KeyInfo`?
Thanks in advance!
|
Terraform OCI error when creating Network Load Balancer |
|terraform|oracle-cloud-infrastructure|terraform-provider-oci|network-load-balancer| |
I am trying to scrape the MLB daily lineup information from here: https://www.rotowire.com/baseball/daily-lineups.php
I am trying to use python with requests, BeautifulSoup and pandas.
My ultimate goal is to end up with two pandas data frames.
First is a starting pitching data frame:
|date |game_time |pitcher_name |team |lineup_throws|
|----------|------------|----------------|-------|-------------|
|2024-03-29|1:40 PM ET |Spencer Strider |ATL |R |
|2024-03-29|1:40 PM ET |Zack Wheeler |PHI |R |
Second is a starting batter data frame:
|date |game_time|batter_name |team |pos |batting_order |lineup_bats|
|-------|---------|-----|-------|-------|---------------|-----------|
|2024-03-29|1:40 PM ET |Ronald Acuna |ATL |RF |1 |R|
|2024-03-29|1:40 PM ET |Ozzie Albies |ATL |2B |2 |S|
|2024-03-29|1:40 PM ET |Austin Riley |ATL |3B |3 |R|
|2024-03-29|1:40 PM ET |Kyle Schwarber |PHI |DH |1 |L|
|2024-03-29|1:40 PM ET |Trea Turner |PHI |SS |2 |R|
|2024-03-29|1:40 PM ET |Bryce Harper |PHI |1B |3 |L|
This would be for all game for a given day.
I've tried adapting this answer to my needs but can't seem to get it to quite work: https://stackoverflow.com/questions/67814115/scraping-web-data-using-beautifulsoup
Any help or guidance is greatly appreciated. |
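To illustrate the general shape of the approach (not a working scraper for the live page), here is a sketch that parses a tiny inline snippet; the class names are guesses at rotowire's markup and would need verifying in the browser dev tools before pointing this at the real page:

```python
from bs4 import BeautifulSoup

# Hypothetical markup; the class names below are assumptions, not confirmed
# selectors from rotowire's daily-lineups page.
html = """
<div class="lineup__player">
  <div class="lineup__pos">RF</div>
  <a title="Ronald Acuna">R. Acuna</a>
  <span class="lineup__bats">R</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
rows = []
for player in soup.select("div.lineup__player"):
    rows.append({
        "batter_name": player.find("a")["title"],
        "pos": player.select_one(".lineup__pos").get_text(strip=True),
        "lineup_bats": player.select_one(".lineup__bats").get_text(strip=True),
    })
print(rows)
```

A list of dicts like `rows` drops straight into `pandas.DataFrame(rows)`; the batting order could come from `enumerate()` over the players within each lineup box, and the date/game time from the surrounding game container.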
Scraping MLB daily lineups from rotowire using python |
|python|web-scraping|beautifulsoup| |
{"OriginalQuestionIds":[41265266],"Voters":[{"Id":1746118,"DisplayName":"Naman","BindingReason":{"GoldTagBadge":"java"}}]} |
The relative path to the included file was not correct. Hence, one should write something like use("C:\\path\\to\\included\\file.chai"). |
I am attempting to install Python 3.10 on an Ubuntu 20.04 Docker image. I am using the line:
```
RUN export DEBIAN_FRONTEND=noninteractive TZ=US && \
apt-get update && \
apt-get -y install python3.10 python3-pip
```
which seems to run properly, and I get a successful Image.
However, when I run the container, I appear to only have Python 3.8 installed:
```
# python3 --version
Python 3.8.10
# python3.10 --version
/bin/sh: 3: python3.10: not found
# whereis python3
python3: /usr/bin/python3.8 /usr/bin/python3 /usr/bin/python3.8-config /usr/lib/python3 /usr/lib/python3.8 /usr/lib/python3.9 /etc/python3.8 /etc/python3 /usr/local/lib/python3.8 /usr/include/python3.8 /usr/share/python3
```
How can I install Python 3.10 in this case? |
Installing Python 3.10 in Docker Image? |
|python|python-3.x|docker|docker-compose| |
Your output:
```lang-none
this is thread 1
this is thread 2
main exists
thread 2 exists
thread 1 exists
thread 1 exists
```
Before I even get to *"it prints "thread 1 exists" twice"*, I see that it prints after "main exists". This behavior can lead to unpredictable results.
--
First, let's tidy up your code:
```cpp
#include <thread>
#include <iostream>
#include <future>
#include <syncstream>
void log(const char* str)
{
std::osyncstream ss(std::cout);
ss << str << std::endl;
}
void worker1(std::future<int> fut)
{
log("this is thread 1");
fut.get();
log("thread 1 exists");
}
void worker2(std::promise<int> prom)
{
log("this is thread 2");
prom.set_value(10);
log("thread 2 exists");
}
int main()
{
std::promise<int> prom;
std::future<int> fut = prom.get_future();
// Fire the 2 threads:
std::thread t1(worker1, std::move(fut));
std::thread t2(worker2, std::move(prom));
t1.join();
t2.join();
log("main exists");
}
```
Key points:
* CRITICAL: Replace the `while` loop and `detach()` with `join()` in the `main()` to ensure that the main thread waits for all child threads to finish before exiting.
* Trim the `#include` lines to include only what's necessary - For better practice.
* Remove unused variables - For better practice.
* Remove the unused `using namespace` directive - For better practice.
* In addition, I would also replace the `printf()` calls with `std::osyncstream`.
[Demo][1]
With that, the output is:
```lang-none
this is thread 1
this is thread 2
thread 2 exists
thread 1 exists
main exists
```
[1]: https://onlinegdb.com/b2NwxGX-d |
I have a thread that sends GPS coordinates to a database every six seconds and I have a check that verifies that the user is within a defined area. If the user is not within the location, I want an alert dialog that notifies them that they are out of range, and if they are within the area I want a dialog that tells them they are within range.
I have the checks working properly, but from what I've tried I'm pretty sure that I can't show the dialog from the background thread. I have read a bit about using handlers. How can I implement one?
This is how I call `FindLocation.java` from my main activity (`MainActivity.java`):
new FindLocation(getBaseContext()).start(usr_id1); //sends a user id with it
Below is `FindLocation.java`
public class FindLocation extends Thread {
    public boolean inArea;
public boolean AlertNotice = false;
private LocationManager locManager;
private LocationListener locListener;
Context ctx;
public String userId;
public FindLocation(Context ctx) {
this.ctx = ctx;
}
public void start(String userId) {
this.userId = userId;
super.start();
}
@Override
public void run() {
Looper.prepare();
final String usr = userId;
//get a reference to the LocationManager
locManager = (LocationManager) ctx.getSystemService(Context.LOCATION_SERVICE);
//checked to receive updates from the position
locListener = new LocationListener() {
public void onLocationChanged(Location loc) {
String lat = String.valueOf(loc.getLatitude());
String lon = String.valueOf(loc.getLongitude());
Double latitude = loc.getLatitude();
Double longitude = loc.getLongitude();
if (latitude >= 39.15296 && longitude >= -86.547546 && latitude <= 39.184901 && longitude <= -86.504288 || inArea != false) {
Log.i("Test", "Yes");
inArea = true;
JSONArray jArray;
String result = null;
InputStream is = null;
StringBuilder sb = null;
ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
nameValuePairs.add(new BasicNameValuePair("id", usr));
//http post
try{
HttpClient httpclient = new DefaultHttpClient();
HttpPost httppost = new HttpPost("http://www.example.com/test/example.php");
httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
HttpResponse response = httpclient.execute(httppost);
HttpEntity entity = response.getEntity();
is = entity.getContent();
}catch(Exception e){
Log.e("log_tag", "Error in http connection"+e.toString());
}
//convert response to string
try{
BufferedReader reader = new BufferedReader(new InputStreamReader(is,"iso-8859-1"),8);
sb = new StringBuilder();
sb.append(reader.readLine() + "\n");
String line="0";
while ((line = reader.readLine()) != null) {
sb.append(line + "\n");
}
is.close();
result=sb.toString();
}
catch(Exception e){
Log.e("log_tag", "Error converting result "+e.toString());
}
try{
jArray = new JSONArray(result);
JSONObject json_data=null;
for(int i=0;i<jArray.length();i++){
json_data = jArray.getJSONObject(i);
String ct_name = json_data.getString("phoneID");
Log.i("User ID", ct_name);
if(ct_name == usr) {
locManager.removeUpdates(locListener);
}
else{
locManager.removeUpdates(locListener);
Log.i("User ID", "NONE");
}
}
}
catch(Exception e){
//Log.e("log_tag", "Error converting result "+e.toString());
HttpClient httpclient = new DefaultHttpClient();
HttpPost httppost = new HttpPost("http://example.com/test/example.php");
try {
List<NameValuePair> nameValuePairs1 = new ArrayList<NameValuePair>(2);
nameValuePairs1.add(new BasicNameValuePair("lat", lat));
nameValuePairs1.add(new BasicNameValuePair("lon", lon));
nameValuePairs1.add(new BasicNameValuePair("id", usr));
httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs1));
httpclient.execute(httppost);
Log.i("SendLocation", "Yes");
}
catch (ClientProtocolException g) {
// TODO Auto-generated catch block
} catch (IOException f) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
else {
Log.i("Test", "No");
inArea = false;
}
}
public void onProviderDisabled(String provider){
}
public void onProviderEnabled(String provider){
}
public void onStatusChanged(String provider, int status, Bundle extras){
}
};
locManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 6000, 0, locListener);
Looper.loop();
}
}
|
I recently started a new Angular 17 project and I'm encountering a bit of an issue with server side rendering and/or hydration, not 100% sure which. Up until now I've primarily worked with lower versions (up to 14) and without server side rendering, so I have yet to encounter such an issue.
Within my home page component I subscribe to a Subject as provided by the `angular-requests` library. This means that I have an open observable, which causes the `ApplicationRef`'s `isStable` to not fire resulting in the page needing to wait 10 seconds to load (after which I am also given the following warning: `Angular hydration expected the ApplicationRef.isStable() to emit true, but it didn't happen within 10000ms. Angular hydration logic depends on the application becoming stable as a signal to complete hydration process. Find more at https://angular.io/errors/NG0506`).
Reading through the link, it suggests subscribing to these observables outside of Angular using `NgZone`, but I wouldn't want to force that on every single component that sends a request, and it also feels like a bad idea. The page did lead me to a pretty hacky fix, but I feel like there must be some better way that allows me to keep open subscriptions. Any help would be greatly appreciated.
Hacky Fix:
```typescript
app.component.ts
export class AppComponent {
public canRender: boolean = true;
constructor() {
inject(ApplicationRef).isStable
.subscribe(isStable => this.canRender = isStable || this.canRender);
}
}
```
```html
app.component.html
@if (canRender) {
<router-outlet></router-outlet>
}
```
Again, any help would be greatly appreciated, thanks in advance,
Venfi
|
Issue With Angular Hydration |
|angular|server-side-rendering|hydration| |
Here is my code: https://github.com/d0uble-happiness/DiscogsCsvVue
App.vue
<template>
<div>
<FileUpload @file="setFile" />
<ParseCsvToArray v-if="file" :file="file" />
<ProcessReleaseData />
<FetchRelease />
</div>
</template>
<script lang="ts">
import { defineComponent } from 'vue';
import FileUpload from './components/FileUpload.vue'
import ParseCsvToArray from './components/ParseCsvToArray.vue'
import ProcessReleaseData from './components/ProcessReleaseData.vue'
import FetchRelease from './components/FetchRelease.vue'
export default defineComponent({
name: 'App',
components: {
FileUpload,
ParseCsvToArray,
ProcessReleaseData,
FetchRelease
},
data() {
return {
file: null as null | File,
}
},
methods: {
setFile(file: File) {
console.log("Received file:", file)
this.file = file;
}
},
mounted() {
console.log("mounted");
},
});
</script>
<style></style>
ParseCsvToArray.vue
<template>
<div>
<p v-for="row of parsedData" v-bind:key="row.id">
{{ row }}
</p>
</div>
</template>
<script lang="ts">
import { defineComponent } from 'vue'
import Papa from 'papaparse';
import ROW_NAMES from './RowNames.vue'
export default defineComponent({
name: 'ParseCsvToArray',
props: {
file: File
},
data() {
return {
parsedData: [] as any[],
rowNames: ROW_NAMES
}
},
methods: {
parseCsvToArray(file: File) {
Papa.parse(file, {
header: false,
complete: (results: Papa.ParseResult<any>) => {
console.log('Parsed: ', results.data);
this.parsedData = results.data;
}
});
}
},
mounted() {
if (this.file) {
this.parseCsvToArray(this.file);
}
},
});
</script>
<style></style>
FetchRelease.vue
<template>
<label>Fetch release</label>
</template>
<script lang="ts">
import { DiscogsClient } from '@lionralfs/discogs-client';
import ProcessReleaseData from './ProcessReleaseData.vue'
import { defineComponent } from 'vue'
// import { defineAsyncComponent } from 'vue'
export default defineComponent ({
name: 'FetchRelease',
methods: {
fetchRelease
}
});
const db = new DiscogsClient().database();
// const AsyncComp = defineAsyncComponent(() => {
// return new Promise((resolve, reject) => {
// // ...load component from server
// resolve(/* loaded component */)
// })
// })
async function fetchRelease(releaseId: string): Promise<any[] | { error: string }> {
try {
const { data } = await db.getRelease(releaseId);
return ProcessReleaseData(releaseId, data);
} catch (error) {
return {
error: `Release with ID ${releaseId} does not exist`
};
}
}
</script>
<style></style>
ProcessReleaseData.vue
<template>
<label>Process release data</label>
</template>
<script lang="ts">
import { type GetReleaseResponse } from '@lionralfs/discogs-client/types/types';
export default {
name: 'ProcessReleaseData',
methods: {
processReleaseData
},
data() {
return {
// formattedData
}
},
};
function processReleaseData(releaseId: string, data: GetReleaseResponse) {
const { country = 'Unknown', genres = [], styles = [], year = 'Unknown' } = data;
const artists = data.artists?.map?.(artist => artist.name);
const barcode = data.identifiers.filter(id => id.type === 'Barcode').map(barcode => barcode.value);
const catno = data.labels.map(catno => catno.catno);
const uniqueCatno = [...new Set(catno)];
const descriptions = data.formats.map(descriptions => descriptions.descriptions);
const format = data.formats.map(format => format.name);
const labels = data.labels.map(label => label.name);
const uniqueLabels = [...new Set(labels)];
const qty = data.formats.map(format => format.qty);
const tracklist = data.tracklist.map(track => track.title);
// const delimiter = document.getElementById('delimiter').value || '|';
const delimiter = '|';
const formattedBarcode = barcode.join(delimiter);
const formattedCatNo = uniqueCatno.join(delimiter);
const formattedGenres = genres.join(delimiter);
const formattedLabels = uniqueLabels.join(delimiter);
const formattedStyles = styles.join(delimiter);
const formattedTracklist = tracklist.join(delimiter);
const preformattedDescriptions = descriptions.toString().replace('"', '""').replace(/,/g, ', ');
const formattedDescriptions = '"' + preformattedDescriptions + '"';
const formattedData: any[] = [
releaseId,
artists,
format,
qty,
formattedDescriptions,
formattedLabels,
formattedCatNo,
country,
year,
formattedGenres,
formattedStyles,
formattedBarcode,
formattedTracklist
];
return formattedData;
}
</script>
<style></style>
AFAIK I want to take the `parsedData` (which will be an array of integers), and pass it on to `FetchRelease`, to do API calls using `ProcessReleaseData`.
At the moment, `return ProcessReleaseData` is throwing this error:
> Value of type 'DefineComponent<{}, {}, {}, {}, { processReleaseData: (releaseId: string, data: GetReleaseResponse) => any[]; }, ComponentOptionsMixin, ComponentOptionsMixin, ... 5 more ..., {}>' is not callable. Did you mean to include 'new'?
...but VSCode's suggested fix doesn't solve it.
I was told...
> Two ways to go about this - js/ts file that is a shared “helper” or something and exports the function, or include the component, give it a ref and access function through components ref.
...but I really don't know how to do that.
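If I understand the first suggestion, it would mean moving the function out of the `.vue` file into a plain module, since a component isn't callable but an exported function is. Something like this, maybe (the file name is made up, and I've trimmed the fields down):

```typescript
// releaseHelpers.ts -- a plain TS module, not a Vue component (hypothetical file name)
export interface GetReleaseResponse {
  genres?: string[];
  styles?: string[];
  // ...plus the other fields the full function above uses
}

export function processReleaseData(releaseId: string, data: GetReleaseResponse): string[] {
  const { genres = [], styles = [] } = data;
  const delimiter = '|';
  return [releaseId, genres.join(delimiter), styles.join(delimiter)];
}

// Then in any component's <script setup lang="ts"> it would just be:
// import { processReleaseData } from './releaseHelpers';
// const row = processReleaseData(releaseId, response);
```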
I was wondering if it could possibly be something as simple as...
```
const AsyncComp = defineAsyncComponent(() => {
  return new Promise((resolve, reject) => {
    // ...load component from server
    component: import('./ProcessReleaseData.vue'),
    resolve(ProcessReleaseData)
  })
})
```
...but that throws this error:
> Argument of type 'DefineComponent<{}, {}, {}, {}, { processReleaseData: (releaseId: string, data: GetReleaseResponse) => any[]; }, ComponentOptionsMixin, ComponentOptionsMixin, ... 5 more ..., {}>' is not assignable to parameter of type '{ default: never; } | PromiseLike<{ default: never; }>'.
Any help please? TIA |
Please add the `apiKey` parameter at the end of your API endpoint URL. Be sure to prefix it with `?`. |
[sql.js](https://github.com/sql-js/sql.js/) is a SQLite client for JavaScript, based on the original SQLite compiled to WASM.
Bundle size: 640 KB [wasm](https://sql.js.org/dist/sql-wasm.wasm) + 50 KB [js](https://sql.js.org/dist/sql-wasm.js) ≈ 700 KB, or 1300 KB [asm-js](https://sql.js.org/dist/sql-asm.js).
For comparison: [alasql](https://bundlephobia.com/package/alasql@4.3.2) has a bundle size of 440 KB.
Limitation: the full database is stored in memory, so the database should be smaller than ~100 MB.
Ideally, sql.js would support IndexedDB as an on-disk storage backend, to reduce memory usage.
See also:
- [absurd-sql](https://github.com/jlongster/absurd-sql) - sql.js with an IndexedDB storage backend. size limit for IndexedDB is around 200 MB
- [Allow hooking into the filesystem and providing a persistent backend (like absurd-sql) sql.js#481](https://github.com/sql-js/sql.js/pull/481)
- [Meta VFS for adding storage options sql.js#447](https://github.com/sql-js/sql.js/issues/447) |
I have a web app that will draw a polyline for each user (tracks movement), and I'd like to incorporate some functionality that allows the web app user to 'focus' on a certain user by changing the color of the polyline. It will have to first change all the polylines to red, and then change the selected polyline to blue. I think this is best to avoid focusing on one line, then trying to focus on another and having them both blue.
I'm really not sure how to implement this, but I have functionality that returns a user ID when the name is pressed. I just need to iterate over each object (each user's polyline) to change them all to red first, then change the specific one to blue. Here is some code below; this is a condensed version of my code.
function User(id) {
this.id = id;
this.locations = [];
this.mark = 0;
this.getId = function() {
return this.id;
};
this.addLocation = function(latitude, longitude) {
this.locations[this.locations.length] = new google.maps.LatLng(latitude, longitude);
};
var polyline;
this.drawPolyline = function(loc) {
polyline = new google.maps.Polyline({
map: map,
path: loc,
strokeColor: "#FF0000",
strokeOpacity: 1.0,
strokeWeight: 2
});
polyline.setMap(map);
};
this.removePolyline = function() {
if (polyline != undefined) {
polyline.setMap(null);
}
}
this.get_user_info = function(user_id) {
var datastr = 'id=' + user_id;
$.ajax({
type: "POST",
url: 'user_api.php',
data: datastr,
dataType: 'json',
success: function(data){
var phone_id = data[0];
var leftDiv = document.createElement("div"); //Create left div
leftDiv.id = "left"; //Assign div id
leftDiv.setAttribute("style", "float:left; width:66.5%; line-height: 26px; text-align:left; font-size:12pt; padding-left:8px; height:26px;"); //Set div attributes
leftDiv.style.background = divColor;
//user_name = document.createTextNode(fullName + ' '); //Set user name
a = document.createElement('a');
a.href ="javascript:setFocus('" + phone_id + "');";
a.innerHTML = fullName + ' ';
leftDiv.appendChild(a);
}
});
}
}
function setFocus(phone_id) {
alert(phone_id);
}
function Users() {
this.users = {};
this.createUser = function(id) {
this.users[id] = new User(id);
return this.users[id];
};
this.getUser = function(id) {
return this.users[id];
};
this.removeUser = function(id) {
var user = this.getUser(id);
delete this.users[id];
return user;
};
}
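Would something like this work for the focusing part? It assumes `drawPolyline` is changed to keep the polyline on the instance (e.g. `this.polyline = new google.maps.Polyline(...)`) so it can be restyled later:

```javascript
// Pure helper: the focused user's line is blue, everyone else's is red.
function colorFor(userId, focusId) {
    return userId === focusId ? "#0000FF" : "#FF0000";
}

// One pass over every tracked user: no need to reset everything to red
// first and then repaint the focused line in a second step.
function setFocus(focusId) {
    for (var id in users.users) {
        var user = users.users[id];
        if (user.polyline) {
            user.polyline.setOptions({ strokeColor: colorFor(id, focusId) });
        }
    }
}
```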
var users = new Users(); |
Why does the ImageGalleryBlock in wagtail-crx/coderedcms return no images? |
|django|wagtail|wagtail-streamfield| |
null |
I'm trying to test the notifications component. When the user clicks on one of the cards, it should call the method `markNotificationAsRead`, but when I write the test like this:
it('should mark notification as read and navigate to correct page when clicked', async () => {
vi.mock('../services/notifications.http', () => ({
markNotificationAsRead: vi.fn(),
markAllNotificationsAsRead: vi.fn(),
loadNotifications: vi.fn(),
}));
vi.mock('@tanstack/react-query', async () => {
const mod = await vi.importActual<typeof import('@tanstack/react-query')>('@tanstack/react-query');
return {
...mod,
useInfiniteQuery: vi.fn(() => ({
data: {
pages: [
[
{
id: '1',
text: 'Notification 1',
title: 'Title 1',
data: {
id: 1,
notify_type: 'type1',
category: 'category1',
},
read: false,
type: 'type1',
created_at: '2022-01-01T00:00:00Z',
},
],
[
{
id: '2',
text: 'Notification 2',
title: 'Title 2',
data: {
id: 2,
notify_type: 'type2',
category: 'category2',
},
read: true,
type: 'type2',
created_at: '2022-01-02T00:00:00Z',
},
],
],
},
fetchNextPage: vi.fn(),
hasNextPage: true,
isFetching: false,
isLoading: false,
})),
useQueryClient: vi.fn(() => ({
invalidateQueries: vi.fn(),
})),
};
});
const markNotificationAsRead = vi.spyOn(notificationsHttp, 'markNotificationAsRead');
renderWithProviders(<Notifications />);
const firstNotificationCard = screen.getAllByRole('listitem', { name: 'Notification' })[0];
await userEvent.click(firstNotificationCard);
await waitFor(() => {
expect(markNotificationAsRead).toHaveBeenCalled();
});
});
When I run this test I get:

```
AssertionError: expected "spy" to be called at least once
```

I tried to put the click in an `act` function and tried `fireEvent`, and neither worked for me. Also, I'm sure that the card functions normally when clicked in the actual UI.
Here's the exact same test but it tests the `mark All as read` functionality and is working normally
it('should mark all notifications as read when "Mark all as read" button is clicked', async () => {
const markAllNotificationsAsRead = vi.spyOn(notificationsHttp, 'markAllNotificationsAsRead');
renderWithProviders(<Notifications />);
const allReadBtn = await waitFor(() => screen.getByRole('button', { name: 'allUnread' }));
userEvent.click(allReadBtn);
await waitFor(() => {
expect(markAllNotificationsAsRead).toHaveBeenCalled();
});
});
The two tests are the same, except that the first fires a click on an item from a list and the second fires a click on a single button. |
I have been learning how to use the Pi4J library and tried to implement the minimal example application that can be found in the [Pi4J website](https://pi4j.com/getting-started/minimal-example-application/)
The code that I used is here:
```
package org.PruebaRaspberry11;
import com.pi4j.Pi4J;
import com.pi4j.io.gpio.digital.DigitalInput;
import com.pi4j.io.gpio.digital.DigitalOutput;
import com.pi4j.io.gpio.digital.DigitalState;
import com.pi4j.io.gpio.digital.PullResistance;
import com.pi4j.util.Console;
public class PruebaRaspberry11 {
private static final int PIN_BUTTON = 24;
private static final int PIN_LED = 22;
private static int pressCount = 0;
public static void main (String[] args) throws InterruptedException, IllegalArgumentException {
final var console = new Console();
var pi4j = Pi4J.newAutoContext();
var ledConfig = DigitalOutput.newConfigBuilder(pi4j)
.id("led")
.name("LED Flasher")
.address(PIN_LED)
.shutdown(DigitalState.LOW)
.initial(DigitalState.LOW);
var led = pi4j.create(ledConfig);
var buttonConfig = DigitalInput.newConfigBuilder(pi4j)
.id("button")
.name("Press button")
.address(PIN_BUTTON)
.pull(PullResistance.PULL_DOWN)
.debounce(3000L);
var button = pi4j.create(buttonConfig);
button.addListener(e -> {
if (e.state().equals(DigitalState.LOW)) {
pressCount++;
console.println("Button was pressed for the " + pressCount + "th time");
}
});
while (pressCount<5) {
if(led.state().equals(DigitalState.HIGH)){
led.low();
console.println("LED low");
} else {
led.high();
console.println("LED high");
}
Thread.sleep(500/(pressCount+1));
}
pi4j.shutdown();
}
}
```
I've tested all the hardware (the LED, the button) to make sure that they work fine. In the console, the "LED high"/"LED low" lines are printed, but the LED doesn't blink and nothing happens when I press the button.
What is stranger is that the first time I ran the code it worked, but I made some modifications and it never worked again, even after reverting the changes I made. |
# IDEA
## IMPORTANT NOTIFICATION
THE EXPLANATION IS NOT VERY CLEAR, AND I EXPECT YOU TO GUESS. GUESSING IS GOOD FOR YOUR BRAIN.
BELIEVE IN YOURSELF. Your GUESS, if you think it is LOGICAL, is RIGHT.
In fact these are just simple ideas; with a little guidance you will understand them too.
So you are expected to directly understand what `order`, `label`, and `start` are.
If you still feel unsure, scroll down and see the explanation.
## better function's procedure
### example (they will look like C#)
``` C#
class ExpClass1{
int a;
int b;
int c;
int d;
virtual (int,int) F(int a,int b) {
this.a=a;
this.b=b;
this.c=a+b;
this.d=a-b;
return (c,d);
}
}
```
Normally, the computer just does what the code says, linearly. But is that the best?
For example, `d = a - b` doesn't need the result of `c`; it only needs the values of `a` and `b`.
In short, any action can be done as soon as all of its parameters are available.
A better function (guess what it means):
``` C#
(int,int) F(int a,int b) {
order start,[SetA,SetB,GainC,GainD],end;
label SetA: {this.a=a;}
label SetB: {this.b=b;}
label GainC: {this.c=a+b;}
label GainD: {this.d=a-b;}
label End: {return (c,d);}
}
```
The grammar is casual, and don't worry about the code; the labels are expected to be added automatically by the language.
Notice that the labels are important.
More complex procedures will be discussed later <del>(by you)</del>
<del>logical spaghetti</del>
## What does order do
It tells the computer not to worry about the order of some procedures.
It also tells us that those procedures are not related.
"decoupling between procedures"
"deeper work of logic"
## extend it
The same as before: extra work just needs to be inserted with `label` and `order`.
``` C#
class ExpClass2:ExpClass1{
int e;
override (int,int) F(int a,int b) {
order [GainC,GainD],GainE,End;
label GainE:{this.e=c*d;}
}
}
```
ExpClass2.F is called an "extensional function for ExpClass1.F".
## multiply extend it
You (and the computer) just realized that the two functions can be combined without conflict.
``` C#
class ExpClass3:ExpClass1{
double e;
override (int,int) F(int a,int b) {
order [GainC,GainD],GainE,End;
label GainE:{this.e=c/d;}
}
}
// always as C++ virtual class
class ExpClass4:ExpClass2,ExpClass3
{
double f;
override (int,int) F(int a,int b) {
order [ExpClass2::GainE,ExpClass3::GainE],GainF,End;
label GainF:{this.f=ExpClass2::e+ExpClass3::e;}
}
}
```
Perfect (believe it).
## What does extension do
Feel it for yourself.
It breaks a big, hard block into toy bricks, which is what programs should be.
## Example
It is hard to find an example that really requires this to solve,<br>
because normally an ordinary combination (like Python's `super()`) is enough.
But when you really need this, you will feel its power.
Create an abstract class `SearchTree` and `Node` (good design) (for instance, imagine `Delete(key)`).<br>
Then create an abstract subclass `NodeLinked` that provides the ability of a `LinkedList` and O(n) enumeration.<br>
Then create an abstract subclass `TreeWithCache` that provides a cache for fast access to the last searched value.<br>
Then combine these 2 together as `TreeWithCacheAndLinked`, which provides fast access to values near the cached one.<br>
Then create a subclass `RBTree` of `SearchTree` that provides the red-black tree structure.<br>
Then combine all of them together and feel its power.
## more talk
### what does it do
decoupling between procedure
### what else can it do
native multithreading
less pain in some situations
``` C#
class FA{
void FA(); void FB();
virtual void FC(){
FA();
FCBetweenFAAndFB(); //Toooo strange but necessary
FB();
}
virtual void FCBetweenFAAndFB(){
}
}
```
One engine (I forget which) uses about 10 labels to order its update procedures.
## Problems (<del>disclaimer</del>)
### Conflict: it must be caused by bad design (1k words omitted)
Unrelated extensional functions should normally have no conflicts.
If they both modify the return value, or both cause conflicting side effects, then perhaps they shouldn't be used together.
Notice that `a += b` and `a += c` have no conflict; they can be considered as being used for their side effects.
### More characters and spaghetti: the cost of more abstraction
Seen another way, they just provide more possibilities, logically, and are expected not to ruin the original code.
(Imagine it is the same as writing annotations.)
### Implement
They can (in theory) be implemented with multithreading, but they can also be implemented by simply compiling them into a normal, linear form.
### <del>poor english, poor expression</del>
I believe you can understand what I am saying.
When I try to explain these more clearly, I feel my brain change state.<br>
The state in which I wrote this is the same as the state in which I came up with the idea,<br>
and the state in which I explain it is the state in which I learned it.
I think the "mind-blowing" state is not only good, but also important.<br>
So I eventually decided to keep this version (and make it more abstract), and give further explanation later.
## better function's process - Explained
Any action should be done only, and exactly, when all of its parameters are available.<br>
Then, we can consider the required order and try to make use of it.
``` C#
class ExpClass1{
int a;
int b;
int c;
int d;
virtual (int,int) F(int a,int b) {
order start,[SetA,SetB,GainC,GainD],end;
label SetA: {this.a=a;}
label SetB: {this.b=b;}
label GainC: {this.c=a+b;}
label GainD: {this.d=a-b;}
label End: {return (c,d);}
}
}
```
Each `label` marks a piece of code.<br>
After all of its preceding labels are done, this label's code should run.<br>
And after it runs, the label is considered done.
`order` declares the order in which code runs.<br>
The basic form is (given labels A, B, C, D):<br>
`order B,D;`<br>
`order C,D;`<br>
which means D should be done after B and C are done.<br>
Then, `order A,[B,C],D` means: after A is done, B and C should be done,<br>
and after B and C are done, D should be done.<br>
`start` is the label of the start (maybe it can just be omitted).
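In today's languages these semantics can already be simulated with a topological sort over the label graph. A Python sketch (the names mirror `ExpClass1.F` above; this is only an illustration, not the proposed syntax):

```python
# Simulate the proposed order/label semantics: "labels" are callables,
# "order" edges say which labels must run before which.
from graphlib import TopologicalSorter

def run_labels(labels, order):
    """labels: name -> callable; order: list of (before, after) pairs."""
    ts = TopologicalSorter()
    for name in labels:
        ts.add(name)
    for before, after in order:
        ts.add(after, before)   # `after` depends on `before`
    for name in ts.static_order():
        labels[name]()          # any valid topological order is allowed

state = {}
labels = {
    "SetA":  lambda: state.__setitem__("a", 3),
    "SetB":  lambda: state.__setitem__("b", 1),
    "GainC": lambda: state.__setitem__("c", state["a"] + state["b"]),
    "GainD": lambda: state.__setitem__("d", state["a"] - state["b"]),
}
order = [("SetA", "GainC"), ("SetB", "GainC"),
         ("SetA", "GainD"), ("SetB", "GainD")]
run_labels(labels, order)   # GainC/GainD run after SetA/SetB, in any relative order
```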
## attempt to explain how this solves multiple inheritance
Multiple inheritance is not only for code reuse; it also SHOULD be logical (e.g. the Liskov substitution principle), which MEANS no conflicts in the code.<br>
For an overridden function, this MEANS the code added by B and C has no conflicts and can be combined.<br>
Imagine two flow graphs for the old and the "new" structure: one linear, one flowing by requirements.<br>
With the previous function structure, the computer and you (SIMPLY) don't know how to combine two big, hard blocks.<br>
But now, every extensional function is no longer a single function but a plug-in, and the labels of the main function are sockets. Then it is free to add multiple plug-ins into the main function.
|
Idea of "extensional function for function" to solve some problem |
Since Hibernate 6.3 this luckily became way easier to do:
```java
import org.hibernate.annotations.Generated;
import org.hibernate.dialect.PostgreSQLEnumJdbcType;
//....
@Column(name = "status")
@Enumerated
@JdbcType(PostgreSQLEnumJdbcType.class)
public TransmissionStatusType getTransmissionStatus() {
return this.transmissionStatus ;
}
```
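Note that this mapping assumes the column's type is a native PostgreSQL enum whose labels match the Java enum constants' names; the type and label names below are only illustrative:

```sql
-- Hypothetical enum type matching TransmissionStatusType's constants
CREATE TYPE transmission_status_type AS ENUM ('PENDING', 'SENT', 'FAILED');
-- with the column then declared as:
--   status transmission_status_type
```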
You can find all the other types in the [dialect package](https://docs.jboss.org/hibernate/orm/6.3/javadocs/org/hibernate/dialect/package-summary.html). |
Is it possible to implement the functionality shown in the bash script below from a C++ application, possibly using an OpenSSL provider obtained using:
```cpp
OSSL_PROVIDER * tpm2_provider = OSSL_PROVIDER_load(NULL, "tpm2");
```
or by another method?
```sh
# Create CSR using TPM-resident private key
openssl req -provider tpm2 -provider default -propquery '?provider=tpm2' \
    -new \
    -key handle:$TPMHandle \
    -config openssl.conf \
    -reqexts v3_req \
    -out device.csr
```
Can the private key remain in the TPM or is it necessary to extract the private key into the application, in order to sign the CSR in the C++ code?
I am able to create a CSR using the bash script with the openssl tpm2 provider to sign the CSR. I don't see a way to achieve the same functionality using the C or C++ APIs.
This code works but it retrieves the private key from the TPM. Can I achieve the same result without extracting the private key?
```cpp
EVP_PKEY * GetPrivateKeyFromTPM(void) {
OSSL_STORE_CTX *storeCtx = NULL;
storeCtx = OSSL_STORE_open_ex("handle:0x81005020", tpm2_libctx,"?provider=tpm2", NULL, NULL, NULL,NULL, NULL);
while (!OSSL_STORE_eof(storeCtx)) {
OSSL_STORE_INFO *info = OSSL_STORE_load(storeCtx);
switch (OSSL_STORE_INFO_get_type(info)) {
case OSSL_STORE_INFO_PKEY:
EVP_PKEY *TPMpkey = OSSL_STORE_INFO_get1_PKEY(info);
if (TPMpkey) {
OSSL_STORE_close(storeCtx);
return TPMpkey;
}
break;
}
}
OSSL_STORE_close(storeCtx);
return NULL;
}
```
|
|c++|openssl|tpm-2.0| |
Based on Leeroy's answer, here is C# code that can make the request:
```cs
var keyAttributes = new KeysAndAttributes
{
Keys =
{
new Dictionary<string, AttributeValue>
{
{ "Partition key", new AttributeValue { S = "aa" } },
{ "Sort key", new AttributeValue { S = "str1" } },
},
new Dictionary<string, AttributeValue>
{
{ "Partition key", new AttributeValue { S = "aa" } },
{ "Sort key", new AttributeValue { S = "str3" } },
},
},
};
return new BatchGetItemRequest
{
RequestItems =
{
{ "My table", keyAttributes },
},
};
``` |
|c#|apache-kafka|masstransit| |
In the documentation for Remix it seems like there are layout files we can use for specific routes.
But I am trying to make a global layout, where I want to put my providers, so I don't have to pass the Supabase instance that I create in `root.tsx` to every single provider manually, and can instead use the Outlet context inside the providers. Thus I want a global layout file.
```typescript
import { cssBundleHref } from "@remix-run/css-bundle";
import type { LinksFunction, LoaderFunctionArgs } from "@remix-run/node";
import {
Links,
LiveReload,
Meta,
Outlet,
Scripts,
ScrollRestoration,
useLoaderData,
} from "@remix-run/react";
import { createBrowserClient } from "@supabase/ssr";
import { useState } from "react";
import { Database } from "@supabase/database.types";
export async function loader({}: LoaderFunctionArgs) {
return {
env: {
SUPABASE_URL: process.env.SUPABASE_URL!,
SUPABASE_ANON_KEY: process.env.SUPABASE_ANON_KEY!,
},
};
}
export const links: LinksFunction = () => [
...(cssBundleHref ? [{ rel: "stylesheet", href: cssBundleHref }] : []),
];
export default function App() {
const { env } = useLoaderData<typeof loader>();
const [supabase] = useState(() =>
createBrowserClient<Database>(env.SUPABASE_URL, env.SUPABASE_ANON_KEY)
);
return (
<html lang="en">
<head>
<meta charSet="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<Meta />
<Links />
</head>
<body>
<Outlet context={{ supabase }} />
<ScrollRestoration />
<Scripts />
<LiveReload />
</body>
</html>
);
}
```
I am struggling to find anything on Google or in the Remix documentation that is remotely close to this. ChatGPT said something about `app/routes/__layout.tsx`, but that does not seem to work.
Is there actually no root layout file outside of the root.tsx file in Remix v2?
It would be lovely to just be able to use `const { supabase } = useOutletContext<{ supabase: SupabaseClient<Database> }>();` inside my providers if I can create a "global layout" that is not `root.tsx` |
Since you don't provide arguments to the constructor, the parameters become `undefined` and are assigned to the props. You could use a default parameter value of `null` to declare that these parameters are optional and handle them accordingly in the constructor:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
class Rectangle {
height = 10;
width = 10;
constructor(width = null, height = null) {
width === null || (this.width = width);
height === null || (this.height = height);
}
}
const rec = new Rectangle();
console.log(rec);
<!-- end snippet -->
Of course you can write it this way:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
class Rectangle {
constructor(width = 10, height = 10) {
this.width = width;
this.height = height;
}
}
const rec = new Rectangle();
console.log(rec);
<!-- end snippet -->
You could ask what the difference is. Well, it is documented: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes/Public_class_fields
A general example:
> By declaring a public field, you can ensure the field is always present, and the class definition is more self-documenting.
A specific example:
> Because class fields are added using the [[DefineOwnProperty]] semantic (which is essentially Object.defineProperty()), field declarations in derived classes do not invoke setters in the base class. This behavior differs from using this.field = … in the constructor.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
class Shape{
#width = 10;
get width(){
return this.#width;
}
set width(val){
console.log('setting width');
this.#width = val;
}
}
class Rectangle extends Shape{
width = 10;
}
class Rectangle2 extends Shape{
constructor(width = 10){
super();
this.width = width;
}
}
const rc = new Rectangle();
console.log(rc);
const rc2 = new Rectangle2();
console.log(rc2);
<!-- end snippet -->
Another subtle difference is performance. The default props act like an object literal, and should perform faster than assigning props in a constructor:
```
` Chrome/123
--------------------------------------------------------------------------------------
> n=10 | n=100 | n=1000 | n=10000
default props ■ 1.00x x1b 341 | ■ 1.00x x10m 390 | ■ 1.00x x1m 52 | ■ 1.00x x10k 407
constructor 1.02x x1b 348 | 1.13x x10m 441 | 1.04x x1m 54 | 1.12x x10k 456
--------------------------------------------------------------------------------------
https://github.com/silentmantra/benchmark `
```
In Firefox the difference is the opposite, and extreme, for some reason:
```
` Firefox/124
-------------------------------------------------------------------------------------------
> n=10 | n=100 | n=1000 | n=10000
constructor ■ 1.00x x10m 323 | ■ 1.00x x10m 739 | ■ 1.00x x100k 133 | ■ 1.00x x10k 262
default props 1.06x x10m 342 | 1.57x x1m 116 | 15.79x x10k 210 | 67.48x x1k 1768
-------------------------------------------------------------------------------------------
https://github.com/silentmantra/benchmark `
```
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const $chunk = Array.from({length: 10}, () => Math.random().toString().slice(2, 8));
const $input = [...$chunk];
const genWithout = new Function('', `
return class{
${ $input.map(prop => `prop${prop} = ${Math.random()}`).join(';') }
}
`);
const genWith = new Function('', `
return class{
constructor(){
${$input.map(prop => `this.prop${prop} = ${Math.random()}`).join(';')}
}
}
`);
// @benchmark default props
const c = genWithout();
// @run
new c;
// @benchmark constructor
const d = genWith();
//@run
new d;
/*@skip*/ fetch('https://cdn.jsdelivr.net/gh/silentmantra/benchmark/loader.js').then(r => r.text().then(eval));
<!-- end snippet -->
|
Hope you're doing well!
So, I have a project idea in mind and I don't know how to achieve it. Let me explain.
My idea is to create a web app that suggests travel destination ideas depending on dates and a budget.
But as far as I know, all the flight and hotel APIs available need a destination as a query parameter, so I'm stuck here... A workaround could be to create my own dataset of flight and hotel prices, but I have no idea how to do this effectively, and it's going to take time and money to collect everything, so I'm not sure I can afford it.
So what do you think? Do you have another idea to help me achieve this?
Thank you !! |
Need advice for a project |
|api|web|project| |
null |
black ".\research\evaluation_detection.ipynb"
outputs
error: cannot format research\evaluation_detection.ipynb: unindent does not match any outer indentation level (<tokenize>, line 11)
Oh no!
1 file failed to reformat.
The problem is that my notebook has lots of cells with a line 11, so the error doesn't tell me which cell is at fault; there is only one indentation level at the start of the file.
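The closest I've got so far is parsing each cell's source separately to narrow it down (a sketch; note that IPython magics like `%matplotlib` also fail `ast.parse`, so hits are only candidates):

```python
import ast
import json

def find_bad_cells(nb_path):
    """Return (cell_index, in_cell_line, message) for each unparsable code cell."""
    with open(nb_path, encoding="utf-8") as f:
        nb = json.load(f)
    bad = []
    for idx, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        source = "".join(cell.get("source", []))
        try:
            ast.parse(source)  # black fails on the same cells Python cannot parse
        except SyntaxError as err:  # IndentationError is a subclass of SyntaxError
            bad.append((idx, err.lineno, err.msg))
    return bad
```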
What tool or script can I use to resolve this code-style error, which is breaking my ability to commit? |
Automatically find the precise line in ipynb to fix indentation error for black |
|python-3.x|jupyter-lab|pycodestyle| |
I'm trying to implement a thread so that I can send GPS coordinates in the background. I think I have a good start but I'm having a little trouble. Where it says `locManager = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);`.
I'm getting an error on `context` (before `getSystemService`) which says "context cannot be resolved." I call this class from my main activity with the following statement: `new FindLocation(getBaseContext()).start(usr_id1);`. Maybe that has something to do with the problem.
public class FindLocation extends Thread {
private LocationManager locManager;
private LocationListener locListener;
Context ctx;
public String userId;
public FindLocation(Context ctx) {
this.ctx = ctx;
}
public void start(String userId) {
this.userId = userId;
super.start();
}
@Override
public void run() {
final String usr = userId;
//get a reference to the LocationManager
locManager = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
//checked to receive updates from the position
locListener = new LocationListener() {
public void onLocationChanged(Location loc) {
String lat = String.valueOf(loc.getLatitude());
String lon = String.valueOf(loc.getLongitude());
JSONArray jArray;
String result = null;
InputStream is = null;
StringBuilder sb = null;
ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
nameValuePairs.add(new BasicNameValuePair("id", usr));
//http post
try{
HttpClient httpclient = new DefaultHttpClient();
HttpPost httppost = new HttpPost("http://www.example.com/test/example.php");
httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
HttpResponse response = httpclient.execute(httppost);
HttpEntity entity = response.getEntity();
is = entity.getContent();
}catch(Exception e){
Log.e("log_tag", "Error in http connection"+e.toString());
}
//convert response to string
try{
BufferedReader reader = new BufferedReader(new InputStreamReader(is,"iso-8859-1"),8);
sb = new StringBuilder();
sb.append(reader.readLine() + "\n");
String line="0";
while ((line = reader.readLine()) != null) {
sb.append(line + "\n");
}
is.close();
result=sb.toString();
}
catch(Exception e){
Log.e("log_tag", "Error converting result "+e.toString());
}
try{
jArray = new JSONArray(result);
JSONObject json_data=null;
for(int i=0;i<jArray.length();i++){
json_data = jArray.getJSONObject(i);
String ct_name = json_data.getString("phoneID");
//stop = true;
Log.i("User ID", ct_name);
if(ct_name == usr) {
locManager.removeUpdates(locListener);
}
else{
Log.i("User ID", "NONE");
}
}
}
catch(Exception e){
//Log.e("log_tag", "Error converting result "+e.toString());
HttpClient httpclient = new DefaultHttpClient();
HttpPost httppost = new HttpPost("http://example.com/test/example.php");
try {
List<NameValuePair> nameValuePairs1 = new ArrayList<NameValuePair>(2);
nameValuePairs1.add(new BasicNameValuePair("lat", lat));
nameValuePairs1.add(new BasicNameValuePair("lon", lon));
nameValuePairs1.add(new BasicNameValuePair("id", usr));
httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs1));
httpclient.execute(httppost);
Log.i("SendLocation", "Yes");
}
catch (ClientProtocolException g) {
// TODO Auto-generated catch block
} catch (IOException f) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
public void onProviderDisabled(String provider){
}
public void onProviderEnabled(String provider){
}
public void onStatusChanged(String provider, int status, Bundle extras){
}
};
locManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 6000, 0, locListener);
}
}
|
We want to set up Apache Superset as an enterprise BI solution. The solution we used earlier is MicroStrategy, which provides a Report Builder feature that helps users build a report based on prompts: they choose the columns and metrics that they want on the report. How can Apache Superset be used to provide ad-hoc reporting to users?
I have tried SQL templating, and it works to a decent extent. However, I want to explore what other users have done with respect to ad-hoc reporting and Apache Superset. |
Can Apache Superset be used for ad hoc reporting? (Something similar to the Report Builder feature of MicroStrategy) |
|apache-superset| |
null |
Even though I have no idea *why* this works, after trying everything else on the internet, simply *deleting* the libqxcb.so file from the pyqt plugins solved the issue for me. If anyone knows why this fixed the issue for me (while reinstalling things did not), please let me know.
The way to do this is to go to the folder mentioned in the error message, go to the `platforms` subfolder, and then delete the `libqxcb.so` file in there. For me, the message was `Could not load the Qt platform plugin "xcb" in "/home/MyUsername/.local/lib/python3.10/site-packages/cv2/qt/plugins" even though it was found.`, so I deleted `/home/MyUsername/.local/lib/python3.10/site-packages/cv2/qt/plugins/platforms/libqxcb.so`. If you, like me, can't find any other solutions to this issue, you could try this. Or, to be more safe, move it to some backup location so that you could place it back if something else breaks more catastrophically. |
null |
I have a device running a modified Modbus protocol. The device sends messages to the RPi3 serial port. Each message is 14 bytes long: a sync byte, followed by 11 data bytes, then two Modbus CRC-16 bytes.
In order to check the validity of the message (by way of a CRC check), I can only send the 11 data bytes to the CRC check function. The problem is, I just can't figure out how to extract those 11 bytes and put them into a new bytes object acceptable to the CRC function.
The program is shown below, with its output underneath (the error shown is understood, because `newData` has not been created; that's what I'd like some help with).
```python
import serial
from time import sleep
from modbus_crc import check_crc
ser = serial.Serial("/dev/ttyS0", 9600)
print("waiting for message from the serial port ......\n")
rxData = ser.read()
sleep(0.03)
data_left = ser.inWaiting()
rxData += ser.read(data_left)
print("Message has been received\n")
print("The 'rxData' type from ser.read() is ",type(rxData), " and length is ", len(rxData))
print("'rxData - ", [hex(i) for i in rxData], "\n")
print("Now show only bytes 1 to 11 of rxData\n")
x = range(1,12,1)
for i in x:
print((hex(rxData[i])), end=" ")
print("\n")
#####################
#### Missing code to make newData with only bytes 1 to 11 of rxData
#####################
print("\nThe 'newData' type is ",type(newData), " and length is ", len(newData))
print("'newData' - ", [hex(i) for i in newData], "\n")
print("\n")
print("check if newData CRC is OK\n")
if not check_crc(newData):
print("CRC is NOT OK")
else:
print("CRC is OK!")
```
Output after running the program:
```
waiting for message from the serial port ......
Message has been received
The 'rxData' type from ser.read() is <class 'bytes'> and length is 14
'rxData - ['0xff', '0xd', '0x77', '0x2', '0x1', '0x1', '0x12', '0x33', '0x30', '0x2e', '0x38', '0x39', '0xfd', '0x78']
Now show only bytes 1 to 11 of rxData
0xd 0x77 0x2 0x1 0x1 0x12 0x33 0x30 0x2e 0x38 0x39
Traceback (most recent call last):
File "/home/stevev/Projects/TKinter/20240309-operation on Bytes class.py", line 29, in <module>
print("\nThe 'newData' type is ",type(newData), " and length is ", len(newData))
NameError: name 'newData' is not defined
```
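For reference, slicing a `bytes` object yields another `bytes` object, which is the kind of thing the missing block needs. A quick demonstration with the sample frame from the output above:

```python
# The 14-byte frame from the output above: sync + 11 data bytes + 2 CRC bytes
rx_data = bytes([0xFF, 0x0D, 0x77, 0x02, 0x01, 0x01, 0x12,
                 0x33, 0x30, 0x2E, 0x38, 0x39, 0xFD, 0x78])

new_data = rx_data[1:12]   # bytes 1..11: drops the sync byte and the CRC bytes
crc_bytes = rx_data[12:]   # the two trailing CRC bytes, if check_crc wants them too

print(type(new_data), len(new_data))
print([hex(b) for b in new_data])
```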
|