It's a little fussy, but in v0.13+ you can layer an unfilled boxplot over a filled boxplot:
```python
tips = sns.load_dataset("tips")
spec = dict(data=tips, x="day", y="total_bill", hue="sex", gap=.1)
sns.boxplot(**spec, linewidth=0, showfliers=False, boxprops=dict(alpha=.5))
sns.boxplot(**spec, fill=False, legend=False)
```
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/IXV2U.png |
Windows does not have a built-in X server, so if you want to go with X11 forwarding you must install one on your Windows machine (`VcXsrv` and `Xming` are both good choices).
PuTTY can be avoided by using the Windows 10 built-in `SSH` client. Just check that it is enabled in Windows Features and follow [@Aamir Sultan's reply][1] (second paragraph).
If you don't want to install an X server on Windows, I suggest going with the RDP protocol:
1) Install xrdp on the RedHat server (I assume you already have a desktop environment installed there; otherwise you need to install one too)
2) Use Windows' built-in Remote Desktop Connection client to connect to your server's desktop and run GUI applications inside it.
Hope this helps
[1]: https://stackoverflow.com/a/73561531/13338157 |
In your `appsettings.json`:
```
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft": "Information",
"Microsoft.Hosting.Lifetime": "Information"
}
},
```
If you have multiple `appsettings.json` based on your environment, be sure to select your environment properly and edit that specific `appsettings.json`.
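A typical environment-specific override looks like this sketch (the log levels shown are illustrative only):

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}
```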
For example, if you are in `development` environment, make sure to set these settings in `appsettings.Development.json`. |
Python: How to create a regular expression to partition a string that terminates in either ": 45" or ",", without the ": " |
|python|search|python-re|group| |
The absolute difference will always lie in the range [0,12). Subtracting 6 from this and taking the absolute value maps it to the range [0,6], but "reversed", so subtract that from 6 to get:
```
#include <math.h>

float dist(float a, float b){
    return 6 - fabsf(fabsf(a - b) - 6); /* fabsf from <math.h>: integer abs() would truncate the floats */
}
``` |
I can't recall exactly how `response.json()` works, but I have a feeling it is sensitive to the response's MIME type: if the server doesn't declare the body as JSON (`application/json`), it might refuse to parse it. (Perhaps!)
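If that is the cause, the workaround looks roughly like this sketch (placeholder data and field names; in real code `rawBody` would come from `response.text()`):

```javascript
// Placeholder body standing in for what response.text() would return.
const rawBody = '{"name": "example", "count": 3}';
console.log("raw body:", rawBody);  // eyeball it to confirm it really is JSON
const data = JSON.parse(rawBody);   // parses any valid JSON string, whatever the headers said
console.log(data.name, data.count); // example 3
```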
First, try dumping/inspecting the data you get back from the server to double-check it is indeed plain JSON. Then use `JSON.parse()` instead of `response.json()` to turn the string into an object. If the MIME type is indeed the problem, then `JSON.parse()` will work as long as you feed it a string, regardless of how the network headers described that string. |
Here is a class to get the "mounted" SD cards on Android 10. Card one is the SIM card and card two is the micro SD card. When you use `Environment.getExternalStorageDirectory()`, it gives you the card in slot one, which is read-only, so you can't write to that.
package com.example.audioplayer;

import android.content.Context;
import android.os.Environment;
import android.os.Parcel;
import android.os.storage.StorageManager;
import android.os.storage.StorageVolume;

import java.util.ArrayList;
import java.util.List;

public class MountedVolumes {
    public ArrayList<String> mMountedVolumes = new ArrayList<>();

    public MountedVolumes(Context context) {
        StorageManager storageManager = (StorageManager) context.getSystemService(Context.STORAGE_SERVICE);
        if (storageManager == null) {
            return;
        }
        List<StorageVolume> storageVolumes = storageManager.getStorageVolumes();
        for (StorageVolume storageVolume : storageVolumes) {
            if (storageVolume.getState().equals(Environment.MEDIA_MOUNTED)) {
                // Pull the path out of the volume's Parcel representation,
                // since the path accessor is hidden on this API level.
                Parcel p = Parcel.obtain();
                storageVolume.writeToParcel(p, 0);
                p.setDataPosition(0);
                p.readString();
                String path = p.readString();
                p.recycle();
                mMountedVolumes.add(path);
            }
        }
    }

    public String GetVolume(int i) {
        if (mMountedVolumes.isEmpty() || i >= mMountedVolumes.size()) {
            return null;
        }
        return mMountedVolumes.get(i);
    }
} |
The strings are of the form:
```
Item 5. Some text: 48
Item 5E. Some text,
```
The result of the search should produce
4 groups as follows.
```
Group(1) = "#"
Group(2) = "5" or "5E"
Group(3) = "Some text"
Group(4) = "48" or ","
```
I have tried:
```
(a) r"(.*)Group (.*)\.(.+)(?::(.+))|(,)"
(b) r"(.*)Group (.*)\.(.+)(?:(?::(.+))|(,))"
(c) r"(.*)Group (.*)\.(.+)(:(.+))|(,)"
```
I have tried a variety of ways to solve this, but none works as required.
What should the regular expression be? |
I have a problem using Staudenmeir's BelongsToThrough package. This is what I have in my Laravel 11 project:
Database:
```
* ----------------------- *
| users |
* ----------------------- *
| + id |
| + address_id |
* ----------------------- *
* ----------------------- *
| addresses |
* ----------------------- *
| + id |
| + country_code_id |
* ----------------------- *
* ----------------------- *
| country_codes |
* ----------------------- *
| + id |
* ----------------------- *
```
User.php
```
class User extends Model
{
public function address(): BelongsTo
{
return $this->belongsTo(Address::class);
}
public function country_code(): BelongsToThrough
{
return $this->belongsToThrough(CountryCode::class, Address::class);
}
}
```
Address.php
```
class Address extends Model
{
public function user(): HasOne
{
return $this->hasOne(User::class);
}
public function country_code(): BelongsTo
{
return $this->belongsTo(CountryCode::class);
}
}
```
CountryCode.php
```
class CountryCode extends Model
{
public function address(): HasMany
{
return $this->hasMany(Address::class);
}
public function users(): HasManyThrough
{
return $this->hasManyThrough(User::class, Address::class);
}
}
```
UserController.php
```
class UserController extends Controller
{
public function show(Request $request, User $user)
{
# access countryCode through address from user
# not working:
return (new UserResource($user))->load('country_code');
}
}
```
CountryCodeController.php
```
class CountryCodeController extends Controller
{
public function show(Request $request, CountryCode $countryCode)
{
# access user through address from countryCode
# working:
return (new CountryCodeResource($countryCode))->load('user');
}
}
```
I can query without problems:
- ```UserResource->load('address')```
- ```AddressResource->load('country_code')```
- ```CountryCodeResource->load('address')```
- ```AddressResource->load('user')```
- ```AddressResource->load('users')``` with hasManyThrough()
**Now the problem:**
If I try the inverse of the hasManyThrough, **belongsToThrough** provided by [Staudenmeir][1] (using ```UserResource->load('country_code')```), I always get the following error message:
```
"message": "Call to undefined relationship [country_code] on model [App\\Models\\User]."
```
Is there any problem in my code, or have I misunderstood something? In my opinion all necessary relations are defined, and even the hasManyThrough works like a charm.
If you need any further information, just ask.
Thanks for your help in advance!
**Edit**
I also tried it with **camelCase** naming, but it did not fix the problem.
[1]: https://github.com/staudenmeir/belongs-to-through |
Single-core embedded system
Priority-based scheduling
Thread 2 (T2) - high priority
Thread 1 (T1) - low priority
Single producer (T2) and single consumer (T1)
**Requirement:**
1. Data flows from T2 to T1 through a shared circular buffer. Data flow is
possible in one direction only, i.e. T2 to T1.
2. Between the threads, I plan to use the circular buffer as shared memory to
post received packets from T2 to T1.
3. T2 will enqueue and T1 will dequeue packets from the circular buffer.
4. I plan to declare the enqueue index and dequeue index pointers for the circular buffer as volatile (understood that volatile alone won't solve the critical-section problem).
5. T2 will update the enqueue index pointer in the circular buffer and T1
will update the dequeue index pointer in the circular buffer.
6. The T1 thread will check whether the enqueue and dequeue indexes are equal before
starting to dequeue packets from the circular buffer.
**Query:**
- While the T1 thread (low priority) is reading the enqueue index, suppose at
that instant the scheduler switches to the high-priority thread T2, which updates the enqueue index variable.
- In this case, when execution switches back to T1, will the read of the
'enqueue index' variable give the old value or an INVALID value?
- My understanding is that for my requirement only the T2->T1 direction is
possible, so no critical-section issue arises under preemptive
scheduling.
Please give your views
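For what it's worth, the index discipline described above can be sketched language-neutrally (Python here just for brevity; on a real target the two indexes would be volatile integers whose updates are single atomic stores, and all names below are illustrative):

```python
class SpscRing:
    """Single-producer / single-consumer ring buffer sketch.

    One slot is kept free so that enq == deq unambiguously means "empty".
    Only the producer (T2) ever writes `enq`; only the consumer (T1)
    ever writes `deq`, which is what makes the scheme lock-free for SPSC.
    """
    def __init__(self, size):
        self.buf = [None] * size
        self.enq = 0  # written only by producer (T2)
        self.deq = 0  # written only by consumer (T1)

    def put(self, item):                      # producer side (T2)
        nxt = (self.enq + 1) % len(self.buf)
        if nxt == self.deq:
            return False                      # full: would catch up to deq
        self.buf[self.enq] = item             # write the data first...
        self.enq = nxt                        # ...then publish with one store
        return True

    def get(self):                            # consumer side (T1)
        if self.deq == self.enq:
            return None                       # empty: indexes are equal
        item = self.buf[self.deq]
        self.deq = (self.deq + 1) % len(self.buf)
        return item
```

If T1 reads `enq` just before T2 updates it, T1 simply sees the old value: the buffer momentarily looks one element shorter than it is, which is safe. A torn or INVALID value can only occur if an index store is not a single-word atomic write on your CPU.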
|
Styles didn't apply to Django Project when uploaded to Render |
|javascript|html|css|python-3.x|django| |
I managed to solve my question. I was trying to adapt an old macro and misunderstanding the loop function.
I have removed the following from the above code:
' Set targetRange as the last cell in column C with a value.
Set targetRange = pasteSheet.Cells(pasteSheet.Rows.Count, 3).End(xlUp).Offset(1, 0)
' Set targetRange as the first cell in column C without conditional formatting
' under the last cell in column C with no value.
Do Until targetRange.FormatConditions.Count = 0
    Set targetRange = targetRange.Offset(1, 0)
Loop
' Copy range C22:M22.
copySheet.Range("C22:M22").Copy
' Paste the copied range into targetRange.
targetRange.PasteSpecial xlPasteAll
' Lock pasteSheet to prevent further editing.
pasteSheet.Protect Password:="password"
AND I have amended the loop to be:
' Get the loop count from Sheet3!A1
loopCount = Sheets("Sheet3").Range("A1").Value
' Loop from 1 to the specified loop count
For i = 1 To loopCount
    ' Set targetRange as the last cell in column C with a value.
    Set targetRange = pasteSheet.Cells(pasteSheet.Rows.Count, 3).End(xlUp).Offset(1, 0)
    ' Copy range C22:M22.
    copySheet.Range("C22:M22").Copy
    ' Paste the copied range into targetRange.
    targetRange.PasteSpecial xlPasteAll
Next i
The above now removes redundant code and allows my desired function to run "X" amount of times, with "X" being the value of Sheet3!A1. Note: as per the comments, I forgot to define printSheet, which I have now amended in my original question.
|
|javascript|csv|observablehq| |
I'm using the Firestore emulator, and `exists()` isn't preventing calls when the document exists.
```
match /usernames/{username} {
  allow update: if false;
  allow create: if isUsernameAvailable(request.resource.data.username);

  function isUsernameAvailable(rawName) {
    let name = rawName.lower();
    return isValidMessage() &&
      isValidUsername(name) &&
      !exists(/databases/$(database)/documents/users/$(request.auth.uid));
  }
}
```
Here is the accepted Request:

and Pre-Existing Document:

The document clearly exists. I've compared my code to similar code, and I don't see syntactical errors, so I'm at a loss. |
|google-cloud-firestore|firebase-security| |
**What is a fast way of checking which event has been triggered in SciPy's `solve_ivp()`?**
For context, I am simulating a double pendulum. I want to check whether the first rod or the second rod has flipped. I have created one event function for each, but once an event is triggered, is there any way I can retrieve which event it was? Thanks :) |
Checking Event in solve_ivp |
The cron expression to use is `0 */3 * * *`, in conjunction with an appropriate `start_date` and `end_date`.
```python
DAG(
    dag_id="my_dag_name",
    start_date=datetime.datetime(2024, 2, 15, 12, 0),
    end_date=datetime.datetime(2024, 2, 17, 12, 0),
    schedule="0 */3 * * *",
    catchup=True,
)
```
With this configuration, the DAG will generate [DAG runs][1] for every time period included between the start date and the end date, using a 3-hour interval.
Turning on [catch-up][2] mode runs the DAG for the whole period once it is activated.
You can use tools such as [crontab.guru][3] to design the cron expression. You can click the `next` button to see examples of the following executions.
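As a rough sanity check (plain Python, not Airflow itself; the enumeration below is my own sketch), you can list the schedule ticks that `0 */3 * * *` produces between those two dates:

```python
import datetime

start = datetime.datetime(2024, 2, 15, 12, 0)
end = datetime.datetime(2024, 2, 17, 12, 0)
step = datetime.timedelta(hours=3)

# `0 */3 * * *` fires at minute 0 of every hour divisible by 3;
# 12:00 already qualifies, so the ticks are simply start, start + 3h, ...
ticks = []
t = start
while t <= end:
    ticks.append(t)
    t += step

print(len(ticks))  # 17 ticks across the 48-hour window
```

Each DAG run then covers the data interval between two consecutive ticks, so with catch-up enabled you should see roughly one run per completed interval.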
[1]: https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dag-run.html
[2]: https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dag-run.html#catchup
[3]: https://crontab.guru/#0_*/3_*_*_* |
I created a Cloud Function to enable real-time email and chat notifications for Webex, following the Google support document, and the function is failing.
```
Error Message in Test Function
Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging Details:
500 Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
Error in traceback
severity: "ERROR"
textPayload: "Traceback (most recent call last):
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1518, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1516, in full_dispatch_request
rv = self.dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1502, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/functions_framework/__init__.py", line 171, in view_func
function(data, context)
File "/workspace/main.py", line 60, in send_webex_teams_notification_2023
project=asset["resourceProperties"]["project"],
KeyError: 'project'"
```
I've tried changing "project" to project_id; I tried upgrading the Security Command Center version in the requirements.txt file; and I also tried changing the Security Command Center link. I am expecting high or critical findings for all projects at the organization level to be monitored, with specific alerts sent to a Webex bot.
Code
main.py
```
#!/usr/bin/env python3
import base64
import json

import requests
from google.cloud import securitycenter_v1

WEBEX_TOKEN = "N2YzYjk0NmItNDAxMS00MzdlLWE4MjMtYzFlNGNkNjYxODBmNDZhNWNiOTktOTgx_PF84_e7a300f8-3aac-4db0-9c42-848488a96bf4"
ROOM_ID = "Y2lzY29zcGFyazovL3VzL1JPT00vNmE1MWEwOTAtYWM3Yy0xMWVkLTkxMDQtYzE1YzVmZDEyMTFi"

TEMPLATE = """
**Severity:** {severity}\n
**Asset:** {asset}\n
**SCC Category:** {category}\n
**Project:** {project}\n
**First observed:** {create_time}\n
**Last observed:** {event_time}\n
**Link to finding:** {finding_link}
"""

PREFIX = "https://console.cloud.google.com/security/command-center/findings"


def get_finding_detail_page_link(finding_name):
    """Constructs a direct link to the finding detail page."""
    org_id = finding_name.split("/")[1]
    return f"{PREFIX}?organizationId={org_id}&resourceId={finding_name}"


def get_asset(parent, resource_name):
    """Retrieves the asset corresponding to `resource_name` from SCC."""
    client = securitycenter_v1.SecurityCenterClient()
    resp = client.list_assets(
        securitycenter_v1.ListAssetsRequest(
            parent=parent,
            filter=f'securityCenterProperties.resourceName="{resource_name}"',
        )
    )
    page = next(resp.pages)
    if page.total_size == 0:
        return None
    asset = page.list_assets_results[0].asset
    return json.loads(securitycenter_v1.Asset.to_json(asset))


def send_webex_teams_notification(event, context):
    """Send the notification to WebEx Teams."""
    pubsub_message = base64.b64decode(event["data"]).decode("utf-8")
    message_json = json.loads(pubsub_message)
    finding = message_json["finding"]
    parent = "/".join(finding["parent"].split("/")[0:2])
    asset = get_asset(parent, finding["resourceName"])
    requests.post(
        "https://webexapis.com/v1/messages",
        json={
            "roomId": ROOM_ID,
            "markdown": TEMPLATE.format(
                severity=finding["severity"],
                asset=asset["securityCenterProperties"]["resourceDisplayName"],
                category=finding["category"],
                project=asset["resourceProperties"]["project"],
                create_time=finding["createTime"],
                event_time=finding["eventTime"],
                finding_link=get_finding_detail_page_link(finding["name"]),
            ),
        },
        headers={"Authorization": f"Bearer {WEBEX_TOKEN}"},
    )
```
requirements.txt
```
requests==2.25.1
google-cloud-securitycenter==1.1.0
```
|
I have a gallery with images, and when I scroll horizontally, multiple images are scrolled instead of one.
Below is part of my code:
public View getView(int position, View convertView, ViewGroup parent) {
    View view = convertView;
    if (convertView == null) {
        view = inflater.inflate(resourceid, null);
    }
    synchronized (view) {
        TextView txtTitle = (TextView) view.findViewById(R.id.txtCaption);
        ImageList item = getItem(position);
        ImageView ivImage = (ImageView) view.findViewById(R.id.ivImage);
        ivImage.setScaleType(ScaleType.CENTER_INSIDE);
        try {
            ivImage.setImageBitmap(getBitmapFromAsset(item.imageUrl));
        }
|
Android gallery horizontal scrolling |
I'm using the following code to enable dark mode support. However, the main window already exists at that point (I'm doing that from within a plugin dll), so the menu popups don't switch to the dark theme automatically - some trigger is still missing.
If I switch from fullscreen to a regular window, the menu dropdown suddenly starts using the dark mode. Same when changing the theme in Windows 11 settings (doesn't matter if I switch to a dark or light theme there). The only trick I found so far is `SendMessage(hWnd, WM_DWMNCRENDERINGCHANGED, FALSE, 0);`, but that messes up the fullscreen geometry of the main window. Can someone explain what the recommended way to do that is? I guess I could custom draw the popups, but since it starts working after some events, maybe we can identify what these events are and if there is an official solution to the problem.
```
HMODULE hUxtheme = LoadLibraryExW(L"uxtheme.dll", nullptr, LOAD_LIBRARY_SEARCH_SYSTEM32);
ASSERT(hUxtheme);
SetPreferredAppMode = (fnSetPreferredAppMode)GetProcAddress(hUxtheme, MAKEINTRESOURCEA(135));
ASSERT(SetPreferredAppMode);
AllowDarkModeForWindow = (fnAllowDarkModeForWindow)GetProcAddress(hUxtheme, MAKEINTRESOURCEA(133));
ASSERT(AllowDarkModeForWindow);
SetPreferredAppMode(PreferredAppMode::ForceDark);
void enableImmersiveDarkMode(HWND hWnd) {
    BOOL USE_DARK_MODE = true;
    BOOL SET_IMMERSIVE_DARK_MODE_SUCCESS = SUCCEEDED(DwmSetWindowAttribute(
        hWnd, DWMWINDOWATTRIBUTE::DWMWA_USE_IMMERSIVE_DARK_MODE,
        &USE_DARK_MODE, sizeof(USE_DARK_MODE)));
}
``` |
Suppose I have COLMAP camera poses. Is it possible, and if so how, to obtain a new view of input image `I` (a planar object) from a different viewpoint/camera pose using those poses?
COLMAP camera poses contain the following data:
```
extr = cam_extrinsics[key]
intr = cam_intrinsics[extr.camera_id]
height = intr.height
width = intr.width
uid = intr.id
R = np.array(qvec2rotmat(extr.qvec))
T = np.array(extr.tvec)
if intr.model == "SIMPLE_PINHOLE":
    focal_length_x = intr.params[0]
    FovY = focal2fov(focal_length_x, height)
    FovX = focal2fov(focal_length_x, width)
    fx = fy = intr.params[0]
    cx = intr.params[1]
    cy = intr.params[2]
elif intr.model == "PINHOLE":
    focal_length_x = intr.params[0]
    focal_length_y = intr.params[1]
    FovY = focal2fov(focal_length_y, height)
    FovX = focal2fov(focal_length_x, width)
    fx = intr.params[0]
    fy = intr.params[1]
    cx = intr.params[2]
    cy = intr.params[3]
```
```
class DummyCamera:
    def __init__(self, uid, R, T, FoVx, FoVy, K, image_width, image_height):
        self.uid = uid
        self.R = R
        self.T = T
        self.FoVx = FoVx
        self.FoVy = FoVy
        self.K = K
        self.image_width = image_width
        self.image_height = image_height
        self.projection_matrix = getProjectionMatrix(znear=0.01, zfar=100.0, fovX=FoVx, fovY=FoVy).transpose(0, 1).cuda()
        self.world_view_transform = torch.tensor(getWorld2View2(R, T, np.array([0, 0, 0]), 1.0)).transpose(0, 1).cuda()
        self.full_proj_transform = (self.world_view_transform.unsqueeze(0).bmm(self.projection_matrix.unsqueeze(0))).squeeze(0)
        self.camera_center = self.world_view_transform.inverse()[3, :3]
```
The COLMAP camera poses were computed on a different flat object, and the size of the images used in that computation differs from the size of image `I`.
going from this:
[input image](https://i.stack.imgur.com/SyQ5q.jpg)
to this after transformation using the COLMAP pose:
[expected image](https://i.stack.imgur.com/b6k8V.jpg) |
I think you need to use AND instead of OR:
SELECT
ld.status,
ld.lead_name
FROM
DATAWAREHOUSE.SFDC_STAGING.SFDC_LEAD AS ld
WHERE
ld.status <> 'Open'
AND ld.lead_name NOT LIKE '%test%'
AND ld.lead_name NOT LIKE '%t3st%'
AND ld.lead_name NOT LIKE '%auto%'
AND ld.lead_name NOT LIKE '%autoXmation%'
AND ld.lead_name NOT LIKE 'automation%';
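The underlying logic is De Morgan's law: NOT (A OR B) is (NOT A) AND (NOT B), so a chain of `NOT LIKE` exclusions must be joined with AND. A small sqlite3 sketch (made-up table and rows, not the real warehouse schema) shows why OR lets unwanted rows through:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lead (status TEXT, lead_name TEXT)")
conn.executemany(
    "INSERT INTO lead VALUES (?, ?)",
    [("Closed", "real customer"), ("Closed", "test lead"), ("Open", "real customer")],
)

# With OR, a row survives if it fails to match ANY ONE pattern:
# 'test lead' slips through because it doesn't contain 'auto'.
or_rows = conn.execute(
    "SELECT lead_name FROM lead WHERE status <> 'Open' "
    "AND (lead_name NOT LIKE '%test%' OR lead_name NOT LIKE '%auto%') "
    "ORDER BY lead_name"
).fetchall()

# With AND, a row must fail to match EVERY pattern to survive.
and_rows = conn.execute(
    "SELECT lead_name FROM lead WHERE status <> 'Open' "
    "AND lead_name NOT LIKE '%test%' AND lead_name NOT LIKE '%auto%' "
    "ORDER BY lead_name"
).fetchall()

print(or_rows)   # [('real customer',), ('test lead',)]
print(and_rows)  # [('real customer',)]
```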
|
pip install sklearn
Collecting sklearn
Using cached sklearn-0.0.post12.tar.gz (2.6 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [15 lines of output]
The 'sklearn' PyPI package is deprecated, use 'scikit-learn'
rather than 'sklearn' for pip commands.
Here is how to fix this error in the main use cases:
- use 'pip install scikit-learn' rather than 'pip install sklearn'
- replace 'sklearn' by 'scikit-learn' in your pip requirements files
(requirements.txt, setup.py, setup.cfg, Pipfile, etc ...)
- if the 'sklearn' package is used by one of your dependencies,
it would be great if you take some time to track which package uses
'sklearn' instead of 'scikit-learn' and report it to their issue tracker
- as a last resort, set the environment variable
SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True to avoid this error
More information is available at
https://github.com/scikit-learn/sklearn-pypi-package
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I don't know what to do. |
Changing the theme of a #32768 (menu) window class at runtime |
|winapi|themes|win32gui| |
Here is my solution, without imports:
```generate n = sequence (replicate n [1..n])```
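`sequence (replicate n [1..n])` uses the list monad to build the n-fold Cartesian product of `[1..n]` with itself. For comparison, a rough Python equivalent (my own sketch, not part of the answer):

```python
from itertools import product

def generate(n):
    # all length-n tuples over 1..n, mirroring sequence (replicate n [1..n])
    return list(product(range(1, n + 1), repeat=n))

print(generate(2))  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```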
|
If the URL is always the same and you only need to change the warehouse id, you can get the id in the following way. This is without intercepting any errors and only shows the basic mechanism.
Sub getDataWarehouseId()
    Const url As String = "https://www.screener.in/company/KRISHANA/"
    Dim doc As Object

    Set doc = CreateObject("htmlFile")
    With CreateObject("MSXML2.XMLHTTP.6.0")
        .Open "GET", url, False
        .send
        If .Status = 200 Then
            doc.body.innerHTML = .responseText
            MsgBox doc.getElementByID("company-info").getAttribute("data-warehouse-id")
        Else
            MsgBox "Page not loaded. HTTP status " & .Status
        End If
    End With
End Sub |
I am facing a problem selecting the correct classifier for my data-mining task.
I am labeling webpages using a statistical method, on a 1-4 scale, with 1 being the poorest and 4 the best.
Previously, I used an SVM to train the system, since I was using a binary (1,0) label then. But now that I have switched to this 4-class label, I need to change classifiers, because I think the SVM classifier will only work for two-class classification (correct me if I am wrong).
What kind of classifier is most appropriate here for my classification purpose?
|
How to choose the right machine-learning classifier |
I want to iterate a chunk of code across columns in a dataframe, to create a results matrix. I am getting stuck with how to iteratively create new objects and matrices using/named after values in a certain column (Site) in my dataframe, to then pass through to the following lines of code.
The code I am wanting to iterate is:
```
SiteI1_data <- data %>% subset(Site == "I1") %>% select(x_coord, y_coord)
SiteI1_dists <- as.matrix(dist(SiteI1_data))
SiteI1_dists.inv <- 1 / SiteI1_dists
diag(SiteI1_dists.inv) <- 0
Resid_taxa1_siteI1 <- as.vector(Resid_data %>% subset(Site == "I1") %>% select(Abundance_Taxa1))
Moran.I(Resid_taxa1_siteI1, SiteI1_dists.inv)
```
The "Site" names and coordinates for the first chunk of code are in the following dataframe (data):
| Sample | Site | Treatment | x_coord | y_coord | Abundance_Taxa_1 | ... | Abundance_Taxa_n
| ------ | ---- | -------- | ------- | ------- | ---------------- | --- | ----------------
| I1S1 | I1 | Inside | 140.00 | -29.00 | 56 | ... | 0
| O2S1 | O2 | Outside | 141.00 | -28.00 | 3 | ... | 100
| O2S2 | O2 | Outside | 141.10 | -28.10 | 5 | ... | 4
The "Abundance_Taxa_" values are stored with the associated "Site" names in a separate dataframe (Resid_data):
|Site | Abundance_Taxa_1 | ... | Abundance_Taxa_n |
|-----| ---------------- | --- | ---------------- |
|O1 | -0.5673 | --- | 1.1579 |
|I1 | -0.6666 | --- | 1.2111 |
There are multiple "Samples" for each "Site". The first four lines of code aim to use the x and y coordinates to create an inverse distance weights matrix for each site.
The fifth line aims to create a vector of the values in Resid_data for each site and taxa.
The final line of code uses the taxa/site vector and the previously created distance matrix to calculate Moran's I for each site and taxa.
My desired outcome is to produce the following matrix:
| Site | Treatment | MoranI_Taxa_1 | ... | MoranI_Taxa_n|
| ---- | -------- | ------------- | --- | -------------|
| I1 | Inside | 0.1 | ... | 0.2 |
| O2 | Outside | 0.5 | ... | 0.01 |
I am not sure how to write this to
- iteratively create a distance matrix for each site, named after that site, to later pass through to the final line
- iterate the 5th line of code for each site and taxa in Resid_data (i.e., all columns beginning with "Abundance_Taxa_"), and create a vector for each
- iterate the final line for each site (e.g., for site I1, it would repeat as:
```
Moran.I(Resid_taxa1_siteI1, SiteI1_dists.inv)
Moran.I(Resid_taxa2_siteI1, SiteI1_dists.inv)
```
until it has completed for all vectors of residual values for that site).
The number of Site values is relatively small (12), but the number of "Abundance_Taxa" columns that I need to iterate through is large (~2400). Most of what I have found online has been writing very simple 'for' loops. As such, I don't even really know where to start with this. |
How to solve problem with pip installation? |
|installation|scikit-learn|pip| |
I have overlapping text from my script:
import matplotlib.pyplot as plt
from adjustText import adjust_text
x = [12,471,336,1300]
y = [2,5,4,11]
z = [0.1,0.2,0.3,0.4]
im = plt.scatter(x, y, c = z, cmap = "gist_rainbow", alpha = 0.5)
plt.colorbar(im)
texts = []
texts.append(plt.text(783, 7.62372448979592, 'TRL1'))
texts.append(plt.text(601, 6.05813953488372, 'CFT1'))
texts.append(plt.text(631, 4.28164556962025, 'PTR3'))
texts.append(plt.text(665, 7.68018018018018, 'STT4'))
texts.append(plt.text(607, 5.45888157894737, 'RSC9'))
texts.append(plt.text(914, 4.23497267759563, 'DOP1'))
texts.append(plt.text(612, 7.55138662316476, 'SEC8'))
texts.append(plt.text(766, 4.1264667535854, 'ATG1'))
texts.append(plt.text(681, 3.80205278592375, 'TFC3'))
plt.show()
which shows overlapping text:
[![enter image description here][1]][1]
however, when I add `adjust_text`:
import matplotlib.pyplot as plt
from adjustText import adjust_text
x = [12,471,336,1300]
y = [2,5,4,11]
z = [0.1,0.2,0.3,0.4]
im = plt.scatter(x, y, c = z, cmap = "gist_rainbow", alpha = 0.5)
plt.colorbar(im)
data = [
(783, 7.62372448979592, 'TRL1'),
(601, 6.05813953488372, 'CFT1'),
(631, 4.28164556962025, 'PTR3'),
(665, 7.68018018018018, 'STT4'),
(607, 5.45888157894737, 'RSC9'),
(914, 4.23497267759563, 'DOP1'),
(612, 7.55138662316476, 'SEC8'),
(766, 4.1264667535854, 'ATG1'),
(681, 3.80205278592375, 'TFC3')
]
texts = [plt.text(x, y, l) for x, y, l in data]
adjust_text(texts)
plt.savefig('adjust.text.png', bbox_inches='tight', pad_inches = 0.1)
the labels are shifted to the lower left corner, making them useless, instead of just a little overlapped.
I am following clues from `adjust_text(texts)` as suggested by the below two links,
https://stackoverflow.com/questions/63583615/how-to-adjust-text-in-matplotlib-scatter-plot-so-scatter-points-dont-overlap
and https://adjusttext.readthedocs.io/en/latest/Examples.html
I get this:[![enter image description here][2]][2]
how can I get `adjust_text` to fix the overlapping labels?
[1]: https://i.stack.imgur.com/Glh77.png
[2]: https://i.stack.imgur.com/jF5UG.png |
As long as you control the initialization of `ServiceConfigurationService` yourself, you can use any tool to perform the injection.
But if you want to use NestJS's DI container, you must do that through NestJS providers, as the docs show here: https://docs.nestjs.com/providers
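For illustration, a minimal hand-rolled constructor-injection sketch (plain JavaScript; every name here is hypothetical, and this is not the NestJS API):

```javascript
// A dependency the service needs.
class ConfigStore {
  get(key) {
    return { appName: "demo" }[key];
  }
}

// The service receives its dependency instead of constructing it itself,
// so any container (or plain code like below) can wire it up.
class ServiceConfigurationService {
  constructor(store) {
    this.store = store;
  }
  appName() {
    return this.store.get("appName");
  }
}

const service = new ServiceConfigurationService(new ConfigStore());
console.log(service.appName()); // "demo"
```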
|
For me, the fix was to clearly declare that I use TypeScript not only generally but also within Svelte:
```json
{
  "env": {
    "browser": true,
    "node": true,
    "es2021": true
  },
  "globals": {
    ...
  },
  "extends": [
    "standard",
    "plugin:svelte/recommended",
    "plugin:@typescript-eslint/recommended"
  ],
  "parser": "@typescript-eslint/parser", // <-- you might say typescript here
  "parserOptions": {
    "sourceType": "module",
    "extraFileExtensions": [".svelte"]
  },
  "plugins": ["@typescript-eslint"],
  "overrides": [
    {
      "files": ["*.svelte"],
      "parser": "svelte-eslint-parser",
      "parserOptions": {
        "parser": "@typescript-eslint/parser" // <-- you HAVE TO say typescript here again
      }
    }
  ]
}
```
See second code snippet in this section:
https://github.com/sveltejs/eslint-plugin-svelte?tab=readme-ov-file#parser-configuration
|
|scipy|simulation|numerical-methods| |
In my case, the device was detected by adb but not by fastboot, and the issue was solved by simply disconnecting the cable and reconnecting it, after which the phone was properly detected.
That is, although it was detected by adb (`adb devices`), when I instructed it to reboot into fastboot (`adb reboot bootloader`), fastboot did not recognize the device straight away. Reconnecting allowed it to notice the change in state, and the device was then properly recognized (`fastboot devices`) and followed any instruction properly. |
It turns out that the problem is caused by a feature in **Windows Security** called **Mandatory ASLR** (Thanks to [dariush-mazlumi](https://stackoverflow.com/users/8956917/dariush-mazlumi)).
Here are some ways to solve the problem:
- **Option 1** (I used this one): turn off **Mandatory ASLR** by going to **Windows Security** > **App & browser control** > **Exploit protection settings** > **System settings**
- **Option 2** (I find it helpful): add all of binaries in **C:\msys2\usr\bin** to **Windows Security** > **App & browser control** > **Exploit protection settings** > **Program settings** ([Source](https://github.com/appveyor/ci/issues/3777#issuecomment-1848799949)) |
Has anyone got example code to add relationships to an Erwin model using the Erwin API, e.g. adding a new key_group?
Adding entities and attributes is pretty straightforward; I'm struggling to figure out how to add relationships.
I have written code to read key group properties and check whether a relationship exists, but I'm unsure about adding key groups and their properties. |
Erwin API question adding relationships to model |
|api|relationship|erwin| |
The document title in the bottom right of that second screenshot is shown in italics, which means that the document does not actually exist. The ID is just shown in the Firestore console because you have subcollections with data under it.
And since the document doesn't actually exist, calling `exists()` on that path correctly returns false. If you want it to return true, you'll have to ensure that the document actually exists.
Also see:
* https://stackoverflow.com/questions/48137582/firestore-db-documents-shown-in-italics
* https://stackoverflow.com/questions/52826365/parent-document-is-being-marked-as-deleted-italics-by-default
* https://stackoverflow.com/questions/54499104/problem-with-getting-list-of-document-collection-from-directory |
null |
I have understood and solved the notebook available on Coursera for the Deep Learning Specialization (Sequence Models course) by Andrew Ng.
In the notebook, he provides a detailed walkthrough for building a wake word detection model. However, at the end, **he loads a pre-trained model trained on the word "activate."**
I attempted to use Google Colab and my own data. I collected 369 recordings of people saying "Alexa," which are available on Kaggle. However, they have a sample rate of 16000 Hz.
I also used Google voice commands as negative sounds and collected some clips from YouTube that contain various environmental sounds.
I followed all the steps exactly as instructed, **and I'm using Google Colab for training**. But when I try to create the dataset, **the RAM quickly fills up**, and I cannot create 4000 samples as mentioned by Andrew in his notebook.
Here is my code for "create_training_examples":
```
nsamples = 4000
X_train = []
Y_train= []
X_test = []
Y_test = []
train_count = 0
test_count = 0
to_test = False
for i in range(0, nsamples):
if i % 500 == 0:
print(i)
rand = random.randint(0,61)
if i%5 == 0:
x, y = create_data_example(backgrounds_list[rand], alexa_list, negatives_list, Ty, name=str(i),to_test = True)
X_test.append(x.swapaxes(0,1))
Y_test.append(y.swapaxes(0,1))
test_count+=1
else:
x, y = create_data_example(backgrounds_list[rand], alexa_list, negatives_list, Ty, name=str(i),to_test = False)
X_train.append(x.swapaxes(0,1))
Y_train.append(y.swapaxes(0,1))
train_count+=1
print("Number of training samples:", train_count)
print("Number of testing samples:", test_count)
X_train = np.array(X_train)
Y_train = np.array(Y_train)
np.save('XY_train/X_train.npy', X_train)
np.save('XY_train/Y_train.npy', Y_train)
X_test = np.array(X_test)
Y_test = np.array(Y_test)
np.save('XY_test/X_test.npy', X_test)
np.save('XY_test/Y_test.npy', Y_test)
print('done saving')
print('X_train.shape: ',X_train.shape)
print('Y_train.shape: ',Y_train.shape)
```
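One thing I am considering to avoid filling RAM is flushing examples to disk in chunks instead of accumulating everything in Python lists. This is only a sketch: `create_data_example_stub` is a tiny placeholder for `create_data_example`, the chunk size is arbitrary, and real code would save each chunk with `np.save` rather than pickle:

```python
import os
import pickle

def create_data_example_stub(i):
    # Stand-in for create_data_example; returns tiny (x, y) placeholders
    # so that only the chunking pattern itself is demonstrated.
    return [float(i)], [i % 2]

def build_dataset_in_chunks(nsamples, chunk_size, out_dir):
    """Write the dataset to disk in fixed-size chunks so that at most
    chunk_size examples are ever held in memory at once."""
    os.makedirs(out_dir, exist_ok=True)
    buf_x, buf_y, chunk_id = [], [], 0
    for i in range(nsamples):
        x, y = create_data_example_stub(i)
        buf_x.append(x)
        buf_y.append(y)
        if len(buf_x) == chunk_size:
            path = os.path.join(out_dir, f"chunk_{chunk_id}.pkl")
            with open(path, "wb") as f:
                pickle.dump((buf_x, buf_y), f)
            buf_x, buf_y = [], []
            chunk_id += 1
    if buf_x:  # flush the last partial chunk
        path = os.path.join(out_dir, f"chunk_{chunk_id}.pkl")
        with open(path, "wb") as f:
            pickle.dump((buf_x, buf_y), f)
        chunk_id += 1
    return chunk_id
```

The chunks could then be fed to `model.fit` through a generator so the full array never has to exist in memory.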
here is the model i use:
```
def model(input_shape):
X_input = Input(shape = input_shape)
X = Conv1D(196,15,strides=4)(X_input)
X = BatchNormalization()(X)
X = Activation('relu')(X)
X = Dropout(0.8)(X)
X = GRU(128,return_sequences = True)(X)
X = Dropout(0.8)(X)
X = BatchNormalization()(X)
X = GRU(128,return_sequences=True)(X)
X = Dropout(0.8)(X)
X = BatchNormalization()(X)
X = Dropout(0.8)(X)
X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed (sigmoid)
model = Model(inputs = X_input, outputs = X)
return model
```
**and for training:**
```
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
model.fit(X_train, Y_train, batch_size = 5, epochs=20,validation_data=(X_test,Y_test))
```
I have used the notebook as it is, even following the same method for feature extraction. **The only thing I modified was the training data, using my own data.** However, what happened is that the RAM quickly filled up. And when I reduced the number of samples to 1600, or the sample rate to 8000 Hz, **I didn't get good results at all.**
**I also tried changing batch_size and learning_rate.**
Nothing changed.
Am I doing something wrong?
Do you have any advice or suggestions, please? |
Run all cells; this should work.
Otherwise, why not try: `df['Column']` |
I have an HTML/CSS/JS website where I'm embedding facebook posts, and adding some text (with various info and action buttons) to the top of each post. I'm adding the text that goes above each post dynamically in Javascript, so that the embedded post (iframe) is in a separate container than the text that goes above it.
As you can see in the image below, the text is positioned as `absolute` so that it appears above the post, and not next to it. However, I can't seem to get the post to move *down* nor the text to move up in order to create some separation between them. Any help or ideas would be greatly appreciated!!
UPDATE: I meant to include a codepen that reproduces the issue with minimal code: https://codepen.io/Mickey_Vershbow/pen/dyLzLNV?editors=1111
See below for screenshot and actual code:
[![enter image description here][1]][1]
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
// Array of facebook post ID's
facebookArr = [
{
post_id:
"pfbid0cm7x6wS3jCgFK5hdFadprTDMqx1oYr6m1o8CC93AxoE1Z3Fjodpmri7y2Qf1VgURl"
},
{
post_id:
"pfbid0azgTbbrM5bTYFEzVAjkVoa4vwc5Fr3Ewt8ej8LVS1hMzPquktzQFFXfUrFedLyTql"
}
];
// Variables to store post ID, embed code, parent container
let postId = "";
let embedCode = "";
let facebookContainer = document.getElementById("facebook-feed-container");
$(facebookContainer).empty();
// Loop through data to display posts
facebookArr.forEach((post) => {
let relativeContainer = document.createElement("div");
postId = post.post_id;
postLink = `${postId}/?utm_source=ig_embed&utm_campaign=loading`;
// ---> UPDATE: separate container element
let iframeContainer = document.createElement("div");
embedCode = `<iframe src="https://www.facebook.com/plugins/post.php?href=https%3A%2F%2Fwww.facebook.com%2FIconicCool%2Fposts%2F${postId}&show_text=true&width=500" width="200" height="389" style="border:none;overflow:hidden" scrolling="no" frameborder="0" allowfullscreen="true" allow="autoplay; clipboard-write; encrypted-media; picture-in-picture; web-share" id=fb-post__${postId}></iframe>`;
// Update the DOM
iframeContainer.innerHTML = embedCode;
// ADDITIONAL TEXT
let additionalTextParentContainer = document.createElement("div");
additionalTextParentContainer.className = "risk-container";
let additionalText = document.createElement("div");
additionalText.className = "absolute";
additionalText.innerText = "additional text to append";
additionalTextParentContainer.append(additionalText);
relativeContainer.append(additionalText);
facebookContainer.append(relativeContainer, iframeContainer);
});
<!-- language: lang-css -->
#facebook-feed-container {
display: flex;
flex-direction: row;
row-gap: 1rem;
column-gap: 3rem;
padding: 1rem;
}
.absolute {
position: absolute;
margin-bottom: 5rem;
color: red;
}
.risk-container {
margin-top: 3.5rem;
}
<!-- language: lang-html -->
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<div id="facebook-feed-container">
</div>
<!-- end snippet -->
[1]: https://i.stack.imgur.com/33joo.jpg |
I want to write a function that returns several configuration settings from a settings store. Settings can be of several types, some have parameters, some hold strings, some hold arrays like so:
```typescript
type StringSetting = 'outputPath' | 'fileName'
type StringSettingWithParam = 'emailAddress' | 'homeDirectory'
type ArrayOfStringsSetting = 'inputPaths'
```
I want to fetch several settings at once, the function implementation is not a problem, but I can't find a way to strongly type the function:
Example parameter to the function:
```typescript
{
a: {setting: 'outputPath'},
b: {setting: 'emailAddress', param: 'userA'},
c: {setting: 'inputPaths'}
}
```
Expected return value
```typescript
{
a: '/home/user', // string
b: 'foo@bar.com', // string
c: ['/usr/var/output', '/home/user/output'] // string[]
}
```
In other words, the output type should have the same keys as the input type, the output values should be dependent on the input values
I thought, a good starting point would be to describe functions for each kind of input type
```typescript
type StringSettingProcessor = (s: {setting: StringSetting}) => string
type StringWithParamSettingProcessor = (s: { setting: StringSettingWithParam; param: string }) => string
type ArrayOfStringsSettingProcessor = (s: { setting: ArrayOfStringsSetting }) => string[]
```
Now I can combine all of those into a single type
```typescript
type Processors = StringSettingProcessor | StringWithParamSettingProcessor | ArrayOfStringsSettingProcessor
```
My function signature could be perhaps be something like this, but I have no idea how to correctly set the output type.
```typescript
function getSettings<T extends Record<string, Parameters<Processors>[0]>>(settings: { [K in keyof T]: T[K] }): any {}
```
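A possible shape for the signature (only a sketch; the store lookup is a dummy standing in for the real settings store): a conditional type maps each request to its value type, and a mapped type carries the input's keys to the output:

```typescript
// Sketch only: type names mirror the question; the "store" is a dummy.
type StringSetting = 'outputPath' | 'fileName';
type StringSettingWithParam = 'emailAddress' | 'homeDirectory';
type ArrayOfStringsSetting = 'inputPaths';

type SettingRequest =
  | { setting: StringSetting }
  | { setting: StringSettingWithParam; param: string }
  | { setting: ArrayOfStringsSetting };

// Array settings map to string[], everything else to string.
type SettingValue<R extends SettingRequest> =
  R extends { setting: ArrayOfStringsSetting } ? string[] : string;

function getSettings<T extends Record<string, SettingRequest>>(
  settings: T
): { [K in keyof T]: SettingValue<T[K]> } {
  const reqs = settings as Record<string, SettingRequest>;
  const result: Record<string, unknown> = {};
  for (const key of Object.keys(reqs)) {
    const req = reqs[key];
    // Dummy lookup: real code would consult the settings store here.
    result[key] = req.setting === 'inputPaths' ? [] : 'value of ' + req.setting;
  }
  return result as { [K in keyof T]: SettingValue<T[K]> };
}

const r = getSettings({
  a: { setting: 'outputPath' },
  b: { setting: 'emailAddress', param: 'userA' },
  c: { setting: 'inputPaths' },
});
// r.a and r.b are inferred as string, r.c as string[]
```

Because `T` is constrained to `Record<string, SettingRequest>`, the object literal keeps its literal `setting` types, so the conditional type can discriminate per key.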
|
I was following @john-rocky on [tensorflow-object-detection-api](https://rockyshikoku.medium.com/how-to-use-tensorflow-object-detection-api-with-the-colab-sample-notebooks-477707fadf1b)
This tutorial solves the problem of training images with ONE bbox per image ([640,640,3]) with the RF Zoo.
I tried to adapt it to N bboxes per image, and I get this error:
```python
WARNING:tensorflow:5 of the last 5 calls to ....
```
In my case, after a long investigation, this error is due to _"tf.functions can only handle one predefined input form; if the form changes, or if different python objects are passed, tensorflow automatically rebuilds the function."_ @bela127 [comment 619036189](https://github.com/tensorflow/tensorflow/issues/34025#issuecomment-619036189)
I explain in detail:
When `def train_step_fn` is called with values of shape `[640,640,3]` for images and `[1,4]` for boxes, `@tf.function` causes a **graph to be created**. That graph gives you a **_lot of speed_** on subsequent calls: in my case, 10 s for 460 images.
But I have `[N,4]` per image (some images have seven bboxes, `[7,4]`, others `[3,4]`), so on each call to `def train_step_fn`, **_TF has to create another graph_**, because **the shapes do not match the previous ones**. This results in the error `tensorflow:5 of the last 5 calls`, and takes 1320 s for 460 images.
On the left, the graphics card without the error `tensorflow:5 of the last 5 calls`; on the right, constantly having to create new graphs (GPU with the error).

_**Solution in my case**_: the input shapes of the tensors passed to `def train_step_fn` must be exactly the same, i.e. all images must have the same number of labelled boxes.
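To illustrate that fix (a sketch only: `MAX_BOXES` is an assumed value, and the padded rows would still have to be masked or zero-weighted in the loss; this shows just the shape normalisation):

```python
# Pad (or truncate) each image's boxes to a fixed MAX_BOXES so that every
# call to train_step_fn sees the same [MAX_BOXES, 4] shape and tf.function
# traces a single graph. MAX_BOXES and the pad value are assumptions.
MAX_BOXES = 10

def pad_boxes(boxes, max_boxes=MAX_BOXES, pad_value=0.0):
    """Pad an [N, 4] list of boxes to shape [max_boxes, 4]."""
    padded = [list(b) for b in boxes[:max_boxes]]
    while len(padded) < max_boxes:
        padded.append([pad_value] * 4)
    return padded
```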
|
You can pass a function (instead of a React element) as the children of `LoginButton`:
```
export default function Home() {
return (
<main>
<LoginButton>
{disabled => (
<Button
variant="secondary"
size="lg"
className="mt-8"
disabled={disabled}
>
Sign in
</Button>
)}
</LoginButton>
</main>
);
}
```
and call this function in the `LoginButton` component:
```
export const LoginButton = ({children}) => {
const [isPending, startTransition] = useTransition();
...
return (
<span className="cursor-pointer" onClick={onclickHandler}>
{children(isPending)}
</span>
);
};
``` |
I'm new to server-side development, but I know a lot of things from front-end development.
My problem:
1\. From the client side (React.js application), a JSON string is sent (POST request) to a C++ server:
```
const handleFetchData = async () => {
try {
dispatch(setRepoUrl(inputUrl))
const urlParts = inputUrl.split('/')
const username = urlParts[3]
const repo = urlParts[4]
const response = await axios.get(
`https://api.github.com/repos/${username}/${repo}/issues`
)
dispatch(setIssues(response.data))
const api_data = JSON.stringify(response.data)
await axios.post('http://127.0.0.1:5174/user_data.json', { api_data })
console.log('Data saved on server:', api_data)
} catch (error) {
console.error('Error fetching data: ', error)
}
}
```
2\. The C++ server processes this request and writes it to a file, but first deletes existing data in the .json file:
```
void TcpServer::writeData(const std::string& data)
{
remove(user_data_path);
size_t start = data.find("{");
if (start == std::string::npos)
{
std::cerr << "JSON data not found in the input string.\n\n" << std::endl;
return;
}
std::string jsonData = data.substr(start);
std::ofstream file(user_data_path, std::ios::out);
if (file.is_open())
{
file << jsonData << std::endl;
file.close();
std::cout << "Data saved to user_data.json\n\n" << std::endl;
}
else
{
std::cerr << "Failed to open file for writing.\n\n" << std::endl;
}
}
```
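For reference, a binary-mode version of the write (a sketch; the function name is illustrative): writing exactly the received number of bytes in binary mode matters, because text mode on Windows translates newlines, and writing a fixed-size buffer beyond the received length appends junk bytes; both can show up as Unicode substitution characters in editors.

```cpp
#include <fstream>
#include <string>

// Sketch: write exactly jsonData.size() bytes, in binary mode, so that
// multi-byte UTF-8 sequences are copied through untouched and no extra
// buffer bytes end up in the file.
bool writeJsonBinary(const std::string& path, const std::string& jsonData)
{
    std::ofstream file(path, std::ios::binary | std::ios::trunc);
    if (!file.is_open())
        return false;
    file.write(jsonData.data(), static_cast<std::streamsize>(jsonData.size()));
    return file.good();
}
```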
3\. After writing the JSON string to the .json file, the Visual Studio application displays a window with the following error message:
File Load
Some bytes have been replaced with the Unicode substitution character while loading file
Saving the file will not preserve the original file contents
[](https://i.stack.imgur.com/bsCuA.png)
4\. After that, some characters in the JSON string are transformed into this - (\\r\\n\\t\\t\<Ex1 innerRef={ref}\>\\r\\n\\t\\t\\t{children}\\r\\n\\t\\t\</Ex1\>\\r\\n\\t);\\r\\n});οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½7#οΏ½
Perhaps someone knows what the problem is and how to fix it.
In the end, I need a valid JSON string.
P.S. This server is written for Windows (#include \<winsock.h\>).
I don't want to provoke dissatisfaction in the comments like "why is he writing a server in C++ instead of node.js, etc."
I am a Junior Front-end Developer, and I just like C++, so I chose this language.
Thank you for understanding, I hope for your help.
I've been searching for a solution to the problem for a long time and simply came to the conclusion to write a post on Stack Overflow, so I don't have any versions of the solution to this problem. |
C++(or Visual Studio) saving the file will not preserve the original file contents |
|c++|json|visual-studio| |
null |
I also got struck in the same issue but I had no mistake in exporting file and remaining all codes were fine as well.
The only problem with my code was I had imported the model as below:
const { User } = require('./Database/Models/UserModel');
Since I had **only one model exported** from the file UserModel.js, I should not use **{}** while importing the model.
Correct way of importing would be:
const User = require('./Database/Models/UserModel');
And my issue got resolved. |
This is Node.js Express code, and it is configured to allow a dynamic origin.
const whitelist = ["http://localhost:8080"];
const corsOptions = {
origin: function (origin, callback) {
console.log(origin);
if (whitelist.indexOf(origin) !== -1) {
console.log(whitelist.indexOf(origin) !== -1);
callback(null, true);
} else {
callback(new Error("Not allowed by CORS"));
}
},
credentials: true,
};
const PORT = 5000;
const app = express();
app.use(cors(corsOptions));
a simple endpoint
app.get("/", (req, res) => {
res.send("ok");
});
With this, when I make a request from http://localhost:8080 (React app) to http://localhost:5000 (backend app), it works fine and I am able to see the response text.
But when I load the app in the browser (http://localhost:5000), it should show the response, i.e. the **response text "ok"**. Instead I am getting the CORS error even though it is the same origin.
But when i replace the line
app.use(cors(corsOptions));
with
app.use(cors());
I was able to see the response text in the browser when the app (http://localhost:5000) is loaded. Why is this the behavior when the **dynamic origin is set** (setting the cors config)? And when I try to log the origin, it shows **undefined** too.
const corsOptions = {
origin: function (origin, callback) {
console.log(origin);
..........
Irrespective of whether the dynamic origin is set or not, when the app is loaded in the browser it should show the response text, right? Why doesn't it?
Another option is to add **!origin** to fix the issue, but I am not sure whether that is correct:
if (whitelist.indexOf(origin) !== -1 || !origin) {....}
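To make the **!origin** idea concrete, here is the check in isolation (a sketch outside Express; `http://evil.example` is just an illustrative blocked origin). Same-origin page loads, curl, and server-to-server requests send no `Origin` header, which is why `origin` is `undefined`; whether to allow such requests is a policy decision:

```javascript
const whitelist = ["http://localhost:8080"];

function originCheck(origin, callback) {
  // A missing Origin header (same-origin navigation, curl) passes through.
  if (!origin || whitelist.indexOf(origin) !== -1) {
    callback(null, true); // allowed
  } else {
    callback(new Error("Not allowed by CORS"));
  }
}

// Browser navigation to http://localhost:5000 itself sends no Origin header:
originCheck(undefined, (err) => console.log(err ? "blocked" : "allowed")); // allowed
originCheck("http://evil.example", (err) => console.log(err ? "blocked" : "allowed")); // blocked
```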
|
CORS Err - Same origin is undefined in browser |
|node.js|express|cors| |
I've been trying to fix error CS0103, where 'faceRect' does not exist in the current context. However, when checking my code I don't know where I'm wrong; I'm new to both DlibDotNet and Emgu.CV.
Here are the 3 functions working together.
FrameGrabber:
```
void FrameGrabber(object sender, EventArgs e)
{
try
{
face_detected_lbl.Text = "0";
NamePersons.Add("");
// Get the current frame from the capture device
Emgu.CV.Image<Emgu.CV.Structure.Bgr, byte> currentFrame = grabber.QueryFrame().Resize(320, 240, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
// Convert it to Grayscale
Emgu.CV.Image<Emgu.CV.Structure.Gray, byte> gray = currentFrame.Convert<Emgu.CV.Structure.Gray, byte>();
// Face Detector
MCvAvgComp[][] facesDetected = gray.DetectHaarCascade(face, 1.2, 15, Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(30, 30));
// Action for each element detected
foreach (MCvAvgComp f in facesDetected[0])
{
t = t + 1;
Emgu.CV.Image<Emgu.CV.Structure.Gray, byte> result = gray.Copy(f.rect).Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
// Draw the face detected in the 0th (gray) channel with light green color
currentFrame.Draw(f.rect, new Bgr(Color.LightGreen), 2);
// Check if the face is live
DlibDotNet.Rectangle dlibRect = new DlibDotNet.Rectangle(f.rect.Left, f.rect.Top, f.rect.Right, f.rect.Bottom);
bool isLive = IsLiveFunction(currentFrame, dlibRect);
if (trainingImages.ToArray().Length != 0)
{
// TermCriteria for face recognition with numbers of trained images like maxIteration
MCvTermCriteria termCrit = new MCvTermCriteria(ContTrain, 0.001);
// Eigen face recognizer
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
trainingImages.ToArray(),
labels.ToArray(),
3000,
ref termCrit);
string name = recognizer.Recognize(result);
finalname = name;
// Draw the label for each face detected and recognized
if (string.IsNullOrEmpty(name))
{
currentFrame.Draw(string.IsNullOrEmpty(name) ? "Unknown" : name, ref font, new System.Drawing.Point(f.rect.X - 2, f.rect.Y - 2), new Bgr(Color.Red));
}
else
{
currentFrame.Draw(name, ref font, new System.Drawing.Point(f.rect.X - 2, f.rect.Y - 2), new Bgr(Color.Lime));
}
}
```
IsLiveFunction:
```
private bool IsLiveFunction(Emgu.CV.Image<Emgu.CV.Structure.Bgr, byte> currentFrame, DlibDotNet.Rectangle faceRect)
{
bool isLive = false;
try
{
System.Drawing.Rectangle emguRect = new System.Drawing.Rectangle((int)faceRect.Left, (int)faceRect.Top, (int)faceRect.Width, (int)faceRect.Height);
// Convert the current frame to grayscale
Emgu.CV.Image<Emgu.CV.Structure.Gray, byte> grayFrame = currentFrame.Convert<Emgu.CV.Structure.Gray, byte>();
// Extract region of interest (ROI) from the current frame using the face rectangle
Emgu.CV.Image<Emgu.CV.Structure.Gray, byte> faceImage = grayFrame.Copy(emguRect);
double ear = CalculateEyeAspectRatio(faceImage);
double blinkThreshold = 0.2;
bool blinkingDetected = ear < blinkThreshold;
DlibDotNet.Rectangle dlibRect = new DlibDotNet.Rectangle(emguRect.Left, emguRect.Top, emguRect.Right, emguRect.Bottom);
        bool headMovementDetected = DetectHeadMovement(grayFrame, dlibRect);
```

LogInLogOut:

```
    private void login_timer_Tick(object sender, EventArgs e)
{
try
{
pwede_na_magout = false;
if (!string.IsNullOrEmpty(identified_name_lbl.Text) )
{
//----------------------------TIMEIN 1------------------------
bool checkuser = false;
int face_detected = Convert.ToInt32(face_detected_lbl.Text);
currentFrame = grabber.QueryFrame().Resize(320, 240, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
bool isLive = IsLiveFunction(currentFrame, faceRect);
string querry = "SELECT Username, Date, Time_In FROM logbook_tb";
MySqlCommand cmd = new MySqlCommand(querry, connection);
if (connection.State != ConnectionState.Open)
{
blah blah
}
blah blah
{
blah
{
blah
}
}
blah
if (checkuser == false && !string.IsNullOrEmpty(lname.Text) && (face_detected == 1 && isLive) && (DateTime.Parse(time_login.Text, CultureInfo.InvariantCulture) < DateTime.Parse("11:59 AM", CultureInfo.InvariantCulture)))
{
MessageBox.Show("Blink once and move your head to log in.", "Instructions", MessageBoxButtons.OK, MessageBoxIcon.Information);
LOGIN();
READ_RECORD();
}
else
{
// If blinking or head movement is not detected
DialogResult result = MessageBox.Show("Failed to detect real face. Would you like to try again?", "Error", MessageBoxButtons.RetryCancel, MessageBoxIcon.Error); ;
if (result == DialogResult.Retry)
{
// Retry
login_timer_Tick(sender, e);
}
else
{
// Exit
Application.Exit();
}
}
```
I tried `DlibDotNet.Rectangle faceRect = GetFaceRectangleFromCurrentFrame();`, however this would lead me into creating a new function, and it is not even guaranteed to fix the problem.
Here's how it should work in my mind:
Face detect > blink and head movement > if both are true, then log in; otherwise, prompt a message.
I still don't know if there will be more issues with these functions; it would be a great help if you could spot some and tell me.
Thanks! |
CS0103 dlibdotnet and emu.cv facerect not in context |
|c#|.net|emgucv|dlib| |
null |
When I click "add to cart" on my Django e-commerce website, the items are not added to the cart, and the console shows an error message (VM40:1 Uncaught (in promise) SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON) and (POST http://127.0.0.1:8000/update_item/ 404 (Not Found)).
I tried adding a CSRF token in my views.py and checkout.html files, but the CSRF token is not even displayed; it's not showing.
Here is my views.py code:
```
from django.shortcuts import render
from django.http import JsonResponse
import json
import datetime
from .models import *
def store(request):
if request.user.is_authenticated:
customer = request.user.customer
order, created = Order.objects.get_or_create(customer=customer, complete=False)
items = order.orderitem_set.all()
cartItems = order.get_cart_items
else:
#Create empty cart for now for non-logged in user
items = []
order = {'get_cart_total':0, 'get_cart_items':0, 'shipping':False}
cartItems = order['get_cart_items']
products = Product.objects.all()
context = {'products':products, 'cartItems':cartItems}
return render(request, 'store/store.html', context)
def cart(request):
if request.user.is_authenticated:
customer = request.user.customer
order, created = Order.objects.get_or_create(customer=customer, complete=False)
items = order.orderitem_set.all()
cartItems = order.get_cart_items
else:
#Create empty cart for now for non-logged in user
items = []
order = {'get_cart_total':0, 'get_cart_items':0, 'shipping':False}
cartItems = order['get_cart_items']
context = {'items':items, 'order':order, 'cartItems':cartItems}
return render(request, 'store/cart.html', context)
from django.views.decorators.csrf import csrf_exempt
@csrf_exempt
def checkout(request):
if request.user.is_authenticated:
customer = request.user.customer
order, created = Order.objects.get_or_create(customer=customer, complete=False)
items = order.orderitem_set.all()
cartItems = order.get_cart_items
else:
#Create empty cart for now for non-logged in user
items = []
order = {'get_cart_total':0, 'get_cart_items':0, 'shipping':False}
cartItems = order['get_cart_items']
context = {'items':items, 'order':order, 'cartItems':cartItems}
return render(request, 'store/checkout.html', context)
def updateItem(request):
data = json.loads(request.body)
productId = data['productId']
action = data['action']
print('Action:', action)
print('Product:', productId)
customer = request.user.customer
product = Product.objects.get(id=productId)
order, created = Order.objects.get_or_create(customer=customer, complete=False)
orderItem, created = OrderItem.objects.get_or_create(order=order, product=product)
if action == 'add':
orderItem.quantity = (orderItem.quantity + 1)
elif action == 'remove':
orderItem.quantity = (orderItem.quantity - 1)
orderItem.save()
if orderItem.quantity <= 0:
orderItem.delete()
return JsonResponse('Item was added', safe=False)
def processOrder(request):
transaction_id = datetime.datetime.now().timestamp()
data = json.loads(request.body)
if request.user.is_authenticated:
customer = request.user.customer
order, created = Order.objects.get_or_create(customer=customer, complete=False)
total = float(data['form']['total'])
order.transaction_id = transaction_id
if total == order.get_cart_total:
order.complete = True
order.save()
if order.shipping == True:
ShippingAddress.objects.create(
customer=customer,
order=order,
address=data['shipping']['address'],
city=data['shipping']['city'],
state=data['shipping']['state'],
zipcode=data['shipping']['zipcode'],
)
else:
print('User is not logged in')
return JsonResponse('Payment submitted..', safe=False)
```
checkout.html
```
{% extends 'store/main.html' %}
{% load static %}
{% block content %}
<div class="row">
<div class="col-lg-6">
<div class="box-element" id="form-wrapper">
<form id="form">
{% csrf_token %}
<div id="user-info">
<div class="form-field">
<input required class="form-control" type="text" name="name" placeholder="Name..">
</div>
<div class="form-field">
<input required class="form-control" type="email" name="email" placeholder="Email..">
</div>
</div>
<div id="shipping-info">
<hr>
<p>Shipping Information:</p>
<hr>
<div class="form-field">
<input class="form-control" type="text" name="address" placeholder="Address..">
</div>
<div class="form-field">
<input class="form-control" type="text" name="city" placeholder="City..">
</div>
<div class="form-field">
<input class="form-control" type="text" name="state" placeholder="State..">
</div>
<div class="form-field">
<input class="form-control" type="text" name="zipcode" placeholder="Zip code..">
</div>
<div class="form-field">
                <input class="form-control" type="text" name="country" placeholder="Country..">
</div>
</div>
<hr>
<input id="form-button" class="btn btn-success btn-block" type="submit" value="Continue">
</form>
</div>
<br>
<div class="box-element hidden" id="payment-info">
<small>Paypal Options</small>
<button id="make-payment">Make payment</button>
</div>
</div>
<div class="col-lg-6">
<div class="box-element">
<a class="btn btn-outline-dark" href="{% url 'cart' %}">← Back to Cart</a>
<hr>
<h3>Order Summary</h3>
<hr>
{% for item in items %}
<div class="cart-row">
<div style="flex:2"><img class="row-image" src="{{item.product.imageURL}}"></div>
<div style="flex:2"><p>{{item.product.name}}</p></div>
<div style="flex:1"><p>${{item.product.price|floatformat:2}}</p></div>
<div style="flex:1"><p>x{{item.quantity}}</p></div>
</div>
{% endfor %}
<h5>Items: {{order.get_cart_items}}</h5>
<h5>Total: ${{order.get_cart_total|floatformat:2}}</h5>
</div>
</div>
</div>
<script type="text/javascript">
var shipping = '{{order.shipping}}'
var total = '{{order.get_cart_total|floatformat:2}}'
if (shipping == 'False'){
document.getElementById('shipping-info').innerHTML = ''
}
if (user != 'AnonymousUser'){
document.getElementById('user-info').innerHTML = ''
}
if (shipping == 'False' && user != 'AnonymousUser'){
//Hide entire form if user is logged in and shipping is false
document.getElementById('form-wrapper').classList.add("hidden");
//Show payment if logged in user wants to buy an item that does not require shipping
document.getElementById('payment-info').classList.remove("hidden");
}
var form = document.getElementById('form')
csrftoken = form.getElementsByTagName("input")[0].value
console.log('Newtoken' ,form.getElementsByTagName("input")[0].value)
    form.addEventListener('submit', function(e){
e.preventDefault()
console.log('Form Submitted...')
document.getElementById('form-button').classList.add("hidden");
document.getElementById('payment-info').classList.remove("hidden");
})
document.getElementById('make-payment').addEventListener('click', function(e){
submitFormData()
})
function submitFormData(){
console.log('Payment button clicked')
var userFormData = {
'name':null,
'email':null,
'total':total,
}
var shippingInfo = {
'address':null,
'city':null,
'state':null,
'zipcode':null,
}
if (shipping != 'False'){
shippingInfo.address = form.address.value
shippingInfo.city = form.city.value
shippingInfo.state = form.state.value
shippingInfo.zipcode = form.zipcode.value
}
if (user == 'AnonymousUser'){
userFormData.name = form.name.value
userFormData.email = form.email.value
}
console.log('Shipping Info:', shippingInfo)
console.log('User Info:', userFormData)
var url = "/process_order/"
fetch(url, {
method:'POST',
headers:{
                'Content-Type':'application/json',
'X-CSRFToken':csrftoken,
},
body:JSON.stringify({'form':userFormData, 'shipping':shippingInfo}),
})
.then((response) => response.json())
.then((data) => {
console.log('Success:', data);
alert('Transaction completed');
window.location.href = "{% url 'store' %}"
})
}
</script>
{% endblock content %}
```
|
I am experiencing a 404 error in the console in my Django project |
|javascript|django-models|django-views|django-forms|django-templates| |
null |
Check the **Trackpad** settings on the Mac:

With the settings shown here, a `LongPressGesture` works for me when I hold using three fingers. It also works to "force click" the trackpad with one finger and then hold. I haven't tried it with Flutter though. |
Here is the structure of my project:
poker/analyse_hand_river/analyse_hand_on_river.py
poker/flop_turn_river.py
Inside analyse_hand_on_river.py I have the following import:
from flop_turn_river_cards import TheRiver
This used to work, but I think since I had issues with my Conda environment and switched to a new one, it is no longer recognising the import.
Any ideas how to test or fix this? Or how to set the root of my directory to `poker`? Which is what I think the issue is.
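For reference, one workaround I have seen (a sketch; the `..` assumes the script lives one directory below the `poker/` root) is to put the project root on `sys.path` before the imports:

```python
import os
import sys

# __file__ is absent in some REPLs/notebooks, hence the cwd fallback.
here = os.path.dirname(os.path.abspath(__file__)) if "__file__" in globals() else os.getcwd()
project_root = os.path.abspath(os.path.join(here, ".."))
if project_root not in sys.path:
    sys.path.insert(0, project_root)
```

If you're using PyCharm (the "Unresolved reference" wording suggests it), marking the `poker` directory as a Sources Root (right-click the folder > Mark Directory as > Sources Root) achieves the same thing without code changes.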
|
My import is not recognised suddenly, getting Unresolved reference when importing? |
|python|import| |
For this problem, I would like to model this constraint in CPLEX, but I get an error saying `Operator not available for dvar float+[][Periods] == dvar float+`. Is there a way to model this? Here is some of my code:
```
Data:
Products = {P1 P2 P3};
NbOperations = 10;
NbPeriods = 3;
LeadTime = [1,1,3,5,0,0,3,0,0,0];
Model:
{string} Products = ...;
int NbOperations = ...; range Operations = 1..NbOperations;
int NbPeriods = ...; range Periods = 1..NbPeriods;
int LeadTime[Operations]= ...;
forall(p in Products,l in Operations,t in Periods)
ct3:
X[p][t-LeadTime[l]] == Y[p][l][t];
```
LeadTime is the estimated lead time of operation l of product p. In my case, I am assuming the lead time is the same for all products.
I tried to make it work in terms of int, but that just produced a lot more errors.
Thanks in advance. |
Property in Pinia store returns null after setting the value |
I have a database with two columns, one is a date and the other is a string formatted as JSON as follows:
```
'{ "k1": 1, "k2": 2, "k3": 3 }'
```
I am trying to get a table to use in Looker Studio, where for each date I have as many rows as there are keys in the corresponding JSON. For example if the above example corresponds to today's date, I want to have the rows:
`today, k1, 1`, `today, k2, 2`, `today, k3, 3` in my table.
The challenge is that **I don't know the names of the keys `k1`, `k2`, `k3`** in advance, and there could be more or fewer of them depending on the date.
I have seen people solve this on the internet using JavaScript (because apparently, even though BigQuery SQL can process JSON, it's actually quite lacking in versatility when doing it - to use a euphemism for "utterly incompetent in terms of functionality").
So I have this mockup code:
```sql
-- works in bigquery but not in looker studio (failed to fetch data from underlying dataset)
CREATE TEMP FUNCTION EXTRACT_KV_PAIRS(json_str STRING)
RETURNS ARRAY< STRUCT<key string, value string> >
LANGUAGE js AS """
try{
const json_dict = JSON.parse(json_str);
const all_kv = Object.entries(json_dict).map(
(r)=>Object.fromEntries([["key", r[0]],["value",
JSON.stringify(r[1])]]));
return all_kv;
} catch(e) { return [{"key": "error","value": e}];}
""";
with
data
as
(
select
cast('2024-01-10' as date) as date, '{"a": 12, "b": 11, "c": 51}' as json_string
union all
select
cast('2024-01-11' as date), '{"a": 3, "b": 2 , "d": 124}'
) -- data should act as a database table
,
extracted_pairs as
(SELECT data.date as date,
EXTRACT_KV_PAIRS(data.json_string) as kv_list
FROM data
)
SELECT
extracted_pairs.date as `date`, item.key as `key`, item.value as `value`
FROM extracted_pairs
CROSS JOIN UNNEST(extracted_pairs.kv_list) as item
```
So finding the key-value pairs is done in JavaScript in the function EXTRACT_KV_PAIRS, which returns an array of structs that we then cross join with.
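For reference, outside BigQuery the flattening the UDF performs is simple: parse the JSON object and emit one (key, value) pair per entry. A plain Python sketch of the same logic (illustration only, not part of the query):

```python
import json

def extract_kv_pairs(json_str):
    """Mirror of the JS UDF: return a list of {key, value} dicts,
    with each value re-serialized as a JSON string."""
    try:
        obj = json.loads(json_str)
        return [{"key": k, "value": json.dumps(v)} for k, v in obj.items()]
    except (ValueError, TypeError) as e:
        # Same error-signalling convention as the UDF's catch block.
        return [{"key": "error", "value": str(e)}]

print(extract_kv_pairs('{"a": 12, "b": 11, "c": 51}'))
```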
**This works wonderfully in BigQuery itself, but it fails in Looker Studio! Looker Studio tells me "Failed to fetch data from the underlying dataset." even though, again, the BigQuery query returns precisely the expected results.**
I was thinking that it's because of the JavaScript, so I tried another solution I found here, using regular expressions:
```sql
with
data
as
(
select
cast('2024-01-10' as date) as date, '{"a": 12, "b": 11, "c": 51}' as json_string
union all
select
cast('2024-01-11' as date), '{"a": 3, "b": 2 , "d": 124}'
) -- data should act as a database table
SELECT
data.`date`, key, value
FROM data
, UNNEST(REGEXP_EXTRACT_ALL(TO_JSON_STRING(data.json_string), r'"([^"]+)"\s*:\s*([^,}]+)')) AS item
CROSS JOIN UNNEST([STRUCT(SPLIT(item, ':')[OFFSET(0)] AS key, SPLIT(item, ':') [OFFSET(1)] AS value)])
```
However, this doesn't work in BigQuery either, because the regular expression used to extract strings of the form `"a":12` contains two capture groups! (The reason being that BigQuery uses re2, and its `REGEXP_EXTRACT_ALL` does not support multiple capture groups - of course, who would have expected versatility from this querying engine at this point.) I also tried capturing the keys and the values directly, but again re2 does not allow the regex that would be required, because for example to capture the keys one would need the regexp `"([^"]+)"\s*(?=:)` and the "`(?=:)`" lookahead is, what a surprise, unsupported...
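For comparison, the same two-capture-group pattern works fine in Python's backtracking `re` engine, so the pattern itself is fine; it's the engine/function restriction that bites (an illustrative check only, this proves nothing about re2):

```python
import re

# The exact pattern from the query above: one group for the key,
# one for the (unquoted, non-nested) value.
pattern = r'"([^"]+)"\s*:\s*([^,}]+)'
pairs = re.findall(pattern, '{"a": 12, "b": 11, "c": 51}')

# re.findall returns a list of (key, value) tuples when the pattern
# has multiple groups; BigQuery's REGEXP_EXTRACT_ALL allows only one.
print(pairs)
```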
One more option that I haven't tried is creating a view, as I've heard that Looker Studio sometimes doesn't take well to data coming from BigQuery queries, without giving any reasonable explanation. I could go that route, but I have avoided it because I currently don't have permission to create views in the project.
So before going that way - which is of course not guaranteed to work - I decided to ask: besides what I've tried, is there any other way of achieving something as (seemingly) simple as extracting the key-value pairs from a column of simple JSONs with unknown keys in BigQuery, a way which would also happen to work in Looker Studio? Many thanks to whoever can think of another way to achieve this. |
Looker Studio fails to fetch data from underlying dataset even though query works fine in BigQuery, suspecting it's because of the JavaScript function |
|sql|json|regex|google-bigquery|looker-studio| |
I run C++ code step by step (F11) in a debug build using Visual Studio 2010 linked with CPLEX 12.4, and I get the error and message shown below at the line where I declare `IloEnv env;`. It doesn't work and I don't know why; I didn't find any of these directories on my own PC, so can someone help me figure out how to fix it?
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/B11ss.jpg |
IloEnv env; doesn't work in debug and it can't access it |
|c++|visual-studio-2010|cplex|opl|rational-team-concert| |
Suppose I have colmap camera poses. Is it possible, and how, to obtain a new view of input image `I` (a planar object) from a different viewpoint/camera pose using those poses?
Colmap camera poses contain the following data:
```
extr = cam_extrinsics[key]
intr = cam_intrinsics[extr.camera_id]

height = intr.height
width = intr.width

uid = intr.id
R = np.array(qvec2rotmat(extr.qvec))
T = np.array(extr.tvec)

if intr.model=="SIMPLE_PINHOLE":
    focal_length_x = intr.params[0]
    FovY = focal2fov(focal_length_x, height)
    FovX = focal2fov(focal_length_x, width)
    fx = fy = intr.params[0]
    cx = intr.params[1]
    cy = intr.params[2]
elif intr.model=="PINHOLE":
    focal_length_x = intr.params[0]
    focal_length_y = intr.params[1]
    FovY = focal2fov(focal_length_y, height)
    FovX = focal2fov(focal_length_x, width)
    fx = intr.params[0]
    fy = intr.params[1]
    cx = intr.params[2]
    cy = intr.params[3]
```
```
class DummyCamera:
def __init__(self, uid, R, T, FoVx, FoVy, K, image_width, image_height):
self.uid = uid
self.R = R
self.T = T
self.FoVx = FoVx
self.FoVy = FoVy
self.K = K
self.image_width = image_width
self.image_height = image_height
self.projection_matrix = getProjectionMatrix(znear=0.01, zfar=100.0, fovX=FoVx, fovY=FoVy).transpose(0,1).cuda()
self.world_view_transform = torch.tensor(getWorld2View2(R, T, np.array([0,0,0]), 1.0)).transpose(0, 1).cuda()
self.full_proj_transform = (self.world_view_transform.unsqueeze(0).bmm(self.projection_matrix.unsqueeze(0))).squeeze(0)
self.camera_center = self.world_view_transform.inverse()[3, :3]
```
Colmap camera poses are computed on a different flat object; the size of the images used in that computation is different from the size of image `I`.
going from this:
[input image](https://i.stack.imgur.com/SyQ5q.jpg)
to this after transformation using the colmap pose:
[expected image](https://i.stack.imgur.com/b6k8V.jpg) |
I have Azure Notification Hubs set up on an iOS device and it is receiving messages from the Test Send page in Azure Notification Hubs, but I cannot seem to get an Azure Function timer to send notifications using the backend-adapted code examples for [specific devices][1] and [specific users][2].
I keep getting this error:
Error: The remote server returned an error: (400) BadRequest. Reason: Bad Request..TrackingId:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,TimeStamp:2024-03-26T10:35:00.5448540Z
The code is:
```
private async Task sendMessageAsync() {
    _logger.LogInformation("SendingMessage...");

    NotificationHubClient hub = NotificationHubClient
        .CreateClientFromConnectionString("Endpoint=<endpoint string>", "<hub name>");

    try {
        var msg = "\"{\"aps\":{\"alert\":\"Notification Hub test notification\"}}";
        await hub.SendAppleNativeNotificationAsync(msg);
    } catch (Exception e) {
        Console.WriteLine("Error: " + e.Message);
    }
}
```
I copied the `DefaultFullSharedAccessSignature` from the Notification Hub and am using the right Notification Hub name. The test message itself is taken from the Notification Hub test page. I am sending from Azure Functions running on my MacOS using Visual Studio, but don't think that matters.
I am using this in the .csproj:
```
<PackageReference Include="Microsoft.Azure.NotificationHubs" Version="4.2.0" />
```
This post, [Getting Bad Request (400) on Azure Notification Hub's "Test Send"][3], didn't apply, as the test send is working. This post, [NotificationHub BadRequest][4], didn't have a resolution. This post, [Azure Apple Push Notification Error: 400 Bad Request][5], had a failing test send. This post, [azure-notificationhubs IOS Bad request][6], also had a failing test send.
These posts seem to use the same method:
- [Sending Push Notifications from a Azure Function][7]
- [Getting Started β Azure, Notification Hub, and iOS][8]
- [ASP .NET Core Web API with Azure Notification Hub][9]
**UPDATE**
I thought I might try a console app, to avoid all the local function complexity in case that was somehow the problem, but still received the 400 error.
```
using Microsoft.Azure.NotificationHubs;
using System.Threading.Tasks;

namespace NotificationConsole;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello, World!");
        SendNotificationAsync().Wait();
    }

    static async Task SendNotificationAsync()
    {
        NotificationHubClient hub = NotificationHubClient
            .CreateClientFromConnectionString("<endpoint connection string>", "<hub name>");

        NotificationOutcome outcome = null;

        try {
            var msg = "\"{\"aps\":{\"alert\":\"Notification Hub test notification\"}}";
            outcome = await hub.SendAppleNativeNotificationAsync(msg, "");
        } catch (Exception e) {
            Console.WriteLine("Error: " + e.Message);
            Console.WriteLine(e.StackTrace);
        }
    }
}
```
[1]: https://learn.microsoft.com/en-us/azure/notification-hubs/notification-hubs-ios-xplat-segmented-apns-push-notification
[2]: https://learn.microsoft.com/en-us/azure/notification-hubs/notification-hubs-aspnet-backend-ios-apple-apns-notification
[3]: https://stackoverflow.com/questions/38919302/getting-bad-request-400-on-azure-notification-hubs-test-send
[4]: https://learn.microsoft.com/en-us/answers/questions/745750/notificationhub-badrequest
[5]: https://stackoverflow.com/questions/46814066/azure-apple-push-notification-error-400-bad-request
[6]: https://stackoverflow.com/questions/43629857/azure-notificationhubs-ios-bad-request
[7]: https://blog.verslu.is/azure/push-notifications-azure-function/
[8]: https://medium.com/@Dbradford/getting-started-azure-notification-hub-and-ios-ea7ca648416a
[9]: https://techmindfactory.com/ASP-.NET-Core-Web-API-with-Azure-Notification-Hub/ |
I host multiple PHP (WordPress) sites on a Windows 2019 Server with IIS 10.
I have PHP 7.4 and PHP 8.2 installed on this server.
I have one site which works fine when running with PHP 8.2 and FastCGI.
I have another site, which is supposed to be configured exactly the same way, which is running fine when configured with PHP 7.4 and FastCGI, but generates error 500 on every request as soon as I configure it with PHP 8.2 and FastCGI.
When I look at the IIS log file, I see errors like `500 0 3221225477`; the code 3221225477 is 0xC0000005 in hex, which on Windows is `#define STATUS_ACCESS_VIOLATION`.
I am quite disappointed, as this appeared one week ago and I did not make any change on the server.
I have already spent a lot of time comparing the configuration of the two sites, and cannot see any difference.
Any advice on how to investigate the root cause of the error and fix it will be greatly appreciated.
|
IIS PHP FastCGI 500 error when running with PHP 8.2 instead of PHP 7.4 |
|php|wordpress|fastcgi|http-status-code-500|windows-server-2019| |
I am trying to push a commit to a GitHub branch. I forked the repository from the parent project, but every time I push I get this error:

```
fatal: bad boolean config value 'https://github.com/Enan511/Internship-2024-Tech-Team-2.git' for 'push.autosetupremote'
``` |
Git push shows bad boolean config value error |
|github|git-push| |
I'm afraid I can't find a CSS/HTML only solution that would fit any shape.
So this snippet takes a simplistic approach: it keeps moving the text down until the space above and below can be made equal with padding.
This ensures that any adjustment in the height of the text element, due to wrapping around the shape, is taken into account.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
    const container = document.querySelector('.container');
    const innerText = document.querySelector('.innertext');
    const containerStyle = getComputedStyle(container);
    const innerTextStyle = getComputedStyle(innerText);

    function reposition() {
      const height = Number(containerStyle.height.slice(0, -2));
      let innerHeight = Number(innerTextStyle.height.slice(0, -2));
      let paddingTop = 0;
      while (((paddingTop * 2) + innerHeight) < height) {
        paddingTop++;
        innerText.style.paddingTop = paddingTop + 'px';
        innerHeight = Number(getComputedStyle(innerText).height.slice(0, -2));
      }
      innerText.style.paddingTop = (paddingTop - 1) + 'px';
    }

    window.onresize = reposition;
    reposition();
<!-- language: lang-css -->
    html,
    body {
      width: 100%;
      height: 100%;
      margin: 0;
      padding: 0;
    }

    .container {
      width: 100%;
      height: 100%;
    }

    .innertext :first-child {
      margin-top: 0;
    }

    .cutout {
      float: right;
      background-color: #faa;
      width: 80%;
      height: 100%;
      shape-outside: polygon(0 100%, 100% 0%, 100% 100%);
      clip-path: polygon(0 100%, 100% 0%, 100% 100%);
    }
<!-- language: lang-html -->
    <div class="container">
      <div class="cutout">
      </div>
      <div class="innertext">
        <h1>
          Lorem Ipsum
        </h1>
        <p>
          Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor
          in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
        </p>
      </div>
    </div>
<!-- end snippet -->
|
This happens because you are not authenticated for that API route.
First, install the [JWT Authentication for WP REST API][1] plugin in WordPress.
Then you must use a JWT token. For example, in PHP you can use `firebase/php-jwt`:
```bash
composer require firebase/php-jwt
```
Code example:
```php
<?php
require_once 'vendor/autoload.php'; // Include the JWT library
use Firebase\JWT\JWT;
// Your secret key for signing the token
$secret_key = 'your-secret-key';
// Payload data (you can customize this according to your needs)
$payload = array(
"iss" => "mywpsite.com",
"aud" => "mywpsite",
"iat" => time(),
"exp" => time() + 3600, // Token expiration time (1 hour from now)
// Add any other relevant data here
);
// Generate the JWT token
$token = JWT::encode($payload, $secret_key, 'HS256'); // the algorithm argument is required in firebase/php-jwt v6+
// Prepare the request headers
$headers = array(
'Authorization: Bearer ' . $token,
'Content-Type: application/json',
);
// Make the API request
$api_url = 'https://mywpsite/wp-json/ldlms/v1/groups';
$ch = curl_init($api_url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
$response = curl_exec($ch);
curl_close($ch);
var_dump($response);
?>
```
Note that the generated JWT is placed in the `Authorization` header as a Bearer token.
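For the curious, what `JWT::encode` produces with HS256 is just two base64url-encoded JSON segments plus an HMAC-SHA256 signature over them. A stdlib-only Python sketch of that structure (illustrative only; keep using the library in production):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # base64url without the trailing '=' padding, as JWT requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_hs256(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"typ": "JWT", "alg": "HS256"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = (header + "." + body).encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return header + "." + body + "." + sig

token = encode_hs256({"iss": "mywpsite.com", "iat": 0}, "your-secret-key")
print(token)
```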
[1]: https://cl.wordpress.org/plugins/jwt-authentication-for-wp-rest-api/ |
As an alternative to my other answer, I was wondering how exactly you are using the storage class variables.
Instead of four `boolean` variables, wouldn't it be easier to declare a single `string` variable with a validation condition so that only valid values are accepted?
For example, assuming that the valid values are `sc1`, `sc2`, `sc3` and `sc4`:
```lang-hcl
variable "storage_class" {
type = string
description = "Value of storage class"
validation {
condition = contains(["sc1", "sc2", "sc3", "sc4"], var.storage_class)
error_message = "The storage_class must be one of: sc1, sc2, sc3, sc4."
}
}
```
Running `terraform plan` with variable `storage_class="abc"`:
```lang-txt
Planning failed. Terraform encountered an error while generating this plan.
β·
β Error: Invalid value for variable
β
β on main.tf line 1:
β 1: variable "storage_class" {
β βββββββββββββββββ
β β var.storage_class is "abc"
β
β The storage_class must be one of: sc1, sc2, sc3, sc4.
```
|
You are encountering an error while opening the file. This is most likely due to an encoding issue.
Try passing the encoding explicitly, and use a context manager so the file handle is closed:
    with open("Streaming_History_Audio_2016-2018_1.json", encoding="utf-8") as f:
        obj = json.load(f) |