Container Instance Exec API WebSockets |
|azure-container-instances| |
The duplication probably happens in the first step, `insert into table2`: new data is being inserted into the original table. Try inserting into a new table instead, e.g. `insert into table_new`. |
I've run across a similar issue, and it stems from the `-` operator. This operator is overloaded to accept either text or integer, but acts differently for each type: text removes by value, while an integer removes by index. But what if your value IS an integer? Well, then you're out of luck...
If possible, you can try changing your jsonb integer array to a jsonb string array (of integers), and then the `-` operator should work smoothly.
e.g.
```
'[1,2,3]' - 2 = '[1,2]' -- removes index 2
'[1,2,3]' - '2' = '[1,2,3]' -- removes values == '2' (but '2' != 2)
'["1","2","3"]' - 2 = '["1","2"]' -- removes index 2
'["1","2","3"]' - '2' = '["1","3"]' -- removes values == '2'
```
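To see the two overloads side by side, here is a small Python sketch (illustrative only, not Postgres code) that mimics what `-` does for each operand type:

```python
def jsonb_minus(arr, operand):
    """Mimic Postgres jsonb '-': an int deletes by index, a string deletes by value."""
    if isinstance(operand, int):
        out = list(arr)
        del out[operand]  # index-based removal, like '[1,2,3]'::jsonb - 2
        return out
    # value-based removal; note the string '2' never equals the number 2
    return [x for x in arr if x != operand]

print(jsonb_minus([1, 2, 3], 2))          # index 2 removed
print(jsonb_minus([1, 2, 3], "2"))        # nothing removed: '2' != 2
print(jsonb_minus(["1", "2", "3"], "2"))  # value "2" removed
```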
**If you run into a situation where you need to remove elements from an integer array of type jsonb, you can solve it this way (albeit with some complexity):**
```
UPDATE table
SET integer_array_column =
(
SELECT jsonb_agg(value::int)
FROM jsonb_array_elements_text(integer_array_column)
WHERE value::int <> 2
)
WHERE integer_array_column @> '2';
```
|
|machine-learning|streamlit|encoder|custom-training| |
Just change
```Map<String, dynamic>```
to
```List<dynamic>```
and you are good to go :-)
|
I could not install IIS tracer on a machine running Windows 7 and IIS 7.0 because IIS admin objects were not present. What do I need to do to have those objects created?
![IIS Tracer configuration dialog][1]
[1]: http://i.stack.imgur.com/qlXGZ.jpg
The important text of the error being: "You cannot install IISTracer ISAPI filter using this application if IIS Admin objects are not present on the destination computer. See help to install this filter manually." |
I'm new to DynamoDB, so if someone could explain what I'm doing wrong, or what I'm missing in my understanding, that would be great.
I'm trying to find the most efficient way of searching for a row that contains some value.
I'm playing around with some test data to see how it works and how to design everything.
I have a table with about 1700 rows. Some rows have quite a lot of data in them.
There is a PK, Id, and some other attributes like Name, Nationality, Description, etc.
I also added a GSI on 'Name' with projection type 'KEYS_ONLY'.
Now, my scenario is to find a person whose name contains a given string. Let's say the Name is 'Pablo Picasso', and I want to find any 'Picasso'.
My assumption was that scanning the GSI should be pretty fast. I understand a Scan can only go through 1 MB of data per call, but I assumed that my GSI looked something like this:
| Name | Id |
| -------- | --- |
| A Hopper | 2 |
| Timoty c | 3 |
| Donald Duck | 14 |
With that in mind, I was sure it would find my row on the first scan. Unfortunately, my first scan went through only about 340 rows; I was able to find my row after 4 calls to Dynamo.
When I made a similar scan, but not on the GSI, it took 5 calls, which doesn't seem that different.
Am I doing something wrong? Or did I misunderstand something?
For testing purposes I'm using C# code like this:
```
var result = await _dynamoDb.ScanAsync(new ScanRequest(DynamoConstants.ArtistsTableName)
{
IndexName = "NameIndex",
FilterExpression = "contains(#Name, :name)",
ExpressionAttributeNames = new Dictionary<string, string>() { { "#Name", "name" } },
ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
{ { ":name", new AttributeValue("Picasso") } }
});
```
My index looks like this:
```
var nameIndex = new GlobalSecondaryIndex
{
IndexName = "NameIndex",
ProvisionedThroughput = new ProvisionedThroughput
{
ReadCapacityUnits = 5,
WriteCapacityUnits = 5
},
Projection = new Projection { ProjectionType = "KEYS_ONLY" },
KeySchema = new List<KeySchemaElement> {
new() { AttributeName = "name", KeyType = "HASH"}
}
};
``` |
Why doesn't scanning a GSI in DynamoDB work as fast as expected when using CONTAINS? |
|.net|amazon-dynamodb|aws-sdk|dynamodb-queries|dynamo-local| |
I want to generate a list, given a number n, which returns all possible combinations of the numbers from 1 to n, recursively.
For example
> generate 3
should return:
>[[1,1,1],[1,1,2],[1,1,3],[1,2,1],[1,2,2],[1,2,3],[1,3,1],[1,3,2],[1,3,3],[2,1,1],[2,1,2],[2,1,3],[2,2,1],[2,2,2],[2,2,3],[2,3,1],[2,3,2],[2,3,3],[3,1,1],[3,1,2],[3,1,3],[3,2,1],[3,2,2],[3,2,3],[3,3,1],[3,3,2],[3,3,3]]
Logically something like this, which obviously returns an error:
```generate n = [replicate n _ | _ <- [1..n]]```
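For reference, the expected result is just the n-fold Cartesian product of [1..n] with itself; a quick Python sketch (just to pin down the target output, not the Haskell solution itself) looks like:

```python
from itertools import product

def generate(n):
    """All length-n lists over 1..n, in the same order as the desired output."""
    return [list(t) for t in product(range(1, n + 1), repeat=n)]

print(generate(2))  # [[1, 1], [1, 2], [2, 1], [2, 2]]
```

(In Haskell, `replicateM n [1..n]` from `Control.Monad` produces this same list.)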
Hope you can help, and thanks in advance! |
In Odoo 17, getting OwlError: Invalid handler (expected a function, received: 'undefined') |
|odoo|odoo-17| |
I did what I wanted to do by doing something like this:
```python
import json
from jose import jwt
from django.conf import settings
from django.contrib.auth import get_user_model
from oauth2_provider.models import AccessToken
from oauth2_provider.views import TokenView

User = get_user_model()

class CustomTokenView(TokenView):
def create_token_response(self, request):
response_data = super().create_token_response(request)
url, headers, body, status = response_data
if status == 400:
return response_data
# Extract token
token_data = json.loads(body)
access_token = token_data.get("access_token")
access_token_object = AccessToken.objects.get(token=access_token)
# Extract the email from the request and get the user with that email
user_email = access_token_object.user
user = User.objects.get(email=user_email)
# Decode the token and add custom claims and encode the token again
decoded_token = jwt.decode(
access_token, settings.SECRET_KEY, algorithms=["HS256"]
)
decoded_token["token_type"] = "access"
decoded_token["user_id"] = user.id
decoded_token["email"] = user.email
decoded_token["exp"] = access_token_object.expires
updated_access_token = jwt.encode(
decoded_token, settings.SECRET_KEY, algorithm="HS256"
)
# Update the token with new tokens containing custom claims
token_data["access_token"] = updated_access_token
modified_body = json.dumps(token_data)
response_data = url, headers, modified_body, status
return response_data
```
and overriding the `auth/token/` endpoint to use this custom view:
```python
from django.contrib import admin
from django.urls import path, include
from django.conf import settings
from django.conf.urls.static import static
from drf_spectacular.views import SpectacularAPIView, SpectacularSwaggerView
from account.views import CustomTokenView
urlpatterns = [
# oauth
path("auth/token/", CustomTokenView.as_view(), name="token_obtain"),
path("auth/", include("drf_social_oauth2.urls", namespace="drf")),
# admin
path("admin/", admin.site.urls),
# user account
path("api/account/", include("account.urls", namespace="account")),
path("api-auth/", include("rest_framework.urls", namespace="rest_framework")),
# api schema
path("api/schema/", SpectacularAPIView.as_view(), name="schema"),
path(
"api/schema/docs/",
SpectacularSwaggerView.as_view(url_name="schema"),
name="swagger-ui",
),
]
if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
``` |
You can override `initState()` in your `State` class if you need to set an initial value for your state variables from the `StatefulWidget`:
```dart
class _RecordPageState extends State<RecordPage> {
Record _recordObject;
@override
void initState() {
super.initState();
_recordObject = widget.recordObject;
}
@override
Widget build(BuildContext context) {
//.....
}
}
``` |
I have a span tag that has a role attribute of button. What I want to achieve is to remove the whole span, matching not the span itself but the role attribute.
```php
$str = 'This is a test buton. <span id="UmniBooking_36" class="insideB" type="Form" style="cursor: pointer;color:" role="button" >Click here</span>';
$str = preg_replace('~<role="button"(.*?)</(.*?)>~Usi', "", $str);
```
I am doing something wrong but I can't figure out what.
|
Set up a [Global Constant](https://codeigniter.com/user_guide/general/common_functions.html#global-constants) in `app/Config/Constants.php`
````php
class UserConstants{
const YEAR = 2024;
const MAX = 9999;
const FOO = "myfoo";
}
define("UserConstants", UserConstants::class);
````
Usage anywhere within your application
````php
echo (new (UserConstants))::FOO;
```` |
This script, adapted from a solution I found on [Stack Overflow](https://stackoverflow.com/questions/18599339/watchdog-monitoring-file-for-changes), uses the watchdog library to monitor a folder for changes. When a new image is detected, it executes a command to process the image with the YOLOv5 model. The process is supposed to be straightforward: upon detecting a new image, the script should run the detection model and then wait for the next image.
```
#!/usr/bin/python
import time
import os
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
class MyHandler(FileSystemEventHandler):
def run_yolo(self, path):
if path.endswith('jpg'):
cmd = f'python3 detect.py --source {path} --weights yolov5s.pt'
print("Running command: ", cmd)
os.system(cmd)
def on_created(self, event):
print(f'event type: {event.event_type} path : {event.src_path}')
self.run_yolo(event.src_path)
def on_modified(self, event):
pass
def on_moved(self, event):
pass
if __name__ == "__main__":
event_handler = MyHandler()
observer = Observer()
observer.schedule(event_handler, path='imgs/', recursive=False)
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join()
```
For testing purposes, I downloaded an image using wget (`wget https://predictivehacks.com/wp-content/uploads/2019/10/cycling001-1024x683.jpg`), which successfully triggered the script. However, I encountered unexpected behavior: the script executed the detection process three times for the same image. The output clearly shows that the detection command was triggered multiple times, each time detecting the same objects in the image and saving the results to a new directory.
```
event type: modified path : imgs
event type: modified path : imgs/cycling001-1024x683.jpg
Running command: python3 detect.py --source imgs/cycling001-1024x683.jpg --weights yolov5s.pt
detect: weights=['yolov5s.pt'], source=imgs/cycling001-1024x683.jpg, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 2024-3-26 Python-3.11.6 torch-2.2.1+cpu CPU
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
image 1/1 /usr/src/app/imgs/cycling001-1024x683.jpg: 448x640 1 person, 2 bicycles, 1 car, 42.0ms
Speed: 0.3ms pre-process, 42.0ms inference, 0.6ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs/detect/exp2
event type: modified path : imgs/cycling001-1024x683.jpg
Running command: python3 detect.py --source imgs/cycling001-1024x683.jpg --weights yolov5s.pt
detect: weights=['yolov5s.pt'], source=imgs/cycling001-1024x683.jpg, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 2024-3-26 Python-3.11.6 torch-2.2.1+cpu CPU
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
image 1/1 /usr/src/app/imgs/cycling001-1024x683.jpg: 448x640 1 person, 2 bicycles, 1 car, 36.0ms
Speed: 0.3ms pre-process, 36.0ms inference, 0.6ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs/detect/exp3
event type: modified path : imgs
event type: modified path : imgs/cycling001-1024x683.jpg
Running command: python3 detect.py --source imgs/cycling001-1024x683.jpg --weights yolov5s.pt
detect: weights=['yolov5s.pt'], source=imgs/cycling001-1024x683.jpg, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 2024-3-26 Python-3.11.6 torch-2.2.1+cpu CPU
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
image 1/1 /usr/src/app/imgs/cycling001-1024x683.jpg: 448x640 1 person, 2 bicycles, 1 car, 36.9ms
Speed: 0.3ms pre-process, 36.9ms inference, 0.6ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs/detect/exp4
```
This repetition was not intended, and I'm puzzled about why the script reacted multiple times to a single image modification. The issue seems to be related to how the watchdog event is being triggered or handled, rather than the YOLOv5 model itself. I'm considering adjusting the script to ensure it only processes each image once, possibly by implementing a check to ignore subsequent detections of the same image unless it has been modified again.
Edit: The unintended repetition seems to be tied to the behavior of wget during the image download process, which likely creates temporary files and thereby triggers multiple events. When I instead move the file into the folder using mv, the script functions as expected, processing each image only once.
+Edit: Switching to the `on_created` event handler in the code makes it safe when using wget to download images into the monitored directory.
Would employing a similar approach to the one I've tested help in addressing the main issue I'm experiencing with YOLOv5 hanging in the Docker container, or might there be a more effective strategy to ensure smooth and single-instance processing of new images? |
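One way to guarantee single-instance processing, regardless of how many `created`/`modified` events a download generates, is to debounce by path. A minimal sketch (a hypothetical helper, written without watchdog so it is self-contained) could look like:

```python
import time

class DebouncedRunner:
    """Invoke an action at most once per path within a time window."""

    def __init__(self, action, window_seconds=2.0):
        self.action = action
        self.window = window_seconds
        self._last_run = {}  # path -> monotonic timestamp of last invocation

    def handle(self, path):
        now = time.monotonic()
        last = self._last_run.get(path)
        if last is not None and now - last < self.window:
            return False  # duplicate event within the window: skip
        self._last_run[path] = now
        self.action(path)
        return True

# Example: repeated events for the same file trigger the action only once.
calls = []
runner = DebouncedRunner(calls.append)
for _ in range(3):
    runner.handle("imgs/cycling001.jpg")
print(len(calls))  # 1
```

Calling `runner.handle(event.src_path)` from both `on_created` and `on_modified` would then collapse the burst of events into a single detection run.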
|woocommerce|woocommerce-subscriptions|auto-renewing| |
I'm trying to initialize an array of hashtables of a specific length in a short way:
```powershell
$array=@(@{"status"=1})*3
```
This does not work as I expect it to: when I change a value in the hash for one element in the array, all elements are updated.
```powershell
$a1=@(@{"status"=1},@{"status"=1},@{"status"=1})
$a2=@(@{"status"=1})*3
$a3=@(1)*3
$a1[1]["status"]=2
$a2[1]["status"]=2
$a3[1]=2
Write-Host "`nArray a1:"
$a1
Write-Host "`nArray a2:"
$a2
Write-Host "`nArray a3:"
$a3
```
I expect that only the second element in the array is affected by the update in all three scenarios. But this is what I get:
```
Array a1:
Name Value
---- -----
status 1
status 2
status 1
Array a2:
status 2
status 2
status 2
Array a3:
1
2
1
```
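The same reference-copying pitfall exists in other languages; for instance, this Python sketch (an analogy only, not PowerShell) reproduces the behavior and shows the usual fix of constructing a fresh object per element:

```python
# Multiplying a one-element list copies the *reference* three times:
shared = [{"status": 1}] * 3
shared[1]["status"] = 2
print(shared)  # all three entries show status 2

# Building a new dict per element gives independent entries:
independent = [{"status": 1} for _ in range(3)]
independent[1]["status"] = 2
print(independent)  # only the second entry changed
```

(The analogous PowerShell fix is to create each hashtable separately, e.g. `$a2 = @(1..3 | ForEach-Object { @{"status"=1} })`.)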
|
You can use the `ICU4J` library (https://unicode-org.github.io/icu/userguide/icu4j/).
`pom.xml` (Maven):
```xml
<dependency>
    <groupId>com.ibm.icu</groupId>
    <artifactId>icu4j</artifactId>
    <version>74.2</version>
</dependency>
```
Then in Java:
```java
import com.ibm.icu.util.TimeZone;

return TimeZone.getWindowsID("America/New_York");
``` |
I need help to use or build a regular expression to mask alphanumeric characters with *.
I tried this expression, but it doesn't work correctly when there are zeros in the middle of the string:
```
(?<=[^0].{3})\w+(?=\w{4})
```
Live samples:
https://www.regexplanet.com/share/index.html?share=yyyyf47wp3r
|Input |Output |
|--------------------|--------------------|
|0001113033AA55608981|0001113*********8981|
|23456237472347823923|2345************3923|
|00000000090000000000|0000000009000***0000|
|09008000800060050000|09008***********0000|
|AAAABBBBCCCCDDDDEEEE|AAAA************EEEE|
|0000BBBBCCCCDDDDEEEE|0000BBBB********EEEE|
The rules are:
1. The first 4 characters after any leading zeros, and the last 4, must be displayed.
2. Leading zeros are ignored (not counted toward the first 4), but are not removed or replaced. |
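One pattern that satisfies both rules (a sketch verified against the sample table above; it assumes inputs are long enough to contain 4 significant leading characters plus 4 trailing ones) captures the leading zeros separately and stars out only the middle:

```python
import re

def mask(value):
    """Keep leading zeros, the next 4 chars, and the last 4; star out the rest."""
    # 0*    -> leading zeros, kept verbatim (rule 2)
    # \w{4} -> first 4 significant characters (rule 1)
    # \w*?  -> lazy middle, replaced by '*'
    # \w{4}$-> last 4 characters (rule 1)
    head, keep, middle, tail = re.match(r'^(0*)(\w{4})(\w*?)(\w{4})$', value).groups()
    return head + keep + '*' * len(middle) + tail

print(mask("0001113033AA55608981"))  # 0001113*********8981
```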
How to generate all possible matrices given a number n in Haskell |
|haskell| |
I'm importing a large high-resolution time-series hydrometric dataset from a government website as a CSV file. The column 'X.Timestamp' imports as chr with some unusual characters, but when I try to convert it from chr to date, NA is returned. I would be really grateful for any help.
```
library(lubridate)
#>
#> Attaching package: 'lubridate'
#> The following objects are masked from 'package:base':
#>
#> date, intersect, setdiff, union
OW_HR_Discharge_DataV2<- data.frame(
stringsAsFactors = FALSE,
X.Timestamp = c("1 1955-11-08T00:00:00.000Z",
"2 1955-11-08T00:15:00.000Z","3 1955-11-08T00:30:00.000Z",
"4 1955-11-08T00:45:00.000Z",
"5 1955-11-08T01:00:00.000Z","6 1955-11-08T01:15:00.000Z"),
Value = c(10.227, 10.226, 10.228, 10.227, 10.227, 10.227),
Quality.Code = c(31L, 31L, 31L, 31L, 31L, 31L)
)
str(OW_HR_Discharge_DataV2)
#> 'data.frame': 6 obs. of 3 variables:
#> $ X.Timestamp : chr "1 1955-11-08T00:00:00.000Z" "2 1955-11-08T00:15:00.000Z" "3 1955-11-08T00:30:00.000Z" "4 1955-11-08T00:45:00.000Z" ...
#> $ Value : num 10.2 10.2 10.2 10.2 10.2 ...
#> $ Quality.Code: int 31 31 31 31 31 31
OW_HR_Discharge_DataV2$X.Timestamp<-as.POSIXct(OW_HR_Discharge_DataV2$X.Timestamp, format = "%Y-%m-%d %H:%M:%S", tz="GMT")
str(OW_HR_Discharge_DataV2)
#> 'data.frame': 6 obs. of 3 variables:
#> $ X.Timestamp : POSIXct, format: NA NA ...
#> $ Value : num 10.2 10.2 10.2 10.2 10.2
#> $ Quality.Code: int 31 31 31 31 31 31
``` |
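The parsing failure has two visible causes: each string carries a stray leading row number (e.g. `"1 1955-11-08T00:00:00.000Z"`), and the format string uses a space where the data has a literal `T` and omits fractional seconds. In R, the fix would be something like `as.POSIXct(sub("^\\d+ ", "", x), format = "%Y-%m-%dT%H:%M:%OS", tz = "GMT")`. The same two steps as a Python sketch, for illustration:

```python
from datetime import datetime, timezone
import re

def parse_ts(raw):
    """Strip the stray row number, then parse the ISO-8601 timestamp as UTC."""
    iso = re.sub(r'^\d+\s+', '', raw)  # "1 1955-11-08T..." -> "1955-11-08T..."
    return datetime.strptime(iso, '%Y-%m-%dT%H:%M:%S.%fZ').replace(tzinfo=timezone.utc)

print(parse_ts("2 1955-11-08T00:15:00.000Z"))  # 1955-11-08 00:15:00+00:00
```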
|r|datetime|type-conversion|lubridate| |
|assembly|x86|stack|osdev|bios| |
I try to run this on macOS:
```
pip3.8 install turicreate
```
I have Python 3.8.6 and pip 24.0 (from Python 3.8).
Update: The output below shows
```
installing to build/bdist.macosx-14-arm64/wheel
```
I reopened the terminal as I thought maybe it didn't open with Rosetta; now I get the same error message, but with
```
installing to build/bdist.macosx-10.9-x86_64/wheel
```
I am using this in the terminal opened with Rosetta.
I still get this error message:
```
SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer or other
standards-based tools.
See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
********************************************************************************
!!
self.initialize_options()
installing to build/bdist.macosx-10.9-x86_64/wheel
running install
==================================================================================
TURICREATE ERROR
If you see this message, pip install did not find an available binary package
for your system.
Supported Platforms:
* macOS 10.12+ x86_64.
* Linux x86_64 (including WSL on Windows 10).
Support Python Versions:
* 2.7
* 3.5
* 3.6
* 3.7
* 3.8
Another possible cause of this error is an outdated pip version. Try:
`pip install -U pip`
==================================================================================
```
I got this error with my initial Python and pip versions, both updated to the requirements of turicreate.
I tried using Rosetta and checked my Anaconda version, which is x86_64.
I also tried `pip install wheel` and `pip install -U turicreate`. |
I am plotting some lines with Seaborn:
```python
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# dfs: dict[str, DataFrame]
fig, ax = plt.subplots()
for label, df in dfs.items():
sns.lineplot(
data=df,
x="Time step",
y="Loss",
errorbar="sd",
label=label,
ax=ax,
)
ax.set(xscale='log', yscale='log')
```
The result looks like [this](https://i.stack.imgur.com/FanHR.png).
Note the clipped negative values in the "effector_final_velocity" curve, since the standard deviation of the loss between runs is larger than its mean, in this case.
However, if `ax.set(xscale='log', yscale='log')` is called *before* the looped calls to `sns.lineplot`, the result looks like [this](https://i.stack.imgur.com/JVGG4.png).
I'm not sure where the unclipped values are arising.
Looking at the source of `seaborn.relational`: at the end of `lineplot`, the `plot` method of a `_LinePlotter` instance is called. It plots the error bands by passing the already-computed standard deviation bounds to `ax.fill_between`.
Inspecting the values of these bounds right before they are passed to `ax.fill_between`, the negative values (which would be clipped) are still present. Thus I had assumed that the "unclipping" behaviour must be something matplotlib is doing during the call to `ax.fill_between`, since `_LinePlotter.plot` appears to do no other relevant post-transformations of any data before it returns, and `lineplot` returns immediately.
However, consider a small example that calls `fill_between` where some of the lower bounds are negative:
```python
import numpy as np
fig, ax = plt.subplots(1, 1, figsize=(5, 5))
np.random.seed(5678)
ax.fill_between(
np.arange(10),
np.random.random((10,)) - 0.2,
np.random.random((10,)) + 0.75,
)
ax.set_yscale('log')
```
Then it makes no difference if `ax.set_yscale('log')` is called before `ax.fill_between`; in both cases the result is [this](https://i.stack.imgur.com/ctRUi.png).
I've spent some time searching for answers in the Seaborn and matplotlib documentation, and looked for answers on Stack Overflow and elsewhere, but I haven't found any information about what is going on here.
|
I have generated a Singularity sandbox in the host dir "ubuntu_latest/"; how can I cp some files into the container dir?
I tried to use `singularity --bind` or `singularity --exec` to cp files, but it doesn't work. |
How to copy files into the singularity sandbox? |
|linux| |
I am developing mobile apps in Flutter. Until now I have been using Spring Boot for backend services like user authentication, data storage, and so on.
But now I have had a look at Appwrite as a backend abstraction layer. I can use its user authentication and get rid of writing my own user authentication for every new project I do.
Are there any experiences on whether that is suitable? Can I use Appwrite for creating all backend services? The frontend contains no business logic; it only calls Spring Boot REST services for that. With Appwrite, can I use functions to do that?
Is that a good idea? Are there any comparisons between Appwrite and Spring Boot backends showing the advantages / disadvantages? |
Appwrite and / or Spring Boot Backend |
Remove tag with a specific attr from text PHP |
|php|preg-replace| |
[enter image description here](https://i.stack.imgur.com/C0x3x.png)
For the given example HTML page:
```html
<html>
<head></head>
<body>
<div class="ia-secondary-content">
<div class="plugin_pagetree conf-macro output-inline" data-hasbody="false" data-macro-name="pagetree">
<div class="plugin_pagetree_children_list plugin_pagetree_children_list_noleftspace">
<div class="plugin_pagetree_children" id="children1326817570-0">
<ul class="plugin_pagetree_children_list" id="child_ul1326817570-0">
<li>
<div class="plugin_pagetree_childtoggle_container">
<a aria-expanded="false" aria-label="Expand item Topic 1" class="plugin_pagetree_childtoggle aui-icon aui-icon-small aui-iconfont-chevron-right" data-page-id="1630374642" data-tree-id="0" data-type="toggle" href="" id="plusminus1630374642-0"></a>
</div>
<div class="plugin_pagetree_children_content">
<span class="plugin_pagetree_children_span" id="childrenspan1630374642-0"> <a href="#">Topic 1</a></span>
</div>
<div class="plugin_pagetree_children_container" id="children1630374642-0"></div>
</li>
<li>
<div class="plugin_pagetree_childtoggle_container">
<a aria-expanded="false" aria-label="Expand item Topic 2" class="plugin_pagetree_childtoggle aui-icon aui-icon-small aui-iconfont-chevron-right" data-page-id="1565544568" data-tree-id="0" data-type="toggle" href="" id="plusminus1565544568-0"></a>
</div>
<div class="plugin_pagetree_children_content">
<span class="plugin_pagetree_children_span" id="childrenspan1565544568-0"> <a href="#">Topic 2</a></span>
</div>
<div class="plugin_pagetree_children_container" id="children1565544568-0"></div>
</li>
<li>
<div class="plugin_pagetree_childtoggle_container">
<a aria-expanded="true" aria-label="Expand item Topic 3" class="plugin_pagetree_childtoggle aui-icon aui-icon-small aui-iconfont-chevron-down" data-children-loaded="true" data-expanded="true" data-page-id="3733362288" data-tree-id="0" data-type="toggle" href="" id="plusminus3733362288-0"></a>
</div>
<div class="plugin_pagetree_children_content">
<span class="plugin_pagetree_children_span" id="childrenspan3733362288-0"> <a href="#">Topic 3</a></span>
</div>
<div class="plugin_pagetree_children_container" id="children3733362288-0">
<ul class="plugin_pagetree_children_list" id="child_ul3733362288-0">
<li>
<div class="plugin_pagetree_childtoggle_container">
<span class="no-children icon"></span>
</div>
<div class="plugin_pagetree_children_content">
<span class="plugin_pagetree_children_span"> <a href="#">Subtopic 1</a></span>
</div>
<div class="plugin_pagetree_children_container"></div>
</li>
<li>
<div class="plugin_pagetree_childtoggle_container">
<span class="no-children icon"></span>
</div>
<div class="plugin_pagetree_children_content">
<span class="plugin_pagetree_children_span"> <a href="#">Subtopic 2</a></span>
</div>
<div class="plugin_pagetree_children_container"></div>
</li>
</ul>
</div>
</li>
<li>
<div class="plugin_pagetree_childtoggle_container">
<a aria-expanded="false" aria-label="Expand item Topic 4" class="plugin_pagetree_childtoggle aui-icon aui-icon-small aui-iconfont-chevron-right" data-page-id="2238798992" data-tree-id="0" data-type="toggle" href="" id="plusminus2238798992-0"></a>
</div>
<div class="plugin_pagetree_children_content">
<span class="plugin_pagetree_children_span" id="childrenspan2238798992-0"> <a href="#">Topic 4</a></span>
</div>
<div class="plugin_pagetree_children_container" id="children2238798992-0"></div>
</li>
</ul>
</div>
</div>
<fieldset class="hidden">
</fieldset>
</div>
</div>
</body>
</html>
```
I need to extract the innermost nested links from this sort of a page tree: given the title, I need to get all the links at the deepest nesting level. I want to write a Python script that dynamically extracts the innermost nested links of the various HTML pages provided. Note that the nesting levels may not be the same.
Thus, for the above example I should get:
```
<a href="#">Subtopic 1</a>
<a href="#">Subtopic 2</a>
```
I tried extracting all the links in the same nesting structure but it didn't work |
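As a starting point, here is a stdlib-only sketch (it assumes the markup is well-formed XML, as the sample above is; for real pages, BeautifulSoup with a lenient parser would be the safer choice). The idea: a link is "innermost" when its enclosing `<li>` contains no nested `<ul>`:

```python
import xml.etree.ElementTree as ET

def innermost_links(markup):
    """Return the text of links inside leaf <li> items (those with no nested <ul>)."""
    root = ET.fromstring(markup)
    links = []
    for li in root.iter('li'):
        if li.find('.//ul') is None:           # leaf item: no deeper list inside
            for a in li.iter('a'):
                if a.text and a.text.strip():  # skip empty toggle anchors
                    links.append(a.text.strip())
    return links

sample = """<ul>
  <li><span><a href="#">Topic 3</a></span>
    <ul>
      <li><a href="#">Subtopic 1</a></li>
      <li><a href="#">Subtopic 2</a></li>
    </ul>
  </li>
</ul>"""
print(innermost_links(sample))  # ['Subtopic 1', 'Subtopic 2']
```

Because it tests each `<li>` for nested lists rather than counting levels, this works regardless of how deep the nesting goes on a given page.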
extracting innermost nested links |
|python|html| |
The background is being applied to the table cell, you just can't see it. If you change the width and height of the cell, it's evident. What are you trying to achieve exactly?
```html
<table class="center-on-narrow" style="display: table !important;" role="presentation" border="0" cellspacing="0" cellpadding="0" align="center">
<tbody>
<tr>
<td class="button-td" style="display:block;width:700px; height: 700px; background-image: url('https://images.unsplash.com/photo-1598257006626-48b0c252070d?ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&ixlib=rb-1.2.1&auto=format&fit=crop&w=1950&q=80');"><a class="button-a sans text-white" href="*|ARCHIVE3|*"> <span class="button-link"> READ MORE </span> </a></td>
</tr>
</tbody>
</table>
```
Alternatively, if you want to keep the size of the `<td>`, you can resize the background using `background-size: cover;`
|
I am developing an Android application (quiz feature) where I store progress data after a user completes a quiz. However, I am encountering several problems with the saveProgress function responsible for updating various metrics.
Problem description:
Incorrect answers issue:
The total incorrect answers (totalWrongAnswers) are consistently reported as 0 and not updating correctly, even when non-zero values are provided for incorrect answers. This discrepancy leads to inaccurate progress tracking.
Optimal update implementation:
Despite several revisions, I am unable to identify the root cause of incorrect answers not being counted within the total questions answered. I aim to implement a solution that ensures the totals of correct answers, incorrect answers, and questions answered are maintained accurately.
```
package com.mtcdb.stem.mathtrix.quiz
import android.content.*
import android.util.*
import java.io.*
object QuizProgressStorage {
private const val PROGRESS_FILE_NAME = "quiz_progress.txt"
fun saveProgress(
context : Context,
selectedDifficulty : String,
totalQuestionsAnswered : Int,
totalCorrectAnswers : Int,
totalWrongAnswers : Int,
) {
try {
val file = File(context.getExternalFilesDir(null), PROGRESS_FILE_NAME)
val progressDataMap = loadProgress(context).toMutableMap()
val progressData = progressDataMap[selectedDifficulty] ?: listOf(0, 0, 0)
val totalQuestions = progressData[0] + totalQuestionsAnswered
val totalCorrect = progressData[1] + totalCorrectAnswers
val totalIncorrect = progressData[2] + totalWrongAnswers
progressDataMap[selectedDifficulty] =
listOf(totalQuestions, totalCorrect, totalIncorrect)
file.bufferedWriter().use { out ->
progressDataMap.forEach { (key, value) ->
out.write("$key:${value.joinToString(",")}\n")
}
}
} catch (e : Exception) {
e.printStackTrace()
Log.e("QuizProgressStorage", "Failed to save progress data.")
}
}
fun loadProgress(context : Context) : MutableMap<String, List<Int>> {
val file = File(context.getExternalFilesDir(null), PROGRESS_FILE_NAME)
val progressDataMap = mutableMapOf<String, List<Int>>()
if (file.exists()) {
file.bufferedReader().useLines { lines ->
lines.forEach { line ->
val dataParts = line.split(":")
val difficulty = dataParts[0]
val values = dataParts[1].split(",")
.map { it.trim().toInt() } // Trim whitespace before parsing
progressDataMap[difficulty] = values
}
}
}
return progressDataMap
}
fun loadAverageScores(context : Context) : Map<String, Double> {
val progressDataMap = loadProgress(context)
val averageScores = mutableMapOf<String, Double>()
for ((difficulty, data) in progressDataMap) {
val totalCorrectAnswers = data.getOrNull(1) ?: 0
val totalQuestionsAnswered = data.getOrNull(0) ?: 1
val averageScore =
if (totalQuestionsAnswered != 0) totalCorrectAnswers.toDouble() / totalQuestionsAnswered else 0.0
averageScores[difficulty] = averageScore
}
return averageScores
}
// Calculate the total values across all difficulties
fun calculateTotalValues(progressDataMap : Map<String, List<Int>>) : List<Int> {
var totalQuestions = 0
var totalCorrectAnswers = 0
var totalWrongAnswers = 0
progressDataMap.values.forEach { data ->
totalQuestions += data.getOrNull(0) ?: 0
totalCorrectAnswers += data.getOrNull(1) ?: 0
totalWrongAnswers += data.getOrNull(2) ?: 0
}
return listOf(totalQuestions, totalCorrectAnswers, totalWrongAnswers)
}
}
```
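For what it's worth, the read-modify-write arithmetic in `saveProgress` looks correct in isolation; the following Python sketch of the same accumulation pattern (an analogy, not the app's code) keeps all three counters as intended, which suggests the zeros originate in the value the caller passes for `totalWrongAnswers` rather than in the storage logic:

```python
def accumulate(progress, difficulty, answered, correct, wrong):
    """Add this round's counts to the per-difficulty running totals."""
    total_q, total_c, total_w = progress.get(difficulty, [0, 0, 0])
    progress[difficulty] = [total_q + answered, total_c + correct, total_w + wrong]
    return progress

progress = {}
accumulate(progress, "easy", 10, 7, 3)
accumulate(progress, "easy", 10, 6, 4)
print(progress)  # {'easy': [20, 13, 7]}
```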
The data will come from here:
```
package com.mtcdb.stem.mathtrix.quiz
import android.annotation.*
import android.database.sqlite.*
import android.graphics.*
import android.os.*
import android.view.*
import android.widget.*
import androidx.fragment.app.*
import androidx.lifecycle.*
import com.akexorcist.roundcornerprogressbar.*
import com.google.android.material.radiobutton.*
import com.mtcdb.stem.mathtrix.R
import java.util.*
@Suppress("NAME_SHADOWING")
class QuizFragment : Fragment() {
private lateinit var questionTextView : TextView
private lateinit var optionsRadioGroup : RadioGroup
private lateinit var dbHelper : QuizDbHelper
private lateinit var tVQuestions : TextView
private lateinit var database : SQLiteDatabase
private var questions : MutableList<QuizEntity> = mutableListOf()
private var currentQuestionIndex = 0
private lateinit var timerTextView : TextView
private lateinit var timerProgressBar : RoundCornerProgressBar
private var selectedOptionIndex = -1
private var countDownTimer : CountDownTimer? = null
private var timeLeftMillis : Long = 300000 // 5 minutes in milliseconds
private val totalTimeMillis : Long = 300000 // 5 minutes in milliseconds
private var totalQuestionsPerGame = 10
private var questionsAnswered = 0
private var quizStartTimeMillis : Long = 0
private lateinit var sharedViewModel : SharedViewModel
private var totalQuestionsAnswered = 0
private var totalCorrectAnswers = 0
private var totalWrongAnswers = 0
@SuppressLint("MissingInflatedId")
override fun onCreateView(
inflater : LayoutInflater, container : ViewGroup?,
savedInstanceState : Bundle?,
) : View? {
val view = inflater.inflate(R.layout.fragment_quiz, container, false)
// Initialize sharedViewModel
sharedViewModel = ViewModelProvider(requireActivity())[SharedViewModel::class.java]
// Initialize totalQuestionsAnswered and totalCorrectAnswers in the SharedViewModel
sharedViewModel.totalQuestionsAnswered.value = 0
sharedViewModel.totalCorrectAnswers.value = 0
timerTextView = view.findViewById(R.id.timerTextView)
questionTextView = view.findViewById(R.id.questionTextView)
optionsRadioGroup = view.findViewById(R.id.optionsRadioGroup)
timerProgressBar = view.findViewById(R.id.timerProgress)
tVQuestions = view.findViewById(R.id.tv_questions)
// Display the first question
fetchQuestions()
displayQuestion()
dbHelper = QuizDbHelper(requireContext())
database = dbHelper.writableDatabase
val totalQuestions = questions.size
val currentQuestionNumber = currentQuestionIndex + 1
quizStartTimeMillis = System.currentTimeMillis()
view?.findViewById<TextView>(R.id.tv_progress)?.text = buildString {
append("Question ")
append(currentQuestionNumber)
}
tVQuestions.text = buildString {
append("out of ")
append(totalQuestions)
}
startTimer()
return view
}
private fun displayQuestion() {
// Check if questions are not empty before accessing their elements
if (questions.isNotEmpty() && currentQuestionIndex < questions.size) {
val currentQuestion = questions[currentQuestionIndex]
questionTextView.text = currentQuestion.question
optionsRadioGroup.removeAllViews()
for ((index, option) in currentQuestion.options.withIndex()) {
val radioButton = context?.let { MaterialRadioButton(it) }
radioButton?.text = option
radioButton?.id = index
radioButton?.setBackgroundResource(R.drawable.option_border_bg)
radioButton?.setTextColor(Color.parseColor("#FF000000"))
radioButton?.setPadding(32, 16, 32, 16)
val linearLayoutParams = LinearLayout.LayoutParams(
LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.WRAP_CONTENT
)
linearLayoutParams.setMargins(0, 0, 0, 32)
radioButton?.layoutParams = linearLayoutParams
optionsRadioGroup.addView(radioButton)
// Set a listener for each RadioButton
radioButton?.setOnCheckedChangeListener { _, isChecked ->
if (isChecked) {
// Highlight the selected option when the RadioButton is checked
selectedOptionIndex = index
} else {
radioButton.setBackgroundResource(R.drawable.option_border_bg)
}
}
radioButton?.setOnClickListener {
checkAnswer(index)
}
}
// Update the question number display
updateQuestionNumber()
} else {
// Handle the case where questions are empty or all questions have been displayed
}
}
// Function to update the question number display
private fun updateQuestionNumber() {
val currentQuestionNumber = currentQuestionIndex + 1
view?.findViewById<TextView>(R.id.tv_progress)?.text = buildString {
append("Question ")
append(currentQuestionNumber)
}
}
private fun fetchQuestions() {
dbHelper = QuizDbHelper(requireContext())
database = dbHelper.writableDatabase
// Determine the difficulty level
val difficultyLevel = when (this.arguments?.getString("difficultyLevel")) {
"Easy" -> "Easy"
"Medium" -> "Medium"
"Hard" -> "Hard"
else -> "Easy"
}// Default to Easy if difficulty level is not recognized
totalQuestionsPerGame = when (difficultyLevel) {
"Easy" -> 10
"Medium" -> 15
"Hard" -> 20
else -> 10 // Default to 10 if difficulty level is not recognized
}
// Fetch questions based on the selected difficulty level
val cursor = database.rawQuery(
"SELECT * FROM ${QuizDbHelper.TABLE_QUIZ} WHERE ${QuizDbHelper.COLUMN_DIFFICULTY_LEVEL} = ? LIMIT ?",
arrayOf(difficultyLevel, totalQuestionsPerGame.toString())
)
if (cursor.moveToFirst()) {
// If there are questions in the cursor, convert them to a list of QuizEntity
questions = mutableListOf()
do {
val idColumnIndex = cursor.getColumnIndex(QuizDbHelper.COLUMN_ID)
val questionColumnIndex = cursor.getColumnIndex(QuizDbHelper.COLUMN_QUESTION)
val optionsColumnIndex = cursor.getColumnIndex(QuizDbHelper.COLUMN_OPTIONS)
val correctAnswerColumnIndex =
cursor.getColumnIndex(QuizDbHelper.COLUMN_CORRECT_ANSWER_INDEX)
val difficultyLevelColumnIndex =
cursor.getColumnIndex(QuizDbHelper.COLUMN_DIFFICULTY_LEVEL)
if (idColumnIndex >= 0 && questionColumnIndex >= 0 && optionsColumnIndex >= 0 && correctAnswerColumnIndex >= 0 && difficultyLevelColumnIndex >= 0) {
val id = cursor.getLong(idColumnIndex)
val question = cursor.getString(questionColumnIndex)
val options = cursor.getString(optionsColumnIndex).split(",")
val correctAnswerIndex = cursor.getInt(correctAnswerColumnIndex)
val difficultyLevel = cursor.getString(difficultyLevelColumnIndex)
val quizEntity = QuizEntity(
id, question, options, correctAnswerIndex, difficultyLevel
)
questions.add(quizEntity)
} else {
// Handle the case where one or more columns are not found
}
} while (cursor.moveToNext())
questions.shuffle()
// Display the first question
displayQuestion()
updateQuestionNumber()
} else {
// Handle the case where there are no questions in the cursor
}
updateQuestionNumber()
cursor.close()
}
private fun checkAnswer(selectedOptionIndex : Int) {
if (selectedOptionIndex != -1) {
if (questions.isNotEmpty() && currentQuestionIndex < questions.size) {
questions[currentQuestionIndex].selectedAnswerIndex = selectedOptionIndex
val correctAnswerIndex = questions[currentQuestionIndex].correctAnswerIndex
if (selectedOptionIndex == correctAnswerIndex) {
// Mark option as correct
markOptionAsCorrect(selectedOptionIndex)
// Increment total correct answers
totalCorrectAnswers++
} else {
// Mark option as incorrect
val correctOptionIndex = questions[currentQuestionIndex].correctAnswerIndex
markOptionAsIncorrect(selectedOptionIndex, correctOptionIndex)
// Increment total wrong answers
totalWrongAnswers++
}
// Increment the total questions answered only once per question
totalQuestionsAnswered++
Handler(Looper.getMainLooper()).postDelayed({
currentQuestionIndex++
optionsRadioGroup.clearCheck()
if (currentQuestionIndex < questions.size) {
// Clear and display the next question
clearOptionBackgrounds()
displayQuestion()
} else {
// End of the quiz
val score = calculateScore()
navigateToQuizResult(score)
}
}, 1000) // Delay for 1 second before moving to the next question
}
} else {
Toast.makeText(context, "Please select an option.", Toast.LENGTH_SHORT).show()
}
}
private fun clearOptionBackgrounds() {
for (i in 0 until optionsRadioGroup.childCount) {
optionsRadioGroup.getChildAt(i)?.background = null
}
}
private fun markOptionAsCorrect(optionIndex : Int) {
optionsRadioGroup.getChildAt(optionIndex)?.setBackgroundResource(R.drawable.option_correct)
}
private fun markOptionAsIncorrect(selectedIndex : Int, correctIndex : Int) {
val correctColor = R.drawable.option_correct
val incorrectColor = R.drawable.option_wrong
optionsRadioGroup.getChildAt(selectedIndex)?.setBackgroundResource(incorrectColor)
optionsRadioGroup.getChildAt(correctIndex)?.setBackgroundResource(correctColor)
}
private fun calculateScore() : Int {
var score = 0
for (question in questions) {
if (question.isAnswerCorrect) {
score++
}
}
return score
}
private fun moveNextOrEndQuiz() {
clearOptionBackgrounds()
optionsRadioGroup.clearCheck()
if (questionsAnswered < totalQuestionsPerGame) {
// Move to the next question
currentQuestionIndex++
questionsAnswered++
displayQuestion()
// Save progress data
val selectedDifficulty = this.arguments?.getString("difficultyLevel") ?: "Easy"
saveProgressData(
selectedDifficulty,
totalQuestionsPerGame,
totalCorrectAnswers,
totalWrongAnswers
)
} else {
// End of the quiz
val score = calculateScore()
navigateToQuizResult(score)
}
}
private fun startTimer() {
// Cancel any existing timer to avoid overlapping
countDownTimer?.cancel()
// Create a new CountDownTimer
countDownTimer = object : CountDownTimer(timeLeftMillis, 1000) {
override fun onTick(millisUntilFinished : Long) {
// Update the timer text
timerTextView.text = buildString {
append("Time left: ")
append(formatTime(millisUntilFinished))
}
// Update the timer ProgressBar
timerProgressBar.setProgress(((totalTimeMillis - millisUntilFinished) / 1000).toInt())
}
override fun onFinish() {
// Time's up, move to the next question or end the quiz
moveNextOrEndQuiz()
}
}
// Start the timer
countDownTimer?.start()
}
private fun formatTime(millis : Long) : String {
val minutes = millis / 60000
val seconds = (millis % 60000) / 1000
return String.format(Locale.getDefault(), "%02d:%02d", minutes, seconds)
}
private fun navigateToQuizResult(score : Int) {
val resultFragment = QuizResultFragment()
val difficulty = this.arguments?.getString("difficultyLevel")
// Calculate time taken
val quizEndTimeMillis = System.currentTimeMillis()
val timeTakenMillis = quizEndTimeMillis - quizStartTimeMillis
// Convert time taken to a formatted string (e.g., "03:30" for 3 minutes and 30 seconds)
val formattedTimeTaken = formatTime(timeTakenMillis)
// Pass the questions, difficulty, time taken, and score as arguments to the fragment
val bundle = Bundle()
bundle.putParcelableArrayList("quizQuestions", ArrayList(questions))
bundle.putInt("quizScore", score)
bundle.putString("difficulty", difficulty)
bundle.putString("timeTaken", formattedTimeTaken)
resultFragment.arguments = bundle
// Perform the fragment transaction
requireActivity().supportFragmentManager.beginTransaction()
.replace(R.id.quiz_container, resultFragment).commit()
}
private fun saveProgressData(
selectedDifficulty : String,
totalQuestionsPerGame : Int,
totalCorrectAnswers : Int,
totalWrongAnswers : Int,
//averageScoresMap: Map<String, Double>
) {
val totalQuestions = totalQuestionsPerGame
// Save the progress data including the updated total questions answered, total correct answers,
// total wrong answers, and average scores for the selected difficulty
QuizProgressStorage.saveProgress(
requireContext(),
selectedDifficulty,
totalQuestions,
totalCorrectAnswers,
totalWrongAnswers
)
}
}
```
and will be displayed here:
```
package com.mtcdb.stem.mathtrix.quiz
import android.os.*
import android.view.*
import android.widget.*
import androidx.fragment.app.*
import com.mtcdb.stem.mathtrix.*
import com.mtcdb.stem.mathtrix.quiz.QuizProgressStorage.calculateTotalValues
import com.mtcdb.stem.mathtrix.quiz.QuizProgressStorage.loadProgress
class QuizProgressFragment : Fragment() {
private lateinit var tvTotalQuestions : TextView
private lateinit var tvTotalCorrectAnswers : TextView
private lateinit var tvTotalWrongAnswers : TextView
private lateinit var tvAverageScoreEasy : TextView
private lateinit var tvAverageScoreMedium : TextView
private lateinit var tvAverageScoreHard : TextView
override fun onCreateView(
inflater : LayoutInflater, container : ViewGroup?,
savedInstanceState : Bundle?,
) : View? {
return inflater.inflate(R.layout.fragment_quiz_progress, container, false)
}
override fun onViewCreated(view : View, savedInstanceState : Bundle?) {
super.onViewCreated(view, savedInstanceState)
// Initialize TextViews
tvTotalQuestions = view.findViewById(R.id.tvTotalQuestions)
tvTotalCorrectAnswers = view.findViewById(R.id.tvTotalCorrectAnswers)
tvTotalWrongAnswers = view.findViewById(R.id.tvTotalWrongAnswers)
tvAverageScoreEasy = view.findViewById(R.id.tvAverageScoreEasy)
tvAverageScoreMedium = view.findViewById(R.id.tvAverageScoreMedium)
tvAverageScoreHard = view.findViewById(R.id.tvAverageScoreHard)
val averageScores = QuizProgressStorage.loadAverageScores(requireContext())
// Update the UI with the loaded progress data
val progressDataMap = loadProgress(requireContext())
val totalValues = calculateTotalValues(progressDataMap)
tvTotalQuestions.text = totalValues[0].toString()
tvTotalCorrectAnswers.text = totalValues[1].toString()
tvTotalWrongAnswers.text = totalValues[2].toString()
// Update average scores for each difficulty level
averageScores.let { scores ->
tvAverageScoreEasy.text = scores["Easy"]?.toString() ?: "N/A"
tvAverageScoreMedium.text = scores["Medium"]?.toString() ?: "N/A"
tvAverageScoreHard.text = scores["Hard"]?.toString() ?: "N/A"
}
}
override fun onDestroy() {
(activity as? MainActivity)?.supportActionBar?.title = getString(R.string.app_name)
super.onDestroy()
}
}
```
Can you help me identify any potential issues that might be causing the incorrect answers not to update correctly in the saveProgress function?
What modifications or adjustments can be made to ensure accurate tracking of total correct, incorrect, and unanswered questions in the application's progress data storage?
**What I've Tried:**
- Checked the logic in the saveProgress function multiple times to verify the calculations for updating total questions answered, total correct answers, and total incorrect answers.
- Reviewed the data loading process and data structures to ensure correctness in retrieving and updating progress data.
- Tested different scenarios with sample data to track how the incorrect answers count is being handled in the progress data storage.
**Expected Behavior:**
The saveProgress function is expected to increment the total number of questions answered, total correct answers, and total incorrect answers accurately based on the user's quiz responses. The total incorrect answers should reflect the actual count of incorrectly answered questions and shouldn't remain at 0 when incorrect answers are provided. |
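For reference, here is how I would expect the tally to behave, modeled as a small pure function (a sketch only; `updateProgress` is a hypothetical name, not part of my code):
```
fun updateProgress(old: List<Int>, correct: Boolean): List<Int> {
    val answered = old.getOrElse(0) { 0 } + 1
    val right = old.getOrElse(1) { 0 } + if (correct) 1 else 0
    val wrong = old.getOrElse(2) { 0 } + if (correct) 0 else 1
    return listOf(answered, right, wrong)
}

fun main() {
    var p = listOf(0, 0, 0)
    p = updateProgress(p, true)
    p = updateProgress(p, false)
    check(p == listOf(2, 1, 1)) // 2 answered, 1 correct, 1 wrong
}
```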
Issue with updating progress data accurately in a quiz (Android Kotlin) |
|android|kotlin|mobile| |
null |
My issue with this message was literally a permission gap.
To solve the problem you can visit the `service-usage-access-control-reference` [page][1] (there you can find the `serviceusage.services.use` permission that you need).
Based on the permissions listed there, you can grant, for example, the `roles/serviceusage.serviceUsageAdmin` role to your service account.
In my case I did the following:
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:YOUR_SA" --role="roles/serviceusage.serviceUsageAdmin"
You can grant a different role based on your needs, for example `roles/serviceusage.serviceUsageConsumer`.
After running the command I was able to run the GitHub Action (`gcloud builds submit`) using the service account.
Service account creation ref: [service accounts create docs][2]
[1]: https://cloud.google.com/service-usage/docs/access-control
[2]: https://cloud.google.com/iam/docs/service-accounts-create?hl=pt-br#creating |
I am using jPasskit for the Apple™ PassKit web service. Please let me know how we can check whether passes already exist for a particular loyalty card.
The scenario is that we should detect whether a pass has already been added to Apple Wallet; if so, we should not allow the user to create the pass again. I need help with https://github.com/drallgood/jpasskit
|
|flutter|spring-boot|appwrite| |
Can Ghidra load a directory and automatically convert the binary files within it into assembly code? If not, is there a pre-defined script available, or do I need to write one myself? If Ghidra isn't suitable, do you recommend any other tools for this task besides objdump and IDA? |
Can Ghidra load a directory and translate the binary files within it into assembly code? |
|ida|ghidra|reversing| |
You were getting the exception because the `3.5.0` and `3.5.1` releases of WireMock had an issue with the POM, meaning that Maven wouldn’t fully download transitive dependencies. `3.5.2` fixes this.
I would try upgrading to `3.5.2` and running your original code again.
The release of `3.5.2` can be found here - https://github.com/wiremock/wiremock/releases/tag/3.5.2 |
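If you are pulling WireMock in through Maven, the upgrade is just a version bump (a sketch — adjust the `artifactId` if you use the standalone variant):
```
<dependency>
    <groupId>org.wiremock</groupId>
    <artifactId>wiremock</artifactId>
    <version>3.5.2</version>
    <scope>test</scope>
</dependency>
```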
When running typescript-language-server very basically like `typescript-language-server --stdio` I get the following `legend` back for the initialization response:
```json
{
"tokenTypes": [
"class",
"enum",
"interface",
"namespace",
"typeParameter",
"type",
"parameter",
"variable",
"enumMember",
"property",
"function",
"member"
],
"tokenModifiers": [
"declaration",
"static",
"async",
"readonly",
"defaultLibrary",
"local"
]
}
```
What is missing, for example, is the `keyword` token type. I looked at https://github.com/typescript-language-server/typescript-language-server/blob/master/docs/configuration.md but did not find a hint whether I need to activate additional token types explicitly.
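For context on how I understand the legend is consumed: a `textDocument/semanticTokens/full` response is a flat array of integers whose type index points into exactly this legend, so any type the server never emits simply never shows up. A minimal decoder sketch (names hypothetical, not from the server source):

```typescript
// The legend from the initialize response above.
const legend = {
  tokenTypes: [
    "class", "enum", "interface", "namespace", "typeParameter", "type",
    "parameter", "variable", "enumMember", "property", "function", "member",
  ],
  tokenModifiers: ["declaration", "static", "async", "readonly", "defaultLibrary", "local"],
};

// Semantic tokens arrive as flat groups of five integers:
// [deltaLine, deltaStartChar, length, tokenTypeIndex, tokenModifierBitset]
function decodeTokens(data: number[]) {
  const tokens: { line: number; char: number; length: number; type: string; modifiers: string[] }[] = [];
  let line = 0;
  let char = 0;
  for (let i = 0; i < data.length; i += 5) {
    line += data[i];
    char = data[i] === 0 ? char + data[i + 1] : data[i + 1];
    tokens.push({
      line,
      char,
      length: data[i + 2],
      type: legend.tokenTypes[data[i + 3]],
      modifiers: legend.tokenModifiers.filter((_, bit) => ((data[i + 4] >> bit) & 1) !== 0),
    });
  }
  return tokens;
}
```

My guess is that keywords are covered by the client's TextMate grammar in VS Code, which would explain the omission — but I'd like confirmation.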
Any idea why these and other token types are not reported? Is that left to VS Code?
typescript-language-server does not seem to provide keyword as a semantic token type |
|typescript|language-server-protocol|typescript-language-server| |
I am plotting some lines with Seaborn:
```python
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# dfs: dict[str, DataFrame]
fig, ax = plt.subplots()
for label, df in dfs.items():
sns.lineplot(
data=df,
x="Time step",
y="Loss",
errorbar="sd",
label=label,
ax=ax,
)
ax.set(xscale='log', yscale='log')
```
The result looks like [this](https://i.stack.imgur.com/FanHR.png).
Note the clipped negative values in the lower error band of the "effector_final_velocity" curve, since the standard deviation of the loss between runs is larger than its mean, in this case.
However, if `ax.set(xscale='log', yscale='log')` is called *before* the looped calls to `sns.lineplot`, the result looks like [this](https://i.stack.imgur.com/JVGG4.png).
I'm not sure where the unclipped values are arising.
Looking at the source of `seaborn.relational`: at the end of `lineplot`, the `plot` method of a `_LinePlotter` instance is called. It plots the error bands by passing the already-computed standard deviation bounds to `ax.fill_between`.
Inspecting the values of these bounds right before they are passed to `ax.fill_between`, the negative values (which would be clipped) are still present. Thus I had assumed that the "unclipping" behaviour must be something matplotlib is doing during the call to `ax.fill_between`, since `_LinePlotter.plot` appears to do no other relevant post-transformations of any data before it returns, and `lineplot` returns immediately.
However, consider a small example that calls `fill_between` where some of the lower bounds are negative:
```python
import numpy as np
fig, ax = plt.subplots(1, 1, figsize=(5, 5))
np.random.seed(5678)
ax.fill_between(
np.arange(10),
np.random.random((10,)) - 0.2,
np.random.random((10,)) + 0.75,
)
ax.set_yscale('log')
```
Then it makes no difference if `ax.set_yscale('log')` is called before `ax.fill_between`; in both cases the result is [this](https://i.stack.imgur.com/ctRUi.png).
I've spent some time searching for answers about this in the Seaborn and matplotlib documentation, and looked for answers on Stack Overflow and elsewhere, but I haven't found any information about what is going on here.
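For what it's worth, I can work around it by flooring the lower band at a small positive value before calling `fill_between`, so the order of the `set_yscale` call stops mattering (a sketch with made-up numbers):

```python
import numpy as np

mean = np.array([0.5, 0.2])
sd = np.array([0.25, 0.4])              # sd can exceed the mean, giving a negative lower bound
lower = np.clip(mean - sd, 1e-9, None)  # flooring keeps the band strictly positive on a log scale
upper = mean + sd
# ax.fill_between(x, lower, upper) then never receives non-positive values
```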
|
I am using TypeScript. I need to set up an integration test that saves a record to S3 and checks DynamoDB for that record. I also have to set up an S3 client. I am not sure how to take a request that comes into the API, save it to S3, and then verify that it made it to that location.
I haven't tried anything because I am stuck on this. |
S3 integration testing |
|typescript|api|amazon-s3|amazon-dynamodb|integration-testing| |
null |
There are two models, ProductModel and CategoryModel.
Purpose: when creating a product (ProductModel), a category should be assigned to it, and correspondingly the products array in the Category collection should be filled in. But the connection between them does not work. When I create a product (ProductModel) and output the response as JSON, it contains all fields except the category field; I need this field to be filled in too.
Product Model
```
import { Schema, model } from 'mongoose'
const ProductModel = new Schema({
productName: { type: String, required: true },
price: { type: Number, required: true },
preview: { type: String },
color: { type: String, required: true },
specs: {
images: [{ type: String }],
memory: { type: Number },
ram: { type: Number },
diagonal: { type: String },
},
category: {
name: String,
type: Schema.Types.ObjectId,
ref: 'Category',
},
})
export default new model('Product', ProductModel)
```
CategoryModel
```
import { Schema, model } from 'mongoose'
const CategoryModel = new Schema({
name: {
type: String,
required: true,
},
products: [
{
type: Schema.Types.ObjectId,
ref: 'Product',
},
],
})
export default new model('Category', CategoryModel)
```
The logic of the product creation route
```
async post(req, res, next) {
try {
// Get fields from client
let { productName, price, preview, color, specs, category } = req.body
// Looking for a category transferred from a client
const productCategory = await CategoryModel.find({ name: category })
console.log(productCategory)
// Creating product
const doc = new ProductModel({
productName,
price,
preview,
color,
specs,
category: productCategory._id,
})
// Save product
const product = await doc.save()
// Returning a response from the server to the client
return res.json(product)
} catch (error) {
console.log(error.message)
}
}
```
Here is what I send to the server and receive from it
```
Request:
{
"productName": "Air pods pro",
"price": 123,
"preview": "preview",
"color": "red",
"specs": {
"images": ["image1, image2, image3"],
"memory": 64,
"ram": 16,
"diagonal": "diagonal"
},
"category": "AirPods"
}
Response:
{
"productName": "Air pods pro",
"price": 123,
"preview": "preview",
"color": "red",
"specs": {
"images": [
"image1, image2, image3"
],
"memory": 64,
"ram": 16,
"diagonal": "diagonal"
},
"_id": "6609ad76341da85122e029d0",
"__v": 0
}
```
As you can see, there is no category field in the response, just like in the database, this field is missing in each product. And the Category collection, which has an array of products, is also not filled
I will attach screenshots below
[Mongo](https://i.stack.imgur.com/zEbuy.png)
[Mongo](https://i.stack.imgur.com/YeTUb.png)
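To illustrate the shapes involved (plain objects standing in for the Mongoose results; values hypothetical): `Model.find()` resolves to an array, while `Model.findOne()` resolves to a single document, which affects how `_id` can be read:

```javascript
// What `await CategoryModel.find({ name: "AirPods" })` resolves to: an ARRAY.
const findResult = [{ _id: "abc123", name: "AirPods" }];
console.log(findResult._id); // undefined — the array itself has no _id

// What `await CategoryModel.findOne({ name: "AirPods" })` resolves to: one doc (or null).
const findOneResult = { _id: "abc123", name: "AirPods" };
console.log(findOneResult._id); // "abc123"
```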
|
md2perpe's answer was amazingly helpful. I modified it slightly to work with optional Date properties:
```
type Jsonized<T> = T extends object ? {
    [K in keyof T]:
        T[K] extends Function ? never :
        T[K] extends Date ? string :
        T[K] extends (Date | undefined) ? string | undefined :
        T[K] extends number ? number :
        T[K] extends string ? string :
        Jsonized<T[K]>
} : T
```
|
The app directory does not support `getServerSideProps` or any of the other data fetching methods.
----
Inside the app directory you will achieve your goal using server components. Inside them you can directly fetch the required data in an asynchronous manner, similarly as you would in the data fetching methods used inside the pages directory.
```typescript
export default async function MyComponent() {
  const response = await fetch("https://example.com/api/hello");
  const data = await response.json();
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}
```
I recommend you take a look at the [documentation](https://nextjs.org/docs/getting-started/react-essentials) to learn more about the fundamentals of Next.js 13, the app directory and React 18. |
|android|android-studio|gradle|android-gradle-plugin| |
If you want a legend you have to map on aesthetics. In case you want to add a legend entry to the already present `color` and `shape` legend you could do by mapping e.g. the constant `"LMWL"` on `color` and `shape` in `geom_abline` which also requires to move `intercept` and `slope` inside `aes()`. Additionally you have to provide a color and a shape for LMWL in the scales.
Using some fake example data:
``` r
library(ggrepel)
#> Loading required package: ggplot2
library(ggplot2)
df_merge_iso <- data.frame(
d18O = 1:6,
d2H = 1:6,
category = LETTERS[1:6],
month = letters[1:6]
)
lmwl_slope_wgt <- .4
lmwl_intercept_wgt <- 2
p <- ggplot(df_merge_iso, aes(x = d18O, y = d2H, color = category, shape = category, label = month)) +
labs(
x = expression(paste(delta^{
18
}, "O (\u2030)")),
y = expression(paste(delta^{
2
}, "H (\u2030)"))
) +
geom_point(size = 1, stroke = 1) +
theme_minimal() +
theme(
axis.text = element_text(size = 12, color = "black"),
axis.title = element_text(size = 12, color = "black")
) +
scale_color_manual(
name = "",
values = c(
"A" = "#0015f0",
"B" = "#6AC4FF",
"C" = "#4B8CEB",
"D" = "#FF7E5F",
"E" = "#FFB300",
"F" = "#E75007",
LMWL = "black"
)
) +
scale_shape_manual(
name = "",
values = c(
"A" = 2,
"B" = 1,
"C" = 21,
"D" = 3,
"E" = 4,
"F" = 6,
LMWL = 1
)
) +
geom_text_repel(
vjust = -1, hjust = 0.5, size = 2, show.legend = FALSE,
box.padding = 0.2
)
p +
geom_abline(
aes(
color = "LMWL", shape = "LMWL",
slope = lmwl_slope_wgt, intercept = lmwl_intercept_wgt
),
linetype = "solid", size = 0.5
)
```
<!-- -->
If you want a separate legend you could fake one by mapping e.g. on the `linetype` aes:
``` r
p +
geom_abline(
aes(
linetype = "LMWL",
slope = lmwl_slope_wgt, intercept = lmwl_intercept_wgt
),
size = 0.5
) +
scale_linetype_manual(name = NULL, values = "solid")
```
<!-- --> |
How do I link two models in mongoose? |
|javascript|node.js|mongodb|express|mongoose| |
null |
Here's what worked for me using Qt 6 and CMake.
First of all, an `.rc` file is different than `.qrc`. This isn't explained *at all* in the Qt documentation for setting up an application icon ([link][1]), they just assume everyone knows that. Both formats are *text files*, they don't contain any pixel data. They're just text files with a different extension.
I'm assuming you already have an .ico file. This post isn't about creating one. Ideally, it should have multiple versions embedded inside of it at different resolutions: 64x64, 48x48, 40x40, 32x32, 24x24, 20x20, 16x16.
In my project folder I have it as "./res/img/favicon.ico" but the name of the .ico file doesn't matter, you can name it "appicon.ico" if you want, or whatever.
And a text file renamed as `favicon.rc` next to it, with the following contents:
IDI_ICON1 ICON "favicon.ico"
And as a side note, Visual Studio generated an .rc file (in another project) with the following comment above the app icon which is interesting:
// Icon with lowest ID value placed first to ensure application icon
// remains consistent on all systems.
IDI_ICON1 ICON "LearnOpenGL.ico"
So the "IDI_ICON1" name seems to serve an actual purpose.
You could, technically, rename that label anything you wanted, like "MyAppIcon" instead of "IDI_ICON1". Just like the OP did in the question at the very top.
However, then only the .exe icon will be set, not the window icon as well. Which I suppose can be set separately, using Qt Designer (or editing the .ui in Qt Creator) for the top-most widget "QMainWindow", since there's a "windowIcon" field there. Or programmatically with `.setWindowIcon()`. But that's besides the point. It doesn't make sense (to me) to have separate icons for the .exe and for the window. As long as it's set in CMake correctly, it will apply to both the .exe and the window.
Moving on.
In the `CMakeLists.txt` file, create an `app_icon_resource_windows` variable (with `set`) before adding it to `qt_add_executable(...)`:
if(${QT_VERSION_MAJOR} GREATER_EQUAL 6)
set(app_icon_resource_windows "${CMAKE_CURRENT_SOURCE_DIR}/res/img/favicon.rc")
qt_add_executable(YourCoolProjectNameHere
MANUAL_FINALIZATION
${PROJECT_SOURCES}
${app_icon_resource_windows}
)
endif()
AND THAT'S IT!
Rebuild your project (or Clear + Build, same thing) and enjoy!
PS: The `app_icon_resource_windows` CMake variable is in lowercase in my example (same as in the Qt documentation), while the OP had it in uppercase. It doesn't matter, as long as it's consistent in both places. Otherwise it compiles fine but with no icon and without even a WARNING!
PS #2: If you don't want the .rc file next to your .ico files or images or whatever, you can place it 2 folders above it, but then the contents of the file should point to it correctly:
IDI_ICON1 ICON "res/img/favicon.ico"
And then obviously the CMake variable should be set to not contain "/res/img" any more when pointing to the .rc file:
set(app_icon_resource_windows "${CMAKE_CURRENT_SOURCE_DIR}/favicon.rc")
The idea is to have CMake point at a .res file, which itself points at an .ico file. As long as you set the paths correctly, it will work out just fine.
[1]: https://doc.qt.io/qt-6/appicon.html |
null |
# create SignUpForms.py file
from django import forms
from django.contrib.auth.models import User
class SignupForm(forms.ModelForm):
class Meta:
model = User
fields = ['username', 'password', 'first_name',
'last_name', 'email', ]
widgets = {
'password': forms.PasswordInput()
}
|
Just change `map<String,dynamic>` to `List<dynamic>` and you are good to go. |
|sonarqube|cobol| |
null |
Ctrl + Alt + Up Arrow
Ctrl + Alt + Down Arrow
|
I would like to draw a semi-transparent overlay over most (but not all) of the screen.
[![Screenshot][1]][1]
_screenshot of the feature working in Xamarin_
The most obvious solution is to use an `AbsoluteLayout` + `GraphicsView`. However, this doesn't work because Xamarin/Maui don't allow `AbsoluteLayout` to draw things over the `NavigationPage` navigation bar.
---
The solution in Xamarin was to write a custom `NavigationPageRenderer` and override `DispatchDraw`
```
[assembly: ExportRenderer(typeof(NavigationPage), typeof(MyNavigationPageRenderer))]
public class MyNavigationPageRenderer : NavigationPageRenderer
{
...
protected override void DispatchDraw(Canvas screenCanvas)
{
base.DispatchDraw(screenCanvas);
// Custom drawing code here - screenCanvas can draw on the entire screen
}
}
```
In Maui, you have two choices:
1. **Use a compatibility renderer** via `.AddCompatibilityRenderer`. However, as far as I can tell, these are just completely broken. [Even a completely empty one crashes the app](https://github.com/dotnet/maui/issues/21116).
2. **Use a [Maui handler](https://learn.microsoft.com/en-us/dotnet/maui/user-interface/handlers/?view=net-maui-8.0)**. However, these seem to be _significantly_ less powerful than renderers. There is no way to override methods on native controls; there is no draw event to hook into on either the native or platform control; and there is no draw command in the `CommandMapper`. So I can't figure out any way to draw things.
So, **how can this feature be implemented in Maui?**
[1]: https://i.stack.imgur.com/XLx1x.png |
I am trying to install Algolia search in MedusaJS using the documentation at this link: [https://docs.medusajs.com/plugins/search/algolia](https://docs.medusajs.com/plugins/search/algolia)
I installed the plugin with the following command:
`npm install medusa-plugin-algolia`
I entered the API keys in `.env`.
Below is the config I am using (as given in the documentation):
```
resolve: `medusa-plugin-algolia`,
options: {
applicationId: process.env.ALGOLIA_APP_ID,
adminApiKey: process.env.ALGOLIA_ADMIN_API_KEY,
settings: {
products: {
indexSettings: {
searchableAttributes: ["title", "description"],
attributesToRetrieve: [
"id",
"title",
"description",
"handle",
"thumbnail",
"variants",
"variant_sku",
"options",
"collection_title",
"collection_handle",
"images",
],
},
},
},
},
},
];
```
But I am getting the below error in the terminal when running `npx medusa develop`:
`info: Processing SEARCH_INDEX_EVENT which has 1 subscribers
error: An error occurred while processing SEARCH_INDEX_EVENT: [object Object]`
Even on Algolia, I am not getting the records I should get.
The "products" index is getting created, but the records are not being uploaded to Algolia by the API.
[algolia no records image](https://i.stack.imgur.com/BMrcb.png)
Also, on Postman, while verifying the plugin I am not getting the desired result; it is empty.
[postman result](https://i.stack.imgur.com/4SVg4.png)
Kindly let me know what config should be used in the plugin so that the API is able to upload all the records to Algolia.
|
I have observed an issue while using the **Hyperband** algorithm in **Optuna**. According to the Hyperband algorithm, when **min_resources** = 5, **max_resources** = 20, and **reduction_factor** = 2, the search should start with an **initial space of 4** models for bracket **1**, with each model receiving **5** epochs in the first round. Subsequently, the number of models is reduced by a factor of **2** in each round and the number of epochs for the remaining models is doubled, while the initial search space is also reduced by a factor of **2** for each following bracket (i.e. bracket **2** should have an initial search space of **2** models). So **11** models are expected in total, but it is training a lot more models than that.

Link to the article: https://arxiv.org/pdf/1603.06560.pdf
```
import optuna
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
# Toy dataset generation
def generate_toy_dataset():
np.random.seed(0)
X_train = np.random.rand(100, 10)
y_train = np.random.randint(0, 2, size=(100,))
X_val = np.random.rand(20, 10)
y_val = np.random.randint(0, 2, size=(20,))
return X_train, y_train, X_val, y_val
X_train, y_train, X_val, y_val = generate_toy_dataset()
# Model building function
def build_model(trial):
model = Sequential()
model.add(Dense(units=trial.suggest_int('unit_input', 20, 30),
activation='selu',
input_shape=(X_train.shape[1],)))
num_layers = trial.suggest_int('num_layers', 2, 3)
for i in range(num_layers):
units = trial.suggest_int(f'num_layer_{i}', 20, 30)
activation = trial.suggest_categorical(f'activation_layer_{i}', ['relu', 'selu', 'tanh'])
model.add(Dense(units=units, activation=activation))
if trial.suggest_categorical(f'dropout_layer_{i}', [True, False]):
model.add(Dropout(rate=0.5))
model.add(Dense(1, activation='sigmoid'))
optimizer_name = trial.suggest_categorical('optimizer', ['adam', 'rmsprop'])
if optimizer_name == 'adam':
optimizer = tf.keras.optimizers.Adam()
else:
optimizer = tf.keras.optimizers.RMSprop()
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy', tf.keras.metrics.AUC(name='val_auc')])
return model
def objective(trial):
model = build_model(trial)
# Assuming you have your data prepared
# Modify the fit method to include AUC metric
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), verbose=1)
# Check if 'val_auc' is recorded
auc_key = None
for key in history.history.keys():
if key.startswith('val_auc'):
auc_key = key
print(f"auc_key is {auc_key}")
break
if auc_key is None:
raise ValueError("AUC metric not found in history. Make sure it's being recorded during training.")
# Report validation AUC for each model
if auc_key =="val_auc":
step=0
else:
step = int(auc_key.split('_')[-1])
auc_value=history.history[auc_key][0]
trial.report(auc_value, step=step)
print(f"prune or not:-{trial.should_prune()}")
if trial.should_prune():
raise optuna.TrialPruned()
return auc_value
# Optuna study creation
study = optuna.create_study(
direction='maximize',
pruner=optuna.pruners.HyperbandPruner(
min_resource=5,
max_resource=20,
reduction_factor=2
)
)
# Start optimization
study.optimize(objective)
```
|
You are attempting to link a program compiled with Windows `g++` (`x86_64-w64-mingw32-g++.exe`)
against an import library `C:/Python310/libs/python310.lib` that was built with Microsoft MSVC.
That will not work because `x86_64-w64-mingw32` libraries are incompatible with MSVC libraries.
Get the [`x86_64-w64-mingw32` python library for MSYS2](https://packages.msys2.org/package/mingw-w64-x86_64-python)
and build using its header files and libraries.
Consider installing and using [MSYS2](https://www.msys2.org)
as your working `gcc/g++` environment on Windows.
With that environment, a compile-and-link command line (using the Windows filesystem) ought to look like:
# Link with explicitly versioned import library
g++ -I C:\msys64\mingw64\include -I C:\msys64\mingw64\include\python3.11 main.cpp \
-o output -L C:\msys64\mingw64\lib -lpython3.11
or:
# Link with unversioned import library
g++ -I C:\msys64\mingw64\include -I C:\msys64\mingw64\include\python3.11 main.cpp \
-o output -L C:\msys64\mingw64\lib -lpython3
An import library in the msys64 environment is not called `name.lib` as on Windows, but `libname.dll.a`;
for example, `C:\msys64\mingw64\lib\libpython3.dll.a` is the import library for `C:\msys64\mingw64\bin\libpython3.dll`.
|r|lubridate|spotfire| |
I am trying to run this on macOS:
pip3.8 install turicreate
I have:
Python 3.8.6
and pip 24.0 (from Python 3.8)
Update: The initial output showed
installing to build/bdist.macosx-14-arm64/wheel
Reopened the terminal as I thought maybe it didn't open with Rosetta, now I get same error message but with
installing to build/bdist.macosx-10.9-x86_64/wheel
I am using this in the terminal opened with Rosetta.
I still get this error message:
SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer or other
standards-based tools.
See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
********************************************************************************
!!
self.initialize_options()
installing to build/bdist.macosx-10.9-x86_64/wheel
running install
==================================================================================
TURICREATE ERROR
If you see this message, pip install did not find an available binary package
for your system.
Supported Platforms:
* macOS 10.12+ x86_64.
* Linux x86_64 (including WSL on Windows 10).
Support Python Versions:
* 2.7
* 3.5
* 3.6
* 3.7
* 3.8
Another possible cause of this error is an outdated pip version. Try:
`pip install -U pip`
==================================================================================
I got this error even with my Python and pip versions both updated to turicreate's requirements.

I tried using Rosetta and checked my Anaconda version, which is x86_64.

I also tried `pip install wheel` and `pip install -U turicreate`.
If you're writing something that does "the same thing, with just one thing changing at each step", that's a loop. You don't use separate `if` statements. Not even when, as you say, "you're being lazy": being lazy is an _excellent_ trait to have when you're a programmer, because it means you want to do as little work as possible for the maximum result. Of course, in this case that means "why am I even doing this, [`npm install marked`](https://www.npmjs.com/package/marked), oh look I'm done", but even if you insist on implementing a markdown parser yourself (because sometimes you just want to write code to see if you can do it) you don't use a sequence of `if` statements because it takes more time to write, and will take more time to fix or update (as you discovered).
Lazy is excellent. Lazy saves you _so_ much time. Lazy programmers are efficient programmers. But _sloppy_ is your worst enemy.
However, even if you _do_ use `if` statements, resolve them either such that you handle "the largest thing first", to ensure there's no fall-through, _or_ with if-else statements, so there's no fall-through. (And based on your question about whether to use a switch: you almost never want switches, they're a hold-over from programming languages that didn't have dictionaries/key-value objects. In JS you're almost _always_ better off using a mapping object with your case values as object keys, turning an O(n) code path into an O(1) immediate lookup.)
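To make that last point concrete, here's a hypothetical sketch (the `handlers` object and `render` function are made up for illustration, not taken from your code):

```javascript
// Hypothetical dispatch table: the "case values" become object keys,
// so deciding which branch runs is a single O(1) property lookup.
const handlers = {
  heading: (text) => `<h1>${text}</h1>`,
  paragraph: (text) => `<p>${text}</p>`,
};

function render(type, text) {
  // fall back to an identity function instead of a `default:` case
  const handler = handlers[type] ?? ((t) => t);
  return handler(text);
}

console.log(render("heading", "Hello")); // <h1>Hello</h1>
```

Adding a new case is now just adding a key, rather than editing yet another branch into a chain.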
However, you don't need any of this, because what you're really doing is simple text matching, so you can use the best tool in the toolset for that: you can trivially get both the `#` sequence and "remaining text" with a regex, and then generate the replacement HTML [using the captured data](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace#replacement):
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
function markdownToHTML(doc) {
return convertMultiLineMD(doc.split(`\n`)).join(`\n`);
}
function convertMultiLineMD(lines) {
// convert tables, lists, etc, while also making sure
// to perform inline markup conversion for any content
// that doesn't span multiple lines. For the purpose of
// this answer, we're going to ignore multi-line entirely:
return convertInlineMD(lines);
}
function convertInlineMD(lines) {
return lines.map((line) => {
// convert headings
line = line.replace(
// two capture groups, one for the markup, and one for the heading,
// with a third optional group so we don't capture EOL whitespace.
/^(#+)\s+(.+?)(\s+)?$/,
// and we extract the first group's length immediately
(_, { length: h }, text) => `<h${h}>${text}</h${h}>`
);
// then wrap bare text in <p>, convert bold, italic, etc. etc.
return line;
});
}
// And a simple test based on what you indicated:
const docs = [`## he#llo\nthere\n# yooo `, `# he#llo\nthere\n## yooo`];
docs.forEach((doc, i) => console.log(`[doc ${i + 1}]\n`, markdownToHTML(doc)));
<!-- end snippet -->
However, this is also a naive approach to writing a transpiler, and will have dismal runtime performance compared to writing a DFA based on the markdown grammar (the "markup language specification" grammar, i.e. the rules that say which tokens can follow which other tokens), where you run through your document by tracking what kind of token we're dealing with, and convert on the fly as we pass token terminations.
(This is, in fact, how regular expressions work: they generate a DFA from the [regular grammar](https://en.wikipedia.org/wiki/Regular_grammar) pattern you specify, then run the input through that DFA, achieving near-perfect runtime performance)
Explaining how to get started with writing DFAs is of course wildly beyond the scope of this answer, but absolutely worth digging into if you're doing this "just to see if you can": anyone can write code "that works" but is extremely inefficient, so that's not an exercise that's going to improve your skill as a programmer. |
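(For a very rough taste of what that looks like, here's a hypothetical mini-scanner for just the heading rule, with made-up state names; a real markdown DFA would of course cover the full grammar:)

```javascript
// Hypothetical two-state scanner: consume a leading run of '#',
// then require a space for the line to count as a heading.
// Returns the heading level, or 0 if the line isn't a heading.
function headingLevel(line) {
  let state = "start"; // transitions: start -> hashes on '#'
  let level = 0;
  for (const ch of line) {
    if (ch === "#" && (state === "start" || state === "hashes")) {
      state = "hashes";
      level++;
      continue;
    }
    // first non-'#' character terminates the token:
    // it must be a space following at least one '#'
    return state === "hashes" && ch === " " ? level : 0;
  }
  return 0; // input ended before any heading text appeared
}

console.log(headingLevel("### Title")); // 3
```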
I installed a PostgreSQL 16 server on a Debian 11 host that I access through SSH, and everything works as expected when I use the PSQL CLI.

I then installed and configured PGAdmin4 Web, and configured NGINX as a reverse proxy so I can access the PGAdmin4 web interface from a browser.

The firewall is correctly configured.
The NGINX directives are as follow:
```
location /pgadmin4/ {
proxy_pass http://127.0.0.1:80/pgadmin4/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Script-Name /pgadmin4;
proxy_buffering off;
}
```
I can access the PGAdmin4 web interface through a URL:
https://<mydomain_name>/pgadmin4/
The problem is that I am unable to connect to the PostgreSQL server through the Server Dialog; I systematically get an error message:
Unable to connect to server: connection timeout expired.
In file /usr/pgadmin4/web/config.py, I replaced DEFAULT_SERVER = '127.0.0.1' by DEFAULT_SERVER = '0.0.0.0'.
In file /etc/postgresql/16/main/pg_hba.conf, I added the below line:
```
host all all 0.0.0.0/0 md5
```
In the PGAdmin4 Dialog box I configured the following options (let's assume that the host name is example.tld):
Connection tab:
- **Host name/address:** example.tld
- **Maintenance database:** postgres
- **Username:** postgres
SSH Tunnel tab:
- **Tunnel host**: example.tld
- **Tunnel port**: 22
- **Username**: pgadmin (regular Unix user)
- **Authentication**: Password
I also created a new database (bookstore) with that I can access directly under user 'pgadmin' from a shell with the below command, and modified the connection tab accordingly:
psql -U pgadmin -d bookstore -h 127.0.0.1 -p 5432 -W
I have read countless pieces of documentation online, but still no luck.
I am stuck and any help would be appreciated.
I tried the configuration with different databases and users in the Connection tab where I have no issue with PSQL.
When I try to use the port 5432, I immediately get an error message, see below:
[Cannot use port 5432 because behind a reverse proxy on port 80](https://i.stack.imgur.com/5mK9M.png)
In the SSH Tunnel tab, when I use a non-existent user (e.g. test) or a wrong password, I have a different error message:[Error when a wrong user or wrong password is used](https://i.stack.imgur.com/JfOui.png) |
In polars, how do you efficiently get the 2nd largest element, or nth for some small n compared to the size of the column? |
Polars: efficiently get the 2nd largest element |
|python-polars|rust-polars| |
I just want some project ideas that will help me get a better job / senior-profile role in web dev using the MERN stack.

I have built some front-end projects using React, Redux, and Tailwind CSS, like YouTube, Netflix, and food-ordering app clones. But I had to rely on APIs that were already built. Now I am learning backend and want to make some awesome projects with MERN.
What are some MERN projects that will grow me from junior dev to senior |
I am currently trying to create unit tests for a game, and in order to test movement, player input is required. I was planning to do this using the "Press" function (as shown in the documentation [here](https://docs.unity.cn/Packages/com.unity.inputsystem@1.3/api/UnityEngine.InputSystem.InputTestFixture.html#UnityEngine_InputSystem_InputTestFixture_Press_), and as shown in an official Unity tutorial [here](https://unity.com/how-to/automated-tests-unity-test-framework#character-movement-tests)). However, when I go to run the test, I am given the error "The name 'Press' does not exist in the current context." Does anyone know why 'Press' is not being found?
Snippet of the test file is below. Also, the `Unity.InputSystem` assembly was already added to the assembly definition, so that is not the issue.
```CSharp
using System.Collections;
using System.Collections.Generic;
using NUnit.Framework;
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.TestTools;
using UnityEngine.UI;
using UnityEngine.InputSystem;
public class MovementTest
{
Keyboard keyboard;
// A Test behaves as an ordinary method
[SetUp]
public void LevelUITestSetup()
{
SceneManager.LoadScene("TestPlayModeScene");
keyboard = InputSystem.AddDevice<Keyboard>();
Press(keyboard.rightArrowKey);
}
//a bunch of irrelevant tests here
}
```
If needed: the latest version of the Unity Test Framework and Unity version `2022.3.18f1` are used.
null |
I'm not deeply familiar with Tailwind configuration, but the problem I'm facing seems to be related to configuration, as most of what I've set up is working.
Here's the basic setup:
package.json
```json
"scripts": {
"start": "npm run build:tailwind-dev --watch && stencil build --dev --watch --serve",
"build:tailwind-dev": "postcss --postcss-config ./postcss.config.js src/global/app.css -o src/styles/tailwind-optimized.css",
"watch:tailwind-dev": "postcss --postcss-config ./postcss.config.js src/global/app.css -o src/styles/tailwind-optimized.css --watch",
},
"dependencies": {
"@stencil/core": "^4.7.0",
"autoprefixer": "^10.4.18",
"postcss": "^8.4.35",
"postcss-cli": "^11.0.0",
"tailwindcss": "^3.4.1"
},
"devDependencies": {
"@types/jest": "^29.5.6",
"@types/node": "^16.18.11",
},
```
tsconfig.json
```json
{
"compilerOptions": {
"allowSyntheticDefaultImports": true,
"allowUnreachableCode": false,
"declaration": false,
"experimentalDecorators": true,
"lib": [
"dom",
"es2017"
],
"moduleResolution": "node",
"module": "esnext",
"target": "es2017",
"noUnusedLocals": true,
"noUnusedParameters": true,
"jsx": "react",
"jsxFactory": "h"
},
"include": [
"src"
],
"exclude": [
"node_modules"
]
}
```
stencil.config.ts
```typescript
import { Config } from '@stencil/core';
export const config: Config = {
namespace: 'exploration-project-tailwind',
outputTargets: [
{
type: 'dist',
esmLoaderPath: '../loader',
},
{
type: 'dist-custom-elements',
},
{
type: 'docs-readme',
},
{
type: 'www',
serviceWorker: null, // disable service workers
},
],
};
```
postcss.config.js
```javascript
module.exports = {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
}
```
tailwind.config.js
```javascript
module.exports = {
content: ['./src/**/*.html', './src/**/*.tsx', './src/**/*.ts'],
theme: {
},
variants: {
extend: {
backgroundColor: ['active'],
backgroundOpacity: ['active'],
gradientColorStops: ['active'],
},
},
plugins: [
],
}
```
app.css
```css
@import 'tailwindcss/base';
@import 'tailwindcss/components';
@import 'tailwindcss/utilities';
```
src/styles/tailwind-optimized.css (partial output)
```css
--tw-scroll-snap-strictness: proximity;
--tw-gradient-from-position: ;
--tw-gradient-via-position: ;
--tw-gradient-to-position: ;
--tw-ordinal: ;
--tw-slashed-zero: ;
--tw-numeric-figure: ;
--tw-numeric-spacing: ;
--tw-numeric-fraction: ;
--tw-ring-inset: ;
--tw-ring-offset-width: 0px;
--tw-ring-offset-color: #fff;
--tw-ring-color: rgb(59 130 246 / 0.5);
--tw-ring-offset-shadow: 0 0 #0000;
--tw-ring-shadow: 0 0 #0000;
--tw-shadow: 0 0 #0000;
--tw-shadow-colored: 0 0 #0000;
```
stencil-component.tsx (partial example):
```
<div class="ccs-isp-transfer-flow__wrapper flex flex-col w-full items-center pt-10 bg-gradient-to-t from-violet-50 to-white">
```
stencil-component.css
```
@import '../../styles/tailwind-optimized.css';
:host {
display: block;
}
```
## Problem
The problem is that, when I run the tailwind build script `npm run build:tailwind-dev`, the output file doesn't include the gradient CSS variable values that were supposed to exist, even though the classes are being applied correctly in the stencil component.
```css
--tw-gradient-from-position: ;
--tw-gradient-via-position: ;
--tw-gradient-to-position: ;
```
All other classes that I use, get included in the output, so I guess the core functionality is working.
Is there anything I might be missing in the configuration?
Also, another issue: when I run the `npm start` script, it loads the dev server. However, each time I include a new class in the component, I have to restart the server for the changes to take effect. Is there a way to make the server reload the changes automatically?
|
I have two tables `keywords` and `posts` in my PieCloudDB Database.
Each topic can be expressed by one or more keywords. If a keyword of a certain topic exists in the content of a post (**case insensitive**) then the post has this topic.
For example:
| topic_id | keyword |
| -------- | ---------- |
| 1 | basketball |
| 2 | music |
| 3 | food |
| 4 | war |
| post_id | content |
| ------- | --------------------------------------------------------------- |
| 1 | A typhoon warning has been issued in southern Japan |
| 2 | We are going to play neither basketball nor volleyball |
| 3 | I am indulging in both the delightful music and delectable food |
| 4 | That basketball player fouled again |
Now I want to find the topics of each post according to the following rules:
- If the post does not have keywords from any topic, its topic should be "`Vague!`".
- If the post has at least one keyword of any topic, its topic should be a string of the IDs of its topics sorted in ascending order and separated by commas ','.
For the above example data, the results should be:
| post_id | topics |
| ------- | ------ |
| 1 | Vague! |
| 2 | 1 |
| 3 | 2,3 |
| 4 | 1 |
```
SELECT post_id, COALESCE(array_to_string(array_agg(DISTINCT topic_id ORDER BY topic_id), ','), 'Vague!') AS topic
FROM (
SELECT p.post_id, k.topic_id
FROM Posts p
LEFT JOIN Keywords k
ON LOWER(content) LIKE '% ' || keyword || ' %' OR content LIKE keyword || ' %' OR content LIKE '% ' || keyword
) a
GROUP BY post_id
ORDER BY post_id
```
I tried this query but the results I got were not exactly correct. I don't know why the output of post 1 is `null`:
| post_id | topics |
| ------- | ------ |
| 1 | |
| 2 | 1 |
| 3 | 2,3 |
| 4 | 1 |
Can anyone give me a correct answer?
(If you don’t know the database I use, you can use PostgreSQL instead.) |
null |