I have a project, and when I try to run the main or test directories I have no problems with the Gradle build.
But when I try to run an Android test I get this error:
Directory 'C:\Users\...' does not contain a Gradle build.
I wrote a Compose test that looks like this:
```kotlin
@RunWith(AndroidJUnit4::class)
class NavigationTest {

    @get:Rule
    val composeTestRule = createAndroidComposeRule<ComponentActivity>()

    @Test
    fun startApp_showRegistrationScreen() {
        composeTestRule.setContent {
            MyTheme {
                MyApp()
            }
        }
        val registerString = composeTestRule.activity.getString(R.string.register_for_first_time)
        composeTestRule.onNodeWithText(registerString).assertIsDisplayed()
    }
}
```
the error:
Directory 'C:\Users\alici\MyProject\Myproject' does not contain a Gradle build.
* Try:
> Run gradle init to create a new Gradle build in this directory.
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
I am confused. What do I need to do?
Here is the project structure:
[![Project structure][1]][1]
[1]: https://i.stack.imgur.com/jhhrO.png |
I'm working on a project in React Native Expo and using "expo-router" to navigate between pages.
I need to send a nested object array from one screen to another.
My object structure is something like this:
```
item: {
  id: '123',
  name: 'name',
  nestedObjectArray: [{id: 1, name: 'name'}, {id: 2, name: 'name'}, {id: 3, name: 'name'}],
}
```
I've used `router.navigate({pathname: 'path/to/screen', params: item})`, but in the other screen I'm receiving this:
```
item: {
  id: '123',
  name: 'name',
  nestedObjectArray: [object Object], [object Object], [object Object],
}
```
Any suggestions? |
My query
```
select
custname, case when date < '11/26/2023' then -1 else datepart(wk,date) end 'week#', sum(amount) sales,
count(salesid) orders from SalesTable inner join CustomerTable c
on salestable.CustID=c.CustID
where date < '1/27/2024' and c.CustID = 10285 or c.CustID = -2
group by c.custid,custname, [address],case when date < '11/26/2023' then -1 else datepart(wk,date) end,
case when date < '11/26/2023' then '11/25/2023' else DATEADD(dd,7-(DATEPART(dw,date)),date) end
order by 1,2
```
It returns each customer's sales summary (amount sum, week number, order count), one week per row, like:
| custname | week# | sales | orders |
|---------|----------|----------|-----------|
|CustAAA | -1 | 974697.41 | 62013 |
|CustAAA |1| 10.01 | 5 |
|CustAAA |2| 10 |2|
|CustAAA |2| 372.95| 11|
|CustAAA |3| 70.86| 13|
|CustAAA |3| 0| 3|
|CustAAA |4| 8.08| 2|
|CustAAA |5| 20 |6|
|CustAAA |48| 0 |38|
|CustAAA |49 |84.27| 2|
|CustXYZ |-1 |12.12| 1|
|CustXYZ |1 |22.59| 1|
|CustXYZ |4 |117.9| 1|
|CustXYZ |48 |19.3| 1|
[enter image description here](https://i.stack.imgur.com/3qC7j.png)
How do I PIVOT to one row per customer, with each 'week' number becoming a sales column and an orders column,
and so on for the next week number, like this example:
| custname | week -1 sales | week -1 orders | week 1 sales | week 1 orders | week 2 sales | week 2 orders | week 3 sales | week 3 orders | week 4 sales | week 4 orders | week 5 sales | week 5 orders | week 48 sales | week 48 orders | week 49 sales | week 49 orders |
|---------|----------|----------|-----------|---------|----------|----------|-----------|---------|----------|----------|-----------|---------|----------|----------|-----------|-----------|
|CustAAA | 974697.41 | 62013 | 10.01 | 5 | 382.95 | 13 | 70.86 | 16 | 8.08 | 2 | 20 | 6 | 0 | 38 | 84.27 | 2 |
|CustXYZ | 12.12 | 1 | 22.59 | 1 | | | | | 117.9 | 1 | | | 19.3 | 1 | | |
[enter image description here](https://i.stack.imgur.com/Z8sHj.png) |
I have data that looks similar to this:
```
df <- data.frame(
LastName = c("Gamgee", "Gamgee", "Gamgey", "Baggins", "Baggins", NA, "The Gray"),
FirstName = c("Sam", "Samwise", "Samuel", "Frodo the ring carrier", "Frodoh", "Smeagol", "Gandalf"),
DOB = c("2012-08-24", NA, "2012-08-24", NA, "01-31-2004", "05-15-2005", "11-02-1999")
)
df
> df
LastName FirstName DOB
1 Gamgee Sam 2012-08-24
2 Gamgee Samwise <NA>
3 Gamgey Samuel 2012-08-24
4 Baggins Frodo the ring carrier <NA>
5 Baggins Frodoh 01-31-2004
6 <NA> Smeagol 05-15-2005
7 The Gray Gandalf 11-02-1999
```
What I need to do is match names that seem to align. Maybe create clusters that group together names and DOBs that the code has identified as similar? All I have found online is probabilistic matching across dataframes, so I am unsure how to conduct person-matching (for de-duplication purposes) within rows of the same df with variably/imperfectly matched names.
Potential output could look like this (but is malleable, only showing as an example output):
```
> df
LastName FirstName DOB Cluster
1 Gamgee Sam 2012-08-24 1
2 Gamgee Samwise <NA> 1
3 Gamgey Samuel 2012-08-24 1
4 Baggins Frodo the ring carrier <NA> 2
5 Baggins Frodoh 01-31-2004 2
6 <NA> Smeagol 05-15-2005 3
7 The Gray Gandalf 11-02-1999 4
```
It needs to be generalizable so direct name references (i.e. `str_detect("Sam")`) will not be an adequate solution to this question. |
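The general approach (pairwise string similarity, then clustering) can be sketched language-agnostically. Below is an illustration using Python's stdlib `difflib` purely to show the idea; in R, packages such as `stringdist` (distances) plus `hclust` or `RecordLinkage` give the analogous machinery. The names and the 0.7 cutoff are arbitrary choices for illustration, not tuned values:

```python
from difflib import SequenceMatcher

# Hypothetical combined "LastName FirstName" strings based on the question's data.
names = ["Sam Gamgee", "Samwise Gamgee", "Samuel Gamgey",
         "Frodo Baggins", "Frodoh Baggins", "Smeagol", "Gandalf The Gray"]

def sim(a, b):
    # Ratcliff-Obershelp similarity in [0, 1], case-insensitive.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Greedy clustering: attach a record to the first cluster whose
# representative is similar enough, otherwise open a new cluster.
reps, labels = [], []
for n in names:
    for i, rep in enumerate(reps):
        if sim(n, rep) > 0.7:
            labels.append(i + 1)
            break
    else:
        reps.append(n)
        labels.append(len(reps))

print(labels)
```

The same greedy pass over `stringdist::stringsim()` scores reproduces the `Cluster` column shown above; a DOB comparison can be folded into `sim()` as a second signal.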
Is it possible to create subqueries in SQFLite? |
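Yes — sqflite hands SQL straight through to SQLite, so any subquery SQLite accepts can be passed to `db.rawQuery(...)`. As a sanity check, the same SQL can be demonstrated against SQLite itself; this sketch uses Python's stdlib `sqlite3` module only to show the SQL, and the table/column names are made up:

```python
import sqlite3

# In-memory database standing in for the sqflite-managed file.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0);
""")

# Scalar subquery in the WHERE clause: orders above the average amount.
rows = conn.execute("""
    SELECT customer_id, amount
    FROM orders
    WHERE amount > (SELECT AVG(amount) FROM orders)
""").fetchall()

print(rows)  # -> [(1, 20.0)]
```

In Dart the equivalent would be passing that same SELECT string to `db.rawQuery(...)`.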
|flutter|sqlite|dart| |
This is my code:
```
const execBuffer = require('exec-buffer');
const gifsicle = import('gifsicle');

execBuffer({
  input: encoder.out.getData(),
  bin: 'gifsicle',
  args: ['--output', execBuffer.output, '--lossy=100', '--optimize=03', execBuffer.input]
}).then(data => {
  console.log(data); // Works :-)
  let result = data; // Doesn't work :-(
});

return result;
```
How can I return the result variable from a promise in this case? |
Better to create extension functions in a file like `ParcelableExt.kt`:
```kotlin
inline fun <reified T : Parcelable> Intent.parcelable(key: String): T? = when {
SDK_INT >= TIRAMISU -> getParcelableExtra(key, T::class.java)
else -> @Suppress("DEPRECATION") getParcelableExtra(key) as? T
}
inline fun <reified T : Parcelable> Bundle.parcelable(key: String): T? = when {
SDK_INT >= TIRAMISU -> getParcelable(key, T::class.java)
else -> @Suppress("DEPRECATION") getParcelable(key) as? T
}
inline fun <reified T : Parcelable> Bundle.parcelableArrayList(key: String): ArrayList<T>? = when {
SDK_INT >= TIRAMISU -> getParcelableArrayList(key, T::class.java)
else -> @Suppress("DEPRECATION") getParcelableArrayList(key)
}
inline fun <reified T : Parcelable> Intent.parcelableArrayList(key: String): ArrayList<T>? = when {
SDK_INT >= TIRAMISU -> getParcelableArrayListExtra(key, T::class.java)
else -> @Suppress("DEPRECATION") getParcelableArrayListExtra(key)
}
```
For example, you can use this in your resultLauncher like this:
```kotlin
result.data!!.parcelable(RESULT_KEY)
``` |
When I add the option `--oem 0` (OCR Engine mode 0, Tesseract's legacy engine), the `--user-patterns` option is properly enforced! See this [PR comment][1].
Following is my example.
I slightly tweaked the image `in.png` from https://stackoverflow.com/questions/33429143/tesseract-user-pattern-is-not-applied to have ambiguity on the one before last character, which can now be read as a `5` or an `S` (or still a `9`).
[![BDVPD474SQ][2]][2]
I use tesseract version `5.3.3` and when I run the below (where `pattern` is a file with a single line that goes `\A\A\A\A\A\d\d\A\A`):
tesseract in.png stdout --user-patterns pattern
I get `BDVPD4745Q`, which outputs a `5` for the one before last character, despite my specifying `\A` in the user pattern.
Once I add `--oem 0` as such:
tesseract in.png stdout --user-patterns pattern --oem 0
I obtain the result `BDVPD474SQ`, with an `S`, as I expected it.
[1]: https://github.com/tesseract-ocr/tesseract/issues/960#issuecomment-412773829
[2]: https://i.stack.imgur.com/sa9Or.png |
I want to prevent Celery from acknowledging a task if it raised an exception.
Setting acks_late didn't make any difference:
```
@app.task(acks_late=True)
def send_mail(data):
raise Exception
```
I tried doing the following
```
class CeleryTask(celery.Task):
def on_failure(self, exc, task_id, args, kwargs, einfo : ExceptionInfo):
app.send_task(self.name, args=args, kwargs=kwargs)
def run(self, *args, **kwargs):
pass
@app.task(base=CeleryTask)
def send_mail(data):
raise Exception
```
but that will cause infinite recursion, or I'm missing something for it to be an optimal solution.
What is a better solution? |
felixolszewski@Felixs-MacBook-Pro-2 SMART TV % ares-install --device emulator "/Users/felixolszewski/Desktop/SMART TV/com.felixolszewski.app1_1.0.0_all.ipk"
ares-install ERR! [syscall failure]: connect ECONNREFUSED 127.0.0.1:6622
ares-install ERR! [Tips]: Connection refused. Please check the device IP address or the port number
felixolszewski@Felixs-MacBook-Pro-2 SMART TV % ares-setup-device --list
name deviceinfo connection profile
------------------ ------------------------ ---------- -------
emulator (default) developer@127.0.0.1:6622 ssh tv
felixolszewski@Felixs-MacBook-Pro-2 SMART TV %
Do I need to link my simulator manually? Or does the emulator device in the list refer to the simulator? If manually: how exactly? I would have no clue what to set as:
1. name
2. deviceinfo
3. ssh
4. tv
Why do I want to execute this command?
I want to create an app from a static Angular15 website. I also tried simply opening the unpackaged app build folder, as shown in the lg web os tutorial: https://webostv.developer.lge.com/develop/getting-started/build-your-first-web-app
But then I got this error when inspecting my app:
[![enter image description here][1]][1]
FAILED TO LOAD RESOURCE: ERR_FILE_NOT_FOUND, for those four filenames:
runtimemefa9a14c71368959.js:1
polyfillssc5a6162cac4767e2mjs:1
main.607e420b5eadbc63.js:1
styles.56a9653a72393d06scss:1
[1]: https://i.stack.imgur.com/2QDDI.png
Hence I tried deploying the packaged app, as this might resolve the error. But maybe I also have to be able to launch my unpackaged app? If so, then why am I getting the error? I think it could be related to:
https://forum.webostv.developer.lge.com/t/failed-to-load-module-script-error-on-webos-5-lg-tvs/2351
Can anyone help? I am not too familiar with specific JS/ES version things. I could provide my tsconfig.json if necessary. |
packaged ECONNREFUSED / unpackaged ERR_FILE_NOT_FOUND: failing to get app running on LG webOS simulator |
|typescript|tsconfig|webos|lg| |
I am trying to ground my vertex generative AI studio chatbot with a vertex data store. The field asks for a vertex data store path and specifies the path format
projects/(project name)/......
I have already created a data store in Vertex AI Search. How do I identify its path? The tutorial suggests that I click on "more details" on the data store page, but no such option is visible. |
How do I find my vertex data store path for grounding a chatbot in the studio? |
|google-cloud-platform|google-cloud-vertex-ai| |
When I move my mouse outside the boundary I drew in tkinter, it gives an error:
```
> Exception in Tkinter callback
Traceback (most recent call last):
File "c:\Users\My PC\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py", line 1962, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "c:\Windows\System32\awdkhjgeuyfsbiufhefse.py", line 4, in <lambda>
K=__import__('tkinter');m=K.Tk();R,C,z,p=50,50,10,0;l=[[0]*C for _ in range(R)];c=K.Canvas(m,w=C*z,he=R*z,bg='#0c0c0c');c.pack();[c.create_line(0,i*z,C*z,i*z,fill='#1a1a1a')for i in range(R)];[c.create_line(j*z,0,j*z,R*z,fill='#1a1a1a')for j in range(C)];c.bind("<B1-Motion>",lambda e:d(e.x//z,e.y//z));m.bind("<space>",t);m.mainloop()
^^^^^^^^^^^^^^^^
File "c:\Windows\System32\awdkhjgeuyfsbiufhefse.py", line 1, in d
def d(x,y):global l;l[y][x]=1;c.create_rectangle(x*z,y*z,(x+1)*z,(y+1)*z,fill='#fff',outline='')
~~~~^^^
IndexError: list assignment index out of range
```
the code is:
```
def d(x,y):global l;l[y][x]=1;c.create_rectangle(x*z,y*z,(x+1)*z,(y+1)*z,fill='#fff',outline='')
def t(_=None):global p;p^=1;['',u()][p]
def u():global l,K;l=[[1 if(l[y][x]==1 and sum(l[y+dy][x+dx]for dx,dy in((dx,dy)for dx in(-1,0,1)for dy in(-1,0,1)if(dx,dy)!=(0,0))if 0<=x+dx<C and 0<=y+dy<R)in(2,3))or(l[y][x]==0 and sum(l[y+dy][x+dx]for dx,dy in((dx,dy)for dx in(-1,0,1)for dy in(-1,0,1)if(dx,dy)!=(0,0))if 0<=x+dx<C and 0<=y+dy<R)==3)else 0 for x in range(C)]for y in range(R)];K=__import__('tkinter');c.delete("all");[c.create_line(0,i*z,C*z,i*z,fill='#1a1a1a')for i in range(R+1)];[c.create_line(j*z,0,j*z,R*z,fill='#1a1a1a')for j in range(C+1)];[[c.create_rectangle(x*z,y*z,(x+1)*z,(y+1)*z,fill='white',outline='')for x in range(C)if l[y][x]]for y in range(R)];m.after(100,u) if p else None
K=__import__('tkinter');m=K.Tk();R,C,z,p=50,50,10,0;l=[[0]*C for _ in range(R)];c=K.Canvas(m,w=C*z,he=R*z,bg='#0c0c0c');c.pack();[c.create_line(0,i*z,C*z,i*z,fill='#1a1a1a')for i in range(R)];[c.create_line(j*z,0,j*z,R*z,fill='#1a1a1a')for j in range(C)];c.bind("<B1-Motion>",lambda e:d(e.x//z,e.y//z));m.bind("<space>",t);m.mainloop()
```
How do I fix this error? I tried running the code. |
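The IndexError in the question above comes from drag events whose coordinates fall outside the 50x50 grid (the canvas can report `e.x//z >= C` or a negative index when the mouse leaves it). A minimal sketch of the fix, reusing the question's names `R`, `C`, `l`: guard the handler so out-of-range cells are ignored:

```python
R, C = 50, 50
l = [[0] * C for _ in range(R)]

def d(x, y):
    # Ignore drag events that land outside the grid instead of
    # indexing out of range.
    if 0 <= x < C and 0 <= y < R:
        l[y][x] = 1
        # c.create_rectangle(x*z, y*z, (x+1)*z, (y+1)*z, ...) goes here

d(10, 10)   # inside the grid: the cell is marked
d(60, -3)   # outside: silently ignored, no IndexError
```

The same guard dropped into the one-line `def d(...)` in the question (`l[y][x]=1 if 0<=x<C and 0<=y<R else None` style) removes the traceback without changing behavior inside the canvas.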
Is there a way to match individuals across rows in one dataframe? |
|r|rstudio| |
What you've written (`if(VAR_A OR VAR_B OR VAR_C)`) should work just fine.
CMake supports [`NOT`, `OR`, and `AND` for logical operators](https://cmake.org/cmake/help/latest/command/if.html#logic-operators) in `if()` command condition expressions, and you can enclose sub-expressions in `()` to control evaluation precedence/order.
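A minimal sketch of this (variable names invented for illustration):

```cmake
set(VAR_A FALSE)
set(VAR_B TRUE)
set(VAR_C FALSE)

# The branch runs if any operand is true; () controls precedence.
if(VAR_A OR (VAR_B AND NOT VAR_C))
  message(STATUS "condition held")
endif()
```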
Inside [condition expressions for the `if()` command](https://cmake.org/cmake/help/latest/command/if.html#basic-expressions), most _unquoted_ strings that are not operands and contain no variable references will be treated as if they were [variable references](https://cmake.org/cmake/help/latest/manual/cmake-language.7.html#variable-references) to a variable named by the string's content. That's why you don't _have_ to write `if(${VAR_A} OR ${VAR_B} OR ${VAR_C})` or `if("${VAR_A}" OR "${VAR_B}" OR "${VAR_C}")`. But I usually do that anyway because I think it conveys intent more clearly (opinion). It can also help prevent other tricky points of confusion, such as [numbers being allowed as variable names](/q/49605059/11107541).
If you want further readings, see my answer to https://stackoverflow.com/q/35847655/11107541. |
Try setting the domain from which the request is coming in the corsOptions field for origin.
```
const corsOptions = {
origin: 'https://example.com',
credentials: true
}
``` |
Celery/Redis: how to prevent task acknowledgement if it raised an exception |
|python|redis|celery| |
|linux|bash|shell| |
When using Lambda@Edge inside CloudFront, I am not able to install packages through `npm`, as CloudFront does not allow layers:
**According to AWS doc:**
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-edge-function-restrictions.html
[Lambda Features](https://i.stack.imgur.com/EujxH.png)
My approach is to validate jwt tokens using Viewer Request
[Functions associated](https://i.stack.imgur.com/wYK0b.png)
There are multiple examples doing this, and they use common libraries such as:
```
import y from 'jsonwebtoken';
import z from 'jwk-to-pem';
```
But how should these be referenced if layers are not allowed? Using these modules, the application reports errors:
*Cannot find package 'jsonwebtoken' imported from /var/task/index.mjs"*
On the other hand, creating a lambda with a custom layer with the following packages worked (but this is unusable in aws-cloudfront):
```
{
"dependencies": {
"@types/jwk-to-pem": "^2.0.3",
"jsonwebtoken": "^9.0.2",
"jwk-to-pem": "^2.0.5",
"lodash": "^4.17.21"
}
}
```
Tried:
- Create a layer on behalf of my lambda@Edge function.
Expected to happen:
- Validate tokens in AWS@Lambda edge functions with external 3rd parties.
Actual result:
- Layers are not allowed in AWS@Lambda
|
If I am correct, you want to read data from Firebase and, based on that, pass values to your form, like hints for various fields?
If so, then take a look at the following:
```dart
Future<void> _fetchDocument() async {
  DocumentSnapshot document = await _firestore.collection('yourCollection').doc('yourDocument_id').get();
  setState(() {
    _document = document;
  });
}

@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      title: Text('My Form'),
    ),
    body: _document != null
        ? ListView(
            padding: EdgeInsets.all(16.0),
            children: _document.data().entries.map((entry) {
              String label = entry.key;
              dynamic value = entry.value;
              return TextFormField(
                decoration: InputDecoration(
                  hintText: 'Enter $label with $value',
                ),
              );
            }).toList(),
          )
        : Center(
            child: CircularProgressIndicator(),
          ),
  );
}
``` |
I have this simple code:
```python
import pydantic
from typing import ClassVar
from pydantic import BaseModel, PrivateAttr

class Parent(pydantic.BaseModel):
    _attribute: str = PrivateAttr('parent_private_attribute')

class Child(Parent):
    _attribute: ClassVar[str] = 'child_class_attribute'
```
I would expect that if I do
Child._attribute
I'll get the `'child_class_attribute'` string.
But what happens is:
Child._attribute
'parent_private_attribute'
While if I access that attribute on the parent class I get:
ModelPrivateAttr(default='parent_private_attribute')
Why, when I try to access `Child._attribute`, don't I get the ClassVar I defined, and instead get the parent's private attribute?
|
I have this code to multi-select PDFs from the device, and I need the real path of each PDF. When I select one PDF I can get the real path using
```
String path = data.getData().getPath();
```
but in the case of multi-select I can only get the URI of each selected PDF (to my knowledge). Here is the code.
```
private void selectPDFs() {
Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
intent.setType("application/pdf");
intent.putExtra(Intent.EXTRA_ALLOW_MULTIPLE, true);
someActivityResultLauncher.launch(intent);
}
ActivityResultLauncher<Intent> someActivityResultLauncher = registerForActivityResult(
new ActivityResultContracts.StartActivityForResult(),
new ActivityResultCallback<ActivityResult>() {
@Override
public void onActivityResult(ActivityResult result) {
if (result.getResultCode() == Activity.RESULT_OK) {
Intent data = result.getData();
if (data != null) {
if (data.getClipData() != null) {
int count = data.getClipData().getItemCount();
int cItem = 0;
while (cItem < count) {
String uri = data.getClipData().getItemAt(cItem).getUri().toString();
File f = new File(uri); // here i need the real path instead of uri
files.add(f); // files is an ArrayList<File>
cItem++;
}
} else if (data.getData() != null) {
String path = data.getData().getPath(); // path is the real file path
File f = new File(path);
files.add(f);
}
}
}
}
});
``` |
Get the real path of multi-selected PDFs |
|java|android|android-studio|path|uri| |
When asking questions like this it's very helpful to provide example DDL and DML in an easy to understand and reproduce format:
```SQL
DECLARE @Table TABLE (custname VARCHAR(20), week INT, sales DECIMAL(10,2), orders INT);
INSERT INTO @Table (custname, week, sales, orders) VALUES
('CustAAA', -1 , 974697.41 , 62013), ('CustAAA', 1 , 10.01 , 5), ('CustAAA', 2 , 10 , 2),
('CustAAA', 2 , 372.95 , 11), ('CustAAA', 3 , 70.86 , 13), ('CustAAA', 3 , 0 , 3),
('CustAAA', 4 , 8.08 , 2), ('CustAAA', 5 , 20 , 6), ('CustAAA', 48 , 0 , 38),
('CustAAA', 49 , 84.27 , 2), ('CustXYZ', -1 , 12.12 , 1), ('CustXYZ', 1 , 22.59 , 1),
('CustXYZ', 4 , 117.9 , 1), ('CustXYZ', 48 , 19.3 , 1);
SELECT p.custname, MAX(p.sales1) AS Sales1, MAX(p.orders1) AS Orders1, MAX(p.sales2) AS Sales2, MAX(p.orders2) AS Orders2,
MAX(p.sales3) AS Sales3, MAX(p.orders3) AS Orders3, MAX(p.sales53) AS Sales53, MAX(p.orders53) AS Orders53
FROM (
SELECT i.sales, i.orders, i.custname,
'sales' + CAST(week AS VARCHAR(2)) AS WeekSales, 'orders' + CAST(week AS VARCHAR(2)) AS WeekOrders
FROM @Table i
) a
PIVOT (
SUM(sales) FOR WeekSales IN (sales1, sales2, sales3, sales53)
) p
PIVOT (
SUM(orders) FOR WeekOrders IN (orders1, orders2, orders3, orders53)
) p
GROUP BY p.custname;
```
Here we've reproduced the week column twice, once for each pivot we want to do.
Then, it's just a case of producing the list of each of the values, and aggregating as required.
|custname|Sales1|Orders1|Sales2|Orders2|Sales3|Orders3|Sales53|Orders53 |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|
|CustAAA|10.01|5|372.95|11|70.86|13|| |
|CustXYZ|22.59|1|||||||
|
pydantic v2: ClassVar attribute is taken from parent class instead of current class |
|python-3.x|pydantic-v2| |
I wrote a custom table widget which is derived from `QTableWidget`. The last two columns in this widget use a custom cell widget (one containing a `QCheckBox`, the other a `QLineEdit`, with some optional `QPushButtons`).
Now, I added the option to color the table widget rows in a darker blue or orange color, depending on whether the checkbox is checked or not.
The code for this basically looks like the following:
```c++
// Color the standard tablewidget cells
for (auto j = 0; j < FIRST_FOUR_COLUMNS; j++) {
tableWidget->item(i, j)->setBackground(isCheckedColor);
}
QPalette palette;
palette.setColor(QPalette::Base, isCheckedColor);
// For the push buttons
palette.setColor(QPalette::Button, buttonColor);
// Now color the cell widgets
tableWidget->cellWidget(i, COL_CHECKBOX)->setAutoFillBackground(true);
tableWidget->cellWidget(i, COL_LINE_EDIT)->setAutoFillBackground(true);
tableWidget->cellWidget(i, COL_CHECKBOX)->setPalette(palette);
tableWidget->cellWidget(i, COL_LINE_EDIT)->setPalette(palette);
```
It works fine. However, the following occurs if a row is now selected:
[![Orange row][1]][1]
[![Blue row][2]][2]
The upper image shows a selected orange row, while the lower image shows a selected blue row. As you might see, the highlighting color for the last two columns is different compared to the other, standard table widget cells.
However, I want all of it to look the same.
What I tried so far was:
- Changing the background color via a custom stylesheet (for example):\
`tableWidget->setStyleSheet("QTableWidget::item{ selection-background-color:#ff0000; }");`
- subclassing `QItemDelegate` and overriding the `paint`-method to display my own colors.
Alternatives were discussed [here][3].
But none of that worked so far, still the same problems.
How do I solve this problem?
EDIT: As requested, a minimal example to reproduce the problem is shown below.
```
#include <QApplication>
#include <QHBoxLayout>
#include <QMainWindow>
#include <QTableWidget>
#include <QWidget>
int
main(int argc, char *argv[])
{
QApplication app(argc, argv);
// Setup widgets
auto* const tableWidget = new QTableWidget(1, 2);
tableWidget->setSelectionBehavior(QAbstractItemView::SelectRows);
tableWidget->setItem(0, 0, new QTableWidgetItem());
auto *const cellWidget = new QWidget;
tableWidget->setCellWidget(0, 1, cellWidget);
// Apply palette color
QPalette palette;
palette.setColor(QPalette::Base, QColor(255, 194, 10, 60));
tableWidget->item(0, 0)->setBackground(palette.color(QPalette::Base));
tableWidget->cellWidget(0, 1)->setAutoFillBackground(true);
tableWidget->cellWidget(0, 1)->setPalette(palette);
// Layout and main window
auto* const mainWindow = new QMainWindow;
auto* const layout = new QHBoxLayout;
layout->addWidget(tableWidget);
mainWindow->setCentralWidget(tableWidget);
mainWindow->show();
return app.exec();
}
```
CMake:
```
cmake_minimum_required(VERSION 3.8)
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)
project(ExampleProject LANGUAGES CXX)
find_package(QT NAMES Qt5 COMPONENTS Widgets REQUIRED)
find_package(Qt5 COMPONENTS Widgets REQUIRED)
add_executable(ExampleProject
${CMAKE_CURRENT_LIST_DIR}/main.cpp
)
target_link_libraries(ExampleProject
PRIVATE Qt::Widgets
)
```
[1]: https://i.stack.imgur.com/2VeBY.png
[2]: https://i.stack.imgur.com/30Les.png
[3]: https://stackoverflow.com/questions/57178983/qtablewidget-different-selection-colors-for-the-different-cells |
You can use `line_profiler` in jupyter notebook.
1. Install it: `pip install line_profiler`
2. Within your jupyter notebook, call: `%load_ext line_profiler`
3. Define your function `prof_function` as in your example.
4. Finally, profile as follows: `%lprun -f prof_function prof_function()`
Which will provide the output:
Timer unit: 1e-06 s
Total time: 3e-06 s
File: <ipython-input-22-41854af628da>
Function: prof_function at line 1
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1 def prof_function():
2 1 1.0 1.0 33.3 x=10*20
3 1 1.0 1.0 33.3 y=10+x
4 1 1.0 1.0 33.3 return (y)
----
Note that you can profile _any_ function in the call hierarchy, not just the top one. If the function you want to profile isn't in the notebook, you must import it. Unfortunately you can only profile functions which are run from source code, not from libraries you've installed (unless they were installed from source).
Example:
import numpy as np
def sub_function():
a = np.random.random((3, 4))
x = 10 * a
return x
def prof_function():
y = sub_function()
z = 10 + y
return z
Output of `%lprun -f prof_function prof_function()`:
Timer unit: 1e-09 s
Total time: 3.2e-05 s
File: /var/folders/hh/f20wzr451rd3m04552cf7n4r0000gp/T/ipykernel_51803/3816028199.py
Function: prof_function at line 6
Line # Hits Time Per Hit % Time Line Contents
==============================================================
6 def prof_function():
7 1 29000.0 29000.0 90.6 y = sub_function()
8 1 3000.0 3000.0 9.4 z = 10 + y
9 1 0.0 0.0 0.0 return z
Output of `%lprun -f sub_function prof_function()`:
Timer unit: 1e-09 s
Total time: 2.7e-05 s
File: /var/folders/hh/f20wzr451rd3m04552cf7n4r0000gp/T/ipykernel_51803/572113936.py
Function: sub_function at line 1
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1 def sub_function():
2 1 13000.0 13000.0 48.1 a = np.random.random((3, 4))
3 1 14000.0 14000.0 51.9 x = 10 * a
4 1 0.0 0.0 0.0 return x
More detail in [this nice blog post](http://mortada.net/easily-profile-python-code-in-jupyter.html). |
As @robert suggested in the comments, I started using https://ng-bootstrap.github.io/#/home for this functionality, and it works. |
I'm currently trying to deploy a Next.js app on GitHub Pages using GitHub Actions, but I get a page 404 error even after it successfully deploys. I've looked around a bunch of similarly named questions and am having trouble figuring this out.
Here is my GitHub repo: https://github.com/Mctripp10/mctripp10.github.io
Here is my website: https://mctripp10.github.io
I used the *Deploy Next.js site to Pages* workflow that GitHub provides. Here is the `nextjs.yml` file:
```lang-yaml
# Sample workflow for building and deploying a Next.js site to GitHub Pages
#
# To get started with Next.js see: https://nextjs.org/docs/getting-started
#
name: Deploy Next.js site to Pages
on:
# Runs on pushes targeting the default branch
push:
branches: ["dev"]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write
# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
group: "pages"
cancel-in-progress: false
jobs:
# Build job
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Detect package manager
id: detect-package-manager
run: |
if [ -f "${{ github.workspace }}/yarn.lock" ]; then
echo "manager=yarn" >> $GITHUB_OUTPUT
echo "command=install" >> $GITHUB_OUTPUT
echo "runner=yarn" >> $GITHUB_OUTPUT
exit 0
elif [ -f "${{ github.workspace }}/package.json" ]; then
echo "manager=npm" >> $GITHUB_OUTPUT
echo "command=ci" >> $GITHUB_OUTPUT
echo "runner=npx --no-install" >> $GITHUB_OUTPUT
exit 0
else
echo "Unable to determine package manager"
exit 1
fi
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: "20"
cache: ${{ steps.detect-package-manager.outputs.manager }}
- name: Setup Pages
uses: actions/configure-pages@v4
with:
# Automatically inject basePath in your Next.js configuration file and disable
# server side image optimization (https://nextjs.org/docs/api-reference/next/image#unoptimized).
#
# You may remove this line if you want to manage the configuration yourself.
static_site_generator: next
- name: Restore cache
uses: actions/cache@v4
with:
path: |
.next/cache
# Generate a new cache whenever packages or source files change.
key: ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-${{ hashFiles('**.[jt]s', '**.[jt]sx') }}
# If source files changed but packages didn't, rebuild from a prior cache.
restore-keys: |
${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-
- name: Install dependencies
run: ${{ steps.detect-package-manager.outputs.manager }} ${{ steps.detect-package-manager.outputs.command }}
- name: Build with Next.js
run: ${{ steps.detect-package-manager.outputs.runner }} next build
- name: Static HTML export with Next.js
run: ${{ steps.detect-package-manager.outputs.runner }} next export
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: ./out
# Deployment job
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
```
I got this on the build step:
```lang-none
Route (app) Size First Load JS
┌ ○ /_not-found 875 B 81.5 kB
├ ○ /pages/about 2.16 kB 90.2 kB
├ ○ /pages/contact 2.6 kB 92.5 kB
├ ○ /pages/experience 2.25 kB 90.3 kB
├ ○ /pages/home 2.02 kB 92 kB
└ ○ /pages/projects 2.16 kB 90.2 kB
+ First Load JS shared by all 80.6 kB
├ chunks/472-0de5c8744346f427.js 27.6 kB
├ chunks/fd9d1056-138526ba479eb04f.js 51.1 kB
├ chunks/main-app-4a98b3a5cbccbbdb.js 230 B
└ chunks/webpack-ea848c4dc35e9b86.js 1.73 kB
○ (Static) automatically rendered as static HTML (uses no initial props)
```
Full image: [Build with Next.js][1]
I read in https://stackoverflow.com/questions/58039214/next-js-pages-end-in-404-on-production-build that it might have something to do with having sub-folders inside the `pages` folder, but I'm not sure how to fix that, as I wasn't able to get it to work without putting each page's `page.js` file in its own sub-folder.
EDIT: Here is my `next.config.js` file:
```
/** @type {import('next').NextConfig} */
const nextConfig = {
basePath: '/pages',
output: 'export',
}
module.exports = nextConfig
```
[1]: https://i.stack.imgur.com/wSlPq.png
|
```
<!DOCTYPE html>
<html>
<body>
<p id="Num"></p>
<p id="musicTitle"></p>
<button onclick="next()">pipi</button>
<audio controls>
<source id="songSrc" type="audio/mp3">
</audio>
<script>
Num = 1
const maxNum = 2
document.getElementById("Num").innerHTML = Num
song = [
{
title: "evil forest",
src: "https://files.catbox.moe/c5zrzm.mp3"
},
{
title: "forest",
src: "https://files.catbox.moe/c5zrzm.mp3"
},
]
let music = document.getElementById("music");
music.src = song[Num-1].src;
document.getElementById("musicTitle").innerHTML = song[Num-1].title;
function next() {
if(maxNum > Num) {
Num++
}
else {
let Num = 1
}
let music = document.getElementById("music")
music.src = song[songNum-1].src
document.getElementById("musicTitle").innerHTML = song[Num-1].title;
document.getElementById("Num").innerHTML = Num
}
</script>
</body>
</html>
```
1) I wanted `Num` to change as you click the button, but it either doesn't change the variable or doesn't reflect the change through `document.getElementById`.
2) The song title isn't showing even though I created a song title variable and a `document.getElementById` statement. |
The error is telling you that the [`<label>`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/label) claims to be associated with a particular form element, but that form element doesn't exist.
If this label is associated with a form element, that element needs to exist. For example, it can be outside the label:
<label for="billing_country">País / Región <abbr class="required" title="obligatorio">*</abbr></label>
<input type="text" id="billing_country" name="billing_country">
Or nested within the label, in which case the `for` attribute isn't needed:
<label>
País / Región <abbr class="required" title="obligatorio">*</abbr>
<input type="text" name="billing_country">
</label>
Alternatively, if that label *isn't* associated with a form element, change it to something else. A `<span>`, for example:
<span>País / Región <abbr class="required" title="obligatorio">*</abbr></span>
Or any other simple text element, or no wrapping element at all if you don't need one:
País / Región <abbr class="required" title="obligatorio">*</abbr> |
I'm starting my React studies and trying to create a title component and a description component (paragraph), but I'm facing challenges. Here's the data that comes from the API:

## Component Code
```javascript
// use client
import { PersonalitiesProps } from "@/@types/personalities";
import { useEffect, useState } from "react";
const Personalities = () => {
const [apiData, setApiData] = useState<PersonalitiesProps | null>(null);
useEffect(() => {
const fetchData = async () => {
try {
const res = await fetch("http://localhost:8000/api/v1/personalidades", {
method: "GET",
headers: {
"Content-Type": "application/json"
}
});
const result = await res.json();
console.log(result);
setApiData({ data: result });
} catch (error) {
console.error(error);
}
};
fetchData();
}, []);
console.log(apiData);
return (
<section>
{apiData && apiData.data && apiData.data.map(nomes => (
<div key={nomes.nome}>
<h1>{nomes.nome}</h1>
</div>
))}
{apiData && apiData.data && apiData.data.map(personalidades => (
<div key={personalidades.id}>
<h2>{personalidades.nome}</h2>
<p>{personalidades.historia}</p>
</div>
))}
</section>
);
};
export default Personalities;
```
Here's how I'm calling it on the page:
```
import Personalities from "@/components/Personalities";
export default function TestePage() {
return (
<section>
<Personalities />
</section>
);
}
```
Information
Nextjs: 14.x
TypeScript: Yes
API: Golang (local)
NextJs: APP/ROUTER |
How can a Next.js component get its value from an API? |
|reactjs|typescript|api|next.js| |
One reason this can happen is if you don't store the user's password with the task. Without a password, the user is unauthenticated, i.e., anonymous, and can't access the database. See if you have the box below checked and, if so, try unchecking it. This is found under Security options on the task's General tab.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/JePO7.png |
document.getElementById(...).innerHTML not updating when the variable changes |
|javascript|html|arrays|variables| |
null |
I create a Plotly bar chart using the following line:
```
fig = px.bar(df_to_print, x="bin_dist", y=metric, color='net_id', barmode="group", text=metric, color_discrete_sequence=px.colors.qualitative.Vivid)
```
and got this chart (using streamlit):
[enter image description here](https://i.stack.imgur.com/D8z2O.png)
I want to bold the maximum value on each group, something like:
[enter image description here](https://i.stack.imgur.com/vOcHd.png)
Any ideas?
Trying:
for net_id in df_to_print['net_id'].unique():
max_value = df_to_print[df_to_print['net_id'] == net_id][metric].max()
max_index = df_to_print[(df_to_print['net_id'] == net_id) & (df_to_print[metric] == max_value)].index
fig.data[0].marker.line.width[max_index] = 2
but got this exception:
TypeError: 'NoneType' object does not support item assignment
|
Plotly bar chart - emphasise maximum value of each group |
|plotly|bar-chart|streamlit| |
null |
$updatedRule = [
"id" => $ruleId,
"paused" => false,
"description" => "BlockBadIP",
"action" => "block",
"filter" => [
"id" => $filerID,
"expression" => $newFilterExpression
],
];
$updateUrl = "https://api.cloudflare.com/client/v4/zones/{$zoneIdentifier}/firewall/rules/{$ruleId}";
$data = json_encode($updatedRule);
$updateResponse = makeCurlRequest($updateUrl, $headers, "PUT", $data);
If the above doesn't work, it might be helpful to verify the following:
- Ensure that the $newFilterExpression variable holds the correct format for the expression that Cloudflare's API expects. You might check the Cloudflare documentation or test the expression format separately.
- Verify that the permissions associated with the API key used for authentication have the necessary access rights to modify firewall rules.
- Check the Cloudflare API documentation for any specific requirements or constraints related to updating expressions within firewall rules.
If you've already triple-checked the expression format and verified the permissions, and the issue persists, reaching out to Cloudflare support or their developer community might provide additional insights or assistance specific to their API behavior. |
I have a list of thumbnails that are rendered to the screen as follows:
<div
class="color"
style="background:${rgbToHex(color.value.r, color.value.g, color.value.b )}"
>
<context-menu></context-menu>
</div>
When I hold on the color div, I want to show the context menu (only on hold, not hover). How can I do that using Lit and TypeScript? Please note that more than one thumbnail is rendered, so I need to show the context menu for the thumbnail that the person holds on. |
How to handle mousedown in lit |
|typescript|lit| |
You want your script to tell you if the input is a word or a number. Your first attempt tried to do this by dividing the input by itself, but this caused problems when the input wasn't a number.
Here's an improved way to do it:
#!/bin/bash
# This stores your input into a variable.
uniq_value=$1
# This checks if your input looks like a number.
if [[ $uniq_value =~ ^-?[0-9]+(\.[0-9]+)?$ ]]; then
# If it looks like a number, we double-check with a tool called 'bc'.
if echo "$uniq_value" | bc &> /dev/null; then
echo "Given value is a number"
else
echo "Given value is a string"
fi
else
echo "Given value is a string"
fi
Here's what this script does differently:
First, it checks if your input is a number (including decimals) using a simple rule. If it passes this check, it moves to the next step.
Then, it uses a program called bc to double-check if it's really a number. If bc doesn't complain, then your input is definitely a number.
If your input doesn't look like a number from the start, or if bc can't handle it, the script decides it's a word and tells you so.
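If you prefer something shorter, the same regex idea can be wrapped in a reusable function; this is only a condensed sketch of the check above (the `is_number` name is my own, not part of the original script):

```shell
#!/bin/bash
# Returns success (0) when the argument looks like an integer or a decimal.
is_number() {
    [[ $1 =~ ^-?[0-9]+(\.[0-9]+)?$ ]]
}

# Try it on a few sample values.
for value in 6725302 3.045 -12 abc123xy; do
    if is_number "$value"; then
        echo "'$value' is a number"
    else
        echo "'$value' is a string"
    fi
done
```

Running it prints `'3.045' is a number` and `'abc123xy' is a string`, matching the behavior you asked for.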
This way, your script can correctly identify inputs like "abc123xy" as words and "3.045" or "6725302" as numbers, just as you wanted. |
|python|callback|plotly-dash| |
This 'problem' is reproducible by the following git repository:
https://github.com/tschmit/EFCLazyVsEagerWeird
I have the following entities (lots of lines omitted):
```csharp
class Folder
{
public virtual ICollection<File> Files { get; set; }
public virtual ICollection<Log> Logs { get; set; }
}
class File
{
public Int32 FolderId { get; set; }
public virtual Folder Folder { get; set; }
public virtual ICollection<Log> Logs { get; set; }
}
class Log
{
public Int32? FolderId { get; set; }
public virtual Folder Folder { get; set; }
public Int32? FileId { get; set; }
public virtual File File { get; set; }
public bool IsError { get; set; }
}
```
Configured as follows (lazy-loading setup omitted, but it is enabled):
```csharp
public class FolderEFConfiguration : IEntityTypeConfiguration<Folder> {
public void Configure(EntityTypeBuilder<Folder> b)
{
b.HasMany(x => x.Files)
.WithOne(y => y.Folder)
.HasForeignKey(y => y.FolderId).IsRequired();
}
}
public class LogEFConfiguration : IEntityTypeConfiguration<Log>
{
public void Configure(EntityTypeBuilder<Log> b)
{
//...
b.HasOne(x => x.File)
.WithMany(fl => fl.Logs)
.HasForeignKey(x => x.FileId);
b.HasOne(x => x.Folder)
.WithMany(fd => fd.Logs)
.HasForeignKey(x => x.FolderId);
//...
}
}
```
The following code fails when printing `fd.Logs.Count()`:
```csharp
using AppContext ctx = new ActapiContext(conf.GetConnectionString("abc"));
var fd = await ctx.Set<Folder>().Where(x => x.InnerRef == "XXXXX").FirstOrDefaultAsync();
Console.WriteLine(fd?.Files.Count());
Console.WriteLine(fd?.Logs.Count());
```
And I get this error:
> System.Data.SqlTypes.SqlNullValueException: Data is Null. This method or property cannot be called on Null values
On the other hands, the following code runs fine:
```csharp
using AppContext ctx = new ActapiContext(conf.GetConnectionString("abc"));
var fd = await ctx.Set<Folder>().Include(x => x.Logs).Where(x => x.InnerRef == "XXXXX").FirstOrDefaultAsync();
Console.WriteLine(fd?.Files.Count());
Console.WriteLine(fd?.Logs.Count());
```
So lazy loading is effective for `Files`, but not for `Logs`. Why?
I will also be grateful for any clue that helps find out why. I tried to reproduce the behavior in a small project without success (that is, everything runs fine when I use a blog/post sample with logs on both blog and post).
Digging further, I found an update in the MCD:
```sql
alter table Logs add IsError bit default 0
```
which leaves a lot of rows with `IsError` set to null, so the `Log` entity can't be materialized.
By replacing the MCD update with
```sql
alter table Logs add IsError bit not null default 0
```
everything runs fine, **BUT**: why does the eager loading run in both cases (good MCD and wrong MCD)?
|
I've created a ttkbootstrap tableview with several columns where one column title/heading is configured to be right justified. When data is loaded the column title/heading is no longer right justified.
I've used Treeview before, and when refreshing data the column headings always kept their configuration. With ttkbootstrap's Tableview this doesn't seem to be the case.
I am using the `build_table_data` method to update the Tableview with data. According to the documentation, this method will completely rebuild the underlying table with the new column and row data. I confirmed I am providing the same column config as I do when creating the Tableview, but for some reason the column heading is no longer right justified.
The following code can reproduce the problem. When run, you will see a column heading "Right Just" which is right justified. Press the Test button and data is loaded, and the heading is no longer right justified. Is this a bug or expected behavior? Any suggestions on how to work around it?
```Python
import ttkbootstrap as ttk
from ttkbootstrap.tableview import Tableview
def loaddata():
eobTable_data = [('Rec 1', '', 'a', 'Col4 data', '')]
eobTable.build_table_data(coldata=eobColms,rowdata=eobTable_data)
# create the toplevel window
root = ttk.Window()
frame = ttk.Frame(root, padding=10)
frame.pack(fill='both', expand='yes')
global eobColms
eobColms = [{'text': 'Col 1', 'anchor': 'w', 'stretch': False},
{'text': 'Col 2', 'stretch': True},
{'text': 'Col 3', 'anchor': 'w','stretch': False},
{'text': 'Right Just','anchor': 'e', 'stretch': False},
{'text': 'Col 5', 'stretch': True}]
eobTable_data = ("",)
eobTable = Tableview(root,coldata=eobColms,rowdata=eobTable_data, paginated=True,pagesize=14,searchable=True,bootstyle='danger',height=14)
eobTable.pack(fill='both', expand=True)
test_btn = ttk.Button(root,text="Test",command=loaddata).pack()
root.mainloop()
``` |
How to force ttkbootstrap TableView column title to always stay right justified |
|tableview|ttkbootstrap| |
null |
Based on the [Process.MainWindowHandle Property](https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.process.mainwindowhandle?view=net-8.0) documentation, you should call `Process.Refresh` beforehand in order to guarantee that you will retrieve the current value.
> The main window is the window opened by the process that currently has
> the focus (the TopLevel form). You must use the Refresh method to
> refresh the Process object to get the most up to date main window
> handle if it has changed. In general, because the window handle is
> cached, use Refresh beforehand to guarantee that you'll retrieve the
> current handle. |
My query
```
select
custname, case when date < '11/26/2023' then -1 else datepart(wk,date) end 'week#', sum(amount) sales,
count(salesid) orders from SalesTable inner join CustomerTable c
on salestable.CustID=c.CustID
where date < '1/27/2024' and c.CustID = 10285 or c.CustID = -2
group by c.custid,custname, [address],case when date < '11/26/2023' then -1 else datepart(wk,date) end,
case when date < '11/26/2023' then '11/25/2023' else DATEADD(dd,7-(DATEPART(dw,date)),date) end
order by 1,2
```
It gets all customer sales (sum of amount, week number, number of orders), one week per row,
like:
| custname | week# | sales | orders |
|---------|----------|----------|-----------|
|CustAAA | -1 | 974697.41 | 62013 |
|CustAAA |1| 10.01 | 5 |
|CustAAA |2| 10 |2|
|CustAAA |2| 372.95| 11|
|CustAAA |3| 70.86| 13|
|CustAAA |3| 0| 3|
|CustAAA |4| 8.08| 2|
|CustAAA |5| 20 |6|
|CustAAA |48| 0 |38|
|CustAAA |49 |84.27| 2|
|CustXYZ |-1 |12.12| 1|
|CustXYZ |1 |22.59| 1|
|CustXYZ |4 |117.9| 1|
|CustXYZ |48 |19.3| 1|
[enter image description here](https://i.stack.imgur.com/3qC7j.png)
How do I PIVOT to one row per customer, turning each 'week' number into a sales column and an orders column,
repeating for each subsequent week number, like this example:
| custname | week -1 sales | week -1 orders | week 1 sales | week 1 orders | week 2 sales | week 2 orders | week 3 sales | week 3 orders | week 4 sales | week 4 orders | week 5 sales | week 5 orders | week 48 sales | week 48 orders | week 49 sales | week 49 orders |
|---------|----------|----------|-----------|---------|----------|----------|-----------|---------|----------|----------|-----------|---------|----------|----------|-----------|-----------|
|CustAAA | 974697.41 | 62013 | 10.01 | 5 | 382.95 | 13 | 70.86 | 16 | 8.08 | 2 | 20 | 6 | 0 | 38 | 84.27 | 2 |
|CustXYZ | 12.12 | 1 | 22.59 | 1 | | | | | 117.9 | 1 | | | 19.3 | 1 | | |
[enter image description here](https://i.stack.imgur.com/Z8sHj.png) |
public async Task<string> FetchUser(FetchUser postdata)
{
try
{
ServicePointManager.Expect100Continue = true;
ServicePointManager.DefaultConnectionLimit = 9999;
System.Net.ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
HttpClient client = new HttpClient();
var Message = "";
string apiUrl = ApiSecrets.ApiEndPoint;
var payload = new
{
api_key = ApiSecrets.ApiKey,
datasetKey = ApiSecrets.DataSetKey
};
string payloadJson = Newtonsoft.Json.JsonConvert.SerializeObject(payload);
var content = new StringContent(payloadJson, Encoding.UTF8, "application/json");
string username = ApiSecrets.AuthUserName;
string password = ApiSecrets.AuthPassword;
string credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{username}:{password}"));
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", credentials);
HttpResponseMessage response = await client.PostAsync(apiUrl, content);
// HttpResponseMessage response = client.PostAsync(apiUrl, content).Result;
if (response.IsSuccessStatusCode)
{
// Read and print the response content
string responseBody = await response.Content.ReadAsStringAsync();
// string responseBody = response.Content.ReadAsStringAsync().Result;
JsonResponse Empresponse = JsonConvert.DeserializeObject<JsonResponse>(responseBody);
List<Employee> employees = Empresponse.employee_data;
}
else
{
// Handle unsuccessful response
// Console.WriteLine("Error: " + response.StatusCode);
Message = "Invalid Response";
}
return Message;
}
//Genearl ex
catch (Exception ex)
{
if (ex.InnerException != null)
{
// Log or inspect the InnerException details
// Console.WriteLine($"InnerException: {ex.InnerException.Message}");
// Throw the InnerException to propagate it
throw ex.InnerException;
}
// If there is no InnerException, rethrow the original exception
throw ex;
}
//General Ex end
}
I'm encountering an issue with my C# code that makes HTTP requests using HttpClient to an API endpoint. The error message is "The request was aborted: Could not create SSL/TLS secure channel." Strangely, the same code works fine when I debug it on my local system.
{
"status_code": 500,
"request_name": "FetchUser",
"message": "The request was aborted: Could not create SSL/TLS secure channel.",
"request_time": ,
"response_time":,
"response_time_Utc": 0,
"data": "Failure"
} |
The request was aborted: Could not create SSL/TLS secure channel (while debugging same code is working fine) |
|c#|ssl|httpclient|tls1.2| |
null |
I need to install a self-signed SSL certificate into the trusted root store on a Windows device from my UWP app code. How can I achieve this? |
```
<li class="drop-down-bike-1">
<a href="#">MODERN CLASSICS</a>
<ul class="modern-list">
<li class="drop-down-bike-display">
<div class="mod-list" style="width:400px; height:600px; overflow-y: scroll;">
hello
{% for bi in mod_bike %}
<a href="#">
<div class="card mb-3 border-0 listc" >
<div class="row g-0">
<div class="col-md-4">
<img src="{{ bi.image }}" class="img-fluid-bike rounded-start" alt="..." >
</div>
<div class="col-md-8">
<div class="card-body">
<h3 class="card-title bikeName">{{ bi.name }}</h3>
<p class="card-text" style="width:200px"><small class="text-muted">price from ₹ {{ bi.price }}</small></p>
</div>
</div>
</div>
</div>
</a>
{% endfor %}
</div>
</li>
</ul>
</li>
```
This is the HTML code for fetching details from the database.
```
def nav(request):
category_names = ['ADVENTURE', 'CLASSICS', 'ROADSTER', 'ROCKET3']
adventure_cat = Category.objects.get(name='ADVENTURE')
modern_cat = Category.objects.get(name='CLASSICS')
road_cat = Category.objects.get(name="ROADSTER")
rocket_cat = Category.objects.get(name="ROCKET3")
adv_bike = Bike.objects.filter(category=adventure_cat)
mod_bike = Bike.objects.filter(category=modern_cat)
road_bike = Bike.objects.filter(category=road_cat)
rocket_bike = Bike.objects.filter(category=rocket_cat)
context = {
"adv_bike": adv_bike,
"mod_bike": mod_bike,
"road_bike": road_bike,
"rocket_bike": rocket_bike,
"adventure_cat":adventure_cat,
"road_cat": road_cat,
"rocket_cat": rocket_cat,
"modern_cat": modern_cat,
}
return render(request, 'nav.html', context=context)
```
This is my views code for fetching data from the models. The problem is that the data is not being fetched and displayed on the HTML page; I need to fetch and display the data on the HTML page.
|
Failed to notify project evaluation listener error on Android Studio |
|android|kotlin|gradle| |
If you want this to work with string enums, you need to use `Object.values(ENUM).includes(ENUM.value)` because string enums are not reverse mapped, according to https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-4.html#string-enums:
enum Vehicle {
Car = 'car',
Bike = 'bike',
Truck = 'truck'
}
becomes:
{
Car: 'car',
Bike: 'bike',
Truck: 'truck'
}
So you just need to do:
if (Object.values(Vehicle).includes('car')) {
// Do stuff here
}
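To illustrate the difference, here's a small self-contained sketch (the `Direction` and `Fuel` enums are made-up examples, not from your code):

```typescript
// Numeric enums ARE reverse mapped: the compiled object maps both
// name -> value and value -> name, so Object.values mixes names and numbers.
enum Direction { Up, Down }
console.log(Object.values(Direction)); // ["Up", "Down", 0, 1]

// String enums are NOT reverse mapped: only name -> value entries exist.
enum Fuel { Petrol = 'petrol', Diesel = 'diesel' }
console.log(Object.values(Fuel)); // ["petrol", "diesel"]

// That's why the membership check works directly on the values:
console.log((Object.values(Fuel) as string[]).includes('diesel')); // true
```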
If you get an error for: `Property 'values' does not exist on type 'ObjectConstructor'`, then you are not targeting ES2017. You can either use this tsconfig.json config:
"compilerOptions": {
"lib": ["es2017"]
}
Or you can just do an any cast:
if ((<any>Object).values(Vehicle).includes('car')) {
// Do stuff here
} |
null |
The given warning:
Warning - Trying to access array offset on value of type null - PHP 7.4.0 #7776
results from this snippet, `$key = $key[1];`, and can be solved by using this instead:
$key = isset($key[1]) ? $key[1] : null;
|
null |
|sql-server|entity-framework-core|ef-core-3.1|ef-core-8.0| |
I run an Angular application and there is no compilation error. However, the browser shows the following error:
Both the table and dtOptions cannot be empty
I am using angular 17.
The ts file contains the following code:
```
import { BankAccountsRefComponent } from './bankaccounts-ref.component';
import { app_bank_out } from './../../models/app-bank';
import { Subject } from "rxjs";
import { Component, OnInit, TemplateRef, ViewChild, inject } from '@angular/core';
import { AppBankService } from 'src/app/services/app-bank.service';
import { tblrowcommand } from 'src/app/models/tbl-row-command';
import { ADTSettings, } from 'angular-datatables/src/models/settings';
@Component({
selector: 'bankaccounts',
templateUrl: './bankaccounts.component.html',
styleUrls: ['./bankaccounts.component.css']
})
export class BankaccountsComponent implements OnInit {
dtOptions: ADTSettings = {};
dtTrigger: Subject<ADTSettings> = new Subject<ADTSettings>()
banks?: Array<app_bank_out>;
bankservice = inject(AppBankService);
that=this
message = '';
constructor() {
}
onClickRow(bank: any):void {
this.message = bank.bankId + '' + bank.name;
}
@ViewChild('BankAccountsRef') BankAccountsRef: TemplateRef<BankAccountsRefComponent> | undefined;
ngOnInit(): void {
setTimeout(() => {
const self = this;
this.dtOptions = {
ajax: (dataTablesParameters: any, callback ) =>{
this.bankservice.getBankById(0)
.subscribe(resp => {
console.log(resp)
callback({
recordsTotal: resp.recordsTotal,
recordsFiltered: resp.recordsFiltered,
data: resp
})
})
},
columns:[{
title:"Id",
data: "bankId"
},{
title: "Bank Name",
data: "name"
},{
title: "Swift / ABA",
data: "swift_ABA"
},{
title: "Address",
data: "address"
},{
title: "Country",
data: "country"
},{
title: "Email",
data: "email"
},{
title: "Phone",
data: "phone"
},{
title: "Contact",
data: "contact"
},{
title: "Actions",
data: null,
defaultContent: '',
// ngTemplateRef: {
// ref: this.BankAccountsRef,
// context: {
// captureEvents: self.onCaptureEvent.bind(self),
// }
// }
}],
rowCallback: (row: Node, data: any[] | Object, index: number) => {
const self = this;
$("td", row).off("click");
$("td", row).on('click', () => {
self.onClickRow(data)
})
return row;
}
}
})
}
ngAfterViewInit() {
setTimeout(() => {
// race condition fails unit tests if dtOptions isn't sent with dtTrigger
this.dtTrigger.next(this.dtOptions);
}, 200);
}
onCaptureEvent(event: tblrowcommand) {
this.message = `Event '${event.cmd}' with data '${JSON.stringify(event.data)}`;
}
ngOnDestroy(): void {
// Do not forget to unsubscribe the event
this.dtTrigger.unsubscribe();
}
}
```
This is the error in the output:

Can somebody help me out?
Thanks
I am trying to list data through the Angular DataTable.
I expect to render the list with some functionality. |
**EDITED FOLLOWING COMMENTS**
If you use the default numpy.meshgrid call then the layout of your x and y values with respect to the dependent-variable array corresponds to the following (print `x_grid`, `y_grid` and `z_values` out if you want to see):
x ------->
y
| [dependent-]
| [variable ]
\|/ [ array ]
Your poly-fitted figure corresponds to this, but your heat-map doesn't. I think that's because you **transposed z** in the heatmap plot but you **didn't transpose z** in the polyfit.
Whatever you do, you need to use the **SAME dependent-variable array** in both figures.
You have TWO choices: either (1) transpose the dependent-variable array before any plotting and leave the meshgrid call as it is; or (2) leave the dependent-variable array as it is but change the indexing order in meshgrid.
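As a quick sanity check, the two meshgrid index orders produce transposed grids; this small sketch (using the same 8 x-values and 10 y-values as the code below) shows the shapes:

```python
import numpy as np

x = np.linspace(1, 8, 8)    # 8 x-values
y = np.linspace(1, 10, 10)  # 10 y-values

# Default indexing='xy': x varies along columns -> shape (len(y), len(x))
gx_xy, gy_xy = np.meshgrid(x, y)
print(gx_xy.shape)  # (10, 8)

# indexing='ij': x varies along rows -> shape (len(x), len(y)),
# which matches an (8, 10) dependent-variable array without transposing it
gx_ij, gy_ij = np.meshgrid(x, y, indexing='ij')
print(gx_ij.shape)  # (8, 10)
```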
Full code below. Please note the 2 options (given by easily-changed variable ```option```).
The output figure is the same in both cases. The peaks of the diagrams now correspond between the two subfigures. They are at high-x/low-y.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
x_values = np.linspace(1, 8, 8)
y_values = np.linspace(1, 10, 10)
z_values = [[0.0128, 0.0029, 0.0009, 0.0006, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0049, 0.0157, 0.0067, 0.0003, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0203, 0.0096, 0.0055, 0.0096, 0.0012, 0.0023, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0229, 0.0191, 0.0020, 0.0073, 0.0055, 0.0026, 0.0022, 0.0000, 0.0000, 0.0000],
[0.0218, 0.0357, 0.0035, 0.0133, 0.0073, 0.0145, 0.0000, 0.0029, 0.0000, 0.0000],
[0.0261, 0.0232, 0.0365, 0.0200, 0.0212, 0.0107, 0.0036, 0.0007, 0.0022, 0.0007],
[0.0305, 0.0244, 0.0284, 0.0786, 0.0226, 0.0160, 0.0000, 0.0196, 0.0007, 0.0007],
[0.0171, 0.0189, 0.0598, 0.0215, 0.0218, 0.0464, 0.0399, 0.0051, 0.0000, 0.0000]]
#### If you want x to vary with ROW rather than the default COLUMN then choose ONE of these two approaches
option = 2
if ( option == 1 ): # Default meshgrid, but transpose the dependent variables
z_values = np.array(z_values).T
x_grid, y_grid = np.meshgrid(x_values, y_values)
else: # Leave the dependent variable, but use non-default meshgrid arrangement: indexing='ij'
z_values = np.array(z_values)
x_grid, y_grid = np.meshgrid(x_values, y_values,indexing='ij')
fig = plt.figure(figsize=(12, 5))
ax1 = fig.add_subplot(121)
contour1 = ax1.contourf(x_grid, y_grid, np.log(z_values + 1))
fig.colorbar(contour1, ax=ax1)
ax1.set_xlabel('x values')
ax1.set_ylabel('y values')
x_flat = x_grid.flatten()
y_flat = y_grid.flatten()
z_flat = z_values.flatten()
degree = 4
poly_features = PolynomialFeatures(degree=degree)
X_poly = poly_features.fit_transform(np.column_stack((x_flat, y_flat)))
model = LinearRegression()
model.fit(X_poly, z_flat)
z_pred = model.predict(X_poly)
z_pred_grid = z_pred.reshape(x_grid.shape)
ax2 = fig.add_subplot(122, projection='3d')
ax2.plot_surface(x_grid, y_grid, np.log(z_pred_grid + 1), cmap='viridis')
ax2.set_xlabel('x values')
ax2.set_ylabel('y values')
ax2.set_zlabel('z values')
plt.show()
Output:
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/motaj.png |
how to search for content in Nuxt 3 with the @nuxt/content module using .json files? |
|javascript|typescript|vue.js|nuxt.js| |
null |
I had the same issue, but everything was working OK in my local environment and only the remote wouldn't work, so adding "custom-fields" to the supports args did not fix it. I had to navigate to Settings -> Permalinks, select "Post name", and click "Save Changes" for everything to work like my local environment.
I think the permalink structure had to be refreshed for everything to work. |
I have a question about consuming webservices with JSON/Gson, because my new webservice delivers different strings than my old one. The new webservice is generated by Strapi 4 and I have no way to change it.
My Java application worked with JSON strings like this from my old webservice:
```json
[
{
"nachname": "Studi1",
"contactWebServiceDTO": [
{
"emailAddress": "studi1@xxx.de",
"db_Id": 10,
"dateCreated": "May 10, 2022 10:33:14 AM",
"dateUpdated": "Feb 7, 2023 9:12:28 AM",
"dtype": "Contact"
}
],
"db_Id": 792,
"dateCreated": "Feb 26, 2021 2:11:19 PM",
"dateUpdated": "May 6, 2022 10:01:02 AM",
"dtype": "Person"
}
]
```
I converted it into Java objects with Gson like this:
```java
JSONArray jsonArrayFromRest = new JSONArray(jsonString);
Gson gson = new Gson();
GsonBuilder builder = new GsonBuilder();
gson = builder.create();
if(identifiableWebServiceDTO instanceof PersonWebServiceDTO) {
PersonWebServiceDTO personWebServiceDTO = new PersonWebServiceDTO();
for (int i = 0; i < jsonArrayFromRest.length(); i++) {
String jsonObjectAsString = jsonArrayFromRest.getJSONObject(i).toString();
personWebServiceDTO = gson.fromJson(jsonObjectAsString, PersonWebServiceDTO.class);
dTOList.add(personWebServiceDTO);
}
}
```
Corresponding classes are:
```java
public class IdentifiableWebServiceDTO {
protected long db_Id;
protected Date dateCreated, dateUpdated;
protected String dtype;
//Getter, Setter...
}
public class PersonWebServiceDTO extends IdentifiableWebServiceDTO {
private String nachname;
private List<ContactWebServiceDTO> contactWebServiceDTO;
// ...Setter/Getter
}
public class ContactWebServiceDTO extends IdentifiableWebServiceDTO {
private String emailAddress = null;
//Getter, Setter...
}
```
The new webservice delivers a string that nests every object inside a "data": [] array and the attributes inside an "attributes": {} object.
```json
{
"data": [
{
"id": 1,
"attributes": {
"name": "Nicole",
"createdAt": "2024-03-27T06:55:54.668Z",
"updatedAt": "2024-03-27T06:56:18.843Z",
"publishedAt": "2024-03-27T06:56:18.841Z",
"mycontacts": {
"data": [
{
"id": 1,
"attributes": {
"address": "addressFromNicole",
"createdAt": "2024-03-27T06:56:09.438Z",
"updatedAt": "2024-03-27T06:56:11.625Z",
"publishedAt": "2024-03-27T06:56:11.619Z"
}
},
{
"id": 2,
"attributes": {
"address": "secondAddressFromNicole",
"createdAt": "2024-03-27T06:56:40.962Z",
"updatedAt": "2024-03-27T06:56:41.709Z",
"publishedAt": "2024-03-27T06:56:41.706Z"
}
}
]
}
}
}
],
"meta": {
"pagination": {
"page": 1,
"pageSize": 25,
"pageCount": 1,
"total": 1
}
}
}
```
Do you have an idea how I can convert this structure into my old Java objects in a simple way?
|
If you don't want to use an agent, you can add a template to your LLM that has a chat history field, and then add that as the memory key in ConversationBufferMemory().
Like this:
template = """You are a chatbot having a conversation with a human.
{chat_history} Human: {human_input} Chatbot:"""
prompt = PromptTemplate(input_variables=["chat_history", "human_input"], template=template )
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(
llm=OpenAI(),
prompt=prompt,
verbose=True,
memory=memory)
*See official docs :* https://python.langchain.com/docs/expression_language/cookbook/memory
If you change your mind and choose to use an agent, here's how to do it using a chat_history in the suffix:
prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"
{chat_history} Question: {input} {agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "chat_history", "agent_scratchpad"], )
memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=message_history )
*See official docs* : https://python.langchain.com/docs/modules/memory/agent_with_memory_in_db
|

Right-click the project you have in use, click on Properties,
go to "Compiler compliance level", and select 1.8.
 |
null |
Right-click the project you have in use, click `Properties`, go to `Java Compiler`, and set `Compiler compliance level` to `1.8`.

 |
I'm working on a Node.js project using Firebase Realtime Database, and I'm encountering an error when trying to match clients with therapists based on their responses. The error message I receive is `"db._checkNotDeleted is not a function"`.
Here's the relevant part of my `match.js` file where the error occurs:
```javascript
// utils/match.js
const { getDatabase, ref, get, query, orderByChild, equalTo } = require('firebase/database');
const { firebaseApp } = require('./firebaseConfig');
// Function to calculate the score based on responses
const calculateMatchScore = (clientResponses, therapistResponses) => {
let score = 0;
// Score for language compatibility
const sharedLanguages = clientResponses.languages.filter(language =>
therapistResponses.languages.includes(language)
);
score += sharedLanguages.length * 10; // Assign 10 points for each shared language
// Score for communication preferences
const sharedCommunicationMethods = clientResponses.communication.filter(method =>
therapistResponses.communication.includes(method)
);
score += sharedCommunicationMethods.length * 5; // Assign 5 points for each shared communication method
// Score for therapy experiences
const matchingTherapyExperiences = clientResponses.therapy_experiences.filter(experience =>
therapistResponses.therapy_experiences.includes(experience)
);
score += matchingTherapyExperiences.length * 15; // Assign 15 points for each matching therapy experience
return score;
};
// Function to match clients with therapists
const matchClientsWithTherapists = async () => {
const db = getDatabase(firebaseApp);
const usersRef = ref(db, 'users');
const responsesRef = ref(db, 'responses');
const matchesRef = ref(db, 'matches');
// Retrieve all therapists and clients
const therapistsSnapshot = await get(query(usersRef, orderByChild('role'), equalTo('therapist')));
const clientsSnapshot = await get(query(usersRef, orderByChild('role'), equalTo('client')));
if (!therapistsSnapshot.exists() || !clientsSnapshot.exists()) {
throw new Error('No therapists or clients found');
}
const therapists = therapistsSnapshot.val();
const clients = clientsSnapshot.val();
// Create an array to store potential matches
let potentialMatches = [];
// Calculate scores for each client-therapist pair
for (const clientId in clients) {
const clientResponsesSnapshot = await get(ref(responsesRef, clientId));
const clientResponses = clientResponsesSnapshot.val();
for (const therapistId in therapists) {
const therapistResponsesSnapshot = await get(ref(responsesRef, therapistId));
const therapistResponses = therapistResponsesSnapshot.val();
const score = calculateMatchScore(clientResponses, therapistResponses);
potentialMatches.push({ clientId, therapistId, score });
}
}
// Sort potential matches by score
potentialMatches.sort((a, b) => b.score - a.score);
// Distribute clients to therapists, ensuring no therapist has more than 5 clients
// and that clients are evenly distributed
const matches = {};
potentialMatches.forEach(match => {
if (!matches[match.therapistId]) {
matches[match.therapistId] = [];
}
if (matches[match.therapistId].length < 5) {
matches[match.therapistId].push(match.clientId);
}
});
// Save the matches to the database
for (const therapistId in matches) {
const therapistMatchesRef = push(ref(matchesRef, therapistId));
await set(therapistMatchesRef, matches[therapistId]);
}
return matches;
};
module.exports = { matchClientsWithTherapists };
```
And here's my firebaseConfig.js:
```javascript
//utils/firebaseConfig.js
require('dotenv').config();
const { initializeApp } = require("firebase/app");
const { getDatabase, ref, set, get } = require("firebase/database");
const firebaseConfig = {
apiKey: process.env.FIREBASE_API_KEY,
authDomain: process.env.FIREBASE_AUTH_DOMAIN,
databaseURL: process.env.FIREBASE_DATABASE_URL,
projectId: process.env.FIREBASE_PROJECT_ID,
storageBucket: process.env.FIREBASE_STORAGE_BUCKET,
messagingSenderId: process.env.FIREBASE_MESSAGING_SENDER_ID,
appId: process.env.FIREBASE_APP_ID
};
const firebaseApp = initializeApp(firebaseConfig);
module.exports = {firebaseApp}
```
and then i use the function like this
```javascript
// routes/router.js
const express = require('express');
const router = express.Router();
const { matchClientsWithTherapists } = require('../utils/match');
//other routes
// Route to trigger the matching process
router.post('/api/match', async (req, res) => {
try {
const matches = await matchClientsWithTherapists();
res.status(200).json({ success: true, matches });
} catch (error) {
console.error('Error during matching process:', error);
res.status(500).json({ message: error.message });
}
});
```
I've confirmed that my .env file is correctly set up with all the necessary Firebase configuration variables. I've also made sure that require('dotenv').config(); is called at the beginning of my application's entry point.
I'm using Firebase SDK version 10.7.2 and Node.js version 18.17.0
I've tried reinstalling node modules, checking Firebase imports, and ensuring there are no network issues. I've also checked the Firebase Console for any configuration issues.
Can anyone help me understand why this error is occurring and how to fix it?
Thank you in advance!
I used the same `firebaseConfig` in another function and it works:
```javascript
//utils/register.js
const { hashPassword } = require('./hash');
const crypto = require('crypto');
const { getDatabase, ref, set, get, push,equalTo,orderByChild,query } = require('firebase/database');
const { firebaseApp } = require('./firebaseConfig');
const sendRegistrationEmail = require('./sendRegistrationEmail');
const registerUser = async (firstName, lastName,email,phoneNumber,username,password,role,responses) => {
const db = getDatabase(firebaseApp);
// Query the database for a user with the specified email
const usersRef = ref(db, 'users');
const emailQuery = query(usersRef, orderByChild('email'), equalTo(email));
const emailSnapshot = await get(emailQuery);
// Check if the email already exists
if (emailSnapshot.exists()) {
throw new Error('Email already in use');
}
// Check if the username already exists
const usernameRef = ref(db, `usernames/${username}`);
const usernameSnapshot = await get(usernameRef);
if (usernameSnapshot.exists()) {
throw new Error('Username already taken');
}
try {
const hashedPassword = await hashPassword(password);
// Generate a new unique userId
const newUserRef = push(ref(db, 'users'));
const userId = newUserRef.key;
//Generate a verification token for the new user
const verificationToken = crypto.randomBytes(20).toString('hex');
// Save user details to the database
const userData = {
userId,
username,
email,
phoneNumber,
role,
password: hashedPassword,
firstName,
lastName,
isVerified:false,
verificationToken,
};
await set(newUserRef, userData);
// Also, save the username to prevent duplicates
await set(usernameRef, { userId });
// Save responses to a separate document (table) referenced by userId
if (responses) {
const responsesRef = ref(db, `responses/${userId}`);
await set(responsesRef, responses);
}
await sendRegistrationEmail(email, firstName, verificationToken);
return { success: true, message: 'User registered successfully', userId };
} catch (error) {
console.log(`Error while trying to create a new user: ${error.message}`);
console.error('Error registering user:', error);
throw error;
}
};
module.exports = { registerUser };
```
However, in the matching function I get:
```bash
Error during matching process: TypeError: db._checkNotDeleted is not a function
at ref (C:\Users\USER\Documents\GitHub\backend\node_modules\@firebase\database\dist\index.node.cjs.js:12964:8)
at matchClientsWithTherapists (C:\Users\USER\Documents\GitHub\backend\utils\match.js:55:47)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async C:\Users\USER\Documents\GitHub\backend\routes\router.js:141:21
``` |
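For context, the stack trace points at the `ref(...)` call on line 55 of `match.js`. A likely cause: `ref(responsesRef, clientId)` passes a *Reference* where the modular API expects the *Database* instance, and the SDK's internal `db._checkNotDeleted` guard is what throws. A stand-alone sketch (mocking the SDK's check — this is not the real firebase code) showing the failing pattern and the `child()` fix:

```javascript
// Mock of the modular Realtime Database call shape (assumption: not the
// real firebase SDK, just the argument checking it performs).
function ref(db, path) {
  // The real ref(db, path) calls db._checkNotDeleted; a Reference has no
  // such method, which produces the exact error from the question.
  if (typeof db._checkNotDeleted !== "function") {
    throw new TypeError("db._checkNotDeleted is not a function");
  }
  return { db, path };
}
function child(parent, path) {
  return { db: parent.db, path: parent.path + "/" + path };
}

const db = { _checkNotDeleted: () => {} }; // stands in for getDatabase(app)
const responsesRef = ref(db, "responses");

// Reproduces the failing call from match.js:
let failed = false;
try {
  ref(responsesRef, "client1"); // wrong: first arg must be the Database
} catch (e) {
  failed = true;
}

// The fix: use child(parentRef, path) — or ref(db, `responses/${id}`):
const clientRef = child(responsesRef, "client1");
console.log(failed, clientRef.path);
```

In the real code that would mean replacing `ref(responsesRef, clientId)` with `child(responsesRef, clientId)` (and likewise for the therapist and matches lookups), with `child` imported from `firebase/database`.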
Firebase Realtime Database Error: "db._checkNotDeleted is not a function" in Node.js |
|node.js|firebase|firebase-realtime-database| |
null |
I'm wondering if anyone can explain the difference in behaviour between GCC/Clang and MSVC for this code:
```cpp
#include <iostream>
class Base
{
public:
Base(int* a) : ptr{a} { std::cout << "Base construct\n"; }
~Base() { std::cout << "Base destruct\n"; }
int* ptr;
};
class Derived : public Base
{
public:
Derived() : Base(&val) { std::cout << "Derived construct\n"; }
~Derived() { std::cout << "Derived destruct\n"; }
protected:
int val{2};
};
Derived GiveMeDerived() {
return {};
}
void SomeFunc(Base b) {
std::cout << "SomeFunc " << *b.ptr << "\n";
}
int main()
{
SomeFunc(GiveMeDerived());
return 0;
}
```
In GCC/Clang, I see the expected printout order of:
```
Base construct
Derived construct
SomeFunc 2
Base destruct
Derived destruct
Base destruct
```
but in MSVC 2022 (MSVC 19.39.33523.0), I get:
```
Base construct
Derived construct
SomeFunc 2
Base destruct
Base destruct
Derived destruct
Base destruct
```
Where does the extra base destruct come from?
When I change the code to try to track creation:
```cpp
#include <iostream>
class Base
{
public:
// Base() { std::cout << "Default\n"; }
Base(int* a) : ptr{a} { std::cout << "Base construct\n"; }
~Base() { std::cout << "Base destruct " << default_des << "\n"; }
int* ptr;
std::string default_des{"default"};
};
class Derived : public Base
{
public:
Derived() : Base(&val) { std::cout << "Derived construct\n"; }
~Derived() { std::cout << "Derived destruct\n"; }
protected:
int val{2};
};
Derived GiveMeDerived() {
Derived a;
a.default_des = "From GiveMeDerived\n";
return a;
}
void SomeFunc(Base b) {
std::cout << "SomeFunc " << *b.ptr << "\n";
b.default_des = "inside SomeFunc";
}
int main()
{
SomeFunc(GiveMeDerived());
return 0;
}
```
MSVC seems to "fix" itself and does not have the extra destructor:
```
Base construct
Derived construct
SomeFunc 2
Base destruct inside SomeFunc
Derived destruct
Base destruct From GiveMeDerived
```
|
|django|django-views|django-templates| |
Your algorithm is wrong.
Look at this:
for (i = 0; i < N; i += 8)
for (j = 0; j < M; j += 4) {
if (j == i || j == i + 4) { /* same block */
For an 8x8 matrix both `N` and `M` are 8, so it's like:
for (i = 0; i < 8; i += 8)
for (j = 0; j < 8; j += 4) {
if (j == i || j == i + 4) { /* same block */
So which values will `i` and `j` take:
i==0, j==0
i==0, j==4
    // Now the inner loop ends as j becomes 8 due to j += 4
    // Now the outer loop ends as i becomes 8 due to i += 8
So the `if` statement will only be executed for (i,j) as (0,0) and (0,4). In both cases the `if` condition will be true. As the last statement in the body of the `if` is a `continue`, the code will never reach:
for (i1 = i; i1 < i + 8 ; i1++)
for (j1 = j; j1 < j + 4 ; j1++)
B[j1][i1] = A[i1][j1];
In other words - your algorithm is wrong.
Another issue... when `j` is 4, consider this:
for (j1 = j; j1 < j + 4; j1++) {
B[j1][j1] = A[j1][j1]; /* i1 == j1 */
B[j1][j1+4] = A[j1+4][j1]; /* i1 == j1 + 4 */
}
So `j1` will take the values 4, 5, 6, 7 and you have
B[j1][j1+4] = A[j1+4][j1]; /* i1 == j1 + 4 */
^^^^
    oops... out of bounds, i.e. 8, 9, 10, 11
Some simple advice: add a `printf` whenever you write to `B`. Like:
B[j1][j1+4] = A[j1+4][j1];
    printf("Line %d: B[%d][%d] = A[%d][%d]\n", __LINE__, j1, j1+4, j1+4, j1);
Then it's easy to check all assignments to `B` for correctness.
|
I wrote the bash shell script below to check whether the input value is a character string or a number (using a mathematical function):
#!/bin/bash
uniq_value=$1
if `$(echo "$uniq_value / $uniq_value" | bc)` ; then
echo "Given value is number"
else
echo "Given value is string"
fi
The execution result is as follows:
$ sh -x test.sh abc
+ uniq_value=abc
+++ echo 'abc / abc'
+++ bc
Runtime error (func=(main), adr=5): Divide by zero
+ echo 'Given value is number'
Given value is number
There is an error like this:
> Runtime error (func=(main), adr=5): Divide by zero.
Can anyone please suggest how to rectify this error?
- The expected result for the input `abc123xy` should be `Given value is string`.
- The expected result for the input `3.045` should be `Given value is number`.
- The expected result for the input `6725302` should be `Given value is number`.
After this I will assign a series of values to `uniq_value` variable in a loop. Hence getting the output for this script is very important.
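For what it's worth, a regex test avoids `bc` (and the divide-by-zero) entirely; a sketch, assuming "number" means an optionally signed decimal like `3.045`:

```shell
#!/bin/bash
# Returns success if $1 looks like a decimal number (regex check, no bc).
is_number() {
  [[ $1 =~ ^[+-]?([0-9]+(\.[0-9]+)?|\.[0-9]+)$ ]]
}

check() {
  if is_number "$1"; then
    echo "Given value is number"
  else
    echo "Given value is string"
  fi
}

check abc123xy   # Given value is string
check 3.045      # Given value is number
check 6725302    # Given value is number
```

Note this needs `bash` (the `[[ =~ ]]` construct), so run it as `bash test.sh`, not `sh test.sh`.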
|
You can try something like that:
1. Get the full URL:
window.location
2. Get the only protocol:
window.location.protocol
3. Get the host (without port):
window.location.hostname
4. Get the host + port:
window.location.host
5. Get the host and protocol:
window.location.origin
6. Get pathname or directory without protocol and host:
var url = 'http://www.example.com/somepath/path2/path3/path4';
var pathname = new URL(url).pathname;
alert(pathname);
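The same fields are also available outside the browser through the WHATWG `URL` class (built into modern browsers and Node), which mirrors `window.location` (the URL below is just an example):

```javascript
// Parse an arbitrary URL string; the properties match window.location's.
const url = new URL("http://www.example.com:8080/somepath/path2/path3/path4");

console.log(url.protocol); // "http:"
console.log(url.hostname); // "www.example.com"
console.log(url.host);     // "www.example.com:8080"
console.log(url.origin);   // "http://www.example.com:8080"
console.log(url.pathname); // "/somepath/path2/path3/path4"
```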
|
Use
SELECT `Order ID` FROM Returnedsss;
or
SELECT [Order ID] FROM Returnedsss;
The issue is that single quotes enclose a literal character string, not a component name (column, table, etc.); use backticks (MySQL) or square brackets (SQL Server/MS Access) to quote identifiers. |
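To illustrate the difference, here is a small sketch using SQLite through Python's standard library (purely as a stand-in for the original database; the table contents are made up — SQLite happens to accept both double quotes and brackets for identifiers):

```python
import sqlite3

# A table mirroring "Returnedsss" with a space in the column name.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE Returnedsss ("Order ID" TEXT)')
con.execute("INSERT INTO Returnedsss VALUES ('A-100')")

# Single quotes produce a string literal, not the column:
literal = con.execute("SELECT 'Order ID' FROM Returnedsss").fetchone()[0]

# Quoted identifiers return the column's value:
quoted = con.execute('SELECT "Order ID" FROM Returnedsss').fetchone()[0]
bracketed = con.execute("SELECT [Order ID] FROM Returnedsss").fetchone()[0]

print(literal, quoted, bracketed)  # Order ID A-100 A-100
```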
Given a linear system Ax=b, x = [x1; x2], where A and b are given and x1 is also given. A is symmetric. I want to compute the gradient of x1 with respect to A and b, i.e. dx1/dA and dx1/db. It seems that torch.autograd.backward() can only compute the gradient by solving the whole linear system, but I want to make this more efficient by not solving for x1 again, since it is already given. Is that feasible? If so, how can I implement it in PyTorch?
For now, I only know how to get the gradient by solving the whole system:
```
import torch

m1 = 10
n1 = 10
m3 = 5
n3 = n1
m2 = m1
n2 = m1 + m3 - n1
m4 = m3
n4 = n2
A = torch.rand((m1+m3,n1+n2))
A1 = A[:m1,:n1]
A2 = A[:m1,n1:]
A3 = A[m1:,:n1]
A4 = A[m1:,n1:]
x1 = torch.ones((n1,1)).clone().detach().requires_grad_(True)
x2_gt = torch.rand((n2,1))
b1 = (A1 @ x1 + A2 @ x2_gt).detach().requires_grad_(True)
b2 = (A3 @ x1 + A4 @ x2_gt).detach().requires_grad_(True)
b = torch.vstack((b1,b2))
x = torch.linalg.solve(A,b)
x.backward(torch.ones(m1+m3,1))
``` |