We have a multi-tenant B2B SaaS application and we're exploring use of the Cosmos DB core SQL API. Our plan is a single database per Cosmos account, with 499 containers. We chose a container-per-tenant approach, something like the following:
|-- CosmosDB Account 1
| |-- Database A
| |-- Container Tenant 1
| |-- Container Tenant 2
| |-- ...
| |-- Container Tenant 499
|
|-- CosmosDB Account 2
| |-- Database B
| |-- Container Tenant 500
| |-- Container Tenant 501
| |-- ...
| |-- Container Tenant 998
This will keep growing up to the roughly 500,000-container limit in a single Azure subscription, based on the documentation on resource limits: [limits][1]
When the limit is reached, a new Azure subscription will be created and the same design applied as our tenants grow. A master database will also exist on our side to keep track of tenants; it will contain a document per tenant recording which Azure subscription, account, and database that tenant belongs to. We plan to use an ASP.NET Web API; it will be a monolith shared across all tenants, responsible for reads/writes for each tenant's container across Cosmos accounts.
We'll be using the [.NET SDK for Azure Cosmos DB for the core SQL API][2]. My question is what is the recommended approach on how to handle the client instance for the cosmos SDK? I found some information which talks about [Best practices for multi-tenant applications][3] for the cosmos SDK. Which states the following:
> Applications that distribute usage across multiple tenants where each
> tenant is represented by a different database, container, or partition
> key within the same Azure Cosmos DB account should use a single client
> instance. A single client instance can interact with all the
> databases, containers, and partition keys within an account, and it's
> best practice to use the singleton pattern.
>
> However, when each tenant is represented by a different Azure Cosmos
> DB account, it's required to create a separate client instance per
> account. The singleton pattern still applies for each client (one
> client for each account for the lifetime of the application), but if
> the volume of tenants is high, the number of clients can be difficult
> to manage. Connections can increase beyond the limits of the compute
> environment and cause connectivity issues.
So based on the recommendations, with my design as context, does that mean I should have multiple CosmosClient instances, one for each Cosmos account? If so, as we scale, let's take the scenario where we have 500,000 tenants: this would mean about 1,000 Cosmos accounts, which would translate to 1,000 CosmosClient instances in a single ASP.NET Web API. I'm having trouble understanding the implications of such a design; is this even feasible?
Is it possible to just use a single CosmosClient instance for all accounts? I did find the following on Stack Overflow, but is this approach recommended? [Dependency Injection & connection strings / Multiple instances of a singleton][4], in which a user posted:
> In the end, is this case, the best solution was the approach that was
> suggested by Mr. T.
> https://devblogs.microsoft.com/cosmosdb/httpclientfactory-cosmos-db-net-sdk/
>
> I'm now still using one CosmosClient, Scoped. Which allows dynamic use
> of endpoints.
>
> By injecting the `IHttpClientFactory` and setting the `CosmosClientOptions` like this:
>
>     { HttpClientFactory = () => _httpClientFactory.CreateClient("cosmos") });
>
> we are now making full use of the HttpClient and its ability to reuse ports.
It'd be great to get some code examples and general guidance.
[1]: https://learn.microsoft.com/en-us/azure/cosmos-db/concepts-limits
[2]: https://github.com/Azure/azure-cosmos-dotnet-v3
[3]: https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/best-practice-dotnet#best-practices-for-multi-tenant-applications
[4]: https://stackoverflow.com/questions/70898808/dependency-injection-connection-strings-multiple-instances-of-a-singleton |
How to handle multiple Cosmos DB accounts with a single CosmosClient. Questions on multi-tenancy |
|asp.net|azure-cosmosdb|azure-cosmosdb-sqlapi| |
I tried your solution with my matrices:
```
W = np.array([[0.6 , 0.1 , 0.1 , 0.1 , 0.1],
              [0.1 , 0.6 , 0.1 , 0.1 , 0.1],
              [0.1 , 0.1 , 0.6 , 0.1 , 0.1],
              [0.1 , 0.1 , 0.1 , 0.6 , 0.1],
              [0.1 , 0.1 , 0.1 , 0.1 , 0.6]])
u = np.array([.6, .5, .6, .2, .1])
M = np.array([[-0.75 , 0    , 0.75 , 0.75 , 0],
              [0     , -0.75, 0    , 0.75 , 0.75],
              [0.75  , 0    , -0.75, 0    , 0.75],
              [0.75  , 0.75 , 0.0  , -0.75, 0],
              [0     , 0.75 , 0.75 , 0    , -0.75]])
```
and your code generated the right solution:

```
This is h
[ 0.5 0.45 0.5 0.3 0.25]
This is our steady state v_ss
[ 1.663354 1.5762684 1.66344153 1.56488258 1.53205348]
```
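For reference, the printed h is consistent with h = W·u; here is a quick sanity check of just that step (I'm assuming that relationship from the printed values — the v_ss part comes from the iterative simulation, so it isn't reproduced here):

```python
import numpy as np

# Same W and u as above
W = np.array([[0.6, 0.1, 0.1, 0.1, 0.1],
              [0.1, 0.6, 0.1, 0.1, 0.1],
              [0.1, 0.1, 0.6, 0.1, 0.1],
              [0.1, 0.1, 0.1, 0.6, 0.1],
              [0.1, 0.1, 0.1, 0.1, 0.6]])
u = np.array([0.6, 0.5, 0.6, 0.2, 0.1])

h = W @ u
print(h)  # matches the "This is h" line above, i.e. [0.5, 0.45, 0.5, 0.3, 0.25]
```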
Maybe the problem is with the test on Coursera. Have you tried contacting them on the forum? |
Your new code just declares `extern` for those symbols so that the compiler will believe they exist. You're not actually defining any variables at all. Hence why linking fails.
You should define the actual variables in the test code that uses them.
e.g. I have a test_rtc.c file in one of my projects, which contains:
```c
RTC_HandleTypeDef hrtc;
ADC_HandleTypeDef hadc1;
```
It sounds like you might want to have something like the following in your tests:
```c
static ADC1_Type MyADC1;
ADC1_Type *ADC1 = &MyADC1;
```
etc.
|
## SUGGESTION
Based on the information provided, that being:
> I want to make a dropdown list with 20 lines where the items can come up even by typing the product code or name...
What I can suggest instead is dynamic code that updates either the provided item code or item name (from a given drop-down) to its corresponding value, based on the information from the item database, as shown in the screenshot:
[](https://i.stack.imgur.com/qR9Lk.png)
**[Installing the onEdit Trigger](https://developers.google.com/apps-script/guides/triggers/installable#manage_triggers_manually)**
To achieve this, we will first install the `onEdit` trigger on your spreadsheet, which can be achieved through these steps:
1. On your Apps Script editor, click the Triggers tab on the left menu (the clock/alarm icon)
2. At the bottom right part of the page, click "Add Trigger"
3. Select and configure the type of trigger you want to create, and then click Save.
Your setup should look like this: [](https://i.stack.imgur.com/0I35x.png)
**The Script**
I've made no significant changes to your provided script, and instead created a new function `searchItem()` specifically for the onEdit trigger; see the full script below:
```
function searchItem(){
var ss = SpreadsheetApp.getActiveSpreadsheet();
var checkoutSheet = ss.getSheetByName("SheetName");
// Open the Data spreadsheet
var dataSS = SpreadsheetApp.openById("googlesheetID").getSheetByName("Data");
var dataVals = dataSS.getRange("A2:B").getValues().filter(x => x != ""); // removes all empty cells on dataVals
var cell = ss.getActiveCell().getA1Notation(); // gets the A1 notation of the cell
var item = checkoutSheet.getRange(cell).getValue();
if (cell == "B2"){ // the value changed is on the Items dropdown list
dataVals.forEach(x => x[1] == item ? checkoutSheet.getRange("A2").setValue(x[0]) : x);
}
else if (cell == "A2"){ // the value changed is on the Item Code column
for(var i = 0; i < dataVals.length; i++){
if (dataVals[i][0] == item){
checkoutSheet.getRange("B2").setValue(dataVals[i][1]);
break;
}
else if(i == dataVals.length - 1){
// sets a blank value to denote that no item code was found in the database
checkoutSheet.getRange("B2").setValue("");
}
}
}
}
function checkout() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var checkoutSheet = ss.getSheetByName("SheetName");
// Get selected item and quantity from checkout sheet
var selectedItem = checkoutSheet.getRange("B2").getValue();
var quantity = checkoutSheet.getRange("C2").getValue();
// Open the Data spreadsheet
var dataSS = SpreadsheetApp.openById("googlesheetID");
var dataSheet = dataSS.getSheetByName("Data");
// Find the row corresponding to the selected item in the data sheet
var itemRow = getItemRow(selectedItem, dataSheet);
if (itemRow !== -1) {
// Update quantity in data sheet
var currentQuantity = dataSheet.getRange(itemRow, 4).getValue();
dataSheet.getRange(itemRow, 4).setValue(currentQuantity - quantity);
} else {
Browser.msgBox("Item not found in inventory.");
}
}
function getItemRow(item, sheet) {
var dataRange = sheet.getRange("B:B"); // Assuming Item Name is in column B
var values = dataRange.getValues().filter(x => x != ""); // removes all empty cells on dataRange
for (var i = 0; i < values.length; i++) {
if (values[i][0] === item) {
return i + 1; // Return the row number (adding 1 to match sheet indexing)
}
}
return -1; // Return -1 if item not found
}
```
What the function does is a 2-way updating functionality between the item code and item list:
* It first gets the updated data from the checkout sheet using the `getActiveCell()` and `getA1Notation()` functions, to determine if the data updated is the item code or the item name (from the drop-down list)
* The data is then compared onto the database, to see if the item code or item name is included
* If the item is found, it will return its corresponding item code on the checkout sheet
* Likewise, if the item code is found, the drop-down list will show the corresponding item assigned to that code; however, if the provided item code is not found on the database, a blank will be returned on the drop-down item list
And since the `searchItem` function is attached to the installable `onEdit` trigger, this means that it will automatically run whenever a change is made on the A2 cell (for the item code) or B2 cell (the item drop-down) and apply the necessary change accordingly.
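Stripped of the Sheets-specific calls, the two-way lookup in `searchItem()` boils down to plain JavaScript like the following (the codes and names below are made-up sample data, not from your sheet):

```javascript
// dataVals mirrors what getRange("A2:B").getValues() returns: [code, name] pairs.
const dataVals = [
  ["IC-001", "Apple"],
  ["IC-002", "Banana"],
];

// Edit on B2 (item name chosen in the drop-down) -> look up its code for A2.
function codeForName(name) {
  const row = dataVals.find(x => x[1] === name);
  return row ? row[0] : "";
}

// Edit on A2 (item code typed) -> look up the name for B2;
// "" denotes that the code was not found in the database.
function nameForCode(code) {
  const row = dataVals.find(x => x[0] === code);
  return row ? row[1] : "";
}

console.log(codeForName("Banana")); // IC-002
console.log(nameForCode("IC-999")); // "" (not in the database)
```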
## OUTPUT
**Fig.1 2-way updating functionality (data are included in the database)** [](https://i.stack.imgur.com/4UwQr.gif)
**Fig.2 Item Code is not present in the database** [](https://i.stack.imgur.com/v2lq3.gif)
**Fig.3 Full Implementation** [![enter image description here][1]][1]
## REFERENCES
* [getActiveCell()](https://developers.google.com/apps-script/reference/spreadsheet/sheet#getactivecell)
* [getA1Notation()](https://developers.google.com/apps-script/reference/spreadsheet/range#geta1notation)
[1]: https://i.stack.imgur.com/8ScW9.gif |
Why is my steady state output different from Coursera's solution? |
I have a question: when I connect PostgreSQL with Keycloak using my own realm and then list the tables, I get the tables of both realms, the master realm and the one I just created.
Then I added a new user via PostgreSQL, and I couldn't find the user I just created in either realm.
I'm viewing the tables in pgAdmin 4.
So far this is my Docker Compose file:
```yaml
version: '3.8'

services:
  postgres:
    image: postgres:15.6
    container_name: postgres_db
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: test
      POSTGRES_USER: test
      POSTGRES_PASSWORD: password

  keycloak_web:
    image: quay.io/keycloak/keycloak:23.0.7
    volumes:
      - ./realm-export.json:/opt/keycloak/data/import/realm.json
    container_name: keycloak_web
    environment:
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
      KC_DB_USERNAME: test
      KC_DB_PASSWORD: password
      KC_HOSTNAME: localhost
      KC_HOSTNAME_STRICT: false
      KC_HOSTNAME_STRICT_HTTPS: false
      KC_LOG_LEVEL: info
      KC_METRICS_ENABLED: true
      KC_HEALTH_ENABLED: true
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    command:
      - start-dev
      - --import-realm
    depends_on:
      - postgres
    ports:
      - 8180:8080

volumes:
  postgres_data:
```
|
Writing Waveform data into CSV file in LabVIEW |
|csv|export-to-csv|labview|data-processing| |
I see a lot of questions about web development: people asking about their website, JavaScript, HTML, etc. But my question is, have all these people figured out how to set up a server and host a website themselves? I have never figured out how to do any of that. They would not be asking their questions if they had no website to work on, right? |
Is it really that easy to get a web server, register a domain name, etc? |
Without seeing your code it's impossible to say for sure, but that error almost always means that you imported something incorrectly. Check your imports, and for all of the imports that come from your own code rather than from `node_modules`, go into that file and make sure that you actually exported what you're trying to import. This is one of the few errors that is consistently accurate in pointing you to the right problem.
Also, remember that if you export something like this:
```ts
export const x = ...
```
you import it like this:
```ts
import {x} from ".."
```
but if you export it like:
```ts
const x = ...
export default x
```
you import it like:
```ts
import x from ".."
```
|
I am working with the Python programming language and want to convert a .py file to an .ipa (iOS), but I don't have an Apple account and I'm missing the iOS SDK in Xcode.
I tried searching for Xcode on the forums, and they all pointed to the download at Apple, but I don't have an Apple account. |
How to install Xcode on macOS High Sierra without an Apple account |
|python|ios|xcode| |
I created a React component called `MainNavigation` and using it under my `root.tsx` as my main navigation for my remix project.
[![enter image description here][1]][1]
When I apply my CSS to another React component which isn't even a part of the main navigation, somehow my main navigation bar moves along with it. Also, the components are overlapping with the navigation instead of respecting each other's boundaries.
The MainNavigation `ul` class,

```
<ul className="border border-blue-400 flex fixed w-screen mx-auto my-auto py-3">
```
should be respected by other components, such as this X component:

```
<section className="container mx-auto">
```
As soon as I add `m-2` on the X component, i.e.

```
<section className="container mx-auto m-2">
```
the navigation bar moves below,
[![enter image description here][2]][2]
instead of staying on top. How can I fix this? Thanks.
[1]: https://i.stack.imgur.com/MjsNQ.png
[2]: https://i.stack.imgur.com/TpogL.png |
tailwind CSS applied on react component moves the navigation bar along in remix |
|css|reactjs|tailwind-css|remix.run|tailwind-elements| |
I'm creating a Google log-in extension for a personal project. I've set up functionality that lets me log in to the extension with my Google account, and it works. I'm using an initial HTML file called "popup.html" to hold the logic of Google's authentication by calling "popup.js" as the backend script: it displays a "Waiting for Log-in" message and then, after a successful sign-in, sets that `<div>`'s contents to the contents of another HTML file called "authorized.html" via the innerHTML property. The issue is that "authorized.html"'s own script, "authorized.js", will not print anything to my console, nor will it even throw an error there, despite the other contents of "authorized.html" being displayed successfully. Only the relevant parts of my code are below, but I'd like to know why I'm having this issue and how to circumvent it. Thank you.
popup.html:
```
<!DOCTYPE html>
<html>
<head>
<title>Google OAuth Sign-In</title>
<script src="popup.js"></script>
</head>
<body>
<div id="Sign-In TempUI" class="content-body">Waiting for Google Sign-In</div >
</body>
</html>
```
popup.js:
```
console.log('popup.js loaded');
document.addEventListener('DOMContentLoaded', function () {
const NewGUI = document.getElementById('Sign-In TempUI');
chrome.identity.getAuthToken({ interactive: true }, function (token) {
if (chrome.runtime.lastError) {
console.error(chrome.runtime.lastError.message);
return;
}
console.log('Token acquired:', token);
loadAuthorizedUI(token);
});
function loadAuthorizedUI(token) {
console.log('Debug:', token);
fetch('authorized.html')
.then(response => response.text())
.then(html => {
console.log('HTML content:', html);
NewGUI.innerHTML = html;
})
.catch(error => console.error('Error fetching or processing HTML:', error));
}
});
```
authorized.html:
```
<!-- Page Displayed after recieving token from OAuthetication. Should display GUI for Email handling -->
<!DOCTYPE html>
<html>
<head>
<title>Authorized UI</title>
</head>
<body>
<h1>Welcome! You are now authorized.</h1>
<ul id="emailList">
</ul>
<script src="authorized.js"></script>
<div>Hit end</div>
</body>
</html>
```
authorized.js:
```
//throw new Error('This is a test error from authorized.js');
console.log('authorized.js loaded');
```
Before, I was adding another document.addEventListener('DOMContentLoaded', function ()) and then trying to replace the contents of emailList after getting its element by ID, but it won't even load authorized.js. console.log doesn't print anything, and the test exception doesn't actually display anything in the console either. However, when I move `<script src="authorized.js"></script>` to popup.html, it suddenly runs, albeit with errors. My biggest confusion is that `<h1>Welcome! You are now authorized.</h1>` and `<div>Hit end</div>` display in my GUI, but the script won't even print, as if it's being skipped |
Understanding throughput of simd sum implementation x86 |
|x86|simd| |
I have JSON rows whose keys and value types can change.
The changing keys in the examples are
`015913d2e43d41ef98e6e5c8dc90cd09_2_1` and
`e8c93befe4a34bcabbf604e352a41a2d_2_1`,
and the type of the `answer` value also changes:
a list (Row 1):
```
"answer": [
"<i>GET</i>\n",
"<i>PUT</i>\n",
"<i>POST</i>\n",
"<i>TRACE</i>\n",
"<i>HEAD</i>\n",
"<i>DELETE</i>\n"
],
```
text (Row 2):
```
"answer": "Может быть во всех",
```
Examples of rows:
row 1
```
{
"event": {
"submission": {
"015913d2e43d41ef98e6e5c8dc90cd09_2_1": {
"question": "Какие виды <i>HTTP</i> запросов могут внести изменения на сервере (в общем случае)?",
"answer": [
"<i>GET</i>\n",
"<i>PUT</i>\n",
"<i>POST</i>\n",
"<i>TRACE</i>\n",
"<i>HEAD</i>\n",
"<i>DELETE</i>\n"
],
"response_type": "choiceresponse",
"input_type": "checkboxgroup",
"correct": false,
"variant": "",
"group_label": ""
}
}
}
}
```
row 2
```
{
"event": {
"submission": {
"e8c93befe4a34bcabbf604e352a41a2d_2_1": {
"question": "В запросах какого типа может быть использована <i>RCE</i>?",
"answer": "Может быть во всех",
"response_type": "multiplechoiceresponse",
"input_type": "choicegroup",
"correct": true,
"variant": "",
"group_label": ""
}
}
}
}
```
[Here is the Image in Power Query](https://i.stack.imgur.com/cKe3i.jpg)
How can I extract data from the List in Power Query?
If I write this
`= Table.AddColumn(#"Duplicated Column", "answer_s", each Record.Field([raw_event][event][submission],[question_id])[answer]{0})`
I get an error with text-type answers:
[image, error with text types](https://i.stack.imgur.com/Lg3rS.jpg)
If I write this:
`= Table.AddColumn(#"Duplicated Column", "answer_s", each Record.Field([raw_event][event][submission],[question_id])[answer])`
[I get Lists](https://i.stack.imgur.com/5ByC8.jpg)
I have already asked a question about parsing this JSON:
https://stackoverflow.com/questions/78205099/json-parsing-with-changing-keys/78206024#78206024
This is the next problem I need to solve.
|
I followed these steps to install the ODBC driver:
```bash
if ! [[ "18.04 20.04 22.04 23.04" == *"$(lsb_release -rs)"* ]];
then
    echo "Ubuntu $(lsb_release -rs) is not currently supported.";
    exit;
fi

curl https://packages.microsoft.com/keys/microsoft.asc | sudo tee /etc/apt/trusted.gpg.d/microsoft.asc
curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list

sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18

# optional: for bcp and sqlcmd
sudo ACCEPT_EULA=Y apt-get install -y mssql-tools18
echo 'export PATH="$PATH:/opt/mssql-tools18/bin"' >> ~/.bashrc
source ~/.bashrc

# optional: for unixODBC development headers
sudo apt-get install -y unixodbc-dev
```
|
I'm trying to use Celery to handle the heavy task of creating a new Qdrant collection every time a new model is created: I need to extract the content of the file, create embeddings, and store them in the Qdrant DB as a collection. The problem is, I get the following error when I call `embeddings.embed_query` with `HuggingFaceEmbeddings` inside Celery.
```bash
celery-dev-1 | [2024-03-27 10:18:27,451: INFO/ForkPoolWorker-19] Load pretrained SentenceTransformer: sentence-transformers/all-mpnet-base-v2
celery-dev-1 | [2024-03-27 10:18:35,856: ERROR/MainProcess] Process 'ForkPoolWorker-19' pid:115 exited with 'signal 11 (SIGSEGV)'
celery-dev-1 | [2024-03-27 10:18:35,868: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV) Job: 3.')
celery-dev-1 | Traceback (most recent call last):
celery-dev-1 | File "/usr/local/lib/python3.10/site-packages/billiard/pool.py", line 1264, in mark_as_worker_lost
celery-dev-1 | raise WorkerLostError(
celery-dev-1 | billiard.einfo.ExceptionWithTraceback:
celery-dev-1 | """
celery-dev-1 | Traceback (most recent call last):
celery-dev-1 | File "/usr/local/lib/python3.10/site-packages/billiard/pool.py", line 1264, in mark_as_worker_lost
celery-dev-1 | raise WorkerLostError(
celery-dev-1 | billiard.exceptions.WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV) Job: 3.
celery-dev-1 | """
```
Here is the knowledge model when the task is called,
```python
class Knowledge(Common):
name = models.CharField(max_length=255, blank=True, null=True)
file = models.FileField(upload_to=knowledge_path, storage=PublicMediaStorage())
qd_knowledge_id = models.CharField(max_length=255, blank=True, null=True)
is_public = models.BooleanField(default=False)
#
def save(self, *args, **kwargs):
if self.pk is None:
collection_name = f"{self.name}-{datetime.now().strftime('%Y_%m_%d_%H_%M_%S')}"
process_files_and_upload_to_qdrant.delay(self.file.name, collection_name)
self.qd_knowledge_id = collection_name
super().save(*args, **kwargs)
```
here is the task and the functions it calls:
```python
@shared_task
def process_files_and_upload_to_qdrant(file_name, collection_name):
file_path = default_storage.open(file_name)
result = process_file(file_path, collection_name)
return result
def process_file(file : InMemoryUploadedFile, collection_name):
text = read_data_from_pdf(file)
chunks = get_text_chunks(text)
embeddings = get_embeddings(chunks)
client.create_collection(
collection_name=collection_name,
vectors_config=qdrant_models.VectorParams(
size=768, distance=qdrant_models.Distance.COSINE
),
)
client.upsert(collection_name=collection_name, wait=True, points=embeddings)
def read_data_from_pdf(file : InMemoryUploadedFile):
text = ""
pdf_reader = PdfReader(file)
for page in pdf_reader.pages:
text += page.extract_text()
return text
def get_text_chunks(texts: str):
text_splitter = CharacterTextSplitter(
separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len
)
chunks = text_splitter.split_text(texts)
return chunks
def get_embeddings(text_chunks):
from langchain_community.embeddings import HuggingFaceEmbeddings
from qdrant_client.http.models import PointStruct
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-mpnet-base-v2"
)
points = []
for chunk in text_chunks:
embedding = embeddings.embed_query(chunk) <---- The error occurs here
point_id = str(uuid.uuid4())
points.append(
PointStruct(id=point_id, vector=embedding, payload={"text": chunk})
)
return points
```
How do I approach this? Since the model is created as a many-to-many field, the response takes a long time, which is why I'm trying to move this into a Celery task. (Some delay when storing in Qdrant is acceptable; it just shouldn't affect the API response time.) The API works fine when I do it without Celery, but it's super slow.
I've tried splitting the work into multiple small Celery tasks, but I can't pass the embeddings or other non-JSON-serializable data into a task. I don't know how to approach this. |
[enter image description here](https://i.stack.imgur.com/kLJ2b.png) Here is the error. While connecting with SQL Server Authentication, I have tried both the username and password I used when creating the database in AWS RDS and the ones I saved for SQL Server Authentication, but both give the same error.
If anyone has faced the same issue, please suggest a solution. |
Encountered an error (ServiceUnavailable) from host runtime on Azure Function App |
|azure|terraform|azure-functions|hcl| |
The data structure for this is a [Trie][1] (Prefix Tree):
* Time efficiency: search, insertion, and deletion all run in **O(m)**, where m is the length of the string.
* Space efficiency: nodes for common prefixes are shared between strings, which can be a space advantage compared to storing each entire string.
```php
<?php
class TrieNode
{
public $childNode = []; // Associative array to store child nodes
public $endOfString = false; // Flag to indicate end of a string
}
class Trie
{
private $root;
public function __construct()
{
$this->root = new TrieNode();
}
public function insert($string)
{
if (!empty($string)) {
$this->insertRecursive($this->root, $string);
}
}
private function insertRecursive(&$node, $string)
{
if (empty($string)) {
$node->endOfString = true;
return;
}
$firstChar = $string[0];
$remainingString = substr($string, 1);
if (!isset($node->childNode[$firstChar])) {
$node->childNode[$firstChar] = new TrieNode();
}
$this->insertRecursive($node->childNode[$firstChar], $remainingString);
}
public function commonPrefix()
{
$commonPrefix = '';
$this->commonPrefixRecursive($this->root, $commonPrefix);
return $commonPrefix;
}
private function commonPrefixRecursive($node, &$commonPrefix)
{
if (count($node->childNode) !== 1 || $node->endOfString) {
return;
}
$firstChar = array_key_first($node->childNode);
$commonPrefix .= $firstChar;
$this->commonPrefixRecursive($node->childNode[$firstChar], $commonPrefix);
}
}
// Example usage
$trie = new Trie();
$trie->insert("Softball - Counties");
$trie->insert("Softball - Eastern");
$trie->insert("Softball - North Harbour");
$trie->insert("Softball - South");
$trie->insert("Softball - Western");
echo "Common prefix: " . $trie->commonPrefix() . PHP_EOL;
?>
```
Output:
Common prefix: Softball -
[Demo][2]
Trie Visualization (Green nodes are marked: endOfString):
[![enter image description here][3]][3]
[1]: https://en.wikipedia.org/wiki/Trie
[2]: https://onecompiler.com/php/428w3k8pe
[3]: https://i.stack.imgur.com/aY2XN.png |
```
interface Test {
a: string;
b: number;
c: boolean;
}
let arr: string[] = []
function test<S extends Pick<Test, 'a' | 'b'>, T extends keyof S>(val: T[]) {
arr = val // it's not ok
}
```
PlayGround: https://www.typescriptlang.org/play?ssl=19&ssc=27&pln=15&pc=1#code/JYOwLgpgTgZghgYwgAgCoQM5mQbwFDKHJwBcyWUoA5gNwFEBGZIArgLYPR1HIJkMB7AQBsIcEHQC+eGaOxwoUMhWoBtALrIAvMg0yYLEAjDABIZJCwAeAMrIIAD0ggAJhmQAFYAgDWV9FgANMgA5HAhyAA+oQwhAHzBqPZOEK7uPhAAngIwyDZxABQAbnDCZKgaAJS49IQKUNrIJcLIAPStyIaOSAAOkC72igJQAIR40jJgmT0odjpevv6YYMFhEdEhsXF0UzNojRnZuTYyCGZYTaXlGo169Y3NbR0A7gAWmcjA2MDuAj4A-EA
But if I don't use a function, it's OK:
```
type S = Pick<Test, 'a' | 'b'>;
type T = keyof S
const val: T[] = []
arr = val // why it is ok?
```
And when I declare the generic `S` separately, there is also no error:
```
type S = Pick<Test, 'a' | 'b'>;
function test<T extends keyof S>(val: T[]) {
arr = val // it is ok
}
``` |
About the error "incompatible types: String cannot be converted to char":
a `String` is declared using double quotes,
while a `char` is declared using single quotes.
Example:
```java
String myString = "javi";
char myCharacter = 'j';
```
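So if you have a one-character String, you can't assign it to a char directly; extract the character instead. A minimal sketch (generic names, not from the original code):

```java
public class CharExample {
    public static void main(String[] args) {
        String myString = "javi";
        // char bad = myString;                // does not compile: String cannot be converted to char
        char myCharacter = myString.charAt(0); // take the character at index 0 instead
        System.out.println(myCharacter);       // prints: j
    }
}
```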
When calling a method in my controller, I encounter an issue with the `@ParamConverter` annotation in Symfony. The specific error I am facing is: "`App\Entity\Recipe` object not found by the `@ParamConverter` annotation."
I have a `findPublicRecipe()` method in my `RecipeRepository`. This method is supposed to fetch public recipes based on an optional parameter, `$nbRecipes`. Here is the method in question.
```
function findPublicRecipe(?int $nbRecipes): array
{
$queryBuilder = $this->createQueryBuilder('r')
->where('r.isPublic = 1')
->orderBy('r.createdAt', 'DESC');
if ($nbRecipes !== 0 && $nbRecipes !== null) {
$queryBuilder->setMaxResults($nbRecipes);
}
return $queryBuilder->getQuery()->getResult();
}
```
I'm calling this `findPublicRecipe()` method in my controller without using the `@ParamConverter` annotation. Here's the code from my controller:
```
#[Route('/recipe/public', name: 'recipe.index.public', methods: ['GET'])]
public function indexPublic(
RecipeRepository $repository,
Request $request,
PaginatorInterface $paginator,
): Response {
$recipes = $repository->findPublicRecipe(null);
$recipes = $paginator->paginate(
$recipes,
$request->query->getInt('page', 1),
10
);
return $this->render('pages/recipe/indexPublic.html.twig', [
'recipes' => $recipes
]);
}
```
Despite this, I'm still receiving the error mentioned above. I've checked that the `Recipe` entity is correctly imported into my controller and that the namespace path is correct. Additionally, I've verified that the `Recipe` entity is appropriately defined and the annotations are appropriate. Could anyone please help me understand why this error happens and how to fix it? |
Troubleshooting @ParamConverter Issue in Symfony |
Regarding your Postman screenshot, you need to do a "simple" POST (no multipart form) and put the parameters in the URL (whereas for a POST they usually go in the `httpBody`).
The Alamofire solution would be then:
```
AF.request(url,
method: .post,
parameters: parameters,
encoding: URLEncoding(destination: .queryString),
headers: nil) //Put your headers there
```
With `URLEncoding(destination: .queryString)` you should be able to tell Alamofire to put the parameters inside the URL (i.e., as the query string).
Side note:
You can generate `cURL` commands with Postman, and you can also generate them with Alamofire. It can be really helpful to have a common ground for comparison when trying to match the working solution.
In Alamofire, you just need:
```
Alamofire.request(...)
.cURLDescription {
print("AF cURL generated:\($0)")
}
.response(...) //or .responseJSON(), but it's deprecated, prefers other ones
```
In your case, I got:
```
$ curl -v \
-X POST \
-H "Accept-Encoding: br;q=1.0, gzip;q=0.9, deflate;q=0.8" \
-H "User-Agent: ... Alamofire/5.4.0" \
-H "Accept-Language: en-US;q=1.0" \
"https://test.com/SaveUndertakingAckowledgement?SchCode=Test&UserID=11&UserType=3&UtID=kfnksdlnfks&key=ndjsfnjkds%3D%3D"
```
Except for the User-Agent, Accept-Language, and Accept-Encoding headers, which are specific to the app/device and usually not problematic, we see that the parameters have been added to the URL. |
|typescript|deployment|vercel|hapi.js| |
I have a Feign client to send requests to another Spring Boot microservice.
```
@ExceptionHandler(FeignException.class)
public ResponseEntity<ApiError> handleFeignException(FeignException ex, WebRequest webRequest) throws JsonProcessingException {
try{
ObjectMapper objectMapper = JsonMapper.builder()
.findAndAddModules()
.build();
ApiError apiError = objectMapper.readValue(ex.contentUTF8(), ApiError.class);
return new ResponseEntity<>(apiError, HttpStatusCode.valueOf(ex.status()));
} catch (JsonProcessingException e) {
log.error("Error deserializing Feign exception: {}", e.getMessage());
ApiError fallbackError = new ApiError("Error deserializing Feign exception", LocalDate.now(), ex.status(), webRequest.getDescription(false));
return ResponseEntity.status(HttpStatusCode.valueOf(ex.status()))
.body(fallbackError);
}
}
```
```
@AllArgsConstructor
@NoArgsConstructor
@Getter
@Setter
@ToString
public class ApiError {
private String message;
private LocalDate timeStamp;
private int status;
private String requestPath;
}
```
So I am handling the FeignException thrown by the Feign client during inter-service communication between Spring Boot microservices this way. Is there a better approach, or am I doing it correctly? |
Handling feign exception in Spring boot using RestControllerAdvice |
|java|spring-boot|microservices|spring-cloud|spring-cloud-feign| |
null |
In my case, this was fixed by quitting the current sbt shell (in IntelliJ). |
Asking for help, folks.
We are trying to create a custom application using the Houzez theme, and we want to auto-capture details of listed properties. Once the client clicks the "Fill Application Form" button, they will be redirected to the custom app: https://orcarealty.org/ApplicationForm?ManagerId=%2201%22&ManagerName=%22Jurry%22&ManagerEmailAddress=%22pajares.jurry22@gmail.com%22&PropertyId=%2202983%22&PropertyAddress=vancouver
Like for example if the client wants to rent this property: https://wordpress-1154453-4376425.cloudwaysapps.com/property/5-6-w-17th-avenue-vancouver-west/
I want to get the details dynamically:
<iframe style="border:none; margin:0; padding:0" width="100%" height="100%" src="https://orcarealty.org/ApplicationForm?ManagerId=&ManagerName=&ManagerEmailAddress=&PropertyId=&PropertyAddress="></iframe> |
Custom Application Form (Houzez) |
|c#|wordpress-rest-api| |
null |
Clerk throws an error to the user when it is not able to refresh the access token. This mostly happens when the user is offline. I do not want this because it ruins the user experience: the user might think the website is broken. I want to handle the error myself and show a nice error message if necessary. How do I do this in Next.js? Here is what I have been trying to do, but it is not working:
```
import { ClerkProvider } from "@clerk/nextjs";
import { useRouter } from "next/router";
import Header from "../components/header/header";
import Footer from "../components/footer/footer";
import "../styles/globals.css";

export default function App({ Component, pageProps }) {
  const router = useRouter();

  const handleTokenRefreshError = (error) => {
    if (error.message === "Failed to load Clerk") {
      console.error("Token refresh failed");
    }
  };

  return (
    <ClerkProvider navigate={(to) => router.push(to)} onTokenRefreshError={handleTokenRefreshError}>
      <Header />
      <Component {...pageProps} />
      <Footer />
    </ClerkProvider>
  );
}
```
I know clerk has the means to solve this error but there is just so little documentation to this matter. Any help will be appreciated. |
{"Voters":[{"Id":380384,"DisplayName":"John Alexiou"}]} |
|java|java-17|java-module|jshell| |
Tossing in the expectations communicates the intent of the test better, even if you're correct that they're not strictly necessary, as Kent Dodds discusses in [this blog post](https://kentcdodds.com/blog/common-mistakes-with-react-testing-library#using-get-variants-as-assertions). I don't see any apparent performance penalty to including them, either.
You may prefer to use `queryByText` which will return an element or null and let you write "normal" expects that are called on both success and failure cases. However, `query` doesn't wait for the predicate as `find` does, so you could use `waitFor` to build your own `find`. See [About Queries](https://testing-library.com/docs/queries/about/) for details on the differences.
If you `expect` an element queried by text _not_ to exist, I've found that this can trigger large component object tree diffs that can slow down testing considerably (and serializing circular structures can crash it). You might use `expect(!!queryByText("this shouldn't exist")).toBe(false);` to avoid this scenario by converting the found element to a boolean, with the drawback that the assertion message will be less clear. |
{"Voters":[{"Id":147356,"DisplayName":"larsks"},{"Id":162698,"DisplayName":"Rob"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}],"SiteSpecificCloseReasonIds":[18]} |
{"Voters":[{"Id":14991864,"DisplayName":"Abdul Aziz Barkat"},{"Id":12265927,"DisplayName":"Puteri"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}],"SiteSpecificCloseReasonIds":[13]} |
{"Voters":[{"Id":2752075,"DisplayName":"HolyBlackCat"},{"Id":1974224,"DisplayName":"Cristik"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}]} |
Currently, I am working on a project with Vite + React and I want to dockerize it. I am a beginner with Docker, so I need guidance on how to build the Docker image and run it.
I tried once and it worked, but not as I expected.
Can anyone help me with the following?
- How to dockerize the React app with Vite?
- How to run the web app on localhost when developing?
- Hot Module Reload (HMR) in Vite didn't work for me; how can I fix it?
I used following commands in cmd:
`docker build .`
and
`docker run -p 5173:5173 0ac94edb8feb`
And here is my `Dockerfile`:
```
FROM node:18-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 5173
CMD ["npm", "run", "dev"]
```
Please give me instructions to fix the above issues. |
I think you should try removing the leading slash "/" at the start of `STATIC_URL` in the second-to-last code block you posted, because in the first code block I see:
```python
STATIC_URL = 'static/'
```
Compare that to the second-to-last code block:
```python
STATIC_URL = '/static/'
```
I also compared it with my own projects; the line should be:
```python
STATIC_URL = 'static/'
```
|
I want to achieve an edge-to-edge screen (only for a specific screen in the app) and I am using ComponentActivity `enableEdgeToEdge()` method. I want to restore the previous setting when navigating out of this screen. There doesn't seem to be an opposite method to call.
```
@Composable
fun FullScreen() {
val context = LocalContext.current as ComponentActivity
DisposableEffect(Unit) {
context.enableEdgeToEdge()
onDispose {
// context.disableEdgeToEdge() <-- HOW TO ACHIEVE THIS?
}
}
Column(
modifier = Modifier
.padding(vertical = 200.dp)
.fillMaxSize(),
horizontalAlignment = Alignment.CenterHorizontally,
verticalArrangement = Arrangement.spacedBy(20.dp),
) {
Text(text = "This is full screen!", style = MaterialTheme.typography.headlineMedium)
}
}
```
When I press the back button, the previous screen is now edge-to-edge as well, which I do not want. |
I'm evaluating some Plan Cache behaviors and this is my scenario.
I'm running the following two queries separately:
`SELECT TOP 10 * FROM dbo.Countries CT LEFT JOIN dbo.Continents CN ON CT.ContinentId=CN.ContinentId WHERE CountryId='AR'`
`SELECT TOP 10 * FROM dbo.Countries CT LEFT JOIN dbo.Continents CN ON CT.ContinentId=CN.ContinentId WHERE CountryId='BR'`
After running both queries, I'm getting this plan cache view:
[Plan Cache View](https://i.stack.imgur.com/yR5BR.png)
My understanding is:
- Different sql_handle: expected
- Different plan_handle: **unexpected**
- Same query_hash: expected
- Same query_plan_hash: expected
Question: I really don't get why I'm getting a different plan_handle for each execution, even when the query is basically the same, and the query_hash and query_plan_hash do match. What could be the reason for this?
This is a comparison of both cached plans:
[Comparison of cached plans](https://i.stack.imgur.com/rbaGp.png)
I get the difference in the statement but I don't think that should count. Otherwise, we would always have one plan_handle per sql_handle since it would always change.
Some additional settings already checked:
- Optimization level: FULL
- No plan warnings: (both are Good Enough Plans found)
- SET options match, both queries are executed in the same SSMS Window
- Compat Level: 140
- Optimize for Ad Hoc: false
- Query Optimizer Fixes: Off
- Legacy CE: Off
- No Database-scoped configuration
- Parameterization: Simple
- Resource Governor: Disabled
I checked all the properties that could potentially affect this behavior, with no luck.
I would expect both queries reusing the same plan, hence, pointing to the same plan_handle.
Is my expectation incorrect?
Thanks |
plan_handle is always different for each query in SQL Server Cache |
|sql-server|query-optimization|sql-execution-plan| |
null |
Not familiar with those you mentioned, but there is no way ChatGPT in its current state is even close to taking our programming jobs. Sure, it can make a hello-world program, a calculator, a `for` loop, or something else that has already been written countless times before, but it struggles to write rare code with very specific requirements or niche technologies. So no, I find it unlikely that AI will cost programmers their jobs, at least not in the foreseeable future. |
I’ll just mimic the string in a *pre* tag and do the string parsing...
```
<!DOCTYPE html><head></head>
<body onload="Process();">
<script>
function Process(){
  let S=Cont.innerHTML;
  S=S.replace(/\n+/g,'').replace(/ +/g,'').replace(/{/g,' {\n ').replace(/;/g,';\n').replace(/:/g,': ').replace(/,/g,', ');
  Cont.innerHTML=S;
}
</script>
<pre id="Cont">
p {
color: hsl(0deg, 100%, 50%;
}
</pre>
</body></html>
```
You start by stripping all spaces and all LFs, then insert them back, this time strategically, as desired.
What I’ve done here is good for CSS but if your *CodeBlocks* may also contain JS or HTML then you’ll need a separate function to handle each.
I hope you don’t get any mixed *CodeBlocks* because it then gets very tricky!
|
I have multiple repositories, each containing a Gradle project with submodules. The structure looks like this:
**repo1**
-build.gradle
-components
--module1
---module1.gradle
---src/main/java*
--module2
---module2.gradle
---src/main/java*
**repo2**
-build.gradle
-components
--module3
---module3.gradle
---src/main/java*
--module4
---module4.gradle
---src/main/java*
Each of these repos can depend on the others. Now I want to build a dependency graph of these repos/modules. How can I do that recursively so that I get the entire dependency tree?
I have tried
* gradle dependencies
* gradle module1:dependencies
* Analyse dependency option in Intellij
None of these works properly.
|
Gradle dependencies recursively |
|java|gradle|dependency-management|gradlew| |
I think a demo would be easier than going back and forth: https://github.com/quyentho/submodule-demo
You can check the `dist/` folder in my `placeholder-lib` to see if your generated build has a similar structure. As you can see, I have no problem including it like this in my `consumer`:
```
import { Button } from "placeholder-lib/components";
import useMyHook from "placeholder-lib/shared";
```
I guess your problem could be these `exports` lines in your `package.json`:
"exports": {
".": "./dist/index.js",
"./components": "./dist/components/index.js",
"./shared": "./dist/shared/index.js"
},
|
**Problem:**
I am working on a library with, e.g., support for a UInt5 type, an Int33 type, etc. The library is a little more complicated than this, but for the sake of example creating a UInt12 type might go
```python
def makeUInt(size:int) -> type[UInt]:
    class NewUInt(UInt):
        ...  # Do stuff with size
    NewUInt.__name__ = f"UInt{size}"
    return NewUInt
UInt12 = makeUInt(12)
an_example_number = UInt12(508)
```
My IDE's (VS Code) IntelliSense feature then recognizes the type of an_example_number as UInt, rather than UInt12.
**The Rub:**
I do not expect dynamically declared types to be picked up by type-hinting. However, I have clearly specified UInt12 as a type alias, and in fact if I subclass instead of type-alias by going
```python
def makeUInt(size:int) -> type[UInt]:
    class NewUInt(UInt):
        ...  # Do stuff with size
    NewUInt.__name__ = f"UInt{size}"
    return NewUInt
class UInt12(makeUInt(12)): pass
an_example_number = UInt12(508)
```
everything works as intended, so clearly on some level the dynamic declaration can be coerced into something the IDE understands.
For example, I could, hypothetically, have UInt keep a register of created classes and prevent UInt12(makeUInt(12)) from actually subclassing. This is obviously not an ideal workaround, though.
**The Ask:**
How can I (preferably in Python 3.8) retain the advantage of dynamically creating types while getting the IDE to understand my preferred nomenclature for instances of those types?
The end-use case is that I want to provide certain types explicitly without redeclaring the # Do stuff with size information every time, so that common types like UInt16, UInt32, etc. can be declared in my library and receive hinting, whereas more uncommon types like UInt13 will be declared by users as needed and not necessarily receive hinting.
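A sketch of one pattern that seems to satisfy both goals (using a minimal stand-in `UInt`, not the library's real class, so the example is self-contained): declare the common sizes statically for the type checker under `typing.TYPE_CHECKING`, and keep the dynamic creation at runtime.

```python
from typing import TYPE_CHECKING


class UInt:  # minimal stand-in for the library's real UInt
    def __init__(self, value: int) -> None:
        self.value = value


def makeUInt(size: int) -> type:
    class NewUInt(UInt):
        ...  # do stuff with size
    NewUInt.__name__ = f"UInt{size}"
    return NewUInt


if TYPE_CHECKING:
    # What the IDE/type checker sees: an ordinary, statically declared subclass.
    class UInt12(UInt): ...
else:
    # What actually runs: the dynamically created class.
    UInt12 = makeUInt(12)

an_example_number = UInt12(508)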
**Back-of-the-box**
```python
def makeUInt(size:int) -> type[UInt]:
    class NewUInt(UInt):
        ...  # Do stuff with size
    NewUInt.__name__ = f"UInt{size}"
    return NewUInt
UInt12 = makeUInt(12)
an_example_number = UInt12(508)
```
I wanted an_example_number to show up as a UInt12 by the type-hinter. It shows up as UInt. |
Make the Canvas element in a tkinter app fully transparent? |
I am trying to get my Laravel app to work. It works locally. I deployed the app with a Bitbucket pipeline, and I can run `php artisan serve` (it shows that the server starts), but when reaching the server I get `502 Bad Gateway`. What am I doing wrong?
I run on Debian 12
nginx `error.log` is showing the following message:
> 2024/03/22 15:40:43 [error] 249350#249350: *3 directory index of "/var/www/html/laravel/current/" is forbidden, client: 80.**.**.**, server: <server_name>, request: "GET / HTTP/1.1", host: "85.**.**.**:8000"
This is my nginx.conf:
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
include /etc/nginx/sites-available/*;
}
This is my conf file:
/etc/nginx/sites-available/filename.conf
server {
listen 80;
server_name server_domain_or_IP;
root /var/www/html/laravel/current/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
}
|
A common task I have is plotting time series data and creating gray bars that denote NBER recessions. For instance, `recessionplot()` in MATLAB does exactly that. I am not aware of similar functionality in Python, hence I wrote the following function to automate the process:
```python
def add_nber_shade(ax: plt.Axes, nber_df: pd.DataFrame, alpha: float = 0.2):
    """
    Adds NBER recession shades to a single plt.Axes (typically an "ax").
    Args:
        ax (plt.Axes): The ax you want to change, with data already plotted
        nber_df (pd.DataFrame): the Pandas dataframe with a "start" and an "end" column
        alpha (float): transparency
    Returns:
        plt.Axes: returns the same axes but with shades
    """
    min_year = pd.to_datetime(min(ax.lines[0].get_xdata())).year
    nber_to_keep = nber_df[pd.to_datetime(nber_df["start"]).dt.year >= min_year]
    for start, end in zip(nber_to_keep["start"], nber_to_keep["end"]):
        ax.axvspan(start, end, color="gray", alpha=alpha)
    return ax
```
Here, `nber_df` looks like the following (copying the dictionary version):
{'start': {0: '1857-07-01',
1: '1860-11-01',
2: '1865-05-01',
3: '1869-07-01',
4: '1873-11-01',
5: '1882-04-01',
6: '1887-04-01',
7: '1890-08-01',
8: '1893-02-01',
9: '1896-01-01',
10: '1899-07-01',
11: '1902-10-01',
12: '1907-06-01',
13: '1910-02-01',
14: '1913-02-01',
15: '1918-09-01',
16: '1920-02-01',
17: '1923-06-01',
18: '1926-11-01',
19: '1929-09-01',
20: '1937-06-01',
21: '1945-03-01',
22: '1948-12-01',
23: '1953-08-01',
24: '1957-09-01',
25: '1960-05-01',
26: '1970-01-01',
27: '1973-12-01',
28: '1980-02-01',
29: '1981-08-01',
30: '1990-08-01',
31: '2001-04-01',
32: '2008-01-01',
33: '2020-03-01'},
'end': {0: '1859-01-01',
1: '1861-07-01',
2: '1868-01-01',
3: '1871-01-01',
4: '1879-04-01',
5: '1885-06-01',
6: '1888-05-01',
7: '1891-06-01',
8: '1894-07-01',
9: '1897-07-01',
10: '1901-01-01',
11: '1904-09-01',
12: '1908-07-01',
13: '1912-02-01',
14: '1915-01-01',
15: '1919-04-01',
16: '1921-08-01',
17: '1924-08-01',
18: '1927-12-01',
19: '1933-04-01',
20: '1938-07-01',
21: '1945-11-01',
22: '1949-11-01',
23: '1954-06-01',
24: '1958-05-01',
25: '1961-03-01',
26: '1970-12-01',
27: '1975-04-01',
28: '1980-08-01',
29: '1982-12-01',
30: '1991-04-01',
31: '2001-12-01',
32: '2009-07-01',
33: '2020-05-01'}}
The function is very simple. It retrieves the minimum date that was plotted, slices the given dataframe by start year, and then plots the bars. There are two main ways to plot; the function works as intended with one of them but not the other.
*The way it works*:
```python
df = pd.DataFrame(np.random.randn(3000, 2), columns=list('AB'), index=pd.date_range(start='1970-01-01', periods=3000, freq='W'))
plt.figure()
plt.plot(df.index, df['A'], lw = 0.2)
add_nber_shade(plt.gca(), nber)
plt.show()
```
*The way it does not work* (using Pandas to plot directly)
```python
plt.figure()
df.plot(y=["A"], lw = 0.2, ax = plt.gca(), legend=None)
add_nber_shade(plt.gca(), nber)
plt.show()
```
It throws out the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[106], line 3
1 plt.figure()
2 df.plot(y=["A"], lw = 0.2, ax = plt.gca(), legend=None)
----> 3 add_nber_shade(plt.gca(), nber)
4 plt.show()
File ~/Dropbox/Projects/SpanVol/src/spanvol/utilities.py:20, in add_nber_shade(ax, nber_df, alpha)
8 def add_nber_shade(ax: plt.Axes, nber_df: pd.DataFrame, alpha: float=0.2):
9 """
10 Adds NBER recession shades to a singe plt.axes (tipically an "ax").
11
(...)
18 plt.Axes: returns the same axes but with shades
19 """
---> 20 min_year = pd.to_datetime(min(ax.lines[0].get_xdata())).year
21 nber_to_keep = nber_df[pd.to_datetime(nber_df["start"]).dt.year >= min_year]
23 for start, end in zip(nber_to_keep["start"], nber_to_keep["end"]):
File ~/miniconda3/envs/volatility/lib/python3.11/site-packages/pandas/core/tools/datetimes.py:1146, in to_datetime(arg, errors, dayfirst, yearfirst, utc, format, exact, unit, infer_datetime_format, origin, cache)
1144 result = convert_listlike(argc, format)
1145 else:
-> 1146 result = convert_listlike(np.array([arg]), format)[0]
1147 if isinstance(arg, bool) and isinstance(result, np.bool_):
...
File tslib.pyx:552, in pandas._libs.tslib.array_to_datetime()
File tslib.pyx:541, in pandas._libs.tslib.array_to_datetime()
TypeError: <class 'pandas._libs.tslibs.period.Period'> is not convertible to datetime, at position 0
This is because pandas does some transformation under the hood to deal with the index and converts it into some other class. Is there a simple way to either fix the function or prevent pandas from doing this? Thanks a lot! |
I'm writing a program in MATLAB that generates 13 waveforms of varying amplitude, duration, and frequency. Each waveform is repeated 5 times, which means I have 65 'trials' in total.
The total length of each trial = 1.5 ms. The sampling frequency = 4 kHz. I would like the wave to begin at 0.5 ms. Prior to the onset of the wave, and following its offset, I would like the amplitude to be zero (i.e. a 'flatline' prior to and following the wave).
I have created a 65x3 matrix where the columns denote the frequency ('hz'), amplitude ('a'), and duration (ms) of the 65 sine waves. Each row denotes a single wave.
I would like to use the information contained in this 65x3 matrix to generate 65 sine waves of amplitude 'a', frequency 'hz', and duration 'ms'. To be specific: each wave should be created using the parameters (hz,a,ms) specified in the nth row of the matrix. E.g. if row 1 = 100, 1, 50... this means I would like to generate a 100 Hz sine wave (amplitude = 1) lasting 50 ms.
I have attempted to construct a for loop to solve this problem. However, the loop returns a number of errors, and I'm not sure how to resolve them. I have adapted the code to the point where no errors are returned; however, my latest attempt seems to generate 65 waves of equal duration, when in fact the duration of each wave should be that which is stated in vector 'ms'.
Here is my latest, albeit newbie and still unsuccessful, attempt: (note that 'trials' represents the 65x3 matrix discussed above; mA = amplitude).
```
hz=trials(:,1); mA=trials(:,2); ms=trials(:,3);
trials_waves=zeros(65,500); % the max duration (= 500ms); unsure of this part?
for n = 1:size(order,1)
    trials_waves = mA*sin(2*pi*hz*0:ms);
end
```
|
Getting error while connecting to MSSQL with AWS RDS |
|amazon-rds| |
null |
I wanted to create an uber/fat JAR for my Spring Boot app, and for that I am using the *maven-shade-plugin*. After `mvn clean install`, I see two JARs created: one normal and one shaded. The normal JAR works fine, but when I try to run the shaded JAR it fails with the error below:
**Error: Could not find or load main class com.walmart.SpringAppInitializer**
Below is my plugin added in pom.xml
```
<properties>
<start-class>com.walmart.SpringAppInitializer</start-class>
</properties>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>${spring-boot.version}</version>
<configuration>
<fork>true</fork>
<mainClass>${start-class}</mainClass>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.3.0</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>${start-class}</mainClass>
</transformer>
</transformers>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/maven/**</exclude>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
<shadedArtifactAttached>true</shadedArtifactAttached>
<shadedClassifierName>shaded</shadedClassifierName>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
```
Can anyone please suggest why the fat JAR is not working?
|
Not able to run uber fat jar |
|java|spring-boot|maven| |
Thanks for all the support, guys. I really appreciate it. Since the version with ffmpeg didn't work, I used moviepy to convert the MP4 into an MP3, and now it works. For anyone asking, the code is only two lines long:

```python
FILECONVERT = AudioFileClip(mp4_file)
FILECONVERT.write_audiofile(mp3_file)
```

with the import

```python
from moviepy.editor import *
``` |
{"OriginalQuestionIds":[4660142],"Voters":[{"Id":1070452,"DisplayName":"Ňɏssa Pøngjǣrdenlarp"},{"Id":1043380,"DisplayName":"gunr2171","BindingReason":{"GoldTagBadge":"c#"}}]} |
I'm creating a Google log-in extension for a personal project, and I've set up functionality that allows me to log in to the extension with my Google account; it works. I'm using an initial HTML file called "popup.html" to hold the logic of Google's authentication by calling "popup.js" as the backend script. It displays a "Waiting for Log-in" message and then, after a successful sign-in, sets the contents of another HTML file called "authorized.html" as the innerHTML of a <div> in "popup.html". The issue is that "authorized.html"'s own script, "authorized.js", will not print anything to my console, nor will it even throw an error on the console, despite the other contents of "authorized.html" successfully being displayed. Only the relevant parts of my code are below. I'd like to know why I'm having this issue and how to circumvent it. Thank you.
popup.html:
```
<!DOCTYPE html>
<html>
<head>
<title>Google OAuth Sign-In</title>
<script src="popup.js"></script>
</head>
<body>
<div id="Sign-In TempUI" class="content-body">Waiting for Google Sign-In</div >
</body>
</html>
```
popup.js:
```
console.log('popup.js loaded');
document.addEventListener('DOMContentLoaded', function () {
const NewGUI = document.getElementById('Sign-In TempUI');
chrome.identity.getAuthToken({ interactive: true }, function (token) {
if (chrome.runtime.lastError) {
console.error(chrome.runtime.lastError.message);
return;
}
console.log('Token acquired:', token);
loadAuthorizedUI(token);
});
function loadAuthorizedUI(token) {
console.log('Debug:', token);
fetch('authorized.html')
.then(response => response.text())
.then(html => {
console.log('HTML content:', html);
NewGUI.innerHTML = html;
})
.catch(error => console.error('Error fetching or processing HTML:', error));
}
});
```
authorized.html:
```
<!-- Page Displayed after recieving token from OAuthetication. Should display GUI for Email handling -->
<!DOCTYPE html>
<html>
<head>
<title>Authorized UI</title>
</head>
<body>
<h1>Welcome! You are now authorized.</h1>
<ul id="emailList">
</ul>
<script src="authorized.js"></script>
<div>Hit end</div>
</body>
</html>
```
authorized.js:
```
//throw new Error('This is a test error from authorized.js');
console.log('authorized.js loaded');
```
Before, I was starting another `document.addEventListener('DOMContentLoaded', function ())` and then trying to replace the contents of emailList after getting its element by ID, but it won't even open authorized.js. console.log doesn't print anything, and the exception doesn't actually display anything on the console either. However, when I move `<script src="authorized.js"></script>` to popup.html, it suddenly runs, albeit with errors. My biggest confusion is that "<h1>Welcome! You are now authorized.</h1>" and "<div>Hit end</div>" display in my GUI, but the script won't even print, as if it's being skipped. |
{"Voters":[{"Id":807126,"DisplayName":"Doug Stevenson"},{"Id":209103,"DisplayName":"Frank van Puffelen"},{"Id":7015400,"DisplayName":"Peter Haddad"}],"SiteSpecificCloseReasonIds":[13]} |
I guess your JSON data is in the following format.

    {
        "a": "A",
        "b": "B",
        "c": "C"
    }
Now you can have a class called `JsonData` which is structured as follows,
public class JsonData {
public String a;
public String b;
public String c;
}
Now you can convert your JSON data to a Java object using the [gson](https://code.google.com/p/google-gson/) library.
Now create a class called `ObjectHolder`, which is structured as follows.
public class ObjectHolder {
public static JsonData jsonData;
}
Store your converted object in `ObjectHolder.jsonData`. Now you can access this object anywhere in the project at any time.
Note: this object becomes `null` when the user clears your app from the "recent apps" list.
|
I want to find out the counts of each of 2, 3 and 4 required to fit into 100.
I have a formula in Excel that is only capable of finding the answer for two given variables, i.e. 2 and 3. I want to try three or more variables. I got the following formula for two variables from Tom Sharpe in [How to find how many times numbers fit into a large number](https://stackoverflow.com/questions/67921641/how-to-find-how-many-times-numbers-fit-into-a-large-number). I am unable to adapt this formula for three variables.
```
=LET(x,A2,y,B2,z,C2, seq,SEQUENCE(1,z/x-0.1), modDiff,MOD(z-seq*x,y), diff,IF(modDiff=0,0,y-modDiff), minDiff,MIN(diff), FILTER(seq&"X"&x&", "&QUOTIENT(z-seq*x+diff,y)&"X"&y,diff=minDiff))
```
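As a sanity check of the underlying problem (in Python rather than Excel, purely for illustration), the three-variable case is just enumerating non-negative counts (a, b, c) with 2a + 3b + 4c = 100:

```python
# Enumerate all non-negative counts (a, b, c) with 2*a + 3*b + 4*c == 100.
target = 100
solutions = [
    (a, b, c)
    for a in range(target // 2 + 1)
    for b in range(target // 3 + 1)
    for c in range(target // 4 + 1)
    if 2 * a + 3 * b + 4 * c == target
]
```

Any three-variable Excel formula has to cover this same search space, which is why the two-variable formula doesn't adapt directly.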
I was expecting to be able to work out the solution, but I could not. |
How to find out how many of each 2, 3 and 4 required to fit in 100 using excel? |
|excel|sequence|min|let| |
null |
So this is the code I was working on. Please tell me where I'm wrong:
```
from time import sleep
from selenium import webdriver
from selenium.webdriver import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
options = webdriver.ChromeOptions()
options.add_extension('chrome.crx')
driver = webdriver.Chrome(options=options)
driver.set_script_timeout(30)
driver.get('https://uflix.to/mPlayer?movieid=lousy-carter-2024&stream=stream1')
# WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.ID, 've-iframe')))
click = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#embed-player > div.main-content > div.play-btn')))
ActionChains(driver).move_to_element(click).click(click).perform()
# sleep(20)
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.ID, 've-iframe')))
button = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div[title="Embed Video"] button')))
driver.execute_script("arguments[0].click();", button)
alert = driver.switch_to.alert.text
print(alert)
```
If you are not able to open devtools: go to devtools before playing the video, open the Sources tab, search for "block-inspection", and block its request. Done. Now please help me, guys. |
Selenium clicked button but still getting error and exiting |
|python|selenium-webdriver|crash| |
|matlab|for-loop|trigonometry| |
I upgraded a C# MVC project from EF 4.7.2 to .NET Core 8.0 and got this error:

> The type or namespace name 'DirectoryServices' does not exist in the namespace 'System' (are you missing an assembly reference?)

I tried installing System.ServiceModel.Primitive and it didn't work.
The old code uses `using System.Security.Principal;` and `using System.ServiceModel;`.
|
Upgraded C# MVC project EF 4.7.2 to .net Core 8.0 - The type or namespace name 'DirectoryServices' does not exist in the namespace 'System' |
|c#|asp.net-mvc| |
I have a problem understanding a certain part of the code: how it works and what's behind the stack register(s) involved.
```
; the part I'm referring to
mov ss, ax
mov sp, 0x7c00
```
I need an explanation of that part of the code and its purpose, ideally with a link to a source such as documentation or a forum thread. You can always correct me if I'm wrong about something. |
Purpose of stack register(s) in holding 0x7c00 |
|x86|virtualbox|nasm|osdev|bios| |
null |
**Make a Sine Wave**
For starters, let's make a sine wave with variable rate, amplitude, and length.
Fs = 4e3; % sample rate of 4 kHz
Sr = 100; % example rate
Sa = 1; % amplitude
St = 10e-3; % signal duration is 10 ms
% To create a sine wave in MATLAB, I'm going to first create a vector of time,
% `t`, and then create the vector of sine wave samples.
N = St * Fs; % number of samples = duration times sample rate;
t = (1:N) * 1/Fs; % time increment is one over sample rate
% Now I can build my sine wave:
Wave = Sa * sin( 2 * pi * Sr * t );
figure; plot(t, Wave);
Note! This is barely enough time for a full wavelength, so be careful with slow rates and short time lengths.
**Make many Sine Waves**
To turn this into a loop, I need to index into vectors of input variables. Using my previous example:
```
Fs = 4e3;                 % sample rate of 4 kHz
Sr = [100 200 300];       % rates
Sa = [1 .8 .5];           % amplitudes
St = [10e-3 20e-3 25e-3]; % signal durations
nWaves = length(Sr);
N = max(St) * Fs; % number of samples = longest duration times sample rate
t = (1:N) / Fs;   % time increment is one over sample rate
% initialize the array
waves = zeros(nWaves, N);
for iWaves = 1:nWaves
    % index into each variable
    thisT = (1:St(iWaves) * Fs) * 1/Fs;
    myWave = Sa(iWaves) * sin( 2 * pi * Sr(iWaves) * thisT );
    waves(iWaves, 1:length(myWave)) = myWave;
end
figure; plot(t, waves);
```
You still have one more piece: zero-padding the front end of your signals. There are lots of ways to do it; one way would be to build the signal the way I've described and then concatenate an appropriate number of zeros to the front of your signal array. |
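As a sketch of that front-padding idea, here is the same loop in NumPy rather than MATLAB (same example rates, amplitudes, and durations assumed); instead of concatenating zeros, each wave is written into the tail of a pre-zeroed row, which pads the front automatically:

```python
import numpy as np

fs = 4e3                      # sample rate of 4 kHz
rates = [100, 200, 300]       # Hz
amps = [1, 0.8, 0.5]
durs = [10e-3, 20e-3, 25e-3]  # seconds

n_total = int(max(durs) * fs)           # pad everything to the longest signal
waves = np.zeros((len(rates), n_total))

for i, (sr, sa, st) in enumerate(zip(rates, amps, durs)):
    n = int(st * fs)                    # samples in this wave
    t = np.arange(1, n + 1) / fs        # time vector, as in the MATLAB version
    # place the wave at the END of the row, so the zeros pad the front
    waves[i, n_total - n:] = sa * np.sin(2 * np.pi * sr * t)
```

The 10 ms wave occupies the last 40 of 100 samples of its row; the first 60 stay zero.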
I have a piano key component. When a key is pressed, a visual is created and stored in a "visuals" state so that the component rerenders with the new visual. The visual is a div that is also assigned a ref so that I can animate it easily. I have a counter ref to key into the "visuals" state and the "visual_refs" ref. After 3 seconds, I delete both the ref and the visual for the div, as it is offscreen by then. Deleting the ref works fine, but deleting the visual removes the corresponding visual and also sets the last visual in visual_refs to null. I do not know why.
[visual_refs.current][1]: 3 visuals created. `visual_refs.current[counter]` is correctly deleted; however, the `set_visuals` deletion of its JSX element sets the last index of `visual_refs.current` to null. This is not because the last ref was assigned to it.
**PianoKey Component**
```
const audio = useRef(null)
let visual_refs = useRef([])
let curr_animation = useRef([null, true])
let [visuals, set_visuals] = useState([])
let glowline = useRef(null)
let counter = useRef(0)
useEffect(() => {
console.log(visual_refs.current)
if (visuals[counter.current] && curr_animation.current[1]) {
curr_animation.current = [attribute_animation(visual_refs.current[counter.current], 'height', '0', '300000px', 1000000), false]
}
}, [visuals])
useEffect(() => {
if (pressed) {
audio.current.play()
set_visuals(prev_state => {
curr_animation.current[1] = true
return ({
...prev_state,
[counter.current]: (
<div key={`${counter.current}`} ref={ref => visual_refs.current[counter.current] = ref}
className={`visualizer-instance ${color === 'black' ? 'black-visualizer': ''}`}></div>
)
})})
attribute_animation(glowline.current, 'opacity', '0', '1', 600, 'cubic-bezier(0,.99,.26,.99)')
} else if (!pressed && pressed !== null && counter.current in visual_refs.current && curr_animation.current[0]) {
curr_animation.current[0].pause()
attribute_animation(visual_refs.current[counter.current], 'bottom', '0', '300000px', 1000000)
attribute_animation(glowline.current, 'opacity', '1', '0', 3000, 'cubic-bezier(.19,.98,.24,1.01)')
let curr_counter = counter.current
setTimeout(() => {
delete visual_refs.current[curr_counter]
// BELOW DELETES EXTRA VISUAL REF
set_visuals(prev_state => {
const new_state = {...prev_state}
delete new_state[curr_counter]
return new_state
})
}, 3000, curr_counter)
counter.current += 1
}
}, [pressed])
```
[1]: https://i.stack.imgur.com/1f2yy.png |
I don't really understand why you are messing around with central directories and building the Zip manually. None of this is necessary as you can use `ZipArchive` to do this in one go.
Furthermore, you can't compress chunks of bytes like that and then just concatenate them. The Deflate algorithm doesn't work that way.
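To see why the concatenation fails, here is a small demonstration using Python's `zlib` (a zlib/Deflate codec, illustrating the same point rather than `ZipArchive` itself): each `compress` call produces a complete, self-terminating stream, so a decompressor stops at the end of the first one instead of reading the concatenation as one stream:

```python
import zlib

data1 = b"hello " * 100
data2 = b"world " * 100

# Compress each chunk as its own complete Deflate stream
c1 = zlib.compress(data1)
c2 = zlib.compress(data2)

# Concatenating the streams does NOT decompress to data1 + data2:
# the decompressor stops at the end of the first stream.
d = zlib.decompressobj()
out = d.decompress(c1 + c2)
assert out == data1          # only the first chunk comes out
assert d.unused_data == c2   # the second stream is left untouched

# One compression stream fed both chunks works fine
c = zlib.compressobj()
whole = c.compress(data1) + c.compress(data2) + c.flush()
assert zlib.decompress(whole) == data1 + data2
```

This is exactly why the chunks have to be written through a single `ZipArchive` entry stream rather than compressed individually.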
Your concern about flushing is misplaced: when the `ZipArchive` is disposed, everything is flushed. You just need to tell it to leave the underlying stream open when it is disposed (`leaveOpen: true`).
I would advise you to work only with `Stream`, but you could use `byte[]` or `Memory<byte>` if absolutely necessary.
```cs
public async Task<Memory<byte>> Compress(string fileName, IAsyncEnumerable<Memory<byte>> data)
{
var ms = new MemoryStream();
using (var zip = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
{
var entry = zip.CreateEntry(fileName);
using var zipStream = entry.Open();
await foreach (var bytes in data)
{
            await zipStream.WriteAsync(bytes);
}
}
    return ms.GetBuffer().AsMemory(0, (int)ms.Length);
}
```
If you want to avoid even the `MemoryStream` and upload directly to `HttpClient` then you can use a custom `HttpContent` that "pulls" the data as and when needed.
This example is taken from [the documentation][1].
```cs
public class ZipUploadContent : HttpContent
{
    private readonly string _fileName;
    private readonly IAsyncEnumerable<Stream> _data;

    public ZipUploadContent(string fileName, IAsyncEnumerable<Stream> data)
    {
        _fileName = fileName;
        _data = data;
    }

    protected override bool TryComputeLength(out long length)
    {
        // Length is unknown up front, so the content is sent chunked
        length = 0;
        return false;
    }

    protected override Task SerializeToStreamAsync(Stream stream, TransportContext? context)
        => SerializeToStreamAsync(stream, context, CancellationToken.None);

    protected override async Task SerializeToStreamAsync(Stream stream, TransportContext? context, CancellationToken cancellationToken)
    {
        // Write the zip directly to the transport stream; leaveOpen so
        // disposing the archive doesn't close the HTTP stream.
        using var zip = new ZipArchive(stream, ZipArchiveMode.Create, leaveOpen: true);
        var entry = zip.CreateEntry(_fileName);
        await using var zipStream = entry.Open();
        await foreach (var inputStream in _data.WithCancellation(cancellationToken))
        {
            await inputStream.CopyToAsync(zipStream, cancellationToken);
        }
    }

    protected override void SerializeToStream(Stream stream, TransportContext? context, CancellationToken cancellationToken)
        => Task.Run(() => SerializeToStreamAsync(stream, context, cancellationToken), cancellationToken).Wait();
}
```
[1]: https://learn.microsoft.com/en-us/dotnet/api/system.net.http.httpcontent?view=net-8.0#examples |