|c#|build|compiler-errors|fivem| |
{"OriginalQuestionIds":[8049520],"Voters":[{"Id":476,"DisplayName":"deceze","BindingReason":{"GoldTagBadge":"python"}}]} |
I need to extract the soft subtitles embedded in an MKV file hosted on a remote server using Node.js.
All of the solutions I've found online so far use local files, and the others are either poorly documented or throw unknown errors that I can't find anything about online, so they don't work for me. I need to extract subtitles from the remote file (like `http://some-url.com/videos/video.mkv`) without downloading the entire file, in order to use it in my web-based application. |
|riot.js| |
|javascript|components|tags|riot.js| |
My Vite SSR app runs fine on localhost at http://localhost:5173. I then create a production build with npm run build, which creates two folders inside dist: one client and one server. I upload them to my AWS EC2 server in a folder named frontendssr. Now I want to serve my SSR app at mysite.com/v2. The problem is: what should the nginx config be, and what is the pm2 command? I am confused because on the backend server we have a server.js to run through pm2. Can anyone suggest how mysite.com/v2 can serve what http://localhost:5173 serves locally? |
How to deploy Vite SSR on an AWS EC2 server? |
|vite|server-side-rendering| |
I have a class String with a char* name field, and I've written a default constructor, a parameterized constructor, a copy constructor, a destructor, and an overloaded operator=. My question is what my default constructor should look like for a char*. Below is part of my code. I use char* because the array should be dynamic.
```
class String {
public:
char* name;
String(){
name = new char[0];
}
String(char* str){
name = new char[strlen(str) + 1];
strcpy(name, str);
}
```
|
What value should be assigned to a char* in a default constructor in C++? |
|c++|dynamic-arrays|default-constructor| |
`sphereGeom.translate(5, 0, 0);` |
As happens with Firebase, issues within the package itself caused this failure. In this case, Google released a fix (10.22.1) that directly resolves it:
https://firebase.google.com/support/release-notes/ios |
There is nothing inherently wrong with comparing hex values with `==`.
Moreover hex values are actually just integers.
But there are 2 issues in your code:
1. **Operator precedence in your `if` statement:**
`==` has higher precedence than `^`,
and therefore the expression `s1 ^ s2 == s3`
is actually interpreted as `s1 ^ (s2 == s3)`, which is not what you want.
To fix it you should use:
```
if ((s1 ^ s2) == s3) { /*...*/ }
```
2. **Both your `for` loops will be infinite loops**:
`unsigned char i` and `unsigned char j` will always be in the range [0x0, 0xFF], because an `unsigned char` wraps back to 0x0 (0) when incremented past 0xFF (255), so the loop condition never becomes false.
To fix it:
(1) Use a larger integer, e.g. `unsigned int` for `i` and `j`.
(2) When calling `s` with expressions based on `i`, `j` cast them to `unsigned char` (to avoid a "possible loss of data" or similar warning). |
sp_OAMethod always returns NULL when consuming a REST API... I have no idea where the problem could be. Here is the source code of the procedure:
```
CREATE PROCEDURE MakeCEPAbertoRequest
@cep VARCHAR(8) -- Length of a CEP (postal code) in Brazil
AS
BEGIN
DECLARE @token VARCHAR(1000) = 'Token token=d142c65bc45454595b15897e1d70c04b6';
DECLARE @url VARCHAR(1000) = 'https://www.cepaberto.com/api/v3/cep?cep=' + @cep;
DECLARE @requestResult NVARCHAR(MAX);
DECLARE @Stat INT
DECLARE @resposta NVARCHAR(4000)
-- Create an XMLHTTP object
DECLARE @objectID INT;
EXEC @Stat = sp_OACreate N'MSXML2.ServerXMLHTTP', @objectID OUT;
-- Open the connection to the desired URL
EXEC sp_OAMethod @objectID, N'open', NULL, N'GET', @url, false;
EXEC sp_OAMethod @objectID, N'setRequestHeader', NULL, 'Content-Type', 'application/json'
-- Set the authorization header
EXEC sp_OAMethod @objectID, N'setRequestHeader', NULL, N'Authorization', @token;
-- Send the request
EXEC sp_OAMethod @objectID, N'send';
-- Get the response
EXEC sp_OAMethod @objectID, N'responseText', @requestResult OUTPUT;
-- Destroy the XMLHTTP object
EXEC sp_OADestroy @objectID;
-- Return the response
SELECT @requestResult AS Response
END
GO
``` |
I have created the equivalent of this procedure with CLR but the behavior is the same (I always get NULL output).
When I use this API in Insomnia (with the same parameters) I get the expected response. |
PDF form checkbox/radio button ignores content stream |
|pdf|pdf-generation|zend-pdf| |
|django|iis|deployment|wfastcgi|httpplatformhandler| |
{"Voters":[{"Id":13434871,"DisplayName":"Limey"},{"Id":22180364,"DisplayName":"Jan"},{"Id":12545041,"DisplayName":"SamR"}],"SiteSpecificCloseReasonIds":[13]} |
{"Voters":[{"Id":3001761,"DisplayName":"jonrsharpe"},{"Id":2395282,"DisplayName":"vimuth"},{"Id":839601,"DisplayName":"gnat"}]} |
I am working on a very large Excel dataset with more than 100 thousand rows. It contains data such as hours and dates, but they are not split (20231201 instead of 2023/12/01, or 1130 instead of 11:30). I managed to write code that splits them so I can copy and paste them back into Excel; however, it doesn't give me the whole dataset in the output: the first 30k rows are always missing. Is there a way to set the output limit to infinite?
```
#this is the code for hours
import pandas as pd
df = pd.read_excel('/Volumes/PortableSSD/Università - Lavori/Progetto statistica/Definitivo 1223.xlsx')
df['Scheduled departure'] = df['Scheduled departure'].astype(str)
df['formatted_hour'] = df['Scheduled departure'].apply(lambda x: '{:0>4}'.format(x))
df['formatted_hour'] = df['formatted_hour'].apply(lambda x: f"{x[:2]}:{x[2:]}")
# Display the formatted time
print(df['formatted_hour'].to_string(index=True))
```
```
#this is the code for dates
import pandas as pd
df = pd.read_excel('/Volumes/PortableSSD/Università - Lavori/Progetto statistica/Definitivo 1223.xlsx')
df['Date'] = df['Date'].astype(str)
df['year'] = df['Date'].str[:4]
df['month'] = df['Date'].str[4:6]
df['day'] = df['Date'].str[6:]
df['formatted_date'] = df['Date'].str[6:] + '/' + df['Date'].str[4:6] + '/' + df['Date'].str[:4]
# Display the formatted date
print(df['formatted_date'].to_string(index=False))
```
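For what it's worth, `Series.to_string()` does render every row, so if rows are missing from what you see, the terminal's scrollback limit is a likely culprit rather than pandas itself. A minimal sketch (with a tiny stand-in frame instead of the real spreadsheet) of writing the result to a file instead of printing it:

```python
import pandas as pd

# Stand-in for the real frame read via pd.read_excel(...).
df = pd.DataFrame({'Date': ['20231201', '20231202']})
df['formatted_date'] = df['Date'].str[6:] + '/' + df['Date'].str[4:6] + '/' + df['Date'].str[:4]

# Writing to a file sidesteps any console truncation, whatever the row count.
df[['formatted_date']].to_csv('formatted_output.csv', index=False)
```

From the file, all rows can be pasted back into Excel (or written directly with `df.to_excel(...)`).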
|
How to make pandas show large datasets in output? |
|python|pandas|statistics| |
My `/app/api/auth/route.ts` file:
```javascript
import { redirect } from 'next/navigation';
export async function GET(req: Request) {
try {
redirect('/dashboard');
} catch (error) {
console.log(error);
redirect('/');
}
}
```
I realized that when I call redirect inside a try/catch, I get this error:
```
Error: NEXT_REDIRECT
at getRedirectError (webpack-internal:///(sc_server)/./node_modules/next/dist/client/components/redirect.js:40:19)
at redirect (webpack-internal:///(sc_server)/./node_modules/next/dist/client/components/redirect.js:46:11)
at GET (webpack-internal:///(sc_server)/./app/api/auth/route.ts:23:66)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (webpack-internal:///(sc_server)/./node_modules/next/dist/server/future/route-modules/app-route/module.js:244:37) {
digest: 'NEXT_REDIRECT;replace;/dashboard'
}
```
When I get rid of the try/catch, everything works fine:
```javascript
export async function GET(req: Request) {
redirect('/dashboard')
}
```
This works as expected. I need the try/catch because this is an auth route and the request could fail, so I need some error handling; I have left out the auth functionality because I realized this happens with just a simple try/catch.
If Next 13 has another way of handling errors in `/api` routes, please let me know.
|
Angular - type guard not narrowing types |
I am working on a scraping project where I am trying to scrape the NSE announcements page: https://www.nseindia.com/companies-listing/corporate-filings-announcements.
The table itself is static, but the JSON content is loaded dynamically.
I am able to hit the website's API endpoint, but after a few requests it responds with "Resource not found".
I have not been able to find a workaround. How can I extract the news from the announcements page?
I tried setting headers in the code, but even that is not helping. |
How to scrape a website which loads JSON content dynamically? |
|python|web-scraping| |
I have 68 3D points (I'm guessing this is called a sparse point cloud). I want to connect them all together to create a mesh. I first tried Delaunay triangulation. However, it didn't work well because Delaunay triangulation only gives a convex hull, so it produces a mesh that ignores concave regions such as the eyes.
Here is a picture illustrating what I mean:
[Delaunay](https://i.stack.imgur.com/WXSX0.png)
So I tried something else, alphashape, using this documentation: https://pypi.org/project/alphashape/
My problem is that it's simply not working.
Here are some pictures:
[Alpha Shape](https://i.stack.imgur.com/QEzSQ.png)
The above picture shows the 3D points which I want to convert to a mesh.
The pictures below show the result of me using alphashape.
I want to get a 3D convex hull.
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import alphashape
points_3d = np.array(sixtyEightLandmarks3D)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(points_3d[:, 0], points_3d[:, 1], points_3d[:, 2])
plt.show()
points_3d = [
(0., 0., 0.), (0., 0., 1.), (0., 1., 0.),
(1., 0., 0.), (1., 1., 0.), (1., 0., 1.),
(0., 1., 1.), (1., 1., 1.), (.25, .5, .5),
(.5, .25, .5), (.5, .5, .25), (.75, .5, .5),
(.5, .75, .5), (.5, .5, .75)
]
points_3d = [
(7, 191, 325.05537989702617), (6, 217, 330.15148038438355), (8, 244, 334.2528982671654),
(11, 270, 340.24843864447047), (19, 296, 349.17330940379736), (34, 320, 361.04985805287333),
(56, 340, 373.001340480165), (80, 356, 383.03263568526376), (110, 361, 387.06330231630074),
(140, 356, 383.08354180256816), (165, 341, 373.1621631409058), (187, 321, 359.4022815731698),
(205, 298, 344.64039229318433), (214, 272, 334.72376670920755), (216, 244, 328.54984401152893),
(218, 217, 324.34703636691364), (217, 190, 319.189598828032), (22, 166, 353.0056656769123), (33, 152, 359.0055709874152),
(52, 145, 364.0), (72, 147, 368.00135869314397), (91, 153, 372.0013440835933), (125, 153, 370.0013513488836),
(145, 146, 366.001366117669), (167, 144, 361.0), (186, 151, 358.00558654859003), (197, 166, 351.0056979594491),
(108, 179, 376.02127599379264), (108, 197, 381.02099679676445), (109, 214, 387.03229839381623),
(109, 233, 393.04579885809744), (87, 252, 383.03263568526376), (98, 255, 386.0323820614017), (109, 257, 387.03229839381623),
(120, 254, 385.0324661635691), (131, 251, 383.0469945058961), (44, 183, 360.01249978299364), (55, 176, 363.00550960006103),
(69, 176, 363.0123964825444), (81, 186, 364.0219773585106), (69, 188, 364.0219773585106), (54, 188, 364.0219773585106),
(136, 185, 361.01246515875323), (147, 175, 362.0013812128346), (162, 175, 361.00554012369395),
(174, 183, 357.0014005574768), (163, 188, 360.01249978299364), (149, 188, 362.01243072579706),
(73, 289, 384.04687213932624), (86, 282, 389.0462697417879), (100, 278, 391.0319680026174),
(109, 281, 391.0460330958492), (120, 277, 391.0319680026174), (134, 281, 387.03229839381623),
(147, 289, 380.0210520484359), (135, 299, 388.03221515745315), (121, 305, 392.04591567825315),
(110, 307, 392.04591567825315), (100, 306, 392.04591567825315), (86, 300, 391.0626548265636),
(78, 290, 386.0207248322297), (100, 290, 391.0319680026174), (109, 291, 391.0319680026174),
(120, 289, 391.0319680026174), (142, 289, 381.03280698648507), (120, 290, 391.0319680026174),
(109, 292, 392.03188645823184), (100, 291, 392.04591567825315),
(0., 1., 1.), (1., 1., 1.), (.25, .5, .5),
(.5, .25, .5)
]
alpha_shape = alphashape.alphashape(points_3d, lambda ind, r: 0.3 + any(np.array(points_3d)[ind][:,0] == 0.0))
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_trisurf(*zip(*alpha_shape.vertices), triangles=alpha_shape.faces)
plt.show()
``` |
Get remote MKV file metadata using nodejs |
|javascript|node.js|typescript|ffmpeg|mkv| |
I want to open a modalBottomSheet but still be able to interact with the map behind it, which is the barrier area. How can I achieve that?
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/810Dg.png |
How to make barrier area interactive in flutter modal bottom sheet |
|flutter|dart| |
The problem for me was just because in the action file I had another exported object! |
I had been using Redis via Docker with the command
`docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest`
I could spin up a password-protected redis-stack-server Docker instance via the command
`docker run -d --name redis-stack-server -p 6379:6379 -e REDIS_ARGS="--requirepass <password>" redis/redis-stack-server:latest`
You can log in to this instance via RedisInsight or any other client using <password> and the username `"default"`, which is the default username in Redis. |
I don't mind resetting the TrafficStats values before launching the application. For this I use TrafficStats.clearThreadStatsTag(), but TrafficStats does not clear.
I know that TrafficStats is reset when the phone is rebooted.
Is it possible to reset TrafficStats via code?
```
fun myNetvorkCheck() {
    checkNetworkConnection = CheckNetworkConnection(application)
    checkNetworkConnection.observe(this, { isConnected ->
        if (isConnected) {
            checkTraficStart()
            Toast.makeText(this, "Connected", Toast.LENGTH_SHORT).show()
        } else {
            checkTrafficEnd()
            Toast.makeText(this, "No Connected", Toast.LENGTH_SHORT).show()
        }
    })
}

fun checkTraficStart() {
    //resetMobile()
    // total download bytes
    val dowladBytes = TrafficStats.getTotalRxBytes()
    // total upload bytes
    val uploaBytes = TrafficStats.getTotalTxBytes()
    // mobile download bytes only, at start
    val startdowMobile = TrafficStats.getMobileRxBytes()
    // mobile upload bytes only, at start
    val startuploMobile = TrafficStats.getMobileTxBytes()
    // convert bytes to MB
    val finalmobDowload = (startdowMobile) / 1048576
    val finalmodUpload1 = (startuploMobile) / 1048576
    // total traffic for the whole day
    val modileZaden = finalmobDowload + finalmodUpload1
    startmobileTraffic = modileZaden.toInt()
    textDowload.text = finalmobDowload.toInt().toString()
    textUpload.text = finalmodUpload1.toInt().toString()
    txtInfo.text = modileZaden.toInt().toString()
}

fun checkTrafficEnd() {
    endmobileTraffic = endmobileTraffic + startmobileTraffic
    mobileVse = startmobileTraffic
    textObchiy.text = endmobileTraffic.toString()
    saveTrafic(mobileVse)
    saveTrafficAll(endmobileTraffic)
    resetMobile()
}

fun saveTrafic(res: Int) {
    val edit = traficPref?.edit()
    edit?.putInt("key", res)
    edit?.apply()
}

fun saveTrafficAll(res: Int) {
    val alledit = traficPrefAll?.edit()
    alledit?.putInt("keyAll", res)
    alledit?.apply()
}

fun resetMobile() {
    TrafficStats.clearThreadStatsTag()
}
```
Why do I get
textDowload.text = 515
textUpload.text = 50
and not
textDowload.text = 0
textUpload.text = 0
?
|
There are a few ways you can achieve this: basically, you can create a ServiceResolver or a factory that retrieves your object by key, or use the service-locator anti-pattern.
You can see more info here: https://andrewlock.net/exploring-the-dotnet-8-preview-keyed-services-dependency-injection-support/ |
We have activated "Route53 public DNS query logging" but have not found a way to limit DNS lookups (A, AAAA). Is there a way to accomplish this with the AWS CLI?
|
How to limit AWS Route 53 query frequency? |
|amazon-web-services|amazon-route53| |
{"OriginalQuestionIds":[1711990],"Voters":[{"Id":4117728,"DisplayName":"463035818_is_not_an_ai","BindingReason":{"GoldTagBadge":"c++"}}]} |
Okay, so I am using DMD to formulate a low-rank model of a system I'm working on. Everything works fine so far, and I can reconstruct the states for the time span I have measurements for, so for t \in [0, 10000] seconds. However, I want to forecast the next pred_steps = 100 seconds or so, and I know that the algorithm can do that; I am just not sure which formula I should use.
So the book by Brunton and Kutz shows two formulas, and I am confused as to which one is the correct one / the one I should use in my case?
The formulas in the book are:
A) $x_{k+1} = A x_{k}$
B) $x(t) = \Phi \exp(\Omega t) b$
I understand that one is in discrete time and the other in continuous time, but I am not sure which I should use... Are they equivalent?
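The two forms are consistent whenever the continuous-time eigenvalues are defined from the discrete DMD eigenvalues as $\omega = \ln(\lambda)/\Delta t$; a minimal numerical sketch of that equivalence, with a toy 2x2 matrix standing in for the DMD operator (an assumption, not your actual system):

```python
import numpy as np

# Toy linear system standing in for the DMD operator A, sampled every dt.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
dt = 1.0
x0 = np.array([1.0, 2.0])

lam, Phi = np.linalg.eig(A)      # discrete eigenvalues / modes
b = np.linalg.solve(Phi, x0)     # mode amplitudes, b = Phi^{-1} x0
omega = np.log(lam) / dt         # continuous-time eigenvalues

k = 25
x_discrete = np.linalg.matrix_power(A, k) @ x0        # formula A, iterated k times
x_continuous = (Phi * np.exp(omega * k * dt)) @ b     # formula B at t = k*dt

assert np.allclose(x_discrete, x_continuous.real)
```

If the two disagree in practice, a usual culprit is the sampling interval (computing omega with a different dt than the one the snapshots were taken at) or truncating to different ranks in the two reconstructions.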
Thank you in advance!!
I have already tried implementing both, and to be honest the forecast results are very different from one to the other, so I am not sure which one would be the "correct" approach. |
What is the correct formula to use when computing future state prediction (forecasting) with DMD? |
|prediction|forecasting|decomposition|data-driven|matrix-decomposition| |
In place of "NOCASE" you can use "CITEXT" as this worked for me
So in your case in place of "NOCASE_UTF8" use "CITEXT" and before this add a line - CREATE EXTENSION IF NOT EXISTS citext;
As in Postgres, you would typically use a different collation for achieving a case-insensitive comparison.
Postgres offers the citext data type for case-insensitive text comparison, which you can use directly for your column. Here's how you can create your table using citext
|
|python|html|flask|smtp| |
This is my (untested) code to perform this operation on nested directories:
```
import java.io.File

def latest(file: File): File =
  // listFiles returns null for a plain file, so wrap it in Option
  Option(file.listFiles).flatMap(_.maxByOption(_.lastModified)) match {
    case Some(f) => latest(f)
    case None    => file
  }
```
This is tail recursive and will compile to a simple loop.
|
In mbedTLS with RSA in a C program, encryption/decryption works when using separate buffers (for plainText, cipherText, and decryptedText, i.e. the content of plainText and decryptedText ends up the same), but not when using just one buffer for in-place encryption/decryption: I get gibberish/incorrectly decrypted data.
Is that a general limitation, or is my code wrong?
Background:
I'm trying to use in-place encryption and decryption with RSA in mbedTLS in a C program. [Here](https://forums.mbed.com/t/in-place-encryption-decryption-with-aes/4531) it says that "In place cipher is allowed in Mbed TLS, unless specified otherwise.", although I'm not sure if they are only talking about AES. From my understanding, the Mbed TLS API documentation doesn't specify otherwise for mbedtls_rsa_rsaes_oaep_decrypt.
Code:
```
size_t sizeDecrypted;
unsigned char plainText[15000] = "yxcvbnm";
unsigned char cipherText[15000];
unsigned char decryptedText[15000];
rtn = mbedtls_rsa_rsaes_oaep_encrypt(&rsa, mbedtls_ctr_drbg_random, &ctr_drbg, NULL, 0, sizeof("yxcvbnm"), &plainText, &cipherText);
rtn = mbedtls_rsa_rsaes_oaep_decrypt(&rsa, mbedtls_ctr_drbg_random, &ctr_drbg, NULL, 0, &sizeDecrypted, &cipherText, &decryptedText, 15000);
//decryptedText afterwards contains the correctly decrypted text just like plainText
unsigned char text[15000] = "yxcvbnm";
rtn = mbedtls_rsa_rsaes_oaep_encrypt(&rsa, mbedtls_ctr_drbg_random, &ctr_drbg, NULL, 0, sizeof("yxcvbnm"), &text, &text);
rtn = mbedtls_rsa_rsaes_oaep_decrypt(&rsa, mbedtls_ctr_drbg_random, &ctr_drbg, NULL, 0, &sizeDecrypted, &text, &text, 15000);
//someText afterwards doesn't contain the correctly decrypted text/has a different content than plainText
//rtn is always 0, i.e. no error is returned
``` |
Presumably you're looking for completion of `.tex` files in the current location without the `.\` in which case you can use [`Register-ArgumentCompleter`](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/register-argumentcompleter?view=powershell-7.4) with the `-Native` switch:
```powershell
Register-ArgumentCompleter -CommandName pdflatex -Native -ScriptBlock {
param($wordToComplete, $commandAst, $cursorPosition)
Get-ChildItem -Filter *.tex |
Where-Object { $_.Name.StartsWith($wordToComplete, [StringComparison]::InvariantCulture) } |
ForEach-Object Name
}
``` |
|javascript|html|forms| |
I have a PointLight struct in C++
```cpp
struct PointLight
{
glm::vec4 position; // 16 bytes
glm::vec4 color; // 16 bytes
float intensity; // 4 bytes rounded to 16 bytes?
float range; // 4 bytes rounded to 16 bytes?
};
```
usage in ssbo:
```glsl
layout(std430, binding = 3) buffer lightSSBO
{
PointLight pointLight[];
};
```
Will the float elements be rounded up to 16 bytes due to the vec4s?
Does std430 always round elements in an array to the size of the largest element?
Will I need to add padding to my struct to correctly access the variables?
```cpp
struct PointLight
{
glm::vec4 position; // 16 bytes
glm::vec4 color; // 16 bytes
float intensity; // 4 bytes
float range; // 4 bytes
float padding[2]; // padding to make struct multiple of 16 bytes, because
// largest element vec4 is 16 bytes
// total 48 bytes
};
``` |
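As I understand the std430 rules, the individual floats are NOT each rounded up to 16 bytes; they keep 4-byte alignment. What IS rounded is the array stride: the struct's size is rounded up to the struct's alignment, which equals its largest member alignment (vec4 -> 16), giving the 48-byte padded layout above. A rough sketch of that arithmetic (a sanity check only, not a substitute for querying the driver):

```python
import struct

def round_up(n, align):
    return (n + align - 1) // align * align

# (size, alignment) per member: position, color, intensity, range.
members = [(16, 16), (16, 16), (4, 4), (4, 4)]
offset = 0
for size, align in members:
    offset = round_up(offset, align) + size   # floats land at 32 and 36
struct_align = max(a for _, a in members)     # vec4 -> 16
stride = round_up(offset, struct_align)       # 40 rounded up to 48

# Matching CPU-side packing: 10 payload floats (two vec4s + two floats)
# followed by 8 bytes of tail padding.
blob = struct.pack('10f8x', *[float(i) for i in range(10)])
assert stride == 48 and len(blob) == stride
```

So yes, adding the two padding floats on the C++ side keeps the CPU struct in step with the SSBO array stride.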
How does OpenGL std430 layout align elements in an array? |
|opengl|glsl| |
|python|tensorflow|tf.keras| |
This might work, but the question isn't fully clear:
```
SELECT DISTINCT ID2
FROM LINK_TABLE lt1
WHERE ID1 IN (1,2)
AND NOT EXISTS (
SELECT 1
FROM LINK_TABLE lt2
WHERE lt2.ID2 = lt1.ID2
AND lt2.ID1 NOT IN (1,2)
)
```
---
Hmm... you could also try this and it will likely be even faster:
```
SELECT ID2
FROM LINK_TABLE lt1
GROUP BY ID2
HAVING SUM(CASE WHEN ID1 NOT IN (1,2) THEN 1 ELSE 0 END) = 0
AND SUM(CASE WHEN ID1 IN (1,2) THEN 1 ELSE 0 END) > 0
```
|
I have pandas dataframe with this input:
```
import pandas as pd

data = {
    'document_section_id': ['1', '', '1.2', '1.3', '1.3.1', '1.3.2', '2', '2.1', '2.2', '2.3', '', '2.3.2', '2.3.3', '3', '4', '4.1', '4.1.1', '4.2', '4.3', '4.4', '5', '5.1', '5.2', '5.3', '5.3.1', '5.3.2', '5.3.3', '5.3.4', '5.3.5', '5.4', '5.5', '6', '6.1', '6.1.1', '6.2', '6.3', '6.4', '6.5', '6.6', '6.6.1', '6.6.2', '6.6.3', '6.7', '6.8', '6.9', '6.9.1', '', '', '6.9.2'],
    'paragraph_type': ['Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading1', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading4', 'Heading4', 'Heading3']
}
df = pd.DataFrame(data)
```
The problem is to populate the blank "document_section_id" values with accurate section IDs, using the preceding ones as references.
Conditions:
1. The depth is determined by the "paragraph_type" column: a HeadingN row gets N dot-separated numbers (for example, a Heading4 row gets an ID like 1.2.3.1).
2. Each empty value should reference the preceding available section ID and increment the number at its own level by 1.
Example 1: Given the input, the section ID for the 12th row can be derived from the previous one, resulting in the computed value 2.3.1.
Example 2: For the 48th and 49th rows, the section IDs need to be derived as 6.9.1.1 and 6.9.1.2, respectively.
There can be at most 10 levels of subsections, and that should be handled irrespective of the number of subsections.
Output:
```
document_section_id = [
    '1', '1.1', '1.2', '1.3', '1.3.1', '1.3.2', '2', '2.1', '2.2', '2.3',
    '2.3.1', '2.3.2', '2.3.3', '3', '4', '4.1', '4.1.1', '4.2', '4.3',
    '4.4', '5', '5.1', '5.2', '5.3', '5.3.1', '5.3.2', '5.3.3', '5.3.4',
    '5.3.5', '5.4', '5.5', '6', '6.1', '6.1.1', '6.2', '6.3', '6.4',
    '6.5', '6.6', '6.6.1', '6.6.2', '6.6.3', '6.7', '6.8', '6.9', '6.9.1',
    '6.9.1.1', '6.9.1.2', '6.9.2'
]
paragraph_type = [
    'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3',
    'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3',
    'Heading3', 'Heading1', 'Heading1', 'Heading2', 'Heading3', 'Heading2',
    'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading2', 'Heading2',
    'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading2',
    'Heading2', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2',
    'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3',
    'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading4',
    'Heading4', 'Heading3'
]
```
|
When I run my command "yarn start" to launch the development server, the build shows no error, but in Logcat in Android Studio I get the following error, and I can't find the source of the problem.
"react-native": "0.67.5",
"react-native-device-info": "^10.10.0",
[enter image description here](https://i.stack.imgur.com/arq3T.png)
```
Unhandled SoftException
java.lang.RuntimeException: Catalyst Instance has already disappeared: requested by DeviceInfo
at com.facebook.react.bridge.ReactContextBaseJavaModule.getReactApplicationContextIfActiveOrWarn(ReactContextBaseJavaModule.java:66)
at com.facebook.react.modules.deviceinfo.DeviceInfoModule.invalidate(DeviceInfoModule.java:114)
at com.facebook.react.bridge.ModuleHolder.destroy(ModuleHolder.java:110)
at com.facebook.react.bridge.NativeModuleRegistry.notifyJSInstanceDestroy(NativeModuleRegistry.java:108)
at com.facebook.react.bridge.CatalystInstanceImpl$1.run(CatalystInstanceImpl.java:368)
at android.os.Handler.handleCallback(Handler.java:942)
at android.os.Handler.dispatchMessage(Handler.java:99)
at com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage(MessageQueueThreadHandler.java:27)
at android.os.Looper.loopOnce(Looper.java:201)
at android.os.Looper.loop(Looper.java:288)
at com.facebook.react.bridge.queue.MessageQueueThreadImpl$4.run(MessageQueueThreadImpl.java:226)
```
I'm expecting the home page of my app to be displayed, but I get a blank page; nothing is displayed. |
[React Native]: java.lang.RuntimeException: Catalyst Instance has already disappeared: requested by DeviceInfo |
|react-native|android-studio| |
I also had the same problem on macOS Ventura; the only fix that worked was:
```
brew uninstall colima
brew uninstall docker
```
After that I tried to install colima without docker:
```
brew install colima
```
Then, as expected, colima complained that the docker dependency was missing. So I installed docker:
```
brew install docker
```
And finally, I installed and started colima:
```
brew install colima
colima start
```
This was the only solution that worked for me. |
As of v17.2, yes, deferrable views are only supported in standalone components.
The Angular team is looking to add support for non-standalone components in v18, so wait and see. |
TrafficStats.clearThreadStatsTag() doesn't work in Kotlin |
|kotlin|android-studio|mobile| |
I'm exploring the use of the Databricks REST API to access data from underlying tables without relying on generating personal access tokens (PATs) through the web UI. Imagine I have Platform A, from which I obtain an "Authorization token," and I aim to use this token to authenticate in Databricks (Platform B). Both platforms are connected via the same identity provider (IDP) and employ single sign-on.
I have a couple of questions:
1. Can I authenticate in Databricks using the token obtained from Platform A and then generate a PAT using an API call?
2. Are there alternative methods to programmatically generate this token?
Despite consulting Databricks documentation, I couldn't find guidance on these aspects.
Most examples focus on generating personal access tokens through the web UI.
Additionally, when I attempted to use Postman to include the Authorization header with a JWT token, I encountered an error stating "unable to load OAuth config."
I appreciate any insights or suggestions. Thank you. |
Generate Databricks personal access token using REST API |
|jwt|databricks|single-sign-on|aws-databricks|identity-provider| |
I'm attempting to use ffmpegwasm in a Next.js project to convert MP3 or MP4 files to WAV format directly in the frontend. However, I encounter a "Module not found" error during the process. I have made sure to use the latest version of Next.js. Below is the error message and the code snippet where the issue occurs. I'm seeking assistance to resolve this problem, as it has become quite troubling.
**error**
```
./node_modules/@ffmpeg/ffmpeg/dist/esm/classes.js:104:27 Module not found
102 | if (!this.#worker) {
103 | this.#worker = classWorkerURL ?
> 104 | new Worker(new URL(classWorkerURL, import.meta.url), {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
105 | type: "module",
106 | }) :
107 | // We need to duplicated the code here to enable webpack
```
```
"use client"
import { FFmpeg } from "@ffmpeg/ffmpeg"
import { fetchFile, toBlobURL } from "@ffmpeg/util"
import React, { useEffect, useRef, useState } from "react"
export default function TestPage() {
const [loaded, setLoaded] = useState(false)
const ffmpegRef = useRef(new FFmpeg())
const messageRef = useRef(null)
useEffect(() => {
load()
}, [])
const load = async () => {
const baseURL = "https://unpkg.com/@ffmpeg/core@0.12.6/dist/umd"
const ffmpeg = ffmpegRef.current
ffmpeg.on("log", ({ message }) => {
if (messageRef.current) messageRef.current.innerHTML = message
console.log(message)
})
await ffmpeg.load({
coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, "text/javascript"),
wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, "application/wasm"),
})
setLoaded(true)
}
const convertToWav = async ({ target: { files } }) => {
const ffmpeg = ffmpegRef.current
const file = files[0]
await ffmpeg.writeFile("input.mp4", await fetchFile(file))
await ffmpeg.exec(["-i", "input.mp4", "output.wav"])
const data = await ffmpeg.readFile("output.wav")
const url = URL.createObjectURL(new Blob([data.buffer], { type: "audio/wav" }))
const link = document.createElement("a")
link.href = url
link.setAttribute("download", "output.wav")
document.body.appendChild(link)
link.click()
}
return (
<div>
{loaded ? (
<>
<input type="file" onChange={convertToWav} accept="audio/mp3,video/mp4" />
<p ref={messageRef}></p>
</>
) : (
<button onClick={load}>Load ffmpeg-core</button>
)}
</div>
)
}
```
Attempted Solutions:
I've ensured that I'm using the latest version of Next.js.
I've tried various configurations for the ffmpeg instance.
Questions:
How can I resolve the "Module not found" error when using ffmpegwasm with Next.js?
Are there any specific configurations or setups within Next.js that I need to be aware of to successfully use ffmpegwasm?
Any guidance or assistance with this issue would be greatly appreciated. Thank you in advance for your help. |
|javascript|next.js| |
|javascript|typescript|next.js| |
The tox 4.12.2 documentation on [External Package builder](https://tox.wiki/en/4.14.2/config.html#external-package-builder) says that it is possible to define an `external` package option (thanks for the comment @JürgenGmach). The external [package](https://tox.wiki/en/4.14.2/config.html#package) option means that you set
```ini
[testenv]
...
package = external
```
In addition to this, one must create a section called `[.pkg_external]` (or `<package_env>_external` if you have edited your [package_env](https://tox.wiki/en/4.14.2/config.html#package_env), which has the alias `isolated_build_env`). In this section, one should define *at least* [`package_glob`](https://tox.wiki/en/4.14.2/config.html#package_glob), which tells tox where to find the built wheel. If you also want to *create* the wheel, you can do that in the `commands` option of `[.pkg_external]`.
## Simple approach (multiple builds)
Example of a working configuration (tox 4.12.2):
```ini
[testenv:.pkg_external]
deps =
build==1.1.1
commands =
python -c 'import shutil; shutil.rmtree("{toxinidir}/dist", ignore_errors=True)'
python -m build -o {toxinidir}/dist
package_glob = {toxinidir}{/}dist{/}wakepy-*-py3-none-any.whl
```
- Pros: Pretty simple to implement
- Cons: This approach has the downside that the build (`python -m build`) is triggered once for each of your environments that does not have `skip_install=True`. This has an open issue: [tox #2729](https://github.com/tox-dev/tox/issues/2729).
## Building wheel only once
It is also possible to make tox 4.14.2 build the wheel only once, using the tox [hooks](https://tox.wiki/en/4.14.2/plugins.html). As can be seen from the *Order of tox execution* (in the Appendix), one hook which can be used for this is `tox_on_install` for ".pkg_external" (either "requires" or "deps"). I use it to place a dummy file (`/dist/.TOX-ASKS-REBUILD`) which signals that a build should be done. If `.TOX-ASKS-REBUILD` exists when the build script is run, the `/dist` folder with all of its contents is removed, and a new `/dist` folder with a .tar.gz and a .whl file is created.
- Pros:
- Faster to run tox, as sdist and wheel built only as many times as required.
- Will build also if using tox with single env, like `tox -e py311` (if not `skip_install=True`)
- Cons:
- More involved
- Will not work in parallel mode. For that, probably would require to have separate build command which would be ran each time before tox (unless the parallelization plugin supports a common pre-command).
Hopefully this solution will become unnecessary at some point (when #2729 gets resolved)
### The hook
- Located in a `toxfile.py` at project root.
```python
from __future__ import annotations
import typing
from pathlib import Path
from typing import Any
from tox.plugin import impl
if typing.TYPE_CHECKING:
from tox.tox_env.api import ToxEnv
dist_dir = Path(__file__).resolve().parent / "dist"
tox_asks_rebuild = dist_dir / ".TOX-ASKS-REBUILD"
@impl
def tox_on_install(tox_env: ToxEnv, arguments: Any, section: str, of_type: str):
if (tox_env.name != ".pkg_external") or (of_type != "requires"):
return
# This signals to the build script that the package should be built.
tox_asks_rebuild.parent.mkdir(parents=True, exist_ok=True)
tox_asks_rebuild.touch()
```
### The tox_build_mypkg.py
- Located at `/tests/tox_build_mypkg.py`
```python
import shutil
import subprocess
from pathlib import Path
dist_dir = Path(__file__).resolve().parent.parent / "dist"
def build():
if not (dist_dir / ".TOX-ASKS-REBUILD").exists():
print("Build already done. skipping.")
return
print(f"Building sdist and wheel into {dist_dir}")
# Cleanup. Remove all older builds; the /dist folder and its contents.
# Note that tox would crash if there were two files with .whl extension.
# This also resets the TOX-ASKS-REBUILD so we build only once.
shutil.rmtree(dist_dir, ignore_errors=True)
out = subprocess.run(
f"python -m build -o {dist_dir}", capture_output=True, shell=True
)
if out.stderr:
raise RuntimeError(out.stderr.decode("utf-8"))
print(out.stdout.decode("utf-8"))
if __name__ == "__main__":
build()
```
### The tox.ini
```ini
[testenv]
; The following makes the packaging use the external builder defined in
; [testenv:.pkg_external] instead of using tox to create sdist/wheel.
; https://tox.wiki/en/latest/config.html#external-package-builder
package = external
[testenv:.pkg_external]
; This is a special environment which is used to build the sdist and wheel
; to the dist/ folder automatically *before* any other environments are ran.
; All of this require the "package = external" setting.
deps =
; The build package from PyPA. See: https://build.pypa.io/en/stable/
build==1.1.1
commands =
python tests/tox_build_mypkg.py
; This determines which files tox may use to install mypkg in the test
; environments. The .whl is created with the tox_build_mypkg.py
package_glob = {toxinidir}{/}dist{/}mypkg-*-py3-none-any.whl
```
## Notes
- This solution requires a fairly new version of tox. The tox_on_install hook [was added in tox 4.0.9](https://tox.wiki/en/4.14.2/changelog.html#v4-0-9-2022-12-13).
- If using any tox extensions and having problems with the hooks being called, try without the tox extension first.
- It is possible to also put the hooks in any *installed* python module and define the location in `pyproject.toml` as described in [Extensions points](https://tox.wiki/en/4.11.4/plugins.html#module-tox.plugin). However, the `toxfile.py` is a bit handier as it does not have to be *installed* in the current environment.
## Appendix
### Order of tox execution
The order of execution within tox can be reverse-engineered by using the dummy hook file `tox_print_hooks.py` (defined below) and the bullet point list about the order of execution in the [System Overview](https://tox.wiki/en/4.14.2/user_guide.html#system-overview). Note that I have set `package = external` already, which has some effect on the output. Here is what tox does:
```
1) CONFIGURATION
tox_register_tox_env
tox_add_core_config
tox_add_env_config (N+2 times[1])
2) ENVIRONMENT (for each environment)
tox_on_install (envname, deps)
envname: install_deps (if not cached)
If not all(skip_install) AND first time: [2]
tox_on_install (.pkg_external, requires)
.pkg_external: install_requires (if not cached)
tox_on_install (.pkg_external, deps)
.pkg_external: install_deps (if not cached)
If not skip_install:
.pkg_external: commands
tox_on_install (envname, package)
envname: install_package [3]
tox_before_run_commands (envname)
envname: commands
tox_after_run_commands (envname)
tox_env_teardown (envname)
```
------
<sup>[1]</sup> N = number of environments in tox config file. The "2" comes from .pkg_external and .pkg_external_sdist_meta <br>
<sup>[2]</sup> "First time" means: First time in this tox call. This is done only if there is at least one selected environment which does not have `skip_install=True`. <br>
<sup>[3]</sup> This installs the package from wheel. If using the `package = external` in [testenv], it takes the wheel from the place defined by the `package_glob` in the `[testenv:.pkg_external]` <br>
### The dummy hook file `tox_print_hooks.py`
```python
from typing import Any
from tox.config.sets import ConfigSet, EnvConfigSet
from tox.execute.api import Outcome
from tox.plugin import impl
from tox.session.state import State
from tox.tox_env.api import ToxEnv
from tox.tox_env.register import ToxEnvRegister
@impl
def tox_register_tox_env(register: ToxEnvRegister) -> None:
print("tox_register_tox_env", register)
@impl
def tox_add_core_config(core_conf: ConfigSet, state: State) -> None:
print("tox_add_core_config", core_conf, state)
@impl
def tox_add_env_config(env_conf: EnvConfigSet, state: State) -> None:
print("tox_add_env_config", env_conf, state)
@impl
def tox_on_install(tox_env: ToxEnv, arguments: Any, section: str, of_type: str):
print("tox_on_install", tox_env, arguments, section, of_type)
@impl
def tox_before_run_commands(tox_env: ToxEnv):
print("tox_before_run_commands", tox_env)
@impl
def tox_after_run_commands(tox_env: ToxEnv, exit_code: int, outcomes: list[Outcome]):
print("tox_after_run_commands", tox_env, exit_code, outcomes)
@impl
def tox_env_teardown(tox_env: ToxEnv):
print("tox_env_teardown", tox_env)
``` |
I am working on a Rust project using the libp2p library to create a peer-to-peer network. I have configured my swarm to listen on all interfaces using the following code:
```rust
let listen_address_udp = format!("/ip4/0.0.0.0/udp/{}/quic-v1", port);
swarm.listen_on(listen_address_udp.parse()?)?;
let listen_address_tcp = format!("/ip4/0.0.0.0/tcp/{}", port);
swarm.listen_on(listen_address_tcp.parse()?)?;
```
However, when reading swarm events, I am unable to retrieve the external public address that my node is listening on. The code for reading swarm events is as follows:
```rust
loop {
select! {
_ = sig_term_handler.recv() => {
trigger_message = !trigger_message;
},
event = swarm.select_next_some() => match event {
SwarmEvent::Behaviour(MyBehaviourEvent::Gossipsub(gossipsub::Event::Message {
propagation_source: peer_id,
message_id: id,
message,
})) => {
println!(
"Received '{}' with id: {id} from peer: {peer_id}, Size: {}",
String::from_utf8_lossy(&message.data), message.data.len()
)
},
SwarmEvent::NewListenAddr { address, .. } => {
println!("Local node is listening on {address}");
}
_ => {}
}
}
}
```
The output I receive only shows the local addresses where my node is listening, such as:
```
Local node is listening on /ip4/127.0.0.1/tcp/8082
Local node is listening on /ip4/172.24.181.240/tcp/8082
```
I have tried pinging and opening port 8082 for TCP on all nodes, but I still cannot determine if my local node is publicly listening or not. What should be the external public address I expect to see in the `SwarmEvent::NewListenAddr` event? Any suggestions or insights into resolving this issue would be greatly appreciated.
|
Unable to Retrieve External Public Address in libp2p Swarm Events |
|rust|p2p|libp2p| |
null |
You could use [`top_k`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.top_k.html) and slice the last row:
```
import numpy as np
import polars as pl

np.random.seed(0)
df = pl.DataFrame({'col': np.random.choice(5, size=5, replace=False)})

out = df.top_k(2, by='col')[-1]
```
Output:
```
shape: (1, 1)
┌─────┐
│ col │
│ --- │
│ i64 │
╞═════╡
│ 3 │
└─────┘
```
Input:
```
shape: (5, 1)
┌─────┐
│ col │
│ --- │
│ i64 │
╞═════╡
│ 2 │
│ 0 │
│ 1 │
│ 3 │
│ 4 │
└─────┘
``` |
FYI: likely to be related to [this Node issue][1]. Introduced in Node 20.0, fixed in 20.3.
[1]: https://github.com/nodejs/node/issues/47822 |
How can I generate a concave hull of 3D points? |
|python|point-clouds|triangulation|3d-reconstruction|alpha-shape| |
null |
Applying `sudo` depends on which type of **LXC** container you are working on.
Privileged containers require `sudo`, and must be created with `sudo lxc-create ...`.
Unprivileged containers, though, can be created with simply `lxc-create ...`
I'd like to export/install an example project that shows how to use my CMake module (basically a C library). I'd like to distribute a `CMakeLists.txt` with the example that builds a hello-world-like executable that links with my library. The in-source version of the example's `CMakeLists.txt` `INCLUDE`s some cmake fragments that are also used elsewhere.
I'd like to replace these `INCLUDE` commands with the contents of the files (recursively) so that eventually a single `CMakeLists.txt` remains that can be distributed, similarly to what [latexpand][1] does for LaTeX documents.
Is there any built-in functionality that can help with that? Searching for "cmake" and "include" is not exactly very fruitful... :)
[1]: https://ctan.org/pkg/latexpand |
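To illustrate what I mean: absent built-in support, I could hack together a preprocessing script along these lines (a rough Python sketch that only handles `include()` calls with explicit `.cmake` file paths, not full CMake semantics), but I'd much rather use something built-in:

```python
import re
from pathlib import Path

# Matches lines like: include(fragments/common.cmake)
INCLUDE_RE = re.compile(r'^\s*include\s*\(\s*([^)\s]+)\s*\)\s*$', re.IGNORECASE)

def expand(path: Path) -> str:
    """Recursively inline include(<file>.cmake) commands, relative to `path`."""
    lines = []
    for line in path.read_text().splitlines():
        m = INCLUDE_RE.match(line)
        if m and m.group(1).endswith('.cmake'):
            lines.append(expand(path.parent / m.group(1)))  # recurse into the fragment
        else:
            lines.append(line)
    return '\n'.join(lines)
```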
CMake: "dereference" INCLUDE commands to create a single CMakeLists.txt |
|cmake|cmake-language| |
I am trying to get the total number of students who have enrolled in a particular course (given its course id) in this month only. The table structure is the same as in most Moodle databases.
I don't want the total of students of all time, I just want the ones enrolled this month.
The code can be something similar to what is provided on this page: [check-similar-here](https://stackoverflow.com/questions/22161606/sql-query-for-courses-enrolment-on-moodle)
The above link provides similar code but does not include the **date** part which I want.
You can also suggest any other way, such as using Moodle's external web services, if such a function exists there.
SQL query to get student enrolled in this month in a course - Moodle |
|sql|wordpress|woocommerce|moodle|moodle-api| |
null |