I'm making an app with the Qt framework using Qt Creator and CMake (not qmake, because I want the full project to be buildable from other IDEs, but that's not relevant to this topic). In this project I need the pffft library, which is a compact and convenient alternative to FFTW. This library consists of a .h header and a .c implementation file. But when I try to build and run the project, I get the LNK2019 error for EVERY function I use from this library, although I added the .h and .c files to CMakeLists.txt and included the .h file in every source file that uses it (checked that around 10 times now haha).
Note: everything worked fine before adding pffft, so I'm pretty sure it's this library that creates the problem.
Here is the part of my CMakeLists.txt where I tell CMake to compile all the files I need (I know the .ico and .png files are also in the list; this file is automatically generated by Qt, so I don't touch it much):
```
set(PROJECT_SOURCES
main.cpp
mainwindow.cpp
mainwindow.h
mainwindow.ui
)
if(${QT_VERSION_MAJOR} GREATER_EQUAL 6)
qt_add_executable(IR_Maker
MANUAL_FINALIZATION
${PROJECT_SOURCES}
pffft.c pffft.h
about.h about.cpp about.ui
add.qrc
img/irmaker.ico
sweepgenerator.h sweepgenerator.cpp sweepgenerator.ui
img/branch-closed2.png img/branch-open2.png
qcustomplot.cpp qcustomplot.h
)
```
Finally, my config is :
- Windows 10
- Qt 6.6.2
- Qt Creator 12.0.1
- MSVC 2019 compiler
Since the pffft library has a .c source file and not a .cpp, I changed `project(IR_Maker VERSION 0.1 LANGUAGES CXX)` to `project(IR_Maker VERSION 0.1 LANGUAGES CXX C)`, and the LNK2019 errors disappeared, but now I get a C1189 error ("Error in C++ Standard Library usage") in a .h file inside the compiler directory, so I don't know if that gets me any closer to a solution. The same thing happens if I simply remove `LANGUAGES CXX` from the project() arguments.
I also switched the compiler from MSVC 2019 to MinGW because I heard pffft isn't MSVC-friendly (or the opposite haha). The errors also disappeared, but now there's another problem: the autogenerated .h file produced from the mainwindow's .ui file (where qcustomplot.h is used) can't find my qcustomplot.h file, even though qcustomplot.h is in my project directory.
I checked for the `extern "C"` guard in the .h file, and it looks like the code below. I'm not sure about the order of the rest, so maybe that's useful too:
```
#ifndef PFFFT_H
#define PFFFT_H
#include <stddef.h> // for size_t
#ifdef __cplusplus
extern "C"
{
// all the header contents are there
}
#endif
#endif // PFFFT_H
```
As noted in the comments, an `awk` solution is simple.
For example:
```
awk '!index($0,q) || ++c>n' n=3 q=please "$filename"
```
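For the record, the condition reads: `index($0,q)` is nonzero when the line contains `q`, so `!index($0,q)` prints (awk's default action) every line that does not contain `q`; for lines that do contain it, `++c>n` only becomes true from the (n+1)-th match onward. A self-contained demo (`sample.txt` is a hypothetical file created just for the demo):

```shell
# Build a small sample file
printf '%s\n' 'keep me' 'please 1' 'please 2' 'keep me too' 'please 3' 'please 4' > sample.txt

# Delete the first 3 lines containing "please", print everything else
awk '!index($0,q) || ++c>n' n=3 q=please sample.txt
# -> keep me
# -> keep me too
# -> please 4
```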
---
It is not very convenient to try to count with `sed`, although it is possible. Parameterising arguments is also complicated.
Ignoring both those issues, here is a sed script to delete the first 3 occurrences of lines containing "please":
```
sed '
/please/ {
x
s/././3
x
t
x
s/^/ /
x
d
}
' "$filename"
```
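The counting trick, annotated: the hold space serves as a counter that grows by one character for each matching line; `s/././3` can only succeed once the hold space holds at least three characters, so `t` (branch on successful substitution, which lets the line print) fires from the fourth match onward. A self-contained sketch with a throwaway sample file (`sample2.txt` is hypothetical):

```shell
filename=sample2.txt
printf '%s\n' 'please a' 'keep 1' 'please b' 'please c' 'please d' > "$filename"

sed '
# only lines containing "please" are touched
/please/ {
  # swap in the hold space, used as a counter
  x
  # succeeds only once the counter holds at least 3 characters
  s/././3
  # swap the pattern space back
  x
  # substitution succeeded: branch to end, so the line is printed
  t
  # otherwise grow the counter by one character and delete the line
  x
  s/^/ /
  x
  d
}
' "$filename"
# -> keep 1
# -> please d
```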
Applying either script to:
```
1 leave this line alone
2 leave this line alone
1 please delete this line
3 leave this line alone
2 please delete this line
4 leave this line alone
5 leave this line alone
3 please delete this line
6 leave this line alone
4 please leave this line alone
7 leave this line alone
```
produces:
```
1 leave this line alone
2 leave this line alone
3 leave this line alone
4 leave this line alone
5 leave this line alone
6 leave this line alone
4 please leave this line alone
7 leave this line alone
```
As "one-liner":
```
sed -e'/please/{x;s/././3;x;t' -e'x;s/^/ /;x;d;}' "$filename"
```
|
[CICS Transaction Gateway (CICS TG) SDK Downloads](https://www.ibm.com/support/pages/node/6217349) which says (in part)
> The CICS TG SDK contains all the CICS TG JARs, RARs, shared libraries, headers, libs, and schemas that are needed to write, compile, and ship a CICS TG application for any supported release of CICS TG. The SDK is designed to provide everything that is required to write a client application, implementing one of the exits, processing CICS TG statistics, and creating JSON web services.
The download there contains an archive within an archive, and inside the inner archive you can find `cicstgsdk\api\jee\runtime\managed\cicseci.rar`
You may also find [Choosing the correct CICS TG Resource Adapter to deploy in WebSphere Application Server](https://www.ibm.com/support/pages/choosing-correct-cics-tg-resource-adapter-deploy-websphere-application-server) helpful. |
```
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  // ensureInitialized() and setTitleBarStyle() return Futures, so await them
  await WindowManager.instance.ensureInitialized();
  await WindowManager.instance.setTitleBarStyle(TitleBarStyle.hidden);
  runApp(MyApp());
}
```
I was able to mostly recreate the LAMB namespace that's explained on this site [Excel Lambda Introducing the LAMB Namespace][1].
I called it PIPER and added some logging and helper functionality.
LAMBDA(Names,Initial,
LET(SQRT_,LAMBDA(vector,INDEX(SQRT(vector))),
LN_,LAMBDA(vector,INDEX(LN(vector))),
LOG_,LAMBDA(base,LAMBDA(vector,INDEX(LOG(vector,base)))),
LOG_10_,LAMBDA(vector,INDEX(LOG_(10)(vector))),
POWER_, LAMBDA(exponent, LAMBDA(vector, INDEX(POWER(vector, exponent)))),
RECIPROCAL_,LAMBDA(vector,INDEX(POWER_(-1)(vector))),
RECIPROCAL_SQ_,LAMBDA(vector,INDEX(POWER_(-2)(vector))),
CUBEROOT_,LAMBDA(vector, INDEX(POWER_(1/3)(vector))),
MINVERSE_,LAMBDA(m,INDEX(MINVERSE(m))),
MTRANSPOSE_,LAMBDA(m,INDEX(TRANSPOSE(m))),
MMULT_, LAMBDA(vectorA, LAMBDA(vectorB, INDEX(MMULT(vectorA, vectorB)))),
FnNames, VSTACK("sqrt", "ln", "log", "log_10", "power", "reciprocal", "reciprocal_sq", "cuberoot", "triple", "minverse", "mtranspose", "mmult"),
CHOOSER_, LAMBDA(name, CHOOSE(IFERROR(MATCH(LOWER(name), FnNames,0), name), SQRT_, LN_, LOG_, LOG_10_, POWER_, RECIPROCAL_, RECIPROCAL_SQ_, CUBEROOT_, TRIPLE, MINVERSE_, MTRANSPOSE_, MMULT_)),
STACKER_, LAMBDA(s,f,v, INDEX(VSTACK(HSTACK(s, IF(COLUMNS(v)>1, HSTACK(f, SEQUENCE(1, COLUMNS(v)-1, 2)), f)), HSTACK(INDEX(s+SEQUENCE(ROWS(v))*POWER(10, -(1 + FLOOR(LOG10(ABS(ROWS(v))))))), v)))),
REDUCER_,LAMBDA(prev,cur,INDEX(CHOOSER_(cur)(prev))),
REDUCER_LOG_,LAMBDA(prev,cur,
LET(StageNos,INDEX(prev,0,1),
StageNo,INT(MAX(StageNos)),
StageHeader,INDEX(prev,MATCH(StageNo,StageNos,0),0),
StageFn, INDEX(StageHeader,1,2),
StageVal, FILTER(FILTER(prev, StageNos>StageNo), ISBETWEEN(SEQUENCE(1, COLUMNS(prev)),2, COUNTA(StageHeader))),
NextVal, CHOOSER_(cur)(StageVal),
IFERROR(VSTACK(prev, STACKER_(StageNo+1,cur,NextVal))))),
IS_LOGIT_,LAMBDA(n,LOWER(INDEX(n,1,1))="logit"),
IS_HELP_,LAMBDA(n,OR(LOWER(INDEX(n,1,1))="help",INDEX(n,1,1)="?")),
NAMES_,LAMBDA(n, IF(IS_LOGIT_(n), FILTER(TOCOL(n), SEQUENCE(ROWS(TOCOL(n)))>1), TOCOL(n))),
IF(IS_HELP_(Names), VSTACK(, "Execute a process pipe by providing either the function IDs shown below or function names and initial value like this: PIPER({1,""triple"",8},12). A single function can be executed by providing the name of the function and no initial value. For example PIPER(""triple"",)(9)=27"&CHAR(10)&CHAR(10)&REDUCE("ID|FUNCTION",SEQUENCE(ROWS(FnNames)), LAMBDA(t,i,t&CHAR(10)&i&"|"&UPPER(INDEX(FnNames,i))))),
IF(ISBLANK(INDEX(Initial,1,1)),
CHOOSER_(Names),
IF(IS_LOGIT_(Names),
REDUCE(STACKER_(0, "initial", Initial), NAMES_(Names), REDUCER_LOG_),
REDUCE(Initial,NAMES_(Names),REDUCER_))))))
Bottom line: it uses an array of function names to identify the index of the next function in the pipe, then uses CHOOSE to get that function and pass it parameters.
As it's written, the functions and their names are hardcoded into the function, but I've been working on some similar stuff lately involving multiple actions and their triggers. Each action has an associated trigger, similar to the function names in PIPER. I use it for state management. Both the triggers and the actions are easily expandable. An action can be just a value, or a lambda function that accepts up to 3 optional parameters, based on my specific use case:
LAMBDA(state, stateCount, previousTrigSum, triggers, actions, LET(
_F1,"Calls action with various call signatures",
TakeAction, lambda(id,s,i, let(a, actions(id), if(iserror(a(s,i,id)), if(iserror(a(s,i)), if(iserror(a(s)), if(iserror(a()), a, a()), a(s)), a(s,i)), a(s,i,id)))),
_F2,"Gets sum of trigger indexes",
TriggerSum, lambda(t, sumproduct(tocol(t),sequence(counta(t)))),
_F3,"Returns which trigger ran",
WhichAction,lambda(p,c,abs(p-c)),
currentTrigSum, TriggerSum(triggers),
actionId, WhichAction(previousTrigSum, currentTrigSum),
newState, TakeAction(actionId, state, stateCount),
HSTACK(newState, stateCount+1, currentTrigSum)))
(HSTACK(A1,B1,C1), LAMBDA(id, CHOOSE(id, 25, LAMBDA(a, a+2), LAMBDA(a,b, a+b))))
I'm still trying to figure out the preferred structure, but bundling my triggers and actions up into those two parameters seems to help with a lot of things. Here's a sample sheet that goes into more detail on the [State Management][2].
[1]: https://www.flexyourdata.com/blog/excel-lambda-introducing-the-lamb-namespace/
[2]: https://docs.google.com/spreadsheets/d/1XpeV1PWUmXve5EUSnif-pM2pTcNdfXk2I3xw5TfiITI/edit?usp=sharing |
I have used `\begin{equation}` inside one item of a `\begin{enumerate}` list. How can I make the equation numbering start again at 1 in the next item? For example:
item 1:
1
2
...
100
item 2:
1
2
...
100
How can I reset the numbering in item 2? Thanks.
I have tried restarting the equations, but it doesn't work; please give me a fully detailed code example if you can.
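Not from the question, just a minimal sketch of the setup with one common way to restart the numbering — resetting the `equation` counter with `\setcounter{equation}{0}` at the start of the next item (assuming standard article-class numbering):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{enumerate}
  \item First item:
    \begin{equation} a = b \end{equation}% numbered (1)
    \begin{equation} c = d \end{equation}% numbered (2)
  \item Second item:
    \setcounter{equation}{0}% restart numbering at 1
    \begin{equation} e = f \end{equation}% numbered (1) again
\end{enumerate}
\end{document}
```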
How to begin a new equation |
|latex| |
My WebView application starts with a progress bar; after the progress bar hides, the screen is totally black and the website does not load in the Android WebView (Android Studio Hedgehog).
Here is my MainActivity.java code:
public class MainActivity extends AppCompatActivity {
String websiteURL = "https://xxxxxxxxx.com";
private WebView webview;
ProgressBar progressBar;
SwipeRefreshLayout mySwipeRefreshLayout;
private ValueCallback<Uri> mUploadMessage;
private Uri mCapturedImageURI = null;
private ValueCallback<Uri[]> mFilePathCallback;
private String mCameraPhotoPath;
private static final int INPUT_FILE_REQUEST_CODE = 1;
private static final int FILECHOOSER_RESULTCODE = 1;
private File createImageFile() throws IOException {
// Create an image file name
String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
String imageFileName = "JPEG_" + timeStamp + "_";
File storageDir = Environment.getExternalStoragePublicDirectory(
Environment.DIRECTORY_PICTURES);
File imageFile = File.createTempFile(
imageFileName, /* prefix */
".jpg", /* suffix */
storageDir /* directory */
);
return imageFile;
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
if(! CheckNetwork.isInternetAvailable(this))//returns true if internet available
{//if there is no internet do this
setContentView(R.layout.activity_main);//Toast.makeText(this,"No Internet Connection, Chris",Toast.LENGTH_LONG).show();
new AlertDialog.Builder(this)//alert the person knowing they are about to close
.setTitle("No internet connection available")
.setMessage("Please check your mobile data or Wi-Fi network.")
.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
finish();
}
})
//.setNegativeButton("No", null)
.show();
}
else
{
//Webview stuff
webview = findViewById(R.id.webView);
progressBar = findViewById(R.id.progressBar1);
webview.setWebViewClient(new WebViewClientDemo());
//ChromeClient is used for select file from gallery
webview.setWebChromeClient(new ChromeClient());
webview.getSettings().setJavaScriptEnabled(true);
webview.getSettings().setAllowFileAccess(true);
webview.getSettings().setAllowContentAccess(true);
webview.loadUrl(websiteURL);
}//Swipe to refresh functionality
mySwipeRefreshLayout = (SwipeRefreshLayout)this.findViewById(R.id.swipeContainer);
mySwipeRefreshLayout.setOnRefreshListener(new SwipeRefreshLayout.OnRefreshListener() {
@Override
public void onRefresh() {
webview.reload();
}
});
}
private class WebViewClientDemo extends WebViewClient {
@Override //Keep webview in app when clicking links
public boolean shouldOverrideUrlLoading(WebView view, String url) {
if(url.startsWith("tel:") || url.startsWith("whatsapp:")) {
Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setData(Uri.parse(url));
startActivity(intent);
return true;
}
else if (url.startsWith("http") || url.startsWith("https")) {
/*view.loadUrl(url);*/
return false;
}
/*else if (url.startsWith("intent")) {
try {
Intent intent = Intent.parseUri(url, Intent.URI_INTENT_SCHEME);
String fallbackUrl = intent.getStringExtra("browser_fallback_url");
if (fallbackUrl != null) {
view.loadUrl(fallbackUrl);
return false;
}
} catch (URISyntaxException e) {
//not an intent uri
}
return true;
}*/
// do your handling codes here, which url is the requested url
// probably you need to open that url rather than redirect:
if ( url.contains(".pdf")){
Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setDataAndType(Uri.parse(url), "application/pdf");
try{
view.getContext().startActivity(intent);
} catch (ActivityNotFoundException e) {
//user does not have a pdf viewer installed
}
} else {
webview.loadUrl(url);
}
return false; // then it is not handled by default action
}
@Override
public void onPageStarted(WebView view, String url, Bitmap favicon) {
super.onPageStarted(view, url, favicon);
}
@Override
public void onPageFinished(WebView view, String url) {
super.onPageFinished(view, url);
/*mySwipeRefreshLayout.setRefreshing(false);*/
progressBar.setVisibility(View.GONE);
}
@Override
public void onReceivedError(WebView view, int errorCode, String description, String failingUrl) {
Log.e("error",description);
}
}//set back button functionality
public class ChromeClient extends WebChromeClient {
// For Android 5.0
public boolean onShowFileChooser(WebView view, ValueCallback<Uri[]> filePath, WebChromeClient.FileChooserParams fileChooserParams) {
// Double check that we don't have any existing callbacks
if (mFilePathCallback != null) {
mFilePathCallback.onReceiveValue(null);
}
mFilePathCallback = filePath;
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
// Create the File where the photo should go
File photoFile = null;
try {
photoFile = createImageFile();
takePictureIntent.putExtra("PhotoPath", mCameraPhotoPath);
} catch (IOException ex) {
// Error occurred while creating the File
//Log.e(Common.TAG, "Unable to create Image File", ex);
}
// Continue only if the File was successfully created
if (photoFile != null) {
mCameraPhotoPath = "file:" + photoFile.getAbsolutePath();
takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT,
Uri.fromFile(photoFile));
} else {
takePictureIntent = null;
}
}
Intent contentSelectionIntent = new Intent(Intent.ACTION_GET_CONTENT);
contentSelectionIntent.addCategory(Intent.CATEGORY_OPENABLE);
contentSelectionIntent.setType("image/*");
Intent[] intentArray;
if (takePictureIntent != null) {
intentArray = new Intent[]{takePictureIntent};
} else {
intentArray = new Intent[0];
}
Intent chooserIntent = new Intent(Intent.ACTION_CHOOSER);
chooserIntent.putExtra(Intent.EXTRA_INTENT, contentSelectionIntent);
chooserIntent.putExtra(Intent.EXTRA_TITLE, "Image Chooser");
chooserIntent.putExtra(Intent.EXTRA_INITIAL_INTENTS, intentArray);
startActivityForResult(chooserIntent, INPUT_FILE_REQUEST_CODE);
return true;
}
// openFileChooser for Android 3.0+
public void openFileChooser(ValueCallback<Uri> uploadMsg, String acceptType) {
mUploadMessage = uploadMsg;
// Create AndroidExampleFolder at sdcard
File imageStorageDir = new File(
Environment.getExternalStoragePublicDirectory(
Environment.DIRECTORY_PICTURES)
, "AndroidExampleFolder");
if (!imageStorageDir.exists()) {
// Create AndroidExampleFolder at sdcard
imageStorageDir.mkdirs();
}
// Create camera captured image file path and name
File file = new File(
imageStorageDir + File.separator + "IMG_"
+ String.valueOf(System.currentTimeMillis())
+ ".jpg");
mCapturedImageURI = Uri.fromFile(file);
// Camera capture image intent
final Intent captureIntent = new Intent(
android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
captureIntent.putExtra(MediaStore.EXTRA_OUTPUT, mCapturedImageURI);
Intent i = new Intent(Intent.ACTION_GET_CONTENT);
i.addCategory(Intent.CATEGORY_OPENABLE);
i.setType("image/*");
// Create file chooser intent
Intent chooserIntent = Intent.createChooser(i, "Image Chooser");
// Set camera intent to file chooser
chooserIntent.putExtra(Intent.EXTRA_INITIAL_INTENTS
, new Parcelable[] { captureIntent });
// On select image call onActivityResult method of activity
startActivityForResult(chooserIntent, FILECHOOSER_RESULTCODE);
}
// openFileChooser for Android < 3.0
public void openFileChooser(ValueCallback<Uri> uploadMsg) {
openFileChooser(uploadMsg, "");
}
//openFileChooser for other Android versions
public void openFileChooser(ValueCallback<Uri> uploadMsg,
String acceptType,
String capture) {
openFileChooser(uploadMsg, acceptType);
}
}
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
if (requestCode != INPUT_FILE_REQUEST_CODE || mFilePathCallback == null) {
super.onActivityResult(requestCode, resultCode, data);
return;
}
Uri[] results = null;
// Check that the response is a good one
if (resultCode == Activity.RESULT_OK) {
if (data == null) {
// If there is no data, then we may have taken a photo
if (mCameraPhotoPath != null) {
results = new Uri[]{Uri.parse(mCameraPhotoPath)};
}
} else {
String dataString = data.getDataString();
if (dataString != null) {
results = new Uri[]{Uri.parse(dataString)};
}
}
}
mFilePathCallback.onReceiveValue(results);
mFilePathCallback = null;
} else if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.KITKAT) {
if (requestCode != FILECHOOSER_RESULTCODE || mUploadMessage == null) {
super.onActivityResult(requestCode, resultCode, data);
return;
}
if (requestCode == FILECHOOSER_RESULTCODE) {
if (null == this.mUploadMessage) {
return;
}
Uri result = null;
try {
if (resultCode != RESULT_OK) {
result = null;
} else {
// retrieve from the private variable if the intent is null
result = data == null ? mCapturedImageURI : data.getData();
}
} catch (Exception e) {
Toast.makeText(getApplicationContext(), "activity :" + e,
Toast.LENGTH_LONG).show();
}
mUploadMessage.onReceiveValue(result);
mUploadMessage = null;
}
}
return;
}
@Override
public void onBackPressed()
{//if user presses the back button do this
if(webview.isFocused() && webview.canGoBack()) {//check if in webview and the user can go back
webview.goBack();//go back in webview
}else
{//do this if the webview cannot go back any further
new AlertDialog.Builder(this)//alert the person knowing they are about to close
.setTitle("EXIT")
.setMessage("Are you sure. You want to close this app?")
.setPositiveButton("Yes", new DialogInterface.OnClickListener() {
@Override public void onClick(DialogInterface dialog, int which) {
finish();
}
}).setNegativeButton("No", null).show();
}
}
}
class CheckNetwork {
private static final String TAG = CheckNetwork.class.getSimpleName();
public static boolean isInternetAvailable(Context context)
{
NetworkInfo info = (NetworkInfo) ((ConnectivityManager)
context.getSystemService(Context.CONNECTIVITY_SERVICE)).getActiveNetworkInfo();
if(info == null)
{
Log.d(TAG,"no internet connection");
return false;}
else{
if(info.isConnected()){
Log.d(TAG," internet connection available…");
return true;
}else{
Log.d(TAG," internet connection");
return true;
}
}
}
}
Here is my activity_main.xml code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity"
android:orientation="vertical">
<androidx.swiperefreshlayout.widget.SwipeRefreshLayout
android:id="@+id/swipeContainer"
android:layout_width="match_parent"
android:layout_height="match_parent">
<ProgressBar
android:id="@+id/progressBar1"
android:max="3"
android:progress="100"
style="?android:attr/progressBarStyleLarge"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_centerInParent="true" />
<WebView android:layout_width="match_parent"
android:layout_height="match_parent"
android:id="@+id/webView"
android:layout_alignParentTop="true"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true"
android:layout_alignParentBottom="true"
android:layout_alignParentRight="true"
android:layout_alignParentEnd="true"
tools:ignore="MissingConstraints"
android:layout_below="@+id/progressBar1"/>
</androidx.swiperefreshlayout.widget.SwipeRefreshLayout>
</LinearLayout> |
Website is not loading in Android WebView after the progress bar
I am plotting some lines with Seaborn:
```python
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
fig, ax = plt.subplots()
for label, df in dfs.items():
sns.lineplot(
data=df,
x="Time step",
y="Loss",
errorbar="sd",
label=label,
ax=ax,
)
ax.set(xscale='log', yscale='log')
```
The result looks like [this](https://i.stack.imgur.com/FanHR.png).
Note the clipped negative values in the `effector_final_velocity` curve, since the standard deviation of the loss between runs is larger than its mean, in this case.
However, if `ax.set(xscale='log', yscale='log')` is called *before* the looped calls to `sns.lineplot`, the result looks like [this](https://i.stack.imgur.com/JVGG4.png).
I'm not sure where the unclipped values are arising.
Looking at the source of `seaborn.relational`: at the end of `lineplot`, the `plot` method of a `_LinePlotter` instance is called. It plots the error bands by passing the already-computed standard deviation bounds to `ax.fill_between`.
Inspecting the values of these bounds right before they are passed to `ax.fill_between`, the negative values (which would be clipped) are still present. Thus I had assumed that the "unclipping" behaviour must be something matplotlib is doing during the call to `ax.fill_between`, since `_LinePlotter.plot` appears to do no other relevant post-transformations of any data before it returns, and `lineplot` returns immediately.
However, consider a small example that calls `fill_between` where some of the lower bounds are negative:
```python
import numpy as np
fig, ax = plt.subplots(1, 1, figsize=(5, 5))
np.random.seed(5678)
ax.fill_between(
np.arange(10),
np.random.random((10,)) - 0.2,
np.random.random((10,)) + 0.75,
)
ax.hlines(0, 0, 10, color='black', linestyle='--')
ax.set_yscale('log')
```
Then it makes no difference if `ax.set_yscale('log')` is called before `ax.fill_between`; in both cases the result is [this](https://i.stack.imgur.com/ctRUi.png).
I've spent some time searching for answers about this in the Seaborn and matplotlib documentation, and looked for answers on Stack Overflow and elsewhere, but I haven't found any information about what is going on here.
|
This happens because the community has not yet released a stable version of the `location` package that is compatible with your Kotlin version. As a temporary workaround, specify the `location` package in your `pubspec.yaml` like this (this is working for me).
I replaced
location: ^4.4.0
with the following in pubspec.yaml and ran flutter pub get:
location:
  git:
    url: https://github.com/781flyingdutchman/flutterlocation.git
    ref: 'V4'
    path: packages/location
I hope this is fixed in v4 by the package maintainer soon.
|
I was training a tensorflow model and using `ignore_class=0` to ignore the class 0 when computing the loss.
```
unet.compile(
loss=keras.losses.SparseCategoricalCrossentropy(
from_logits=True,
ignore_class=0,
),
optimizer=keras.optimizers.Adam(learning_rate=0.001),
metrics=["accuracy"],
)
```
This stopped working after I updated all my packages, including the Python version, TensorFlow, and Keras.
Running the model now raises the following error:
```
model.fit(
File "/user/anaconda3/envs/environment/lib/python3.12/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/user/anaconda3/envs/environment/lib/python3.12/site-packages/keras/src/backend/tensorflow/nn.py", line 623, in sparse_categorical_crossentropy
raise ValueError(
ValueError: Arguments `target` and `output` must have the same shape up until the last dimension: target.shape=(None, 224, 224, 1), output.shape=(None, 224, 224, 224, 6)
```
The training is successful when I comment out `ignore_class=0`. Any clue what's causing the extra 224 in output.shape?
```
unet = keras.Model(inputs=inputs, outputs=out)
```
This is what I get when I log the shapes of my inputs and outputs.
```
inputs <KerasTensor shape=(None, 224, 224, 3), dtype=float32, sparse=None, name=keras_tensor_429> (None, 224, 224, 3)
out <KerasTensor shape=(None, 224, 224, 6), dtype=float32, sparse=False, name=keras_tensor_459> (None, 224, 224, 6)
``` |
I have a Vite-based React library, currently structured like this:
```
import { Button, Typography, Box, Flex, Color, TypographyVariant } from '@placeholder-library';
```
I want to separate the imports to add submodules so that components and shared are imported from different paths:
```
import { Button, Typography, Box, Flex } from '@placeholder-library/components';
import { Color, TypographyVariant } from '@placeholder-library/shared';
```
**index.ts**
import './index.scss';
export * from './components';
export * from './shared';
**vite.config.ts:**
```
import react from '@vitejs/plugin-react';
import path from 'path';
import { defineConfig } from 'vite';
import dts from 'vite-plugin-dts';
import svgr from 'vite-plugin-svgr';
import tsconfigPaths from 'vite-tsconfig-paths';
import commonjs from 'vite-plugin-commonjs';
export default defineConfig({
resolve: {
alias: {
src: path.resolve(__dirname, './src'),
},
},
build: {
outDir: 'build',
lib: {
entry: './src/index.ts',
name: 'Placeholder Library',
fileName: 'index',
},
rollupOptions: {
external: ['react', 'react-dom'],
output: [
{
globals: {
react: 'React',
'react-dom': 'ReactDOM',
},
},
{
dir: 'build/cjs',
format: 'cjs',
globals: {
react: 'React',
'react-dom': 'ReactDOM',
},
},
{
dir: 'build/esm',
format: 'esm',
globals: {
react: 'React',
'react-dom': 'ReactDOM',
},
},
],
},
sourcemap: true,
emptyOutDir: true,
},
plugins: [
svgr(),
react(),
commonjs(),
tsconfigPaths(),
dts({
outDir: ['build/cjs', 'build/esm', 'build'],
include: ['./src/**/*'],
exclude: ['**/*.stories.*'],
}),
],
});
```
package.json:
```
{
"name": "@placeholder-library",
"version": "0.0.26",
"description": "Placeholder Library components library",
"license": "ISC",
"main": "build/cjs/index.js",
"module": "build/index.mjs",
"files": ["*"],
"scripts": {
"build": "tsc && vite build",
"build-storybook": "storybook build",
"build-storybook-docs": "storybook build --docs",
"dev": "vite",
"format": "prettier --write .",
"lint:fix": "eslint . --fix --ignore-path .gitignore",
"prepare": "husky install",
"preview": "vite preview",
"storybook": "storybook dev -p 6006",
"storybook-docs": "storybook dev --docs"
},
"dependencies": {
"..."
},
"devDependencies": {
"..."
},
"peerDependencies": {
"react": "^18.2.0"
}
}
```
my folder structure:
My folder structure
src
index.ts
components
index.ts
Button
index.ts
shared
hooks
index.ts
index.ts in components:
export * from './Button'
How can I configure Vite and my package structure to achieve this separation of components and shared/utils? Any advice or examples would be greatly appreciated.
I attempted to modify the **`vite.config.ts`** file to include separate entries for components and shared/utils, but I couldn't figure out how to properly configure the paths. I also tried to adjust the **`package.json`** file to specify different entry points for components and shared/utils, but I wasn't sure how to structure it correctly.
`import { Button, Typography, Box, Flex } from '@placeholder-library/components';`
`import { Color, TypographyVariant } from '@placeholder-library/shared';`
However, I couldn't find a clear example or documentation on how to set this up in a Vite-based React library. Any guidance or examples on how to achieve this would be greatly appreciated. |
I am working on a small project and I am trying to implement a shaking animation when the user clicks "don't know". Right now, when I click the button it shakes two times. It does that when I add the onRest function; when I remove it, it works great, but I need to use that button multiple times, so I need to update the state after the animation executes. How do I solve this?
animation configuration:
const { x } = useSpring({
from: { x: 0 },
to: { x: dontKnowClicked ? 1 : 0 },
reset: true,
onRest: () => {
if (dontKnowClicked) {
setDontKnowClicked(false);
}
},
config: { mass: 1, tension: 500, friction: 10 },
});
button controller:
const handleDontKnow = () => {
setDontKnowClicked(true);
};
animated element style (it is an input):
style={{
transform: x
.to({
range: [0, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 1],
output: [0, 20, -20, 20, -20, 20, -20, 0],
})
.to((x) => `translate3d(${x}px, 0px, 0px)`),
}} |
Why does the React Spring animation execute 2 times?
|reactjs|animation|react-spring| |
I'm deploying a zip package as an Azure function with the use of Terraform. The relevant code fragments below:
resource "azurerm_storage_account" "mtr_storage" {
  name                     = "mtrstorage${random_string.random_storage_account_suffix.result}"
  resource_group_name      = azurerm_resource_group.mtr_rg.name
  location                 = azurerm_resource_group.mtr_rg.location
  account_kind             = "BlobStorage"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  network_rules {
    default_action             = "Deny"
    ip_rules                   = ["127.0.0.1", "my.id.addr.here"]
    virtual_network_subnet_ids = [azurerm_subnet.mtr_subnet.id]
    bypass                     = ["AzureServices", "Metrics"]
  }

  tags = {
    environment = "local"
  }
}

resource "azurerm_storage_blob" "mtr_hello_function_blob" {
  name                   = "MTR.ListBlobsFunction.publish.zip"
  storage_account_name   = azurerm_storage_account.mtr_storage.name
  storage_container_name = azurerm_storage_container.mtr_hello_function_container.name
  type                   = "Block"
  source                 = "./example_code/MTR.ListBlobsFunction/MTR.ListBlobsFunction.publish.zip"
}

resource "azurerm_service_plan" "mtr_hello_function_svc_plan" {
  name                = "mtr-hello-function-svc-plan"
  location            = azurerm_resource_group.mtr_rg.location
  resource_group_name = azurerm_resource_group.mtr_rg.name
  os_type             = "Linux"
  sku_name            = "B1"

  tags = {
    environment = "local"
  }
}

data "azurerm_storage_account_blob_container_sas" "storage_account_blob_container_sas_for_hello" {
  connection_string = azurerm_storage_account.mtr_storage.primary_connection_string
  container_name    = azurerm_storage_container.mtr_hello_function_container.name
  start             = timeadd(timestamp(), "-5m")
  expiry            = timeadd(timestamp(), "5m")

  permissions {
    read   = true
    add    = false
    create = false
    write  = false
    delete = false
    list   = false
  }
}

resource "azurerm_linux_function_app" "mtr_hello_function" {
  name                       = "mtr-hello-function"
  location                   = azurerm_resource_group.mtr_rg.location
  resource_group_name        = azurerm_resource_group.mtr_rg.name
  service_plan_id            = azurerm_service_plan.mtr_hello_function_svc_plan.id
  storage_account_name       = azurerm_storage_account.mtr_storage.name
  storage_account_access_key = azurerm_storage_account.mtr_storage.primary_access_key

  app_settings = {
    "FUNCTIONS_WORKER_RUNTIME" = "dotnet"
    "WEBSITE_RUN_FROM_PACKAGE" = "https://${azurerm_storage_account.mtr_storage.name}.blob.core.windows.net/${azurerm_storage_container.mtr_hello_function_container.name}/${azurerm_storage_blob.mtr_hello_function_blob.name}${data.azurerm_storage_account_blob_container_sas.storage_account_blob_container_sas_for_hello.sas}"
    "AzureWebJobsStorage"         = azurerm_storage_account.mtr_storage.primary_connection_string
    "AzureWebJobsDisableHomepage" = "true"
  }

  site_config {
    always_on                              = true
    application_insights_connection_string = azurerm_application_insights.mtr_ai.connection_string

    application_stack {
      dotnet_version              = "8.0"
      use_dotnet_isolated_runtime = true
    }

    cors {
      allowed_origins = ["*"]
    }
  }

  tags = {
    environment = "local"
  }
}
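Two details in this configuration may be worth double-checking (both are assumptions on my part, not confirmed fixes). First, `use_dotnet_isolated_runtime = true` is combined with `FUNCTIONS_WORKER_RUNTIME = "dotnet"`, but the isolated worker model normally expects `dotnet-isolated`. Second, the SAS is generated with `expiry = timeadd(timestamp(), "5m")`, so the `WEBSITE_RUN_FROM_PACKAGE` URL stops working five minutes after `terraform apply`, and the host can no longer fetch the package when it restarts. A sketch of the adjusted fragments:

```hcl
# sketch (assumption): the isolated worker model expects "dotnet-isolated"
app_settings = {
  "FUNCTIONS_WORKER_RUNTIME" = "dotnet-isolated"
  # ... other settings unchanged ...
}

# sketch (assumption): let the SAS outlive the deployment so the host can
# re-fetch the package after a restart
expiry = timeadd(timestamp(), "8760h") # e.g. one year instead of five minutes
```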
The zip file is uploaded to the storage account, is downloadable, and has the correct structure (or so I think). The Function App is created as well; however, when I scroll down to see the functions, it tells me there was an error loading them: `Encountered an error (ServiceUnavailable) from host runtime.` It's probably some configuration error, but I just can't see it... |
I have made an AlertDialog, but it has a huge padding at the bottom that I didn't assign. Here is the code and an image of the dialog.
```python
add_input_popup = ft.AlertDialog(
    title=ft.Text(" Introduce los datos de la entrada: ", max_lines=1, size=18, color=ft.colors.BLACK, font_family="Poppins", weight=ft.FontWeight.W_700),
    content=ft.Column([
        inputField,
        ft.Row([
            popUpAddInput
        ]),
        ft.Container(
            content=ft.ElevatedButton(content=(ft.Text("Añadir", size=18, color=ft.colors.BLACK, font_family="Poppins", weight=ft.FontWeight.BOLD)), on_click=add_input_close, bgcolor="WHITE", style=ft.ButtonStyle(overlay_color="#98E9CB,0.75")),
            alignment=ft.alignment.center,
            margin=ft.margin.only(top=10),
        ),
    ]),
    adaptive=True,
    on_dismiss=lambda e: (add_input_close),
    bgcolor="#F8F8FA"
)
```
[enter image description here](https://i.stack.imgur.com/CGBVU.png) Look, I want to delete that bottom blank space. |
Is there an invisible bottom padding in flet alertDialog? |
|python|flet|flutter-alertdialog| |
null |
```
TITLE
——
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
total : 5 nuits
——
1 nuit
1 nuit
1 nuit
total : 3 nuits
——
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
and so on...
```
I have this paragraph, in which I'd like to select the last lines, the ones after the last occurrence of `——`.
It should match and group the 6 lines right after that last `——`. I've tried pretty much everything that crossed my mind so far, but I must be missing something here.
I tested `(?s:.*\s)\K——`, which is able to match the last `——` of the document, but I can't seem to select the lines after that match.
Thanks.
The point here is to count the lines after that match, so if I'm only able to select the "1" or "nuit", that's fine.
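Counting is straightforward once the tail is captured; a quick sketch in Python (the sample text here is abbreviated from the one above), using a lookahead to assert that no further `——` follows the matched one:

```python
import re

text = """TITLE
——
1 nuit
1 nuit
total : 2 nuits
——
1 nuit
1 nuit
1 nuit"""

# match the last "——" (no further "——" ahead of it) and capture the rest
m = re.search(r"——(?![\s\S]*——)\s*([\s\S]*)", text)
tail_lines = m.group(1).strip().splitlines()
print(len(tail_lines))  # 3 lines after the last "——"
```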
The expected result :
```
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
``` |
Select all lines after certain character in regex |
|regex| |
`product_id` was not added to the `PARTITION BY` clause, which as a result gives unexpected results: it fetches the previous sales from other products.
For this section of code, I am trying to randomly choose either theta or mu to be zero. When one variable is zero, then I need the other one uniformly randomized (and vice versa).
```
N = 10000
random = np.arccos(np.random.uniform(-1, 1, N))
zero = 0
choices = [random, zero]
theta = np.random.choice(choices)
if theta == random:
mu = zero
else:
mu = random
```
I know that `random` and `zero` do not have a homogeneous shape.
This is why I got the error:
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
However, I do not know how to fix this (I am still very new to programming). Any thoughts would be appreciated.
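One reading of the intent (an assumption on my part) is that the whole array should go to either `theta` or `mu`, with the other being all zeros. Drawing a single random bit avoids handing `np.random.choice` two objects of different shapes, which is what triggers the inhomogeneous-shape error:

```python
import numpy as np

N = 10000
angles = np.arccos(np.random.uniform(-1, 1, N))
zeros = np.zeros(N)

# flip a fair coin once instead of asking np.random.choice to pick
# between an array and a scalar
if np.random.rand() < 0.5:
    theta, mu = angles, zeros
else:
    theta, mu = zeros, angles
```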
|
I am new to WordPress and Azure. Two mail forms were set up in WordPress on an Azure App Service: one fails and one succeeds. "Your submission failed because of a server error" is the error message that appears.
I don't know how to test mail in WordPress. please help me to how to do this |
Invalid format for email address in WordPress on Azure app service |
|wordpress|azure|email|azure-web-app-service| |
null |
I can't interact with any element on this site. Could you help? Thank you very much in advance for your answers.
https://pttws.ptt.gov.tr/info_web/info_kayit.jsp
```python
driver = webdriver.Chrome()
driver.get("https://pttws.ptt.gov.tr/info_web/info_kayit.jsp")
inputs = driver.find_elements(By.TAG_NAME, "input")
for input_element in inputs:
print(input_element.get_attribute("id"))
```
The error output:
```
devTools listening on ws://127.0.0.1:52080/devtools/browser/196a44c6-8117-4520-bce9-a02c51c2a35f
Traceback (most recent call last):
File "C:\Users\r00t\Downloads\test_ptt.py", line 16, in <module>
inputs = driver.find_elements(By.TAG_NAME, "input")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_elements
return self.execute(Command.FIND_ELEMENTS, {"using": by, "value": value})["value"] or []
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``` |
I believe the function you are looking for is `itertools.product`:

```python
from itertools import product

lasts = ['x', 'y', 'z']
firsts = ['a', 'b', 'c']

for last, first in product(lasts, firsts):
    print(last, first)
```

Output:

```
x a
x b
x c
y a
y b
y c
z a
z b
z c
```

Another alternative that also produces an iterator is to use a nested comprehension:

```python
iPairs = ((l, f) for l in lasts for f in firsts)

for last, first in iPairs:
    print(last, first)
```

If you must use `zip()`, both inputs must be extended to the total length (the product of the lengths), with the first one repeating items and the second one repeating the list:

```python
iPairs = zip((l for l in lasts for _ in firsts),
             (f for _ in lasts for f in firsts))
```
|
I am having an issue. I just published my first project, and I'm now realising that if you add a "/" to the end of the URL, the page loads in a very different way that I can't even explain. With a trailing slash, some of the buttons of the website are not clickable, and if I do the same with ng serve the font changes, I can't click buttons, and I get a white border on the side. You can try it yourself on https://22ndspartans.de: the issue is only on the Discord page at https://22ndspartans.de/discord, and it appears with https://22ndspartans.de/discord/ (trailing slash).
I tried it on my localhost, and with other browsers and other devices. I found this issue when I set up my Google Search indexing: when I opened my Discord page directly from the search results, it didn't load any background image. At the beginning the issue was on the other pages as well. I'm absolutely confused.
Sorry for the English, I did my best to explain it.
[You can see the white border and font here, with the trailing slash][1] ---->
[How it should be (without the trailing slash)][2]
Edit: I found out that in the published version the anchor to the homepage doesn't work when there is a trailing slash on any other page. The rest seems to work. I am hosting with Cloudflare. Maybe this has something to do with Google Search as well, I don't know.
2nd Edit: Okay, I found out that the main problem is the trailing slash. So how do I tell my application to redirect to the URL without a trailing slash whenever one is requested?
I tried lazy loading as well, but it didn't work either:
[My Lazyloading][3]
[1]: https://i.stack.imgur.com/C9NOF.jpg
[2]: https://i.stack.imgur.com/58Pmk.jpg
[3]: https://i.stack.imgur.com/DDp0x.png |
I am working on building a 2D renderer with the Javascript canvas API, and I am trying to improve performance by skipping renders when no changes to the state of any renderable objects have occurred.
To this end, I want to build an extensible class (call it `Watchable`) that will detect changes to its state while allowing subclasses to remain agnostic to this state tracking. Client code will extend this class to create renderable objects, then register those objects with the renderer.
After some work, I arrived at a partial solution using proxies, and implemented a `makeWatchable` function as follows:
```
function makeWatchable(obj) {
// Local dirty flag in the proxy's closure, but protected from
// external access
let dirty = true;
return new Proxy(obj, {
// I need to define accessors and mutators for the dirty
// flag that can be called on the proxy.
get(target, property) {
// Define the markDirty() method
if(property === "markDirty") {
return () => {
dirty = true;
};
}
// Define the markClean() method
if(property === "markClean") {
return () => {
dirty = false;
};
}
// Define the isDirty() method
if(property === "isDirty") {
return () => {
return dirty;
};
}
return Reflect.get(target, property);
},
// A setter trap to set the dirty flag when a member
// variable is altered
set(target, property, value) {
// Adding or redefining functions does not constitute
// a state change
if(
(target[property] && typeof target[property] !== 'function')
||
typeof value !== 'function'
) {
if(target[property] !== value) dirty = true;
}
target[property] = value;
return true;
}
});
}
```
This function accepts an object and returns a proxy that traps the object's setters to set a dirty flag within the proxy's closure and defines accessors and mutators for the dirty flag.
I then use this function to create the `Watchable` base class as follows:
```
// A superclass for watchable objects
class Watchable {
constructor() {
// Simulates an abstract class
if (new.target === Watchable) {
throw new Error(
'Watchable is an abstract class and cannot be instantiated directly'
);
}
return makeWatchable(this);
}
}
```
`Watchable` can then be extended to provide state tracking to its subclasses.
Unfortunately, this approach seems to suffer from two significant limitations:
1. `Watchable`s will only detect shallow changes to their state.
2. `Watchable`s will only detect changes to their *public* state.
I am comfortable passing this first limitation on to client code to deal with. But I want to support tracking of any arbitrary private members included in a subclass of `Watchable` without requiring extra work from the subclass.
So, for example, I'd like to be able to define a subclass like this:
```
class SecretRenderable extends Watchable {
#secret;
constructor() {
super();
this.#secret = 'Shh!';
}
setSecret(newSecret) {
this.#secret = newSecret;
}
}
```
use it like this:
```
const obj = new SecretRenderable();
obj.markClean();
obj.setSecret('SHHHHHHHHH!!!!!');
```
and find that `obj.isDirty()` is true.
I would hope the line `this.#secret = newSecret` would be caught by the proxy set trap. But it looks like this line bypasses the set trap entirely.
Is there a way I can modify my proxy implementation to achieve private state change detection? If not, is there an alternative approach I should consider?
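For what it's worth, the bypass can be reproduced in isolation. In a derived class, field declarations run after `super()` returns, so the private field is installed on whatever `super()` returned (here, the proxy itself) as a per-object slot, and private reads/writes access that slot directly without consulting any trap. A minimal sketch:

```javascript
// minimal repro: a base class that returns a proxy from its constructor,
// and a derived class with a private field
let trapped = false;

class Base {
  constructor() {
    return new Proxy(this, {
      set(target, prop, value) {
        trapped = true;            // fires for ordinary property writes
        target[prop] = value;
        return true;
      }
    });
  }
}

class Derived extends Base {
  // installed on the value returned by super() (the proxy) as a private
  // slot that no trap can observe
  #secret;
  setSecret(v) { this.#secret = v; }  // direct slot write: bypasses the trap
  setPublic(v) { this.pub = v; }      // ordinary write: goes through the trap
}

const d = new Derived();
d.setPublic(1);
console.log(trapped);  // true
trapped = false;
d.setSecret('x');
console.log(trapped);  // false: the set trap never saw the private write
```

This suggests the limitation is in the language rather than in the proxy setup; workarounds that are sometimes suggested (assumptions, not tested here) include having mutators call `this.markDirty()` explicitly, or keeping private state in a WeakMap keyed by the proxy so writes go through trappable machinery.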
|
You can use the `::ng-deep` selector.

> The ::ng-deep CSS selector is often used in an Angular component's CSS to override the styles of a third-party component or a child component's styles

Something like this:

```css
::ng-deep mat-menu > .mat-mdc-menu-panel > .mat-mdc-menu-content > button {
    min-height: 24px !important;
}
```
|
The code below uses a **background callback** (dash) to retrieve a value.
Every 3 seconds, `update_event(event, shared_value)` sets an `event` and defines a `value`. In parallel, `listening_process(set_progress)` waits for the `event`.
This works nicely when the waiting time is moderate (say a few seconds). When the waiting time is below 1 second (say 500 ms), `listening_process(set_progress)` misses values.
Is there a way for the **background callback** to refresh faster?
It looks like the server updates every second or so, independently of the rate at which I set the `event`.
I could have used a `dcc.Interval` but wanted to avoid polling.
[![enter image description here][1]][1]
```
import time
from uuid import uuid4
import diskcache
import dash_bootstrap_components as dbc
from dash import Dash, html, DiskcacheManager, Input, Output
from multiprocessing import Event, Value, Process
from datetime import datetime
import random
# Background callbacks require a cache manager + a unique identifier
launch_uid = uuid4()
cache = diskcache.Cache("./cache")
background_callback_manager = DiskcacheManager(
cache, cache_by=[lambda: launch_uid], expire=60,
)
# Creating an event
event = Event()
# Creating a shared value
shared_value = Value('i', 0) # 'i' denotes an integer type
# Updating the event
# This will run in a different process using multiprocessing
def update_event(event, shared_value):
while True:
event.set()
with shared_value.get_lock(): # ensure safe access to shared value
shared_value.value = random.randint(1, 100) # generate a random integer between 1 and 100
print("Updating event...", datetime.now().time(), "Shared value:", shared_value.value)
time.sleep(3)
app = Dash(__name__, background_callback_manager=background_callback_manager)
app.layout = html.Div([
html.Button('Run Process', id='run-button'),
dbc.Row(children=[
dbc.Col(
children=[
# Component sets up the string with % progress
html.P(None, id="progress-component")
]
),
]
),
html.Div(id='output')
])
# Listening for the event and generating a random process
def listening_process(set_progress):
while True:
event.wait()
event.clear()
print("Receiving event...", datetime.now().time())
with shared_value.get_lock(): # ensure safe access to shared value
value = shared_value.value # read the shared value
set_progress(value)
@app.callback(
[
Output('run-button', 'style', {'display': 'none'}),
Output("progress-component", "children"),
],
Input('run-button', 'n_clicks'),
prevent_initial_call=True,
background=True,
running=[
(Output("run-button", "disabled"), True, False)],
progress=[
Output("progress-component", "children"),
],
)
def run_process(set_progress, n_clicks):
if n_clicks is None:
return False, None
elif n_clicks > 0:
p = Process(target=update_event, args=(event,shared_value))
p.start()
listening_process(set_progress)
return True, None
if __name__ == '__main__':
app.run(debug=True, port=8050)
```
[1]: https://i.stack.imgur.com/mHARa.gif |
unable to use ignore_class in SparseCategoricalCrossentropy |
|tensorflow|tf.keras| |
null |
Let's say that I am trying to fill a `MatrixXf` with the rolling mean of another `MatrixXf` like this:

```cpp
MatrixXf ComputeRollingMean(const MatrixXf& features) {
    MatrixXf df(features.rows() - 30, features.cols());
    for (size_t i = 30; i < features.rows(); ++i) {
        MatrixXf mat = features.block(i - 30, 0, 30, 3);
        RowVectorXf mean_vector = mat.colwise().mean();
        RowVectorXf stdev_vector = ((mat.rowwise() - mean_vector).colwise().squaredNorm() / (mat.rows() - 1)).cwiseSqrt();
        MatrixXf zscores = (mat.rowwise() - mean_vector).array().rowwise() / stdev_vector.array();
        RowVectorXf zscore = zscores(last, all);
        df.block(i - 30, 0, 1, 3) = zscore;
    }
    return df;
}
```

It seems that Eigen has some sort of parallelism support:

```cpp
#include <Eigen/Core>
```

and then

```cpp
int main() {
    Eigen::initParallel();
    ...
}
```

But I am not sure how this can parallelize the for loop in my `ComputeRollingMean` function. Does anyone know how I could parallelize it, and maybe choose the number of threads to start?
|
The aim is to use the API to request the median tenure for the companies in a list.
If there is no official way, then a bootstrapped approach would be to use Selenium: search each name, click the profile, and go to the insights tab. Is this possible with Selenium?
Linkedin API for median tenure |
|python|selenium-webdriver|linkedin-api| |
The OP's problem qualifies perfectly for a solution of mainly 2 combined techniques ...
1) the [`map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) based creation of a list of [async function/s (expressions)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/async_function), each function representing a delayed broadcast task.
2) the creation of an [async generator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/AsyncGenerator) via an [async generator-function (expression)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/async_function*), where the latter consumes / works upon the created list of delayed tasks, and where the async generator itself will be iterated via the [`for await...of` statement](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of).
In addition one needs to write kind of a `wait` function which can be achieved easily via an async function which returns a [`Promise`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/Promise) instance, where the latter resolves the promise via [`setTimeout`](https://developer.mozilla.org/en-US/docs/Web/API/setTimeout) and a customizable delay value.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const queueData =
["Sample Data 1", "Sample Data 2", "Sample Data 3"];
// create a list of async function based "delayed tasks".
const delayedTasks = queueData
.map(data => async () => {
wss.broadcast(
JSON.stringify({ data })
);
await wait(1500);
return `successful broadcast of "${ data }"`;
});
// create an async generator from the "delayed tasks".
const scheduledTasksPool = (async function* (taskList) {
let task;
while (task = taskList.shift()) {
yield await task();
}
})(delayedTasks);
// utilize the async generator of "delayed tasks".
(async () => {
for await (const result of scheduledTasksPool) {
console.log({ result });
}
})();
<!-- language: lang-css -->
.as-console-wrapper { min-height: 100%!important; top: 0; }
<!-- language: lang-html -->
<script>
const wss = {
broadcast(payload) {
console.log('broadcast of payload ...', payload);
},
};
async function wait(timeInMsec = 1_000) {
return new Promise(resolve =>
setTimeout(resolve, Math.max(0, Math.min(timeInMsec, 20_000)))
);
}
</script>
<!-- end snippet -->
And since the approach is two folded, one even can customize each delay in between two tasks ... one just slightly has to change the format of the to be queued data and the task generating mapper functionality (2 lines of code are effected) ...
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
// changed format.
const queueData = [
{ data: "Sample Data 1", delay: 1000 },
{ data: "Sample Data 2", delay: 3000 },
{ data: "Sample Data 3", delay: 2000 },
{ data: "Sample Data 4" },
];
// create a list of async function based "delayed tasks".
const delayedTasks = queueData
.map(({ data, delay = 0 }) => async () => { // changed argument.
wss.broadcast(
JSON.stringify({ data })
);
await wait(delay); // changed ... custom delay.
return `successful broadcast of "${ data }"`;
});
// create an async generator from the "delayed tasks".
const scheduledTasksPool = (async function* (taskList) {
let task;
while (task = taskList.shift()) {
yield await task();
}
})(delayedTasks);
// utilize the async generator of "delayed tasks".
(async () => {
for await (const result of scheduledTasksPool) {
console.log({ result });
}
})();
<!-- language: lang-css -->
.as-console-wrapper { min-height: 100%!important; top: 0; }
<!-- language: lang-html -->
<script>
const wss = {
broadcast(payload) {
console.log('broadcast of payload ...', payload);
},
};
async function wait(timeInMsec = 1_000) {
return new Promise(resolve =>
setTimeout(resolve, Math.max(0, Math.min(timeInMsec, 20_000)))
);
}
</script>
<!-- end snippet -->
|
If you want to match bot, but not google bot:

```
^$|(?<!\bgoogle)bot|crawl|spider
```

[Regex demo](https://regex101.com/r/jD7hz8/1)

Or you could group the alternatives in a non-capture group and surround that group with word boundaries to prevent partial matches for all alternatives:

```
^$|\b(?:bot|crawl|spider)\b
```

[Regex demo](https://regex101.com/r/8DhxC7/1)
|
When I use pgsodium with a unique key_id for each row, encryption works well when I add a new record, and I can see the decrypted value in the associated view:
```
CREATE EXTENSION IF NOT EXISTS pgsodium;
```
```
create table if not exists "schema"."secret" (
"id" uuid DEFAULT uuid_generate_v4() NOT NULL PRIMARY KEY,
"createdAt" timestamp with time zone DEFAULT "now"() NOT NULL,
"updatedAt" timestamp with time zone DEFAULT "now"() NOT NULL,
"lastUsed" timestamp with time zone,
"name" text DEFAULT ''::text not null,
"value" text DEFAULT ''::text not null,
"key_id" uuid NOT NULL references pgsodium.key(id) default (pgsodium.create_key()).id,
"nonce" bytea DEFAULT pgsodium.crypto_aead_det_noncegen(),
"userId" uuid DEFAULT "auth"."uid"() references "auth"."users"("id")
);
```
```
SECURITY LABEL FOR pgsodium
ON COLUMN "schema"."secret"."value"
IS 'ENCRYPT WITH KEY COLUMN key_id ASSOCIATED (userId) NONCE nonce';
```
However, once I update a row in the 'secret' table, a NEW key is created in the pgsodium schema to encrypt the value, which is not subsequently updated in the key_id column. So after updating a record, I can no longer access its decrypted value from the associated view.
I tried all the different ways to assign keys (per column, per row, with and without nonce) but it always assigns a fresh key to encrypt the updated row.
Why is this happening? I thought pgsodium was supposed to use the specified key_id.
I think it is a bug in some predefined ON UPDATE trigger, but I'm at a loss as to where to find / override it.
I'm using .NET 8 and Docker Desktop with Linux containers.
I'm making a request from an API running in Docker to another API and it's not working; if I call an external API with a real domain, for example https://somedomian/api, it works fine.
I get this error (localhost to localhost with HttpClient in a Docker container):
```
The request was canceled due to the configured HttpClient.Timeout of 30 seconds elapsing
```
Docker file:
```
#See https://aka.ms/customizecontainer to learn how to customize your debug container and how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM comp-artifactory.bb.comp.gov.com/docker/dotnet/aspnet:8.0 AS base
USER app
WORKDIR /app
EXPOSE 80
EXPOSE 443
#RUN dotnet dev-certs https --trust
FROM comp-artifactory.bb.comp.gov.com/docker/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["netTestAApi/netTestAApi.csproj", "netTestAApi/"]
RUN dotnet nuget add source http://comp-artifactory.bb.comp.gov.com:8082/artifactory/api/nuget/v3/nuget -n artifactory
RUN dotnet nuget disable source nuget.org
RUN dotnet restore "./netTestAApi/./netTestAApi.csproj"
COPY . .
WORKDIR "/src/netTestAApi"
RUN dotnet build "./netTestAApi.csproj" -c $BUILD_CONFIGURATION -o /app/build
FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./netTestAApi.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false
COPY ./netTestAApi/crt/1.crt /usr/local/share/ca-certificates
COPY ./netTestAApi/crt/2.crt /usr/local/share/ca-certificates
RUN update-ca-certificates
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
COPY --from=build-env /app/out/netTestAApi.SwaggerExtensions.xml ./
ENTRYPOINT ["dotnet", "netTestAApi.dll"]
```
Thanks a lot for your help.
|
GitLab issue filtering seems to default to AND filtering when you have multiple criteria against the same filter object (e.g. label). For instance, if you have labels `vehicle::car`, `vehicle::truck`, `vehicle::motorcycle`, and you want to see `vehicle::car` OR `vehicle::truck`, putting `label=vehicle::car` `label=vehicle::truck` will show nothing.
Is there any way to get around this?
Thanks |
I have two Excel spreadsheets. The first contains a lot of HEX-encoded data (e.g. 73F57D5AA922166CA546318DA9B6E0EF); the second contains a complex AES encryption solution (https://www.nayuki.io/page/aes-cipher-internals-in-excel) with input slightly modified by me (no need to update the ciphertext table; the table is updated from a string, as an alternative to the original).
The problem is how to mass-process this with a simple drag in Excel. Moving the decoded HEX data to the second spreadsheet is no problem, but how do I connect them and process in bulk?
The best approach I can imagine is to create a new function based on the complex AES solution from the second workbook and use that function with the HEX-decoded data as the input argument, returning the encoded HEX string as the result. But how do I do that?
[Here is a graphic of how I encode strings now, with manual copy and paste between workbooks](https://i.stack.imgur.com/9Co3o.png)
I searched for and tried other Excel AES encryption functions, but everything I found encrypts plain strings, not HEX strings (I cannot adapt those functions to my data). My data is HEX in string form, and the mentioned Excel solution encrypts it correctly. |
Excel - create new definition of function based on results form other cell with complex formula |
|java|android|webview|progress-bar|mobile-development| |
In React 17 they tried to hide the fact that they were doing a double render in strict mode by temporarily overwriting the `console.log` function to be a no-op during that second render. So it is rendering twice, you just can't log it out.
In the following code I have added a variable for counting the number of renders. You'll notice that on every click, the count increases by 2.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<script
src="https://unpkg.com/react@17/umd/react.development.js"
crossorigin
></script>
<script
src="https://unpkg.com/react-dom@17/umd/react-dom.development.js"
crossorigin
></script>
<script src="https://unpkg.com/babel-standalone@6/babel.min.js"></script>
<script type="text/babel">
console.log(React.version, ReactDOM.version);
let renderCount = 0;
class App extends React.Component {
state = {
count: 0,
};
render() {
renderCount++;
console.log("render", renderCount);
return (
<div>
<button
onClick={() => {
this.setState({
count: this.state.count + 1,
});
}}
>
click
</button>
<div>count {this.state.count}</div>
</div>
);
}
}
ReactDOM.render(
<React.StrictMode>
<App />
</React.StrictMode>,
document.getElementById("root")
);
</script>
</head>
<body>
<div id="root"></div>
</body>
</html>
<!-- end snippet --> |
I am developing a perfume review application using a React frontend and an Express/SQLite backend. My goal is to transition from using static data within a React component to fetching this data dynamically from my backend.
**Project Architecture:**
**Backend:**
- **Models:** **`Parfum.js`** - Defines the perfume model.
- **Controllers:** **`ParfumController.js`** - Contains logic to interact with the database.
- **Database Management:** **`database.js`** - Manages SQLite database operations.
- **API Setup:** **`app.js`** - Configures Express routes for the API.
**Frontend (newfrontend directory):**
- **React Component:** **`MainWindow.js`** - Displays perfume information.
- **Styling:** **`MainWindow.css`**
**Database:**
- **SQLite Database:** **`database.db`** - Stores perfume data.
The backend provides various endpoints, like **`/api/perfume-name/:id`**, for fetching perfume names.
**Issue:**
I need to fetch a list of perfumes from the backend dynamically instead of using static data within **`MainWindow.js`**.
**Questions:**
1. How can I structure my API call within **`MainWindow.js`** to fetch and display perfumes?
2. Based on the provided database schema and Express routes, how can I update the React component's state with the fetched data?
3. What are the best practices for error handling in this scenario, especially for failed API calls or when no data is returned?
**Attempts:**
- Used axios in **`MainWindow.js`** with **`useEffect`** to fetch data on component mount.
- Tested with hardcoded API endpoint URLs, which works fine with static data. Now seeking to implement dynamic data fetching.
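To make the attempt concrete, here is a minimal sketch of the fetch-on-mount pattern (the endpoint path `/api/perfumes` and the setter names are assumptions; the loading logic is pulled out of the component so the error paths are easy to exercise):

```javascript
// sketch: data loading separated from React so it can be tested on its own;
// fetchImpl is injected (window.fetch in the app, a stub in tests)
async function loadPerfumes(fetchImpl, url) {
  const res = await fetchImpl(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`); // failed API call
  const data = await res.json();
  if (!Array.isArray(data)) throw new Error('expected an array of perfumes');
  return data; // may be empty: the component can render a placeholder
}

// assumed usage inside MainWindow.js:
// useEffect(() => {
//   loadPerfumes(fetch, 'http://localhost:3001/api/perfumes')
//     .then(setPerfumes)
//     .catch(err => setError(err.message));
// }, []);
```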
**Challenges:**
- Encountering CORS errors and issues updating the component's state with fetched data.
- Unsure how to handle the asynchronous nature of API calls within React effectively.
**Code Snippets:**
database.js:
```
class Database {
// Method to initialize database and tables
_initializeDatabase() {
// SQL table creation scripts
}
_Add_Perfume(Perfume) {
// Add attributes of a Perfume object which contains scraped data in a Perfume Object created in the Model and coming from the controller
}
// Example getter
getPerfumeName(perfumeId) {
// Implementation
}
// Additional getters and setters...
}
```
app.js backend setup:
```
import express from 'express';
import cors from 'cors';
import { Database } from './database.js';
const app = express();
app.use(cors());
const port = 3001;
app.get('/api/perfume-name/:id', (req, res) => {
// Endpoint implementation
});
// More routes...
```
**Desired Outcome:**
I'm looking for guidance on fetching data from my Express backend into the MainWindow.js React component to replace the static data setup. Specifically, how to dynamically render this data in the component.
**Current Static Setup in React (MainWindow.js):**
Below is a snippet from my MainWindow.js component, showing how perfume data is currently hardcoded within the component's state. I aim to replace this static data setup with dynamic data fetched from my backend.
```
import React, { useState } from 'react';
import './MainWindow.css';
const MainWindow = () => {
const [perfumes, setPerfumes] = useState([
{
id: 1,
name: 'Sauvage',
brand: 'Dior',
imageUrl: 'https://example.com/sauvage.jpg',
genre: 'Men',
// Additional perfume details...
},
{
id: 2,
name: 'Chanel No 5',
brand: 'Chanel',
imageUrl: 'https://example.com/chanel-no-5.jpg',
genre: 'Women',
// Additional perfume details...
},
// Additional perfumes...
]);
// Component rendering logic...
return (
<div className="perfume-container">
{perfumes.map(perfume => (
<div key={perfume.id} className="perfume-card">
<img src={perfume.imageUrl} alt={perfume.name} />
<h2>{perfume.name}</h2>
<p>Brand: {perfume.brand}</p>
{/* More perfume details */}
</div>
))}
</div>
);
};
export default MainWindow;
``` |
I have a multi-threaded implementation of Mandelbrot. Instead of scanning X and Y in nested loops, I scan pixels from 0 to width*height. The Mandelbrot worker thread then calculates the logical and real coordinates of the pixel. Here is a subset of the code...
```cpp
// Worker thread for processing the Mandelbrot algorithm
DWORD WINAPI MandelbrotWorkerThread(LPVOID lpParam)
{
    // This is a copy of the structure from the paint procedure.
    // The address of this structure is passed with lParam.
    typedef struct ThreadProcParameters
    {
        int       StartPixel;
        int       EndPixel;
        int       yMaxPixel;
        int       xMaxPixel;
        uint32_t* BitmapData;
        double    dxMin;
        double    dxMax;
        double    dyMin;
        double    dyMax;
    } THREADPROCPARAMETERS, *PTHREADPROCPARAMETERS;

    PTHREADPROCPARAMETERS P = (PTHREADPROCPARAMETERS)lpParam;

    // Algorithm obtained from https://en.wikipedia.org/wiki/Mandelbrot_set.
    double x0, y0, x, y, xtemp;
    int iteration;

    // Loop for each pixel in the slice.
    for (int Pixel = P->StartPixel; Pixel < P->EndPixel; ++Pixel)
    {
        // Calculate the x and y coordinates of the pixel.
        int xPixel = Pixel % P->xMaxPixel;
        int yPixel = Pixel / P->xMaxPixel;

        // Calculate the real and imaginary coordinates of the point.
        x0 = (P->dxMax - P->dxMin) / P->xMaxPixel * xPixel + P->dxMin;
        y0 = (P->dyMax - P->dyMin) / P->yMaxPixel * yPixel + P->dyMin;

        // Initial values.
        x = 0.0;
        y = 0.0;
        iteration = 0;

        // Main Mandelbrot algorithm. Determine the number of iterations
        // that it takes each point to escape the distance of 2. The black
        // areas of the image represent the points that never escape. This
        // algorithm is supposed to be using complex arithmetic, but this
        // is a simplified separation of the real and imaginary parts of
        // the point's coordinate. This algorithm is described as the
        // naive "escape time algorithm" in the Wikipedia article noted.
        while (x * x + y * y <= 2.0 * 2.0 && iteration < max_iterations)
        {
            xtemp = x * x - y * y + x0;
            y = 2 * x * y + y0;
            x = xtemp;
            ++iteration;
        }

        // When we get here, we have a pixel and an iteration count.
        // Lookup the color in the spectrum of all colors and set the
        // pixel to that color. Note that we are only ever using 1000
        // of the 16777215 possible colors. Changing max_iterations uses
        // a different palette, but 1000 seems to be the best choice.
        // Note also that this bitmap is shared by all 64 threads, but
        // there is no concurrency conflict as each thread is assigned
        // a different region of the bitmap. The user has the option of
        // using the original RGB or the new and improved Log HSV system.
        if (!bUseHSV)
        {
            // The old RGB system.
            P->BitmapData[Pixel] = ReverseRGBBytes
                ((COLORREF)(-16777215.0 / max_iterations * iteration + 16777215.0));
        }
        else
        {
            // The new HSV system.
            sRGB rgb;
            sHSV hsv;
            hsv = mandelbrotHSV(iteration, max_iterations);
            rgb = hsv2rgb(hsv);
            P->BitmapData[Pixel] =
                (((int)(rgb.r * 255))) +
                (((int)(rgb.g * 255)) << 8) +
                (((int)(rgb.b * 255)) << 16);
        }
    }

    // End of thread execution. The return value is available
    // to the invoking thread, but we don't presently use it.
    return 0;
}
```
And this is the thread dispatcher...
```cpp
// Parameters for each thread.
typedef struct ThreadProcParameters
{
    int       StartPixel;
    int       EndPixel;
    int       yMaxPixel;
    int       xMaxPixel;
    uint32_t* BitmapData;
    double    dxMin;
    double    dxMax;
    double    dyMin;
    double    dyMax;
} THREADPROCPARAMETERS, *PTHREADPROCPARAMETERS;

// Allocate per thread parameter and handle arrays.
PTHREADPROCPARAMETERS* pThreadProcParameters = new PTHREADPROCPARAMETERS[Slices];
HANDLE* phThreadArray = new HANDLE[Slices];

// MaxPixel is the total pixel count among all threads.
int MaxPixel = (rect.bottom - tm.tmHeight) * rect.right;
int StartPixel, EndPixel, Slice;

// Main thread dispatch loop. Walk the start and end pixel indices.
for (StartPixel = 0, EndPixel = PixelStepSize, Slice = 0;
     (EndPixel <= MaxPixel) && (Slice < Slices);
     StartPixel += PixelStepSize, EndPixel = min(EndPixel + PixelStepSize, MaxPixel), ++Slice)
{
    // Allocate the parameter structure for this thread.
    pThreadProcParameters[Slice] =
        (PTHREADPROCPARAMETERS)HeapAlloc(GetProcessHeap(),
            HEAP_ZERO_MEMORY, sizeof(THREADPROCPARAMETERS));
    if (pThreadProcParameters[Slice] == NULL) ExitProcess(2);

    // Initialize the parameters for this thread.
    pThreadProcParameters[Slice]->StartPixel = StartPixel;
    pThreadProcParameters[Slice]->EndPixel   = EndPixel;
    pThreadProcParameters[Slice]->yMaxPixel  = rect.bottom - tm.tmHeight; // Leave room for the status bar.
    pThreadProcParameters[Slice]->xMaxPixel  = rect.right;
    pThreadProcParameters[Slice]->BitmapData = BitmapData; // Bitmap is shared among all threads.
    pThreadProcParameters[Slice]->dxMin = dxMin;
    pThreadProcParameters[Slice]->dxMax = dxMax;
    pThreadProcParameters[Slice]->dyMin = dyMin;
    pThreadProcParameters[Slice]->dyMax = dyMax;

    // Create and launch this thread.
    phThreadArray[Slice] = CreateThread
        (NULL, 0, MandelbrotWorkerThread, pThreadProcParameters[Slice], 0, NULL);
    if (phThreadArray[Slice] == NULL)
    {
        ErrorHandler((LPTSTR)_T("CreateThread"));
        ExitProcess(3);
    }
} // End of main thread dispatch loop.

// Wait for all threads to terminate.
WaitForMultipleObjects(Slices, phThreadArray, TRUE, INFINITE);
```
This can be broken up into as many threads as desired. My program, for instance, allows between 1 and 64 threads. For the full program, see https://github.com/alexsokolek2/mandelbrot. Good luck. |
I have airflow and spark on different hosts. I am trying to submit it, but I got the following error:
```lang-none
{standard_task_runner.py:107} ERROR - Failed to execute job 223 for task spark_job (Cannot execute: spark-submit --master spark://spark-host --proxy-user hdfs --name arrow-spark --queue default --deploy-mode client /opt/airflow/dags/plugins/plug_code.py. Error code is: 1.; 27605)
```
Spark-Submit cmd:
```sh
spark-submit --master spark://spark-host --name arrow-spark --queue default --deploy-mode client /opt/airflow/dags/plugins/plug_code.py
```
I also tried changing the proxy-user name, but it made no difference.
null |
The input is:

I need the output as:

Is this possible with Excel alone, or is a Python script required?
If a script is needed, can you help with one?
Thanks.

My script so far. My logic is to read each df column, compare each value against 1 to 20, and write the value out if it matches, or a blank otherwise:
```python
import pandas as pd

df = pd.read_excel(r"C:\Users\my\scripts\test-file.xlsx")
print(df)

for column in df.columns[0:]:
    print(df[column])
```
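For what it's worth, my understanding of the desired transform is: place each value v of a column at row v of a 1-20 grid, leaving blanks elsewhere. Here is a sketch of that using `reindex`; the column names and sample values are made up, since I can't read the screenshots:

```python
import pandas as pd

# Made-up sample standing in for the real spreadsheet columns.
df = pd.DataFrame({"A": [1, 3, 20], "B": [2, 3, 7]})

# For each column, index the values by themselves, then reindex onto the
# full 1..20 range; missing positions become NaN (blank cells in Excel).
aligned = pd.DataFrame({
    col: pd.Series(df[col].values, index=df[col].values).reindex(range(1, 21))
    for col in df.columns
})

# aligned.to_excel("out.xlsx")  # write the result back out if needed
```

This assumes each column contains no duplicate values (reindex requires a unique index).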
I am using node server for the backend. Connection to Cassandra is done using cassandra-driver nodejs.
Connection is done as follows:
```js
const client = new cassandra.Client({
  contactPoints: ['h1', 'h2'],
  localDataCenter: 'datacenter1',
  keyspace: 'ks1'
});
```
1. In `contactPoints`, do I just need to add 'seed' nodes, or can I add any nodes from the datacenter?
2. Do I need to run separate backend service for each datacenter? Or is there a way to connect multiple datacenter from the same nodejs backend service?
3. Is there a recommended way of setting up the backend server so that bandwidth between the Cassandra nodes and the backend server is minimized? Should the backend server run on the same machine as one of the Cassandra nodes, so that data won't need to travel between machines? Or is it fine if the backend server runs on a completely separate machine? For example, if AWS EC2 is used, data transfer charges might increase due to traffic between the Cassandra nodes and the backend server.
|
I am trying to create a structure `Student` which contains `q` substructures of type `Course`. Each `Course` is a structure with `credit` and `point` `int` values.
How do I set up my `Student` structure to hold an integer number of `Course` structures within it? Thanks
```
struct Student{
    int q;
    Course course[q];
};

struct Course{
    int credit;
    int point;
};
```
I tried this but VSC is telling me it is wrong.
Edit: I tried this instead and got the errors below:
```
typedef struct Student{
    int q;
    vector<Course> courses;
};

struct Course{
    string name;
    int credit;
    int point;
};
```
Error text:
```
'Course': undeclared identifier cpp(C2065)
'std::vector': 'Course' is not a valid template type argument for parameter '_Ty' cpp(C2923)
'std::vector': too few template arguments cpp(C2976)
class std::vector<Course>
```
I am using a Livewire component, and in the component's blade file I am using this script:
```
@assets
<script src="https://cdn.jsdelivr.net/npm/apexcharts"></script>
@endassets
@script
<script>
var options = {
series: [],
legend: {
show: false
},
chart: {
height: '650px',
type: 'treemap'
},
dataLabels: {
enabled: true,
style: {
fontSize: '12px',
},
formatter: function(text, op) {
return [text, op.value]
},
offsetY: -4
},
plotOptions: {
treemap: {
enableShades: true,
shadeIntensity: 0.5,
reverseNegativeShade: true,
colorScale: {
ranges: [{
from: -6,
to: 0,
color: '#CD363A'
},
{
from: 0.001,
to: 6,
color: '#52B12C'
}
]
}
}
}
};
var chart = new ApexCharts(document.querySelector("#chart"), options);
chart.render();
$wire.on('dataUpdated', (data) => {
const dataArray = data[0];
// Map the data to format it properly
const mappedData = dataArray.map(item => ({
x: item.asset_symbol,
y: item.pchange
}));
mappedData.sort((a, b) => b.y - a.y);
chart.updateSeries([{
data: mappedData
}]);
console.log(dataArray);
console.log('updated');
});
</script>
@endscript
```
But when the page loads, it throws an error
```
livewire.js?id=5d8beb2e:1243 **Uncaught SyntaxError: Unexpected token 'var'**
at new AsyncFunction (<anonymous>)
at safeAsyncFunction (livewire.js?id=5d8beb2e:1243:21)
at generateFunctionFromString (livewire.js?id=5d8beb2e:1253:16)
at generateEvaluatorFromString (livewire.js?id=5d8beb2e:1258:16)
at normalEvaluator (livewire.js?id=5d8beb2e:1223:111)
at evaluateLater (livewire.js?id=5d8beb2e:1213:12)
at Object.evaluate (livewire.js?id=5d8beb2e:1209:5)
at livewire.js?id=5d8beb2e:8592:28
at Object.dontAutoEvaluateFunctions (livewire.js?id=5d8beb2e:1203:18)
at livewire.js?id=5d8beb2e:8591:26
```
If I move `var options ...` after the `$wire.on` call, then it loads correctly.
Why does the component script not recognise `var` or `const`?
|excel| |
null |
null |
null |
null |
null |
null |
null |
python encoding issue only in cloud but not on-prem with python version on 2.7.5 on-prem and 2.7.18 on cloud |
Tried all of the above, but unfortunately to no avail. I think my initial nginx set-up was faulty, although I could not figure out why. I eventually ended up freshly installing nginx, which went well. For the reinstall, this helped me: https://askubuntu.com/questions/361902/how-to-install-nginx-after-removed-it-manually
I then followed the previous steps and it started working.
With pure JS, you can do something like the below.
The basic concepts are these: First, your data is in XML format, so you should parse it with an XML parser; that's the job of [DOMParser()][1]. Next, to query the XML data you should use XPath, incorporated in JS in the form of [the document.evaluate() method][2]. Finally, you convert the search results into HTML using [template literals][3] and insert them into the table using the [insertAdjacentHTML() method][4].
As I mentioned, I changed the structure of the table a little so it makes more sense (to me), but you can obviously play with it and use another format.
So all together:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
xml = `<?xml version="1.0" encoding="UTF-8"?>
<enrollment>
  <student>
    <stud_no>0001</stud_no>
    <stud_name>May</stud_name>
    <enrollment_data>
      <subj_code>IPT</subj_code>
      <subj_desc>Integrative Programming and Technologies</subj_desc>
      <subj_units>3</subj_units>
      <professor>Jes</professor>
    </enrollment_data>
    <enrollment_data>
      <subj_code>WD</subj_code>
      <subj_desc>Web Devt</subj_desc>
      <subj_units>3</subj_units>
      <professor>Mark</professor>
    </enrollment_data>
  </student>
  <student>
    <stud_no>0002</stud_no>
    <stud_name>June</stud_name>
    <enrollment_data>
      <subj_code>IPT</subj_code>
      <subj_desc>Integrative Programming and Technologies</subj_desc>
      <subj_units>3</subj_units>
      <professor>Jes</professor>
    </enrollment_data>
  </student>
</enrollment>
`;

const docEval = (doc, expr, context) =>
  doc.evaluate(expr, context, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);

domdoc = new DOMParser().parseFromString(xml, "text/xml");
dest = document.querySelector("#theTable");

stus = docEval(domdoc, './/student', domdoc);
for (let i = 0; i < stus.snapshotLength; i++) {
  let student = stus.snapshotItem(i);
  studNo = docEval(domdoc, ".//stud_no/text()", student);
  studName = docEval(domdoc, ".//stud_name/text()", student);
  row = `<tr style="background-color:powderblue;">
      <td>${studNo.snapshotItem(0).textContent}</td>
      <td>${studName.snapshotItem(0).textContent}</td>
    </tr>`;
  dest.insertAdjacentHTML("beforeend", row);

  infos = docEval(domdoc, ".//enrollment_data", student);
  for (let i = 0; i < infos.snapshotLength; i++) {
    let entry = infos.snapshotItem(i);
    subjCode = docEval(domdoc, ".//subj_code/text()", entry);
    subjDesc = docEval(domdoc, ".//subj_desc/text()", entry);
    subjUnit = docEval(domdoc, ".//subj_units/text()", entry);
    prof = docEval(domdoc, ".//professor/text()", entry);
    row2 = `<tr style="background-color:yellow;">
        <td>${subjCode.snapshotItem(0).textContent}</td>
        <td>${subjDesc.snapshotItem(0).textContent}</td>
        <td>${subjUnit.snapshotItem(0).textContent}</td>
        <td>${prof.snapshotItem(0).textContent}</td>
      </tr>`;
    dest.insertAdjacentHTML("beforeend", row2);
  }
}
<!-- language: lang-html -->
<html lang="en-US">
<link rel="stylesheet" href="https://edwardtufte.github.io/tufte-css/tufte.css" type="text/css">
<body>
  <table id='theTable' border='1'>
    <tr>
      <td>id&CourseCode</td>
      <td>Name&Subject</td>
      <td>Units</td>
      <td>Prof</td>
    </tr>
  </table>
</body>
<!-- end snippet -->
[1]: https://developer.mozilla.org/en-US/docs/Web/API/DOMParser
[2]: https://developer.mozilla.org/en-US/docs/Web/API/Document/evaluate
[3]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals
[4]: https://developer.mozilla.org/en-US/docs/Web/API/Element/insertAdjacentHTML |
I have a WinForms project that uses GDI+ and DirectX to render some data (2D). I want to add an additional renderer via Silk.NET (interested in OpenGL/Vulkan).
Does anyone have ideas on how to host a Silk.NET surface on a WinForms control? I couldn't find information in tutorials or git projects.
In Android Compose, how can I make the TopBar disappear when I scroll the LazyColumn up?
``` kotlin
Scaffold(
topBar = {
PostTopBar()
}
){ innerPadding ->
CompositionLocalProvider(
LocalOverscrollConfiguration provides null
) {
LazyColumn(
horizontalAlignment = Alignment.CenterHorizontally,
modifier = Modifier
.padding(innerPadding)
.fillMaxWidth()
) {
items(postData.size) {
Column {
Post(postData = postData[it], index = it)
}
}
}
}
}
``` |
I am using Spring Boot 2.7 and recently upgraded to *jakarta.annotations-api* 2.1.1. After this, when I started the application, I got an error like:
> NoClassDefFoundError for javax.security.RunAs
How can I fix this issue?
NoClassDefFoundError for javax.security.RunAs when using Jakarta Annotations API 2.1.1 on Spring Boot 2.7 |
|spring-boot|jakarta-annotations| |
Below is the piece of code that writes data read from files already encoded as cp1252.
```python
with open(out_path, "a") as csv_file:
    writer = csv.writer(csv_file, dialect='excel', lineterminator='\n')
    writer.writerows(csv_values)
```
I am getting the data written out as `?` in place of `-`.
I tried to encode to cp1252 again, but got the error below:

    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 64: ordinal not in range(128)

Later I tried to decode and then encode, but then got subsequent errors.
Strangely, this issue is seen on cloud but not on-prem.
Does the Python version have an impact on the encoding?
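To pin down where the `?` comes from, here is a minimal check of how a cp1252 character such as the en dash behaves under different codecs (it runs the same on 2.7 and 3.x). My guess, flagged as such in the comments, is that the two hosts differ in their default/locale encoding rather than in the Python version itself:

```python
# The en dash (U+2013) maps to byte 0x96 in cp1252, but does not exist
# in ASCII; encoding with errors="replace" substitutes '?'.
text = u"a\u2013b"

assert text.encode("cp1252") == b"a\x96b"           # round-trips fine
assert text.encode("ascii", "replace") == b"a?b"    # the '?' substitution

# Guess: if the cloud host's default/locale encoding is ASCII while the
# on-prem host's is cp1252 (or similar), the same code behaves differently.
print("checks passed")
```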
|
Is there a way to update dash faster when using a background callback? |
|python|multiprocessing|plotly-dash| |
Webpack generates a JS file for each resource defined in the `entry` option.
The `mini-css-extract-plugin` extracts CSS from an SCSS file defined in `entry`, but does not eliminate the generated empty JS file.
To fix it, you can use the `webpack-remove-empty-scripts` plugin.
1. Install
```
npm install webpack-remove-empty-scripts --save-dev
```
2. Add the plugin in your Webpack config:
```js
const path = require('path');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');
const RemoveEmptyScriptsPlugin = require('webpack-remove-empty-scripts'); // <= add it
module.exports = {
entry: {
index: './src/js/index.es6.js', // Entry point of your application
edit: './src/scss/edit.scss',
style: './src/scss/style.scss'
},
// ...
plugins: [
// removes the empty `.js` files generated by webpack
new RemoveEmptyScriptsPlugin(),// <= add it here
new MiniCssExtractPlugin({
// ...
})
],
// ...
};
``` |
I have set up a postgres:13.3 docker container with scram-sha-256 authentication.
postgresql.conf:
```
password_encryption = scram-sha-256
```
pg_hba.conf:
```
hostnossl all all 0.0.0.0/0 scram-sha-256
local all all scram-sha-256
```
After the above was done and the container restarted, I created a new fbp2 user with password 'fbp123'; the password seems to be saved as scram in the pg_authid table:
```
16386 | fbp2 | t | t | f | f | t | f | f | -1 | SCRAM-SHA-256$4096:yw+jyaEzlvlOjZnc/L/flA==$tqPlJIDXv9zueaGd8KpQf11N82IGgAOsK4Lhb7lPhi4=:+mCXFKb2y5PG6ycIKCz7xaY8U5MNLnkzlPZK8pt3to0= |
```
I use the original plain-text from within my java app to connect:
```
hikariConfig = new HikariConfig();
hikariConfig.setUsername("fbp2");
hikariConfig.setPassword("fbp123");
hikariConfig.setJdbcUrl("jdbc:postgresql://%s:%s/%s".formatted("localhost", 5432, "mydb"));
HikariDataSource dataSource = new HikariDataSource(hikariConfig);
return dataSource.getConnection();
```
From logs, this url is used: ``` jdbc:postgresql://localhost:5432/mydb ```
The issue is that password authentication fails, even though I use the same plain-text password that I set on the postgres server:
```
2024-03-30 14:38:03.372 DEBUG 22440 [ main] c.z.h.u.DriverDataSource : Loaded driver with class name org.postgresql.Driver for jdbcUrl=jdbc:postgresql://localhost:5432/mydb
2024-03-30 14:38:03.601 DEBUG 22440 [ main] c.z.h.p.PoolBase : HikariPool-1 - Failed to create/setup connection: FATAL: password authentication failed for user "fbp2"
2024-03-30 14:38:03.601 DEBUG 22440 [ main] c.z.h.p.HikariPool : HikariPool-1 - Cannot acquire connection from data source
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "fbp2"
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:693)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:203)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:258)
```
Note that if I revert to "trust" and send no password, I get this:
```
org.postgresql.util.PSQLException: The server requested SCRAM-based authentication, but no password was provided.
```
So, it seems the server only wants scram. I have tried md5 with no success.
----
Some relevant dependencies:
```
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.3.0</version>
</dependency>
<dependency>
    <groupId>com.zaxxer</groupId>
    <artifactId>HikariCP</artifactId>
    <version>5.1.0</version>
</dependency>
```
My Docker Desktop runs on Windows 11.
I use Oracle OpenJDK 20.0.1.
I can connect to mydb as the fbp2 user with no problem via the psql admin tool.
`x-c+M×s-N×t==0` and `z+s+t==1`, with `z,s,t` being Booleans representing zero, negative, or positive, and `M` and `N` being slack variables. It's very straightforward. The other answers saying it's not possible are wrong: `M×s` and `N×t` are easily linearized via the Fortet inequalities. For this reason, the accepted answer is subpar and misleading.
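For completeness, the standard big-M linearization this refers to, for a product `w = s×m` of a binary `s` and a bounded continuous `m` with `0 <= m <= M`, is:

```
w <= M*s
w <= m
w >= m - M*(1 - s)
w >= 0
```

When `s = 0` these force `w = 0`; when `s = 1` they force `w = m`.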
I prepared a simple dojo here: http://dojo.telerik.com/iQERE
**Scenario:**
I have an array within another array and I wanted to render it with a kendo template in a sort of table/grid.
First array's items are the rows and inner array's items are the columns.
I Googled and found this technique: [template inside template][1]
The problems are:
**1) How can I bind values for the nested array's items?**
I tried `data-bind="value:subval"` but it doesn't work.
I think because using that technique the 'real data' of this template is the outer array, not the inner one!
Tried `data-bind="value: item.subval"` - that led to nothing.
So finally I tried `data-bind="value: subList[#:index#].subval"` and it works. But I ask myself: Is this correct?
**2) How can I bind the value to a function in the nested template? (famous kendo mvvm calculated fields).**
I hoped I could bind all the input to a unique function who takes the 'caller' value and do something (multiply for another model field for example).
But I can't tell who called the function... my `"e"` argument is the whole data!
After some experiments I tried this way: http://dojo.telerik.com/OpOja and at first it works... but it seems the function doesn't trigger when value1 of the model changes (which I would expect in normal mvvm behavior), maybe because I declared the function inside the `dataSource` (it's not an `observable` object itself?).
I hope I explained my problem well!
[1]: http://www.telerik.com/forums/template-inside-template |
You can use scikit-image if it's a normal TIFF file:
```
from skimage import io
my_img = io.imread('myimage.tiff')
```
Alternatively, if your TIFF file is a georeferenced image, you can use GDAL:
```
from osgeo import gdal
import numpy as np  # needed for np.array / np.dstack below

ds = gdal.Open("myimage.tiff")
myarr1 = np.array(ds.GetRasterBand(1).ReadAsArray())
myarr2 = np.array(ds.GetRasterBand(2).ReadAsArray())
myarr3 = np.array(ds.GetRasterBand(3).ReadAsArray())
myimage = np.dstack([myarr1, myarr2, myarr3])
```
The answers so far are mostly correct, but there is more to the story.
The simple answer given so far -- that there might be runtime values that match no case, regardless of compile-time type checking for exhaustiveness -- is correct. A default clause is allowed because it is possible that it is selected (and if you don't provide one, the compiler gives you a synthetic one that throws MatchException.)
There are two primary reasons why a switch that is exhaustive at compile time might not be truly exhaustive at runtime: separate compilation, and remainder.
Separate compilation has been treated adequately in the other answers; novel enum constants and novel subtypes of sealed types can show up at runtime, because it is possible to recompile the hierarchy without recompiling switches over it. This is normally handled silently for you by the compiler (there's no point in making you declare a `default` clause that just throws "can't get here"), but you can handle it yourself if you want.
The second reason is _remainder_, which reflects the fact that the reasonable meaning of "exhaustive enough" and actual exhaustiveness do not fully coincide, and if we demanded that switches be truly exhaustive, it would be very unfun to program in.
A simple example is this:
```java
Box<Box<String>> bbs = ...
switch (bbs) {
    case Box(Box(String s)): ...
}
```
Should this switch be exhaustive on `Box<Box<String>>`? As it turns out, there is one possible value at runtime it does not match: `Box(null)`. (Recall that nested patterns match the outer pattern, and then use the bindings of the outer pattern as match candidates for the inner pattern -- and a record pattern cannot match null, because it wants to invoke the component accessors.)
We could demand that to be exhaustive, there be a separate error handling case for `Box(null)`, but no one would like that, and with less trivial examples, the error handling would overwhelm the useful cases. So Java makes the pragmatic choice of defining switch exhaustiveness in "human" terms -- the code that seems exhaustive for the reasonable classes -- and allowing for the "silly" cases to be handled by the synthetic default. (You are still free to handle any silly cases explicitly if you want.) This switch is considered exhaustive, but with non-empty remainder.
This whole concept is explained in greater detail in [Patterns: Exhaustiveness, Unconditionality, and Remainder](https://openjdk.org/projects/amber/design-notes/patterns/exhaustiveness). |