```
// Imports added for completeness (assuming MUI v5 package names):
import {
  AccountBox, Article, Group, Home, ModeNight,
  Person, Settings, Storefront,
} from '@mui/icons-material';
import {
  Avatar, Box, List, ListItem, ListItemButton, ListItemIcon,
  ListItemText, Stack, Switch, Typography,
} from '@mui/material';

const Sidebar = () => {
return (
<Box flex={1} height="100%">
<Box position="fixed" height="100%" sx={{ boxShadow: 3 }}>
<Box p={4}>
<Stack
sx={{
display: 'flex',
justifyContent: 'center',
alignItems: 'center',
}}
direction="column"
spacing={2}
>
<Avatar alt="Remy Sharp" src="/static/images/avatar/1.jpg" />
<Typography>Hi, DEANS</Typography>
</Stack>
</Box>
<List>
<ListItem disablePadding>
<ListItemButton component="a" href="#home">
<ListItemIcon>
<Home />
</ListItemIcon>
<ListItemText primary="Homepage" />
</ListItemButton>
</ListItem>
<ListItem disablePadding>
<ListItemButton component="a" href="#simple-list">
<ListItemIcon>
<Article />
</ListItemIcon>
<ListItemText primary="Pages" />
</ListItemButton>
</ListItem>
<ListItem disablePadding>
<ListItemButton component="a" href="#simple-list">
<ListItemIcon>
<Group />
</ListItemIcon>
<ListItemText primary="Groups" />
</ListItemButton>
</ListItem>
<ListItem disablePadding>
<ListItemButton component="a" href="#simple-list">
<ListItemIcon>
<Storefront />
</ListItemIcon>
<ListItemText primary="Marketplace" />
</ListItemButton>
</ListItem>
<ListItem disablePadding>
<ListItemButton component="a" href="#simple-list">
<ListItemIcon>
<Person />
</ListItemIcon>
<ListItemText primary="Friends" />
</ListItemButton>
</ListItem>
<ListItem disablePadding>
<ListItemButton component="a" href="#simple-list">
<ListItemIcon>
<Settings />
</ListItemIcon>
<ListItemText primary="Settings" />
</ListItemButton>
</ListItem>
<ListItem disablePadding>
<ListItemButton component="a" href="#simple-list">
<ListItemIcon>
<AccountBox />
</ListItemIcon>
<ListItemText primary="Profile" />
</ListItemButton>
</ListItem>
<ListItem disablePadding>
<ListItemButton component="a" href="#simple-list">
<ListItemIcon>
<ModeNight />
</ListItemIcon>
<Switch onChange={() => {}} />
</ListItemButton>
</ListItem>
</List>
</Box>
</Box>
)
}
export default Sidebar
``` |
I'll start by saying that I'm not a Java expert, and I'm facing a really strange problem that I'm not able to solve.
I'm working on an application to which I decided to add a Telegram bot in order to provide some remote information. When I run the application from the IDE (currently IntelliJ IDEA) everything works fine, but as soon as I create an executable version of the same application, the Telegram bot stops working without throwing any exception, and I cannot figure out what I'm doing wrong.
To isolate the problem, I created a dummy project that implements only the Telegram bot (extracted from the first project), but the behavior is the same: from the IDE, no problem; from the executable, the bot doesn't work.
In the following sections you can find all the details about the code and the dependencies I'm using.
Basically, I created a JavaFX project made up of a single scene with two buttons:
- **Connect**: used to open the connection with the telegram bot
- **Send**: used to send a "Test" message over the bot channel.
[Application View](https://i.stack.imgur.com/i569O.png)
# Code
Here you can find the FX controller class:
```
package com.example.telegrambotui;
import javafx.fxml.FXML;
import javafx.scene.control.Alert;
import javafx.scene.control.ButtonType;
import javafx.scene.control.Label;
import javafx.stage.Stage;
import org.telegram.telegrambots.meta.exceptions.TelegramApiException;
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Optional;
public class HelloController {
@FXML
private Label welcomeText;
private TelegramBot telegramBot;
@FXML
protected void connectClick() throws IOException {
try {
telegramBot = new TelegramBot("My_test_bot");
welcomeText.setText("Connected");
} catch (Exception e) {
File file = new File("C:\\Users\\luca-\\IdeaProjects\\TelegramBotUI\\Installer\\ConnectException.txt");
file.createNewFile();
PrintWriter pw = new PrintWriter(file);
e.printStackTrace(pw);
pw.close();
}
}
@FXML
protected void onTestButtonClick() throws IOException {
try {
telegramBot.sendMessage("Test");
welcomeText.setText("Sent \"Test\"");
} catch (TelegramApiException e) {
welcomeText.setText("Exception!!");
fireAlarm(Alert.AlertType.ERROR, HelloApplication.stage, "Error", "Connection Info not set!", e.getMessage());
File file = new File("C:\\Users\\luca-\\IdeaProjects\\TelegramBotUI\\Installer\\TestButtonException.txt");
file.createNewFile();
PrintWriter pw = new PrintWriter(file);
e.printStackTrace(pw);
pw.close();
}
}
private Optional<ButtonType> fireAlarm(Alert.AlertType type, Stage owner, String title, String headerText, String contentText) {
Alert alert = new Alert(type);
alert.initOwner(owner);
alert.setTitle(title);
alert.setContentText(contentText);
if (!headerText.equals("")) {
alert.setHeaderText(headerText);
}
return alert.showAndWait();
}
}
```
Here you can find the TelegramBot class:
```
package com.example.telegrambotui;
import org.telegram.telegrambots.bots.TelegramLongPollingBot;
import org.telegram.telegrambots.meta.TelegramBotsApi;
import org.telegram.telegrambots.meta.api.methods.send.SendMessage;
import org.telegram.telegrambots.meta.api.objects.Update;
import org.telegram.telegrambots.meta.exceptions.TelegramApiException;
import org.telegram.telegrambots.updatesreceivers.DefaultBotSession;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
public class TelegramBot extends TelegramLongPollingBot {
private final static HashMap<String, TelegramChannel> AvailableBots = new HashMap<>(){{
put("My_test_bot", new TelegramChannel("My_test_bot", "Here I have put the bot token"));
}};
private TelegramChannel telegramChannel;
private final String INIT;
private final String HELLO_WORLD = "/hello";
private final String IS_ALIVE = "/alive";
private final List<String> COMMANDS = Arrays.asList( IS_ALIVE, HELLO_WORLD);
private TelegramBotsApi telegramBotsApi;
public TelegramBot(String channel) throws TelegramApiException {
telegramChannel = AvailableBots.get(channel);
telegramBotsApi = new TelegramBotsApi(DefaultBotSession.class);
telegramBotsApi.registerBot(this);
INIT = "<strong>[" + telegramChannel.username() + "]</strong>\n";
}
public static String[] getAvailableBot() {
String[] bots = new String[AvailableBots.keySet().size()];
int i = 0;
for(String k: AvailableBots.keySet()){
bots[i] = k;
i++;
}
return bots;
}
@Override
public void onUpdateReceived(Update update) {
if (update.hasChannelPost() && update.getChannelPost().hasText()) {
if (COMMANDS.contains(update.getChannelPost().getText())){
try {
switch (update.getChannelPost().getText()){
case HELLO_WORLD -> sendMessage("Hello World!");
case IS_ALIVE -> sendMessage("I'm alive!!");
}
} catch (TelegramApiException e) {
throw new RuntimeException(e);
}
}
}
}
public void sendMessage(String mex) throws TelegramApiException {
if(!mex.startsWith(INIT)){
mex = INIT + mex;
}
SendMessage toSend = new SendMessage(telegramChannel.chat_id(), mex);
toSend.enableHtml(true);
execute(toSend);
}
@Override
public String getBotUsername() {
return telegramChannel.username();
}
@Override
public String getBotToken() {
return telegramChannel.token();
}
}
```
# Dependencies
Here you can find all the dependencies:
[Dependencies](https://i.stack.imgur.com/sA90M.png)
And this is the Artifacts file:
[Artifacts part 1](https://i.stack.imgur.com/A8yLQ.png)
[Artifacts part 2](https://i.stack.imgur.com/1W8RS.png)
# Executable script
In order to generate the executable file, I'm using the following script:
```
jpackage -t exe --name "TelegramBotUI" --icon ".\MyIcon.ico" --input "../out/artifacts/TelegramBotUI_jar" --dest "./" --main-jar "TelegramBotUI.jar" --main-class "com.example.telegrambotui.HelloApplication" --module-path "C:\Program Files\Java\javafx-jmods-17.0.2" --add-modules javafx.controls,javafx.fxml --win-menu --win-dir-chooser
```
Since I didn't catch any exception from the executable version, I tried to write any exception that occurred to a txt file, but none was ever caught.
I think I'm making some mistake in the way I generate the executable, because from the IDE the application works without any problem.
I've tried switching the Telegram bot library to a newer version (6.9.7.1), but nothing changed.
What am I doing wrong? |
I've come up with a solution that requires a naming/styling convention on the bridge model, but other than that, everything works dynamically. You can see the naming assumptions I made in the f-strings below.
```
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, ForeignKey, UniqueConstraint, Table, create_engine, select
from sqlalchemy.orm import relationship, registry, sessionmaker
from sqlalchemy.dialects.sqlite import insert
mapper_registry = registry()
Base = declarative_base()
bridge_category = Table(
"bridge_category",
Base.metadata,
Column("video_id", ForeignKey("video.id"), primary_key=True),
Column("category_id", ForeignKey("category.id"), primary_key=True),
UniqueConstraint("video_id", "category_id"),
)
class BridgeCategory: pass
mapper_registry.map_imperatively(BridgeCategory, bridge_category)
class Video(Base):
__tablename__ = 'video'
id = Column(Integer, primary_key=True)
title = Column(String)
categories = relationship("Category", secondary=bridge_category, back_populates="videos")
class Category(Base):
__tablename__ = 'category'
id = Column(Integer, primary_key=True)
text = Column(String, unique=True)
videos = relationship("Video", secondary=bridge_category, back_populates="categories")
def get_dict_from_model_obj(obj):
d = {}
for column in obj.__table__.columns:
d[column.name] = getattr(obj, column.name)
return d
def add_model_object_with_lists(s, obj):
d = get_dict_from_model_obj(obj)
d_not_list = {k: v for k, v in d.items() if not isinstance(v, list)} # remove list attrs (i.e. categories)
model_2 = type(obj)(**d_not_list)
s.add(model_2)
s.commit()
list_models = [obj.__getattribute__(attr) for attr in obj.__dict__ if isinstance(obj.__getattribute__(attr), list)] # get list attrs (i.e. categories)
for list_model in list_models:
for model_obj in list_model:
d = get_dict_from_model_obj(model_obj)
model_obj_type = type(model_obj)
sql = insert(model_obj_type).values(d).on_conflict_do_nothing([model_obj_type.text])
s.execute(sql)
s.commit()
model_obj_id = s.scalar(select(model_obj_type.id).where(model_obj_type.text == model_obj.text))
# makes assumptions about bridge table names and column names
bridge_class = globals()[f'Bridge{model_obj_type.__name__}']
sql = insert(bridge_class).values(
{
f'{model_2.__tablename__}_id': model_2.id,
f'{model_obj.__tablename__}_id': model_obj_id
}
)
s.execute(sql)
s.commit()
if __name__=='__main__':
engine = create_engine('sqlite:///:memory:', echo=True)
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
v1 = Video(title='A', categories=[Category(text='blue'), Category(text='red')])
v2 = Video(title='B', categories=[Category(text='green'), Category(text='red')])
v3 = Video(title='C', categories=[Category(text='grey'), Category(text='red')])
videos = [v1, v2, v3]
with Session() as s:
for video in videos:
add_model_object_with_lists(s, video)
``` |
I have a really cool Kotlin/Gradle lib that I want to use in a Spring Boot microservice developed in Java. It is a market data API for Polygon: [Client JVM GitHub][1]
[1]: https://github.com/polygon-io/client-jvm
I am brand new to Kotlin but have made my way around this project. It has some sample Java usage code that runs in my IDE. I am now trying to import this project into a Maven Spring Boot service; in the project's build.gradle there is a maven-publish plugin being referenced. I can't find the client lib in any public Maven repo, so I am trying to build it locally. It works in my IntelliJ IDE, but on the command line I get this error:
Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.
My other key question is: assuming I can get the artifact to build and publish it into my local ~/.m2 repository, is it possible for me, in Spring Boot Java, to import it like I would any other Maven lib and instantiate/invoke class methods developed in Kotlin, or is the process not that easy?
This is the build.gradle of the client lib. As you can see, it does reference the maven-publish plugin; to get this to publish, is the command `gradle publish`? Of course, I would have to get it to build first.
plugins {
`java-library`
`maven-publish`
kotlin("jvm") version "1.6.10"
kotlin("plugin.serialization") version "1.6.10"
kotlin("kapt") version "1.6.10"
}
|
I was having the same issue; then I tried adding `event.preventDefault()` and it worked. My `notificationclick` event looked like this:
self.addEventListener("notificationclick", (event) => {
event.preventDefault();
let distUrl = self.location.origin + "/specific-path";
const apntId = event.notification.data?.apntId;
if (apntId) distUrl = self.location.origin + "/other-path/" + apntId;
event.notification.close();
event.waitUntil(
self.clients.matchAll({ type: "window", includeUncontrolled: true }).then((clients) => {
if (clients.length > 0) {
const client = clients[0];
client.navigate(distUrl);
client.focus();
return;
} else return self.clients.openWindow(distUrl);
})
);
}); |
I'm learning in detail how the PGP system works, but there are some things that aren't explained anywhere I've looked.
According to [this Wikipedia diagram:](https://upload.wikimedia.org/wikipedia/commons/4/4d/PGP_diagram.svg)
[![enter image description here][1]][1]
When encrypting, we use the data and a random key, and then we have the protected data (the one with the padlock).
Here's my first question: how is this data encrypted? Which algorithm is used?
My second question is about the last encryption step:
Locked data + locked key = encrypted message
Same thing here: how? What algorithm do they use?
Also, I read somewhere that they hash the whole data so you can't change it without breaking everything, but when do they do that?
Thanks in advance!
[1]: https://i.stack.imgur.com/Y3Llg.png |
Unlike the first answer, I disagree.
Run both code segments with an `.explain()` and you will see that the generated physical plan for execution is *exactly the same*.
Spark is based on `lazy evaluation`. That is to say:
> All transformations in Spark are lazy, in that they do not compute
> their results right away. Instead, they just remember the
> transformations applied to some base dataset (e.g. a file). The
> transformations are only computed when an action requires a result to
> be returned to the driver program. This design enables Spark to run
> more efficiently. For example, we can realize that a dataset created
> through map will be used in a reduce and return only the result of the
> reduce to the driver, rather than the larger mapped dataset.
The upshot of all this is that I ran code similar to yours with 2 filters applied. Note that because the **Action** `.count` causes just-in-time evaluation, Catalyst filtered based on both the first and the second filter in a single step. This is known as `"code fusing"`, which is possible thanks to late execution, aka lazy evaluation.
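As a loose standard-library analogy (plain Python generators, not Spark itself): chained lazy stages do no work until a terminal operation consumes them, and it makes no difference whether the stages are assigned to intermediate variables or chained inline:

```python
def trace(label, items, log):
    # generator stage: records when each element actually flows through it
    for item in items:
        log.append(label)
        yield item

log = []
stage1 = trace("filter", (x for x in range(5) if x % 2 == 0), log)
stage2 = trace("map", (x * 10 for x in stage1), log)
assert log == []        # "transformations" declared, but nothing computed yet

result = list(stage2)   # the "action": consuming the pipeline runs both stages
print(result)           # [0, 20, 40]
```

Catalyst goes further than this analogy, rewriting the fused pipeline into a single plan, but the triggering behavior is the same.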
**Snippet 1 and Physical Plan**
from pyspark.sql.types import StructType,StructField, StringType, IntegerType
from pyspark.sql.functions import col
data = [("James","","Smith","36636","M",3000),
("Michael","Rose","","40288","M",4000),
("Robert","","Williams","42114","M",4000),
("Maria","Anne","Jones","39192","F",4000),
("Jen","Mary","Brown","","F",-1)
]
schema = StructType([ \
StructField("firstname",StringType(),True), \
StructField("middlename",StringType(),True), \
StructField("lastname",StringType(),True), \
StructField("id", StringType(), True), \
StructField("gender", StringType(), True), \
StructField("salary", IntegerType(), True) \
])
df = spark.createDataFrame(data=data,schema=schema)
df = df.filter(col('lastname') == 'Jones')
df = df.select('firstname', 'lastname', 'salary')
df = df.filter(col('lastname') == 'Jones2')
df = df.groupBy('lastname').count().explain()
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- HashAggregate(keys=[lastname#212], functions=[finalmerge_count(merge count#233L) AS count(1)#228L])
+- Exchange hashpartitioning(lastname#212, 200), ENSURE_REQUIREMENTS, [plan_id=391]
+- HashAggregate(keys=[lastname#212], functions=[partial_count(1) AS count#233L])
+- Project [lastname#212]
+- Filter (isnotnull(lastname#212) AND ((lastname#212 = Jones) AND (lastname#212 = Jones2)))
+- Scan ExistingRDD[firstname#210,middlename#211,lastname#212,id#213,gender#214,salary#215]
**Snippet 2 and Same Physical Plan**
from pyspark.sql.types import StructType,StructField, StringType, IntegerType
from pyspark.sql.functions import col
data2 = [("James","","Smith","36636","M",3000),
("Michael","Rose","","40288","M",4000),
("Robert","","Williams","42114","M",4000),
("Maria","Anne","Jones","39192","F",4000),
("Jen","Mary","Brown","","F",-1)
]
schema2 = StructType([ \
StructField("firstname",StringType(),True), \
StructField("middlename",StringType(),True), \
StructField("lastname",StringType(),True), \
StructField("id", StringType(), True), \
StructField("gender", StringType(), True), \
StructField("salary", IntegerType(), True) \
])
df2 = spark.createDataFrame(data=data2,schema=schema2)
df2 = df2.filter(col('lastname') == 'Jones')\
.select('firstname', 'lastname', 'salary')\
.filter(col('lastname') == 'Jones2')\
.groupBy('lastname').count().explain()
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- HashAggregate(keys=[lastname#299], functions=[finalmerge_count(merge count#320L) AS count(1)#315L])
+- Exchange hashpartitioning(lastname#299, 200), ENSURE_REQUIREMENTS, [plan_id=577]
+- HashAggregate(keys=[lastname#299], functions=[partial_count(1) AS count#320L])
+- Project [lastname#299]
+- Filter (isnotnull(lastname#299) AND ((lastname#299 = Jones) AND (lastname#299 = Jones2)))
+- Scan ExistingRDD[firstname#297,middlename#298,lastname#299,id#300,gender#301,salary#302]
**withColumn**
Doing this:
df = df.filter(col('lastname') == 'Jones')
df = df.select('firstname', 'lastname', 'salary')
df = df.withColumn("salary100",col("salary")*100)
df = df.withColumn("salary200",col("salary")*200).explain()
or via chaining gives the same result as well. I.e. it does not matter how you write the transformations.
== Physical Plan ==
*(1) Project [firstname#399, lastname#401, salary#404, (salary#404 * 100) AS salary100#414, (salary#404 * 200) AS salary200#419]
+- *(1) Filter (isnotnull(lastname#401) AND (lastname#401 = Jones))
+- *(1) Scan ExistingRDD[firstname#399,middlename#400,lastname#401,id#402,gender#403,salary#404] |
I want to start learning to create menus, so this is a script I found for testing.
There are no faults in it, but it doesn't show a window with the menus.
I have looked for the reason, and the problem is that it can't find the sys module.
First, the code I use:
```
import sys
```
```
print(sys.path)
sys.path.insert(0,'C:\anaconda3\Lib\site-packages')
from qtpy.QtCore import Qt
from qtpy.QtWidgets import QApplication, QLabel, QMainWindow
```
```
class Window(QMainWindow):
def _init_(self, parent=None):
super()._init_(parent)
self.setWindowTitle("Probeersel")
self.resize(400, 200)
self.centralWidget = QLabel("Hello")
self.centralWidget.setAlignment(Qt.AlignHCenter | Qt.AlingVCenter)
self.setCentralWidget(self.centralWidget)
```
```
if __name__ == "_main_":
app = QApplication(sys.argv)
win = Window()
win.show()
sys.exit(app.exec_())
```
I have already uninstalled Anaconda and reinstalled it. But the same thing: not working.
I have put my save file in the Anaconda folder, but it's still not working.
I have already looked here for a solution, but no, still not working :( |
I'm trying to set `AccessKeys` in `appsettings.Development.json` pointing to a OneDrive folder. Since some team members have their OneDrive in different folders, I was trying to use Windows environment variables like this:
"AppSettings": {
"AccessKeys": "%OneDrive%\\project123\\keys\\",
}
However, the above attempt is not working, since the file is not found.
internal XDocument ReturnFileContent(string filename)
{
string documentPath = _configuration["AppSettings:AccessKeys"];
string xmlFilePath = Path.Combine(documentPath, filename);
var aux = File.Exists(xmlFilePath); <- always false
//...
} |
Windows environment variables at appsettings.json |
|asp.net-core|environment-variables|appsettings| |
You need to clone the `Carbon` object or change to an immutable object. Carbon is a mutable object by default.
Use the `copy` or `clone` method:
$booking_end = $booking_start->copy()->addMinute(45)->format("Y-m-d H:i:s");
$booking_end = $booking_start->clone()->addMinute(45)->format("Y-m-d H:i:s");
Or use PHP's `object cloning`: [php.net][1]
$booking_end = (clone $booking_start)->addMinute(45)->format("Y-m-d H:i:s");
----------
See this Stack Overflow post about the same issue: https://stackoverflow.com/a/49905830/11836673
[1]: https://www.php.net/manual/en/language.oop5.cloning.php |
I'm working on a .NET 8 Blazor project and I'm trying to implement Azure AD B2C authentication. According to Microsoft's documentation, I'm using `RemoteAuthenticatorView`. It works fine in WebAssembly mode, but I'm encountering issues when using "Interactive Auto".
When I use "Interactive Auto", I get the following error:
Cannot provide a value for property ‘AuthenticationService’ on type ‘Microsoft.AspNetCore.Components.WebAssembly.Authentication.RemoteAuthenticatorView’. There is no registered service of type ‘Microsoft.AspNetCore.Components.WebAssembly.Authentication.IRemoteAuthenticationService`1[Microsoft.AspNetCore.Components.WebAssembly.Authentication.RemoteAuthenticationState]’
In an attempt to resolve this, I added `builder.Services.AddApiAuthorization();` to my code. However, this resulted in another error:
Unable to cast object of type ‘Microsoft.AspNetCore.Components.Server.ServerAuthenticationStateProvider’ to type ‘Microsoft.AspNetCore.Components.WebAssembly.Authentication.IRemoteAuthenticationService`1[Microsoft.AspNetCore.Components.WebAssembly.Authentication.RemoteAuthenticationState]’
I'm looking for guidance on how to resolve these errors and successfully implement Azure AD B2C authentication in my Blazor project. Any help would be greatly appreciated. |
Implementing Azure AD B2C Authentication in .NET 8 Blazor Project (RenderMode: InteractiveAuto) |
|c#|asp.net|.net|blazor|azure-ad-b2c| |
I'm trying to make a bot with Discord.py, and I can't use discord-slash. I tried to install it with pip. I will add pics: (https://i.stack.imgur.com/6ygwM.png)(https://i.stack.imgur.com/6sBMO.png)(https://i.stack.imgur.com/ci4WB.png)
Don't ask about the bot name; it was a Discord bot for a Minecraft server originally...
I added the pip install pics up there, and the import... |
After starting my code it closes immediately |
|python|pygame| |
Resolved it!
The biggest problem was that I didn't know what the problem was.
After asking around, I learned that the problem is called a "**circular reference**" and "**duplicate header**" issue. In short, these are problems that occur when objects reference each other, or when headers are included more than once during compilation. There's a lot of material on the internet, so use this as a milestone.
This can be solved by inserting **#pragma once** or **#ifndef HEADER_H #define HEADER_H ... #endif** include guards into the header.
In my case, I was able to solve it by inserting **the latter**, #ifndef ... (and **#ifdef** where appropriate), into the headers.
I leave the answer here for anyone else who encounters the same problem in the future.
Good luck, guys! |
I have created a 10k-line dictionary for a program I'm making. Some of the entries have duplicate keys because of the way the data needs to be presented.
The fix for this is to turn the value into a list, removing the duplicate keys.
I was wondering if there is an automated way to do this (in PyCharm)?
For example:
dict = {
"A": "Red",
"A": "Blue",
"A": "Green",
"B": "Yellow",
"C": "Black"
}
wanted output :
dict = {
"A": ["Red","Blue","Green"],
"B": "Yellow",
"C": "Black"
}
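For reference, a scripted sketch of the transformation (the `pairs` input below is hypothetical: since a Python dict literal silently keeps only the last duplicate key, the entries would first have to be extracted from the source file as key/value pairs, e.g. with a regex):

```python
from collections import defaultdict

# Hypothetical input extracted from the source file as (key, value) pairs;
# the literal itself can't be used, because Python keeps only the last
# occurrence of a duplicate key.
pairs = [("A", "Red"), ("A", "Blue"), ("A", "Green"),
         ("B", "Yellow"), ("C", "Black")]

merged = defaultdict(list)
for key, value in pairs:
    merged[key].append(value)

# collapse single-item lists back to bare values, matching the wanted output
result = {k: v[0] if len(v) == 1 else v for k, v in merged.items()}
print(result)  # {'A': ['Red', 'Blue', 'Green'], 'B': 'Yellow', 'C': 'Black'}
```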
I've tried ChatGPT and manual labor :D
I'm looking for a smarter way to do this and to learn new approaches |
Dictionary contains duplicate keys , automated fix help needed |
|python| |
I wonder about DevOps and web development? Can anyone tell me about this, please?
Hello, what is DevOps? What languages should I learn?
I want to know which languages to learn in order to get there.
Is it the same as a backend developer, or something else? |
What does DevOps mean? What are the requirements? |
|computer-science| |
Can't find the sys module |
|python|sys| |
The easiest thing to do is this:
calculation = if then else ( result < 150 , 1 , 0 )
+ if then else ( result >= 150 :and: result < 200 , 2 , 0 )
+ if then else ( result > 200, 3, 0 )
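For comparison, the same branch structure written out in Python (a sketch with the same illustrative thresholds; it also makes visible that a `result` of exactly 200 matches none of the three conditions above):

```python
def calculation(result):
    # direct port of the three "if then else" terms above
    if result < 150:
        return 1
    if 150 <= result < 200:
        return 2
    if result > 200:
        return 3
    return 0  # note: result == 200 matches none of the branches

print(calculation(100), calculation(175), calculation(250))  # 1 2 3
```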
I'd also remove the numbers (150 etc) and make them constants with units. It will make your model easier for others to understand. |
The program basically sorts the data and removes duplicates, considering the 5-29 character range.
I am trying to sort some values on the mainframe as follows:
SORT FIELDS=(5,24,CH,A,45,10,CH,A)
when I use
> OUTFIL REMOVECC,NODETAIL,
What I receive before and after running it is different. Can you please explain the function of the OUTFIL REMOVECC,NODETAIL statement?
Here are the outputs before and after I run this command using the same input:
INPUT:
[![enter image description here][1]][1]
OUTPUT(Without Outfil Removecc, nodetail):
[![enter image description here][2]][2]
OUTPUT (with Outfil Removecc, nodetail command):
[![enter image description here][3]][3]
I want to know the functionality of a command on Mainframe.
[1]: https://i.stack.imgur.com/CQEwS.png
[2]: https://i.stack.imgur.com/rNZy1.png
[3]: https://i.stack.imgur.com/kW5U1.png |
Mainframe Programming Sorting, OUTFIL REMOVECC,NODETAIL |
|sorting|mainframe|jcl| |
null |
I'm trying to integrate PyDeequ with PySpark in my Streamlit application to perform comprehensive data quality checks on a CSV file. I want to use PyDeequ's functionalities to perform various tests including completeness, correctness, uniqueness, outlier detection, and date format correctness. However, I'm encountering an error that says the 'JavaPackage' object is not callable. Here's the relevant code snippet, the specific tests I'm trying to perform, and the error message:
```lang-py
import streamlit as st
from pyspark.sql import SparkSession
from pydeequ import AnalysisRunner
from pydeequ.analyzers import Completeness
def create_spark_session():
return SparkSession.builder.appName("DataQualityCheck").getOrCreate()
def read_csv_data(spark, uploaded_file):
df = spark.read.csv(uploaded_file, header=True, inferSchema=True)
return df
def main():
st.title("Data Quality Checker")
uploaded_file = st.file_uploader("Choose a CSV file:", key="csv_uploader", type="csv")
if uploaded_file is not None:
spark = create_spark_session()
df = read_csv_data(spark, uploaded_file)
analysis_runner = AnalysisRunner(spark)
analysis_result = analysis_runner.onData(df).addAnalyzer(Completeness("MRN")).run()
completeness_results = analysis_result['Completeness']
completeness_mrn = completeness_results['MRN']
completeness_percent_mrn = completeness_mrn['completeness']
missing_count_mrn = completeness_mrn['count']
if __name__ == "__main__":
main()
```
```
TypeError: 'JavaPackage' object is not callable
Traceback:
File "E:\Deequ\pydeequ_env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "E:\data_quality.py", line 43, in <module>
completeness_mrn = completeness_results['MRN']
File "E:\Deequ\pydeequ_env\lib\site-packages\pydeequ\analyzers.py", line 52, in onData
return AnalysisRunBuilder(self._spark_session, df)
File "E:\Deequ\pydeequ_env\lib\site-packages\pydeequ\analyzers.py", line 124, in __init__
self._AnalysisRunBuilder = self._jvm.com.amazon.deequ.analyzers.runners.AnalysisRunBu
```
Data Quality Tests:
1. **Completeness**: Ensure that certain columns (e.g., "MRN" and "Date of Admission") have complete data.
2. **Correctness**: Verify that data in specific columns adhere to certain format or correctness rules (e.g., "MRN" format correctness).
3. **Uniqueness**: Check if certain columns contain unique values (e.g., "MRN" uniqueness).
4. **Outlier Detection**: Identify any outliers in numerical columns (e.g., "Billing Amount").
5. **Date Future Format**: Ensure that dates in a certain column (e.g., "Date of Admission") are not in the future.
I have installed PyDeequ version 1.2.0 and PySpark downgraded version 3.3.1 in my environment. Could someone please help me understand why I'm encountering this error and how to resolve it? Thank you.
|
I actually ended up solving this issue myself. The problem was I had `List<GroupModel>` instead of `List<string>`. I changed it to `List<string>` and got all Active Directory groups. Thanks :D |
It's not a supported use case to link any number of anonymous accounts to a full existing account. Once the user has chosen to [convert their anonymous account to a normal account][1], they should from then on sign in directly with that account in order to continue using the app with their prior data.
If you are creating anonymous accounts at each unauthenticated visit, then you will have to accept that those new accounts will be abandoned if and when the user chooses to subsequently sign in with their full account using the (email) provider that was previously linked.
Maybe consider [automatic clean-up][2] if you don't want to deal with all these anonymous accounts accumulating.
[1]: https://firebase.google.com/docs/auth/web/anonymous-auth#convert-an-anonymous-account-to-a-permanent-account
[2]: https://firebase.google.com/docs/auth/web/anonymous-auth#auto-cleanup |
Is it possible to change the timeout setting of a GCP load balancer backend service (a cloud run)?
I tried with the command:
gcloud compute backend-services update my-cloudrun-backend-service --global --project my-project \
--timeout=600
and I got the error:
- Invalid value for field 'resource.timeoutSec': '600'. Timeout sec is not supported for a backend service with Serverless network endpoint groups.
I can't understand why this is fixed to 30s as mentioned in the docs. |
Change the timeout setting of a GCP load balancer backend service |
|google-cloud-run|gcp-load-balancer| |
I've tackled this issue from a different angle: patches. It does require a rather high-resolution screen, though (at least 200x200), for the trail to look nice. Taking inspiration from both @SethTisue and @Nigel, this is an example with a black background and a white fading trail, but the trail color can be altered as long as its number range is known (in NetLogo, go to Tools -> Color Swatches for more info).
globals [ trail_clean_freq trail_strength ]
breed [heads head]
to setup
clear-all
create-heads 5
reset-ticks
set trail_clean_freq 10 ;; would be better as a slider
set trail_strength 1 ;; would also be better as a slider
end
to go
ask heads [
fd 1
rt random 10
lt random 10
if trail_strength + pcolor < 9.9 [set pcolor pcolor + trail_strength]
]
if ticks mod trail_clean_freq = 0
[
ask patches [if pcolor > 0.1 [set pcolor pcolor - 0.1]]
]
tick
end
|
It's quite annoying that there isn't a clear example documented in the cryptography module. Here's the magic incantation to convert PEM files (cert + key) into a P12 file protected by a password.
from cryptography import x509
from cryptography.hazmat.primitives.serialization import BestAvailableEncryption, load_pem_private_key, pkcs12
hostname = "host.domain.com"
crt_file = "cert.pem"
key_file = "privkey.pem"
p12_file = 'host.p12'
password = "password"
with open(crt_file, mode='rb') as file:
crt = x509.load_pem_x509_certificate(file.read())
with open(key_file, mode='rb') as file:
key = load_pem_private_key(file.read(), None)
with open(p12_file, 'wb') as file:
file.write(
pkcs12.serialize_key_and_certificates(
hostname.encode(), key, crt, None,
BestAvailableEncryption(password.encode())))
|
Instead of checking for multiple conditions simultaneously, I think this approach is simpler; please check it out too:
**Method:**
- Iterate over each element in the array, incrementing a counter each time the current element is greater than the next element.
- To include the boundary elements in the check, use the modulo operator to traverse the array as a circular array.
(taking n = array.size())
```
bool isSortedAndRotated(const std::vector<int>& array)
{
    int n = array.size();
    int count = 0;
    // count the positions where the order "drops"; a sorted
    // (and possibly rotated) array has at most one such position
    for (int i = 0; i < n; i++)
    {
        if (array[i] > array[(i+1) % n])
            count++;
    }
    return (count <= 1);
}
```
|
I think the problem is that the funds are in your platform's balance. They need to be in your connected account's balance for you to make a payout.
To clarify:
- A Payout is a transfer of funds between the Stripe account's balance and a bank account connected to that account.
- A Transfer is a transfer of funds between the balances of two Stripe accounts.
If you have `available` funds in your platform balance, you can make a payout to the bank account connected to your platform.
If you want to make a payout to the bank account connected to your connected account, you need `available` funds in the connected account's balance.
With that in mind:
- Check the balance for your platform (optional, you already did this)
https://docs.stripe.com/api/balance/balance_retrieve
- Create a Transfer from your platform to your connected account:
https://docs.stripe.com/api/transfers/create
- Check the balance for your connected account (same as above with the `stripeAccount` header)
- Create your payout, with the `stripeAccount` header and bank account / card ID as `destination`.
`source_type` should be card but doesn't need to be specified. In fact I don't think `destination` does either, provided the connected account only has one valid external_account / you want to use the default one. |
I have a Java Spring Boot application running in a container within a Kubernetes pod. I'm wondering if there's a way to:
1. Retrieve the Dockerfile configuration from the Kubernetes cluster namespace, without pulling the image from the repository.
2. Suppose my Java application runs with `JAVA_OPTS=X`. I would like Kubernetes to somehow add to `JAVA_OPTS` so that it becomes `JAVA_OPTS=X,Y,Z`, and then restart the pod. Additionally, I want the new configuration to be saved inside the container without modifying the original Dockerfile.
Thank you |
Retrieve the Dockerfile configuration from the Kubernetes and also change container java parameter? |
|docker|kubernetes|pods| |
I'm using `Make` (`Automake`) to compile and execute unit tests. However, these tests need to read and write test data. If I just specify a path, the tests only work from a specific directory. This is a problem: first, the test executables may be executed by `make` from a directory different from the one used at compile time, and secondly, they should be executable even manually or in `VPATH` builds, which currently breaks `make distcheck`.
Even using `srcdir` via config.h isn't particularly useful, because it is of course evaluated at compile time rather than at runtime. <del>What would be nice is if the builddir were passed at runtime instead.</del> What I think is necessary is a way to get the srcdir relative to the builddir into the executable. This needs to be determined/adjusted at runtime, for the reasons written earlier. While this wouldn't solve the problem of being executed outside of the builddir, I don't think that is a particular problem, because who would do that?
But would it be better to specify them via command arguments or via the environment? And is it "better" to specify individual files or a generic directory? I would consider `PATH`-like search behaviour overkill for just a test, or would that be recommended?
Or should I just give up and specify an absolute path? But I think that would break `VPATH` builds.
So the question is: how would I best specify the path to a test file, in terms of portability, interoperability, maintainability and common sense? |
The difference in the size of `long` between Windows and Linux can indeed affect the result.
If you want to ensure consistent behavior across platforms, you should consider using fixed-width integer types from the `<cstdint>` header. For example, you can use `int32_t` to ensure a 32-bit signed integer:
#include <typeinfo>
#include <cassert>
#include <cstdint>
#include <iostream>
int main()
{
    int32_t result = 1L + static_cast<int32_t>(1U);
    assert(typeid(result) == typeid(int32_t));
    std::cout << typeid(result).name();
}
By using fixed-width integer types, you explicitly specify the size of the integer, making your code more platform-independent. In this case, `int32_t` ensures a 32-bit signed integer regardless of the platform. |
Using any awk and 1 pass just storing the values for one $2 at a time in memory:
$ cat tst.awk
{
if ( $2 == prev[2] ) {
numUniq += ( $3 == prev[3] ? 0 : 1 )
}
else {
prt()
numUniq = numVals = 0
}
vals[++numVals] = $0
split($0,prev)
}
END { prt() }
function prt( i) {
for ( i=1; i<=numVals; i++ ) {
print vals[i], ( numUniq == 1 ? "same" : "diff" )
}
}
$ awk -f tst.awk test.txt
49808830/ccs 9492 TACA 3 diff
175833950/ccs 971 ACCC 1 diff
180422692/ccs 971 ACCC 10 diff
110952448/ccs 9714 TAGAG 2 same
117309969/ccs 9714 TAGAG 4 same
119998610/ccs 9714 TAGAG 5 same
171509463/ccs 9714 TAGAT 4 same
|
I have a widget that uses the resizeToAvoidBottomInset property on a column. Here it is below with some irrelevant details like decoration taken out:
Widget build(BuildContext context) {
return Scaffold(
body: Column(
children: [
TextField(
maxLines: 1,
maxLength: 6,
controller: textController,
),
Container(height: 16.0),
const Spacer(),
Container(
alignment: Alignment.bottomCenter,
child: ScreenWideButton("Update", onPressed: () {
if (textController.text.length > 4) {
widget.onChanged(int.parse(textController.text
.substring(1, textController.text.length - 2)));
Navigator.of(context).pop();
Fluttertoast.showToast(msg: "Changes saved");
} else {
Fluttertoast.showToast(msg: "Please enter a valid value");
}
}),
),
const SizedBox(height: 10),
],
),
resizeToAvoidBottomInset: true,
);
}
The widget works fine but when I pop back onto the screen under it by pressing the Update button, I get an overflow error very briefly as such (the gif is at 0.25x speed):
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/ILM9x.gif
I haven't tried anything because I don't really understand the problem. Any ideas how to fix it or why it's happening? |
resizeToAvoidBottomInset is causing an overflow when the screen is popped |
|flutter|dart| |
I actually ended up solving this issue myself. The problem was I had `List<GroupModel>` instead of `List<string>`. I changed it to `List<string>` and got all Active Directory Groups. Thanks :D |
I have a JTable into which I've placed a set of numbers representing gear ratios. There are some repetitions in the data, so I would like a different colored background for them; however, each set of duplicates must have a different color. Duplicates may consist of two, three or four cells. The list of duplicates (an ArrayList) includes the row and column in which each exists, and the RGB color (background and foreground) assigned to that set of duplicates. It is defined like so:
```
private Double ratio;
private int row;
private int col;
private int backgroundRed;
private int backgroundGreen;
private int backgroundBlue;
private int foregroundRed;
private int foregroundGreen;
private int foregroundBlue;
```
The list is sent to a cell renderer, so it would seem easy enough—all it has to do is read the list and render the colors. The trouble I'm having is that I can't seem to get the renderer to reset to the default black and white needed for those ratios that are not duplicated.
Here is what the table should look like. All of the non-duplicated cells are rendered in a white background with black font.
[](https://i.stack.imgur.com/Qnkme.png)
And here is the output from what I have developed thus far:
[](https://i.stack.imgur.com/MyLpN.png)
Row 0 in the output is almost correct in that it displays the first 3 columns in black and white since they are not duplicated, and the remaining cells in that row are different colors. The trouble starts in row 1, col 0 and col 1 where these two cells should also be in black and white but are in the same colors as row 0 col 8. This tells me that it’s not resetting to the default (i.e., black on white). This occurs again in row 3 (Col 5, 6 & 7), row 4 (Col 6, 7 & 8), row 5 (Col 0, 1 &2), and all of rows 6 and 8.
Here is the code for the renderer. You’ll notice that I tried two different comparisons (IF statements): one that compares the ratio values and the other that identifies the row and column of each set of duplicates. Both of these produce similar results. So my question is, how can I reset the renderer to a default color for those cells that are not duplicated?
```
public class NumberCellRenderer extends DefaultTableCellRenderer
{
private static final long serialVersionUID = 1L;
private JLabel label;
private DecimalFormat numberFormat = new DecimalFormat("#0.000");
private List<DuplicateValues> duplicateValues = new ArrayList<>();
private String text;
public NumberCellRenderer (List<DuplicateValues> duplicateValues)
{
this.duplicateValues = duplicateValues;
}
@Override
public JLabel getTableCellRendererComponent(JTable jTable, Object value, boolean isSelected, boolean hasFocus, int row, int column)
{
Component c = super.getTableCellRendererComponent(jTable, value, isSelected, hasFocus, row, column);
if (c instanceof JLabel && value instanceof Number)
{
label = (JLabel) c;
label.setHorizontalAlignment(JLabel.CENTER);
Number num = (Number) value;
text = numberFormat.format(num);
label.setText(text);
}
for(int i=0; i<duplicateValues.size(); i++)
{
// if(duplicateValues.get(i).getRatio() == Double.parseDouble(value.toString()))
if(row == duplicateValues.get(i).getRow() && column == duplicateValues.get(i).getCol())
{
label.setBackground(new Color(duplicateValues.get(i).getBackgroundRed(), duplicateValues.get(i).getBackgroundGreen(), duplicateValues.get(i).getBackgroundBlue()));
label.setForeground(new Color(duplicateValues.get(i).getForegroundRed(), duplicateValues.get(i).getForegroundGreen(), duplicateValues.get(i).getForegroundBlue()));
}
else
{
label.setBackground(getBackground());
label.setForeground(getForeground());
}
}
return label;
}
}
```
I've looked at other solutions but most want to render an entire column or row the same color. This app requires that the same colors be dispersed throughout the table. |
Mixed color rendering in a JTable |
|java|jtable|background-color|tablecellrenderer| |
null |
I actually ended up solving this issue myself. The problem was I had `List<GroupModel>` instead of `List<string>`. I changed it to `List<string>` and got all Active Directory Groups. Thanks :D |
From my previous knowledge, && (logical AND) has a higher precedence than || (logical OR), so for example in the following line of Java code `boolExp2` will be combined with `boolExp3` before `boolExp1` is combined with the result:
`boolean b = boolExp1 || boolExp2 && boolExp3`
Which is the same as:
`boolean b = boolExp1 || (boolExp2 && boolExp3)`
But in the following example I don't see this holding true. In the following code I have an int variable `x` which is equal to 1, and the code increments `x` in this line:
`boolean b = (1<2) || (6<x++) || (++x>9) && (true^false) ^ (x++<7);`
After this line is executed, the value of the variable `x` does not change. Does this relate to short-circuit evaluation (I am not an expert in this field), or is there something else?
**Full java code:**
```Java
public class Main {
public static void main(String[] args) {
// TODO Auto-generated method stub
int x = 1;
boolean b = (1<2) || (6<x++) || (++x>9) && (true^false) ^ (x++<7);
System.out.println(b);
System.out.println("x = "+ x);
}
}
```
**The output:**
```
true
x = 1
```
**Expectation:**
```
true
x = 4
```
Please provide a detailed execution of the code.
Thanks. |
Does the && (logical AND) operator have a higher precedence than || (logical OR) operator in Java? |
|java|operator-precedence|control-flow|boolean-operations|short-circuiting| |
null |
What does char * argv[] mean? |
I have a table with two attributes, 'first_interval' and 'second_interval', and I must add a new column 'date', but I can't do it.
Connecting to my table:
database = boto3.resource(
'dynamodb',
endpoint_url = config.USER_STORAGE_URL,
region_name = 'ru-central1',
aws_access_key_id = config.AWS_PUBLIC_KEY,
aws_secret_access_key = config.AWS_SECRET_KEY
)
table = database.Table('table231')
I used update_item but I got the error: 'botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateItem operation: Missing value for required parameter "first_interval"' |
Problem with adding a new attribute to a table with boto3 in Python |
|python|database|boto3| |
null |
I've been reading an [article][1] about inheritance strategies in Hibernate.
An `Author` has publications, and a `Publication` can be a `Book` or a `BlogPost`.
In the `join` inheritance strategy the following table structure is generated:
[![enter image description here][2]][2]
So the `publicationauthor` table stores two foreign keys: `authorid` and `publicationid`. The `publication` table stores the fields that are common to books and blogposts, and the `book` and `blogpost` store fields that are common only to the subclasses.
Then the article shows the SQL query generated when the `author.getPublications()` method is called:
```sql
select publicatio0_.authorId as authorId2_4_0_, publicatio0_.publicationId as
publicat1_4_0_, publicatio1_.id as id1_3_1_, publicatio1_.publishingDate as
publishi2_3_1_, publicatio1_.title as title3_3_1_, publicatio1_.version as version4_3_1_,
publicatio1_1_.pages as pages1_2_1_, publicatio1_2_.url as url1_1_1_,
case when
publicatio1_1_.id is not null then 1 when publicatio1_2_.id is not null then 2 when
publicatio1_.id is not null then 0 end as clazz_1_
from PublicationAuthor publicatio0_
inner join Publication publicatio1_ on publicatio0_.publicationId=publicatio1_.id
left outer join Book publicatio1_1_ on publicatio1_.id=publicatio1_1_.id
left outer join BlogPost publicatio1_2_ on publicatio1_.id=publicatio1_2_.id
where publicatio0_.authorId=?
```
The SQL query first `inner join`s the `publicationauthor` and `publication`, so that each `Publication` gets an `authorid` from the `publicationauthor` table.
Then the result (which would have Books.length + BlogPosts.length rows) is `left outer` joined with the `book` table, so that each publication that is a book gets the columns from the `book` table.
At this point the result looks something like this: the green rows are the rows that correspond to books and have the corresponding columns filled, while the gray rows are the rows that didn't map to any `id` in the `book` table and thus only have the `authorid` column and the columns from the `publication` table filled.
[![enter image description here][3]][3]
Then we do the second `left outer join` to join the result with the `blogpost` table. This is how I imagine what's going on here: a cartesian product is created, in which each row in the result table is mapped to all rows in the `blogpost` table (orange rows):
[![enter image description here][4]][4]
Then each row from the result table (green and gray) is compared by `id` with the orange row, and if there's a match, the columns from the `blogpost` row are added to the result table for the matching row.
However, as can be seen from the example, most of the result table is green rows - books, which by definition wouldn't be able to find a matching row in the `blogpost` table, yet we perform the comparison many times over regardless.
Wouldn't it be more efficient to only make the cartesian product of the gray and orange rows, like this:
[![enter image description here][5]][5]
If we assume we have a million books and only one blogpost, wouldn't it enhance performance dramatically? The way I see it, once we've joined the books, we could isolate the gray rows and join them with `blogpost` separately, and then perhaps use a kind of union operation to get the final result table?
Is there something I'm not seeing/understanding here?
[1]: https://thorben-janssen.com/complete-guide-inheritance-strategies-jpa-hibernate/
[2]: https://i.stack.imgur.com/nBX4Y.png
[3]: https://i.stack.imgur.com/PusFW.png
[4]: https://i.stack.imgur.com/DUJSe.png
[5]: https://i.stack.imgur.com/t88Oy.png |
I'm writing a bot to "archive" messages (move them but keep details of the author etc.) and threads to a specified channel, based on date(s).
I'm nearly there, but I can't see how to delete the thread that is posted under the channel name:
[enter image description here](https://i.stack.imgur.com/mPyPc.png)
I can delete the messages in the thread after I move them to another channel but I can't see how to delete that thread under the channel.
I had thought it was the "THREAD_STARTER_MESSAGE" (and it still may be), but I can't delete that, as I get a 403 error saying "Cannot execute action on a system message".
Any suggestions welcome.
BTW - for anyone interested here's what the moved/"archived" messages look like:
[enter image description here](https://i.stack.imgur.com/555wO.png) |
How do I delete the thread under the channel name in discord with python? |
|python-3.x|discord| |
null |
I had the same problem too; I'm running Flutter 3.16.5 on Windows 10.
Upgrading Flutter to 3.16.7 DIDN'T WORK,
but somehow converting the partition (where the Flutter SDK & my Flutter project live) from FAT32 to NTFS worked! |
I tried the following select:
```sql
SELECT (id,name) FROM v_groups vg
INNER JOIN people2v_groups p2vg ON vg.id = p2vg.v_group_id
WHERE p2vg.people_id =0;
```
And I get the following error: column reference `id` is ambiguous.
If I try the same `SELECT`, but I only ask for `name` and not for `id` also, it works.
Any suggestions? |
|sql|database|postgresql|select|ambiguous| |
[items]
list = 1,2,3
get using:
nums = config.get('items', 'list')
nums_list = nums.split(",")
print(nums_list)
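For completeness, here's a self-contained sketch of the same idea (reading the config from an inline string for illustration) that also converts the items to ints:

```python
from configparser import ConfigParser

# For illustration, read the [items] section from an inline string;
# config.read('settings.ini') works the same way with a file on disk.
config = ConfigParser()
config.read_string("[items]\nlist = 1,2,3\n")

nums = config.get('items', 'list')             # '1,2,3' (a single string)
nums_list = [int(n) for n in nums.split(',')]  # convert each item to int
print(nums_list)  # [1, 2, 3]
```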
Result: `['1', '2', '3']` (note the items are strings; use `[int(n) for n in nums_list]` if you need `[1, 2, 3]`). |
Based on the minimal example of stopping a QThread from @mahkitah. These are the snippets of the modified version of the code that is now able to cancel the thread (in a way).
The StartProcessWorkerThread(QThread) from the main file (let's say main.py):
class StartProcessWorkerThread(QThread):
processing_finished = pyqtSignal(dict) # Signal for determining if the thread/worker has finished
progress_updated = pyqtSignal(int) # Signal for determining if the progress bar value was updated
# Initialization
def __init__(self, input_video_filepaths, weapons_to_detect, clothings_to_detect_and_colors, username):
super().__init__()
self.input_video_filepaths = input_video_filepaths
self.weapons_to_detect = weapons_to_detect
self.clothings_to_detect_and_colors = clothings_to_detect_and_colors
self.username = username
# Callback method or function that returns the value of the instance's isInterruptionRequested() - used for cancelling QThread instance
def interrupt_thread(self):
return self.isInterruptionRequested()
# Run method that emits the resulting data after running the method run_main_driver_code()
def run(self):
table_data = self.run_main_driver_code()
self.processing_finished.emit(table_data)
# This method is run by the run() method; it runs the imported main_driver_code() from another .py file
def run_main_driver_code(self):
user_specific_table_data = {} # Initially set the variable to be returned after the process to an empty dict (can be anything actually)
# Instead of returning something, I implemented a try-except statement
# So that if I raised an exception from cwd.main_driver_code when self.interrupt_thread() == True, I can catch it here with the except keyword
try:
user_specific_table_data = cwd.main_driver_code(
input_video_filepaths=self.input_video_filepaths,
weapons_to_detect=self.weapons_to_detect,
clothings_to_detect_and_colors=self.clothings_to_detect_and_colors,
username=self.username,
progress_callback=self.progress_updated,
cancel_callback=self.interrupt_thread # Pass the interrupt_thread method or function
)
except Exception as e:
print(f"Cancel button clicked. Cancelling operation.")
return user_specific_table_data
The method or function executed when cancel button gets clicked from the main file (let's say main.py):
def cancel_detection_process(self):
self.worker.requestInterruption()
Snippet from the other file that contains long running code (let's say cwd.py):
# Snippet from cwd .py
# I made the callback function as a value in a dictionary so I called it using the key and added "()" to obtain the self.isInterruptionRequested (boolean)
if progress_data['cancel_callback']():
print(f"progress_data['cancel_callback'] = {progress_data['cancel_callback']()}") # Display to know the value of self.isInterruptionRequested (boolean)
    raise Exception # raise Exception so that it falls back instantly to the main .py
Basically, what I did was create a callback function on the QThread instance, `self.interrupt_thread()`, that returns the value of the current instance's `self.isInterruptionRequested()`, which becomes **True** when the button gets clicked due to `self.worker.requestInterruption()`. After that, I passed the callback function to the long-running process (`cwd.py` in my case), where its return value gets checked constantly. I called the callback function (with "()" to get the boolean value) and added a conditional (still in `cwd.py`) so that if it becomes **True**, I **raise an exception**. I catch it with the `except` keyword (now I'm back in `main.py`). The long-running process thus finishes early because of the raised exception, which stops the thread, in a way.
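For what it's worth, the same cancellation pattern can be sketched without Qt at all; the names below are illustrative stand-ins, not PyQt API:

```python
class CancellableWorker:
    """Stand-in for the QThread: just tracks an interruption flag."""
    def __init__(self):
        self._interrupted = False

    def request_interruption(self):
        # Plays the role of QThread.requestInterruption()
        self._interrupted = True

    def is_interruption_requested(self):
        # Plays the role of QThread.isInterruptionRequested()
        return self._interrupted


def long_running(cancel_callback, steps=100):
    """Check the callback on every iteration and bail out via an exception."""
    done = 0
    for _ in range(steps):
        if cancel_callback():
            raise RuntimeError("cancelled")
        done += 1
    return done


worker = CancellableWorker()
worker.request_interruption()  # simulate the cancel button being clicked
try:
    long_running(worker.is_interruption_requested)
except RuntimeError:
    print("Cancel requested. Cancelling operation.")
```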
I hope the approach I took, with the help of @mahkitah, is understandable for those who want to implement this too but are having a hard time. |
I started to write a Discord bot using discord.js. I also followed the guide they provide until I finished "Event handling": https://discordjs.guide/creating-your-bot/event-handling.html#reading-event-files
I wrote my first event, where I want the bot to write a welcome message in a channel when somebody joins the server.
This is my code:
welcome.js
```
const { Events } = require('discord.js');
module.exports = {
name: Events.GuildMemberAdd,
once: true,
execute(client, member) {
const channelID = '1194723491551912079';
const channel = client.channels.cache.get(channelID);
const message = `Welcome <@${member}>!`;
channel.send(message);
},
};
```
The slash commands from the guide (user, ping and server) are working, and so is the ClientReady event. After searching for some solutions, and being new to JS, I don't know what to do.
Thank you for your help in advance.
Edit: my index.js as well:
```
const fs = require('node:fs');
const path = require('node:path');
const { Client, Collection, GatewayIntentBits } = require('discord.js');
const { token } = require('./config.json');
const client = new Client({ intents: [GatewayIntentBits.Guilds] });
client.commands = new Collection();
const foldersPath = path.join(__dirname, 'commands');
const commandFolders = fs.readdirSync(foldersPath);
// command handling
for (const folder of commandFolders) {
const commandsPath = path.join(foldersPath, folder);
const commandFiles = fs.readdirSync(commandsPath).filter(file => file.endsWith('.js'));
for (const file of commandFiles) {
const filePath = path.join(commandsPath, file);
const command = require(filePath);
if ('data' in command && 'execute' in command) {
client.commands.set(command.data.name, command);
} else {
console.log(`[WARNING] The command at ${filePath} is missing a required "data" or "execute" property.`);
}
}
}
// event handling
const eventsPath = path.join(__dirname, 'events');
const eventFiles = fs.readdirSync(eventsPath).filter(file => file.endsWith('.js'));
for (const file of eventFiles) {
const filePath = path.join(eventsPath, file);
const event = require(filePath);
if (event.once) {
client.once(event.name, (...args) => event.execute(...args));
} else {
client.on(event.name, (...args) => event.execute(...args));
}
}
client.login(token);
``` |
null |
I am using MiniEdit to create topologies; the Mininet version is 2.3.1b4, installed from a git clone. Once I finish and want to export the topology, the terminal shows error messages[](https://i.stack.imgur.com/ajbl7.jpg):
Can anyone please assist me in solving this issue? Many thanks. |
MiniEdit is unable to export the topology |
|python|mininet|sdn|ryu|pox| |
null |
If you don't use Bootstrap components, like me, you can use the directive with the **noninteractive** modifier instead:
<div v-b-tooltip.noninteractive="{title: `I'm a tooltip`}">Hover me!</div>
For further reference: [tooltip directives][1]
[1]: https://bootstrap-vue.org/docs/directives/tooltip |
I am trying to post to my group via a Facebook app of the "business" type, but I am getting the error below:
Fatal error: Uncaught Facebook\Exceptions\FacebookAuthorizationException: (#200)
If posting to a group, requires app being installed in the group, and \
either publish_to_groups permission with user token, or both pages_read_engagement \
and pages_manage_posts permission with page token; If posting to a page, \
requires both pages_read_engagement and pages_manage_posts as an admin with \
sufficient administrative permission
I already ask for these permissions on login (I am using facebook-php-sdk and Facebook Graph API v2.10):
$permissions = ['email','publish_to_groups' ];
$loginUrl = $helper->getLoginUrl('mydomain/fb-callback.php', $permissions);
I go to https://developers.facebook.com/tools/debug/accesstoken and see the result for my access_token:
[![enter image description here][1]][1]
So I think the problem is that my app is not installed in the group. I have found a similar thread https://stackoverflow.com/questions/52304018/facebook-api-how-to-add-application-to-group-in-developer-mode, but that answer just says that if I am an admin of both the app and the group then it will work; however, it still returns the above error. Now I can't even find this app in my personal settings (Settings > Apps) or in my group (My groups > Settings > Apps). Can anyone help me solve this problem, with a clue or a tutorial for creating a Facebook app that posts to a group, please? Thank you!
<hr>
**Bounty**
To whoever can explain how to post to a Group (not a page) with step by step instructions and screenshots on how to set it up, first in the Graph Explorer followed by **programmatically** - *in either javascript, node, python, c#, powershell or ideally an AWS Lambda*:
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/oMgVd.png
[2]: https://i.stack.imgur.com/Ftmxl.png |
It seems that you have a missing app. Go to **Help -> Install New Software -> "select appropriate app"** in Eclipse. |
You should use a dictionary for the Parameters.
Parameters = new Dictionary<string, object>
{
{ "Name", "/my-param" },
{ "WithDecryption", true }
} |
I didn't see any examples of this, so I am wondering if it is bad practice to extend the DAG class.
Is it a bad practice, and if so, why?
Example of where I can see this useful follows...
Let's say we have a number of DAGs which all share the same behaviour: calling a specific function as the very last thing, regardless of success or failure. This function could be something like invoking some external API, for instance.
My idea to approach this would be something along these lines:
- extend the DAG class creating a new class DAGWithFinishAction
- implement on_success_callback and on_failure_callback in DAGWithFinishAction to do what I wanted to achieve
- use the new class in ```with DAGWithFinishAction(dag_id=..., ...) as dag: ...```
- schedule tasks in each of the implementing DAGs
- expect that each of those DAGs calls its success/failure callbacks after all tasks are finished (in any state)
Is there anything wrong with this approach?
I couldn't find anything similar which makes me believe I am missing something.
class DAGWithFinishAction(DAG):
def __init__(self, dag_id, **kwargs):
self.metric_callback = publish_execution_time
on_success_callback = kwargs.get("on_success_callback")
if on_success_callback is None:
on_success_callback = self.metric_callback
else:
if isinstance(on_success_callback, list):
on_success_callback.append(self.metric_callback)
else:
on_success_callback = [on_success_callback, self.metric_callback]
kwargs["on_success_callback"] = on_success_callback
super().__init__(dag_id, **kwargs)
with DAGWithFinishAction(dag_id=..., ...) as dag:
...
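Incidentally, the callback-merging part of this code can be pulled out into a plain function (the name here is hypothetical) and unit-tested without an Airflow installation:

```python
def merge_callbacks(existing, extra):
    """Normalize an on_success_callback value (None, a single callable,
    or a list of callables) so that it always includes `extra`."""
    if existing is None:
        return extra
    if isinstance(existing, list):
        return existing + [extra]
    return [existing, extra]
```

In `DAGWithFinishAction.__init__` this would collapse to `kwargs["on_success_callback"] = merge_callbacks(kwargs.get("on_success_callback"), self.metric_callback)`.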
The code above works but I am still not sure if this is something that should be avoided or is it a legitimate approach when designing DAGs. |
I am creating catchment zones based on points, and I want to know the demographics of the catchment zones. Right now I have two shapefiles: one with the Census tracts and all of the demographic data, and another with Thiessen polygons around points that create the catchment zones.
I need the average of all tracts within each catchment zone. I have tried Spatial Join, but end up with missing data. I also tried to use sf in R (no real code to show because I did not move past the import), but it does not read in the full attribute table, so the demographics aren't there.
Any thoughts are helpful! |
Averaging Census Tracts and their Demographics into Thiessen Polygons (ArcGIS, R) |