| row_id (int64, 0–48.4k) | init_message (string, lengths 1–342k) | conversation_hash (string, length 32) | scores (dict) |
|---|---|---|---|
9,439
|
An LD (lactate dehydrogenase) test came back with a value of 354. Is that significant?
|
5163fe146745ddcc561642cf228ecd6c
|
{
"intermediate": 0.3972438871860504,
"beginner": 0.24485065042972565,
"expert": 0.35790547728538513
}
|
9,440
|
How do I change the Tor circuit from the terminal?
|
a1d99f7b850aa404086bfde63242598e
|
{
"intermediate": 0.31662118434906006,
"beginner": 0.28252631425857544,
"expert": 0.4008525013923645
}
|
9,441
|
C# WPF LiveCharts update
|
5d98388ad32db2e0c1e37afacbf6554c
|
{
"intermediate": 0.4290268123149872,
"beginner": 0.3007134199142456,
"expert": 0.2702597677707672
}
|
9,442
|
// Choose a method that will return a single value
const word = cities.method((acc, currVal) => {
return acc + currVal[0]
}, "C");
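The snippet above is looking for `Array.prototype.reduce`, the array method that folds a collection into a single value. A minimal Python analogue using `functools.reduce`, with a hypothetical `cities` list standing in for the data the JS snippet assumes:

```python
from functools import reduce

# Hypothetical data standing in for the `cities` array in the snippet above.
cities = ["Cleveland", "Austin", "Denver"]

# reduce() folds the list into one value: the accumulator starts at "C" and each
# step appends the first letter of the current city, mirroring the JavaScript
# cities.reduce((acc, currVal) => acc + currVal[0], "C")
word = reduce(lambda acc, cur: acc + cur[0], cities, "C")
print(word)  # CCAD
```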
|
f1ee9d4f421a2aed439b7f040d8a9c5c
|
{
"intermediate": 0.44857704639434814,
"beginner": 0.25347045063972473,
"expert": 0.2979525327682495
}
|
9,443
|
Make the image occupy 100% of the width and 60% of the height, placed behind the text: <androidx.cardview.widget.CardView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="wrap_content"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_margin="10dp"
android:elevation="6dp"
app:cardCornerRadius="8dp">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:padding="10dp"
>
<TextView
android:id="@+id/discount"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="Товар подешевел на 30%"
android:textSize="18sp"
android:textStyle="bold" />
<ImageView
android:id="@+id/product_image"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_below="@id/discount"
android:src="@drawable/food_1" />
<TextView
android:id="@+id/product_name"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_below="@id/product_image"
android:layout_weight="2"
android:gravity="center_vertical"
android:text="Product Name"
android:textSize="22sp"
android:textStyle="bold" />
<TextView
android:id="@+id/product_description"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_below="@id/product_name"
android:layout_weight="1"
android:text="Product Description"
android:textSize="16sp" />
<TextView
android:id="@+id/old_price"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/product_description"
android:layout_alignParentStart="true"
android:layout_marginTop="10dp"
android:text="Old Price"
android:textColor="@android:color/darker_gray"
android:textSize="16sp" />
<TextView
android:id="@+id/new_price"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBaseline="@id/old_price"
android:layout_marginStart="20dp"
android:layout_toEndOf="@id/old_price"
android:text="New Price"
android:textColor="#EC0909"
android:textSize="16sp" />
<Button
android:id="@+id/goRestaurantButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/new_price"
android:layout_alignParentEnd="true"
android:layout_marginTop="5dp"
android:layout_marginEnd="9dp"
android:text="Перейти в ресторан" />
</RelativeLayout>
</androidx.cardview.widget.CardView>
|
4d419bb4df7334f08b7135c5c0249237
|
{
"intermediate": 0.36697646975517273,
"beginner": 0.40514156222343445,
"expert": 0.22788196802139282
}
|
9,444
|
consider the following lab activity:
L2: Industrial Internet of Things (IIoT) system simulation - Versions A and B
Summary
The purpose of this lab is to simulate the operation of a portion of an Industrial Internet of
Things (IIoT), that is modeled as a queuing system, and investigate its performance under
variable configurations, to understand how different scenario configurations and network
parameter settings may affect the system behavior and performance.
Consider the Industry 4.0 scenario. Each machinery component of the production line is equipped with a set of sensors and actuators, that build up an IIoT system.
In particular, a number of sensor devices collect real time monitoring data at the edge of the network, to collect several types of information about the status and the operation of the various production line components. These data can be used, either directly or after some processing, to take decisions and perform specific actions on the production line by means of actuators, in order to manage the system operation, modulate the production system speed, prevent or tackle service interruptions, etc.
In more detail, data collected by the sensors are sent to a local Micro Data Center,
which provides computational capacity close to the edge of the IIoT network by means of edge nodes, which are managed by an edge controller. Edge nodes pre-process data, so that
most requests can be fulfilled locally, whereas only computationally intensive requests are
forwarded to a Cloud Data Center, thus saving bandwidth and energy. Finally, based
on the data processing output, operative commands are generated and sent to the actuators.
In case all the edge nodes are busy, incoming data can be buffered, if a buffer is present. However, if the buffer is full or it is not envisioned at all, data packets are
directly forwarded to the Cloud Data Center to perform the entire processing tasks.
In the considered queuing system, the customers represent the IIoT data packets that arrive at that Micro Data Center. The service represents the local processing of the data by edge nodes before generating the actuation message packets or before forwarding data to the Cloud for further computational processing. Finally, the waiting line represents the buffer where packets are stored before performing the computational tasks. Packets that are sent to the Cloud are further processed by Cloud server(s) before generating the actuation message packets that are sent to the actuators. In this case the service is represented by the data processing carried out by the Cloud server(s). Even in the Cloud a buffer may be envisioned.
Two types of data can be generated by the sensor nodes and sent to the Micro Data Center,
each corresponding to a different class of task that can be performed by the actuators:
Class A - High priority tasks: these tasks are delay sensitive, hence implying high priority operations that must be performed on the production line within a strict time deadline. The packets generated by the sensors that are related to high priority tasks (type A packets) typically require a simple processing, that can be locally performed by the edge nodes.
Class B - Low priority tasks: these tasks require more complex computational operations, however the completion of these tasks is not constrained by strict time deadlines, hence
resulting in delay tolerant tasks that can be performed in a longer time period. Due to the
need for more complex computational operations, this type of packets (type B packets), after a local pre-processing at the Micro Data Center level, must be forwarded to the Cloud Data Center to complete the processing.
Denote f the fraction of packets of type B, i.e. the low priority data packets arriving at the Micro Data Center from the sensors that must be further forwarded to the Cloud Data Center after local pre-processing by the edge node(s), due to more complex required computational tasks. In this case, assume a reasonably increased average service time on
the Cloud server(s). Note that, although the required tasks are more complex, the Cloud
Servers are likely to be better performing than the edge nodes.
Furthermore, when data cannot be locally pre-processed due to busy edge nodes and full
buffer, any incoming data (either of type A or B) are forwarded to the Cloud without any
local pre-processing. Since these forwarded data are fully processed in the Cloud, they experience an average service time that reflects the fact that both the simple pre-processing tasks (for type A and type B data) and the more complex computational tasks (only for type B data) are performed in the Cloud Data Center itself.
You can include the propagation delay in your analysis, assuming a reasonable additional
delay due to the data transmission from the Micro Data Center to the Cloud Data Center
and for the transmission of the commands from Cloud Data Center to the actuators. The
other propagation delays can be neglected.
I want you to give me the code for simulating the system. The code must have the following specifications:
1. The system must record some metrics about the simulation (i. average packet delay; ii. packet loss percentage; iii. number of arrived type A and type B packets).
2. It must support single or multiple edge node servers and Cloud servers.
3. You must write the code in an understandable, not overly advanced way (comment lines as much as possible).
Tasks
First task:
assume a single server with finite buffer for both the Micro Data Center and the Cloud Data Center, with f=0.5. Focusing on the Cloud Data Center sub-system, evaluate the packet drop probability of the data packets that are forwarded to the Cloud for any reason.
1. Observe the system behavior during the warm-up transient period and identify
the transition to the steady state.
2. Try to apply a method to remove the warm-up transient in your simulations.
Second Task:
Consider now the overall system operation, including both the Micro Data Center and
the Cloud Data Center.
1. How does the size of the buffer in the Micro Data Center impact the overall
system performance?
2. Does the size of the buffer in the Cloud Data Center show a similar impact on
the system performance?
3. How do the characteristics of input data (i.e. different values of f) affect
the overall average queuing delay?
Third Task:
Now define a desired value for the maximum average queuing time of type A packets,
denoted Tq.
1. Assuming again a single-server Micro Data Center and a single-server Cloud Data
Center, replace the edge node with a progressively faster server. What is the minimum value of the average service rate that is required to reduce the queuing time of type A packets below the threshold Tq?
2. Try to increase the number of edge nodes, assuming the same fixed average service
time for each edge node: what is the minimum number of servers required to
reduce the queuing time below the threshold Tq?
Fourth Task:
Consider a multi-server system, assuming various servers both in the Micro and in the
Cloud Data Centers. Assign an operational cost to each edge node and to the Cloud
servers, with the operational cost for a Cloud server being different with respect to the
one for an edge node. Simulate the system operation over a fixed period of time.
1. Vary the value of f over time, depending on the different time periods within the considered observation window. Investigate how these configuration settings affect the overall system performance.
2. Now assume at least 4 Cloud servers. Furthermore, consider at least three different
types of Cloud servers, each featuring a different service speed and a specific operational cost (that may depend, for example, on the service speed). Test various combinations of server types and investigate the trade off between the overall cost, the queuing delays and the packet drop probability.
3. Set a value of f < 0.5 and define a
desired threshold on the maximum operational cost.
• Identify the best combination of server types that reduces the cost below
the desired threshold.
• Does this combination satisfy the constraint on the maximum queuing delay, i.e. Tq, set in Task 3 for type A packets?
4. Now, assume to install half the number of Cloud servers, keeping the same value
of f as in Task 4.(c).
• In this case, can you identify the proper configuration of server types allowing
to reduce the cost below the same desired threshold?
• Compare the obtained queueing delay and cost under these two scenarios (i.e.,
N Cloud servers versus N/2 Cloud servers), also highlighting how packets of
type A and packets of type B are differently affected in terms of delay and
packet drop probability.
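As a hedged illustration of the simulator requested above (a sketch, not the lab's official solution), here is a minimal Python model of the Micro DC → Cloud DC tandem with single servers and finite buffers. All rates, buffer sizes, and the work factors (2.0 for packets fully processed in the Cloud, 1.5 for pre-processed type B) are illustrative assumptions, not values given in the lab text:

```python
import random

def simulate(n_packets=20000, lam=1.0, mu_edge=1.5, mu_cloud=2.0,
             edge_buf=5, cloud_buf=5, f=0.5, seed=42):
    """Sketch of the tandem queue: Micro Data Center -> Cloud Data Center.

    Returns (average delay, Cloud drop percentage, per-class arrival counts).
    """
    rng = random.Random(seed)
    t = 0.0
    edge_departures = []       # departure times of packets currently at the edge
    cloud_jobs = []            # (arrival time at the Cloud, work factor)
    delays = []
    arrivals = {"A": 0, "B": 0}
    dropped = 0

    for _ in range(n_packets):
        t += rng.expovariate(lam)                  # Poisson arrivals
        ptype = "B" if rng.random() < f else "A"   # fraction f is type B
        arrivals[ptype] += 1
        # Purge edge packets that have already departed by time t.
        edge_departures = [d for d in edge_departures if d > t]
        if len(edge_departures) > edge_buf:        # server + buffer full:
            cloud_jobs.append((t, 2.0))            # forward for FULL Cloud processing
            continue
        # FIFO single server: service starts when the previous packet departs.
        start = max(t, edge_departures[-1] if edge_departures else t)
        dep = start + rng.expovariate(mu_edge)
        edge_departures.append(dep)
        if ptype == "A":
            delays.append(dep - t)                 # type A finishes at the edge
        else:
            cloud_jobs.append((dep, 1.5))          # pre-processed type B: extra work

    # Cloud Data Center: single server, finite buffer, FIFO.
    cloud_departures = []
    for ta, work in sorted(cloud_jobs):
        cloud_departures = [d for d in cloud_departures if d > ta]
        if len(cloud_departures) > cloud_buf:
            dropped += 1                           # Cloud buffer full: packet lost
            continue
        start = max(ta, cloud_departures[-1] if cloud_departures else ta)
        dep = start + work * rng.expovariate(mu_cloud)
        cloud_departures.append(dep)
        delays.append(dep - ta)

    loss_pct = 100.0 * dropped / max(1, len(cloud_jobs))
    avg_delay = sum(delays) / max(1, len(delays))
    return avg_delay, loss_pct, arrivals
```

Calling `simulate()` returns the three requested metrics; warm-up removal (first task) could be approximated by discarding the first portion of the recorded delays before averaging.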
|
aeb70b88e96b3ec1bdcf099704dbd69e
|
{
"intermediate": 0.413165807723999,
"beginner": 0.3523298501968384,
"expert": 0.23450438678264618
}
|
9,445
|
Make the image occupy 100% of the width and 60% of the height, placed behind the text, and remember that this code is for Android Studio: <androidx.cardview.widget.CardView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="wrap_content"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_margin="10dp"
android:elevation="6dp"
app:cardCornerRadius="8dp">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:padding="10dp"
>
<TextView
android:id="@+id/discount"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="Товар подешевел на 30%"
android:textSize="18sp"
android:textStyle="bold" />
<ImageView
android:id="@+id/product_image"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_below="@id/discount"
android:src="@drawable/food_1" />
<TextView
android:id="@+id/product_name"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_below="@id/product_image"
android:layout_weight="2"
android:gravity="center_vertical"
android:text="Product Name"
android:textSize="22sp"
android:textStyle="bold" />
<TextView
android:id="@+id/product_description"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_below="@id/product_name"
android:layout_weight="1"
android:text="Product Description"
android:textSize="16sp" />
<TextView
android:id="@+id/old_price"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/product_description"
android:layout_alignParentStart="true"
android:layout_marginTop="10dp"
android:text="Old Price"
android:textColor="@android:color/darker_gray"
android:textSize="16sp" />
<TextView
android:id="@+id/new_price"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBaseline="@id/old_price"
android:layout_marginStart="20dp"
android:layout_toEndOf="@id/old_price"
android:text="New Price"
android:textColor="#EC0909"
android:textSize="16sp" />
<Button
android:id="@+id/goRestaurantButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/new_price"
android:layout_alignParentEnd="true"
android:layout_marginTop="5dp"
android:layout_marginEnd="9dp"
android:text="Перейти в ресторан" />
</RelativeLayout>
</androidx.cardview.widget.CardView>
|
7be183e72dfa9b5568f08e11e781a8b3
|
{
"intermediate": 0.3192265033721924,
"beginner": 0.39036330580711365,
"expert": 0.2904101610183716
}
|
9,446
|
consider the following lab activity:
L2: Industrial Internet of Things (IIoT) system simulation - Versions A and B
Summary
The purpose of this lab is to simulate the operation of a portion of an Industrial Internet of
Things (IIoT), that is modeled as a queuing system, and investigate its performance under
variable configurations, to understand how different scenario configurations and network
parameter settings may affect the system behavior and performance.
Consider the Industry 4.0 scenario. Each machinery component of the
production line is equipped with a set of sensors and actuators, that build up an IIoT system.
In particular, a number of sensor devices collect real time monitoring data at the edge of the
network, to collect several types of information about the status and the operation of the
various production line components. These data can be used, either directly or after some
processing, to take decisions and perform specific actions on the production line by means of
actuators, in order to manage the system operation, modulate the production system speed,
prevent or tackle service interruptions, etc.
In more detail, data collected by the sensors are sent to a local Micro Data Center,
which provides computational capacity close to the edge of the IIoT network by means of edge nodes,
which are managed by an edge controller. Edge nodes pre-process data, so that
most requests can be fulfilled locally, whereas only computationally intensive requests are
forwarded to a Cloud Data Center, thus saving bandwidth and energy. Finally, based
on the data processing output, operative commands are generated and sent to the actuators.
In case all the edge nodes are busy, incoming data can be buffered, if a
buffer is present. However, if the buffer is full or it is not envisioned at all, data packets are
directly forwarded to the Cloud Data Center to perform the entire processing tasks.
In the considered queuing system, the customers represent the IIoT data packets that arrive
at the Micro Data Center. The service represents the local processing of the data by edge
nodes before generating the actuation message packets or before forwarding data to the
Cloud for further computational processing. Finally, the waiting line represents the buffer
where packets are stored before performing the computational tasks. Packets that are sent to
the Cloud are further processed by Cloud server(s) before generating the actuation message
packets that are sent to the actuators. In this case the service is represented by the data
processing carried out by the Cloud server(s). Even in the Cloud a buffer may be envisioned.
Two types of data can be generated by the sensor nodes and sent to the Micro Data Center,
each corresponding to a different class of task that can be performed by the actuators:
Class A - High priority tasks: these tasks are delay sensitive, hence implying high
priority operations that must be performed on the production line within a strict time
deadline. The packets generated by the sensors that are related to high priority tasks (type
A packets) typically require a simple processing, that can be locally performed by the edge
nodes.
Class B - Low priority tasks: these tasks require more complex computational operations, however the completion of these tasks is not constrained by strict time deadlines, hence
resulting in delay tolerant tasks that can be performed in a longer time period. Due to the
need for more complex computational operations, this type of packets (type B packets), after
a local pre-processing at the Micro Data Center level, must be forwarded to the Cloud Data
Center to complete the processing.
Denote f the fraction of packets of type B, i.e. the low priority data packets arriving
at the Micro Data Center from the sensors that must be further forwarded to the Cloud
Data Center after local pre-processing by the edge node(s), due to more complex required
computational tasks. In this case, assume a reasonably increased average service time on
the Cloud server(s). Note that, although the required tasks are more complex, the Cloud
Servers are likely to be better performing than the edge nodes.
Furthermore, when data cannot be locally pre-processed due to busy edge nodes and full
buffer, any incoming data (either of type A or B) are forwarded to the Cloud without any
local pre-processing. Since these forwarded data are fully processed in the Cloud, they experience an average service time that reflects the fact that both the simple pre-processing tasks
(for type A and type B data) and the more complex computational tasks (only for type B
data) are performed in the Cloud Data Center itself.
You can include the propagation delay in your analysis, assuming a reasonable additional
delay due to the data transmission from the Micro Data Center to the Cloud Data Center
and for the transmission of the commands from Cloud Data Center to the actuators. The
other propagation delays can be neglected.
I want you to give me the code for simulating the system. The code must have the following specifications:
1. The system must record some metrics about the simulation (1. average packet delay; 2. packet loss percentage; 3. number of arrived type A and type B packets).
2. It must support single or multiple edge node servers and Cloud servers.
First task: assume a single server with finite buffer for both the Micro Data
Center and the Cloud Data Center, with f=0.5. Focusing on the Cloud Data
Center sub-system, evaluate the packet drop probability of the data packets that are
forwarded to the Cloud for any reason.
(a) Observe the system behavior during the warm-up transient period and identify
the transition to the steady state.
(b) Try to apply a method to remove the warm-up transient in your simulations.
2. Consider now the overall system operation, including both the Micro Data Center and
the Cloud Data Center.
(a) How does the size of the buffer in the Micro Data Center impact the overall
system performance?
(b) Does the size of the buffer in the Cloud Data Center show a similar impact on
the system performance?
(c) How do the characteristics of input data (i.e. different values of f) affect
the overall average queuing delay?
3. Now define a desired value for the maximum average queuing time of type A packets,
denoted Tq.
(a) Assuming again a single-server Micro Data Center and a single-server Cloud Data
Center, replace the edge node with a progressively faster server. What is the
minimum value of the average service rate that is required to reduce the queuing
time of type A packets below the threshold Tq?
(b) Try to increase the number of edge nodes, assuming the same fixed average service
time for each edge node: what is the minimum number of servers required to
reduce the queuing time below the threshold Tq?
4. Consider a multi-server system, assuming various servers both in the Micro and in the
Cloud Data Centers. Assign an operational cost to each edge node and to the Cloud
servers, with the operational cost for a Cloud server being different with respect to the
one for an edge node. Simulate the system operation over a fixed period of time.
(a) Vary the value of f over time,
depending on the different time periods within the considered observation window.
Investigate how these configuration settings affect the overall system performance.
(b) Now assume at least 4 Cloud servers. Furthermore, consider at least three different
types of Cloud servers, each featuring a different service speed and a specific
operational cost (that may depend, for example, on the service speed). Test
various combinations of server types and investigate the trade off between the
overall cost, the queuing delays and the packet drop probability.
(c) Set a value of f < 0.5 and define a
desired threshold on the maximum operational cost.
• Identify the best combination of server types that reduces the cost below
the desired threshold.
• Does this combination satisfy the constraint on the maximum queuing delay, i.e. Tq, set in Task 3 for type A packets?
(d) Now, assume to install half the number of Cloud servers, keeping the same value
of f as in Task 4.(c).
• In this case, can you identify the proper configuration of server types allowing
to reduce the cost below the same desired threshold?
• Compare the obtained queueing delay and cost under these two scenarios (i.e.,
N Cloud servers versus N/2 Cloud servers), also highlighting how packets of
type A and packets of type B are differently affected in terms of delay and
packet drop probability.
|
cb764da56ca8b78aac52c70d69063f23
|
{
"intermediate": 0.3983285129070282,
"beginner": 0.33473101258277893,
"expert": 0.26694050431251526
}
|
9,447
|
Fix: package com.example.myapp_2.Data.Discount_Get_table;
import android.os.Bundle;
import android.os.Handler;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.view.animation.Animation;
import android.view.animation.AnimationUtils;
import androidx.fragment.app.Fragment;
import androidx.recyclerview.widget.LinearLayoutManager;
import androidx.recyclerview.widget.RecyclerView;
import androidx.viewpager.widget.ViewPager;
import com.example.myapp_2.Data.List_1.Product;
import com.example.myapp_2.R;
import java.util.ArrayList;
import java.util.List;
public class StocksFragment extends Fragment {
ViewPager viewPager;
StocksAdapter adapter;
ArrayList<StocksModel> stocksModelArrayList;
ProductAdapter1 productAdapter;
List<Product> products;
private List<Product> getProducts() {
List<Product> products = new ArrayList<>();
Product product1 = new Product("Product 1", "Description 1", R.drawable.food_3_1jpg, 0.0f);
products.add(product1);
Product product2 = new Product("Product 2", "Description 2", R.drawable.food_1_1, 0.0f);
products.add(product2);
Product product3 = new Product("Product 3", "Description 3", R.drawable.food_1_4, 0.0f);
products.add(product3);
Product product4 = new Product("Product 4", "Description 4", R.drawable.food_1_1, 0.0f);
products.add(product4);
Product product5 = new Product("Product 5", "Description 5", R.drawable.food_1_1, 0.0f);
products.add(product5);
Product product6 = new Product("Product 6", "Description 6", R.drawable.food_1_1,0.0f);
products.add(product6);
Product product7 = new Product("Product 7", "Description 7", R.drawable.food_3_2,0.0f);
products.add(product7);
Product product8 = new Product("Product 8", "Description 8", R.drawable.food_3_3,0.0f);
products.add(product8);
Product product9 = new Product("Product 9", "Description 9", R.drawable.food_1_1,0.0f);
products.add(product9);
Product product10 = new Product("Product 10", "Description 10", R.drawable.food_1_1,0.0f);
products.add(product10);
return products;
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View view = inflater.inflate(R.layout.fragment_stocks, container, false);
Animation anim = AnimationUtils.loadAnimation(getActivity(), R.anim.fragment_transition_animation);
anim.setDuration(200);
view.startAnimation(anim);
RecyclerView recyclerView = view.findViewById(R.id.recyclerView);
stocksModelArrayList = new ArrayList<>();
stocksModelArrayList.add(new StocksModel("Title1", "Description1", R.drawable.food_3));
stocksModelArrayList.add(new StocksModel("Title2", "Description2", R.drawable.food_4));
stocksModelArrayList.add(new StocksModel("Title2", "Description2", R.drawable.food_1_2));
viewPager = view.findViewById(R.id.viewPager);
adapter = new StocksAdapter(getContext(), stocksModelArrayList);
viewPager.setAdapter(adapter);
final Handler handler = new Handler();
final Runnable runnable = new Runnable() {
@Override
public void run() {
int currentItem = viewPager.getCurrentItem();
int totalItems = viewPager.getAdapter().getCount();
int nextItem = currentItem == totalItems - 1 ? 0 : currentItem + 1;
viewPager.setCurrentItem(nextItem);
handler.postDelayed(this, 3000);
}
};
handler.postDelayed(runnable, 3000);
products = getProducts();
productAdapter = new ProductAdapter1(getContext(), products);
recyclerView.setLayoutManager(new LinearLayoutManager(getContext()));
recyclerView.setAdapter(productAdapter);
return view;
}
}
package com.example.myapp_2.Data.Discount_Get_table;
import android.annotation.SuppressLint;
import android.content.Context;
import android.graphics.Paint;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import androidx.recyclerview.widget.RecyclerView;
import com.example.myapp_2.Data.List_1.Product;
import com.example.myapp_2.R;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
public class ProductAdapter1 extends RecyclerView.Adapter<ProductAdapter1.ProductViewHolder> {
private List<Product> products;
private Context context;
public ProductAdapter1(Context context, List<Product> products) {
this.context = context;
this.products = products;
}
@Override
public ProductViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
View view = LayoutInflater.from(context).inflate(R.layout.item_product3, parent, false);
return new ProductViewHolder(view);
}
@SuppressLint("StringFormatMatches")
@Override
public void onBindViewHolder(ProductViewHolder holder, int position) {
Product product = products.get(position);
holder.productImage.setImageResource(product.getImageResource());
holder.productName.setText(product.getName());
holder.productDescription.setText(product.getDescription());
double oldPrice = product.getPrice() * (1 + product.getDiscount());
double newPrice = oldPrice * (1 - product.getDiscount());
holder.oldPriceTextView.setText(context.getString(R.string.old_price, String.format("%.2f", oldPrice)));
holder.newPriceTextView.setText(context.getString(R.string.price, String.format("%.2f", newPrice)));
holder.oldPriceTextView.setPaintFlags(Paint.STRIKE_THRU_TEXT_FLAG);
double discount = product.getDiscount() * 100;
holder.discountTextView.setText(context.getString(R.string.discount, String.format("%.0f", discount)));
}
@Override
public int getItemCount() {
return products.size();
}
public List<Product> getProducts() {
List<Product> products = new ArrayList<>();
Product product1 = new Product("Product 1", "Description 1", R.drawable.food_1, 0.3f);
products.add(product1);
Product product2 = new Product("Product 2", "Description 2", R.drawable.food_2, 0.2f);
products.add(product2);
Product product3 = new Product("Product 3", "Description 3", R.drawable.food_1_3, 0.1f);
products.add(product3);
Product product4 = new Product("Product 4", "Description 4", R.drawable.food_1_2, 0.4f);
products.add(product4);
Product product5 = new Product("Product 5", "Description 5", R.drawable.food_3_4, 0.5f);
products.add(product5);
return products;
}
public void sortProductsByPrice() {
Collections.sort(products, new Comparator<Product>() {
@Override
public int compare(Product o1, Product o2) {
return Double.compare(o1.getPrice(), o2.getPrice());
}
});
notifyDataSetChanged();
}
public class ProductViewHolder extends RecyclerView.ViewHolder {
public Button goToRestaurantButton;
ImageView productImage;
TextView productName;
TextView productDescription;
TextView oldPriceTextView;
TextView newPriceTextView;
TextView discountTextView;
public ProductViewHolder(View view) {
super(view);
productImage = view.findViewById(R.id.product_image);
productName = view.findViewById(R.id.product_name);
productDescription = view.findViewById(R.id.product_description);
discountTextView = view.findViewById(R.id.discount);
goToRestaurantButton = itemView.findViewById(R.id.goRestaurantButton);
oldPriceTextView = itemView.findViewById(R.id.old_price);
newPriceTextView = itemView.findViewById(R.id.new_price);
}
}
}
package com.example.myapp_2.Data.Discount_Get_table;
import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ImageView;
import android.widget.TextView;
import androidx.annotation.NonNull;
import androidx.viewpager.widget.PagerAdapter;
import com.example.myapp_2.R;
import java.util.ArrayList;
public class StocksAdapter extends PagerAdapter{
private Context context;
private ArrayList<StocksModel> stocksModelArrayList;
private LayoutInflater layoutInflater;
public StocksAdapter(Context context, ArrayList<StocksModel> stocksModelArrayList) {
this.context = context;
this.stocksModelArrayList = stocksModelArrayList;
}
@Override
public int getCount() {
return stocksModelArrayList.size();
}
@Override
public boolean isViewFromObject(@NonNull View view, @NonNull Object object) {
return view.equals(object);
}
@NonNull
@Override
public Object instantiateItem(@NonNull ViewGroup container, int position) {
layoutInflater = LayoutInflater.from(context);
View view = layoutInflater.inflate(R.layout.stock_item, container, false);
ImageView imageView;
TextView title, description;
imageView = view.findViewById(R.id.img);
imageView.setImageResource(stocksModelArrayList.get(position).getImage());
container.addView(view, 0);
return view;
}
@Override
public void destroyItem(@NonNull ViewGroup container, int position, @NonNull Object object) {
container.removeView((View) object);
}
}
|
79df5fd71199961405ae8bfc5d05da21
|
{
"intermediate": 0.2972301244735718,
"beginner": 0.41725999116897583,
"expert": 0.28550994396209717
}
|
9,448
|
consider the following lab activity:
L2: Industrial Internet of Things (IIoT) system simulation - Versions A and B
Summary
The purpose of this lab is to simulate the operation of a portion of an Industrial Internet of
Things (IIoT), that is modeled as a queuing system, and investigate its performance under
variable configurations, to understand how different scenario configurations and network
parameter settings may affect the system behavior and performance.
Consider the Industry 4.0 scenario. Each machinery component of the production line is equipped with a set of sensors and actuators, that build up an IIoT system.
In particular, a number of sensor devices collect real time monitoring data at the edge of the network, to collect several types of information about the status and the operation of the various production line components. These data can be used, either directly or after some processing, to take decisions and perform specific actions on the production line by means of actuators, in order to manage the system operation, modulate the production system speed, prevent or tackle service interruptions, etc.
In more detail, data collected by the sensors are sent to a local Micro Data Center,
which provides computational capacity close to the edge of the IIoT network by means of edge nodes managed by an edge controller. Edge nodes pre-process data, so that
most requests can be fulfilled locally, whereas only computationally intensive requests are
forwarded to a Cloud Data Center, thus saving bandwidth and energy. Finally, based
on the data processing output, operative commands are generated and sent to the actuators.
In case all the edge nodes are busy, incoming data can be buffered, if a buffer is present. However, if the buffer is full or it is not envisioned at all, data packets are
directly forwarded to the Cloud Data Center to perform the entire processing tasks.
In the considered queuing system, the customers represent the IIoT data packets that arrive at that Micro Data Center. The service represents the local processing of the data by edge nodes before generating the actuation message packets or before forwarding data to the Cloud for further computational processing. Finally, the waiting line represents the buffer where packets are stored before performing the computational tasks. Packets that are sent to the Cloud are further processed by Cloud server(s) before generating the actuation message packets that are sent to the actuators. In this case the service is represented by the data processing carried out by the Cloud server(s). Even in the Cloud a buffer may be envisioned.
Two types of data can be generated by the sensor nodes and sent to the Micro Data Center,
each corresponding to a different class of task that can be performed by the actuators:
Class A - High priority tasks: these tasks are delay sensitive, hence implying high priority operations that must be performed on the production line within a strict time deadline. The packets generated by the sensors that are related to high priority tasks (type A packets) typically require a simple processing, that can be locally performed by the edge nodes.
Class B - Low priority tasks: these tasks require more complex computational operations, however the completion of these tasks is not constrained by strict time deadlines, hence
resulting in delay tolerant tasks that can be performed in a longer time period. Due to the
need for more complex computational operations, this type of packets (type B packets), after a local pre-processing at the Micro Data Center level, must be forwarded to the Cloud Data Center to complete the processing.
Denote f the fraction of packets of type B, i.e. the low priority data packets arriving at the Micro Data Center from the sensors that must be further forwarded to the Cloud Data Center after local pre-processing by the edge node(s), due to more complex required computational tasks. In this case, assume a reasonably increased average service time on
the Cloud server(s). Note that, although the required tasks are more complex, the Cloud
Servers are likely to be better performing than the edge nodes.
Furthermore, when data cannot be locally pre-processed due to busy edge nodes and full
buffer, any incoming data (either of type A or B) are forwarded to the Cloud without any
local pre-processing. Since these forwarded data are fully processed in the Cloud, they experience an average service time that reflects the fact that both the simple pre-processing tasks (for type A and type B data) and the more complex computational tasks (only for type B data) are performed in the Cloud Data Center itself.
You can include the propagation delay in your analysis, assuming a reasonable additional
delay due to the data transmission from the Micro Data Center to the Cloud Data Center
and for the transmission of the commands from Cloud Data Center to the actuators. The
other propagation delays can be neglected.
Tasks
First task:
Assume a single server with a finite buffer for both the Micro Data Center and the Cloud Data Center, with f=0.5. Focusing on the Cloud Data Center sub-system, evaluate the packet drop probability of those data packets that are forwarded to the Cloud for any reason.
1. Observe the system behavior during the warm-up transient period and identify
the transition to the steady state.
2. Try to apply a method to remove the warm-up transient in your simulations.
Second Task:
Consider now the overall system operation, including both the Micro Data Center and
the Cloud Data Center.
1. How does the size of the buffer in the micro Data Center impact on the overall
system performance?
2. Does the size of the buffer in the Cloud Data Center show a similar impact on
the system performance?
3. How do the characteristics of input data (i.e. different values of f) affect
the overall average queuing delay?
Third Task:
Now define a desired value for the maximum average queuing time of type A packets,
denoted Tq.
1. Assuming again a single-server Micro Data Center and a single-server Cloud Data
Center, replace the edge node with a progressively faster server. What is the minimum value of the average service rate that is required to reduce the queuing time of type A packets below the threshold Tq?
2. Try to increase the number of edge nodes, assuming the same fixed average service
time for each edge node: what is the minimum number of servers required to
reduce the queuing time below the threshold Tq?
Fourth Task:
Consider a multi-server system, assuming various servers both in the Micro and in the
Cloud Data Centers. Assign an operational cost to each edge node and to the Cloud
servers, with the operational cost for a Cloud server being different with respect to the
one for an edge node. Simulate the system operation over a fixed period of time.
1. Vary the value of f over time, depending on the different time periods within the considered observation window. Investigate how these configuration settings affect the overall system performance.
2. Now assume at least 4 Cloud servers. Furthermore, consider at least three different
types of Cloud servers, each featuring a different service speed and a specific operational cost (that may depend, for example, on the service speed). Test various combinations of server types and investigate the trade off between the overall cost, the queuing delays and the packet drop probability.
3. Set a value of f < 0.5 and define a
desired threshold on the maximum operational cost.
• Identify the best combination of server types allowing to reduce the cost below
the desired threshold.
• Does this combination allow to respect the constraint on the maximum queuing delay, i.e. Tq, set in Task 3 for type A packets?
4. Now, assume to install half the number of Cloud servers, keeping the same value
of f as in Task 4.(c).
• In this case, can you identify the proper configuration of server types allowing
to reduce the cost below the same desired threshold?
• Compare the obtained queueing delay and cost under these two scenarios (i.e.,
N Cloud servers versus N/2 Cloud servers), also highlighting how packets of
type A and packets of type B are differently affected in terms of delay and
packet drop probability.
I want you to give me the code for simulating the system. The code must have the following specifications:
1. The system must record some metrics about the simulation (i. average packet delay, ii. packet loss in percent, iii. number of arrived type A and type B packets).
2. It must support either single or multiple edge-node servers and Cloud servers.
3. You must write the code in an understandable way and not make it too advanced (try to comment the lines as much as possible).
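As a starting point, the two-stage queue described above can be sketched as a small event-driven simulation in pure Python (no external simulation library). All numeric parameters here — arrival rate, service rates, buffer sizes, and the factor-of-two slowdown for packets fully processed in the Cloud — are illustrative assumptions, not values taken from the lab text:

```python
import heapq
import random
from itertools import count

def simulate(sim_time=5_000.0, lam=1.0, mu_edge=1.5, mu_cloud=2.0,
             f=0.5, edge_buf=5, cloud_buf=5, seed=1):
    """Single-server edge + single-server cloud, each with a finite buffer.
    Type B packets (probability f) need the Cloud after edge pre-processing;
    packets finding the edge full skip straight to the Cloud, where full
    processing is modeled as half the Cloud service rate (an assumption)."""
    rng = random.Random(seed)
    seq = count()                       # tie-breaker for the event heap
    ev = [(rng.expovariate(lam), next(seq), "arrival", None)]
    edge_q, cloud_q = [], []            # waiting packets
    edge_busy = cloud_busy = False
    stats = {"A": 0, "B": 0, "dropped": 0, "done": 0, "delay_sum": 0.0}

    def send_to_cloud(now, t0, full):
        nonlocal cloud_busy
        rate = mu_cloud / 2.0 if full else mu_cloud
        if not cloud_busy:
            cloud_busy = True
            heapq.heappush(ev, (now + rng.expovariate(rate), next(seq),
                                "cloud_done", t0))
        elif len(cloud_q) < cloud_buf:
            cloud_q.append((t0, full))
        else:
            stats["dropped"] += 1       # cloud buffer overflow -> packet lost

    while ev:
        now, _, kind, data = heapq.heappop(ev)
        if now > sim_time:
            break
        if kind == "arrival":
            heapq.heappush(ev, (now + rng.expovariate(lam), next(seq),
                                "arrival", None))
            ptype = "B" if rng.random() < f else "A"
            stats[ptype] += 1
            if not edge_busy:
                edge_busy = True
                heapq.heappush(ev, (now + rng.expovariate(mu_edge), next(seq),
                                    "edge_done", (now, ptype)))
            elif len(edge_q) < edge_buf:
                edge_q.append((now, ptype))
            else:                       # edge full -> Cloud handles everything
                send_to_cloud(now, now, full=True)
        elif kind == "edge_done":
            t0, ptype = data
            if ptype == "A":            # type A finishes locally
                stats["done"] += 1
                stats["delay_sum"] += now - t0
            else:                       # type B continues in the Cloud
                send_to_cloud(now, t0, full=False)
            if edge_q:
                nt0, nptype = edge_q.pop(0)
                heapq.heappush(ev, (now + rng.expovariate(mu_edge), next(seq),
                                    "edge_done", (nt0, nptype)))
            else:
                edge_busy = False
        else:                           # "cloud_done"
            stats["done"] += 1
            stats["delay_sum"] += now - data
            if cloud_q:
                nt0, full = cloud_q.pop(0)
                rate = mu_cloud / 2.0 if full else mu_cloud
                heapq.heappush(ev, (now + rng.expovariate(rate), next(seq),
                                    "cloud_done", nt0))
            else:
                cloud_busy = False

    stats["avg_delay"] = stats["delay_sum"] / max(stats["done"], 1)
    return stats
```

The returned dict covers the three requested metrics (per-type arrival counts, drops, and average end-to-end delay); multi-server variants would replace the two boolean busy flags with counters of free servers.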
|
673b869242e252f09b7c99793948c08f
|
{
"intermediate": 0.4306161403656006,
"beginner": 0.38012567162513733,
"expert": 0.18925820291042328
}
|
9,449
|
Write me a Python function to compute the Kelly Criterion
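For reference, the single-bet Kelly criterion is f* = (bp - q)/b, where p is the win probability, q = 1 - p, and b is the net odds received on a win. A minimal sketch:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Optimal fraction of bankroll to wager on a bet won with
    probability p at net odds b (win b per 1 staked).
    A negative result means the bet has negative edge: don't bet."""
    if not 0.0 <= p <= 1.0 or b <= 0.0:
        raise ValueError("need 0 <= p <= 1 and b > 0")
    q = 1.0 - p
    return (b * p - q) / b
```

For example, a 60% win probability at even odds (b = 1) gives a stake of 20% of the bankroll.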
|
6321e8fde2d582842103a85b24772ec6
|
{
"intermediate": 0.30392882227897644,
"beginner": 0.19332528114318848,
"expert": 0.5027458667755127
}
|
9,450
|
внеси изменения в xm , чтобы StocksFragment заработал корректно : <androidx.cardview.widget.CardView xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:id="@+id/product_card"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_margin="10dp"
android:elevation="6dp"
app:cardCornerRadius="8dp">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="wrap_content">
<ImageView
android:id="@+id/product_image"
android:layout_width="match_parent"
android:layout_height="250dp"
android:scaleType="centerCrop"
android:src="@drawable/food_1" />
<TextView
android:id="@+id/discount"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:background="@color/white"
android:layout_alignParentTop="true"
android:layout_alignParentStart="true"
android:padding="5dp"
android:text="Товар подешевел на 30%"
android:textSize="18sp"
android:textStyle="bold" />
<TextView
android:id="@+id/new_price"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentEnd="true"
android:layout_alignParentTop="true"
android:layout_marginEnd="10dp"
android:layout_marginTop="10dp"
android:text="New Price"
android:textColor="#EC0909"
android:textSize="22sp"
android:textStyle="bold" />
<TextView
android:id="@+id/product_name"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_below="@id/product_image"
android:layout_marginTop="10dp"
android:gravity="center"
android:padding="10dp"
android:text="Product Name"
android:textSize="22sp"
android:textStyle="bold" />
<RelativeLayout
android:id="@+id/price_layout"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_below="@id/product_name"
android:background="@color/white">
<TextView
android:id="@+id/old_price"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentStart="true"
android:layout_marginStart="10dp"
android:layout_marginTop="10dp"
android:text="Old Price"
android:textColor="@android:color/darker_gray"
android:textSize="16sp" />
<TextView
android:id="@+id/price_separator"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBaseline="@id/old_price"
android:layout_centerHorizontal="true"
android:text="|"
android:textColor="@android:color/darker_gray"
android:textSize="16sp" />
<TextView
android:id="@+id/percentage_off"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBaseline="@id/old_price"
android:layout_alignParentEnd="true"
android:layout_marginEnd="10dp"
android:text="-30%"
android:textColor="#EC0909"
android:textSize="16sp" />
</RelativeLayout>
<Button
android:id="@+id/goRestaurantButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/price_layout"
android:layout_alignParentEnd="true"
android:layout_marginTop="10dp"
android:layout_marginEnd="10dp"
android:text="Перейти в ресторан" />
</RelativeLayout>
</androidx.cardview.widget.CardView>package com.example.myapp_2.Data.Discount_Get_table;
import android.os.Bundle;
import android.os.Handler;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.view.animation.Animation;
import android.view.animation.AnimationUtils;
import androidx.fragment.app.Fragment;
import androidx.recyclerview.widget.LinearLayoutManager;
import androidx.recyclerview.widget.RecyclerView;
import androidx.viewpager.widget.ViewPager;
import com.example.myapp_2.Data.Discount_Get_table.ProductAdapter1;
import com.example.myapp_2.Data.Discount_Get_table.StocksAdapter;
import com.example.myapp_2.Data.Discount_Get_table.StocksModel;
import com.example.myapp_2.Data.List_1.Product;
import com.example.myapp_2.R;
import java.util.ArrayList;
import java.util.List;
public class StocksFragment extends Fragment {
ViewPager viewPager;
StocksAdapter adapter;
ArrayList<StocksModel> stocksModelArrayList;
ProductAdapter1 productAdapter;
List<Product> products;
private List<Product> getProducts() {
List<Product> products = new ArrayList<>();
Product product1 = new Product("Product 1", "Description 1", R.drawable.food_3_1jpg, 0.0f);
products.add(product1);
Product product2 = new Product("Product 2", "Description 2", R.drawable.food_1_1, 0.0f);
products.add(product2);
Product product3 = new Product("Product 3", "Description 3", R.drawable.food_1_4, 0.0f);
products.add(product3);
Product product4 = new Product("Product 4", "Description 4", R.drawable.food_1_1, 0.0f);
products.add(product4);
Product product5 = new Product("Product 5", "Description 5", R.drawable.food_1_1, 0.0f);
products.add(product5);
Product product6 = new Product("Product 6", "Description 6", R.drawable.food_1_1,0.0f);
products.add(product6);
Product product7 = new Product("Product 7", "Description 7", R.drawable.food_3_2,0.0f);
products.add(product7);
Product product8 = new Product("Product 8", "Description 8", R.drawable.food_3_3,0.0f);
products.add(product8);
Product product9 = new Product("Product 9", "Description 9", R.drawable.food_1_1,0.0f);
products.add(product9);
Product product10 = new Product("Product 10", "Description 10", R.drawable.food_1_1,0.0f);
products.add(product10);
return products;
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View view = inflater.inflate(R.layout.fragment_stocks, container, false);
Animation anim = AnimationUtils.loadAnimation(getActivity(), R.anim.fragment_transition_animation);
anim.setDuration(200);
view.startAnimation(anim);
RecyclerView recyclerView = view.findViewById(R.id.recyclerView);
stocksModelArrayList = new ArrayList<>();
stocksModelArrayList.add(new StocksModel("Title1", "Description1", R.drawable.food_3));
stocksModelArrayList.add(new StocksModel("Title2", "Description2", R.drawable.food_4));
stocksModelArrayList.add(new StocksModel("Title2", "Description2", R.drawable.food_1_2));
viewPager = view.findViewById(R.id.viewPager);
adapter = new StocksAdapter(getContext(), stocksModelArrayList);
viewPager.setAdapter(adapter);
final Handler handler = new Handler();
final Runnable runnable = new Runnable() {
@Override
public void run() {
int currentItem = viewPager.getCurrentItem();
int totalItems = viewPager.getAdapter().getCount();
int nextItem = currentItem == totalItems - 1 ? 0 : currentItem + 1;
viewPager.setCurrentItem(nextItem);
handler.postDelayed(this, 3000);
}
};
handler.postDelayed(runnable, 3000);
products = getProducts();
productAdapter = new ProductAdapter1(getContext(), products);
recyclerView.setLayoutManager(new LinearLayoutManager(getContext()));
recyclerView.setAdapter(productAdapter);
return view;
}
}
|
995eed5bc6125337f2be004465c0496c
|
{
"intermediate": 0.4045945405960083,
"beginner": 0.3445087671279907,
"expert": 0.250896692276001
}
|
9,451
|
Do I have to make two package.json files for frontend and backend?
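Typically yes, one per app: keep `frontend/` and `backend/` each with their own package.json. Optionally, a root package.json can tie them together with npm workspaces (npm 7+); the names below are illustrative:

```json
{
  "name": "my-project-root",
  "private": true,
  "workspaces": ["frontend", "backend"]
}
```

With this root file, running `npm install` once at the repository root installs both apps' dependencies; without workspaces, you simply run `npm install` separately in each folder.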
|
45cf8d0276d911de8187b0402f7b6b4f
|
{
"intermediate": 0.46341684460639954,
"beginner": 0.2432236224412918,
"expert": 0.29335954785346985
}
|
9,452
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from pathlib import Path
from threading import Lock
from collections import defaultdict
import shutil
import argparse
import uuid
import zlib
from bottle import route, run, request, error, response, HTTPError, static_file
from werkzeug.utils import secure_filename
storage_path: Path = Path(__file__).parent / "storage"
chunk_path: Path = Path(__file__).parent / "chunk"
allow_downloads = True
dropzone_cdn = "https://cdnjs.cloudflare.com/ajax/libs/dropzone"
dropzone_version = "5.7.6"
dropzone_timeout = "120000"
dropzone_max_file_size = "100000"
dropzone_chunk_size = "1000000"
dropzone_parallel_chunks = "true"
dropzone_force_chunking = "true"
lock = Lock()
chucks = defaultdict(list)
@error(500)
def handle_500(error_message):
response.status = 500
response.body = f"Error: {error_message}"
return response
@route("/")
def index():
index_file = Path(__file__).parent / "index.html"
if index_file.exists():
return index_file.read_text()
return f"""
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<link rel="stylesheet" href="{dropzone_cdn.rstrip('/')}/{dropzone_version}/min/dropzone.min.css"/>
<link rel="stylesheet" href="{dropzone_cdn.rstrip('/')}/{dropzone_version}/min/basic.min.css"/>
<script type="application/javascript"
src="{dropzone_cdn.rstrip('/')}/{dropzone_version}/min/dropzone.min.js">
</script>
<title>pyfiledrop</title>
</head>
<body>
<div id="content" style="width: 800px; margin: 0 auto;">
<h2>Upload new files</h2>
<form method="POST" action='/upload' class="dropzone dz-clickable" id="dropper" enctype="multipart/form-data">
</form>
<h2>
Uploaded
<input type="button" value="Clear" onclick="clearCookies()" />
</h2>
<div id="uploaded">
</div>
<script type="application/javascript">
function clearCookies() {{
document.cookie = "files=; Max-Age=0";
document.getElementById("uploaded").innerHTML = "";
}}
function getFilesFromCookie() {{
try {{ return document.cookie.split("=", 2)[1].split("||");}} catch (error) {{ return []; }}
}}
function saveCookie(new_file) {{
let all_files = getFilesFromCookie();
all_files.push(new_file);
document.cookie = `files=${{all_files.join("||")}}`;
}}
function generateLink(combo){{
const uuid = combo.split('|^^|')[0];
const name = combo.split('|^^|')[1];
if ({'true' if allow_downloads else 'false'}) {{
return `<a href="/download/${{uuid}}" download="${{name}}">${{name}}</a>`;
}}
return name;
}}
function init() {{
Dropzone.options.dropper = {{
paramName: 'file',
chunking: true,
forceChunking: {dropzone_force_chunking},
url: '/upload',
retryChunks: true,
parallelChunkUploads: {dropzone_parallel_chunks},
timeout: {dropzone_timeout}, // milliseconds
maxFilesize: {dropzone_max_file_size}, // megabytes
chunkSize: {dropzone_chunk_size}, // bytes
init: function () {{
this.on("complete", function (file) {{
let combo = `${{file.upload.uuid}}|^^|${{file.upload.filename}}`;
saveCookie(combo);
document.getElementById("uploaded").innerHTML += generateLink(combo) + "<br />";
}});
}}
}}
if (typeof document.cookie !== 'undefined' ) {{
let content = "";
getFilesFromCookie().forEach(function (combo) {{
content += generateLink(combo) + "<br />";
}});
document.getElementById("uploaded").innerHTML = content;
}}
}}
init();
</script>
</div>
</body>
</html>
"""
@route("/favicon.ico")
def favicon():
return zlib.decompress(
b"x\x9c\xedVYN\xc40\x0c5J%[\xe2\xa3|q\x06\x8e1G\xe1(=ZoV\xb2\xa7\x89\x97R\x8d\x84\x04\xe4\xa5\xcb(\xc9\xb3\x1do"
b"\x1d\x80\x17?\x1e\x0f\xf0O\x82\xcfw\x00\x7f\xc1\x87\xbf\xfd\x14l\x90\xe6#\xde@\xc1\x966n[z\x85\x11\xa6\xfcc"
b"\xdfw?s\xc4\x0b\x8e#\xbd\xc2\x08S\xe1111\xf1k\xb1NL\xfcU<\x99\xe4T\xf8\xf43|\xaa\x18\xf8\xc3\xbaHFw\xaaj\x94"
b"\xf4c[F\xc6\xee\xbb\xc2\xc0\x17\xf6\xf4\x12\x160\xf9\xa3\xfeQB5\xab@\xf4\x1f\xa55r\xf9\xa4KGG\xee\x16\xdd\xff"
b"\x8e\x9d\x8by\xc4\xe4\x17\tU\xbdDg\xf1\xeb\xf0Zh\x8e\xd3s\x9c\xab\xc3P\n<e\xcb$\x05 b\xd8\x84Q1\x8a\xd6Kt\xe6"
b"\x85(\x13\xe5\xf3]j\xcf\x06\x88\xe6K\x02\x84\x18\x90\xc5\xa7Kz\xd4\x11\xeeEZK\x012\xe9\xab\xa5\xbf\xb3@i\x00"
b"\xce\xe47\x0b\xb4\xfe\xb1d\xffk\xebh\xd3\xa3\xfd\xa4:`5J\xa3\xf1\xf5\xf4\xcf\x02tz\x8c_\xd2\xa1\xee\xe1\xad"
b"\xaa\xb7n-\xe5\xafoSQ\x14'\x01\xb7\x9b<\x15~\x0e\xf4b\x8a\x90k\x8c\xdaO\xfb\x18<H\x9d\xdfj\xab\xd0\xb43\xe1"
b'\xe3nt\x16\xdf\r\xe6\xa1d\xad\xd0\xc9z\x03"\xc7c\x94v\xb6I\xe1\x8f\xf5,\xaa2\x93}\x90\xe0\x94\x1d\xd2\xfcY~f'
b"\xab\r\xc1\xc8\xc4\xe4\x1f\xed\x03\x1e`\xd6\x02\xda\xc7k\x16\x1a\xf4\xcb2Q\x05\xa0\xe6\xb4\x1e\xa4\x84\xc6"
b"\xcc..`8'\x9a\xc9-\n\xa8\x05]?\xa3\xdfn\x11-\xcc\x0b\xb4\x7f67:\x0c\xcf\xd5\xbb\xfd\x89\x9ebG\xf8:\x8bG"
b"\xc0\xfb\x9dm\xe2\xdf\x80g\xea\xc4\xc45\xbe\x00\x03\xe9\xd6\xbb"
)
@route("/upload", method="POST")
def upload():
file = request.files.get("file")
if not file:
raise HTTPError(status=400, body="No file provided")
dz_uuid = request.forms.get("dzuuid")
if not dz_uuid:
# Assume this file has not been chunked
with open(storage_path / f"{uuid.uuid4()}_{secure_filename(file.filename)}", "wb") as f:
file.save(f)
return "File Saved"
# Chunked upload
try:
current_chunk = int(request.forms["dzchunkindex"])
total_chunks = int(request.forms["dztotalchunkcount"])
except KeyError as err:
raise HTTPError(status=400, body=f"Not all required fields supplied, missing {err}")
except ValueError:
raise HTTPError(status=400, body="Values provided were not in expected format")
save_dir = chunk_path / dz_uuid
if not save_dir.exists():
save_dir.mkdir(exist_ok=True, parents=True)
# Save the individual chunk
with open(save_dir / str(request.forms["dzchunkindex"]), "wb") as f:
file.save(f)
# See if we have all the chunks uploaded
with lock:
chucks[dz_uuid].append(current_chunk)
completed = len(chucks[dz_uuid]) == total_chunks
# Concatenate the chunks into the final file once all have been uploaded
if completed:
with open(storage_path / f"{dz_uuid}_{secure_filename(file.filename)}", "wb") as f:
for file_number in range(total_chunks):
f.write((save_dir / str(file_number)).read_bytes())
print(f"{file.filename} has been uploaded")
shutil.rmtree(save_dir)
return "Chunk upload successful"
@route("/download/<dz_uuid>")
def download(dz_uuid):
if not allow_downloads:
raise HTTPError(status=403)
for file in storage_path.iterdir():
if file.is_file() and file.name.startswith(dz_uuid):
return static_file(file.name, root=file.parent.absolute(), download=True)
return HTTPError(status=404)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--port", type=int, default=16273, required=False)
parser.add_argument("--host", type=str, default="0.0.0.0", required=False)
parser.add_argument("-s", "--storage", type=str, default=str(storage_path), required=False)
parser.add_argument("-c", "--chunks", type=str, default=str(chunk_path), required=False)
parser.add_argument(
"--max-size",
type=str,
default=dropzone_max_file_size,
help="Max file size (Mb)",
)
parser.add_argument(
"--timeout",
type=str,
default=dropzone_timeout,
help="Timeout (ms) for each chuck upload",
)
parser.add_argument("--chunk-size", type=str, default=dropzone_chunk_size, help="Chunk size (bytes)")
parser.add_argument("--disable-parallel-chunks", required=False, default=False, action="store_true")
parser.add_argument("--disable-force-chunking", required=False, default=False, action="store_true")
parser.add_argument("-a", "--allow-downloads", required=False, default=False, action="store_true")
parser.add_argument("--dz-cdn", type=str, default=None, required=False)
parser.add_argument("--dz-version", type=str, default=None, required=False)
return parser.parse_args()
if __name__ == "__main__":
args = parse_args()
storage_path = Path(args.storage)
chunk_path = Path(args.chunks)
dropzone_chunk_size = args.chunk_size
dropzone_timeout = args.timeout
dropzone_max_file_size = args.max_size
try:
if int(dropzone_timeout) < 1 or int(dropzone_chunk_size) < 1 or int(dropzone_max_file_size) < 1:
raise Exception("Invalid dropzone option, make sure max-size, timeout, and chunk-size are all positive")
except ValueError:
raise Exception("Invalid dropzone option, make sure max-size, timeout, and chunk-size are all integers")
if args.dz_cdn:
dropzone_cdn = args.dz_cdn
if args.dz_version:
dropzone_version = args.dz_version
if args.disable_parallel_chunks:
dropzone_parallel_chunks = "false"
if args.disable_force_chunking:
dropzone_force_chunking = "false"
if args.allow_downloads:
allow_downloads = True
if not storage_path.exists():
storage_path.mkdir(exist_ok=True)
if not chunk_path.exists():
chunk_path.mkdir(exist_ok=True)
print(
f"""Timeout: {int(dropzone_timeout) // 1000} seconds per chunk
Chunk Size: {int(dropzone_chunk_size) // 1024} Kb
Max File Size: {int(dropzone_max_file_size)} Mb
Force Chunking: {dropzone_force_chunking}
Parallel Chunks: {dropzone_parallel_chunks}
Storage Path: {storage_path.absolute()}
Chunk Path: {chunk_path.absolute()}
"""
)
run(server="paste", port=args.port, host=args.host)
write a function which, when triggered, will clear all of the files in the storage folder
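One possible helper for this request (a sketch: it reuses `shutil` and takes the script's `storage_path` as an argument; exposing it over HTTP, and any route name such as `/clear`, would be assumptions, not part of the original script):

```python
import shutil
from pathlib import Path

def clear_storage(path: Path) -> int:
    """Remove every file and sub-directory inside the given storage
    folder, returning the number of top-level entries removed.
    In the script above, call it as clear_storage(storage_path)."""
    removed = 0
    for entry in path.iterdir():
        if entry.is_file():
            entry.unlink()           # delete a regular file
        else:
            shutil.rmtree(entry)     # delete a leftover directory tree
        removed += 1
    return removed
```

The folder itself is kept, only its contents are deleted, so subsequent uploads keep working without re-creating `storage_path`.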
|
d614dd71c443ebf1fd4fb4f96f3d324e
|
{
"intermediate": 0.3916360139846802,
"beginner": 0.4710584580898285,
"expert": 0.13730549812316895
}
|
9,453
|
consider the following lab activity:
L2: Industrial Internet of Things (IIoT) system simulation - Versions A and B
Summary
The purpose of this lab is to simulate the operation of a portion of an Industrial Internet of
Things (IIoT), that is modeled as a queuing system, and investigate its performance under
variable configurations, to understand how different scenario configurations and network
parameter settings may affect the system behavior and performance.
Consider the Industry 4.0 scenario. Each machinery component of the production line is equipped with a set of sensors and actuators, that build up an IIoT system.
In particular, a number of sensor devices collect real time monitoring data at the edge of the network, to collect several types of information about the status and the operation of the various production line components. These data can be used, either directly or after some processing, to take decisions and perform specific actions on the production line by means of actuators, in order to manage the system operation, modulate the production system speed, prevent or tackle service interruptions, etc.
In more detail, data collected by the sensors are sent to a local Micro Data Center,
which provides computational capacity close to the edge of the IIoT network by means of edge nodes managed by an edge controller. Edge nodes pre-process data, so that
most requests can be fulfilled locally, whereas only computationally intensive requests are
forwarded to a Cloud Data Center, thus saving bandwidth and energy. Finally, based
on the data processing output, operative commands are generated and sent to the actuators.
In case all the edge nodes are busy, incoming data can be buffered, if a buffer is present. However, if the buffer is full or it is not envisioned at all, data packets are
directly forwarded to the Cloud Data Center to perform the entire processing tasks.
In the considered queuing system, the customers represent the IIoT data packets that arrive at that Micro Data Center. The service represents the local processing of the data by edge nodes before generating the actuation message packets or before forwarding data to the Cloud for further computational processing. Finally, the waiting line represents the buffer where packets are stored before performing the computational tasks. Packets that are sent to the Cloud are further processed by Cloud server(s) before generating the actuation message packets that are sent to the actuators. In this case the service is represented by the data processing carried out by the Cloud server(s). Even in the Cloud a buffer may be envisioned.
Two types of data can be generated by the sensor nodes and sent to the Micro Data Center,
each corresponding to a different class of task that can be performed by the actuators:
Class A - High priority tasks: these tasks are delay sensitive, hence implying high priority operations that must be performed on the production line within a strict time deadline. The packets generated by the sensors that are related to high priority tasks (type A packets) typically require a simple processing, that can be locally performed by the edge nodes.
Class B - Low priority tasks: these tasks require more complex computational operations, however the completion of these tasks is not constrained by strict time deadlines, hence
resulting in delay tolerant tasks that can be performed in a longer time period. Due to the
need for more complex computational operations, this type of packets (type B packets), after a local pre-processing at the Micro Data Center level, must be forwarded to the Cloud Data Center to complete the processing.
Denote f the fraction of packets of type B, i.e. the low priority data packets arriving at the Micro Data Center from the sensors that must be further forwarded to the Cloud Data Center after local pre-processing by the edge node(s), due to more complex required computational tasks. In this case, assume a reasonably increased average service time on
the Cloud server(s). Note that, although the required tasks are more complex, the Cloud
Servers are likely to be better performing than the edge nodes.
Furthermore, when data cannot be locally pre-processed due to busy edge nodes and full
buffer, any incoming data (either of type A or B) are forwarded to the Cloud without any
local pre-processing. Since these forwarded data are fully processed in the Cloud, they experience an average service time that reflects the fact that both the simple pre-processing tasks (for type A and type B data) and the more complex computational tasks (only for type B data) are performed in the Cloud Data Center itself.
You can include the propagation delay in your analysis, assuming a reasonable additional
delay due to the data transmission from the Micro Data Center to the Cloud Data Center
and for the transmission of the commands from Cloud Data Center to the actuators. The
other propagation delays can be neglected.
Now, I want you to give me the code for simulating the system to perform the task below. The code must contain plotting. You must write the code in an understandable way and not make it too advanced (try to comment the lines as much as possible).
Task
Consider a scenario with a single server and limited storage capacity for both the Micro Data Center and the Cloud Data Center, with f=0.5. Let's examine the Cloud Data Center sub-system and its probability of dropping data packets that are sent to the Cloud for any given reason.
1. Observe the system behavior (probability of dropping data packets that are sent to the Cloud) during the warm-up transient period and identify
the transition to the steady state.
2. Try to apply a method to remove the warm-up transient in your simulations.
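One simple way to implement point 2 is a crude variant of Welch's procedure: record the running drop-probability estimate over time, then cut the series at the first point where a sliding-window mean is close to the long-run mean of the second half. The window size and tolerance below are arbitrary assumptions:

```python
def truncate_warmup(samples, window=50, tol=0.05):
    """Return the index at which to truncate the warm-up transient:
    the first point where the mean over a sliding window is within
    a relative tolerance of the mean of the second half of the run
    (used as a steady-state proxy). Falls back to 0."""
    n = len(samples)
    tail_mean = sum(samples[n // 2:]) / (n - n // 2)
    for i in range(n - window):
        w_mean = sum(samples[i:i + window]) / window
        if tail_mean and abs(w_mean - tail_mean) / abs(tail_mean) <= tol:
            return i
    return 0
```

After computing the cutoff, all statistics (drop probability, delays) are re-estimated on `samples[idx:]` only, so the transient no longer biases the steady-state estimates.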
|
f8f6b1083a6032b5aa31b9dbf6274ce7
|
{
"intermediate": 0.38784635066986084,
"beginner": 0.35840561985969543,
"expert": 0.2537480294704437
}
|
9,454
|
Help me write an image classifier that detects if an image is the anime character "Astolfo"
Write the code for me and explain where I put each file and how to name it
|
26dc728d30a7d95fecaf788ac7f2ed77
|
{
"intermediate": 0.32069292664527893,
"beginner": 0.08730126917362213,
"expert": 0.5920057892799377
}
|
9,455
|
Let Γ1, Γ2, Γ3 ⊆ LV and φ, ψ, ξ ∈ LV, for some PL LV. Prove the following
properties of logical implication.
a) If Γ1 |= φ and Γ2 |= ψ, then Γ1 ∪ Γ2 |= φ ∧ ψ.
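A proof sketch of property (a) in the usual semantic style; the valuation symbol v is the only notation introduced here:

```latex
\textbf{Claim.} If $\Gamma_1 \models \varphi$ and $\Gamma_2 \models \psi$,
then $\Gamma_1 \cup \Gamma_2 \models \varphi \wedge \psi$.

\textbf{Proof.} Let $v$ be any valuation satisfying every formula in
$\Gamma_1 \cup \Gamma_2$. In particular, $v$ satisfies every formula in
$\Gamma_1$, so from $\Gamma_1 \models \varphi$ we get $v(\varphi) = 1$.
Likewise, $v$ satisfies every formula in $\Gamma_2$, so $v(\psi) = 1$.
By the truth table of $\wedge$, $v(\varphi \wedge \psi) = 1$.
Since $v$ was an arbitrary model of $\Gamma_1 \cup \Gamma_2$, it follows
that $\Gamma_1 \cup \Gamma_2 \models \varphi \wedge \psi$. \qed
```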
|
41e74bf5aa6fa09b19ce994bffe18f78
|
{
"intermediate": 0.3279067575931549,
"beginner": 0.37622350454330444,
"expert": 0.29586973786354065
}
|
9,456
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from pathlib import Path
from threading import Lock
from collections import defaultdict
import shutil
import argparse
import uuid
import zlib
from bottle import route, run, request, error, response, HTTPError, static_file
from werkzeug.utils import secure_filename
storage_path: Path = Path(__file__).parent / "storage"
chunk_path: Path = Path(__file__).parent / "chunk"
allow_downloads = True
dropzone_cdn = "https://cdnjs.cloudflare.com/ajax/libs/dropzone"
dropzone_version = "5.7.6"
dropzone_timeout = "120000"
dropzone_max_file_size = "100000"
dropzone_chunk_size = "1000000"
dropzone_parallel_chunks = "true"
dropzone_force_chunking = "true"
lock = Lock()
chucks = defaultdict(list)
@error(500)
def handle_500(error_message):
response.status = 500
response.body = f"Error: {error_message}"
return response
@route("/")
def index():
index_file = Path(__file__).parent / "index.html"
if index_file.exists():
return index_file.read_text()
return f"""
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<link rel="stylesheet" href="{dropzone_cdn.rstrip('/')}/{dropzone_version}/min/dropzone.min.css"/>
<link rel="stylesheet" href="{dropzone_cdn.rstrip('/')}/{dropzone_version}/min/basic.min.css"/>
<script type="application/javascript"
src="{dropzone_cdn.rstrip('/')}/{dropzone_version}/min/dropzone.min.js">
</script>
<title>pyfiledrop</title>
</head>
<body>
<div id="content" style="width: 800px; margin: 0 auto;">
<h2>Upload new files</h2>
<form method="POST" action='/upload' class="dropzone dz-clickable" id="dropper" enctype="multipart/form-data">
</form>
<h2>
Uploaded
<input type="button" value="Clear" onclick="clearCookies()" />
</h2>
<div id="uploaded">
</div>
<script type="application/javascript">
function clearCookies() {{
document.cookie = "files=; Max-Age=0";
document.getElementById("uploaded").innerHTML = "";
}}
function getFilesFromCookie() {{
try {{ return document.cookie.split("=", 2)[1].split("||");}} catch (error) {{ return []; }}
}}
function saveCookie(new_file) {{
let all_files = getFilesFromCookie();
all_files.push(new_file);
document.cookie = `files=${{all_files.join("||")}}`;
}}
function generateLink(combo){{
const uuid = combo.split('|^^|')[0];
const name = combo.split('|^^|')[1];
if ({'true' if allow_downloads else 'false'}) {{
return `<a href="/download/${{uuid}}" download="${{name}}">${{name}}</a>`;
}}
return name;
}}
function init() {{
Dropzone.options.dropper = {{
paramName: 'file',
chunking: true,
forceChunking: {dropzone_force_chunking},
url: '/upload',
retryChunks: true,
parallelChunkUploads: {dropzone_parallel_chunks},
timeout: {dropzone_timeout}, // milliseconds
maxFilesize: {dropzone_max_file_size}, // megabytes
chunkSize: {dropzone_chunk_size}, // bytes
init: function () {{
this.on("complete", function (file) {{
let combo = `${{file.upload.uuid}}|^^|${{file.upload.filename}}`;
saveCookie(combo);
document.getElementById("uploaded").innerHTML += generateLink(combo) + "<br />";
}});
}}
}}
if (typeof document.cookie !== 'undefined' ) {{
let content = "";
getFilesFromCookie().forEach(function (combo) {{
content += generateLink(combo) + "<br />";
}});
document.getElementById("uploaded").innerHTML = content;
}}
}}
init();
</script>
</div>
</body>
</html>
"""
@route("/favicon.ico")
def favicon():
return zlib.decompress(
b"x\x9c\xedVYN\xc40\x0c5J%[\xe2\xa3|q\x06\x8e1G\xe1(=ZoV\xb2\xa7\x89\x97R\x8d\x84\x04\xe4\xa5\xcb(\xc9\xb3\x1do"
b"\x1d\x80\x17?\x1e\x0f\xf0O\x82\xcfw\x00\x7f\xc1\x87\xbf\xfd\x14l\x90\xe6#\xde@\xc1\x966n[z\x85\x11\xa6\xfcc"
b"\xdfw?s\xc4\x0b\x8e#\xbd\xc2\x08S\xe1111\xf1k\xb1NL\xfcU<\x99\xe4T\xf8\xf43|\xaa\x18\xf8\xc3\xbaHFw\xaaj\x94"
b"\xf4c[F\xc6\xee\xbb\xc2\xc0\x17\xf6\xf4\x12\x160\xf9\xa3\xfeQB5\xab@\xf4\x1f\xa55r\xf9\xa4KGG\xee\x16\xdd\xff"
b"\x8e\x9d\x8by\xc4\xe4\x17\tU\xbdDg\xf1\xeb\xf0Zh\x8e\xd3s\x9c\xab\xc3P\n<e\xcb$\x05 b\xd8\x84Q1\x8a\xd6Kt\xe6"
b"\x85(\x13\xe5\xf3]j\xcf\x06\x88\xe6K\x02\x84\x18\x90\xc5\xa7Kz\xd4\x11\xeeEZK\x012\xe9\xab\xa5\xbf\xb3@i\x00"
b"\xce\xe47\x0b\xb4\xfe\xb1d\xffk\xebh\xd3\xa3\xfd\xa4:`5J\xa3\xf1\xf5\xf4\xcf\x02tz\x8c_\xd2\xa1\xee\xe1\xad"
b"\xaa\xb7n-\xe5\xafoSQ\x14'\x01\xb7\x9b<\x15~\x0e\xf4b\x8a\x90k\x8c\xdaO\xfb\x18<H\x9d\xdfj\xab\xd0\xb43\xe1"
b'\xe3nt\x16\xdf\r\xe6\xa1d\xad\xd0\xc9z\x03"\xc7c\x94v\xb6I\xe1\x8f\xf5,\xaa2\x93}\x90\xe0\x94\x1d\xd2\xfcY~f'
b"\xab\r\xc1\xc8\xc4\xe4\x1f\xed\x03\x1e`\xd6\x02\xda\xc7k\x16\x1a\xf4\xcb2Q\x05\xa0\xe6\xb4\x1e\xa4\x84\xc6"
b"\xcc..`8'\x9a\xc9-\n\xa8\x05]?\xa3\xdfn\x11-\xcc\x0b\xb4\x7f67:\x0c\xcf\xd5\xbb\xfd\x89\x9ebG\xf8:\x8bG"
b"\xc0\xfb\x9dm\xe2\xdf\x80g\xea\xc4\xc45\xbe\x00\x03\xe9\xd6\xbb"
)
@route("/upload", method="POST")
def upload():
file = request.files.get("file")
if not file:
raise HTTPError(status=400, body="No file provided")
dz_uuid = request.forms.get("dzuuid")
if not dz_uuid:
# Assume this file has not been chunked
with open(storage_path / f"{uuid.uuid4()}_{secure_filename(file.filename)}", "wb") as f:
file.save(f)
return "File Saved"
# Chunked upload
try:
current_chunk = int(request.forms["dzchunkindex"])
total_chunks = int(request.forms["dztotalchunkcount"])
except KeyError as err:
raise HTTPError(status=400, body=f"Not all required fields supplied, missing {err}")
except ValueError:
raise HTTPError(status=400, body=f"Values provided were not in expected format")
save_dir = chunk_path / dz_uuid
if not save_dir.exists():
save_dir.mkdir(exist_ok=True, parents=True)
# Save the individual chunk
with open(save_dir / str(request.forms["dzchunkindex"]), "wb") as f:
file.save(f)
# See if we have all the chunks uploaded
with lock:
chucks[dz_uuid].append(current_chunk)
completed = len(chucks[dz_uuid]) == total_chunks
# Concat all the files into the final file when all are uploaded
if completed:
with open(storage_path / f"{dz_uuid}_{secure_filename(file.filename)}", "wb") as f:
for file_number in range(total_chunks):
f.write((save_dir / str(file_number)).read_bytes())
print(f"{file.filename} has been uploaded")
shutil.rmtree(save_dir)
return "Chunk upload successful"
@route("/download/<dz_uuid>")
def download(dz_uuid):
if not allow_downloads:
raise HTTPError(status=403)
for file in storage_path.iterdir():
if file.is_file() and file.name.startswith(dz_uuid):
return static_file(file.name, root=file.parent.absolute(), download=True)
return HTTPError(status=404)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--port", type=int, default=16273, required=False)
parser.add_argument("--host", type=str, default="0.0.0.0", required=False)
parser.add_argument("-s", "--storage", type=str, default=str(storage_path), required=False)
parser.add_argument("-c", "--chunks", type=str, default=str(chunk_path), required=False)
parser.add_argument(
"--max-size",
type=str,
default=dropzone_max_file_size,
help="Max file size (Mb)",
)
parser.add_argument(
"--timeout",
type=str,
default=dropzone_timeout,
help="Timeout (ms) for each chuck upload",
)
parser.add_argument("--chunk-size", type=str, default=dropzone_chunk_size, help="Chunk size (bytes)")
parser.add_argument("--disable-parallel-chunks", required=False, default=False, action="store_true")
parser.add_argument("--disable-force-chunking", required=False, default=False, action="store_true")
parser.add_argument("-a", "--allow-downloads", required=False, default=False, action="store_true")
parser.add_argument("--dz-cdn", type=str, default=None, required=False)
parser.add_argument("--dz-version", type=str, default=None, required=False)
return parser.parse_args()
if __name__ == "__main__":
args = parse_args()
storage_path = Path(args.storage)
chunk_path = Path(args.chunks)
dropzone_chunk_size = args.chunk_size
dropzone_timeout = args.timeout
dropzone_max_file_size = args.max_size
try:
if int(dropzone_timeout) < 1 or int(dropzone_chunk_size) < 1 or int(dropzone_max_file_size) < 1:
raise Exception("Invalid dropzone option, make sure max-size, timeout, and chunk-size are all positive")
except ValueError:
raise Exception("Invalid dropzone option, make sure max-size, timeout, and chunk-size are all integers")
if args.dz_cdn:
dropzone_cdn = args.dz_cdn
if args.dz_version:
dropzone_version = args.dz_version
if args.disable_parallel_chunks:
dropzone_parallel_chunks = "false"
if args.disable_force_chunking:
dropzone_force_chunking = "false"
if args.allow_downloads:
allow_downloads = True
if not storage_path.exists():
storage_path.mkdir(exist_ok=True)
if not chunk_path.exists():
chunk_path.mkdir(exist_ok=True)
print(
f"""Timeout: {int(dropzone_timeout) // 1000} seconds per chunk
Chunk Size: {int(dropzone_chunk_size) // 1024} Kb
Max File Size: {int(dropzone_max_file_size)} Mb
Force Chunking: {dropzone_force_chunking}
Parallel Chunks: {dropzone_parallel_chunks}
Storage Path: {storage_path.absolute()}
Chunk Path: {chunk_path.absolute()}
"""
)
run(server="paste", port=args.port, host=args.host)
Write a function which, when triggered, deletes all the files uploaded by the user.
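A minimal sketch of the deletion logic, deliberately independent of the bottle routes above: given the uuids the user holds (in the real app they are tracked in the "files" cookie), remove every stored file whose name starts with one of them. The function name and the way the uuids are passed in are assumptions; wiring it to a `@route` handler is left to the app.

```python
from pathlib import Path

def delete_user_files(storage_dir, user_uuids):
    """Delete every file in `storage_dir` whose name starts with one of the
    given upload uuids (hypothetically read from the user's 'files' cookie).
    Returns the number of files removed."""
    storage = Path(storage_dir)
    removed = 0
    for file in storage.iterdir():
        if file.is_file() and any(file.name.startswith(u) for u in user_uuids):
            file.unlink()       # actually delete the file from disk
            removed += 1
    return removed
```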
|
1486bc07cb831cbb5e58a9b48ee19671
|
{
"intermediate": 0.3916360139846802,
"beginner": 0.4710584580898285,
"expert": 0.13730549812316895
}
|
9,457
|
I'm working on a flask app for uploading converting and then download files again
app.py
import os
from pathlib import Path
from flask import Flask, render_template, request
from werkzeug.utils import secure_filename
app = Flask(__name__)
@app.get("/")
def index():
return render_template("index.html")
@app.post("/upload")
def upload_chunk():
file = request.files["file"]
file_uuid = request.form["dzuuid"]
# Generate a unique filename to avoid overwriting using 8 chars of uuid before filename.
filename = f"{file_uuid[:8]}_{secure_filename(file.filename)}"
save_path = Path("static", "img", filename)
current_chunk = int(request.form["dzchunkindex"])
try:
with open(save_path, "ab") as f:
f.seek(int(request.form["dzchunkbyteoffset"]))
f.write(file.stream.read())
except OSError:
return "Error saving file.", 500
total_chunks = int(request.form["dztotalchunkcount"])
if current_chunk + 1 == total_chunks:
# This was the last chunk, the file should be complete and the size we expect
if os.path.getsize(save_path) != int(request.form["dztotalfilesize"]):
return "Size mismatch.", 500
return "Chunk upload successful.", 200
if __name__ == "__main__":
app.run(debug=True)
index.html
<html lang="en">
<head>
<meta charset="UTF-8">
<script src="https://unpkg.com/dropzone@5/dist/min/dropzone.min.js"></script>
<link rel="stylesheet" href="https://unpkg.com/dropzone@5/dist/min/dropzone.min.css" type="text/css" />
<title>File Dropper</title>
</head>
<body>
<form
method="POST"
action="/upload"
class="dropzone dz-clickable"
id="dropper"
enctype="multipart/form-data"
>
</form>
<script type="application/javascript">
Dropzone.options.dropper = {
paramName: "file",
chunking: true,
forceChunking: true,
url: "/upload",
maxFilesize: 1025, // megabytes
chunkSize: 1000000 // bytes
}
</script>
</body>
</html>
Add a queue system so only one person can upload a file and the next upload won't start until the first person presses a button
Also add the ability to download the files that you uploaded.
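One way to sketch the "one uploader at a time" idea, independent of Flask: a tiny gate object where the first client to acquire it may upload, anyone else is told to wait, and the current holder must release it (the button press) before the next upload can start. The class and method names are illustrative; the `/upload` handler would call `acquire()` per request and a button endpoint would call `release()`.

```python
import threading

class UploadGate:
    """Admits one uploader at a time; the holder must call release()
    (the 'button press') before the next client may start uploading."""
    def __init__(self):
        self._lock = threading.Lock()   # protects _holder across requests
        self._holder = None

    def acquire(self, client_id):
        with self._lock:
            if self._holder is None or self._holder == client_id:
                self._holder = client_id
                return True     # this client may upload now
            return False        # someone else is uploading; wait

    def release(self, client_id):
        with self._lock:
            if self._holder == client_id:
                self._holder = None
                return True
            return False        # only the holder may release the gate
```

Letting the current holder re-acquire matters here, because a chunked upload arrives as many separate requests from the same client.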
|
2b775946e5d2f61c39848499297a205f
|
{
"intermediate": 0.4477033019065857,
"beginner": 0.351375550031662,
"expert": 0.20092111825942993
}
|
9,458
|
I'm working on a flask app for uploading converting and then download files again
app.py
import os
from pathlib import Path
from flask import Flask, render_template, request
from werkzeug.utils import secure_filename
app = Flask(__name__)
@app.get("/")
def index():
return render_template("index.html")
@app.post("/upload")
def upload_chunk():
file = request.files["file"]
file_uuid = request.form["dzuuid"]
# Generate a unique filename to avoid overwriting using 8 chars of uuid before filename.
filename = f"{file_uuid[:8]}_{secure_filename(file.filename)}"
save_path = Path("static", "img", filename)
current_chunk = int(request.form["dzchunkindex"])
try:
with open(save_path, "ab") as f:
f.seek(int(request.form["dzchunkbyteoffset"]))
f.write(file.stream.read())
except OSError:
return "Error saving file.", 500
total_chunks = int(request.form["dztotalchunkcount"])
if current_chunk + 1 == total_chunks:
# This was the last chunk, the file should be complete and the size we expect
if os.path.getsize(save_path) != int(request.form["dztotalfilesize"]):
return "Size mismatch.", 500
return "Chunk upload successful.", 200
if __name__ == "__main__":
app.run(debug=True)
index.html
<html lang="en">
<head>
<meta charset="UTF-8">
<script src="https://unpkg.com/dropzone@5/dist/min/dropzone.min.js"></script>
<link rel="stylesheet" href="https://unpkg.com/dropzone@5/dist/min/dropzone.min.css" type="text/css" />
<title>File Dropper</title>
</head>
<body>
<form
method="POST"
action="/upload"
class="dropzone dz-clickable"
id="dropper"
enctype="multipart/form-data"
>
</form>
<script type="application/javascript">
Dropzone.options.dropper = {
paramName: "file",
chunking: true,
forceChunking: true,
url: "/upload",
maxFilesize: 1025, // megabytes
chunkSize: 1000000 // bytes
}
</script>
</body>
</html>
Add a queue system so only one person can upload a file and the next upload won't start until the first person presses a button
Also add the ability to download the files that you uploaded.
|
b09c7a8b79f728c5e12dc7bc48e2184c
|
{
"intermediate": 0.4477033019065857,
"beginner": 0.351375550031662,
"expert": 0.20092111825942993
}
|
9,459
|
ts-node .\index.tsx: how do I make it automatically watch for file changes?
|
c2d204c15874a9d18a039a25b318bcf9
|
{
"intermediate": 0.4114812910556793,
"beginner": 0.22015021741390228,
"expert": 0.3683684468269348
}
|
9,460
|
Compile the program using the GCC compiler.
Run the program from the command line.
Compile the program without optimization and measure its execution time. Compile the optimized program and compare its execution time.
Compile the program with optimization for size and compare the sizes
Source program: queue.h:
#ifndef QUEUE_H
#define QUEUE_H
class Element {
public:
int priority;
int value;
};
class queue {
private:
Element *elements;
int size;
int capacity;
int highestpriority();
public:
queue();
~queue();
void enqueue(int priority, int value);
Element wthdrw(int index);
Element unqueue();
bool isclear();
int getSize();
};
#endif // QUEUE_H
queue.cpp:
#include "queue.h"
int queue::highestpriority() {
int highest = 0;
for (int i = 1; i < size; i++) {
if (elements[highest].priority < elements[i].priority) {
highest = i;
}
}
return highest;
}
queue::queue() {
size = 0;
capacity = 10;
elements = new Element[capacity];
}
queue::~queue() {
delete[] elements;
}
void queue::enqueue(int priority, int value) {
// previous enqueue implementation
}
Element queue::wthdrw(int index) {
// previous wthdrw implementation
}
Element queue::unqueue() {
// previous unqueue implementation
}
bool queue::isclear() {
// previous isclear implementation
}
int queue::getSize() {
// previous getSize implementation
}
main.cpp:
#include <iostream>
#include "queue.h"
using namespace std;
int main(int argc, char *argv[]) {
setlocale(LC_ALL, "rus");
queue queue1;
for (int i = 1; i < argc; i += 2) {
int priority = atoi(argv[i]);
int value = atoi(argv[i + 1]);
queue1.enqueue(priority, value);
}
for (int i = 0; i < queue1.getSize(); i++) {
cout << "Элемент под индексом " << i + 1 << " имеет значение " << queue1.wthdrw(i).value << " и приоритет "
<< queue1.wthdrw(i).priority << endl;
}
cout << endl;
while (!queue1.isclear()) {
Element earliestelement = queue1.unqueue();
cout << "Элемент вышел. Приоритет элемента - " << earliestelement.priority << " , значение элемента - "
<< earliestelement.value << endl;
}
return 0;
}
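The measurements described above can be run with commands along these lines (file names are taken from the listing; the command-line arguments and the reported times and sizes are illustrative and will vary by machine):

```shell
# Build without optimization and time a sample run
g++ -O0 queue.cpp main.cpp -o queue_O0
time ./queue_O0 3 10 1 20 2 30

# Build with speed optimization and compare the execution time
g++ -O2 queue.cpp main.cpp -o queue_O2
time ./queue_O2 3 10 1 20 2 30

# Build with size optimization and compare the binary sizes
g++ -Os queue.cpp main.cpp -o queue_Os
ls -l queue_O0 queue_O2 queue_Os
```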
|
eac53e053af41ca314a6103fa6ba3fc7
|
{
"intermediate": 0.3077657222747803,
"beginner": 0.378246009349823,
"expert": 0.31398826837539673
}
|
9,461
|
Write mermaid code for a state diagram for BLFP, in more detail.
|
992c9f72ac50c905905b98aa99afc8e0
|
{
"intermediate": 0.3623145818710327,
"beginner": 0.2062099128961563,
"expert": 0.4314754903316498
}
|
9,463
|
Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
(Use `node --trace-warnings ...` to show where the warning was created)
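The fix the warning suggests is a single field in package.json (shown in isolation; the rest of the file is whatever the project already contains), or alternatively renaming the entry file to use the .mjs extension:

```json
{
  "type": "module"
}
```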
|
64d35cb8fe4659380b9fe60135f2425d
|
{
"intermediate": 0.41051533818244934,
"beginner": 0.2925494313240051,
"expert": 0.2969352900981903
}
|
9,464
|
import os
import numpy as np
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import ResNet50
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import Adam
# Parameters
num_classes = 2
image_size = 224
# Load the pre-trained ResNet-50 model
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(image_size, image_size, 3))
# Add custom layers on top of the pre-trained model
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)
# Define the model with the pre-trained base and custom layers
model = Model(inputs=base_model.input, outputs=predictions)
# Freeze the base layers to avoid retraining them
for layer in base_model.layers:
layer.trainable = False
# Compile the model
model.compile(optimizer=Adam(learning_rate=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
# Define the data generators for training and testing
train_datagen = ImageDataGenerator(rescale=1.0/255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1.0/255)
train_generator = train_datagen.flow_from_directory(
'dataset/training',
target_size=(image_size, image_size),
batch_size=32,
class_mode='categorical')
test_generator = test_datagen.flow_from_directory(
'dataset/validation',
target_size=(image_size, image_size),
batch_size=32,
class_mode='categorical')
# Train the model
model.fit(
train_generator,
steps_per_epoch=train_generator.n // train_generator.batch_size,
epochs=10,
validation_data=test_generator,
validation_steps=test_generator.n // test_generator.batch_size,
workers=1)
# Save the model
model.save('models/astolfo_classifier_v1.h5')
What's wrong with this code?
|
aca82ee879bb97301149f36cdf03591d
|
{
"intermediate": 0.39605021476745605,
"beginner": 0.2003396600484848,
"expert": 0.40361014008522034
}
|
9,465
|
Hi Chat, would you please give me the MEL command to create a character and make it move in a straight line?
|
be08e90cd5a33fdf8e39a293df66b7f0
|
{
"intermediate": 0.4235158860683441,
"beginner": 0.12607024610042572,
"expert": 0.45041385293006897
}
|
9,466
|
Can a Python script be made to move the mouse on my computer, as if I were holding a mouse, using just my camera?
|
7299c21c26f89f762514a70953f0e87c
|
{
"intermediate": 0.38587209582328796,
"beginner": 0.16692450642585754,
"expert": 0.4472033679485321
}
|
9,467
|
import express from "express";
import mongoose from "mongoose";
import dotenv from "dotenv";
dotenv.config();
mongoose
.connect(process.env.MONGODB_URL)
.then(() => {
console.log("mongodb database connection established");
})
.catch((err) => {
console.log(
"something wrong with the connection to mongodb the error is" + err
);
});
Argument of type 'string | undefined' is not assignable to parameter of type 'string'.
Type 'undefined' is not assignable to type 'string'.ts(2345)
|
b395bf7548a24ceae04ad3ef2c4bc5ca
|
{
"intermediate": 0.4455593526363373,
"beginner": 0.3270457684993744,
"expert": 0.22739490866661072
}
|
9,468
|
import express from "express";
import mongoose from "mongoose";
import dotenv from "dotenv";
dotenv.config();
if (process.env.MONGODB_URL) {
mongoose
.connect(process.env.MONGODB_URL)
.then(() => {
console.log("mongodb database connection established");
})
.catch((err) => {
console.log(
"something wrong with the connection to mongodb the error is" + err
);
});
}
Can I write this condition like that?
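An alternative to wrapping the whole connect call in an `if` is to narrow the `string | undefined` value once and fail fast; a small sketch, with the helper name made up for illustration:

```typescript
// Sketch: turn `string | undefined` into `string`, failing fast when unset.
function requireEnv(value: string | undefined, name: string): string {
  if (value === undefined) {
    throw new Error(`Environment variable ${name} is not set`);
  }
  return value; // narrowed to string from here on
}

// Illustrative usage in the app above:
// const url = requireEnv(process.env.MONGODB_URL, "MONGODB_URL");
// mongoose.connect(url);
```

This keeps the TS2345 error away without silently skipping the connection when the variable is missing.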
|
a410f9a7978458fec69b0395fa4cf644
|
{
"intermediate": 0.5856730341911316,
"beginner": 0.2545973062515259,
"expert": 0.15972961485385895
}
|
9,469
|
How can I use Python to write a program that automatically fills in the following web page: https://www.freeliving.com.tw/event/ubot/eat_Order_Form.aspx?BuffetID=81
|
bde8d8d9f15dde64284a234abb45f15b
|
{
"intermediate": 0.4055026173591614,
"beginner": 0.2832689881324768,
"expert": 0.3112284541130066
}
|
9,470
|
I’m building a video game engine using C++ as the coding language and Vulkan for graphics. I am trying to set up a generic renderer using Vulkan that is flexible and will render objects based on a vector that is supplied to it. The renderer will also handle the creation of the window using GLFW and use GLM for all relevant math calls. I am using the ASSIMP library to load 3d models and animations.
Here is a portion of the code:
Engine.h:
#pragma once
#include "Window.h"
#include "Renderer.h"
#include "Scene.h"
#include <chrono>
#include <thread>
class Engine
{
public:
Engine();
~Engine();
void Run();
void Shutdown();
int MaxFPS = 60;
private:
void Initialize();
void MainLoop();
void Update(float deltaTime);
void Render();
Window window;
Renderer renderer;
Scene scene;
};
Engine.cpp:
#include "Engine.h"
#include "Terrain.h"
#include <iostream>
Engine::Engine()
{
Initialize();
}
Engine::~Engine()
{
Shutdown();
}
void Engine::Run()
{
MainLoop();
}
void Engine::Initialize()
{
// Initialize window, renderer, and scene
window.Initialize();
renderer.Initialize(window.GetWindow());
scene.Initialize();
VkDescriptorSetLayout descriptorSetLayout = renderer.CreateDescriptorSetLayout();
//VkDescriptorPool descriptorPool = renderer.CreateDescriptorPool(1); // Assuming only one terrain object
VkDescriptorSetLayout samplerDescriptorSetLayout = renderer.CreateSamplerDescriptorSetLayout(); // Use this new method to create a separate descriptor layout.
VkDescriptorPool descriptorPool = renderer.CreateDescriptorPool(1);
// Create a simple square tile GameObject
GameObject* squareTile = new GameObject();
squareTile->Initialize();
// Define the square’s vertices and indices
std::vector<Vertex> vertices = {
{ { 0.0f, 0.0f, 0.0f }, { 1.0f, 0.0f, 0.0f } }, // Bottom left
{ { 1.0f, 0.0f, 0.0f }, { 0.0f, 1.0f, 0.0f } }, // Bottom right
{ { 1.0f, 1.0f, 0.0f }, { 0.0f, 0.0f, 1.0f } }, // Top right
{ { 0.0f, 1.0f, 0.0f }, { 1.0f, 1.0f, 0.0f } }, // Top left
};
std::vector<uint32_t> indices = {
0, 1, 2, // First triangle
0, 2, 3 // Second triangle
};
// Initialize mesh and material for the square tile
squareTile->GetMesh()->Initialize(vertices, indices, *renderer.GetDevice(), *renderer.GetPhysicalDevice(), *renderer.GetCommandPool(), *renderer.GetGraphicsQueue());
squareTile->GetMaterial()->Initialize("C:/shaders/vert_depth2.spv", "C:/shaders/frag_depth2.spv", "C:/textures/texture.jpg", *renderer.GetDevice(), descriptorSetLayout, samplerDescriptorSetLayout, descriptorPool, *renderer.GetPhysicalDevice(), *renderer.GetCommandPool(), *renderer.GetGraphicsQueue());
// Add the square tile GameObject to the scene
scene.AddGameObject(squareTile);
/*Terrain terrain(0,10,1,renderer.GetDevice(), renderer.GetPhysicalDevice(), renderer.GetCommandPool(), renderer.GetGraphicsQueue());
terrain.GenerateTerrain(descriptorSetLayout, samplerDescriptorSetLayout, descriptorPool);*/
//scene.AddGameObject(terrain.GetTerrainObject());
float deltaTime = window.GetDeltaTime();
}
void Engine::MainLoop()
{
while (!window.ShouldClose())
{
window.PollEvents();
float deltaTime = window.GetDeltaTime();
Update(deltaTime);
Render();
auto sleep_duration = std::chrono::milliseconds(1000 / MaxFPS);
std::this_thread::sleep_for(sleep_duration);
}
}
void Engine::Update(float deltaTime)
{
scene.Update(deltaTime);
}
void Engine::Render()
{
renderer.BeginFrame();
scene.Render(renderer);
renderer.EndFrame();
}
void Engine::Shutdown()
{
// Clean up resources in reverse order
scene.Shutdown();
renderer.Shutdown();
window.Shutdown();
}
Scene.h:
#pragma once
#include <vector>
#include "GameObject.h"
#include "Camera.h"
#include "Renderer.h"
class Scene
{
public:
Scene();
~Scene();
void Initialize();
void Update(float deltaTime);
void Render(Renderer& renderer);
void Shutdown();
void AddGameObject(GameObject* gameObject);
Camera& GetCamera();
float temp;
private:
std::vector<GameObject*> gameObjects;
Camera camera;
};
Scene.cpp:
#include "Scene.h"
Scene::Scene()
{
}
Scene::~Scene()
{
Shutdown();
}
void Scene::Initialize()
{
// Initialize camera and game objects
//camera.SetPosition(glm::vec3(0.0f, 0.0f, -5.0f));
camera.Initialize(800.0f / 600.0f);
camera.SetPosition(glm::vec3(0.0f, 0.0f, 5.0f));
camera.SetRotation(glm::vec3(0.0f, 0.0f, 0.0f));
// Add initial game objects
}
void Scene::Update(float deltaTime)
{
// Update game objects and camera
for (GameObject* gameObject : gameObjects)
{
gameObject->Update(deltaTime);
temp = temp + 0.01;
}
}
void Scene::Render(Renderer& renderer)
{
// Render game objects
for (GameObject* gameObject : gameObjects)
{
gameObject->Render(renderer, camera);
}
//camera.SetRotation(glm::vec3(0, 0, temp));
// Submit rendering-related commands to Vulkan queues
}
void Scene::Shutdown()
{
// Clean up game objects
for (GameObject* gameObject : gameObjects)
{
delete gameObject;
gameObject = nullptr;
}
gameObjects.clear();
camera.Shutdown();
}
void Scene::AddGameObject(GameObject* gameObject)
{
gameObjects.push_back(gameObject);
}
Camera& Scene::GetCamera()
{
return camera;
}
GameObject.h:
#pragma once
#include <glm/glm.hpp>
#include "Mesh.h"
#include "Material.h"
#include "Camera.h"
#include "Renderer.h"
class GameObject
{
public:
GameObject();
~GameObject();
void Initialize();
void Update(float deltaTime);
void Render(Renderer& renderer, const Camera& camera);
void Shutdown();
void SetPosition(const glm::vec3& position);
void SetRotation(const glm::vec3& rotation);
void SetScale(const glm::vec3& scale);
Mesh* GetMesh();
Material* GetMaterial();
private:
glm::mat4 modelMatrix;
glm::vec3 position;
glm::vec3 rotation;
glm::vec3 scale;
Mesh* mesh;
Material* material;
bool initialized = false;
void UpdateModelMatrix();
};
GameObject.cpp:
#include "GameObject.h"
#include <glm/gtc/matrix_transform.hpp>
GameObject::GameObject()
: position(glm::vec3(0.0f, 0.0f, 0.0f)), rotation(glm::vec3(0.0f, 0.0f, 0.0f)), scale(1.0f)
{
}
GameObject::~GameObject()
{
if (initialized)
{
Shutdown();
}
}
void GameObject::Initialize()
{
mesh = new Mesh{};
material = new Material{};
SetScale(glm::vec3(1.0f));
this->initialized = true;
}
void GameObject::Update(float deltaTime)
{
// Update position, rotation, scale, and other properties
// Example: Rotate the object around the Y-axis
rotation.y += deltaTime * glm::radians(90.0f);
UpdateModelMatrix();
}
void GameObject::Render(Renderer& renderer, const Camera& camera)
{
// Render this object using the renderer and camera
VkDevice device = *renderer.GetDevice();
// Bind mesh vertex and index buffers
VkBuffer vertexBuffers[] = { mesh->GetVertexBuffer() };
VkDeviceSize offsets[] = { 0 };
vkCmdBindVertexBuffers(*renderer.GetCurrentCommandBuffer(), 0, 1, vertexBuffers, offsets);
vkCmdBindIndexBuffer(*renderer.GetCurrentCommandBuffer(), mesh->GetIndexBuffer(), 0, VK_INDEX_TYPE_UINT32);
// Update shader uniform buffers with modelMatrix, viewMatrix and projectionMatrix transforms
struct MVP {
glm::mat4 model;
glm::mat4 view;
glm::mat4 projection;
} mvp;
mvp.model = modelMatrix;
mvp.view = camera.GetViewMatrix();
mvp.projection = camera.GetProjectionMatrix();
// Create a new buffer to hold the MVP data temporarily
VkBuffer mvpBuffer;
VkDeviceMemory mvpBufferMemory;
BufferUtils::CreateBuffer(device, *renderer.GetPhysicalDevice(),
sizeof(MVP), VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT,
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT,
mvpBuffer, mvpBufferMemory);
material->CreateDescriptorSet(renderer.CreateDescriptorSetLayout(), renderer.CreateDescriptorPool(1), mvpBuffer, sizeof(MVP));
// Map the MVP data into the buffer and unmap
void* data = nullptr;
vkMapMemory(device, mvpBufferMemory, 0, sizeof(MVP), 0, &data);
memcpy(data, &mvp, sizeof(MVP));
vkUnmapMemory(device, mvpBufferMemory);
// TODO: Modify your material, descriptor set, and pipeline to use this new mvpBuffer instead of
// the default uniform buffer
// Bind the DescriptorSet associated with the material
VkDescriptorSet descriptorSet = material->GetDescriptorSet();
material->UpdateBufferBinding(descriptorSet, mvpBuffer, device, sizeof(MVP));
renderer.CreateGraphicsPipeline(mesh, material);
vkCmdBindPipeline(*renderer.GetCurrentCommandBuffer(), VK_PIPELINE_BIND_POINT_GRAPHICS, renderer.GetPipeline().get()->GetPipeline());
vkCmdBindDescriptorSets(*renderer.GetCurrentCommandBuffer(), VK_PIPELINE_BIND_POINT_GRAPHICS, material->GetPipelineLayout(), 0, 1, &descriptorSet, 0, nullptr);
// Call vkCmdDrawIndexed()
uint32_t numIndices = static_cast<uint32_t>(mesh->GetIndices().size());
vkCmdDrawIndexed(*renderer.GetCurrentCommandBuffer(), numIndices, 1, 0, 0, 0);
// Cleanup the temporary buffer
//vkDestroyBuffer(device, mvpBuffer, nullptr);
//vkFreeMemory(device, mvpBufferMemory, nullptr);
}
void GameObject::Shutdown()
{
// Clean up resources, if necessary
// (depending on how Mesh and Material resources are managed)
delete mesh;
delete material;
this->initialized = false;
}
void GameObject::SetPosition(const glm::vec3& position)
{
this->position = position;
UpdateModelMatrix();
}
void GameObject::SetRotation(const glm::vec3& rotation)
{
this->rotation = rotation;
UpdateModelMatrix();
}
void GameObject::SetScale(const glm::vec3& scale)
{
this->scale = scale;
UpdateModelMatrix();
}
void GameObject::UpdateModelMatrix()
{
modelMatrix = glm::mat4(1.0f);
modelMatrix = glm::translate(modelMatrix, position);
modelMatrix = glm::rotate(modelMatrix, rotation.x, glm::vec3(1.0f, 0.0f, 0.0f));
modelMatrix = glm::rotate(modelMatrix, rotation.y, glm::vec3(0.0f, 1.0f, 0.0f));
modelMatrix = glm::rotate(modelMatrix, rotation.z, glm::vec3(0.0f, 0.0f, 1.0f));
modelMatrix = glm::scale(modelMatrix, scale);
}
Mesh* GameObject::GetMesh()
{
return mesh;
}
Material* GameObject::GetMaterial()
{
return material;
}
Renderer.h:
#pragma once
#include <vulkan/vulkan.h>
#include "Window.h"
#include <vector>
#include <stdexcept>
#include <set>
#include <optional>
#include <iostream>
#include "Pipeline.h"
#include "Material.h"
#include "Mesh.h"
struct QueueFamilyIndices
{
std::optional<uint32_t> graphicsFamily;
std::optional<uint32_t> presentFamily;
bool IsComplete()
{
return graphicsFamily.has_value() && presentFamily.has_value();
}
};
struct SwapChainSupportDetails {
VkSurfaceCapabilitiesKHR capabilities;
std::vector<VkSurfaceFormatKHR> formats;
std::vector<VkPresentModeKHR> presentModes;
};
class Renderer
{
public:
Renderer();
~Renderer();
void Initialize(GLFWwindow* window);
void Shutdown();
void BeginFrame();
void EndFrame();
VkDescriptorSetLayout CreateDescriptorSetLayout();
VkDescriptorPool CreateDescriptorPool(uint32_t maxSets);
VkDevice* GetDevice();
VkPhysicalDevice* GetPhysicalDevice();
VkCommandPool* GetCommandPool();
VkQueue* GetGraphicsQueue();
VkCommandBuffer* GetCurrentCommandBuffer();
std::shared_ptr<Pipeline> GetPipeline();
void CreateGraphicsPipeline(Mesh* mesh, Material* material);
VkDescriptorSetLayout CreateSamplerDescriptorSetLayout();
private:
bool shutdownInProgress;
uint32_t currentCmdBufferIndex = 0;
std::vector<size_t> currentFramePerImage;
std::vector<VkImage> swapChainImages;
std::vector<VkImageView> swapChainImageViews;
VkExtent2D swapChainExtent;
VkRenderPass renderPass;
uint32_t imageIndex;
std::shared_ptr<Pipeline> pipeline;
VkFormat swapChainImageFormat;
std::vector<VkCommandBuffer> commandBuffers;
void CreateImageViews();
void CleanupImageViews();
void CreateRenderPass();
void CleanupRenderPass();
void CreateSurface();
void DestroySurface();
void CreateInstance();
void CleanupInstance();
void ChoosePhysicalDevice();
void CreateDevice();
void CleanupDevice();
void CreateSwapchain();
void CleanupSwapchain();
void CreateCommandPool();
void CleanupCommandPool();
void CreateFramebuffers();
void CleanupFramebuffers();
void CreateCommandBuffers();
void CleanupCommandBuffers();
void Present();
GLFWwindow* window;
VkInstance instance = VK_NULL_HANDLE;
VkPhysicalDevice physicalDevice = VK_NULL_HANDLE;
VkDevice device = VK_NULL_HANDLE;
VkSurfaceKHR surface;
VkSwapchainKHR swapchain;
VkCommandPool commandPool;
VkCommandBuffer currentCommandBuffer;
std::vector<VkFramebuffer> framebuffers;
// Additional Vulkan objects needed for rendering…
const uint32_t kMaxFramesInFlight = 2;
std::vector<VkSemaphore> imageAvailableSemaphores;
std::vector<VkSemaphore> renderFinishedSemaphores;
std::vector<VkFence> inFlightFences;
size_t currentFrame;
VkQueue graphicsQueue;
VkQueue presentQueue;
void CreateSyncObjects();
void CleanupSyncObjects();
SwapChainSupportDetails querySwapChainSupport(VkPhysicalDevice device, VkSurfaceKHR surface);
VkSurfaceFormatKHR chooseSwapSurfaceFormat(const std::vector<VkSurfaceFormatKHR>& availableFormats);
VkPresentModeKHR chooseSwapPresentMode(const std::vector<VkPresentModeKHR>& availablePresentModes);
VkExtent2D chooseSwapExtent(const VkSurfaceCapabilitiesKHR& capabilities, GLFWwindow* window);
std::vector<const char*> deviceExtensions = {
VK_KHR_SWAPCHAIN_EXTENSION_NAME
};
std::vector<const char*> CheckPhysicalDeviceExtensionSupport(VkPhysicalDevice physicalDevice);
QueueFamilyIndices GetQueueFamilyIndices(VkPhysicalDevice physicalDevice);
};
Here is some relevant code from the Renderer class:
Renderer::Renderer() : currentFrame(0), shutdownInProgress(false)
{
}
Renderer::~Renderer()
{
Shutdown();
}
void Renderer::Initialize(GLFWwindow* window)
{
this->window = window;
CreateInstance();
CreateSurface();
ChoosePhysicalDevice();
CreateDevice();
CreateSwapchain();
CreateRenderPass();
CreateCommandPool();
CreateFramebuffers();
CreateSyncObjects();
}
void Renderer::Shutdown()
{
if (shutdownInProgress) {
return;
}
shutdownInProgress = true;
if (device != VK_NULL_HANDLE) {
vkDeviceWaitIdle(device);
}
CleanupFramebuffers();
CleanupRenderPass();
CleanupSyncObjects();
CleanupCommandBuffers();
CleanupCommandPool();
CleanupImageViews();
CleanupSwapchain();
if (device != VK_NULL_HANDLE) {
CleanupDevice();
}
DestroySurface();
CleanupInstance();
shutdownInProgress = false;
}
I am getting the following error in console when closing the application:
VUID-vkDestroyBuffer-buffer-00922(ERROR / SPEC): msgNum: -464217071 - Validation Error: [ VUID-vkDestroyBuffer-buffer-00922 ] Object 0: handle = 0x1f956bf58f0, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xe4549c11 | Cannot call vkDestroyBuffer on VkBuffer 0x27d60e0000000019[] that is currently in use by a command buffer. The Vulkan spec states: All submitted commands that refer to buffer, either directly or via a VkBufferView, must have completed execution (https://vulkan.lunarg.com/doc/view/1.3.239.0/windows/1.3-extensions/vkspec.html#VUID-vkDestroyBuffer-buffer-00922)
Objects: 1
[0] 0x1f956bf58f0, type: 3, name: NULL
What is causing it and how can I modify the code to fix it?
|
cd62764127378e5226b3c4eec311af5c
|
{
"intermediate": 0.4465576410293579,
"beginner": 0.420981764793396,
"expert": 0.1324605941772461
}
|
9,471
|
#include <vector>
#include <cmath>
#include <iostream>
class Polynomial {
public:
Polynomial(const std::vector<double>& coefficients) : coefficients(coefficients) { }
double operator()(double x) const {
double result = 0;
for (int i = 0; i < coefficients.size(); i++) {
result += coefficients[i] * std::pow(x, i);
}
return result;
}
Polynomial operator+(const Polynomial& other) const {
int size = std::max(coefficients.size(), other.coefficients.size());
std::vector<double> result(size, 0);
for (int i = 0; i < size; i++) {
if (i < coefficients.size()) {
result[i] += coefficients[i];
}
if (i < other.coefficients.size()) {
result[i] += other.coefficients[i];
}
}
return Polynomial(result);
}
Polynomial operator-(const Polynomial& other) const {
int size = std::max(coefficients.size(), other.coefficients.size());
std::vector<double> result(size, 0);
for (int i = 0; i < size; i++) {
if (i < coefficients.size()) {
result[i] += coefficients[i];
}
if (i < other.coefficients.size()) {
result[i] -= other.coefficients[i];
}
}
return Polynomial(result);
}
Polynomial operator*(const Polynomial& other) const {
int size = coefficients.size() + other.coefficients.size() - 1;
std::vector<double> result(size, 0);
for (int i = 0; i < coefficients.size(); i++) {
for (int j = 0; j < other.coefficients.size(); j++) {
result[i + j] += coefficients[i] * other.coefficients[j];
}
}
return Polynomial(result);
}
//Polynomial operator/(const Polynomial& other) const {
// int size = coefficients.size() + other.coefficients.size() - 1;
// std::vector<double> result(size, 0);
// for (int i = 0; i < coefficients.size(); i++) {
// for (int j = 0; j < other.coefficients.size(); j++) {
// result[i + j] += coefficients[i] / other.coefficients[j];
// }
// }
// return Polynomial(result);
//}
Polynomial operator/(const Polynomial& divisor) const {
if (divisor.coefficients.empty() || divisor.coefficients.back() == 0) {
throw std::invalid_argument("Division by zero");
}
std::vector<double> resultCoefficients(degree() - divisor.degree() + 1, 0);
for (int i = degree(); i >= divisor.degree(); i--) {
resultCoefficients[i - divisor.degree()] = coefficients[i] / divisor.coefficients[divisor.degree()];
Polynomial term(std::vector<double>(i - divisor.degree(), 0));
term,coefficients.push_back(resultCoefficients[i - divisor.degree()]);
*this -= divisor & term;
}
removeLeadingZeros();
return Polynomial(resultCoefficients);
}
Polynomial& operator+=(const Polynomial& other) {
*this = *this + other;
return *this;
}
Polynomial& operator-=(const Polynomial& other) {
*this = *this - other;
return *this;
}
Polynomial& operator*=(const Polynomial& other) {
*this = *this * other;
return *this;
}
friend std::ostream& operator<<(std::ostream& stream, const Polynomial& poly) {
for (int i = poly.coefficients.size() - 1; i >= 0; i--) {
if (poly.coefficients[i] == 0) continue;
if (i != poly.coefficients.size() - 1)
if(poly.coefficients[i] >= 0) stream << " + " << poly.coefficients[i];
else stream << " - " << abs(poly.coefficients[i]);
else stream << poly.coefficients[i];
if (i > 0) stream << "x^" << i;
}
return stream;
}
private:
std::vector<double> coefficients;
};
int main() {
int n;
std::cout << "Введите степень первого многочлена: ";
std::cin >> n;
std::vector<double> coeffs_a(n+1);
std::cout << "Введите коэффициенты первого многочлена, начиная со старшего:";
for (int i = n; i >= 0; i--) {
std::cin >> coeffs_a[i];
}
Polynomial a(coeffs_a);
std::cout << "Введите степень второго многочлена: ";
std::cin >> n;
std::vector<double> coeffs_b(n+1);
std::cout << "Введите коэффициенты второго многочлена, начиная со старшего:";
for (int i = n; i >= 0; i--) {
std::cin >> coeffs_b[i];
}
Polynomial b(coeffs_b);
std::cout << "Многочлен a = " << a << std::endl;
std::cout << "Многочлен b = " << b << std::endl;
std::cout << "Сумма многочленов: " << a + b << std::endl;
std::cout << "Разность многочленов: " << a - b << std::endl;
std::cout << "Произведение многочленов: " << a * b << std::endl;
std::cout << "Частное многочленов: " << a / b << std::endl;
return 0;
} Fix the error
|
a1afc3ee843cd7fe4e56e94b3e1353c0
|
{
"intermediate": 0.32841232419013977,
"beginner": 0.4453510642051697,
"expert": 0.22623653709888458
}
|
9,472
|
This is my client code:
-module(client).
-export([start/1,add_remote/1,send_msg/4,stop_client/1]).
start(Client) -> register(Client,spawn(fun() -> loop() end)).
% c(client).
% client:start(elsa).
% client:add_remote('server1@LAPTOP-FELLF17T').
%pong
%client:send_msg(elsa,central,'server1@LAPTOP-FELLF17T',ola).
% c(client).
% client:start(sofia).
% client:add_remote('server1@LAPTOP-FELLF17T').
%pong
%client:send_msg(sofia,central,'server1@LAPTOP-FELLF17T',ola).
%%%%%%%%%%%%%%%%%%%%%%%%%
% Interface to client %
%%%%%%%%%%%%%%%%%%%%%%%%%
%Add remote machine to known ones.
add_remote(RemoteMachine) ->
net_adm:ping(RemoteMachine).
% Sends Message
send_msg(Client,Server,RemoteMachine,Message)->
Client ! {send,Server,RemoteMachine,Message}.
% Stop client
stop_client(Client) ->
Client ! {stop_client}.
%%%%%%%%%%%%%%%%%%%%%%%%%
% Main loop %
%%%%%%%%%%%%%%%%%%%%%%%%%
loop() ->
receive
{send,Server,RemoteMachine,Message} ->
{Server,RemoteMachine} ! {self(),Message},
receive
{_,Reply} -> io:format("Received from server: ~p~n",[Reply])
end,
loop();
{stop_client} ->
io:format("Cliente exiting...")
end.
and this is my server code:
-module(server).
-export([start/1]).
% c(server).
% server:start(central).
start(Server) -> register(Server,spawn(fun() -> loop() end)).
loop() ->
receive
{From, stop} ->
io:format("Received from ~p message to stop!~n",[From]),
From ! {self(),server_disconnect};
{From, Msg} ->
io:format("Received ~p: ~p~n",[From,Msg]),
io:format("Sending reply...~n"),
From ! {self(),happy_to_receive_your_message},
loop()
end.
|
8347631d686aea5888eb6ad75c90187c
|
{
"intermediate": 0.23079043626785278,
"beginner": 0.5332368612289429,
"expert": 0.23597273230552673
}
|
9,473
|
how to create a local proxy in python that switches between 2 other proxies?
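A minimal sketch of the switching piece only, with hypothetical upstream addresses; a full local proxy would additionally accept client sockets and relay bytes to the chosen upstream, which is omitted here:

```python
import itertools

# Hypothetical upstream proxies -- replace with your real proxy addresses.
UPSTREAMS = [("127.0.0.1", 8081), ("127.0.0.1", 8082)]

def make_upstream_selector(upstreams):
    """Return a callable yielding the next upstream in round-robin order,
    so consecutive client connections alternate between the two proxies."""
    cycle = itertools.cycle(upstreams)
    return lambda: next(cycle)

# Each incoming connection would call the selector, then open a socket to
# the returned (host, port) pair and pipe data in both directions.
next_upstream = make_upstream_selector(UPSTREAMS)
```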
|
6b6c55f7039b198b7964e19026579d79
|
{
"intermediate": 0.4215874671936035,
"beginner": 0.172784686088562,
"expert": 0.4056278467178345
}
|
9,474
|
Please implement a timer representing the time the user has to answer the current question. Once the set time runs out, a message should appear on screen making it clear that time is up, together with the score the user has at that moment: import json
import random
import os
import googletrans
from googletrans import Translator
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.label import Label
from kivy.uix.button import Button
from kivy.uix.textinput import TextInput
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.clock import Clock
from kivy.metrics import dp
# ... (keep the cargar_preguntas, mostrar_menu, mostrar_mensaje_porcentaje, jugar, jugar_de_nuevo, seleccionar_idioma, guardar_puntuacion and ver_mejores_puntuaciones functions) ...
def cargar_preguntas():
with open("preguntas_es.json", "r", encoding="utf-8") as f:
preguntas = json.load(f)
return preguntas
def guardar_puntuacion(puntuacion):
archivo_puntuaciones = "mejores_puntuaciones.txt"
if not os.path.exists(archivo_puntuaciones):
with open(archivo_puntuaciones, "w") as f:
f.write(f"{puntuacion}\n")
else:
with open(archivo_puntuaciones, "r") as f:
puntuaciones = [int(linea.strip()) for linea in f.readlines()]
puntuaciones.append(puntuacion)
puntuaciones.sort(reverse=True)
puntajes = puntuaciones[:5]
with open(archivo_puntuaciones, "w") as f:
for puntuacion in puntajes:
f.write(f"{puntuacion}\n")
def cargar_puntuaciones():
archivo_puntuaciones = "mejores_puntuaciones.txt"
if os.path.exists(archivo_puntuaciones):
with open(archivo_puntuaciones, "r") as f:
puntuaciones = [int(linea.strip()) for linea in f.readlines()]
else:
puntuaciones = []
return puntuaciones
def traducir(texto, idioma):
translator = Translator()
traduccion = translator.translate(texto, dest=idioma)
return traduccion.text
class LanguageScreen(Screen):
def seleccionar_idioma(self, idioma):
self.manager.idioma = idioma
self.manager.current = "menu"
class MainMenu(Screen):
def __init__(self, **kwargs):
super(MainMenu, self).__init__(**kwargs)
self.layout = BoxLayout(orientation='vertical')
self.add_widget(Label(text='Menú principal'))
self.play_button = Button(text='Jugar')
self.play_button.bind(on_press=self.start_game)
self.layout.add_widget(self.play_button)
self.high_scores_button = Button(text='Ver mejores puntuaciones')
self.high_scores_button.bind(on_press=self.show_high_scores)
self.layout.add_widget(self.high_scores_button)
self.exit_button = Button(text='Salir')
self.exit_button.bind(on_press=self.exit_app)
self.layout.add_widget(self.exit_button)
self.add_widget(self.layout)
def start_game(self, instance):
self.manager.current = 'game'
def show_high_scores(self, instance):
self.manager.current = 'high_scores'
def exit_app(self, instance):
App.get_running_app().stop()
class GameScreen(Screen):
def __init__(self, **kwargs):
super(GameScreen, self).__init__(**kwargs)
self.layout = BoxLayout(orientation='vertical')
self.score_label = Label(text='Puntuación: 0')
self.layout.add_widget(self.score_label)
self.question_label = Label(text='')
self.layout.add_widget(self.question_label)
# Agregar los botones de respuesta
self.options_layout = BoxLayout(orientation='vertical')
for i in range(4):
button = Button(text='', size_hint_y=None, height=dp(50))
button.bind(on_release=lambda button: self.submit_answer(button.text))
self.options_layout.add_widget(button)
self.layout.add_widget(self.options_layout)
self.back_button = Button(text='Volver al menú principal')
self.back_button.bind(on_press=self.go_back)
self.layout.add_widget(self.back_button)
self.add_widget(self.layout)
def on_enter(self):
self.start_game()
def on_leave(self):
self.score_label.text = "Puntuación: 0"
self.puntaje = 0 # Reiniciar el puntaje
def start_game(self):
global preguntas
self.puntaje = 0
self.total_preguntas = len(preguntas)
self.preguntas_respondidas = 0
self.preguntas = preguntas.copy()
self.mostrar_pregunta()
def mostrar_pregunta(self):
if self.preguntas:
self.pregunta_actual = random.choice(self.preguntas)
self.preguntas.remove(self.pregunta_actual)
self.question_label.text = self.pregunta_actual["pregunta"]
# Actualizar los botones de respuesta
opciones = self.pregunta_actual["opciones"]
random.shuffle(opciones)
for i, opcion in enumerate(opciones):
self.options_layout.children[i].text = opcion
self.enable_buttons()
else:
self.question_label.text = "¡Felicidades, lo lograste!"
self.disable_buttons()
def submit_answer(self, respuesta):
self.preguntas_respondidas += 1
if respuesta == self.pregunta_actual["respuesta_correcta"]:
self.puntaje += 1
self.score_label.text = f'Puntuación: {self.puntaje}'
else:
mensaje_porcentaje = self.mostrar_mensaje_porcentaje()
self.question_label.text = f"Respuesta incorrecta. La respuesta correcta es: {self.pregunta_actual['respuesta_correcta']}. Tu puntaje final es: {self.puntaje}. {mensaje_porcentaje}"
self.disable_buttons()
guardar_puntuacion(self.puntaje)
self.puntaje = 0 # Reiniciar el puntaje
return
self.mostrar_pregunta()
def disable_buttons(self):
for button in self.options_layout.children:
button.disabled = True
def enable_buttons(self):
for button in self.options_layout.children:
button.disabled = False
def go_back(self, instance):
self.manager.current = 'menu'
def mostrar_mensaje_porcentaje(self):
porcentaje_aciertos = (self.puntaje / self.preguntas_respondidas) * 100
if porcentaje_aciertos >= 75:
mensaje = "¡Excelente trabajo!"
elif porcentaje_aciertos >= 50:
mensaje = "¡Buen trabajo!"
elif porcentaje_aciertos >= 25:
mensaje = "Puedes hacerlo mejor."
else:
mensaje = "Sigue practicando."
return mensaje
def go_back(self, instance):
self.manager.current = 'main_menu'
class HighScoresScreen(Screen):
def __init__(self, **kwargs):
super(HighScoresScreen, self).__init__(**kwargs)
self.layout = BoxLayout(orientation='vertical')
self.high_scores_label = Label(text='')
self.layout.add_widget(self.high_scores_label)
self.back_button = Button(text='Volver al menú principal')
self.back_button.bind(on_press=self.go_back)
self.layout.add_widget(self.back_button)
self.add_widget(self.layout)
def on_enter(self):
self.show_high_scores()
def show_high_scores(self):
archivo_puntuaciones = "mejores_puntuaciones.txt"
if os.path.exists(archivo_puntuaciones):
with open(archivo_puntuaciones, "r") as f:
puntuaciones = [int(linea.strip()) for linea in f.readlines()]
puntuaciones = puntuaciones[:5]
self.high_scores_label.text = "Mejores puntuaciones:\n" + "\n".join(str(puntuacion) for puntuacion in puntuaciones)
else:
self.high_scores_label.text = "No hay puntuaciones guardadas."
def go_back(self, instance):
self.manager.current = 'main_menu'
class TriviaApp(App):
def build(self):
sm = ScreenManager()
sm.add_widget(MainMenu(name='main_menu'))
sm.add_widget(GameScreen(name='game'))
sm.add_widget(HighScoresScreen(name='high_scores'))
return sm
if __name__ == "__main__":
preguntas = cargar_preguntas()
TriviaApp().run()
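One way to add the requested timer, as a sketch with invented names: keep the countdown bookkeeping in a plain class and drive it from Kivy's `Clock.schedule_interval(self.tick, 1)`; on expiry, `GameScreen` would set `question_label.text` to a time's-up message plus the current score and disable the buttons. The class below is deliberately Kivy-free so the logic runs anywhere:

```python
class CountdownTimer:
    """Tracks remaining seconds; tick() is meant to be called once per
    second, e.g. from Kivy's Clock.schedule_interval(self.tick, 1)."""

    def __init__(self, seconds):
        self.remaining = seconds
        self.expired = False

    def tick(self, dt=1):
        if self.expired:
            return False  # returning False unschedules a Kivy clock event
        self.remaining -= 1
        if self.remaining <= 0:
            self.remaining = 0
            self.expired = True
            return False  # time is up: show the message and the score here
        return True
```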
|
0d7fd4efd6bf77421caf4c26346474ed
|
{
"intermediate": 0.34735602140426636,
"beginner": 0.42899614572525024,
"expert": 0.22364790737628937
}
|
9,475
|
in python, how to convert pdf to image
|
7c624ce258506aa29f7fd7eb29211f85
|
{
"intermediate": 0.40156251192092896,
"beginner": 0.3263002932071686,
"expert": 0.2721371650695801
}
|
9,476
|
cd /etc/openvpn/easy-rsa/
|
8e17fa336d88f440cc77e58ffc8772e6
|
{
"intermediate": 0.33391737937927246,
"beginner": 0.24209266901016235,
"expert": 0.4239899218082428
}
|
9,477
|
flutter how to download file from web on web platform?
|
76af3e978e8c8154d8d63295c27cc792
|
{
"intermediate": 0.5975536704063416,
"beginner": 0.16978348791599274,
"expert": 0.23266278207302094
}
|
9,478
|
c. Sketch a voltage–time graph for the discharging capacitor. Explain how the shape of this graph will be different if the circuit resistance is greater.
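For reference, the discharge follows an exponential decay, so a larger resistance increases the time constant and stretches the curve out in time (same starting voltage, slower fall):

```latex
V(t) = V_0 \, e^{-t/RC}, \qquad \tau = RC
```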
|
829176169ddf97c71bcb4e4fd7f4c827
|
{
"intermediate": 0.36793485283851624,
"beginner": 0.21347163617610931,
"expert": 0.41859352588653564
}
|
9,479
|
Package "playground" must depend on the current version of "@fund/gather-abtest": "0.0.3" vs "^0.0.1"
Package "playground" must depend on the current version of "@fund/gather-stock-sse": "0.0.3" vs "^0.0.2"
🦋  error TypeError: Cannot read property 'dependents' of undefined. What is causing this?
|
a98f7dc8244e3584603685c912309d0e
|
{
"intermediate": 0.4729352295398712,
"beginner": 0.22627946734428406,
"expert": 0.3007853329181671
}
|
9,480
|
flutter how to save file from web platform via http package?
|
76a2319041091897b30d9cfc0cbdc73f
|
{
"intermediate": 0.6230044960975647,
"beginner": 0.16364140808582306,
"expert": 0.21335406601428986
}
|
9,481
| https://www.chbank.com/en/personal/banking-services/useful-information/deposit-rates/index.shtml, find the p.a. rate of this website by using Python and Beautiful Soup; the example result is "6.00% p.a."
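Fetching the live page would need `requests` plus BeautifulSoup, so as a self-contained sketch here is only the extraction step, run on an invented snippet (not the bank's actual markup) using the stdlib `re` module:

```python
import re

# Invented sample markup -- NOT the real structure of the bank's page.
sample_html = '<td class="rate">6.00% p.a.</td>'

def extract_rate(html):
    """Return the first 'N.NN% p.a.' figure found in the HTML, or None."""
    m = re.search(r"\d+\.\d+%\s*p\.a\.", html)
    return m.group(0) if m else None

rate = extract_rate(sample_html)
```

With BeautifulSoup, the same regex could be applied to `soup.get_text()` after fetching the page.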
|
5f073fd0ee63fd4312c80dfbccf4b6da
|
{
"intermediate": 0.33948788046836853,
"beginner": 0.270518034696579,
"expert": 0.3899940550327301
}
|
9,482
|
any free tool to make animated Videos or GIFs from a single photo
|
a7c32dd46f9ed2accf085e3de9dbeca1
|
{
"intermediate": 0.3428896367549896,
"beginner": 0.24931201338768005,
"expert": 0.4077984392642975
}
|
9,483
|
System.NotSupportedException: Stream does not support reading.
|
edfa10d06ecfa516b434cb21b2e3373c
|
{
"intermediate": 0.5239323377609253,
"beginner": 0.1625545173883438,
"expert": 0.3135131597518921
}
|
9,484
|
帮我优化一下这条sql:
SELECT count(distinct r2.bind_mobile) as mobile
FROM (SELECT 1, nascent_id
FROM (SELECT BITMAP_AND(t1.nascent_id, t2.nascent_id) AS vars
FROM (SELECT 1 as code, crowd_bitmap AS nascent_id
FROM app_common_crowd_snap
WHERE crowd_code = '08d5e4d3a3ca4e929f82aa6cf4dd534a') t1
INNER JOIN (SELECT 1 AS code
, (BITMAP_UNION(to_bitmap(SR.nascent_id))) AS nascent_id
FROM dwd_member_shop_rt SR
WHERE SR.group_id = 80000027
AND SR.bind_mobile IS NOT NULL
AND SR.bind_mobile != '') t2
ON t1.code = t2.code) ttt LATERAL VIEW explode(bitmap_to_array(ttt.vars)) nascent_id AS nascent_id) AS r1
LEFT JOIN dwd_member_shop_rt AS r2 ON r1.nascent_id = r2.nascent_id
WHERE r2.bind_mobile IS NOT NULL
AND r2.bind_mobile != '';
|
144f3c23a507caaeda619ec6e32da901
|
{
"intermediate": 0.355283260345459,
"beginner": 0.29743650555610657,
"expert": 0.34728026390075684
}
|
9,485
|
give me some advantages of Git and GitHub
|
b51ec0252c80b0c65f1541430c909b25
|
{
"intermediate": 0.7688170671463013,
"beginner": 0.10358806699514389,
"expert": 0.12759479880332947
}
|
9,486
|
flutter. how to download a file via the Flutter web platform? Bring only the code related to sending/receiving requests/responses.
|
6c3381a5bff038ebebb4bb2bcc136fa1
|
{
"intermediate": 0.45088469982147217,
"beginner": 0.2971472442150116,
"expert": 0.25196805596351624
}
|
9,487
|
python code to generate slides with content and image on topic "ChatGpt 4"
|
a5e1f3feb5bcba2737ef1db2eefc83b0
|
{
"intermediate": 0.28579673171043396,
"beginner": 0.177808478474617,
"expert": 0.5363947749137878
}
|
9,488
|
undefined reference to `va_start'
|
3a2b0c0e01c81f7de71fea87c6c1435d
|
{
"intermediate": 0.31316089630126953,
"beginner": 0.26932209730148315,
"expert": 0.4175169765949249
}
|
9,489
|
python Streamlit code to present generated slides using content and image returned for functions "generate_content" and "generate_image"
|
f1b91961e102506d9cffa78ff3eec1b4
|
{
"intermediate": 0.44117629528045654,
"beginner": 0.22600878775119781,
"expert": 0.33281487226486206
}
|
9,490
|
in sqlalchemy, how can I create the table structure with an auto-increment id, so that I can insert a record which has no id key
|
056a49d6592e8cca939fc340fe0d1b4a
|
{
"intermediate": 0.5125616788864136,
"beginner": 0.20226755738258362,
"expert": 0.2851707637310028
}
|
9,491
|
qt QDockWidget QAbstractButton: how do I write the setStyleSheet?
|
490aebd97c327d5d9a36bf28791b8e14
|
{
"intermediate": 0.5010231137275696,
"beginner": 0.26282238960266113,
"expert": 0.23615454137325287
}
|
9,492
|
old code,new,አማርኛ,መለኪያ,ENGLISH NAMES,UoM
0101010231,0101010401,ፉርኖ ዱቄት የስንዴ አንደኛ ደረጃ ,ኪ.ግ,Furno Duket (Local processed) ,KG
0101010233,0101010402,ሴርፋም የታሸገ,ኪ.ግ,Serifam ,KG
0101010228,0101010403,በቆሎ/ሙሉ ነጭ/ዱቄት,ኪ.ግ,Maize( full white) milled,KG
0101020201,0101020107,በግ/ወንድ/20ኪ.ግ የሚገምት,ቁጥር,Sheep(Estimeted-20 kg)-Male ,Number
0101020202,0101020108,ፍየል /ወንድ/20ኪ.ግ የሚገምት,ቁጥር,Goat(20kg)-Male ,Number
Based on the above CSV, write a Node.js script that will parse through it and use the old code to search through the following CSV below: ITEM PRICE IN BIRR,,,,,,,,,,,,,,,,,,,,,,,,,,
NAME,CODE,Month,Market Name?,,,,ASAYITA,,SEMERA LOGIA,,MELKA WERER,,AWASH 7 KILO,,,,,,,,,,,,,
,,,Source?,,,,1,2,1,2,1,2,1,2,,,,,,,,,,,,
,,,Market Code?,VARIATION,STDEV,GEOMEAN,0201011,,0201023,,0203011,,0203022,,min,max,ratio,,,,,,,,,
Enjera(teff mixed),0101010101,1,325G,0.14,1.93,13.99,13.00,13.41,16.14,16.53,14.08,11.47,,,11.47,16.53,1.441150828,,,,,,,,,
Bread traditional(Ambasha),0101010102,1,350G,0.15,2.65,18.06,21.88,17.87,,,-,-,17.45,15.58,15.58,21.88,1.40436457,,,,,,,,,
Bread traditional (shelito),0101010103,1,350G,0.02,0.52,25.37,-,-,25.00,25.74,-,-,-,-,25.00,25.74,1.0296,,,,,,,,,
Bread (bakery),0101010104,1,350G,0.01,0.39,26.58,26.92,26.92,,,26.25,26.25,,,26.25,26.92,1.02552381,,,,,,,,,
When it finds a match, it should change the code to the new code.
|
fe014ccee6c994f97c6ce06fdd7b59b6
|
{
"intermediate": 0.3788410425186157,
"beginner": 0.3826334476470947,
"expert": 0.23852543532848358
}
|
9,494
|
can you infer context from previous chats?
|
0a97c86b4684ba5950fade728efdd377
|
{
"intermediate": 0.38328617811203003,
"beginner": 0.30287057161331177,
"expert": 0.3138432502746582
}
|
9,495
|
hello
|
6a210ace85380ff27ea69ec1347eac55
|
{
"intermediate": 0.32064199447631836,
"beginner": 0.28176039457321167,
"expert": 0.39759764075279236
}
|
9,496
|
Hi. I need to work with C code
typedef unsigned char ucCode;
I need to use strcmp() or a strcmp replacement
|
5e5c85b4e23eba8823c2fb0a452066ed
|
{
"intermediate": 0.19150112569332123,
"beginner": 0.5385209918022156,
"expert": 0.2699779272079468
}
|
9,497
|
Hello, you are an Eclipse RCP developer and you know a lot about the SWT library. I need your help with an SWT chart I'm working on. I want to print a duration (as a string) as the x-axis labels of my chart. Here's how I do it currently:
String[] xAxisLabelTime = new String[xListTime.size()];
double[] xAxisLabelDist = new double[xList.size()];
int i = 0;
for(String label : xListTime) {
xAxisLabelTime[i] = label;
i++;
}
IAxisSet axisSet = chart.getAxisSet();
axisSet.getXAxis(0).enableCategory(true);
// Set the filtered label array with a label step of 10
String[] filteredLabels = getFilteredLabels(xAxisLabelTime, 400);
axisSet.getXAxis(0).setCategorySeries(filteredLabels);
axisSet.getXAxis(0).getTick().setTickLabelAngle(45);
With the function getFilteredLabels :
private String[] getFilteredLabels(String[] originalLabels, int labelStep) {
String[] filteredLabels = new String[originalLabels.length];
for (int i = 0; i < originalLabels.length; i++) {
if (i % labelStep == 0) {
filteredLabels[i] = originalLabels[i];
} else {
filteredLabels[i] = "";
}
}
return filteredLabels;
}
The function "getFilteredLabels" is very important, because my chart have a lot of points (like more than 20000). So if I don't do that, all the string are printed on the xAxis, and the chart overload and becomes very lagy. But the problem is that even with putting an empty string, the chart is still overloading because all the points are printed (with some who have an empty string of course. I cannot put null instead of "" because swt don't acept it. So now I'm stuck with a very lagy chart because the xAxis is overloaded with a too big String[]. But it has to be of this size in order to have all the points of my curves. Do you know how I can make some points disapear in order to fix the chart ? Remember to use SWT Methods that exists please.
|
603e947d5d9b629b2ba00ee60638b393
|
{
"intermediate": 0.595728874206543,
"beginner": 0.30370062589645386,
"expert": 0.10057047754526138
}
|
9,498
|
I am looking to simulate a system with the following specifications:
The main components of the system are as follows:
a. Sensors that continuously generate two types of packets: Type A packets and Type B packets.
b. Type A packets are processed locally at the micro data center with a service time of 4 units. If there is a free server or an available local buffer, they are handled there. Otherwise, they are forwarded to the cloud data center for processing, with a delay of 1 unit. The cloud data center takes 2 units of service time to process these packets. If there are no free cloud servers and the cloud buffer is full, the packet is dropped, and the dropped packet counter is incremented.
c. Type B packets undergo pre-processing locally at the micro data center with a service time of 2 units. Then, they are forwarded to the cloud data center for complete processing, with a delay of 1 unit. The cloud data center takes 4 units of service time to process these packets. If there are no free local servers and the local buffer is full, the packets bypass local preprocessing and are directly forwarded to the cloud data center. In such cases, the cloud data center handles their processing entirely, requiring 5 units of service time. If there are no free cloud servers and the cloud buffer is full, the packet is dropped, and the dropped packet counter is incremented.
d. The fraction of Type B packets (from the total packets of Type A and Type B) is represented by 'f' and can be modified.
The micro data center consists of a single buffer (local buffer) and multiple edge servers (the number of edge servers is modifiable). The micro data center can forward packets to the cloud. Please note that whenever a packet is forwarded to the cloud, a delay of 1 unit is considered.
The cloud data center consists of a single buffer (cloud buffer) and multiple edge servers (the number of edge servers is modifiable).
Please use this information to simulate the system accordingly.
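A minimal sketch of two of the decision points described above, with invented variable names; the full discrete-event simulation (event queue, clocks, service completions) is not shown:

```python
import random

def classify_packet(f, rng=random.random):
    """Return 'B' with probability f, else 'A' (f = fraction of Type B packets)."""
    return 'B' if rng() < f else 'A'

def route_type_a(local_server_free, local_buffer_free,
                 cloud_server_free, cloud_buffer_free):
    """Decide where a Type A packet goes, following the rules above."""
    if local_server_free or local_buffer_free:
        return 'local'    # handled at the micro data center, service time 4
    if cloud_server_free or cloud_buffer_free:
        return 'cloud'    # forwarded with a 1-unit delay, service time 2
    return 'dropped'      # increment the dropped-packet counter
```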
|
419a65588512d13c4863d45d0c31ab34
|
{
"intermediate": 0.2922475039958954,
"beginner": 0.31076279282569885,
"expert": 0.3969896733760834
}
|
9,499
|
For a combined dataframe with users who clicked on the success_full_df advertising message, display a summary table with statistics for string columns (Ad Topic Line, City, Country)
|
4103be72ce7a765e73b72cb1d0edff59
|
{
"intermediate": 0.37370315194129944,
"beginner": 0.275759220123291,
"expert": 0.35053762793540955
}
|
9,500
|
numpy.float64 object cannot be interpreted as an integer
how to solve this
|
80eebf8b71b1870055610984aef9f657
|
{
"intermediate": 0.3672761619091034,
"beginner": 0.40648454427719116,
"expert": 0.22623926401138306
}
|
9,501
|
qwidget hide after showNormal
|
dd34e79b08fc193d95c38ffc0607895b
|
{
"intermediate": 0.37170878052711487,
"beginner": 0.3510633707046509,
"expert": 0.27722787857055664
}
|
9,502
|
how to call a java static function in Rhino engine?
|
7704b876a1fcd8dd9698f171238033cc
|
{
"intermediate": 0.47712597250938416,
"beginner": 0.3341203033924103,
"expert": 0.18875367939472198
}
|
9,503
|
how to call a java static function in javascript code using rhino engine?
|
76aa8daf05e254243fe3850b9e806c28
|
{
"intermediate": 0.5002761483192444,
"beginner": 0.30794891715049744,
"expert": 0.19177499413490295
}
|
9,504
|
i have this function:
export async function createNewMember(req, body) {
  console.log("— createNewMember");
  const endpointNewMembership = "/v1/memberships:grant";
  const CONNECTION = GetStoreConnection(req);
  console.log("— CONNECTION createNewMember", CONNECTION);
  if (!CONNECTION) {
    console.error("CONNECTION PARAMETERS NOT FOUND!!!");
  }
  const domain = _.get(CONNECTION, "url");
  console.log("— domain", domain);
  const AuthStr = "Bearer ".concat(_.get(CONNECTION, "token"));
  console.log("— AuthStr", AuthStr);
  const urlNewMembershipWithRewards = domain + endpointNewMembership;
  const creatMembershipResponse = await axios
    .post({
      url: urlNewMembershipWithRewards,
      method: "POST",
      headers: { Authorization: AuthStr },
      data: body,
    })
    .then((response) => {
      console.log("— response CreateMembership", response.data);
    })
    .catch((error) => {
      console.error("— ERROR in CREATE MEMBERSHIP on LAVA", error);
    });
  return creatMembershipResponse;
}
which gives me this error:
— ERROR in CREATE MEMBERSHIP on LAVA TypeError [ERR_INVALID_ARG_TYPE]: The "url" argument must be of type string. Received an instance of Object
2023-05-31 10:27:37 │ backend │ at new NodeError (node:internal/errors:399:5)
2023-05-31 10:27:37 │ backend │ at validateString (node:internal/validators:163:11)
2023-05-31 10:27:37 │ backend │ at Url.parse (node:url:183:3)
2023-05-31 10:27:37 │ backend │ at Object.urlParse [as parse] (node:url:154:13)
2023-05-31 10:27:37 │ backend │ at dispatchHttpRequest (C:\dev\shopify\Lava-shopify-adapter\web\node_modules\axios\lib\adapters\http.js:133:22)
2023-05-31 10:27:37 │ backend │ at new Promise (<anonymous>)
2023-05-31 10:27:37 │ backend │ at httpAdapter (C:\dev\shopify\Lava-shopify-adapter\web\node_modules\axios\lib\adapters\http.js:49:10)
2023-05-31 10:27:37 │ backend │ at dispatchRequest (C:\dev\shopify\Lava-shopify-adapter\web\node_modules\axios\lib\core\dispatchRequest.js:58:10)
2023-05-31 10:27:37 │ backend │ at Axios.request (C:\dev\shopify\Lava-shopify-adapter\web\node_modules\axios\lib\core\Axios.js:109:15)
2023-05-31 10:27:37 │ backend │ at Axios.httpMethod [as post] (C:\dev\shopify\Lava-shopify-adapter\web\node_modules\axios\lib\core\Axios.js:144:19) {
2023-05-31 10:27:37 │ backend │ code: 'ERR_INVALID_ARG_TYPE'
2023-05-31 10:27:37 │ backend │ }
|
020b0f51c580709cdd3cd9962a8acbce
|
{
"intermediate": 0.32242465019226074,
"beginner": 0.4307737648487091,
"expert": 0.24680159986019135
}
|
9,505
|
I need a code to simulate a system with the following specifications:
Components of the system:
Sensors continuously generate two types of packets: Type A and Type B.
Type A packets are processed locally at the micro data center. The service time for Type A packets is 4 units. If a free server or an available local buffer exists, the packets are handled locally. Otherwise, they are forwarded (with a delay of 1 unit) to the cloud data center for processing, which takes 2 units of service time. If there are no free cloud servers and the cloud buffer is full, the packet is dropped, and the dropped packet counter is incremented.
Type B packets undergo pre-processing locally at the micro data center. The service time for Type B packets is 2 units. After pre-processing, they are forwarded (with a delay of 1 unit) to the cloud data center for complete processing, which takes 4 units of service time. If there are no free local servers and the local buffer is full, the packets are directly forwarded (with a delay of 1 unit) to the cloud data center. In such cases, the cloud data center handles their processing entirely, requiring 5 units of service time. If there are no free cloud servers and the cloud buffer is full, the packet is dropped, and the dropped packet counter is incremented.
There is a fraction of the total packets (Type A and Type B) represented by "f" which can be modified.
The micro data center consists of a single buffer (local buffer) and multiple edge servers. The number of edge servers can be modified. The micro data center can forward packets to the cloud data center. Note that whenever a packet is forwarded to the cloud, a delay of 1 unit is considered.
The cloud data center consists of a single buffer (cloud buffer) and multiple cloud servers. The number of cloud servers can be modified.
Notes:
Although service times and delays are specified, they should be modifiable using separate unique variables.
The arrival time of packets should be modifiable using a separate unique variable. The inter-arrival times are exponentially distributed.
The simulation time should be modifiable.
The code should provide measurements during the simulation in a separate class called "measurements." The measurements include the number of dropped packets, the number of packet arrivals (Type A and Type B), and the average delay per packet. These measurements are printed after simulation.
Code Components:
Packet class: Takes "f" as an argument and generates packets randomly of Type A or Type B.
Local micro data center class: Contains five functions - process packet A, preprocess packet B, forward packet A, forward packet B, and forward packet B without preprocessing. Each function can be triggered or not based on the scenario. The class also has attributes - "num_edge_server" to specify the number of edge servers and "buffer_size" to specify the size of the local micro data center buffer.
Cloud data center class: Contains three functions - process packet A, process packet B, and entire process packet B. It also has attributes - "num_cloud_server" to specify the number of cloud servers and "buffer_size" to specify the cloud buffer size.
System simulation class: Contains two main functions - process packet A and process packet B. The main simulation happens here.
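The packet-generation and measurement pieces of this spec can be sketched as follows (class and variable names are assumptions; the full micro/cloud queueing logic is deliberately left out):

```python
import random

class Packet:
    """Packet of random type; `f` is the assumed fraction of Type A packets."""
    def __init__(self, f, arrival_time):
        self.ptype = "A" if random.random() < f else "B"
        self.arrival_time = arrival_time
        self.departure_time = None

class Measurements:
    """Counters printed after the simulation, as the spec requires."""
    def __init__(self):
        self.dropped = 0
        self.arrivals = {"A": 0, "B": 0}
        self.total_delay = 0.0
        self.completed = 0

    def record_arrival(self, packet):
        self.arrivals[packet.ptype] += 1

    def record_drop(self):
        self.dropped += 1

    def record_departure(self, packet):
        self.completed += 1
        self.total_delay += packet.departure_time - packet.arrival_time

    def average_delay(self):
        return self.total_delay / self.completed if self.completed else 0.0

def generate_arrivals(mean_interarrival, sim_time, f):
    """Exponentially distributed inter-arrival times up to sim_time."""
    t, packets = 0.0, []
    while True:
        t += random.expovariate(1.0 / mean_interarrival)
        if t > sim_time:
            break
        packets.append(Packet(f, t))
    return packets
```

The two data-center classes would then consume these packets, using `record_drop()` whenever both buffer and servers are full, and `record_departure()` when service completes.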
|
2d0233e388155ebb82509e65c18a18ca
|
{
"intermediate": 0.30454978346824646,
"beginner": 0.44984251260757446,
"expert": 0.2456076741218567
}
|
9,506
|
QPixmap(): argument 1 has unexpected type 'Image'
|
be1eb90669967e315d3927e7e5188464
|
{
"intermediate": 0.40520161390304565,
"beginner": 0.26930883526802063,
"expert": 0.32548952102661133
}
|
9,507
|
I have two VBA codes that work perfectly for the outcomes expected.
The VBA codes are listed below but relate to completely different sheets.
I would like to combine them to accomplish the following outcome:
Workbook 'Service Providers' contains the sheet 'Events'
Workbook 'Premises Requests' contains the sheet 'PRtoday'
I would like to select a cell in a row in the sheet 'PRtoday'
then run a vba code that will do the following:
It will check if the Workbook 'Service Providers' is open or not.
If it is not open, it will Pop Up a message "Workbook 'Service Providers' not open" then it will exit the sub.
If workbook 'Service Providers' is open,
the cell row selected of the row in 'Workbook 'Premises Requests' sheet 'Prtoday' from Row B:D will be copied to the other open 'Workbook 'Service Providers' sheet 'Events'
and pasted into the next available empty row in the following order,
PRtoday column D will be pasted into Events column A
PRtoday column C will be pasted into Events column C
PRtoday column B will be pasted into Events column D
After the paste, a message will pop up saying 'Request successfully copied to Events'
Here are the two working VBA codes that could possibly be combined to provide the desired outcome:
Sub PremisesRequestToNote()
If Selection.Cells.Count = 1 Then
Dim noteText As Range
Set noteText = Range("B" & Selection.Row & ":E" & Selection.Row)
Dim cell As Range
Dim text As String
For Each cell In noteText
text = text & cell.Value & vbCrLf
Next cell
Shell "notepad.exe", vbNormalFocus
SendKeys text
Application.CutCopyMode = False
End If
Application.CutCopyMode = True
End Sub
Sub CopyRemindersToEvents()
Dim prtodaySheet As Worksheet
Set prtodaySheet = ThisWorkbook.Sheets("PRtoday")
Dim eventsSheet As Worksheet
Set eventsSheet = Workbooks("Service Providers").Sheets("Events")
Dim lastRow As Long
lastRow = prtodaySheet.Cells(Rows.Count, "E").End(xlUp).Row
Dim i As Long
For i = 23 To lastRow
If Not IsEmpty(prtodaySheet.Range("B" & i).Value) And _
Not IsEmpty(prtodaySheet.Range("C" & i).Value) And _
Not IsEmpty(prtodaySheet.Range("D" & i).Value) Then
Dim destinationRow As Long
destinationRow = eventsSheet.Cells(Rows.Count, "A").End(xlUp).Row + 1
eventsSheet.Range("A" & destinationRow).Value = prtodaySheet.Range("D" & i).Value 'Event A is Date
eventsSheet.Range("C" & destinationRow).Value = prtodaySheet.Range("C" & i).Value 'Event C is Details
eventsSheet.Range("D" & destinationRow).Value = prtodaySheet.Range("B" & i).Value 'Event D is Notes
End If
Next i
Range("B22").Value = Now()
End Sub
|
7e26bb605b55e7216471ea563561050a
|
{
"intermediate": 0.32638996839523315,
"beginner": 0.4163176417350769,
"expert": 0.25729233026504517
}
|
9,508
|
{
extend: 'collection',
text: 'Export',
className: 'btn btn-success',
buttons: [
{
extend: 'excel',
title: 'Daily Bible Log - Bookmarks',
action: function(e, dt, node, config) {
if (dt.rows({ selected: true }).count() > 0) {
dt.buttons.exportData({ selected: true });
} else {
dt.buttons.exportData();
}
}
},
{
extend: 'csv',
title: 'Daily Bible Log - Bookmarks',
action: function(e, dt, node, config) {
if (dt.rows({ selected: true }).count() > 0) {
dt.buttons.exportData({ selected: true });
} else {
dt.buttons.exportData();
}
}
},
{
extend: 'pdf',
title: 'Daily Bible Log - Bookmarks',
orientation: 'landscape',
pageSize: 'LEGAL',
action: function(e, dt, node, config) {
if (dt.rows({ selected: true }).count() > 0) {
dt.button(this).trigger('selected.export');
} else {
dt.button(this).trigger('all.export');
}
}
}]
}
this isn't triggering the download in DataTables
|
b3c12adf6b94f46cd8e41a28ada38899
|
{
"intermediate": 0.33691689372062683,
"beginner": 0.40185487270355225,
"expert": 0.2612282633781433
}
|
9,509
|
C __VA_ARGS__
|
1344280b42b4255458237800858a152b
|
{
"intermediate": 0.22646212577819824,
"beginner": 0.2501240074634552,
"expert": 0.523413896560669
}
|
9,510
|
hi
|
16629d4d5e6117b420e856da29cfe143
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
9,511
|
const TradingCup = () => {
const {cup, maxVolume, readyState, cupSubscribe, cupUnsubscribe} = useRustWsServer();
return (
<> </>
);
};
type CupItem = {
futures_price_micro: number;
quantity: number;
spot_quantity: number;
side: string;
};
export function useRustWsServer() {
const [connection, setConnection] = useState<WebSocket|null>(null);
const [cup, setCup] = useState<Array<CupItem>>([]);
useEffect(() => {
const url = process.env.NEXT_PUBLIC_RUST_WS_SERVER;
if (url) {
const ws = new WebSocket(url);
setConnection(ws);
}
}, []);
useEffect(() => {
if (null !== connection) {
connection.onmessage = (message: MessageEvent) => {
if (!message.data) return;
const data = JSON.parse(message.data);
if (!data?.commands || data.commands.length === 0) return;
const domUpdate = data.commands.find((item: any) => "undefined" !== typeof item.SymbolDomUpdate);
if (!domUpdate) return;
setCup(splitCupSides(domUpdate.SymbolDomUpdate.dom_rows));
};
connection.onopen = () => {
setReadyState(ReadyState.OPEN);
};
connection.onclose = () => {
setReadyState(ReadyState.CLOSED);
};
}
}, [connection]);
return {
readyState,
cupSubscribe,
cupUnsubscribe,
cup,
maxVolume,
};
}
export default class ClustersClientControllers {
renderTrades = () => {
this.clearOrderFeed();
reduce(this.tradesArr, (prev, cur, index) => {
this.renderTrade(prev, cur, this.tradesArr.length - (index as any))
prev = cur
console.log(prev);
return prev
})
}
clearOrderFeed = () => {
this.orderFeedCtx.clearRect(0, 0, this.canvasWidth, this.canvasHeight)
}
renderTrade = (prev, item, index) => {
//const anomalyQty = this.root.instruments[this.root.selectedSymbol].anomalies.anomaly_qty;
console.log(item);
// sample item: { price_float: 0.4139, price_micro: 4139000, quantity: 6, side: "Buy", time: 1685607036920 }
//if (size < 1) return;
const ctx = this.orderFeedCtx
let xPos = (this.canvasWidth - (index * (bubbleSize * 1.5))) - bubbleSize;
const offsetFromTop = this.root.tradingDriverController.upperPrice - item.price_micro;
const y = ((offsetFromTop / this.root.tradingDriverController.getZoomedStepMicro()) - 1) * rowHeight
const label = abbreviateNumber(item.quantity * item.price_float)
const {width: textWidth} = ctx.measureText(label);
const itemUsdt = item.quantity * item.price_float;
const tradeFilter = this.getTradeFilterBySymbol(this.getSymbol())
const maxUsdtBubbleAmount = tradeFilter * 30;
const maxPixelBubbleAmount = 35;
const realBubbleSize = (itemUsdt / maxUsdtBubbleAmount) * maxPixelBubbleAmount
const size = clamp(realBubbleSize, (textWidth/2)+3, maxPixelBubbleAmount)
const bubbleX = xPos;
const bubbleY = y + 8;
ctx.beginPath();
let bigRatio = (realBubbleSize / maxPixelBubbleAmount) / 3;
bigRatio = bigRatio > 0.95 ? 0.95 : bigRatio;
ctx.fillStyle = item.side === "Sell" ? deepGreen.lighten(bigRatio).toString() : deepRed.lighten(bigRatio).toString()
ctx.strokeStyle = 'black';
ctx.arc(xPos, bubbleY, size, 0, 2 * Math.PI)
ctx.fill();
ctx.stroke();
ctx.fillStyle = "#FFFFFF"
ctx.fillText(label, bubbleX - (textWidth / 2), (bubbleY + (rowHeight / 2)) - 2)
}
}
1. In the TradingCup component, fetch cup from useRustWsServer and render it the way the renderTrade / renderTrades methods do.
2. renderTrade / renderTrades come from a slightly different component with slightly different data; use the same approach in TradingCup, but only with our data: type CupItem = {
futures_price_micro: number;
quantity: number;
spot_quantity: number;
side: string;
};
3. In the renderTrade() method, quantity and price_float are multiplied to get the volume in $. We don't need that; we will output quantity only.
|
91251f1ef94a9cb29a4dde90687e91cd
|
{
"intermediate": 0.254445880651474,
"beginner": 0.4941536486148834,
"expert": 0.2514004707336426
}
|
9,512
|
hey, how would I calculate the app id for a non-Steam game on Steam?
if indieshortcutdirectory != '':
exe = '$indieshortcutdirectory'
appname = 'IndieGala'
indieappid = zlib.crc32((exe + appname).encode())
print(indieappid)
# Create a new entry for the Steam shortcut
new_entry = {
'appid': f'{str(indieappid)}',
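For reference, the community-documented formula for non-Steam shortcut app IDs takes the CRC32 of exe + appname and forces the high bit on; a 64-bit variant (used e.g. for artwork file names) appends 0x02000000. A sketch under those assumptions:

```python
import zlib

def shortcut_app_id(exe: str, appname: str) -> int:
    """32-bit 'legacy' shortcut app id: crc32(exe + appname) with the high bit set
    (community-documented convention, not an official Valve API)."""
    crc = zlib.crc32((exe + appname).encode("utf-8"))
    return crc | 0x80000000

def shortcut_app_id_64(exe: str, appname: str) -> int:
    """64-bit id: legacy id in the high dword, 0x02000000 in the low dword."""
    return (shortcut_app_id(exe, appname) << 32) | 0x02000000
```

Two caveats about the snippet in the question: Steam is commonly reported to store the exe path wrapped in double quotes before hashing (treat that as an assumption to verify against your shortcuts.vdf), and `exe = '$indieshortcutdirectory'` is a literal string in Python - use the variable itself or an f-string.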
|
87f48bd695213f4a5c74616e5c6e9643
|
{
"intermediate": 0.5393509268760681,
"beginner": 0.3390761911869049,
"expert": 0.12157277762889862
}
|
9,513
|
update the sum of all charge amounts on invoice items to the respective account
|
1900315a1419ba6fa75917412cb11c1e
|
{
"intermediate": 0.4029167890548706,
"beginner": 0.22809895873069763,
"expert": 0.368984192609787
}
|
9,514
|
write me a useEffect hook that checks if a string is empty and, if it is, sends a GET request to an endpoint to fetch and set data
|
afc1a892d1f2d29532225d9cd2034f9f
|
{
"intermediate": 0.786728024482727,
"beginner": 0.07992773503065109,
"expert": 0.13334429264068604
}
|
9,515
|
I used this code: import time
from binance.client import Client
from binance.enums import *
from binance.exceptions import BinanceAPIException
import pandas as pd
import requests
import json
import numpy as np
import pytz
import datetime as dt
date = dt.datetime.now().strftime("%m/%d/%Y %H:%M:%S")
print(date)
url = "https://api.binance.com/api/v1/time"
t = time.time()*1000
r = requests.get(url)
result = json.loads(r.content)
# API keys and other configuration
API_KEY = ''
API_SECRET = ''
client = Client(API_KEY, API_SECRET)
STOP_LOSS_PERCENTAGE = -50
TAKE_PROFIT_PERCENTAGE = 100
MAX_TRADE_QUANTITY_PERCENTAGE = 100
POSITION_SIDE_SHORT = 'SELL'
POSITION_SIDE_LONG = 'BUY'
symbol = 'BTCUSDT'
quantity = 1
order_type = 'MARKET'
leverage = 125
max_trade_quantity_percentage = 1
client = Client(API_KEY, API_SECRET)
def get_klines(symbol, interval, lookback):
url = "https://fapi.binance.com/fapi/v1/klines"
params = {
"symbol": symbol,
"interval": interval,
"startTime": int(dt.datetime.timestamp(dt.datetime.now() - dt.timedelta(minutes=lookback)) * 1000),
"endTime": int(dt.datetime.timestamp(dt.datetime.now()) * 1000)
}
response = requests.get(url, params=params)
data = response.json()
ohlc = []
for d in data:
timestamp = dt.datetime.fromtimestamp(d[0]/1000).strftime('%Y-%m-%d %H:%M:%S')
ohlc.append([
timestamp,
float(d[1]),
float(d[2]),
float(d[3]),
float(d[4]),
float(d[5])
])
df = pd.DataFrame(ohlc, columns=['Open time', 'Open', 'High', 'Low', 'Close', 'Volume'])
df.set_index('Open time', inplace=True)
return df
print(get_klines('BTCUSDT', '1m', 44640))
df = get_klines('BTCUSDT', '1m', 44640)
def signal_generator(df):
open = df["Open"].iloc[-1]
close = df["Close"].iloc[-1]
previous_open = df["Open"].iloc[-2]
previous_close = df["Close"].iloc[-2]
# Bearish pattern
if (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
return 'sell'
# Bullish pattern
elif (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
return 'buy'
# No clear pattern
else:
return ''
df = get_klines('BTCUSDT', '1m', 44640)
def order_execution(symbol, signal, max_trade_quantity_percentage, leverage):
signal = signal_generator(df)
max_trade_quantity = None
account_balance = client.futures_account_balance()
usdt_balance = float([x['balance'] for x in account_balance if x['asset'] == 'USDT'][0])
max_trade_quantity = usdt_balance * max_trade_quantity_percentage/100
# Close long position if signal is opposite
long_position = None
short_position = None
positions = client.futures_position_information(symbol=symbol)
for p in positions:
if p['positionSide'] == 'LONG':
long_position = p
elif p['positionSide'] == 'SHORT':
short_position = p
if long_position is not None and short_position is not None:
print("Multiple positions found. Closing both positions.")
if long_position is not None:
client.futures_create_order(
symbol=symbol,
side=SIDE_SELL,
type=ORDER_TYPE_MARKET,
quantity=long_position['positionAmt'],
reduceOnly=True
)
time.sleep(1)
if short_position is not None:
client.futures_create_order(
symbol=symbol,
side=SIDE_BUY,
type=ORDER_TYPE_MARKET,
quantity=short_position['positionAmt'],
reduceOnly=True
)
time.sleep(1)
print("Both positions closed.")
if signal == 'buy':
position_side = POSITION_SIDE_LONG
opposite_position = short_position
elif signal == 'sell':
position_side = POSITION_SIDE_SHORT
opposite_position = long_position
else:
print("Invalid signal. No order placed.")
return
order_quantity = 0
if opposite_position is not None:
order_quantity = min(max_trade_quantity, abs(float(opposite_position['positionAmt'])))
if opposite_position is not None and opposite_position['positionSide'] != position_side:
print("Opposite position found. Closing position before placing order.")
client.futures_create_order(
symbol=symbol,
side=SIDE_SELL if
opposite_position['positionSide'] == POSITION_SIDE_LONG
else
SIDE_BUY,
type=ORDER_TYPE_MARKET,
quantity=order_quantity,
reduceOnly=True,
positionSide=opposite_position['positionSide']
)
time.sleep(1)
order = client.futures_create_order(
symbol=symbol,
side=SIDE_BUY if signal == 'buy' else SIDE_SELL,
type=ORDER_TYPE_MARKET,
quantity=order_quantity,
reduceOnly=False,
timeInForce=TIME_IN_FORCE_GTC,
positionSide=position_side, # add position side parameter
leverage=leverage
)
if order is None:
print("Order not placed successfully. Skipping setting stop loss and take profit orders.")
return
order_id = order['orderId']
print(f"Placed {signal} order with order ID {order_id} and quantity {order_quantity}")
time.sleep(1)
# Set stop loss and take profit orders
# Get the order details to determine the order price
order_info = client.futures_get_order(symbol=symbol, orderId=order_id)
if order_info is None:
print("Error getting order information. Skipping setting stop loss and take profit orders.")
return
order_price = float(order_info['avgPrice'])
# Set stop loss and take profit orders
stop_loss_price = order_price * (1 + STOP_LOSS_PERCENTAGE / 100) if signal == 'sell' else order_price * (1 - STOP_LOSS_PERCENTAGE / 100)
take_profit_price = order_price * (1 + TAKE_PROFIT_PERCENTAGE / 100) if signal == 'buy' else order_price * (1 - TAKE_PROFIT_PERCENTAGE / 100)
stop_loss_order = client.futures_create_order(
symbol=symbol,
side=SIDE_SELL if signal == 'buy' else SIDE_BUY,
type=ORDER_TYPE_STOP_LOSS,
quantity=order_quantity,
stopPrice=stop_loss_price,
reduceOnly=True,
timeInForce=TIME_IN_FORCE_GTC,
positionSide=position_side # add position side parameter
)
take_profit_order = client.futures_create_order(
symbol=symbol,
side=SIDE_SELL if signal == 'buy' else SIDE_BUY,
type=ORDER_TYPE_LIMIT,
quantity=order_quantity,
price=take_profit_price,
reduceOnly=True,
timeInForce=TIME_IN_FORCE_GTC,
positionSide=position_side # add position side parameter
)
# Print order creation confirmation messages
print(f"Placed stop loss order with stop loss price {stop_loss_price}")
print(f"Placed take profit order with take profit price {take_profit_price}")
time.sleep(1)
while True:
df = get_klines('BTCUSDT', '1m', 44640)
if df is not None:
signal = signal_generator(df)
print(f"The signal time is: {dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')} :{signal}")
if signal:
order_execution('BTCUSDT', signal, MAX_TRADE_QUANTITY_PERCENTAGE, leverage)
time.sleep(1)
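A standalone, testable version of the two-candle engulfing check used above (note that the DataFrame built by get_klines() capitalizes its column names, so they must be read as 'Open'/'Close'):

```python
import pandas as pd

def signal_generator(df: pd.DataFrame) -> str:
    """Engulfing-candle check over the last two rows of an OHLC frame."""
    open_, close = df["Open"].iloc[-1], df["Close"].iloc[-1]
    prev_open, prev_close = df["Open"].iloc[-2], df["Close"].iloc[-2]
    # Bearish engulfing: red candle whose body engulfs the previous green one
    if open_ > close and prev_open < prev_close and close < prev_open and open_ >= prev_close:
        return "sell"
    # Bullish engulfing: green candle whose body engulfs the previous red one
    if open_ < close and prev_open > prev_close and close > prev_open and open_ <= prev_close:
        return "buy"
    return ""
```

This keeps the pattern logic identical to the original while making it unit-testable without any exchange connection.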
|
b8fd9e9cb61acc9bb57d8f3e4b812e8b
|
{
"intermediate": 0.4164886772632599,
"beginner": 0.39910563826560974,
"expert": 0.18440569937229156
}
|
9,516
|
<div id="container" style="width: 75%;">
<canvas id="population-chart" data-url="{% url 'app:chart' %}"></canvas>
</div>
<script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
<script>
$(function () {
var $populationChart = $("#population-chart");
$.ajax({
url: $populationChart.data("url"),
success: function (data) {
var ctx = $populationChart[0].getContext("2d");
new Chart(ctx, {
type: 'bar',
data: {
labels: data.labels,
datasets: [{
label: 'Element',
backgroundColor: '#800000',
data: data.data
}]
},
options: {
responsive: true,
legend: {
position: 'top',
},
title: {
display: true,
text: 'Element Bar Chart'
}
}
});
}
});
});
</script>
how to convert this canvas to an image?
|
d900564e6f09dd1f252575f51c8ec6e6
|
{
"intermediate": 0.3510013520717621,
"beginner": 0.35360097885131836,
"expert": 0.2953975796699524
}
|
9,517
|
how are you
|
597bc675e5b154ea7c229fb7efc51c11
|
{
"intermediate": 0.38165968656539917,
"beginner": 0.3264302909374237,
"expert": 0.2919100522994995
}
|
9,518
|
<div id="container" style="width: 75%;">
<canvas id="population-chart" data-url="{% url 'app:chart' %}"></canvas>
</div>
<script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
<script>
$(function () {
var $populationChart = $("#population-chart");
$.ajax({
url: $populationChart.data("url"),
success: function (data) {
var ctx = $populationChart[0].getContext("2d");
new Chart(ctx, {
type: 'bar',
data: {
labels: data.labels,
datasets: [{
label: 'Element',
backgroundColor: '#800000',
data: data.data
}]
},
options: {
responsive: true,
legend: {
position: 'top',
},
title: {
display: true,
text: 'Element Bar Chart'
}
}
});
}
});
});
</script>
how to get the image of this canvas?
|
13f9f6a41f34a336f2a71ba367017ad5
|
{
"intermediate": 0.3947300612926483,
"beginner": 0.3301481604576111,
"expert": 0.2751217484474182
}
|
9,519
|
what I want to do here is to make a visual matrix constructor on-hover menu, which you can use to update these vertices and edges arrays on the fly to model or draw in that visual matrix constructor. I'm not sure how to correctly interpret that VMC. Any ideas?
const canvas = document.createElement('canvas');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
document.body.appendChild(canvas);
const ctx = canvas.getContext('2d');
const vertices = [
[0, 0, 0],
[0, 1, 0],
[1, 1, 0],
[1, 0, 0],
[0, 0, 1],
[0, 1, 1],
[1, 1, 1],
[1, 0, 1],
];
const edges = [
[0, 1],
[1, 2],
[2, 3],
[3, 0],
[0, 4],
[1, 5],
[2, 6],
[3, 7],
[4, 5],
[5, 6],
[6, 7],
[7, 4],
];
// Transformation parameters
const scale = 0.025;
const zoom = 1;
const offsetX = 0.5;
const offsetY = 0.5;
function rotateX(angle) {
const c = Math.cos(angle);
const s = Math.sin(angle);
return [
[1, 0, 0],
[0, c, -s],
[0, s, c],
];
}
function rotateY(angle) {
const c = Math.cos(angle);
const s = Math.sin(angle);
return [
[c, 0, s],
[0, 1, 0],
[-s, 0, c],
];
}
function rotateZ(angle) {
const c = Math.cos(angle);
const s = Math.sin(angle);
return [
[c, -s, 0],
[s, c, 0],
[0, 0, 1],
];
}
function project(vertex, scale, offsetX, offsetY, zoom) {
const [x, y, z] = vertex;
const posX = (x - offsetX) * scale;
const posY = (y - offsetY) * scale;
const posZ = z * scale;
return [
(posX * (zoom + posZ) + canvas.width / 2),
(posY * (zoom + posZ) + canvas.height / 2),
];
}
function transform(vertex, rotationMatrix) {
const [x, y, z] = vertex;
const [rowX, rowY, rowZ] = rotationMatrix;
return [
x * rowX[0] + y * rowX[1] + z * rowX[2],
x * rowY[0] + y * rowY[1] + z * rowY[2],
x * rowZ[0] + y * rowZ[1] + z * rowZ[2],
];
}
function extraterrestrialTransformation(vertex, frequency, amplitude) {
const [x, y, z] = vertex;
const cosX = (Math.cos(x * frequency) * amplitude);
const cosY = (Math.cos(y * frequency) * amplitude);
const cosZ = (Math.cos(z * frequency) * amplitude);
return [x + cosX, y + cosY, z + cosZ];
}
let angleX = 0;
let angleY = 0;
let angleZ = 0;
function getDeviation(maxDeviation) {
const t = Date.now() / 1000;
const frequency = 100 / 5;
const amplitude = maxDeviation / 0.5;
const deviation = Math.sin(t * frequency) * amplitude;
return deviation.toFixed(3);
}
function render() {
ctx.fillStyle = '#FFF';
ctx.fillRect(0, 0, canvas.width, canvas.height);
const rotX = rotateX(angleX);
const rotY = rotateY(angleY);
const rotZ = rotateZ(angleZ);
// Extraterrestrial transformation parameters
const frequency = 1;
const amplitude = 0.8;
const transformedVertices = vertices.map(vertex => {
const extraterrestrialVertex = extraterrestrialTransformation(vertex, frequency, amplitude);
const cx = extraterrestrialVertex[0] - offsetX;
const cy = extraterrestrialVertex[1] - offsetY;
const cz = extraterrestrialVertex[2] - offsetY;
const rotated = transform(transform(transform([cx, cy, cz], rotX), rotY), rotZ);
return [
rotated[0] + offsetX,
rotated[1] + offsetY,
rotated[2] + offsetY,
];
});
const projectedVertices = transformedVertices.map(vertex => project(vertex, canvas.height * scale, offsetX, offsetY, zoom));
ctx.lineWidth = 2;
ctx.strokeStyle = 'hsla(' + (angleX + angleY) * 100 + ', 100%, 50%, 0.8)';
ctx.beginPath();
for (let edge of edges) {
const [a, b] = edge;
const [x1, y1] = projectedVertices[a];
const [x2, y2] = projectedVertices[b];
const dist = Math.hypot(x2 - x1, y2 - y1);
const angle = Math.atan2(y2 - y1, x2 - x1);
// Calculate control point for curved edge
const cpDist = 0.01 * dist;
const cpX = (x1 + x2) / 2 + cpDist * Math.cos(angle - Math.PI / 2) * getDeviation(2);
const cpY = (y1 + y2) / 2 + cpDist * Math.sin(angle - Math.PI / 2) * getDeviation(2);
ctx.moveTo(x1, y1);
ctx.quadraticCurveTo(cpX, cpY, x2, y2);
}
ctx.stroke();
angleX += +getDeviation(0.02);
angleY += +getDeviation(0.02);
angleZ += +getDeviation(0.02);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
window.addEventListener("resize", () => {
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
});
|
24b008fe99ded381f516a38a2bb6e915
|
{
"intermediate": 0.3572104275226593,
"beginner": 0.31404921412467957,
"expert": 0.3287404179573059
}
|
9,520
|
print the FPT University logo using C code
|
520b023ccd3ea18ba8190f6eb1205049
|
{
"intermediate": 0.2674412727355957,
"beginner": 0.4213084280490875,
"expert": 0.311250239610672
}
|
9,521
|
how to check if the user is authenticated via ReactiveSecurityContextHolder
in a Mono<Void> filter function
|
195fa8326b94c3dbb0a223ff6faeb70d
|
{
"intermediate": 0.46945855021476746,
"beginner": 0.2716708779335022,
"expert": 0.2588706314563751
}
|
9,522
|
Python Qt. The program opens a PDF, converts the first page of the PDF into an image, and shows it in a Qt form.
|
27242227b4054b7dbfad85b82d9beec6
|
{
"intermediate": 0.44869425892829895,
"beginner": 0.23379696905612946,
"expert": 0.317508727312088
}
|
9,523
|
Hi. I need to create a process from a program (WinAPI C++) whose window will be a child of the program's HWND.
|
ad9d1f8f1b05ae81dcc225a995db6c8d
|
{
"intermediate": 0.5650029182434082,
"beginner": 0.1061682179570198,
"expert": 0.328828901052475
}
|
9,524
|
I need to create a process with C++ WinAPI, for example taskmgr.exe, and the taskmgr window must be a child of the window of the program which creates the process.
|
f68fbc6e5df27f1d79caeabb02968072
|
{
"intermediate": 0.6579562425613403,
"beginner": 0.14073239266872406,
"expert": 0.20131133496761322
}
|
9,525
|
List in SwiftUI
|
98dfdcb6b3133c51be4c98b683b7cbd9
|
{
"intermediate": 0.3081473112106323,
"beginner": 0.5313460826873779,
"expert": 0.16050666570663452
}
|
9,526
|
What is the mistake?
import SwiftUI
struct ContentView: View {
@State var items: [String] = []
@State var newItem: String = ""
var body: some View {
VStack {
TextField("Add Item", text: newItem)
.textFieldStyle(RoundedBorderTextFieldStyle())
.padding()
Button("Add") {
items.append(newItem)
newItem = ""
}
.padding()
List(items, id: .self) { item in
Text(item)
}
}
.navigationTitle("To Do List")
}
}
|
9f7de4f335531bfda81551c8e77a3c4f
|
{
"intermediate": 0.38678476214408875,
"beginner": 0.2553635537624359,
"expert": 0.35785171389579773
}
|
9,527
|
what I want to do here is to make a visual matrix constructor on-hover menu, which you can use to update these vertices and edges arrays on the fly to model or draw in that visual matrix constructor. I'm not sure how to correctly interpret that VMC. Any ideas?
const canvas = document.createElement('canvas');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
document.body.appendChild(canvas);
const ctx = canvas.getContext('2d');
const vertices = [
[0, 0, 0],
[0, 1, 0],
[1, 1, 0],
[1, 0, 0],
[0, 0, 1],
[0, 1, 1],
[1, 1, 1],
[1, 0, 1],
];
const edges = [
[0, 1],
[1, 2],
[2, 3],
[3, 0],
[0, 4],
[1, 5],
[2, 6],
[3, 7],
[4, 5],
[5, 6],
[6, 7],
[7, 4],
];
// Transformation parameters
const scale = 0.025;
const zoom = 1;
const offsetX = 0.5;
const offsetY = 0.5;
function rotateX(angle) {
const c = Math.cos(angle);
const s = Math.sin(angle);
return [
[1, 0, 0],
[0, c, -s],
[0, s, c],
];
}
function rotateY(angle) {
const c = Math.cos(angle);
const s = Math.sin(angle);
return [
[c, 0, s],
[0, 1, 0],
[-s, 0, c],
];
}
function rotateZ(angle) {
const c = Math.cos(angle);
const s = Math.sin(angle);
return [
[c, -s, 0],
[s, c, 0],
[0, 0, 1],
];
}
function project(vertex, scale, offsetX, offsetY, zoom) {
const [x, y, z] = vertex;
const posX = (x - offsetX) * scale;
const posY = (y - offsetY) * scale;
const posZ = z * scale;
return [
(posX * (zoom + posZ) + canvas.width / 2),
(posY * (zoom + posZ) + canvas.height / 2),
];
}
function transform(vertex, rotationMatrix) {
const [x, y, z] = vertex;
const [rowX, rowY, rowZ] = rotationMatrix;
return [
x * rowX[0] + y * rowX[1] + z * rowX[2],
x * rowY[0] + y * rowY[1] + z * rowY[2],
x * rowZ[0] + y * rowZ[1] + z * rowZ[2],
];
}
function extraterrestrialTransformation(vertex, frequency, amplitude) {
const [x, y, z] = vertex;
const cosX = (Math.cos(x * frequency) * amplitude);
const cosY = (Math.cos(y * frequency) * amplitude);
const cosZ = (Math.cos(z * frequency) * amplitude);
return [x + cosX, y + cosY, z + cosZ];
}
let angleX = 0;
let angleY = 0;
let angleZ = 0;
function getDeviation(maxDeviation) {
const t = Date.now() / 1000;
const frequency = 100 / 5;
const amplitude = maxDeviation / 0.5;
const deviation = Math.sin(t * frequency) * amplitude;
return deviation.toFixed(3);
}
function render() {
ctx.fillStyle = '#FFF';
ctx.fillRect(0, 0, canvas.width, canvas.height);
const rotX = rotateX(angleX);
const rotY = rotateY(angleY);
const rotZ = rotateZ(angleZ);
// Extraterrestrial transformation parameters
const frequency = 1;
const amplitude = 0.8;
const transformedVertices = vertices.map(vertex => {
const extraterrestrialVertex = extraterrestrialTransformation(vertex, frequency, amplitude);
const cx = extraterrestrialVertex[0] - offsetX;
const cy = extraterrestrialVertex[1] - offsetY;
const cz = extraterrestrialVertex[2] - offsetY;
const rotated = transform(transform(transform([cx, cy, cz], rotX), rotY), rotZ);
return [
rotated[0] + offsetX,
rotated[1] + offsetY,
rotated[2] + offsetY,
];
});
const projectedVertices = transformedVertices.map(vertex => project(vertex, canvas.height * scale, offsetX, offsetY, zoom));
ctx.lineWidth = 2;
ctx.strokeStyle = 'hsla(' + (angleX + angleY) * 100 + ', 100%, 50%, 0.8)';
ctx.beginPath();
for (let edge of edges) {
const [a, b] = edge;
const [x1, y1] = projectedVertices[a];
const [x2, y2] = projectedVertices[b];
const dist = Math.hypot(x2 - x1, y2 - y1);
const angle = Math.atan2(y2 - y1, x2 - x1);
// Calculate control point for curved edge
const cpDist = 0.01 * dist;
const cpX = (x1 + x2) / 2 + cpDist * Math.cos(angle - Math.PI / 2) * getDeviation(2);
const cpY = (y1 + y2) / 2 + cpDist * Math.sin(angle - Math.PI / 2) * getDeviation(2);
ctx.moveTo(x1, y1);
ctx.quadraticCurveTo(cpX, cpY, x2, y2);
}
ctx.stroke();
angleX += +getDeviation(0.02);
angleY += +getDeviation(0.02);
angleZ += +getDeviation(0.02);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
window.addEventListener("resize", () => {
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
});
|
88e9178ff88a2a0d07841ac003f0ce94
|
{
"intermediate": 0.3572104275226593,
"beginner": 0.31404921412467957,
"expert": 0.3287404179573059
}
|
9,528
|
can you write Lua code?
|
30634329e35209c525b83a5ad58e89ef
|
{
"intermediate": 0.3002909719944,
"beginner": 0.33193734288215637,
"expert": 0.3677716851234436
}
|
9,529
|
server throws react_devtools_backend_compact.js:2367 TypeError: Cannot read properties of undefined (reading 'name')
at CPI_Basket_add.js:977:132
at Array.map (<anonymous>) on the below code when name is being fetched const menu = (
<Menu
onClick={async (e) => {
onRegionChange(e);
}}
items={isAllRegionsSuccess ? allRegionsData.data.map(regionData => ({ key: regionData.code, label: regionData.translation[0].name})) : []}
/>
);
|
ca9b907a079491c92fc6740af3b9fb52
|
{
"intermediate": 0.5150178074836731,
"beginner": 0.25422629714012146,
"expert": 0.23075592517852783
}
|
9,530
|
PHP: execute dotnet, then call the DLL
|
2fc109876e73a47f0ac0b285c3b18ba3
|
{
"intermediate": 0.4555782973766327,
"beginner": 0.271666556596756,
"expert": 0.27275514602661133
}
|
9,531
|
create a PHP website that allows you to upload video files; the file is then converted to MP4, and then the download starts automatically
|
ba591a610a942431a5ced40a6dd1ce84
|
{
"intermediate": 0.4596507251262665,
"beginner": 0.20455293357372284,
"expert": 0.3357963562011719
}
|
9,532
|
tell me how to change the directory of db.sqlite in a DRF project (Python)?
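In a Django/DRF project this is a settings.py change - the sqlite file lives wherever DATABASES['default']['NAME'] points (the "data" subdirectory below is just an example location):

```python
# settings.py (relevant fragment)
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        # Change this path to move the database file; create the
        # directory first, then run `python manage.py migrate`
        # (or move the existing file there) so Django finds it.
        "NAME": BASE_DIR / "data" / "db.sqlite3",
    }
}
```

The NAME entry accepts any absolute path, so the file can also live entirely outside the project tree.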
|
fcebb0704aa1c383200750a312a56b6c
|
{
"intermediate": 0.6950823068618774,
"beginner": 0.11576449126005173,
"expert": 0.18915317952632904
}
|
9,533
|
import time
from binance.client import Client
from binance.enums import *
from binance.exceptions import BinanceAPIException
import pandas as pd
import requests
import json
import numpy as np
import pytz
import datetime as dt
date = dt.datetime.now().strftime("%m/%d/%Y %H:%M:%S")
print(date)
url = "https://api.binance.com/api/v1/time"
t = time.time()*1000
r = requests.get(url)
result = json.loads(r.content)
# API keys and other configuration
API_KEY = '<redacted>'
API_SECRET = '<redacted>'
client = Client(API_KEY, API_SECRET)
STOP_LOSS_PERCENTAGE = -50
TAKE_PROFIT_PERCENTAGE = 100
MAX_TRADE_QUANTITY_PERCENTAGE = 100
POSITION_SIDE_SHORT = 'SELL'
POSITION_SIDE_LONG = 'BUY'
symbol = 'BTCUSDT'
quantity = 1
order_type = 'MARKET'
leverage = 100
max_trade_quantity_percentage = 1
client = Client(API_KEY, API_SECRET)
def get_klines(symbol, interval, lookback):
url = "https://fapi.binance.com/fapi/v1/klines"
params = {
"symbol": symbol,
"interval": interval,
"startTime": int(dt.datetime.timestamp(dt.datetime.now() - dt.timedelta(minutes=lookback))),
"endTime": int(dt.datetime.timestamp(dt.datetime.now()))
}
response = requests.get(url, params=params)
data = response.json()
ohlc = []
for d in data:
timestamp = dt.datetime.fromtimestamp(d[0]/1000).strftime('%Y-%m-%d %H:%M:%S')
ohlc.append([
timestamp,
float(d[1]),
float(d[2]),
float(d[3]),
float(d[4]),
float(d[5])
])
df = pd.DataFrame(ohlc, columns=['Open time', 'Open', 'High', 'Low', 'Close', 'Volume'])
df.set_index('Open time', inplace=True)
return df
df = get_klines('BTCUSDT', '15m', 1440)
def signal_generator(df):
open = df.open.iloc[-1]
close = df.close.iloc[-1]
previous_open = df.open.iloc[-2]
previous_close = df.close.iloc[-2]
# Bearish pattern
if (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
return 'sell'
# Bullish pattern
elif (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
return 'buy'
# No clear pattern
else:
return ''
df = get_klines('BTCUSDT', '15m', 1440)
def order_execution(symbol, signal, max_trade_quantity_percentage, leverage):
signal = signal_generator(df)
max_trade_quantity = None
account_balance = client.futures_account_balance()
usdt_balance = float([x['balance'] for x in account_balance if x['asset'] == 'USDT'][0])
max_trade_quantity = usdt_balance * max_trade_quantity_percentage/100
# Close long position if signal is opposite
long_position = None
short_position = None
positions = client.futures_position_information(symbol=symbol)
for p in positions:
if p['positionSide'] == 'LONG':
long_position = p
elif p['positionSide'] == 'SHORT':
short_position = p
if long_position is not None and short_position is not None:
print("Multiple positions found. Closing both positions.")
if long_position is not None:
client.futures_create_order(
symbol=symbol,
side=SIDE_SELL,
type=ORDER_TYPE_MARKET,
quantity=long_position['positionAmt'],
reduceOnly=True
)
time.sleep(1)
if short_position is not None:
client.futures_create_order(
symbol=symbol,
side=SIDE_BUY,
type=ORDER_TYPE_MARKET,
quantity=short_position['positionAmt'],
reduceOnly=True
)
time.sleep(1)
print("Both positions closed.")
if signal == 'buy':
position_side = POSITION_SIDE_LONG
opposite_position = short_position
elif signal == 'sell':
position_side = POSITION_SIDE_SHORT
opposite_position = long_position
else:
print("Invalid signal. No order placed.")
return
order_quantity = 0
if opposite_position is not None:
order_quantity = min(max_trade_quantity, abs(float(opposite_position['positionAmt'])))
if opposite_position is not None and opposite_position['positionSide'] != position_side:
print("Opposite position found. Closing position before placing order.")
client.futures_create_order(
symbol=symbol,
side=SIDE_SELL if
opposite_position['positionSide'] == POSITION_SIDE_LONG
else
SIDE_BUY,
type=ORDER_TYPE_MARKET,
quantity=order_quantity,
reduceOnly=True,
positionSide=opposite_position['positionSide']
)
time.sleep(1)
order = client.futures_create_order(
symbol=symbol,
side=SIDE_BUY if signal == 'buy' else SIDE_SELL,
type=ORDER_TYPE_MARKET,
quantity=order_quantity,
reduceOnly=False,
timeInForce=TIME_IN_FORCE_GTC,
positionSide=position_side, # add position side parameter
leverage=leverage
)
if order is None:
print("Order not placed successfully. Skipping setting stop loss and take profit orders.")
return
order_id = order['orderId']
print(f"Placed {signal} order with order ID {order_id} and quantity {order_quantity}")
time.sleep(1)
# Set stop loss and take profit orders
# Get the order details to determine the order price
order_info = client.futures_get_order(symbol=symbol, orderId=order_id)
if order_info is None:
print("Error getting order information. Skipping setting stop loss and take profit orders.")
return
order_price = float(order_info['avgPrice'])
# Set stop loss and take profit orders
stop_loss_price = order_price * (1 + STOP_LOSS_PERCENTAGE / 100) if signal == 'sell' else order_price * (1 - STOP_LOSS_PERCENTAGE / 100)
take_profit_price = order_price * (1 + TAKE_PROFIT_PERCENTAGE / 100) if signal == 'buy' else order_price * (1 - TAKE_PROFIT_PERCENTAGE / 100)
stop_loss_order = client.futures_create_order(
symbol=symbol,
side=SIDE_SELL if signal == 'buy' else SIDE_BUY,
type=ORDER_TYPE_STOP_LOSS,
quantity=order_quantity,
stopPrice=stop_loss_price,
reduceOnly=True,
timeInForce=TIME_IN_FORCE_GTC,
positionSide=position_side # add position side parameter
)
take_profit_order = client.futures_create_order(
symbol=symbol,
side=SIDE_SELL if signal == 'buy' else SIDE_BUY,
type=ORDER_TYPE_LIMIT,
quantity=order_quantity,
price=take_profit_price,
reduceOnly=True,
timeInForce=TIME_IN_FORCE_GTC,
positionSide=position_side # add position side parameter
)
# Print order creation confirmation messages
print(f"Placed stop loss order with stop loss price {stop_loss_price}")
print(f"Placed take profit order with take profit price {take_profit_price}")
time.sleep(1)
while True:
df = get_klines('BTCUSDT', '15m', 1440)
if df is not None:
signal = signal_generator(df)
if signal:
order_execution('BTCUSDT', signal, MAX_TRADE_QUANTITY_PERCENTAGE, leverage)
# Call signal_generator here to print out the signal
print(f"The signal time is: {dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')} :{signal}")
time.sleep(1)
But this code doesn't returns me any signal in terminal
|
8b26682e21dccd161dd17f4840b2b75c
|
{
"intermediate": 0.36484476923942566,
"beginner": 0.3969379961490631,
"expert": 0.23821723461151123
}
|
9,534
|
HTTP Header Injection we.config file add custom errors
|
26e5a8a8f36d0fad691f44f75fe92cec
|
{
"intermediate": 0.37104958295822144,
"beginner": 0.26950696110725403,
"expert": 0.35944345593452454
}
|
9,535
|
You need to create a Python script that generates 40 CSV files. Each CSV file should contain a shuffled list of 212 filenames. The filenames will be selected from a directory containing exactly 192 images. Additionally, 20 random filenames should be added to each CSV file. The CSV files should be saved in the same directory and named "observer_1.csv" to "observer_40.csv". The filenames should follow the format "images/47.jpg", "images/52.jpg", "images/33.jpg", and so on.
|
fef65e2cc578b05f69cc1a2863f27d66
|
{
"intermediate": 0.3529791831970215,
"beginner": 0.24098560214042664,
"expert": 0.4060352146625519
}
|
9,536
|
I have an &Option<String> in Rust. How can I get the value or default it to an empty string ( " " ) ?
|
65eab7e82877d3733aef1d51c3fc61a8
|
{
"intermediate": 0.6210932731628418,
"beginner": 0.2161235213279724,
"expert": 0.16278323531150818
}
|
9,537
|
import {useCallback, useEffect, useState} from "react";
import {ReadyState} from "../enums/readyState";
type CupItem = {
futures_price_micro: number;
quantity: number;
spot_quantity: number;
side: string;
};
export interface BestMicroPrice {
buy: number;
sell: number;
}
export function useRustWsServer() {
const [connection, setConnection] = useState<WebSocket|null>(null);
const [readyState, setReadyState] = useState(0);
const [cup, setCup] = useState<Array<CupItem>>([]);
const [bestMicroPrice, setBestMicroPrice] = useState<BestMicroPrice|null>(null);
const [maxVolume, setMaxVolume] = useState(1);
function splitCupSides(rawData: {[key: number]: CupItem}): Array<CupItem> {
const sellRecords = [];
const buyRecords = [];
let max = 0;
for (const value of Object.values(rawData)) {
if (value.side === "Buy") {
buyRecords.push(value);
} else if (value.side === "Sell") {
sellRecords.push(value);
}
if (value.quantity > max) {
max = value.quantity;
}
}
sellRecords.sort((a, b) => {
return b.futures_price_micro - a.futures_price_micro;
});
buyRecords.sort((a, b) => {
return b.futures_price_micro - a.futures_price_micro;
});
setMaxVolume(max);
return [...sellRecords, ...buyRecords];
}
const cupSubscribe = useCallback((symbol: string, camera: number, zoom: number, rowCount: number) => {
if (null === connection || readyState !== ReadyState.OPEN) return;
connection.send(JSON.stringify({
"commands": [
{
commandType: "SUBSCRIBE_SYMBOL",
symbol,
camera: Math.round(camera / zoom) * zoom,
zoom,
rowCount,
},
],
}));
}, [readyState]);
const cupUnsubscribe = useCallback((symbol: string) => {
if (null === connection || readyState !== ReadyState.OPEN) return;
connection.send(JSON.stringify({
"commands": [
{
commandType: "UNSUBSCRIBE_SYMBOL",
symbol,
},
],
}));
}, [readyState]);
useEffect(() => {
const url = process.env.NEXT_PUBLIC_RUST_WS_SERVER;
if (url) {
const ws = new WebSocket(url);
setConnection(ws);
}
}, []);
useEffect(() => {
if (null !== connection) {
connection.onmessage = (message: MessageEvent) => {
if (!message.data) return;
const data = JSON.parse(message.data);
if (!data?.commands || data.commands.length === 0) return;
const domUpdate = data.commands.find((item: any) => "undefined" !== typeof item.SymbolDomUpdate);
if (!domUpdate) return;
setCup(splitCupSides(domUpdate.SymbolDomUpdate.dom_rows));
setBestMicroPrice({
buy: domUpdate.SymbolDomUpdate.best_prices_futures.best_ask_micro,
sell: domUpdate.SymbolDomUpdate.best_prices_futures.best_bid_micro,
});
};
connection.onopen = () => {
setReadyState(ReadyState.OPEN);
};
connection.onclose = () => {
setReadyState(ReadyState.CLOSED);
};
}
}, [connection]);
return {
readyState,
cupSubscribe,
cupUnsubscribe,
cup,
maxVolume,
bestMicroPrice,
};
}
import {BestMicroPrice, useRustWsServer} from "../../hooks/rustWsServer";
import {createContext, Reducer, useEffect, useReducer, useRef, useState} from "react";
import CupDrawer from "../CupDrawer/CupDrawer";
import {IconButton} from "@mui/material";
import {AddRounded, RemoveRounded} from "@mui/icons-material";
import {ReadyState} from "../../enums/readyState";
import {useSelector} from "react-redux";
import {AppState} from "../../store/store";
interface CupConfigSubscription {
pair: string | null;
zoom: number;
camera: number;
rowCount: number;
}
export const CupControlsContext = createContext<{
cupControlsState: any;
cupControlsDispatcher: any;
}>({
cupControlsState: null,
cupControlsDispatcher: null,
});
const TradingCup = () => {
const symbol = useSelector((state: AppState) => state.screenerSlice.symbol);
const {cup, bestMicroPrice, maxVolume, readyState, cupSubscribe, cupUnsubscribe} = useRustWsServer();
const precision = useSelector((state: AppState) => state.binancePrecision.futures[symbol.toUpperCase()]);
const tickSize = useSelector((state: AppState) => state.binanceTickSize.futures[symbol.toUpperCase()]);
const [cupConfig, setCupConfig] = useState<CupConfigSubscription>({
pair: null,
zoom: 10,
camera: 0,
rowCount: 40,
});
useEffect(() => {
if (symbol) {
setCupConfig({
...cupConfig,
pair: symbol.toUpperCase(),
camera: 0,
});
}
}, [symbol]);
useEffect(() => {
if (readyState === ReadyState.OPEN) {
if (null !== cupConfig.pair) {
cupSubscribe(
cupConfig.pair,
cupConfig.camera,
cupConfig.zoom,
cupConfig.rowCount,
);
}
}
return () => {
if (cupConfig.pair != null) {
cupUnsubscribe(cupConfig.pair);
}
};
}, [
cupConfig.pair,
cupConfig.camera,
cupConfig.zoom,
cupConfig.rowCount,
readyState,
]);
return (
<>
</>
);
};
export default TradingCup;
import {each, get, map, reduce, range, clamp, reverse} from 'lodash'
import {ESide} from "../../interfaces/interfaces";
import {abbreviateNumber, blendColors, blendRGBColors, getRatio, shadeColor} from "../../utils/utils";
import {
bubbleSize, clusterBg,
clusterGreen,
clusterRed,
clustersCountUI,
deepGreen,
deepRed,
lightGreen,
lightRed,
maxClusterWidth,
minuteMs,
rowHeight,
timeFrame,
visibleClustersCount
} from "../../constants/consts";
export default class ClustersClientControllers {
xWidthInMs = timeFrame * clustersCountUI
DOMBorderOffset = 0
abnormalDensities = 200
clusters = []
currentMin = 0
tempCluster = {}
tempCurrentMin
totals = []
tempTotal = {}
root: ClientController
canvasHeight = 0
canvasWidth = 0
tradesArr: any = []
public bestPrices: any = null
clustersCtx
orderFeedCtx
public cameraPrice = null
public zoom = 10
clusterCellWidth
virtualServerTime = null
tradesFilterBySymbol = {}
constructor(root) {
this.root = root
window['clusters'] = this
this.restoreClusterSettings()
}
renderTrades = () => {
this.clearOrderFeed();
reduce(this.tradesArr, (prev, cur, index) => {
this.renderTrade(prev, cur, this.tradesArr.length - (index as any))
prev = cur
console.log(prev);
return prev
})
}
clearOrderFeed = () => {
this.orderFeedCtx.clearRect(0, 0, this.canvasWidth, this.canvasHeight)
}
renderTrade = (prev, item, index) => {
//const anomalyQty = this.root.instruments[this.root.selectedSymbol].anomalies.anomaly_qty;
console.log(item); // { price_float: 0.4139, price_micro: 4139000, quantity: 6, side: "Buy", time: 1685607036920 }
//if (size < 1) return;
const ctx = this.orderFeedCtx
let xPos = (this.canvasWidth - (index * (bubbleSize * 1.5))) - bubbleSize;
const offsetFromTop = this.root.tradingDriverController.upperPrice - item.price_micro;
const y = ((offsetFromTop / this.root.tradingDriverController.getZoomedStepMicro()) - 1) * rowHeight
const label = abbreviateNumber(item.quantity * item.price_float)
const {width: textWidth} = ctx.measureText(label);
const itemUsdt = item.quantity * item.price_float;
const tradeFilter = this.getTradeFilterBySymbol(this.getSymbol())
const maxUsdtBubbleAmount = tradeFilter * 30;
const maxPixelBubbleAmount = 35;
const realBubbleSize = (itemUsdt / maxUsdtBubbleAmount) * maxPixelBubbleAmount
const size = clamp(realBubbleSize, (textWidth/2)+3, maxPixelBubbleAmount)
const bubbleX = xPos;
const bubbleY = y + 8;
ctx.beginPath();
let bigRatio = (realBubbleSize / maxPixelBubbleAmount) / 3;
bigRatio = bigRatio > 0.95 ? 0.95 : bigRatio;
ctx.fillStyle = item.side === "Sell" ? deepGreen.lighten(bigRatio).toString() : deepRed.lighten(bigRatio).toString()
ctx.strokeStyle = 'black';
ctx.arc(xPos, bubbleY, size, 0, 2 * Math.PI)
ctx.fill();
ctx.stroke();
ctx.fillStyle = "#FFFFFF"
ctx.fillText(label, bubbleX - (textWidth / 2), (bubbleY + (rowHeight / 2)) - 2)
}
1. In the TradingCup component, cup is obtained from useRustWsServer; implement the canvas rendering the same way as in the renderTrade and renderTrades methods.
2. renderTrade and renderTrades belong to a slightly different component with slightly different data; the same approach must be used in TradingCup, but only with our data type CupItem = {
futures_price_micro: number;
quantity: number;
spot_quantity: number;
side: string;
};
3. In the renderTrade() method, quantity and price_float are multiplied to get the volume in $. We don't need that; we will display only quantity.
Everything must be drawn on the canvas AND it must use the same connection that is established in rustWsServer.ts
TypeScript is mandatory everywhere
|
d86f36d897cd9b0d72335db8af286716
|
{
"intermediate": 0.3489033579826355,
"beginner": 0.4464230239391327,
"expert": 0.2046736478805542
}
|
9,538
|
{
"compilerOptions": {
"target": "es2017",
"lib": [
"dom",
"dom.iterable",
"esnext"
],
"allowJs": true,
"skipLibCheck": true,
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"strict": true,
"forceConsistentCasingInFileNames": true,
"noFallthroughCasesInSwitch": true,
"module": "CommonJS",
"moduleResolution": "node",
"resolveJsonModule": true,
"noEmit": true,
},"include": [index.ts]
}
correct include
|
b8fd99ab0a6d4ed4de0d25ffda7a7afc
|
{
"intermediate": 0.37677234411239624,
"beginner": 0.31383436918258667,
"expert": 0.30939334630966187
}
|
9,539
|
how to resolve the below error
|
ae7d31175d693bd856637a69365b92cd
|
{
"intermediate": 0.3866956830024719,
"beginner": 0.27456215023994446,
"expert": 0.33874207735061646
}
|
9,540
|
import {useCallback, useEffect, useState} from "react";
import {ReadyState} from "../enums/readyState";
type CupItem = {
futures_price_micro: number;
quantity: number;
spot_quantity: number;
side: string;
};
export interface BestMicroPrice {
buy: number;
sell: number;
}
export function useRustWsServer() {
const [connection, setConnection] = useState<WebSocket|null>(null);
const [readyState, setReadyState] = useState(0);
const [cup, setCup] = useState<Array<CupItem>>([]);
const [bestMicroPrice, setBestMicroPrice] = useState<BestMicroPrice|null>(null);
const [maxVolume, setMaxVolume] = useState(1);
function splitCupSides(rawData: {[key: number]: CupItem}): Array<CupItem> {
const sellRecords = [];
const buyRecords = [];
let max = 0;
for (const value of Object.values(rawData)) {
if (value.side === "Buy") {
buyRecords.push(value);
} else if (value.side === "Sell") {
sellRecords.push(value);
}
if (value.quantity > max) {
max = value.quantity;
}
}
sellRecords.sort((a, b) => {
return b.futures_price_micro - a.futures_price_micro;
});
buyRecords.sort((a, b) => {
return b.futures_price_micro - a.futures_price_micro;
});
setMaxVolume(max);
return [...sellRecords, ...buyRecords];
}
const cupSubscribe = useCallback((symbol: string, camera: number, zoom: number, rowCount: number) => {
if (null === connection || readyState !== ReadyState.OPEN) return;
connection.send(JSON.stringify({
"commands": [
{
commandType: "SUBSCRIBE_SYMBOL",
symbol,
camera: Math.round(camera / zoom) * zoom,
zoom,
rowCount,
},
],
}));
}, [readyState]);
const cupUnsubscribe = useCallback((symbol: string) => {
if (null === connection || readyState !== ReadyState.OPEN) return;
connection.send(JSON.stringify({
"commands": [
{
commandType: "UNSUBSCRIBE_SYMBOL",
symbol,
},
],
}));
}, [readyState]);
useEffect(() => {
const url = process.env.NEXT_PUBLIC_RUST_WS_SERVER;
if (url) {
const ws = new WebSocket(url);
setConnection(ws);
}
}, []);
useEffect(() => {
if (null !== connection) {
connection.onmessage = (message: MessageEvent) => {
if (!message.data) return;
const data = JSON.parse(message.data);
if (!data?.commands || data.commands.length === 0) return;
const domUpdate = data.commands.find((item: any) => "undefined" !== typeof item.SymbolDomUpdate);
if (!domUpdate) return;
setCup(splitCupSides(domUpdate.SymbolDomUpdate.dom_rows));
setBestMicroPrice({
buy: domUpdate.SymbolDomUpdate.best_prices_futures.best_ask_micro,
sell: domUpdate.SymbolDomUpdate.best_prices_futures.best_bid_micro,
});
};
connection.onopen = () => {
setReadyState(ReadyState.OPEN);
};
connection.onclose = () => {
setReadyState(ReadyState.CLOSED);
};
}
}, [connection]);
return {
readyState,
cupSubscribe,
cupUnsubscribe,
cup,
maxVolume,
bestMicroPrice,
};
}
import {BestMicroPrice, useRustWsServer} from "../../hooks/rustWsServer";
import {createContext, Reducer, useEffect, useReducer, useRef, useState} from "react";
import CupDrawer from "../CupDrawer/CupDrawer";
import {IconButton} from "@mui/material";
import {AddRounded, RemoveRounded} from "@mui/icons-material";
import {ReadyState} from "../../enums/readyState";
import {useSelector} from "react-redux";
import {AppState} from "../../store/store";
interface CupConfigSubscription {
pair: string | null;
zoom: number;
camera: number;
rowCount: number;
}
export const CupControlsContext = createContext<{
cupControlsState: any;
cupControlsDispatcher: any;
}>({
cupControlsState: null,
cupControlsDispatcher: null,
});
const TradingCup = () => {
const symbol = useSelector((state: AppState) => state.screenerSlice.symbol);
const {cup, bestMicroPrice, maxVolume, readyState, cupSubscribe, cupUnsubscribe} = useRustWsServer();
const precision = useSelector((state: AppState) => state.binancePrecision.futures[symbol.toUpperCase()]);
const tickSize = useSelector((state: AppState) => state.binanceTickSize.futures[symbol.toUpperCase()]);
const [cupConfig, setCupConfig] = useState<CupConfigSubscription>({
pair: null,
zoom: 10,
camera: 0,
rowCount: 40,
});
useEffect(() => {
if (symbol) {
setCupConfig({
...cupConfig,
pair: symbol.toUpperCase(),
camera: 0,
});
}
}, [symbol]);
useEffect(() => {
if (readyState === ReadyState.OPEN) {
if (null !== cupConfig.pair) {
cupSubscribe(
cupConfig.pair,
cupConfig.camera,
cupConfig.zoom,
cupConfig.rowCount,
);
}
}
return () => {
if (cupConfig.pair != null) {
cupUnsubscribe(cupConfig.pair);
}
};
}, [
cupConfig.pair,
cupConfig.camera,
cupConfig.zoom,
cupConfig.rowCount,
readyState,
]);
return (
<>
</>
);
};
export default TradingCup;
import {each, get, map, reduce, range, clamp, reverse} from 'lodash'
import {ESide} from "../../interfaces/interfaces";
import {abbreviateNumber, blendColors, blendRGBColors, getRatio, shadeColor} from "../../utils/utils";
import {
bubbleSize, clusterBg,
clusterGreen,
clusterRed,
clustersCountUI,
deepGreen,
deepRed,
lightGreen,
lightRed,
maxClusterWidth,
minuteMs,
rowHeight,
timeFrame,
visibleClustersCount
} from "../../constants/consts";
export default class ClustersClientControllers {
xWidthInMs = timeFrame * clustersCountUI
DOMBorderOffset = 0
abnormalDensities = 200
clusters = []
currentMin = 0
tempCluster = {}
tempCurrentMin
totals = []
tempTotal = {}
root: ClientController
canvasHeight = 0
canvasWidth = 0
tradesArr: any = []
public bestPrices: any = null
clustersCtx
orderFeedCtx
public cameraPrice = null
public zoom = 10
clusterCellWidth
virtualServerTime = null
tradesFilterBySymbol = {}
constructor(root) {
this.root = root
window['clusters'] = this
this.restoreClusterSettings()
}
renderTrades = () => {
this.clearOrderFeed();
reduce(this.tradesArr, (prev, cur, index) => {
this.renderTrade(prev, cur, this.tradesArr.length - (index as any))
prev = cur
console.log(prev);
return prev
})
}
clearOrderFeed = () => {
this.orderFeedCtx.clearRect(0, 0, this.canvasWidth, this.canvasHeight)
}
renderTrade = (prev, item, index) => {
//const anomalyQty = this.root.instruments[this.root.selectedSymbol].anomalies.anomaly_qty;
console.log(item); // { price_float: 0.4139, price_micro: 4139000, quantity: 6, side: "Buy", time: 1685607036920 }
//if (size < 1) return;
const ctx = this.orderFeedCtx
let xPos = (this.canvasWidth - (index * (bubbleSize * 1.5))) - bubbleSize;
const offsetFromTop = this.root.tradingDriverController.upperPrice - item.price_micro;
const y = ((offsetFromTop / this.root.tradingDriverController.getZoomedStepMicro()) - 1) * rowHeight
const label = abbreviateNumber(item.quantity * item.price_float)
const {width: textWidth} = ctx.measureText(label);
const itemUsdt = item.quantity * item.price_float;
const tradeFilter = this.getTradeFilterBySymbol(this.getSymbol())
const maxUsdtBubbleAmount = tradeFilter * 30;
const maxPixelBubbleAmount = 35;
const realBubbleSize = (itemUsdt / maxUsdtBubbleAmount) * maxPixelBubbleAmount
const size = clamp(realBubbleSize, (textWidth/2)+3, maxPixelBubbleAmount)
const bubbleX = xPos;
const bubbleY = y + 8;
ctx.beginPath();
let bigRatio = (realBubbleSize / maxPixelBubbleAmount) / 3;
bigRatio = bigRatio > 0.95 ? 0.95 : bigRatio;
ctx.fillStyle = item.side === "Sell" ? deepGreen.lighten(bigRatio).toString() : deepRed.lighten(bigRatio).toString()
ctx.strokeStyle = 'black';
ctx.arc(xPos, bubbleY, size, 0, 2 * Math.PI)
ctx.fill();
ctx.stroke();
ctx.fillStyle = "#FFFFFF"
ctx.fillText(label, bubbleX - (textWidth / 2), (bubbleY + (rowHeight / 2)) - 2)
}
1. In the TradingCup component, cup is obtained from useRustWsServer; implement the canvas rendering the same way as in the renderTrade and renderTrades methods.
2. renderTrade and renderTrades belong to a slightly different component with slightly different data; the same approach must be used in TradingCup, but only with our data type CupItem = {
futures_price_micro: number;
quantity: number;
spot_quantity: number;
side: string;
};
3. In the renderTrade() method, quantity and price_float are multiplied to get the volume in $. We don't need that; we will display only quantity.
Everything must be drawn on the canvas AND it must use the same connection that is established in rustWsServer.ts
TypeScript is mandatory everywhere. Write all the code: what to do, how, and where.
|
125714ebe7a6c4cad1ca8abf122e6b81
|
{
"intermediate": 0.3489033579826355,
"beginner": 0.4464230239391327,
"expert": 0.2046736478805542
}
|