I don't think what you are asking is possible, but I would like to be proven wrong. GCC beats MSVC in this regard. The only way I know of to find out what caused an exception is to use the Visual Studio IDE. From there I "Start Debugging" and the "Exception Unhandled" pop-up will show me the exception type. From the "Call Stack" window I can also find out which line caused the exception.
But since you mention `PowerShell`, it may be that you are compiling and running from the command line. If so, you can at least be notified that **some exception** happened by using one of the [CRT debug libraries][1], so your complete command line could be, e.g.:
`cl /MTd /EHsc file.cpp`
Running `file.exe` would (with your example code) pop-up the same "Debug Error!" MessageBox that you get in Visual Studio when you "Start Without Debugging".
[1]: https://learn.microsoft.com/en-us/cpp/c-runtime-library/crt-debugging-techniques?view=msvc-170 |
I want to allow typeclasses to be easily "inherited" on union types, by deriving the typeclass automatically when there exists a _projection_ (this projection is another typeclass that I defined separately). Below is some code illustrating what I'm trying to achieve:
```haskell
class t #-> a where  -- the projection
  inj :: t -> a
  prj :: a -> Maybe t

class Prop t where   -- the property
  cst :: Int -> t
  fn2 :: t -> t

-- deriving the property from the projection
instance (t #-> a, Prop t) => Prop a where
  cst :: Int -> a
  cst = inj . cst @t
  fn2 :: a -> a
  fn2 a1 = inj $ fn2 @t $ fromJust (prj a1)
```
And so, when I define a sum type, I can define only the projection `#->`, without redefining the typeclass `Prop` on the sum type.
```haskell
data K = ...

instance Prop K

data SumType = E1 K | ...

instance K #-> SumType where
  inj :: K -> SumType
  inj = E1
  prj :: SumType -> Maybe K
  prj (E1 k) = Just k
  prj _      = Nothing
```
However, I ran into the following problem when I want to reuse typeclass functions in an instance definition. For example, when I try to give a `Prop` instance for a base type (say, `String`):
```haskell
instance Prop String where
  cst :: Int -> String
  cst = show
  fn2 :: String -> String
  fn2 s = if s == cst 0 then "0" else s  -- Overlapping instances for `Prop String` arising from a use of `cst`
```
The compiler isn't sure whether to use the derived instance (from the `#->` derivation) or the base instance I defined (i.e. the `Prop String` definition). It seems obvious to me, however, that in the `Prop String` definition the `cst @String` definition should be used. Using `{-# LANGUAGE TypeApplications #-}` also does not seem to help the compiler determine the instance needed.
How would we go about convincing the compiler to use the instance I intended here? |
I believe that [React-Native-Skia](https://github.com/Shopify/react-native-skia) can do that. Here's an example from the documentation:
```ts
import { Skia, AlphaType, ColorType } from "@shopify/react-native-skia";

const pixels = new Uint8Array(256 * 256 * 4);
pixels.fill(255);
let i = 0;
for (let x = 0; x < 256; x++) {
  for (let y = 0; y < 256; y++) {
    pixels[i++] = (x * y) % 255;
  }
}
const data = Skia.Data.fromBytes(pixels);
const img = Skia.Image.MakeImage(
  {
    width: 256,
    height: 256,
    alphaType: AlphaType.Opaque,
    colorType: ColorType.RGBA_8888,
  },
  data,
  256 * 4
);
```
See https://shopify.github.io/react-native-skia/docs/images#makeimage |
I want to add my field as a new column in the All Orders table in the backend. I have tried the following, but it's not displaying. Does anyone know about this?
```php
// Add custom column to orders list table
add_filter('manage_edit-shop_order_columns', 'custom_order_columns');
function custom_order_columns($columns)
{
    // Inserting the new column after 'order_status' column
    $new_columns = array();
    foreach ($columns as $column_name => $column_info) {
        $new_columns[$column_name] = $column_info;
        if ('order_status' === $column_name) {
            $new_columns['contributor'] = __('Contributor', 'textdomain');
        }
    }
    return $new_columns;
}

// Populate custom column with contributor information
add_action('manage_shop_order_posts_custom_column', 'custom_order_column_content', 10, 2);
function custom_order_column_content($column, $post_id)
{
    if ($column === 'contributor') {
        $contributor_id = get_post_meta($post_id, 'custom_order_contributor', true);
        if ($contributor_id) {
            $contributor = get_userdata($contributor_id);
            if ($contributor) {
                echo $contributor->display_name;
            } else {
                echo 'Not Assigned';
            }
        } else {
            echo 'Not Assigned';
        }
    }
}
``` |
Add column to woocommerce order table in backend |
Inductive data types are often indexed by other values; a basic example you may have seen is the vector, which is indexed by a type and by its size.
Given a vector `t A (S n)`, since we know the vector has a size greater than 0, we know there is a value of type `A` in it. This would simply be written as:

    Theorem not_empty : forall A n, t A (S n) -> A.

But if you try to prove this using the usual destruct/induction, you end up with two cases:

    Theorem not_empty : forall A n, t A (S n) -> A.
      intros.
      destruct X.
      - admit. (* here is our impossible clause *)
      - exact h.
    Admitted.

Even though we know the vector has a size of at least one, Coq still gives you a goal for the case where the vector is empty. We can work around this by remembering the index, which lets us discharge these cases by hand:

    Theorem not_empty2 : forall A n, t A (S n) -> A.
      intros.
      remember (S n).
      destruct X.
      - inversion Heqn0.
      - exact h.
    Defined.

That works, but wouldn't it be nicer if there were a tactic that already does this for you and eliminates the impossible clauses? That is exactly what `dependent induction` does (you may need `Require Import Coq.Program.Equality` for it):

    Theorem not_empty3 : forall A n, t A (S n) -> A.
      intros.
      dependent induction X.
      exact h.
    Defined. |
Typeclass projections as inheritance |
I created this matrix using ggplot: [colored table/matrix](https://i.stack.imgur.com/RiUjJ.png)
How can I export an editable table to Excel with the same colors, so that I can adjust it?
Sorry if my question looks silly; I am a newbie in R, so I don't have much experience extracting tables. Is there a straightforward way to export a colored table to Excel? |
Extract a table/matrix from R into Excel with same colors and style |
I am working on an algorithm to compute some model. Let's say
```python
def model(z):
    return z ** 2
```
Now, I need to use an array of upper limits to compute an array of values for definite integrals of this function. I.e.
```python
from scipy.integrate import quad
upp_lim = [0, 1, 2, 3, 4]
res_arr = quad(model, 0, upp_lim)
```
This of course, gives the error:
`ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()`
The issue here is that I need to further work on this function by using it in an MCMC model (emcee). Hence, if I use any sort of loop, the MCMC algorithm will take an extremely long time to run, even with multiprocessing.
Is there a method to have the integration function accept an array of upper limits without sacrificing this much time? The quad module is not a necessity, however it is the fastest module I have found.
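For context, here is one vectorized pattern (my own sketch, not from the question): evaluate the model once on a dense grid and read off all the definite integrals with `cumulative_trapezoid`, instead of calling `quad` once per limit:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def model(z):
    return z ** 2

# One dense grid covering all upper limits; a single pass yields the
# cumulative integral of model from 0 up to every grid point.
z = np.linspace(0, 4, 4001)
cum = cumulative_trapezoid(model(z), z, initial=0)

upp_lim = np.array([0, 1, 2, 3, 4])
res_arr = np.interp(upp_lim, z, cum)  # integral from 0 to each upper limit
```

This trades `quad`'s adaptive error control for a fixed grid, so the grid spacing has to be checked against the accuracy the MCMC actually needs.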
**Existing questions give iteration as an answer. Some pre-existing questions do not iterate over the limits, yet still involve some component of iteration.** This is out of my question's scope, as I am wondering whether an alternative exists. |
If you are using the Sourcetree app on Mac or Windows:
1. Open Sourcetree
2. Click the particular project
3. Click Settings
4. Click Advanced
5. Click "Edit gitignore"
6. Write "node_modules"
7. Save
For more information about this free Git client, visit
https://www.sourcetreeapp.com/ |
Cross products are helpful for the 2D case, but they do not generalize well to other dimensions. Dot products do however. The dot product of two orthogonal vectors is zero in any space, which you can use to come up with a simple solution.
Let's say you have `P4` on the same line as `P1`-`P2`. You could parametrize it with parameter `t` such that
    P4 = P1 + t * (P2 - P1)
The goal is to find `P4` such that
    (P3 - P4) . (P2 - P1) == 0
Expanding `P4` in terms of `t` and simplifying:
    (P3 - P1 - t * (P2 - P1)) . (P2 - P1) == 0
    (P3 - P1) . (P2 - P1) == t * ||P2 - P1||^2
    t = (P3 - P1) . (P2 - P1) / ||P2 - P1||^2
You can now plug `t` back into the original definition of `P4` to solve for the length `||P3 - P4||`
    D = ||P3 - P4||
      = ||P3 - P1 - t * (P2 - P1)||
      = ||(P3 - P1) - (P3 - P1) . (P2 - P1) / ||P2 - P1||^2 * (P2 - P1)||
Conceptually, this is the norm of the vector `(P3 - P1)` with the component along `(P2 - P1)` subtracted off.
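As a sketch (my own code, not part of the original answer), the derivation above translates directly to NumPy and works in any number of dimensions:

```python
import numpy as np

def point_line_distance(p3, p1, p2):
    """Distance from p3 to the infinite line through p1 and p2 (any dimension)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    d = p2 - p1                               # direction of the line
    v = p3 - p1
    t = v.dot(d) / d.dot(d)                   # parameter of the foot of the perpendicular
    return float(np.linalg.norm(v - t * d))   # component of v orthogonal to d

print(point_line_distance([0, 2], [0, 0], [1, 0]))  # → 2.0
```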
I've written a function in my library of utility routines called [`haggis`][1]. You can use [`haggis.math.segment_distance`][2] to compute the distance to the entire line (not just the bounded line segment) like this:
    d = haggis.math.segment_distance(P3, P1, P2, segment=False)
[1]: https://haggis.readthedocs.io/en/latest/
[2]: https://haggis.readthedocs.io/en/latest/api.html#haggis.math.segment_distance |
Currently, the Node.js SDK is the only way to set the password policy, so you'll have to figure out how to run the script from the documentation you linked.
I recall that work was being done to allow setting this policy from the Firebase console too, so you might want to [reach out to Firebase support](https://firebase.google.com/support/contact/troubleshooting/) to cast your vote for that. |
Found the answer via this GitHub issue: https://github.com/angular/angular/issues/54013
Short version: the version of jest-preset-angular we were using was outdated and did not use the correct JIT transform. Updating to 14.0.3 resolved the issue, and the tests now run as they should. |
I am trying to write a real-time serial data reading and plotting app, which reads data from an Arduino UNO board. But there is a problem: after more than 10000 data points have been read out, the plotting shows a huge delay, and I don't know why.
Below is my code:
```python
import re
import sys
import time
import serial
import threading
import numpy as np
import pyqtgraph as pg
from queue import Queue
from pyqtgraph.Qt import QtCore, QtWidgets


class MyApp(QtWidgets.QWidget):
    def __init__(self):
        QtWidgets.QWidget.__init__(self)
        ## Creating the Widgets and Layouts
        self.plot_widget = pg.PlotWidget()
        self.layout = QtWidgets.QVBoxLayout()
        self.sbutton = QtWidgets.QPushButton("Start / Continue")
        self.ebutton = QtWidgets.QPushButton("Stop")
        self.timer = pg.QtCore.QTimer()
        self.scroll = QtWidgets.QScrollBar(QtCore.Qt.Horizontal)
        ## Creating the variables and constants
        self.stop_thread = False
        self.data_x = [0]
        self.data_y = [0]
        self.data_z = [0]
        self.vsize = 500
        self.psize = 300000000
        self.mag_x = Queue(maxsize=0)
        self.mag_y = Queue(maxsize=0)
        self.mag_z = Queue(maxsize=0)
        self.Labelstyles = {"color": "black", "font-size": "30px", "font-weight": "bold"}
        self.plot_widget.addLegend(labelTextSize='30px')
        self.plot_item_x = self.plot_widget.plot(self.data_x, pen=pg.mkPen('#89D99D', width=4), name='Bx')
        self.plot_item_y = self.plot_widget.plot(self.data_y, pen=pg.mkPen('#FA7F08', width=4), name='By')
        self.plot_item_z = self.plot_widget.plot(self.data_z, pen=pg.mkPen('#F24405', width=4), name='Bz')
        ## Building the Widget
        self.setLayout(self.layout)
        self.layout.addWidget(self.sbutton)
        self.layout.addWidget(self.ebutton)
        self.layout.addWidget(self.plot_widget)
        self.layout.addWidget(self.scroll)
        ## Changing some properties of the widgets
        self.plot_widget.setYRange(-3000, 3000)
        self.plot_widget.setFixedSize(1500, 800)
        self.plot_widget.setBackground('w')
        self.plot_widget.showGrid(x=True, y=True, alpha=0.5)
        self.plot_widget.setTitle('MLX90393 Data Readout', size='18pt', color='#000000', bold=True)
        self.plot_widget.setLabel('left', 'Magnetic Intensity (mT)', bold=True, **self.Labelstyles)
        self.plot_widget.setLabel('bottom', 'Counts', bold=True, **self.Labelstyles)
        self.plot_widget.setMouseEnabled(x=False, y=False)
        self.ebutton.setEnabled(False)
        self.scroll.setEnabled(False)
        self.scroll.setMaximum(self.vsize)
        self.scroll.setMinimum(0)
        self.scroll.setValue(self.vsize)
        self.plot_widget.setXRange(0, self.vsize)
        ## Connecting the signals
        self.mSerial = serial.Serial('COM6', 2000000)
        self.sbutton.clicked.connect(self.start)
        self.ebutton.clicked.connect(self.stop)
        self.timer.timeout.connect(self.update)
        self.scroll.valueChanged.connect(self.upd_scroll)
        self.th1 = threading.Thread(target=self.serial, daemon=True)

    def upd_scroll(self):
        xmax = np.ceil(len(self.data_x) * (self.scroll.value() / self.scroll.maximum())) - 2
        xmin = (xmax - self.vsize)
        self.plot_widget.setXRange(xmin, xmax)

    def update(self):
        self.data_x.append(self.mag_x.get())
        self.data_y.append(self.mag_y.get())
        self.data_z.append(self.mag_z.get())
        num = len(self.data_x)
        if num <= self.psize:
            self.plot_item_x.setData(self.data_x)
            self.plot_item_y.setData(self.data_y)
            self.plot_item_z.setData(self.data_z)
        else:
            self.plot_item_x.setData(self.data_x[-self.psize:])
            self.plot_item_y.setData(self.data_y[-self.psize:])
            self.plot_item_z.setData(self.data_z[-self.psize:])
        if num == self.vsize:
            self.scroll.setEnabled(True)
        if num > self.vsize:
            self.upd_scroll()

    def start(self):
        self.sbutton.setEnabled(False)
        self.ebutton.setEnabled(True)
        self.timer.start()
        self.stop_thread = False
        window.mSerial.flushInput()
        if not self.th1.is_alive():
            self.th1 = threading.Thread(target=self.serial, daemon=True)
            self.th1.start()

    def stop(self):
        self.sbutton.setEnabled(True)
        self.ebutton.setEnabled(False)
        self.timer.stop()
        self.stop_thread = True
        self.th1.join()
        self.upd_scroll()

    def closeEvent(self, event):
        self.timer.stop()
        event.accept()

    def serial(self):
        while not self.stop_thread:
            ret = b''
            n = self.mSerial.inWaiting()
            if n:
                ret = self.mSerial.readline()
            if len(ret):
                data_get = ret.decode('UTF-8')
                pattern = re.compile(r"[+-]?\d+(?:\.\d+)?")
                data_all = pattern.findall(data_get)
                for j in range(len(data_all)):
                    if j == 0:
                        self.mag_x.put(float(data_all[j]))
                    if j == 1:
                        self.mag_y.put(float(data_all[j]))
                    if j == 2:
                        self.mag_z.put(float(data_all[j]))


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = MyApp()
    window.show()
    # access serial port
    if window.mSerial.isOpen():
        print("open success")
        window.mSerial.flushInput()  # clear the input buffer
    else:
        print("open failed")
        serial.close()
    sys.exit(app.exec_())
```
I've double-checked my code, and my computer has enough memory capacity to do this. Also, I have checked my communication interface, so this should not be a hardware problem. Please help me out. |
I'm trying to scrape a webpage using HTMLAgilityPack in a C# Web Forms project.
All the solutions I've seen for doing this use a WebBrowser control. However, from what I can determine, that is only available in WinForms projects.
At present I'm calling the required page via this code:

    var getHtmlWeb = new HtmlWeb();
    var document = getHtmlWeb.Load(inputUri);
    HtmlAgilityPack.HtmlNodeCollection nodes = document.DocumentNode.SelectNodes("//div[@class=\"nav\"]");

Here's an example bit of code I've seen that says to use the WebBrowser control:

    if (this.webBrowser1.Document.GetElementsByTagName("html")[0] != null)
        _htmlAgilityPackDocument.LoadHtml(this.webBrowser1.Document.GetElementsByTagName("html")[0].OuterHtml);

How can I grab the page once the AJAX content has loaded?
|
I had some issues with optional groups in the existing solutions. Sometimes I want them to be included in the map if they are not matched (as empty string) and sometimes not included in the map at all.
For this, I created the following method:
    func FindNamedMatches(regex *regexp.Regexp, str string, includeNotMatchedOptional bool) map[string]string {
        match := regex.FindStringSubmatchIndex(str)
        if match == nil {
            // No matches
            return nil
        }
        subexpNames := regex.SubexpNames()
        results := map[string]string{}
        // Loop thru the subexp names (skipping the first empty one)
        for i, name := range subexpNames[1:] {
            startIndex := match[i*2+2]
            endIndex := match[i*2+3]
            if startIndex == -1 || endIndex == -1 {
                // No match found
                if includeNotMatchedOptional {
                    // Add anyways
                    results[name] = ""
                }
                continue
            }
            // Assign the correct value
            results[name] = str[startIndex:endIndex]
        }
        return results
    }
In this method, you can pass a boolean `includeNotMatchedOptional` to define whether not-matched optional groups should be in the map or not.
Playground: https://go.dev/play/p/fCCf7SGMYF2 |
I'm using the [Subscription for WooCommerce plugin][1] to renew subscriptions automatically.
During the unsubscription process, we want to enhance this section to persuade the user to stay subscribed. To achieve this, we use the "wps_sfw_subscription_cancel" hook to redirect the user who clicks the "cancel" button (during unsubscription) to a page containing a descriptive message highlighting the subscription benefits, along with a "Keep My Subscription" button and a "Cancel Subscription" link.
My question is: How can I make the "Cancel Subscription" link functional and display the red message "You are unsubscribed" (the original message from the Subscriptions for Woocommerce plugin)?
I'm trying to create the same link for the "cancel" button on /my-account/show-subscription/, but it's not working.
[1]: https://wordpress.org/plugins/subscriptions-for-woocommerce/ |
If I understand your question (the little double-Y-axis bug) correctly, you get "double categories" on the y-axis because you have specified the following:
```js
ticks: {
  callback: function(value, index, values) {
    // Display the category label for this value
    return getCategoryLabel(value);
  }
}
```
but the `getCategoryLabel` function looks like this (it returns the same label for multiple values):
```js
function getCategoryLabel(value) {
  // Pick the category label for a given value
  if (value >= 0 && value <= 1) {
    return 'Ringan';
  } else if (value >= 2 && value <= 3) {
    return 'Sedang';
  } else if (value >= 4) {
    return 'Berat';
  } else {
    return '';
  }
}
```
If you wish the y-axis to show the numeric value instead, simply remove (or comment out) the `ticks` block as follows:
```js
/*
ticks: {
  callback: function(value, index, values) {
    // Display the category label for this value
    return getCategoryLabel(value);
  }
}
*/
```
You have also not set the following:
```
beginAtZero: true,
```
With it set, the following sample code produces the result shown in the picture underneath:
```html
<div>
  <canvas id="myChart"></canvas>
</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
  const canvas = document.getElementById('myChart');
  const ctx = canvas.getContext('2d');
  var chartData = {}; // Object to hold the chart data
  var myChart = new Chart(ctx, {
    type: 'bar', // Use a bar chart
    data: {
      datasets: [{
        label: 'Statistik Karyawan',
        data: [],
        backgroundColor: ['skyblue'], // Colors for the categories
        borderWidth: 1
      }],
      labels: ['Jan', 'Feb', 'Mar', 'Apr', 'Mei', 'Jun', 'Jul', 'Agu', 'Sep', 'Okt', 'Nov', 'Des']
    },
    options: {
      scales: {
        y: {
          beginAtZero: true,
          /*
          ticks: {
            callback: function(value, index, values) {
              // Display the category label for this value
              return getCategoryLabel(value);
            }
          }
          */
        }
      },
      responsive: true, // Make the chart responsive
      maintainAspectRatio: true // Keep the chart's aspect ratio
    }
  });

  // Pick the category label for a given value
  function getCategoryLabel(value) {
    if (value >= 0 && value <= 1) {
      return 'Ringan';
    } else if (value >= 2 && value <= 3) {
      return 'Sedang';
    } else if (value >= 4) {
      return 'Berat';
    } else {
      return '';
    }
  }

  // Update the chart with new data
  function updateChart(newData) {
    myChart.data.datasets[0].data = newData.allData;
    myChart.update();
  }

  // Rebuild the chart data when it changes
  function onDataChanged() {
    var newData = {
      allData: [1, 2, 3, 4, 5, 6, 7, 8]
    };
    // Track which categories have already been added
    var addedCategories = {};
    // Process the new data and add it to the chart
    newData.allData.forEach(function(value) {
      var categoryLabel = getCategoryLabel(value);
      // Only add a category to the chart if it is not there yet
      if (!addedCategories[categoryLabel]) {
        addedCategories[categoryLabel] = true;
        myChart.data.datasets[0].data.push(value);
      }
    });
    updateChart(newData);
  }

  onDataChanged();
</script>
```
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/7uaCQ.jpg |
I'm having problems with my mapping configuration. Whenever I get an order by its ID, I have no problems whatsoever. However, once I attempt to get every single order using entityListToResponseModelList, every single ID, aggregate or not, returns null in Postman. My mapping seems correct, yet I get a warning telling me that I have "unmapped target properties". Here's the full warning:
```
#13 16.36 /usr/src/app/src/main/java/com/example/clothingstore/ordersubdomain/mapperlayer/OrderResponseMapper.java:63: warning: Unmapped target properties: "orderId, productId, customerId, employeeId". Mapping from Collection element "Order order" to "OrderResponseModel orderResponseModel".
#13 16.36 List<OrderResponseModel> entityListToResponseModelList(List<Order> orders);
^
```
My ResponseMapper:
```java
package com.example.clothingstore.ordersubdomain.mapperlayer;

import com.example.clothingstore.customersubdomain.datalayer.Customer;
import com.example.clothingstore.employeesubdomain.datalayer.Employee;
import com.example.clothingstore.ordersubdomain.datalayer.Order;
import com.example.clothingstore.ordersubdomain.presentationlayer.OrderController;
import com.example.clothingstore.ordersubdomain.presentationlayer.OrderResponseModel;
import com.example.clothingstore.productsubdomain.datalayer.Product;
import org.mapstruct.AfterMapping;
import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.MappingTarget;
import org.springframework.hateoas.Link;

import java.util.List;
import java.util.stream.Collectors;

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

@Mapper(componentModel = "spring")
public interface OrderResponseMapper {

    @Mapping(expression = "java(order.getOrderIdentifier().getOrderId())", target = "orderId")
    @Mapping(expression = "java(order.getProductIdentifier().getProductId())", target = "productId")
    @Mapping(expression = "java(product.getName())", target = "name")
    @Mapping(expression = "java(product.getPrice())", target = "price")
    @Mapping(expression = "java(order.getCustomerIdentifier().getCustomerId())", target = "customerId")
    @Mapping(expression = "java(customer.getBillingAddress().getStreetAddress())", target = "streetAddress")
    @Mapping(expression = "java(customer.getBillingAddress().getPostalCode())", target = "postalCode")
    @Mapping(expression = "java(customer.getBillingAddress().getCity())", target = "city")
    @Mapping(expression = "java(customer.getBillingAddress().getProvince())", target = "province")
    @Mapping(expression = "java(order.getEmployeeIdentifier().getEmployeeId())", target = "employeeId")
    @Mapping(expression = "java(employee.getLastName())", target = "lastName")
    @Mapping(expression = "java(order.getDeliveryStatus().name())", target = "deliveryStatus")
    @Mapping(expression = "java(order.getShippingPrice())", target = "shippingPrice")
    @Mapping(expression = "java(order.getTotalPrice())", target = "totalPrice")
    OrderResponseModel entityToResponseModel(Order order, Product product, Customer customer, Employee employee);

    @AfterMapping
    default void addLinks(@MappingTarget OrderResponseModel model) {
        Link selfLink = linkTo(methodOn(OrderController.class)
                .getOrderByOrderId(model.getOrderId()))
                .withSelfRel();
        model.add(selfLink);

        Link ordersLink = linkTo(methodOn(OrderController.class)
                .getOrders())
                .withRel("All orders");
        model.add(ordersLink);
    }

    @Mapping(expression = "java(order.getOrderIdentifier().getOrderId())", target = "orderId")
    @Mapping(expression = "java(order.getProductIdentifier().getProductId())", target = "productId")
    @Mapping(expression = "java(order.getCustomerIdentifier().getCustomerId())", target = "customerId")
    @Mapping(expression = "java(order.getEmployeeIdentifier().getEmployeeId())", target = "employeeId")
    List<OrderResponseModel> entityListToResponseModelList(List<Order> orders);
}
```
And a few other classes that I feel like probably have nothing to do with my problem, but I'm including them for good measure.
OrderResponseModel
```java
package com.example.clothingstore.ordersubdomain.presentationlayer;

import com.example.clothingstore.common.enums.DeliveryStatus;
import com.example.clothingstore.common.identifiers.CustomerIdentifier;
import com.example.clothingstore.common.identifiers.EmployeeIdentifier;
import com.example.clothingstore.common.identifiers.OrderIdentifier;
import com.example.clothingstore.common.identifiers.ProductIdentifier;
import lombok.*;
import org.springframework.hateoas.RepresentationModel;

import java.math.BigDecimal;

@EqualsAndHashCode(callSuper = true)
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class OrderResponseModel extends RepresentationModel<OrderResponseModel> {
    String orderId;
    String productId;
    String name;
    BigDecimal price;
    String customerId;
    String streetAddress;
    String postalCode;
    String city;
    String province;
    String employeeId;
    String lastName;
    String deliveryStatus;
    BigDecimal shippingPrice;
    BigDecimal totalPrice;

    public String getOrderId() {
        return orderId;
    }

    public void setOrderId(String orderId) {
        this.orderId = orderId;
    }

    public String getProductId() {
        return productId;
    }

    public void setProductId(String productId) {
        this.productId = productId;
    }

    public String getCustomerId() {
        return customerId;
    }

    public void setCustomerId(String customerId) {
        this.customerId = customerId;
    }

    public String getEmployeeId() {
        return employeeId;
    }

    public void setEmployeeId(String employeeId) {
        this.employeeId = employeeId;
    }
}
```
Schema:
```sql
CREATE TABLE IF NOT EXISTS orders (
    id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
    order_id VARCHAR(36) UNIQUE,
    product_id VARCHAR(36),
    name VARCHAR(50),
    price DECIMAL(10, 2),
    customer_id VARCHAR(36),
    street_address VARCHAR(50),
    postal_code VARCHAR(50),
    city VARCHAR(50),
    province VARCHAR(50),
    employee_id VARCHAR(36),
    last_name VARCHAR(50),
    delivery_status VARCHAR(50),
    shipping_price DECIMAL(10, 2),
    total_price DECIMAL(10, 2),
    FOREIGN KEY (product_id) REFERENCES products(product_id),
    FOREIGN KEY (customer_id) REFERENCES customers(customer_id),
    FOREIGN KEY (employee_id) REFERENCES employees(employee_id),
    INDEX idx_customer_id (customer_id),
    INDEX idx_employee_id (employee_id),
    INDEX idx_order_id (order_id)
);
```
getOrders function:
```java
@Override
public List<OrderResponseModel> getOrders() {
    List<Order> orders = orderRepository.findAll();
    return orderResponseMapper.entityListToResponseModelList(orders);
}
```
[And the postman response as well.](https://i.stack.imgur.com/GljLG.png)
I've tried adding getters and setters myself, without the use of lombok. I've tried mapping my entityListToResponseModel. I played around with my Request Mapper to no avail. |
entityListToResponseModelList unable to find mapped target properties, resulting in null results |
I am new to Node.js and I was trying to make an interactive website for my CNN project, but got this error. Help me out, guys.
    Error: error:0308010C:digital envelope routines::unsupported
        at new Hash (node:internal/crypto/hash:68:19)
        at Object.createHash (node:crypto:138:10)
        at module.exports (D:\Potato-disease-Major-1\frontend\node_modules\webpack\lib\util\createHash.js:135:53)
        at NormalModule._initBuildHash (D:\Potato-disease-Major-1\frontend\node_modules\webpack\lib\NormalModule.js:417:16)
        at D:\Potato-disease-Major-1\frontend\node_modules\webpack\lib\NormalModule.js:452:10
        at D:\Potato-disease-Major-1\frontend\node_modules\webpack\lib\NormalModule.js:323:13
        at D:\Potato-disease-Major-1\frontend\node_modules\loader-runner\lib\LoaderRunner.js:367:11
        at D:\Potato-disease-Major-1\frontend\node_modules\loader-runner\lib\LoaderRunner.js:233:18
        at context.callback (D:\Potato-disease-Major-1\frontend\node_modules\loader-runner\lib\LoaderRunner.js:111:13)
        at D:\Potato-disease-Major-1\frontend\node_modules\babel-loader\lib\index.js:59:103 {
      opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
      library: 'digital envelope routines',
      reason: 'unsupported',
      code: 'ERR_OSSL_EVP_UNSUPPORTED'
    }
I tried reinstalling the modules, but got the same result. |
Node.js openssl error while running the npm start command |
I manually created a 'test' directory in my Downloads folder and added your structure.
    import shutil

    shutil.make_archive(
        base_name='13_03_2024',
        format='zip',
        root_dir='test/',
        base_dir='.'
    )
After running this code I extracted it successfully by selecting "Extract here..." and got no parent folder:
[![enter image description here][1]][1]
It can also be extracted with shutil:

    import shutil

    shutil.unpack_archive(
        filename='13_03_2024.zip',
        extract_dir='.',
        format='zip'
    )
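To double-check programmatically that no parent folder ends up in the archive, you can rebuild the same structure in a temporary directory and list the zip entries (the `sub/file.txt` content below is just a placeholder, not from the question):

```python
import os
import shutil
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmp:
    # Recreate the structure: a 'test' folder with some content inside.
    os.makedirs(os.path.join(tmp, "test", "sub"))
    open(os.path.join(tmp, "test", "sub", "file.txt"), "w").close()

    archive = shutil.make_archive(
        base_name=os.path.join(tmp, "13_03_2024"),
        format="zip",
        root_dir=os.path.join(tmp, "test"),
        base_dir=".",
    )
    with zipfile.ZipFile(archive) as zf:
        names = zf.namelist()

print(names)  # entry paths are relative; no 'test/' parent component
```

Because `root_dir` points inside the 'test' folder and `base_dir='.'`, the archive entries start at the folder's contents rather than at the folder itself.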
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/5diUe.png
[2]: https://i.stack.imgur.com/pzcfz.png |
@EqualsAndHashCode(callSuper = true)
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class OrderResponseModel extends RepresentationModel<OrderResponseModel> {
String orderId;
String productId;
String name;
BigDecimal price;
String customerId;
String streetAddress;
String postalCode;
String city;
String province;
String employeeId;
String lastName;
String deliveryStatus;
BigDecimal shippingPrice;
BigDecimal totalPrice;
public String getOrderId() {
return orderId;
}
public void setOrderId(String orderId) {
this.orderId = orderId;
}
public String getProductId() {
return productId;
}
public void setProductId(String productId) {
this.productId = productId;
}
public String getCustomerId() {
return customerId;
}
public void setCustomerId(String customerId) {
this.customerId = customerId;
}
public String getEmployeeId() {
return employeeId;
}
public void setEmployeeId(String employeeId) {
this.employeeId = employeeId;
}
}
```
Schema:
```
CREATE TABLE IF NOT EXISTS orders (
id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
order_id VARCHAR(36) UNIQUE,
product_id VARCHAR(36),
name VARCHAR(50),
price DECIMAL(10, 2),
customer_id VARCHAR(36),
street_address VARCHAR(50),
postal_code VARCHAR(50),
city VARCHAR(50),
province VARCHAR(50),
employee_id VARCHAR(36),
last_name VARCHAR(50),
delivery_status VARCHAR(50),
shipping_price DECIMAL(10, 2),
total_price DECIMAL(10, 2),
FOREIGN KEY (product_id) REFERENCES products(product_id),
FOREIGN KEY (customer_id) REFERENCES customers(customer_id),
FOREIGN KEY (employee_id) REFERENCES employees(employee_id),
INDEX idx_customer_id (customer_id),
INDEX idx_employee_id (employee_id),
INDEX idx_order_id (order_id)
);
```
getOrders function:
```
@Override
public List<OrderResponseModel> getOrders() {
List<Order> orders = orderRepository.findAll();
return orderResponseMapper.entityListToResponseModelList(orders);
}
```
[And the postman response as well.](https://i.stack.imgur.com/GljLG.png)
I've tried adding getters and setters myself, without the use of lombok. I've tried mapping my entityListToResponseModel. I played around with my Request Mapper to no avail.
Thanks in advance for the help! And apologies if this question doesn't meet standards, it's my first time posting on here. |
I want to use the Flutter Custom Clipper to create a shape like the picture below, but it doesn't work. Is there anyone who can help?
[![enter image description here][1]][1]
```dart
class MyCustomClipper extends CustomClipper<Path> {
@override
Path getClip(Size size) {
List<Offset> polygon = [
Offset(0, 10),
Offset(size.width - 30, 10),
Offset(size.width - 25, 0),
Offset(size.width - 20, 10),
Offset(size.width, 10),
Offset(size.width, size.height),
Offset(0, size.height),
Offset(0, 10),
];
Path path = Path()
..addPolygon(polygon, false)
..close();
return path;
}
@override
bool shouldReclip(CustomClipper<Path> oldClipper) {
return true;
}
}
```
Actually, I built this path with `addPolygon`, but I couldn't give it a border, so I guess I don't know much about paths.
<Svg code>

```
<svg width="132" height="110" viewBox="0 0 132 110" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M111.858 1.73511L111.434 1.05619L111.01 1.73512L107.096 8H5.997C2.96081 8 0.5 10.4627 0.5 13.5V103.5C0.5 106.537 2.96081 109 5.997 109H125.925C128.961 109 131.422 106.537 131.422 103.5V13.5C131.422 10.4627 128.961 8 125.925 8H115.771L111.858 1.73511Z" fill="white" stroke="#EAECEE"/>
</svg>
```
[1]: https://i.stack.imgur.com/YXJ78.png |
{"Voters":[{"Id":2802040,"DisplayName":"Paulie_D"},{"Id":616443,"DisplayName":"j08691"},{"Id":8289095,"DisplayName":"Chris"}]} |
Using SOAP with WSDL in python with the suds-py3 library:
Why doesn't it show me anything when I use the following code?
What am I doing wrong?
config:
https://appweb.dane.gov.co/sipsaWS/SrvSipsaUpraBeanService?WSDL
https://appweb.dane.gov.co/sipsaWS/SrvSipsaUpraBeanService?xsd=1
```lang-python3
from suds.client import Client
url = "https://appweb.dane.gov.co/sipsaWS/SrvSipsaUpraBeanService?xsd=1"
results = Client(url)
print(results)
```
**Solution:**
Solution to the problem: use the zeep library.
**implementation:**
https://github.com/oigonzalezp2024/cliente-webservice-sipsa/blob/main/consultarInsumosSipsaMesMadr.py
[show console][1]
[1]: https://i.stack.imgur.com/KIvoV.png |
null |
null |
null |
null |
An external address is a pointer to memory that is not managed by CLIPS. The only way to create an external address is to use the user-defined function API in CLIPS to call out to a function which returns a pointer to memory. The default value for an external address is a null pointer, so if you declare the type of a slot to be an external-address, and don't specify the value when you assert a fact, you'll see the value of the slot set to a null pointer.
CLIPS> (deftemplate window (slot pointer (type EXTERNAL-ADDRESS)))
CLIPS> (assert (window))
<Fact-1>
CLIPS> (facts)
f-1 (window (pointer <Pointer-C-0x0>))
For a total of 1 fact.
CLIPS>
If you wanted to create a GUI that could be manipulated by CLIPS, you might define some user-defined functions which allocate a window (create-window) and print to that window (print-window). You could then create a window and assign the pointer to the slot value of a fact:
(assert (window (pointer (create-window))))
You could then print to the window from a rule like this:
(defrule hello
(window (pointer ?window))
=>
(print-window ?window "Hello World!"))
Fact-addresses and instance-addresses are also pointers to memory, but these are pointers to data structures that CLIPS maintains.
An instance name is just a special kind of symbol that refers to an instance of a class. You can specify the instance name when creating an instance or, if none is specified, one will be automatically created. Messages can be sent to an instance using either the instance name or the instance address.
CLIPS> (defclass POINT (is-a USER) (slot x) (slot y))
CLIPS> (make-instance [p1] of POINT (x 1) (y 2))
[p1]
CLIPS> (make-instance of POINT (x 0) (y 3))
[gen1]
CLIPS> (instances)
[p1] of POINT
[gen1] of POINT
For a total of 2 instances.
CLIPS> (send [p1] get-x)
1
CLIPS> (instance-address [p1])
<Instance-p1>
CLIPS>
(defrule example
?i <- (object (is-a POINT) (name ?n))
=>
(println "The instance address is " ?i)
(println "The instance name is " ?n))
CLIPS> (run)
The instance address is <Instance-gen1>
The instance name is [gen1]
The instance address is <Instance-p1>
The instance name is [p1]
CLIPS>
|
Using SOAP with WSDL in python with the suds-py3 library:
Why doesn't it show me anything when I use the following code?
What am I doing wrong?
config:
https://appweb.dane.gov.co/sipsaWS/SrvSipsaUpraBeanService?WSDL
https://appweb.dane.gov.co/sipsaWS/SrvSipsaUpraBeanService?xsd=1
```lang-python3
from suds.client import Client
url = "https://appweb.dane.gov.co/sipsaWS/SrvSipsaUpraBeanService?xsd=1"
results = Client(url)
print(results)
```
**Solution:**
Solution to the problem: use the zeep library.
**implementation:**
https://github.com/oigonzalezp2024/cliente-webservice-sipsa/blob/main/etl/etl.py
[show console][1]
[1]: https://i.stack.imgur.com/KIvoV.png |
C++17 path configuration problem in vscode |
How do I set a null destination using zbus? I want to receive a signal that has a (null destination), but the ProxyBuilder doesn't allow me to do this.
This is how I'm sending the signal. `dbus-send --system --type=signal / com.example.greeting.GreetingSignal string:"wotcha"`
```rust
use tokio::runtime::Runtime;
use tokio_stream::StreamExt;
use zbus::{Connection, ProxyBuilder, Proxy, zvariant::NoneValue};
#[tokio::main]
async fn main() -> zbus::Result<()> {
// Connect to the system bus
let connection = Connection::system().await?;
// Create a proxy builder for the signal interface
let proxy_builder = ProxyBuilder::new(&connection)
.destination(zbus::names::BusName::null_value())?
.path("/")?
.interface("com.example.greeting")?;
let proxy: Proxy<'_> = proxy_builder.build().await?;
let mut signal_rec = proxy.receive_signal("GreetingSignal").await?;
while let Some(msg) = signal_rec.next().await {
let grt: String = msg.body().deserialize()?;
println!("{}", grt);
}
Ok(())
}
```
I tried using None and an empty string, but those still failed. |
Zbus create proxy builder without destination |
|rust|raspberry-pi|dbus| |
null |
{"Voters":[{"Id":174777,"DisplayName":"John Rotenstein"}]} |
I am trying to write simple Apache Flink MongoDB connector code to read and write JSON data in MongoDB. First, below is the MongoDB sink code.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
List<Tuple2<String, Integer>> data = new ArrayList<>();
data.add(new Tuple2<>("Hello", 1));
data.add(new Tuple2<>("Hi", 2));
data.add(new Tuple2<>("Hey", 3));
DataStream<Tuple2<String, Integer>> stream = env.fromCollection(data);
MongoSink<Tuple2<String, Integer>> sink = MongoSink.<Tuple2<String, Integer>>builder()
.setUri("mongodb://127.0.0.1:27017")
.setDatabase("test_db")
.setCollection("test_coll")
.setBatchSize(1000)
.setBatchIntervalMs(1000)
.setMaxRetries(3)
.setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
.setSerializationSchema(
(input, context)
-> {
Document doc = new Document(input.f0, input.f1);
return new InsertOneModel<>(BsonDocument.parse(doc.toJson()));
})
.build();
stream.sinkTo(sink);
This sink code inserts JSON documents into MongoDB successfully. The generated documents are
{
"_id": {
"$oid": "65f67f3b9779060fd2390d0e"
},
"Hello": 1
}
But the MongoDB source code throws an error.
MongoSource<Tuple2<String,Integer>> source = MongoSource.<Tuple2<String,Integer>>builder()
.setUri("mongodb://127.0.0.1:27017")
.setDatabase("test_db")
.setCollection("test_coll")
.setDeserializationSchema(new MongoDeserializationSchema<Tuple2<String, Integer>>() {
@Override
public Tuple2<String, Integer> deserialize(BsonDocument document) {
String key = document.getFirstKey();
Integer value = document.getInt64(key).intValue(); // this line throws the error message
return new Tuple2<String, Integer>(key, value);
}
@Override
public TypeInformation<Tuple2<String, Integer>> getProducedType() {
return Types.TUPLE(Types.STRING, Types.INT);
}
})
.build();
DataStream<Tuple2<String, Integer>> ds = env.fromSource(source, WatermarkStrategy.noWatermarks(), "MongoDB-Source");
ds.print();
The error messages are
Caused by: org.bson.BsonInvalidOperationException: Value expected to be of type INT64 is of unexpected type OBJECT_ID
at org.bson.BsonValue.throwIfInvalidType(BsonValue.java:419)
at org.bson.BsonValue.asInt64(BsonValue.java:105)
at org.bson.BsonDocument.getInt64(BsonDocument.java:203)
at com.aaa.test.FlinkMongoTest$1.deserialize(FlinkMongoTest.java:63)
at com.aaa.test.FlinkMongoTest$1.deserialize(FlinkMongoTest.java:1)
at org.apache.flink.connector.mongodb.source.reader.deserializer.MongoDeserializationSchema.deserialize(MongoDeserializationSchema.java:58)
at org.apache.flink.connector.mongodb.source.reader.emitter.MongoRecordEmitter.emitRecord(MongoRecordEmitter.java:54)
at org.apache.flink.connector.mongodb.source.reader.emitter.MongoRecordEmitter.emitRecord(MongoRecordEmitter.java:34)
at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:160)
I think the value type of the JSON data should be INT64, but the returned type is OBJECT_ID, so this code throws the error. Kindly tell me how to read the integer value of the MongoDB document, not the OBJECT_ID. Any reply will be appreciated. |
org.bson.BsonInvalidOperationException: Value expected to be of type INT64 is of unexpected type OBJECT_ID |
|mongodb|apache-flink|flink-streaming|bson| |
One way to do this would be to split the schema down into three parts. The first part checks there is at least one contact method, but it doesn't care at all about `email` or `phone`.
This is then intersected with two other schemas. One to handle `email` and one to handle `phone`. Each of those schemas is a union of the two possibilities for each respective one:
* `phone`/`email` is not in contact methods.
* `phone`/`email` is in contact methods, and `phone/email` is valid.
The whole thing comes together in a single schema.
```
const contactMethods = z.enum(['email', 'phone'])
const schema = z
.object({
// other fields...
contactMethods: z
.array(contactMethods, {
invalid_type_error: 'At least one contact method is required',
})
.min(1, 'At least one contact method is required'),
})
.and(
z.union([
z.object({
contactMethods: z
.array(contactMethods)
.refine((val) => val.includes(contactMethods.Enum.email)),
email: z
.string({ invalid_type_error: 'Enter an email1' })
.trim()
.email('Enter a valid email'),
}),
z.object({
contactMethods: z
.array(contactMethods)
.refine((val) => !val.includes(contactMethods.Enum.email)),
}),
]),
)
.and(
z.union([
z.object({
contactMethods: z
.array(contactMethods)
.refine((val) => val.includes(contactMethods.Enum.phone)),
phone: z
.string({ invalid_type_error: 'Enter a phone number' })
.trim()
.transform((val) => val.replaceAll(' ', '')) // spaces confuse validator function below
.refine((val) => validator.isMobilePhone(val), {
message: 'Enter a valid phone number',
}) // check value is parsable as phone number
.transform((val) => parsePhoneNumber(val, 'GB')?.number), // parse into +44 e.164 format if not already
}),
z.object({
contactMethods: z
.array(contactMethods)
.refine((val) => !val.includes(contactMethods.Enum.phone)),
}),
]),
)
``` |
|woocommerce|subscription|auto-renewing| |
I hope you find a solution for your problem, but I found in the docs the `json_secret` argument, which you can switch in for `secret_type:` and `secret:`. Secret Manager can then do the conversion between key/value and string afterwards.
Something like that:

    json_secret:
      url: "{{private_link}}"
      port: "{{access}}"
      username: "{{item.item.user}}"
      password: "{{item.item.password}}"
      database: "{{database}}" |
I have a python script that depends on imported packages, as shown:
```
#script stored as get_command_Voltage.py
import pyabf
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#load data
def get_command_V(filepath):
abfdata = pyabf.ABF(filepath)
return abfdata
commandv = get_command_V(filepath1)
```
and I'd like to store the python output, commandv, in the variable commandV in matlab.
I wrote the following code on Matlab:
```
pyabf = py.importlib.import_module('pyabf');
filepath_this = "..." %full file path
commandV = pyrunfile("get_command_Voltage.py","commandv",filepath1 = filepath_this)
```
but it gives me
> ModuleNotFoundError: No Module named 'pyabf'
```
Warning: Unable to load Python object. Saving (serializing) Python objects into a MAT-file is not supported.
> In vclampanal (line 25)
Error using <frozen importlib>_find_and_load_unlocked (line 1140)
Python Error: ModuleNotFoundError: No module named 'pyabf'
Error in <frozen importlib>_find_and_load (line 1176)
Error in <frozen importlib>_gcd_import (line 1204)
Error in __init__>import_module (line 126)
```
I'm just wondering how I could resolve this issue and run python scripts that depend on imports.
Any advice would be greatly appreciated!
Thank you |
How do I import python modules on matlab using pyrunfile? |
|python|python-3.x|matlab|import| |
null |
Note up front: very amateur coder, hence why my request is so direct.
Trying to get the UPC from the page source of https://www.walmart.com/ip/Rice-Krispies-Treats-Original-Chewy-Crispy-Marshmallow-Squares-Ready-to-Eat-12-4-oz-16-Count/10818666?athbdg=L1600&from=/search
The UPC is only in the page source, not on the page itself.
I have a couple different pieces of code that work, but get caught up in Walmart's CAPTCHA "Activate and hold the button to confirm that you're human. Thank You!" I don't know the first thing about trying to bypass this. I need help going the next step and getting what I would if I did right-click and View page source.
Also, FWIW I am using Anaconda Jupyter Lab.
- TAKE 1
#import urllib2
import urllib.request as urllib2
response = urllib2.urlopen("http://google.de")
page_source = response.read()
page_source
- TAKE 2
response = urllib2.urlopen("https://www.walmart.com/ip/Rice-Krispies-Treats-Original-Chewy-Crispy-Marshmallow-Squares-Ready-to-Eat-12-4-oz-16-Count/10818666?athbdg=L1600&from=/search")
page_source = response.read()
page_source
- TAKE 3
#pip install -U selenium
from selenium import webdriver
import time
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(options=chrome_options)
driver.get("https://www.walmart.com/ip/Rice-Krispies-Treats-Original-Chewy-Crispy-Marshmallow-Squares-Ready-to-Eat-12-4-oz-16-Count/10818666?athbdg=L1600&from=/search")
time.sleep(10)
html_source = driver.page_source
html_source |
[`GridBagLayout`](https://docs.oracle.com/javase/8/docs/api/java/awt/GridBagLayout.html) is your best friend when working with Swing. [This explanation](https://stackoverflow.com/a/30656498/5645656) helped me a lot in learning how to use it. You can also look at the [Oracle tutorial](https://docs.oracle.com/javase%2Ftutorial%2Fuiswing%2F%2F/layout/gridbag.html).
For your particular scenario, using [`GridBagConstraints#HORIZONTAL`][1] should do the trick.
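As a quick sketch (the class and method names here are made up for illustration, not from the linked docs), setting `fill = GridBagConstraints.HORIZONTAL` together with a non-zero `weightx` makes each component stretch across its row:

```java
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import javax.swing.JPanel;
import javax.swing.JTextField;

public class HorizontalFillDemo {
    // Builds a panel whose text fields stretch to the panel's full width.
    static JPanel buildPanel() {
        JPanel panel = new JPanel(new GridBagLayout());
        GridBagConstraints c = new GridBagConstraints();
        c.fill = GridBagConstraints.HORIZONTAL; // stretch each component horizontally
        c.weightx = 1.0;                        // give the column all extra horizontal space
        c.gridx = 0;

        c.gridy = 0;
        panel.add(new JTextField(10), c);
        c.gridy = 1;
        panel.add(new JTextField(10), c);
        return panel;
    }

    public static void main(String[] args) {
        JPanel panel = buildPanel();
        System.out.println(panel.getComponentCount() + " components added");
    }
}
```

Without the `weightx`, the components would collapse to their preferred size even with `fill` set, because the column gets no extra space to fill.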
[1]: https://docs.oracle.com/javase/8/docs/api/java/awt/GridBagConstraints.html#HORIZONTAL |
OK then, let the DB have the blob in raw format... Note that a raw `Blob` can't go through jQuery's default parameter serialization, so wrap it in a `FormData` and turn off `processData`/`contentType`:

    canvas.toBlob((blob) => {
        const data = new FormData();
        data.append("action", "addblobtodb");
        data.append("image", blob);
        jQuery.ajax({
            url: "//domain.com/wp-admin/admin-ajax.php",
            type: "POST",
            data: data,
            processData: false, // don't let jQuery serialize the FormData
            contentType: false, // let the browser set the multipart boundary
            success: function (id) { console.log("Successfully inserted into DB: " + id); }
        });
    }, 'image/png', 1); // <--- specify the type & quality of the image here
I want to generate a list given a number n which returns all possible combinations from one to `n` recursively.
For example
```
generate 3
```
should return:
```
[[1,1,1],[1,1,2],[1,1,3],[1,2,1],[1,2,2],[1,2,3],[1,3,1],[1,3,2],[1,3,3],[2,1,1],[2,1,2],[2,1,3],[2,2,1],[2,2,2],[2,2,3],[2,3,1],[2,3,2],[2,3,3],[3,1,1],[3,1,2],[3,1,3],[3,2,1],[3,2,2],[3,2,3],[3,3,1],[3,3,2],[3,3,3]]
```
Logically something like this, which obviously returns an error:
```
generate n = [replicate n _ | _ <- [1..n]]
``` |
{"OriginalQuestionIds":[16666353],"Voters":[{"Id":-1,"DisplayName":"Community","BindingReason":{"DuplicateApprovedByAsker":""}}]} |
https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-base

See that py311 is on al2: uses glibc 2_24.

py312 is on al2023: uses glibc 2_28 I believe.

When you install bcrypt>4, it necessarily depends on glibc 2_28, which al2 does not provide.

Also, if you are building on a mismatched version of Python you will also fail. Make sure that you are building on the same Python version, OS, and arch as the Lambda environment, and then downgrade to bcrypt 3.

Alternatively you can move to a python312 Lambda environment. |
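A hedged sketch of the build fix (the image tag, file names, and handler path are assumptions, not from the answer above): install the dependency inside the same base image the Lambda runtime uses, with bcrypt pinned below 4 so its wheel matches al2's older glibc:

```dockerfile
# Build inside the image Lambda actually runs for Python 3.11 (al2),
# so the installed wheels match the runtime's glibc.
FROM public.ecr.aws/lambda/python:3.11

# bcrypt 4.x wheels need a newer glibc than al2 provides, so stay on 3.x
RUN pip install "bcrypt<4" -t /var/task

COPY app.py /var/task/
CMD ["app.handler"]
```

Building on a matching Python version, OS, and architecture is what prevents pip from picking an incompatible wheel in the first place.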
I figured this is a configuration problem. The sample provides the config as follows
```none
OAUTH_AUTHORITY=https://login.microsoftonline.com/common
OAUTH_ID_METADATA=/v2.0/.well-known/openid-configuration
OAUTH_AUTHORIZE_ENDPOINT=/oauth2/v2.0/authorize
OAUTH_TOKEN_ENDPOINT=/oauth2/v2.0/token
```
wreck uses [Url.URL][1] to combine `OAUTH_AUTHORITY` with `OAUTH_TOKEN_ENDPOINT` which results in `https://login.microsoftonline.com/oauth2/v2.0/token` and therefore loses `common`. This results in a `404` and therefore no JSON response anymore.
I changed the config slightly and removed the leading slashes from the relative paths and added a trailing slash to the base URL.
```none
OAUTH_AUTHORITY=https://login.microsoftonline.com/common/
OAUTH_ID_METADATA=/v2.0/.well-known/openid-configuration
OAUTH_AUTHORIZE_ENDPOINT=oauth2/v2.0/authorize
OAUTH_TOKEN_ENDPOINT=oauth2/v2.0/token
```
So that `OAUTH_TOKEN_ENDPOINT` is relative. I have not figured out why it worked for authorize, but it still works.
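The resolution behaviour can be reproduced directly with Node's `URL` (a quick sketch; the endpoint values are the ones from the config above):

```javascript
// A leading slash in the relative part resets the base path entirely:
console.log(new URL("/oauth2/v2.0/token", "https://login.microsoftonline.com/common").href);
// https://login.microsoftonline.com/oauth2/v2.0/token  ("common" is lost)

// A relative path plus a trailing slash on the base keeps "common":
console.log(new URL("oauth2/v2.0/token", "https://login.microsoftonline.com/common/").href);
// https://login.microsoftonline.com/common/oauth2/v2.0/token
```

This is standard WHATWG URL resolution, so the same behaviour applies to any library that combines the base and endpoint this way.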
[1]: https://nodejs.org/api/url.html#url_constructor_new_url_input_base |
null |
I had the same issue. In my case I had the wrong `input_size` for the model. **It should be the number of features**; I had the sequence length instead, which for some reason worked on the CPU but not the GPU. |
I am encountering an issue while attempting to print a Jasper report within a Docker container using an Alpine-based OpenJDK image. The application runs smoothly when using a non-Alpine base image, but encounters problems when using Alpine-based images.
I am using `JasperFillManager.fillReport(String sourceFileName, Map<String, Object> params, Connection connection)` method for report printing in java.
# Here's my Dockerfile that works fine with a non-Alpine base image:
FROM openjdk:17-jdk-slim
MAINTAINER my-company.com
RUN apt-get update && apt-get install fontconfig libfreetype6 -y && apt-get update
COPY target/classes/static/fonts/*.ttf /usr/local/share/fonts/
RUN fc-cache -fv
COPY target/my-java-app-?.?*.jar .
ENTRYPOINT ["java", "-jar", "/my-java-app-2.0.0.jar"]
EXPOSE 8040
# However, when I switch to an Alpine-based OpenJDK image with the following Dockerfile, the reports are not printed, and no exceptions are thrown:
FROM bellsoft/liberica-openjdk-alpine:17
MAINTAINER my-company.com
RUN apk update && apk add fontconfig freetype
COPY target/classes/static/fonts/*.ttf /usr/local/share/fonts/
RUN fc-cache -fv
COPY target/my-java-app-?.?*.jar .
ENTRYPOINT ["java", "-jar", "/my-java-app-2.0.0.jar"]
EXPOSE 8040
The application is responsible for printing Jasper reports, and it works fine with a non-Alpine base image. However, when using the Alpine-based OpenJDK image, the reports are not printed, and no exceptions are thrown.
I think the issue is with the freetype library, which is not allowing me to print the report.
I've ensured that the necessary fonts are copied and cached properly. Is there anything specific I need to configure or install in the Alpine-based image to enable Jasper report printing? Are there any known compatibility issues or additional dependencies required for Jasper printing within Alpine-based images?
Any insights or suggestions would be greatly appreciated. Thank you!
|
**Sample Input:**
> const numbers = [
> 1,
> [3, [2, 8, [12, 9]]],
> [5],
> [12, [[5]]],
> [100, [23, 45]]
> ]
**Sample Output:**
> const numbers = [
> 1, 3, 2, 8, 12,
> 9, 5, 12, 5, 100,
> 23, 45
> ]
**Solution(1st Method):**
> const numbers = [
> 1,
> [3, [2, 8, [12, 9]]],
> [5],
> [12, [[5]]],
> [100, [23, 45]]
> ]
>
> function flattenNestedArray(arr) {
>   let numbers = [];
>
>   for (let elt of arr) {
>     if (Array.isArray(elt)) {
>       numbers = numbers.concat(flattenNestedArray(elt))
>     } else {
>       numbers.push(elt)
>     }
>   }
>   return numbers
> }
>
>
> var flatten = flattenNestedArray(numbers)
> console.log(flatten)
**Solution(2nd Method):**
> const numbers = [
> 1,
> [3, [2, 8, [12, 9]]],
> [5],
> [12, [[5]]],
> [100, [23, 45]]
> ]
>
> var flatten = numbers.flat(Infinity)
> console.log(flatten) |
{"OriginalQuestionIds":[11205254],"Voters":[{"Id":4834,"DisplayName":"quamrana","BindingReason":{"GoldTagBadge":"python"}}]} |
You can solve this by wrapping the command in a shell command:
xterm -e sh -c "php 1.php" |
One counterexample to your solution is
```
[8, 7, 5, 4, 4, 1]
```
Adding as you have done would give the subsets
```
[8, 4, 4], [7, 5, 1]: difference of sums = 3
```
while the optimal solution is
```
[8, 5, 4], [7, 4, 1]: difference of sums = 1
```
Thus, to solve this problem, you need to brute-force generate all (n choose floor(n/2)) combinations and find the one with the smallest difference. Here is some sample code:
```python
comb = []
def getcomb(l, ind, k):
if len(comb) == k:
return [comb[:]]
if ind == len(l):
return []
ret = getcomb(l, ind+1, k)
comb.append(l[ind])
ret += getcomb(l, ind+1, k)
comb.pop()
return ret
def get_best_split(l):
lsm = sum(l)
best = lsm
for i in getcomb(l, 0, len(l)//2):
sm = sum(i)
best = min(best, abs(sm - (lsm - sm)))
return best
print(get_best_split([8, 7, 5, 4, 4, 1])) # outputs 1
```
**EDIT:**
If you don't care about the subsets themselves, then you can just generate all possible sums:
```python
def getcomb(l, ind, k, val):
if k == 0:
return [val]
if ind == len(l):
return []
return getcomb(l, ind+1, k, val) + getcomb(l, ind+1, k-1, val + l[ind])
def get_best_split(l):
l = sorted(l, reverse=True)
best = 1000000
for i in getcomb(l, 0, len(l)//2, 0):
best = min(best, abs(i - (sum(l) - i)))
return best
```
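If you only need the sums, the same brute force is shorter with `itertools.combinations` (a sketch, not a drop-in replacement for the code above):

```python
from itertools import combinations

def get_best_split(l):
    # try every subset of size n//2 and track the smallest |sum(A) - sum(B)|;
    # if A has sum s, the other subset has sum total - s, so the gap is |total - 2s|
    total = sum(l)
    return min(abs(total - 2 * sum(c)) for c in combinations(l, len(l) // 2))

print(get_best_split([8, 7, 5, 4, 4, 1]))  # outputs 1
```

This still enumerates all (n choose n//2) subsets, so the complexity is the same; it only trades the hand-rolled recursion for the standard library.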
|
I have been building a Telegram bot with Node.js where people can send in a URL of a tweet, and if it contains media of type video, it downloads it and sends it to the user. That's the purpose, which I have already completed.
The problem I have is with setting a paywall on this bot account so that after 2 or 3 requests, the user interacting with the bot has to make a payment to continue.
So I used the [sendInvoice](https://core.telegram.org/bots/api#sendinvoice) method.
FYI, I'm not using any external libraries; I'm directly interacting with the Telegram API endpoints.
I'm using Stripe for payments in test mode. The problem I have is as you can see in this picture:
[enter image description here](https://i.stack.imgur.com/Vm69E.png)
After I send the invoice, the user clicks the pay button and adds all the necessary card details; then, after hitting pay, it keeps buffering and then times out.
I also tried the [createInvoiceLink](https://core.telegram.org/bots/api#createinvoicelink) method and the same thing happened.
Of course it's some mistake in my solution, but I don't know where that is. Maybe I have to use a webhook or something to catch the checkout initiation/completion, but how can I let the Telegram API know about my payment webhook path (assuming something like that exists)?
The only one I found is the method [setWebhook](https://core.telegram.org/bots/api#setwebhook), which is an alternative approach to polling. And that is what I am doing locally with ngrok.
A Part of my code, this is an abstract version of the functionality:
```
const app = express();
const port = 3000;
const botToken = process.env.TELEGRAM_BOT_TOKEN;
app.use(bodyParser.json());
app.post(`/bot${botToken}`, async (req, res) => {
const { message } = req.body;
console.log({ message });
if (message && message.text) {
const chatId = message.chat.id;
const messageText = message.text;
// after successfull 3 responses from the bot, send an invoice
const invoiceRes = await sendInvoice(
chatId,
"Premium Subscription",
"Unlock premium content with a subscription.",
"premium_subscription",
process.env.PROVIDER_TOKEN,
"subscription",
"USD",
[{ label: "Subscription", amount: 1000 }],
);
console.log(JSON.stringify(invoiceRes, null, 2));
}
res.status(200).end();
});
async function sendInvoice(
chatId,
title,
description,
payload,
providerToken,
startParameter,
currency,
prices,
) {
const apiUrl = `https://api.telegram.org/bot${botToken}/sendInvoice`;
const invoice = {
chat_id: chatId,
title: title,
description: description,
payload: payload,
provider_token: providerToken,
start_parameter: startParameter,
currency: currency,
prices: prices,
};
try {
const response = await fetch(apiUrl, {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(invoice),
});
const data = await response.json();
// console.log(data);
return data;
} catch (error) {
console.error("There was a problem sending the invoice:", error);
return error.message;
}
}
```
I googled a lot to find some answers; there isn't anything specified in the [payment section of the docs](https://core.telegram.org/bots/payments) either, other than getting the provider token (nothing specific about what I have to do on the Stripe account dashboard, like adding a webhook endpoint; and even if that were the case, how would Telegram know that is the webhook it should communicate with?).
I have been mentioning webhooks a lot because in my mind I am assuming that is my missing piece. If it isn't, I'm sorry for doing so. I don't have much experience with Stripe or building a Telegram bot.
I hope I can get some help with this problem. Even a small bit of guidance will be enough if you are busy, just something I can go on. That's all I ask. |
Stripe Pre-Checkout Timeout Error while making a checkout from Telegram Bot |
|node.js|stripe-payments|telegram-bot| |
null |
The part of the code giving me the error is borrowed from research on how to find a specific value, match it to a value on another worksheet, and delete that row of data. The code has multiple tasks to complete:
1. Copy the row on Sheet22 and paste it to the next empty row on Sheet13
2. Find a match to Sheet22 cell C2 on Sheet11 column A and delete the matching row on Sheet11
3. Add formulas into columns F:M & Q on row 2 of Sheet22
```vba
Sub LineArchive_DD118()
    Dim TMLastDistRow
    Dim Answer As VbMsgBoxResult
    Dim LastRowInRange As Long, RowCounter As Long
    TMLastDistRow = Worksheets("Trailer Archives").Cells(Sheet13.Rows.Count, "B").End(xlUp).Row + 1
    LastRowInRange = Sheets(Sheet11).Range("A:A").Find("*", , xlFormulas, , xlByRows, xlPrevious).Row ' Returns a Row Number
    Application.ScreenUpdating = False
    Application.EnableEvents = False
    If Sheets("Dock Door Status").Range("P2").Value = "F" Then
        With Sheets("Dock Door Status")
            .Range("P2").Value = "F"
            .Range("C2:R2").Copy
        End With
        With Sheets("Trailer Archives")
            .Range("B" & TMLastDistRow).PasteSpecial Paste:=xlPasteValues
        End With
    Else
        Exit Sub
    End If
    Answer = MsgBox("Are you sure you want to clear DD118?", vbYesNo + vbCritical + vbDefaultButton2, "Dock Door 118 Data")
    If Answer = vbYes Then
        For RowCounter = LastRowInRange To 1 Step -1 ' Count backwards
            If Sheets("Sheet11").Range("A" & RowCounter) = Sheets("Sheet22").Range("C2") Then ' If cell matches our 'delete if' value then
                Sheets("Sheet11").Rows(RowCounter).EntireRow.Delete ' Delete the row
            End If
        Next
        With Sheets("Dock Door Status")
            .Range("D2:Q2").ClearContents
            .Range("D2") = "CLOSED"
        End With
        With Sheets("Dock Door Status")
            .Range("F2").Formula = "=IFERROR(INDEX('SSP Data'!B$2:B$50,MATCH($C2,'SSP Data'!$A$2:$A$50,0)),"""")"
            .Range("G2").Formula = "=IFERROR(INDEX('SSP Data'!C$2:C$50,MATCH($C2,'SSP Data'!$A$2:$A$50,0)),"""")"
            .Range("H2").Formula = "=IFERROR(INDEX('SSP Data'!D$2:D$50,MATCH($C2,'SSP Data'!$A$2:$A$50,0)),"""")"
            .Range("I2").Formula = "=IFERROR(INDEX('SSP Data'!E$2:E$50,MATCH($C2,'SSP Data'!$A$2:$A$50,0)),"""")"
            .Range("J2").Formula = "=IFERROR(INDEX('SSP Data'!F$2:F$50,MATCH($C2,'SSP Data'!$A$2:$A$50,0)),"""")"
            .Range("K2").Formula = "=IFERROR(INDEX('SSP Data'!G$2:G$50,MATCH($C2,'SSP Data'!$A$2:$A$50,0)),"""")"
            .Range("L2").Formula = "=IFERROR(INDEX('SSP Data'!H$2:H$50,MATCH($C2,'SSP Data'!$A$2:$A$50,0)),"""")"
            .Range("M2").Formula = "=IFERROR(INDEX('SSP Data'!I$2:I$50,MATCH($C2,'SSP Data'!$A$2:$A$50,0)),"""")"
            .Range("Q2").Formula = "=IF(G2="""","""",IF(AND(M2="""",N2>G2),""Future"",""Current""))"
        End With
    Else
        Exit Sub
    End If
    Application.EnableEvents = True
    Application.ScreenUpdating = True
End Sub
```
The line that is throwing the error is:
```vba
LastRowInRange = Sheets(Sheet11).Range("A:A").Find("*", , xlFormulas, , xlByRows, xlPrevious).Row ' Returns a Row Number
```
Part of what I want to change, I think, is to replace the wildcard with a static text string, i.e. "*" with "DD118". I am sure there is more to the error than that.
All help is appreciated.
|
I'm attempting to implement the case change feature available in Microsoft Word with Shift + F3 into a TinyMCE React editor. The problem I'm running into is the last part where it should keep the same text selected/highlighted. The below works fine, as long as I haven't highlighted the last character of a node. If I have selected the end of a line, I get an error: `Uncaught DOMException: Index or size is negative or greater than the allowed amount`
So far I have the following:
```typescript
const handleCaseChange = (ed: Editor) => {
ed.on("keydown", (event: KeyboardEvent) => {
if (event.shiftKey && event.key === "F3") {
event.preventDefault();
const selection = ed.selection.getSel();
const selectedText = selection?.toString();
const startOffset = selection?.getRangeAt(0).startOffset;
const endOffset = selection?.getRangeAt(0).endOffset;
if (selectedText !== undefined && selectedText.length > 0) {
let transformedText;
if (selectedText === selectedText.toUpperCase()) {
transformedText = selectedText.toLowerCase();
} else if (selectedText === selectedText.toLowerCase()) {
transformedText = capitalizeEachWord(selectedText);
} else {
transformedText = selectedText.toUpperCase();
}
ed.selection.setContent(transformedText);
const range = ed.getDoc().createRange();
// This is what's currently erroring
range.setStart(selection.anchorNode, startOffset);
if (endOffset === selection?.anchorNode?.textContent?.length) {
range.setEndAfter(selection.anchorNode);
} else {
range.setEnd(selection.anchorNode, endOffset);
}
selection.removeAllRanges();
selection.addRange(range);
}
}
}
}
const capitalizeEachWord = (str: string) => str.replace(/\b\w/g, (char: string) => char.toUpperCase());
```
What else could I try in the `range.setStart` to get this to work correctly? |
I'm building the backend of an app in Python and I wanted to use Django. The problem is, I installed pipenv with pip3 the first time and it ran smoothly; now if I try to install with pip only, it tells me:
```
Requirement already satisfied: pipenv in c:\users\marco\appdata\roaming\python\python39\site-packages (2023.12.1)
Requirement already satisfied: virtualenv>=20.24.2 in c:\python39\lib\site-packages (from pipenv) (20.25.1)
Requirement already satisfied: certifi in c:\users\marco\appdata\roaming\python\python39\site-packages (from pipenv) (2024.2.2)
Requirement already satisfied: setuptools>=67 in c:\users\marco\appdata\roaming\python\python39\site-packages (from pipenv) (69.2.0)
Requirement already satisfied: platformdirs<5,>=3.9.1 in c:\python39\lib\site-packages (from virtualenv>=20.24.2->pipenv) (4.2.0)
Requirement already satisfied: distlib<1,>=0.3.7 in c:\python39\lib\site-packages (from virtualenv>=20.24.2->pipenv) (0.3.8)
Requirement already satisfied: filelock<4,>=3.12.2 in c:\python39\lib\site-packages (from virtualenv>=20.24.2->pipenv) (3.13.3)
WARNING: You are using pip version 21.2.3; however, version 24.0 is available.
You should consider upgrading via the 'C:\Python39\python.exe -m pip install --upgrade pip' command.
```
but if I try to run `pipenv --version`, PowerShell tells me:
```
pipenv : The term 'pipenv' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ pipenv
+ ~~~~~~
    + CategoryInfo          : ObjectNotFound: (pipenv:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
```
(this is the "not recognized as the name of a cmdlet" error, translated here from Italian), meaning that pipenv seems to be uninstalled.
What should I do? I am new to Python, so this is probably a beginner mistake.
Should I uninstall it? And if so, how?
Any help would be appreciated, thank you. |
The snippet below is supposed to add and remove the event listener `handleBeforeUnload`. Checking the browser console I see "inside else" getting printed; however, reloading the page still throws the alert.
Any ideas why the event listener might not be getting removed?
```javascript
useEffect(() => {
    const handleBeforeUnload = (e) => {
      e.preventDefault();
      const message =
        'Are you sure you want to leave? All provided data will be lost.';
      e.returnValue = message;
      return message;
    };
    if (condition1 && condition2) {
      console.log('inside if');
      window.addEventListener('beforeunload', handleBeforeUnload);
    } else {
      console.log('inside else');
      window.removeEventListener('beforeunload', handleBeforeUnload);
    }
}, [stateVariable1, stateVariable2, stateVariable3]);
```
P.S.: Is there any alternative to the `beforeunload` event for detecting a reload? Since the `returnValue` property is deprecated, the default message is really unhelpful. How can one detect a reload and show a custom alert message?
I still see the popup even when the console prints the else branch, which means the event listener ought to have been removed. |
Removing the beforeunload event does not work in React |
|reactjs|event-handling|dom-events|simple-html-dom|onbeforeunload| |
How to align a TextView center horizontally in jetpack compose? Here is my code
```kotlin
Text(
    text = "Title of ModalBottomSheet",
    style = MaterialTheme.typography.headlineSmall,
    modifier = Modifier.padding(bottom = 8.dp)
)
```
|
I need to set text labels on the bars of a bar graph.
I need something like this:
[enter image description here](https://i.stack.imgur.com/pOMlM.png)
or to set the names in the middle of the bars.
My idea was to place a TextItem at each bar's coordinates,
but after implementing it, it turned out that with a large number of such elements, rendering begins to slow down.
|
How to specify text on a BarGraphItem in pyqtgraph |
|python|pyqtgraph|pyside6| |
You can replace the object as explained in [hsm207's answer][1]. Here is how to do it with the v4 Python API.
```python
my_collections = client.collections.get("Product") # Replace with your collection name
prop_to_remove = "description" # Replace with the property name you want to remove
for obj in my_collections.iterator():
if prop_to_remove in obj.properties:
del obj.properties[prop_to_remove]
my_collections.data.replace(uuid=obj.uuid, properties=obj.properties)
```
[1]: https://stackoverflow.com/a/76177363/8547757 |
You want a result with one row per customerId and baseCurrencyId. So, for all tables where there is no unique constraint on these two columns, you'll have to aggregate the table first in order to produce one row per customerId and baseCurrencyId. Once you are there you can join the results on these two columns. In the following query I assume that the aggregation is necessary for all your views.
```sql
WITH
debt AS
(
  SELECT
    customerid,
    basecurrencyid,
    SUM(totalolddebt) AS old,
    SUM(totalrecovereddebt) AS recovered,
    SUM(totalpaiddebt) AS paid
  FROM viewcustomerdebt
  GROUP BY customerid, basecurrencyid
),
sell AS
(
  SELECT
    customerid,
    basecurrencyid,
    SUM(CASE WHEN paymenttypeid = 1 THEN totalprice END) AS price_type1,
    SUM(CASE WHEN paymenttypeid = 2 THEN totalprice END) AS price_type2
  FROM viewcustomersell
  GROUP BY customerid, basecurrencyid
),
purchase AS
(
  SELECT
    customerid,
    basecurrencyid,
    SUM(CASE WHEN paymenttypeid = 1 THEN totalprice END) AS price_type1,
    SUM(CASE WHEN paymenttypeid = 2 THEN totalprice END) AS price_type2
  FROM viewcustomerpurchase
  GROUP BY customerid, basecurrencyid
)
SELECT
  COALESCE(debt.customerid, sell.customerid, purchase.customerid) AS customerid,
  COALESCE(debt.basecurrencyid, sell.basecurrencyid, purchase.basecurrencyid) AS basecurrencyid,
  COALESCE(sell.price_type1, 0) AS total_paid_sell_price,
  COALESCE(sell.price_type2, 0) AS total_debt_sell_price,
  COALESCE(sell.price_type2, 0) + COALESCE(debt.old, 0) - COALESCE(debt.recovered, 0) AS total_sell_debt,
  COALESCE(purchase.price_type1, 0) AS total_paid_purchase_price,
  COALESCE(purchase.price_type2, 0) AS total_debt_purchase_price,
  COALESCE(purchase.price_type2, 0) - COALESCE(debt.paid, 0) AS total_purchase_debt,
  (COALESCE(sell.price_type2, 0) + COALESCE(debt.old, 0) - COALESCE(debt.recovered, 0)) - (COALESCE(purchase.price_type2, 0) - COALESCE(debt.paid, 0)) AS total_debt_exchange
FROM debt
FULL OUTER JOIN sell ON sell.customerid = debt.customerid
                    AND sell.basecurrencyid = debt.basecurrencyid
FULL OUTER JOIN purchase ON purchase.customerid = COALESCE(debt.customerid, sell.customerid)
                        AND purchase.basecurrencyid = COALESCE(debt.basecurrencyid, sell.basecurrencyid)
ORDER BY
  COALESCE(debt.customerid, sell.customerid, purchase.customerid),
  COALESCE(debt.basecurrencyid, sell.basecurrencyid, purchase.basecurrencyid);
```
If SQL Server were standard SQL compliant and supported the `USING` clause, the joins would look like this:
```sql
FROM debt
FULL OUTER JOIN sell USING (customerid, basecurrencyid)
FULL OUTER JOIN purchase USING (customerid, basecurrencyid)
```
and the clumsy
```sql
COALESCE(debt.customerid, sell.customerid, purchase.customerid)
COALESCE(debt.basecurrencyid, sell.basecurrencyid, purchase.basecurrencyid)
```
would become mere
```sql
customerid
basecurrencyid
```
Demo: https://dbfiddle.uk/xtEwUV7h |
To simplify the example, I consider a dataframe with 3 10-Q and 2 10-K entries for each of two values of *cik*.
```python
import polars as pl
import datetime
df = pl.DataFrame({
"cik": [0] * 5 + [1] * 5,
"form": (["10-Q"] * 3 + ["10-K"] * 2) * 2,
"period": [datetime.date(2021, 1, 1+day) for day in range(10)],
})
```
```
shape: (10, 3)
βββββββ¬βββββββ¬βββββββββββββ
β cik β form β period β
β --- β --- β --- β
β i64 β str β date β
βββββββͺβββββββͺβββββββββββββ‘
β 0 β 10-Q β 2021-01-01 β
β 0 β 10-Q β 2021-01-02 β
β 0 β 10-Q β 2021-01-03 β
β 0 β 10-K β 2021-01-04 β
β 0 β 10-K β 2021-01-05 β
β 1 β 10-Q β 2021-01-06 β
β 1 β 10-Q β 2021-01-07 β
β 1 β 10-Q β 2021-01-08 β
β 1 β 10-K β 2021-01-09 β
β 1 β 10-K β 2021-01-10 β
βββββββ΄βββββββ΄βββββββββββββ
```
To filter the dataframe for each group defined by *cik*, we can simply use `pl.DataFrame.filter` together with `pl.Expr.over` (to define the groups) as follows.
```python
(
df
.sort(by=["cik", "form", "period"], descending=[False, False, True])
.filter(
(
((pl.col("form") == "10-Q") & (pl.int_range(pl.len()) == 0)) |
((pl.col("form") == "10-K") & (pl.int_range(pl.len()) < 2))
)
.over("cik", "form")
)
)
```
```
shape: (6, 3)
βββββββ¬βββββββ¬βββββββββββββ
β cik β form β period β
β --- β --- β --- β
β i64 β str β date β
βββββββͺβββββββͺβββββββββββββ‘
β 0 β 10-K β 2021-01-05 β
β 0 β 10-K β 2021-01-04 β
β 0 β 10-Q β 2021-01-02 β
β 1 β 10-K β 2021-01-10 β
β 1 β 10-K β 2021-01-09 β
β 1 β 10-Q β 2021-01-07 β
βββββββ΄βββββββ΄βββββββββββββ
``` |
It's an array of pointers to C strings (char arrays). The first [] selects a string in the array and the second [] selects a character within that string; each string is an array of characters terminated by a null character (a zero byte).
In main(), the argc parameter specifies how many strings there are in the char *argv[] array.
When the operating system calls main(), it supplies the number of parameters in int argc, along with the command-line parameters themselves in char *argv[].
The first argv entry is typically the program name itself, followed by any additional parameters specified on the command line.
So for instance if you ran:
```shell
./program.bin parameter1 --option1
```
the data in it would be accessed like this:
```c
puts(argv[0]); // prints "./program.bin"
puts(argv[1]); // prints "parameter1"
puts(argv[2]); // prints "--option1"
```
and argc would be 3.
The way I'd think of this is as char **argv; in fact you can replace char *argv[] with char **argv.
And I would read the layout as char *(argv[stringnum]), i.e. (argv[stringnum])[letternum], or just argv[stringnum][letternum].
Now, the reason
```c
(char *argv)[]
```
doesn't work is that your declaration of the variable argv is mixed in with syntax that dereferences it, so the compiler cannot tell whether it is a declaration or an expression. Judging by the error message, it decides it should be a declaration, but it doesn't understand the format of your declaration.
|