If you must use a position, you can capture an index variable by reference in your lambda. Of course, the usual caveats about `pow` and floating point numbers apply, and this is unnecessary to accomplish the task, but it is _possible_.
```
int idx = 0;
int number = std::accumulate(
    arr.begin(), arr.end(), 0,  // note the required initial value
    [&idx](const int num, const int bit) {
        return num + bit * std::pow(2, idx++);
    }
);
```
Sidenote: `arr` is a misleading name for a `std::vector` variable. |
I have a LET & FILTER formula that uses a drop down (green cell) and then populates a report underneath based on data in sheet("Master").
You can see it's bringing in all of the ("Master") sheet columns B:M in order, and I don't need columns C:K.
![enter image description here][1]
How can I change
`fRng,Master!B3:INDEX(Master!M:M,lr)` to only index and populate columns B, L, M into the sheet with the green drop down (the sheet with my LET formula)?
rest of formula:
```
=LET(
lr,COUNTA(Master!L:L),
fRng,Master!B3:INDEX(Master!M:M,lr),
criteriaRng1,Master!L3:INDEX(Master!L:L,lr),
criteriaRng2,Master!M3:INDEX(Master!M:M,lr),
FILTER(fRng,(criteriaRng1=Metrics!A1)*
(criteriaRng2="Negotiating")))
```
Data that I am indexing and whatnot in formula:
![enter image description here][2]
[1]: https://i.stack.imgur.com/Fsn2b.png
[2]: https://i.stack.imgur.com/0ctOr.png |
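For what it's worth, one possible shape for this — assuming `CHOOSECOLS` is available (Excel 365) and that B, L, M are columns 1, 11, 12 of the filtered B:M range — would be to wrap the existing `FILTER` call; a sketch under those assumptions, not tested against this workbook:

```
=LET(
lr,COUNTA(Master!L:L),
fRng,Master!B3:INDEX(Master!M:M,lr),
criteriaRng1,Master!L3:INDEX(Master!L:L,lr),
criteriaRng2,Master!M3:INDEX(Master!M:M,lr),
CHOOSECOLS(FILTER(fRng,(criteriaRng1=Metrics!A1)*
(criteriaRng2="Negotiating")),1,11,12))
```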
That's not how you cycle through a variable. You should do:
{% for wc in response %}
{{ wc }}
{% endfor %}
By doing `{{ response.wc }}`, Django will try three things in order: a dictionary lookup `response["wc"]`, an attribute lookup `response.wc` (calling it as `response.wc()` if it is callable), and a numeric index lookup `response[wc]` where `wc` is an index. Depending on your backing code, the method call could even change the `response` object. |
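That resolution order can be illustrated with a rough standalone sketch (this only mirrors the rules described above; it is not Django's actual implementation, and `resolve` is a name made up here):

```python
def resolve(obj, key):
    """Approximate how Django resolves `{{ obj.key }}` in a template."""
    # 1. Dictionary lookup: obj["key"]
    try:
        return obj[key]
    except (TypeError, KeyError, IndexError):
        pass
    # 2. Attribute lookup, calling the attribute if it is callable
    if hasattr(obj, key):
        attr = getattr(obj, key)
        return attr() if callable(attr) else attr
    # 3. Numeric index lookup: obj[int(key)]
    if key.isdigit():
        return obj[int(key)]
    raise LookupError(f"cannot resolve {key!r}")

print(resolve({"wc": 1}, "wc"))  # dict lookup -> 1
print(resolve(["a", "b"], "1"))  # index lookup -> b
```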
Builder annotation access level private |
HTTP Status 404 in Struts 2 web application and result JSP |
Found the issue. I don't know why, and there is nothing in the documentation that indicates this (that I could find), but referencing _two_ metadata variables seems to break it and makes all metadata variables return an empty string value.
```json
"expression": "'test_val_myschema'.'||$AR_M_SOURCE_TABLE_NAME",
```
works. Hopefully this answer can help someone else who gets stuck here. |
Looking at the second link you posted, it seems that the training process has stopped. |
Your loop in `Play.move_invaders()` tells each invader to move, but any invader that moves beyond the left or right limit immediately makes *all* invaders move down, *and* that switches the direction. That changes the test for the invaders that haven't moved yet (`if self.direction == 'right':` etc.), so not all invaders move in the same direction.
Your loop should first move all the invaders and only then, if the limit was exceeded, call `move_all_down()`:
```
import tkinter
import timeit
tk = tkinter.Tk()
WIDTH = 1000
HEIGHT = 650
global canvas
canvas = tkinter.Canvas(tk, width=WIDTH, height=HEIGHT, bg="black")
canvas.pack()
class Invader():
def __init__(self, canvas, play, x, y):
self.canvas = canvas
self.play = play
self.x_coord = x
self.y_coord = y
self.shape = canvas.create_rectangle(self.x_coord, self.y_coord, self.x_coord + 50, self.y_coord + 50, fill='green')
self.direction = 'left'
def move(self):
limit = False
if self.direction == 'right':
self.x_coord += 10
if self.x_coord + 40 >= WIDTH:
limit = True
elif self.direction == 'left':
self.x_coord -= 10
if self.x_coord - 10 <= 0:
limit = True
canvas.coords(self.shape, self.x_coord, self.y_coord, self.x_coord + 50, self.y_coord + 50)
return limit
class Play():
def __init__(self, canvas):
self.canvas = canvas
self.invaderlist = []
self.last_move_time = timeit.default_timer()
self.move_delay = 0.3443434
def move_invaders(self):
current_time = timeit.default_timer()
if current_time - self.last_move_time > self.move_delay:
move_down = False
for invader in self.invaderlist:
limit = invader.move()
move_down = move_down or limit
self.last_move_time = current_time
if move_down:
self.move_all_down()
def move_all_down(self):
for invader in self.invaderlist:
invader.y_coord += 10
#canvas.coords(invader.shape, invader.x_coord, invader.y_coord, invader.x_coord + 50, invader.y_coord + 50)
if invader.direction == 'left':
invader.direction = 'right'
elif invader.direction == 'right':
invader.direction = 'left'
def run_all(self):
x_coords = [50, 120, 200, 270, 350, 420, 500, 570, 650, 720]
y = 200
for i in range(10):
x = x_coords[i]
invader = Invader(self.canvas, self, x, y)
self.invaderlist.append(invader)
while True:
self.move_invaders()
canvas.after(5)
self.canvas.update()
play = Play(canvas)
play.run_all()
``` |
How can I get the difference in minutes between two dates and hours in C? |
|c| |
How can I move an element of one dynamic vector into another dynamic vector via `push_back`? I have probably said something that isn't right — sorry, I don't know English so well…
```
vector<int> *ans = new vector<int>();
vector<int> *x = new vector<int>();
ans->push_back(x[i]);
```
|
I'm implementing a parallel version of the Bellman-Ford algorithm in C++ using `std::atomic`.
This is my main function, executed in multiple threads:
```
void calculateDistances(size_t start, size_t end, const Graph& graph, std::vector<std::atomic<double>>& distances, bool& haveChange)
{
for (size_t source = start; source < end; ++source) {
for (const auto& connection : graph[source]) {
const size_t& destination = connection.destination;
const double& distance = connection.distance;
double oldDistance = distances[destination];
while (distances[source] + distance < oldDistance)
{
if (distances[destination].compare_exchange_strong(oldDistance, distances[source] + distance)) {
haveChange = true;
break;
}
}
}
}
}
```
Here I'm trying to update `distances[destination]` with `distances[source] + distance` if the latter is smaller.
However, both `distances[destination]` and `distances[source]` can be changed by another thread during this operation, so I'm using `compare_exchange_strong` here.
But even with this code a data race is still present and some of the iterations are skipped, resulting in failure of the algorithm on some input data.
Why is this happening and how can I fix it? |
**The problem**
In my Flutter app, there is a custom drawer that is accessed through a button. In this drawer, there is a widget (InfoCard) which displays some of the user's information. However, in this widget, there is a FutureBuilder, so every time the drawer is displayed it shows a CircularProgressIndicator until it gets all the pieces of information from the Firestore server. My question is: can I avoid the FutureBuilder so that, when the app is fully loaded, it gets all the data needed for the widget?
Another related problem is that I need to wait until the Firestore data is loaded before returning the InfoCard widget (I don't know why, because all the data is already loaded when the app is launched). So either I avoid calling the database, or I call it somehow before building the widget, so that I don't need to use the FutureBuilder.
**The code**
Here is the widget drawer:
```
class SideDrawer extends ConsumerStatefulWidget {
  SideDrawer({
    super.key,
    required this.setIndex,
  });

  Function(int index) setIndex;

  @override
  ConsumerState<SideDrawer> createState() => _SideDrawerState();
}

class _SideDrawerState extends ConsumerState<SideDrawer> {
  @override
  Widget build(BuildContext context) {
    final userP = ref.watch(userStateNotifier);
    return Scaffold(
      body: Container(
        width: MediaQuery.sizeOf(context).width * 3 / 4,
        height: double.infinity,
        color: Theme.of(context).colorScheme.secondary,
        child: SafeArea(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.spaceBetween,
            children: [
              FutureBuilder<Widget>(
                future: Datas().infocard(userP),
                builder: (context, snapshot) {
                  if (snapshot.connectionState == ConnectionState.done) {
                    if (snapshot.hasError) {
                      return Text('Errore: ${snapshot.error}');
                    }
                    return snapshot.data!;
                  } else {
                    return CircularProgressIndicator();
                  }
                },
              ),
              ColDrawer(setIndex: (index) {
                widget.setIndex(index);
              },),
            ],
          ),
        ),
      ),
    );
  }
}
```
the infoCard method:
```
Future<Widget> infocard(AppUser userP) async {
  var db = FirebaseFirestore.instance;
  QuerySnapshot qn =
      await db.collection('users').where('id', isEqualTo: userP.id).get();
  return InfoCard(
    username: userP.username,
    age: userP.age,
    weight: userP.measurements
        .firstWhere((measure) => measure.title == 'Weight')
        .datas
        .last
        .values
        .first,
  );
}
```
If I don't wait for the database to be loaded or if I substitute the FutureBuilder with the InfoCard widget, it throws this error ONLY for the userP.measurements... line:
StateError (Bad state: No element)
Lastly, why do I need to call the database again, even though all the data was already loaded from Firestore when the app launched? |
Stripe React Native - How to save payment method with 'IntentConfiguration' (collecting payment details before creating an intent) |
|react-native|stripe-payments| |
|vue.js|twitter-bootstrap|vuejs3|bootstrap-5| |
One way to overcome this limitation is to perform custom string encoding on both sides. For example:
```
var encodedHashtag = $"U+{Char.ConvertToUtf32("""#""", 0)}";
var valueToSearch = "TEST#12345".Replace("#", encodedHashtag);
```
and
```
var encodedHashtag = $"U+{Char.ConvertToUtf32("""#""", 0)}";
var decodedId = id.Replace(encodedHashtag, "#");
```
Please bear in mind that if the user enters a case id which includes the literal text `U+35`, then this approach will not work as intended. |
It looks like you're running into the same problem [as described in this blog post][1]. The short of it is that you are responsible for disposing of your logger, which will flush its contents to the appropriate sink.
[1]: https://web.archive.org/web/20161012233113/http://blog.merbla.com/2016/07/06/serilog-log-closeandflush/ |
To delete variables in the current cmd instance, do this:
set http_proxy=
set https_proxy=
or (even better):
set "http_proxy="
set "https_proxy="
To delete variables for future cmd instances, do this:
setx http_proxy ""
setx https_proxy "" |
I am unable to plot a regression line for a simple dataframe that consists of a period-formatted date column and an integer column. The sample dataframe may be created using the code below:
```
df = pd.DataFrame({"quarter": ['2017Q1', '2017Q2', '2017Q3', '2017Q4',
'2018Q1', '2018Q2', '2018Q3', '2018Q4'],
"total": [392, 664, 864,1024,
1202, 1375, 1532, 1717]
})
df["quarter"] = pd.to_datetime(df["quarter"]).dt.to_period('Q')
```
To generate a regression plot, I am using the code below:
```
ax = sns.regplot(
data=df,
x='quarter',
y='total',
)
plt.show();
```
However, I am getting an error as follows:
```
TypeError: float() argument must be a string or a number, not 'Period'
```
Converting the Period column to a string/object format did not fix the issue, and I am not able to convert it to an integer format either.
Can somebody suggest a way to plot a regression trend line for the "total" by "quarter"? |
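One possible workaround (an assumption about the intent, using only pandas; `quarter_num` is a column name made up here) is to map each quarter to a plain number and pass that to `regplot` as `x`:

```python
import pandas as pd

df = pd.DataFrame({"quarter": ['2017Q1', '2017Q2', '2017Q3', '2017Q4',
                               '2018Q1', '2018Q2', '2018Q3', '2018Q4'],
                   "total": [392, 664, 864, 1024,
                             1202, 1375, 1532, 1717]})
df["quarter"] = pd.to_datetime(df["quarter"]).dt.to_period('Q')

# Periods are not numbers, so derive a numeric axis: year + quarter fraction
df["quarter_num"] = df["quarter"].dt.year + (df["quarter"].dt.quarter - 1) / 4

print(df["quarter_num"].tolist())
# sns.regplot(data=df, x='quarter_num', y='total') should then accept it
```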
I have a problem using `HttpClient` and WCF, in particular when I pass a hashtag (#) as input.
Below is my code:
for WCF:
```c#
[OperationContract]
[WebGet( RequestFormat = WebMessageFormat.Json, UriTemplate = "/GetCase/{*id}" )]
string[] GetCase( string id );
```
on client:
```c#
HttpClientHandler handler = new HttpClientHandler();
handler.UseProxy = false;
HttpClient client = new HttpClient(handler);
string link = string.Format("https://{0}:{1}/", host, port);
Uri url = new Uri(link);
System.Net.ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls;
client.BaseAddress = new Uri(link);
string serviceName = "Service";
string restPart = "rest";
string method = "GetCase";
string valueToSearch = Uri.EscapeDataString("TEST#12345");
string completeLinkWithMethod = link + serviceName + "/" + restPart + "/" + method;
client.DefaultRequestHeaders.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
var response = await client.GetAsync(completeLinkWithMethod + "/" + valueToSearch);
```
When I make a call, on client I see that `valueToSearch` is "*TEST%2312345*" but when I go on server, id value is "*TEST*"
Is there a way to pass the whole string without it being truncated? |
I'm creating an application using React and Node/Express, and I'm providing the user an interface containing a button named Import. What I want: when the user clicks this button, a popup window appears asking them to connect to their Bitbucket account; once connected, they choose the repository and files they want to import and click the OK button in that popup, and the chosen files are stored in a folder inside the backend folder. I'm using VS Code to write the code.
I tried to look for a similar process on the internet, YouTube, and forums, and I didn't find much information on this. Does anyone know how I can implement this, or any resources I can check? |
importing files from bitbucket / jira and store it in my backend |
|import| |
You can try to use this package: https://pypi.org/project/selenium-print/
It uses selenium's `execute_cdp_cmd` function behind the scenes, which is fairly easy to use. The parameters can be found [here][1].
```py
import base64
import time

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

options = webdriver.ChromeOptions()
service = Service()
driver = webdriver.Chrome(service=service, options=options)
driver.get('http://localhost:3000')
time.sleep(2)
pdf = driver.execute_cdp_cmd("Page.printToPDF", {"printBackground": True})
pdf_data = base64.b64decode(pdf["data"])
with open("test.pdf", "wb") as f:
    f.write(pdf_data)
```
[1]: https://chromedevtools.github.io/devtools-protocol/tot/ |
|vba|image|ms-word|page-break| |
```
def function1(dd: pd.DataFrame):
    dd1 = dd.dictionary_column.apply(lambda x: pd.Series(eval(x.strip('"')))).T.reset_index()
    dd1.columns = ["dictionary_key", "dictionary_value"]
    return dd.sql().join(dd1.sql(), how="left", condition='1=1').select("ID,ENV,dictionary_key,dictionary_value").df()

df1.groupby(level=0, as_index=0, group_keys=0).apply(function1)
```
Output:
```
   ID     ENV         dictionary_key  dictionary_value
0  35702  name1       Employee        1.56
1  35702  name1       IC              1.18
0  35700  nam22       Quota           3.06
1  35700  nam22       ICS             0.37
0  11765  quotation   WSS             12235
1  11765  quotation   HRPart          485
2  11765  quotation   CNTL            1
0  22345  gamechanger Employees       5.192351
1  22345  gamechanger Participant     0.167899
0  22345  supporter   a0              31
1  22345  supporter   Table           5
2  22345  supporter   NewAssignee     1
3  22345  supporter   Result          5
```
|
Try the following:
1. In a Windows terminal:
```ps
# In my case name is "mountain" - see screenshot
docker run -p 8080:8080 mountain
```
[![enter image description here][1]][1]
2. Then:
`http://localhost:8080/`
[1]: https://i.stack.imgur.com/PteGY.png |
Changing your tables from heap tables to clustered tables on `ID` seems like it should help. If `ID` is not unique, then making a compound clustered index on all three columns would help with join speed, since the data on the pages would be ordered by those three columns.
The tradeoff of making the clustered key a compound key is that inserts and updates will take a little longer. However, if the table structure and the insert/update statements are well designed, this is a tradeoff worth taking.
Finally, if this table does not have a primary key, it would be good to add one at minimum, since you are looking up all the columns on `Table_B` to insert into `Table_A`. Adding a primary key should help the optimizer use a seek to find the data instead of scanning the table. |
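A sketch of the suggested changes in T-SQL (the index/constraint names and the extra column names are placeholders, since the question doesn't show the schema):

```sql
-- Compound clustered index on the three join columns (names assumed)
CREATE CLUSTERED INDEX CIX_TableB_Join
    ON Table_B (ID, Col2, Col3);

-- Or, if ID is unique, a clustered primary key instead
ALTER TABLE Table_B
    ADD CONSTRAINT PK_TableB PRIMARY KEY CLUSTERED (ID);
```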
Why do I need to wait to reaccess to Firestore database even though it has already done before? |
|flutter|google-cloud-firestore| |
I am creating an app that will read data from Firebase and display it on a map. The app keeps crashing because there are a huge number of records to retrieve from the database. I need a way to get only the latest node's data rather than the entire data snapshot.
MainActivity.java:
```
package com.example.mrt_app_client;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import com.google.android.gms.maps.CameraUpdateFactory;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.OnMapReadyCallback;
import com.google.android.gms.maps.SupportMapFragment;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;
import android.content.pm.PackageManager;
import android.graphics.Color;
import android.os.Build;
import android.os.Bundle;
import android.util.Log;
import android.widget.TextView;
import android.widget.Toast;
import com.google.android.gms.maps.model.PolygonOptions;
import com.google.android.gms.maps.model.PolylineOptions;
import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;
import java.util.ArrayList;
import java.util.List;
public class MainActivity extends AppCompatActivity implements OnMapReadyCallback{
FirebaseDatabase db;
DatabaseReference dbReference;
TextView speed_disp;
public List<LatLng> polygon;
public double getLatitude() {
return latitude;
}
public double getLongitude() {
return longitude;
}
private GoogleMap googleMap;
private double latitude=19.10815888216128;
private double longitude=72.83706388922495;
public void setCord(String lat,String lon){
this.latitude=Double.parseDouble(lat);
this.longitude=Double.parseDouble(lon);
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
polygon = new ArrayList<>(10);
setContentView(R.layout.activity_main);
db = FirebaseDatabase.getInstance();
speed_disp=findViewById(R.id.speed);
SupportMapFragment mapFragment = (SupportMapFragment) getSupportFragmentManager()
.findFragmentById(R.id.map);
mapFragment.getMapAsync(this);
// if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && checkSelfPermission(android.Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED) {
// requestPermissions(new String[]{android.Manifest.permission.ACCESS_FINE_LOCATION}, 1000);
// }
try{
dbReference = db.getReference("Data");
dbReference.addValueEventListener(new ValueEventListener() {
@Override
public void onDataChange(@NonNull DataSnapshot datasnapshot) {
Log.d("Lat",datasnapshot.getValue().toString());
for (DataSnapshot snapshot: datasnapshot.getChildren()) {
String latitude = snapshot.child("lat").getValue(String.class);
String longitude = snapshot.child("lon").getValue(String.class);
String speed = snapshot.child("speed").getValue(String.class);
speed_disp.setText(speed);
setCord(latitude,longitude);
googleMap.clear();
double lat = getLatitude();
double lon = getLongitude();
LatLng current=new LatLng(lat,lon);
// Add marker position to polygon list
polygon.add(current);
// Draw polyline
if (polygon.size() >= 2) {
PolylineOptions polylineOptions = new PolylineOptions()
.addAll(polygon)
.width(5)
.color(Color.RED);
googleMap.addPolyline(polylineOptions);
}
googleMap.addMarker(new MarkerOptions().position(current).title("Current Location"));
// Move camera to marker
googleMap.moveCamera(CameraUpdateFactory.newLatLngZoom(current,20 ));
}
}
@Override
public void onCancelled(@NonNull DatabaseError error) {
Toast.makeText(MainActivity.this,"Read error!!!",Toast.LENGTH_SHORT).show();
}
});
}catch (Error e) {
Toast.makeText(MainActivity.this, "Read module error", Toast.LENGTH_SHORT).show();
}
}
@Override
public void onMapReady(GoogleMap googleMap) {
this.googleMap=googleMap;
googleMap.setMapType(GoogleMap.MAP_TYPE_SATELLITE);
// LatLng current=new LatLng(19.10815888216128,72.83706388922495);
// this.googleMap.addMarker(new MarkerOptions().position(current).title("Dub maro pls!!"));
// this.googleMap.moveCamera(CameraUpdateFactory.newLatLng(current));
}
}
```
Firebase:
[The keys are uniquely generated](https://i.stack.imgur.com/SsDZO.png)
I want to read only the newest node from the real-time database. Also, I would appreciate any suggestions for improving my code. |
How to read new child from firebase in an android app? |
|java|android|firebase|google-maps|firebase-realtime-database| |
I have a small neural network in Tensorflow, with about 100 input variables and 2 hidden layers. Does Tensorflow have a built-in way of assessing input variable importance?
My first thought is to just drop each variable and recalculate the accuracy: the bigger the drop in accuracy, the more important the variable. However, there is randomness inherent in training, so every trial comes out a little different, which makes this comparison difficult. |
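There is no single built-in for this in TensorFlow, but the drop-a-variable idea is close to permutation importance, which avoids retraining: shuffle one input column at a time and measure how much the score drops, averaging over repeats to tame the randomness. A framework-agnostic sketch (the `score` callable stands in for a model-accuracy function; all names here are made up for the example):

```python
import numpy as np

def permutation_importance(score, X, n_repeats=5, seed=0):
    """Mean score drop per shuffled column; a larger drop means more important."""
    rng = np.random.default_rng(seed)
    base = score(X)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])        # destroy column j's information
            drops.append(base - score(Xp))
        importances[j] = np.mean(drops)  # average out shuffle randomness
    return importances

# Toy check: this "model" scores by column 0 only, so only column 0 matters.
X = np.arange(20.0).reshape(10, 2)
score = lambda M: -np.abs(M[:, 0] - X[:, 0]).mean()
print(permutation_importance(score, X))
```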
Does tensorflow have a way of calculating input importance for simple neural networks |
|tensorflow|variables|deep-learning|neural-network| |
```
io.on("connection", (socket) => {
console.log(`User Connected: ${socket.id}`);
//joining Room
socket.join(selectedOption);
socket.broadcast.emit("userConn", `${username} joined ${selectedOption}`);
socket.on('emitMessage', (data) => {
if(data.room == selectedOption){
socket.broadcast.to(selectedOption).emit('message', {username: data.username , message: data.message});
}
});
});
```
For some reason, after the first user has connected, when I log in as the second user, they seem to connect twice. Hence:
```
console.log(`User Connected: ${socket.id}`);
```
runs twice, making the server side console looking like:
> User Connected: pPNqnsAIJAUgDrpkAAAB
>
> User Connected: yZPm7IuY673r1hkUAAAD
>
> User Connected: yZPm7IuY673r1hkUAAAD
This is also resulting in the first user receiving the second user's messages twice because they seem to be connected twice.
I was making a simple socket.io chat server, but the second user's messages were being received twice by the first user. Upon further observation, I noticed that the second user connects to the server twice. |
In order for your code to work with any PO, you can set the id to the currentRecord's id:
```
var purchaseOrder = record.load({
    type: record.Type.PURCHASE_ORDER,
    id: scriptContext.currentRecord.id,
    isDynamic: true
});
```
However, it will only work for already existing PO's which you are editing. At the time of PO creation there is no id assigned to the current record yet, therefore no PO to load.
As pageInit is a client script function, it will only run when you create/edit a record in user interface.
Furthermore, if you are attempting to work with the current PO, you don't need to load it through record.load(), because the currentRecord already refers to the record you are on.
So all you would really need is:
```
var purchaseOrder = scriptContext.currentRecord;
```
|
I have a LET & FILTER formula that uses a drop down (green cell) and then populates a report underneath based on data in sheet("Master").
You can see it's bringing in all of the ("Master") sheet columns B:M in order, and I don't need columns C:K.
![enter image description here][1]
How can I change
`fRng,Master!B3:INDEX(Master!M:M,lr)` to only index and populate columns B, L, M into the sheet with the green drop down (the sheet with my LET formula)?
rest of formula:
```
=LET(
lr,COUNTA(Master!L:L),
fRng,Master!B3:INDEX(Master!M:M,lr),
criteriaRng1,Master!L3:INDEX(Master!L:L,lr),
criteriaRng2,Master!M3:INDEX(Master!M:M,lr),
FILTER(fRng,(criteriaRng1=Metrics!A1)*
(criteriaRng2="Negotiating")))
```
Data that I am indexing and whatnot in formula:
![enter image description here][2]
value error after trying to add an array to specify columns:
![enter image description here][3]
[1]: https://i.stack.imgur.com/Fsn2b.png
[2]: https://i.stack.imgur.com/0ctOr.png
[3]: https://i.stack.imgur.com/MosUJ.png |
When my breakpoints failed (C#/.NET/VS 2022 Community), my research on this issue (primarily here on SO) focused on issues with missing PDB files. Indeed, reviewing Debug::Windows::Modules indicated problems with numerous PDB files.
My research included reading through most if not all related posts here at SO, including the numerous related questions in this metapost.
I finally focused on one specific file, the PDB file for the specific software I was using, and contacted the software vendor's support team. After digging in deeper with them and trying many of the same proposed solutions, the tech suggested uninstalling and re-installing Visual Studio.
Maybe I missed this in the extensive offered answers here at SO, but I do not recall seeing this as a proposed answer.
Worked like a charm and my breakpoints are working again.
I still see various PDB issues listed in the Debug::Windows::Modules, but since my breakpoints are working I am, for now, ignoring the PDB issues. |
I am using Windows Server, and installed Anaconda v2023.09 as my environment. I wanted to install `mamba`, so I called
```bash
conda install mamba -c conda-forge
```
However, this process is taking almost forever. Is there any way I can accelerate the installation?
Before `mamba` was installed, I realized it is now almost impossible to `conda install` anything more on top, since the solving step takes forever. But installing `mamba` itself requires `conda` again... I am wondering whether a fast installation is achievable (for example, an installation that ignores all dependencies, and only after `mamba` is installed, fixes the dependencies using mamba, pip install, or some kind of manual install). |
|anaconda|conda|mamba| |
I am using SQLite3 in Visual Studio Code. My goal is to create a table within a database and then import data into that table. The code works fine except when there is a space in the column header. The header for the column is "Order ID". When I get rid of the space and import the data it works fine, but when I keep the space it puts the text "Order ID" into every spot. Below is the code I am using.
```
CREATE TABLE Returnedsss
(
    Returned Text ,
    'Order ID' Text ,
    PRIMARY KEY ('Order ID')
)
;
.tables
.mode csv
.separator ,
.import --csv "C://Business_Analytics//SQlite//data//Returns.csv" Returnedsss
SELECT 'Order ID'
FROM Returnedsss;
```
I tried the above and the output is
```
"Order ID"
"Order ID"
"Order ID"
"Order ID"
```
When I change it to get rid of the space, though, and run this code, it runs fine and gives me the Order IDs as output:
```
CREATE TABLE Returnedsss
(
    Returned Text ,
    OrderID Text ,
    PRIMARY KEY (OrderID)
)
;
.tables
.mode csv
.separator ,
.import --csv "C://Business_Analytics//SQlite//data//Returns.csv" Returnedsss
SELECT OrderID
FROM Returnedsss;
```
Is there a way to do it where it lets me use the Order ID instead of having to do OrderID? |
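For reference, SQLite treats single-quoted tokens as string literals and double-quoted tokens as identifiers, which is why `SELECT 'Order ID'` returned the literal text on every row. A sketch of the quoting fix (same table as above):

```sql
CREATE TABLE Returnedsss
(
    Returned   TEXT,
    "Order ID" TEXT,           -- double quotes: identifier, not a string
    PRIMARY KEY ("Order ID")
);

-- "Order ID" selects the column; 'Order ID' would select a literal string
SELECT "Order ID"
FROM Returnedsss;
```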
You simply need to add the CSS class "button" to each element; also, to get the cart URL it is better to use the `wc_get_cart_url()` WooCommerce function, like:
```php
function step_controls() {
echo '<div class="step_controls_container">
<a class="btn-primary button step_back_to_cart" href="'.wc_get_cart_url().'">Back to Cart</a>
<a class="btn-primary button step_next">Next</a>
<a class="btn-primary button step_prev">Previous</a>
</div>';
}
```
Depending on your theme settings, you will get something like:
[![enter image description here][1]][1]
Also, try to replace the `wp_enqueue_scripts` hook function with:
``` php
add_action( 'wp_enqueue_scripts', 'cart_steps_enqueue_script' );
function cart_steps_enqueue_script() {
wp_enqueue_script('cartsteps', get_stylesheet_directory_uri() . '/assets/js/cart-steps.js', array( 'jquery' ), '', true);
}
```
[1]: https://i.stack.imgur.com/t4u6s.png |
I'm plotting some functions of distance where the functions oscillate with a characteristic distance, and I'd like the units of the x-axis to be in terms of this characteristic distance. For example, if the characteristic distance is 200 metres, I want x=50 on the x-axis to read 0.25, x=100 to read 0.5, and so on.
Here's my code at the moment:
```
import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

theta = 0.6
d_mass = 1
energy = 1
L_osc = (4*(np.pi)*energy)/d_mass
osc_constants = d_mass/(4*energy)

def P(x):
    return ((np.sin(2*theta))**2)*((np.sin(osc_constants*x))**2)

def S(x):
    return 1 - P(x)

x = np.linspace(0, 20, 1000)
y1 = P(x)
y2 = S(x)
df = pd.DataFrame(zip(x, y1, y2), columns=['x', 'Oscillation Probability', 'Survival Probability']).set_index('x')

fig, ax = plt.subplots()
# Plot sns.lineplot() to the ax
sns.set_palette('Set2')
sns.set_style('ticks')
sns.lineplot(df, ax=ax)
ax.set_title('Plotting Functions in Matplotlib', size=14)
ax.set_xlim(0, 20)
ax.set_ylim(0, 1.5)
# Despine the graph
#sns.despine()
plt.show()
```
|
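One common approach (an assumption about the intent) is to keep the data in the original units and only relabel the ticks, dividing by the characteristic distance with a `FuncFormatter`:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import FuncFormatter

L_osc = 4 * np.pi  # characteristic distance, as computed above for energy = d_mass = 1

fig, ax = plt.subplots()
x = np.linspace(0, 20, 200)
ax.plot(x, np.sin(x) ** 2)

# Show each tick as a multiple of L_osc without changing the data itself
fmt = FuncFormatter(lambda val, pos: f"{val / L_osc:.2f}")
ax.xaxis.set_major_formatter(fmt)

print(fmt(L_osc, 0))  # the tick at x = L_osc is labelled "1.00"
```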
Changing the units of an axis on a seaborn plot |
|python|matplotlib|seaborn|graphing| |
To display products that do not have the **`"pa_sort-chassis"`** attribute assigned, please try this code. It lists the products without the "pa_sort-chassis" attribute in the admin portal, on the Products page:
```php
add_action( 'admin_notices', 'products_no_chassisattribute_admin' );
function products_no_chassisattribute_admin() {
    global $pagenow, $typenow;
    if ( 'edit.php' === $pagenow && 'product' === $typenow ) {
        echo '<div class="notice notice-warning is-dismissible"><h3>Products with NO Sort By Chassis Attribute</h3><ul>';
        $args = array(
            'status' => 'publish',
            'visibility' => 'visible',
            'limit' => -1
        );
        $products = wc_get_products( $args );
        foreach ( $products as $product ) {
            $product_attributes = $product->get_attributes();
            if ( ! array_key_exists( 'pa_sort-chassis', $product_attributes ) ) {
                echo '<li><a href="' . esc_url( get_edit_post_link( $product->get_id() ) ) . '">' . $product->get_name() . '</a></li>';
            }
        }
        echo '</ul></div>';
    }
}
```
Use the `get_attributes()` method to retrieve all the attributes of a product. Then, check if the key **`'pa_sort-chassis'`** exists in the `$product_attributes` array. If it doesn't exist, it means the product does not have the `"pa_sort-chassis"` attribute assigned, and it will be displayed in the list. |
Here's a solution to automatically start downloading a file:
[How to trigger a file download when clicking an HTML button or JavaScript][1]
Add this function and update the "downloadLinkClickHandler" function:
```js
// Function to handle download link click
function downloadLinkClickHandler(event) {
// Get the download link element
var linkElement = event.target;
// Check countdown finished
if (linkElement.classList.contains("countdown_finished")) {
return; // default event: manually start download
}
// Prevent default link action
event.preventDefault();
// Check countdown working
if (linkElement.classList.contains("countdown_started")) {
return; // Wait for countdown finish
}
// Start countdown
var countdown = 3;
linkElement.classList.add('countdown_started')
linkElement.textContent = 'Download (' + countdown + ')';
var interval = setInterval(function() {
countdown--;
linkElement.textContent = 'Download (' + countdown + ')';
if (countdown === 0) {
clearInterval(interval);
// Mark as countdown finished
linkElement.classList.add('countdown_finished')
// Update the link element with the original download link without parameters
linkElement.href = linkElement.href.split('?')[0];
linkElement.textContent = 'Download';
// Programmatically start downloading file
download(linkElement.href);
}
}, 1000);
}
function download(url) {
const a = document.createElement('a')
a.href = url
a.download = url.split('/').pop()
document.body.appendChild(a)
a.click()
document.body.removeChild(a)
}
```
[1]: https://stackoverflow.com/a/49917066/8244231 |
In an HTML document, suppose I have something like a link that starts after another non-whitespace character, e.g.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<p>This is some text that includes a link (<a href="https://example.com">example</a>)</p>
<!-- end snippet -->
If the viewport width is such that the line has to break in the middle of the first word of the link text, then the preceding character (the opening parenthesis, in this case) "sticks" to the link, i.e. the standard layout algorithm breaks the line at the closest preceding whitespace:
[![Rendering of the paragraph when line needs to break within the word "example"][1]][1]
However, if I want to insert an SVG icon at the start of the `a` element:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<p>This is some text that includes a link (<a href="https://example.com"><svg style="height: 1em; color: green;" viewbox="0 0 10 10" fill="currentColor"><circle cx="5" cy="5" r="5" /></svg>example</a>)</p>
<!-- end snippet -->
Then the line is now allowed to break either side of the `svg`
[![Line break in the wrong place with SVG added][2]][2]
## Question
Is there any way to make the layout engine treat the svg the same as it would normal text characters, so that the open parenthesis still "sticks" to the link text?
I can't make the whole paragraph or the whole link `white-space: nowrap` - I _want_ the text to be able to wrap normally both within and outside the link, I just need it _not_ to break around the `<svg>` (unless the link as a whole is preceded by whitespace). Basically I want to be able to insert an icon at the start of the link without interfering with the normal text layout behaviour, as if the icon were just another character with all the same Unicode properties as the first character of the existing link text.
Is this possible?
[1]: https://i.stack.imgur.com/hTZej.png
[2]: https://i.stack.imgur.com/K8MCt.png |
In this instance, in the context provided, `(s & '0')` does not create a `std_logic_vector`; it creates a `signed`. VHDL is a context-driven language. Because no arithmetic is available on `std_logic_vector`, and the types of `L` and `R` are `signed`, the arithmetic must match one of these signatures (from `ieee.numeric_std`):
```vhdl
function "-" (L: NATURAL; R: SIGNED) return SIGNED;
function "-" (L: SIGNED; R: NATURAL) return SIGNED;
```
The numeric_std package provides the arithmetic functions above between signed and integer types, hence the literal `1` is converted inside the `"-"` function to match the other operand, in this case, a 2-bit `signed`.
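As a sketch of how the types resolve (signal names here are made up):

```vhdl
-- Sketch only; signal names are made up.
signal s : std_logic;
signal x : signed(1 downto 0);

-- Because "-" is only defined for SIGNED (with a NATURAL) in this context,
-- (s & '0') is resolved as signed(1 downto 0), and the natural literal 1
-- is converted to the same 2-bit width inside the "-" function.
x <= (s & '0') - 1;
```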
I have a problem in the Firefox browser that I cannot solve. The problem is the following: I have a function that collects data when a page is refreshed
```
const eventName = isOnIOS ? "pagehide" : "beforeunload";
this.window.addEventListener(eventName, async () => {
await this.saveAction(this.typeOfActions.unknown);
await this.saveActionMetaData();
});
saveAction = async (
type,
filters = null,
searchTerm = "",
url = null
) => {
await this.http.post(`${this.collectorApi}${this.endpoints.action}`, {
target: url ? url : this.currentUrl,
duration: Math.round((this.timeSpentOnPage / 1000) * 100) / 100 || 0,
idle_time: isNaN(this.idleTime) ? 0 : Math.round((this.idleTime / 1000) * 100) / 100,
type: type,
filters: ! filters ? {} : filters,
session_id: this.#getSessionId(),
searched_term: searchTerm
});
}
saveActionMetaData = async () => {
await this.http.post(`${this.collectorApi}${this.endpoints.actionMetaData}`, {
filters: [],
data: {},
idle_time: Math.round((this.timeSpentOnPage / 1000) * 100) / 100 || 0,
duration: isNaN(this.idleTime) ? 0 : Math.round((this.idleTime / 1000) * 100) / 100,
action_url: this.currentUrl,
search_terms: [],
session_id: this.#getSessionId()
});
}
post = async (
endpoint,
data
) => {
try {
return await (await fetch(endpoint, {
method: "POST",
headers: this.headers,
body: JSON.stringify(data)
})).json();
} catch (e) {
console.error(e);
}
}
```
I can't figure out where the error is. I simply get that error only on Firefox. |
NS_BINDING_ABORTED issue only on Firefox
|javascript| |
I'm solving an exercise for class, and I have a problem with reading binary files. I have written a function to append different objects to a file and it works fine, but when I read the file back, it only prints one object. Can somebody help me?
Main
```
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int opcion = 0;
System.out.println("Nombre del fichero:");
String nombreFichero = sc.nextLine();
File file = new File(nombreFichero);
try {
file.createNewFile();
} catch (IOException e) {
e.printStackTrace();
}
FicheroEmpleados fich = new FicheroEmpleados (nombreFichero);
do {
System.out.println("Menú");
System.out.println("1. Agregar empleado");
System.out.println("2. Mostrar empleados");
System.out.println("3. Salir");
opcion = sc.nextInt();
switch(opcion) {
case 1:
try {
System.out.println("Nombre del empleado: ");
String nombre = sc.next();
System.out.println("Edad: ");
int edad = sc.nextInt();
System.out.println("Salario: ");
double salario = sc.nextDouble();
Empleado e = new Empleado(nombre, edad, salario);
System.out.println("Datos correctos");
fich.agregarEmpleado(e);
} catch (Exception ex) {
System.out.println("Datos incorrectos");
}
break;
case 2:
fich.mostrarEmpleado();
break;
case 3:
break;
default:
System.out.println("Opcion incorrecta");
break;
}
} while (opcion!=3);
}
```
Class that handles the file
```
public class FicheroEmpleados {
String nombreFichero;
public FicheroEmpleados (String nombreFichero) {
this.nombreFichero = nombreFichero;
}
public void agregarEmpleado(Empleado e) {
try {
FileOutputStream fichero = new FileOutputStream(nombreFichero, true);
ObjectOutputStream salida = new ObjectOutputStream(fichero);
salida.writeObject(e);
System.out.println("Empleado guardado");
salida.close();
} catch (FileNotFoundException ex) {
System.out.println("No se ha encontrado el fichero");
} catch (IOException ex) {
System.out.println("No se ha podido escribir el fichero");
} /*finally {
try {
salida.close();
} catch (IOException ex) {
System.out.println("No se ha podido cerrar el fichero");
}
}
*/}
public void mostrarEmpleado() {
try {
FileInputStream fichero = new FileInputStream(nombreFichero);
ObjectInputStream entrada = new ObjectInputStream (fichero);
while(true) {
Empleado e = (Empleado) entrada.readObject();
System.out.println("Nombre: " + e.getNombre());
System.out.println("Edad: " + e.getEdad());
System.out.println("Salario: " + e.getSalario());
entrada.close();
}
//* fichero.close();
}catch(EOFException ex) {
return;
}catch (FileNotFoundException ex) {
System.out.println("No se encuentra el fichero");
}catch (IOException ex) {
System.out.println("No se puede leer el fichero");
}catch (ClassNotFoundException ex) {
System.out.println("No se ha podido encontrar la clase");
}
}
}
```
Serialized class
```
public class Empleado implements Serializable {
private String nombre;
private int edad;
private double salario;
//constructor de Empleado
public Empleado() {}
public Empleado (String nombre, int edad, double salario) {
this.nombre = nombre;
this.edad = edad;
this.salario = salario;
}
//getters y setters de Empleado
public String getNombre() {
return this.nombre;
}
public void setNombre(String nombre) {
this.nombre = nombre;
}
public int getEdad() {
return this.edad;
}
public void setEdad(int edad) {
this.edad = edad;
}
public double getSalario() {
return this.salario;
}
public void setSalario(double salario) {
this.salario = salario;
}
public String toString() {
return "Empleado: "+ nombre + " edad: "+ edad+ " salario: "+ salario;
}
}
```
Nombre del fichero:
prueba3
Menú
1. Agregar empleado
2. Mostrar empleados
3. Salir
1
Nombre del empleado:
toni
Edad:
33
Salario:
333333
Datos correctos
Empleado guardado
Menú
1. Agregar empleado
2. Mostrar empleados
3. Salir
2
Nombre: vicent
Edad: 22
Salario: 2222.0
No se puede leer el fichero
Menú
1. Agregar empleado
2. Mostrar empleados
3. Salir |
Help with reading files
|java|file|objectinputstream| |
null |
I am trying to run the momentum program for Trading Evolved (Chapter 12 – Momentum/Momentum Model.ipynb), contained below.
I am having a terrible time with the dates.
Running the code as it was originally provided gives me the error

>AttributeError: 'UTC' object has no attribute 'key'
After a large amount of google searching I was able to get around this error for a different program (First Zipleine Backtest.ipynb) by changing
```
start_date = datetime(1996, 1, 1, tzinfo=pytz.UTC)
end_date = datetime(2017, 12, 31, tzinfo=pytz.UTC)
```
to
```
start_date=pd.to_datetime('1996-01-01')
end_date =pd.to_datetime('2017-12-31')
```
However, the Momentum Model program sets its dates with
```
start = datetime(1997, 1, 1, 8, 15, 12, 0, pytz.UTC)
end = datetime(2017, 12, 31, 8, 15, 12, 0, pytz.UTC)
```
which initially produces the error
>NameError: name 'datetime' is not defined
So I change the format of the start and end to
```
start=pd.to_datetime('1996-01-01')
end =pd.to_datetime('2017-12-31')
```
I get the error
>TypeError: Invalid comparison between dtype=datetime64[ns] and Timestamp
After searching the program for references to ‘start’, I came across
```
date_rule=date_rules.month_start(),
```
Now, I know that dates in python can take a couple of different forms and after I made the change, the form of start is
type(start)
Out[21]: pandas._libs.tslibs.timestamps.Timestamp
But I don’t know the format of the date in
date_rule=date_rules.month_start(),
So I don’t know how to make the comparison legitimate, if this is indeed the source of the error.
I am trying to develop these programs into teaching material at the university where I am employed, but I can’t do that if I can’t get past this sticking point with the date formats. I would really, really appreciate some help.
I tried to run the momentum program as provided on the Trading Evolved website (https://www.dropbox.com/s/tj85sufbsi820ya/Trading%20Evolved.zip?dl=0) under Chapter 12 Momentum / Momentum Model.ipynb. I was expecting the program to run, but it produced an error on the dates. I tried to change the form of the dates, but just got a different error.
I tried to paste the program in here, but Stack Overflow won't accept the indenting and says that the submission has too many characters.
How can I prevent a line break at an SVG icon? |
|html|css|svg| |
|asp.net|microservices| |
I noticed that instead of using NumPy to transform my data step by step, I could develop an end equation with SymPy, simplify it, and then lambdify it, so that I end up with the end point of a NumPy program.
I'd like to know if such an approach is not a waste of time. |
Should I be using Sympy to first develop an equation or go straight through Numpy? |
According to the official [EF Core docs][1], the method `AddRangeAsync(IEnumerable<TEntity>, CancellationToken)` is intended for special value generators that require a database round trip. For instance, if your application employs `SqlServerValueGenerationStrategy.SequenceHiLo` to allocate blocks of IDs in advance, then when EF starts tracking a new entity it might first need to query the database for a new "Hi" value (you can learn more about the Hi/Lo algorithm here: [What's the Hi/Lo algorithm][2]). Therefore, `AddRangeAsync` is applicable in scenarios where EF Core needs to communicate with the database to obtain ID blocks in advance. On the other hand, when the objective is simply to mark entities as added, without `SqlServerValueGenerationStrategy.SequenceHiLo` or a similar mechanism, the synchronous `AddRange` method is more appropriate.
[1]: https://learn.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.dbset-1.addrangeasync?view=efcore-2.1
[2]: https://stackoverflow.com/questions/282099/whats-the-hi-lo-algorithm |
|excel|vba|ms-word| |
I've been checking the implementation of the `SimpleCommandBus` and `DefaultCommandGateway`, and the only scenario in which the `DefaultCommandGateway` would **not** rethrow the exception is when the `CommandResultMessage` is not exceptional.
Note that the `CommandResultMessage` is the object Axon Framework creates to carry the result of command handling. Hence, if you throw an exception, it will be captured in the `CommandResultMessage`. Upon doing so, its state is set to be exceptional.
Hence, at this moment, I can only conclude that the command handler may be doing something off concerning the exception throwing. Or, the aggregate is capturing the exception somewhere else. But, to be able to deduce that, I would need to ask you to either:
1. Update your question with the Command Handler, and **Any** interceptors, or,
2. To provide a sample project so I am able to reproduce the predicament locally. This would allow me to debug the scenario in question.
> Even though I am throwing the exception in the command handler, changing from a tracking to a subscribing event processor fixes the issue (?)
Switching a `TrackingEventProcessor` for a `SubscribingEventProcessor` means the same thread is used for handling your events. So, the only way this could resolve the issue is if the event handler throws an exception that causes the `CommandResultMessage` to become exceptional.
You don't mention where the SQL Server is located. However, assuming the SQL instance is not on your computer but on the network, the application should work just fine from all computers on that same network.
Just make sure that WHEN you link the table(s), you use a FILE DSN, not a user/system DSN. By using a FILE DSN with the standard SQL driver you noted, no setup is required on each workstation.
So, with the above in place, do the linked tables open and work just fine, or is it just the custom ADO code you have that fails?
I would suggest you ensure the linked tables work first before you start working on the VBA + ADO code you have.
|
This is an elegant, simple, fast solution, exact even for arbitrarily large integers, that works for Python version >= 3.8:
**Version 1:**
```python
from math import isqrt
def is_square(number):
if number >= 0:
return isqrt(number) ** 2 == number
return False
```
**Version 2:**
```python
from math import isqrt

def is_square2(number):
    return isqrt(number) ** 2 == number if number >= 0 else False
```
This version follows the same logic but is at least 68% faster than the previous version.
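A quick sanity check of the behaviour; the large values illustrate that `isqrt` stays exact where a float-based `sqrt` would lose precision:

```python
from math import isqrt

def is_square(number):
    # Negative numbers are never perfect squares
    if number >= 0:
        return isqrt(number) ** 2 == number
    return False

print(is_square(16))           # True
print(is_square(15))           # False
print(is_square(-4))           # False
print(is_square(10**30))       # True: 10**30 == (10**15) ** 2, exact for huge ints
print(is_square(10**30 + 1))   # False
```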
|
As of right now I am using NLTK to create a chatbot in Python, but I am considering rewriting it for spaCy. I am looking for a way to feed my bot several pre-built sentences per pair, so it can choose one at random when responding to a user.
For example, say the bot says something and the user asks "would you like to discuss that?". I want it to choose between "yes please!" and "no thank you!" at random.
If you need access to my bot code, it can be found on my only other question on here.
I have done extensive research on this subject via Google and haven't managed to find a single clear answer.
How do I randomize responses?
|python-3.x|windows|nltk| |
null |
Firstly the title of the post may not do justice to the question, so my humble apologies for this.
Here is the question:
| Date       | Type | Value |
|------------|------|-------|
| 2024-03-11 | 3    | 3     |
| 2024-03-11 | 4    | 5     |
| 2024-03-12 | 3    | 3     |
| 2024-03-12 | 4    | 5     |
| 2024-03-12 | 5    | 5     |
| 2024-03-13 | 3    | 3     |
| 2024-03-13 | 4    | 5     |
| 2024-03-13 | 5    | 2     |
| 2024-03-14 | 5    | 5     |
Type = [3,4,5]
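For reference, here is a reproducible version of the table above (dates normalized to YYYY-MM-DD):

```python
import pandas as pd

# Reproducible version of the table in the question
df = pd.DataFrame({
    "Date":  ["2024-03-11", "2024-03-11", "2024-03-12", "2024-03-12", "2024-03-12",
              "2024-03-13", "2024-03-13", "2024-03-13", "2024-03-14"],
    "Type":  [3, 4, 3, 4, 5, 3, 4, 5, 5],
    "Value": [3, 5, 3, 5, 5, 3, 5, 2, 5],
})
Type = [3, 4, 5]
```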
Is there a simple way for me, in Pandas, to create a new DF from the above one that keeps a date only if that date has values for all the elements in the list?
Meaning the resultant DF should only contain data for dates 12 and 13, since those are the only dates in the original DF that have values for every element in the Type array.
Thanks
Pandas select rows with values are present for all elements in array |
|python|pandas| |
The correct solution is to use `<input type="file" ...` as answered in several other stack overflow posts.
For example, if you just want the built-in camera app to be launched, create this element:
```
<input type="file" name="image" accept="image/*" capture="environment">
```
You can use the 'file' input as-is, or you can hide it and trigger it from another button via JavaScript (e.g. jQuery: `$('#inpTakePhoto').click()`). Note that the event must be triggered by the user and cannot be fired automatically, as it will likely get blocked for security reasons.
The file input control can be configured to use just the camera, or even to allow the user to select from other sources by using the `multiple` flag. It's very powerful and easy to use.
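For instance, a minimal markup-only sketch of the hide-and-trigger pattern (the ID and class here are made up); a `<label>` bound to the input acts as the visible button, so no script is needed:

```html
<!-- Clicking the label opens the native picker/camera for the hidden input -->
<label for="inpTakePhoto" class="btn">Take photo</label>
<input type="file" id="inpTakePhoto" accept="image/*" capture="environment" hidden>
```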
For the record, I went down the rabbit hole of `getUserMedia`, and this is definitely not the correct answer for most people. `getUserMedia` is really for developers who want to build an end-to-end camera app: it requires the developer to implement all the main camera features like pinch zoom and camera switching, and I'm not even sure the flash settings can be accessed.
Remove @DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
|
You can just use [`Confluent.Kafka.Timestamp.UtcDateTime`][1]:
```
DateTime dt = consumeResult.Message.Timestamp.UtcDateTime;
```
There is no reason to convert it to a string, even if the type had `ToString` overloaded (currently it does not, so you get the default one, which returns the type name).
[1]: https://github.com/confluentinc/confluent-kafka-dotnet/blob/a67bd6c06b7eef4293e6476d9ff6f3e93f0e4cd9/src/Confluent.Kafka/Timestamp.cs#L120 |
I have an array of `value,location` pairs
```bash
arr=(test,meta my,amazon test,amazon this,meta test,google my,google hello,microsoft)
```
I want to print the duplicate values, the number/count of them, along with the location.
For example:
```txt
3 test: meta, amazon, google
2 my: amazon, google
1 this: meta
1 hello: microsoft
```
Here `test` appears 3 times, in `meta`, `amazon`, and `google`
So far, this code will print each value together with its first location only
```
printf '%s\n' "${arr[@]}" | awk -F"," '!_[$1]++'
```
```txt
test,meta
my,amazon
this,meta
hello,microsoft
```
This will print the count, but it treats the whole `value,location` pair as a single value
```
printf '%s\n' "${arr[@]}" | sort | uniq -c | sort -r
```
```txt
1 my,amazon
1 my,google
1 this,meta
1 test,meta
1 test,google
1 test,amazon
1 hello,microsoft
```
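Splitting off just the value first does get the counts right, but then the locations are lost entirely:

```bash
arr=(test,meta my,amazon test,amazon this,meta test,google my,google hello,microsoft)
printf '%s\n' "${arr[@]}" | cut -d, -f1 | sort | uniq -c | sort -rn
# 3 test
# 2 my
# 1 this
# 1 hello
# (order of the count-1 lines may vary)
```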
|
Find duplicates in array, print count with pair |
|mongodb|mongodb-query| |
In Smarty, you can use the following syntax: `{$index|string_format:"%02d"}`
Sample with a section loop:
```tpl
{section name="myloop" loop="10" step="1"}
{$index = $smarty.section.myloop.index+1}
{$index|string_format:"%02d"}
{/section}
```
Sample with a for loop:
```tpl
{for $index=1 to 10}
{$index|string_format:"%02d"}
{/for}
```
Sample with a foreach loop:
```tpl
{$array = ['a','b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']}
{foreach $array as $index => $el}
{$index = $index+1}
{$index|string_format:"%02d"}
{/foreach}
```
All of these samples will output:
```
01 02 03 04 05 06 07 08 09 10
```
|
|javascript|html|css|frontend|backend| |
In GCP we need to allow VMs on VPC A Subnet 1 to communicate with VMs on other VPCs (in other projects) whose subnets overlap (the exchange originates from A).
eg.
Project A | VPC A | Subnet 1 | 10.10.142.0/24
Project B | VPC B | Subnet 1 | 10.10.12.0/24
Project C | VPC C | Subnet 1 | 10.10.12.0/24
We have ruled out Shared VPC due to a lack of project administration autonomy, and have also ruled out VPC Peering because it cannot cope with overlapping subnets.
And have tried:
NCC with spokes and a Private NAT: with just Project A and Project B we were able to get a VM on each to SSH (with appropriate firewall rules). But when adding C, we need to filter the CIDR 10.10.12.0/24 for both the B and C spokes, and after re-adding the spokes there are of course no routes from A to B anymore, and none from A to C (i.e. the subnets have to be filtered to allow the overlap).
I think I need to add routes, but the gcloud command seems to need --net-external-ip-pool, which I do not have/want, as this all needs to be internal and not public.
How best to proceed to be able to route A to B and A to C? VPN maybe? Or VPC Peering with Private NAT (although I suspect that has the same problem as the NCC solution described above)?
[Overview][1]
[1]: https://i.stack.imgur.com/vlo0D.png |
Zipline Date Trouble |
I am creating an endpoint in charge of verifying the "pages" table: it checks every record with axios to verify that the link field works correctly, or at least returns a status of 200.
The problem with my code is that the application freezes on the first record and does not advance. Also, if a record contains a link that does not work, the loop ends with the axios error and does not continue with the other links. If a page does not work, I delete it from the database.
Code
```js
for (var i = 0; i < pages.length; i++) {
  var page = pages[i];
  var name = page.name;
  var link = page.link;
  console.log("TEST WITH " + name);
  try {
    const response = await axios.get(link, { timeout: 2000 });
    console.log(response.status);
    console.log("OK");
  } catch (error) {
    console.log("FAIL");
    // I need link for delete in database
    /*
    const result = await conn.query("DELETE FROM pages WHERE link = ?", [
      link,
    ]);
    */
  }
}
```
I am using the latest version of NextJS
Something I forgot to mention is that the list of pages contains links to music streams
How could I achieve my goal? |