How to replace a specific html tag in bulk?
I imported a Blogger XML to WordPress, but Blogger converts the tag <!--more--> into <a name='more'></a>, which is something that WordPress doesn't recognize as it should (I lost the Read More option).
How can I edit, maybe in plain text, and replace the old tag with the new one? Is there a "file" where I can see all posts and make this replacement?
The first part of your description seems backwards. What do you mean by "lost the Read More option"? As for addressing conversion problems like this one, you could try a search and replace in your posts table (in the MySQL database). There are plugins to help with that if you don't know how to go about it. You can also achieve the same thing programmatically by filtering the post content. If your objective is to convert the Blogger tag into a WordPress-recognizable tag, you could try that. An alternative would be to write a script treating <a name='more'></a> the way WP treats <!--more-->.
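For the search-and-replace route, here is a sketch of the query. The wp_posts table name assumes a default WordPress table prefix; adjust it to match your install, and back up the database before running anything like this:

```sql
-- Replace the Blogger anchor with the tag WordPress expects.
-- Note the doubled single quotes to escape the quotes inside the string.
UPDATE wp_posts
SET post_content = REPLACE(post_content, '<a name=''more''></a>', '<!--more-->');
```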
Thank you! I edited the database and replaced the tag with notepad. It worked perfectly for all my posts!
Glad you worked it out!
Error: JavaFX runtime components are missing, and are required to run this application (with Maven)
I get the error Error: JavaFX runtime components are missing, and are required to run this application in IDEA when I try to run my JavaFX application.
I added the modules via Maven. Here is my pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.example</groupId>
<artifactId>clockAlarm</artifactId>
<version>1.0-SNAPSHOT</version>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>org.openjfx</groupId>
<artifactId>javafx-controls</artifactId>
<version>13</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.openjfx</groupId>
<artifactId>javafx-maven-plugin</artifactId>
<version>0.0.5</version>
<configuration>
<mainClass>Program</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
How can this be fixed if I want to use the JavaFX modules only via Maven, without downloading the SDK and setting it up myself?
Here is the Program class:
import javafx.application.Application;
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.Group;
import javafx.scene.text.Text;
public class Program extends Application {
public static void main(String[] args) {
Application.launch(args);
}
public void start(Stage stage) {
Text text = new Text("Alarm Clock");
text.setLayoutY(80);
text.setLayoutX(100);
Group group = new Group(text);
Scene scene = new Scene(group);
stage.setScene(scene);
stage.setTitle("alarmClock");
stage.setWidth(300);
stage.setHeight(250);
stage.show();
}
}
All project settings are set to Java version 13.
Try adding https://mvnrepository.com/artifact/org.openjfx/javafx as a maven dependency
As I understand it, this can only be fixed the way described at https://www.jetbrains.com/help/idea/javafx.html#run. None of the Maven-based approaches worked for me.
You are missing all the JavaFX dependencies in your pom. See this pom for reference.
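Given the javafx-maven-plugin already declared in the pom above (with <mainClass>Program</mainClass>), one commonly suggested way to avoid this error without installing the SDK manually is to launch the application through the plugin, which resolves the JavaFX artifacts and puts them on the module path for you. A sketch of the command, assuming a standard Maven setup:

```shell
# Run the application via the javafx-maven-plugin declared in pom.xml
mvn clean javafx:run
```

Running the Application subclass directly from the IDE bypasses this setup, which is a frequent cause of the "JavaFX runtime components are missing" message.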
How to set the mask based on different length values in a C program
In a function I pass a bit length, and the mask must be set based on that length. For example, here is part of my program:
if(length == 8)
mask = 0xff;
if(length == 7)
mask = 0x7f;
if(length == 12)
mask = 0xfff;
if(length == 16)
mask = 0xffff;
How can I use some loop statements to set the mask as the length varies from 1 to 16?
It would be great if someone helps, thanks in advance.
Shift a '1' into the mask register for each bit in the length register.
Start with a value of zero. Then for every bit in the mask, shift left by 1 then OR a 1 bit at the end.
uint16_t mask = 0;
for (int i = 0; i < length; i++) {
mask <<= 1;
mask |= 1;
}
Come on, a loop for this task?? uint32_t mask = (1 << length) - 1; or uint16_t mask = length == 16 ? 0xffff : (1 << length) - 1;
How to set the mask based on the different values of lengths using C program?
Shift 1u by n, then subtract 1. No loop needed. Best to use unsigned types and guard against a large length with a mask to insure no undefined behavior (UB).
#define UINT_WIDTH 32
unsigned length = foo();
unsigned mask = (1u << (length & (UINT_WIDTH - 1))) - 1u;
How can I use some loop statements to set the mask as the length varies from 1 to 16?
This works well for [1 ... UINT_WIDTH].
If using fixed width types like uint16_t, then set the ..._WIDTH mask to 16.
For portable code, UINT_WIDTH needs to be consistent with unsigned.
#include <limits.h>
#if UINT_MAX == 0xFFFF
#define UINT_WIDTH 16
#elif UINT_MAX == 0xFFFFFFFF
#define UINT_WIDTH 32
#elif UINT_MAX == 0xFFFFFFFFFFFFFFFF
#define UINT_WIDTH 64
#else
// Very rare
#error TBD code
#endif
You can do it like the following:
uint16_t mask = 0;
for (size_t i = 0; i < length; i++)
mask |= (1 << i);
For each iteration of the loop, you OR (|) your mask with a bit shifted i positions to the left.
Nuking all the ice
I've been trying to envision a scenario where global warming is accelerated to the fastest possible speed.
One scenario I thought of was detonating large-scale nuclear warheads at the North and South Poles, as well as Greenland.
Russia claims to have missiles, the RS-28 Sarmat (a.k.a. the Satan 2), capable of destroying Texas, France or the U.K. in one hit:
Russian media report that the missile will weigh up to 10 tons with the capacity to carry up to 10 tons of nuclear cargo. With that type of payload, it could deliver a blast some 2,000 times more powerful than the bombs dropped on Hiroshima and Nagasaki.
How much faster (in general) would global warming proceed if:
One missile hit dead center on the ice of the North & South Poles, as well as Greenland
Or, as an alternative scenario, the missiles are placed down inside the ice caps, either halfway down or at the base of the ice, where it meets land (where applicable)
Note: the goal is to melt all the ice on Earth as quickly as possible, so some altering of the above parameters is acceptable.
The answer to how much the water would rise is the topic of a few other questions here, so that part is a duplicate. The biggest unknown here is how much of the blast would melt the ice instantly (or outright vaporize it) versus sending chunks of ice miles into the sky. Remember, only land ice will raise sea levels; sea ice already displaces the amount of water it contains, so melting it will not affect sea levels. The total rise would be about 70 m / 230 ft.
How about using the nukes to loosen the ice, so it slides down off the Antarctic or Greenland land masses? As soon as it starts to float, the water levels will go up...
For reference, my post on Skeptics on melting ice: http://skeptics.stackexchange.com/a/9194/619
Put huge mirrors over the poles, reflecting lots of sunlight directly onto the icecaps. Darken the icecaps, e.g. with coal dust, to increase the effect. One of the advantages is not irradiating the place so much.
As I stated in this answer... http://worldbuilding.stackexchange.com/a/64021/12297 ...the problem is not the amount of heat we have on the planet, but the balance of incoming and outgoing heat. We humans are really puny and insignificant when it comes to adding and removing heat from the biosphere. We can make a tiny spot very warm or cold, but that is it. The problem we are making for ourselves in the form of global warming is not that we add or remove heat, but that we have altered the parameters of the natural processes by unwittingly enhancing the greenhouse effect.
And even if we were to use the most powerful devices created by mankind — a barrage of nuclear weapons — this is still as near to nothing as makes no odds, because the Sun hits us with the energy equivalent of the Tsar Bomba... https://en.wikipedia.org/wiki/Tsar_Bomba ...every second of every minute of every day of every year. Humans are really, really puny...
The best solution is to not nuke the poles. The poles are really, really big. The arctic ice sheet is around 20,000 km$^3$ of ice, or about 2 × 10$^{13}$ m$^3$! That's roughly 1.8334 × 10$^{16}$ kilograms of ice. Melting ice takes 333.55 kJ/kg, so we'll need about 6,100,000,000 TJ of energy to melt it all.
The RS-28 you reference is believed to be able to deliver a 50 Mt warhead, which releases about 209,000 TJ. Thus, if you lobbed 10,000 of these, their combined output (2,090,000,000 TJ) would melt only about a third of the ice at the pole.
On the other hand, nuclear war's most terrifying outcome is nuclear winter. Less than a hundred nuclear events is considered sufficient to send the planet into a catastrophic nuclear winter. Your nukes are more likely to have the opposite effect of what you intended!
Frankly, it's good to remember our old friend Orders of Magnitude (Energy).
2,090,000,000 TJ -- The barrage of nukes I just modeled
6,100,000,000 TJ -- The energy needed to melt the arctic ice
15,000,000,000 TJ -- The amount of energy from the sun that strikes the planet every day
Planetary scale energies are... special. It's astonishing how small we are.
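The arithmetic above can be checked in a few lines. All inputs here are assumed round figures, not measurements: ~20,000 km³ of arctic ice, ice density ~917 kg/m³, latent heat of fusion 333.55 kJ/kg, solar constant ~1361 W/m², Earth radius ~6371 km.

```python
import math

# Back-of-envelope check of the melt-energy and daily-solar figures.
ice_volume_m3 = 20_000 * 1e9            # 20,000 km^3 expressed in m^3
mass_kg = ice_volume_m3 * 917           # roughly 1.83e16 kg of ice
melt_energy_j = mass_kg * 333.55e3      # ~6.1e21 J, i.e. ~6.1e9 TJ

earth_radius_m = 6.371e6
cross_section_m2 = math.pi * earth_radius_m**2   # disc intercepting sunlight
solar_daily_j = 1361 * cross_section_m2 * 86_400  # ~1.5e22 J per day

print(f"energy to melt the ice: {melt_energy_j / 1e12:.2e} TJ")
print(f"daily solar input:      {solar_daily_j / 1e12:.2e} TJ")
```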
Though at some not-too-distant point, nuclear winter could become salvation instead of catastrophe.
@jamesqf From what I have heard, there have been explorations into setting off a controlled set of nuclear weapons to combat global warming. Climate change actually has a nuclear option!
@CortAmmon What about the fact that a nuke set off on bare ice won't release as much ash and dust, since all that will be burning is air and water? With few particles released, there won't be such a big nuclear-winter effect, because it is the particles in the atmosphere shielding us from the sun that produce nuclear winter. That doesn't mean it would be a good thing to do, though, because of the radioactive particles from the bomb itself.
@AntoineHejlík I didn't have numbers for that, which is why there's a hundredfold safety margin: my ice-melting calculations are based on 10,000 nukes, while the climate-change ones used by the professionals looked at 100 nukes.
@CortAmmon When in doubt, nuke it!
Am I the only one who thinks it is an awful idea to nuke anything to offset the Earth's natural heating and cooling cycles?
TL;DR: Not at all.
Launching and exploding nuclear warheads in the arctic wouldn't really dislodge much ice or land, and would actually cool the Earth. This phenomenon is known as nuclear winter, and although the name is a misnomer with respect to the firestorms that follow a nuclear event, we can approximate the effects with volcanic eruptions, which likewise release dust, soot, and other particles into the air. Luckily, that comparison has already been made for us.
As you can see, in 1945 (Hiroshima and Nagasaki) the average temperature went down compared with the earlier war years, in which industry in numerous countries rose. Essentially, by removing firestorms you're making an artificial, radioactive volcano in an area devoid not only of life but of flammable materials. See here for the effects of volcanoes on climate.
Honestly, the best way to speed up global warming would be to increase the population.
Just a few meters
If only the floating ice melted, ocean levels would not rise: since the ice floats, when it melts it occupies exactly the volume it displaced while floating.
But if all the ice on land melted, that would really be a problem. If nukes hit tomorrow, the Maldives would go under the sea within a few weeks, and most low-lying islands would not survive.
The temperature, however, will not go up that fast. Nukes can't destroy all the ice; ice takes a great deal of energy to melt because of its high latent heat of fusion, which makes it hard to melt even with direct heat.
But ice will shatter and float in the sea, creating a thin layer of ice. This layer will not go away easily, and it will cover the dark, heat-absorbing sea.
Conclusion
Nuking the ice will not make the sea level go up that much, and global warming will not accelerate as fast as the person who nuked the ice thought.
Does this apply even if the nukes are buried at the bottom of the ice masses before detonating?
Yes , :-) @ThomBlairIII
As has already been shown, there's no point nuking the ice. Assuming you still want the ice to melt, your best bet is to darken the ice. If you coat it with a layer of dust, ash or other general pollution it will absorb more heat and give you a greater summer melt.
http://news.nationalgeographic.com/news/2014/06/140610-connecting-dots-dust-soot-snow-ice-climate-change-dimick/
This trend toward darker snow from soot and dirt has been observed for years. Sources vary from dust blowing off deserts and snow-free Arctic land, to soot from power plants, forest fires, and wood-burning stoves. But now soot and dust are taking a greater toll, according to a report released this week, causing Greenland's ice sheets to darken—and melt—at a faster rate in spring than before 2009.
This is an awesome idea! Thank you!
Rearranging the jQuery DataTable format
I am using DataTables in my MVC project. Is there any way I can rearrange some items without messing with the DataTables js file? Below is what I need to rearrange.
I don't have any idea whether I can do that or not. Please guide me.
You can. Have a look here => https://datatables.net/reference/option/dom
This fiddle might be helpful for moving the controls of the jQuery DataTable; for information on customizing it as you want, please refer to this link.
In the fiddle I have changed the position of the controls and removed the "Table information summary".
This is the trick that works behind it; to understand how it works, please refer to this link:
"dom": '<"top""row"<"col-sm-6"f>>rt<"bottom""row"<"col-sm-6"l>p><"clear">'
How does Windows "netstat -b" determine the name of the process that owns each socket?
I'm looking for the underlying API calls that netstat -b is using to determine the owning processes for each socket. Any ideas?
You need to look at the IPHelper APIs, in this case specifically GetExtendedTcpTable and GetOwnerModuleFromTcpEntry
nodejs - include iv with encrypted file data
I'm refactoring this code to encrypt files to fit it inside a class.
async encryptData(file, password){
this.salt = crypto.randomBytes(32);
this.key = crypto.scryptSync(password, this.salt, 32);
//this.buffer = await fs.readFile(file.tempFilePath);
this.base64 = dataURI.getBase64DataURI(file.data, file.mimetype);
this.iv = crypto.randomBytes(16);
this.cipher = crypto.createCipheriv('aes-256-gcm', this.key, this.iv);
this.encryptedData = Buffer.concat([this.cipher.update(this.base64, 'utf8'), this.cipher.final()]);
this.output = `${this.iv.toString('hex')}:${this.encryptedData.toString('hex')}`;
//fs.writeFile(...)
}
The objective is to encrypt a file after converting it to base64, using a library that maintains the mime type so it can later be decrypted and saved back in its original format. In the last line of code, the output variable builds a string containing the IV and the encrypted data. Is there a better way to include the IV with the data so I can avoid the toString() calls? If so, how can I extract the IV and the data when I need to decrypt the file?
UPDATE:
After some tests, and following the suggestion in the answer to remove the base64 file encoding, I've refactored the code this way. It seems to work fine, but any suggestions to improve it will be appreciated.
async runServer(){
this.app.post('/encrypt', async (req, res) => {
let password = req.body.password;
for(let file in req.files){
await this.encryptData(req.files[file], password);
}
//res.send({});
});
this.app.post('/decrypt', async (req, res) => {
let password = req.body.password;
for(let file in req.files){
await this.decryptData(req.files[file], password);
}
//res.send({});
});
}
async encryptData(file, password){
this.salt = crypto.randomBytes(32);
this.key = crypto.scryptSync(password, this.salt, 32);
this.iv = crypto.randomBytes(16);
this.cipher = crypto.createCipheriv('aes-256-gcm', this.key, this.iv);
this.encryptedData = Buffer.concat([this.salt, this.iv, this.cipher.update(file.data), this.cipher.final()]);
this.output = path.format({dir: this.tmpDir, base: file.name});
await fs.writeFile(this.output, this.encryptedData);
}
async decryptData(file, password){
this.salt = file.data.slice(0, 32);
this.key = crypto.scryptSync(password, Buffer.from(this.salt, 'binary'), 32);
this.iv = file.data.slice(32, 48);
this.encryptedData = file.data.slice(48);
this.decipher = crypto.createDecipheriv('aes-256-gcm', this.key, this.iv);
this.decryptedData = this.decipher.update(this.encryptedData);
this.output = path.format({dir: this.tmpDir, base: file.name});
await fs.writeFile(this.output, this.decryptedData);
}
UPDATE 1:
As suggested, I've implemented the required GCM tag. If I'm not wrong, it will have a length of 16 bytes. I'm not sure where I need to pass it in the decryption process, or how to extract it.
//file encryption
this.encryptedData = Buffer.concat([this.salt, this.iv, this.cipher.update(file.data), this.cipher.final(), this.cipher.getAuthTag()]);
//file decryption
this.salt = file.data.slice(0, 32);
this.key = crypto.scryptSync(password, Buffer.from(this.salt, 'binary'), 32);
this.iv = file.data.slice(32, 48);
//How I extract the GCM tag at the end of the data?
//this.tag = file.data.slice(48, 64);
this.encryptedData = file.data.slice(48);
//Where I should pass the extracted GCM tag?
this.decipher = crypto.createDecipheriv('aes-256-gcm', this.key, this.iv);
this.decryptedData = Buffer.concat([this.decipher.update(this.encryptedData), this.decipher.final()]);
You should consider the authentication tag. By omitting the final() call you force decryption without authentication, and decryption without authentication is generally insecure. Apart from that, authentication is exactly what GCM adds over simple encryption. I'm also not sure whether a missing final() call is robust (i.e. always works). Furthermore, keep in mind that most libraries do not allow decryption without a valid authentication tag, for security reasons; ciphertext produced this way can only be decrypted by libraries that skip authentication, which makes you dependent on those libraries.
Where I need to implement the final() in decryption? Before calling the update()?
Typically, IV and ciphertext are concatenated at the binary level. If conversion to a string is necessary, use a suitable binary-to-text encoding, e.g. Base64:
var encryptedData = Buffer.concat([iv, encryptedData]).toString('base64')
console.log(encryptedData)
The IV is not secret and therefore may be sent unencrypted. Also, a separator (like :) is not necessary because the IV has the length of the block size (16 bytes for AES), so the criterion for separation is known:
var encryptedDataBuffer = Buffer.from(encryptedData, 'base64')
var iv = encryptedDataBuffer.slice(0, 16)
var ciphertext = encryptedDataBuffer.slice(16)
Hexadecimal could also be used as binary-to-text encoding (but this is less efficient at 50% than Base64 at 75%). If the ciphertext has to be URL safe, Base64url can be applied instead of Base64, or URL encoding can be performed.
Since you are using GCM, the (non-secret) authentication tag must also be taken into account. This seems to be missing in your code. The tag is determined with cipher.getAuthTag(), is needed for decryption (more precisely for authentication during decryption) and is typically appended to the ciphertext: iv|ciphertext|tag. Separation of the tag is feasible, because the length of the tag is known (defaults to 16 bytes).
Also, for each key generation a new, random (non-secret) salt should be generated, which is to be concatenated analogously: salt|iv|ciphertext|tag.
By the way, Base64 encoding of the file before encryption is generally not necessary, actually the binary data can be encrypted. Base64 encoding only increases the amount of data.
I'm not a security expert so I missed some points. I didn't include the GCM tag because the user sends the file to the server with a password, so I'm relying on the user's password for the encryption process. As for the salt, I think it's automatically generated when the class method is called from the for loop I use to iterate over the files to encrypt/decrypt; see the updated code.
How can I append the tag to the generated encrypted data?
Buffer.concat([this.salt, this.iv, this.cipher.update(file.data), this.cipher.final()]); According to the docs, the tag needs to be retrieved after the cipher.final() call, but in my code I'm using final() inside the concat that generates the final output to write.
Ok, thank you. I've implemented it but need some help understanding how and where I need to pass the extracted tag during decryption. See the question update.
@newbiedev - The link in my last comment also contains an example for decryption. Have you had a look at it? No offense, but authentication was not the topic of the original question (and I only mentioned it in passing in my answer). For more details, a separate question would probably make more sense, simply because it's more manageable than updates and comments.
How to stop a swing timer
I am working on a GUI. This is the code (one ActionListener for the buttons and one for the timer). I have a list of images to display one by one. On pressing the stop button, I am supposed to stop the timer and display the current image. On pressing the next button, I am supposed to display the next image.
This is part of a program on Progressive Image Transmission, which means displaying the current image in increasing order of clarity (resolution). That is what the Handler ActionListener does for each image.
Problems:
On clicking Stop, the timer is not stopping; rather, Stop's action listener is creating even more timers, I guess. (Note: Next and Stop share the same ActionListener.)
On clicking Next, the currently running timer needs to be stopped and a fresh timer started. (Note: I used timer.restart() but it didn't serve the purpose either.)
Help would be appreciated.
public void actionPerformed(ActionEvent arg0) {
int threads=0;
System.out.println("Listening : "+arg0.getActionCommand());
if(arg0.getActionCommand().equals("Send")){
stopButton.setEnabled(true);
nextButton.setEnabled(true);
}
else if(arg0.getActionCommand().equals("Stop")){
timer.stop();
saveButton.setEnabled(true);
stopButton.setEnabled(false);
}
else if(arg0.getActionCommand().equals("Next")){
stopButton.setEnabled(true);
saveButton.setEnabled(false);
timer.restart();
}
try {
image = ImageIO.read(new File(pathName[imageNo])).getScaledInstance(512,512 , BufferedImage.SCALE_SMOOTH);
}catch (IOException e) {
e.printStackTrace();
}
senderImage = new ImageIcon(image);
senderImageLabel.setIcon(senderImage);
senderFrame.setVisible(true);
ImageToMatrix(getImage(pathName[imageNo]));
Compress();
k=senderMatrix.length-1;
imageR=Decompress(k);
imageR=imageR.getScaledInstance(512, 512, BufferedImage.SCALE_SMOOTH);
receivedImage = new ImageIcon(imageR);
receiverImageLabel.setIcon(receivedImage);
receiverFrame.getContentPane().add(BorderLayout.EAST,receiverImageLabel);
receiverFrame.setVisible(true);
Handler handler = new Handler();
timer = new Timer(1000,handler);
timer.start();
sendButton.setEnabled(false);
imageNo = (imageNo+1)%5;
}
class Handler implements ActionListener{
public void actionPerformed(ActionEvent arg0) {
System.out.println("image: "+imageNo+" matrix:"+k );
k-=1;
imageR=Decompress(k);
imageR=imageR.getScaledInstance(512, 512, BufferedImage.SCALE_SMOOTH);
receivedImage = new ImageIcon(imageR);
receiverImageLabel.setIcon(receivedImage);
receiverFrame.getContentPane().add(BorderLayout.EAST,receiverImageLabel);
receiverFrame.setVisible(true);
if(k==0){
timer.stop();
}
}
}
Which Timer class are you using?
Why are you creating a new timer in the actionPerformed? Use the same timer.
yeah right @peeskillet .. That solved the problem
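As the comments point out, the core fix is to create one Timer and reuse it, rather than constructing a fresh Timer on every button press. A minimal sketch of that structure (the class and method names here are illustrative, not from the original code):

```java
import javax.swing.Timer;

// One shared Timer, created once; Stop and Next operate on the same instance.
public class TimerDemo {
    private final Timer timer;

    public TimerDemo(int delayMs) {
        // The listener would advance to the next resolution pass of the image.
        timer = new Timer(delayMs, e -> { /* show next image pass here */ });
    }

    public void stop() { timer.stop(); }      // "Stop" button: halt the single timer
    public void next() { timer.restart(); }   // "Next" button: same timer, fresh start

    public boolean isRunning() { return timer.isRunning(); }
}
```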
Rails 3: searching through a string using an array
I am attempting to do a search in Rails 3. I have a database object with an attribute like so:
@food.fruit = "Apples, Pears, Plums"
and I have a search that uses params from a checkbox, like so:
params[:search] = ["Apples", "Oranges", "Bananas", "Grapefruit"]
Since my @food object above contains "Apples" and the search array contains "Apples", I'd like this search to succeed. However, I am having trouble making it work properly.
My initial thought was to do something like
@results = Food.where("fruit LIKE ?", "%params[:search]%")
or
@results = Food.where("fruit IN ?", params[:search])
But the first only works if the params[:search] contains ONLY the @food.fruit elements and no others. The second doesn't work at all.
My last ditch resort is to do something like
@results = Array.new
params[:search].each do |search|
@results << Food.where("fruit LIKE ?", search)
end
but I'd rather not do that if I don't have to. Anyone have any advice?
What you're looking for is some SQL like this:
WHERE LOWER(fruit) LIKE '%apples%'
OR LOWER(fruit) LIKE '%oranges%'
OR LOWER(fruit) LIKE '%bananas%'
OR LOWER(fruit) LIKE '%grapefruit%'
Note that LIKE is not necessarily case insensitive so pushing everything to lower case (or upper case) is generally a good idea.
That's simple enough but saying x.where(...).where(...)... connects the conditions with AND when you want OR. One way is to build the first argument to where as a string by pasting together the right number of "LOWER(fruit) LIKE ?" strings to match the number of elements in params[:search]:
@results = Food.where(
(["LOWER(fruit) LIKE ?"] * params[:search].length).join(' OR '),
*(params[:search].map { |s| '%' + s.downcase + '%' })
)
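To see what this actually passes to where, here is the clause construction in isolation (plain Ruby, no database needed):

```ruby
search = ["Apples", "Oranges", "Bananas", "Grapefruit"]

# One "LOWER(fruit) LIKE ?" per search term, joined with OR.
clause = (["LOWER(fruit) LIKE ?"] * search.length).join(' OR ')

# One lower-cased, %-wrapped bind value per term.
binds = search.map { |s| "%#{s.downcase}%" }

puts clause
puts binds.inspect
# The query then becomes: Food.where(clause, *binds)
```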
Thanks for your answer. This makes complete sense to me, and totally should work...and it does, when the params[:search] has one item in it. But when I make that array 2 or more, it throws me the "wrong number of bind variables (1 for 3)" error, with the 3 being how many items are in the array. This really doesn't make sense as to why this is happening.
@Ryan: You could try adding a splat to un-array the bind variable array (as in my fixed answer).
ahh perfect...thank you! I wish I could be as brilliant with my SQL as you! Any tips to learn more?
@Ryan: Practice, make mistaeks, and learn from them. And get familiar with whatever documentation is available. Answering questions on SO helps too :) And there are a lot of people around here that are better at all this stuff than I am.
Essentially you're doing 5 separate searches. While making 5 separate SQL queries might not be the answer, you could always join the array, like:
scope :search_fruit, lambda {|*fruits|
where fruits.flatten.map {|fruit| arel_table[:fruit].matches("%#{fruit}%") }.inject(&:or)
}
and
Food.search_fruit(params[:fruit])
Pymssql error when trying to connect to SQL Server
I need help...
I am trying to connect to a database used by a program on our business desktop. The program is owned by another company, but I got permission to access the app's database since it only holds our company's information. Using SQL Server Management Studio I can clearly see and connect to the database. However, when I try to connect using pymssql it shows me this error:
Traceback (most recent call last):
File "harvester_of_tweets.py", line 11, in <module>
conn = pymssql.connect(host='localhost', user='username', password='password!', database='Database')
File "/usr/local/lib/python2.7/dist-packages/pymssql.py", line 607, in connect
raise OperationalError, e[0]
pymssql.OperationalError: DB-Lib error message 20009, severity 9:
Unable to connect: Adaptive Server is unavailable or does not exist
Net-Lib error during Connection refused Error 111 - Connection refused
I wrote this to connect to the db:
conn = pymssql.connect("DESKTOP-J8IS63J\SQLEXPRESS", "user", "pass", "DgtalLaundry")
Please help me fix this.
Thanks
I tried using pyodbc but the same thing ended up happening. I read that the server might not be listening on a TCP port, but we don't have SQL Server Configuration Manager, so I couldn't check. Since the server is reachable in SSMS, I can clearly see that it exists and I can connect to it.
You can see which port it runs on if you run following:
exec MASTER..xp_readerrorlog 0, 1, N'Server is listening on'
Or select distinct local_net_address, local_tcp_port from sys.dm_exec_connections where local_net_address is not null
Yeah, otherwise you wouldn't be able to run anything :)
@siggemannen So I tried it and it showed no ports; the result was empty. I also get "failed to connect to server" whenever I try to export a data-tier database.
@siggemannen to be more specific this shows when I try to export data tier https://drive.google.com/file/d/1m2P4i7htLTdYAvUr6fQrwKsVeG2-xcI-/view
Both SQL calls were empty?
@siggemannen Yes
Modify DataFrame based on previous row (cumulative sum with condition based on previous cumulative sum result)
I have a dataframe with one column containing numbers (quantity). Every row represents one day, so the whole dataframe should be treated as sequential data. I want to add a second column that calculates the cumulative sum of the quantity column, but whenever the cumulative sum becomes greater than 0, the next row should start counting from 0 again.
I solved this problem using iterrows(), but I read that this function is very inefficient, and with millions of rows the calculation takes over 20 minutes. My solution below:
import pandas as pd
df = pd.DataFrame([-1,-1,-1,-1,15,-1,-1,-1,-1,5,-1,+15,-1,-1,-1], columns=['quantity'])
for index, row in df.iterrows():
if index == 0:
df.loc[index, 'outcome'] = df.loc[index, 'quantity']
else:
previous_outcome = df.loc[index-1, 'outcome']
if previous_outcome > 0:
previous_outcome = 0
df.loc[index, 'outcome'] = previous_outcome + df.loc[index, 'quantity']
print(df)
# quantity outcome
# -1 -1.0
# -1 -2.0
# -1 -3.0
# -1 -4.0
# 15 11.0 <- since this is greater than 0, next line will start counting from 0
# -1 -1.0
# -1 -2.0
# -1 -3.0
# -1 -4.0
# 5 1.0 <- since this is greater than 0, next line will start counting from 0
# -1 -1.0
# 15 14.0 <- since this is greater than 0, next line will start counting from 0
# -1 -1.0
# -1 -2.0
# -1 -3.0
Is there faster (more optimized way) to calculate this?
I'm also not sure if the "if index == 0" block is the best solution, or whether this can be solved more elegantly. Without this block there is an error, since the first row has no "previous row" to use in the calculation.
Your data is more like an array. Have you tried to look at numpy functions? Iterating over numpy arrays is much more efficient than iterating over DataFrame rows - never do that!
Iterating over DataFrame rows is very slow and should be avoided. Working with chunks of data is the way to go with pandas.
For you case, looking at your DataFrame column quantity as a numpy array, the code below should speed up the process quite a lot compared to your approach:
import pandas as pd
import numpy as np
df = pd.DataFrame([-1,-1,-1,-1,15,-1,-1,-1,-1,5,-1,+15,-1,-1,-1], columns=['quantity'])
x = np.array(df.quantity)
y = np.zeros(x.size)
total = 0
for i, xi in enumerate(x):
total += xi
y[i] = total
total = total if total < 0 else 0
df['outcome'] = y
print(df)
Out :
quantity outcome
0 -1 -1.0
1 -1 -2.0
2 -1 -3.0
3 -1 -4.0
4 15 11.0
5 -1 -1.0
6 -1 -2.0
7 -1 -3.0
8 -1 -4.0
9 5 1.0
10 -1 -1.0
11 15 14.0
12 -1 -1.0
13 -1 -2.0
14 -1 -3.0
If you still need more speed, I suggest having a look at numba as per jezrael's answer.
Edit - Performance test
I got curious about performance and did this module with all 3 approaches.
I haven't optimised the individual functions, just copied the code from the OP and jezrael's answer with minor changes.
"""
bench_dataframe.py
Performance test of iteration over DataFrame rows.
Methods tested are `DataFrame.iterrows()`, loop over `numpy.array`,
and same using `numba`.
"""
from numba import njit
import pandas as pd
import numpy as np
def pditerrows(df):
    """Iterate over DataFrame using `iterrows`"""
    for index, row in df.iterrows():
        if index == 0:
            df.loc[index, 'outcome'] = df.loc[index, 'quantity']
        else:
            previous_outcome = df.loc[index-1, 'outcome']
            if previous_outcome > 0:
                previous_outcome = 0
            df.loc[index, 'outcome'] = previous_outcome + df.loc[index, 'quantity']
    return df

def nparray(df):
    """Convert DataFrame column to `numpy` arrays."""
    x = np.array(df.quantity)
    y = np.zeros(x.size)
    total = 0
    for i, xi in enumerate(x):
        total += xi
        y[i] = total
        total = total if total < 0 else 0
    df['outcome'] = y
    return df

@njit
def f(x, lim):
    result = np.empty(len(x))
    result[0] = x[0]
    for i, j in enumerate(x[1:], 1):
        previous_outcome = result[i-1]
        if previous_outcome > lim:
            previous_outcome = 0
        result[i] = previous_outcome + x[i]
    return result

def numbaloop(df):
    """Convert DataFrame to `numpy` arrays and loop using `numba`.
    See [https://stackoverflow.com/a/69750009/5069105]
    """
    df['outcome'] = f(df.quantity.to_numpy(), 0)
    return df

def create_df(size):
    """Create a DataFrame filled with -1's and 15's, with 90% of
    the entries equal to -1 and 10% equal to 15, randomly
    placed in the array.
    """
    df = pd.DataFrame(
        np.random.choice(
            (-1, 15),
            size=size,
            p=[0.9, 0.1]
        ),
        columns=['quantity'])
    return df
# Make sure all tests lead to the same result
df = pd.DataFrame([-1,-1,-1,-1,15,-1,-1,-1,-1,5,-1,+15,-1,-1,-1],
columns=['quantity'])
assert nparray(df.copy()).equals(pditerrows(df.copy()))
assert nparray(df.copy()).equals(numbaloop(df.copy()))
Running for a somewhat small array, size = 20_000, leads to:
In: import bench_dataframe as bd
.. df = bd.create_df(size=20_000)
In: %timeit bd.pditerrows(df.copy())
7.06 s ± 224 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In: %timeit bd.nparray(df.copy())
9.76 ms ± 710 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In: %timeit bd.numbaloop(df.copy())
437 µs ± 12.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Here numpy arrays were 700+ times faster than iterrows(), and numba was still 22 times faster than numpy.
And for larger arrays, size = 200_000, we get:
In: import bench_dataframe as bd
.. df = bd.create_df(size=200_000)
In: %timeit bd.pditerrows(df.copy())
I gave up and hit Ctrl+C after 10 minutes or so... =P
In: %timeit bd.nparray(df.copy())
86 ms ± 2.63 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In: %timeit bd.numbaloop(df.copy())
3.15 ms ± 66.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Making numba again 25+ times faster than numpy arrays for this example, and confirming that you should avoid using iterrows() at all costs for anything more than a couple of hundred rows.
I tested your solution but there is a crucial difference between its result and my expected result - whenever 'outcome' is greater than 0, I want it to be saved as that value (and not 0). Only the next line should start calculating from 0.
In my example data, the first line with quantity = 15 should have outcome = 11. Using your method, the outcome = 0.
@Malachiasz - To improve performance in looping solutions you need numba; using only enumerate, performance is not great.
@Malachiasz all you have to do is to swap the last 2 lines of the for loop... the point was: work with arrays rather than dataframes
No problem. Updated the answer to include the full code. Please rate answers and mark the one that fixes your case as accepted.
@jezrael fair point. numba has some overhead due to compilation, so real benefit should show up for much larger arrays. I'll update the benchmark and see =)
I think numba is the best when working with loops if performance is important:
@njit
def f(x, lim):
    result = np.empty(len(x), dtype=np.int64)  # np.int was removed in NumPy 1.24+
    result[0] = x[0]
    for i, j in enumerate(x[1:], 1):
        previous_outcome = result[i-1]
        if previous_outcome > lim:
            previous_outcome = 0
        result[i] = previous_outcome + x[i]
    return result
df['outcome1'] = f(df.quantity.to_numpy(), 0)
print(df)
quantity outcome outcome1
0 -1 -1.0 -1
1 -1 -2.0 -2
2 -1 -3.0 -3
3 -1 -4.0 -4
4 15 11.0 11
5 -1 -1.0 -1
6 -1 -2.0 -2
7 -1 -3.0 -3
8 -1 -4.0 -4
9 5 1.0 1
10 -1 -1.0 -1
11 15 14.0 14
12 -1 -1.0 -1
13 -1 -2.0 -2
14 -1 -3.0 -3
| common-pile/stackexchange_filtered |
Why King's Cross?
When Voldemort casts Avada Kedavra on Harry in the forest, Harry goes into some kind of passage between worlds that looks to him like King's Cross station. So why does Harry see himself there? What is the significance of that place? Theoretically, would anyone who's half-dead see himself in King's Cross?
On a closer look at DH chapter 35, I have some more questions:
"It looks," he said slowly, "like King's Cross station. Except a lot cleaner and empty, and there are no trains as far as I can see."
"King's Cross Station!" Dumbledore was chuckling immoderately. "Good gracious, really?"
Why does Dumbledore seem surprised? And why is it important that the station is empty?
Possible duplicate of What was the scene in King's Cross all about? - the earlier question covers this one as well, and the answer provides a JKR interview where she was asked "Why was kings cross the place harry went to when he died"
Quite possibly, and this is pure conjecture, Harry sees this as a place where two worlds meet - including the Muggle world and the magical world.
It's part dream, and dreams are mental fruit salad that the right side of your brain whips up while the left side is out of action.
| common-pile/stackexchange_filtered |
Lucene limit file size while saving into data base
I'm new to Lucene and I want to save the index files into a database, but I get the exception below. I would rather not change max_allowed_packet; instead I want to limit the size of the files.
Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.store.jdbc.JdbcStoreException: Failed to execute sql [insert into search_lucene (name_, value_, size_, lf_, deleted_) values ( ?, ?, ?, current_timestamp, ? )]; nested exception is com.mysql.jdbc.PacketTooBigException: Packet for query is too large (1286944 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:309)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:286)
Caused by: org.apache.lucene.store.jdbc.JdbcStoreException: Failed to execute sql [insert into search_lucene (name_, value_, size_, lf_, deleted_) values ( ?, ?, ?, current_timestamp, ? )]; nested exception is com.mysql.jdbc.PacketTooBigException: Packet for query is too large (1286944 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
at org.apache.lucene.store.jdbc.support.JdbcTemplate.executeUpdate(JdbcTemplate.java:185)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput.close(AbstractJdbcIndexOutput.java:47)
at org.apache.lucene.store.jdbc.index.RAMAndFileJdbcIndexOutput.close(RAMAndFileJdbcIndexOutput.java:81)
at org.apache.lucene.index.CompoundFileWriter.close(CompoundFileWriter.java:203)
at org.apache.lucene.index.SegmentMerger.createCompoundFile(SegmentMerger.java:204)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4263)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3884)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:205)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:260)
Caused by: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (1286944 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3915)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2598)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2825)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2156)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2459)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2376)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2360)
at org.apache.lucene.store.jdbc.support.JdbcTemplate.executeUpdate(JdbcTemplate.java:175)
read from http://wiki.apache.org/lucene-java/LuceneFAQ#Can_I_store_the_Lucene_index_in_a_relational_database.3F
Can I store the Lucene index in a relational database?
Lucene does not support that functionality out of the box, but several people have implemented JdbcDirectory's. The reports we have seen so far indicate that performance with such implementations is not great, but it is doable.
To limit the file size you'll need to implement a Directory yourself. The trick is to split every file into parts. Perhaps you can borrow some code from lucene-appengine, which splits the file into multiple SegmentHunks.
I hope you know what you're doing because keeping lucene indexes in a database will be much slower than using the usual memory-mapped files.
I'm trying to put all the memory-mapped files in a .cfs file (compound file) and save it in the database, to keep a secure copy of my index in the DB. I know it's additional work. Do you mean that the .cfs file will be split into multiple files and then saved in the database?
Yes, I mean just that. But if all you want is to archive the index into the database, then I suggest you split it with your archival program. For example, with 7z you can split the archive using the "-v" option: 7zr a -v1m archive.7z cfsFile.
I use JdbcDirectory; it's responsible for inserting files into and retrieving them from the database, so I can't archive it. Do you have another solution to limit the .cfs file size when indexing?
Why should there be another solution? Either tweak the MySQL maximum packet size or implement the splitting in your own code.
Tried:
indexWriter.setUseCompoundFile(false);
LogByteSizeMergePolicy aLogByteSizeMergePolicy = new LogByteSizeMergePolicy();
aLogByteSizeMergePolicy.setMaxMergeMB(2);
aLogByteSizeMergePolicy.setMaxMergeMBForForcedMerge(4);
aLogByteSizeMergePolicy.setUseCompoundFile(false);
// see also setMaxCFSSegmentSizeMB, setMaxMergeDocs, setMergeFactor
indexWriter.setMergePolicy(aLogByteSizeMergePolicy);
// Deprecated: use IndexWriterConfig.setMergePolicy(MergePolicy) instead.
These properties only limit the size of each segment in the file, not the total size.
| common-pile/stackexchange_filtered |
how to use LINQ to query a generic collection
I wanted to know how to query a generic collection with LINQ.
My Customer class is as
class Customer
{
    public string Name { get; set; }
    public string id { get; set; }
}
My collection class is
class genericCollection<T> : CollectionBase
{
    public void add(T GenericObject)
    {
        this.List.Add(GenericObject);
    }
}
Then I add some data to customer collection
genericCollection<Customer> customers = new genericCollection<Customer>();
customers.add(new Customer {id= "1",Name="Andy"});
customers.add(new Customer { id = "2", Name = "MArk" });
customers.add(new Customer { id = "3", Name = "Jason" });
customers.add(new Customer { id = "4", Name = "Alex" });
Now I can iterate through the customers object using a foreach loop, but how can I query it with LINQ?
I want to use something like
var query = from c in customers
select c;
But I am not able to successfully cast it.
Regards,
Sab
Why are you using a custom collection?
What do you suggest? Anyway, I was trying this as a test pilot for LINQ.
I would suggest using the List class instead of a custom collection.
Some answers suggest using customers.OfType<Customer>; this tests the type of every object in the collection before converting it. You know that each object is of that type, so you don't need the runtime type check. For that reason, you should use customers.Cast<Customer> instead.
Having said that, I agree that it would be better not to use CollectionBase in the first place; it would be better to use a generic collection type; if you prefer to define your own collection type, then you should derive from (or delegate to) a generic collection.
try to change your query to the following (assuming that your CollectionBase implements IEnumerable):
var query = from c in customers.OfType<Customer>() select c;
or let your genericCollection<T> implement IEnumerable<T>
Thanks @Tobias. I tried OfType but forgot to put the parentheses.
@user1131926 since all objects in customers are known to be of type Customer, it would be more efficient to use from c in customers.Cast<Customer>() select c
The LINQ standard query operators are extension methods defined for IEnumerable and IEnumerable<T>. You could try:
class genericCollection<T> : Collection<T>
or use another collection type such as List<T>
You can specify type in LINQ query:
var query = from Customer c in customers select c;
or implement IEnumerable<T> for eg:
class genericCollection<T> : CollectionBase, IEnumerable<T>
{
    public void add(T GenericObject)
    {
        this.List.Add(GenericObject);
    }

    public IEnumerator<T> GetEnumerator()
    {
        return this.List.Cast<T>().GetEnumerator();
    }
}
Thanks Lolo. Just implemented the IEnumerator. Thanks all for the help.
The problem is that you derive from CollectionBase. You should also implement ICollection<T> and no cast is needed anymore.
You need to implement the IEnumerable<T> interface:
public class genericCollection<T>: CollectionBase, IEnumerable<T>{}
But doing this worked without using IEnumerable:
var query = from c in customers.OfType<Customer>()
            where c.Name == "Mark"
            select c;
Use Select:
customers.Select(x => {...})
Working with:
public void add(T GenericObject)
{
    this.List.Add(GenericObject);
}
IEnumerator<T> IEnumerable<T>.GetEnumerator()
{
    return this.List.OfType<T>().GetEnumerator();
}
Now, there is a library which provides strongly-typed, queryable collections in TypeScript.
These collections are:
List
Dictionary
The library is called ts-generic-collections-linq.
Source code on GitHub:
https://github.com/VeritasSoftware/ts-generic-collections
NPM:
https://www.npmjs.com/package/ts-generic-collections-linq
With this library, you can create collections (like List<T>) and query them as shown below.
let owners = new List<Owner>();
let owner = new Owner();
owner.id = 1;
owner.name = "John Doe";
owners.add(owner);
owner = new Owner();
owner.id = 2;
owner.name = "Jane Doe";
owners.add(owner);
let pets = new List<Pet>();
let pet = new Pet();
pet.ownerId = 2;
pet.name = "Sam";
pet.sex = Sex.M;
pets.add(pet);
pet = new Pet();
pet.ownerId = 1;
pet.name = "Jenny";
pet.sex = Sex.F;
pets.add(pet);
//query to get owners by the sex/gender of their pets
let ownersByPetSex = owners.join(pets, owner => owner.id, pet => pet.ownerId, (x, y) => new OwnerPet(x,y))
.groupBy(x => [x.pet.sex])
.select(x => new OwnersByPetSex(x.groups[0], x.list.select(x => x.owner)));
expect(ownersByPetSex.toArray().length === 2).toBeTruthy();
expect(ownersByPetSex.toArray()[0].sex == Sex.F).toBeTruthy();
expect(ownersByPetSex.toArray()[0].owners.length === 1).toBeTruthy();
expect(ownersByPetSex.toArray()[0].owners.toArray()[0].name == "John Doe").toBeTruthy();
expect(ownersByPetSex.toArray()[1].sex == Sex.M).toBeTruthy();
expect(ownersByPetSex.toArray()[1].owners.length == 1).toBeTruthy();
expect(ownersByPetSex.toArray()[1].owners.toArray()[0].name == "Jane Doe").toBeTruthy();
| common-pile/stackexchange_filtered |
Is the slope of $\vert x\vert$ not $0$ at $x=0$?
The absolute function
$$f(x)=\begin{cases}
x, &\text{if $x\ge0$}\\
-x, &\text{if $x<0$}
\end{cases}$$
has the derivative
$$f'(x)=\begin{cases}
1, &\text{if $x\ge0$}\\
-1, &\text{if $x<0$}
\end{cases}$$
So the slope of the graph at $x=0$ is
$$f'(0)=1$$
The graph of $f(x)$ and its slope (calculated by $f'(x)$) at $x=0$ are shown below.
However I was wondering that the slope of $\vert x\vert$ at $x=0$ should be $0$, based on the fact that the tangent should be the $x$-axis as shown below.
My question is: shouldn't the value of the slope of $\vert x\vert$ at $x=0$ be $0$, based on graphical methods? I say this because slope is rather a graphical concept, while the derivative is the calculus concept. If the slope is $0$, what method should be used to derive this result?
Think about it: why "should" the slope be horizontal in your opinion? Symmetry is no reason. Also note that there is no reason that the equal sign is in the upper case rather than the lower. Both will define the same function $|x|$.
"The graph of $f(x)$ and its slope are shown below." You graphed the slope incorrectly, and I think that's causing some confusion. Also, the tangent line you have drawn at the origin is only one of many tangent lines at the origin, which explains visually why the slope at the origin is undefined.
I am curious about your intuition regarding what the tangent line "should" be. What would you suggest for the tangent line at $0$ to the graph of $$f(x)=\begin{cases}
2x, &\text{if $x\ge0$}\\
-\frac{1}{3}x, &\text{if $x<0$}
\end{cases}$$
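One non-graphical way to settle this is the difference quotient itself: the derivative at $0$ exists only if the two one-sided limits agree, and for $f(x)=\vert x\vert$ they do not:
$$f'(0^+)=\lim_{h\to0^+}\frac{\vert h\vert-\vert 0\vert}{h}=\lim_{h\to0^+}\frac{h}{h}=1,\qquad f'(0^-)=\lim_{h\to0^-}\frac{\vert h\vert-\vert 0\vert}{h}=\lim_{h\to0^-}\frac{-h}{h}=-1.$$
Since $1\neq-1$, $f'(0)$ does not exist, so the graph has no well-defined slope at $x=0$ (in particular the slope is not $0$), and strictly speaking the piecewise formula for $f'$ should use $x>0$ rather than $x\ge0$.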
| common-pile/stackexchange_filtered |
Angular: Keep model data in sync with database
I'm taking my first steps with Angular. I cannot understand the best way to handle data modifications in the model and store them in the database, e.g.:
In controller:
$scope.items = [
    { id: 1, status: 0, data: 'Foo Item' },
    { id: 2, status: 0, data: 'Foooo' },
    { id: 3, status: 1, data: 'OooItem' }
];
In view:
<tr ng-repeat="item in items | orderBy:'-id'">
<td>{{item.id}}</td>
<td>{{item.data}}</td>
<td>
<div class="btn-group" role="group">
<button type="button" class="btn btn-default" ng-click="status(item.id, 1)">Acept</button>
<button type="button" class="btn btn-default" ng-click="status(item.id, 2)">Reject</button>
</div>
</td>
</tr>
What should I do to update the item status to 1 or 2? Make a call to the server, and then retrieve the new model? Update the model data with JS, and make the call? Is there any way to do it automatically? Does Angular provide some method to access the current "clicked" item to update the property (status in this case)?
Hope I was clear.
Thanks.
EDIT: So, based on Dr Jones comment, i write this using underscore.
function status(id, status) {
    $http.put(/* ... */).then(function () {
        // after a success response, update the model
        var item = _.find($scope.items, function (rw) {
            return rw.id == id;
        });
        item.status = status;
    });
}
Is this a valid and correct way to do this?
The answer is that it's really up to you. As with all async programming you have the option to 'auto-sync' or sync as a result of a user event (e.g. hitting a save button). It really depends on how your app is designed.
Frameworks like firebase (angular-fire lib) have handy built in auto-sync functionality, alternatively the REST Post/Put request is a more traditional design pattern.
Ok, got it! But suppose I will not use any extra framework. So I need to call the API, execute the update operation, and get a response in the UI. My concern is: I need to find the item in the model by the ID received from the server, and sync this data, right?
Correct - it really depends on the context of how your app is built and how expensive it is to refresh the data. So most of the time you may just refresh the data after doing a save. However if calling the data is slow, you may just update the client side model with the new information. Typically I choose to refresh from the server if a model has been updated so I can be sure that I have the latest data after an update.
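For the find-and-update step in the OP's edit, the same thing can be done in plain JavaScript, with no underscore needed (assuming `Array.prototype.find` is available, i.e. ES2015+). This is only a sketch of the client-side model update; the `$http.put` URL and payload depend on your API and are elided here:

```javascript
// Update the status of the item with the given id in the local model.
// Returns true if an item was found and updated, false otherwise.
function applyStatus(items, id, status) {
  var item = items.find(function (rw) { return rw.id === id; });
  if (!item) {
    return false; // the id is not present in the local model
  }
  item.status = status;
  return true;
}

// In the controller, this would typically run inside the $http success
// callback, so the model only changes after the server confirms the update:
// $http.put(/* url, payload */).then(function () {
//   applyStatus($scope.items, id, status);
// });
```

Note the strict equality (`===`): if the id comes back from the server as a string while the model stores numbers, convert it first or the lookup will fail.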
| common-pile/stackexchange_filtered |
Prevent debugging chrome packaged app
I created a "packaged app" for Chrome browser and I do not want others to debug the app.
Is there a flag in the manifest or any other way to prevent debugging?
No, you cannot prevent devtools access.
If you use native client and don't use any HTML/JS/DOM, except for loading the native client module, then you would not be able to debug very much with devtools.
| common-pile/stackexchange_filtered |
How to check if modal view controller failed to display?
I have an app that uses modal view controllers for various purposes. Most importantly for this question, we use a modal controller to display the login screen when a user is logged out.
The issue I ran into is that it appears that presentModalViewController:animated: will fail silently if another modal controller is being animated on or off screen when the call is made. It will print to the debug console with a warning, but the method itself doesn't return a BOOL or throw an exception, so I can't seem to check in code whether it failed so I can retry it in a second.
Is there some way to detect that the controller failed to display immediately after calling this method?
So as I was typing this question, I realized I could check presentingViewController (or parentViewController) to see if it was displayed, and if nil, try again.
Tested and it works. Figured I'd post and answer the question anyway for others to find that may run into a similar issue.
A few more things I noticed in testing:
It looks like if the modal view is not displayed with an animation, it will be successful even if another modal view is animating when you make the call.
Calling dismissModalViewControllerAnimated:NO on the controller you are displaying off of, before presenting the modal controller with animation, will also allow it to complete successfully even if another controller is animating.
Nice approach, helped me!
| common-pile/stackexchange_filtered |
I have a gitlab project that needs to run integration tests in different environments (Oracle etc), how can I change the environment
test:
stage: test
tags:
- linux
- docker
script:
- echo "testing"
- ./grailsw "Oracledev test-app"
artifacts:
untracked: true
name: "$CI_PROJECT_NAME-$CI_JOB_NAME-$CI_COMMIT_SHA"
expire_in: 2 days
when: always
allow_failure: true
The environment name is oracledev, but the job is not able to set the environment to oracledev, which has been defined in the Config.groovy file.
What error are you getting and from where?
Try running grails as ./grailsw -Dgrails.env=oracledev test-app
@Daniel the console output is as follows:
Running pre-compiled script
| Script 'Oracledev' not found, did you mean:
GenerateOracleChangelog
Clean
CleanAll
AssetClean
DependencyReport
ERROR: Job failed: exit code 1
I believe that doelleri's suggestion will fix your problem. Grails thinks you're trying to run a command called 'Oracledev' when instead you want to run 'test-app' with the environment 'oracledev'.
@doelleri Thank you, it worked!!
Grails has three pre-defined environments: dev, test, and prod. To run a command in these environments, you would use ./grailsw prod test-app.
To specify any other custom environment for a Grails command you need to use a grails.env system property like so:
./grailsw -Dgrails.env=oracledev test-app
You can read a little more about this in the Environments section of the docs.
| common-pile/stackexchange_filtered |
file_get_contents()/curl getting unexpected page
I'm doing some scraping with PHP. I've been extracting data, including the link to the next relevant page, so the whole thing is automatic. The problem is that I seem to be getting a page which is slightly modified compared to what I would expect using that URL in my browser (e.g. the dates are different).
I've tried using curl and file_get_contents but both get the wrong file.
At the moment I am using:
$url = "http://www.example.com";
$timeout = 5;
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
$temp = curl_exec($ch);
curl_close($ch);
What is going on here?
UPDATE:
I've tried imitating a browser using the following code but still unsuccessful. I find this bizarre.
function get_url_contents($url){
$crl = curl_init();
$timeout = 10;
$header=array(
'User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:<IP_ADDRESS>) Gecko/20101026 Firefox/3.6.12',
'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language: en-us,en;q=0.5',
'Accept-Encoding: gzip,deflate',
'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7',
'Keep-Alive: 115',
'Connection: keep-alive',
);
curl_setopt($crl, CURLOPT_HTTPHEADER, $header);
curl_setopt ($crl, CURLOPT_URL,$url);
curl_setopt ($crl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt ($crl, CURLOPT_CONNECTTIMEOUT, $timeout);
curl_setopt ($crl, CURLOPT_AUTOREFERER, FALSE);
curl_setopt ($crl, CURLOPT_FOLLOWLOCATION, FALSE);
$ret = curl_exec($crl);
curl_close($crl);
return $ret;
}
Further update:
Seems that the site is using my location to discriminate. Is there a locale option?
Try removing any cookies stored along for that domain in your browser, then load again and compare the result to your curl result.
Nope, browser still gets the desired page.
Can be many things...
Server may render pages differently based on cookies and header sent
Server may render pages differently based on existing pre-conditions and states on the server
You may have a proxy in between that modifies the content based on user-agent and since you don't have a specific user-agent (such as CURL browser) then your proxy is sending back different content
This is just a few things that could happen!
Ah ok. Are there any parameters which I can pass to curl to imitate my browser?
You can use curl_setopt() to set a custom user agent identifier: curl_setopt($curl_handle, CURLOPT_USERAGENT, 'Any user agent string you like');
| common-pile/stackexchange_filtered |
What's the difference between Version and Installed in NuGet GUI in Visual Studio?
I've noticed that in Visual Studio's GUI for NuGet, the table showing which projects have a given package installed has the columns Version and Installed. Version always seems to either match what Installed says or is blank. For example, here's a screenshot after having just installed System.Net.Http.Json to two projects.
Why does the last project have both columns populated, but the project before it has nothing in the Version column?
See answer in https://stackoverflow.com/questions/69795755/what-is-the-significance-of-the-version-column-in-visual-studio-nugetpackagema
Does this answer your question? What is the significance of the 'Version' column in Visual Studio NugetPackageManager interface? (as distinct from the 'Installed' column)
| common-pile/stackexchange_filtered |
How can I suppress all javascript errors which occur, including those in the console?
I'm trying to find a way to suppress all javascript errors that occur, including those that are invoked from the firebug/chrome console.
It's not that my code is buggy or anything, it's more of an experiment :)
window.onerror only works to a limited degree; are there any other ways I can block the errors?
Thanks!
possible duplicate of How javascript try...catch statement works
What do you mean by "invoked from the console"?
JavaScript errors are thrown based on the code.
If you want to prevent the error console in the browser to report any errors, then place all code inside try-catch:
try {
// all the code
} catch(e) {}
But once an error is thrown, the statements that follow are not executed, so the idea is to not have any errors thrown in the first place.
You can also try setting window.onerror, which acts as a global try / catch.
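A minimal sketch of that approach (the handler name here is illustrative; the key detail is that returning `true` from a window.onerror handler tells the browser to suppress its default reporting of uncaught errors):

```javascript
// Global handler for uncaught runtime errors. Returning true means
// "handled" and suppresses the browser's default error reporting.
function silentErrorHandler(message, source, lineno, colno, error) {
  // Optionally record the error somewhere (server log, array, etc.)
  // instead of letting it surface in the console.
  return true;
}

// In a browser you would install it like this:
// window.onerror = silentErrorHandler;
```

Note that window.onerror only sees uncaught runtime errors; syntax errors that prevent a script from parsing, and errors already swallowed by a try/catch, never reach it.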
| common-pile/stackexchange_filtered |
LibGdx Hiero fonts are coming out crooked?
I'm using a bitmap font (generated with Hiero), but for some reason the text isn't straight, as you can see above: notice how the 'a' in the word "bad" is below the 'b' and 'd'. The letter 'd' in the word "and" sits way higher than 'a' and 'n'.
How come this is the case? Is there any method to fix this? Is it an issue with the constructor?
font.setUseIntegerPositions(false);
| common-pile/stackexchange_filtered |
How to determine number of rows in a merged cell using apache poi?
I am using excel to input data for my test automation scripts and I am able read data from excel when i know the exact row and the column to read. The challenge which I am facing is when I have merged cells in the sheet. E.g. Test data excel sample
Here ScriptName and Iteration are my primary keys to identify a unique set of data for my script.
So my question here is:
I want to fetch all the ReferenceSetName with respect to a ScriptName, and Iteration i.e. for Login script, Iteration 1: I have to fetch ABC1 Ref set, ABC2 Ref set, ABC3 Ref set
I want to fetch all the PackageName with respect to a ScriptName, Iteration, and ReferenceSet i.e. for Login script, Iteration 1, ReferenceSet ABC1 Ref set: I have to fetch ABC1, ABC2, ABC3
Below is the method, getEntireCellValue(), that I am currently using to fetch the data from Excel, and I need help solving the above 2 problems. Any kind of support is really appreciated.
public void getExcelRowNum() {
boolean found = false;
String scriptCell = null, iterationCell = null;
try {
@SuppressWarnings("rawtypes")
Iterator iterator = sheet.rowIterator();
while (iterator.hasNext()) {
Row row = (Row) iterator.next();
scriptCell = row.getCell(1).toString().trim();
iterationCell = row.getCell(2).toString().trim();
if (row.getCell(2).getCellTypeEnum() == CellType.NUMERIC)
iterationCell = iterationCell.substring(0, iterationCell.indexOf(".")).trim();
if ((scriptCell.equals(scriptName) && iterationCell.equals(String.valueOf(iteration).trim()))
|| (scriptCell.equals(scriptName) && Integer.parseInt(iterationCell) == iteration)) {
rowNum = row.getRowNum();
found = true;
break;
}
}
if (rowNum == -1 || found == false)
throw new Exception("Please check the test name: " + scriptName + " or the iteration: " + iteration
+ " in the test data sheet");
row = sheet.getRow(0);
}
catch (Exception e) {
e.printStackTrace();
}
}
public void getExcelColNum(String colName) {
boolean found = false;
try {
for (int i = 0; i < row.getLastCellNum(); i++) {
if (row.getCell(i).getStringCellValue().trim().equals(colName.trim())) {
col_Num = i;
found = true;
break;
}
}
if (col_Num == -1 || found == false)
throw new Exception("Please check the column name: " + colName + " in the test data sheet");
}
catch (Exception e) {
e.printStackTrace();
}
}
public void getCell() {
try {
row = sheet.getRow(rowNum);
cell = row.getCell(col_Num);
}
catch (Exception e) {
e.printStackTrace();
}
}
// Prior to calling this method, I am connecting to the Excel sheet, which is in .xlsx or .xls format
public String getEntireCellValue(String sheetName, String colName) {
try {
sheet = workbook.getSheet(sheetName);
getExcelRowNum();
getExcelColNum(colName);
getCell();
if (cell.getCellTypeEnum() == CellType.STRING)
return cell.getStringCellValue().trim();
else if (cell.getCellTypeEnum() == CellType.BLANK)
return null;
}
catch (Exception e) {
e.printStackTrace();
return null;
}
}
public int getNumOfMergedRows() {
int rowsMerged = 0;
try {
for(int i = 0; i < sheet.getNumMergedRegions(); i++) {
CellRangeAddress range = sheet.getMergedRegion(i);
if (range.getFirstRow() <= rowNum && range.getLastRow() >= rowNum) {
++rowsMerged;
}
}
System.out.println("Number of rows merged are: " + rowsMerged);
}
catch (Exception e) {
e.printStackTrace();
}
return rowsMerged;
}
P.S. What I am doing here is, I am trying to fetch the number of merged rows for a script e.g. 6 rows are merged for Login script and then find number of cells inside those 6 rows to get the reference set name (3 cells).
Note: When I call the above method - getNumOfMergedRows() to determine number of rows merged for Login script, I am getting 4 as output instead of 6.
What is the value of rowNum variable in your if condition?
Added the code which I use to determine the rowNum. Note: all the variables are instance variables.
The code below determines the number of merged cells in a column - colName, with the beginning row as startingRow:
public int getNumOfMergedRows(String colName, int startingRow) {
int rowsMerged = 0, col = 0;
XSSFRow mergedRow = null;
XSSFCell mergedCell = null;
try {
col = getExcelColNum(colName);
for (int i = startingRow + 1; i < sheet.getPhysicalNumberOfRows(); i++) {
mergedRow = sheet.getRow(i);
mergedCell = mergedRow.getCell(col);
if (mergedCell.getCellTypeEnum() == null || mergedCell.getCellTypeEnum() == CellType.BLANK)
rowsMerged++;
else
break;
}
rowsMerged++;
}
catch (Exception e) {
e.printStackTrace();
}
logger.info(rowsMerged + " rows are merged in column " + colName + " for " + scriptName + " script");
return rowsMerged;
}
Can you be sued for using someone else's last name in app?
We are looking to create an app that contains someones last name in the title, as well as some data created by him.
The data is public for anyone to use in the form of spreadsheets, the goal of the app is simply to make it easier to use. The reason the last name is in the title is because the data is named after him.
We're wondering if we could be sued for this? They have their own app, but we feel it's missing a lot of features. They are on another continent, though I doubt that matters.
I have searched copyright and trademark databases and they come back with no results, but again not sure if that matters.
You can be sued for this (or anything). The question is: does he have a reasonable claim? That would depend on a number of things, including how you use his name (i.e. if your use implies a relationship with him when it doesn't).
Also, just because data is public does not mean it's not copyrighted - depending on the copyright, there is every possibility that your use of this data could be a breach of copyright - again, it depends on the data source.
In many (most?) places, just because something is not in a trademark database does not mean it's not trademarked; it just means it has fewer trademark protections. Similarly for copyright.
If you are confident you are not stepping on his toes, why not tell him what you want to do and ask him if he is OK with it?
Technically an unregistered trade mark has the same protections as a registered one: the plaintiff just has to go through the additional step of proving the trade mark's existence.
@DaleM this is not universally/entirely correct. In New Zealand, at least, you can get additional damages if you have a registered trademark. From the little I've read, in the USA a registered trademark allows you to get Customs to prevent importing of infringing foreign goods, and gives access to Federal court.
How to disable seemingly random syntax highlight in Notepad++
Can anyone suggest:
Why does it highlight like this? I don't seem to do anything to activate it:
How to disable it?
Does it say PHP under the Language menu? Maybe the file extension is making it think it's something else and not PHP. This sometimes happened when I had projects with things like .inc or .template files.
This is old, but I ran into the same problem and just figured out an easy fix.
From the file menu: Language > N > Normal Text
I'm not sure if this is the same issue as mine, but I wanted to keep the language setting enabled but remove the highlighting.
I did this by going to Settings --> Preferences --> Highlighting
Removed the Enable check mark for Highlight Matching Tags.
1) Go to settings -> Style Configurator.
2) Select the language that Notepad++ has chosen, usually based on filename extension. You can also change this manually under the Language menu setting.
3) Go through the Styles until you find the style that applied the highlighting. You can tell because the Background colour will match the highlighted color.
4) Change Background colour to white.
I do not have an answer for your first question
For your second question do the following steps
Inside Notepad++ press Ctrl+A (select all)
Right click for the context menu.
Select Remove Style and click on Clear all Styles.
This will clear all the highlights.
Thanks, but this didn't help :(
FYI. You may also see this sort of behavior when using a vertical edge in background mode. Switching to line mode will eliminate the highlighting of lines that run over the set character count for the vertical edge.
How to copy a complex property value from one user control to another as design time?
TL;DR;
How can I add copy-paste capability to a complex, multiple values property that will enable me to copy the property value from one user control and paste it to another at design time?
The long story
I have created a user control (StylableControl) that has a complex property called Style.
This property contains an instance of a class called StylableControlStyles, that contains multiple instances of a class called Style, where each one holds values such as BackColor, ForeColor, Image, Gradient (another class I've created) etc'.
I've also created a custom control designer to allow editing style property for the user control. It shows a form where each style class in the style property can be edited easily.
Now I want to provide the users of this control an easy way to copy the entire content of the Style property from one instance of the user control to another instance, at design time.
I could, of course, override the ToString() method of the StylableControlStyles object to create a string representation that encapsulates all the data saved in this object, but that would create a huge string and would of course need a lot of parsing work in the class converter (currently I'm just using an ExpandableObjectConverter).
I would like to avoid that if possible.
Are you maybe simply looking for a way to deep clone or deep copy an object hierarchy? If so, there are already several answers here on SO
@jcb Actually, the cloning part is already written; that's not my problem. I'm looking for a way to provide design-time support for copying and pasting values between controls on a form. Something like a smart tag command that will enable the user to choose another instance of the user control and copy its style into the current instance's style.
@ZoharPeled, if you already have custom control designer form, add Copy Style and Apply Copied Style buttons there, for example. Copied style can be stored in some buffer (maybe even Clipboard. if it is human-readable format, users will be able to insert copied value into .designer.cs file directly)
@ZoharPeled, alternatively, do you know how DataGridView add some quick access functions in designer (see this screenshot). It is possible to add similar actions (Copy Style..., Apply Style...) to a custom user control
@Ash That's actually a very good idea. I'll try it. If you could post it as an answer I'll be happy to upvote and accept it if I'll be able to make it work.
@ZoharPeled, I'm sure SO has a good QA which can be used as a duplicate (e.g. this one)
@Ash Brilliantly simple. I should have thought of it myself. I've used a static variable in the control designer to serve as a buffer. Works like a charm, thanks. Sure you don't want a free 25 rep. points bonus?
If StylableControlStyles is marked as Serializable (and it's indeed serializable), it should work as-is
Following Ash's advice in the comments I've used a DesignerVerb to copy and paste the Style to and from a private static member of type Style of the control designer.
So in my control designer class I have:
private static ZControlStyle _CopiedStyle;
And have added these designer verbs:
_Verbs.Add(new DesignerVerb("Copy Styles", CopyStyle));
_Verbs.Add(new DesignerVerb("Paste Styles", PasteStyle));
And the methods for copy and paste:
private void PasteStyle(object sender, EventArgs e)
{
if (_CopiedStyle != null)
{
var toggleButton = Control as ZToggleButton;
if (toggleButton != null)
{
toggleButton.Style.FromStyle(_CopiedStyle);
}
else
{
(Control as ZControl).Style.FromStyle(_CopiedStyle);
}
}
}
private void CopyStyle(object sender, EventArgs e)
{
var toggleButton = Control as ZToggleButton;
if (toggleButton != null)
{
_CopiedStyle = toggleButton.Style;
}
else
{
_CopiedStyle = (Control as ZControl)?.Style;
}
}
How do you recursively add folders and files to an empty Visual Studio 2010/2012 solution?
I've seen answers on SO that were for adding recursively to projects, but no correct answers for empty solutions.
I am trying to recursively add an entire directory to a TFS repository, and this seems like the easiest way of doing it, however Visual Studio says that I can't add folders to an empty solution. This has to be incorrect, right?
Are you adding folders to a solution in solution explorer? A solution is really just a collection of projects. You should add your files and folders to a project you add to the solution.
Is there a blank project? I am trying to add a WordPress site to a Solution so that I can add it to TFS, but there are no PHP or WordPress project types in VisualStudio that I know of.
You can do this in Source Control Explorer. The steps are basically this: Create a mapping for a local folder to a folder in your tree under Manage Workspaces. Then in Source Control Explorer, right-click the folder and choose "add items to folder". You should be able to add an entire folder structure.
Thanks AaronS, that seems to answer my other question :)
No problem. Were you able to get it loaded?
Well it should be simple. Once you have the project created in VS, in the Solution Explorer panel, right-click on the name of your Project. There will be the option to "Add", "Existing Item" - then choose your parent folder.
It should work, TFS should recognize this and sync up as well - assuming you have already tied the project to Source Control.
How to break up an If-not-->then statement with multiple "nots" into a block of code?
Here's the code I've written so far, it just goes and goes, I'd like to make it into a block (if possible) to make it more manageable.
The purpose of the code is to erase certain cells if none of the known names are present in a certain cell. I have other individual routines, one per name, that paste specific data into specific cells if NameN is present:
Sub Eraser()
If Range("E4").Value <> "Name1" And Range("E4").Value <> "Name2" And Range("E4").Value <> "Name3" And Range("E4").Value <> "Name4" And Range("E4").Value <> "Name5" And Range("E4").Value <> "Name6" And Range("E4").Value <> "Name7" And Range("E4").Value <> "Name8" And Range("E4").Value <> "Name9" And Range("E4").Value <> "Name10" And Range("E4").Value <> "Name11" And Range("E4").Value <> "Name12" And Range("E4").Value <> "Name13" Then
Range("W4").Value = ""
End If
End Sub
I've tried Things like
Sub Eraser()
If Range("E4").Value <> "Name1" & _
Range("E4").Value <> "Name2" & _
Range("E4").Value <> "Name3" & _
Range("E4").Value <> "Name4" & _
Range("E4").Value <> "Name5" & _
Range("E4").Value <> "Name6" & _
Range("E4").Value <> "Name7" & _
Range("E4").Value <> "Name8" & _
Range("E4").Value <> "Name9" & _
Range("E4").Value <> "Name10" & _
Range("E4").Value <> "Name11" & _
Range("E4").Value <> "Name12" & _
Range("E4").Value <> "Name13" Then
Range("W4").Value = ""
End If
End Sub
And
Sub Eraser()
If Range("E4").Value <> "Name1" & _
And Range("E4").Value <> "Name2" & _
And Range("E4").Value <> "Name3" & _
And Range("E4").Value <> "Name4" & _
'And et cetera'
FYI The code works perfectly when it's single line, I'm just a perfectionist
Since a number of people have offered codes that I think contradict what I was trying to do, I'll add my other codes that the Eraser Sub was meant to erase. If anyone has any ideas on how to make them more elegant or concise, I'm open.
Sub Name1()
If Range("E4").Value = "Name1" Then
Range("T6").Value = "EID1#" 'Employee ID Number
Range("U10").Value = "SN1" 'Serial Number
Range("I19").Value = "TN1" 'Trainer Name
Range("AA19").Value = "TED1" 'Trainer Expiration Date
Range("AB6").Value = "CN1" 'Course Name
End If
End Sub
Sub Name2()
If Range("E4").Value = "Name2" Then
Range("T6").Value = "EID2"
Range("U10").Value = "SN2"
Range("I19").Value = "TN2"
Range("AA19").Value = "TED2"
Range("AB6").Value = "CN2"
End If
End Sub
Sub Name3()
If Range("E4").Value = "Name3" Then
Range("T6").Value = "EID3"
Range("U10").Value = "SN3"
Range("I19").Value = "TN3"
Range("AA19").Value = "TED3"
Range("AB6").Value = "CN3"
End If
End Sub
Sub Eraser()
If Range("E4").Value <> "Name 1" _
And Range("E4").Value <> "Name2" _
And Range("E4").Value <> "Name3" Then
Range("T6").Value = ""
Range("U10").Value = ""
Range("I19").Value = ""
Range("AA19").Value = ""
Range("AB6").Value = ""
End If
End Sub
Again, I have about 16 different iterations for this, and while some of my team have the same information, a lot of them have different info. so I coded each one individually, and wrote an update-check code that calls each Sub automatically. The Eraser Sub was mainly to be Personally Identifiable Information (PII) conscious
For diligence sake, here's the update-check Sub
Private Sub Worksheet_Change(ByVal Target As Range)
If Target.Address = "$E$4" Then
Call Name1
Call Name2
Call Name3
Call Eraser
End If
End Sub
Looking back at this now, and knowing what I now know about Excel syntax and VLOOKUP, I could have avoided this entirely by just putting all the relevant information into a table and coding the individual cells where the info was to be pasted with VLOOKUP. Still, thanks to all the answerers and commenters, y'all taught me a lot.
@Dominique @Kostas K. @T.M. @Toddleson @bankeris
What about Excel's Match() function, as explained here: https://stackoverflow.com/questions/7031416/return-index-of-an-element-in-an-array-excel-vba/7031744
Your second example is missing the And keyword between conditions. But please don't do this, it's a poor design.
Call is deprecated according to the official help (though often used as a matter of personal preference). @todayimgonnalearn
"&" should be AND. "&" is used to join strings together. AND is used to evaluate boolean expressions.
Sub Eraser()
If Range("E4").Value <> "Name1" _
And Range("E4").Value <> "Name2" _
And Range("E4").Value <> "Name3" _
And Range("E4").Value <> "Name4" _
And Range("E4").Value <> "Name5" _
And Range("E4").Value <> "Name6" _
And Range("E4").Value <> "Name7" _
And Range("E4").Value <> "Name8" _
And Range("E4").Value <> "Name9" _
And Range("E4").Value <> "Name10" _
And Range("E4").Value <> "Name11" _
And Range("E4").Value <> "Name12" _
And Range("E4").Value <> "Name13" Then
Range("W4").Value = ""
End If
End Sub
Also, to save on processing time and to make the code look cleaner I suggest using variables as nicknames for commonly repeated references.
Sub Eraser()
Dim rVal As Variant
rVal = Range("E4").Value
If rVal <> "Name1" _
And rVal <> "Name2" _
And rVal <> "Name3" _
And rVal <> "Name4" _
And rVal <> "Name5" _
And rVal <> "Name6" _
And rVal <> "Name7" _
And rVal <> "Name8" _
And rVal <> "Name9" _
And rVal <> "Name10" _
And rVal <> "Name11" _
And rVal <> "Name12" _
And rVal <> "Name13" Then
Range("W4").Value = ""
End If
End Sub
And Finally, if you have an increasingly large number of names to check against, I suggest adding them to a array or dictionary and creating a function to iterate through the array or using dictionary.Exists as a way to evaluate your expression.
Thanks, your answer is by far the most elegant. It was exactly what I needed. Also, thanks for that explanation about & vs AND. I didn't know there was a difference. I had also tried Sub Eraser()
If Range("E4").Value <> "Name1" & _
And Range("E4").Value <> "Name2" & _
Firstly, you missed "And". Secondly, to make it a bit nicer, use "Select Case":
Sub Eraser()
Select Case (Range("E4").Value)
Case "Name1", "Name2", "Name3", "Name4", "Name5"
MsgBox "FOUND" 'code goes here if TRUE
Case Else
MsgBox "no finding" 'code goes here if FALSE
End Select
End Sub
Two function checks for search term existence
Just to demonstrate a couple of other simple ways to check for term existence.
Both approaches assume a comma separated list as 2nd argument (here e.g. "Name1,Name2,Name3")
a) via counting elements after execution of VBA.Filter()
Positively filtering the comma-split list terms results in a zero-based array. The upper boundary (UBound()) of the found elements incremented by 1 yields the boolean return value: any match gives a non-zero result (True), while no match gives -1 + 1 = 0 (False).
Function IsListed(ByVal term As String, List) As Boolean
Dim elems: elems = Split(LCase(List), ",")
'return boolean function result
IsListed = UBound(Filter(elems, LCase(term), True)) + 1
End Function
b) via Excel function FilterXML()
Transforming the passed list string into well-formed (HTML-tag-like) XML content makes it possible to apply Excel's FilterXML() function to it (available since Excel 2013).
Function IsListed(ByVal term As String, List) As Boolean
Dim content: content = "<r><s>" & Replace(List, ",", "</s><s>") & "</s></r>"
'return boolean function result
IsListed = Not IsError(Application.FilterXML(content, "//s[.='" & term & "']"))
End Function
Possible example call
Sub Eraser()
Dim rVal: rVal = Sheet1.Range("E4")
If IsListed(rVal, "Name1,Name2,Name3") Then
Debug.Print rVal & " found" 'code goes here if TRUE
Else
Debug.Print rVal & " not found!" 'code goes here if FALSE
'do further stuff
Sheet1.Range("W4") = vbNullString
End If
I think what complicated the matter is that I already had code for each individual case where the inputs were detected. such as Sub N1() If Range("A6").Value = "'Name 1'" Then Range("T6").Value = "'Employee ID Number'" Range("U10").Value = "'Serial Number'" Range("I19").Value = "'Trainer Name'" Range("AA19").Value = "'Trainer Expiration Date'" Range("AB6").Value = "'Course Name.'" End If End Sub
Insert a single sub call in the TRUE part of the If condition, naming it e.g. doReplaces rVal (note the passed argument rVal !) and execute all cases within the sub procedure Sub doReplaces(byval rVal as String) via a Select Case rVal ... Case "Name 1".... End Select structure. You may even call your atomized sub-subs from the different Case selections by a simple N1, N2 etc. Feel free to upvote (up-array) if helpful :) @todayimgonnalearn
How to count string num with limit memory?
The task is to count the num of words from a input file.
the input file is 8 chars per line, and there are 10M lines, for example:
aaaaaaaa
bbbbbbbb
aaaaaaaa
abcabcab
bbbbbbbb
...
the output is:
aaaaaaaa 2
abcabcab 1
bbbbbbbb 2
...
It'll take 80MB of memory if I load all of the words into memory, but there is only 60MB in the OS that I can use for this task. So how can I solve this problem?
My algorithm is to use a map<String,Integer>, but the JVM throws Exception in thread "main" java.lang.OutOfMemoryError: Java heap space. I know I can work around this by setting -Xmx1024m, for example, but I want to use less memory to solve it.
Can you please explain your algorithm?
How do you read the file ? Example of relevant part of your code would help.
Is swapping an option to you? In most systems, there is enough disk storage, but it would slow down the process significantly of course.
Can you use disk space? If so how many space can you use?
I assume this is [homework]?
You can solve this with constant memory and no additional disk space by reading the file multiple times.
Do you have to do it in Java? Otherwise you can just use the standard Unix tools sort and uniq: At its simplest it's just sort $FILE | uniq -c, but you might have to sort temporarily into another file if the input is too big.
I suck at explaining theoretical answers but here we go....
I have made an assumption about your question as it is not entirely clear.
The memory used to store all the distinct words is 80MB (the entire file is bigger).
The words could contain non-ascii characters (so we just treat the data as raw bytes).
It is sufficient to read over the file twice storing ~ 40MB of distinct words each time.
// Loop over the file and for each word:
//
// Compute a hash of the word.
// Convert the hash to a number by some means (skip if possible).
// If the number is odd then skip to the next word.
// Use conventional means to store the distinct word.
//
// Do something with all the distinct words.
Then repeat the above a second time using even instead of odd.
Then you have divided the task into 2 and can do each separately.
No words from the first set will appear in the second set.
The hash is necessary because the words could (in theory) all end with the same letter.
The solution can be extended to work with different memory constraints. Rather than saying just odd/even we can divide the words into X groups by using number MOD X.
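A minimal sketch of this multi-pass idea (class and method names are mine; `numPartitions` would be chosen so that a single partition's distinct words fit in the available memory):

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionedCounter {
    // One pass over the words per partition; a word is only counted in the
    // pass whose partition index matches its hash, so each pass holds just
    // a fraction of the distinct words in memory.
    static Map<String, Integer> countPartition(Iterable<String> words,
                                               int numPartitions, int partition) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : words) {
            // Math.floorMod keeps the result non-negative for negative hash codes
            if (Math.floorMod(word.hashCode(), numPartitions) == partition) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }
}
```

With numPartitions = 2 this is exactly the odd/even split described above; each pass's counts can be written out before the next pass starts, so memory use stays roughly constant.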
You've traded a problem for another. Your hash algorithm may well return an odd number for every word, or the same value mod x for every word.
@ChristofferHammarström The data would have to be specifically crafted for such an event to occur. Random and/or real world data would not that have that effect with a decent hash algorithm. Simple patterns in the data won't favour even/odd in the hash.
Well, it's the same caveat as yours, that the data may have been specifically crafted to have each word end with the same letter.
There are real use cases for words ending in the same letter (consider a list of actions: dancing, running, drawing). The list of words that would result in bad hashes has no real use case. This is the difference.
If the data has been crafted to break the implementation then the data isn't very useful anyway so there would be no motivation to check it.
I believe that the most robust solution is to use the disk space.
For example you can sort your file in another file, using an algorithm for sorting large files (that use disk space), and then count the consecutive occurrences of the same word.
I believe that this post can help you. Or search by yourself something about external sorting.
Update 1
Or as @jordeu suggest you can use a Java embedded database library: like H2, JavaDB, or similars.
Update 2
I thought about another possible solution, using Prefix Tree. However I still prefer the first one, because I'm not an expert on them.
This indeed is the right solution for a file of any size. It's more work, but uses constant amount of memory that is not dependant on the data.
@Slanec, wrong. If the input file is too big to fit into memory at once, and speed is not of the essence, and there is enough free disk space available, then yes.
@PéterTörök the question ask to not use more then 60MB, so there are the limits of the problem.
Indeed, in this case. @Slanec, however, seems to claim this is the best general solution, with which I disagree.
LOL, I came to the same idea without reading you answer first. +1.
@MisterSmith maybe you do many time the same procedure with sort and uniq :D
But I don't see the need to sort. This could take considerable time, and swapping lines in a file is expensive. I'd just iterate over the input file and keep a record of the counts in a file. The format of the output file can be designed so that finding a word is fast, without reading the entire line and comparing char by char the strings. In this sense he can keep advantage of the fixed length of each line, and come up with a random access file (records) of some kind.
Read one line at a time
and then have e.g. a HashMap<String,Integer>
where you put your words as key and the count as integer.
If a key exists, increase the count. Otherwise add the key to the map with a count of 1.
There is no need to keep the whole file in memory.
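A minimal sketch of this approach (class name is mine; the reader would wrap the actual input file, e.g. via `Files.newBufferedReader`):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class LineCounter {
    // Read one line (= one word) at a time; only the map of distinct
    // words lives in memory, never the whole file.
    static Map<String, Integer> count(BufferedReader reader) throws IOException {
        Map<String, Integer> counts = new HashMap<>();
        String line;
        while ((line = reader.readLine()) != null) {
            // merge() increments an existing count or inserts 1 for a new key
            counts.merge(line, 1, Integer::sum);
        }
        return counts;
    }
}
```

Note that, as the comments below point out, this still keeps every distinct word in memory, so it only works when the distinct words themselves fit in the heap.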
But if all the words are different?
That is the corner case - but if you know that in advance, you could just count the lines ;-)
This, I'd just make it HashMap<Integer,Integer>, for the key would be hashcode of the String. Also, a MUCH more memory-wise HashMap implementation for this would be Trove's TIntIntHashMap, because it doesn't store a big autoboxed Integer, but a pure int.
Heiko's solution is the best and easiest.
His words is 8 chars. In ASCII they are 8 bytes if he is sure about that, or 16 bytes if he cannot use only ASCII chars. However with a HashMap entry you don't saving too many bytes: int + int = 8 + the space of the entry object.
I just used this algorithm with Java, but the JVM throws Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
@vienna use the readLine() method. You don't have to keep the file in memory. Like this new BufferedReader(new InputStreamReader(ras)); String s; while((s=fileReader.readLine())!=null) { //add to map}
@Tito you know how much size an HashMap.Entry<String,Integer> consumes?
@dash1e As Slanec said above TIntIntHashMap would be a good option in that case. Can reduce the memory consumption by a good amount.
keys are references and also consume memory. This is no solution.
And finally how can you output the results? You will have an int an then? What's the word about it?
@TitoGeorge I also use the readLine method to save lines into the map, but the JVM throws an OutOfMemoryError exception.
@vienna Can you also try TIntIntHashMap or TObjectIntMap as Slance Said? It is like normal map, but allows primitives as key values. Helps to reduce memory utilization significantly.
I guess you mean the number of distinct words, do you?
So the obvious approach is to store (distinctive information about) each different word as a key in a map, where the value is the associated counter. Depending on how many distinct words are expected, storing all of them may even fit into your memory, however not in the worst case scenario when all words are different.
To lessen memory needs, you could calculate a checksum for the words and store that, instead of the words themselves. Storing e.g. a 4-byte checksum instead of an 8-character word (requiring at least 9 bytes to store) requires 40M instead of 90M. Plus you need a counter for each word too. Depending on the expected number of occurrences for a specific word, you may be able to get by with 2 bytes (for max 65535 occurrences), which requires max 60M of memory for 10M distinct words.
Update
Of course, the checksum can be calculated in many different ways, and it can be lossless or not. This also depends a lot on the character set used in the words. E.g. if only lowercase standard ASCII characters are used (as shown in the examples above), we have 26 different characters at each position. Consequently, each character can be losslessly encoded in 5 bits. Thus 8 characters fit into 5 bytes, which is a bit more than the limit, but may be dense enough, depending on the circumstances.
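As a sketch, the lossless 5-bit encoding for the lowercase-only case could look like this (assuming 8-character a-z words as in the question's examples; 8 x 5 = 40 bits fits comfortably in a 64-bit long; class and method names are mine):

```java
public class WordPacker {
    // Losslessly pack a lowercase a-z word (up to 12 chars) into a long,
    // 5 bits per character.
    static long pack(String word) {
        long packed = 0;
        for (int i = 0; i < word.length(); i++) {
            packed = (packed << 5) | (word.charAt(i) - 'a'); // 0..25 fits in 5 bits
        }
        return packed;
    }

    // Reverse of pack(): recover the original word from the packed long.
    static String unpack(long packed, int length) {
        char[] chars = new char[length];
        for (int i = length - 1; i >= 0; i--) {
            chars[i] = (char) ('a' + (packed & 0x1F)); // low 5 bits = last char
            packed >>>= 5;
        }
        return new String(chars);
    }
}
```

Unlike a hash, pack() is injective for such words, so there are no collisions and unpack() recovers the original word for output.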
Wouldn't calculating checksum for each unique word be time consuming?
Different words can have same checksum.
If they have the same checksum, you were using the wrong algorithm to compute it :->
I agree with @dash1e, and even if they don't, then you're not adding value compared to storing words. Also, to answer adarshr, pure hash functions (e.g. MD5) are designed for speed.
Heiko, how should you know that in advance?
@adarshr, sometimes you have to decide whether you optimize for speed or memory efficiency. You can't always have both.
I am convinced that the use of external sorting or other algorithm that work with disk, as I explain in my answer, would be the best solution.
@Romain, do you mean that completing successfully instead of terminating with out of memory error is no added value?
@dash1e, could be, depending on the circumstances. E.g. speed requirements - sorting using external files is a lot slower than just reading through the input file once. Since the problem is ill defined, we can't reliably decide about the best solution - but my vote would go for the simplest one which works.
This approach has a drawback: if the input problem grows in size (number of lines), or if the hash function output value is bigger as the word length grows, then you run into the same problem. My proposal is in my answer.
@MisterSmith, indeed, as all approaches, this one has its drawbacks too. Since the problem is badly defined, we can make a host of generalizing or specializing assumptions and tailor our solutions according to these. But I personally don't like speculative generality :-)
Use H2 Database Engine, it can work on disc or on memory if it's necessary. And it have a really good performance.
Depending on what kind of character the words are build of you can chose for this system:
If it might contain any character of the alphabet in upper and lower case, you will have (26*2)^8 combinations, which is 53,459,728,531,456. This number can fit in a long datatype.
So compute the checksum for the strings like this:
public static long checksum(String str)
{
String tokens = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
long checksum = 0;
for (int i = 0; i < str.length(); ++i)
{
int c = tokens.indexOf(str.charAt(i));
checksum *= tokens.length();
checksum += c;
}
return checksum;
}
This will reduce the taken memory per word by more than 8 bytes. A string is an array of char, each char is in Java 2 bytes. So, 8 chars = 16 bytes. But the string class contains more data than only the char array, it contains some integers for size and offset as well, which is 4 bytes per int. Don't forget the memory pointer to the Strings and char arrays as well. So, a raw estimation makes me think that this will reduce 28 bytes per word.
So, 8 bytes per word and 10 000 000 words gives 76 MB. This is where your first estimation went wrong: it forgot all the overhead I noted. So this means that even this method won't work.
8 chars, if you are using only ASCII chars, are 8 bytes. One long is 8 bytes, so what are you saving?
@MartijnCourteaux thank you, I just wonder why a string will cost more memory than I estimate. so I need some other algorithm to solve it.
Because an "empty" (= no characters) takes also a lot of bytes (I estimated it at 28 bytes per String object.
I'd create a SHA-1 of each word, then store these numbers in a Set. Then, of course, when reading a number, check the Set to see if it's there (not totally necessary, since a Set is by definition unique, so you can just add its SHA-1 number as well).
What is the SHA-1 result size?
I do believe you are correct :), bah, throw all the words into a Set of Strings, and let Java deal with it. It must be optimized by now! lol.
You can convert each 8 byte word into a long and use TLongIntHashMap which is quite a bit more efficient than Map<String, Integer> or Map<Long, Integer>
If you just need the distinct words you can use TLongHashSet
If you can sort your file first (e.g. using the memory-efficient "sort" utility on Unix), then it's easy. You simply read the the sorted items, counting the neighboring duplicates as you go, and write the totals to a new file immediately.
If you need to sort using Java, this post might help:
http://www.codeodor.com/index.cfm/2007/5/10/Sorting-really-BIG-files/1194
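Once the input is sorted, the counting step is a single streaming pass that only remembers the previous line (a sketch; class name is mine, and the BufferedReader would read from the sorted file):

```java
import java.io.BufferedReader;
import java.io.IOException;

public class SortedCounter {
    // Emit "word count" for each run of identical adjacent lines in sorted input.
    static void countSorted(BufferedReader reader, Appendable out) throws IOException {
        String previous = null;
        int run = 0;
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.equals(previous)) {
                run++; // still inside the same run of duplicates
            } else {
                if (previous != null) {
                    out.append(previous).append(' ').append(Integer.toString(run)).append('\n');
                }
                previous = line;
                run = 1;
            }
        }
        if (previous != null) { // flush the final run
            out.append(previous).append(' ').append(Integer.toString(run)).append('\n');
        }
    }
}
```

This uses O(1) memory regardless of how many distinct words there are, which is why the sort-then-count approach scales to arbitrarily large inputs.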
You can use constant memory by reading your file multiple times.
Basic idea:
Treat the file as n partitions p_1...p_n, sized so that you can load each of them into ram.
Load p_i into a Map structure, scan through the whole file and keep track of counts of the p_i elements only (see answer of Heiko Rupp)
Remove element if we encounter the same value in a partition p_j with j smaller i
Output result counts for elements in the Map
Clear Map, repeat for all p_1...p_n
As in any optimization, there are tradeoffs. In your case, you can do the same task with less memory but it comes at the cost of increasing runtime.
Your scarce resource is memory, so you can't store the words in RAM.
You could use a hash instead of the word as other posts mention, but if your file grows in size this is not a solution, since at some point you'll run into the same problem again.
Yes, you could use an external web server to crunch the file and do the job for your client app, but reading your question it seems that you want to do all the thing in one (your app).
So my proposal is to iterate over the file, and for each word:
If the word was found for first time, write the string to a result file together with the integer value 1.
If the word was processed before (it will appear in the result file), increment the record value.
This solution scales well no matter the number of lines of your input file nor the length of the words*.
You can optimize the way you do the writes in the output file, so that the search is made faster, but the basic version described above is enough to work.
EDIT:
*It scales well until you run out of disk space XD. So the precondition would be to have a disk with at least 2N bytes of free usable space, where N is the input file size in bytes.
The hash table doesn't need to include the full text data, it could simply include the offset of a place the line appears in the original file.
@Random832 Even in that case, if the number of strings in the input file is large enough, the hashtable won't fit in memory. (I know he said only 10M strings, but I was trying to describe a more general solution)
possible solutions:
Use file sorting and then just count the consecutive occurrences of each value.
Load the file in a database and use a count statement like this: select value, count(*) from table group by value
| common-pile/stackexchange_filtered |
How to apply filter inside a single wp_query?
I've got a special query on my homepage that returns posts from a custom taxonomy.
I'm trying to apply this filter for it.
add_filter( 'post_limits', 'my_post_limits' );
function my_post_limits( $limit ) {
if ( is_home() ) {
return 'LIMIT 0, 3';
}
return $limit;
}
However, this is also applied to my other loop that's further down the page, so I'm guessing it's something that's not supposed to be set globally. I don't have much backend knowledge and can't figure out how to apply a filter such as this only inside my custom query. Is this possible ?
This is how my full query looks :
add_filter( 'post_limits', 'my_post_limits' );
function my_post_limits( $limit ) {
if ( is_home() ) {
return 'LIMIT 0, 3';
}
return $limit;
}
$args = array(
'post_type' => array('post','featured-post'),
'tax_query' => array(
array(
'taxonomy' => 'featured',
'field' => 'slug',
'terms' => 'featured-homepage'
)
)
);
$slider_query = new WP_Query( $args );
if ( $slider_query->have_posts() ):
while ( $slider_query->have_posts() ) :
$slider_query->the_post();
$tip_post=get_post_type();
if (get_post_type()=='post') {
$thumb = wp_get_attachment_image_src( get_post_thumbnail_id($post->ID), 'bones-thumb-1280' );
$url = $thumb['0'];
// posts are here
} elseif(get_post_type()=='featured-post') {
// custom posts are here
} endwhile; else: endif;
You should consider using the posts_per_page parameter as suggested by @Tamil.
But in general you can also remove the filters you add.
In your case you could remove it after your WP_Query() with
add_filter( 'post_limits', 'my_post_limits' );
$slider_query = new WP_Query( $args );
remove_filter( 'post_limits', 'my_post_limits' );
so it won't affect later queries.
You can read more about it here in the Codex:
http://codex.wordpress.org/Function_Reference/remove_filter
So basically I can remove the filters after I'm done with them, but it's better to use the query's parameters.
Marking this as an answer, thanks !
Try the posts_per_page argument
$args = array(
'post_type' => array('post','featured-post'),
'tax_query' => array(
array(
'taxonomy' => 'featured',
'field' => 'slug',
'terms' => 'featured-homepage'
)
),
'posts_per_page' => 7
);
| common-pile/stackexchange_filtered |
Core Data: Store cannot hold instances of entity (Cocoa Error: 134020)
This is the strangest error. The internet suggests that this is an issue with targeting Tiger; except that I'm actually targeting iOS 3 and 4.
Error Domain=NSCocoaErrorDomain Code=134020 "The operation couldn\u2019t be completed. (Cocoa error 134020.)" UserInfo=0xc502350 {NSAffectedObjectsErrorKey=<PartRecommendation: 0x6a113e0> (entity: PartRecommendation; id: 0x6a0d0e0 <x-coredata:///PartRecommendation/tAE2B5BA2-44FD-4B62-95D7-5B86EBD6830014> ; data: {
"_rkManagedObjectSyncStatus" = 0;
name = "Thin canopy cover";
part = nil;
partRecommendationId = 6;
partType = "0x6a07f40 <x-coredata:///PartType/tAE2B5BA2-44FD-4B62-95D7-5B86EBD683003>";
}), NSUnderlyingException=Store <NSSQLCore: 0x5f3b4c0> cannot hold instances of entity (<NSEntityDescription: 0x6d2e5d0>) name PartRecommendation, managedObjectClassName PartRecommendation, renamingIdentifier PartRecommendation, isAbstract 0, superentity name PartOption, properties {
"_rkManagedObjectSyncStatus" = "(<NSAttributeDescription: 0x6d37550>), name _rkManagedObjectSyncStatus, isOptional 0, isTransient 0, entity PartRecommendation, renamingIdentifier _rkManagedObjectSyncStatus, validation predicates (\n), warnings (\n), versionHashModifier (null), attributeType 100 , attributeValueClassName NSNumber, defaultValue 0";
name = "(<NSAttributeDescription: 0x6d37c10>), name name, isOptional 1, isTransient 0, entity PartRecommendation, renamingIdentifier name, validation predicates (\n), warnings (\n), versionHashModifier (null), attributeType 700 , attributeValueClassName NSString, defaultValue (null)";
part = "(<NSRelationshipDescription: 0x6d2e660>), name part, isOptional 1, isTransient 0, entity PartRecommendation, renamingIdentifier part, validation predicates (\n), warnings (\n), versionHashModifier (null), destination entity Part, inverseRelationship recommendation, minCount 1, maxCount 1";
partRecommendationId = "(<NSAttributeDescription: 0x6d2e6d0>), name partRecommendationId, isOptional 1, isTransient 0, entity PartRecommendation, renamingIdentifier partRecommendationId, validation predicates (\n), warnings (\n), versionHashModifier (null), attributeType 100 , attributeValueClassName NSNumber, defaultValue 0";
partType = "(<NSRelationshipDescription: 0x6d2e720>), name partType, isOptional 1, isTransient 0, entity PartRecommendation, renamingIdentifier partType, validation predicates (\n), warnings (\n), versionHashModifier (null), destination entity PartType, inverseRelationship partRecommendations, minCount 1, maxCount 1";
}, subentities {
}, userInfo {
}, versionHashModifier (null)}
I'm adding a lot of data to Core Data before I get this error, but I'm saving twice (once in the middle, then again at the end). Everything is fine after the first save, it's the second save that is causing the problem, so I'll post the code that I'm using after the first but before the second.
Landscape *landscape = [Landscape object];
Type *type = [Type object];
type.name = @"tree";
MeasurementType *caliperType = [MeasurementType object];
MeasurementType *heightType = [MeasurementType object];
InventoryTree *inventoryTree = [InventoryTree object];
inventoryTree.landscape = landscape;
inventoryTree.type = type;
Assessment *assessmentTree = [Assessment object];
assessmentTree.inventoryItem = inventoryTree;
assessmentTree.type = type;
[[inventoryTree mutableSetValueForKeyPath:@"assessments"] addObject:assessmentTree];
Measurement *caliper = [Measurement object];
Measurement *height = [Measurement object];
caliper.type = caliperType;
height.type = heightType;
[[assessmentTree mutableSetValueForKey:@"measurements"] addObjectsFromArray:[NSArray arrayWithObjects:caliper, height, nil]];
for (PartType *pType in [PartType allObjects]) {
pType.type = type;
Part *treePart = [Part object];
treePart.partType = pType;
[[assessmentTree mutableSetValueForKey:@"parts"] addObject:treePart];
for (PartCondition *cond in pType.partConditions) {
cond.partType = pType;
}
for (PartRecommendation *rec in pType.partRecommendations) {
rec.partType = pType;
}
}
The convenience methods I'm calling on NSManagedObject subclasses can be found here.
I have methods elsewhere in the app (long after this runs) that can add/edit/delete all of the entities referenced above. The error doesn't occur on the same entity every time.
Any help would be greatly appreciated! This is a tricky one.
Okay, I discovered what the problem was.
In the code before what I posted, I used RestKit's seeder object to seed my database from a local json file, and then I continued to add objects after it ran. But it doesn't actually commit the objects to the object store until you call finalizeSeedingAndExit.
Calling that function to seed the first part of my data, then commenting out the seeder and running again, fixed this strange, strange, error.
I had this error in the following situation:
I use configurations in my CoreData Model. Each entity is assigned to a certain configuration.
I added a new entity and forgot to assign it to one of the configurations.
That seems to cause the Cocoa Error: 134020.
After adding the new entity to a configuration, everything works fine.
For other people that find this page via Google: I had a similar problem with a cause unrelated to the solution above.
The code that triggered the error was seeding a store that used the same managed object model from an existing store.
The wrong code was
NSManagedObject *newObj = [[NSManagedObject alloc] initWithEntity:[obj entity] insertIntoManagedObjectContext:moc]
Instead do
NSManagedObject *newObj = [[NSManagedObject alloc] initWithEntity:[NSEntityDescription entityForName:obj.entity.name inManagedObjectContext:moc] insertIntoManagedObjectContext:moc]
which may seem redundant, but got rid of the error for me.
Thanks this helped me out. It doesn't seem like it would fix it but it did. My only guess is that there is a bug in the bowels of Core Data regarding this.
This fixed an error for me in ios10 - the new init(context) methods on NSManagedObject don't seem to work during migrations
| common-pile/stackexchange_filtered |
FX/Forex Currency - FIX Tag 54 How to use it
I am new to FIX protocol
I am not sure how exactly Tag54 (Buy/Sell) works
According to the API I am reading for making an FX single order via FIX
They say:
            Tag 55    Tag 54   Tag 15
Buy EUR     EUR/USD   1        EUR
Sell USD    EUR/USD   1        USD    <-- Why is this a Sell?
Sell EUR    EUR/USD   2        EUR
Buy USD     EUR/USD   2        USD    <-- Why is this a Buy?
reference : (Page 5) http://www.commanderfx.com/downloads/Commander_Rules_Of_Engagement_v1_5.pdf
I would have expected this:
            Tag 55    Tag 54   Tag 15
Buy EUR     EUR/USD   1        EUR
Sell USD    EUR/USD   2        USD    <-- Tag 54 changed
Sell EUR    EUR/USD   2        EUR
Buy USD     EUR/USD   1        USD    <-- Tag 54 changed?
This isn't a Quickfix question, but a general FIX question that I think is specific to a single counterparty. I have removed the Quickfix tag.
You overlooked this important point.
Please note that the Side (tag 54) always refers to the base currency
So it always points to what side you are on your base currency(sell/buy) and not on what currency you are buying or selling.
Exactly - and in the above examples the base currency is always EUR (Base being on the left of the "/".) what am I not getting :(
Yes that is correct. Sell USD means you buy EUR so it is a BUY order for base currency and vice versa. Take the currency pair in tandem and not in isolation.
The currency pair for each of these trades is EUR/USD, so each buy or sell order is relative to this pair (the rates are in market convention). The rate EUR/USD means how many USD I will get for each unit of EUR that I exchange; buying EUR with USD is termed buying EUR/USD, and selling EUR to get USD is selling the pair. Remember that buy or sell is relative to the pair in this way. In FX trader terms you don't buy or sell a currency, you buy or sell the PAIR in MARKET CONVENTION. I hope that helps.
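The "side refers to the base currency" rule can be sketched as a small helper (the function name and signature here are purely illustrative, not part of any FIX library):

```python
def fix_side(pair: str, dealt_currency: str, action: str) -> str:
    """Return FIX Tag 54 ("1" = Buy, "2" = Sell) for an order on `pair`,
    where `action` ("buy" or "sell") applies to `dealt_currency` (Tag 15).

    Tag 54 always describes the side taken on the BASE currency,
    i.e. the left-hand currency of the pair.
    """
    base, quote = pair.split("/")
    if dealt_currency == base:
        buying_base = action == "buy"
    elif dealt_currency == quote:
        # Selling the quote currency means buying the base, and vice versa.
        buying_base = action == "sell"
    else:
        raise ValueError(f"{dealt_currency} is not part of {pair}")
    return "1" if buying_base else "2"

# The four rows from the table above:
print(fix_side("EUR/USD", "EUR", "buy"))   # 1
print(fix_side("EUR/USD", "USD", "sell"))  # 1
print(fix_side("EUR/USD", "EUR", "sell"))  # 2
print(fix_side("EUR/USD", "USD", "buy"))   # 2
```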
| common-pile/stackexchange_filtered |
Show using the properties of event space (sigma-algebra) that (1/2,2/3) is also an event.
Let $\Omega = [0,1]$ and we know that sets $[0, x]$, $0 \leq x \leq 1$ are events. Show using the properties of event space (sigma-algebra) that $\left(\frac{1}{2},\frac{2}{3}\right)$ is also an event.
Thanks for any help.
$(a,b]=[0,b]\setminus [0,a]$
$(a,b)=\bigcup_n (a,b_n]$ where $b_n$ increases and converges to $b$.
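Putting the two hints together for this particular interval (taking $b_n = \frac{2}{3}-\frac{1}{n}$ for $n \ge 7$, so that $b_n > \frac{1}{2}$ and $b_n$ increases to $\frac{2}{3}$):

```latex
\left(\tfrac{1}{2},\,\tfrac{2}{3}-\tfrac{1}{n}\right]
  = \left[0,\tfrac{2}{3}-\tfrac{1}{n}\right] \setminus \left[0,\tfrac{1}{2}\right],
\qquad
\left(\tfrac{1}{2},\tfrac{2}{3}\right)
  = \bigcup_{n \ge 7} \left(\tfrac{1}{2},\,\tfrac{2}{3}-\tfrac{1}{n}\right]
```

Each half-open interval on the right is an event because a sigma-algebra is closed under complements and countable unions (hence under set differences), and the countable union of these events is again an event.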
| common-pile/stackexchange_filtered |
Why does my code create a wrong binary search tree?
I have been trying to create a binary search tree since this morning and I am still not able to.
I get wrong output; when I inspect the tree formed while debugging, it is not correct.
Here is what I do:
(1) I have an array of values which will be data of each node in tree.
(2) I create the root node and pass that node in CreateBinarySearchTree(&RootOfTree, values, size); function.
(3) In CreateBinarySearchTree(Tree**RootOfTree, int* values, int size) definition i have 4 conditions:
if ((*RootOfTree)->left == NULL && (*RootOfTree)->right == NULL){...}
else if ((*RootOfTree)->left == NULL && (*RootOfTree)->right != NULL){..}
else if ((*RootOfTree)->left != NULL && (*RootOfTree)->right == NULL){..}
and else{ CreateBinarySearchTree(&(*RootOfTree)->left, values, size);}
My full code is here :
// BinaryTree.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#include <array>
using namespace std;
struct Tree
{
Tree*left = NULL;
Tree*right = NULL;
int data;
};
int counte = 0;
int values[] = { 8, 5, 4, 9, 7, 11, 1, 12, 3, 2 };
int val = values[counte];
Tree*storeRoot = NULL;
int _tmain(int argc, _TCHAR* argv[])
{
Tree *tree = NULL;
void CreateBinarySearchTree(Tree**RootOfTree, int* values, int size);
int size = sizeof(values) / sizeof(values[1]);
Tree* RootOfTree = NULL;
if (tree == NULL)
{
tree = new Tree();
RootOfTree = tree;
tree->data = values[0];
tree->left = NULL;
tree->right = NULL;
}
storeRoot = RootOfTree;
CreateBinarySearchTree(&RootOfTree, values, size);
return 0;
}
void CreateBinarySearchTree(Tree**RootOfTree, int* values, int size)
{
Tree *tree = NULL;
if (counte > size)
{
return;
}
if ((*RootOfTree)->left == NULL && (*RootOfTree)->right == NULL)
{
counte++;
val = values[counte];
tree = new Tree();
tree->data = val;
tree->left = NULL;
tree->right = NULL;
if ((*RootOfTree)->data < val)
{
(*RootOfTree)->right = tree;
}
else if ((*RootOfTree)->data > val)
{
(*RootOfTree)->left = tree;
}
CreateBinarySearchTree(&(*RootOfTree), values, size);
}
else if ((*RootOfTree)->left == NULL && (*RootOfTree)->right != NULL)
{
counte++;
val = values[counte];
if ((*RootOfTree)->data > val)
{
tree = new Tree();
tree->data = val;
tree->left = NULL;
tree->right = NULL;
(*RootOfTree)->left = tree;
}
else
{
counte--;
CreateBinarySearchTree(&(*RootOfTree)->right, values, size);
}
}
else if ((*RootOfTree)->left != NULL && (*RootOfTree)->right == NULL)
{
counte++;
val = values[counte];
if ((*RootOfTree)->data < val)
{
if (storeRoot->data > val)
{
tree = new Tree();
tree->data = val;
tree->left = NULL;
tree->right = NULL;
(*RootOfTree)->right = tree;
}
else
{
if (storeRoot->right == NULL)
{
tree = new Tree();
tree->data = val;
tree->left = NULL;
tree->right = NULL;
(storeRoot)->right = tree;
}
else
{
counte--;
CreateBinarySearchTree(&storeRoot, values, size);
}
}
}
else
{
counte--;
CreateBinarySearchTree(&(*RootOfTree)->left, values, size);
}
}
}
Please correct me wherever I am wrong so that it will create a binary search tree. Please also give a detailed explanation in your answer. Thanks
But it is compiling... where do you think the problem is? Please see this http://prntscr.com/9jwogy
AFAIK, you cannot declare a function inside of another function like you've done in your main function.
I've removed the comment.
Your first course of action should be to get rid of the global variables. Globals are extra super-bad in combination with recursion. (Why is RootOfTree a Tree**when you never assign to *RootOfTree?)
Can you give us some idea of what the problem is? How is the created tree not correct? Have you single-stepped your code in the debugger to see what it's doing? Have you tested creating a simple tree with just two or three nodes, and see if that works? You've posted 100+ lines of code and said, basically, "fix this for me." That's not what we do here. You have to put forth a little more effort than that.
I was following your code until it started using storeRoot. Then I decided it just isn't worth the grief, and so should you! Thinking like a programmer means generalizing, NOT splitting into an excessive set of sub cases. You don't need separate code for the first item nor for almost every other case you broke out. You have a Tree**. If that points to NULL then that is where the next node goes. If that points to a non NULL, you recurse into one side or the other with one value comparison. 2 if statements, 3 cases, done!
Ok, could someone please write their own function definition of void CreateBinarySearchTree(Tree **RootOfTree, int *values, int size) {code inside} so that I can read it, understand it, and use it as a reference in the future?
@JimMischel Yes, I have tried what you all suggested. It creates a tree but misses many nodes, so I feel my algorithm is wrong. Could you please show how you would have written the code in the function definition CreateBinarySearchTree(Tree *RootOfTree, int values, int size) {}?
Let's start from the top. What is a binary search tree (BST)? A binary search tree is a data structure where each node to the left of a given node contains a value smaller than it and each node to the right of a given node contains a value greater than it. The binary search tree also has a single root node.
Take a look at your code. You're thinking that the root of a tree is a tree itself (root of tree is a Tree type). This is incorrect; the root of a tree is a node. Each node stores a value, and two pointers to other nodes: its left and right children. Let's translate that into code:
class BST {
public:
BST() : head(nullptr) {}
~BST() { /* Implement */ }
void insert(int value);
private:
struct Node{
Node(int d, Node *l = nullptr, Node *r = nullptr) : data(d), left(l), right(r) {}
int data;
Node *left, *right;
} *head;
void insert(Node *n, int val);
};
Now, onto your insertion algorithm. Your creation function should handle all the details of creating the tree. That is, your user really shouldn't be responsible for creating the tree and passing it in. That would be contradictory to the name of your function. Moreover, your function creates too many subcases that could easily be generalized. We want to check for three things:
Is the tree empty (e.g. is the root pointing to nullptr)?
If it is, populate the root and return.
Otherwise, check the insertion value and recurse down the tree until you get to a suitable location for the value.
You can easily implement this using our new OOP design and delegate the actual insertion to a private member function. The reason we do that is because we only want to modify the head pointer when we're changing what it points to (either when the tree is empty and we're populating it or when we're destroying the tree). In all other cases, we want the head pointer to simply be a starting point for our insertion function. Delegating our insertion to a private insertion function taking the head as a pointer will copy the head pointer and therefore not modify the original one:
void BST::insert(int value)
{
insert(head, value);
}
void BST::insert(Node *n, int val)
{
if (!head) {
head = new Node(val);
return;
}
if (val < n->data) {
if (n->left)
insert(n->left, val);
else
n->left = new Node(val);
} else if (val > n->data) {
if (n->right)
insert(n->right, val);
else
n->right = new Node(val);
}
}
That is a significant cleanup compared to the version in the question, but still seriously special cased and heavy compared to a good design: If the second function had signature static void insert(Node**n, int val) and was called with insert(&head,value) it can all be simpler, more efficient and more easily extended to more interesting types of tree.
Also a programming student should be aware, there is no reason for the function to be recursive, and if it were not recursive there would be no reason for a separate wrapper function to launch the recursion. In more interesting trees that is often needed. But not in this trivial tree. Any decent optimizer will inline your recursive insert as a non recursive loop in the top level insert function. You don't need to do that. But a programming student ought to know how and usually should not write gratuitous recursion for the optimizer to remove.
| common-pile/stackexchange_filtered |
Using zeromq to design a publish subscribe system with multiple brokers
I started with zeromq just a few days ago. My aim is to design a publish subscribe system with multiple brokers (a network of brokers). I have read the relevant sections of the zeromq guide, and have written code for simple pub sub systems. If someone could please help me with the following questions:
From what I can conceive, the brokers(xpub-xsub sockets) will also have push/pull sockets to communicate the pub-sub messages. Is that correct? Any help in terms of how the brokers should communicate would be appreciated. Should there be any intermediaries between brokers?
Any design guidelines would be very helpful. Thank you.
The guide says that we should use xpub and xsub sockets when dynamic
discovery is required. Can someone please explain the difference
between the sockets: xpub and pub, and xsub and sub.
XPUB behaves like PUB, except that it also receives the subscription messages sent by its peers; XSUB behaves like SUB, except that you subscribe by sending a subscription message into the socket, which lets a proxy forward subscriptions upstream. That is what makes them suitable for building an intermediary.
If you want to connect many subscribers to many publishers while still having the benefit of dynamic discovery, you need a proxy in the middle; something like the illustration below. PUB sockets send messages to the proxy; the XSUB side forwards them to XPUB, which then distributes those messages to all SUBs listening.
The code for creating such a proxy is simple (below), and the PUB and SUB ends are trivial, check the example code.
Socket xsub = ctx.createSocket(ZMQ.XSUB);
xsub.bind( "tcp://*:5500");
Socket xpub = ctx.createSocket(ZMQ.XPUB);
xpub.bind( "tcp://*:5600");
ZMQ.proxy( xsub, xpub, null);
From what I can conceive, the brokers will also have push/pull sockets
to communicate the pub-sub messages. Is that correct? Any help in
terms of how the brokers should communicate would be appreciated.
Should there be any intermediaries between brokers?
Check the guide for Inter-Broker Routing example
The Inter-Broker routing examples are helpful, but I need help to understand the following in detail: 1. what should be the logical topology of the broker network. 2. how would they know about each other to begin with? 3. how to do the routing of the published messages all the way to the right subscriber? Such questions are clouding my mind, and I really don't know where to start. Even for a small prototype with 4-5 brokers, and a few publishers/subscribers, I feel that I have to answer these questions. Any help will be very appreciated.
I had the same questions, and the best advice I can give you is to start small with examples (https://github.com/imatix/zguide/tree/master/examples/); each one addresses a different use case, once you understand them, you can build complex messaging systems.
Did this help you out at all?
Yes, it did. I started to design a request-reply system using the inter-broker routing example, and the majordomo pattern. Pub-sub needs to be added later on. For a small number of brokers, it works well, but since all the brokers know each other, the system is quite chatty. I am still not sure how to do the routing for a large number of brokers. There are some hints about using a Distributed Hash Table, but I am still thinking about it. Any ideas?
| common-pile/stackexchange_filtered |
React Native trying to pass coords to Polyline getting error: NSNumber cannot be converted to NSDictionary
so I am trying to pass coords from BackgroundGeolocation.getLocations() method.
So when I pass coords I am getting this error:
JSON value '-122.02235491' of type NSNumber cannot be converted to NSDictionary
I tried looking at the Polyline docs but can't figure out what I should change in my coordArray so that Polyline will accept it.
The variable location structure is this:
https://transistorsoft.github.io/cordova-background-geolocation-lt/interfaces/cordova_background_geolocation_lt.location.html
Thanks in advance!
if(this.state.locations && this.state.locations.length > 0){
this.state.locations.map((location,index) => {
console.log(this.state.locations)
let coordArray = [latitude = parseFloat(location.coords.latitude), longitude= parseFloat(location.coords.longitude)]
polyline.push(<Polyline key={index} initialRegion={initialRegion} coordinates={coordArray} geodesic />)
})
}
You are passing an array of numbers, to coordinates parameter, effectively having something like (sample coords)
<Polyline ... coordinates={[-122.0, 88]}>
Polyline docs say that you've got to have an array of dictionaries in your coordinates, like so:
<Polyline ... coordinates={[{latitude:-122.0, longitude:88}]}>
This is basically what causes the error: it takes the first element of your array, which is a number, and tries to convert it to a dictionary.
Check https://github.com/react-native-maps/react-native-maps/blob/master/docs/polyline.md
for details.
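Applied to the question's data, the mapping could look like the sketch below (the sample `locations` array is made up here; in the real app it would come from `BackgroundGeolocation.getLocations()`):

```javascript
// Convert background-geolocation records into the array of
// {latitude, longitude} objects that <Polyline coordinates={...}> expects.
const locations = [
  { coords: { latitude: 37.33, longitude: -122.02 } },
  { coords: { latitude: 37.34, longitude: -122.03 } },
];

const coordArray = locations.map(l => ({
  latitude: l.coords.latitude,
  longitude: l.coords.longitude,
}));

console.log(coordArray[0]); // { latitude: 37.33, longitude: -122.02 }
```

Note that a single `<Polyline coordinates={coordArray} />` then draws one line through all the points; rendering one Polyline per location (as in the question's loop) gives each Polyline only a single coordinate, so nothing visible is drawn.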
| common-pile/stackexchange_filtered |
Dashed hidden lines & Line-only Graphics3D
Consider the following lists:
coor = {{2.99535, 1.14412, 1.41421}, {2.55834, 2.28825, 0.707107}, {2.28825, 0.707107, 2.55834}, {1.14412, 1.41421, 2.99535}, {1.41421, 2.99535, 1.14412}, {3.43237, 0, 0.707107}, {0.707107, 2.55834, 2.28825}, {3.43237, 0, -0.707107}, {2.55834, 2.28825, -0.707107}, {2.28825, -0.707107, 2.55834}, {2.99535, 1.14412, -1.41421}, {1.14412, -1.41421, 2.99535}, {0, 0.707107, 3.43237}, {0.707107, 3.43237, 0}, {2.99535, -1.14412, 1.41421}, {0, -0.707107, 3.43237}, {-0.707107, 3.43237, 0}, {2.55834, -2.28825, 0.707107}, {-0.707107, 2.55834, 2.28825}, {2.99535, -1.14412, -1.41421}, {1.41421, 2.99535, -1.14412}, {-1.41421, 2.99535, 1.14412}, {2.55834, -2.28825, -0.707107}, {0.707107, 2.55834, -2.28825}, {2.28825, 0.707107, -2.55834}, {0.707107, -2.55834, 2.28825}, {-1.14412, 1.41421, 2.99535}, {1.14412, 1.41421, -2.99535}, {1.41421, -2.99535, 1.14412}, {-2.28825, 0.707107, 2.55834},{-1.14412, -1.41421, 2.99535}, {-1.41421, 2.99535, -1.14412}, {2.28825, -0.707107, -2.55834}, {-2.28825, -0.707107, 2.55834}, {-0.707107, 2.55834, -2.28825}, {1.14412, -1.41421, -2.99535}, {-2.55834, 2.28825, 0.707107}, {1.41421, -2.99535, -1.14412}, {-0.707107, -2.55834, 2.28825}, {-2.99535, 1.14412, 1.41421}, {0.707107, -2.55834, -2.28825}, {-1.41421, -2.99535, 1.14412}, {0, 0.707107, -3.43237}, {0.707107, -3.43237, 0}, {-2.55834, 2.28825, -0.707107}, {0, -0.707107, -3.43237}, {-0.707107, -3.43237, 0}, {-2.99535, 1.14412, -1.41421}, {-2.99535, -1.14412, 1.41421}, {-1.14412, 1.41421, -2.99535}, {-2.55834, -2.28825, 0.707107}, {-2.28825, 0.707107, -2.55834}, {-3.43237, 0, 0.707107}, {-0.707107, -2.55834, -2.28825}, {-3.43237, 0, -0.707107}, {-1.41421, -2.99535, -1.14412}, {-1.14412, -1.41421, -2.99535}, {-2.28825, -0.707107, -2.55834}, {-2.55834, -2.28825, -0.707107}, {-2.99535, -1.14412, -1.41421}};
edges = {{{2.55834, 2.28825, 0.707107}, {2.99535, 1.14412, 1.41421}}, {{2.28825, 0.707107, 2.55834}, {2.99535, 1.14412, 1.41421}}, {{2.99535, 1.14412, 1.41421}, {3.43237, 0, 0.707107}}, {{1.41421, 2.99535, 1.14412}, {2.55834, 2.28825, 0.707107}}, {{2.55834, 2.28825, -0.707107}, {2.55834, 2.28825, 0.707107}}, {{1.14412, 1.41421, 2.99535}, {2.28825, 0.707107,2.55834}}, {{2.28825, -0.707107, 2.55834}, {2.28825, 0.707107, 2.55834}}, {{0.707107, 2.55834, 2.28825}, {1.14412, 1.41421, 2.99535}}, {{0, 0.707107, 3.43237}, {1.14412, 1.41421, 2.99535}}, {{0.707107, 2.55834, 2.28825}, {1.41421, 2.99535, 1.14412}}, {{0.707107, 3.43237, 0}, {1.41421, 2.99535, 1.14412}}, {{3.43237, 0, -0.707107}, {3.43237, 0, 0.707107}}, {{2.99535, -1.14412, 1.41421}, {3.43237, 0, 0.707107}}, {{-0.707107, 2.55834, 2.28825}, {0.707107, 2.55834, 2.28825}}, {{2.99535, 1.14412, -1.41421}, {3.43237, 0, -0.707107}}, {{2.99535, -1.14412, -1.41421}, {3.43237, 0, -0.707107}}, {{2.55834, 2.28825, -0.707107}, {2.99535, 1.14412, -1.41421}}, {{1.41421, 2.99535, -1.14412}, {2.55834, 2.28825, -0.707107}}, {{1.14412, -1.41421, 2.99535}, {2.28825, -0.707107, 2.55834}}, {{2.28825, -0.707107, 2.55834}, {2.99535, -1.14412, 1.41421}}, {{2.28825, 0.707107, -2.55834}, {2.99535, 1.14412, -1.41421}}, {{0, -0.707107, 3.43237}, {1.14412, -1.41421, 2.99535}}, {{0.707107, -2.55834, 2.28825}, {1.14412, -1.41421, 2.99535}}, {{0, -0.707107, 3.43237}, {0, 0.707107, 3.43237}}, {{-1.14412, 1.41421, 2.99535}, {0, 0.707107, 3.43237}}, {{-0.707107, 3.43237, 0}, {0.707107, 3.43237, 0}}, {{0.707107, 3.43237, 0}, {1.41421, 2.99535, -1.14412}}, {{2.55834, -2.28825, 0.707107}, {2.99535, -1.14412, 1.41421}}, {{-1.14412, -1.41421, 2.99535}, {0, -0.707107, 3.43237}}, {{-1.41421, 2.99535, 1.14412}, {-0.707107, 3.43237, 0}}, {{-1.41421, 2.99535, -1.14412}, {-0.707107, 3.43237, 0}}, {{2.55834, -2.28825, 0.707107}, {2.55834, -2.28825, -0.707107}}, {{1.41421, -2.99535, 1.14412}, {2.55834, -2.28825, 0.707107}}, {{-1.41421, 2.99535, 1.14412}, 
{-0.707107, 2.55834, 2.28825}}, {{-1.14412, 1.41421, 2.99535}, {-0.707107, 2.55834, 2.28825}}, {{2.55834, -2.28825, -0.707107}, {2.99535, -1.14412, -1.41421}}, {{2.28825, -0.707107, -2.55834}, {2.99535, -1.14412, -1.41421}}, {{0.707107, 2.55834, -2.28825}, {1.41421, 2.99535, -1.14412}}, {{-2.55834, 2.28825, 0.707107}, {-1.41421, 2.99535, 1.14412}}, {{1.41421, -2.99535, -1.14412}, {2.55834, -2.28825, -0.707107}}, {{0.707107, 2.55834, -2.28825}, {1.14412, 1.41421, -2.99535}}, {{-0.707107, 2.55834, -2.28825}, {0.707107, 2.55834, -2.28825}}, {{1.14412, 1.41421, -2.99535}, {2.28825, 0.707107, -2.55834}}, {{2.28825, -0.707107, -2.55834}, {2.28825, 0.707107, -2.55834}}, {{0.707107, -2.55834, 2.28825}, {1.41421, -2.99535, 1.14412}}, {{-0.707107, -2.55834, 2.28825}, {0.707107, -2.55834, 2.28825}}, {{-2.28825, 0.707107, 2.55834}, {-1.14412, 1.41421, 2.99535}}, {{0, 0.707107, -3.43237}, {1.14412, 1.41421, -2.99535}}, {{0.707107, -3.43237, 0}, {1.41421, -2.99535, 1.14412}}, {{-2.28825, -0.707107, 2.55834}, {-2.28825, 0.707107, 2.55834}}, {{-2.99535, 1.14412, 1.41421}, {-2.28825, 0.707107, 2.55834}}, {{-2.28825, -0.707107, 2.55834}, {-1.14412, -1.41421, 2.99535}}, {{-1.14412, -1.41421, 2.99535}, {-0.707107, -2.55834, 2.28825}}, {{-1.41421, 2.99535, -1.14412}, {-0.707107, 2.55834, -2.28825}}, {{-2.55834, 2.28825, -0.707107}, {-1.41421, 2.99535, -1.14412}}, {{1.14412, -1.41421, -2.99535}, {2.28825, -0.707107, -2.55834}}, {{-2.99535, -1.14412, 1.41421}, {-2.28825, -0.707107, 2.55834}}, {{-1.14412, 1.41421, -2.99535}, {-0.707107, 2.55834, -2.28825}}, {{0.707107, -2.55834, -2.28825}, {1.14412, -1.41421, -2.99535}}, {{0, -0.707107, -3.43237}, {1.14412, -1.41421, -2.99535}}, {{-2.99535, 1.14412, 1.41421}, {-2.55834, 2.28825, 0.707107}}, {{-2.55834, 2.28825, 0.707107}, {-2.55834, 2.28825, -0.707107}}, {{0.707107, -2.55834, -2.28825}, {1.41421, -2.99535, -1.14412}}, {{0.707107, -3.43237, 0}, {1.41421, -2.99535, -1.14412}}, {{-1.41421, -2.99535, 1.14412}, {-0.707107, -2.55834, 2.28825}}, 
{{-3.43237, 0, 0.707107}, {-2.99535, 1.14412, 1.41421}}, {{-0.707107, -2.55834, -2.28825}, {0.707107, -2.55834, -2.28825}}, {{-1.41421, -2.99535, 1.14412}, {-0.707107, -3.43237, 0}}, {{-2.55834, -2.28825, 0.707107}, {-1.41421, -2.99535, 1.14412}}, {{0, -0.707107, -3.43237}, {0, 0.707107, -3.43237}}, {{-1.14412, 1.41421, -2.99535}, {0, 0.707107, -3.43237}}, {{-0.707107, -3.43237, 0}, {0.707107, -3.43237, 0}}, {{-2.99535, 1.14412, -1.41421}, {-2.55834, 2.28825, -0.707107}}, {{-1.14412, -1.41421, -2.99535}, {0, -0.707107, -3.43237}}, {{-1.41421, -2.99535, -1.14412}, {-0.707107, -3.43237, 0}}, {{-2.99535, 1.14412, -1.41421}, {-2.28825, 0.707107, -2.55834}}, {{-3.43237, 0, -0.707107}, {-2.99535, 1.14412, -1.41421}}, {{-2.99535, -1.14412, 1.41421}, {-2.55834, -2.28825, 0.707107}}, {{-3.43237, 0, 0.707107}, {-2.99535, -1.14412, 1.41421}}, {{-2.28825, 0.707107, -2.55834}, {-1.14412, 1.41421, -2.99535}}, {{-2.55834, -2.28825, -0.707107}, {-2.55834, -2.28825, 0.707107}}, {{-2.28825, -0.707107, -2.55834}, {-2.28825, 0.707107, -2.55834}}, {{-3.43237, 0, -0.707107}, {-3.43237, 0, 0.707107}}, {{-1.41421, -2.99535, -1.14412}, {-0.707107, -2.55834, -2.28825}}, {{-1.14412, -1.41421, -2.99535}, {-0.707107, -2.55834, -2.28825}}, {{-3.43237, 0, -0.707107}, {-2.99535, -1.14412, -1.41421}}, {{-2.55834, -2.28825, -0.707107}, {-1.41421, -2.99535, -1.14412}}, {{-2.28825, -0.707107, -2.55834}, {-1.14412, -1.41421, -2.99535}}, {{-2.99535, -1.14412, -1.41421}, {-2.28825, -0.707107, -2.55834}}, {{-2.99535, -1.14412, -1.41421}, {-2.55834, -2.28825, -0.707107}}};
where coor is the list of vertices of a polytope and edges is the list of its edges. I would like to generate a Graphics3D whose back edges are dashed and whose front edges are solid. I have previously tried this solution, but it doesn't work, as I only use lines and vertices and no surfaces (that solution gives me only solid edges).
My second approach was to separate the space into two regions and to classify the edges as if they were in front of or behind a plane. The edges that cross this plane are cut to give a dashed edge and a solid edge. Here is the code and the result obtained for the given lists:
pointOfView = {1.2728823621935175, -2.3966390529217407, 2.021358885014492};
frontList = {};
backList = {};
sorter[edge_, region_, plan_] :=
Module[{pt1Member = RegionMember[region, edge[[1]]],
pt2Member = RegionMember[region, edge[[2]]], midpoint},
If[pt1Member != pt2Member,
midpoint = RegionIntersection[plan, Line[{edge[[1]], edge[[2]]}]][[1]],Indeterminate
];
Which[
pt1Member == pt2Member == False, AppendTo[frontList, edge],
pt1Member == pt2Member == True, AppendTo[backList, edge],
pt1Member == True && pt2Member == False, (AppendTo[backList, {edge[[1]], midpoint}]; AppendTo[frontList, {edge[[2]], midpoint}]),
pt1Member == False && pt2Member == True, (AppendTo[frontList, {edge[[1]], midpoint}]; AppendTo[backList, {edge[[2]], midpoint}])
];
];
splitEdges[list_, ptOfView_] :=
Module[{backRegion = HalfSpace[ptOfView, {0, 0, 0}], midplan},
midplan = RegionBoundary[backRegion];
Do[sorter[list[[i]], backRegion, midplan], {i, Length@list}];
];
splitEdges[edges, pointOfView];
Graphics3D[{Point[coor], {Dashed, Line[backList]}, {Thickness[0.003],
Line[frontList]}}, ViewPoint -> pointOfView, Boxed -> False]
The result is almost perfect, but I had not thought of edges that are behind the plane yet not behind the polytope (see the edge indicated by the red arrow).
So I appeal to your expertise. Does anyone have an idea of how I could produce these graphics without surfaces? I cannot use PolyhedronData because I have to stay general: my inputs are lists of vertices and edges.
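For reference, the splitting logic in the code above boils down to a sign test on dot products against the plane's normal, plus a parametric segment-plane intersection. Here is a language-agnostic sketch of that idea in Python (the function name and front/back conventions are illustrative, not taken from the original code):

```python
def split_edge(p1, p2, normal):
    """Classify an edge against the plane through the origin with the
    given normal (pointing toward the viewer).  Returns (front, back)
    lists of sub-segments."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    d1, d2 = dot(p1, normal), dot(p2, normal)
    if d1 >= 0 and d2 >= 0:          # both endpoints on the viewer's side
        return [(p1, p2)], []
    if d1 < 0 and d2 < 0:            # both endpoints behind the plane
        return [], [(p1, p2)]
    # edge crosses the plane: parametric intersection point
    t = d1 / (d1 - d2)
    mid = tuple(a + t * (b - a) for a, b in zip(p1, p2))
    if d1 >= 0:
        return [(p1, mid)], [(mid, p2)]
    return [(mid, p2)], [(p1, mid)]
```

This mirrors what RegionMember and RegionIntersection do in the Mathematica code, but makes the crossing case explicit.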
Maybe you can use something like this in order to find polygons to glue in. Then you could use Silvia's method.
σ = AssociationThread[coor -> Range[Length[coor]]];
elist = Partition[Lookup[σ, Flatten[edges, 1]], 2];
G = Graph[elist];
polygons = Join[
FindCycle[G, 5, All][[All, All, 1]],
FindCycle[G, 6, All][[All, All, 1]]
];
plot = Graphics3D[
{EdgeForm[Thick], FaceForm[White],
GraphicsComplex[coor, Polygon[polygons]]
},
Lighting -> "Neutral"
]
Thank you, it works when I add this graphic to the one I already had.
That's good to hear! You're welcome.
| common-pile/stackexchange_filtered |
Spring boot always shows lazy loaded fields as null in json
I'm working on a Spring Boot application where I lazy-load a bunch of fields in my classes. However, these fields are null when the object is returned from a request, even when they should have a value.
I added the hibernate aware object mapper to my application
@Bean
public Jackson2ObjectMapperBuilder configureObjectMapper() {
return new Jackson2ObjectMapperBuilder()
.modulesToInstall(Hibernate4Module.class);
}
I can access the lazy loaded fields fine when accessing them in code, but simply converting an object to json won't work.
MediaContent.java
@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class MediaContent extends Content {
@OneToOne(fetch = FetchType.LAZY)
private StoredFile previewFile;
@OneToOne(fetch = FetchType.LAZY)
private StoredFile imageFile;
@OneToMany(fetch = FetchType.LAZY, mappedBy = "downloadFor")
private List<StoredFile> downloadFiles;
@ManyToOne(fetch = FetchType.LAZY)
private Genre genre;
...
}
ContentController.java
This returns the object with all lazy loaded fields null, even when they have a value in the database.
@RequestMapping(value = "/{id}", method = RequestMethod.GET)
@Transactional
public @ResponseBody
Content find(@PathVariable Long id) {
return em.find(Content.class, id);
}
Basically, I want the lazy loaded fields to receive their value when returning the object to a request.
| common-pile/stackexchange_filtered |
Elasticsearch date in range date_histogram extended_bounds
I want to get date_histogram during specific period, how to restrict the date period? how should I use the extended_bounds parameter? For example : I want to query the date_histogram between '2024-07-10' and '2024-07-22', and the fixed_interval could be changed. I query with this expression insuring having the same filter range in the query:
GET /logging-events/_search
{
"size": 1,
"query": {
"bool": {
"filter": [
{
"range": {
"timestamp": {
"gt": "2024-07-10T00:00:00",
"lt": "2024-07-22T23:59:59"
}
}
}
]
}
},
"aggregations": {
"date_ranges": {
"date_histogram": {
"field": "timestamp",
"fixed_interval": "8d",
"extended_bounds": {
"min": "2024-07-10",
"max": "2024-07-22"
},
"format":"yyyy-MM-dd"
}
}
}
}
But I still get the date_histogram not in the range.
Unexpected Response :
aggregations.date_ranges.buckets[0].key_as_string eq 2024-07-08
{
"hits": {
"total": {
"value": 13,
"relation": "eq"
},
"max_score": 0.0,
"hits": [
{
"_index": "logging-events",
"_id": "BuP86ZAB2uzWuz8xZdkz",
"_score": 0.0,
"_source": {
"_class": "com.ecommerce.monitoring.services.event.LoggingEvent",
"event_type": "CLICK",
"session_id": "b2f4d3a1-4d4e-4f67-9b2a-0e7b0c9b5e2f",
"product_id": 22,
"shop_id": 2,
"timestamp": "2024-07-10T06:15:42.000Z"
}
}
]
},
"aggregations": {
"date_ranges": {
"buckets": [
{
"key_as_string": "2024-07-08",
"key": 1720396800000,
"doc_count": 6
},
{
"key_as_string": "2024-07-16",
"key": 1721088000000,
"doc_count": 7
}
]
}
}
}
Expected: Starts from 2024-07-10
1) Can you clarify what you mean by "I still get the date_histogram not in the range."? Perhaps an example of the output of what you get vs what you expect would help. 2) What is your mapping for timestamp? 3) Can your records contain multiple timestamps per record?
@imotov Thanks for reaching out .
1- Post edited. I have attached the response retrieved from the call.
2- timestamp with type date, format date_optional_time||epoch_millis.
3- No, only one single timestamp.
First of all, I think the following sentence in the Elasticsearch documentation doesn't capture what's actually going on and might have confused you.
With extended_bounds setting, you now can "force" the histogram
aggregation to start building buckets on a specific min value and also
keep on building buckets up to a max value (even if there are no
documents anymore).
You cannot force it to build buckets starting on a specific value; you can only force building empty buckets before and after your last available document. The buckets will start on values determined by the fixed_interval regardless of what you specify in extended_bounds. So, if the fixed interval is 8d, it will only create buckets every 8 days counted from Jan 1, 1970.
2024-07-08 is the 19,912th day after Jan 1, 1970, and 19,912 is divisible by 8, so this is why the bucket starts on that date and not on 2024-07-10. To move the buckets to 2024-07-10 you need to add a 2d offset to the date histogram. So, in your case the query should look like this:
{
"size": 0,
"query": {
"bool": {
"filter": [
{
"range": {
"timestamp": {
"gt": "2024-07-10T00:00:00",
"lt": "2024-07-22T23:59:59"
}
}
}
]
}
},
"aggregations": {
"date_ranges": {
"date_histogram": {
"field": "timestamp",
"fixed_interval": "8d",
"format":"yyyy-MM-dd",
"offset": "2d"
}
}
}
}
You might also want to take a look at using hard_bounds instead of query to limit buckets that are getting created.
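The day arithmetic in this explanation can be checked directly with Python's standard library:

```python
from datetime import date

epoch = date(1970, 1, 1)

# 2024-07-08 falls exactly on an 8-day boundary counted from the epoch,
# which is where a fixed_interval of "8d" starts its bucket:
days = (date(2024, 7, 8) - epoch).days
print(days, days % 8)                          # 19912 0

# the desired start date 2024-07-10 is 2 days past that boundary,
# which is why offset "2d" moves the buckets to the right place:
print((date(2024, 7, 10) - epoch).days % 8)    # 2
```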
I needed the information about the starting point of the buckets because I have a requirement where the fixed interval is configured dynamically from the server. Without the start-date information, I was struggling with how to handle the offset. Maybe the official documentation should include this as well.
I really appreciate your assistance.
| common-pile/stackexchange_filtered |
parsing xsd from WSDL using LINQ to XML
I am trying to build a dictionary using an XSD file which I get from WSDL definition using LINQ to XML.
The nodes which I am trying to parse look something like this
<xsd:element maxOccurs="1" minOccurs="0" name="active" type="xsd:boolean"/>
<xsd:element maxOccurs="1" minOccurs="0" name="activity_due" type="xsd:string"/>
<xsd:element maxOccurs="1" minOccurs="0" name="additional_assignee_list" type="xsd:string"/>
<xsd:element maxOccurs="1" minOccurs="0" name="approval" type="xsd:string"/>
<xsd:element maxOccurs="1" minOccurs="0" name="approval_history" type="xsd:string"/>
<xsd:element maxOccurs="1" minOccurs="0" name="approval_set" type="xsd:string"/>
<xsd:element maxOccurs="1" minOccurs="0" name="assigned_to" type="xsd:string"/>
<xsd:element maxOccurs="1" minOccurs="0" name="assignment_group" type="xsd:string"/>
The link to the XML file is: https://dl.dropboxusercontent.com/u/97162408/incident.xml
I am only worried about "getKeys".
Basically want to build a dictionary which will give me a key-value pair for "name" and "type" from the above sample node list.
I have got to a point where I can get to the Node list using the code
XNamespace ns = XNamespace.Get("http://www.w3.org/2001/XMLSchema");
XDocument xd = XDocument.Load(url);
var result = (from elements in xd.Descendants(ns + "element") where elements.Attribute("name").Value.Equals("getKeys")
select elements.Descendants(ns + "sequence")
);
Now I wanted to build a dictionary in a single function call without writing another routine to parse through the result list using LINQ to XML. Any hints, code samples would be really helpful!!
ToDictionary is your friend here. You can do it all in one statement:
var query = xd
.Descendants(ns + "element")
.Single(element => (string) element.Attribute("name") == "getKeys")
.Element(ns + "complexType")
.Element(ns + "sequence")
.Elements(ns + "element")
.ToDictionary(x => (string) x.Attribute("name"),
x => (string) x.Attribute("type"));
Basically the first three lines find the only element with a name of getKeys, the next three lines select the xsd:element parts under it (you could just use Descendants(ns + "element") if you wanted), and the final call transforms a sequence of elements into a Dictionary<string, string>.
| common-pile/stackexchange_filtered |
Issues setting up LibTorch on Windows 11
Hello and thanks in advance!
When following the official LibTorch installation guide, I ran across four separate errors when building the project. I have not found solutions to the last two errors anywhere in the PyTorch forums or on Stack Overflow.
To be as clear as possible, I'll recall what each of the errors and solutions were. Only the two CMake commands found in the guide result in errors.
Issue 1 (resolved):
I learned from one thread that I had to set Torch_DIR to the directory containing the TorchConfig.cmake and torch-config.cmake files. This effectively replaced the command
cmake -DCMAKE_PREFIX_PATH=/absolute/path/to/libtorch ..
with
cmake -DCMAKE_PREFIX_PATH=C:/Users/evana/Documents/MyPrograms/C++/Chess2/libtorch-win-shared-with-deps-2.2.1+cu121 -DTorch_DIR=C:\Users\evana\Documents\MyPrograms\C++\Chess2\libtorch-win-shared-with-deps-2.2.1+cu121\libtorch\share\cmake\Torch ..
Issue 2 (resolved):
Even after including Torch_DIR in the command, I received the same error found in this post. I could be completely misunderstanding this, but I think that one of CUDA's 12.1 libraries' new header-only format was conflicting with LibTorch or one of its dependencies.
The post helped me realize I had to use part of the CUDA 11.8 library.
Issue 3:
Finally, I could enter the first CMake command of the guide without errors:
PS C:\Users\evana\Documents\MyPrograms\C++\Chess2\build> cmake -DCMAKE_PREFIX_PATH=C:/Users/evana/Documents/MyPrograms/C++/Chess2/libtorch-win-shared-with-deps-2.2.1+cu121 -DTorch_DIR=C:\Users\evana\Documents\MyPrograms\C++\Chess2\libtorch-win-shared-with-deps-2.2.1+cu121\libtorch\share\cmake\Torch ..
>>
-- Building for: Visual Studio 16 2019
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22631.
-- The C compiler identification is MSVC 19.29.30153.0
-- The CXX compiler identification is MSVC 19.29.30153.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1 (found version "12.1")
-- The CUDA compiler identification is NVIDIA 12.1.66
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/bin/nvcc.exe - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found CUDAToolkit: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/include (found version "12.1.66")
-- Caffe2: CUDA detected: 12.1
-- Caffe2: CUDA nvcc is: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/bin/nvcc.exe
-- Caffe2: CUDA toolkit directory: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1
-- Caffe2: Header version is: 12.1
-- C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/lib/x64/nvrtc.lib shorthash is bac8224f
-- USE_CUDNN is set to 0. Compiling without cuDNN support
-- USE_CUSPARSELT is set to 0. Compiling without cuSPARSELt support
-- Autodetected CUDA architecture(s): 8.6
-- Added CUDA NVCC flags for: -gencode;arch=compute_86,code=sm_86
-- Found Torch: C:/Users/evana/Documents/MyPrograms/C++/Chess2/libtorch-win-shared-with-deps-2.2.1+cu121/libtorch/lib/torch.lib
-- Configuring done (23.9s)
-- Generating done (0.1s)
-- Build files have been written to: C:/Users/evana/Documents/MyPrograms/C++/Chess2/build
However, there was a problem with the second CMake command:
PS C:\Users\evana\Documents\MyPrograms\C++\Chess2\build> cmake --build . --config Release
Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
1>Checking Build System
Building Custom Rule C:/Users/evana/Documents/MyPrograms/C++/Chess2/CMakeLists.txt
example-app.cpp
C:\Users\evana\Documents\OtherPrograms\vcpkg\installed\x64-windows\include\glog/log_severity.h(57,1): fatal error C1189: #error: ERROR macro is defined. Define GLOG_NO_ABBREVIATED_SEVERITIES before including logging.h. See the document
for detail. [C:\Users\evana\Documents\MyPrograms\C++\Chess2\build\example-app.vcxproj]
Issue 4:
When I threw in a #define GLOG_NO_ABBREVIATED_SEVERITIES in logging.h to arbitrarily define the macro, I received over 100 syntax errors across several of LibTorch's dependencies:
PS C:\Users\evana\Documents\MyPrograms\C++\Chess2\build> cmake --build . --config Release
Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
1>Checking Build System
Building Custom Rule C:/Users/evana/Documents/MyPrograms/C++/Chess2/CMakeLists.txt
example-app.cpp
C:\Users\evana\Documents\OtherPrograms\vcpkg\installed\x64-windows\include\c10/core/TensorImpl.h(2065,1): error C2589: '(': illegal token on right side of '::' [C:\Users\evana\Documents\MyPrograms\C++\Chess2\build\example-app.vcxproj]
C:\Users\evana\Documents\OtherPrograms\vcpkg\installed\x64-windows\include\c10/core/TensorImpl.h(2065): error C2062: type 'unknown-type' unexpected [C:\Users\evana\Documents\MyPrograms\C++\Chess2\build\example-app.vcxproj]
C:\Users\evana\Documents\OtherPrograms\vcpkg\installed\x64-windows\include\c10/core/TensorImpl.h(2065,1): error C2059: syntax error: ')' [C:\Users\evana\Documents\MyPrograms\C++\Chess2\build\example-app.vcxproj]
... many more syntax errors later ...
C:\Users\evana\Documents\OtherPrograms\vcpkg\installed\x64-windows\include\ATen/core/jit_type_base.h(87,1): fatal error C1003: error count exceeds 100; stopping compilation [C:\Users\evana\Documents\MyPrograms\C++\Chess2\build\example-a
pp.vcxproj]
Code and other information:
I'm certain I'm overcomplicating things, but I've tried to stay as close as possible to the original guide for installing LibTorch on Windows 11. The code for CMakeLists.txt and example-app.cpp can be seen in the guide.
Here's the code for the first syntax error from issue 4:
TensorImpl.h(2065,1) error C2589: '(': illegal token on right side of '::'
int64_t safe_compute_numel() const {
uint64_t n = 1;
bool overflows = c10::safe_multiplies_u64(sizes(), &n);
constexpr auto numel_max = std::min( // line 2065
static_cast<uint64_t>(std::numeric_limits<int64_t>::max()),
static_cast<uint64_t>(std::numeric_limits<size_t>::max()));
overflows |= (n > numel_max);
TORCH_CHECK(!overflows, "numel: integer multiplication overflow");
return static_cast<int64_t>(n);
}
As of March 17, 2024, I am using the most recent GPU, CUDA 12.1, Windows, C++ API combination: libtorch-win-shared-with-deps-2.2.1+cu121.zip.
I greatly appreciate any help with issues 3 and 4! Apologies for the long post and can provide any more information as necessary.
TensorImpl.h(2065,1): error C2589: '(': illegal token on right side of '::' Well, what is on line 2065 of this file? Maybe the problem is that it requires a newer C++ standard than C++14 (the default in MSVC).
It's a little hard to express the code in comment format, but the line says
constexpr auto numel_max = std::min(. Is this compatible with c++14? I can add more lines if you need me to, but the comment format won't be very helpful
Another question: in my CMakeLists.txt, I have set_property(TARGET example-app PROPERTY CXX_STANDARD 17). Would this mean that MSVC is compiling to c++17?
It's a little hard to express the code in comment format You can edit the question to add additional info.
std::min is a problem that windows.h defines a macro for min / max see this question: https://stackoverflow.com/questions/11544073/how-do-i-deal-with-the-max-macro-in-windows-h-colliding-with-max-in-std
Also this question: https://stackoverflow.com/questions/21483038/undefining-min-and-max-macros and this: https://stackoverflow.com/questions/5004858/why-is-stdmin-failing-when-windows-h-is-included
Is there a way I could undefine MSVC's macros at a central location rather than all the dependencies LibTorch has? or maybe use a different compiler?
I think you need to #define NOMINMAX before you #include <windows.h> or does your code not include that?
TensorImpl.h does not have #include <windows.h>, but maybe it could have been included somewhere else in LibTorch's dependencies?
Someone else who experience with LibTorch may have to pick up the help. I was helping because of my c++ and CMake experience.
Thank you so much regardless! I may switch to tensorflow if I can't use libtorch
| common-pile/stackexchange_filtered |
sqlalchemy deferred column loading - does not appear any more efficient
I have been trying to optimize a query, and despite the effort it doesn't seem that the following two queries have substantially different performance. Is it possible that the overhead of the ORM-mapped objects is offsetting any gains made at the DBAPI level? If so, is there a solution?
baseContactQuery = contact.query.options(Load(contact).load_only(contact.user_id, contact.organization_contact, contact.relationship_strength, contact.full_name, contact.first_name, contact.last_name)).\
options(selectinload(contact.organization_references).load_only(organization_contact_reference.contact_sharing_level)).\
options(selectinload(contact.jobs).load_only(job.is_primary, job.role).options(selectinload(job.tied_company).load_only(company.name), lazyload('*'))).\
options(selectinload(contact.emails).load_only(email.email, email.is_primary)).\
options(selectinload(contact.contact_user).load_only(user.id).selectinload(user.organization_references).load_only(organization_user_reference.default_contact_sharing_level))
vs.
baseContactQuery = contact.query.\
options(selectinload(contact.organization_references)).\
options(selectinload(contact.jobs)).\
options(selectinload(contact.emails)).\
options(selectinload(contact.contact_user).selectinload(user.organization_references))
If these turn into SQL, let's see the SQL.
So I just compared the two and it seems to slow down once a bunch of filters are added (not shown in the baseContactQuery). Here's the SQL that is generated with all the filters included.
https://docs.google.com/document/d/1YD3l6EHHhM7E5cZZG-LPB9_nmsTdtX9oy8eBdEZcDgg/edit?usp=sharing
For sufficiently large tables, adding filters (WHERE criteria) means the db engine must do more work, so the query itself will take longer. Particularly if the right indexes are not in place. So you need to first of all establish how much time is being spent in the db before deciding where to optimise.
I see some cases of
a AND b OR c
Make sure you want (a AND b) OR c, not a AND (b OR c)
OR is hard to optimize.
For organization_contact_reference:
INDEX(contact_id, contact_sharing_level)
Unless you have ANSI_QUOTES turned on, this should be a syntax error:
WHERE "user".id = ...
organization_user_reference needs
INDEX(user_id, organization_id, default_contact_sharing_level)
ORDER BY job.is_primary DESC, job.start_time DESC may benefit from
INDEX(is_primary, start_time)
Not "sargable": ... AND lower(email.email) IN ... Use a suitable COLLATION on email so that you don't need to use LOWER().
| common-pile/stackexchange_filtered |
Regex with lowercase, uppercase, alphanumeric, special characters and no more than 2 identical characters in a row with a minimum length of 8 chars
I'm trying to create a regex that requires all 4 main character types (lowercase, uppercase, numeric, and special chars) with a minimum length of 8 and no more than 2 identical characters in a row.
I've tried searching for a potential solution and piecing together different regexes but no such luck! I was able to find this one on Owasp.org
^(?:(?=.*\d)(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[^A-Za-z0-9])(?=.*[a-z])|(?=.*[^A-Za-z0-9])(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[A-Z])(?=.*[^A-Za-z0-9]))(?!.*(.)\1{2,})[A-Za-z0-9!~<>,;:_=?*+#."&§%°()\|\[\]\-\$\^\@\/]{8,32}$
but it uses at least 3 out of the 4 different characters when I need all 4. I tried modifying it to require all 4 but I wasn't getting anywhere. If someone could please help me out I would greatly appreciate it!
does it need to be a RegEx ? It would likely be easier to split the work between the format and the letter duplication
Can you try the following?
var strongRegex = new RegExp("^(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.*[!@#\$%\^&\*])(?=.{8,})");
Explanations
RegEx Description
(?=.*[a-z]) The string must contain at least 1 lowercase alphabetical character
(?=.*[A-Z]) The string must contain at least 1 uppercase alphabetical character
(?=.*[0-9]) The string must contain at least 1 numeric character
(?=.*[!@#\$%\^&\*]) The string must contain at least one special character, but we are escaping reserved RegEx characters to avoid conflict
(?=.{8,}) The string must be eight characters or longer
or try with
(?=.{8,100}$)(([a-z0-9])(?!\2))+$ The regex uses a lookahead with a backreference and rejects the string if two identical characters appear in a row
var strongerRegex = new RegExp("^(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.*[!@#\$%\^&\*])(?=.{8,100}$)(([a-z0-9])(?!\2))+$");
reference
I think this might work from you (note: the approach was inspired by the solution to this SO question).
/^(?:([a-z0-9!~<>,;:_=?*+#."&§%°()|[\]$^@/-])(?!\1)){8,32}$/i
The regex basically breaks down like this:
// start the pattern at the beginning of the string
/^
// create a "non-capturing group" to run the check in groups of two
// characters
(?:
// start the capture the first character in the pair
(
// Make sure that it is *ONLY* one of the following:
// - a letter
// - a number
// - one of the following special characters:
// !~<>,;:_=?*+#."&§%°()|[\]$^@/-
[a-z0-9!~<>,;:_=?*+#."&§%°()|[\]$^@/-]
// end the capture the first character in the pair
)
// start a negative lookahead to be sure that the next character
// does not match whatever was captured by the first capture
// group
(?!\1)
// end the negative lookahead
)
// make sure that there are between 8 and 32 valid characters in the value
{8,32}
// end the pattern at the end of the string and make it case-insensitive
// with the "i" flag
$/i
You could use negative lookaheads based on contrast using a negated character class to match 0+ times not any of the listed, then match what is listed.
To match no more than 2 identical characters in a row, you could also use a negative lookahead with a capturing group and a backreference \1 to make sure there are not 3 of the same characters in a row.
^(?=[^a-z]*[a-z])(?=[^A-Z]*[A-Z])(?=[^0-9]*[0-9])(?=[^!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]*[!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-])(?![a-zA-Z0-9!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]*([a-zA-Z0-9!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-])\1\1)[a-zA-Z0-9!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]{8,}$
^ Start of string
(?=[^a-z]*[a-z]) Assert a-z
(?=[^A-Z]*[A-Z]) Assert A-Z
(?=[^0-9]*[0-9]) Assert 0-9
(?= Assert a char that you would consider special
[^!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]*
[!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]
)
(?! Assert not 3 times an identical char from the character class in a row
[a-zA-Z0-9!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]*
([a-zA-Z0-9!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-])\1\1
)
[a-zA-Z0-9!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]{8,} Match any of the listed 8 or more times
$ End of string
Regex demo
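A quick sanity check of the final pattern (a sketch; Python's re engine is used here, and the sample passwords are my own):

```python
import re

# the pattern from the answer above, split across raw strings
pattern = re.compile(
    r'^(?=[^a-z]*[a-z])(?=[^A-Z]*[A-Z])(?=[^0-9]*[0-9])'
    r'(?=[^!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]*[!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-])'
    r'(?![a-zA-Z0-9!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]*'
    r'([a-zA-Z0-9!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-])\1\1)'
    r'[a-zA-Z0-9!~<>,;:_=?*+#."&§%°()|\[\]$^@\/-]{8,}$')

print(bool(pattern.match('Aa1!abcd')))   # True  - all four classes, length 8
print(bool(pattern.match('Aa1!aaab')))   # False - 'aaa' is 3 identical in a row
print(bool(pattern.match('Aa1!abc')))    # False - too short
print(bool(pattern.match('Aa1aabcd')))   # False - no special character
```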
| common-pile/stackexchange_filtered |
UI gets stuck even if I use GCD
When I go back from my secondView to my mainView I'm processing something in the viewDidDisappear method of my secondView. The problem is, my mainView gets stuck due to the work the app has to do.
Here is what I do:
-(void)viewDidDisappear:(BOOL)animated
{
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul);
dispatch_async(queue, ^{
dbq = [[dbqueries alloc] init];
[[NSNotificationCenter defaultCenter] postNotificationName:@"abc" object:nil];
//the notification should start a progressView, but because the whole view gets stuck, I can't see it working, because it stops as soon as the work is done
dispatch_sync(dispatch_get_main_queue(), ^{
//work
});
});
What am I doing wrong?
Thanks in advance!
You need to perform your work in the dispatch_async to your queue. You are currently doing the work (assuming the // Work comment is where it goes) in the main thread, and in addition you are blocking your worker thread waiting for this work.
Try to rearrange your GCD calls a bit:
-(void)viewDidDisappear:(BOOL)animated
{
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul);
dbq = [[dbqueries alloc] init];
[[NSNotificationCenter defaultCenter] postNotificationName:@"abc" object:nil];
dispatch_async(queue, ^{
// Perform work here
dispatch_async(dispatch_get_main_queue(), ^{
// Update UI here
});
});
}
No, the problem is that he's doing the work on the main thread.
The view still gets stuck and there is no loading indicator.
So I have to create another thread? But do I have to use this GCD thing then? Because, if it's on another thread, why would I need it?
@NikolaiRuhe Good spot, missed the // work comment there :). I updated the answer.
@Krumelur I have another problem now =/
when the work is done, a method in the mainView gets called, but it won't do these two lines now [self.progressView setHidden:YES];
[[self.progressView.subviews objectAtIndex:0] stopAnimating];
Isn't it possible to access the UI from an async thread? But that wouldn't make sense because this line works: self.startARButton.enabled = YES;
it is just before the two other lines
You should always update UI from the main thread. Use dispatch_async again when the UI needs updating. See my updated answer.
| common-pile/stackexchange_filtered |
local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) in ipage server how to fix it
I made a website manually in PHP. After uploading it to the iPage server, when I try to access the admin page (inside the admin folder) it shows me the error Can't connect to local MySQL server through socket /var/run/mysqld/mysqld.sock (2). Please tell me how to fix it.
Please provide code for us to work with.
Which type of code: MySQL code, or something else?
this is my domain artical-center.com
When I check this on XAMPP it works perfectly, but after uploading it to the iPage server the database titles show but the pictures are missing, and when I try to access admin.php it shows the message Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
please check this artical-center.com/admin
Possible duplicate of error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
No, that only covers a localhost error and not much detail. I want to ask specifically about the iPage server: does any extra config file need to be uploaded when deploying the website to the live server?
iPage has live chat support; maybe your question would be better asked there, since all you have provided in your question here is "it works here but doesn't work there".
Thanks, I'll ask it there.
| common-pile/stackexchange_filtered |
javaScript fire php file and dont wait for reply
I am writing an accommodation web site. I am using PayPal as the billing system. Before I redirect the client to the PayPal page I need to insert all relevant data into the MySQL database.
My question is: how do I fire the PHP (from the JavaScript) without waiting for its reply, so that it causes no problem that the page which fired the PHP will no longer exist?
You need to look into AJAX and probably a javascript framework like jQuery.
Use AJAX. Send an XMLHttpRequest to the appropriate php script with relevant data and redirect the client upon receiving successful response back from the script.
Yes, I used this option in the end. I didn't want to drag all the data across pages... For those who still don't have JavaScript: well, half of my site won't function properly anyway.
If you need to insert data into the database and redirect to a Paypal page you only really have one choice: post the form to the server, write the data to the database and then send a redirect back to the client. So:
<?php
// save data to database
header('Location: http://www.paypal.com');
exit;
I wish I had thought of that!
If it has to be JavaScript, I would make this a normal AJAX operation (see here for jQuery-based Ajax).
I find it dangerous, though, to start a request and not wait for the reply. What if the user's network is congested, and the request has to wait on the client end, and never makes it out before the user proceeds to the next page? Better wait for the success event.
Using JavaScript for this has the massive downside that if JavaScript is turned off, or a JavaScript error occurs, vital data will not be written into your database, while the shopping procedure may go on as planned. Check whether this is really what you want. The header based approach by Cletus is definitely the safest.
| common-pile/stackexchange_filtered |
Round datetime64 array to nearest second
I have an array of type datetime64[ns]. Each element looks something like '2019-08-30T14:02:03.684000000'. How do I round the values to the nearest second such that I would obtain '2019-08-30T14:02:04' in this example?
I know I can truncate the values by
t = t.astype('datetime64[s]')
but I specifically need to round the values and not truncate them. And the numpy 'round' function doesn't seem to like the datetime64[ns] data type.
You can do it by converting np.datetime64 to datetime.datetime.
import numpy as np
from datetime import datetime, timedelta
dt64 = np.datetime64('2019-08-30T14:02:03.684000000')
# np to datetime object
ts = (dt64 - np.datetime64('1970-01-01T00:00:00')) / np.timedelta64(1, 's')
dt = datetime.utcfromtimestamp(ts)
# Rounding
if dt.microsecond / 1000000 > 0.5:
    date = (dt + timedelta(seconds=1)).replace(microsecond=0)
else:
    date = dt.replace(microsecond=0)
# Datetime to np
date_rounded = np.datetime64(date).astype('datetime64[s]')
Output:
numpy.datetime64('2019-08-30T14:02:04')
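A shorter alternative sketch that stays entirely in numpy and works element-wise on whole arrays: add half a second, then truncate. Truncating after adding the half-step is equivalent to rounding half-up to the nearest second.

```python
import numpy as np

t = np.array(['2019-08-30T14:02:03.684000000',
              '2019-08-30T14:02:03.400000000'], dtype='datetime64[ns]')

# Adding half a second and then truncating to whole seconds
# rounds each timestamp to the nearest second (half-up).
half_second = np.timedelta64(500_000_000, 'ns')
rounded = (t + half_second).astype('datetime64[s]')
print(rounded)  # ['2019-08-30T14:02:04' '2019-08-30T14:02:03']
```

This avoids the per-element round trip through `datetime.datetime`, which matters for large arrays.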
Another handy utility for rounding datetime64 by specified interval ('accuracy'):
from datetime import datetime
from typing import Union
import numpy as np
UNIX_T0 = np.datetime64('1970-01-01T00:00:00')
def round_t64(time: Union[np.datetime64, datetime], dt: Union[np.timedelta64, int]):
    """
    Round timestamp by dt
    """
    if isinstance(dt, int):
        dt = np.timedelta64(dt, 's')
    if isinstance(time, datetime):
        time = np.datetime64(time)
    return time - (time - UNIX_T0) % dt
And use case:
# floor to 1 sec
round_t64(np.datetime64(datetime.now()), np.timedelta64(1, 's'))
# ceil to 1 sec
round_t64(np.datetime64(datetime.now()), -np.timedelta64(1, 's'))
# floor to 5 min
round_t64(np.datetime64(datetime.now()), np.timedelta64(5, 'm'))
You can use the round function from the .dt accessor:
import pandas as pd
t = pd.Series(['2019-08-30T14:02:03.684000000'], dtype='datetime64[ns]')
# 0 2019-08-30 14:02:03.684
# dtype: datetime64[ns]
t.dt.round('s')
# 0 2019-08-30 14:02:04
# dtype: datetime64[ns]
Reload page on browser history back button
I want the page to reload when the browser's history back button is hit. However, since the URL is changed often via JavaScript's window.history.pushState, I do not want to reload the page every time the location changes. By default the browser just changes the URL without reloading the page when the back button is hit.
(By this I would like to use the browser history back button as some kind of "undo" function.)
Maybe HTML5 can help you; have a look at http://stackoverflow.com/questions/17507091/replacestate-vs-pushstate
Thanks for your comment. However, using replaceState() instead of pushState() kills the "undo" function.
Try this.
function HandleBackFunctionality() {
    // For IE
    if (window.event) {
        // To check if it's Back
        if (window.event.clientX < 40 && window.event.clientY < 0) {
            alert("Browser back button is clicked…");
        } else {
            // It's Refresh
            alert("Browser refresh button is clicked…");
        }
    }
    // Other browsers
    else {
        if (event.currentTarget.performance.navigation.type == 1) {
            alert("Browser refresh button is clicked…");
        }
        if (event.currentTarget.performance.navigation.type == 2) {
            alert("Browser back button is clicked…");
        }
    }
}
You need to call this method on onbeforeunload event.
<body onbeforeunload="HandleBackFunctionality()">
This method, however, does not work with keyboard controls.
Gives me a "ReferenceError: event is not defined" in the console in Firefox. In Chrome there is no error, but the back button still does no refresh.
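Since the accepted approach reportedly fails in current browsers, here is a hedged alternative sketch: when URLs are changed with history.pushState, the browser fires a popstate event on back/forward navigation, and the page can reload itself in that handler. The event target is taken as a parameter here only so the wiring can be exercised outside a browser; in a real page you would pass window and window.location.reload.

```javascript
// Reload whenever the user moves through history entries that were
// created with history.pushState (back/forward fires 'popstate').
function reloadOnPopState(eventTarget, reload) {
  eventTarget.addEventListener('popstate', () => reload());
}

// Browser usage sketch:
// reloadOnPopState(window, () => window.location.reload());
```

Note that this gives exactly the "undo" behaviour asked for: pushState changes the URL silently, and only an actual back/forward navigation triggers the reload.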
Please don't shoot the messenger, what can I do better?
Three hours ago I popped into the site to check in. I was pinged about this post, about a deleted comment. I made the grave mistake of writing a light hearted answer to lift the mood.
The most time consuming and draining aspect is addressing meta. If we decline a flag or delete the comment, it can end up on meta.
When I post an answer, there's a 25% chance it will be downvoted (this is taken from my post stats - many of the downvoted answers have been deleted). There's also a good chance it will be flooded with comments. Latest: 33 in 90 mins.
Shortly after being elected the welcoming blog happened and the goal posts on the site have been dragged far along from where they were when I was an infant mod (I was sworn in Mar 27th - blog posted 26th April). It's been a steep learning curve.
The mods don't have control over the changes in the site. We have special powers, but we have one voice, as does each person reading this. We are instructed what is expected of us as moderators and that's all we can do. We're doing our best.
I'm held accountable for my actions, and I'm glad for it. I try to stay on top of our flag queue, as it gets out of control quickly. I am also fallible. I get tired. I make mistakes. If I find myself making too many mistakes, I take a break.
Barraging some of my posts with dozens of comments and a flood of downvotes, doesn't actually help to effect change. It's just exhausting. What am I doing wrong?
We were elected by the community to handle difficult tasks and make the line calls. People are not always going to like our choices. Believe it or not, we're trying to improve the site, one flag at a time.
So I'm writing this to stimulate some discussion about how I can make my communication better on meta.
I have upvoted all the answers. They've all been helpful, as have the comments. I accepted this one, as it is so simple and easy to follow. That may sound strange to some people. As a literal thinker, simple step-by-step instructions work well for me. I'm hoping the community will see an improvement in my communication. My goal is to be helpful for our site and our community (old and new); otherwise there's no point being here.
Thank you
I want to thank everyone for their feedback. It has been helpful (answers and comments included). I'm hoping the community will see an improvement in my communication.
Please feel free to post an answer here or ping me if you have an issue with me. I'd welcome the discussion and am always hopeful that any rift or misunderstanding can be repaired.
My goal is to be helpful for our site, our community (old and new), otherwise there's no point being here. Thanks for bearing with me.
This comment thread: [Constructive responses redacted for maximum welcoming capability] :P
The irony is that all those downvoters on your lastest answer, if in your position, would do exactly the same thing, i.e. err on the side of deleting comments. If they feel so strongly, they should put themselves up in the next elections. Some empathy would not go amiss.
Hi :) The reason why I - and others I assume - downvoted your recent answer was the "erring on the side of deletion can be the wisest choice"-part of your (otherwise pretty good) poem. Meta is, always has been, and probably always will be a minefield, with very sensitive mines. Most people here dislike the general direction SO is headed in at the moment, and mass-deletion of comments is also not really "nice" for a lot of us.
@Seth, I think there's a hidden message / hint / idea (whatever you want to call it). If you spend a lot of time / effort writing comments, you're doing it wrong!
What a total and utter mess "the welcoming" is... Yvette, this isn't your fault. You're trying your best, and we (well, at least I) appreciate that.
I don't think anyone disagrees with "let's be nice" but working out what's nice, what's not nice, etc. is a nightmare. Not even to mention that this is a worldwide forum. Manners/politeness often do not translate between cultures. I (for one) have just given up on the whole thing. It's a nonsense. I'll keep on doing my thing as I've always done it and leave SO (inc) to its hand-wringing.
Anyway, go @YvetteColomb :)
How is it different from anything else on meta? You aren't being downvoted, your posts are. People are just disagreeing... I know I downvoted that answer. I wasn't downvoting you, I was downvoting for the reason I left in a comment on said answer. I am confused about how I'm supposed to let you know whether I'm agreeing or disagreeing with something on meta now :/
@Patrice don't be confused. I'm trying to be a better mod... every time I see you, I still think of this. From a conversation years ago.
@Yvette GOD DAMMIT! Here I was hoping it was the name-gender issue where I "outed" myself as a bearded giant who people confuse for a woman :p (I mean, it is that one, but just linked to that HIMYM video...I have to find a way to remove that show from the collective minds of humanity). In any case, I do see a lot of efforts to improve on your part and respect that immensely :) My own particular issue with the previous answer was really the inferrence that could be done between what you said and the kind of "bad question, but answered so won't be improved" situation I thought it could generate.
@Patrice thanks for noticing the effort - yeah, I'm honestly trying to improve, which is why I asked the question and upvoted all the answers. And yes, you did indeed out yourself as the bearded Canadian, if I remember correctly.
@Yvette well there are many bearded Canadians :P. But yes I definitely am one of them lol. Good memory ^^. Or I make more of an impact than I thought I did :p
@Patrice sometimes I don't know what day of the week it is. But I do remember that chat well. It was a turning point for me with SO. It's been slow and steady (up and down at times), but an improving graph. I've lost my persecution complex - that always helps :D Oh, and rene outed himself - as a he - we already knew he was a flower.
slowly massages temples over the <sub> tag misuse
@canon it was an aside :p
Re: "it was an aside :p": "In rhetoric, a parenthesis ... or parenthetical phrase is an explanatory or qualifying word, clause, or sentence inserted into a passage. The parenthesis could be left out and still form grammatically correct text. Parentheses are usually marked off by round or square brackets, dashes, or commas." Wikipedia: Parenthesis (rhetoric)
@TinyGiant yes, and I also made mine small. We all know here by now I'm not regular. I'm just trying to be positive and constructive not regular :)
There's irregular, then there's misuse of formatting that causes old men to squint unnecessarily.
@TinyGiant well you're a young whipper snapper :) magnification of the browser is an effective defence against the aging eyeballs.
Not the alignment. WHO WILL THINK OF THE CHILDREN!
For reference, Yvette's post that got shot.
While everyone has room for improvement, and framing can help, overall you probably just need a thick skin to be a mod :/.
@Cœur Wait, what? That is the best Stack Exchange post ever! I regret that I didn't write it myself.
I downvoted your answer on that thread largely because of the "Comments are extremely difficult in this political climate, it's like we're all walking on egg shells." in the first paragraph. You are reinforcing the negative attitude of the OP instead of just pointing out that comment simply did not have enough value to be retained. I'd suggest being more factual and less banter/joking in your answers.
All aboard the nitpicking train ;-): light-hearted, not light hearted. time-consuming, not time consuming. I could have edited your post, but then you might miss this.
Don't understand; I thought downvotes on meta denote agreement / disagreement? It just means many didn't agree with your pov.
@Lankymart The point is that posts by Yvette seem to receive a disproportionate amount of downvotes when compared to other moderators posting on meta.
@MarkRotteveel in which case, they may need to assess some of their views and how they differ from the majority if they are concerned with being disagreed with. Either way, it is not a slur to be downvoted on meta, just a marker to say that people don't agree.
While the answers others posted provide some valuable insights, I would just like to say that being a mod who gets more downvotes than others is not a bad thing, per se. I see it as a sign that you are more controversial than other mods, and that you look after minorities rather than the majority of users. We need people like that. Regardless, you were elected, and people knew who they were voting for. I for one would vote for you again any time. Kudos for making this post. It's good to know you're on the mod team.
I downvoted your answer on that thread largely because of the "Comments are extremely difficult in this political climate, it's like we're all walking on egg shells." in the first paragraph.
In my opinion, with this comment you are reinforcing the negative attitude of the OP on 'are we being censored and can we no longer comment or what' (my exaggerated impression of that question) instead of just pointing out that comment simply did not have enough value to be retained. I'd suggest being more factual and less banter/joking in your answers.
That last point about (not) joking: I get the impression that you try to soften your responses with humour/jokes, but if those jokes fall flat or even rub people the wrong way it will only serve to increase the negative impressions of your post (and hence downvotes). Sticking to the facts or an explanation will far more likely maintain a neutral outlook even if people disagree.
Which politics does that quote refer to? World/domestic politics or StackOverflow politics?
@camden_kid I would guess SO politics
@AxelRichter That is not at all what I'm saying, I'm specifically talking about the case of Yvette, who is a moderator and thus represents both the community and SE, and is - whether that is appropriate or not - held to a higher standard. I'm just remarking on what I observe from her posts, and providing guidance to how she may address it (and to be honest, with the current sentiments running wild in the meta sub-community, that might not even work).
You don't always have to answer. When you do, don't lay it on so thick.
In the first paragraph, you're emphasizing how insignificant the deleted comment is compared to the 16m questions. Don't do that. It's an unreasonable comparison, and only invites someone to respond.
Now, your "high chance to get downvoted" has everything to do with how you respond. Your opinions are often controversial. That does result in downvotes.
The point isn't that you're doing it wrong. People just disagree.
"Barraging some of my posts with dozens of comments and a flood of downvotes, doesn't actually help to effect change. It's just exhausting. What am I doing wrong?"
You're posting on meta and expect users not to pile in with opinions. The only thing wrong here, in my opinion, is what you expect users to do.
My suggestion?
Stick to facts. "I did X because Y". Don't try to convince people, and don't take it personally if someone downvotes you.
The poem is cute, but it's noise. I wouldn't be surprised if it were deleted, had a normal user posted it.
you make a good point. Expect it and not view it as a bad thing.
And a point worth noting - as mods, we try to answer the questions that address the flags we handled personally.
That's actually very good advice for everyone.
While this is a good answer. I have an issue with the point in the middle: "The point isn't that you're doing it wrong. People just disagree." While disagreement doesn't automatically mean you're doing something wrong, such consistent and widespread disagreement suggests that your actions are at odds with the community's expectations, and as a community-driven site, StackOverflow has decided that the community consensus is (usually) what we try to abide by.
@anaximander: I think Yvette is free to have her own interpretations. That doesn’t make her “wrong”.
@Cerbrus I'm not saying she's wrong, per se; I'm just saying that on a site where so much is decided by community consensus, I'm not entirely comfortable with any advice that suggests to a moderator that they needn't be worried when large numbers of people voice disagreement, or that they needn't pay much attention when that happens. It may not be a guaranteed sign that there's an issue, but it is smoke, and I'd expect any mod seeing it to pause and check whether there's a fire (which is of course why Yvette asked this question, which I applaud).
@anaximander: I'm not saying they shouldn't be worried. I'm saying they shouldn't be surprised.
@Anaximander, IMO, moderation is an exception to the rule. In the past I have flagged comments massively upvoted (by "the community") for deletion as "no longer required" (usually because they are belittling or use inappropriate metaphors, see here for examples). I wouldn't go as far as to say anti-community-consensus is correct, but neither is groupthink appropriate for moderation.
@jpp I agree that at times, moderation needs to stand against the tide rather than go with the flow, but if one particular moderator is consistently meeting opposition, I'd expect them to ask why; to just check they're still making the right calls and haven't drifted away from what we've collectively decided we want the site to be. There's a distinction between groupthink and consensus. I'm not saying to always obey community sentiment; I'm just saying to remain aware of it and not discard it out of hand. It's not the only metric as to whether a mod is doing a good job, but it is a metric.
@anaximander The problem with that is it would be defining the community as those who stick around on meta. Personally I sometimes don't visit for weeks (if not months) because of the amount of bike-shedding that happens here. And although there are aspects about the welcoming-discussion that I don't like, the amount of vitriol and "end of the world"-response it invokes on meta seems really out of proportion.
What am I doing wrong?
In the context of responding to Meta questions, not much as far as I am concerned.
The point is just that regular users are sick and tired of the vagueness around the "welcoming" drama. What are we doing wrong? What is expected of us? To what problem is the approach taken by Stack Overflow a solution, and is it an appropriate one?
When you, with your moderator diamond next to your name, step in and try to answer that, even when merely explaining it from a personal viewpoint, you will be downvoted by the many who are totally and utterly done with this nonsense and the uncertainties surrounding it. Don't take that personally.
We get it, you have your instructions. We just want to know where we stand and what we can do, while not having to fear all our actions (voting, flagging, commenting) are in vain.
This discussion was triggered by a comment being deleted
No, it was triggered by seeing many comments disappear without ever getting feedback why.
I'd add that Yvette is known for her open support of whatever the welcoming drama is about (from before the welcoming drama even started), so many people will see her response as support of what they're sick and tired about.
@AndrasDeak I never liked the blog. That's the irony. ;) CodeC thanks a good answer, it helps clarify some issues.
@Yvette I've never seen you endorse it, I believe, and I didn't mean to imply that (I've already edited my comment to clarify a bit). But you're known to stand up for minorities and hostility towards them, which the welcoming drama tries to fix (albeit in the worst ways possible^[citation needed]). I'm pretty sure this weighs in strongly with how people react to your responses in [[meta-tag:welcoming]].
@AndrasDeak yes, true. I also think that our community needs support that has been lacking. But you make a valid point.
@AndrasDeak Actually Yvette, even if she initially supported the post has clearly shown support for the established contributors and site quality and against the "over-welcoming-ness" in meta posts such as this one: https://meta.stackoverflow.com/a/371015/2036035
CodeCaster I want to apologise if I came across as frivolous or mocking. That wasn't my intention. It was not good judgement to post the poem there, but it certainly wasn't meant as malicious. I can see how it could come across as all wrong. this answer helped me to understand how it came across.
Keep in mind that you're an elected representative of this community, just as much as you're a representative of SO as a company. You have a duty to those that elected you to value their interests, not just the company. (They have their employees to do that.)
When you go around interacting with people here on meta from the perspective of fighting against this community to protect SO's interests, rather than going around trying to protect this community's interests that...isn't going to be well received by the community.
You're saying not to shoot the messenger, but the voice of SO as a company, the people that are here to express their opinions and views, are the community managers, and other employees of the company. If you don't want to get caught up in the fact that the company is putting forth a lot of policies that large portions of the community are strongly disagreeing with, then leave that job to them. To this you can look to some of your peers who simply aren't injecting themselves into many of these discussions.
"many of these discussions." > "any of these discussions." That seems a lot less stressful.
@Cerbrus Sometimes it's unavoidable, for example, the example of this question. Saying something in that question is likely important, as one of her moderation actions was questioned. But using that post as a platform to talk about the greater views and changes of the company, rather than keeping it as much about the specifics of that post, certainly isn't necessary.
Agreed, Servy. Like I said in my answer: Stick to the facts.
That's interesting. Stop defending the company. Yes, I do that. Point taken, thanks.
I hadn't thought of this at all but it's a good point. I'm sure Yvette means well (I really am) but, indeed, she is not (or should not be) a messenger of SE. Perhaps that's the problem.
@YvetteColomb I'm not saying you can't, I'm more saying that, if you choose to do so, you're not just the messenger anymore. You're making the decision yourself, to advocate for these policies, and so the consequences of doing so are yours to bear. You're not "just the messenger", and you shouldn't expect to be immune from the consequences of users not agreeing with the policies you're advocating, because you're actually advocating them personally. Obviously you, just like any other member of the community, are allowed to advocate for policy changes.
@Servy yes it's food for thought. Clearly I've been at odds. There were obvious improvements I could and did make.. But I hit a wall. I know I sound thick - but I am on some things, very literal. So it takes me time to get the hang of things in a social sense. But once I work out what's expected of me it will be ok. Does that make sense? So you're saying I should be more in tune with what people want and not so worried about the company's policies?
@YvetteColomb I'm saying you have two choices here. You can choose to advocate for these policies personally, because you personally believe in them, and if you make that choice, you should be prepared for the feedback that those opinions are going to evoke in others (just as anyone is who advocates a position on meta), or, if you so choose, you can not personally advocate for the changes, and leave it to those who are responsible for conveying the companies policies to convey the company's policies. Both options are acceptable.
@Servy ah yes that makes sense. Some things SE does I agree with some I don't. I have advocated for the community to have more effective tools in handling poor content. That's important. I also don't like how the community has become increasingly censured / given negative feedback. We need support to keep going.
@LightnessRacesinOrbit yes I think that's part of the problem. That and poor social skills. :)
For what it's worth I didn't actually see anything wrong with your answer, nor any significant backlash against it (other than some downvotes which, as you know, are not personal). But I didn't look very closely :)
@Servy It's funny when you're banging your head on a brick wall to get your point across and you realise that's what the other person has been doing all along. sounds of pennies dropping
The problem with this is that it seems to define the community as those who stick around on meta. That is too limited.
Please try to only delete comments when there's a clear reason for deletion.
Personally, I'd try something like the following:
Rude/abusive/spam -> remove
Clearly off-topic discussion -> remove
Flagged no longer needed (or whatever that's called nowadays) and has clearly been acted upon (e.g. typo fixed, answer improved) -> remove
Anything else should stay by default, unless there's a really good reason to remove it. Like CodeCaster, I've seen other threads where comments have been removed because they have been read so served their purpose. This is NOT a reason to remove comments in my opinion: if they have been read, but not acted upon, requests for clarification/improvement still have a clear purpose: they note deficiencies.
Of course, something as complex as comment moderation can't be captured this simply, but not removing on-topic comments that haven't been acted upon is very important to me.
Regarding that last bullet: Better mod tooling (specifically suggestion #2) would be incredibly helpful. The lack of context in the flag queue is a huge problem.
One of the big points in this regard is that mods are constantly saying, "comments are ephemeral" as a justification for deleting comments. And while true, that doesn't make it a justification for deleting any comment without any other reason. Yes, comments are designed to be temporary in the sense that the reason you post a comment is to get the author of the post to improve said post, and that the comment should be deleted as soon as said improvement happens. When everything is working right, they should be deleted. But they shouldn't be deleted before said improvement has been made.
The fact that comments are designed to be deleted once they have served their purpose, isn't a justification for deleting comments that have not yet served their purpose, as Erik has brought up rather well.
I've upvoted this (or whatever that's called nowadays)
@Erik: "This is NOT a reason to remove comments in my opinion: if they have been read, but not acted upon, requests for clarification/improvement still have a clear purpose: they note deficiencies." So what you're saying is that, if someone comments on my answer, notes a "deficiency" that isn't actually a problem, and I choose to ignore it, I have to live with this tumor stuck to my answer in perpetuity? Or do I have to take time out to explain to someone why the "deficiency" isn't a deficiency before their comment can be removed?
@Nicol that last one. You can't expect mods to know if a problem is real, and if it would be a real problem, it shouldn't be removed
@NicolBolas Why should someone who posts a problematic answer have the right to have every comment pointing out those problems removed? Just as you're allowed to post your own answer explaining what you think the solution is, and it won't be deleted just because others think it's wrong or bad, others are allowed to provide feedback on it and indicate ways in which they think it's problematic. You're each allowed to state your case, and readers are then able to decide what to do given each sides' information.
@Servy: Why should someone who posts a non-problematic answer have to defend it constantly?
@NicolBolas You don't. If you feel that your answer stands on its own merits, you don't need to respond at all. You just don't have the ability to unilaterally silence every one else on the topic. Whether you think it's worth either editing your answer or responding to the comment is worthwhile, to further explain why your answer is right, is up to you. And keep in mind that moderators simply can't (not won't, or shouldn't, but can't) be the judge of who's technically correct in every single disagreement on the site. They have neither the manpower nor the expertise to do so.
@Servy: If I ask a question and someone posts a bad answer to it, I can just downvote and move on. I don't have to explain why or justify it. If I post an answer and someone makes a stupid comment, how do I do the equivalent? Also, let's not forget the old saying about arguing with fools; even if you're right, it's hard for people to tell which is the fool.
@NicolBolas If you feel that the answer stands on its own merits, and already adequately covers the issues raised in the comments, then you do nothing. If not, then it's a sign that your answer doesn't adequately cover a relevant issue (even if the correct explanation of that issue isn't what the commentor thinks it is) and you should edit your answer, just not in the way the commentor expected.
@Servy: "If you feel that the answer stands on its own merits, and already adequately covers the issues raised in the comments, then you do nothing." And thus leave misinformation forever attached to my answer. Why is this a good thing?
@NicolBolas Because the alternative is allowing people to post misinformation in answers and delete all feedback indicating how it's problematic. If it were actually possible to determine if the comment were correct or not, then sure, we could just delete all misinformation and leave up all useful information. That can't be done.
@Andy Since we're talking about comments, how did the discussion regarding your bot resolve? Were you allowed to run it from your account, a sock puppet, or is it dead for now?
@LordFarquaad https://meta.stackoverflow.com/a/356509/189134 I have not updated the model to support the new "Relevant" flag yet. I also haven't been running it as consistently as I did previously. This is due to several factors but mostly because until recently we couldn't see who flagged a comment and it felt like a conflict of interest if I went in an validated a flag, even though I didn't know if my system had cast it.
I can give a reason why I downvoted your poem. Because it was a poem.
There was a serious question from someone wanting an answer/motivation for the deletion, and what stood as an answer was a poem from a moderator.
Why was my comment deleted? A common thought.
One that is often asked and answered. One we ought
to remember this one fact:
Had the comment served its purpose as a keen didact?
So don't mourn the loss of your comments, they are ephemeral (yes we sigh).
It doesn't mean you've done anything wrong,
just that they have passed their usefulness and it's time to say goodbye.
poetry courtesy of bet wagered with Jon Clements
It came across to me as inappropriate and unprofessional, a bit like a jester running around someone singing an answer in a spottish tone.
That is the image I had in my mind. And that caused me to downvote, because to me it came across as if you didn't take the user seriously enough to give a normal answer and ridiculed them for asking such a question. This may not have been your intention, but that is how it appeared to me.
FWIW, when Yvette first posted that poem, I told her pretty much the same thing in private. So you have a different mod who agrees with your opinion, and that says a lot.
that's quite an insightful explanation. I've seen other answers that looked like voted negatively on the grounds of taking the discussed issue lighter than it deserves. I think I don't vote for this reason myself but I find it fair when others do
@gnat I used to own a few fora back in the day with thriving communities, and the worst thing one can do is have an attitude like: "Look at all the work I do, I deserve respect; look at the big picture, what I did is no big deal." For the affected user it is a big deal. Being humble, reasonable and firm would be the best way to approach it. You don't always need to agree with the user; you can be on opposite sides, but always listen, acknowledge you understand the other, and realize you're a servant serving out of love for the community. My downvote was meant more as a signal: not OK!
So it's a bit like responding to a serious question with an animated gif? Except this one is upvoted. So it's ok to appear to be mocking people, it depends on who is being mocked and who is doing the mocking?
@YvetteColomb Order of events is also important on how you present your point. Your post had just the poem. Not an introductory text explaining the deletion in "normal voice" leading up to the tongue in cheek poem. If you had posted a serious answer first, then introduced the poem it would have been received more favorably. Humor is all about timing, presentation and introductions.
@Tschallacka and also it's usually funnier for the people it's not being directed at. I intended to make a light hearted response. The intent wasn't to ridicule the OP. I can now see how it could be taken that way. Your intent was clear. And the gif is ridiculing. However, as evidenced by the upvotes, it's clearly acceptable to people. So it's me that either has to like it or lump it. Constructive criticism I can cope with, ridicule I cannot. This was the only answer I downvoted.
For the record. I don't disagree with what you have said. The gif is ridiculing.
@YvetteColomb I see where you're coming from. The gif was meant to illustrate the image I had in mind. I moved it to a link. It was not my intention to ridicule you with that image.
@YvetteColomb it's not quite straightforward really. I wrote I saw such answers scored negatively, but I also saw when these were scored positively. For myself I just dropped using this criteria to predict the score (your answer you ask about here is a good example, at first I thought it will eventually get positive score) and instead use it only to explain what I observe after the fact
lol, still combative...
@canon I don't need to roll over and play dead. I'm allowed to have my opinions and say if something has upset me. Every individual on this site is allowed to voice their opinion. If mine doesn't agree with someone else's, to dismiss it as "combative" is a pejorative take on what I have to say and an unnecessary slight.
I want to thank you for this post, as I honestly didn't understand how the OP of the other question may have felt until you posted this. It has helped. I reversed my downvote and gave you an upvote. Also, seeing how you didn't have mal-intent and neither did I, it's a wake-up call about how things can be misinterpreted and also what's just not appropriate (especially for a mod). Sorry to leave so many comments. It's been a hectic few days and I've been thinking it over.
I have a mild form of autism, so I know from experience that the path to becoming "socially acceptable" can be hard with many walls to crash into. Keep growing, keep learning. It's okay to make mistakes as long as you learn. Good on you for seeking to better yourself @YvetteColomb
@YvetteColomb It sucks to be mocked in the process of being reprimanded or, worse, while asking for help. The fact that you launched this entire spectacle to combat criticism over your behavior is just ridiculous. I think you knew what you did wrong (edit history?) and you're just going through the motions in a thinly veiled attempt to validate your actions. I find it all very disingenuous. Still, that's just my take. Maybe I expect too much out of a moderator.
I think you're getting the short end of the stick here because you're:
Very visible (people on meta know you and know your name)
Very polarizing (you've had posts people really disagreed with before)
Posting at a time where people are generally weary and negative towards staff and mods
None of those are really your fault, I think.
The "shoot the messenger" thing is something that you have to come to grips with. With that diamond next to your name, people will ascribe officiality to what you post, and the general discontent against staff and moderators will manifest on your posts, too.
Thanks Magisch (happy cactus). Yes, that diamond changes the dynamics.
I think you're getting a lot more attention/flak than other moderators because you're a highly polarizing figure, as made evident by a few of your earlier posts and comments before you became a moderator, and a handful afterwards.
It feels to me like issues which crop up which involve you on Meta have a dual problem - you're both the catalyst and the solution, and in my mind, an effective moderator cannot see themselves in both positions at the same time. I get it; people hate to see their comments deleted or for whatever reason dislike poetry, but I would recommend following this advice from here on out.
...As it persisted, if the other mods were happy with my actions and attitude, I'd start ignoring the posts. There becomes a point, when trying to reach an agreement becomes too difficult, and it's better to walk away and focus on flags.
Haters gonna hate.
True words. That's why I'm attempting to make peace with the community and show true effort. Can you elaborate on this point "you're both the catalyst and the solution"? Also I do understand the point of being snug in the mod den and not worrying about meta, but that's not solving the heart of the issue. I want a working and productive relationship with the community. So people can feel I'm reliable and someone they can come to if they need.
Sure - you're the person that may handle a mod flag and come to Meta to defend it in response to a Meta post about it. However (and this is what I've simply observed since you took the post), there is a "because it's you" attitude towards flags you take action on and posts you author. People/denizens are well within their right to disagree, but there are some people you may not be able to reach because of that mental stigma. I'd recommend taking a page out of Martijn's book on this one, since it feels like his Meta activity fell off a cliff after he was elected.
thanks for the clarification and the answer. I'm absorbing everything. Time also helps stigmas (if the cause of the stigma is removed).
Here's my take on things: in your position as a moderator, no matter how eloquently you communicate your rationale for an action you took, you're likely to get downvotes if people disagree with the action. When posting about an action that you took, you aren't just "the messenger". It may not have anything to do with the way you're expressing yourself - at the end of the day, what you're writing about is something you did that may have been unpopular, and people are likely using their votes on Meta to signal that feedback.
And I'd consider this - this is just one person's perspective, but I feel like I see posts where you've taken an action that I'd characterize as not necessarily consistent with the way the community is used to other mods behaving, noticeably more often than I do with other specific moderators. Now, that doesn't mean that the actions are necessarily wrong - but when people become accustomed to certain assumptions about how the moderators act, broadly, and then those assumptions start to have exceptions, it can breed resentment -- even if the action you took isn't controversial on its face, people just don't like being hit with moderator action that they feel that they couldn't have anticipated.
Not to overstep the scope of this question, but since you're seeking advice of sorts here, my personal suggestion to you would be to spend a bit more time talking to other moderators about some of the actions you've taken that haven't been received the way you expected. Ask how they would have handled them, and explain why you took the actions that you chose - maybe you'll learn from them, maybe they'll learn from you. If you believe that you're bringing a better or more nuanced way of looking at things that other moderators are applying, then advocate for them to adopt the same mindset. Because the more consistency with which the rules are applied, the less likely you are to be hit with these surprise waves of backlash on Meta.
You can't rule out that Yvette was elected by those users who felt a different take on moderation was needed. She didn't pop up out of nowhere; users kind of knew her reputation. Suggesting that she needs to blend in with the rest might relieve the friction, but it does a disservice to those who elected her expecting something would change. Things do change now with a gentle push, and that is met with reluctance to change. If anything, it might need more push, not less.
I'm not necessarily suggesting that she needs to learn how the others moderate and just act like them - that's why I emphasized that "maybe you'll learn from them, maybe they'll learn from you". But if she brings a new take on moderation only by solely moderating differently than the rest, and doesn't make an effort to pull others along to her way of thinking, she'll always stand out. But I appreciate the critique, and I made a small edit to clarify my intent.
@rene I think you might be overestimating the amount of research that the average voter does in mod elections.
And to be clear - I personally voted for Yvette in the last election, and I do think that the moderation on Stack Overflow could use a new direction, broadly speaking. I just feel concerned that there may be approaches that better socialize a new moderation perspective and work towards normalizing it, rather than having it stand out as an anomaly that may frustrate users.
@Servy I know ... I'll count on that when the time is right ...
@SamHanley fair enough. The edits are appreciated.
I'm going to answer because I haven't seen anyone put this and I keep running into it on meta:
People on meta still downvote to say "I don't agree with this post"
It's not like the main SO, where a downvote means something is wrong, you need to re-format this post, it's low-quality, or anything like that - it just says "I don't agree with the message". The emotional, or at least rational, "what did I do wrong?" response we see in even a seasoned member/moderator does kinda reinforce the "we should look at our voting policies over comments" point, but I digress.
What am I doing wrong?
First, I have actually agreed with many things that Yvette has said during this whole welcoming business because I, like many, am afraid that this focus on the negative will turn users (answerers) away - I know it already has with me - due to this focus on comments instead of other areas in need, and a lot of Yvette's posts seem to try to convey that.
So you have one post, one of many on a meta post, which initially had points I disagreed with, and a downvote is an appropriate way of saying "I disagree". The update changed the perspective of the post, so the downvote is gone, but I still didn't really agree with the idea of posting it as a joke, so no upvote.
a comment being deleted
This is the odd part to me: the current attitude and fear over comments makes it so I comment less. There have been many "downvoters please comment" requests (I'm picking the nicer way it is said, not the common one) that I've had to ignore. With other posts, where I would have given information on improving the post to keep it from being put on hold, I instead just downvote and/or flag to avoid the negativity or repercussions of posting a comment.
And that's what happened here: I did not comment, because I hold back more and more with comments on SO. I catch myself sometimes and remind myself it's not about SE (the company), it's about helping people (to me), and make myself do it - I did not here. If I had, I would have simply said: I don't agree with the tone of this post, but the decision was fine.
I hope you stay the course as I, for one am happy with your efforts as moderator.
You were elected at a difficult time for SO, but to quote one of my favorite authors:
"So do I," said Gandalf, "and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us.”
Nit: voting is and has very much been established as being different on Meta.
yes, and I'm sure people (at least many of the people involved here and certainly a mod) know this but I'm talking about how it can still feel like "I disagree" is a negative just because of how people perceive downvotes. I disagree (downvote) can simply mean "I don't agree" and it can be hard to remember that @Makoto
@pnuts: I'm sorry, I didn't realize that "perceived usefulness" was that much different from agreement. If I don't think a discussion is useful, I'm tacitly disagreeing with it. If I don't think a support question is warranted, I'm tacitly disagreeing with the request. If I don't think a response to either of those is particularly useful, I'm tacitly disagreeing with it.
...cont: Note, I still regularly see people asking for comments due to downvotes who do not seem to understand that fact, which requires searching meta or actually reading the tour. In this case, though, I just feel like this post is a similar type of question from a mod (who, as an elected position, wants and deserves feedback), and my answer is "because I disagreed". Which I'll give, because that could have been the one and only reason others downvoted (i.e. not personal, just ehh... I didn't agree with this one).
I might just delete this soon anyway, as I have had a fairly negative view of SO in recent months, and that might have too much influence on my view of moderators/staff and anything even dealing with the "welcome wagon" for me to really post anything objective.
I'm a mod and I upvoted this answer.
@pnuts The help center is descriptive, not prescriptive. It's also just...not very good in its description. It's not telling people that they should be voting based on their agreement with a feature request, it's saying that many people on the site do that. And it's wrong to say that it only applies to feature requests, it tends to apply to any post putting forth a proposal. Tags have nothing to do with it.
@pnuts Yeah, and clearly people use their votes to reflect their opinions of proposals regardless of how they're tagged. That makes it a poor description of how people vote. If it accurately described how people vote, it would be a good description (or at least a better one).
@pnuts Lots of people feel that it's desirable for people to be able to express their opinion on a proposal by voting, rather than requiring everyone to post an answer or comment just to say whether they support it or not. Much less noise that way. If you think a given proposal is useful, you're free to vote accordingly, just like everyone else does. But regardless, that you wish people voted differently doesn't mean someone describing how people do vote is wrong.
@pnuts Again, it's not a statement on how people are required to vote. It's a statement trying to inform people not familiar with meta how other's do vote. As mentioned before, it's descriptive, not prescriptive. And it does a bad job of it, because it doesn't accurately describe how people actually vote.
| common-pile/stackexchange_filtered |
Can a creature levitate (or be levitated) out of a grapple?
The 2nd-level Sor/Wiz spell levitate [trans] (Player's Handbook 248), in part, says
Levitate allows you to move yourself, another creature, or an object up and down as you wish. A creature must be willing to be levitated, and an object must be unattended or possessed by a willing creature. You can mentally direct the recipient to move up or down as much as 20 feet each round; doing so is a move action. You cannot move the recipient horizontally, but the recipient could clamber along the face of a cliff, for example, or push against a ceiling to move laterally (generally at half its base land speed).
Can a Medium creature that's grappled by a Medium opponent be automatically liberated from the grapple by using on itself a levitation effect or by having an ally use on it a levitation effect?
Note: A low-level totemist that's fond of riding animals is deciding whether to bind to his totem chakra the Magic of Incarnum soulmeld blink shirt (60) that allows the wearer to use an effect like dimension door except only on himself or the Dragon #350 soulmeld gravorg tail (87) that allows the wearer to use an effect like the spell levitate except that it can affect all allies within 10 ft. of the totemist and that subjects can be levitated 20 ft. (+10 ft./essentia) per round. Sure, the totemist likes himself alive, but he likes himself and his mount alive, too! Note that I'm aware that the levitate effect, if possible, will likely provoke attacks of opportunity from the grappler for leaving its threatened area, but, y'know, that's what the feat Mounted Combat is for. Also, I'm aware of this inconclusive 2013 Pathfinder thread on this topic that, while for a different system (with substantially different grappling rules!), might nonetheless provide respondents with food for thought.
I'm not aware of some rules about this kind of situation and I agree with those who said is up to the DM.
In my opinion, one way you could handle it is using a method similar to freeing someone from quicksand (Sandstorm manual), just to quote an official rule. I would have the grappler (the one without levitate cast on him) make a Strength check (DC up to the DM, tied to the DC of the spell OR the weight of the creature); if he fails, the other slips from his hands and levitates out of the grapple. If he wins, I would have the levitating one make another Strength check to see if he can "lift" the opponent. If he fails, he stays on the ground; if he wins, they both lift.
That's just my thought and how I would have handled it.
First, you can't cast the spell if your hands aren't free... If someone else is casting it on you or you are casting it on an ally, you risk levitating whatever may be grappling you as well. So I wouldn't say it's an automatic liberation, but it could work towards that purpose.
(Not my downvote) Note that the usage that the question hangs on is not the casting of levitate, but the usage of an ability that replicates the spell. Thus the casting while grappled is not especially pertinent.
Despite the false reference to a free hand, this answer seems pretty much right to me in answering what is really asked. I think the most reasonable outcome would be the grappling creature hanging on to your levitated ally, still grappling him, even if it's some kind of irresistible force.
Based on the quote you gave us;
A creature must be willing to be levitated, and an object must be unattended or possessed by a willing creature.
While the second mentions an object, the implication seems to be that you can't 'snatch' something from someone's grip using levitation. To that end, it seems pretty clear that you couldn't escape a grapple by being the subject of a levitation spell unless the grappler was willing to cooperate.
If you can't escape the grapple, I'd say it would fail, as you couldn't levitate the grappler along with the grapplee, again unless the grappler was "willing to be levitated."
D&D 3.5e has a rather strict creature/object divide. While it's accurate to say that a levitation effect can't snatch an object from an unwilling creature's grasp, lifting a willing creature using levitation really is a different—and, in this case, more puzzling—scenario.
@HeyICanChan I don't see Isaac's argument as saying that the rules are explicit, but that levitate can't lift objects that 'resist' (due to being attended). Following that 'resistance' train of thought, it's reasonable to rule that levitate can't lift creatures that are resisted either. To an extent, the 'willing' clause is the same thing; all a creature has to do to not be levitated is not want to be. I don't think that there is an explicit or more strongly implicit rule to be found regarding this; it will most likely come down to individual DM ruling.
@Chemus It's unfortunate that the levitate spell doesn't say what happens when it's used against an unattended yet secured object. I mean, I guess the DM must decide either that levitate is some kind of irresistible force or that levitate is only usable on the already unrestrained. Probably—as a 2nd-level spell—the latter's a better choice than the former.
Note that if levitate is used on a willing ally in a grapple, that might work, but then you have to figure out what happens to the grappling unwilling enemy that wasn't targeted. Sounds like it would be an external attempt to break a grapple, which aside from figuring out how much STR a levitate translates to, starts addressing aspects of a grapple that the rules never seemed to have envisioned.
What would it take to have NXP and CO produce matched transistors in a DIP6 package?
What would it take to have NXP and CO produce matched transistors, like BCM847 & BCM857 in a DIP6 package?
The SOT23-6 is so small that it's pretty hard to solder with a hobby soldering station that has a 0.05-inch-wide tip.
Edit: I found substitutes:
NSS40302
NSS40301
NSS40300
Which are SOIC8 and available again after a chip shortage.
A couple of barrels of (twenty) dollar bills ought to do it.
Just solder the flippin' things. SOT-23-6 can easily be done by hand. I use an iron with a 1mm tip and have no trouble with parts that size (and smaller.)
A hell of a lot of money. I'm sure they'll do it if you pay them enough, but "enough" would likely be on the order of $1M or so. Far cheaper to practice soldering and/or get a more suitable soldering station.
@JRE, they could use the same silicon part, but use a different package.
I'm aware I could use solder wick to remove the excess.
@NaturalDemon You underestimate how difficult it is to package up semiconductors. You're right, using existing dice would drop the price from millions to a hundred thousand or so, but it would still be a major undertaking. Not something they'll do just because you asked; something they'll do because you asked and agreed to buy a few million parts.
It's not like there's a guy in the back room, stuffing little pieces of silicon into little black housings, one by one, on demand. When they do it, they do it by the righteous boatload - tens of thousands to millions on a production line made for high volume.
Then, too, that's just your one part covered. What about the other parts in your circuit? All the op-amps in SOIC, or the single transistors in SOT? You'll still have to handle them.
If NXP would answer you (and they won't, I know, I've worked there, they're not interested in small customers) then they would refer you to Nexperia (split-off from NXP) as that is now the manufacturer of discrete transistors. Also Nexperia will not talk to you. I know, I'm a consultant there, they're only interested in very large customers.
@JRE the SOT23-3 has bigger leads and looks a lot more manageable with a soldering iron; you can see the pins, compared to a SOT23-6 or SOT23-5. There also exists SOT23-8, but that would be a challenge. If you have a design that's really good and past the stage of prototyping, you can outsource the population of the boards. I could mount the SOT23-6 on a DIP8-sized PCB and stick it on the back, but then it's almost as big as 2 regular transistors.
@Bimpelrekkie Yeah, I know; they told me to contact Nexperia, and I did get a reply: they refused. But the parts still appeared on the market later on; a year or so later I accidentally stumbled over them (Mini Logic packages, which have a single CD4013 on a chip instead of 2).
Matched transistors in a single package exist, go look for them on mouser etc. There are few of these and that's because they're only needed in special applications. These days, almost everyone would simply use an opamp. You might want to explain why you need matched transistors in a new question where you ask for an alternative design solution.
@Bimpelrekkie Exponential converter for synthesizer, current mirrors for a VCA.
I know the Renesas HAF3046/HAF3096 exist, but they are very, very expensive: €8 and upwards.
but they are very very expensive Yeah, because they're only for niche applications, almost no one needs them. I'm thinking small SMD BJTs mounted close together would be "good enough" but difficult to solder (use hot air soldering!). Alternative: 2 through-hole transistors in a metal can (BC108?) or just TO92 plastic housing and thermally couple them. Also consider adding a small emitter resistor to even out some mismatch. There's also this one-NPN logamp: https://www.electronicshub.org/operational-amplifier-applications/
@Bimpelrekkie, yeah, I could epoxy them together, but then I have to match them manually, testing some 100 transistors.
https://web.archive.org/web/20151002134800/http://home.comcast.net/~ijfritz/MiscProj/transmat001.pdf + the cost of that PCB, switches, precision resistors.
i have to match them manually, testing some 100 transistors It all depends on how "good" your matching needs to be. If you can allow 10% mismatch, a couple of emitter resistors might be all you need for a current mirror. For 1% matching: just drop more voltage across the emitter resistors. I would try to design the circuit such that the mismatch is not an issue; if that's not possible, I would consider calibration. Matched components are the last thing I would consider, as that gives all the issues you now have.
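As a rough illustration of the point above, the standard first-order estimate for mirror mismatch with emitter degeneration is dI/I ≈ dVbe / (VT + IE·RE). The numbers below (2 mV Vbe mismatch, 1 mA mirror current) are assumed example values, not measurements:

```python
# Back-of-envelope: current-mirror mismatch vs. emitter degeneration,
# using the first-order estimate dI/I ~= dVbe / (VT + IE*RE).
VT = 0.026    # thermal voltage at room temperature, in volts
dVbe = 0.002  # assumed 2 mV Vbe mismatch between the two transistors
IE = 1e-3     # assumed 1 mA mirror current

for RE in (0, 10, 100, 1000):  # emitter resistor in ohms
    mismatch = dVbe / (VT + IE * RE)
    print(f"RE = {RE:4d} ohm -> mismatch ~ {mismatch * 100:.1f} %")
```

With no degeneration this gives about 7.7 % mismatch; dropping about 1 V across RE (1 kΩ at 1 mA) brings it down to roughly 0.2 %, in line with the 10 % vs. 1 % figures mentioned above.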
@Theodore, maybe some day I'll have the chance and resources to approach a manufacturer; I have found one that also makes PCBs and populates them for the fastest and most powerful electric cars on the planet. I just want to know the result beforehand. I do have BCM847 and BCM857 here; it's just a bit scary that I might fail on that part and have to throw away an entire PCB.
::Boggle:: If you mess up soldering your BCM847 or BCM857, then you just remove them from the board and try again. They are as easy to remove as they are to solder. Easier, even.
@JRE, mounting and removing is easy, agreed; I'm just worried about having a short circuit and not seeing it. I still wish that they (NXP, Nexperia, Texas Instruments and co) also had DIYers, prototypers, and hobbyists in mind. I'm sure they would sell at least 10,000 if not more matched transistors around the globe to these people.
@JRE even the SOIC format would be better than SOT23-6.
I am a DIYer and hobbyist. I'd take the SOT-23-6 over the DIP6 any day. SMD is worlds easier to handle, solder, and remove. I only use a plain, standard soldering iron. No reflow oven or hot air tools. Works fine, no problems.
A few things that come to mind that would be certainly cheaper and easier:
Buying a reflow oven or fine tip soldering iron. Doesn't need to be super fancy or top of the line, but should at least be decent quality. SOT-23-6 is not a terribly difficult package as long as you have decent equipment and a decent footprint on your PCB.
Getting a PCBA manufacturer (or a friend with good equipment) to fabricate breakout boards with the part and some headers populated. If you absolutely need a DIP for prototyping, this is a simple way of using one or two surface mount parts in an otherwise through-hole design. The breakout boards themselves are fairly cheap, but a PCBA assembler may have requirements like panelization or a minimum order quantity.
Having your whole design made by a PCBA fabricator. You would send Gerber files and pick and place instructions and they would do assembly of the whole board. For this case, committing to SMT throughout may actually save costs because of the ease with which surface mount parts can be picked and placed with automated equipment. Since you're using SOT-23-6 packages and not something extremely difficult like WL-CSP, you don't need the most advanced assembly capabilities and almost any surface mount assembly service should suffice.
LIME Shows Very High Probability Score, But Breakdown Has All Negative Factors
I'm using LIME to break down the observation for each row and am taking a look at the positive and negative factors that contribute to the probability outputted.
I filtered my dataset down to only records with a 95% or higher probability score, but when I look at the factors, all of them have a negative weight on the prediction. How can an observation have a 95% probability to be class 1 if all the factors are considered negative?
Fun little question (+1). I think what you report is a manifestation of some of the shortcomings of the overall "local surrogate models" explainer methodology; please see my answer below for more details. :)
While somewhat unlikely, this phenomenon can indeed happen. The results from the local explainer trained by LIME can disagree (on occasion substantially) with the results of the global model. It is probably worth considering different kernel widths, as well as checking the goodness-of-fit of the LIME explainer.
More details: LIME is training a model in the "neighbourhood" of the point we are trying to explain. There is no need for the global model (i.e. our overall ML model) and for our local model (i.e. the explainer trained by LIME) to be outputting the same results. They are two potentially very different models that are trained on different datasets. (The LIME explainer is trained on a perturbed version of the data close to our point of interest, the overall ML model uses all our training data.)
For this case in particular, as the focus is on sample instances with very high probability, it is likely that all their neighbouring instances are of the same class too; the intercept (assuming we are training a LASSO regression model as an explainer) will be reasonably high, and therefore most of the features' factors will have negative weight.
I would suggest trying different kernel widths so the neighbourhood size is varied. Finally, do check how well the LIME explainer fits the overall examples; while it is unreasonable to expect great performance from it, it may happen that the explainer under-fits so substantially that any insights are misleading at best.
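To make the intercept effect concrete, here is a toy local-surrogate fit in plain NumPy. The "global model", the kernel, and all numbers are illustrative assumptions; this is a sketch of the idea, not LIME's actual implementation:

```python
import numpy as np

# Toy "global model": very high probability near x0, with every feature
# pushing the score slightly *down* (illustrative assumption).
def global_model(X):
    return 0.99 - 0.01 * X.sum(axis=1)

rng = np.random.default_rng(0)
x0 = np.zeros(3)                                # instance to explain
Z = x0 + rng.normal(scale=0.5, size=(500, 3))   # perturbed neighbourhood
y = global_model(Z)

# Exponential kernel on distance to x0, similar to LIME's default
kernel_width = 0.75 * np.sqrt(3)
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / kernel_width ** 2)

# Weighted least squares surrogate: y ~ intercept + Z @ coefs
A = np.hstack([np.ones((len(Z), 1)), Z])
sw = np.sqrt(w)[:, None]
beta, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * y, rcond=None)
intercept, coefs = beta[0], beta[1:]

print(round(intercept, 3))  # high baseline, ~0.99
print(coefs)                # all negative, ~-0.01 each
```

The surrogate recovers a high intercept (the ~0.99 baseline probability of the neighbourhood) while every individual feature weight comes out negative, which is exactly the pattern described in the question.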
Thanks for the in-depth explanation. What is a good range for the kernel_width to experiment with?
Cool, I am glad I could help. By default, the kernel width is 0.75 times the square root of the number of features. I would use increments of that (say [1.0, 1.5, 2.5, ...], etc.); unfortunately there is no hard advice here. The wider our kernel, the more "global" our LIME explainer will be.
@usεr11852 - I have a Lime related problem. Would you be interested to help me with it? https://stats.stackexchange.com/questions/569621/does-lime-score-matter-for-continuous-variable-discretozation
@TheGreat Sure, I will check it in the following day or two.
System.Threading.ThreadAbortException caused by Response.Redirect
In my application I am calling a WebMethod from JavaScript, where I am trying to redirect to some page:
[WebMethod]
public string Logout() {
if (User.Identity.IsAuthenticated) {
HttpContext.Current.Response.Redirect("~/Pages/Logout.aspx");
}
return "";
}
The aspx page:
<input onclick="callLogout();" id="btn" type="button" value="Click Me" />
<asp:ScriptManager ID="ScriptManager" runat="server">
<Services>
<asp:ServiceReference Path="~/WebServices/EMSWebService.asmx" />
</Services>
</asp:ScriptManager>
<script type="text/javascript">
function callLogout() {
EMSApplication.Web.WebServices.EMSWebService.Logout(OnComplete, OnError);
}
function OnComplete(result) {
alert(result);
}
function OnError(result) {
alert(result.get_message());
}
</script>
And I am getting:
A first chance exception of type
'System.Threading.ThreadAbortException' occurred in mscorlib.dll
An exception of type 'System.Threading.ThreadAbortException' occurred in
mscorlib.dll but was not handled in user code
in my VS2010's Output window.
Why am I getting this exception, and how can I resolve it?
Don't just catch the exception. Use the overload of Redirect which takes a boolean. Pass false to indicate that you don't want the thread aborted.
I'll help you to help yourself: Here is the documentation: http://msdn.microsoft.com/en-us/library/a8wa7sdt.aspx
please don't use this approach.... wrong answer because you will still get thread abort and... your code after the redirect still runs!!!
@Yuki not sure what you are talking about. You can specify that you don't want a thread abort. That's the point of this answer. You of course need to ensure that you end processing yourself now that you don't abort the thread.
The ideal way to redirect is to call Response.Redirect(someUrl, false) and then call CompleteRequest()
Passing false to Response.Redirect(...) will prevent the ThreadAbortException from being raised, however it is still important to end the page lifecycle by calling CompleteRequest().
When you use this method in a page handler to terminate a request for
one page and start a new request for another page, set endResponse to
false and then call the CompleteRequest() method. If you specify true
for the endResponse parameter, this method calls the End method for
the original request, which throws a ThreadAbortException exception
when it completes. This exception has a detrimental effect on Web
application performance, which is why passing false for the
endResponse parameter is recommended. For more information, see the
End method.
Note that when the Response.Redirect(...) method is called, a new thread is spawned with a brand new page lifecycle to handle the new redirected response. When the new response finishes, it calls Response.End() on the original response which eventually throws a ThreadAbortException and raises the EndRequest event. If you prevent Response.End() from being called (by passing false to Response.Redirect) then you need to call CompleteRequest() which:
Causes ASP.NET to bypass all events and filtering in the HTTP pipeline
chain of execution and directly execute the EndRequest event.
Word of Caution:
If you call Response.Redirect(someUrl, false) allowing code continue to execute, you may want to change your code such that the processing gracefully stops. Sometimes this is as easy as adding a return to a void method call. However, if you are in a deep call stack, this is much trickier and if you don't want more code executing it might be easier to pass true like Response.Redirect(someUrl, true) and purposely expect the ThreadAbortException - which by the way isn't a bad thing, you should expect it during Response.Redirect(...) and Server.Transfer(...) calls.
ThreadAbortException Cannot Be Stopped By Catching the Exception
The ThreadAbortException is not your ordinary exception. Even if you wrap your code in a try catch block, the ThreadAbortException will immediately be raised after the finally clause.
When a call is made to the Abort method to destroy a thread, the
common language runtime throws a ThreadAbortException.
ThreadAbortException is a special exception that can be caught, but it
will automatically be raised again at the end of the catch block. When
this exception is raised, the runtime executes all the finally blocks
before ending the thread. Because the thread can do an unbounded
computation in the finally blocks or call Thread.ResetAbort to cancel
the abort, there is no guarantee that the thread will ever end. If you
want to wait until the aborted thread has ended, you can call the
Thread.Join method. Join is a blocking call that does not return until
the thread actually stops executing.
Often what I've seen in code is a try catch block around Response.Redirect that will log exceptions that are not ThreadAbortExceptions (since you expect those). Example:
private void SomeMethod()
{
try
{
// Do some logic stuff
...
if (someCondition)
{
Response.Redirect("ThatOneUrl.aspx", true);
}
}
catch (ThreadAbortException)
{
// Do nothing.
// No need to log exception when we expect ThreadAbortException
}
catch (Exception ex)
{
// Looks like something blew up, so we want to record it.
someLogger.Log(ex);
}
}
Good answer. Thanks for including the Word of Caution which points out (as is usual in this game) that there's not always one "right" way of doing things.
This is a standard exception thrown by Response.Redirect; you will only catch it if you have an explicit try-catch around the block that does the redirect. ASP.NET throws it upon redirection so that no code is executed after you redirect.
A solution is to add an empty catch to swallow this particular exception
try {
...
Response.Redirect( ... );
}
catch ( ThreadAbortException ) { } // swallow the exception
catch ( Exception ex ) {
// an existing catch clause
}
Note the ThreadAbortException can also happen if you do a Response.End within a try-catch block.
How to use the console integration toolkit to determine if user is chat capable?
We have a custom button that uses the console integration toolkit's sforce.console.chat.getDetailsByPrimaryTabId method to determine if the button is being clicked in the context of a chat, or not.
chat.getDetailsByPrimaryTabId(response.id, function(chatSummary) {
console.log('Got a Response!')
if (!chatSummary.success)
sforce.console.openSubtab(...) // Go Here
else
sforce.console.openSubtab(...) // Go There
})
For profiles associated with Live Agent (or technically Omni-Channel), it works great and the callback does one thing when the current primary tab is a chat, and something else when it isn't.
But for profiles not associated with presence statuses, service channels, routing configurations, etc. (e.g. System Admin), getDetailsByPrimaryTabId never executes the callback. In fact, this seems to be true of every other method in the sforce.console.chat object. None of them actually seems to finish executing.
The object itself, sforce.console.chat, is defined.
Is there any way of determining if the current user is capable of chat while avoiding the use of sforce.console.chat callbacks? Then I can modify the above code to avoid the callback request altogether and just "Go There" immediately.
Thanks,
Hi Mike, I am trying to use the same method: when the chat is accepted, I am trying to fetch the value sent by addCustomDetail from the primary tab. Can you help me with how to use sforce.console.chat.getDetailsByPrimaryTabId? Thank you, I really appreciate your help
Assign correct Qualification to all rows associated with Client per Month - Python / Pandas
I need to assign the correct value (Qualified or Not Qualified) to all rows associated with a Client per Month, if a condition is met for all of the associated rows.
test_data = {'Client Id': [1,1,1,1,1,1,1,1,
2,2,2,2,2,2,2,2],
'Client Name': ['Tom Holland', 'Tom Holland', 'Tom Holland', 'Tom Holland',
'Tom Holland', 'Tom Holland', 'Tom Holland', 'Tom Holland',
'Brad Pitt', 'Brad Pitt', 'Brad Pitt', 'Brad Pitt',
'Brad Pitt', 'Brad Pitt', 'Brad Pitt', 'Brad Pitt',],
'Week': ['01/03/2022 - 01/09/2022', '01/10/2022 - 01/16/2022',
'01/17/2022 - 01/23/2022', '01/24/2022 - 01/30/2022',
'01/31/2022 - 02/06/2022', '02/07/2022 - 02/13/2022',
'02/14/2022 - 02/20/2022','02/21/2022 - 02/27/2022',
'01/03/2022 - 01/09/2022', '01/10/2022 - 01/16/2022',
'01/17/2022 - 01/23/2022', '01/24/2022 - 01/30/2022',
'01/31/2022 - 02/06/2022', '02/07/2022 - 02/13/2022',
'02/14/2022 - 02/20/2022','02/21/2022 - 02/27/2022'],
'Month': ['January', 'January', 'January', 'January',
"February", "February", "February", "February",
'January', 'January', 'January', 'January',
"February", "February", "February", "February"],
'Year': [2022, 2022, 2022, 2022, 2022, 2022, 2022, 2022,
2022, 2022, 2022, 2022, 2022, 2022, 2022, 2022],
'Payment Status': ["Pending", "Paid in Full", "Didn't Paid", "Paid in Full",
"Paid in Full", "Paid in Full", "Paid in Full",
"Paid in Full", "Paid in Full", "Paid in Full",
"Paid in Full", "Paid in Full", "Paid in Full",
"Paid in Full", "Paid in Full", "Pending"]}
test_df = pd.DataFrame(data=test_data)
Data:
Client Id Client Name Week Month Year Payment Status
1 Tom Holland 01/03/2022 - 01/09/2022 January 2022 Pending
1 Tom Holland 01/10/2022 - 01/16/2022 January 2022 Paid in Full
1 Tom Holland 01/17/2022 - 01/23/2022 January 2022 Didn't Paid
1 Tom Holland 01/24/2022 - 01/30/2022 January 2022 Paid in Full
1 Tom Holland 01/31/2022 - 02/06/2022 February 2022 Paid in Full
1 Tom Holland 02/07/2022 - 02/13/2022 February 2022 Paid in Full
1 Tom Holland 02/14/2022 - 02/20/2022 February 2022 Paid in Full
1 Tom Holland 02/21/2022 - 02/27/2022 February 2022 Paid in Full
2 Brad Pitt 01/03/2022 - 01/09/2022 January 2022 Paid in Full
2 Brad Pitt 01/10/2022 - 01/16/2022 January 2022 Paid in Full
2 Brad Pitt 01/17/2022 - 01/23/2022 January 2022 Paid in Full
2 Brad Pitt 01/24/2022 - 01/30/2022 January 2022 Paid in Full
2 Brad Pitt 01/31/2022 - 02/06/2022 February 2022 Paid in Full
2 Brad Pitt 02/07/2022 - 02/13/2022 February 2022 Paid in Full
2 Brad Pitt 02/14/2022 - 02/20/2022 February 2022 Paid in Full
2 Brad Pitt 02/21/2022 - 02/27/2022 February 2022 Pending
If every row (Week) associated with a Client in a given Month is Paid in Full, then Qualified is assigned to all rows (Weeks) associated with that Client for that Month. If even 1 Week is not Paid in Full (e.g. 3 weeks can be Paid in Full, but 1 is Didn't Paid or Pending), then all rows are assigned Not Qualified.
Desired output:
Client Id Client Name Week Month Year Payment Status Qualification
1 Tom Holland 01/03/2022 - 01/09/2022 January 2022 Pending Not Qualified
1 Tom Holland 01/10/2022 - 01/16/2022 January 2022 Paid in Full Not Qualified
1 Tom Holland 01/17/2022 - 01/23/2022 January 2022 Didn't Paid Not Qualified
1 Tom Holland 01/24/2022 - 01/30/2022 January 2022 Paid in Full Not Qualified
1 Tom Holland 01/31/2022 - 02/06/2022 February 2022 Paid in Full Qualified
1 Tom Holland 02/07/2022 - 02/13/2022 February 2022 Paid in Full Qualified
1 Tom Holland 02/14/2022 - 02/20/2022 February 2022 Paid in Full Qualified
1 Tom Holland 02/21/2022 - 02/27/2022 February 2022 Paid in Full Qualified
2 Brad Pitt 01/03/2022 - 01/09/2022 January 2022 Paid in Full Qualified
2 Brad Pitt 01/10/2022 - 01/16/2022 January 2022 Paid in Full Qualified
2 Brad Pitt 01/17/2022 - 01/23/2022 January 2022 Paid in Full Qualified
2 Brad Pitt 01/24/2022 - 01/30/2022 January 2022 Paid in Full Qualified
2 Brad Pitt 01/31/2022 - 02/06/2022 February 2022 Paid in Full Not Qualified
2 Brad Pitt 02/07/2022 - 02/13/2022 February 2022 Paid in Full Not Qualified
2 Brad Pitt 02/14/2022 - 02/20/2022 February 2022 Paid in Full Not Qualified
2 Brad Pitt 02/21/2022 - 02/27/2022 February 2022 Pending Not Qualified
I don't know how to achieve this. I thought about using value_counts in a loop:
for name, month in zip(list(test_df["Client Name"].unique()), list(test_df["Month"])):
print(test_df[(test_df["Client Name"] == name) & (test_df["Month"] == month)].value_counts(["Payment Status"]))
The key is to create a boolean mask: True if Payment Status is "Paid in Full", else False. Now you can group by Client Id, Month AND Year to check if all values are True. Use transform to broadcast the result to every row of the group. Finally, replace True/False by their respective values.
The boolean mask is created dynamically by adding a new column is_paid to the dataframe:
df['Qualification'] = (
df.assign(is_paid=df['Payment Status'] == 'Paid in Full')
.groupby(['Client Id', 'Month', 'Year'])['is_paid']
.transform('all').replace({True: 'Qualified', False: 'Not Qualified'})
)
print(df)
# Output
Client Id Client Name Week Month Year Payment Status Qualification
0 1 Tom Holland 01/03/2022 - 01/09/2022 January 2022 Pending Not Qualified
1 1 Tom Holland 01/10/2022 - 01/16/2022 January 2022 Paid in Full Not Qualified
2 1 Tom Holland 01/17/2022 - 01/23/2022 January 2022 Didn't Paid Not Qualified
3 1 Tom Holland 01/24/2022 - 01/30/2022 January 2022 Paid in Full Not Qualified
4 1 Tom Holland 01/31/2022 - 02/06/2022 February 2022 Paid in Full Qualified
5 1 Tom Holland 02/07/2022 - 02/13/2022 February 2022 Paid in Full Qualified
6 1 Tom Holland 02/14/2022 - 02/20/2022 February 2022 Paid in Full Qualified
7 1 Tom Holland 02/21/2022 - 02/27/2022 February 2022 Paid in Full Qualified
8 2 Brad Pitt 01/03/2022 - 01/09/2022 January 2022 Paid in Full Qualified
9 2 Brad Pitt 01/10/2022 - 01/16/2022 January 2022 Paid in Full Qualified
10 2 Brad Pitt 01/17/2022 - 01/23/2022 January 2022 Paid in Full Qualified
11 2 Brad Pitt 01/24/2022 - 01/30/2022 January 2022 Paid in Full Qualified
12 2 Brad Pitt 01/31/2022 - 02/06/2022 February 2022 Paid in Full Not Qualified
13 2 Brad Pitt 02/07/2022 - 02/13/2022 February 2022 Paid in Full Not Qualified
14 2 Brad Pitt 02/14/2022 - 02/20/2022 February 2022 Paid in Full Not Qualified
15 2 Brad Pitt 02/21/2022 - 02/27/2022 February 2022 Pending Not Qualified
First convert Payment Status to bools:
test_df['Paid'] = test_df['Payment Status'] == 'Paid in Full'
>>> test_df
Client Id Client Name Week Month Year Payment Status Paid
1 Tom Holland 01/03/2022 - 01/09/2022 January 2022 Pending FALSE
1 Tom Holland 01/10/2022 - 01/16/2022 January 2022 Paid in Full TRUE
1 Tom Holland 01/17/2022 - 01/23/2022 January 2022 Didn't Paid FALSE
1 Tom Holland 01/24/2022 - 01/30/2022 January 2022 Paid in Full TRUE
1 Tom Holland 01/31/2022 - 02/06/2022 February 2022 Paid in Full TRUE
1 Tom Holland 02/07/2022 - 02/13/2022 February 2022 Paid in Full TRUE
1 Tom Holland 02/14/2022 - 02/20/2022 February 2022 Paid in Full TRUE
1 Tom Holland 02/21/2022 - 02/27/2022 February 2022 Paid in Full TRUE
2 Brad Pitt 01/03/2022 - 01/09/2022 January 2022 Paid in Full TRUE
2 Brad Pitt 01/10/2022 - 01/16/2022 January 2022 Paid in Full TRUE
2 Brad Pitt 01/17/2022 - 01/23/2022 January 2022 Paid in Full TRUE
2 Brad Pitt 01/24/2022 - 01/30/2022 January 2022 Paid in Full TRUE
2 Brad Pitt 01/31/2022 - 02/06/2022 February 2022 Paid in Full TRUE
2 Brad Pitt 02/07/2022 - 02/13/2022 February 2022 Paid in Full TRUE
2 Brad Pitt 02/14/2022 - 02/20/2022 February 2022 Paid in Full TRUE
2 Brad Pitt 02/21/2022 - 02/27/2022 February 2022 Pending FALSE
Then group them by Month and Client Id and check whether all Paid values in a group are True:
status = test_df[["Client Id", "Month", "Paid"]].groupby(["Month", "Client Id"]).all()
>>> status
Paid
Month Client Id
February 1 True
2 False
January 1 False
2 True
Now reset index and convert Paid back to text (Qualified or Not Qualified):
status = status.reset_index()
status['Paid'] = status['Paid'].map({True: 'Qualified', False:"Not Qualified"})
>>> status
Month Client Id Paid
February 1 Qualified
February 2 Not Qualified
January 1 Not Qualified
January 2 Qualified
Now merge with the original table to get the desired results (and drop the unnecessary columns made by merging). Also rename the new column:
output = pd.merge(test_df, status, on=['Client Id', 'Month']).drop(columns='Paid_x')
output = output.rename(columns={'Paid_y': 'Qualification'})
>>> output
Client Id Client Name Week Month Year Payment Status Qualification
1 Tom Holland 01/03/2022 - 01/09/2022 January 2022 Pending Not Qualified
1 Tom Holland 01/10/2022 - 01/16/2022 January 2022 Paid in Full Not Qualified
1 Tom Holland 01/17/2022 - 01/23/2022 January 2022 Didn't Paid Not Qualified
1 Tom Holland 01/24/2022 - 01/30/2022 January 2022 Paid in Full Not Qualified
1 Tom Holland 01/31/2022 - 02/06/2022 February 2022 Paid in Full Qualified
1 Tom Holland 02/07/2022 - 02/13/2022 February 2022 Paid in Full Qualified
1 Tom Holland 02/14/2022 - 02/20/2022 February 2022 Paid in Full Qualified
1 Tom Holland 02/21/2022 - 02/27/2022 February 2022 Paid in Full Qualified
2 Brad Pitt 01/03/2022 - 01/09/2022 January 2022 Paid in Full Qualified
2 Brad Pitt 01/10/2022 - 01/16/2022 January 2022 Paid in Full Qualified
2 Brad Pitt 01/17/2022 - 01/23/2022 January 2022 Paid in Full Qualified
2 Brad Pitt 01/24/2022 - 01/30/2022 January 2022 Paid in Full Qualified
2 Brad Pitt 01/31/2022 - 02/06/2022 February 2022 Paid in Full Not Qualified
2 Brad Pitt 02/07/2022 - 02/13/2022 February 2022 Paid in Full Not Qualified
2 Brad Pitt 02/14/2022 - 02/20/2022 February 2022 Paid in Full Not Qualified
2 Brad Pitt 02/21/2022 - 02/27/2022 February 2022 Pending Not Qualified
FYI. When you use groupby and you want to broadcast the aggregated result to each row of the group use transform. So you can avoid the merge. Check my answer to see how to use it.
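To illustrate the point about transform, here is a minimal, self-contained sketch (the tiny `group`/`paid` frame is just illustrative data) contrasting a plain groupby aggregation, which yields one row per group, with transform, which broadcasts the per-group result back onto every original row:

```python
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "b", "b"],
    "paid":  [True, True, True, False],
})

# Aggregation: one value per group
agg = df.groupby("group")["paid"].all()

# transform: the per-group result is broadcast back onto every original row,
# so it can be assigned directly as a new column -- no merge needed
df["qualified"] = df.groupby("group")["paid"].transform("all")
print(df["qualified"].tolist())  # [True, True, False, False]
```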
INVALID_LOGIN_CREDENTIALS error on NetSuite, but correct credentials
I've been trying to use the NetSuite API for some time, using the netsuite gem.
I can login to the website, but when I try to authenticate from the API I get an INVALID_LOGIN_CREDENTIALS error.
This is the payload of the request:
<?xml version="1.0" encoding="UTF-8"?>
<env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:platformMsgs="urn:messages_2011_1.platform.webservices.netsuite.com" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:platformCore="urn:core_2011_1.platform.webservices.netsuite.com">
<env:Header>
<platformMsgs:passport>
<platformCore:email>email@email.com</platformCore:email>
<platformCore:password>--snip--</platformCore:password>
<platformCore:account>ACCOUNTNO</platformCore:account>
<platformCore:role type="role" internalId="ROLE"/>
</platformMsgs:passport>
</env:Header>
<env:Body>
<platformMsgs:get>
<platformMsgs:baseRef xsi:type="platformCore:RecordRef" internalId="4" type="customer"/>
</platformMsgs:get>
</env:Body>
</env:Envelope>
This is the payload of the response:
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Body>
<soapenv:Fault>
<faultcode>soapenv:Server.userException</faultcode>
<faultstring>You have entered an invalid email address or account number. Please try again.</faultstring>
<detail>
<platformFaults:invalidCredentialsFault xmlns:platformFaults="urn:faults_2011_1.platform.webservices.netsuite.com">
<platformFaults:code>INVALID_LOGIN_CREDENTIALS</platformFaults:code>
<platformFaults:message>You have entered an invalid email address or account number. Please try again.</platformFaults:message>
</platformFaults:invalidCredentialsFault>
<ns1:hostname xmlns:ns1="http://xml.apache.org/axis/">sb-partners-java002.svale.netledger.com</ns1:hostname>
</detail>
</soapenv:Fault>
</soapenv:Body>
</soapenv:Envelope>
Does the Role you are attempting to log in with have Web Services permissions?
Setup > Users/Roles > Manage Roles
Find your role
Check Permissions > Setup subtab for Web Services permission
I've just solved the issue. If you're having trouble make sure that:
You are connecting to the right environment. (non-sandbox vs sandbox)
Your user (or your role) have WebServices permission (see in Permissions > Setup)
I faced both of the issues. My account, even belonging to an Administrator role, lacked Web Services permission. And I was using the sandbox url to a non-sandbox account.
https://webservices.na1.netsuite.com/wsdl/v2012_1_0/netsuite.wsdl (non-sandbox)
https://webservices.sandbox.netsuite.com/wsdl/v2012_1_0/netsuite.wsdl (sandbox)
Could you say more about how to add the webservices permission? Also how do you figure out the role id?
ah, turns out i'd done that part correctly, i was using the sandbox URL ;)
Another possible cause of this issue is if the password contains + or % characters. Removing these from the password fixed this for me.
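One plausible explanation for that (an assumption about the client pipeline, not something NetSuite documents) is that the password passes through a URL-decoding step somewhere, which turns + into a space and treats % as the start of an escape sequence. A quick Python illustration of the effect, with a hypothetical password:

```python
from urllib.parse import unquote_plus

password = "secret+100%"          # hypothetical password containing the problem characters
decoded = unquote_plus(password)  # what a URL-decoding step would turn it into
print(decoded)
```

If the decoded value differs from the original, the server never sees the password you typed.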
XSLT Dynamically merge two xml files using filter on element
I am new to XSLT. I have two XML files, as follows. file1.xml:
<?xml version="1.0" encoding="UTF-8"?>
<people-data id="test-id" timestamp="20014-03-30T09:00:00">
<person>
<id>12345</id>
<first-name>John</first-name>
</person>
<person>
<id>67890</id>
<first-name>Mike</first-name>
</person>
<person>
<id>11111</id>
<first-name>Dan</first-name>
</person>
</people-data>
The second xml file is as follows file2.xml:
<?xml version='1.0' encoding='UTF-8'?>
<people-appointment-data>
<person-data>
<id>12345</id>
<first-name>John</first-name>
<appointments>
<appointment>
<code>5124</code>
<pass>14920329324</pass>
<states>
<state>IL</state>
<state>IN</state>
</states>
</appointment>
<appointment>
<code>1001</code>
<pass>14921119324</pass>
<states>
<state>NV</state>
<state>CA</state>
</states>
</appointment>
</appointments>
</person-data>
<person-data>
<id>67890</id>
<first-name>Mike</first-name>
<appointments>
<appointment>
<code>6666</code>
<pass>14920</pass>
<states>
<state>AK</state>
<state>MA</state>
</states>
</appointment>
</appointments>
</person-data>
</people-appointment-data>
What I am trying to achieve using XSLT is to copy the appointments information into the first XML file, filtering on a match of the id tag.
This is how I am expecting the output to be; if there is no match on id, the information in file1.xml will be retained:
<?xml version="1.0" encoding="UTF-8"?>
<people-data id="test-id" timestamp="20014-03-30T09:00:00">
<person>
<id>12345</id>
<first-name>John</first-name>
<appointments>
<appointment>
<code>5124</code>
<pass>14920329324</pass>
<states>
<state>IL</state>
<state>IN</state>
</states>
</appointment>
<appointment>
<code>1001</code>
<pass>14921119324</pass>
<states>
<state>NV</state>
<state>CA</state>
</states>
</appointment>
</appointments>
</person>
<person>
<id>67890</id>
<first-name>Mike</first-name>
<appointments>
<appointment>
<code>6666</code>
<pass>14920</pass>
<states>
<state>AK</state>
<state>MA</state>
</states>
</appointment>
</appointments>
</person>
<person>
<id>11111</id>
<first-name>Dan</first-name>
</person>
</people-data>
Please state which version of XSLT you are using - 1.0 or 2.0.
Can you guide me on this using XSLT 1.0, the current solution given below doesn't work. @michael.hor257k
Although the "current solution" is pretty awful, it still should "work" in the sense that it should get the appointments of each person. So I suggest you get things working properly on your side first.
thanks for the prompt reply, there was a mistake in my xml file with closing tag. The solution given below is working. What is your suggestion to improve this? I am very new to xslt and still learning. @michael.hor257k
I would suggest you do it this way:
XSLT 1.0
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
<xsl:strip-space elements="*"/>
<xsl:param name="lookup-document" select="document('file2.xml')"/>
<xsl:key name="pdata" match="person-data" use="id" />
<xsl:template match="/">
<people-data>
<xsl:for-each select="people-data/person">
<person>
<xsl:variable name="id" select="id" />
<xsl:copy-of select="*"/>
<!-- switch context to lookup-document in order to use the key -->
<xsl:for-each select="$lookup-document">
<xsl:copy-of select="key('pdata', $id)/appointments"/>
</xsl:for-each>
</person>
</xsl:for-each>
</people-data>
</xsl:template>
</xsl:stylesheet>
I had a query: if there are two attribute values that I would like to copy to the merged file, for example I have updated the question (id="test-id" timestamp="20014-03-30T09:00:00"), how should I modify this solution? I tried it but it didn't get copied; it's blank.
@user2603537 1. This has absolutely nothing to do with your original question of retrieving data from another document; 2. You can easily solve this by adding a <xsl:copy-of select="people-data/@*"/> instruction immediately after <people-data>. If this doesn't work for you, I suggest you post a separate question (and close this one).
The document() function will let you match values in an alternate document.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:template match="/">
<xsl:apply-templates select="//person"/>
</xsl:template>
<xsl:template match="person">
<xsl:copy>
<xsl:apply-templates select="*" mode="copy"/>
<xsl:variable name="id" select="id/text()"/>
<xsl:apply-templates select="document('file2.xml')//person-data[id/text()=$id]/appointments" mode="copy"/>
</xsl:copy>
</xsl:template>
<xsl:template match="*" mode="copy">
<xsl:copy>
<xsl:apply-templates select="*|text()" mode="copy"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
How can you test this? I tried to look for an XML/XSLT tester online, but just came across websites which only allow you to use one XML file as input.
I have an XML editor / XSLT editor, so I tested this on my file system. In order to test on the web, you would need both input files to be accessible from a web server.
Can you please guide me what XML editor/ XSLT editor you are using.
I have used Oxygen XML Editor and Altova XML Spy. Eclipse might also offer a plugin for XSLT editing.
I tried to run the above XSLT; I am using Pentaho for running the XSLT. It doesn't seem to work as I wanted.
12345
John
67890
Mike
11111
Dan
Appointments didn't get copied.
I am not familiar with that tool. I recommend that you confirm that the document command is opening your second file correctly. In your root template, try outputting <xsl:value-of select="count(document('file2.xml')//person-data)"/>. If the count is 0, then troubleshoot from there.
Thanks for the solution; I had a problem in my XML file, which is why it didn't work. It's working now. Is this an optimal solution? What would be other ways to implement it better? @terrywb
I am glad that it worked for you. What else are you seeking from the solution?
Having a root element in the output would be a good start, I think.
I had a query: if there are two attribute values that I would like to copy to the merged file, for example I have updated the question (id="test-id" timestamp="20014-03-30T09:00:00"), how should I modify this solution? I tried it but it didn't get copied; it's blank. Can you also recommend for the solution given by michael
Advanced jQuery-Fu: Ajax feedback on a single call
I'm trying to create a generic function that provides feedback to a user on the status of their ajax call. It will draw some loading arrows after a target, and then replace them with a tick that will fade out when the call completes. Here's what I've got so far:
The function:
function ajaxFeedback(target){
jQuery.ajaxSetup({
beforeSend: function(){
jQuery(target).after("<img src='/images/loading/small_dark_arrows.gif' class='loading_arrows'>");
jQuery.ajaxSetup({beforeSend: ''});
},
complete: function(){
jQuery(target).siblings(".loading_arrows").replaceWith("<img src='/images/icons/tick.png' class='loading_arrows'>");
jQuery(target).siblings(".loading_arrows").fadeOut('slow', function() {
jQuery(target).siblings(".loading_arrows").remove();
});
jQuery.ajaxSetup({complete: ''});
}
});
}
In use:
function saveDefaultBlockGroup(e){
// e is the event from a button click
ajaxFeedback(e.target);
jQuery.get("ajax/derp.cfm");
return false;
}
This works great, but there's a few problems. If we later decide to define other things in jQuery.ajaxSetup this function will nuke them. Also if something happens in between ajaxFeedback(e.target) and the ajax call we will get unexpected results (unlikely, but I like my code to be a bit more bullet proof).
Suggestions? I want to keep the usage syntax simple, but the function can get crazy if we need to.
EDIT: Most of the ajax requests on the page WILL NOT USE THIS. Only a few specific places throughout the site. This will still come out to several hundred places.
Here's what I ended up doing (adapted from the jQuery source code):
jQuery.fancyAjax = function( target, url, data, callback, type ) {
// shift arguments if data argument was omitted
if ( jQuery.isFunction( data ) ) {
type = type || callback;
callback = data;
data = undefined;
}
return jQuery.ajax({
type: type,
url: url,
data: data,
success: callback,
dataType: type,
beforeSend: function(){
jQuery(target).after("<img src='/images/loading/small_dark_arrows.gif' class='loading_arrows'>");
},
complete: function(){
jQuery(target).siblings(".loading_arrows").replaceWith("<img src='/images/icons/tick.png' class='loading_arrows'>");
jQuery(target).siblings(".loading_arrows").fadeOut(2000, function() {
jQuery(target).siblings(".loading_arrows").remove();
});
}
});
};
In use:
function saveDefaultBlockGroup(e){
// e is the event from a button click
jQuery.fancyAjax(e.target, "ajax/derp.cfm");
return false;
}
If you are updating the beforeSend and complete callback function each time you submit an AJAX request, why use the intermediary ajaxFeedback function? It seems like you could just mush your two functions together like this:
function saveDefaultBlockGroup(e){
// e is the event from a button click
jQuery.ajax({
url : 'ajax/derp.cfm',
type : 'get',
beforeSend : function(){
jQuery(e.target).after("<img src='/images/loading/small_dark_arrows.gif' class='loading_arrows'>");
},
complete : function(){
jQuery(e.target).siblings(".loading_arrows").replaceWith("<img src='/images/icons/tick.png' class='loading_arrows'>").fadeOut('slow', function() {
jQuery(this).remove();
});
}
});
return false;
}
Update
Try this out:
function setGlobalAJAX(target) {
return {
beforeSend: function(){
jQuery(target).after("<img src='/images/loading/small_dark_arrows.gif' class='loading_arrows'>");
},
complete: function(){
jQuery(target).siblings(".loading_arrows").replaceWith("<img src='/images/icons/tick.png' class='loading_arrows'>").fadeOut('slow', function() {
jQuery(this).remove();
});
}
};
}
function saveDefaultBlockGroup(e){
// e is the event from a button click
var opts = setGlobalAJAX(e.target);
opts.url = "ajax/derp.cfm";
opts.type = "get";
jQuery.ajax(opts);
return false;
}
This uses a function to set properties of an object that can be passed to jQuery.ajax(). The setGlobalAJAX function sets-up the beforeSend and complete callbacks for the specific AJAX call. You can then add other properties to this object, like the url and the type of request (get/post).
I think this is more of what you are looking for. You don't really need to use global functions as you are not doing something repeatedly on a global scale (each beforeSend and complete is different).
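The same "shared defaults plus per-call overrides" pattern can be sketched in plain JavaScript; the names and the "#btn" selector here are illustrative, not part of jQuery:

```javascript
// Build the shared feedback callbacks once, per target.
function feedbackDefaults(target) {
  return {
    beforeSend: function () { /* show loading arrows next to `target` */ },
    complete:   function () { /* swap arrows for a tick and fade out */ }
  };
}

// Merge per-call settings over the defaults without mutating any global state
// (the same idea as assigning url/type onto the object returned above).
function withOverrides(defaults, overrides) {
  return Object.assign({}, defaults, overrides);
}

var opts = withOverrides(feedbackDefaults("#btn"), { url: "ajax/derp.cfm", type: "get" });
console.log(opts.url, opts.type);
```

Because each call builds a fresh options object, nothing leaks between requests the way jQuery.ajaxSetup settings do.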
Sorry, I must not have been clear. This is for an extremely large, complex webapp. I'm trying to create a generic tool for us to use, so that when we need a specific type of feedback from a user interaction, we can just trigger this function, instead of specifying it each time. The majority of ajax request will not be using this, but this will still probably be used a few hundred times.
I'm with @Jasper on this. ajaxSetup should only be called once even if it is used everywhere. You are setting it on every request by calling ajaxFeedback(e.target); If you the target is changing just assign it to a global variable before you make the ajax request and read it in the ajaxSetup handlers.
@LukeTheObscure Try my update. I think this is what you're looking for.
That seems a lot better way of doing things. I'm curious though... Can you think of any ways to simplify the syntax? The simpler the syntax the more likely it will stick with the other devs and the more likely it will be used. I considered using $.fn.extend, but I think that would confuse people. Maybe a chainable method? jQuery.ajaxFeedback(e.target).get(...). My hopes are low, but I thought I might ask.
What is the difference between こだわり and 良{よ}い?
Consider for example this article with the following headline.
〈JAL〉機内食、こだわりの冬メニュー登場
日本航空(JAL)は「空の上のレストラン」をコンセプトに展開している機内食「スカイオーベルジュ BEDD(ベッド) by JAL」で12月1日から提供される冬のこだわりメニューを発表した。
[...]
What is the difference between こだわり and 良い? Both seem to mean "good" in certain contexts. But when should you use which?
Maybe I'm mistaking something; however, こだわり means "obsession, fixation", etc., from the verb こだわる "to be obsessive, to be particular about". You can see examples of the usage here.
こだわり I found in sentence こだわりの冬メニュー登場.
I think that when we talk about food with こだわり, we focus not so much on the food itself, but more on the quality of the ingredients, the taste of the food, etc. So I guess it would be "finely selected menu"; I also found the translation "fastidious", so I'd say "fastidious winter menu".
Can a customer of a business make a recording of an employee on the business's privately owned premises?
Suppose a customer enters a store. The business, through an employee, treats the customer unlawfully. Perhaps this is by verbally or physically abusing them, discriminating against them, or denying their consumer rights or rights as a member of the public. The customer begins to film the employee using their phone, and the employee strongly objects to the recording. Meanwhile, in case it is relevant, the business is constantly recording everyone within the premises with its CCTV.
The customer is an individual who will only be using their recording for private household purposes. But they are informed by the indignant employee(s) that they have no right to record anyone without their permission.
Does the employee, in a purportedly individual capacity, have any grounds to object to the customer recording them “without their consent“?
@ohwilleke what is crash blossom ambiguity?
The original title contained the words "business record" next to each other, which is such a common adjective-noun phrase that it read like it was referring to an accounting document rather than the intended parsing of (customer of a business) record (verb), even though the first reading was very weird.
The customer must stop recording
australia
The customer is on the store’s premises and an agent of the store has told them to stop. If they continue recording in spite of this they are now trespassing and can be ordered to leave. If they stay in spite of this, the police can remove them.
Filming from a public place cannot be prohibited. So, if the person were to leave the store and film into it, that would be fine.
In America commercial businesses are still private property. Stores can refuse service and boot them out for any objective reason such are recording inside the building without permission.
The filming is illegal in Germany
In Germany, people have a Recht am eigenen Bild (a right to their own image) under § 22 Kunsturhebergesetz (KUG). It is part of personality rights.
Due to those rights, combined with the GDPR, you can under no circumstances publish or give away the recordings without the agreement of the recorded person.
Since a case from 2000 at the highest court, making recordings while there is express non-consent of the subject and the subject is the center of the picture or video is a violation of the law. It can even be a felony if the picture depicts anything intimate, such as someone's flat or even worse, their undressed body.
In general, the following rules can make photography hard:
Filming or Photographing on someone else's property without the consent of the owner is illegal - that includes publicly accessible areas. That is part of the Hausrecht.
If the employee demands that you stop filming, you have to do so based on the Hausrecht; he is an agent of the owner.
Because the subject loses control of the publication after the photo is made and can't control the publication after the fact, they can prohibit the recording from being taken in the first place, as the highest court ruled in BVerfGE NJW 2000, 1021
The employee is not a person of public interest. Even a person of public interest can bar photos taken of them.
Even though the employee is present in a capacity of representing their corporate employer?
That does not matter in Germany. He is a person. As a private citizen, you can't record them. The company has a justified interest under GDPR, the customer does not.
This answer is wrong or inapposite. The OP asked about the lawfulness of recording, and for the most part this answer refers to publishing the recording. These are separate issues. Furthermore, even if the GDPR applied to the customer whose rights are violated by the employee, art. 6.1(d) and (e) entitle the customer to record the incident for evidentiary purposes. The law does not require the customer to risk that the company's recording will be lost, altered, or inconclusive.
The customer also is not subject to gdpr if they’re using the recording strictly for household purposes, no?
An obvious premise for the protection of a person's privacy is that the person is acting lawfully. The OP's premise is the opposite of that: Someone "treats the customer unlawfully". Also, given the OP's scenario that "a customer enters a store [...] as a member of the public", the presumption that the matter typically would count as private (in the court's words: "gewöhnlich als private geltende Angelegenheiten") seems misplaced.
Changing Mysqli_result before returning result to rest of application
I have around 75 php existing scripts that access mysql database similar to this pseudo code:
$query="SELECT * from table";
$result=mysqli_query($conn,$query);
if (mysqli_num_rows($result)) {
while($row=mysqli_fetch_assoc($result)) {
//use the rows
}
}
I was recently forced to encrypt all the database fields individually, so now when those 75 PHP scripts run as shown above, every $row comes back with all the fields encrypted, thus unusable.
So rather than change all 75 PHP scripts to decrypt each field, I wanted to create a function that executes the mysqli_query, decrypts all the fields, and returns the result as if it was returned by mysqli_query, but decrypted. Something like
function QueryAndDecrypt($conn,$query){
$result=mysqli_query($conn,$query);
if (mysqli_num_rows($result)) {
while($row=mysqli_fetch_assoc($result)) {
$row=decrypt($row);
}
}
return $result; <<----- return result with fields decrypted
}
// now all 75 scripts would have to change just one line to call above
$query="SELECT * from table";
$result=QueryAndDecrypt($conn,$query); <<--- change only 1 line
if (mysqli_num_rows($result)) {
while($row=mysqli_fetch_assoc($result)) {
//use the rows as normal decrypted
}
}
As you can see, I just want to change that one line in all 75 scripts so that they do the same thing as before, with the result coming back with all the fields already decrypted.
I tried writing this QueryAndDecrypt function, but when I change $row from the mysqli_result as shown above, the change doesn't stick, because the result from MySQL is some sort of set that is not changeable (I was told), or something like that.
So is there any way to do this by writing a common function, callable from all the scripts, that runs the SQL query and also decrypts the result in such a way that it can be accessed by all the other scripts like a regular MySQL query result?
Can anybody help? I'm "fresh off the boat", so I don't know SQL or PHP that well, and I'm desperate right now because all the scripts are broken because of this!!
Thanks
Possible duplicate of Result set not updating after mySQL query
If you were using the msqli in OO style, you could just create a class that fakes the method to return the number of rows (1 if you got any), and returns an array of the columns for a row. But you have the procedureal style, so you would have to edit the num rows line and the fetch line.
There is no solution on the other question
Ans, it's not a duplicate. He wants to simulate the query.
Is the encryption on the fields of a type that mysql can decrypt? If so, you could modify the queries in the function before executing them.
Sloan, maybe, i can change the encryption to anything i want, even something the database can do, but i was told that I should not send unencrypted data to mysql over the connection, if i let the database do the encryption, then i would have to send clear text over the connection to the database before it can encrypt it
I would think this would be a common thing because if credit card numbers and social security numbers are stored in a database, dont they store it as encrypted?
Possible duplicate of Can I edit mysqli_result object in php?
Sorry, you can't modify the rows of the result and then somehow 'unfetch' them back into the result to be fetched again.
But you can fix your code by changing one line:
$query = "SELECT * from table";
$result = mysqli_query($conn,$query);
if (mysqli_num_rows($result)) {
while ($row = MyFetchAssocAndDecrypt($result)) { <<--- change only 1 line
//use the rows as normal decrypted
}
}
You'd have to write functions something like this:
function MyDecrypt(&$item, $key) {
    // openssl_decrypt() takes the cipher method as a string, e.g. 'aes-256-cbc'
    $item = openssl_decrypt($item, 'aes-256-cbc', MY_SECRET_KEY);
}
function MyFetchAssocAndDecrypt($result) {
    $row = mysqli_fetch_assoc($result); // mysqli_fetch_assoc() takes only the result
    if ($row) {
        array_walk($row, 'MyDecrypt');
    }
    return $row; <<----- return row with fields decrypted
}
PS: You mentioned the requirement that you aren't supposed to send unencrypted data over the network to the database. That wouldn't be my concern, because you can use a VPN or else connect to the database via SSL.
The greater concern is that the query that contains your plaintext data and the plaintext encryption password would be written to database logs on the MySQL server, and these logs are not encrypted.
There are some optional extensions to MySQL that promise to do full-database encryption, but these extensions overlook the query logs.
User interaction from the PostScript executive
I'm building an application in PostScript that needs to take input fom the user at a prompt (I will be using the GhostScript executive, and the file won't be sent to the printer). I can't see anything in my PostScript Language Reference Manual that suggests this is possible, and I don't want to drop back to the executive, so is this possible?
It's not that hopeless! But it ain't exactly easy, either. There are two other special files besides %stdin that you can read from. (%lineedit)(r)file dup bytesavailable string readstring pop will read a line from stdin into a string. There is also the (%statementedit) file which will read until a syntactically valid postscript fragment is typed (plus newline). This will match parentheses and curlies, but not square brackets. Before reading, you should issue a prompt like (> )print flush.
One more thing, you can catch ^D by wrapping all this in a stopped context.
{ %stopped
(> )print flush
(%lineedit)(r)file
dup bytesavailable string readstring pop
} stopped not {
(successfully read: )print
print
}{
(received EOF indication)print
}ifelse
Instead of popping that bool after readstring, you could use it to cheaply detect an empty input line. Note also, that the stop that triggers on EOF is from an error in file (ghostscript calls it /invalidfilename), but although file is defined as an operator, and operators are supposed to push their arguments back on the stack when signaling an error, I've noticed ghostscript doesn't necessarily do this (I forget what it leaves, but it's not 2 strings like you'd expect), so you might want to put mark in front and cleartomark pop after this whole block.
The special files (%lineedit) and (%statementedit), if available, will successfully process backspaces and control-U and possibly other control. I believe real Adobe printers will respond to ^T with some kind of status message. But I've never seen it.
PS. :!
I've got a more extensive example of interactive postscript in my postscript debugger. Here's a better version of the debugger, but it's probably less useable as an example.
@Jashank I've got a more fun example in my Mandelbrot explorer. It just issues a menu and an internal PS> prompt, and defines single-letter procedures as commands.
May I suggest: (%lineedit) (r) file dup bytesavailable 1 max string readstring pop so that it also works for the empty line, i.e., if you simply press RET!
PostScript isn't designed as an interactive language, so there is no great provision for user input.
You can read input from stdin, and you can write to stdout or stderr, so if those are wired up to a console then you can theoretically prompt the user for input by writing to stdout, and read the input back from stdin.
Note that reading from stdin won't allow the user to do things like backspace over errors. At least not visually, the data will be sent to your PostScript program which could process the backspace characters.
That's about the only way to achieve this that I can think of though.
Looks like I'm going to have to take that approach.
On the other hand, since you are running on an interactive computer, the other way to do it would be to run an interactive program in another language and pass the data to a PostScript program (command line/file, or even templating the PostScript file)?
I'd like to implement the entire solution in PostScript, to learn a bit more about the language, but also so I'm not having to pass things around between programs.
How To use fail2ban for Nginx?
How can I use fail2ban on an Nginx server? What are the rules to put in the jails.conf?
Start with the guide below:
http://snippets.aktagon.com/snippets/554-How-to-Secure-an-nginx-Server-with-Fail2Ban
New filter in /etc/fail2ban/nginx-dos.conf:
# Fail2Ban configuration file
#
# Generated on Fri Jun 08 12:09:15 EST 2012 by BeezNest
#
# Author: Yannick Warnir
#
# $Revision: 1 $
#
[Definition]
# Option: failregex
# Notes.: Regexp to catch a generic call from an IP address.
# Values: TEXT
#
failregex = ^<HOST> -.*"(GET|POST).*HTTP.*"$
# Option: ignoreregex
# Notes.: regex to ignore. If this regex matches, the line is ignored.
# Values: TEXT
#
ignoreregex =
In our jail.local, we have (at the end of the file):
[nginx-dos]
# Based on apache-badbots but a simple IP check (any IP requesting more than
# 240 pages in 60 seconds, or 4p/s average, is suspicious)
# Block for two full days.
# @author Yannick Warnier
enabled = true
port = http,8090
filter = nginx-dos
logpath = /var/log/nginx/*-access.log
findtime = 60
bantime = 172800
maxretry = 240
Of course, if you are logging all resources of your site (images, CSS, JS, etc.), it would be really easy to reach those numbers as a normal user. To avoid this, use the access_log off directive of Nginx, like so:
# Serve static files directly
location ~* \.(png|jpe?g|gif|ico)$ {
expires 1y;
access_log off;
try_files $uri $uri/ @rewrite;
gzip off;
}
location ~* \.(mp3)$ {
expires 1y;
access_log off;
gzip off;
}
location ~* \.(css)$ {
expires 1d;
access_log off;
}
location ~* \.(js)$ {
expires 1h;
access_log off;
}
nginx-dos.conf should be in filter.d folder right?
late response but for anyone else that sees this... yes in filter.d
How to serialize a class from an external file
In the application I am developing, I need to load some classes from an external file using CSharpScript and (de)serialize the content into a csv string using XMLSerialization. When I try to serialize I get this error: "Identifier 'Submission#0' is not CLS-compliant".
I created the same class inside my project, used the same exact code, and it works perfectly.
The class I am trying to (de)serialize looks like this:
[Serializable()]
public class PlantData : IPlantData
{
public short Index { get; set; }
public int ItemID
{
get
{
return item_data_?.ID ?? 0;
}
}
public DATA_ELEMENT_SOURCE Source { get; set; }
private IPlantItemData item_data_;
public object ItemData
{
get => item_data_;
set
{
if (value is IPlantItemData data)
item_data_ = data;
}
}
}
This is the code I am using to serialize the class; obj is the class instance and types contains the explicit types to use for conversion.
XmlSerializer xmlSerializer = new XmlSerializer(obj.GetType(), types);
While debugging I have checked that obj.GetType() returns one of the types I passed in 'types'
I tried to google and searched on here but I only found questions related to Azure projects, which don't work in my case.
What am I doing wrong?
what is Submission#0 and how does it relate to this code? does it at all? it is very hard to understand the question without that context... (also, minor note: don't add [Serializable] - it doesn't have any impact here)
As I said, I am loading the class from an external file using CSharpScript and compiling at runtime. I don't have any reference to the class itself in the code; I can only get its type, and the class is defined as Submission#0.PlantData
not sure how you want us to comment on it then, really... but I agree with the compiler: Submission#0 is not a valid name in C#
Replacing internet slang with for loop in Python
I am rather new to Python and I'm not really sure why my code is not working.
I have a large Twitter dataset in which I want to replace slang with words from a dictionary (as a CSV) I got.
Reading the Data as a dictionary:
TestDict = pd.read_csv("C:/Users/lukas/Desktop/Script_Output/Converted_Slang_CSV.csv", sep =";", index_col = 0, header = None, skiprows = 1).to_dict()
Trying to replace the words with a for loop:
def change_words(input):
words = input.split()
new_words = []
for word in words:
if word.lower in TestDict:
word = TestDict[word.lower()]
new_words.append(word)
new_text = " ".join(new_words)
return new_text
If I feed the function with for example "atm" I expect to get "at the moment" back, as this phrase is in the dictionary. But instead, I just get "atm" back.
change_words("atm")
This will just bring back "atm" and not "at the moment". It also doesn't work if I apply it to my Twitter Dataset.
if word.lower in TestDict: - you don't call lower method, so it is never True as you are checking if method itself is in TestDict.
To elaborate on @matszwecja's comment, it should be word.lower() to change the string to all lower case.
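Putting both fixes together, here is a minimal runnable sketch; the dictionary below is a hypothetical stand-in for the CSV-loaded TestDict:

```python
# Hypothetical stand-in for the dictionary loaded from the CSV above.
TestDict = {"atm": "at the moment", "brb": "be right back"}

def change_words(text):
    new_words = []
    for word in text.split():
        if word.lower() in TestDict:  # call .lower(); don't test the method object
            word = TestDict[word.lower()]
        new_words.append(word)
    return " ".join(new_words)

print(change_words("brb atm"))  # -> "be right back at the moment"
```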
After fixing an error in my dataset and the error in the loop, it works flawlessly now! Thanks!
Need advice on how to abstract my simulator for opening collectible card packs
I've been building simulators in Excel with VBA to understand the distribution of outcomes a player may experience as they open up collectible card packs. These were largely built with nested for loops, and as you can imagine...were slow as molasses.
I've been spinning up on R over the last couple of months, and have come up with a function that handles a particular definition of a pack (i.e., two cards with particular drop rates for n characters on either card). Now I am trying to abstract my function so that it can take any number of cards of whatever type of thing you want to throw at it (i.e., currency, gear, materials, etc.).
What this simulator is basically doing is saying "I want to watch 10,000 people open up 250 packs of 2 cards" and then I perform some analysis after the results are generated to ask questions like "How many $ will you need to spend to acquire character x?" or "What's the distribution of outcomes for getting x, y or z pieces of a character?"
Here's my generic function and then I'll provide some inputs that the function operates on:
mySimAnyCard <- function(observations, packs, lookup, droptable, cardNum){
obvs <- rep(1:observations, each = packs)
pks <- rep(1:packs, times = observations)
crd <- rep(cardNum, length.out = length(obvs))
if("prob" %in% colnames(lookup))
{
awrd = sample(lookup[,"award"], length(obvs), replace = TRUE, prob = lookup[,"prob"])
} else {
awrd = sample(unique(lookup[,"award"]), length(obvs), replace = TRUE)
}
qty = sample(droptable[,"qty"], length(obvs), prob = droptable[,"prob"], replace = TRUE)
  df <- data.frame(observation = obvs, pack = pks, card = cardNum, award = awrd, quantity = qty)
  return(df)
}
observations and packs are set to an integer.
lookup takes a dataframe:
award prob
1 Nick 0.5
2 Alex 0.4
3 Sam 0.1
and droptable takes a similar dataframe :
qty prob
1 10 0.1355
2 12 0.3500
3 15 0.2500
4 20 0.1500
5 25 0.1000
6 50 0.0080
... continued
cardNum also takes an integer.
It's fine to run this multiple times and assign the output to a variable and then rbind and order, but what I'd really like to do is feed a master function a dataframe that contains which cards it needs to provision and which lookup and droptables it should pull against for each card a la:
card lookup droptable
1 1 char1 chardrops
2 2 char1 chardrops
3 3 char2 <NA>
4 4 credits <NA>
5 5 credits creditdrops
6 6 abilityMats abilityMatDrops
7 7 abilityMats abilityMatDrops
It's probably never going to be more than 20 cards...so I'm willing to take the speed of a for loop, but I'm curious how the SO community would approach this problem.
Here's what I put together thus far:
mySimAllCards <- function(observations, packs, cards){
full <- data.frame()
for(i in i:length(cards$card)){
tmp <- mySimAnyCard(observations, packs, cards[i,2], cards[i,3], i)
full <- rbind(full, tmp)
}
}
which trips over
Error in `[.default`(lookup, , "award") : incorrect number of dimensions
I can work through the issues above, but is there a better approach to consider?
The general approach seems reasonable. Personally, I would tweak the mySimAnyCard function (and definitely get rid of the assign). Look into 'expand.grid'. As for mySimAllCards, it is very inefficient to use rbind in this way. You'd better generate a list of data.frames and then rbind them in one step using do.call(rbind,ListName). Also, I don't see how you are going to deal with droptables
use apply instead of a for loop
@MaratTalipov I removed the assign step as it wasn't doing anything in this context.
@MaratTalipov, thinking about your comment about the for droptables...I'll just have the credits lookup have 1 option of "credits" with a prob = 1, and take the values I previously had in credits and move them into a creditsdrops.
Combining both ideas from @MaratTalipov and @B Williams, could you
create a list of data.frames with necessary dimensions and some value to replace
use apply to run mySimAnyCard on each data.frame within the list
do.call(rbind,ListName)
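As a language-neutral sanity check of the sampling logic (not a replacement for the R function above), the same pack-opening loop can be sketched with Python's standard-library random.choices; the tables here are trimmed, hypothetical versions of lookup and droptable:

```python
import random

# Trimmed, hypothetical versions of the 'lookup' and 'droptable' tables.
lookup = [("Nick", 0.5), ("Alex", 0.4), ("Sam", 0.1)]
droptable = [(10, 0.40), (12, 0.35), (15, 0.25)]

def open_packs(observations, packs, lookup, droptable):
    rows = []
    for obs in range(1, observations + 1):
        for pack in range(1, packs + 1):
            # one weighted award draw and one weighted quantity draw per card
            award = random.choices([a for a, _ in lookup],
                                   weights=[p for _, p in lookup])[0]
            qty = random.choices([q for q, _ in droptable],
                                 weights=[p for _, p in droptable])[0]
            rows.append((obs, pack, award, qty))
    return rows

print(len(open_packs(10, 3, lookup, droptable)))  # 30 cards drawn in total
```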
getting xml:lang attribute with a sparql request
i'd like to know if it's possible using a sparql query to get the language tag on some litteral in my graph.
for instance, i could have things like :
<skos:prefLabel xml:lang="fr">Bonjour</skos:definition>
<skos:prefLabel xml:lang="en">Hello</skos:definition>
and i would like to have a result set with each label and it's corresponding language.
You can use the "lang" built-in function, as described in the SPARQL spec (section <IP_ADDRESS> in the spec for SPARQL 1.1: http://www.w3.org/TR/sparql11-query/). So your query might look like:
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?x ?label ?language
WHERE {
 ?x skos:prefLabel ?label .
BIND ( lang(?label) AS ?language )
}
Note that using BIND in this way requires SPARQL 1.1
Is there a way to have this attribute using sparql 1.0 ?
Finally found it, it would work in sparql 1.0 with something like
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT DISTINCT ?label (lang(?label) AS ?lang) WHERE {
?data skos:prefLabel ?label
}
That's actually SPARQL 1.1 too, just a more commonly implemented syntax.
Let $IJ$ be the set of all sums of elements of the form $ij$?
$(15)$ If $I,J$ are ideals of $R$, let $IJ$ be the set of all sums of elements of the form $ij$, where $i \in I$ and $j \in J$. Prove that $IJ$ is an ideal of $R$.
This is a question from Abstract Algebra, by Herstein. I don't quite understand "the set of all sums of elements of the form $ij$". Is that supposed to mean $i+j$? If not, why the word sums?
From user input, I have refined my understanding of the set $IJ$ as follows.
$IJ \doteqdot \big\{ \sum_{i} a_{i}b_{i} \colon a_{i} \in I, b_{i} \in J \big\}$
No, he means the sums. Just the products themselves do not form an ideal.
See for example http://math.stackexchange.com/q/1208933/29335 and http://math.stackexchange.com/q/21440/29335
What Herstein wants to define is the least ideal of $R$ which contains all elements of the form $ij$, for $i\in I$ and $j\in J$.
The answer is, of course, the ideal generated by the set
$$
X=\{ij:i\in I,j\in J\}
$$
and we want to see how it looks like. In general, the ideal generated by a subset $A$ of $R$ must contain all elements of the form $ras$, for $r,s\in R$ and $a\in A$, and sums thereof. The set of all elements of the form
$$
r_1a_1s_1+r_2a_2s_2+\dots+r_na_ns_n
$$
for $r_k,s_k\in R$ and $a_k\in A$ is clearly an ideal of $R$ and so it is the ideal generated by $A$.
In the case of the set above, we see that
$$
r(ij)s=(ri)(js)
$$
and, since both $I$ and $J$ are ideals, we have, for $i\in I$ and $j\in J$, $ri\in I$ and $js\in J$. Thus the ideal generated by the set $X$ consists of all elements of the form
$$
i_1j_1+i_2j_2+\dots+i_nj_n
$$
for $i_k\in I$ and $j_k\in J$. This is the ideal denoted by $IJ$.
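As a concrete sanity check (a toy illustration, not part of Herstein's text): in the special case $R=\mathbb Z$ with $I=(4)$ and $J=(6)$, every product $ij$ is a multiple of $24$, and $24=4\cdot 6$ itself occurs, so $IJ=(24)$. A few lines of Python confirm this:

```python
from math import gcd

# Toy check in R = Z with I = (4) and J = (6): the ideal generated by all
# products i*j should be (24).  A generator of the ideal generated by a
# finite set of integers is the gcd of the set.
a, b = 4, 6
products = {i * j for i in range(-5 * a, 5 * a + 1, a)
                  for j in range(-5 * b, 5 * b + 1, b)}

g = 0
for p in products:
    g = gcd(g, abs(p))  # gcd(0, |p|) = |p|, so g ends up as the generator

print(g)  # 24
```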
$$IJ=\{i_1j_1+\cdots+i_nj_n:n\in\mathbb N, i_1,\ldots,i_n\in I,j_1,\ldots,j_n\in J\}$$
How do we know that this is a finite sum? Also what about $i_{2}j_{1} + i_{1}j_{2}$ or other combinations?
Well it's defined to be a finite sum. As far as the subscripts go they are arbitrary. $i_2j_1+i_1j_2$ would be in $IJ$ but we can rename $j_1$ and $j_2$. The subscripts are not chosen at the outset, but as they are chosen as part of the sum.
So is this a correct interpretation? $IJ = \big\{ \sum_{i} a_{i}b_{i} \colon a_{i} \in I, b_{i} \in J \big\}$. Is it implicitly defined to be a finite sum? It does not say that explicitly.
You can't really talk about infinite sums unless there is a concept of convergence. In other words a topology.
beyond the scope of the class? haha...so should I write it as $IJ = \big\{ \sum_{i = 1}^{n} a_{i}b_{i} \colon a_{i} \in I, b_{i} \in J \big\}$ as you did?
Yes. The fact that the sums are finite is an assumption which is implicit when deal with an algebraic structure that does not have a corresponding topological structure.
Well, this notation is actually a little confusing. Is $n$ fixed or not? I assume $n$ is allowed to range over the non-negative integers, but it's not clear from the notation above.
Yes, $n$ is allowed to range over the non-negative integers. I'll edit my answer.
Python Multiprocessing: Detecting available threads on multicore processors
At risk of adding to the queue of multiprocessing questions, is there a way to detect the number of available threads per CPU, similar to multiprocessing.cpu_count()? I have a main() function being asynchronously called from a pool that has one process per core available (the default behavior if processes=None).
items = [2,4,6,8,10]
pool = multiprocessing.Pool(processes=args.cores)
results = [pool.apply_async(main, (item,), {"threads": 1}) for item in items]
However, I would like each of the main() calls to take advantage of all available threads by setting the threads arg explicitly.
Is there a way to do this, or would it be too platform/system specific? Perhaps it could be done with the mysterious from multiprocessing.pool import ThreadPool, by combining with the threading module, or some other way?
Any direction is appreciated, thanks!
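One stdlib-only observation (an addendum; the thread itself has no accepted answer): multiprocessing.cpu_count() already reports logical CPUs, i.e. hardware threads when SMT/hyper-threading is enabled, and the standard library offers nothing that splits physical cores from threads-per-core; the third-party psutil package (psutil.cpu_count(logical=False)) is the usual route for that distinction.

```python
import multiprocessing
import os

# Both calls report *logical* CPUs (hardware threads), not physical cores.
print(multiprocessing.cpu_count())
print(os.cpu_count())
```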
habtm foreign key
I tried to use a habtm relationship, but I need to use a uid as the foreign key
SELECT "friends".* FROM "friends" INNER JOIN "friendships" ON "friends"."uid" = "friendships"."uid" WHERE "friendships"."user_id" = 4
#User
has_and_belongs_to_many :friends, :class_name => "Friend", :join_table => "friendships", :association_foreign_key => "uid"
#Friend
has_and_belongs_to_many :users, :class_name => "User", :join_table => "friendships", :foreign_key => "uid"
SELECT "friends".* FROM "friends" INNER JOIN "friendships" ON "friends"."id" = "friendships"."uid" WHERE "friendships"."user_id" = 4
Following the Rails Style Guide:
Prefer has_many :through to has_and_belongs_to_many. Using has_many :through allows additional attributes and validations on the join model
In your case:
class Friendship < ActiveRecord::Base
 belongs_to :friend, :foreign_key => "uid" # the join column is 'uid', not 'friend_id'
belongs_to :user
# the validates are not mandatory but with it you make sure this model is always a link between a Friend and a User
validates :user_id, :presence => true
validates :uid, :presence => true # the Foreign Key is 'uid' instead of 'friend_id'
end
class User < ActiveRecord::Base
 has_many :friendships
 has_many :friends, :through => :friendships
end
class Friend < ActiveRecord::Base
 has_many :friendships, :foreign_key => "uid"
 has_many :users, :through => :friendships
end
Is condensing the number of columns in a database beneficial?
Say you want to record three numbers for every Movie record...let's say, :release_year, :box_office, and :budget.
Conventionally, using Rails, you would just add those three attributes to the Movie model and just call @movie.release_year, @movie.box_office, and @movie.budget.
Would it save any database space or provide any other benefits to condense all three numbers into one umbrella column?
So when adding the three numbers, it would go something like:
def update
...
@movie.umbrella = params[:movie_release_year]
+ "," + params[:movie_box_office] + "," + params[:movie_budget]
end
So the final @movie.umbrella value would be along the lines of "2015,617293,748273".
And then in the controller, to access the three values, it would be something like
@umbrella_array = @movie.umbrella.strip.split(',').map(&:strip)
@release_year = @umbrella_array.first
@box_office = @umbrella_array.second
@budget = @umbrella_array.third
This way, it would be the same amount of data (actually a little more, with the extra commas) but stored only in one column. Would this be better in any way than three columns?
Please don't do that. The saving on space is minimal, and the waste in resources trying to split the data again is awful. Also, trying to join tables will be a pain.
@JuanCarlosOropeza Is it possible to calculate how much space it does save?
@JeffCaros It's going to take more space as you're no longer storing 3 integers, but a string. Integers will take up less space.
The varlena header takes space too. And the delimiters. Really. don't do this.
@CraigRinger What is a varlena header?
@JeffCaros Variable length columns need extra space to store how long they are and some other info about compression, out-of-line TOAST storage etc. So storing 16 bytes of uuid as bytea will use more than 16 bytes due to headers and alignment. Storing as uuid will use exactly 16 bytes. Storing as text is worst as it has to escape the binary too so it will probably use more than 32 bytes.
There is no benefit in squeezing such attributes in a single column. In fact, following that path will increase the complexity of your code and will limit your capabilities.
Here's some of the possible issues you'll face:
You will not be able to add indexes to speed up lookups of records with a specific attribute value, or to make filtering and sorting efficient
You will not be able to query a specific attribute value
You will not be able to sort by a specific column value
The values will be stored and represented as Strings, rather than Integers
... and I can continue. There are no advantages, only disadvantages.
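The fourth point is easy to demonstrate with the umbrella string from the question:

```python
# Every field parsed out of the umbrella column is a str, and string
# comparisons go character by character rather than numerically.
umbrella = "2015,617293,748273"
release_year, box_office, budget = umbrella.split(",")

print(type(budget).__name__)  # str
print("9" > "10")             # True -- lexicographic, not 9 > 10
print(int(box_office) + int(budget))  # numeric work needs explicit casts
```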
Agree with comments above, as an example try to use pg_column_size() to compare results:
WITH test(data_txt,data_int,data_date) AS ( VALUES
('9999'::TEXT,9999::INTEGER,'2015-01-01'::DATE),
('99999999'::TEXT,99999999::INTEGER,'2015-02-02'::DATE),
('2015-02-02'::TEXT,99999999::INTEGER,'2015-02-02'::DATE)
)
SELECT pg_column_size(data_txt) AS txt_size,
pg_column_size(data_int) AS int_size,
pg_column_size(data_date) AS date_size
FROM test;
Result is :
txt_size | int_size | date_size
----------+----------+-----------
5 | 4 | 4
9 | 4 | 4
11 | 4 | 4
(3 rows)
zgrep to search pattern in log
I have a log file, in this pattern:
IP - - [date] "command" response time
I want to search the log for the lines which contain the ip:<IP_ADDRESS> and part of the command: "/api/con"
So this is a correct result:
<IP_ADDRESS> - - [05/Nov/2015:03:48:25 -0500] "GET /5.0/api/con/1" 20:01
How can I do it?
Try something like:
zgrep "^<IP_ADDRESS>.*\/api\/con" access.log.*.gz
assuming of course that your files are something like access.log.10.gz etc. (change the name of the file if this isn't the case).
Return the largest value of a given element of tuple keys in a dictionary
I have a dict with tuples as keys, and I want to obtain the largest value available, say for the second element of the tuple keys currently in the dictionary. For example, given:
my_dict = {('a', 1):value_1, ('b', 1):value_2, ('a', 2):value_3, ('c', 3):value_4, ('b', 2):value_5}
so the largest value for the second elements of the keys is 3.
What is the fastest way to derive this value?
Do you mean you want the dictionary value associated with the key whose second element is largest, or you want the second element of the key tuple itself? That is, in your example, do you want 3 or do you want value_4?
@BrenBarn, I want the second element of the key tuple itself, so in my example, I want 3.
Either:
largest_key = max(my_dict, key=lambda x: x[1])
Or:
from operator import itemgetter
largest_key = max(my_dict, key=itemgetter(1))
According to DSM, iterating over a dict directly is faster than retrieving and iterating over keys() or viewkeys().
What I think Ms. Zverina is talking about is converting your data structure from a dict with tuple keys to something like this:
my_dict = {
'a': {
1: value_1,
2: value_3
}
'b': {
1: value_2,
2: value_5
}
'c': {
3: value_4
}
}
That way, if you wanted to find the max for all keys starting with 'a', you could simply do:
largest_key = max(my_dict['a'])
At no extra cost. (Your data is already divided into subsets, so you don't have to waste computation on building subsets each time you do a search).
EDIT
To restrict your search to a given subset, do something like this:
>>> subset = 'a'
>>> largest_key_within_subset = max((i for i in my_dict if i[0] == subset), key=itemgetter(1))
Where (i for i in my_dict if i[0] == subset) is a generator that returns only keys that are in the given subset.
It looks like iterating over my_dict itself is faster than my_dict.viewkeys() which is itself faster than my_dict.keys(), or so says my 2.7.2 timeit result.
@JoelCornett, what if the first element of the key is given, say a in my example above, is there anything we can do to speed things up by restricting the search to a subset of keys?
@MLister: Searching a smaller subset would definitely be faster, but you have to take into account the cost of constructing the smaller subset on demand.
@MLister knowing the first element doesn't help in your structure, as looking for 'a' (a string) is more expensive than looking for the biggest integer. It could help if you stored your items in a dictionary of lists, indexed by the first value in the tuple.
@JoelCornett, so how can we construct this smaller subset of keys efficiently? I cannot quite see how max function would work for us in this case...
@MariaZverina, thanks for the suggestion. Could you please elaborate a bit more on the 'dictionary of lists' part? Are you suggesting that instead of using tuples as keys, I should use lists as keys? And what do you mean by building an index of a dictionary?
@MLister: You can't use lists as keys (and you shouldn't) because they are mutable objects and their hash values could change.
@JoelCornett, thanks for the clarification. so basically we need to deconstruct the tuple keys and use a nested dictionary instead?
@MLister: It depends. If you plan on doing these kinds of restricted searches often, than I would say yes. If not, I would consider whether keeping the data in this form makes other common operations on the data more difficult.
@MLister +JoelCornett Thank you - that's exactly what I was thinking off :)
@JoelCornett, actually, I would rather keep the tuple key construction. So what would be the fastest way to get the key with the largest value for the second element of the tuple, among those whose first element is a given value (say a)? So I am not searching among all keys, but only among the ones with a as the first element.
If you have no additional information about any relation between elements in the set (like the keys of the dictionary in this case), then you have to check each element => complexity O(n) (linear); the only improvement is to use some built-in function like max
If you need quite often to obtain (or pop) max value then think about different structure (like heap).
If you are looking for the largest value, i.e. 3, use this:
print max(my_dict.keys(), key = lambda x: x[1])[1]
If you are looking for the largest value from the dict, use this:
my_dict = {('a', 1):'value_1', ('b', 1):'value_2', ('a', 2):'value_3', ('c', 3):'value_4', ('b', 2):'value_5'}
largest = max(my_dict.keys(), key = lambda x: x[1])
print my_dict[largest]
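Putting the approaches above together, a runnable Python 3 sketch (the value strings are placeholders):

```python
from operator import itemgetter

my_dict = {('a', 1): 'value_1', ('b', 1): 'value_2', ('a', 2): 'value_3',
           ('c', 3): 'value_4', ('b', 2): 'value_5'}

# Largest second element over all tuple keys
largest = max(my_dict, key=itemgetter(1))[1]                  # 3

# Same search restricted to keys whose first element is 'a'
largest_a = max((k for k in my_dict if k[0] == 'a'),
                key=itemgetter(1))[1]                         # 2

# The dict value stored under the overall largest key
value_of_largest = my_dict[max(my_dict, key=itemgetter(1))]   # 'value_4'

print(largest, largest_a, value_of_largest)
```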
If an aerofoil is placed in a narrow wind tunnel, how will that affect the lift it produces?
This is NACA 0000 aerofoil:
This is just a straight line - roughly what a paper plane's aerofoil section looks like (I chose this aerofoil for its simplicity). If we put this aerofoil in a wind tunnel, then this is my approximation of what the air-flow around it might look like:
(Infinite wind tunnel - ignore ground effect)
$$(AoA=12°)$$
The air-flow isn't ideal; there is flow separation already at the leading edge. But still, the aerofoil will produce a net positive lift.
But what happens if we put this aerofoil in a narrow wind tunnel?
The pressure distribution appears to have been reversed: high pressure above and low pressure below.
This is explained as follows: through a convergent duct, the air-flow velocity (and thus dynamic pressure) increases. This is accompanied by a reduction in static pressure, such that the total pressure remains constant (the opposite is true for a divergent duct).
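In symbols (standard notation, introduced here only for illustration), for steady incompressible flow between two stations 1 and 2 of a duct:

$$A_1 v_1 = A_2 v_2 \qquad \text{(conservation of mass)}$$

$$p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2 \qquad \text{(Bernoulli)}$$

So if $A_2 < A_1$, continuity forces $v_2 > v_1$, and Bernoulli then requires $p_2 < p_1$.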
Is this true? And so, will the aerofoil produce a downforce instead of lift in the narrow wind tunnel?
This is a case where working backwards from actual wind tunnel data may be more helpful.
A higher pressure stagnation zone will form underneath the plate. This is the ground effect, increasing lift. Air velocity will increase at the exit of this area (at the trailing edge).
A lower pressure area forms above the plate, with a lower (or even reverse) air flow at its exit. This will increase lift.
The Formula 1 car is producing downforce (or a reverse ground effect).
So, Bernoulli can't explain everything. Local pressure may be a more accurate way to predict lifting effects.
However, if a more smoothly cambered airfoil is chosen, interference with upper airflow may indeed reduce lift, especially at higher Reynolds numbers.
Read on about biplane wing interference effects.
"A higher pressure stagnation zone will form underneath the plate." - but the pressure never increases at the entrance of a Venturi, it only decreases, there is no stagnation (based on my limited knowledge of the Venturi effect). What would make this setup different from a Venturi?
@AdityaSharma it may be that the trailing edge is where velocity increases, and pressure decreases. The upper wall may indeed reduce lift (even with the flat plate). I think the upper wall may cause a stall at a lower AoA.
"It may be that the trailing edge is where velocity increases, and pressure decreases." - In my opinion, that shouldn't be the case. To justify my opinion, I would first like to summarise why the velocity increases and pressure decreases in a Venturi. There are two principles in action: 1) Law of conservation of mass, and 2) Bernoulli's principle. Law 1 tells us that mass flow rate (of an ideal fluid) through a streamtube will always be equal at every section of the tube. This implies that if the section area decreases, velocity must increase for the mass flow rate to remain constant.
Law 2 tells us that the increase in velocity produces an increase in dynamic pressure. Now, the static pressure must reduce for the total pressure to remain constant. Based on this description of the venturi effect, the increase in dynamic pressure and decrease in static pressure must take place gradually as the section area reduces, not suddenly at the choke (trailing edge).
@AdityaSharma: there's a misunderstanding here. In a venturi tube the whole airflow flows inside it but in your picture "half" of the air is going over the plate and the other half under it: that's definitely not a venturi. Due to the blockage effect of the bottom wall I'd expect a lower speed on the belly i.e. a higher pressure and therefore lift.
@sophit yes, what is pictured seems more like a throttle (air inlet) plate on a carburetor. That swirly area above and behind the plate trailing edge might be a good place to inject fuel.
@RobertDiGiovanni: yep, "butterfly valve" might be a good starting point to understand that airflow
@sophit you say that it cannot be considered a Venturi since not all of the flow is entering it. Admittedly, I do not understand how the flow would behave at the entrance of the "butterfly valve", but after half of the flow has entered the "duct" (formed by the "butterfly valve" and the wind tunnel walls), what would prevent the duct from behaving like a Venturi?
@AdityaSharma look at a stone in a river. The momentum of incoming fluid creates a rise in pressure when any obstacle slows it. Consider each half of a Venturi as an obstacle.
@RobertDiGiovanni But then what we observe in practice is that the static pressure decreases and the velocity increases through the Venturi. The same will happen around the stone. The velocity of the water flow must increase in order for a constant mass flow rate to be maintained at each section of the river. If the flow doesn't speed up through a constriction, that would violate the law of conservation of mass. You are right that the static pressure will be higher than free flow at the stagnation point, but then it will be lower than free flow around the stone.
@AdityaSharma then there is an alternative: mass flow is not constant. Water can "back up" behind a dam. Pressure can rise with air. Interestingly, keep the walls, but remove the two Venturi halves. Now glue the long ends back together. Looks like an airfoil. Now put that back in the tunnel.
@RobertDiGiovanni "Then there is an alternative: mass flow is not constant" But then what about the law of conservation of mass? If the flow is backing up behind the dam, it's also backing up at the source, and at all points between the dam and the source. Even after the dam, the mass flow rate will be the same as that before the dam. If mass flow rate is altered at one point, it will be altered at all points within the stream (unless there is change in density resulting from change in pressure, an effect which is generally negligible in aerodynamics at speeds below Mach 0.3).
@AdityaSharma while the pressure is building, the mass flow in the system may not be constant. But once the pressure is high enough to force sufficient flow through the venturi, then mass flow becomes constant: flow in = flow out. But we know (with our kitchen sink) that if inflow is too great, and the drain too small, it overflows! Isn't a wind tunnel analysis one where steady state is reached? I think this is why a higher pressure stagnation area exists.
Let us continue this discussion in chat.
@AdityaSharma because the flow is subsonic: what happens in one point downstream depends on what happen upstream. If upstream half of the flow goes somewhere else then you have already lost the definition of Venturi and whatever happens downstream cannot be anymore related to a Venturi.
@sophit Then perhaps even Venturi tubes cannot be considered "Venturi", since some of the upstream flow enters them and some does not. Let's not call my apparatus a Venturi then; we'll call it a convergent duct instead. So how will the flow behave through this convergent duct, as far as its static and dynamic pressures are concerned?
You need to consider more than just the static pressure. In the usual demonstrations of the Bernoulli effect, the pressure tap is placed perpendicular to the streamlines, which means it only "feels" the static pressure, but an object placed directly in the middle of the streamlines acts as a "stagnation point," and feels an additional pressure associated with actually forcing the streamlines to take a different path.
A small hole perpendicular to the streamlines can be used to measure the static pressure. A small hole placed parallel to the streamlines at a point where the streamlines are forced to stop can be used to measure the stagnation pressure. In this case, the airfoil is neither parallel nor perpendicular to the streamlines, and the resultant pressure will be somewhere in between the two.
Yes, but what will be the net effect? will the lift reduce due to change in pressure distribution? will it stay same? (or will it increase due to ground effect?)
@AdityaSharma The lower wall should increase the lift due to ground effect. The upper wall should decrease the lift. The net effect probably depends on the actual distances.
Alright, so if we remove the lower wall, the lift should reduce. By the way, I believe that the upper wall will also contribute towards "ground" effect, since it is also limiting the downwash, just like the lower wall (think of a Formula 1 car that uses ground effect to improve downforce - this is a similar case, just inverted). So if the F1 car with the same setup achieves an increase in lift (downforce), why would our setup suffer from a decrease in lift?
@AdityaSharma Hmm, I think you're right, actually. (The bit about the upper wall I was not so sure about to begin with). In that case, both the floor and the ceiling would increase the lift and it makes the overall effect much easier to figure out.
Yes, that is exactly what I thought when I first came across this problem, but then I thought: How would the apparent "reversal" of pressure distribution (as explained by Bernoulli's principle) affect the lift? - This is something I was never able to understand.
Can not find link to download OpenSolaris source code
I want to understand how the OpenSolaris ptools (process tools) work: how exactly pstack, pmap, pargs etc. work. But I can't find any link to the full source code; I can only find an online version of the source. Any advice on where I can download the source code for offline use?
I heard that OpenSolaris is not "open" anymore? a quick google for "oracle opensolaris" shows a lot of results like this http://blogs.computerworld.com/16741/oracle_dumps_opensolaris
Possibly deleted answer now from @JDoe: Opensolaris with source is available here.
Like Kristof Provost mentioned, the official source for the code is:
<EMAIL_ADDRESS>
Like you said, the source tarballs are now deprecated.
and I can't install Mercurial :(
? But you should have access to some machine where you can? If not, another possibility would be a live CD with mercurial installed, for example the excellent GRML.
Beside that, I cloned the repository for you ;-) You can find it under: http://solaris.oark.org/usr/src/. What you are looking for is the directory http://solaris.oark.org/usr/src/cmd/ptools/. wget should now do the job :-)
Note: I will delete this cloned repository in the next few weeks...
Have fun.
Will it be possible for you to share a tar of that folder? I don't want to overload your server with lots of download requests ;)
Its not problem ;-) the webserver scales fine ;-)
ok. Starting now :).
@echox: I am done with the download; if you want, you can remove the repository. Thanks a lot for your efforts to copy and share the repository on your server. :)
thx for the response, no problem. Since we all are using linux/unix such 'tasks' are no big thing ;-)
hg.opensolaris.org no longer exists. Looks like we now have to either look forks (Illumos) or mirrors ( such as https://github.com/kofemann/opensolaris ).
You can download the illumos code from GitHub. Here is the link. You can also download it as a zip file.
Oracle quickly closed down the OpenSolaris project after taking over Sun.
Because of the previously used open source license, the code is still available.
As of 2023, there is only one Mercurial clone of the old Mercurial repository online:
https://sourceforge.net/p/schillix-on/schillix-on/ci/b23a4dab3d500aa3e57159dfbf2b3f9a5bbf4bd6/log/
NB: that repository isn't just a mirror, but a fork, i.e. after the end of the official SUN.com OpenSolaris history the 'SchilliX' development history follows.
However, one can still hg clone that repository like this:
hg clone http://hg.code.sf.net/p/schillix-on/schillix-on schillix-on-schillix-on
To view and navigate to the end of the OpenSolaris history:
hg log -r b23a4da -f
hg checkout b23a4da
Alternatively, one can checkout the nearest tag, i.e. schillix-on_00 and go back a few commits.
The last genuine OpenSolaris commit is:
Sukumar Swaminathan
b23a4da
2010-08-18
6973228 Cannot download firmware 2.103.x.x on Emulex FCoE HBAs
...
There is also an archive of the LSARC/PSARC documents previously published on opensolaris.org.
In general, one can also consult the current illumos source code browser (-> illumos-gate) as illumos is based on the last available OpenSolaris. See also the Github mirror.
A direct link to the illumos github history, i.e. where OpenSolaris development stopped and the first illumos commit was added:
https://github.com/illumos/illumos-gate/commits/master?since=2010-08-18&until=2010-08-18
In the meantime, Illumos development continued, thus some code might include bug fixes and other improvements. But in general, the code should still be very close to the last OpenSolaris state of the art because illumos development resources are quite limited.
When OpenSolaris was still alive, Sun regularly also published open sourced man page archives. The illumos equivalent is a browsable man page site and there are browseable HTML repositories of the last 2009.06 OpenSolaris man pages dump:
https://www.unix.com/man-page-opensolaris-repository.php
https://manpath.be/osol
github.com/nxmirrors is gone for some reason, but I found https://github.com/kofemann/opensolaris .
@mtraceur Hm, many of these old links are gone. I'm wondering whether this is just normal bit-rot or the result of some lawyers bullying people. However, the kofemann mirror is incomplete, i.e. its last commit is from Feb 4, 2010 whereas the other mirrors went up to Aug 18, 2010. You still see those commits in the illumos repository: https://github.com/illumos/illumos-gate/commits/master?since=2010-08-18&until=2010-08-18 - i.e. the last commit from Sun is the one I've quoted in my answer, the newer ones contain non-Sun.com review/signed/approve tags.
@mtraceur found a good Mercurial mirror, updated some links and added some notes.
Get The Source
It's possible you'll need to use Mercurial to get it.
I just saw the following comment: "Source tarballs have been deprecated in favour of the onnv project's Mercurial repository." And I can't install Mercurial :(.
OpenSolaris + source code available here
Download it from the main download page.
Edit (2013, 3 years later): this links to Solaris 11 download. OpenSolaris is no more; you should go to one of the forks, like IllumOS, if you want source material.
I checked that page but it seems to have only link to live cd and some other Consolidations. Cant find link to source archive. Can you please post the link to source.
How can I move my Form Processor configuration between two sites and/or reorder my actions?
I see an Export link when I go to Administer menu » Automation » Form Processors, but it seems like you need to write an extension to move the form between sites. Is this my only option?
As a related question, can I reorder my actions?
I deleted my old answer because there's a much simpler one now: Use the Export link next to the processor you want to copy, and use Import Processor on the other site to bring it in.
If you want to make a copy on the same site for testing purposes, open the exported file and change the name and title values at the top. Note that the name can only have lowercase letters and underscores, whereas the title can be whatever you want.
Reordering actions is a built-in feature now and doesn't require any trickery.
good to know Jon G could answer your question - suspect not many other folk could
Angular cli, overwrite the command that gets run by 'ng e2e'?
I'm trying to run Angular's e2e tests against an instance of the application ON A DIFFERENT SERVER than my local machine.
So to be clear, I'm not testing my local code.
I just need to run protractor without the Angular build steps, because it's a waste of time since the code I'm testing is on another server. Unfortunately, the angular.json file throws an error if I modify or remove the following line:
"builder": "@angular-devkit/build-angular:protractor",
I already have a solution for this, but it's long winded and I'd like to be able to not change how my teammates are running tests from their shells:
node node_modules/protractor/bin/protractor e2e/protractor.conf.js
I have two thoughts:
Write an npm script which runs this command (what I'll likely end up doing)
Find out how to overwrite what ng e2e does. If I can run the more complicated command here, it'll save productivity and feedback time.
I'm on Angular V7.
Is overwriting ng e2e so that it executes node node_modules/protractor/bin/protractor e2e/protractor.conf.js instead possible?
Yup. I would do #1. That makes sense to update your package.json
"scripts": {
"protractor": "protractor e2e/protractor.conf.js"
}
and then just run npm run protractor. The e2e command is also downloading chromedriver, the selenium jar file, and maybe geckodriver? with webdriver-manager. If you want that as a pre-step:
"scripts": {
"protractor": "protractor e2e/protractor.conf.js",
// just download chromedriver and the selenium jar
"preprotractor": "webdriver-manager update --gecko false"
}
It also starts your angular application. If you need to do that, I would just call ng serve and run it in a background process. I hope that helps.
That's basically the conclusion i've come to. at least running locally they can still do something like 'npm e2e' and not have to worry about the gritty details. Thanks for your input!!
Confused about useEffect
I'm building my first Custom React Hook and am confused about what I think is a simple aspect of the code:
export const useFetch = (url, options) => {
const [data, setData] = useState();
const [loading, setLoading] = useState(true);
const { app } = useContext(AppContext);
console.log('** Inside useFetch: options = ', options);
useEffect(() => {
console.log('**** Inside useEffect: options = ', options);
const fetchData = async function() {
try {
setLoading(true);
const response = await axios.get(url, options);
if (response.status === 200) {
setData(response.data);
}
} catch (error) {
throw error;
} finally {
setLoading(false);
}
};
fetchData();
}, []);
return { loading, data };
};
I pass to useFetch two parameters: A url and a headers object that contains an AWS Cognito authorization key that looks like this: Authorization: eyJraWQiOiJVNW... (shortened for brevity)
When I do this, the options object does exist within useFetch, but within the useEffect construct it is empty. YET the url string is correctly populated in BOTH cases.
This makes no sense to me. Might anyone have an idea why this is occurring?
I believe it's an async timing issue. I added options as a dependency in useEffect and now it appears to work. That said, there are now 2 calls to the API endpoint, with the first one returning a 401 error. I guess I can put an if ... then construct around the async function call, though that inherently seems wrong.
The bottom of my code now looks like this:
if (options) {
    fetchData();
}
}, [options]);
However, the function is being called twice from the parent component. I don't yet know if this is because of a flaw in the code I've been discussing or something in the parent component.
As explained in my answer below I'm pretty sure the problem comes from the component using this hook, not the hook itself. Could you provide more code showing how you use it?
Below an implementation of your code showing that it works as expected.
The async/await has been converted to a Promise but should have the same behavior.
"Inside use fetch" is output 3 times:
on mount (useEffect(()=>..., []))
after the first state change (setLoading(true))
after the second state change (setLoading(false))
and "Inside use effect" is output 1 time, on mount (useEffect(()=>..., []))
Since it doesn't work for you this way it could mean that when the component mounts, options is not available yet.
You confirm it when saying that when you put options as a dependency, useEffect is called two times with the first fetch failing (most likely because of options missing).
I'm pretty sure you will find the problem with options in the parents of the component using your custom hook.
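The mount-timing issue can also be seen without React at all. A plain-JavaScript sketch (the names are hypothetical, not React APIs) of how a callback registered once on "mount", like useEffect(fn, []), keeps seeing the mount-time options, while [options] as a dependency would re-register it:

```javascript
// Register an "effect" only on the first render, like useEffect(fn, [])
function render(options, effects) {
  if (effects.mounted === undefined) {
    effects.mounted = () => options; // closure over THIS render's options
  }
  return effects.mounted();
}

const effects = {};
const first = render(undefined, effects);                // options not ready yet
const second = render({ Authorization: 'eyJraWQ...' }, effects);

console.log(first, second); // both undefined: the mount-time closure wins
```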
const axios = {
get: (url, options) => {
return new Promise(resolve => setTimeout(() => resolve({ status: 200, data: 'Hello World' }), 2000));
}
};
const AppContext = React.createContext({ app: null });
const useFetch = (url, options) => {
const [data, setData] = React.useState();
const [loading, setLoading] = React.useState(true);
const { app } = React.useContext(AppContext);
console.log('** Inside useFetch: options = ', JSON.stringify(options));
React.useEffect(() => {
console.log('**** Inside useEffect: options = ', JSON.stringify(options));
const fetchData = function () {
setLoading(true);
axios.get(url, options)
.then(response => {
if (response.status === 200) {
setData(response.data);
}
setLoading(false);
})
.catch(error => {
setLoading(false);
throw error;
});
};
fetchData();
}, []);
return { loading, data };
};
const App = ({url, options}) => {
const { loading, data } = useFetch(url, options);
return (
<div
style={{
display: 'flex', background: 'red',
fontSize: '20px', fontWeight: 'bold',
justifyContent: 'center', alignItems: 'center',
width: 300, height: 60, margin: 5
}}
>
{loading ? 'Loading...' : data}
</div>
);
};
ReactDOM.render(
<App
url="https://www.dummy-url.com"
options={{ headers: { Authorization: 'eyJraWQiOiJVNW...' } }}
/>,
document.getElementById('root')
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.3/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.3/umd/react-dom.production.min.js"></script>
<div id="root" />
Thank you so much for your response! So options is populated with an AWS Access Token header string, which is permanently stored for each session here: const { appStore } = useContext(AppContext); Above the useEffect construct I do this:
var options;
if (appStore.awsConfig) {
options = appStore.awsConfig;
}
It's still confusing to me why the appStore.awsConfig value isn't instantly available ... but it isn't! It appears that the Context API data store has an async nature to it, which I didn't expect.
I don't think context is async. It may be because the session is populated when a higher ancestor mounts. A component mounts when all its children have mounted, so by the time the component using your hook mounts, its ancestors have not mounted yet....
How to Highlight Single Row or Item of Recycler View and Scroll the Highlighted Row to Top of screen
I want to highlight a single row of the RecyclerView, in sync with sound, one after another, and scroll the highlighted row to the top of the screen. This is what I have done:
Here its code:
fun AnimateToSurahAlFeel(recyclerView: RecyclerView, layoutManager: LinearLayoutManager, currentPosition: Int) {
var position: Int = currentPosition / 1000
when (position) {
0 -> {
recyclerView.smoothScrollToPosition(0)
recyclerView.getChildAt(0).isSelected = true
}
4 -> {
recyclerView.smoothScrollToPosition(1)
recyclerView.getChildAt(0).isSelected = false
recyclerView.getChildAt(1).isSelected = true
}
11 -> {
recyclerView.smoothScrollToPosition(2)
recyclerView.getChildAt(1).isSelected = false
recyclerView.getChildAt(2).isSelected = true
}
17 -> {
recyclerView.smoothScrollToPosition(3)
recyclerView.getChildAt(2).isSelected = false
recyclerView.getChildAt(3).isSelected = true
}
21 -> {
recyclerView.smoothScrollToPosition(4)
recyclerView.getChildAt(3).isSelected = false
recyclerView.getChildAt(4).isSelected = true
}
28 -> {
recyclerView.smoothScrollToPosition(5)
recyclerView.getChildAt(4).isSelected = false
if (recyclerView.getChildAt(5) != null)
recyclerView.getChildAt(5).isSelected = true
}
}
}
In the function, currentPosition is the media player's current position
The problem with this code is:
In the screenshot, rows 4 and 5 are currently not visible. When highlighting rows 4 and 5 the app crashes with a NullPointerException; to my knowledge these two rows have not yet been created, which is why
recyclerview.getChildAt(4) or recyclerview.getChildAt(5) returns null, and that causes the app to crash.
Now
How to fix the app crash where recyclerview.getChildAt(4) or recyclerview.getChildAt(5) returns null? Note that getChildAt(position) only covers the currently laid-out rows, so the crash at recyclerview.getChildAt(5) will occur anyhow; but I want all n rows, because I want to highlight every row
How to scroll the highlighted row to position 0 (at top)
i.e. row 0 goes up off the screen and row 1 takes its position, and so on...
I want to achieve something like this: the highlighted row is at the top and will go off the screen when another row is highlighted
You need to give the View time to bind. For example, you can post a delayed action:
....
17 -> {
    recyclerView.smoothScrollToPosition(3)
    Handler(Looper.getMainLooper()).postDelayed({
        recyclerView.getChildAt(2)?.isSelected = false
        recyclerView.getChildAt(3)?.isSelected = true
    }, 500)
}
....
But I strongly recommend you use some state collection which saves and handles the state of your visible views.
From MainActivity I am already calling this method every 400 ms:
mUIRunnable = object : Runnable{
override fun run() {
if(mHandler!=null){
mHandler!!.postDelayed(this,400)
SurahAnimator.AnimateToSurahAlFeel(recyclerViewSurahContent,linearLayoutManager!!,mediaPlayer!!.currentPosition)
}
}
}
mHandler!!.post(mUIRunnable)
This is because recycler views don't have all the views inflated, but only the visible ones. This is by design and should not be tinkered with. Personally, I think you should use the recycling functionality.
You need to make the selected state part of your model in the adapter - the items in the adapter. Let's say this is called RowItem, so in your adapter you'd have a list of RowItems for example. Aside from the text in both languages, you need to add the selected state too. Then it's just a matter of getting the list's adapter, setting the correct position to selected and deselecting the ones you want deselected.
For example, in your adapter:
fun select(position: Int) {
data[position].selected = true
notifyItemChanged(position)
// deselect all other positions that you don't want selected
}
When you bind the view holder you could do then:
override fun onBindViewHolder(viewHolder: ViewHolder, position: Int) {
    val item = data[position]
    viewHolder.itemView.isSelected = item.selected
    // take care of the rest of the views
}
data would be a list where you store your RowItems
Now you can scroll with no problem and set the item to selected. Once the view is visible in the recycler view, the adapter will set the correct state.
It's fair to say I'm guessing a bit since there's no adapter code in your question, but the idea I think it's easy to understand. You need to change the state in the adapter and let the implementation of the recycler view handle it for you. After all the purpose is to get the recycler view to recycle your views based on the models adapted by the adapter.
Remember you can always get your adapter from the recycler view itself. In the end you can do something like this:
...
0 -> {
recyclerView.smoothScrollToPosition(0)
(recyclerView.adapter as MyAdapter).select(0)
}
Here MyAdapter would be the class name of your adapter
For the scrolling part you can take a look at this
At the time of item inflation in the adapter I don't want to select any item. In the screenshot I have two buttons, play and stop; when I click the play button I want to select items according to the recitation of the Quran.
When I click on the play button, the code below is called:
mUIRunnable = object : Runnable{
override fun run() {
if(mHandler!=null){
mHandler!!.postDelayed(this,400)
SurahAnimator.AnimateToSurahAlFeel(recyclerViewSurahContent,linearLayoutManager!!,mediaPlayer!!.currentPosition)
}
}
}
mHandler!!.post(mUIRunnable) .....
and that will call the Above method AnimateToSurahAlFeel()
I don't understand why any of what you said prevents you from doing what I said. At the time of inflation just don't select anything. When onBindViewHolder gets called, then select them as needed. This method gets called as the views are shown; that's why selecting them from your animation (like you already have) and scrolling to them would trigger calls to onBindViewHolder. The difference is that now the adapter knows whether they have to be selected or not. This is how you're supposed to use recycler views.
| common-pile/stackexchange_filtered |
Can we add other password service like password hash as backup for ADFS?
I have configured ADFS password authentication. Now I have a question: if ADFS goes down for any reason, how do I cope with this situation? What steps do I need to take to add another password authentication service as a backup for ADFS?
You don't, ADFS is now your source of truth for authentication. You need to build your ADFS infrastructure to be resilient to down time.
You mean to say that either I have to build the ADFS server with high redundancy, or else I have to manually change the sign-in method from AD Connect to Password Hash in case the organization faces a big network disaster, so that at least remote users can access resources... right?
Correct. When you made the choice to use ADFS you added a reliance on your infrastructure to your auth process. It’s now up to you to make that reliable.
thank you very much.....
| common-pile/stackexchange_filtered |
Word/phrase to fill the blank: "I could see my vague reflection on the misty window ______ my surroundings."
I'm writing a short story, and here's where I'm stuck over word-choice:
I peered through the window with the slick navy blue curtains, swinging to and fro to the movement of the minibus, blocking my view to some extent. I could see my vague reflection on the misty window ______ my surroundings.
You're on a minibus. It's raining—just a light shower. You look at and through the window pane which is made of glass. Of course, you see your surroundings. But you also see a vague reflection of yourself, as well as the real outside through the glass window.
trees, roads |||| you, your cat, the seats of the bus
(outside bus - surrounding) (window) (inside bus - reflection)
Your reflection's on the top of or over the picture of your surroundings. I want to describe exactly how it's on top of that.
It's floating over the picture of your surroundings? It's lightly pasted on that picture? It's acting like a two-way mirror?
What do you think should fill the blank?
EDIT: I want to convey the lightness or vagueness of the reflections over the surroundings. It's not another layer over your surroundings. Rather it just blends with it, very nicely and very subtly. You don't even notice it if you don't look hard enough.
The shorter the word/phrase, the better—since if too long a word goes before "...my surroundings" it gets hard to understand that your "surroundings" is something that you see on the window, along with your reflection. The "my surroundings" part gets further away from the main clause "I could see my [...]".
So far at one with and watermarking get the closest to what I had in mind.
Love love love the drawing! I feel your frustration but Lambie's deleted his post so folks are going to wonder why you felt the need to illustrate the situation. But keep it. Love the tree label :)
@Mari-LouA Well, at least somebody appreciates my terrible drawing skills. :-)
This is opinionated writing advice. Are you looking for a co-author?
The "surroundings" must include not only what's outside, but what's inside and also reflected in the window. And you can't comprehend both the details of the reflections and the details of the surroundings without changing focus. The question needs to account for those facts, as does the word or phrase used.
Soha -- Have you decided not to assign the bounty award to anyone?
I think the original text was changed: I could see my vague reflection on the misty window ______ my surroundings. The original was not that, as I remember it. I am sitting right now in my house, it is raining, and it is getting dark. I can see a reflection of my room in the window. Basically, like in a mirror. And through that reflection I see the outside. Basically, the image is transparent and the outside can be seen through the reflected image. So, it's the same deal as you and your minibus. The outside images are seen THROUGH the reflection of the inside of the room in the window glass.
In other words, the reflection of the room in the window glass overlays my view of what is outside. "I could see the vague reflection of my surroundings in the misty window glass".
@Lambie You still don't fully understand. And no: the text wasn't changed.
It is a perfect blending of images, excellent. I can well imagine myself sitting in the minibus and conceptualizing the scene.
watermark
transitive verb
1: to mark (paper) with a watermark
2: to impress (a given design) as a watermark
for English Language Learners
: a design or symbol (such as the maker's name) that is made in a piece of paper and that can be seen when the paper is held up to the light
for Students
2 : a mark made in paper during manufacture that is visible when the paper is held up to the light
Fill in the blank with the word watermarking if it appeals to your fancy.
As an alternative, you may try the phrase "in a montage with", or only the participle form "montaging".
That's it! Just the kind of word I was looking for!
Question updated.
I'm not sure this is exactly what you're looking for, but perhaps it will help someone else get the word you want:
superimpose
VERB
[WITH OBJECT]
Place or lay (one thing) over another, typically so that both are still evident.
‘the number will appear on the screen, superimposed on a flashing button’
‘different stone tools were found in superimposed layers’
(From the Oxford Dictionaries)
So, in your sentence, it would be:
I peered through the window with the slick navy blue curtains, swinging to and fro to the movement of the minibus, blocking my view to some extent. I could see my vague reflection on the misty window superimposed on my surroundings.
I just thought of another option: overlay
According to the Oxford Dictionaries:
overlay
VERB
[WITH OBJECT]
Cover the surface of (something) with a coating.
‘their fingernails were overlaid with silver or gold’
1.1 Lie on top of.
‘a third screen which will overlay the others’
This gives:
I peered through the window with the slick navy blue curtains, swinging to and fro to the movement of the minibus, blocking my view to some extent. I could see my vague reflection on the misty window overlaying my surroundings.
Hope this helps!
-----------------------------------------------------------------
I have a few more suggestions, based on recent discussion:
embrace -- Your reflection is there but enveloped by the stronger image of the surroundings. If you wanted to indicate a sense of belonging or feeling at home in the surroundings (or even just an affinity for them), this could work. Your sentence would be:
I could see my vague reflection on the misty window embraced by my surroundings.
Per the Oxford Dictionaries:
embrace
VERB
with object Hold (someone) closely in one's arms, especially as a sign of affection.
‘Aunt Sophie embraced her warmly’
[no object] ‘the two embraced, holding each other tightly’
Accept (a belief, theory, or change) willingly and enthusiastically.
‘besides traditional methods, artists are embracing new technology’
Include or contain (something) as a constituent part.
‘his career embraces a number of activities—composing, playing, and acting’
infuse -- Your reflection has become a part of the view out of the window, infused in your view. Thus:
I could see my vague reflection on the misty window infused with my surroundings.
Per the Oxford Dictionaries:
infuse
VERB
[WITH OBJECT]
Fill; pervade.
‘her work is infused with an anger born of pain and oppression’
1.1 Instil (a quality) in someone or something.
‘he did his best to infuse good humour into his voice’
shadow -- It's not actually you, but a shadow of yourself.
I could see my vague reflection on the misty window, a shadow on my surroundings.
Per the Oxford Dictionaries:
shadow
VERB
[WITH OBJECT]
Envelop in shadow; cast a shadow over.
‘the market is shadowed by St Margaret's church’
‘a hood shadowed her face’
ghost -- Even more abstract, I think I like this best.
I could see my vague reflection on the misty window, a ghost in my surroundings.
I could see my vague reflection on the misty window, a ghost haunting my surroundings.
Per the Oxford Dictionaries:
ghost
NOUN
An apparition of a dead person which is believed to appear or become manifest to the living, typically as a nebulous image.
‘the building is haunted by the ghost of a monk’
[as modifier] ‘a ghost ship’
1.1 A slight trace or vestige of something.
‘she gave the ghost of a smile’
1.2 A faint secondary image caused by a fault in an optical system, duplicate signal transmission, etc.
‘What we saw were clearly ghosts from the static image we'd left on the screen.’
Note especially definitions 1.1 and 1.2.
You get four upvotes and how is this not "writing advice"?? Reflections inside a bus in a window are not superimposed on surroundings. That makes zero sense.
@Lambie - No, it makes perfect sense. Superimposed (or overlaid) is exactly what they are, when "surroundings" is what is outside the window.
@AndyT No, no and no. The sentence says "my surroundings", which leads any informed reader to believe they are inside the bus. Not outside of it. "my vague reflection in the misty window superimposed on my surroundings" cannot mean outside the damn bus. grhhh. :). No wonder I am confused.
@Lambie -- I interpreted surroundings as being the surroundings of the person+bus combined unit. That is, if I'm driving along a mountain road, my surroundings might consist of trees, streams, rocks, deer, etc., not just my car and the crap my kids have left in it. So the window forms a picture of what's outside the bus (the surrounding countryside) and my reflection is indeed superimposed upon that picture. My only problem with superimpose is that it is not as poetic as I suspect the OP would like it to be.
@RogerSinasohn Question updated.
@RogerSinasohn Your interpretation of "surroundings" is exactly right. It's lightly raining, so the phrase "of course you can see your surroundings," must refer to what's outside the bus, a reassurance that the trees, etc. do appear in the window. You're also correct that superimposed is too technical for the mood OP is trying to create.
@Lambie I'm looking at the window, not around me.
You can go with dissolve here.
I could see my vague reflection on the misty window dissolve into my surroundings as a heavy downpour hit the panes.
(I added a part to the sentence to get into that feel. Can be omitted.)
Yeah,really good idea!
In light of the most recent editing of the question, you could use:
merge: 'to blend gradually by stages that blur distinctions'
I could see my vague reflection on the misty window, merging with my
surroundings.
Or you could choose words with a similar 'merging/almost hiding' sense, such as 'coalescing', 'veiled by', 'fused with'.
Because you are writing a story, your word choice here will vary greatly by your writing style and the nature of your character (who is speaking in first person). Other answers here have already given good choices for words that describe the physical phenomenon of your reflection appearing over your surroundings, but you have an opportunity to color your character through the narrator's word choice here.
I peered through the window with the slick navy blue curtains, swinging to and fro to the movement of the minibus, blocking my view to some extent. I could see my vague reflection on the misty window ______ my surroundings.
draped quietly across
shyly coloring
mingling among
dancing through
gazing back from
One approach to the word choice problem here that would work well is to choose an action that echoes the character of your narrator and turn it into a metaphor. There are so many ways to do this that I'm not confident I can pick the best metaphor for your character. Hopefully the examples I've given put you on a productive track.
Alternately, if you really just want to describe the lightness of the reflection, consider "obscured among" or "hidden among".
Couldn't agree any more. +1
I too am taking a few artistic routes, specifically from the world of print-making.
Etched: to outline clearly or sharply; delineate, as a person's features or character. (Though this may not work too well with 'vague'):
I could see my vague reflection on the misty window etched into my
surroundings.
Inscribed: I particularly like the meaning 'to draw within a figure so as to touch in as many places as possible':
I could see my vague reflection on the misty window inscribed on my
surroundings.
Chased: 'To decorate by engraving or embossing':
I could see my vague reflection on the misty window chased onto my
surroundings.
Embossed: 'Having a moulded or carved decoration or design on the surface so that it is raised above the surface in low relief'
I could see my vague reflection on the misty window embossed onto my
surroundings.
Wonderful list of suggestions! But the only problem is the words all refer to an image being inscribed over a surface. That's not how you see reflections on a window surface.
Question updated.
In light of your additional comments (originally in relation to shroud):
I want to convey the lightness or vagueness of the reflections over the surroundings. It's not another layer over my surroundings. Rather it just blends with it, very nicely and very subtly. You don't even notice it if you don't look hard enough.
I offer tinge and bleed over (or bleed into) and have combined them into one answer, preserving my earlier suggestions that led to these two:
tinge
verb: tinge; 3rd person present: tinges; past tense: tinged; past participle: tinged; gerund or present participle: tinging; gerund or present participle: tingeing
colour slightly. "a mass of white blossom tinged with pink"
permeate or imbue slightly with a feeling or quality. "this visit will be tinged with sadness", "his optimism is tinged with realism"
noun: tinge; plural noun: tinges
a trace of a colour. "there was a faint pink tinge to the sky", "the light had a blue tinge to it"
a slight trace of a feeling or quality. "in their sound you'll find punky tinges and folky tinges", "a tinge of cynicism appeared in his writing"
This gives:
I peered through the window with the slick navy blue curtains, swinging to and fro to the movement of the minibus, blocking my view to some extent. I could see my vague reflection on the misty window tingeing my surroundings.
bleed
verb; gerund or present participle: bleeding
(of a liquid substance such as dye or colour) seep into an adjacent colour or area. "I worked loosely with the oils, allowing colours to bleed into one another"
PRINTING (with reference to an illustration or design) print or be printed so that it runs off the page after trimming. "the picture bleeds on three sides"
Leading to:
I peered through the window with the slick navy blue curtains, swinging to and fro to the movement of the minibus, blocking my view to some extent. I could see my vague reflection on the misty window bleeding over|into my surroundings.
shroud
noun: shroud; plural noun: shrouds
a thing that envelops or obscures something. "a shroud of mist"
verb: shroud; 3rd person present: shrouds; past tense: shrouded; past participle: shrouded; gerund or present participle: shrouding
cover or envelop so as to conceal from view. "mountains shrouded by cloud", "a sea mist shrouded the jetties"
This conveys the (partial) concealing of the surroundings by the author's reflection:
I peered through the window with the slick navy blue curtains, swinging to and fro to the movement of the minibus, blocking my view to some extent. I could see my vague reflection on the misty window shrouding my surroundings.
If you want to avoid the "double -ing" (from shrouding and surroundings) you could use the noun form: "...vague reflection on the misty window, a shroud on my surroundings."
veneer
noun: veneer; plural noun: veneers
a thin decorative covering of fine wood applied to a coarser wood or other material. "a fine-grained veneer"
a layer of wood used to make plywood.
an attractive appearance that covers or disguises someone or something's true nature or feelings. "her veneer of composure cracked a little"
verb: veneer; 3rd person present: veneers; past tense: veneered; past participle: veneered; gerund or present participle: veneering
cover (something) with a decorative layer of fine wood.
"a veneered cabinet"
cover or disguise (someone or something's true nature) with an attractive appearance. "he exuded an air of toughness, lightly veneered by the impeccably tailored suit"
Using the "attractive appearance" / "disguise" sense, somewhat poetically, you could have:
I peered through the window with the slick navy blue curtains, swinging to and fro to the movement of the minibus, blocking my view to some extent. I could see my vague reflection on the misty window, a veneer on my surroundings.
(Using the noun form seems more natural here, although you could have "...vague reflection on the misty window veneering my surroundings.").
As an added bonus, you get some alliteration with "vague".
Hmm...nice one. But I want to convey the lightness or vagueness of the reflections over the surroundings. It's not another layer over my surroundings. Rather it just blends with it, very nicely and very subtly. You don't even notice it if you don't look hard enough.
Another thought: "misty window, tingeing my surroundings"?
Wow! Good one! That's more like what I had in mind.
Question updated.
"... set against the backdrop of the surroundings"
Since it's not exactly a "single-word" or "phrase" the question may qualify more as "writing advice" (not sure, though), which is OT.
However, if backdrop is a helpful concept here, then it works.
This works too. But not exactly what I had in mind. It doesn't meet the criterion I set in my question.
Question updated.
Not to be too obvious here, but you did use the word blend in your question and others used it to explain their answers.
Maybe you could use 'blend' itself in the short story, as in
I could see my vague reflection on the misty window, blending with my surroundings.
Or,
I could see my vague reflection on the misty window, blending into my surroundings.
The reason I suggest 'blend' is because that is exactly what your vague reflection is doing. Sometimes it is more effective to use a simple word rather than a 'more poetic' one, if it will help the reader form an accurate 'mind picture' of what you want to convey.
So consider 'blend' and be sure to post a link to the whole short story, if you have published it online, OK!
Note: I later found that @Mari-lou A has already suggested 'my reflection (...) floated and blended into my surroundings' in a comment on August 5th.
I've just deleted the comment before seeing your revision. Sorry!
It's quite OK, @Mari-lou A -- besides, 'floated and blended' seems a good option for OP.
Awesome answer...simplicity is always the best.
Thank you @Soha Farhin Pine -- on the other hand, as shown by the 5 excellent suggestions in the answer that won the bounty: when it comes to creative writing, 2 words are often more expressive than one!
Love your story; love your mini-bus.
1. at one with
I could see my vague reflection on the misty window—at one with my surroundings.
And I can hardly wait to read the rest.
2. barely visible over
I could see my vague reflection on the misty window—barely visible over my surroundings.
Ahhh, taking into consideration your Question update of yesterday, I think I may have just the words.
3. visually whispering to
I could see my vague reflection on the misty window—visually
whispering to my surroundings.
EDIT: I want to convey the lightness or vagueness of the reflections over the surroundings. …just blends with it, very nicely and very subtly. You don't even notice it if you don't look hard enough.
A whisper can be so soft that you don’t even notice it if you don’t listen hard enough.
Whisper: speaking very softly using one's breath without one's vocal cords…
(from Online Dictionary)
If the perception you are describing were sound based rather than vision based, a perfect word to describe it would be, “whispering.”
I think there may not be an English word that is the visual equivalent of, “to whisper.”
But I don’t think the word gods would disallow the repurposing of such a perfectly descriptive word from the audio realm if it were for a good cause.
And what better cause than to poetically communicate “the lightness or vagueness of the reflections over the surroundings. …just blend(ing) with it, very nicely and very subtly. You don't even notice it if you don't look hard enough."
A whisper can be barely audible – the same way your reflection is barely visible.
Online Dictionary
whis•per
ˈ(h)wispər/
verb
gerund or present participle: whispering
speaking very softly using one's breath without one's vocal cords…
Aww. Thanks for the compliments. I really am flattered. The phrase fits in very nicely too. Actually the story was finished a long time ago, I'm currently revising it. I post my stories on sites like Wattpad and ShortStories101.
Question updated.
I think haunting. The image would be like a ghost seen over the view.
Great for setting up foreboding.
Perhaps
"I could see my vague reflection on the misty window festooning my surroundings."
VERB
[WITH OBJECT]
often be festooned with
Adorn (a place) with chains, garlands, or other decorations.
‘the staffroom was festooned with balloons and streamers’
https://en.oxforddictionaries.com/definition/festoon
This would reflect the way your reflection impacts a wide area of what you can see, and also the pleasing effect it imparts.
(This could also be modified in some way, such as '...as it were festooning my surroundings')
I'm going to take some artistic license with this answer since it seems to me that is what the OP is looking for.
For the benefit of literary aesthetics and depending on mood, I would recommend:
Sift
to scatter by or as if by sifting; sift sugar on a cake
I could see my vague reflection on the misty window sifting my
surroundings.
or,
Percolate
to be diffused through
I could see my vague reflection on the misty window percolating my
surroundings.
or,
Soft
to make soft or softer
I could see my vague reflection on the misty window soften on my
surroundings.
Other possible options are: Deliquesce, Relent, Yield, Surrender, Cede, Relinquish, Concede
EDIT: Notice you seem to be looking for something that is not literal to the circumstances but a romanticized version of it. As such, and again considering literary aesthetics (always subjective, but an opinion is an opinion) and the various possible moods of the scene you wish to convey (harmony, apathy, anxiety, etc.):
Coalesce (neutral, apathy)
1: to grow together The edges of the wound coalesced.
2: to unite
into a whole : fuse
3: to arise from the combination of
distinct elements
or,
Entangle (bad, anxious)
1: to wrap or twist together : interweave
2: to involve in a
perplexing or troublesome situation;
or,
Cleave (firm, serious)
to adhere firmly and closely or loyally and unwaveringly
or,
Wed (romantic)
1: to take for wife or husband by a formal ceremony : marry
2: to join in marriage
Espouse (romantic)
to take up and support as a cause : become attached to
or,
Weave (neutral)
to interlace especially to form a texture, fabric, or design
Loved the drawing BTW.
Thanks. :-))) Loved your answer, as well. Interesting suggestions! Definitely will keep in mind! If nothing good comes, I'll accept your answer.
Question updated.
@SohaFarhinPine I've updated the answer but notice that by transforming your question into something opinion based its a matter of time before the moderators close it or put it on hold.
I've discovered that on the contrary, it was a matter of time before the moderators protected it.
@SohaFarhinPine My mistake I guess.
No problem. It's fine. You actually misunderstood me. Your update was based on this understanding. You didn't get me. Those words you suggested were "romaticized version of [the circumstances]", but I actually wanted something that is literal to the circumstances.
Okay. This is essentially a creative writing question, not just looking for a synonym. I think vague and misty are vague and misty. You want to capture the image in a way that hasn't been done before. Psychologically, you're trying to blend the character with the exterior world.
I could see my reflection floating as if within the glass that harbored the outside world.
You need something that says the purpose of the glass is to allow the outside world to enter; the character imposes her reflection on that world.
Your interpretation is absolutely right. Loved it!
| common-pile/stackexchange_filtered |
Which representation describes the composite Hilbert space?
Very often in the standard textbooks on quantum mechanics, one finds that the joint Hilbert space of two systems is given by the tensor product of the individual Hilbert spaces. That is, if $H_1$ and $H_2$ are the Hilbert spaces associated with systems $S_1$ and $S_2$, then the composite Hilbert space of the entire system is given by $H_{1} \otimes H_{2}$, where $\otimes$ is the tensor product as defined here.
$\textbf{My question:}$ Under what conditions can a direct sum of two Hamiltonians, $H_1 \oplus H_2$, be used to represent the Hamiltonian on the composite Hilbert space for the entire system?
It looks like you are thinking of the Dirac-Nambu picture. Of course you could apply Fock Functor.
did you mean to write "*composite Hilbert space"? Otherwise, what's the Hamiltonian got to do with the rest of the post?
@glS gave a fine answer, but given who is asking, would be remiss to not answer with some functoriality.
In section 4 of Twisted Equivariant Matter, the Dirac-Nambu space is defined. For a free fermion Hilbert space based on $(\mathcal{M},b)$ with a symmetry $(G,\phi)$ where $\phi$ encodes anti-linearity. Take $G=e$ if you want.
$$
H_{DN} \equiv \mathcal{M} \otimes \mathbb{C}
$$
A choice of complex structure on $\mathcal{M}$ gives a Hermitian structure on $H_{DN}$.
In the particle-hole picture $H_{DN} = V \bigoplus \bar{V}$ is the 1-particle and 1-hole.
The amalgam of two systems is then the direct sum $H_{DN,1} \bigoplus H_{DN,2}$. This is easily visualized as one-particle or one-hole of different types.
Fock: On objects, send a vector space $V$ to $\bigoplus_{i=0}^n \Lambda^i V$.
$\Lambda (V \bigoplus W) \simeq \Lambda (V) \otimes \Lambda (W)$. This is because above was talking about free fermions, so that's why alternating not $Sym$.
You can think as combine the systems as usual by tensor product, but when you apply the functor that gets you back to 1-particle/hole, you are picking out degree one summand which has a direct sum instead.
You may also like http://math.ucr.edu/home/baez/photon/tensor.htm
What you are modelling when you use a tensor product is a space that can accommodate tuples of basis states of different kinds.
For example, say you want to describe a three-level system. You can use a three-dimensional Hilbert space $\mathcal H$ to accommodate the possible states of this system.
But what if you have two three-level systems, each one of which can be in one of its three available states? In this case you have nine possible basis states ($00$, $01$, $02$, $10$, $11$, $12$, $20$, $21$, and $22$), and thus need a nine-dimensional Hilbert space. As it happens, $\mathcal H\otimes\mathcal H$ has just the right dimensions (you could also use any other nine-dimensional Hilbert space, only it would make the notation more awkward when dealing with local operations).
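The underlying dimension counting, which is what makes the tensor product the right choice for composite systems, can be written out explicitly:

```latex
\dim(\mathcal H_1 \otimes \mathcal H_2) = \dim(\mathcal H_1)\,\dim(\mathcal H_2),
\qquad
\dim(\mathcal H_1 \oplus \mathcal H_2) = \dim(\mathcal H_1) + \dim(\mathcal H_2).
```

For the two three-level systems this gives $3\times 3 = 9$ for the tensor product, matching the nine basis states listed above, while the direct sum grows only additively.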
On the other hand, $\mathcal H\oplus\mathcal H$ is a six-dimensional space, with basis the union of the bases of the two copies of $\mathcal H$ (see also this post over at math.SE). You could write this basis as
$$\{|0\rangle,|1\rangle,|2\rangle,|0'\rangle,|1'\rangle,|2'\rangle\},$$
denoting with $|i'\rangle$ the $i$-th basis element of the second copy of $\mathcal H$.
Clearly, this does not describe a system that is obtained by combining multiple elementary systems. Rather, it can be used to describe how a high-dimensional system is "composed" of smaller-dimensional ones.
Indeed, one can always think of an $n$-dimensional Hilbert space as the direct sum of $n$ copies of one-dimensional spaces.
The direct sum is, therefore, what you always do "under the hood" when you build up higher dimensional spaces from lower dimensional ones.
Similar reasoning applies to Hamiltonians or other operators. As an example, if you have a five-dimensional space $\mathcal H$, and two operators $A_1$ and $A_2$ operating on three- and two-dimensional systems, respectively, then $A_1\oplus A_2$ is a valid operator acting on states in $\mathcal H$. This represents an operation which does not correlate the first three and the last two modes (because of the block structure of $A_1\oplus A_2$).
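The block structure referred to here can be made explicit. With respect to a basis in which the first three vectors span the domain of $A_1$ and the last two that of $A_2$,

```latex
A_1 \oplus A_2 =
\begin{pmatrix}
A_1 & 0_{3\times 2} \\
0_{2\times 3} & A_2
\end{pmatrix},
```

and the zero off-diagonal blocks are exactly what guarantees that the first three and the last two modes never mix under this operator.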
| common-pile/stackexchange_filtered |
Can I write code in SOASTA CloudTest? or, the right way to test a RESTful service in CloudTest
In my last job, I did load testing using Gatling. I loved it.
In my new job, they have Soasta CloudTest. At first blush, it looks like a nice tool.
However, all I can find is information about record-and-playback. I am not a fan of record-and-playback for many reasons, which I will not reiterate here.
But just because I can't find anything other than record-and-playback doesn't mean it can't do it. It just means that the tool is best known for its record-and-playback capabilities, as that is a selling point when marketing to novice testers that do not have a development background.
My question is, can I do code-based development in CloudTest? Can someone point me to documentation on how to do it?
Thanks!
I am a performance engineer at SOASTA, the makers of CloudTest.
CloudTest is designed for script development to take place in its UI. You can manually create web service requests there, but that still takes place inside of the interface. I don't think you can move exclusively to a text editor.
A few CloudTest abstraction techniques that may be relevant to what you want to accomplish:
Chaining of smaller test script objects for code encapsulation/re-use in and across script groups and scenarios.
JavaScript components written/maintained in one place, then reused and referenced from multiple scripts.
Java Custom Modules: CloudTest scripts call Java JAR files using JavaScript calls.
Flow control with dynamic properties, based on server responses, JavaScript code, or external(seed) data (csv file or database query).
Tools for testing in CI/CD pipelines, referencing already prepared test scenarios. Once those are built, they can be executed from command line scripts.
There is a community site at cloudlink.soasta.com with documentation and videos on how to use these techniques. If you'd like to discuss any of this in more detail, I would be happy to connect directly.
Sadly, I was afraid that would be the case. Which means I am going to push for using Gatling instead of CloudTest.
sudo issues - not gaining proper permissions
I have a bash script that will log on to a remote device and copy a file. It works perfectly from the command line, either when logged in as the user (support) or by running:
sudo -u support -i '/opt/IIS/getScreenShot' <parameter>
The support user has keys set up on every device so that the password is not asked on the scp command in the getScreenShot bash script.
I now need for this script to be able to be called/run by Apache. I have a PHP script that executes the following:
$output = shell_exec("sudo -u support -i '$command' $serial");
where $command is /opt/IIS/getScreenShot.
The command will start running (outputs back to the browser). Commands such as if [ ! -d /tmp/$1 ]; or mkdir -p /tmp/$1 do not seem to execute inside of the bash script if it is called through the Apache server and I am not sure why.
I would have thought the sudo command would have run it as the support user just as it did from the command line but it isn't. Ultimately I need for /opt/IIS/getScreenShot to be able to run from the command line and from the Apache call. Any suggestions?
Edit:
sudoers entry:
apache ALL=(ALL) NOPASSWD: ALL
/opt/IIS/getScreenShot will run, as the echo statements show in the browser. Just certain commands in the script will not, though. getScreenShot is:
set -e
#set -x
#Check runnable
if [ $# -lt 1 ] ; then #Check there is enough command line parameters.
echo "Usage: $0 [<Serial#> | <Socket#>] "
echo " Example: $0 GESC637005W1 "
exit 1
fi
_socket=$(/opt/Allure/socketdata "$1" |awk -F '[(/: ]'+ '{print $10}')
#echo "socket $_socket"
if [ $_socket = "400" ];
then
echo "400 Serial number not found"
exit 400
fi
echo "Checking for dir /tmp/$1"
if [ ! -d /tmp/$1 ]; then
echo "mkdir -p /tmp/$1"
mkdir -p /tmp/$1;
fi
echo "test"
echo "scp -P$_socket support@localhost:/srv/samba/share/clarity-client/client-apps/digital-poster/screens/screenshot.png /tmp/$1/screenshot.png"
scp -P$_socket support@localhost:/srv/samba/share/clarity-client/client-apps/digital-poster/screens/screenshot.png /tmp/$1/screenshot.png
return=$?
if [ $return -ne 0 ];
then
echo "401 scp1 failed $return"
exit 401
fi
echo "test2"
scp /tmp/$1/screenshot.png support@<IP_ADDRESS>:/opt/digital_media/dm_content/screenshots/$1.png
return=$?
if [ $return -ne 0 ];
then
echo "402 scp2 failed $return"
fi
A quick update to clarify my actual question. The script will actually run and the parameter gets passed to it correctly. The problem is that certain of the 'commands' in the script will not run correctly and it seems mostly tied to the test/creating of the directory. If the directory does not exist, the 'test' for it does not work. Separately, if I remove the test but issue the mkdir command directly (after making sure the directory is not there - or deleting it if it is), mkdir shows the error : mkdir: cannot create directory '/tmp/GESC637005UH': File exists. I have checked the directory directly and it isn't there. I have run the locate command on the system and can't find it so I am not sure why the mkdir command thinks it is there - and might also explain why the 'test' seems to 'fail' (i.e. it thinks it exists).
Which user is Apache running as?
it is running under the user apache.
And presumably the sudoers file does have a permission for apache to run getScreenShot as user support? Please [edit] your question to show that the obvious basic configuration is correct.
question has been updated.
What sort of parameter is it receiving? The quoting errors would break this if you have a parameter which contains more than a single token. Try http://shellcheck.net/ for detailed diagnostics.
The parameter is just a string - it is the hardware serial number or service tag of the device. For testing I have been using one of the devices serial number which is in the format of XXXX111111XX where X is a letter and 1 is a number.
ok, I have been trying to remove some complexity just to try and find out what may be happening. I have created test.sh and put it in /var/www/cg-bin. It is very simple:
if [ ! -d /tmp/"$1" ]; then
echo "Content-type: text/html"
echo ""
echo "Demo"
echo "creating directory /tmp/$1 "
echo ""
#echo "mkdir -p /tmp/$1"
mkdir -p /tmp/"$1";
fi
oops, I left out: OUTPUT="$(date)"
echo "Content-type: text/html"
echo ""
echo "Demo"
echo "Today is $OUTPUT "
everything works except the if test. Do I have to do something different with that because I am calling it through apache?
I would look at your PHP code in more detail. Can you change it to shell_exec("sudo -u support -i sh -x '$command' $serial"); just to see that it is receiving the parameter correctly?
... though if your PHP script isn't able to capture the output, maybe put in sh -x ... 2>&1 | tee /tmp/support.$$ to get the output in a temporary file you can examine. (Don't forget to remove the temporary files when you are done debugging.)
@triplee - This is the output that I get:
+ set -e
+ '[' 1 -lt 1 ']'
++ /opt/Allure/socketdata GESC742006Q4
++ awk -F '[(/: ]+' '{print $10}'
+ _socket=4445
+ '[' 4445 = 400 ']'
+ '[' '!' -d /tmp/GESC742006Q4 ']'
+ scp -P4445 support@localhost:/srv/samba/share/clarity-client/client-apps/digital-poster/screens/screenshot.png /tmp/GESC742006Q4/screenshot.png
+ return=0
+ '[' 0 -ne 0 ']'
+ scp /tmp/GESC742006Q4/screenshot.png support@<IP_ADDRESS>:/opt/digital_media/dm_content/screenshots/GESC742006Q4.png
+ return=0
+ '[' 0 -ne 0 ']'
That looks precisely like it's working with no issues. The argument GESC742006Q4 was successfully processed right through the end of the script. (The script is clumsy but that's outside the scope of this question.) Could you clarify which part of that transcript exactly is problematic, and maybe [edit] your question with these details?
@tripleee - edited original post about the exact problem.
mkdir -p quite specifically does not care whether the directory already exists. You can take out the if ! [ -d "/tmp/$1" ] completely and just create the directory unconditionally and atomically.
Your problem description reads like a race condition - could multiple instances of this script have been running at basically the same time when you tested? For production you definitely need to make this script robust for that scenario - maybe look into using mktemp to create a unique temporary directory for each run, if clients with the same serial number mustn't be able to clobber each other's results.
@tripleee - quick update. It isn't a race condition, as this is not in production yet. I am the only one running it - but point well taken for once I get past this step. As for an update, I have modified my code to try and glean more information. I added 'ls -lt' and 'pwd' to the script to see what it would report. PWD is showing /tmp but ls -lt is showing files that I do not see when running ls -lt by hand. I thought maybe I am not on the server I think I am, but I checked the IP to make sure. Would apache create some type of 'temp' file system in memory?
No, but many files in /tmp are very short-lived. mktemp shields you nicely from this detail so I would simply suggest you switch to that and forget about this problem.
@tripleee - thanks for your help on this. I think I should close this since it does not seem related to my original issue that I stated.
If you like I'll try to find the time to post a refactored and updated version of your script as an answer, with some sort of hand-waving explanation of what I think was wrong.
Here is an attempt to refactor your script. I have added some comments in-line but the main beef here is using a guaranteed-unique file using mktemp and cleaning it up when we are done with trap. Some other minor remarks:
Exit codes are in the range 0-255 so I have subtracted 200 from yours.
Diagnostic messages should be sent to standard error.
The purpose of if and the other flow control statements of the shell is precisely to run a command and examine its result code. You should very rarely need to explicitly examine $? in a conditional (though including it in a diagnostic message, for example, is of course useful).
I have removed your comments and added some of my own which are probably not useful to keep in a production script.
#!/bin/sh
set -e
if [ $# -lt 1 ] ; then
# >&2 sends diagnostics to standard error
echo "Usage: $0 [<Serial#> | <Socket#>] " >&2
echo " Example: $0 GESC637005W1 " >&2
exit 1
fi
# Encapsulate error exit in a convenient function
die () {
rc=$1
shift
# include $0 in all diagnostic messages
echo "$0: $@" >&2
exit "$rc"
}
_socket="$(/opt/Allure/socketdata "$1" |awk -F '[(/: ]'+ '{print $10}')"
[ "$_socket" != "400" ] || die 200 "Serial number $1 not found (socket $_socket)"
# Create a temporary file for this script
t=$(mktemp -t screenshot.XXXXXXXXXX) || exit
# Remove the temporary file on regular or error exit or signal
trap 'rm -f "$t"' EXIT HUP INT TERM
scp -P$_socket support@localhost:/srv/samba/share/clarity-client/client-apps/digital-poster/screens/screenshot.png "$t" ||
die 201 "401 scp1 failed $?"
scp "$t" support@<IP_ADDRESS>:/opt/digital_media/dm_content/screenshots/"$1".png ||
die 202 "402 scp2 failed $?"
The die function is explained in more detail in a separate Stack Overflow question.
foo || bar is just a convenient shorthand for
if foo; then
: nothing here
else
bar
fi
You could almost equivalently say if ! foo; then bar; fi but that loses the failure exit code if foo doesn't run successfully. (Similarly, foo && bar is equivalent to if foo; then bar; fi)
thanks for all of your help on this. I am going to make the recommended changes.
How to change request url in wsgi middleware
I got a request url in WSGI middleware when someone sent the POST request. I need to change the url in middleware before it pass to the application. Then, I did something like that:
class myfunction(wsgi.middleware):
def process_request(self, req):
req.url = 'http://XXXX'
Actually I just want to add some parameters in the request url.
But unfortunately it was not working and I got an error in webob/request.py:
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1387, in __setattr__
object.__setattr__(self, attr, value)
AttributeError: can't set attribute
I just wonder whether the POST request URL simply cannot be changed, because it is a rule. If it is possible, kindly let me know how to change it.
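The error happens because webob's Request.url appears to be a computed, read-only property; what is writable is the WSGI environ dict underneath it (PATH_INFO, QUERY_STRING). A minimal plain-WSGI sketch, with invented class and parameter names, that appends query parameters before the wrapped app sees the request:

```python
from urllib.parse import parse_qsl, urlencode

class AddParamMiddleware:
    """WSGI middleware that appends extra query parameters to every request."""
    def __init__(self, app, extra_params):
        self.app = app
        self.extra_params = extra_params

    def __call__(self, environ, start_response):
        # environ is a plain dict: rewriting QUERY_STRING here changes the
        # request URL as seen by the wrapped application.
        query = dict(parse_qsl(environ.get("QUERY_STRING", "")))
        query.update(self.extra_params)
        environ["QUERY_STRING"] = urlencode(query)
        return self.app(environ, start_response)

# Tiny demo app that just echoes the query string it received.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["QUERY_STRING"].encode()]

wrapped = AddParamMiddleware(app, {"source": "middleware"})
body = wrapped({"QUERY_STRING": "a=1"}, lambda status, headers: None)
print(b"".join(body))  # b'a=1&source=middleware'
```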
Vector Integral over a Hemi-Sphere
I'm trying to calculate the vector integral over a hemisphere, i.e. the integral of all vectors from the origin to a hemisphere. Let's say the $xy$-plane cuts the sphere in two. I thought the following rough idea could work:
I calculate the vector integral over a half circle with an analogy to complex numbers and then rotate the result from $0$ to $\pi$.
For the first part I get:
$$
\int_0^{\pi} \exp(i\phi) d\phi=\left[\frac{\exp(i\phi)}i \right]_0^{\pi}=\frac{-1}i-\frac1i=2i
$$
Reinterpreted in my real-world scenario, this means a vector of length 2 along the $z$-axis.
For the second part I get:
$$
\int_0^{\pi} 2d\theta = 2\pi
$$
So I conclude that the vector integral results in a vector of length $2\pi$ along the $z$-axis, right?
A direct computation of the vector integral shows the correctness of my result:
$$
\int_0^{2\pi} \int_0^{\pi/2} \pmatrix{\cos\alpha&-\sin\alpha& \\ \sin\alpha& \cos\alpha& \\ & & 1} \pmatrix{\cos\beta& &-\sin\beta\\ & 1 & \\ \sin\beta& &\cos\beta} \pmatrix{1\\ 0\\ 0} d\beta d\alpha = \dots = \pmatrix{0 \\ 0 \\ 2\pi}
$$
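A quick midpoint-rule check of this double integral in Python, integrating the unit vector $(\cos\alpha\cos\beta,\ \sin\alpha\cos\beta,\ \sin\beta)$ over $\alpha \in [0, 2\pi)$, $\beta \in [0, \pi/2]$:

```python
import math

# Integrate the unit radial vector over the upper hemisphere's angular
# ranges with a simple midpoint rule.
N = 400
da = 2 * math.pi / N
db = (math.pi / 2) / N
sx = sy = sz = 0.0
for i in range(N):
    a = (i + 0.5) * da
    for j in range(N):
        b = (j + 0.5) * db
        # unit vector: e_x rotated up by beta, then about z by alpha
        sx += math.cos(a) * math.cos(b) * da * db
        sy += math.sin(a) * math.cos(b) * da * db
        sz += math.sin(b) * da * db

print(sx, sy, sz)   # ~ (0, 0, 2*pi)
print(2 * math.pi)
```

The $x$ and $y$ components vanish by symmetry in $\alpha$, and the $z$ component converges to $2\pi$.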
Small solar panel dc to 5vdc for charging
I have a small solar panel which I want to use to charge cellphones, batteries, or any other gadget that requires 5 VDC. I tried using a 7805, but it reduces the voltage to 3.something volts, even though the 7805 works fine on 9-12 V batteries. How can I do that? I need simple circuits, because many of the ICs used in published circuits may not be available here. My solar panel has:
Maximum power (Pmax) = 5w
Voltage at Pmax (Vmp) = 17.2V
Current at Pmax (Imp) = 0.30A
Open circuit Voltage (Voc) = 21.6V
Short circuit Current (Isc) = 0.31A
I think it has enough power to charge a power bank or cellphone. Is there anything useful that I can pull from other circuits? I have plenty of them.
Welcome to EE.SE and congratulations for at least posting the cell data. Your question is too broad at the moment and has already attracted one close vote. (4 to go.) Add a schematic of your circuit using the built-in editor button and use the node element to show your voltage readings at various points in the circuit. List the panel specifications using bullet points. It will make them easier to read. Then ask a specific answerable question.
@user6557161 please use the help center for guidelines on asking questions.
A simple linear regulator, such as the 7805, isn't going to work. The maximum current output of the panel is only 0.3A, while most gadgets will expect at least 0.5A. All the 7805 does is reduce the voltage - it won't boost the current.
To boost the current, you'd need a proper DC-DC converter, perhaps a buck converter.
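Back-of-the-envelope numbers for that suggestion (the 85% converter efficiency is an assumed typical value, not from any datasheet):

```python
# Power budget: a buck converter trades the panel's 17.2 V / 0.30 A
# for more current at 5 V, minus conversion losses.
p_max = 17.2 * 0.30   # W at the maximum power point (~5.16 W)
v_out = 5.0           # V, USB charging voltage
efficiency = 0.85     # assumed typical buck-converter efficiency

i_out_max = p_max * efficiency / v_out
print(round(i_out_max, 2))  # ~0.88 A available at 5 V in full sun
```

So in full sun the panel could in principle support a bit under 1 A of 5 V charging current, versus the 0.3 A a linear regulator would pass through.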
You can try two options:
With a battery
Without a battery
I suggest the second option with a LiPo battery. A solar charge controller like the LT3652 will be very useful in this case. It uses the concept of maximum power point tracking (MPPT). There are breakout boards available for those as well, like the Sunny Buddy from SparkFun.
I tried the same thing with a mobile power bank and it worked perfectly. There is a chip inside the power bank which transfers the 3.6v battery voltage to 5v. You can use DC to DC converters as well.
Can I connect a 3.6v battery directly to the solar cell, or is reducing the voltage necessary here?
No. You should use a solar charge controller as mentioned. It's a small chip. The battery/power bank can be connected to that controller.
Jenkins ModuleNotFoundError: No module named 'jenkinsapi.jenkins'; 'jenkinsapi' is not a package
I have a virtual machine running Centos 6. I have a simple python script I am trying to run in Jenkins. I can run the script successfully on the virtual machine, but I cannot run the script once it exists in Jenkins Workspace.
<EMAIL_ADDRESS>~]# /usr/local/bin/python3.7
Python 3.7.0 (default, Mar 20 2019, 14:31:35)
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from jenkinsapi.jenkins import Jenkins
>>>
As you can see above, I can import jenkinsapi module successfully from the command line, I just cant run it in Jenkins.
I have python3.7 installed with pip3.7. The jenkinsapi package exists but I cannot execute the script from workspace directories.
[EnvInject] - Loading node environment variables.
Building on master in workspace /home/jenkins/workspace/jenkinsapi
[jenkinsapi] $ /bin/sh -xe /tmp/jenkins5545466490229682574.sh
+ cd /home/ccuevas
+ pip3 install jenkinsapi
Requirement already satisfied: jenkinsapi in /usr/local/python37/lib/python3.7/site-packages (0.3.8)
Requirement already satisfied: pytz>=2014.4 in /usr/local/python37/lib/python3.7/site-packages (from jenkinsapi) (2018.9)
Requirement already satisfied: requests>=2.3.0 in /usr/local/python37/lib/python3.7/site-packages (from jenkinsapi) (2.21.0)
Requirement already satisfied: six>=1.10.0 in /usr/local/python37/lib/python3.7/site-packages (from jenkinsapi) (1.12.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/python37/lib/python3.7/site-packages (from requests>=2.3.0->jenkinsapi) (2019.3.9)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/python37/lib/python3.7/site-packages (from requests>=2.3.0->jenkinsapi) (3.0.4)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/python37/lib/python3.7/site-packages (from requests>=2.3.0->jenkinsapi) (1.24.1)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/python37/lib/python3.7/site-packages (from requests>=2.3.0->jenkinsapi) (2.8)
+ python3 jenkins.py
Traceback (most recent call last):
File "jenkins.py", line 1, in <module>
from jenkinsapi.jenkins import Jenkins
ModuleNotFoundError: No module named 'jenkinsapi'
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I have tried uninstalling jenkinsapi package and reinstalling.
After much toil, I resolved the issue. The problem was that the file for my job in Jenkins was named 'jenkinsapi.py'. The job was trying to import the file itself, instead of the installed 'jenkinsapi' package referenced on the first line. I renamed the file in Jenkins to something else and the error is no longer occurring.
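This failure mode is easy to reproduce in isolation: a local file named after a package shadows the installed one, because the script's own directory comes first on sys.path. A self-contained demo (using the stdlib json module as a stand-in victim instead of jenkinsapi):

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # A local file named after a module shadows the real one on sys.path.
    with open(os.path.join(d, "json.py"), "w") as f:
        f.write("shadow = True\n")

    # Run a child interpreter from that directory; its import of `json`
    # finds the local json.py first instead of the stdlib module.
    probe = "import json; print(getattr(json, 'shadow', False), json.__file__)"
    out = subprocess.run([sys.executable, "-c", probe], cwd=d,
                         capture_output=True, text=True).stdout
    print(out)  # starts with "True", and __file__ points at the local json.py
```

Checking `module.__file__` right after the import is a quick way to spot this kind of shadowing.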
Is there a way to remove part of a file name (path) in python?
I have around 50 files whose names contain the date they were created, repeated 3 times. How can I remove that part from the file name in Python? (You can show an example with other data, it doesn't really matter.)
I tried something like that:
file = 'directory/imagehellohellohello.png'
keyword = 'hello'
if (file.count(keyword) >= 3):
    pass  # functionality (here I want to remove the hello's from the file path)
Look into the os module, this is quite a frequent task.
yeah i know how to use it (give or take) but i can't get that to work.
Then please edit your question and show the code you're having problems with.
kindly show your approach to solve it so that we can improve it.
This can be done quite simply using pathlib:
from pathlib import Path
path = Path("directory/imagehellohellohello.png")
target = path.with_name(path.name.replace("hello", ''))
path.rename(target)
And this indeed renames the file to "directory/image.png".
From Python version 3.8 on, the rename method also returns the new file's path as a Path object. (So it is possible to do:
target = path.rename(path.with_name(path.name.replace("hello", '')))
Methods/attributes used: Path.rename, Path.with_name, Path.name, str.replace
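Since the question mentions around 50 files, the same with_name / replace / rename combination extends to a whole directory; a sketch (the directory path and keyword are placeholders):

```python
from pathlib import Path

def strip_keyword(directory, keyword, min_count=3):
    """Remove all occurrences of `keyword` from the names of files that
    contain it at least `min_count` times; return the new paths."""
    renamed = []
    for path in Path(directory).iterdir():
        if path.is_file() and path.name.count(keyword) >= min_count:
            target = path.with_name(path.name.replace(keyword, ""))
            path.rename(target)
            renamed.append(target)
    return renamed

# e.g. strip_keyword("directory", "hello") renames
# imagehellohellohello.png -> image.png and leaves other files alone.
```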
import os

file = 'directory/imagehellohellohello.png'
keyword = 'hello'
if keyword*3 in file:
    newname = file.replace(keyword*3, '')
    os.rename(file, newname)
How to convince python that an int type is actually a hex type?
I have been given a hex value that is being treated as an int by python. I need to get this value treated as hex without changing the value. I tried using data = hex(int(data, 16)) but this makes my data a string. Is there a way to change an int to a hex without actually changing the representation?
Ex: data =<PHONE_NUMBER>, type(data) = int
I want: data = 0x9663676416, type(data) = hex
The representation is the only thing that does change. They're both stored as binary in the backend and converted to decimal or hex when you display them.
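To make that concrete in Python 3 (using the number from the question): the type never changes, only the rendering, and the "reinterpret the digits as hex" step the question wants is a string-level operation:

```python
data = 9663676416
print(type(data).__name__)           # 'int' -- Python has no separate hex type
print(hex(data))                     # '0x240000000': the same value rendered in hex
print(f"{data:#x}")                  # same rendering via a format spec

# What the question actually wants: keep the digits, reinterpret them as hex.
reinterpreted = int(str(data), 16)
print(hex(reinterpreted))            # '0x9663676416'
print(type(reinterpreted).__name__)  # still 'int'
```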
x = int(str(9663676416), 16)
print(x)
print(hex(x))
as mentioned in the comments, what you have is a number ... all you can do is mess with the representation ...
if you want type(x) == hex you will need to do something else (tbh I'm not sure what ... you can certainly do stuff like isinstance(x, hexNumber))
class hexNumber:
    def __init__(self, x):
        if isinstance(x, str):
            self.val = int(x, 16)
        elif isinstance(x, int):
            self.val = int(str(x), 16)

    def __str__(self):
        return hex(self.val)

y = hexNumber(12345)
print("%s" % y)
print(isinstance(y, hexNumber))

y = hexNumber("1234abc")
print("%s" % y)
print(isinstance(y, hexNumber))
Re-read the OP's question. He's trying to get type(data) to return "hex" and not "string". This is a confusion about the type system, not how to get Python to show a hex representation of an int.
Thanks! Haha pretty easy in the end!
How to add Class active on Click in Navbar
I have some tabs on my page; when I click a tab it refreshes the page and removes the active class.
I have tried the following code.
$(".nav-item").on("click", function() {
$(".nav-item").removeClass("active");
$(this).addClass("active");
});
check if you have tag with href in navbar
You should build this in the code that renders the navbar on page load instead.
@sohan bairwa In HTML add .nav-item .active class by default to the first tab.
When you refresh the page, all the classes added by JS are removed, so you have to determine the active tab another way when the page is loaded.
add your html code
<?php
$active_menu = $this->uri->segment(1);
$active_submenu = $this->uri->segment(2);
?>
<li class="nav-item <?php echo ($active_menu == 'quiz'?'active':''); ?>">
<a class="nav-link" href="<?php echo base_url(); ?>quiz/">Quiz</a>
</li>
I'm gonna assume that .nav-item is an anchor link to which you've assigned the click event, hence the page refresh. You need to disable its default operation
$(".nav-item").on("click", function(e) {
$(".nav-item").removeClass("active");
$(this).addClass("active");
e.preventDefault();
});
Alternatively, you can achieve it like so
<a href="javascript:void(0)" class="nav-item"></a>
Also, if you want to have the tab selected on page load too, you need to call its function on, well page load.
Edit: Extending @Bram Verstraten's comment, you can use query strings for the link's href and set the active class on page load extracting it, like so
$(document).ready(function(){
    var $queryString = location.search;
    //the above fetches the query string from the URL. For e.g., ?tab2 from http://localhost?tab2
    $(".nav-item" + $queryString).addClass("active");
});
Your code works, but I want to go to another page on click.
Then why are you concerned what happens on this page after you redirect?
This can only be done on page load. Not when clicking on the link. Put something in the query string like: ?page=2 and on page load you can set the correct link to active.
@Nimsrules Course wise
How to acknowledge a deceased advisor’s contributions to a paper?
One of my advisors suddenly passed away while I was in graduate school. We had some discussions and ideas about future publications, but he passed away before any of the work was completed. When the work was finally completed and published, I and my co-authors were therefore presented with an ethical dilemma about how best to acknowledge his contributions to the ideas behind the paper. Should we list him as a co-author? Put him in the acknowledgements? Listing him as an author would give credit for the original idea, however, we would have no way of knowing if he actually approved of—and would want his name attached to—our methods and writing.
In the end my co-authors and I decided to list him as a co-author with a footnote stating that he passed away before publication.
I’m interested to hear from others who have been in similar situations and/or suggestions on what constitutes “co-authorship” when one of one’s collaborators passes away before the publication or work is complete.
Actually, while ethics are an issue, I imagine that this is something which your university has a policy on.
I can't find any policy about posthumous co-authorship at my university (and we have LOTS of ethics policies).
My master's thesis adviser passed away suddenly after I had obtained my master's degree and after we had written a paper about it, but before the paper had been accepted for publication. I included him as co-author as we had previously planned, but I added the word "(deceased)" after his name.
Two words: Paul Erdos :)
@Suresh: Good point, but I suspect any "rules" for situations like this will vary across disciplines. For example, the thresholds for "co-authorship" are likely very different in theoretical disciplines like mathematics than they are in the experimental sciences.
Caroline Series, a mathematician, published a celebrated paper co-authored with Rufus Bowen, which died before the completion of the article; it is available here, you can have a look at the end of the introduction to see a way to proceed.
I was one of four authors of a paper that was undergoing revisions (suggested by referees) when one of the co-authors died. We kept his name on the list of authors, and we added a brief statement at the start of the paper, saying that he had died, and dedicating the paper to his memory. I don't remember whether this required any consultation with the editors, but my best guess is that it did not.
Did the deceased researcher consent to any copyright transfer associated with publication?
@PatriciaShanahan In my case, the researcher died before contributing any text, so that was not an issue for me because he technically did not have any claim of copyright on the work. The publisher either did not notice or did not care that his name was absent from the copyright transfer. I could foresee that being an issue with some publishers, however.
I had a similar situation. In this case, we did exactly what you did: we indicated that the participant (not a team leader, but a team member in this case) was a co-author, but that he was deceased. I think this is the only fair way to recognize substantial contributions.
Of course, the difficult comes if there is a challenge to the work of the deceased. In our case, however, we had a very substantial paper trail which was audited and reviewed, so the individual work could have been sorted out and dealt with appropriately.
So, I think the best defense is generally to keep good working notes and use version control.
aeismail's answer is definitely good advice, but I'll add two more bits:
Check the journal policy and author guidelines. There may be something in there that can guide your choice, like the Journal of the American Chemical Society has:
Deceased persons who meet the criteria for inclusion as coauthors should be so included, with an Author Information note indicating the date of death.
Check with the editor, if in doubt. They have the final say in the matter, and these things are probably best run by them if no official policy is established.
In terms of papers with deceased authors, I think the record holder is probably this one:
Can you spot it? One author died in 1919, and one had her PhD in 1911: while no date of death is provided for her, I don't think she's still around. (Also, it was probably quite an achievement for a woman to get a PhD at the time.)
As we say: old chemists don't die, they just reach equilibrium!
Any idea what Werner's contribution was? Not knowing whether Scanavy-Grigorieff is alive or dead suggests to me she did not make a contribution.
The compound whose chemical structure is reported in the paper was synthesized by Werner and Scanavy-Grigorieff, but it had not been identified at the time. The MIT team identified and solved the structure of that compound, from Werner's collection (Werner was a famous guy, so his collection was kept as historical artifact)
I have not been able to find much information about Marie Scanavy-Grigorieff online besides the year of birth, which is 1881 according to her thesis bio and university records.
Spyder debugger hangs up on import influxdb_client
When I try to run a script in debug mode, the Spyder debugger hangs up on the first line, even though the first breakpoint isn't until line 162. It doesn't seem to be frozen, it's just not progressing to the next line in the terminal:
debugfile('------------------------------------------------------------------------------------------/processDataV02.py', wdir='C:/Users/----------/OneDrive - -------/Documents/DAC-X (doc)/data')
c:\users-------------\onedrive - ------------\documents\dac-x (doc)\data\processdatav02.py(1)()
----> 1 from influxdb_client import InfluxDBClient
2 import pandas as pd
3 import numpy as np
4 import matplotlib.pyplot as plt
5 from scipy import integrate
!continue
The problem seems to be with the import influxdb_client line - when I comment out this line the debugger works fine. However, without this line I can't pull the data that forms the basis of the analysis script. I need to be able to debug with the import influxdb_client line. Has anyone experienced similar issues?
I tried to debug the file in Spyder. The debugger hangs up on the first line (from influxdb_client import InfluxDBClient) instead of allowing me to debug. I have already tried updating to the latest version of Spyder. Restarting the kernel also does not help.
How to open my app out of sms app via sms link message?
Currently, tapping a URL to open my app works using the following code.
<intent-filter>
<action android:name="android.intent.action.VIEW"></action>
<category android:name="android.intent.category.DEFAULT"></category>
<category android:name="android.intent.category.BROWSABLE"></category>
<data
android:host="www.waystride.com"
android:path="/launch"
android:scheme="https"></data>
</intent-filter>
The trouble is that when I tap the link inside the sms app, my app is opened 'inside' sms app, unlike when I chose google chrome or other browser. I mean, when I tap 'app switcher' icon, my app is shown in 'Messages/SMS' app. (Samsung S9 & Moto G5Plus)
The reason that I need to fix this is because my app uses GPS, within Messages/SMS app, my app couldn't get GPS location info. Thanks in advance!
I couldn't find an answer for this either: 1) it is hard to search for this problem, 2) I am not good at searching Stack Overflow. But anyway, this is important...
Try to use "singleTask" launch mode. You can specify it in the AndroidManifest for your activity which contains the intent filter.
<activity
...
android:launchMode="singleTask"
>
...
</activity>
You can read more here https://developer.android.com/guide/components/activities/tasks-and-back-stack#TaskLaunchModes
Thanks a lot! It worked great! The link was helpful, too. I would upvote your answer, but my reputation is too low...
Also this is useful: https://developer.android.com/reference/android/app/Activity#onNewIntent(android.content.Intent)
ScrollMagic offset on different window height
I am using ScrollMagic to get some animations on scroll. The problem is that I need to use offset so that the animation triggers on some point of scroll but it totally depends on the window height.
So in the example i have provided
https://jsfiddle.net/5tvrnfkx/12/
you can see the box coming out on scroll. https://tppr.me/RpkVa
Notice the window height https://tppr.me/hoyhs. Try resizing the height of the preview panel and run.
So at a 426px window height, it works perfectly: the page starts with no box, and the box animates on scroll.
Try increasing the height and check https://tppr.me/sYJ5a: the box appears at the start. Similarly, if you reduce the height, the box appears only after some scrolling.
So I was wondering if there's any way to make the offset value dynamic, so that at any window height the animation starts at the exact same scroll point of the page.
Yes, instead of using offset, you can use triggerHook and set it to 0 (or very close to it).
Like this:
jQuery(function() {
var controller = new ScrollMagic.Controller();
var tween = TweenMax.to("#boxAnim", 1, {className: "+=animate"});
var scene = new ScrollMagic.Scene({triggerElement: "#trigger", duration: 300, triggerHook: 0})
.setTween(tween)
.addTo(controller);
var height = $(window).outerHeight();
$('.height').append(height);
});
that worked, thank you. Could you briefly explain how this is working? What's the difference between using offset and triggerHook?
Sure. Briefly, offset is based on pixels, and triggerHook is relative - a number from 0 to 1. They use triggerElement as the reference. In general you may prefer triggerHook over offset. I found this if you want to know more: https://ihatetomatoes.net/visual-guide-scrollmagic (Probably explains a lot better than me).
do you know if there's any way to disable it for wider screen? I tried something like this $(window).on('resize', function() { var winWidth = $(window).width(); if (winWidth >= 1680) { scene.destroy(); } else { scene.refresh(); } }).resize(); below the code but it won't work.
Yes, but we need a few more changes. Take a look: https://jsfiddle.net/02ng716z/. Quick explanation: var scene is now declared globally, and a function initScrollMagic() was created. Then on resize, if winWidth >= 1680 and scene is not undefined, it destroys the scene (passing true to reset the tween) and sets scene to undefined. Conversely, if winWidth < 1680 and scene is undefined, it calls initScrollMagic to create it again.
That helped, but now I've figured out exactly what my problem was: it's when there is more than one element to animate. In my real project I have a lot of animations to handle, and this is how I have used it: https://jsfiddle.net/z4srp2bq/3/. It works great on page refresh, but on resize the page jumps. Should I be using an $.each() statement?
With some workaround, I used a unique variable for every scene and tween, like this: https://jsfiddle.net/z4srp2bq/5/, and it worked. Would this be a good call though?
That's cool. I believe it can be simplified because it has details that repeat. At the moment I don't know exactly how.
| common-pile/stackexchange_filtered |
One codebase to support multiple versions of an OData service
Sorry for a second question on the same topic; I'll try to frame the question in a better way.
Our multi-tenant SCP-based solution supports both on-premise (a number of versions) and cloud editions of S/4HANA, as we have a mix of customers from both worlds. We have a scenario where new fields were added to the OData service in the latest S/4HANA version; these fields are needed to develop a new application feature that will be consumed only by customers who are on the latest version of S/4HANA.
But we also need to support customers who are not on the latest (on-premise) S/4HANA version.
My understanding of how we could possibly handle this:
1. Generate the VDM from the latest OData version (the one with the maximum number of fields).
2. Use a value from a VDM field only after checking that it exists. This should help avoid unexpected runtime errors.
Can you please let me know if this understanding is correct, or whether we should follow a different approach?
Regards,
Apoorv
I think the answer to how to approach this problem requires first and foremost a good understanding of all the API versions that you want to support.
Ideally (and this, I think, would be the metaphorical grand prize), the oldest version offers everything you need and every newer version of the same API contains only compatible (i.e. non-breaking) changes compared to that older version. In that case it should suffice to make sure that your client code is based on this oldest API, and you're good to go.
If this is not the case, it comes down to what the differences are. If there are radically different ways to use the (semantically) same API in e.g. SAP S/4HANA OnPremise 1809 vs SAP S/4HANA Cloud 2002, I'd try to reflect this fact in my system. Now there are probably again different ways to approach this. For example, you could think about representing this via different destinations. So you'd need to check which destinations are present in the subscriber's subaccount and then use them accordingly. Alternatively (and I'm not sure whether this is possible), you might be able to somehow determine the target system's version and edition via some API call. This would probably also give an angle with which to approach this problem.
As is probably evident from my answer, I'm not aware of any catch-all elegant solution to this problem. Maybe there are other people who've dealt with this before.
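As an illustration of point 2 of the question (use a VDM field only after checking that it is populated), here is a minimal sketch in plain JavaScript. The entity objects and the `newS4Field` name are made up for illustration; real VDM classes in the SAP Cloud SDK expose typed getters rather than plain objects:

```javascript
// Sketch: guard access to a field that only newer S/4HANA releases return.
// `newS4Field` is a hypothetical field name, not a real API property.
function readOptionalField(entity, fieldName) {
  // Older backend versions simply won't include the field in the payload,
  // so treat "absent" as "feature not available" rather than an error.
  return Object.prototype.hasOwnProperty.call(entity, fieldName)
    ? entity[fieldName]
    : undefined;
}

var fromNewSystem = { id: 1, newS4Field: "X" };
var fromOldSystem = { id: 2 };

readOptionalField(fromNewSystem, "newS4Field"); // "X"  -> enable the feature
readOptionalField(fromOldSystem, "newS4Field"); // undefined -> skip the feature
```

The same guard pattern works regardless of whether the version difference is detected per field, per destination, or via an explicit version check.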
| common-pile/stackexchange_filtered |
Centos yum install Transaction check error
I have downloaded Sphinx and am trying to install it on CentOS 7 using the command
yum localinstall sphinx-2.2.6-1.rhel7.x86_64.rpm
It reports a dependency on mariadb-libs. I already have MariaDB and MySQL installed and working.
While installing the dependency mariadb-libs for Sphinx, these errors are thrown:
Transaction check error:
file /usr/share/mysql/charsets/README from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/Index.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/armscii8.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/ascii.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/cp1250.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/cp1251.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/cp1256.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/cp1257.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/cp850.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/cp852.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/cp866.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/dec8.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/geostd8.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/greek.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/hebrew.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/hp8.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/keybcs2.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/koi8r.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/koi8u.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/latin1.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/latin2.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/latin5.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/latin7.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/macce.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/macroman.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/charsets/swe7.xml from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/czech/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/danish/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/dutch/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/english/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/estonian/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/french/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/german/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/greek/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/hungarian/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/italian/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/japanese/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/korean/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/norwegian-ny/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/norwegian/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/polish/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/portuguese/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/romanian/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/russian/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/serbian/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/slovak/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/spanish/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/swedish/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
file /usr/share/mysql/ukrainian/errmsg.sys from install of mariadb-libs-1:5.5.44-2.el7.centos.x86_64 conflicts with file from package MySQL55-server-5.5.48-1.cp1148.x86_64
Will there be any problem if I remove these files and retry the install? I think these are language and charset files that should not cause issues. Or is it better to install with the replace-files option? Please advise.
Install in a different place?
There is a conflict between the mariadb-libs and MySQL-server RPM packages.
Because some files would be replaced, yum raises this error.
First, you can download the mariadb-libs RPM and force the install, e.g.: rpm -ivh --force mariadb-libs.xxx.rpm
Then execute your own command:
yum localinstall sphinx-2.2.6-1.rhel7.x86_64.rpm
| common-pile/stackexchange_filtered |
Uk Post Code Regex
Possible Duplicate:
Regex to validate UK Post Codes
I need a regex which accepts post codes in the following formats:
CR03JP
cr03jp
CR0 3JP
cr0 3jp
what I have now is
^([A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]? {1,2}[0-9][ABD-HJLN-UW-Z]{2}|GIR 0AA)$
which accepts CR0 3JP but rejects cr03jp and CR03JP.
I have tried the following also
/((GIR 0AA)|((([A-PR-UWYZ][0-9][0-9]?)|(([A-PR-UWYZ][A-HK-Y][0-9][0-9]?)|(([A-PR-UWYZ][0-9][A-HJKSTUW])|([A-PR-UWYZ][A-HK-Y][0-9][ABEHMNPRVWXY])))) [0-9][ABD-HJLNP-UW-Z]{2}))/i
If this has been answered already, please point me to the link.
Use this regex
^ ?(([BEGLMNSWbeglmnsw][0-9][0-9]?)|(([A-PR-UWYZa-pr-uwyz][A-HK-Ya-hk-y][0-9][0-9]?)|(([ENWenw][0-9][A-HJKSTUWa-hjkstuw])|([ENWenw][A-HK-Ya-hk-y][0-9][ABEHMNPRVWXYabehmnprvwxy])))) ?[0-9][ABD-HJLNP-UW-Zabd-hjlnp-uw-z]{2}$
Wouldn't it be simpler to use a 'case-insensitive modifier' for the regex, and then only deal with the normal upper-case letters?
Thanks for your instant replies; Prasanth's answer solved my problem.
The problem is that you require 1 or 2 spaces in the middle:
^([A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]? {1,2}[0-9][ABD-HJLN-UW-Z]{2}|GIR 0AA)$
^^^
Change this to {0,2} and it will also accept postcodes without a space in between. See Limiting Repetition.
Also, the i modifier is needed for case-insensitive matching, as you used it in your second regex.
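Putting the two fixes together ({1,2} changed to {0,2}, plus the i flag), the asker's original pattern accepts all four sample formats:

```javascript
// Original pattern with the middle space made optional ({1,2} -> {0,2})
// and the case-insensitive flag added:
var postcode = /^([A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]? {0,2}[0-9][ABD-HJLN-UW-Z]{2}|GIR 0AA)$/i;

postcode.test("CR03JP");  // true
postcode.test("cr03jp");  // true
postcode.test("CR0 3JP"); // true
postcode.test("cr0 3jp"); // true
postcode.test("CR0");     // false (no inward part)
```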
| common-pile/stackexchange_filtered |
Why is this q/kdb+ join more efficient? replacing a 2-column left-join with each-xgroup-1-column left joins
I was playing around with left-joining two in-memory tables. Both tables include the columns `date`sym with a couple of million rows each, and both are sorted by date/sym with the `s# attribute set on date.
I was doing initially a naive left-join using:
t: t1 lj 2!t2;
and it was taking ~15s. Then I played around and wrote it as:
t: ungroup {flip flip[x] lj 1!flip y}'[`date xgroup t1;`date xgroup t2];
and this dropped the runtime to 0.5s while yielding the same result (at least on my tables).
I don't understand why this would be more efficient. Would this be expected? Or am I doing something wrong that's preventing lj from automatically being more efficient than it currently is?
Are you able to share representative data? Or failing that, an idea of the relative size of t1 vs t2? I have a feeling it's to do with the relative amount of iteration on t2, but that's just a hunch. Also worth noting at this point that flips between tables and dicts amount to a bit of pointer magic, so they are free for our purposes.
@user20349 I think it was roughly 300 distinct dates and 10000 distinct symbols. I also timed t1[`date`sym] inter t2[`date`sym], and that operation took as long as the entire join, so I'd guess it's spending all of its time trying to match the keys; but I don't understand why it doesn't do it one date at a time automatically, as I wrote it out in my fast version.
| common-pile/stackexchange_filtered |