Sequelize: don't return password
I'm using Sequelize to do a DB find for a user record, and I want the default behavior of the model to not return the password field for that record. The password field is a hash but I still don't want to return it.
I have several options that will work, but none seems particularly good:
Create a custom class method findWithoutPassword for the User model and within that method do a User.find with the attributes set as shown in the Sequelize docs
Do a normal User.find and filter the results in the controller (not preferred)
Use some other library to strip off unwanted attributes
Is there a better way? Best of all would be if there is a way to specify in the Sequelize model definition to never return the password field, but I haven't found a way to do that.
I would suggest overriding the toJSON function:
sequelize.define('user', attributes, {
instanceMethods: {
toJSON: function () {
var values = Object.assign({}, this.get());
delete values.password;
return values;
}
}
});
Or in Sequelize v4:
const User = sequelize.define('user', attributes, {});
User.prototype.toJSON = function () {
var values = Object.assign({}, this.get());
delete values.password;
return values;
}
toJSON is called when the data is returned to the user, so end users won't see the password field, but it will still be available in your code.
Object.assign clones the returned object - Otherwise you will completely delete the property from the instance.
This is an interesting approach. Would this preclude ever getting the password (or other blacklisted attributes) out of the model?
EDIT: this is not filtering out the password for me. Looks like the instanceMethod toJSON isn't getting called for the return of either user.get() or user.dataValues. Should I be using another method to return attributes?
You should either call toJSON, or just return the user object. For example res.send(200, user) will internally call JSON.stringify on the user object, which in turn calls toJSON
IMPORTANT! If you follow the above to the letter, delete values.password will actually remove the password attribute from the user instance - not just JSON output. Use var values = Object.assign({}, this.get()) or appropriate polyfill to avoid mutating the actual user's properties.
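The clone-then-delete pattern is easy to sandbox without Sequelize at all; in this sketch a plain object stands in for the model instance, and its get() is only a stand-in for Sequelize's:

```javascript
// Stand-in for a Sequelize instance: get() returns the raw values object.
const user = {
  dataValues: { name: 'alice', password: 'hash' },
  get() { return this.dataValues; },
  toJSON() {
    const values = Object.assign({}, this.get()); // clone first
    delete values.password;                       // only the clone loses the field
    return values;
  }
};

const json = user.toJSON();
// json has no password, but the instance still does.
```

Without the Object.assign clone, the delete would mutate dataValues itself and the password would be gone from the instance too.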
Got an error while using the above code: Unhandled rejection Error: TypeError: Cannot read property 'get' of undefined
@AkshayPratapSingh Are you using an arrow function?
@JanAagaardMeier yeah
You can't use one in this case - we use .bind to set the context to the instance, but that isn't possible with arrow functions
This will work if you're just creating a JSON representation of a user. BUT! If you include a user from another model, the toJSON that gets called is the other model's! This will cause you to leak your hashed password out when users are eagerly loaded. See this: https://github.com/sequelize/sequelize/issues/3891
This does not work anymore with Sequelize 4; instanceMethods are deprecated. Replace with Model.prototype.someMethod = function () {..}, according to http://docs.sequelizejs.com/manual/tutorial/upgrade-to-v4.html#breaking-changes
I want to exclude certain attributes after a create - is that possible? The toJSON approach is doing it all the time, that is not working for me.
Instead of Object.assign({}, this.get()) you can use this.get({ clone: true })
ES syntax makes this nice and tidy: { ...this.get(), password: undefined }
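One caveat with the spread version, shown in a plain-JS sketch: the key stays on the object with the value undefined, and only disappears because JSON.stringify omits undefined-valued properties:

```javascript
const values = { name: 'alice', password: 'hash' };

// Spread copy with the field overwritten to undefined.
const safe = { ...values, password: undefined };

// The key still exists on the object itself...
const stillHasKey = 'password' in safe;

// ...but JSON.stringify drops undefined-valued properties,
// so the serialized output has no password field.
const serialized = JSON.stringify(safe);
```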
This method will only work when directly fetching this model. It will not exclude the attribute through associations. So, if you fetch UserTransactions and include Users, the user password will show up in the response. The defaultScope answer below solves this problem.
Is there any way to override the toJSON function for all models at once, rather than for each model?
Another way is to add a default scope to the User model.
Add this in the model's options object
defaultScope: {
attributes: { exclude: ['password'] },
}
Or you can create a separate scope to use it only in certain queries.
Add this in the model's options object
scopes: {
withoutPassword: {
attributes: { exclude: ['password'] },
}
}
Then you can use it in queries
User.scope('withoutPassword').findAll();
IMPORTANT! This is the only answer that worked for me. It is important to know that the accepted answer will work ONLY if you fetch the user model directly. If you include the user model through another model, the toJSON function in the user model will not get called and you will leak your passwords to the client!!
That's the best answer. I don't know why it is not at the top.
NOTE: This answer works fine but the excluded field will still be exposed for create. Overriding toJSON protects the field from being exposed during create.
@DeanKoštomaj Using toJSON worked fine for me when using include in findAll. The deleted field wasn't included.
@DeanKoštomaj. Noticed the problem when I tried including Parent in the child.
This still works in Sequelize 5. Also, it's a great solution!
@nonybrighto Wouldn't you want to expose the password field for create? It's needed to set the user's password when their account gets created.
@pawan samdani Yes, it is needed, but if you send the created user as JSON after creation, you will be sending the password too.
This comment brought up exactly what I needed but didn't know about. Very good, thank you!!
Below is a link to the scope definitions
https://sequelize.org/master/manual/scopes.html
@nonybrighto Thanks for highlighting that security flaw with this solution! The best fix I found is to add await user.reload(); in an afterCreate hook for the model.
For people who want to use the scope way inside an include for relational tables:
include: {
model: models.users.scope('withoutPassword'),
as: "developer"
},
I like to use a combination of both of Pawan's answers and declare the following:
defaultScope: {
attributes: { exclude: ['password'] },
},
scopes: {
withPassword: {
attributes: { },
}
}
This allows me to exclude the password by default and use the withPassword scope to explicitly return the password when needed, such as when running a login method.
userModel.scope('withPassword').findAll()
This ensures that the password is not returned when including the user via a referenced field, e.g.
accountModel.findAll({
include: [{
model: userModel,
as: 'user'
}]
})
This should be the best practice!
Maybe you can just add exclude to the attributes when you find, like this:
var User = sequelize.define('user', attributes);
User.findAll({
attributes: {
exclude: ['password']
}
});
Read the docs for more details
Adding the attributes block to each query is not so good; a way to define excluded attributes at the model level that applies to all queries is needed!
The code below worked for me. We wanted access to the instance attributes at runtime but remove them before sending the data to the client.
const Sequelize = require('sequelize')
const sequelize = new Sequelize('postgres://user:pass@example.com:5432/dbname')
const PROTECTED_ATTRIBUTES = ['password', 'token']
const Model = Sequelize.Model
class User extends Model {
toJSON () {
// hide protected fields
let attributes = Object.assign({}, this.get())
for (let a of PROTECTED_ATTRIBUTES) {
delete attributes[a]
}
return attributes
}
}
User.init({
email: {
type: Sequelize.STRING,
unique: true,
allowNull: false,
validate: {
isEmail: true
}
},
password: {
type: Sequelize.STRING,
allowNull: false
},
token: {
type: Sequelize.STRING(16),
unique: true,
allowNull: false
},
},
{
sequelize,
modelName: 'user'
})
module.exports = User
Github Gist
I was able to get this working by adding a getter to the field which returns undefined
firstName: {
type: DataTypes.STRING,
get() {
return undefined;
}
}
The accepted answer does not work when the model is included from another model.
I had a virtual field, say fullName that depends on the fields that I wanted to hide, say firstName and lastName. And the solution based on defaultScope does not work in this case.
Thanks for the hint. I struggled for an hour to find the solution where I needed to hide the column but show in some virtual field.
There's a plugin for scoping attributes, as discussed here.
I went with the override as mentioned in the accepted answer, except I called the original toJSON rather than get, with this.constructor.super_.prototype.toJSON.apply(this, arguments) as described in the API docs
You can do this in a simple way by excluding attributes. See the below code (comment line)
Patient.findByPk(id, {
attributes: {
exclude: ['UserId', 'DiseaseId'] // Removing UserId and DiseaseId from Patient response data
},
include: [
{
model: models.Disease
},
{
model: models.User,
attributes: {
exclude: ['password'] // Removing password from User response data
}
}
]
})
.then(data => {
res.status(200).send(data);
})
.catch(err => {
res.status(500).send({
message: `Error retrieving Patient with id ${id} : ${err}`
});
});
How do I connect this hot plate
This is in Germany, brand new flat.
The wall socket is:
The device has the following sticker:
But the blue and grey wires are linked:
Should I:
separate the blue and grey and connect them with their respective colors in the socket
connect them both in the blue
connect them both in the grey
I am thinking they should both go in blue (neutral) as there are only 2 phases. Is that correct?
The panel has a 400V 63A breaker for this:
Your kitchen has a three-phase supply.
Your hob can run on a single-phase to neutral (1N AC 32 A) or on two phases of a three-phase supply (2N AC 16 A).
Note that the current is split into two 16 A circuits for the three-phase supply. This is the one you want.
Figure 1. Probable internal wiring. R1 and R2 each represent one or more hobs.
So if I run on two phases, I can connect the black and brown in their respective phases and then the blue to neutral; but why is the third phase (grey) wire bundled with the neutral (blue)? Can I leave it floating?
Can you see my Figure 1?
That’s for single phase though, how can I connect it with the two phases?
L1, L2 and L3 are the three phases in your wall socket. I've coloured the wires to show you how the hob is wired internally and how to connect it to the wall. I've shown your hob connected to two phases.
So the hob has 2 phases, with brown and black wires and a blue neutral. In the sticker that comes with it, they indicate, for two phases, to connect these two colors (the second picture I sent). For some reason the cable comes with a third phase wire (grey). If we ignore the grey, I assume black->black, brown->brown, blue->blue; but then, since the third phase wire is not used, what I am wondering is if it can be left floating or if it is better to hook it to neutral?
Can you not see that there are two circuits in the hot plate? One is brown and grey, connected from L1 to N. The other is black and blue, connected from L2 to N. Can you see that I have connected both blue and grey to N to match the 2N AC 16 A diagram on the label?
Ahhh yes, ok I get it now! Thanks
NextJS - Server side css render
I'm wondering how to render the CSS style in the server as well?
I'm on production mode!
_app.js
import {useEffect} from "react";
import Head from "next/head";
import Script from "next/script";
import '../style/global.css'
function MyApp({ Component, pageProps }) {
return (<>
<Head>
{/*<link type="text/css" rel="stylesheet preload prefetch" href={'../style/global.css'} />*/} //Not Working
<title>MyTitle</title>
</Head>
<main id={'container'} className={'containerApp'}>
<Component {...pageProps} />
</main>
</>)
}
export default MyApp
When I put an external CSS link, it works and I can see that the server loaded the CSS style
_app.js
<link type="text/css" rel="stylesheet preload prefetch" href={'https://google.com/style/global.css'} />
Firstly, what does "server render css" even mean? Do you want inline CSS in the server response? Maybe like this (https://stackoverflow.com/q/70283331)? Secondly, "I can see the server loaded the CSS style" how (and where)?
@brc-dd Thanks for your reply! I saw your answer its helpful but is there a way to load critical css in the server response without using critters?
You can read your critical CSS in getStaticProps and then pass it as string to a component that will add it to head.
@brc-dd Thanks for the information! I installed critters and the CSS loaded perfectly, but not the images and fonts. Is there a way to fix that?
I appreciate your help!
For images you can use https://github.com/twopluszero/next-images with some large inlineImageLimit. For fonts you need to configure your webpack to use asset/inline. But I won't recommend using these as inlining images and fonts actually results in slower rendering, zero caching and increased bandwidth usage. There won't be multiple requests but your site will still "feel slower" to the users. Inline things only to a certain limit. Use lighthouse to analyse what is optimal. The next/image component should be the best choice in most of the cases, and load fonts using separate requests.
Create FTP user to access /var/www/html/uploads folder to write from PHP programs
I set up FTP and created a user and changed the home directory to /var/www/html, as stated in the link below.
Setting up FTP on Amazon Cloud Server
Now, I can connect over FTP, but I can't see the file list and am unable to upload files to the /var/www/html folder.
By the way, when I try to upload a file I am getting the below error. Response: 200 PORT command successful. Consider using PASV.
Command: STOR thankyou.php
Response: 553 Could not create file.
What is the output of ls -ld /var/www/html, and what user are you uploading as?
A few steps to do here -
1) Check your vsftpd.conf file and look for line chroot_local_user, it should be set to yes.
2) Check for permissions on files and folders under /var/www/html.
Use iptables nat to redirect gateway for LAN PCs
I have a Linux server which functions as the gateway for my home network. It has two ethernet devices:
p3p1: WAN, public IP address a.b.c.d
p2p1: LAN, private IP address <IP_ADDRESS>/24
It also connects via a point-to-point OpenVPN tunnel to a remote Linux server (which I also administrate). This adds the device
tun2: VPN, private IP address <IP_ADDRESS>/32
The question is: how do I make all traffic from clients on the LAN redirect through the OpenVPN tunnel?
I can redirect all traffic (including that originating from the gateway server) using the VPN client configuration option redirect-gateway def1. But that isn't what I want.
Would there be a way to do this using IPTables NAT?
Thanks!
I managed to do this using policy based IP routing, as A. Fendt mentioned in a comment:
Insert a new IP routing table:
$ echo "200 vpndef1" | sudo tee -a /etc/iproute2/rt_tables
Add routes for the VPN redirect:
$ sudo ip route add <IP_ADDRESS>/24 via <IP_ADDRESS> dev p2p1 table vpndef1
$ sudo ip route add default via <IP_ADDRESS> dev tun2 table vpndef1
Insert a new rule to direct LAN traffic to the new routing table:
$ sudo ip rule add from <IP_ADDRESS>/24 lookup vpndef1
Here are the Steps you should do:
In the first step your local DHCP Server has to configure the Client default gateway to your Server Address <IP_ADDRESS>
Then use the routing policy database to route your local network traffic to the VPN Default Gateway behind p2p1 and route your servers traffic to the default gateway behind p3p1
After that you have to MASQUERADE your traffic which comes from your local network and goes into the VPN:
# enable ip forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# configure iptables
iptables -t nat -A POSTROUTING -s <IP_ADDRESS>/24 -d <IP_ADDRESS>/32 -j MASQUERADE
iptables -P FORWARD DROP
iptables -A FORWARD -i p2p1 -o tun2 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED -j ACCEPT
The default gateway is already set to the server. I don't want to redirect all traffic, only traffic originating from the LAN (i.e. I don't want to redirect traffic from the server itself over the VPN).
In this case remove the redirect-gateway def1 directive and create some policy based routing entries: http://www.tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.rpdb.simple.html As you said: every traffic from the local network goes to the VPN Default Gateway and the traffic of the server goes to the Default Gateway behind p3p1.
Pass opencv inputarray and use it as std::vector
I want to write a customized function which takes cv::InputArray as a parameter.
Within the function, I understand I can use cv::InputArray::getMat to obtain a header of the input cv::Mat.
I have some confusion about passing std::vector to the cv::InputArray.
1. If I pass a std::vector into a function, can I still get the std::vector in the function? For example:
void foo(cv::InputArray _input)
{
std::vector<cv::Point2f> input = _input.getVector() // getVector function doesn't exist
}
std::vector<cv::Point2f> a;
foo(a);
2. If I pass a std::vector to the function and use getMat to get a cv::Mat within the function, what will the Mat look like?
Poly has made a clear explanation in case of std::vector<char>. What if I want to get std::vector<cv::Point2f> in the function, any suggestions?
Thanks very much.
When you pass vector to the function which takes InputArray, you implicitly call converting constructor InputArray::InputArray(vector). (Converting constructor is explained here: https://stackoverflow.com/a/15077788/928387)
In this constructor, the vector's pointer is simply assigned to obj member variable in InputArray. If you use OpenCV 3.0, InputArray has getObj() method, so you can get vector by the following way:
// Only works on OpenCV 3.0 or above
const std::vector<Point2f>& input = *(const std::vector<Point2f>*)_input.getObj();
If you use OpenCV 2.X, you can use InputArray::getMat(). It returns Mat object that has a pointer to the data. So you can do the following way as well.
// Should Work on any OpenCV version
cv::Mat mat = _input.getMat();
Point2f *data = (Point2f *)mat.data;
int length = mat.total();
std::vector<Point2f> input;
input.assign(data, data + length);
Regarding your second question, if you call InputArray::getMat() on InputArray object with N element, it returns (N*1) matrix.
Hi, thanks for your answer. I tried both ways but neither of them worked. For the first way, the InputArray doesn't have a getObj() method, so I assume you mean getMat(). This gives me a compilation error like: cannot convert from 'cv::Mat' to 'const std::vector<_Ty> *'. For the second way, I want to recover a vector of cv::Point2f, but the results were not correct.
Seems like only OpenCV 3.0 has the getObj() method. I updated my original answer to include a version description and to use Point2f. I successfully recovered the vector on my machine using OpenCV 2.4.8. Please check it.
Note that InputArray::getObj() returns the object by whom it was created. So casting only works, if _input was created using a std::vector! This can be checked via InputArray::isVector().
Otherwise, a new std::vector object must be created. Unfortunately, there is no way to tell std::vector to use existing data. I think it is not even possible when using your own allocator. If you still want a std::vector, use pointers/iterators (either in constructor or in std::vector::assign()) to create a new object with a copy of the data. You can obtain the size directly from _input via InputArray::total().
Vector
Based on the previous observations, I combined the attempts proposed by Poly.
std::vector<Point2f> *input;
if (_input.isVector()) {
input = static_cast<std::vector<Point2f>*>(_input.getObj());
} else {
size_t length = _input.total();
Point2f* data = reinterpret_cast<Point2f*>(_input.getMat().data);
input = new std::vector<Point2f>(data, data + length);
}
Template
To reuse code for other types, I recommend using templates.
template<class T>
std::vector<T>& getVec(InputArray _input) {
std::vector<T> *input;
if (_input.isVector()) {
input = static_cast<std::vector<T>*>(_input.getObj());
} else {
size_t length = _input.total();
T* data = reinterpret_cast<T*>(_input.getMat().data);
input = new std::vector<T>(data, data + length);
}
return *input;
}
Further, you should check if types are compatible via InputArray::type().
Arrays
If you just want easy indexing, you could of course use standard C-style arrays (Note that C++-style std::array also needs to copy data).
Point2f* data = reinterpret_cast<Point2f*>(_input.getMat().data);
Then you can access data via
Point2f p = data[5];
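The pointer-range copy used in the non-vector fallback above is plain C++ and can be sketched without OpenCV at all (the Point2f struct here is a stand-in for illustration, not OpenCV's type):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for cv::Point2f, just for illustration.
struct Point2f {
    float x;
    float y;
};

// Build a std::vector copy from a raw pointer and element count,
// the same way the getMat()-based fallback copies the data.
std::vector<Point2f> vecFromRaw(const Point2f* data, std::size_t length) {
    return std::vector<Point2f>(data, data + length);
}
```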
Making a Form appear above everything
I'm working on a stand-in Start Menu for Windows 8 and I've tried:
this.TopMost = true;
but it seems to only work until the form loses focus. Is there was an easy way to make the "start button" appear above the Task Bar permanently?
Raymond Chen talks about the problems with topmost windows on his blog. I recommend checking that out.
This should be a comment, not an answer.
It's the right answer. There's no such thing as making a program appear above all other programs "permanently", just a lot of hacks and counter-hacks. Raymond Chen's post (or series of posts) is the best answer to why you shouldn't try to play that game.
I'm having a lot of trouble with setInterval and class methods
I keep running into bizarre problems. I've been unable to find anything on them after doing some research, so I thought I'd come here to present them. I have a class which is rather long, but I'll include the relevant bits:
class AnimatedSnake {
constructor(canvasId, coordinates) {
this.coordinates = coordinates;
this.direction = 2;
this.ctx = document.getElementById(canvasId).getContext("2d");
// 0 - .99, describes how far along snake is between coordinates
this.progress = 0;
}
erase() {
for (let i = 0; i < this.coordinates.length; i++) {
let c1 = this.coordinates[i][0],
c2 = this.coordinates[i][1];
this.ctx.clearRect(c1 * 31, c2 * 31, 31, 31);
}
}
next() {
this.progress += 0.01;
if (this.progress >= 1) {
this.progress %= 1;
let nextCoord = this.coordinates[4].slice();
nextCoord[0] += ((this.direction % 2) * this.direction);
nextCoord[1] += ((!(this.direction % 2) * (this.direction / 2)));
this.coordinates.push(nextCoord);
this.coordinates.shift();
}
console.log(this.erase);
this.erase();
this.draw();
}
}
So far, I can call AnimatedSnake.next() indefinitely if I'm doing it manually (i.e. from the console). However, when I put the function in an interval or timeout - setInterval(AnimatedSnake.next, 100) - it all of a sudden, on the first run, claims that AnimatedSnake.erase is not a function. I tried putting AnimatedSnake.erase() directly in the interval, and when I do THAT, for some absurd reason it goes and tells me that it cannot take the length property of AnimatedSnake.coordinates, which it claims is undefined. Nowhere in my code do I redefine any of these things. coordinates is altered, but it should not be undefined at any point. And erase is of course a method that I never change. Does anyone have any insight into why, when these are called with setInterval or setTimeout, weird things happen, but if I call the functions repeatedly (even in a for loop) without the JavaScript timing functions everything works out fine? I'm genuinely stumped.
When you pass a function to setInterval you lose the context that binds this, because it just passes a function reference. You can try setInterval(() => AnimatedSnake.next(), 100) or setInterval(AnimatedSnake.next.bind(AnimatedSnake), 100) to keep the correct context for the function call.
Where's the instance of AnimatedSnake?
They aren't static methods to be used as AnimatedSnake.next
@SajalPreetSingh I have an instance, I just felt I didn't need to explicitly show one to describe my problem. Don't worry :)
Consider these two snippets:
animatedSnake.next()
And:
let method = animatedSnake.next;
method();
In the first snippet next is called as a member of animatedSnake object, so this within the context of next method refers to the animatedSnake object.
In the second snippet the next method is detached from the object, so this no longer refers to the animatedSnake instance when the method function is invoked. This is how passing a method to another function, like setInterval works. You can either use Function.prototype.bind method for setting the context manually:
setInterval(animatedSnake.next.bind(animatedSnake), 100)
or wrap the statement with another function:
setInterval(() => animatedSnake.next(), 100)
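The context loss is reproducible without a canvas; here is a minimal stand-alone sketch (the Counter class is illustrative) of the failure and of both fixes:

```javascript
class Counter {
  constructor() { this.n = 0; }
  next() { return ++this.n; } // relies on `this`
}

const c = new Counter();

// Detached reference: class bodies are strict mode, so `this` is
// undefined inside the call and ++this.n throws a TypeError -
// the same kind of failure seen inside setInterval/setTimeout.
const detached = c.next;
let failed = false;
try { detached(); } catch (e) { failed = e instanceof TypeError; }

// Fix 1: bind the instance.
const bound = c.next.bind(c);
bound(); // c.n is now 1

// Fix 2: wrap in an arrow function that closes over the instance.
const wrapped = () => c.next();
wrapped(); // c.n is now 2
```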
I'm gonna wrap it in another function - I probably should've thought of that when I was debugging, but I'm still glad I asked the question because I didn't realize that callbacks detached methods from objects/classes. Thanks for the explanation!
@ElleNolan You are welcome! Since the main topic is the this keyword, I think you may find answers of this question useful: How does the “this” keyword work?
Compiling tinygroupdtls
I am trying to compile the tinygroupdtls (tindygroupdtls) on Contiki 2.7. Currently, I am getting a linker error.
LD multicast-client-example.sky
multicast-client-example.co: In function `secure_group_creation':
multicast-client-example.c:(.text.secure_group_creation+0x6): undefined reference to `dtls_new_group'
multicast-client-example.c:(.text.secure_group_creation+0x5e): undefined reference to `dtls_cipher_new'
multicast-client-example.c:(.text.secure_group_creation+0x6c): undefined reference to `dtls_add_group'
contiki-sky.a(er-coap-13.o): In function `dtls_config_context':
er-coap-13.c:(.text.dtls_config_context+0xa): undefined reference to `dtls_new_context'
contiki-sky.a(er-coap-13.o): In function `coap_send_message':
er-coap-13.c:(.text.coap_send_message+0x48): undefined reference to `dtls_write'
contiki-sky.a(er-coap-13-engi): In function `process_thread_coap_receiver':
er-coap-13-engine.c:(.text.process_thread_coap_receiver+0x1c): undefined reference to `dtls_init'
er-coap-13-engine.c:(.text.process_thread_coap_receiver+0xaa): undefined reference to `dtls_handle_message'
collect2: error: ld returned 1 exit status
make: *** [multicast-client-example.sky] Error 1
Process returned error code 2
rm multicast-client-example.co obj_sky/contiki-sky-main.o
I am not sure if the source files in the tinygroupdtls folder are getting compiled, and causing this problem...
Following are my makefiles:
The project makefile
all: multicast-client-example server-example
# variable for this Makefile
# configure CoAP implementation (3|7|12|13) (er-coap-07 also supports CoAP draft 08)
WITH_COAP=13
WITH_CONTIKI=1
# variable for Makefile.include
WITH_UIP6=1
# for some platforms
UIP_CONF_IPV6=1
CFLAGS += -DUIP_CONF_IPV6=1
CONTIKI=../..
CFLAGS += -DPROJECT_CONF_H=\"project-conf.h\"
# linker optimizations
SMALL=1
# REST framework, requires WITH_COAP
ifeq ($(WITH_COAP), 13)
${info INFO: compiling with CoAP-13}
CFLAGS += -DWITH_COAP=13
CFLAGS += -DREST=coap_rest_implementation
CFLAGS += -DUIP_CONF_TCP=0
APPS += er-coap-13-mcast-dtls
endif
CFLAGS += -DWITH_COAP=13 -DWITH_CONTIKI=1 -DUIP_CONF_IPV6=1
CFLAGS += -DUIP_CONF_IPV6_RPL=0
CFLAGS += -DNDEBUG=0
#CFLAGS += -DDEBUG=1
CFLAGS += -DWITH_MULTICAST=1
CFLAGS += -DWITH_GROUP_RESPONSE=1
CFLAGS += -DWITH_DTLS=1
APPS += tinygroupdtls/aes tinygroupdtls/sha2 tinygroupdtls
APPS += erbium
include $(CONTIKI)/Makefile.include
The application makefile (Makefile.tinygroupdtls)
ifeq ($(WITH_GROUP_RESPONSE), 1)
tinygroupdtls_src = dtls.c crypto.c hmac.c rijndael.c sha2.c ccm.c netq.c dtls_time.c peer.c group.c key_derivation.c
else
tinygroupdtls_src = dtls.c crypto.c hmac.c rijndael.c sha2.c ccm.c netq.c dtls_time.c peer.c group.c
endif
Can anyone please tell me what I could be doing wrong here?
Thanks
Using MySQL functions in PHP PDO prepared statements
What's the right way of using a MySQL function while using PHP PDO? The function NOW() gets saved as a string instead of showing the time.
$sth = $dbh->prepare("INSERT INTO pdo (namespace, count, teststring) VALUES (?, ?, ?)");
// these protect you from injection
$sth->bindParam(1, $_a);
$sth->bindParam(2, $_b);
$sth->bindParam(3, $_c);
$_a = 'Wishy-washy';
$_b = 123;
$_c = 'NOW()'; // Doesn't work. Comes out as the string 'NOW()' (w/o the quotes) and not as a date
Will teststring always be NOW()? If so, just put that in the query directly.
Yes. But for the sake of argument, is it possible to use a function in bindParam()?
I would not pass functions as the bound params:
$sth = $dbh->prepare("INSERT INTO pdo (namespace, count, teststring) VALUES (?, ?, NOW())");
$_a = 'Wishy-washy';
$_b = 123;
$sth->execute(array($_a, $_b));
So it's not possible at all to pass a function using bindParam?
@enchance, I don't think so. The functions are parsed when the statement is prepared - not when the data is given. The data isn't interpreted when it's bound - it is just inserted/sent.
While compiling my PDO statement I was able to just detect the presence of ( (which could mean a MySQL function) and didn't :bind it normally, just used the value as-is:
foreach ($where as $key => $value) {
if (strpos($key,"(")>0) {// you're sending a non-parameter-able key
$w .= " and " . $key . " = " . $value;
} else {
$w .= " and " .$key. " like :".$key;
$a[":".$key] = $value;
}
}
Why not replace it with something like...
$_c = date("H:i:s");
Using the power of the PHP date function?
Operating System Concepts by Silberschatz and Galvin, how much down the edition timeline can I go to safely understand the core-concept
I am a student from a CS background and I have Operating Systems in my upcoming semester. A simple search around the internet revealed that Operating System Concepts by Silberschatz and Galvin is one of the best ones to follow.
Now, the above text is currently in its 10th edition. I won't be able to afford a physical copy of the latest edition, so I was looking around for a few cheap used copies and found an abundance of 5th edition and a few 6th edition texts.
Are these two older editions still recommended if I only want to build my basics and have no prior knowledge of the subject? Any help shall be greatly appreciated...
(cover image: 6th edition)
(cover image: 5th edition)
Get the one with a dinosaur on it, just for the laughs!
I don't know this book in particular, but I do know Computer Science text books in general, and offer this advice:
Consider the title: Operating System Concepts. It is about basic and fundamental concepts that underlie operating systems. The basic concepts, the core material of the text, are not going to change much from first edition to the tenth.
You are not being asked to study the text in such detail that you would be able to discover the differences between versions.
Many computer science books do get out of date. If the book was a user manual for Windows or an iMac then having an up-to-date one might be important, but I suspect not in this case.
java unsigned byte to stream
I am making an application that works with a serial port. The problem is that the device I am controlling receives bytes in the unsigned range, and as far as I can see Java only accepts the signed byte range.
I have googled how to send them, but I only found how to receive unsigned bytes.
Thanks
EDIT 2: Fix proposed by @durandal to my code to receive:
public void serialEvent(SerialPortEvent event) {
switch (event.getEventType()) {
case SerialPortEvent.DATA_AVAILABLE: {
System.out.println("Datos disponibles");
try {
int b;
int disponibles = input.available();
byte[] rawData = new byte[disponibles];
int count = 0;
while ((b = input.read()) != -1) {
if (count == disponibles - 1) {
break;
}
rawData[count] = (byte) b;
count++;
}
serial.serialDataReceived(bytesToHex(rawData), rawData);
} catch (IOException ex) {
Logger.getLogger(PuertoSerie.class.getName()).log(Level.SEVERE, null, ex);
}
}
break;
    }
}
Try converting as per https://stackoverflow.com/questions/7401550/how-to-convert-int-to-unsigned-byte-and-back, and then send the value you get.
You can treat signed bytes as unsigned and vice versa. It's just a matter of being careful.
@soong I have tried what you say; I put in the first post the code that I use to test the receive, and I receive 00. Thanks
You're making things overly complicated over nothing. A byte is a byte; there are no signed/unsigned bytes, only bytes. There is a signed/unsigned interpretation of a byte, but that's an entirely different concept.
Your receiving code is broken; it will stop reading when it receives the byte value 0xFF, treating it as end-of-stream:
byte b;
int disponibles = input.available();
byte[] rawData = new byte[disponibles];
int count = 0;
while ((b = (byte) input.read()) != -1) {
if (count == disponibles - 1) {
break;
}
rawData[count] = b;
count++;
}
The problem is the declaration of "b" as byte (it should be int; you absolutely need the return value of read() as an int!) and the cast of input.read() to byte before checking for the -1 value. You should instead cast the int when you put it into the array, not in the loop condition.
I have modified the part you said, but I continue receiving 00.
@Cako From your comment I gather you still have a problem getting it to work as intended, but from the scope of the question I cannot guess what the problem could be. I think you should see the correct hex values provided bytesToHex() works ... maybe extend a little on what problem is left?
No problem. The part I want is working; the other doesn't matter. Anyway, thanks for your help.
A byte is just 8 bits. Java assumes it is signed by default, but you can treat it as unsigned if you wish. A common way to handle this is to use an int value, which can store 0 to 255.
// from unsigned byte
byte[] bytes = ...
int value = 255;
bytes[0] = (byte) value;
// to unsigned byte
int value2 = bytes[0] & 0xFF;
// value2 == 255
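The round trip above can be checked outside Java as well. Here is a small Python sketch, used only for illustration; the masking arithmetic is identical to Java's `(byte)` cast and `b & 0xFF`:

```python
# Signed/unsigned byte interpretation: the same bit pattern 0xFF is
# -1 when read as a signed byte and 255 when read as unsigned.

def to_signed_byte(value):
    """Mimic Java's (byte) cast: keep the low 8 bits, interpret as signed."""
    value &= 0xFF
    return value - 256 if value > 127 else value

def to_unsigned(b):
    """Mimic Java's `b & 0xFF`: reinterpret a signed byte as 0..255."""
    return b & 0xFF

# Round trip: unsigned 255 -> signed byte -1 -> unsigned 255 again.
assert to_signed_byte(255) == -1
assert to_unsigned(-1) == 255

# Every value 0..255 survives the round trip unchanged, which is why
# "just send the byte" works: the wire only ever sees the 8 bits.
for v in range(256):
    assert to_unsigned(to_signed_byte(v)) == v
```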
Test promise inside dynamic import in Angular
I have this kind of code:
component.ts
async ngOnInit() {
import('dom-to-image').then(module => {
const domToImage = module.default;
const node = document.getElementById('some-id');
domToImage.toPng(node).then(dataUrl => {
// The test is not getting over here
}).catch(() => {});
});
}
component.spec.ts
describe('SomeComponent', () => {
beforeEach(
waitForAsync(() => {
TestBed.configureTestingModule({
....
}).compileComponents();
fixture = TestBed.createComponent(SomeComponent);
component = fixture.componentInstance;
fixture.detectChanges();
})
)
it('should create', async () => {
expect(component).toBeTruthy();
});
});
So the question is: how do I mock this promise, domToImage.toPng? Is there a solution so the test can continue its execution and resolve the promise?
Thanks in advance
Isma
You have to mock module.default:
module.default = {
toPng: () => new Promise((resolve, reject) => {resolve('myExpectedResponseData')})
};
And a mock calling reject in error tests.
Note: If you can't mock module.default directly, try spyOnProperty.
I remember I had a similar problem and I couldn't spy on the import to mock it.
What I did to make the test happy was I moved it to its own method and I spied on that method.
async ngOnInit() {
importDomToImage().then(module => {
const domToImage = module.default;
const node = document.getElementById('some-id');
domToImage.toPng(node).then(dataUrl => {
// The test is not getting over here
}).catch(() => {});
});
}
importDomToImage(): Promise<any> { // can make any more specific
return import('dom-to-image');
}
The first fixture.detectChanges() is when ngOnInit() is called, so we have to mock before that.
describe('SomeComponent', () => {
beforeEach(
waitForAsync(() => {
TestBed.configureTestingModule({
....
}).compileComponents();
fixture = TestBed.createComponent(SomeComponent);
component = fixture.componentInstance;
// mock here
spyOn(component, 'importDomToImage').and.returnValue(Promise.resolve({
default: {
toPng: (arg) => Promise.resolve('abc'), // dataUrl will be abc
}
}));
fixture.detectChanges();
})
)
it('should create', async () => {
// await fixture.whenStable() to resolve all promises
await fixture.whenStable();
expect(component).toBeTruthy();
});
});
The above should hopefully do the trick and get you started.
Simplify Query with Rails
I have this code:
@messages = Message.where(["user_id = ? AND receiver_uuid = ? OR user_id = ? AND receiver_uuid = ?", current_user.id, @friend_user[0].id, @friend_user[0].id, current_user.id])
I need to look up the relationship between the two ids, across the two columns.
My create method:
def send_message
@message = Message.new(user: current_user, receiver_uuid: message_params[:receiver_uuid], body: message_params[:body])
respond_to do |format|
if @message.save
flash[:notice] = "Mensagem enviada com sucesso!"
format.html { redirect_to messenger_path(message_params[:receiver_uuid]) }
else
flash[:alert] = "Erro ao enviar a mensagem!"
format.html { redirect_to messenger_path(message_params[:receiver_uuid]) }
end
end
end
If you're looking for simplification, start by using models instead of ids in your query. You can do something like this:
@messages = Message.where(user: current_user, receiver: @friend_user).or(Message.where(user: @friend_user, receiver: current_user))
Or better yet:
@messages = current_user.messages.where(receiver: @friend_user).or(@friend_user.messages.where(receiver: current_user))
Of course, this assumes that your Message model has a :receiver relation, but there's no reason why it shouldn't have it.
One way to do it...
In your model:
# message.rb
class Message
scope :by_user_id, -> (user_id) { where(user_id: user_id) }
scope :by_receiver_uuid, -> (receiver_uuid) { where(receiver_uuid: receiver_uuid) }
end
And then you can do:
# messages_controller.rb (or wherever)
ids = [current_user.id, @friend_user[0].id]
messages = Message.by_user_id( ids ) | Message.by_receiver_uuid( ids )
How to publish multilevel shoot using Street view publish API
I'm trying to publish a multilevel shoot using the Street View Publish API, but levels are not showing on Google Maps.
I have sent the Python requests below to upload the metadata of the photos:
Request for level 1:
metadata_upload_url = "https://streetviewpublish.googleapis.com/v1/photo?key={}".format(API_KEY)
headers = {"Authorization": "Bearer {}".format(ACCESS_KEY), "Content-Length": "0",
"Content-Type": "application/json"}
data = {
"uploadReference": {
"uploadUrl": "https://streetviewpublish.googleapis.com/media/user/100547264652003378315/photo/5844140439745949662"
},
"pose": {
"latLngPair": {
"latitude": 18.51314,
"longitude": 73.85670
},
"heading": 0.0,
"pitch": 0.0,
"level": {
"number": 1,
"name": "arr"
}
},
"places": [{
"placeId": "ChIJb3sWh27AwjsRkiAc5rqoVvs",
}],
}
meta_photo_request = requests.post(metadata_upload_url, json=data, headers=headers)
photoid = meta_photo_request.json()['photoId']['id']
Request for level 2:
metadata_upload_url = "https://streetviewpublish.googleapis.com/v1/photo?key={}".format(API_KEY)
headers = {"Authorization": "Bearer {}".format(ACCESS_KEY), "Content-Length": "0",
"Content-Type": "application/json"}
data = {
"uploadReference": {
"uploadUrl": "https://streetviewpublish.googleapis.com/media/user/100547264652003378315/photo/5844140439745949662"
},
"pose": {
"latLngPair": {
"latitude": 18.51315,
"longitude": 73.85671
},
# "altitude": 500,
"heading": 0.0,
"pitch": 0.0,
"level": {
"number": 2,
"name": "brr"
}
},
"places": [{
"placeId": "ChIJb3sWh27AwjsRkiAc5rqoVvs",
}],
}
meta_photo_request = requests.post(metadata_upload_url, json=data, headers=headers)
photoid = meta_photo_request.json()['photoId']['id']
Result with status 200
{
"results": [
{
"status": {
"code": 200
},
"photo": {
"photoId": {
"id": "CAoSLEFGMVFpcE5UOXQzcDBwa0kwTGVROG81Nm1Qc05HdFo4djROUjB4YXM0UGNf"
},
"pose": {
"latLngPair": {
"latitude": 18.51315,
"longitude": 73.856709999999993
},
"altitude": "NaN",
"pitch": "NaN",
"roll": "NaN",
"level": {}
},
"connections": [
{
"target": {
"id": "CAoSLEFGMVFpcE9VaEpXRU03SWZod0dkdFVJUDgwNHhsY0p2YWktcTVldHVmZ0ZV"
}
}
],
"captureTime": "2017-07-27T00:00:00Z",
"places": [
{
"placeId": "ChIJb3sWh27AwjsRkiAc5rqoVvs"
}
],
"thumbnailUrl": "https://lh3.googleusercontent.com/p/AF1QipNT9t3p0pkI0LeQ8o56mPsNGtZ8v4NR0xas4Pc_=-no",
"viewCount": "7",
"shareLink": "https://www.google.com/maps/@18.51315,73.85671,0a,75y/data=!3m6!1e1!3m4!1s-W7huarDveuA%2FWXnJ6zKkzAI%2FAAAAAAAAia8%2FhTVrH8aZO54yds7DERdBRcwHUvgzg_6BACLIBGAYYCw!2e4!3e11!6s%2F%2Flh3.googleusercontent.com%2F-W7huarDveuA%2FWXnJ6zKkzAI%2FAAAAAAAAia8%2FhTVrH8aZO54yds7DERdBRcwHUvgzg_6BACLIBGAYYCw%2Fno%2Fphoto.jpg"
}
},
{
"status": {
"code": 200
},
"photo": {
"photoId": {
"id": "CAoSLEFGMVFpcE9VaEpXRU03SWZod0dkdFVJUDgwNHhsY0p2YWktcTVldHVmZ0ZV"
},
"pose": {
"latLngPair": {
"latitude": 18.51314,
"longitude": 73.8567
},
"altitude": "NaN",
"pitch": "NaN",
"roll": "NaN",
"level": {}
},
"connections": [
{
"target": {
"id": "CAoSLEFGMVFpcE5UOXQzcDBwa0kwTGVROG81Nm1Qc05HdFo4djROUjB4YXM0UGNf"
}
}
],
"captureTime": "2017-07-27T00:00:00Z",
"places": [
{
"placeId": "ChIJb3sWh27AwjsRkiAc5rqoVvs"
}
],
"thumbnailUrl": "https://lh3.googleusercontent.com/p/AF1QipOUhJWEM7IfhwGdtUIP804xlcJvai-q5etufgFU=-no",
"viewCount": "8",
"shareLink": "https://www.google.com/maps/@18.51314,73.8567,0a,75y/data=!3m6!1e1!3m4!1s-huvo4fBlnjw%2FWXnJARb4q7I%2FAAAAAAAAia0%2FJDjPyYRA2L8S4n48xtakPUSglymSICRIACLIBGAYYCw!2e4!3e11!6s%2F%2Flh3.googleusercontent.com%2F-huvo4fBlnjw%2FWXnJARb4q7I%2FAAAAAAAAia0%2FJDjPyYRA2L8S4n48xtakPUSglymSICRIACLIBGAYYCw%2Fno%2Fphoto.jpg"
}
}
]
}
In the result, the level object is empty even though I set the level name and number. I don't understand why it is showing up empty.
Can anyone tell me what steps to follow to publish a multilevel shoot on Google Maps?
You need to make sure that all photos are very close to each other (~5m) to make the levels control show up.
You could try to send the levels data with a separate photo.update call. Don't forget to use the correct updateMask.
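A sketch of what that separate photo.update call could look like follows. The URL shape and the updateMask query parameter are based on my reading of the Street View Publish API reference, and the photo id and token are placeholders; no request is actually sent here, the pieces are only assembled so they can be inspected:

```python
# Hypothetical helper: build (but do not send) a photo.update request that
# changes only the pose.level field of an already-published photo.

def build_level_update(photo_id, level_number, level_name, access_token):
    url = "https://streetviewpublish.googleapis.com/v1/photo/{}".format(photo_id)
    params = {"updateMask": "pose.level"}  # update only the level field
    headers = {"Authorization": "Bearer {}".format(access_token),
               "Content-Type": "application/json"}
    body = {"pose": {"level": {"number": level_number, "name": level_name}}}
    return url, params, headers, body

url, params, headers, body = build_level_update("PHOTO_ID", 2, "brr", "ACCESS_KEY")
# With the requests library this would then be sent as:
#   requests.put(url, params=params, headers=headers, json=body)
```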
Yeah, I have already published all the photos, which are very close to each other, but levels are not showing on Google Maps. For testing, I have published 2 connected panos on level 1 and 2 connected panos on level 2.
I updated my answer. I use a two-step process: I first upload all images and then I use a batchUpdate to set all the connection and levels data.
Hey @Thomas Rauscher, I have used the separate photo.update method for levels. In the result I'm getting the levels value, but levels aren't showing on Google Maps. Any help would be appreciated.
Do they show now? Sometimes it takes 24hrs before the elevator control shows up, and it only shows if at this photo there is another photo with a different level in a ~5m range.
In addition to @Thomas Rauscher's answer, note that there should be an existing indoor map at your latitude/longitude (where your panoramas are taken). You may check this guideline on how to upload a floor plan to Google Maps.
Does the Street View Publish API strictly require a floor plan for multi-level? @abielita
AFAIK, you cannot upload multi-level pictures if the area doesn't have a floor plan.
@abielita You can post a multi level tour anywhere, as long as the photos are close to each other.
Django Form ajax post with checkboxes
I am trying to post a Django form with multiple checkboxes selected via AJAX. When one checkbox is selected, everything works fine. When more than one is selected, it doesn't save anything. I am guessing this happens because of how I am organizing my data in JS before sending it to the server.
The model in question is:
class Room(models.Model):
hotel = models.ForeignKey(Hotel)
name = models.CharField(max_length=32)
capacity = models.IntegerField(
choices=((i, i) for i in range(1, 31)),
default=3)
taxes = models.ManyToManyField(
Tax, related_name='room',
blank=True, limit_choices_to={'hotel': F('hotel')})
The form for creating or editing a room is:
class RoomForm(forms.ModelForm):
name = forms.CharField(widget=forms.TextInput)
class Meta():
model = Room
fields = ('name', 'capacity', 'taxes')
widgets = {
'capacity': forms.Select,
'taxes': forms.CheckboxSelectMultiple,
}
def __init__(self, *args, **kwargs):
super(RoomForm, self).__init__(*args, **kwargs)
if 'initial' in kwargs:
self.fields['taxes'].queryset = Tax.objects.filter(hotel=kwargs['initial']['hotel'])
I post the form via ajax like this:
var elements = $('form').serializeArray();
var params = {}, i;
for (i in elements) {
element = elements[i];
if (element.name in params) {
if (!(params[element.name] instanceof Array)) {
params[element.name] = Array(params[element.name]);
}
params[element.name].push(element.value);
} else {
params[element.name] = element.value;
}
}
params['csrfmiddlewaretoken'] = CSRF_TOKEN;
$.post(e.target.action, params, function(response) {
callback(response);
});
When one tax checkbox is selected and posted, it works perfectly. However, when more than one tax is selected, Django receives the request.POST like this:
{..., u'taxes[]': [u'269', u'268', u'156'], ...}
Instead of this:
{..., u'taxes': [u'269', u'268', u'156'], ...}
And so, the form is validated, but no taxes are saved... :(
Another note: I have tried jumping in with a breakpoint before posting the form and the params object does not have a taxes[] key, but it does correctly have a taxes key.
Well, after many many hours of debugging and searching, this answer (https://stackoverflow.com/a/21016757/1580632) helped me solve the issue. In a nutshell, in the $.ajax call, just add:
$.ajax({
...
traditional: true,
...
}
And that's it.
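The serialization difference behind the `[]` suffix can be reproduced with nothing but the standard library; the key names below mirror the question, and the encoded strings are what jQuery produces in each mode:

```python
from urllib.parse import parse_qs

# jQuery's default ("deep") serialization appends [] to repeated keys,
# while traditional: true repeats the bare key, which is what Django's
# QueryDict.getlist('taxes') expects to see.
deep = "taxes%5B%5D=269&taxes%5B%5D=268&taxes%5B%5D=156"   # taxes[]=...
traditional = "taxes=269&taxes=268&taxes=156"

# Default mode yields the 'taxes[]' key Django doesn't look for...
assert parse_qs(deep) == {"taxes[]": ["269", "268", "156"]}
# ...while traditional mode yields plain 'taxes' with all three values.
assert parse_qs(traditional) == {"taxes": ["269", "268", "156"]}
```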
Fourier transform of an image in EmguCV
Can anyone tell me if there is an inbuilt function in EmguCV 2.3 for finding the Fourier transform of images?
Thanks in advance
From my answer Fourier Transform + emgucv
The function you are after is CvInvoke.cvDFT. It is technically calling the OpenCV method, but it should be what you're after.
Here is the code that splits the Imaginary and Real parts from cvDFT:
Image<Gray, float> image = new Image<Gray, float>(open.FileName);
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(complexImage); // Initialize all elements to Zero
CvInvoke.cvSetImageCOI(complexImage, 1);
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0);
Matrix<float> dft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);
//The Real part of the Fourier Transform
Matrix<float> outReal = new Matrix<float>(image.Size);
//The imaginary part of the Fourier Transform
Matrix<float> outIm = new Matrix<float>(image.Size);
CvInvoke.cvSplit(dft, outReal, outIm, IntPtr.Zero, IntPtr.Zero);
//Show The Data
CvInvoke.cvShowImage("Real", outReal);
CvInvoke.cvShowImage("Imaginary ", outIm);
Cheers,
Chris
For some reason using exactly this code, it hangs on CvInvoke.cvCopy
Hi, what version of EMGU are you using? Please make sure that the image and complexImage are the same size (including ROI), or else the method will just freeze. Cheers
comprehension of ancient languages
Is it necessary to have a deep understanding/comprehension of ancient languages to know the "fullness" of the message of the Bible?
I'm not sure this is a question that belongs here. While I read Greek, Hebrew, and Aramaic, the most important thing is to know the Bible in its entirety and the context of a passage. For a complete hermeneutic the Biblical languages are important. However, if you don't want to do what the Bible says, you won't fully understand it (John 7:17).
And why would that question not belong here? Do you mean on this site, or otherwise?
Perry, I still do not understand your comment... please clarify.
Handling multiple accounts in PJSUA2
I'm making an Android VoIP app using the PJSUA2 library. There is one Account instance, and I'm calling the account.create(accountConfig) method when logging in. If I keep giving wrong credentials, the same function is called repeatedly on the same account instance. After 3 attempts, this function throws an exception.
Title: pjsua_acc_add(&pj_acc_cfg, make_default, &id)
Code: 70010
Description: Too many objects of the specified type (PJ_ETOOMANY)
Location: ../src/pjsua2/account.cpp:700
How can I handle this error?
As far as I know, the PJ_ETOOMANY usually has to do more with transports than accounts. Whenever you add an account, it will try creating transports. The library's hardcoded maximum of transports is 8. Feel free to look into that file ../src/pjsua2/account.cpp:700 and see what you can see/fix or contact the authors/users on the mailing list.
Thanks Shark, for the guidance.
There is a chance I could be wrong though; I will have to look in the source to be sure. I know I patched it to allow more than 8 transports, and the patch isn't that big and could be found in the mailing list's archives.
But I'm leaning more towards the "transport-related problem" than the "accounts-related problem"; however, I'm only using one account.
But in general, my tip would be that the mailing list should be your go-to place for asking PJSIP/PJSUA questions instead of Stack Overflow, as there are probably more people reading it who have been involved with or just used the project.
And they quite possibly have the domain knowledge and are familiar enough with the source to give you better tips than people here simply flagging you for lacking an MCVE, asking what you have tried so far, or telling you that this is not a code-writing or debugging service, and so on. Hope you get it fixed, and be sure to run make clean; make distclean; make after changing the PJSIP/PJSUA source in both the PJSIP and PJSUA folders to see changes in the resulting library.
Finally I got it right by changing account.create() to account.modify()!!
Two questions about groups and presentations
i have two questions about group presentations:
Given the presentation $\langle a,b:a^2=1=b^3,(ab)^3=1\rangle$. I know that this is a presentation of $A_4$, but how do I deduce that? Should I give a homomorphism from the presented group to $A_4$ and conclude that this is an isomorphism? If so, how do I prove that such a mapping is an injective/surjective homomorphism? For example, we can map $a\mapsto (12)$ and $b\mapsto (123)$. But how do I go further to get the result?
Given the group presentation $G=\langle a_1,\cdots,a_g:\prod_{i=1}^g{a_i^2}=1\rangle$ (in particular, this is a presentation of the fundamental group of a closed non-orientable surface). I want to compute the abelianization $G/[G,G]$. From my point of view it has to be $(\Bbb{Z}/2\Bbb{Z})\times\Bbb{Z}^{g-1}$. But how must I argue to make this clear?
Thank you for help, hints and solutions :)
For (1), you know $A_4$ satisfies the presentation (just take $a = (12)(34), b = (123)$, and note that $a b = (134)$; please note that the element $(12)$ you consider is not in $A_{4}$), so the presented group
$$
G = \langle a,b:a^2=1=b^3,(ab)^3=1\rangle
$$
has order at least $12$, because it has $A_{4}$ as a homomorphic image. (For all we know at this stage, it might also be infinite.)
Now in $G$, note that the first relation and the second relation mean
$$a^{-1} = a, \qquad b^{-1} = b^{2};\tag{pow}$$ also, note the consequences of the third relation:
$$
b a b = a b^{-1} a,
\qquad
a b a = b^{-1} a b^{-1}
.
\tag{cons}
$$
Now consider the elements
$$
a, b^{-1} a b,
$$
and the subgroup $V = \langle a, b^{-1} a b \rangle$ they span in $G$.
The two elements commute with each other, as (pow) and (cons) imply
$$
a (b^{-1} a b) = (a b^{-1} a) b = (b a b) b = b a b^{-1} = b^{-1} (b^{-1} a b^{-1}) = b^{-1} (a b a) = (b^{-1} a b) a.
$$
Moreover, again by (pow) and (cons),
$$
b^{-1} (b^{-1} a b) b = (b a b) b = (a b^{-1} a) b = a (b^{-1} a b) \in V.
$$
So $V$ is a normal subgroup of $G$, of order at most $4$, and the quotient group $G/V$ is generated by $b$, so $G$ has order at most $12$.
It follows that $G$ has order precisely $12$, and it is isomorphic to $A_4$.
For (2), if $H$ is your abelianization, then note that $H$ is generated by (the images of) $a_1, \dots, a_{g-1}$ and by $b = a_{1} a_{2} \dots a_{g}$. With respect to these generators, the relation becomes $b^{2} = 1$. So you are talking of the abelian group with presentation
$$
\langle a_1, \dots, a_{g-1}, b : b^2 = 1 \rangle
$$
which is indeed isomorphic to $\Bbb{Z}^{g-1} \times \Bbb{Z}_{2}$.
As for your second question, if you let $a=\prod^g_{i=1} a_i$ you can rewrite the abelianization of $G$ as
$$
G/[G,G]=\langle a_1,\dotsc,a_{g-1},a : a^2=1,[a_i,a_j]=1,[a_i,a]=1\rangle
$$
i.e. it is just $\Bbb Z^{g-1}\times C_2 \simeq \Bbb Z^{g-1}\times \Bbb Z/2\Bbb Z$.
Why is this a presentation of $G/[G,G]$?
A presentation for $G/[G,G]$ is just a presentation for $G$ plus relations for the commutators of the generators. Then by commutativity $\left(\prod^g_{i=1}a_i\right)^2=\left(\prod^g_{i=1}a_i^2\right)$. Moreover $a_1,\dotsc,a_{g-1},a$ is a system of generators, since $a_g=a(\prod^{g-1}_{i=1}a_i^{-1})$.
If $p <q$ (primes), how to classify the semi-direct products of $\mathbb{Z}_{q}$ by $\mathbb{Z}_{p}$?
I have solved several exercises on classifying groups, and I have been wanting to generalize my results. However, I came across this problem: I know that there are no nontrivial semi-direct products of $\mathbb{Z}_{p}$ by $\mathbb{Z}_{q}$, only the direct product, since $p<q$, but I had difficulty classifying the semi-direct products of $\mathbb{Z}_{q}$ by $\mathbb{Z}_{p}$.
Just to be clear, you are looking for $\Bbb Z_q\rtimes \Bbb Z_p$?
@AOrtiz exactly
We need to find all the homomorphisms $\Bbb Z_p\to \mathrm{Aut}(\Bbb Z_q)$. Recall that $\mathrm{Aut}(\Bbb Z_q) \cong \Bbb Z_{q-1}$ since $q$ is prime. Hence we need to know all the homomorphisms $\phi\colon \Bbb Z_p\to\Bbb Z_{q-1}$, each of which is determined by where we send the generator $1$ of $\Bbb Z_p$. For any such homomorphism, the order of $\phi(1)$ divides $p$ and it divides $q-1$. One option for $\phi$ is the trivial homomorphism, which corresponds to $1\mapsto \mathrm{Id}_{\Bbb Z_q}$, and that gives us the usual direct product $\Bbb Z_q\times\Bbb Z_p$.
The other option is that $p$ divides $q-1$, i.e. $q \equiv 1\bmod p$. Then the image $\phi(1)$ generates the unique cyclic subgroup of $\Bbb Z_{q-1}$ of order $p$, which is generated by $\frac{q-1}{p}$. Under the identification $\mathrm{Aut}(\Bbb Z_q)\cong \Bbb Z_{q-1}$ (fix a generator $g$ of the cyclic group $(\Bbb Z/q\Bbb Z)^{\times}$), this corresponds to the automorphism $\psi$ of $\Bbb Z_q$ given by multiplication by $g^{(q-1)/p}$, an element of multiplicative order $p$.
Hence the isomorphism classes of groups $G$ of order $pq$, with $p < q$ and $p,q$ primes are
$$
G\cong \Bbb Z_q\times\Bbb Z_p\qquad\text{or}\qquad G\cong \Bbb Z_q\rtimes_\phi\Bbb Z_p,
$$
where $\phi\colon \Bbb Z_p\to \mathrm{Aut}(\Bbb Z_q)$ is defined by $\phi\colon 1 \mapsto \psi$, where $\psi$ is the automorphism of $\Bbb Z_q$ we described above.
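As a quick sanity check, consider the smallest case $p=2$, $q=3$. The trivial $\phi$ gives $\Bbb Z_3\times\Bbb Z_2\cong\Bbb Z_6$, while $p=2$ divides $q-1=2$ and the inversion automorphism $\psi(x)=-x$ of $\Bbb Z_3$ has order $2$, so
$$
\Bbb Z_3\rtimes_\phi \Bbb Z_2 = \langle r,s : r^3 = s^2 = 1,\ s r s^{-1} = r^{-1}\rangle \cong S_3,
$$
recovering the familiar fact that the only groups of order $6$ are $\Bbb Z_6$ and $S_3$.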
Hyphen's writing problem
I'm new on this website, so if my question is inappropriate, please tell me. Sorry in advance :D
I'm writing a sentence, and in this sentence there are two words that appear close together. These two words are: self-esteem and group-esteem.
Is it correct in English to write these words like this: self and group-esteem?
(the complete sentence is: The process of comparison is automatic when there is a categorial distinction whose aim is the increase of self and group-esteem)
Thank you! :)
Possible duplicate of How to use hyphens appropriately when listing multiple hyphenated terms? Also Can one use a hyphen to form 2 words with same prefix?
Your question title here probably won't help any future visitors find the answer to this type of question, but the site search facilities also index the question text, so it's always useful to have multiple questions about the same thing (if someone ends up looking at this, they'll be easily able to follow the link to what will hopefully be a satisfactory answer). Anyway, don't feel bad that you didn't find an earlier question - I knew it existed, but it still wasn't easy for me to find it!
Prediction models, Objective functions and Optimization in Python
How do we define objective functions when doing optimization in Python? We have defined prediction models separately. The next step is to build objective functions from the prediction models (gradient boosting, random forest, linear regression, etc.) and optimize them to achieve maximum and minimum outputs. Please suggest if there are any examples from Pyomo/PuLP or any other optimization package in Python.
Please don't post duplicate questions: https://stackoverflow.com/questions/62814537/prediction-models-objective-functions-and-optimization -- please delete one of them or edit them to be a different question
In Pyomo Documentation 5.7, you can find the answers you want.
def profit_rule(model):
    return summation(model.p, model.x) + model.y

model.Obj = Objective(rule=profit_rule, sense=maximize)
https://pyomo.readthedocs.io/en/stable/pyomo_modeling_components/Objectives.html
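If you don't have Pyomo handy, the general pattern of the question, treating a fitted model's prediction as a black-box objective and searching the input space, can be sketched with the standard library alone. `fake_predict` below is a hypothetical stand-in for something like a regressor's predict method, not Pyomo API:

```python
import random

# Stand-in for a fitted model's prediction: a black-box function of the
# decision variable with a known minimum at x = 3, f(3) = 1. In practice
# this would wrap model.predict(...) from your gradient boosting / random
# forest / linear regression model.
def fake_predict(x):
    return (x - 3.0) ** 2 + 1.0

def random_search(objective, lo, hi, iters=20000, seed=0):
    """Minimize a black-box objective over [lo, hi] by random sampling.

    To maximize instead, pass lambda x: -objective(x).
    """
    rng = random.Random(seed)
    best_x, best_val = lo, objective(lo)
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

x_star, f_star = random_search(fake_predict, -10.0, 10.0)
# x_star should land near 3 and f_star near 1
```

With a differentiable or algebraic model you would hand the expression to Pyomo/PuLP or scipy.optimize instead of sampling, but the structure is the same: the prediction is the objective.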
The next step after Java Play Framework 1.2.x?
I am wondering what's the next logical step after developing applications with java play framework?
I really love developing with Play 1.2, but I am not confident about its future: the main developers stopped supporting it (yet it is still open source), and Play 2.0 is a completely different product.
I tried to study Play 2.0, but I just couldn't like the Scala language (although it sounds like a great language to code in).
So I decided to move my web application projects to another framework. It doesn't have to be Java, but I prefer a platform-independent framework, like Ruby or similar. (I am also a .NET developer with an MCP certificate, but I usually use an OS X environment for coding and I'm not a big fan of Windows.)
My Current problems with the play framework:
It works quite well, but I don't see a future with it; I am afraid the open-source community will stop developing 1.2.x after some time.
Play 2.0 treats Java as a second-class citizen, and I am starting to lose my faith in its developers.
There are not many people looking for Play framework jobs.
The framework should be:
Platform independent
Database independent (can use Hibernate or something else...)
Has a large user community
Has to be a proven framework with large enterprise applications
I've searched a little bit and I found grails, spring and RoR frameworks.
Ok then to make things clearer, heres a summary about my question:
Should I continue down the "Java" path? I have concerns that times are changing and that in a few years there will be more Scala-like functional languages used in web frameworks, and they will be more useful in future frameworks.
I am also wondering about the Ruby language. Any insights about where it will be in the next 5 years?
Where do you see "Play framework with Scala/Java" in the next 5 years? Will it be worth the time invested in it?
Thanks for helping!
Next step: Play 2.0 with Java, there are only few places where you'll need to learn very basic Scala syntax: templates (simple statements) and configs, (documented)
If you do not further specify your question according to the FAQ, this question is likely going to be closed. Quote: "You should only ask practical, answerable questions based on actual problems that you face." However, your current question is way too broad and based on subjective opinion to fall under that category.
I've edited my post and written about my problems with the current framework. I don't think it's a broad question, but I hope it is enough to keep the question from being removed.
For me it is still unclear what is being asked. If it's about web application frameworks in general, this question is too broad. There is no "THE right framework" for you. However, if you are considering Rails (probably because you want to learn Ruby) but are not sure about performance/scalability etc. compared to e.g. PHP or Django, then that would be a more proper question.
I have improved my question. However, I am asking for opinions from people who have some experience with open-source web frameworks; I'm searching for a framework that is worth the time invested and has a long life span. It doesn't have to be the "one right" framework; I just don't want to waste my time on a framework that fades after a few years. So I cannot be more specific until I have more information from people who actually use them.
Spring.
If you know Java then a reasonable thing is to know Spring also.
People crap on Spring because they think:
It's not new and shiny
You need gallons of XML to do anything.
It's a humongous monolithic beast.
Besides being mature, none of the above is true. And unlike Play!, Spring is in it for the long haul.
Spring also doesn't go off and build its "own" version of everything, but instead relies on best-of-breed libraries that you plug in. Thus with Spring you can play with whatever templating language, whatever build system, persistence layer, etc. you prefer...
Now the only PITA with Spring is finding a good starting point. I recommend either Spring Roo or MWA
UPDATE:
I don't know why I got the -1 when the question was bad anyway (put a comment or something).
He asked for:
Platform independent
Database independent (can use hibernate or else..)
Has a large user community
Has to be a proven framework with large enterprise applications
IMHO There is not a framework that fits the above points better (particularly enterprise).
He asked an opinionated question; I gave him an opinionated answer.
Thank you. I've also heard those things about Spring, but I'll keep it in mind.
zend framework "$this"
I am new to the Zend Framework, and I have a basic question.
Assume I am working with the layout.phtml or with the index.phtml of any script.
When I use "$this->", what instance am I referring to?
I read in a book the following:
"$this is available within the template file, and it is the gateway to Zend_View’s functionality".
Does it mean that I can access any method of any class that lies in any file inside the library/Zend/View/Helpers directory?
Excuse me if this question is silly and/or simple enough.
Thank you
$this-> in a view template is a reference to the Zend_View object you create in your controller.
Try var_dump($this) or print_r($this) (echo out a <pre> before the print_r for nicer formatting) in the template. Var dump might help you figure out what is going on a little better.
When you use $this from within a .phtml file you are refering to an instance of Zend_View. This object is setup for you by your controller object which is an instance of Zend_Controller_Action.
Zend_Controller_action ensures that your view object has access to any view helpers that it needs. So, yes, you do have access to any helpers in the library/Zend/View/Helpers directory through the $this variable.
You also have access to any helpers that you write yourself and place in the application/views/helpers directory through $this. See the manual about writing your own view helpers. Once you start, you'll use them all the time as this is a very simple and powerful method of keeping your code DRY.
Incidentally, you also have direct access in the same way to any filters you place in the application/views/filters directory as you can see from the docblock for initView() in Zend/Controller/Action.php.
/**
* Initialize View object
*
* Initializes $view if not otherwise a Zend_View_Interface.
*
* If $view is not otherwise set, instantiates a new Zend_View
* object, using the 'views' subdirectory at the same level as the
* controller directory for the current module as the base directory.
* It uses this to set the following:
* - script path = views/scripts/
* - helper path = views/helpers/
* - filter path = views/filters/
*
* @return Zend_View_Interface
* @throws Zend_Controller_Exception if base view directory does not exist
*/
public function initView()
The whole process from request to response in Zend Framework is quite complicated. There are some diagrams available here and here if you are interested.
Zend Framework is very powerful and is easy to use once you have overcome the learning curve, which is quite steep. It is worth persevering though, as you eventually 'get it' and produce better code faster as a result. I struggled with the documentation and the API, but found that the best documentation is the code. I now have the code for any component of ZF that I am using open in a separate NetBeans window for ease of reference.
Good luck with ZF.
Typically you will assign some bit of data to the view object inside a controller action using something like:
$form = new My_Form();
//assign My_Form to the view object
$this->view->form = $form;
in your view script you would normally access that data using something like:
//this bit of code would display your whole form in the view script
//along with any layout information contained in your layout file
<?php echo $this->form ?>
also items can be assigned to the view object from the bootstrap and these items will be available to the layout or view scripts. Here is an example:
protected function _initView() {
//Initialize view
$view = new Zend_View();
//get doctype from application.ini
$view->doctype(Zend_Registry::get('config')->resources->view->doctype);
$view->headTitle('Our Home');
//get content-type from application.ini
$view->headMeta()->appendHttpEquiv('Content-Type',
Zend_Registry::get('config')->resources->view->contentType);
//add css files
$view->headLink()->setStylesheet('/css/blueprint/screen.css');
$view->headLink()->appendStylesheet('/css/blueprint/print.css', 'print');
$view->headLink()->appendStylesheet('/css/master.css');
$view->headLink()->appendStylesheet('/css/main.css');
$view->headLink()->appendStylesheet('/css/nav.css');
//add it to the view renderer
$viewRenderer = Zend_Controller_Action_HelperBroker::getStaticHelper(
'ViewRenderer');
$viewRenderer->setView($view);
//Return it, so that it can be stored by the bootstrap
return $view;
}
Now this data is accessed inside of a layout.phtml in this manner:
<?php echo $this->doctype() . "\n"; ?>
<html>
<head>
<?php echo $this->headMeta() . "\n" ?>
<?php echo $this->headLink() . "\n" ?>
<!--[if lt IE 8]>
<link rel="stylesheet" href="/css/blueprint/ie.css" type="text/css" media="screen, projection" />
<![endif] -->
</head>
now for completeness here is the PHP manual version of $this:
Within class methods the properties, constants, and methods may be
accessed by using the form $this->property (where property is the name
of the property) unless the access is to a static property within the
context of a static class method, in which case it is accessed using
the form self::$property. See Static Keyword for more information.
The pseudo-variable $this is available inside any class method when
that method is called from within an object context. $this is a
reference to the calling object (usually the object to which the
method belongs, but possibly another object, if the method is called
statically from the context of a secondary object).
This is not a complete explanation but I hope it gets you started.
Excellent answer, but it doesn't actually refer to the OP's question. You have explained how to use the $this pseudo variable rather than what object it refers to.
@vascowhite you're correct I referenced the view object several times but never explained what the view object was. Although I think between the two of us he got a crash course in what it is and how to use it. :) Even money I answered the question he meant to ask. ;)
As said, $this in a view is an instance of Zend_View.
Please see the render method in the Zend_View class:
public function render($name)
{
// find the script file name using the parent private method
$this->_file = $this->_script($name);
unset($name); // remove $name from local scope
ob_start();
$this->_run($this->_file);
return $this->_filter(ob_get_clean()); // filter output
}
Basically the ZF action helper (ViewRenderer) creates an instance of Zend_View and calls the render method, passing the name of the view file (index.phtml):
$view = new Zend_View();
$view->render('index.phtml');
As you can see, output buffering (ob_start) is used in the render method, which loads the index.phtml file in the context of the Zend_View class; hence $this can be used inside index.phtml, as its code runs as part of the class.
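This "run the template in the context of the view object" mechanism is not specific to PHP. Here is a rough Python analogue (purely illustrative; the View class and escape helper are made up for this sketch, and this is not how Zend_View is actually implemented):

```python
# Minimal analogue of Zend_View::render(): the template code runs with
# access to the view object, so "$this" inside a template is the view.
class View:
    def __init__(self):
        self.data = {}              # values assigned by the "controller"

    def escape(self, value):        # stands in for a Zend_View helper
        return str(value).replace("<", "&lt;").replace(">", "&gt;")

    def render(self, template):
        # Run the template in the context of the view instance, the way
        # a .phtml file is executed inside Zend_View with $this bound.
        return template(self)

# A "template": a callable that receives the view instance ($this).
page = lambda this: "Title: " + this.escape(this.data["title"])

view = View()
view.data["title"] = "<Home>"
print(view.render(page))  # Title: &lt;Home&gt;
```

Because the template executes with the view instance in scope, it can reach any helper the view exposes, which is exactly the role $this plays in a .phtml file.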
In addition to the other answers, $this will help you use the helpers you define in your project's application/views/helpers directory. You can use all of these helpers anywhere in the .phtml view files in your project by just binding them through the Zend helper mechanism in the initializer.
How to fix this wifi FPV antenna
I do not know much about antennas. A wire broke off a WiFi antenna that is part of the video transmission unit of a quadcopter.
The short grey colored wire on the right side of the picture broke off.
I simply tried soldering it back on, but that does not seem to work. The range of the antenna now is not even 2 meters. The thick shrink-wrapped black part of the antenna is a hollow metal tube.
I assume that the center of the antenna is going through the tube, while the outside of the antenna should be attached to the tube. But this is just a guess of mine; I do not have much knowledge about antennas.
The black antenna wire is so small that I can not see a distinct core or shield.
Is this a coaxial wire? How should it be attached to the metal tube? In other words: How could I fix this antenna?
Thanks.
Frequency is not given, but this looks like it could be based on a quarter-wavelength antenna main element (your broken wire), and a quarter-wave sleeve balun element:
Coax cable extends into the left end of the metal sleeve, which is hollow. The left end of the sleeve is electrically floating, unattached to anything. The right end of the sleeve is attached to the coax shield.
Coax centre conductor is attached to the quarter-wave antenna wire, where it appears at the right end of the sleeve.
This looks like a basic 1/2 wavelength dipole antenna, similar to the one shown here, with the central element broken off. It may be difficult to solder it back on without shorting it to the metal sleeve to which the braid of the coax cable is connected.
A better idea is to find a spare coax cable with the right connector (or just shorten the existing one), peel off 1/4 wavelength (about 3 cm or 1.2" for 2.4GHz) of the braid, and then crimp or solder the sleeve to the coax braid so that the central element is sticking out:
If attaching the sleeve is too difficult, you can simply solder another 1/4 wavelength piece of wire to the sleeve, and make it point in the direction opposite to the central element.
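The quarter-wavelength figure quoted above is easy to verify (assuming 2.4 GHz WiFi; in practice the element is often cut slightly shorter to account for the velocity factor of the wire):

```python
# Quarter of the free-space wavelength for a given frequency.
C = 299_792_458  # speed of light, m/s

def quarter_wavelength_cm(freq_hz):
    """Return lambda/4 in centimetres for the given frequency in Hz."""
    return C / freq_hz / 4 * 100

print(round(quarter_wavelength_cm(2.4e9), 2))  # 2.4 GHz WiFi: ~3.12 cm
print(round(quarter_wavelength_cm(5.8e9), 2))  # 5.8 GHz FPV band: ~1.29 cm
```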
CSS: Can you use `:has` within `:host()` selector?
In the below example, I am trying to style the **host** depending on whether it has a slotted element that has the empty attribute. If it does have such an element, then I wish to add a lime green border:
class Component extends HTMLElement {
constructor() {
super().attachShadow({mode:'open'});
const template = document.getElementById("TEMPLATE");
this.shadowRoot.appendChild(template.content.cloneNode(true));
}
}
window.customElements.define('wc-foo', Component);
<template id="TEMPLATE">
<style>
:host(:has(::slotted([empty]))) {
border: 2px solid lime;
}
</style>
<div>Custom web component should have a lime border
</div>
<slot></slot>
</template>
<wc-foo>
<div empty>"Empty Div"</div>
</wc-foo>
However this does not work, and I am not sure why; I'm guessing it's because the :host() selector has to take a simple selector. Is there any other way of achieving it?
PS: This question is not a dup of How to use ":host" (or ":host()") with ":has()" cause that is about selecting the host's children, whereas I am trying to select the host depending on its children.
As of writing this answer, :host(:has(...)) selecting the light DOM is only implemented in Safari.
So the following example currently only works in Safari:
class WcFoo extends HTMLElement {
constructor() {
super();
this.attachShadow({mode: 'open'});
this.shadowRoot.innerHTML = `
<style>
:host {
display: block;
margin: 1em;
}
:host(:has([empty])) {
border: 4px solid lime;
}
</style>
<div>Custom web component should have a lime border</div>
<slot></slot>
`;
}
}
customElements.define('wc-foo', WcFoo);
<h3>Preview this example using Safari</h3>
<wc-foo>
<div empty>"Empty Div"</div>
</wc-foo>
<wc-foo>
<div>"No empty attribute"</div>
</wc-foo>
While researching this question, I found some GitHub issues that may provide more context: https://github.com/web-platform-tests/interop/issues/208 .
This Safari only answer is also courtesy of Westbrook's GitHub comment: https://github.com/w3c/webcomponents-cg/issues/5#issuecomment-1220786480
What is a solution that works in all browsers today?
You can reflect an attribute onto the host to style the host. To detect that your element has been slotted by the element containing the empty attribute, you can use the slotchange event.
In the slotchange event reflect a styling attribute onto the host, and use the selector: :host([empty]) to add the lime border.
class WcFoo extends HTMLElement {
constructor() {
super();
this.attachShadow({
mode: 'open'
});
this.shadowRoot.innerHTML = `
<style>
:host {
display: block;
margin: 1em;
}
:host([empty]) {
border: 4px solid lime;
}
</style>
<div>Custom web component</div>
<slot></slot>
`;
this.shadowRoot.querySelector('slot').addEventListener('slotchange',
(evt) => {
const hasEmpty = evt.target.assignedElements()
.some(el => el.matches('[empty]'));
if (hasEmpty) {
this.setAttribute('empty', '');
} else {
this.removeAttribute('empty');
}
}
);
}
}
customElements.define('wc-foo', WcFoo);
<wc-foo>
<div empty>"Has lime border"</div>
</wc-foo>
<wc-foo>
<div>"Will not have border"</div>
</wc-foo>
Objects were released but still return their values. ARC is not enabled (off)
-(void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
//test of retain and copy
NSString *s1 = [[NSString alloc] initWithString:@"String1"];
NSString *s2 = [s1 copy];
[s1 release];
[s1 release];
[s2 release];
if(s1!=nil)
{
NSLog(@"11111");
NSArray *array = [[NSArray alloc] initWithObjects:@"1",@"2",@"3", nil];
[array release];
NSLog(@"S1 - %@ \n S2 - %@ \n Array - %@",s1,s2,array);
}
}
===output===
2012-12-14 15:04:01.165 testMM[940:207] 11111
2012-12-14 15:04:01.168 testMM[940:207] S1 - String1
S2 - String1
Array -
First, accessing a deallocated object is undefined behavior. It may access what looks like the original object (if the memory it used hasn't been overwritten), or it may access another object (that happens to be allocated there later), or it may access random garbage (not an object at all), or it may crash or do other weird things. There is no way to tell whether an object is deallocated or not, unless you run with zombies turned on to catch calls to deallocated objects.
Not only that, there is also no guarantee of when an object will be deallocated even after you release all your retains on it. An object is deallocated when its retain count goes to 0. But even if you allocated and release an object, it is possible that some API could have retained and then autoreleased it. e.g. you allocate and then immediately release the array, but it is possible that in initWithObjects: it retains and autoreleases itself (it is never incorrect to retain and autorelease an object), though that is unlikely in an init.
Specifically in this case, for the strings, @"String1" is a literal string that exists in static memory and memory management operations on it like retain and release don't do anything. NSString, when you create a new string based on a constant string, simply returns that constant string; and copy on a constant string also returns that constant string. So basically s1 and s2 both point to a string literal in static memory that lives forever. release on them have no effect. (But are still incorrect from a memory management rules point of view.)
array was allocated and released. What likely happened here (I am guessing here, due to the above reasons) is that the array is indeed deallocated, but that part of memory was not overwritten by the time you print it since it was such a short period of time. Again, this is undefined behavior.
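The retain-count mechanics can be observed directly in a language that exposes its reference counts. The following is only an analogy (CPython's reference counting and weak references, not Objective-C retain/release), but it shows the same rule: the object is deallocated when its last strong reference is dropped.

```python
import sys
import weakref

class Box:
    pass

obj = Box()
# getrefcount reports one extra reference for its own argument,
# so a freshly created object bound to one name typically shows 2.
print(sys.getrefcount(obj))

ref = weakref.ref(obj)   # a weak reference does not bump the count
assert ref() is obj

del obj                  # last strong reference gone -> deallocated
assert ref() is None     # the weak reference now reports the object is gone
```

Unlike the Objective-C case above, CPython defines this behavior; in manual retain/release code, touching the object after its count hits zero is undefined behavior.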
How to disable or replace X-Powered-By header in Sails.js application
When I run Sails.js application, it adds the following HTTP header automatically to every response: X-Powered-By: "Sails <sailsjs.org>".
Is it possible to disable or override it?
Yes, it's quite possible.
You will need to disable Sails's middleware called poweredBy and also tell the Express.js server not to add its own header.
Just update your config/http.js configuration file to look like this:
module.exports.http = {
middleware: {
disablePoweredBy: function(request, response, next) {
var expressApp = sails.hooks.http.app;
expressApp.disable('x-powered-by');
// response.set('X-Powered-By', 'One Thousand Hamsters');
next();
},
order: [
// ...
// 'poweredBy',
'disablePoweredBy',
// ...
]
}
};
Here, we are retrieving an instance of the Express application from Sails hooks and then using its disable() method to set the x-powered-by configuration parameter to a false value. That will prevent the header from appearing.
And in order to enable this custom middleware, you will need to add it to the order array. You can just replace poweredBy middleware with disablePoweredBy.
Also, by un-commenting the response.set() method you can set your own header value.
As of at least 2016 (maybe even earlier), you don't need to disable Express' X-Powered-By, as Sails already does this, even if you disable Sails' poweredBy.
Edit your config/http.js and set poweredBy to false:
module.exports.http = {
middleware: {
poweredBy: false
}
}
Since Sails will disable the express X-Powered-By header there is no need to disable it manually.
This works perfectly with sails v 0.12. This should be the accepted answer
This also works well with v1.5.3.
There is no need to create a new middleware; you can override the poweredBy middleware of Sails.js, for example:
module.exports.http = {
middleware: {
poweredBy: function (req, res, next) {
// or uncomment if you want to replace with your own
// res.set('X-Powered-By', "Some Great Company");
return next();
}
}
}
How to understand VIM configuration noremap <leader>M mmHmt:%s/<C-V><cr>//ge<cr>'tzt'm
I find many results on the internet using this configuration to remove ^M. I can understand the substitute command :%s/<C-V><cr>//ge<cr>, but I cannot figure out why mmHmt and 'tzt'm are necessary.
noremap <leader>M mmHmt:%s/<C-V><cr>//ge<cr>'tzt'm
mmHmt:%s/<C-V><cr>//ge<cr>'tzt'm
^^................................ create mark m
^............................... move the cursor to the top of the window
^^............................. create mark t
^^...... move the cursor to mark t
^^.... position the current line at the top of the window
^^.. move the cursor to mark m
Creating those marks and jumping back to them after the substitution seems to be an attempt at keeping the cursor in place.
Hints:
The mapping is a normal mode mapping so the commands in the RHS are assumed to be normal command by default.
mmHmt is thus a sequence of normal mode commands, and so is 'tzt'm.
Doing :help m explains mm and mt. It also explains 't and 'm indirectly.
You are left with H: :help H, and zt: :help zt.
See :help m, :help H, :help ', :help zt.
Learning Vim is really easy. It not only allows one to understand random snippets found on the internet but, more importantly, to not need those in the first place.
See :help user-manual.
Pull up to refresh in android RecyclerView
I have used the SwipeRefreshLayout of v4 support library according to the following way:
swipeRefreshLayout.setOnRefreshListener(new SwipeRefreshLayout.OnRefreshListener() {
@Override
public void onRefresh() {
refreshItems();
}
});
void refreshItems() {
Handler handler = new Handler();
handler.postDelayed(new Runnable() {
@Override
public void run() {
swipeRefreshLayout.setRefreshing(false);
}
}, 3000);
}
In this strategy, if I pull down the screen when first list item is visible then onRefresh() method is called.
This is called pull-down-to-refresh, but I want the reverse effect. That is, if I pull up the screen when the last list item is visible, then a method should be called or it should be notified somehow. Is it possible? If possible, please show me the way.
Can I ask you why do you want to accomplish that behaviour?
I want this because if the amount of data is huge and the data is loaded from a server, then I will load some data the first time, and more data will be loaded on the pull-up gesture.
Then you should look for "Endless Scroll Listener", there are some libraries out there, or you can write your own onScroll() implementation of OnScrollListener to accomplish that.
@KanchanChowdhury see my answer below; otherwise Grender is right, you have to use an endless scroll listener.
Thanks Grender, Harshad. I will check it.
In your Adapter class of RecyclerView/ListView, while inflating the last item you can put an if statement and call a method.
Code if you are using RecyclerView:-
private boolean loading = true;
int pastVisiblesItems, visibleItemCount, totalItemCount;
mRecyclerView.addOnScrollListener(new RecyclerView.OnScrollListener()
{
@Override
public void onScrolled(RecyclerView recyclerView, int dx, int dy)
{
if(dy > 0) //check for scroll down
{
visibleItemCount = mLayoutManager.getChildCount();
totalItemCount = mLayoutManager.getItemCount();
pastVisiblesItems = mLayoutManager.findFirstVisibleItemPosition();
if (loading)
{
if ( (visibleItemCount + pastVisiblesItems) >= totalItemCount)
{
loading = false;
Log.v("...", "Last Item Wow !");
//Do pagination.. i.e. fetch new data
}
}
}
}
});
Also add below code:-
LinearLayoutManager mLayoutManager;
mLayoutManager = new LinearLayoutManager(this);
mRecyclerView.setLayoutManager(mLayoutManager);
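The heart of the scroll listener above is a single index comparison. Sketched as plain logic (illustrative Python pseudologic, not Android API code):

```python
def reached_end(visible_count, first_visible_pos, total_count):
    """True once the last adapter item has scrolled into view.

    The last visible position is first_visible_pos + visible_count - 1,
    so the end is reached when first_visible_pos + visible_count >= total.
    """
    return visible_count + first_visible_pos >= total_count

# 20 items, 5 visible, first visible at index 15 -> item 19 (the last) shown
print(reached_end(5, 15, 20))  # True
print(reached_end(5, 10, 20))  # False
```

In the Java code the loading flag then debounces the trigger so the fetch fires only once per page.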
@KanchanChowdhury check my new answer
You said it yourself: you are using SwipeRefreshLayout.
It is used for swipe-to-refresh;
it's not designed for (or capable of) achieving your goal.
The way I see it, you'll have to write your own custom view.
Well, you could check when the last item of the list is reached and trigger the animation+update.
Thanks. But it's not the desired behavior. If the last item is visible and I pull up again it will not be notified.
In an Android app, Pull To Refresh (aka SwipeRefreshLayout) is used whenever we need to refresh the contents of a view via a vertical swipe gesture. It accepts only one child, meaning the component we want to refresh. It uses the listener mechanism to inform the listener who holds this component that a refresh event has occurred...
Basic Pull To Refresh / SwipeRefreshLayout XML code:
<android.support.v4.widget.SwipeRefreshLayout
android:id="@+id/simpleSwipeRefreshLayout"
android:layout_width="match_parent"
android:layout_height="wrap_content">
< Add View's Here..../>
</android.support.v4.widget.SwipeRefreshLayout>
See more with examples from http://abhiandroid.com/materialdesign/pulltorefresh.
Here is my version of Vishwesh's answer
mRecyclerView.addOnScrollListener(new RecyclerView.OnScrollListener() {
@Override
public void onScrolled(RecyclerView recyclerView, int dx, int dy) {
super.onScrolled(recyclerView, dx, dy);
// Prevents Swipe to Refresh if only scrolling up.
LinearLayoutManager llm = (LinearLayoutManager) recyclerView.getLayoutManager();
int pos = llm.findFirstCompletelyVisibleItemPosition();
if( pos != 0){
swipeRefresh.setEnabled(false);
swipeRefresh.setRefreshing(false);
}
else
{
swipeRefresh.setEnabled(true);
}
}
});
I got the same issue, so in the onRefresh() method I initialized the RecyclerView and the endless scroll listener again, and it worked for me:
@Override
public void onRefresh() {
    adapter = new Adapter(this, list);
    recyclerView.setLayoutManager(layoutManager);
    recyclerView.setItemAnimator(new DefaultItemAnimator());
    recyclerView.setAdapter(adapter);
    loadMoreData();
    recyclerView.addOnScrollListener(new EndlessRecyclerOnScrollListener() {
        @Override
        public void onLoadMore() {
            loadMoreData();
        }
    });
}
ZF2/Doctrine 2 : Dynamic mapping for both ORM/ODM?
I am using Zf2 and Doctrine2.
What I am trying to accomplish is to write an event subscriber that can determine which object manager (EntityManager/DocumentManager) is being used and dynamically map a model class to a table or document.
How would I map the model to the doctrine configuration and what driver would I use?
Can I set the table and field names dynamically in a configuration file somewhere?
Is this possible and if so, please point me in the right direction.
Here is an example:
<?php
class Model
{
protected $id;
protected $name;
protected $data;
}
I would like to map this one model to both ORM\ODM with the possibilty of changing property names.
Please advise.
Frankly this sounds like a bad idea. All mapping information is stored in Doctrine's metadata (mapping information), which is not configurable by default but can be changed at runtime. What you're asking for is a complete replacement of how Doctrine determines this information. With that in place, you probably would need to replace the way Doctrine hydrates your objects too. Your database schema has to be prepared for dynamic properties as well. By then you probably have replaced most of Doctrine anyway, so writing a custom ORM might be the better choice, or a different approach.
Hello @Fge. I have managed to come up with a solution to this at https://github.com/jeandormehl/jhd-base and https://github.com/jeandormehl/jhd-session. I am concerned that this approach might be a problem but I will continue tinkering at this. Thanks
Where did Heinlein say "Once you get to Earth orbit, you're halfway to anywhere in the Solar System"?
I know what it means. I've seen delta-V charts. But I don't know if Robert Heinlein wrote this down, or simply said it off-the-cuff to somebody.
Variations include:
"Reach low orbit, and your halfway ..." (See Space Access Society logo http://space-access.org)
"Make orbit, and you're halfway ..."
If we want to attribute this to him, a citable source would be handy.
This might possibly be on-topic here (not sure), but there is also Science Fiction SE and you are probably going to get faster, better, and more answers there than here. Consider asking there instead?
Just for the record, it should be noted that Heinlein seemed to have a good understanding of orbital mechanics. Books he wrote as early as the 1940s describe maneuvers that sound reasonable, not the usual "shoot from Earth to Mars in a couple of hours".
I think 3/4 is more realistic, or even more.
I don't know where, but it is true, because once you have climbed out of most of Earth's gravity well it is so much cheaper to go anywhere you want: any little thrust in any direction is immediately effective.
@DiegoSánchez He not only understood orbital mechanics, he did the math. He and his wife laboriously worked out the orbits by hand any time he had to describe how long or what procedures it would take to get somewhere in the solar system.
This phrase was quoted by Jerry Pournelle in an article entitled "Halfway to Anywhere" first published in the Galaxy Magazine in the April 1974 Issue in his column "A Step Farther Out". This article was then collected with others into his book of the same name.
Here's the article's opening:
One of my rivals in the science-writing field usually begins his columns with a personal anecdote. Although I avoid slavish imitation, success is always worth copying. Anyway, the idea behind this column came from Robert Heinlein and he ought to get credit for it.
Mr. Heinlein and I were discussing the perils of template stories—interconnected stories that together present a future history. As readers may have suspected, many future histories begin with stories that weren't necessarily intended to fit together when they were written. Robert Heinlein's box came with The Man Who Sold the Moon. He wanted the first flight to the moon to use a direct Earth-to-moon craft, not one assembled in orbit—but the story had to follow Blowups Happen in the future history.
Unfortunately, in Blowups Happen a capability for orbiting large payloads had been developed. "Aha," I said. "I see your problem. If you can get a ship into orbit, you're halfway to the moon."
"No," Bob said. "If you can get your ship into orbit, you're halfway to anywhere."
He was very nearly right.
You can also read the whole article.
Okay, this seems to be the unanimous answer.
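The arithmetic behind "very nearly right" can be sketched. From a circular low orbit, escape speed is exactly √2 times circular speed, so the extra Δv to leave Earth entirely is only about 3 km/s, a fraction of the roughly 9 to 10 km/s (including gravity and drag losses) spent reaching orbit in the first place. A quick calculation with approximate figures:

```python
import math

MU_EARTH = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
R_ORBIT = (6371 + 400) * 1e3  # orbital radius for ~400 km altitude, m

v_circular = math.sqrt(MU_EARTH / R_ORBIT)  # speed in low Earth orbit
v_escape = math.sqrt(2) * v_circular        # escape speed at that radius
extra_dv = v_escape - v_circular            # extra push needed from LEO

print(f"circular: {v_circular / 1000:.2f} km/s")  # ~7.67
print(f"escape:   {v_escape / 1000:.2f} km/s")    # ~10.85
print(f"extra dv: {extra_dv / 1000:.2f} km/s")    # ~3.18
```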
My username and password are not case sensitive in C#
I have used this code to make my login form, but my username and password is now not case sensitive.
private void btnLogin_Click(object sender, EventArgs e)
{
string sqlquery = "SELECT * FROM users WHERE username = '"+txtUsername.Text.Trim()+"' AND password = '"+txtPassword.Text.Trim()+"' ";
SqlDataAdapter da = new SqlDataAdapter(sqlquery,con);
DataTable dt = new DataTable();
da.Fill(dt);
if (dt.Rows.Count > 0)
{
MessageBox.Show("Hi " + txtUsername.Text + ", Welcome to the program!");
}
else
{
MessageBox.Show("Incorrect Username or Password");
}
}
You should not be storing passwords in plain text in your database. You should be storing a hash. When you do that, case sensitivity won't be an issue.
Thank you... but could you please make it clearer? I did not get it, as I am a beginner...
There are tools that help you to manage authentication and authorization for users such as Identity Framework
There are plenty of great tutorials and other resources on how to build an authentication system, you merely need to do your research on how to do this properly. It's well beyond what can be posted in a comment, or even an answer.
Thanks Servy... I will have to look more onto this then....
There are many many things wrong with this. The two big ones though are the following:
1: DO NOT and I mean DO NOT store plaintext passwords in your database. Ever. Make it a secure hash using an existing well tested implementation.
2: Your SQL queries are incredibly vulnerable to SQL injection. For example, when I log in as "'; DROP ALL TABLES; --" it will delete everything in your database. NEVER put raw user input into your SQL queries. Ever.
In short: DO NOT ROLL YOUR OWN AUTHENTICATION. Use one of the numerous tried and true authentication libraries out there or you will end up hurting yourself/your company/others.
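Both fixes can be sketched together. The following is an illustrative Python/SQLite sketch of the idea only, not C#/SQL Server code; in a real application, use a vetted framework such as ASP.NET Identity rather than rolling your own:

```python
import hashlib
import os
import sqlite3

def hash_password(password, salt=None):
    """PBKDF2 hash; the database stores salt + hash, never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (username TEXT, salt BLOB, hash BLOB)")
salt, digest = hash_password("S3cret!")
con.execute("INSERT INTO users VALUES (?, ?, ?)", ("alice", salt, digest))

def login(con, username, password):
    # Parameterized query: user input never becomes part of the SQL text,
    # which closes the injection hole in the concatenated query above.
    row = con.execute("SELECT salt, hash FROM users WHERE username = ?",
                      (username,)).fetchone()
    if row is None:
        return False
    _, candidate = hash_password(password, row[0])
    return candidate == row[1]  # exact (case-sensitive) hash comparison

print(login(con, "alice", "S3cret!"))              # True
print(login(con, "alice", "s3cret!"))              # False: case now matters
print(login(con, "'; DROP TABLE users; --", "x"))  # False, and harmless
```

Note that comparing hashes is an exact byte comparison, so case sensitivity comes for free once passwords are hashed.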
There are many issues I will not address in this answer (plain text password, inline sql without parameters, etc). But will say: I hope this was done only as an example.
That said, SQL Server does not perform case-sensitive searches by default. For that you need to use collation. A great answer to this is given here already:
How to do a case sensitive search in WHERE clause (I'm using SQL Server)?
For the sake of the unfortunate users of whatever software this poster is making, I wouldn't give him the answer to this as it might encourage him to continue with his grievously insecure method.
Thanks buddy, I can temporarily use this method since I need it immediately, and will look into this further and improve on it. So thanks, this will be useful for me now.
+1 There is not always time or money to update everything that is "not done right" and sometimes the best solution is somewhere in the middle of "good enough" and "done right"
For security vulnerabilities that can cause catastrophic data loss(I can delete entire databases) or massive data leaks(leak all customer passwords) there is no way except the correct way. If you are doing it wrong it WILL come to bite you. and it WILL cost more money to mitigate the lawsuits than it does to do it right the first time. Time/money constraints are no excuse for gross negligence. If you can't afford to employ BASIC security on your authentication, you shouldn't be in business.
macOS Monterey File Dates are not rendering correctly
MacOS Monterey Version: 12.4 (21F79)
So since doing a massive jump in updates, Finder is no longer showing dates correctly; all I get is the word "at".
Also if I "Get Info" on the file you see the same: all the dates show as "at".
If I go to the Console app and look at any of the options with a date, all it shows is a comma (",").
So now to the things I've tried:
SMC Reset
PRAM/NVRAM Reset
Disk Repair (Disk Utility First-Aid)
Re-install (without erasing data)
Any and all help will be very much appreciated.
UPDATE:
Tried to re-index with spotlight, but no difference
Using the command GetFileInfo shows the correct data
TEAL-C02ZR0G2MD6W:tmp:$ GetFileInfo com.PM2.err
file: "/private/tmp/com.PM2.err"
type: "\0\0\0\0"
creator: "\0\0\0\0"
attributes: avbstclinmedz
created: 05/17/2022 15:25:09
modified: 05/17/2022 15:25:24
Doing a stat on the file also shows the correct information:
stat com.PM2.err
16777222 170309676 -rw-r--r-- 1 adrianbrowning wheel 0 4006 "May 17 15:25:09 2022" "May 17 15:25:24 2022" "May 17 15:25:24 2022" "May 17 15:25:09 2022" 4096 8 0 com.PM2.err
I’ve got my first Mac installing 12.4 now. Will test. Do you have a common date format selected - https://support.apple.com/en-gb/guide/mac-help/mh27073/mac
@bmike - Thank you! That was exactly what the issue was. It was set to a "Custom" variant, now it's back to the standard
If you can answer your question with a screen shot, I’ll vote that up too +1. That picture will help others I am sure know this got answered.
So thanks to @bmike and his suggestion to check the Language & Region settings I found the culprit.
The Preferred Language was United Kingdom (Custom). With all of the Date fields being empty.
After resetting to the default United Kingdom settings, everything is now back to normal!
I’ve seen some rumblings that other apps / panels lost settings like this. Do you think you made these changes intentionally or the system dropped these unexpectedly / recently? (Excellent, excellent post - thanks for all the details!)
How to define parameter to class constructor before creating a instance with dependency injection
I have a situation involving dependency injection. There is a class in my project that I need, and it's a member of another library (the RestClient class of the RestSharp lib). I added that class in the start-up class as a transient, but it has to take a parameter in its constructor before an instance is created. The problem is I don't know the parameter when the program initializes, because these parameters change according to the user's actions. I am searching for a way to supply parameters to this class before the container creates an instance. Is this design wrong? What would be the right design to solve this issue?
Added here image to better illustrate my question
I would 1) Create a class which can create RestClient instances using the correct configuration (this also tracks the user's actions, etc), 2) Register RestClient using a factory, which calls into that first class to get an instance to return
You don't need to pass the parameter here; you have to add it like this: services.AddTransient<IRestClient, RestClient>();, where you create an interface IRestClient and implement it in your RestClient class. Sorry, my bad, you can use services.AddTransient<RestClient>();
Please add code as text and not as an image.
@canton7 I have modified my comment.
@viveknuna Stick with IRestClient, RestSharp also has one of them and it should definitely be used to support testability later on.
It's worth noting that if you do that, and use the RestClient parameterless ctor, you'll need to set BaseUrl later on after you've fetched an instance
@DavidG sorry I don't have an idea of RestSharp. I have provided the concept.
@Emre is your problem solved by my answer?
@viveknuna actually it's working when I use it as you said. However it's not exactly what I want; I need to pass a parameter to the constructor before an instance is created. If I use a factory for the RestClient class it works as I want. But I need more study to make sure I have done it right.
Prove $2^N$ is uncountable or $2^N$ is equinumerous to power set of $N$.
I've encountered this question many times in similar forms in different books, but I still don't understand it.
In Enderton's set theory, it says:
I don't see how this proof proved H(B) is injective and subjective.
Or in Munkres's Topology
What does the $g(n)$ exactly mean in this proof?
Thanks!
In Enderton, the author is defining a function $H : \mathcal{P}(A) \to {}^A2$ by letting $H(B) = f_B \in {}^A2$, where $f_B : A \to 2$ is the characteristic function of $B$, for each $B \in \mathcal{P}(A)$.
The function $H$ is injective since distinct subsets have distinct characteristic functions. Explicitly, if $B \ne B'$ then either there is some $a \in B$ with $a \not \in B'$, in which case $f_B(a) = 1 \ne 0 = f_{B'}(a)$, or there is some $a \in B'$ with $a \not \in B$, in which case $f_B(a) = 0 \ne 1 = f_{B'}(a)$.
The function $H$ is surjective since each function $A \to 2$ is the characteristic function of the preimage of $1$, namely given $g : A \to 2$ we have $g = H(B)$, where $B = \{ x \in A \mid g(x) = 1 \}$.
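In symbols, the correspondence Enderton defines can be written as:

```latex
H \colon \mathcal{P}(A) \to {}^{A}2, \qquad H(B) = f_B,
\quad \text{where} \quad
f_B(a) =
\begin{cases}
1 & \text{if } a \in B, \\
0 & \text{if } a \notin B,
\end{cases}
\qquad
H^{-1}(g) = \{\, x \in A \mid g(x) = 1 \,\}.
```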
In Munkres, it means exactly what it says: $g(n)$ is the result of applying the function $g : \mathbb{Z}^+ \to X^{\omega}$ to the element $n \in \mathbb{Z}^+$. Note that since the codomain of $g$ is $X^{\omega}$ we have $g(n) \in X^{\omega}$, so $g(n)$ is itself an $\omega$-sequence of elements of $X$. The author has then chosen to express the terms of $g(n)$ by writing $g(n) = (x_{n1}, x_{n2}, \dots)$; in other words, $x_{nk}$ is the $k^{\text{th}}$ term of the sequence $g(n)$.
I think I may have some misunderstanding about the definition of $2^A$. It should be a set of functions from A to 2, right? But the domain of H is B, which is in A, and the range of H is just 0 or 1. I think the domain of H should be identical to $2^A$, which is a bunch of functions. Where am I wrong?
You're correct that $2^A$ (or ${}^A2$) is the set of functions $A \to 2$. The domain of $H$ is not $B$. The domain of $H$ is $\mathcal{P}(A)$, which is the set of all subsets of $A$. It just so happens that the values of $H$ are themselves functions (since they're elements of ${}^A2$), so $H(B)$ is a function $A \to 2$ for each $B \in \mathcal{P}(A)$.
Or in short: $H$ is a function $\mathcal{P}(A) \to {}^A2$, and $H(B)$ is a function $A \to 2$ for each $B \in \mathcal{P}(A)$, since $H(B)$ is an element of ${}^A2$.
I am not sure I understand what "the values of $H$ are themselves functions" means.
And in Munkres, shouldn't $X^{\omega}$ also be functions? Why do they become sequences here? (Sorry for the terrible typing, as I typed on the phone.)
To the proposer: $H$ is a function. The domain of $H$ is the Power-Set of $A$. The range of $H$ is a set of functions, namely the set of characteristic functions of subsets of $A$. Note that a function $f_B$ can be considered to be a single object, so the function $H$ maps the set $B$ to the object $f_B.$ That is, $H(B)=f_B.$..... To a set-theorist, everything is a set, and a function is defined to be equal to "its graph".
Thanks @DanielWainfleet Clive Newstead, I see why H is subjective and injective now. But I still don't know why Munkres wrote $g(n)$ like that. Should the domain of $g$ also be functions? I cannot see why $(x_1, x_2, \dots)$ here is a function.
@Cathy A sequence in $X$ is exactly the same thing as a function $\mathbb N\to X.$ Just write the sequence elements like $x(n)$ rather than $x_n.$
@spaceisdarkgreen Thanks. I think I understand now. Just to make sure: is "a sequence in X is exactly the same thing as a function ℕ→X" because we defined a sequence as a function whose domain is N?
@Cathy: Yes. Sequences of elements of $X$, and functions $\mathbb{N} \to X$, are two different (but equivalent) descriptions of the same thing.
What Munkres means by $g(n)=(x_{n1},x_{n2},x_{n3},...)$ is that $g(1)=x_{n1}, g(2)=x_{n2}, g(3)=x_{n3},$ etc... I hadn't noticed that this could be confusing.....BTW it's "surjective" not "subjective". (Maybe just a typo of yours? Like "sunset" for "subset"?)
@DanielWainfleet: That's not true. It means that $g(n)(1) = x_{n1}, g(n)(2) = x_{n2}$ and so on, where we're considering $g(n)$ as a function $\omega \to X$ rather than a sequence of elements of $X$.
@CliveNewstead. Yes. You are right. I got a little careless with the notation.
Specify required argument type of function as arrow function
Is it possible to require an arrow function as the type for a function argument in some way in TypeScript? For example, if I use the publish-subscriber model, I pass a listener function to a 'server' object, which calls this function when a publisher sends a message to the topic. And if I pass a non-arrow function, it will throw an error if 'this' is used. So, I would like to find a way to add this type of restriction to make such a mistake impossible.
Which error will it throw? And personally, I use this and it works
No, there is no way in typescript to differentiate functions from arrow functions as types.
@Elikill58 playground
You can't differentiate between the different ways of declaring a function. But what you can do is declare the type of this, and if your function call would result in an incompatible type for this, then a type error is raised.
For example:
interface MyObj {
testMethod(this: MyObj): void // declare `this` as `MyObj` in this function.
}
const myObj: MyObj = {
testMethod() {}
}
Now if you call that normally:
myObj.testMethod() // good
But if you call it the wrong way:
const method = myObj.testMethod
method() // The 'this' context of type 'void' is not assignable to method's 'this' of type 'MyObj'.(2684)
So if you type this properly, typescript should enforce the rest.
Read more here
Playground
Another example showing the three different ways to declare a method:
interface MyObj {
testMethod(this: MyObj): void // declare `this` as `MyObj` in this function.
}
const myObjA: MyObj = {
testMethod: function() {
this // good
}
}
const myObjB: MyObj = {
testMethod() {
this // good
}
}
const myObjC: MyObj = {
testMethod: () => {
this // Element implicitly has an 'any' type because type 'typeof globalThis' has no index signature.(7017)
}
}
Playground
Is this effective enough to stop email spam-bots?
So for my responsive site, when at the mobile scale, I have an "Email Us" button that the user can tap to open up the email client.
Originally this was a simple mailto:, but I've since changed it, but as I wanted to keep the changes to an absolute minimum, I decided upon the following method:
Replace the <EMAIL_ADDRESS> with a link to redirect.php in my site directory.
All that is in redirect.php is this:
<?php
header('Location: mailto:example@email.com');
exit();
?>
And it behaves totally fine!
That was the only spot where the email address was present in the HTML or JS, so I felt like it would be overkill to do a complete encryption of the email.
So my question is this: Is this enough to effectively keep spam-bots out?
If no, what extra steps are necessary?
Obviously you can't 100% stop them from happening, but I figured as the actual address is only on the server-side, that would significantly reduce the risk.
Right?
You're still broadcasting the email address. Keep it behind closed doors at all times.
It's a workaround. Actually somewhat clever. But unlikely to deter well-written spiders.
@JohnConde Yeah, you're right, I can see it via the web inspector... well, guess I'll have to rethink it a bit. Thanks for pointing that out.
@mario heh thanks, I certainly thought so.
The way you're doing it is not wrong and can help you a lot, however if you really want to kill spam the best way is to use a Captcha, even if basic.
If I were you I would try this method for some time and if you keep receiving spam, I would introduce a simple Captcha in a lightbox with a button asking if the user is human. If so the user would be redirected to your redirect.php
I think this way is simple for the user to press a button, for you to implement and would kill 95% of bots.
The correct answer to this question will always be no, as even if you come up with a new way of obfuscating your email address that nobody has ever used before, the bots will be modified to get around it eventually
I find that building the address with Javascript is still an effective solution, as crawlers generally don't parse javascript for various reasons
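As a sketch of that idea (the function name buildAddress and the example address are my own, not from the question), the address can be assembled at runtime from pieces so that no literal address ever appears in the served markup:

```javascript
// Hypothetical sketch: build the address at runtime so the literal
// "user@domain" string never appears in the HTML source that naive
// source-scraping bots read.
function buildAddress(user, domain) {
  return [user, domain].join('\u0040'); // '\u0040' is the '@' character
}

// A page would then wire it up on load, e.g.:
// document.getElementById('email-link').href =
//   'mailto:' + buildAddress('example', 'email.com');
```

As the next comment points out, a determined bot can still evaluate or regex-match script source, so this only raises the bar; it doesn't close the door.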
I've seen bots parse the source with a series of regular expressions to nail down the email address when using JS; it's a very rare occasion that they do it, but it can be done. Bots will only be modified to keep up with a standard which is publicly/majority used.
How to create a project which supports mvn jetty:run?
It would be best if, once mvn archetype:generate is run, it could do everything needed to create a project which supports mvn jetty:run. But the fact is there are so many templates that I don't even know which one I should choose in order to create a project which supports mvn jetty:run; even when the -Dfilter option is used, I cannot find the right template to do what I want.
So how do I create a project which supports mvn jetty:run quickly?
Ideally I could do it all simply with a couple of commands, that is:
command 1: create projects
command 2: mvn jetty:run to run this web application:)
From the Maven Jetty plugin doc:
In order to run Jetty on a webapp project which is structured
according to the usual Maven defaults (resources in
${basedir}/src/main/webapp, classes in
${project.build.outputDirectory} and the web.xml descriptor at
${basedir}/src/main/webapp/WEB-INF/web.xml, you don't need to
configure anything.
Simply type:
mvn jetty:run
This will start Jetty running on port 8080 and serving your project.
So you need to use the jee6-basic-archetype (number 414) which will generate the required folder structure.
More information about configuring the plugin is available in the doc.
Seems there are lots of archetypes which support jetty:run, so how do I know which one is best?
Twitter API getting any user's all tweets
I have just started exploring the Twitter API, and to save time I would like to ask here and hope to learn. Which method of the Twitter API can give me all of someone's tweets within a date period? (I will set the user and date period, and I'm using PHP.)
None.
Twitter API has limitations and this is quite understandable because of the number of tweets in the database and the load that could be created by such queries.
You may use user_timeline, however, if the last 3,200 updates will suffice.
EDIT: I have corrected my answer following Jessycat's answer (was: 32,000 tweets, changed to 3,200 tweets).
Hey Tadeck, there is not sufficient explanation about user_timeline, so I would like to ask you this. When I use this:
$content = $connection->get('statuses/user_timeline', array('screen_name' => 'mytwittername')); it won't work. How can I access those 32000 updates?
@gencay The single request can get you up to 200 tweets (see: count parameter in the documentation). To access these 32000 tweets, you will have to make 160 requests (32000/200). Thus you will have to read about rate limiting (and probably employ some walkaround techniques listed there, or try to convince Twitter you really need that). As I said, Twitter API has to be limited because of the amount of data etc.
@Jessicat: Apparently I have made a typo, and then based my comment on it. Will correct the answer
I think you can use the since and until operators offered by the Search API.
check:
https://dev.twitter.com/docs/using-search
the operators part.
an example would be:
http://search.twitter.com/search.json?q=egypt&since=2011-11-28&until=2011-11-30
For the standard Twitter API, you can't do that, you can only get tweets since a certain tweet id, and even then you're capped at how many tweets you can get back.
For the search API, I tried a few methods using the until parameter, but could not get anything useful out of it.
networkx calculating numeric assortativity requires int?
I am trying to use networkx to calculate numeric assortativity based on a numeric attribute that I set to nodes. My node attributes are floats. When I call the assortativity function:
assort = nx.numeric_assortativity_coefficient(G,'float_attr')
I got the following errors.
File "/some dir.../networkx/algorithms/assortativity/correlation.py", line 229, in numeric_assortativity_coefficient
a = numeric_mixing_matrix(G,attribute,nodes)
File "/some dir.../networkx/algorithms/assortativity/mixing.py", line 193, in numeric_mixing_matrix
mapping=dict(zip(range(m+1),range(m+1)))
TypeError: range() integer end argument expected, got float.
I checked the documentation page of the networkx assortativity algorithm and it did not say the numeric attributes have to be int. Does anyone know if that's required?
BTW, I used the same network and a gender attribute (set to 0 and 1) to calculate both the attribute and the numeric assortativity. I had no problem with that. So it seems that the problem is with the int/float type of the node attribute.
m is somehow a float, which you cannot use in range; you can try casting it to int.
the m is a variable in the networkx package, not in my code. I am not sure what the m represents, and I am afraid of making changes to it as it might introduce other problems.
Ah ok, yes looking at the source there is no stipulation on type, m is the max of the dict keys returned from attribute_mixing_dict. Seems like a bug to me.
Thanks! I guess the best way is to convert my float attribute variable into int. although that means losing some fine grained information.
You might be as well to create an issue https://github.com/networkx/networkx/issues
converting the float attribute to int solves the problem. Thanks! Never submitted an issue before, guess I can give it a try. :P
I agree that this is a bug (@PadraicCunningham - want to give that as an answer?). Looking at the code, it looks like it was written for degree assortativity originally, and then modified for arbitrary values.
@sophiadw make sure you change all your floats to ints (maybe multiply by 10 or 100 first, though that may make performance slow?). I suspect just making the max be an int would lead to a bug.
Problem solved by converting the float variable into an int using the following method:
int(round(float_attr*1000, 0))
submitted an issue here and got a confirmatory answer that it only deals with discrete int values.
Performance-wise, since my network is not huge (200+ nodes), it still takes <1 min to do the calculation.
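To make the workaround concrete, here is a sketch of the conversion step (the helper name to_int_attrs and the SCALE factor are my own choices, not part of networkx):

```python
# Hypothetical fixed-point conversion: numeric assortativity (in the
# networkx version discussed) only handles integer attribute values,
# so scale each float and round before handing it to the library.
SCALE = 1000  # keeps three decimal places of precision

def to_int_attrs(float_attrs, scale=SCALE):
    """Map {node: float_value} to {node: int_value} by fixed-point scaling."""
    return {node: int(round(value * scale)) for node, value in float_attrs.items()}

# With networkx, one would then do something like:
#   nx.set_node_attributes(G, to_int_attrs(attrs), 'int_attr')
#   assort = nx.numeric_assortativity_coefficient(G, 'int_attr')
```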
That was a fast turn around, you can also accept your own answer.
MySQL Queries: find any record with a PENDING status and find any records with a FAILED status
I have three columns with three values that can be set for each column
Column_1 | column_2 | column_3
________ | ________ | ________
COMPLETED| FAILED | PENDING
COMPLETED| COMPLETED| COMPLETED
FAILED | COMPLETED| COMPLETED
COMPLETED| PENDING | COMPLETED
COMPLETED| COMPLETED| PENDING
COMPLETED| COMPLETED| COMPLETED
COMPLETED| COMPLETED| COMPLETED
PENDING | COMPLETED| COMPLETED
Looking for two queries, One to find any record with a PENDING status and the other to find any records with a FAILED status
NOTE: PENDING and FAILED should have the same logic
Query: PENDING
SELECT * FROM tbl
WHERE Column_1 = 'PENDING' OR Column_2 = 'PENDING' OR Column_3 = 'PENDING'
Query: FAILED
SELECT * FROM tbl
WHERE Column_1 = 'FAILED' OR Column_2 = 'FAILED' OR Column_3 = 'FAILED'
These queries are not pulling the correct records. I think it's matching the first condition in the WHERE clause and then doing the OR clause as a separate condition. I've tried a couple variations but still no luck.
Alt Query:
SELECT * FROM tbl
WHERE Column_1 OR Column_2 OR Column_3 = 'PENDING'
So for the PENDING status, the query should return 4 rows for the data grid above
and for the FAILED status, the query should return 2 rows for the data grid above
Your first set of queries look fine - what's the problem?
what are you getting? those first 2 queries look fine at first glance to me... never tried that alt query style before... but it looks funny to me :)
It's forcing the WHERE condition to be a separate constraint so the OR conditions are considered as a separate constraint. Does this make sense?
Try wrapping parentheses around each condition to see if that works:
SELECT * FROM tbl
WHERE (Column_1 = 'PENDING') OR (Column_2 = 'PENDING') OR (Column_3 = 'PENDING')
I am actually not familiar with mysql, but this might be a better way, anyone can correct me if I am wrong:
SELECT * FROM tbl
WHERE 'PENDING' IN (Column_1, Column_2, Column_3)
Again, if the above is incorrect, I apologize.
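If you want to sanity-check the IN() form without a MySQL server, here is a sketch using Python's built-in sqlite3 as a stand-in (the predicate behaves the same way in MySQL) against the sample grid from the question:

```python
import sqlite3

# Demo of the IN() trick; sqlite3 stands in for MySQL here.
rows = [
    ("COMPLETED", "FAILED",    "PENDING"),
    ("COMPLETED", "COMPLETED", "COMPLETED"),
    ("FAILED",    "COMPLETED", "COMPLETED"),
    ("COMPLETED", "PENDING",   "COMPLETED"),
    ("COMPLETED", "COMPLETED", "PENDING"),
    ("COMPLETED", "COMPLETED", "COMPLETED"),
    ("COMPLETED", "COMPLETED", "COMPLETED"),
    ("PENDING",   "COMPLETED", "COMPLETED"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (Column_1 TEXT, Column_2 TEXT, Column_3 TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?, ?)", rows)

pending = conn.execute(
    "SELECT * FROM tbl WHERE 'PENDING' IN (Column_1, Column_2, Column_3)"
).fetchall()
failed = conn.execute(
    "SELECT * FROM tbl WHERE 'FAILED' IN (Column_1, Column_2, Column_3)"
).fetchall()

print(len(pending), len(failed))  # 4 and 2, matching the expected row counts
```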
I like the update as well, looks cleaner than having all the OR conditions. Thanks again
Great. Nice to hear it is working. This is my first accepted answer, lol, so it is an historic day :)
Your first query looks correct. It is possible that you are using char datatype and this is confusing things, i.e., you may need to add trailing spaces. Can you post your schema?
A quick way to check this would be to change the queries to use LIKE; i.e. "WHERE column_1 LIKE "PENDING%" OR column_2 LIKE "PENDING%" ...". If that works, it's clear that you've got space issues.
No space issues as the data is entered by script and is trimmed before insertion
If the behaviour is weird, I'd try parentheses.
Query: PENDING
SELECT * FROM tbl
WHERE ((Column_1 = 'PENDING') OR (Column_2 = 'PENDING') OR (Column_3 = 'PENDING'))
Query: FAILED
SELECT * FROM tbl
WHERE ((Column_1 = 'FAILED') OR (Column_2 = 'FAILED') OR (Column_3 = 'FAILED'))
Though, I'll be honest, I don't know why that would work better/worse: since all you're using is "OR" it shouldn't matter.
SELECT * FROM tbl
WHERE Column_1 OR Column_2 OR Column_3 = 'PENDING'
This is definitely not going to return what you think. Here's how it's evaluated:
SELECT * FROM tbl
WHERE (Column_1) OR (Column_2) OR (Column_3 = 'PENDING')
Using a string column as a truth term evaluates the string as an integer, which in these examples is 0, interpreted as false. So this is like:
WHERE (false) OR (false) OR (Column_3 = 'PENDING')
The effect is that it ignores the first two columns and returns rows only where the third column matches 'PENDING'.
In other words, you gave a query that is syntactically valid but doesn't have the result you intend.
The first two queries you gave do return 4 rows where any column is 'PENDING' and two rows where any column is 'FAILED'. I just tested this and it works. If it doesn't match your expectations, perhaps you can clarify:
What query did you try?
What was returned?
What result were you expecting instead?
Bill, curious: does T-SQL or PL/SQL do the same thing?
Also, why in this case are the strings turned into 0 and not 1?
In MySQL, casting a string to an integer uses any leading numeric digits in the string, defaulting to 0 if there are none. So '123abc' casts to the integer 123, but 'abc123' casts to the integer 0.
In Microsoft or Oracle (or standard SQL), you can't treat an integer as a boolean implicitly as you can in MySQL, so I don't think your query would have valid semantics at all. In other words, "WHERE 1" is valid only in MySQL. In other brands, you'd have to use either "WHERE true" or "WHERE 1 = 1" or something.
Java class: for getOrder, getData, updateData operation
Problem Statement :
Design a class for three operations for given set of integers :
int getOrder(int data) /* order meaning rank in list */
int getData(int order)
void updateData(int oldData, int newData) /* [Most used operation] */
My solution
class App {
Set<Integer> appData;
public App(List<Integer> data) {
this.appData = new TreeSet<Integer>(new Comparator<Integer>() {
@Override
public int compare(Integer o1, Integer o2) {
if(o1 > o2) {
return -1;
} else if (o1 < o2) {
return 1;
} else {
return 0;
}
}
});
appData.addAll(data);
}
public int getOrder(int m) {
int counter = 1;
for(Integer i : this.appData) {
if(i == m) {
return counter;
}
counter ++ ;
}
return -1;
}
public int getdata(int order) {
int counter = 1;
for(Integer data : this.appData) {
if(counter == order) {
return data;
}
counter++;
}
return -1;
}
public void updatedata(int olddata, int newdata) {
if(appData.contains(olddata)) {
this.appData.remove(olddata);
}
this.appData.add(newdata);
}
}
What is the variable mark in your code? I don't see it declared anywhere. When copying code to Code Review, please make sure that you copy your code, all your (required) code, exactly your code, and nothing but your code, so help you.... code.
Are you able to use Java 8 ?
No.. No java 8. Only Java 6.
(Please post working code next time.)
So first off, there are imports missing, so this doesn't compile as it
is, and the variable mark doesn't exist.
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.Comparator;
Either you import by name, or the whole package, just stick with one.
...
if(counter == order) {
return data;
}
...
Then you should generally specify the visibility on all things,
otherwise you end up with package visibility, which is probably not what
you want.
public class App {
...
private Set<Integer> appData;
Use of generics is good, the constructor is okay except for the
unnecessary manual comparison. This should be either using
TreeSet.descendingSet, or Comparator.reverseOrder, e.g.:
public App(List<Integer> data) {
this.appData = new TreeSet<Integer>(Comparator.<Integer>reverseOrder());
appData.addAll(data);
}
Some whitespace is also off and the spelling of methods should be
consistent, i.e. getdata should be getData and updatedata
should be updateData, as is mentioned in your spec.
That's just to get you started.
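Since the asker is restricted to Java 6, where Comparator.reverseOrder does not exist, here is a sketch using Collections.reverseOrder() instead (the class name RankedSet and the headSet-based rank lookup are my own choices, not from the original post):

```java
import java.util.Collections;
import java.util.List;
import java.util.TreeSet;

// Hypothetical Java 6-compatible rewrite: Collections.reverseOrder()
// replaces the hand-written descending comparator.
class RankedSet {
    private final TreeSet<Integer> data;

    public RankedSet(List<Integer> values) {
        this.data = new TreeSet<Integer>(Collections.<Integer>reverseOrder());
        this.data.addAll(values);
    }

    // 1-based rank of a value in descending order; -1 if absent.
    public int getOrder(int value) {
        if (!data.contains(value)) {
            return -1;
        }
        // headSet holds everything ranked before `value` in the set's order.
        return data.headSet(value).size() + 1;
    }

    // Value at the given 1-based rank; -1 if the rank is out of range.
    public int getData(int order) {
        int counter = 1;
        for (int v : data) {
            if (counter == order) {
                return v;
            }
            counter++;
        }
        return -1;
    }

    public void updateData(int oldData, int newData) {
        data.remove(oldData); // remove() is a no-op if oldData is absent
        data.add(newData);
    }
}
```

headSet(value) on the descending TreeSet returns the elements ranked before value, so the rank is its size plus one; that avoids the linear scan in getOrder.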
When posting on Code Review, it is very normal to just skip the imports, especially for java.util classes.
I don't agree because that means I have to do work to get it to compile.
I think this would be a good discussion for chat in the 2nd monitor
Is TreeSet the best thing for these operations? I don't see any mention of complexity in the Javadoc for it. I was confused between PriorityQueue and TreeSet.
Updating SharePoint sites through the onet.xml file
I've updated a site definition in the SharePoint onet.xml file so new sites are created without a particular web page on a page.
However I've read that the onet.xml file is only read when a site is first created.
Is there some way for me to get the existing sites to refresh??
It's not really practical to loop through each site and make the change, even through code.
I'd think looping through every site from code is very practical? Doing it manually might not be.
You might not even have to code for it if you install the stsadm extensions.
You can delete items using this command for instance and it has commands to enumerate sites with.
There is no option to synchronize the changes made to onet.xml so that they are reflected in the existing sites that were created based on that onet.xml.
Looping through each of the sites and creating the page is the option you are left with.
Another alternate you could do is that you can create a Feature that will Provision web page file and activate it in each of the web.
Finally one option I could think of is to place the page in _Layouts (If you are really against both the above options)
Do not modify onet.xml, as this file can be overwritten when updating SharePoint!
And looping through SPSite.AllWebs is not really impractical if you want to update each SPWeb. Of course, it would be impractical if you ran this code each time an item or whatever changes, but if you just need to fire this code once in a while, then this is no problem.
Oh right, but it may be a problem if you add new webs. Well, Kusek already provided you with an answer:
Another alternate you could do is that
you can create a Feature that will
Provision web page file and activate
it in each of the web.
That's called feature stapling. Activate your feature when a web has been created and make your modifications for that web.
How can I redirect before render?
I'm trying to build a web app using React and I've kind of hit a wall.
The thing is, I don't want any user who has already logged in to access the login page.
I tried to redirect them using render method and componentDidMount method, but since the render method is called before componentDidMount, the login page flashes out before redirection.
These are my codes.
I'm using redux and firebase to control the component state and authenticate.
class CenterPanel extends Component {
render() {
const { user }=this.props;
if(user!=null) { //this is how I tried to redirect
return ( //
<Redirect to='/somewhereNotHere' />
);
}
return ( //this flashes out before redirection
<div className={cx('center_panel')}>
<div className={cx('button')}>
login
</div>
</div>
);
}
componentDidMount() {
const { changeUserStatus }=this.props;
firebase.auth().onAuthStateChanged(
(user) => changeUserStatus(user)
);
}
}
would appreciate any kind of advice.
Thx in advance.
try setting window.location to '/somewhereNotHere'
I've tried window.location and history. Both of them succeeded in redirection, but the login page still flashes out before redirection. I want users who have already logged in to have absolutely no access to the login page.
just return null instead of the <div ... /> tree when user is not null. But redirect before returning null...
Where do you get the user prop value?
Maybe the user prop is undefined before you call onAuthStateChanged.
If that's right, you can control your component with three user values: undefined, null, and the user value.
render() {
const { user }=this.props;
if(user === undefined){
return null; //or Loading component
}
if(user!=null) { //this is how I tried to redirect
return ( //
<Redirect to='/somewhereNotHere' />
);
}
return ( //this flashes out before redirection
<div className={cx('center_panel')}>
<div className={cx('button')}>
login
</div>
</div>
);
}
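The three-state idea can be distilled into a plain helper (a sketch; the function name loginViewFor is mine, not from the answer):

```javascript
// Hypothetical helper for the three auth states:
// undefined -> status unknown yet (render nothing / a loader),
// null      -> confirmed logged out (render the login page),
// object    -> confirmed logged in  (redirect away).
function loginViewFor(user) {
  if (user === undefined) return 'loading';
  if (user === null) return 'login';
  return 'redirect';
}
```

The key point is distinguishing "not yet known" (undefined) from "known to be logged out" (null), so nothing flashes while the auth check is still in flight.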
Thx! solved the issue! The idea of dividing user status into undefined, null, user obj is working perfectly.
Quick Note: After react-router-dom 6v Redirect got replaced by Navigate.
You should try this,
if(user && user!==null) { //this is how I tried to redirect
return <Redirect to='/somewhereNotHere' /> //Make sure you have imported Redirect from correct package i.e. react-router-dom
}else{
return ( //this flashes out before redirection
<div className={cx('center_panel')}>
<div className={cx('button')}>
login
</div>
</div>
);
}
Another way of doing this is in Routes itself,
<Route exact path="/your_Path_to_component" render={() => (
user ? ( //get the user data as you are passing as props to CenterPanel component
<Redirect to="/somewhereNotHere"/>
) : (
<CenterPanel />
)
)}/>
This way there is no need to pass user as props to the CenterPanel component, nor to write any redirection logic in the CenterPanel component.
Latecomer here: putting it in the router might be viewed as breaking the principle of encapsulation, since if in the future you add more complex validations prior to rendering, the router in the main index file and the component will become coupled.
On the other hand, the entire concept of React, where components are responsible for redirects in their render method, goes against many other OOP principles.
The first method you showed is better, in my humble opinion.
you can solve it by conditioning the view
user && <div>...</div>
the content of the page will only show up if the user is truthy (authenticated).
Openpyxl Excel cell value is non existent
Hi all,
I'm writing a script to check if there are empty values in my Excel columns. If there is an empty value, I want the row corresponding to that empty cell copied to a new worksheet for easy analysis.
This is the code:
from openpyxl import load_workbook
from openpyxl.utils import get_column_letter
def searchForBlanks(wb, ws, header):
ws2 = wb.create_sheet(header + " irregularities")
#copying over the header
for row in ws.iter_rows(min_row=1, max_row=1):
ws2.append((cell.value for cell in row))
#Getting the header coordinates to check
for col in ws.columns:
column = get_column_letter(col[0].column)
for cell in col:
if str(cell.value) == str(header):
char = column
print(char)
#checking if the dedicated column contains a irregularity and than copying the whole row
for row in ws:
value = ws[char + str(row[0].row)].value
coll = ws[char + str(row[0].row)]
print(str(coll) + ' ' + str(value))
if ws[char + str(row[0].row)].value == 0 or ws[char + str(row[0].row)].value == None or ws[char + str(row[0].row)].value == False:
ws2.append((cell.value for cell in row))
The input I give is: wb = the loaded workbook, ws = the worksheet, header = the header column title I want to check.
If i run it i get the following example output:
K
<Cell 'LoadFile'.K1> City
<Cell 'LoadFile'.K2> None
<Cell 'LoadFile'.K3> None
<Cell 'LoadFile'.K4>
<Cell 'LoadFile'.K5>
<Cell 'LoadFile'.K6>
<Cell 'LoadFile'.K7>
<Cell 'LoadFile'.K8>
If I check the new Excel sheet that was created, I get only the first 2 rows from the 'None' values. K4 to K8 are not copied. How do I make sure I copy these rows as well?
So for some reason my criteria are not broad enough to copy these non-existent values.
Can somebody give me a suggestion?
I was constantly validating the output in my console and never validated it in Excel.
It turns out there was a cell value in Excel. It was just a space ' '. That was the answer, and I had to add it to my criteria; now it works.
if ws[char + str(row[0].row)].value == 0 or ws[char + str(row[0].row)].value == None or ws[char + str(row[0].row)].value == " ":
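A slightly more robust variant (a hypothetical helper, not part of openpyxl) would strip whitespace instead of comparing against a single space, so tabs or double spaces get caught too:

```python
def is_blank(value):
    """Treat None, 0, False, and empty or whitespace-only strings as blank."""
    if isinstance(value, str):
        return value.strip() == ""
    return not value
```

The loop condition would then read if is_blank(ws[char + str(row[0].row)].value): instead of the three-way comparison.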
Aligning values with timestamps to a timeline
I want to visualize data given at certain timestamps from multiple sources along a timeline. For example, with the following two input files, with column 1 being the timestamp and column 2 the data:
O1.dat:
100 5
300 10
O2.dat:
200 7
400 3
Along with that the average of all values is sampled at certain intervals:
Avg.dat:
250 6.5
500 6.25
I would like to plot all values in a table-like manner so it looks something like this, with the values aligned to the time on the top:
My real data reaches timestamps of up to 10000, so something dynamic would be nice.
So far I only plotted simple box or line plots, so I'm not sure how to go about this one.
Thanks for your time.
EDIT:
This is what it looks like so far with adjustments made to the accepted answer:
There is still some overlapping, but that is simply because of the data being too close to each other. The script used for this:
#set term pdf
#set term pdf size 8, 5
#set output 'out.pdf'
set term png
set term png size 1200, 700
set output 'out.png'
set termoption font ",20"
set label 'Time (ms)' at graph 0, graph 1 offset -0.75, char 1 right
unset border
unset key
unset xtics
set ytics scale 0
set x2tics () scale 0
set yrange [0:5.5]
set x2range[0:10000]
set lmargin 9
set arrow from graph -0.15, graph 1 to graph 1.1, graph 1 nohead
set arrow from graph -0.01, graph 1.2 to graph -0.01, graph -0.2 nohead
set arrow from graph -0.15, first 0.3 to graph 1.1, first 0.3 nohead
set style data labels
plot for [i=0:9] 'desc'.i.'.txt' using 1:(5-0.5*i):(sprintf('%d', $2)):ytic('Object '.i) axes x2y1, \
'Avg.dat' using 1:(0):(sprintf('%d', $2)):ytic('Avg') axes x2y1
The conventional, simple part is plotting of the actual data. For this you can use the labels plotting style. A very simple example would be:
set xtics (0)
set xrange [0:*]
set offsets graph 0, graph 0.2, graph 0.2, graph 0.2
set style data labels
unset key
plot 'O1.dat' using 1:(5):(gprintf('%g', $2)):ytic('O1'),\
'O2.dat' using 1:(4):(gprintf('%g', $2)):ytic('O2'),\
'Avg.dat' using 1:(3):(gprintf('%g', $2)):ytic('Avg'):xtic(1)
That simply plots the values from your data files as labels at the x-positions given in the first columns. The y-positions are set as fixed numbers.
In order to move the xtick labels to the top and have some table-like lines you need a bit more tweaking:
reset
set termoption font ",20"
set label 'Object' at graph 0, graph 1 offset -1, char 1 right
unset border
unset key
unset xtics
set ytics scale 0
set x2tics () scale 0 format "%g"
set yrange [2:5.5]
set x2range[0:*]
set lmargin 8
set arrow from graph -0.15, graph 1 to graph 1.1, graph 1 nohead
set arrow from graph 0, graph 1.2 to graph 0, graph 0 nohead
set arrow from graph -0.15, first 3.25 to graph 1.1, first 3.25 nohead
set style data labels
plot 'O1.dat' using 1:(5):(sprintf('%d', $2)):ytic('O1') axes x2y1,\
'O2.dat' using 1:(4):(sprintf('%d', $2)):ytic('O2') axes x2y1,\
'Avg.dat' using 1:(2.5):(gprintf('%g', $2)):ytic('Avg'):x2tic(1) axes x2y1
Such a table layout isn't a typical task, so you must adapt several settings to your final result. The main impact comes from canvas size, font, and font size.
If you have more than those two files you could of course also iterate over a file list.
This was spot-on, thank you very much. I edited my question with the current result. There's still some adjusting to do, but looks pretty good.
HTML/CSS: show (part of) hyperlink destination as link text without repetition in source code?
I'd like to automatically have a style that uses (part of) the hyperlink destination as the link text, without manually repeating it in the source code. Context: I'm using a Markdown file (only basic HTML and CSS) to note down many external links to academic publications. For example, a link should look and work like journal.pbio.0020449, but without having to repeat the chunk journal.pbio.0020449 in the source code. Phrased differently, I'd like to automatically use the hyperlink destination as the link text. If possible, only the last part of the destination should be displayed. How could this be done?
So far, I only achieved to change the appearance of the link:
<style>
a.doi {color: red;}
</style>
<a class="doi" href="https://doi.org/10.1371/journal.pbio.0020449">journal.pbio.0020449</a>
For Markdown with only basic HTML and CSS there is no chance. Both HTML and Markdown are "merely" markup languages for formatting documents (originally scientific publications). For what you're trying to do you would at least require a scripting language such as JS.
You could use the anchor element's ::after pseudo-element to show the href value.
<style>
.doi::after {
content: attr(href);
}
</style>
<a class="doi" href="https://doi.org/10.1371/journal.pbio.0020449"></a>
However, I don't recommend it if your site is for wide publication because screen reader users will not necessarily hear the value.
Thanks! Screen readers are no issue, since it's only for internal use. Your solution works nicely, but I guess shortening/truncating the link text to only show the part after the last slash is not possible with pure CSS?
I don't think it is possible with pure CSS. You could show the last n characters (using a monospace font and positioning and a bit of hackery) but that's not going to be useful!
@Thanks! Since this should be quite doable with JavaScript (although I myself don't know much about it), I'm considering switching to HTML instead of Markdown ...
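For reference, the JavaScript side of this is quite small. A sketch, with the `.doi` class borrowed from the question's markup; the DOM part is shown in comments since it only runs in a browser:

```javascript
// Derive link text from the href's last path segment.
function lastSegment(href) {
  // Strip any trailing slashes, then keep everything after the final "/".
  const trimmed = href.replace(/\/+$/, "");
  return trimmed.slice(trimmed.lastIndexOf("/") + 1);
}

// In a browser, apply it to every link with class "doi":
// document.querySelectorAll("a.doi").forEach(a => {
//   a.textContent = lastSegment(a.href);
// });
```

Run once after the page loads, it would fill the empty anchors with real text, so screen readers and copy/paste work normally again.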
Gcp support binding service account from firebase?
I'm hosting a web app (Angular) on Firebase that triggers some functions in Google Cloud Functions. For security reasons, we would prefer to call these functions with a registered service account. The question is: should the Angular app contain this service account (JSON file) to authenticate against GCP?
Or maybe it's better practice to use a single token per user who calls the API/function?
You absolutely should not include service accounts in your web and mobile apps. That's a huge security hole. Service accounts are meant to authenticate secure backend-to-backend communications. Putting one into the public is basically saying that you want anyone to be able to invoke your function at any time (and potentially do anything else that account is authorized to do).
Instead, you should be using Firebase Authentication to identify your users, and authorize them to make use of your backend functions. You can use the Firebase Admin SDK to verify that a function is being invoked by a registered user, and that the user has the permission necessary to invoke it.
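To make the Firebase Authentication approach concrete: the client sends the user's ID token in an Authorization header, and the function verifies it with the Admin SDK. The header-parsing helper below is runnable as-is; the verifyIdToken part is sketched in comments because it needs the firebase-admin package and a live request (everything around that call is illustrative):

```javascript
// Extract the token from an "Authorization: Bearer <token>" header.
// Returns null when the header is missing or malformed.
function bearerToken(authHeader) {
  if (typeof authHeader !== "string") return null;
  const match = authHeader.match(/^Bearer (.+)$/);
  return match ? match[1] : null;
}

// Inside an HTTPS function (sketch, assuming firebase-admin is initialized):
// const token = bearerToken(req.get("Authorization"));
// if (!token) return res.status(401).send("Unauthorized");
// const decoded = await admin.auth().verifyIdToken(token);
// // decoded.uid identifies the signed-in Firebase user; check it against
// // whatever per-user permissions your app keeps.
```

The key point is that the per-user ID token replaces the service-account JSON on the client; the service account stays on the backend only.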
That's what I'm trying to confirm, so we'll keep using the Firebase Admin SDK.
How to show drop down result based on selected radio button | JSP
I'm trying to write a JSP page that needs to display a drop-down list of states based on the country selected via radio button on the same page. When I run the page, it displays the c:otherwise drop-down even after selecting the USA radio button. Please help.
USA:<html:radio property="country" value="country" >
<c:set var ="p" value="usa"/>
</html:radio>
Germany:<html:radio property="country" value="country" >
<c:set var ="p" value="germany"/>
</html:radio>
<br>
list:
<c:choose>
<c:when test = "${p == processors }" >
//drop down for states in USA
</c:when>
<c:otherwise >
//drop down for states in germany
</c:otherwise>
</c:choose>
The <c:set> and <c:when> tags are executed only at the moment the result page gets rendered. The radio buttons only become active after the page has been rendered and sent to the browser.
You have to propagate your user's selection back to the server side somehow, because with plain JSP that is essentially the only way to do it. A working example:
<%@ taglib uri = "http://java.sun.com/jsp/jstl/core" prefix = "c" %>
<HTML>
<HEAD>
<TITLE>Reading Radio Buttons</TITLE>
</HEAD>
<BODY>
<H1>Reading Radio Buttons</H1>
<%
boolean radio2checked = false;
if(request.getParameter("radios") != null) {
if(request.getParameter("radios").equals("radio1")) {
out.println("Radio button 1 was selected.<BR>");
}
else {
out.println("Radio button 1 was not selected.<BR>");
}
if(request.getParameter("radios").equals("radio2")) {
out.println("Radio button 2 was selected.<BR>");
radio2checked = true;
}
else {
out.println("Radio button 2 was not selected.<BR>");
}
}
pageContext.setAttribute("showUSAdropdown", radio2checked);
String checked = radio2checked ? "checked=\"checked\"" : "";
%>
<FORM name="radioform" METHOD="post">
<INPUT TYPE="radio" onclick="document.radioform.submit()" NAME="radios" VALUE="radio1" CHECKED>
Radio Button 1
<BR>
<INPUT TYPE="radio" <%=checked%> onclick="document.radioform.submit()" NAME="radios" VALUE="radio2">
Radio Button 2
<BR>
States:
<SELECT name="states">
<c:choose>
<c:when test="${showUSAdropdown}">
<option>Alabama</option>
<option>Washington</option>
</c:when>
<c:otherwise>
<option>Germany</option>
</c:otherwise>
</c:choose>
</SELECT>
</FORM>
</BODY>
</HTML>
However, this can be done client-side using only JavaScript; the code above does not reflect current/modern solutions.
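A sketch of that modern client-side approach, with illustrative data and element names (not taken from the question's JSP): a pure lookup picks the option list, and a small DOM handler, shown in comments, swaps it in when a radio button changes:

```javascript
// Map each country to its option list (illustrative data).
const STATES = {
  usa: ["Alabama", "Washington"],
  germany: ["Bavaria", "Saxony"],
};

// Pure helper: which options should the dropdown show for a country?
function statesFor(country) {
  return STATES[country] || [];
}

// In the browser, wire it to radio buttons named "country" (sketch):
// document.querySelectorAll("input[name=country]").forEach(radio => {
//   radio.addEventListener("change", () => {
//     const select = document.getElementById("states");
//     select.innerHTML = statesFor(radio.value)
//       .map(s => `<option>${s}</option>`).join("");
//   });
// });
```

No form submit or page re-render is needed; the dropdown updates instantly on the client.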
Checking if checkbox is checked or not via macro
I have check boxes in my xls sheet. How can I find out via a macro whether they are checked or not?
I created these check boxes manually from the Form Controls.
Thanks,
Mona
If CheckBox1.Value = True Then
    'your code
End If
' Note: the above works for an ActiveX check box. For a check box drawn
' from the Form Controls (as in the question), the equivalent test is:
' If ActiveSheet.CheckBoxes("Check Box 1").Value = xlOn Then ...
Staying in Phase On The Grid
I have been an EE for over forty years and never did find out the right answer to this one....
How do power-stations and transformer switching stations ensure that the power they are feeding into the grid is in-phase with the existing power on the lines.
I know they are VERY serious about setting the line frequency to a ridiculously good accuracy. However, you obviously cannot connect a power line to another line that is 180 degrees out of phase. Even a small deviation would presumably cause a huge drain on the system and generate a rather strange, out-of-spec AC waveform.
OK, I can imagine a solution at the power station that uses the target line frequency to synchronize the alternators before flipping the switch, perhaps. However, that switching station 100 km away may be switching onto a line from a different alternator that is much closer or farther away, and consequently at a different point in the phase cycle...
How do they do that...
Note this is NOT the same as "How to synchronize a generator on the electrical grid?" That question only pertains to a local generator and is not, in my mind, the same as the main power grid and transformer switching.
The concept of the infinite bus: one generator is insignificant with respect to the full bus. Match up phase, voltage and speed. Make the oncoming generator a little faster than the bus, so it will take up load when it comes online. Throw the breaker. The generator will motor to become perfectly in sync. The more the generator is out of sync, the higher the current. Ideally, we want no current. Once online, it will take up its share of the load. It becomes part of the infinite bus and will remain synchronized.
So you are saying it self-regulates... and presumably causes a brief disturbance in the grid as it synchronizes, which is deemed acceptable?
Yup. There will be a glitch as it comes on line. It motors, to become perfectly in sync with the bus.
So where and how is the master frequency generated?
No master. All will be 60.00Hz (or 50.00Hz). Think about a ship with 3 to 5 generators. 1st is master. 2nd synchronizes to 1st at whatever the frequency. They will be perfectly in sync or breakers trip. 3rd syncs to the two. etc. The same thing for infinite bus with 100's of generators. No master.
That would drift, though, especially if you shut down the original generator. The power company keeps 60 Hz very accurately, so some other regulation must be used. I mean, like... a convoy moves at the speed of the slowest ship...
Possible duplicate of How to synchronize a generator on the electrical grid?
Yes and no. On a ship, if a large load is started, voltage dips as the diesel engines speed up to pick up the load. When the load shuts down, the voltage will overshoot, but automatic voltage regulators will adjust to keep voltage and frequency at target levels. On land, once synchronized to the infinite bus, any generator which tries to change trips out. No single generator can overcome the majority.
@ThePhoton, kind of duplicate, but on a much larger scale. With different economics of scale and compromises. Tuning your gas powered generator in the garage to the grid is a bit different and trivial in comparison to synchronizing a hydro-electric power station 500km away.
@Trevor, read Li-Aung's answer. He covers your question.
@StainlessSteelRat that still does not explain the master clock, though. It could just as easily stabilize at 55 Hz. But it doesn't. You are pretty much guaranteed a set number of cycles per day, plus or minus one or two.
See also http://www2.nationalgrid.com/uk/services/balancing-services/frequency-response/
(note that almost everything assumes the availability of grid frequency to sync to; starting from a completely down grid is called "black start" and somewhat harder. That term in a search engine will tell you more)
I'd follow The Photon's advice and read Li-Aung's answer. There is more at play, but take your hydroelectric dam. They have water at a fixed height, which falls through a penstock at a fixed rate to a turbine, which spins the generator at a fixed rate, which produces a 60.00Hz output. Each generator has been designed to run at 60.00Hz. No generator can go faster or slower than the bus. If it tries (water slows or stops), it will be forced to run as a motor. High currents will flow and reverse current breakers will trip it off the bus.
Yes @StainlessSteelRat, as I said in the question, I can understand how to regulate and slew at the power-station, it's switching 300km away from there that brings in the question.
There's videos of the sync process, although it just looks like someone throwing switches: https://www.youtube.com/watch?v=Zw39gxIqfVU (1 min explanation from about 2:30)
@StainlessSteelRat A large base-load station on the grid is set as frequency source. The other stations are brought on line and is controlled by following the load. Say the load is stable and the phase of the other stations are slightly ahead, this means that the frequency source will supply a little less load and the others will take on the load and their phase will return to normal and in step with the frequency source. So one large base-load is frequency controlled and the other stations are load controlled following the frequency controlled source.
Before connecting a generator to the grid, they spin it up to more or less the right speed. Then they hook what is basically a voltmeter between a generator phase, and the corresponding line phase. They adjust the generator drive until the observed voltage is
a) very slowly changing (frequency difference below some threshold) and
b) drops below some low voltage threshold (phase difference close enough so the power flow that results when they throw the big switch is manageable).
Once the generator is connected to the grid, it always stays in phase. If not driven mechanically, it will act as a motor. The amount of power it draws from or exports to the grid is controlled by how hard it is driven mechanically.
Each generator is connected to its local part of the grid, synced to its local frequency. There will be a slight phase difference between the generator and the local grid. If the generator is supplying power to the grid, its phase will be slightly in advance. The larger the power input to the generator, the larger the phase difference, and the larger will be the power exported to the grid.
This 'power flow follows phase difference' extends to whole areas of the grid. If there is a large load in the south, the generators in the south will slow down initially, retarding their phase with respect to the north. This phase difference will create a power flow from north to south.
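The "power flow follows phase difference" behaviour is usually summarized by the lossless-line power-transfer relation P = (V1·V2/X)·sin(δ). A quick numeric sketch, with illustrative values for a 400 kV line:

```python
import math

def transfer_power(v1, v2, x, delta_deg):
    """Power flow (watts) over a lossless line of reactance x (ohms)
    between two buses at voltages v1, v2 (volts) whose phase angles
    differ by delta_deg degrees."""
    return v1 * v2 * math.sin(math.radians(delta_deg)) / x

# No phase difference: no power flows.
p0 = transfer_power(400e3, 400e3, 50, 0)
# A small 5-degree lead already pushes hundreds of megawatts.
p5 = transfer_power(400e3, 400e3, 50, 5)
```

This is why a few degrees of phase lead in the north is enough to drive a large north-to-south flow, and why reconnecting islanded grid sections with a big phase error is so dangerous.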
Where you have a nationwide grid, the management strive very hard never to let any significant part become 'islanded' from the other part. Once they drift apart in phase, it may take a long time before they can be brought together again, as the phase matching will need to be exquisitely accurate to avoid a huge power flow at the time of connection.
Where two separately controlled grids are to be connected, say by the Anglo-French undersea cable, it is done with DC. It is easy at the receiving end to synchronise the inverters to the grid.
Keeping the grid in phase with an average of 50 cycles per second over the course of a day, is simply done by feeding in more or less power, to speed or slow the grid frequency respectively, usually at night when there's a bit more slack in the demand.
So you are saying they just swallow any distance effects as negligible until the distance is too great, at which point they "regenerate" the power? BTW: I am thinking more of continental USA/Canada. It's hard to grasp these concepts when power stations can be 3,000 km apart.
No, the phase is matched locally - so two generators might not be in phase if seen from space.
Continental US has a different answer; the US has 5 grids, not 1.
No, it's to do with management. The English-French thing is two grids that will never be coordinated. Where a grid covers a continent, for instance the US, if it is allowed to 'lose the middle', and then get a cut in the outside, the phase shift round the borders may be large enough to cause problems. This was the cause of a continent-wide blackout that occurred a decade or so ago there.
Yes, but the grids have more to do with fault tolerance than syncing. 1 grid would not bring down all. But all could be synced to same frequency.
They sync to infinite bus, but the power they provide will be local. Electrons are lazy. As pc50 says, from space two generators at extreme distance from each other will be out of phase slightly.
So basically what I am hearing is: the generators whine and moan for a bit when you switch them in, and there is a loss of power and line disruption which is "tolerated" during the switchover as a cost of doing business. And presumably, while switching in, things have to be within some tolerance before the "controller" flips the switch.
@CharlesCowie, if it's "connected", either it is supplying power or it is consuming power or it simply is not connected. How can it be connected and isolated at the same time..
@Trevor There should be no loss of power when syncing a new generator. There may be a bit of a thump from that generator as it's forced into phase with the rest of the grid. After that, it will automatically stay locked to the frequency of the rest of the grid.
@SimonB, I can't imagine a hundred ton rotor going "thump" can you? Maybe your Honda portable in the garage, but not a power station generator. There must be a finite slew time.
@Trevor Ideally, when you sync a new generator, the frequency and phase difference is zero, so there's no thump, and the mechnaical power input = the no load losses, so there no change in grid power. A tolerance on 'zero difference' allows syncing to be done practically, there's a bit of 'inrush' as the generator is hauled into the precise phase.
Anyhoo... interesting stuff... thanks for the feedback, folks.
Confused by your addition, Neil: how does adding more power change the frequency? Simply adding more power at 59.5 Hz won't increase the count. You would need to force the whole grid up to 60.5 Hz or some such.
We don't want any whine or moan. We are talking 20MW generators. They don't like that. But no matter how close each individual generator is matched to the infinite bus, they will be forced into perfect synchronization for voltage and frequency. They will act as a motor. The goal is to minimize that motoring.
Adding more power, at the existing frequency of the grid, means there's an excess of energy input to energy output. That excess energy is stored as kinetic energy in all the rotating machinery, which means it goes faster. Similarly, if you turn off the steam turbine, the grid slows down. If one part of the grid is driven, and another part is loaded, then you have a huge power flow from the former part to the latter. This is how you control the direction of power flow on feeders, alter the power input at various points on the grid.
@pjc50 it looks like there are either 3 or 8/9 grids in the continental US depending on how you count
@Trevor: Perhaps you're thinking in purely electrical terms where frequency is an independent variable. What we have instead is a big magnet rotating inside coils - the frequency is how fast the magnet is rotating. Electric load (how much power people are using at any one time) manifest itself as drag on the spinning magnet - causing it to slow down
@Trevor Motors (and generators, which are the same thing) try to lock themselves onto the grid frequency and phase. The prime mover tries to spin the generator faster and faster, the only thing holding it back is all the other generators and motors on the grid. If there is not enough load the generator will spin faster and faster. This is compensated for by giving the prime mover less fuel/wind/etc.
If you continuously connect a generator to grid when it is out of sync, it will eventually destroy itself, as white hat hackers demonstrated: https://youtu.be/LM8kLaJ2NDU
You're confusing an accurate number of cycles over a 24 hour period with very rigid instantaneous frequency control. That's not how it's done in most places.
The frequency is maintained at around its nominal frequency by matching generation to load - all the time that the load is greater than the generation, the frequency will be (very) gradually falling, and all the time the load is less than the generation the frequency will be increasing.
The inertia is enormous and, in general, both load and generation change fairly gradually, so there's lots of time to make adjustments to generators (or loads, where people have contracted to control their loads in this way) to keep the system balanced. The frequency is allowed to drift between various limits (operational and regulatory).
In the UK at least, the correct number of cycles per day is maintained by keeping track of 'real time' and 'grid time', and the grid is run a bit fast or a bit slow to make sure they don't get too far apart.
There are accurate frequency references in use within the grid control system - that's what they're comparing with/measuring against, but the grid itself isn't phase/frequency-locked to them in any direct way.
At the bottom left of the big display in this image is a graph with a vertical wiggly yellow trace - that's the frequency of the UK National grid for a while before the photo was taken - as you can see it's not locked to anything very tightly, though the graph is probably only about ±0.3 Hz.
Cool information and picture, thanks. Yes, I read elsewhere that the total number of cycles per day is the actual measure that is controlled. Still leaves me wondering what mechanism is used to nudge the count back into line, though...
Or is it simply a grid wide control knob that tells everyone to speed up the generators a bit in a tolerable amount of unison.
Or is it simply a grid wide control knob that tells everyone to speed up the generators a bit in a tolerable amount of unison - Yes
As an engineer and musician, this is interesting. The old Hammond organs derived their tuning from (instantaneous) mains frequency. 0.3Hz in 50Hz works out at about 1/10 of a semitone, which is noticeably out of tune. But If you mean the axes of the graph are +/-0.3 Hz then the trace is only about +/-0.1 Hz, which is hard to detect.
Sort of: all generators are always in unison with the grid at their point of connection, but any individual generator can vary their current (I) output by controlling mechanical shaft power.
@LevelRiverSt you can read the labels on the graph if you zoom in and squint; the trace does indeed cover a range of +/- 0.1 Hz.
@LevelRiverSt a little research says that National Grid has a legal requirement to stay within 1% (+/- 0.5 Hz), but aims for +/- 0.2 Hz as a matter of regular practice. The continental European grid (ENTSO-E) also has a +/- 0.2 Hz standard. The US has stricter limits, with the NERC having a "trigger" limit of 0.05 Hz (in the Eastern region) and 0.144 Hz (in the Western region) deviation from 60 Hz, and "emergency" limits of 0.092 Hz (Eastern) and 0.2 Hz (Western). Frequency is within 0.05 Hz >99% of the time. That's less than 2 musical cents.
They use a Synchroscope. I have seen this done in power plant control rooms.
https://en.wikipedia.org/wiki/Synchroscope
This is the right answer IMO, but only for small generators (<500 kW), and at small power limits (<2 MW). But this misses the use of automation to manage tap changers and close the contactors (it's not done by human eye on large alternators), and for grid-level balancing (100 kV and above) it's normally done with DC drives (thyristors). See articles such as this: https://library.e.abb.com/public/793bfb6d691ddf0bc125781f0027d91f/A02-0223%20E%20LR.pdf
This is definitely wrong. A synchroscope can only display a difference in phase and frequency. When a synchronous generator is already synchronized with the grid the synchroscope always points right upwards. That's the very nature of synchronous generators. When they are synchronized and connected they are perfectly in phase like there was a 1:1 gear box or a single shaft between them.
Having parts of an individual power system run at different phase angles from other parts is routine and unavoidable. This is not a problem until it is necessary to re-connect parts. In the Utility where I worked, the service people at the site would connect a phase-meter to each of the parts. Due to the difference in phase, the phase-meter would run like a clock, indicating the instantaneous phase difference. The person doing the connection (by means of an electrically-actuated circuit breaker, usually) would simply time the breaker closure for the instant at which the phase-meter showed zero phase difference. Since this zero-point occurs every few seconds, it is not difficult to catch it. We even used this with our HVDC Back-to-Back converter station; it works very, very well.
20 years ago, just after uni, I worked on a company doing exactly this.
It used to be that there were all sorts of phase-adjustment circuits built from complex analogue electronics. These days that's generally not the case.
What my company back then specialised in was high-voltage AC/DC conversion technology. They built the first cross-channel link, and various HVDC links round the world since then. (Over long distances the losses in cables due to reactance are significant, so DC gives more efficient transmission.) When the DC gets turned back into AC (with what's essentially a very high power, very smooth inverter) you can synchronise the timing so that the resulting AC is exactly in phase with the local grid.
As this got more efficient with better high-power electronics, what people realised was that it had become more efficient to convert from DC to AC and back to DC again than it was to use any alternative methods. The result is called a "back-to-back converter". Where a cross-channel link would have miles of cable between the AC-to-DC and DC-to-AC converters, a back-to-back scheme just has a few feet of extremely thick busbar.
Of course the conversion is not 100% efficient, so the electronics are mounted on water-cooled heatsinks and the whole thing is pretty carefully monitored. But it's efficient enough that the losses are perfectly acceptable in exchange for the power going into the grid perfectly in phase.
Back in the day (1979), just after university, I worked at a UK generator manufacturer, and in the test lab (this was for smaller equipment) they used the crossed-lights method to simplify the 'voltage measurement' that others have mentioned.
Basically they connected L1-L1 via a lamp, which needed to go out (zero volts / in phase) before closure, and a crossed lamp L2 (gen) - L3 (grid), which had to go to maximum first. Once the phase-difference lamp was 'out', the connection relay / contactor / switch could be thrown.
There were various apocryphal stories about things that had gone wrong in various places which were educational!
Again, the light bulb method was used to synchronize a generator to the grid. However that was not the question.
Vim: Edit file starting from ancestor project directory
I have a multiple working directories of a git repository on my machine, and when I have files from each open in vim I would like to be able to open a new file starting at the top level project directory that contains the current buffer file.
So for example, let's say I have these files open in buffers:
~/testing/MyProject/src/main.cc
~/mirror/MyProject/src/lib/module.h
If I am editing the module.h buffer, I want to be able to type :e <something?> and have it autocomplete to ~/mirror/MyProject/.
One common method is to keep your current working directory set to the root of your project.
However, if you do change your current working directory, then you may want to look at something like fugitive.vim's :Gedit command, which can be used to edit files relative to the repository's root, e.g. :Gedit /foo.txt
wow, fugitive.vim is pretty awesome! Thanks for the pointer!!
I managed to get it to work by adding a cabbr <expr> / to my .vimrc:
" Makes // expand to the containing MyProject directory.
cabbr <expr> / FindMyProjectInPath()
function FindMyProjectInPath()
let path=expand("%:p:h")
while path != "/" && fnamemodify(path,":t") != "MyProject"
let path = fnamemodify(path, ":p:h:h")
endwhile
return path
endfunction
So then if I'm editing ~/mirror/MyProject/src/lib/module.h, I can type :e // and it expands to :e ~/mirror/MyProject/.
If I'm not in a MyProject directory, it will remain a //: if I'm editing ~/other_project/main.cpp, then :e // won't expand.
The first / you type matches the cabbr <expr> / and the second one makes it expand.
You could maybe also use a variable to allow "MyProject" to be set dynamically while editing.
Be careful with that abbreviation as it will also expand during searches as well. I would suggest you take a look at :h getcmdtype() so you can guard against those cases
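Following that suggestion, one possible guard (a sketch, building on the FindMyProjectInPath() function above) uses getcmdtype() so the abbreviation only expands on the : command line and leaves / and ? searches alone:

```vim
" Only expand // for ':' commands, never inside search patterns.
cabbr <expr> / getcmdtype() ==# ':' ? FindMyProjectInPath() : '/'
```

With this, typing / in a search just inserts a literal slash, while :e // still expands as before.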
How to use existing VPC in AWS CloudFormation template for new SecurityGroup
I am trying to create an EC2 instance (new) and a security group (new) in an existing VPC. Here is my CloudFormation template.
When I run the template in a stack, I get the error *"Value () for parameter groupId is invalid. The value cannot be empty"*. How do I solve this?
Template:
Parameters:
VPCID:
Description: Name of an existing VPC
Type: AWS::EC2::VPC::Id
KeyName:
Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
Type: AWS::EC2::KeyPair::KeyName
ConstraintDescription: must be the name of an existing EC2 KeyPair.
InstanceType:
Description: EC2 instance type
Type: String
Default: t2.medium
AllowedValues:
- t2.medium
- t2.large
AccessLocation:
Description: The IP address range that can be used to access to the EC2 instances
Type: String
Resources:
EC2Instance:
Type: AWS::EC2::Instance
Properties:
InstanceType: !Ref 'InstanceType'
SecurityGroups:
- !Ref 'InstanceSecurityGroup'
KeyName: !Ref 'KeyName'
ImageId: !Ref 'ImageId'
InstanceSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
VpcId: !Ref VPCID
GroupDescription: Enable SSH
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: '22'
ToPort: '22'
CidrIp: !Ref 'AccessLocation'
The SecurityGroups property can only be used with the default VPC. Since you are explicitly assigning VPCID to InstanceSecurityGroup, the group will be considered non-default, resulting in a failed deployment.
You must use SecurityGroupIds (not SecurityGroups) in your case as your VPC use will be considered as non-default:
SecurityGroupIds:
- !GetAtt 'InstanceSecurityGroup.GroupId'
You solved a related mystery for me. I was setting the subnetid for an instance to a non default VPCID, and also using the securitygroups property and getting the most obscure message and trying to figure out how a groupName had anything to do with my subnet setting. I switched to SecurityGroupIds and all fixed...
The parameter groupName cannot be used with the parameter subnet (Service: AmazonEC2; Status Code: 400; Error Code: InvalidParameterCombination
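For completeness, the non-default-VPC combination that avoids both errors looks roughly like this (a sketch; the SubnetId parameter is assumed here, it is not part of the original template):

```yaml
EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Ref 'InstanceType'
    KeyName: !Ref 'KeyName'
    ImageId: !Ref 'ImageId'
    SubnetId: !Ref 'SubnetId'   # assumed parameter: a subnet inside VPCID
    SecurityGroupIds:
      - !GetAtt 'InstanceSecurityGroup.GroupId'
```

SecurityGroupIds pairs with SubnetId for in-VPC instances, whereas SecurityGroups (group names) triggers the groupName/subnet parameter-combination error.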
The error is in the EC2Instance resource, in the SecurityGroups attribute. SecurityGroups needs an array of group IDs, but !Ref InstanceSecurityGroup returns the resource ID. So you need to use !GetAtt instead to get the GroupId.
Parameters:
  VPCID:
    Description: Name of an existing VPC
    Type: AWS::EC2::VPC::Id
  KeyName:
    Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
    Type: AWS::EC2::KeyPair::KeyName
    ConstraintDescription: must be the name of an existing EC2 KeyPair.
  InstanceType:
    Description: EC2 instance type
    Type: String
    Default: t2.medium
    AllowedValues:
      - t2.medium
      - t2.large
  AccessLocation:
    Description: The IP address range that can be used to access to the EC2 instances
    Type: String
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref 'InstanceType'
      SecurityGroupIds:
        - !GetAtt InstanceSecurityGroup.GroupId
      KeyName: !Ref 'KeyName'
      ImageId: !Ref 'ImageId'
  InstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref VPCID
      GroupDescription: Enable SSH
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: !Ref 'AccessLocation'
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-security-group.html
Debian installation not booting after Nvidia drivers installation
I recently moved my Debian Gnome 3 installation to a new computer with a GTX 1070. I had previously installed old Nvidia drivers which didn't support the 1070, so when I booted up I was faced with a blinking prompt. To remedy this, I went into the command prompt with CTRL-ALT-F2, did sudo apt-get purge nvidia* and installed the Nvidia 367.27 Drivers.
When I rebooted after installing the drivers successfully, Debian displayed "Loading, please wait...", indefinitely. When I rebooted again, it (Gnome, I presume) displayed this "Oh no! Something has gone wrong." message. Now, when I boot into Debian, I always get one of those two messages.
Does anyone have a clue what has gone wrong? Is there any way to fix this, or is it time for me to reinstall?
When you get the "Oh No!"-message, can you switch to a terminal prompt?
@Fiximan Yes. When I log in on the terminal prompt, I get the message systemd-logind[1814]: Failed to start user service: Unknown unit<EMAIL_ADDRESS>(after that message I can still type commands into the prompt)
First of all, I'd suggest installing the driver from the Debian repository, read these two pages on how and why. This usually should fix your problem, but use an older driver - usually things should work as before then. Note that the latest driver is not always necessary.
@Fiximan I can't install the drivers from the Debian repository because they don't support the GTX 1070. Before I tried installing the new drivers, I had the old drivers from the Debian repository, and all I got was a blinking prompt.
Merge your accounts, don't reregister for everything.
Arduino Micro draws too much power from iPhone. How can I change that?
I'm building a USB keyboard that has two buttons - space and enter. The plan is to use this USB keyboard (with an Apple Lightning connector to USB) with the built in iOS switch control. I bought the lightning to USB dongle, and hooked it up to a normal keyboard, and it works fine.
Next, I took an Arduino Micro (ATMEGA32u4) and programmed it to act as a keyboard with the two keys I need (space and enter). On a PC, it works just fine, but when I hook it up to my iPhone, I get the message:
Arduino Micro: The connected device requires too much power.
I've done quite a bit of research on this, and I found this post. In a nutshell, that post said that when you connect a device to an iOS device, one of the first things it does is tell the iDevice how much current it could potentially draw. This number (about 200mA for the Arduino Micro) is what decides whether the iDevice will support the device or not, when in truth, the device will not come close to its max current draw, at least not in my case.
I hooked up a meter to the normal keyboard, and it draws just over 4mA. When I hook up the Arduino, it is drawing almost 40mA. While the Arduino is drawing much more than the keyboard, it should be okay, because when I plugged in a flash drive, it was drawing 50mA, but the iPhone didn't complain.
There is my story, here is my question:
Is there any way to change it so that the Arduino Micro doesn't request so much power? In other words, is there a way to reset the value that is causing the iPhone to not use the device?
do not use an arduino for this ..... take apart a regular USB keyboard ..... discard the switches and install a couple of push buttons to replace the two keys
Did you get this to work? I am working on a keyboard I need to connect to iPhone and I can't get it to work. Please help
The power consumption is part of the exchange with the PC when it is plugged in. You can change that. Find the file USBCore.h in your Arduino install directory. In my case (under Linux) it was:
./hardware/arduino/avr/cores/arduino/USBCore.h
Inside that file, at around line 269 (depending on the distribution) you should see these lines:
#define D_CONFIG(_totalLength,_interfaces) \
{ 9, 2, _totalLength,_interfaces, 1, 0, USB_CONFIG_BUS_POWERED | USB_CONFIG_REMOTE_WAKEUP, USB_CONFIG_POWER_MA(500) }
On the right of the second line is the requested power consumption in milliamps (currently 500). Change that to (say) 100:
That is, change USB_CONFIG_POWER_MA(500) to USB_CONFIG_POWER_MA(100).
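For reference, the whole modified definition would then read like this (the 100 mA figure is just an example; pick whatever your board actually needs, and keep the trailing backslash that joins the two lines of the macro intact):

```c
/* USBCore.h -- example with the requested current lowered to 100 mA */
#define D_CONFIG(_totalLength,_interfaces) \
    { 9, 2, _totalLength,_interfaces, 1, 0, USB_CONFIG_BUS_POWERED | USB_CONFIG_REMOTE_WAKEUP, USB_CONFIG_POWER_MA(100) }
```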
Save and recompile.
You may find that the bootloader initially requests 500 mA even with this change (as it initially runs the bootloader). However when the sketch starts it should re-establish a USB connection and only request 100 mA. To fix that you would need to recompile the bootloader with the same fix, and reinstall it, a somewhat more complex task.
Another possible approach would be to tweak the fuses so that it doesn't run the bootloader, if you have finished debugging your code. Only do that if you are confident with playing with the fuses.
Majenko beat me to it while I was testing my answer. :)
I have a simple Arduino keyboard and it works great on all other devices then iPhone. Did you guys get this to work on iPhone?
@ErikAndershed This sounds like a new question to me. Feel free to make one.
Yes, but it requires manual modification of the Arduino core software.
Find the file USBCore.h within your AVR boards installation (it could be in the data storage folder wherever that is on your OS, or within the actual IDE software)
Look for the line #define D_CONFIG(_totalLength,_interfaces) \
The next line has the power setting. Change USB_CONFIG_POWER_MA(500) to what you require (for example USB_CONFIG_POWER_MA(50)).
I did test with 50, but it does not work. When I plug it into my iPhone with an adapter (USB-C to Lightning, supporting both power and data), my Arduino chip doesn't wake up. Did you get this to work?
SOX - Slow audio without stuttering effect
So, I have an audio file and I would like to slow it down to 0.5x its speed without changing the pitch. The problem is that when I do that, I get a weird stuttering effect. Is there any way to have SoX slow the audio "smoothly" so there's no noticeable stuttering? Here is an example that I have found where somebody slowed down the Windows XP startup sound to make it 24 hours long. If you skip to the middle of the video you will notice it is playing smoothly.
I take it you're using the tempo effect? Have you tried playing around with the parameters, like reducing the segment size and increasing the search space and segment overlap, ending up with something like this:
play test.aiff tempo 0.5 10 20 30
Chances are, however, that you won't ever get a pleasing result using SoX to so drastically stretch audio without changing the pitch. Not that the SoX algorithm is bad, it just isn't quite the right tool for the job.
You'd be better off using something like Amazing Slow Downer, or Paul's Extreme Sound Stretch, both employing algorithms specifically designed for stuff like this.
Are there any command line programs for this that support linux?
Sadly not that I'm aware of. Though there might be an appropriate LADSPA plugin available, but I have little experience with those.
Recursively Echo Relative Path In A Batch
I've seen a lot of posts similar to this but none of them answer my question or they simply do not work for me. I am trying to loop through a directory and echo out the relative path of all files in that directory.
My Directory:
- Name
- TestA
* Subfolder
- Test.txt
* Test2.txt
* Test3.txt
- TestB
* Test4.txt
What I want it to output:
Name/TestA/Subfolder
Name/TestA
Name/TestA
Name/TestB
I tried this post: batch programming - get relative path of file but it only works for the Subfolder case and even then it cut off everything after Subf.
Please help!
The batch file
@echo off
setlocal enabledelayedexpansion
set ParentPath=C:\Temp

for /F "usebackq delims=" %%I in (`dir "%ParentPath%\Name\*.txt" /ON /B /S`) do (
    rem Get drive and path of found text file.
    set "RelativePath=%%~dpI"
    rem Remove the path of parent folder from path of file.
    set "RelativePath=!RelativePath:%ParentPath%=!"
    rem Remove the backslash at beginning and end of the remaining path.
    set "RelativePath=!RelativePath:~1,-1!"
    rem Replace every backslash by a slash character.
    set "RelativePath=!RelativePath:\=/!"
    echo !RelativePath!
    rem The value echoed above is the relative path of the file.
    rem echo %%~nxI would output the file name itself.
    echo.
)
endlocal
outputs
Name/TestA
Name/TestA
Name/TestA/Subfolder
Name/TestB
As you can see the order of the relative paths is not as you requested.
Is the order important for you?
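If a batch file is not a hard requirement, the same traversal can be sketched in a few lines of Python (offered only as an alternative; the folder layout and the root name Name are taken from the question):

```python
from pathlib import Path

def relative_dirs(parent, root_name="Name"):
    """Yield the forward-slash relative directory of every .txt under parent/root_name."""
    root = Path(parent) / root_name
    for txt in sorted(root.rglob("*.txt")):
        # Drop the parent prefix and the file name, keep only the directory part.
        yield txt.parent.relative_to(parent).as_posix()
```

As with the batch version, the order follows an alphabetical directory walk, not the order listed in the question.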
python wavebender module raspberry pi raspbian
I am trying to generate audio in python on my raspberry pi running raspbian, I am using the wavebender module, found here:
https://github.com/zacharydenton/wavebender
The example programs run in IDLE, but instead of audio being produced, random letters and symbols appear in random places across the python shell window.
Why is this happening?
Thanks
You probably need to pipe the output to aplay as stated by the webpage you posted:
$ python examples/binaural.py | aplay
The generated sound is shown as random chars since that's what aplay reads as input.
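If you would rather skip the pipe entirely, you can write the samples to a .wav file and play that with aplay afterwards. This sketch uses only the standard library (it does not use wavebender's own helpers, whose exact API I won't guess at here):

```python
import math
import struct
import wave

RATE = 44100   # samples per second
FREQ = 440.0   # tone frequency in Hz

def write_wav(path, seconds=1):
    """Write a mono, 16-bit sine tone to path."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 2 bytes = 16-bit samples
        w.setframerate(RATE)
        frames = b"".join(
            struct.pack("<h", int(32767 * math.sin(2 * math.pi * FREQ * n / RATE)))
            for n in range(RATE * seconds)
        )
        w.writeframes(frames)

write_wav("tone.wav")   # then play it with: aplay tone.wav
```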
Cross-Library JSON Configuration in .NET Core
Let's say I have some projects, a library Foo and two projects Bar and Baz, which depend on Foo. Foo contains some configuration that will be shared between Bar and Baz, but Bar and Baz will also do some configuration that is different between them.
In Foo, I have a configuration file:
/* /dev/Foo/fooConfig.json */
{
"lorem": "ipsum",
"dolor": "set"
}
and a method that does the initial configuration:
/* /dev/Foo/configuration.cs */
public static IConfigurationBuilder BuildBaseConfiguration()
{
    return new ConfigurationBuilder()
        .AddJsonFile("fooConfig.json");
}
Then in Bar, I have something similar:
/* /dev/Bar/barConfig.json */
{
"semper": "suspendisse"
}
/* /dev/Bar/Program.cs */
public static void Main()
{
    BuildBaseConfiguration()
        .AddJsonFile("barConfig.json")
        .Build();
}
Normally, Foo is distributed as a NuGet package, but during development, I reference it locally by including the following Bar.csproj:
<Reference Include="Foo">
<HintPath>../Foo/bin/Debug/net6.0/Foo.dll</HintPath>
</Reference>
I've made sure that fooConfig.json is being copied to the output directory, and that it is indeed appearing after successfully running a build.
However, after running Bar, I get the following error:
System.IO.FileNotFoundException: The configuration file 'fooConfig.json' was not found and is not optional. The expected physical path was '/dev/Bar/bin/Debug/net6.0/fooConfig.json'.
It would seem that .NET Core is looking for the config file using a relative file path based on the working directory at runtime (/dev/Bar/bin/Debug/net6.0), rather than where the file is actually kept (../Foo/bin/Debug/net6.0/fooConfig.json).
How do I correct this behavior, so that .NET Core references the real location of fooConfig.json?
By default ConfigurationBuilder is using AppContext.BaseDirectory (see the source code) as root for the file search. You can try something like the following to override this behaviour:
// for development environment:
var compositeFileProvider = new CompositeFileProvider(new[]
{
    new PhysicalFileProvider(AppContext.BaseDirectory),
    new PhysicalFileProvider(Path.GetDirectoryName(typeof(SomeTypeFromFooDll).Assembly.Location))
});
var cfgBuilder = new ConfigurationBuilder()
    .SetFileProvider(compositeFileProvider);
// ... rest of cfg setup
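The underlying idea of CompositeFileProvider is simply "probe each registered root in order and use the first hit". A language-neutral sketch of that lookup (Python here purely for illustration, not .NET API):

```python
from pathlib import Path

def find_config(filename, roots):
    """Return the first existing roots[i]/filename, mimicking a composite provider."""
    for root in roots:
        candidate = Path(root) / filename
        if candidate.is_file():
            return candidate
    raise FileNotFoundError(filename)
```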
P.S.
Personally I think that a library should not come with its config files, and I would refactor the code in such a way that it exposes the settings type (with some default values) and leaves it up to the consuming project to handle those.
Yeah, as I've been reviewing things, it seems like the more appropriate course of action would be to re-factor these into enums or other types, something that will be naturally included in the dll.
Regex not matching the string format in C#
I am receiving data over a serial port and I want to verify that the data format is right. The format I am expecting is like this:
number,number,number,number -> 1200,2500,6500,90
I am using the regex like this:
Regex.IsMatch(s, @"^[0-4095]\,[0-4095]\,[0-4095]\,[0-4095]$")
I used 4095 because the number range is between 0 and 4095. Need help with this. Thanks in advance.
Parse the numbers to ints and compare the values; that is far easier than trying to express number ranges in regex.
If you want to make the pattern shorter, use an optional comma but force it by use of a word boundary \b. I came up with ^(?:(?:40(?:[0-8]\d|9[0-5])|[1-3]\d{3}|[1-9]\d?\d?|0),?\b){4}$
you could do it without the need to depend on regex with a simple LINQ expression and int.TryParse method:
var sections = e.Split(',');
sections.Count() == 4 &&
sections.All(s => int.TryParse(s, out int i) && i >= 0 && i <= 4095);
After working through a two step process validating the format with Regex, then parsing the integers to check their values, I concluded this is the best answer. Regular expressions have their limits. This is a case where I don't think regex is the right answer.
agree, regex becomes hard to read pretty fast which overcome the conciseness.
Maybe,
^(?:40[0-8]\d|409[0-5]|[1-3]\d{3}|\d{1,3}),(?:40[0-8]\d|409[0-5]|[1-3]\d{3}|\d{1,3}),(?:40[0-8]\d|409[0-5]|[1-3]\d{3}|\d{1,3}),(?:40[0-8]\d|409[0-5]|[1-3]\d{3}|\d{1,3})$
Demo 1
might work OK, if 000,000,000,000 would be valid, otherwise,
^(?:(?:40[0-8]\d|409[0-5]|[1-3]\d{3}|[1-9]\d{2}|[1-9]\d|\d),){3}(?:40[0-8]\d|409[0-5]|[1-3]\d{3}|[1-9]\d{2}|[1-9]\d|\d)$
might be an option too.
Demo 2
Test
using System;
using System.Text.RegularExpressions;

public class Example
{
    public static void Main()
    {
        string pattern = @"^(?:(?:40[0-8]\d|409[0-5]|[1-3]\d{3}|[1-9]\d{2}|[1-9]\d|\d),){3}(?:40[0-8]\d|409[0-5]|[1-3]\d{3}|[1-9]\d{2}|[1-9]\d|\d)$";
        string input = @"1200,2500,6500,90
1200,2500,6500,90
1200,2500,4095,90
0,0,0,0
999,1,0,99
000,000,000,000
4095,4095,4095,4095
";
        RegexOptions options = RegexOptions.Multiline;

        foreach (Match m in Regex.Matches(input, pattern, options))
        {
            Console.WriteLine("'{0}' found at index {1}.", m.Value, m.Index);
        }
    }
}
If you wish to simplify/modify/explore the expression, it's been explained on the top right panel of regex101.com. If you'd like, you can also watch in this link, how it would match against some sample inputs.
RegEx Circuit
jex.im visualizes regular expressions:
This is really helpful !
We cannot express a numeric range like that inside a character class. You may use this regex for your use case:
^(([0-9]|[0-9][0-9]|[0-9][0-9][0-9]|0[0-9][0-9][0-9]|1[0-9][0-9][0-9]|2[0-9][0-9][0-9]|3[0-9][0-9][0-9]|40[0-8][0-9]|409[0-5]),){3}([0-9]|[0-9][0-9]|[0-9][0-9][0-9]|0[0-9][0-9][0-9]|1[0-9][0-9][0-9]|2[0-9][0-9][0-9]|3[0-9][0-9][0-9]|40[0-8][0-9]|409[0-5])$
Edited:
^(([0-9]{1,3}|0[0-9]{3}|1[0-9]{3}|2[0-9]{3}|3[0-9]{3}|40[0-8][0-9]|409[0-5]),){3}([0-9]{1,3}|0[0-9]{3}|1[0-9]{3}|2[0-9]{3}|3[0-9]{3}|40[0-8][0-9]|409[0-5])$
Or,
^(?:(?:\d{1,3}|[0-3]\d{3}|40[0-8]\d|409[0-5]),){3}(?:\d{1,3}|[0-3]\d{3}|40[0-8]\d|409[0-5])$
I guess you now understand the idea. To check if a number lies between 0-4095,
The number can be single digit 0-9
It can be double digit [0-9][0-9]
It can be any triple digit number [0-9][0-9][0-9]
But for a 4-digit number, we must exclude everything greater than 4095; that is why the pattern needs to be longer than usual.
1[0-9][0-9][0-9] covers all 4 digit numbers starting from 1.
...
40[0-8][0-9]|409[0-5] covers all numbers between 4000 and 4095 (note that 40[0-9][0-5] alone would miss values such as 4006 or 4089, whose last digit is 6-9)
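To sanity-check a range pattern like this, it helps to exercise it outside the serial-port code. A small sketch (Python used only for illustration; the pattern itself carries over to .NET unchanged) with both the regex approach and the parse-and-compare approach:

```python
import re

# Each field is 0-4095: 40[0-8]\d|409[0-5] covers 4000-4095, and the remaining
# alternatives cover 1000-3999, 100-999, 10-99 and 0-9 (no leading zeros).
FIELD = r"(?:40[0-8]\d|409[0-5]|[1-3]\d{3}|[1-9]\d{2}|[1-9]\d|\d)"
LINE = re.compile(rf"^{FIELD}(?:,{FIELD}){{3}}$")

def is_valid(s):
    return LINE.fullmatch(s) is not None

def is_valid_parsed(s):
    # Parse-and-compare alternative; note this one also tolerates leading zeros.
    parts = s.split(",")
    return len(parts) == 4 and all(p.isdigit() and int(p) <= 4095 for p in parts)
```

The parse-and-compare variant is usually the easier one to maintain, which matches the LINQ suggestion above.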
Error returning an object literal in Node.js
I have the following code in my application, as I am following a lynda.com tutorial to learn Node.js. I get an error on the line where it says origin:, saying "unexpected token".
var number, origin, destination;
exports.setNumber = function(num){
number = num;
}
exports.setOrigin = function(o){
origin = o;
}
exports.setDestination = function(d){
destination = d;
}
exports.getInfo = function(){
return
{
number: number,
origin: origin,
destination: destination
};
};
I have no idea what the error is; I am following the tutorial line by line on lynda.com.
There is no syntax error; still, try adding semicolons after the exports statements.
@wZVanG: Yes, there is see my answer.
return
{ ... }
is equivalent to
return;
{ ... }
because of JavaScript's automatic semicolon insertion. If you want to spread the return value over multiple lines, you have to start the object literal on the same line:
return {
// ...
};
You got the error because
{
number: number,
origin: origin,
destination: destination
};
is interpreted as a block, number: as a label and the , as a sequence expression, which is basically equivalent to
(number, origin: origin, destination: destination)
origin: is simply invalid at this position.
@user1010101 That doesn't affect the ASI issue.
@DaveNewton you are correct. It works now, thanks a ton, i would have never known this! I will accept your answer in 3 minutes as I can't at the moment.
Sitecore Media Library - missing Alt Text
I am using Sitecore 9.0. There used to be an option while uploading an image to use either the normal "File Upload" or "Upload File Advanced" to enter the Alt Text before uploading the image in the upload dialog box itself. It is now missing.
Can someone guide me on how to get it back? The Alt Text option used to be visible in the red highlighted box below.
The File Upload dialog allowing you to set the Alternate text is based on a Flash uploader. There is an Upload.Classic setting in Sitecore.config indicating whether the uploading runs in classic (no Flash) mode or not; the default value is false, so that the Flash uploader should be used by default:
<setting name="Upload.Classic" value="false" />
But, for security reason, Flash is not enabled by default in modern browsers, therefore, if you want to use the Flash uploader you also have to allow it in your browser.
How to allow Flash in Google Chrome browser?
To the left of the web address, click Lock or Info icons;
Then at the bottom on the popup, click Site Settings;
In the new tab, to the right of "Flash", click the Down arrow and then select Allow;
Go back to the site and reload the page;
Note, that Sitecore client interface issues a persistent sc_fv cookie where it stores the Flash status, its value for the blocked Flash mode is '0.0.0'. Therefore, once the Flash is allowed in your browser you need to delete the sc_fv cookie to force the Flash status check.
So, summing all up, if you wish to use the Flash file uploader you have to complete 3 simple steps below:
Make sure that Upload.Classic setting is set to false in Sitecore.config;
Allow Flash in your browser;
Delete sc_fv cookie.
And you will get the Alternate text field back in the Upload File dialog:
How do we remake the program to run and save new values in c.csv file?
I need to search in b.csv and remove a record from a.csv when the number and name exist in both b.csv and a.csv. Also, for example, ACURA and HONDA can have the same address; if we need to remove ACURA, then HONDA with the same address must not be deleted.
file a.csv:
address;;name;;
8971018564;ACTUATOR, AXLE;ACURA;28932,9075;50
8972534230;ACTUATOR, DOOR LK F;ACURA;16597,035;50
8971283222;ACTUATOR, DOOR LK FR;ACURA;18548,46;50
8971283232;ACTUATOR, RR DOOR LO;NISSAN;17838,45;50
8972534250;ACTUATOR,DR LK RR LH;ACURA;16063,425;50
file b.csv
address;name
8971018564;ACURA
8971283232;NISSAN
8971283222;ACURA
8972534250;ACURA
8971018564;HONDA
The script below does not work correctly: it returns only one column when it must return seven. Also, the file a.csv is about 120 MB and the script stalls. Does PowerShell have a file size limitation?
#Read the first file into a second variable using Import-Csv.
$a = Import-Csv 'a.csv' -Delimiter ';'
#Read the second file and expand the address field, so you get an array with just the values. Assign that to another variable.
$b = Import-Csv 'b.csv' | Select-Object -Expand 'address'
#For better performance make $b a hashtable instead of an array
$b = @{}
Import-Csv 'b.csv' | ForEach-Object {
$b[$_.address] = $true
}
#and check for the absence of an address with a hashtable lookup.
$a | Where-Object {
-not $b.ContainsKey($_.address)
} | Export-Csv 'c.csv' -Delimiter ';'
The script returns one column but must return seven columns:
#TYPE System.Management.Automation.PSCustomObject
"address"
"8971018564"
"8972534230"
"8971283212"
"8971283222"
"8972534240"
"8971283242"
"8943586191"
"8943586201"
"8973159320"
"8972546570"
Can you add the header rows for a.csv and b.csv? Can you add what you expect c.csv to be?
I have significantly improved the formatting of your question to maker it both understandable and readable. Please take more care in future, when creating a question, or answer, in order that your question is not just ignored by those not willing to try to decipher it. Also I'm not sure if your example csv content files are even relevant to your question. My reading of it is to match both the number and name in b.csv with a record in a.csv, however there is no single record in a.csv which contains both the number and name.
To clarify the above, a.csv only contains two numbers from b.csv,<PHONE_NUMBER> and<PHONE_NUMBER>. However, the record in a.csv with<PHONE_NUMBER> matches with NISSAN not ACURA and the record in a.csv with<PHONE_NUMBER> matches with ACURA not NISSAN. That means in order for others to reproduce your issue, they would have to manually create their own CSV files, because yours will not match anything. Until you have rectified that particular observation, your question is technically off topic, because all we'd be doing is copying a.csv to c.csv.
do you have an answer?
It's still not entirely clear what you're trying to do - what does "The script below does not work correctly" mean? Does it throw errors? Produce unexpected output? What is the expected output?
The script returns the wrong output, only one column, when it must return seven columns. Also, the file a.csv is about 120 MB and the script stalls. Does PowerShell have a limitation on file size?
Selenium with Python, trying to click a "pseudo element"
I'm trying to do web scraping of a page. Everything went normally until I noticed there is a "pseudo-element" (::before); let me show you
btw, the inspector is above the "Magnifier Glass"
So the problem comes with detalle_causa = driver.find_element(By.CSS_SELECTOR,'i.fa.fa-search.fa-lg').click()
if t_rol == rol_yr:
    rol_c0.append(t_rol)
    materias_e_c0.append(t_tiporecurso)
    carat_c0.append(t_caratulado)
    fecha_c0.append(t_fecha)
    estado_c0.append(t_estado)
    data = {"ROL": rol_c0, "TIPO RECURSO": materias_e_c0, "CARATULADO": carat_c0, "FECHA": fecha_c0, "ESTADO": estado_c0}
    sleep(3)
    df = pd.DataFrame(data)
    print(df)
    sleep(3)
    detalle_causa = driver.find_element(By.CSS_SELECTOR, 'i.fa.fa-search.fa-lg').click()
    print(detalle_causa)
    df.to_csv('2022_cs_p2.csv', index=False, encoding='utf-8')
    sleep(2)
else:
    print("No era")
EDIT: I need to get access to this.
So if you guys could give me a hand here, I will really appreciate it.
some pointers here https://www.lambdatest.com/blog/handling-pseudo-elements-in-css-with-selenium/
detalle_causa = driver.find_element(By.CSS_SELECTOR,'i.fa.fa-search.fa-lg')
detalle_causa.click()
print(detalle_causa)
You are trying to store a click.
Your print statement is going to print nothing human readable... it's going to print the guid for the webelement, etc. I'm not sure what you are intending to print here given that it's an icon so even detalle_causa.text is not going to print anything either.
The problem is with detalle_causa, since I already copy/paste the selector, created the variable and added the click at the end. From that I'm not getting access to the 3rd image website, which is the goal
Was unsure of what he was trying to print to so I just printed the guid.
Text Format in Excel using VBA
When I use WinSQL to run a SQL statement, the result is<PHONE_NUMBER>0001812. However, when I incorporate the SQL as a macro, the result is 2.01008E+16. What should I do in the macro in order to maintain the result as<PHONE_NUMBER>0001812 ?
Thanks,
Bob
Is it a large integer number or a text (looks like YYYYMMDDHHMMSSMMM)?
According to this article ActiveCell.NumberFormat = "@" should do the trick.
I have inserted it into the macro and yet it doesn't work. It still shows as 2.01008E+16.
As suggested by Tahbaza on 4 Oct 2010, prepend an apostrophe if you just want the column to display as text, something like: select Cno,Itno,CONCAT('''',Ref) as Ref from test.
ActiveCell.NumberFormat = "0" works for me (not what I expected, but so it goes)
You might want to throw in a Cells.Columns.AutoFit to resize the columns as necessary.
@Bob - Where are you putting this code in? What if you try setting the number format manually?
Someone suggested to add CONCAT to my select statement & it works (eg. select Cno,Itno,CONCAT('''',Ref) as Ref from test)
@Bob good that you got it figured out. You should post the solution and accept that as your answer to close out the question.
Contact info requirement after 5 minutes
I have a PHP site that the client would like to require the viewer to fill out and submit a contact form after viewing the site for 5 minutes. Info does not need to be added to a database, just sent to their office email. I have created the form but am not sure of the best way to handle the pop-up and make it required before continuing to view the site.
Make it required? What a way to ensure all your visit lengths are < 5 minutes.
@Matt I get even more stupid requirements at my work :P
I suppose you could track a session value. Set it to the current timestamp when a session starts, compare it to the current timestamp on every page request. If the comparison exceeds 5 minutes, redirect to the form. It's... not a very good user experience. But I guess that's the problem of whoever defined the requirement.
And, it would be very easy to get around that kind of requirement without server-side intervention (simply reset the session by closing the browser). This kind of functionality also means that if a user returns to the site, they would have to re-enter their information after 5 min even if they already did previously. Very odd requirement. How do you handle javascript being turned off?
This may help you;
PHP for the website page
session_start();

if (false === isset($_SESSION['startTime'])) {
    $_SESSION['formSubmitted'] = false;
    $_SESSION['startTime'] = time();
}
PHP for the jqueryCheck.php;
session_start();

$return = 0;
if (false === $_SESSION['formSubmitted']) {
    if ((time() - $_SESSION['startTime']) > 300) {
        $return = 1;
    }
} else {
    $return = 2;
}
echo $return;
PHP after form submitted and validated
$_SESSION['formSubmitted'] = true;
JQUERY
var intervalId = setInterval(function () {
    $.ajax({ url: "jqueryCheck.php", success: function (showForm) {
        // The response body arrives as a string, so normalize it to a number.
        switch (parseInt(showForm, 10)) {
            case 0: // not yet at 5 minutes and form not submitted
                break;
            case 1: // at 5 minutes, no form submitted
                // code to show form
                break;
            case 2: // form submitted
                clearInterval(intervalId);
                break;
        }
    }});
}, 60000);
You're doing your client a significant disservice if you haven't tried to talk them out of this. If they want people not to visit the site, they should just not have it developed. If they want people to visit it (or ever come back) they shouldn't even consider this.
That being said, you could set up some sort of timer to "force" a popup to appear, but many popup blockers would handle that. There would be ways around any scheme like this, regardless of how you do it (cookies, serving a different page, etc.), but unless you've got something incredibly unique on that site, your client's readers probably won't even bother with them; they'll go to a site that isn't deliberately broken.
I would suggest to your client that they instead put in a box where users who WANT to be contacted can enter an email address. That won't annoy your users, but will still let you take in leads from those who are interested.
Love it - I totally agree - that's what I had been telling my client all along - glad to hear you are all backing me up. We'll see how it works out and thanks for the suggestions!
This is a simply solution without Javascript.
session_start();

if (!isset($_SESSION['trial_end'])) {
    $_SESSION['trial_end'] = time() + 5 * 60;
}

// Check if trial has expired and we are not already on the form
if (time() > $_SESSION['trial_end'] && $_SERVER['REQUEST_URI'] !== '/form') {
    header('Location: /form');
    exit;
}
Confused about differential form
I have only taken one calculus course in my life so please bear with me.
I came across this equation in a paper:
dt = dz/u + d[f(y)]
It looks like Leibniz-notation, but taken apart. My friend told me it's called "differential form". However, it doesn't make sense to me that dt would be something in and of itself. Same goes for dz. An actual infinitesimal difference doesn't really exist, right? Could anyone explain to me what an equation like this is supposed to mean or signify?
Edit: I don't necessarily want a formal definition. Just an intuitive understanding.
Edit 2:
Have you googled "diferential forms"? Are you familiar with the definition of a vector space? how about linear maps?
I have googled that term, but unfortunately I still couldn't understand it. I know what a vector space is, but not what a linear map is.
If you know something about tangent spaces to a manifold I could help you
In general, given a functon $f\in\mathcal C^{\infty}(M)$, you define the differential as the $1$-form such that $$df\vert_p(v)=v(f),\forall v\in T_pM.$$A smooth 1-form is a smooth section of the cotangent bundle defined as a map $\omega:M\to T^*M$, $\omega\in\mathcal C^{\infty}(M)$ and $\pi\circ\omega=id_M.$
If you know what a vector space is but not what a linear map is, then you should instead focus on studying linear algebra first. Because (multivariable) calculus is HIGHLY based on linear algebra. Anyway here's a previous answer of mine (see also the link there), which is about as simple as I can explain it. But again, without even a basic understanding of linear algebra, these concepts are impossible to explain properly.
The whole $dx$ and $dy$ thing are essentially notation that are meant to capture an extremely small area unit. Differential forms are confusing to advanced undergraduates, so I think most honest thing I can say that will be effective is that forms are an alternative way of looking at integration that has some advantages over standard integration, but some disadvantages as well.
How to specify location of create task using schtasks.exe (NOT run-in location)
I'm trying to create a task using SCHTASKS.EXE and specify the location is per the following image (Windows 10):
Is it possible to do this using schtasks? I don't see a parameter for it in Microsoft's documentation. Whenever I create a task, it puts it in the "Task Scheduler Library" folder, not any sub-folder:
I've only been able to find SO questions regarding trying to specify RUN-IN. This is not what I'm looking for.
I realized you can create a task using the GUI, then export it to XML. Then you can use that file to create the task.
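That export/import route can be scripted as well; the flags below are as documented for schtasks on Windows 10, while the task and file names are just examples:

```bat
rem Export a task that was set up in the GUI to an XML definition
schtasks /Query /TN "Contoso\MyFirstTask" /XML > MyFirstTask.xml

rem Re-create the task from that XML; the folder is again encoded in /TN
schtasks /Create /TN "Contoso\MyFirstTask" /XML MyFirstTask.xml
```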
You have two options:
Using: Windows UI
Open Task Scheduler
Look for the task browser on the left-hand side of the window. You will see a couple of folders (e.g. Task Scheduler Library, Microsoft, etc.)
On Task Scheduler Library, right click: New Folder... (e.g. Contoso)
Left-click on Contoso to select it.
Right-click on Contoso and choose Create Task...
Using: Command Prompt
Launch command prompt with elevated privileges.
Execute the following: schtasks.exe /Create /tn "Contoso\MyFirstTask" /SC HOURLY /TR "C:\Windows\System32\calc.exe"
Note: The "Folder Name" (e.g. Contoso) is specified as part of the task name (see: /tn).
Note: If the Task Scheduler window was open prior to executing the command from the command line... you may have to refresh the task browser to see the new Contoso folder.
These instructions assume that you have the appropriate Windows permissions to create a new task.
RazorPages Page Remote not working on model
as per https://www.mikesdotnetting.com/article/343/improved-remote-validation-in-razor-pages
I followed the tutorial and implemented PageRemote. However, it does not work when the attribute is applied to a property of a model that I then expose as a bound property.
public class Draft
{
public int Id { get; set; }
[PageRemote(ErrorMessage = "Invalid data", AdditionalFields = "__RequestVerificationToken", HttpMethod = "post", PageHandler = "CheckReference")]
public string Reference { get; set; }
}
[BindProperty]
public Draft Draft { get; set; }
public JsonResult OnPostCheckReference()
{
var valid = !Draft.Reference.Contains("12345");
return new JsonResult(valid);
}
on my page
<tab>
<tab-item icon="fas fa-arrow-left" url="@Url.Page("../Index")"></tab-item>
<tab-item icon="fas fa-list" url="@Url.Page("Index")"></tab-item>
<tab-item icon="fas fa-plus" is-active="true"></tab-item>
</tab>
<form method="post">
<card>
<card-header icon="fas fa-plus" title="Draft"></card-header>
<card-body>
<input asp-for="Draft.Reference" />
<span asp-validation-for="Draft.Reference" class="text-danger"></span>
</card-body>
<card-footer>
<button class="btn btn-success"><i class="fas fa-plus"></i> Adicionar </button>
</card-footer>
</card>
</form>
@section Scripts{
@{ await Html.RenderPartialAsync("_ValidationScriptsPartial"); }
<script src="~/lib/jquery-ajax-unobtrusive/dist/jquery.unobtrusive-ajax.min.js"></script>
}
Remote validation on nested model properties is not straightforward. The framework prefixes all additional fields with the name of the model, so request verification fails, resulting in a 400 error.
The workaround is to separate the field that you want to validate remotely from the sub-model, and make it a first-class property of the PageModel instead. Then, if ModelState is valid, assign the value to the nested model.
public class Draft
{
public int Id { get; set; }
public string Reference { get; set; }
}
[BindProperty]
public Draft Draft { get; set; }
[BindProperty, PageRemote(ErrorMessage = "Invalid data", AdditionalFields = "__RequestVerificationToken", HttpMethod = "post", PageHandler = "CheckReference")]
public string Reference { get; set; }
public JsonResult OnPostCheckReference()
{
var valid = !Reference.Contains("12345");
return new JsonResult(valid);
}
Then in the form:
<input asp-for="Reference" />
<span asp-validation-for="Reference" class="text-danger"></span>
I see, I tried this before just to test and it worked, but I wasn't aware this was the behaviour of the framework. Thanks for clearing it up.
Remote validation on nested model properties doesn't allow you to specify additional fields on a parent object. The __RequestVerificationToken is always on the root of the model. The source for jquery.validate.unobtrusive.js looks for fields prefixed with *. and prefixes the model name to them. The asp-for tag helper adds *. to the beginning of the fields.
You can circumvent this prefixing of *. by manually specifying the attribute in html and removing AdditionalFields from the attribute.
PageRemoteAttribute:
public class Draft
{
public int Id { get; set; }
[PageRemote(ErrorMessage = "Invalid data", HttpMethod = "post", PageHandler = "CheckReference")]
public string Reference { get; set; }
}
Html:
<input asp-for="Reference" data-val-remote-additionalfields="__RequestVerificationToken" />
Thank you! This is by far the easiest way I have seen to tackle the key issue.
Hello. Thanks for this answer.
But I have another problem and can't find any info on it. What if the developer needed to add Draft.ID as a parameter to the PageRemote attribute? My case is very similar, as I also have a view model and an inner property to validate, but it also requires another property. Taking this case as an example, I've tried passing the id parameter with AdditionalFields = "Draft.ID" or even AdditionalFields = "ID", but it is not being posted together with Draft.Reference when the post handler is triggered.
Unfortunately, this doesn't seem to work in 10/2023, but was immensely helpful in finding a solution. The data-val-remote-additionalfields value gets overwritten in .NET 7. I had to write a small JS file that selects all elements with data-val-remote-additionalfields and replaces the value. I found that the script had to be placed before the unobtrusive validation scripts in order to work.
The solution above where you specify the data-val-remote-additionalfields attribute directly on the input appears to no longer work for .Net 7. The tag builders seem to be overwriting the value and putting *. in front. This happens whether additional fields are specified in the data annotation or not.
Since I couldn't figure out a way to control the output via annotations and attributes, I wrote a simple script to find all inputs with the data-val-remote-additionalfields attribute and strips out the prefix for the request verification token.
We still use jQuery because the FluentValidation client-side adapters rely on it. This script uses the $(function(){}) shorthand to run when the DOM is ready. You can easily replace it with a DOMContentLoaded event listener. The rest is vanilla JS.
$(function () {
var remoteValidators = document.querySelectorAll('[data-val-remote-additionalfields]');
if (remoteValidators) {
remoteValidators.forEach(val => {
var value = val.getAttribute('data-val-remote-additionalfields');
value = value.replace('*.__RequestVerificationToken', '__RequestVerificationToken');
val.setAttribute('data-val-remote-additionalfields', value);
});
}
});
NOTE: This script needs to be placed after jQuery, but before the validation libraries.
Json.net deserialize complex class with interface property
I am trying to find an elegant way to deserialize a complex class with an interface property whose type I want to pass at runtime.
The scenario is that I have a Web API 2 service that is consumed by a separate application using HttpClient. The service controllers produce a number of different responses that wrap a payload in a response class. The payload can be any number of separate objects that implement IPayload, and when I deserialize I need to tell the deserializer what concrete class to deserialize IPayload to.
The class looks like this:
public class Response
{
public bool Success {get; set;}
public IEnumerable<IPayload> Payload {get; set;}
}
And the code that attempts to deserialize
...
Response response = await hr.Content.ReadAsAsync<Response>();
...
It's almost as if I would have to pass two types to the deserializer.
I've tried to do this using TypeNameHandling, and while I can get the service to include a $type property, that won't work because the code that generates the response also utilizes a generic. I wind up with "$type": "System.Collections.Generic.List`1[[Interfaces.IPayload, ClassLibrary1]], mscorlib" which I similarly can't deserialize to a concrete class.
Is there a way to do this out of the box, or would I have to create a custom deserializer?
Something like this? https://stackoverflow.com/a/18490583/3608792 I like the JsonConverter approach because there's no need to maintain a constructor. Just throw an attribute on the property.
Possible duplicate of Casting interfaces for deserialization in JSON.NET
You can create the constructor with actual concrete type so that JSON.net inject with appropriate type while deserializing. Check the link I've mentioned in previous comment.
@user1672994 I like this approach, but in my case it's not really the elegant solution I was hoping for, since that would require me to create a separate Response constructor for every possible IPayload, right?
@DanWilson - not sure if this completely gets to what I'm looking for as it does not seem that there's an opportunity to specify the type at the time of deserialization. But perhaps I'm reading it wrong and will look at this a bit more.
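For what it's worth, the general pattern the comments point at, letting the caller name the concrete type at deserialization time, can be sketched in a language-neutral way. This Python analogue (all class, field, and key names here are hypothetical, not Json.NET API) keeps a registry from type keys to concrete classes and builds the payload accordingly:

```python
import json

class Invoice:
    def __init__(self, number):
        self.number = number

class Receipt:
    def __init__(self, total):
        self.total = total

# The caller-supplied key picks the concrete class, mirroring
# "pass the payload type at deserialization time"
PAYLOAD_TYPES = {"invoice": Invoice, "receipt": Receipt}

def load_response(text, payload_type):
    """Deserialize a Response-like wrapper, constructing the chosen payload class."""
    raw = json.loads(text)
    cls = PAYLOAD_TYPES[payload_type]
    return {"success": raw["Success"],
            "payload": [cls(**item) for item in raw["Payload"]]}
```

In Json.NET the same idea is usually expressed with a custom JsonConverter whose constructor or attribute receives the concrete type, as in the linked answer.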
What is the number of possible simple directed graphs of $n$ elements without 2-cycles and self-loops?
Let $A$ be the adjacency matrix of a graph with $n$ vertices. Let $a_{ij}$ denote the entry in the $i$-th row and $j$-th column.
For a given $n$, how can we compute the number of possible networks, $f(n)$, such that
There are no self-loops: $a_{ii}=0$
There are no 2-cycles: $a_{ij}=1\implies a_{ji}=0$
There is at most one edge between two vertices: $a_{ij}\in\{0,1\}$
To get some intuition, I wrote a script that generates all such networks for $n=3,4,5$ and got $27,729,59049$ possible networks respectively. I did some reverse engineering and got that
$$f(n)=3^{\frac{n(n-1)}{2}}$$
However, I don't understand why is this the case. It is possible, of course, that the function I deduced is wrong and that it only works for those three integers.
Can someone shed some light on whether this formula is correct and why is this the case?
Do you allow multiple edges between two vertices?
@ParclyTaxel right, I forgot to clarify, I will edit the question. Thank you for pointing it out! :)
Assuming all vertices are labelled, for each of the $\binom n2=\frac{n(n-1)}2$ pairs of vertices there are three possibilities: there is an edge from $A$ to $B$, there is an edge from $B$ to $A$, there is no edge between $A$ and $B$. There are no other restrictions, so the number of different graphs is $3^{n(n-1)/2}$.
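The count is easy to confirm by brute force over all zero-diagonal 0/1 adjacency matrices, rejecting any matrix containing a 2-cycle (a quick sanity check, feasible for small $n$):

```python
from itertools import product

def count_graphs(n):
    """Count digraphs on n labelled vertices: no self-loops, no 2-cycles."""
    off_diag = [(i, j) for i in range(n) for j in range(n) if i != j]
    count = 0
    for bits in product((0, 1), repeat=len(off_diag)):
        a = dict(zip(off_diag, bits))
        # reject if some pair has edges in both directions (a 2-cycle)
        if all(not (a[(i, j)] and a[(j, i)]) for i in range(n) for j in range(i + 1, n)):
            count += 1
    return count
```

This reproduces $27$ and $729$ for $n=3,4$, matching $3^{n(n-1)/2}$.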
Midbar Kodesh, reason for title
The sefer Midbar Kodesh contains the teachings of Rav Shalom Rokeach, the first rebbe of Belz. The title Midbar Kodesh appears to be an intentional mispronunciation of "Midbar Kadesh." Why is this the title chosen for this sefer, who chose this title, and what is its intended meaning?
Send password by email to User
I would like to know whether it is good practice, in terms of security, to send the decrypted password to a new user by email. Could someone share their thoughts?
If I wanted to send the password decrypted, should I use this?
$decrypt= Crypt::decrypt($user->password);
thanks a lot in advance
Why do you want to send decrypted password to new user ?
This has already been answered here https://security.stackexchange.com/questions/17979/is-sending-password-to-user-email-secure
Because it's not the user himself who gets registered, I'm trying to find an easy way to inform him of his login access.
@MathieuMourareau Once they register, send them a link to create a password. This would be more secure. Apart from that, passwords must be hashed.
The ability to decrypt passwords is bad practice to begin with.
You can't decrypt hashed password. The good practice is to use Laravel resetting password feature.
Once you have defined the routes and views to reset your user's passwords, you may simply access the route in your browser at /password/reset. The ForgotPasswordController included with the framework already includes the logic to send the password reset link e-mails, while the ResetPasswordController includes the logic to reset user passwords.
After a password is reset, the user will automatically be logged into the application and redirected to /home
https://laravel.com/docs/5.4/passwords
Based on the comments:
Once a user is registered, send them a link to create a new password.
If you don't want to allow them to access other pages until they create a new password, add middleware to check whether the user has created a new password or not.
From the viewpoint of security, a password must be stored as a hashed value. You shouldn't use encryption/decryption for passwords.
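The one-way nature of password hashing, which is why Laravel's reset flow sends a link rather than the old password, can be illustrated with a short stdlib-only Python sketch (PBKDF2 here stands in for Laravel's bcrypt):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); PBKDF2-HMAC-SHA256 is a one-way transform."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    # Re-derive from the candidate and compare in constant time; the stored
    # digest never reveals the original password.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
```

Because only the digest is stored, a "forgot password" flow can only issue a reset link or a new temporary password; it can never send the old one back.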
Thanks for your reply! Which route should I put inside the email? The password reset one?
Validating users upon login and giving them limited rights to database
I'm new to PHP and MySQL, trying to create a website which users can use to input data into a database. An example of what I'm trying to do would be a database for various banks and the various services they provide. For example, a user from Citibank creates an account on my website; he will enter his LoginID, Password, Email & the name of his bank (which would be Citibank in this case).
Upon successfully creating an account and logging in, he would be the "Admin" account for Citibank, with the rights to Create, Delete, Insert & View all data for Citibank ONLY. He would also be able to further create & delete Outlets, and create/delete a SubUser account for each outlet. The SubUser account would have all the rights the Admin account has, minus the right to create further SubUsers, BUT restricted to only the Outlet it is in charge of. Both Admin and Sub accounts would log in through the website.
I've listed the rights which I think the accounts would need:
Rights to database
SELECT,INSERT,UPDATE,DELETE,(JOIN?)
I am currently thinking of implementing the following table for the Admin account:
Admin
+----------+-----------+------------+------------+
| BankID | BankName | UserName | Password |
+----------+-----------+------------+------------+
| 1 | Citibank | CitiAdmin | PassCiti |
| 2 | StanChart | StanAdmin | PassStan |
| 3 | HSBC | HSBCAdmin | PassHSBC |
+----------+-----------+------------+------------+
Where the BankID would be of type SERIAL, while the BankName, UserName and Password would be entered by the user upon creation of his account. The reason I do not split the above table into two tables, one containing the BankID and BankName and the other containing UserName & Password, is ease of use: I feel that splitting it up would be needless over-normalising.
While the following table would be for the Subuser accounts:
SubUsers
+------+------------+--------------+-------------+
| ID | OutletID | Name | Password |
+------+------------+--------------+-------------+
| 1 | 1 | CitiSub1 | PassSub1 |
| 2 | 1 | CitiSub2 | PassSub2 |
| 3 | 2 | StanSub1 | PassSub1 |
| 4 | 2 | StanSub2 | PassSub2 |
| 5 | 3 | HSBCSub1 | PassSub1 |
| 6 | 4 | HSBCSub2 | PassSub2 |
+------+------------+--------------+-------------+
By doing this, upon user login, I would take the user's entry from $_POST['User'] and $_POST['Pass'] and match it against the data drawn from the query
$query = "SELECT UserName, Password FROM Admin UNION SELECT Name, Password FROM SubUsers";
and if there is a match, the user will be logged in. By doing this I am able to achieve a first level of verification where only registered users are able to access the database.
However, how would I restrict access for both the Admin account AND the SubUser account? The Admin account should only be able to access data pertaining to his Bank, and the SubUser account only data pertaining to his Outlet.
I've considered using PHP sessions to record data about the user when logging in, by changing the login query from
$query = "SELECT UserName, Password FROM Admin UNION SELECT Name, Password FROM SubUsers";
to a query that first selects UserName and Password from Admin and checks $_POST['User'] and $_POST['Pass'] against them; if there isn't a match, it would draw Name and Password from SubUsers and repeat the process, and would log a result into the session depending on whether the match happened in the Admin table or the SubUsers table.
However, doing this would only change the web pages available to the user upon login, and not their actual access to the database itself. The closest solution I can think of using this method would be to create a brand-new set of web pages for the user depending on whether the user is an Admin or a SubUser, which I would rather NOT do as I am still new to programming, and increasing the number of web pages would only increase the number of bugs that will inevitably show up.
Are there any other methods to restrict user access to the database, and/or other ways to optimise what I'm trying to do?
I've looked at How to configure phpMyAdmin for multiple users - each with access to their database only but it's a little too technical for me and seems to be dealing with user access to databases instead of tables.
Any advise/help/guidance will be MUCH appreciated.
Are you simply providing MySQL hosting for your customers, such that they will connect their own applications directly to your MySQL server, or are your customers interfacing with your application which in turn is the only thing that connects to your MySQL server? I suspect the latter, but if that's the case then your customers should not be able to specify their own SQL and you can implement your access control at the application layer.
@eggyal The customers are interfacing with my application which will then connect to my MySQL server. I'm assuming that by "application" you mean the website which the users will log in to. Could I have some examples of how I could go about implementing access control at the application layer? Also, what do you mean by the customers specifying their own SQL? If it helps clarify things, the website they will be logging in to only consists of forms, which they will use to input data into my database.
Right, so by "implement access control at the application layer" I mean that your website will identify who the users are and only issue database commands that affect their records.
What an interesting and thorough question. It is rather of the type that requires a book to answer thoroughly though. I admire your ambition.
First design it properly.
Ask yourself what actions users might need to do and give each one a name. Once you store the privilege names in a table, you can assign them to roles or users as required. You authenticate the ability to do each thing at the PHP level, either by checking before each action that the appropriate privilege is applied, or by writing each action as a function that includes authentication of privileges.
Put the bank id and branch id as Foreign Keys in each table. That way you simply include the bankid and branchid as 'AND' additions to your WHERE clause. This way you need only one database, but you control who gets to see what using intelligently written SQL.
If you need the users to be able to run SQL on their data, ensure that all queries are run through a function that adds the requisite AND (bankid='%s' AND branchid='%s') clause. This effectively separates the data. You can add a check of your returned data if you need to and also consider using encryption (different key for each bank) though that is going a bit far.
This pretty much is what is meant by application-layer control. The PHP application selects what data you have access to based on stored privileges. I cannot overstate how important it is to plan your privileges and give them meaningful names and verbose descriptions. It seems a lot of work when you start, but it makes the difference; it certainly beats having to create a new database for each user. Don't worry about filling up your SERIAL ids - a BIGINT can handle a million transactions per second for over 200,000 years.
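As a concrete sketch of that WHERE-clause scoping (SQLite stands in for MySQL here, and the table and column names are illustrative): every read goes through one helper that appends the bank predicate, and, for sub-users, the outlet predicate too:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE services (bank_id INTEGER, branch_id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO services (bank_id, branch_id, name) VALUES (?, ?, ?)",
    [(1, 1, "Citi loans"), (1, 2, "Citi cards"), (2, 1, "StanChart loans")],
)

def services_for(conn, bank_id, branch_id=None):
    """All reads funnel through this helper, which adds the scoping AND clauses."""
    sql, params = "SELECT name FROM services WHERE bank_id = ?", [bank_id]
    if branch_id is not None:  # a SubUser is additionally pinned to one outlet
        sql += " AND branch_id = ?"
        params.append(branch_id)
    return sorted(row[0] for row in conn.execute(sql, params))
```

An Admin session would call services_for(conn, bank_id) with the bank id held in $_SESSION; a SubUser session would also pass its outlet id.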
Once designed, authentication is the next hurdle. I reckon you should do this before you write anything fancy as it's really quite hard to get right.
What I would do is:
Collect bank,branch and username (allow these to autocomplete in your HTML) and then password.
Store the password as an SHA1 or MD5 hash.
Once authenticated, you pop the usernumber, bank and branch numbers into your $_SESSION. They can then easily be retrieved for SQL later.
For added security, though increased complexity, you can also pick these numbers out of the database as required. Some recommend storing them in a separate session table.
There is so much more to say about how to design this sort of project and much of it can be found elsewhere on this site so I will not prattle on further. Please feel free to ask if anything is unclear.
I hope this helps.
EDIT:
Handling the privileges.
There is no simple way to handle privileges. I use a single header file for all my pages that automatically extracts privilege information:
a. Identify the user, usually picking the usernumber from $_SESSION.
b. Identify the user's privileges from the DB table users_privileges.
c. Create an array containing the privilege names.
d. For Each through the array to compare whenever a privilege-required operation is attempted.
This method needs a lot of tables and is perhaps a bit advanced for your needs but if you have the following tables (skeleton details provided here only) it is pretty much infinitely expandable:
roles (role_id,rolename,role_detailed_description)
privileges (privilege_id,privilegename,privilege_detailed_description)
users (user_id,user_details)
users_roles (user_id,role_id) (optional but a good idea)
users_privileges (user_id,privilege_id) - privileges granted to each user
roles_privileges (role_id,privilege_id) - the privileges each role has.
What you do is enter a line in the roles_privileges table linking a role to a privilege. Repeat for all privileges required by the role. Could be a lot. Not a problem.
When a user is added, you grant them a role. I then read the roles_privileges table and present the super-user with a list of the role's privileges as checkboxes, ticked if the privilege would usually be granted, not if otherwise. The super-user deselects or selects from the list as required, then saves the list.
On saving the list, I mark all entries for that user in the users_privileges table as inactive and insert a new line for each privilege. This allows you to track the changes and, importantly, the date the privileges were reviewed, even if they were not changed. It does not end up using much data, as each line in users_privileges consists of three BIGINTs, a bool and 2 dates.
If you never want to grant one user a privilege that their role would not normally possess, then you can simply use roles_privileges and users_roles. This is minimally less data-hungry but notably less flexible.
I will concede the method I have described is a little inelegant, but it provides very good role-based and user-based privilege management whilst keeping the DB in the 4th Normal Form or higher. IMHO it is worth going the extra mile because your application will one day be bigger, and it is far easier to add this stuff now rather than later.
Also, from a beginner's point of view, it is very easy to create dummy data and ensure your SQL joins are working before you embark on something a bit harder.
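For example, with SQLite as a stand-in, dummy data for the skeleton tables above lets you exercise the roles_privileges join before building anything on top of it (here each user carries a single role_id, a simplification of the users_roles table; the privilege names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE privileges (privilege_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE roles (role_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE roles_privileges (role_id INTEGER, privilege_id INTEGER);
CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT, role_id INTEGER);
""")
db.executemany("INSERT INTO privileges VALUES (?, ?)",
               [(1, "create_outlet"), (2, "delete_outlet"), (3, "create_subuser")])
db.executemany("INSERT INTO roles VALUES (?, ?)", [(1, "admin"), (2, "subuser")])
# Admin gets everything; SubUser gets everything except create_subuser
db.executemany("INSERT INTO roles_privileges VALUES (?, ?)",
               [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)])
db.execute("INSERT INTO users VALUES (1, 'CitiSub1', 2)")

def user_privileges(db, user_id):
    """The privilege set a page header would load into an array for later checks."""
    rows = db.execute("""
        SELECT p.name
        FROM users u
        JOIN roles_privileges rp ON rp.role_id = u.role_id
        JOIN privileges p ON p.privilege_id = rp.privilege_id
        WHERE u.user_id = ?""", (user_id,))
    return {r[0] for r in rows}
```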
Thanks for your detailed reply Robert. What I am currently doing for my login authentication is to first take the username and check it against the list of all "Admin" users in the database; if there is a match, it then matches the password entered against the password stored in the database. I will then store the username, plus a value indicating the type of user, in $_SESSION. The user will then be directed to different sets of web pages depending on the user type.
The problem I am facing now concerns password encryption. From what I could glean from Google, hashes like SHA1, SHA2 and MD5 are all one-way processes with no method of getting the original string back once it has been hashed. If so, suppose a user has forgotten his password and requests it back: how would I be able to retrieve his password and send it to him if I am unable to un-hash the stored string?
Regarding the hashing: you don't give the user their password back. The whole point of a hash is that the data can be compared but not retrieved, so if the DB is hacked, you don't lose your passwords. No-one else can read them either, not even you, which gives the user security. Just send them a random string by email as a replacement password. I usually either make the password horrible to remember, to encourage changing it, or require the user to select a new password.
I have added a brief description of a normalised database structure that should be able to handle the storage of any privileges for just about any system. Have a peek.
How to install AMD Radeon HD7000 drivers for Ubuntu 16.04?
I searched on Google, but everything I could find was about AMD removing support for Ubuntu 16.04. Lots of people mentioned there is an open-source driver, but I couldn't find it, and I am not sure I could set it up even if I did. So, it would be great if someone could tell me how to install the drivers for an AMD Radeon HD7850 on Ubuntu 16.04.
...related: http://askubuntu.com/questions/306963/why-dont-i-need-to-install-any-drivers-for-ubuntu
Possible duplicate of How to install Radeon Open Source Driver?
This is question is answered here. How to install Radeon Open Source Driver?
Purge AMD catalyst
sudo apt-get purge fglrx
and install xserver-xorg-video-ati. Make sure you have the open-source ATI driver xf86-video-ati first (or let dpkg resolve it).
Note: the link does not mention fglrx on 16.04; it was edited while I was posting. The release notes for 16.04 say fglrx will be uninstalled and currently only the open source driver works. AMD is updating its drivers due to incompatibility with the Linux kernel. https://wiki.ubuntu.com/XenialXerus/ReleaseNotes#fglrx and: https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2016-March/016315.html But it seems a beta is out there if you are not installing in your main working install. http://www.phoronix.com/scan.php?page=article&item=amdgpu-pro-rx460&num=1
How to make the variable to aggregate data frame columns by reactive?
In the below MWE code and as shown in the image below, the aggregate() function is used to sum columns in a data frame. I'd like the user to be able to choose which variable to aggregate by, either Period_1 or Period_2 via clicking the radio button. Currently the below is coded only for Period_1.
How would I modify the $Period... in each aggregate() function, to reflect the user radio button input? So the user can also aggregate by Period 2 in this example.
MWE code:
library(shiny)
data <- data.frame(Period_1=c("2020-01","2020-02","2020-03","2020-01","2020-02","2020-03"),
Period_2=c(1,2,3,3,1,2),
ColA=c(10,20,30,40,50,60),
ColB=c(15,25,35,45,55,65)
)
ui <-
fluidPage(
h3("Data table:"),
tableOutput("data"),
h3("Sum the data table columns:"),
radioButtons(
inputId = 'vetaDataView2',
label = NULL,
choices = c('By period 1','By period 2'),
selected = 'By period 1',
inline = TRUE
),
tableOutput("totals")
)
server <- function(input, output, session) {
sumColA <- aggregate(data$ColA~Period_1,data,sum)
sumColB <- aggregate(data$ColB~Period_1,data,sum)
totals <- as.data.frame(c(sumColA, sumColB[2]))
colnames(totals) <- c("Period_1","Sum Col A","Sum Col B")
output$data <- renderTable(data)
output$totals <- renderTable(totals)
}
shinyApp(ui, server)
If you only have two options, you could hard code the computation inside renderTable (or inside a reactive that is called inside renderTable) using if/else.
One option to achieve your desired result would be to use paste and as.formula to create the formula to aggregate your data base on the user input:
Note: To make my life a bit easier I switched to choiceNames and choiceValues.
library(shiny)
data <- data.frame(
Period_1 = c("2020-01", "2020-02", "2020-03", "2020-01", "2020-02", "2020-03"),
Period_2 = c(1, 2, 3, 3, 1, 2),
ColA = c(10, 20, 30, 40, 50, 60),
ColB = c(15, 25, 35, 45, 55, 65)
)
ui <-
fluidPage(
h3("Data table:"),
tableOutput("data"),
h3("Sum the data table columns:"),
radioButtons(
inputId = "vetaDataView2",
label = NULL,
choiceNames = c("By period 1", "By period 2"),
choiceValues = c("Period_1", "Period_2"),
selected = "Period_2",
inline = TRUE
),
tableOutput("totals")
)
server <- function(input, output, session) {
sumColA <- reactive({
fmlaA <- as.formula(paste("ColA", input$vetaDataView2, sep = " ~ "))
aggregate(fmlaA, data, sum)
})
sumColB <- reactive({
fmlaB <- as.formula(paste("ColB", input$vetaDataView2, sep = " ~ "))
aggregate(fmlaB, data, sum)
})
output$data <- renderTable(data)
output$totals <- renderTable({
totals <- as.data.frame(c(sumColA(), sumColB()[2]))
colnames(totals) <- c(input$vetaDataView2, "Sum Col A", "Sum Col B")
totals
})
}
shinyApp(ui, server)
#>
#> Listening on http://<IP_ADDRESS>:6231
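The core trick here, choosing the grouping variable from user input at run time, is language-independent. A stdlib-only Python rendering of the same table and totals, purely illustrative and not part of the Shiny app:

```python
data = [
    {"Period_1": "2020-01", "Period_2": 1, "ColA": 10, "ColB": 15},
    {"Period_1": "2020-02", "Period_2": 2, "ColA": 20, "ColB": 25},
    {"Period_1": "2020-03", "Period_2": 3, "ColA": 30, "ColB": 35},
    {"Period_1": "2020-01", "Period_2": 3, "ColA": 40, "ColB": 45},
    {"Period_1": "2020-02", "Period_2": 1, "ColA": 50, "ColB": 55},
    {"Period_1": "2020-03", "Period_2": 2, "ColA": 60, "ColB": 65},
]

def totals(rows, by):
    """`by` plays the role of input$vetaDataView2: the grouping column name."""
    out = {}
    for r in rows:
        a, b = out.get(r[by], (0, 0))
        out[r[by]] = (a + r["ColA"], b + r["ColB"])
    return out
```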
Error while doing bundle install with jruby
Currently, I am using JRuby <IP_ADDRESS> (2.5.8) and I am getting the error below:
2024-03-19T16:27:22.302+05:30 [main] WARN FilenoUtil : Native subprocess control requires open access to the JDK IO subsystem
Pass '--add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED' to enable.
Your RubyGems version (3.1.6) has a bug that prevents `required_ruby_version` from working for Bundler. Any scripts that use `gem install bundler` will break as soon as Bundler drops support for your Ruby version. Please upgrade RubyGems to avoid future breakage and silence this warning by running `gem update --system 3.2.3`
/Users/***********/.rvm/gems/jruby-<IP_ADDRESS>/gems/bundler-2.3.26/lib/bundler/vendor/net-http-persistent/lib/net/http/persistent.rb:162: warning: Process#getrlimit not supported on this platform
warning: thread "Ruby-0-Thread-1: /Users/***********/.rvm/rubies/jruby-<IP_ADDRESS>/lib/ruby/stdlib/open3.rb:200" terminated with exception (report_on_exception is true):
NotImplementedError: waitpid unsupported or native support failed to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
waitpid at org/jruby/RubyProcess.java: 936
git version 2.40.0
warning: thread "Ruby-0-Thread-4: /Users/***********/.rvm/rubies/jruby-<IP_ADDRESS>/lib/ruby/stdlib/open3.rb:200" terminated with exception (report_on_exception is true):
NotImplementedError: waitpid unsupported or native support failed to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
waitpid at org/jruby/RubyProcess.java: 936
NotImplementedError: waitpid unsupported or native support failed to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
waitpid at org/jruby/RubyProcess.java: 936
I have tried different versions of JRuby, but it is giving the above error. Below are the versions in my M2 Mac:
jruby-<IP_ADDRESS>, jruby-<IP_ADDRESS>, jruby-<IP_ADDRESS>
Feels like the net-http-persistent gem is not fully compatible with JRuby (see the note about it in the source code). Also, the message suggests reading about Native Libraries in the JRuby wiki.
Please post code, errors, sample data or textual output here as plain-text, not as images that can be hard to read, can’t be copy-pasted to help test code or use in answers, and are barrier to those who depend on screen readers or translation tools. You can edit your question to add the code in the body of your question. For easy formatting use the {} button to mark blocks of code, or indent with four spaces for the same effect. The contents of a screenshot can’t be searched, run as code, or easily copied and edited to create a solution.
that's a challenging color set in your terminal
The first warning message in your screenshot says:
Native subprocess control requires open access to the JDK IO subsystem
Pass --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED to enable.
The errors below are related to the getrlimit and waitpid methods, which are used for subprocess control. That being the case, I would strongly suggest starting off by passing the recommended --add-opens options to the JVM. (See Configuring JRuby for how to pass options to the JVM when starting JRuby.)
If that doesn't help, your error messages also include a link to the Native Libraries page on the JRuby wiki, which is likely worth taking a look at. While not all the advice there will necessarily apply in your situation, one of the first recommendations is to pass the -Xnative.verbose=true command line option to JRuby (or -Djruby.native.verbose=true directly to the JVM), which should at least give you more detailed output about what's going wrong.
Since you say you're on Apple Silicon (M2), you might also want to take a look at reports like this one. Long story short, older JRuby versions may not fully support things like native subprocess control on Apple Silicon. I would recommend using the latest version (currently JRuby <IP_ADDRESS>), which has fixed many of the problems with earlier versions. IME using M2, the latest 9.3 branch versions should also be OK, but 9.2 may be iffy.
| common-pile/stackexchange_filtered |
Calculating DCT with DFT
The discrete cosine transform is given by
(DCT($f$))$_j = \sum\limits_{k=0}^{n-1}f_k \cos\left(\frac{\pi}{n}\left(k+\frac{1}{2}\right)j\right)$
with reference points $x_k=\frac{\pi}{2n}(2k+1)$
How do we show that $(\text{DCT}(f))_j = 2n\,(\text{DFT}(\hat{f}))_j$
where $\hat{f}=(0,f_0,0,f_1,\dots,0,f_{n-1},0,f_{n-1},0,f_{n-2},\dots,0,f_1,0,f_0)$?
After understanding better I came up with a solution myself.
The DFT($f$)$_j$ I am using is
DFT($f$)$_j = \frac{1}{n}\sum\limits_{k=0}^{n-1} f_k e^{-j2\pi\frac{1}{n}ki}$
So $2n$*DFT($\hat{f}$)$_j$ = $2n\frac{1}{4n}\sum\limits_{k=0}^{4n-1} \hat{f}_k e^{-j2\pi\frac{1}{4n}ki}$ = $\frac{1}{2}\sum\limits_{k=0}^{4n-1} \hat{f}_k e^{-j\pi\frac{1}{2n}ki}$.
Now we see that only the odd $\hat{f}_k$ matter here so we rewrite it as:
$\frac{1}{2}\sum\limits_{k=0}^{2n-1} \hat{f}_{2k+1} e^{-j\pi\frac{1}{2n}(2k+1)i}$ = $\frac{1}{2}(\sum\limits_{k=0}^{n-1} \hat{f}_{2k+1} e^{-j\pi\frac{1}{2n}(2k+1)i}$+ $\sum\limits_{k=n}^{2n-1} \hat{f}_{2k+1} e^{-j\pi\frac{1}{2n}(2k+1)i})$
Let $f=(f_0,f_1...,f_{n-1})$. We see that we can replace $\hat{f}_{2k+1}$ in the first sum with $f_k$. We then want the other sum to also "be" $f$ and let the second sum run from the end to the start meaning :
$\frac{1}{2}(\sum\limits_{k=0}^{n-1} f_k e^{-j\pi\frac{1}{2n}(2k+1)i}$+ $\sum\limits_{k=n}^{2n-1} \hat{f}_{2(2n-1-k)+1} e^{-j\pi\frac{1}{2n}(2(2n-1-k)+1)i})$
We can't put those two sums together yet, so we substitute $m=2n-1-k$ which means $k=2n-1-m$ and brings us to:
$\frac{1}{2}(\sum\limits_{k=0}^{n-1} f_k e^{-j\pi\frac{1}{2n}(2k+1)i}$+ $\sum\limits_{m=0}^{n-1} \hat{f}_{2m+1} e^{-j\pi\frac{1}{2n}(2m+1)i})$
Finally, for equal indices $m=k$ we have $f_k = \hat{f}_{2m+1}$, so we can take the two sums together:
$\frac{1}{2}\sum\limits_{k=0}^{n-1} f_k \left(e^{-j\pi\frac{1}{2n}(2k+1)i}+e^{-j\pi\frac{1}{2n}(2(2n-1-k)+1)i}\right)$=$\frac{1}{2}\sum\limits_{k=0}^{n-1} f_k \left(e^{-j\pi\frac{1}{2n}(2k+1)i}+e^{-j\pi\frac{1}{2n}(4n-2k-1)i}\right)$ = $\frac{1}{2}\sum\limits_{k=0}^{n-1} f_k \left(e^{-j\pi\frac{1}{2n}(2k+1)i}+e^{-2\pi ji}e^{j\pi\frac{1}{2n}(2k+1)i}\right)$ = $\frac{1}{2}\sum\limits_{k=0}^{n-1} f_k \left(e^{-j\pi\frac{1}{2n}(2k+1)i}+e^{j\pi\frac{1}{2n}(2k+1)i}\right)$, using $e^{-2\pi ji}=1$ for integer $j$.
Now remember that $\cos(x)=\frac{1}{2}(e^{ix}+e^{-ix})$, i.e. $2\cos(x)=e^{ix}+e^{-ix}$. Concluding you have:
$\frac{1}{2}\sum\limits_{k=0}^{n-1} f_k \cdot 2\cos\left(\frac{\pi}{2n}(2k+1)j\right)=\sum\limits_{k=0}^{n-1} f_k \cos\left(\frac{\pi}{n}\left(k+\frac{1}{2}\right)j\right)$ which is what had to be shown.
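As a sanity check, the identity can also be verified numerically. The sketch below (not part of the original derivation) implements both sides directly with the conventions used above, i.e. DFT($g$)$_j = \frac{1}{N}\sum_k g_k e^{-2\pi i jk/N}$ and DCT($f$)$_j = \sum_k f_k \cos(\frac{\pi}{n}(k+\frac12)j)$, and compares them for a small example:

```javascript
// Numerical check of DCT(f)_j = 2n * DFT(fhat)_j for a small example.
function dct(f) {
  const n = f.length;
  return f.map((_, j) =>
    f.reduce((s, fk, k) => s + fk * Math.cos((Math.PI / n) * (k + 0.5) * j), 0)
  );
}

// fhat is symmetric, so its DFT is real; computing the real part suffices.
function dftReal(g) {
  const N = g.length;
  return g.map((_, j) => {
    let re = 0;
    for (let k = 0; k < N; k++) re += g[k] * Math.cos((2 * Math.PI * j * k) / N);
    return re / N;
  });
}

// fhat = (0, f_0, 0, f_1, ..., 0, f_{n-1}, 0, f_{n-1}, ..., 0, f_1, 0, f_0)
function mirror(f) {
  const fhat = [];
  for (let k = 0; k < f.length; k++) fhat.push(0, f[k]);
  for (let k = f.length - 1; k >= 0; k--) fhat.push(0, f[k]);
  return fhat;
}

const f = [1, 2, 3, 4];
const n = f.length;
const lhs = dct(f);
const rhs = dftReal(mirror(f)).slice(0, n).map(x => 2 * n * x);
console.log(lhs.every((v, j) => Math.abs(v - rhs[j]) < 1e-9)); // true
```

With $f=(1,2,3,4)$ the two sides agree to within floating-point error.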
JavaScript: Compare the structure of two JSON objects while ignoring their values
I use a Node.js based mock server for specifying and mocking API responses from a backend. It would greatly help to have some kind of check if both backend and frontend comply with the specification. In order to do that, I need some kind of method of comparing the structure of two JSON Objects.
For example those two objects should be considered equal:
var object1 = {
'name': 'foo',
'id': 123,
'items' : ['bar', 'baz']
}
var object2 = {
'name': 'bar',
'items' : [],
'id': 234
}
Any ideas how I would go about that?
I think this post may help you:
http://stackoverflow.com/questions/1068834/object-comparison-in-javascript
Best regards
This is an elegant solution. You could do it simply like this:
var equal = true;
for (var i in object1) {
if (!object2.hasOwnProperty(i)) {
equal = false;
break;
}
}
If the two elements have the same properties, then, the var equal must remain true.
And as function:
function compareObjects(object1, object2){
for (var i in object1)
if (!object2.hasOwnProperty(i))
return false;
return true;
}
This will still return true if object2 has a property that object1 doesn't have. You can do the full check with compareObjects(object1, object2) && compareObjects(object2, object1), but it's not so elegant. I guess there is a better solution now in 2022 but I don't know it yet :)
You can do that using hasOwnProperty function, and check every property name of object1 which is or not in object2:
function hasSameProperties(obj1, obj2) {
return Object.keys(obj1).every( function(property) {
return obj2.hasOwnProperty(property);
});
}
Demo
That is a good starting point. The problem here is that it doesn't work for nested objects or arrays. I extended your solution for nested objects, but there are still issues with arrays in case they have different lengths: Demo
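Following up on the last comment, here is a hedged sketch of one way to extend the key comparison recursively. The helper name sameStructure and its matching rules (any two arrays match regardless of contents, and leaf values match by type) are choices made for this sketch, not something taken from the answers above:

```javascript
// Recursively compare the *structure* of two values, ignoring their contents.
function sameStructure(a, b) {
  if (Array.isArray(a) || Array.isArray(b)) {
    // treat any two arrays as structurally equal (contents and length ignored)
    return Array.isArray(a) && Array.isArray(b);
  }
  if (a !== null && b !== null && typeof a === 'object' && typeof b === 'object') {
    const ka = Object.keys(a).sort();
    const kb = Object.keys(b).sort();
    return ka.length === kb.length &&
      ka.every((key, i) => key === kb[i] && sameStructure(a[key], b[key]));
  }
  return typeof a === typeof b; // leaves: compare types only
}

const object1 = { name: 'foo', id: 123, items: ['bar', 'baz'] };
const object2 = { name: 'bar', items: [], id: 234 };
console.log(sameStructure(object1, object2)); // true
```

Whether arrays should also be compared element-by-element depends on the API contract you are mocking; the rule is easy to swap out in the first branch.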
What is a good book to learn about how to think about maths in higher dimensions
I am really struggling with maths at the moment. People are telling me a sphere is just as abstract as a Klein bottle.
I'm not comfortable with the way I think about math anymore; I think I am confusing it a bit with reality and physics.
For example, I watched “thinking outside the 10-d box” by 3blue1brown and it’s just not sitting with me. I don’t understand the extensions of Pythagorean distance to higher dimensions. If you claim the spheres act differently in higher dimensions why define fundamental properties about distance using a 3 dimensional framework?
Any books that teach me how to think about it proper and discuss its abstract mess might just save me. Otherwise I’m going to have to end my time studying the subject. Thank you.
Flatland is a classic.
What will also contribute to "save" you is to avoid online math videos; they are at best entertaining but useless for making progress. I know what I am talking about: for the sake of curiosity I watched many, really many of them. None convinced me concerning teaching; the best I remember was a journey through the Mandelbrot set with fascinating pictures.
I have read flatland but I am still not comfortable. I want rigorous mathematics and a mathematician simply explaining their perception. I know I’m probably looking for a lost cause but it’s really the only thing that can save me from giving up.
"a sphere is just as abstract as a Klein bottle." - I disagree
Mathematically , it is no problem to generalize the Pythagorean theorem to higher dimensions. But visualization is only possible for the cases of two and three dimensions. So my question : Are you rather interested in the links between mathematics and physics ? Or just how we can describe things (like a 4-dimension hypercube) which do not exist in reality ?
Hi Peter, all I want is some kind of text that teaches me how I should be interpreting higher dimensional space purely in terms of mathematics. When I think of a hypersphere do I think of the 16 vertices in R4? I want something that says to me there is no visualisation at this point. It talks about what the hell I'm actually doing and discusses the abstraction of it, compared with the abstraction of, say, a sphere.
You want a document showing you how to "think well / have good representations". Otherwise said, you want advice on whether a given concept 1) persists (is still a good guide, can be maintained for $n > 3$), or 2) doesn't persist. Examples of 1): the notion of vertices, edges, faces generalized with hyperfaces (what about the generalization of the Euler relationship F-E+V=2 for a polyhedron?). (Counter-)examples for 2): the notion of interior and exterior, which is meaningless for example for a hypercube.
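For what it's worth, the generalization of the Euler relationship asked about above is a known result (stated here as an aside, not from the original thread): for a convex polytope in dimension $d$ with $f_i$ faces of dimension $i$, the Euler–Poincaré formula reads

$$\sum_{i=0}^{d-1}(-1)^i f_i = 1-(-1)^d.$$

For $d=3$ this gives the familiar $V-E+F=2$, and for the $4$-dimensional hypercube ($16$ vertices, $32$ edges, $24$ square faces, $8$ cubic cells) it gives $16-32+24-8=0$, consistent with $1-(-1)^4=0$.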
This kind of question has already been asked in different places. See for example here or there.
My answer to Where can I start learning about higher dimensions in mathematics? lists a lot of references, many of which could be read by high school students (e.g. The Fourth Dimension Simply Explained edited by Henry Parker Manning).
Task hangs if terminal window doesn't lose focus
While debugging the problem, I minimized the code to the one below to understand why I have the problem. The program is a console application.
The code simply creates an instance of a third-party class and subscribes to an event that does nothing. When I run the code as is:
Line (9) is never executed unless the terminal window loses focus or I press the Enter key multiple times. It's like it's stuck on line (8). If I replace line (8) with any other method from ThirdParty, such as obj.MethodX(), I get the same problem.
If I delete line (8), then I won't have this problem.
If I remove Task.Run (line 5) and let the code on lines (7-9) run outside of Task.Run, then I won't have this problem.
If I uncomment lines 24-25 and 32-33 then I won't have this problem.
I looked at the ThirdParty constructor through dotPeek. It has a few lazy initializations (Lazy<>) of some plugins and services. So it's likely that calling any obj.MethodX() initializes those plugins and services first. But I still don't understand why it matters if lines 7-9 and 22-34 run on separate threads. It seems as if one of them is starving.
1. class MyService
2. {
3. public void Start()
4. {
5. Task.Run(() =>
6. {
7. var obj = new ThirdParty();
8. obj.ItemChanged += (...) => {};
9. Console.WriteLine("Instantiated ThirdParty");
10. });
11. }
12. }
13. internal class Program
14. {
15. static async Task Main(string[] args)
16. {
17. var service = new MyService();
18. service.Start();
19.
20. await Task.Run(() =>
21. {
22. while (true)
23. {
24. // if (Console.KeyAvailable)
25. // {
26. var read = Console.ReadKey(true);
27. if (read.Key == ConsoleKey.X)
28. {
29. service.Stop();
30. return;
31. }
32. // }
33. // Thread.Sleep(100);
34. }
35. });
36. }
37. }
The terminal will stop reading from your program's stdout when the user (you) selects text, which leads to other Console methods blocking until the terminal continues to read data.
@JeremyLakeman, what do you mean? I don't select any text.
So in other words, "Why is unnamed third party software crashing in a strange way?" Pretty hard to guess. I wonder if you have to press enter to clear an error dialog that you can't see..? Was this third party class designed to work in a console app?
@JohnWu, it doesn't crash. I never said it crashed anything.
Does the third party require a STA thread?
Make the boxes stay still when scrolling down using the scroll bar
When I click a box, I can drag it around the screen. You can click the folder icon to open up the information view, and a scroll bar will appear because there is a lot of text.
Problem: when i use my mouse to scroll the scrollbar, it also drags the boxes as well. How do I make it not move the box when I click the scroll bar to move the bar?
I am using jsPlumb.draggable() to enable dragging.
jsfiddle: http://jsfiddle.net/7PuN3/2/
What browser are you using? I'm not seeing this behaviour you've described in Chrome
Firefox is what I'm using
I would stop/start dragging:
$(function(){
    $('#1 .button_wrap').on('click', function(e){
        e.stopPropagation();
        $(".info").html(newHtml).show();
        jsPlumb.setDraggable("1", false);
    });
});
$(function(){
    $("#1").on("click", ".info .ui-icon-close", function(){
        $(".info").hide();
        jsPlumb.setDraggable("1", true);
    });
});
then in your css add this class, not to let the div fade when dragging is disabled:
.ui-state-disabled{opacity: 1;}
do you know why the box seems more gray after making it not draggable?
@JennyC no, trying to figure it out :)
I think it might be a jsplumb thing
@JennyC I think hacking it this way, would work fine: http://jsfiddle.net/7PuN3/6/
@jennyC discovered this issue, when disabling it is adding this class to the div: .ui-state-disabled fixed it by putting this class into css: .ui-state-disabled{opacity: 1;} check it out here: http://jsfiddle.net/7PuN3/16/
Quick look suggests to me, use relative or absolute positioning not fixed on button wrap. On my mobile it seems to work fine though.
Tried those 2 options, it still moves the box. I'm using Firefox btw
How to update field all rows using auto increment?
How to update field all rows using auto increment?
I need to update the field num from 1 up to the count of rows. How can I apply AUTO INCREMENT for an update operation?
I have tied:
UPDATE table set num = (select num from table order by num limit 1) + 1;
You want to update num in every row of the table to the same value?
To the next value, from 0 to rows.length
Is this what you mean? https://stackoverflow.com/questions/2643371/how-to-renumber-primary-index
What you do is essentially join a row number to every row and update the num column with it. For the join part you need a unique identifier; in the example it is id.
UPDATE table AS t
JOIN
(
SELECT @rownum:=@rownum+1 rownum, id
FROM table
, (select @rownum := 0) rn
) AS r ON t.id = r.id
SET t.num = r.rownum
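As an aside not in the answer above: on MySQL 8.0 and later, the user-variable trick can be replaced by the ROW_NUMBER() window function (your_table, id, and num below are placeholders for the actual names):

```sql
UPDATE your_table AS t
JOIN (
    SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rownum
    FROM your_table
) AS r ON t.id = r.id
SET t.num = r.rownum;
```

The derived table is materialized before the update runs, which is what makes selecting from the table being updated possible here.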
When I take a Gaussian surface inside an insulating solid sphere, why does the outer volume have no effect on the electric field?
Say I try to find the magnitude of the electric field at any point within an insulating solid sphere. I know that in the case of a conductor, the electric field within it is 0. However, I have not learned anything about an insulator, so I assume that it would not be 0.
I used Gauss' Law and calculated the charge of the volume within the Gaussian surface, the radius of which is equal to the distance from the point of interest to the center of the sphere. So I got the right answer, but I want to know the physics behind it. Why does the remaining volume of the insulating sphere, just outside the Gaussian surface, have no effect on the electric field at that point? Even to me, my question sounds flawed as I am pretty much asking why an insulator has no effect on an electric field. However, I just don't think it would be that simple.
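For reference, the calculation described above can be sketched as follows (assuming, as is standard in this problem, a uniformly charged insulating sphere of radius $R$, total charge $Q$, and charge density $\rho = \frac{3Q}{4\pi R^3}$). For a concentric spherical Gaussian surface of radius $r \le R$:

$$\oint \vec{E}\cdot d\vec{A} = E\,4\pi r^2 = \frac{Q_\text{enc}}{\varepsilon_0} = \frac{\rho\,\frac{4}{3}\pi r^3}{\varepsilon_0} \quad\Longrightarrow\quad E(r) = \frac{\rho\, r}{3\varepsilon_0} = \frac{Qr}{4\pi\varepsilon_0 R^3},$$

so the field grows linearly with $r$ inside the sphere and matches the exterior value $\frac{Q}{4\pi\varepsilon_0 r^2}$ at $r=R$.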
I'm not sure how to properly explain this without a chalkboard, but it's worth remembering that using Gauss's law usually requires the shape to have a certain symmetry. In this case, it's the symmetry of the object that makes the outside piece not affect the electric field.
Related: http://physics.stackexchange.com/q/150238/2451 , http://physics.stackexchange.com/q/18446/2451 , and links therein.
This is somewhat similar to why the rest of the earth doesn't influence the gravitational field inside it. By the same logic, the net electric force of all of the charges on 1 half of the outer side cancel each other due to the presence of corresponding charges on the other half, resulting in no net field due to the outer shell charges.
That makes sense to me. This reminds me of calculating the electric field producdd by a circular ring centered on a horizontal axis.
This is the same, just extrapolated into 3D
I totally see that now. So it's all about the symmetry of the object that cancels out each other's electric field, not the fact that it's an insulator?
As was said in the comment to your question by a person far more qualified to explain this than I am. Probably a Resnick and Halliday will help you farther, like it did for me.
My course does use that book; however, I have been relying mostly on the professor's lectures as they have been more relevant to the extremely difficult homework problems... but thank you for the advice!
I'm less qualified than any of you out here. I'm a 12th standard student. Please forgive me for any errors or misconceptions.
Also, insulators are capable of induced dipole formation or charge polarization. Further reading on insulators and dielectrics can help.
There certainly does seem to be confusion here. It is a question of symmetry, but not just the symmetry of the shape and Gaussian surface.
It is electric charges that are responsible for electric fields, whether you are talking about insulators or conductors. You don't say, but I am assuming you are dealing with an insulator that has a certain amount of charge distributed uniformly within its volume?
If so, then its spherical shape ensures the symmetry which enables you to use Gauss's law to solve the problem. If the shape was non-spherical or the charge was not uniformly distributed, then although Gauss's law is of course still completely applicable, in practice it is very difficult to apply in a simple analytical way.
A "Gaussian surface" that you construct should have the electric field lines either perpendicular or parallel to the surface. This is so that you can easily calculate the electric flux on the LHS of Gauss's law without having to worry about scalar products or resolving the electric field in the direction of the surface. If the charge is uniformly distributed and the insulator is spherical, then you can be sure that the electric field also has spherical symmetry and that the electric field is parallel to the surface vector of a spherical Gaussian surface with the same central point. In this case, the electric fields produced by the charges distributed in the material outside your Gaussian surface can be shown to cancel out inside that surface. This is often called the shell theorem. However, if the electric field was not spherically symmetric, because perhaps the charge was not uniformly distributed in the sphere, then a simple application of Gauss's law would not be possible and the fields due to charges outside a spherical surface would not cancel inside that surface.
In other words, the symmetry that you exploit to solve this problem is a symmetry of the
chosen surface you construct and an assumption that the electric field is also symmetric.
The crucial difference between insulators and conductors is that in conductors the charges can rearrange themselves in response to asymmetries in the electric field in a way that can remove these asymmetries.
Who says the outside charges have no effect? Gauss's law involves the net field due to all charges, inside or outside the Gaussian surface; it is only the enclosed charge that appears on the right-hand side.
Is possible to change color of ScrollBar?
I changed in ScrollView
android:fadeScrollbars="false"
to ScrollBar be visible and it works fine. My question is possible to change color of ScrollBar ? ( Default is gray and my background is gray so there is small contrast between ).
Scrollbar color varies between vendors though
You can with android:scrollbarThumbVertical="@drawable/yourdrawable"
In my case, for instance, yourdrawable is:
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android">
<gradient android:startColor="#66C0C0C0" android:endColor="#66C0C0C0"
android:angle="45"/>
<corners android:radius="6dp" />
</shape>
how about the color of the fastScrollBar?
You can use:android:scrollbarThumbVertical="@drawable/yourImage"
, where 'yourImage' can be a small 2 pixel image of ur desired color
Key-value style JavaScript array's length keeps being 0
I fill my key value array like this:
/**
* Callback: Gets invoked by the server when the data returns
*
* @param {Object} aObject
*/
function retrieveStringsFromServerCallback(aObject){
for(var i = 0;i < aObject['xml'].strings.appStringsList.length;i++){
var tKey = aObject['xml'].strings.appStringsList[i]["@key"];
var tValue = aObject['xml'].strings.appStringsList[i]['$'];
configManager.addStringElement(tKey,tValue);
}
}
Here is the setter of my object
/**
* @param {string} aKey
* @param {string} aValue
*/
this.addStringElement = function(aKey, aValue){
self.iStringMap[aKey] = aValue;
console.log("LENGTH: "+self.iStringMap.length);
}
I add about 300 key value pairs, and according to the Google Chrome inspector, the iStringMap is filled correctly. However the length of the array still seems to be 0. There must be something wrong. Any help much appreciated.
You can get the count of object keys using:
Object.keys(obj).length
Object.keys() returns an array of keys of an object passed as a first parameter. And after that you're accessing the .length of that array.
ok nice, it prints the correct length. So the array.length operator returns 0 because the array is filled with objects?
@dan: no, it returns 0 because it's not an array, it's an object. Array and object are different containers in JS
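A minimal illustration of the distinction (this assumes iStringMap was initialized as an array, e.g. self.iStringMap = [], and then given string keys, which would explain the observed length of 0):

```javascript
// String keys on an Array become plain object properties and do not affect
// its .length, which only tracks numeric indices.
const map = [];                        // an Array used as a key/value store
map['greeting'] = 'hello';
map['farewell'] = 'bye';

console.log(map.length);               // 0 -- string keys don't count
console.log(Object.keys(map).length);  // 2 -- counts own enumerable keys
```

If you only ever use string keys, initializing with {} instead of [] makes the intent clearer, and Object.keys(...).length gives the entry count either way.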
Letter combination ea
The alphabet letter combination ea makes 6 sounds: bread [bred], teacher ['tiːʧə], break [breɪk], idea [aɪ'dɪə], pageant ['pæʤənt], bearable ['beərəbl].
I know that the fact that the same letter combination can be read in so many different ways is related to the history of the English language. And basically it's easier to consult a dictionary and just remember the correct way of reading.
But still why can ea be read in 6 different ways?
All vowel combinations in English have more than one sound: minute, the i and u are pronounced the same. One way is to look for English Pronunciation Illustrated by John Trim. It will give you all the letter combinations and their sounds. And it's contrasted: Sheep/cheap and ship/chip. I am not giving the phonemes. Once you master the sound system combinations, then we can talk. Bread/shed/fed/lead [past part.] are the same phoneme (sound); teach/preach/leech/lead [present] are the same phoneme.
So, there's no point asking about the same letters. It's the sounds that matter. How are the sounds of English materialized. There are basically 44: you can start here: http://www.dyslexia-reading-well.com/44-phonemes-in-english.html [note there are some differences between AmE and BrE, but in general except for the some a's, (tomato) it's the same.
In the words "idea" and "pageant", the "ea" shouldn't be analyzed as the same digraph that occurs in the other words that you mention.
The pronunciation [aɪ'dɪə] came to have the sound [ɪə] via "smoothing" of "long e" sound followed by a schwa [ə] (the letter "a" regularly corresponds to schwa in word-final position, when the vowel is unstressed).
In the pronunciation ['pædʒənt], the [ə] sound should be thought of as corresponding to the letter "a" only. The letter "e" is silent, and can be grouped with the preceding [ʤ] sound. When the letter "e" or "i" occurs after a consonant letter and before another vowel letter, and does not represent a stressed vowel, it may represent:
an unstressed "happy" vowel sound (often transcribed /i/, and usually pronounced as something like [i] or [ɪj], but pronounced as [ɪ] in old-fashioned "RP" English): e.g. video, ideology
a non-syllabic palatal glide [j]—this mainly occurs when the preceding sound is [n] or [l]: e.g. some pronunciations of spontaneous, chameleon (words that can be pronounced with a glide often have variant pronunciations with a syllabic vowel)
No sound at all:
if the preceding sound is [dʒ] [tʃ], [ʒ] or [ʃ]: e.g. ocean, righteous, courageous, some pronunciations of nausea (some words with [dʒ] [tʃ], [ʒ] or [ʃ] before "e" have two pronunciations: one where the "e" is silent and one where it is pronounced as the "happy" vowel /i/)
in some cases, an "e" before a vowel letter is silent after a consonant sound that is not in the list above if the word is related to a shorter word ending in "silent e": e.g. sizeable. But this word has the variant spelling sizable. Also, this criterion doesn't always give you the right pronunciation: the e in "phraseology" is not silent, even though the related word "phrase" ends in a "silent e".
Note that in some words, a "silent" letter "e" occurs just to indicate that a preceding letter "g" is pronounced as [dʒ] rather than as [g]: for example, words ending in "geable" such as manageable, changeable. These words must be spelled with "silent e", unlike sizeable.
That leaves us with 4 cases to explain of words that are truly spelled with the "ea" digraph.
teacher. The word "teacher" has the "expected" value for "ea": the "long e" sound /iː/. In most words spelled with "ea", this developed from Middle English /ɛː/ via a regular sound change (part of what is called the "Great Vowel Shift", which changed the pronunciation of long vowels between Middle English and Modern English).
bread. This word shows a sporadic but fairly common change: Middle English /ɛː/ was shortened to /ɛ/ and so did not develop according to the "Great Vowel Shift".
break. This word shows a very irregular development of Middle English /ɛː/. There are only a couple of other words that show the same change of Middle English /ɛː/ to present-day English /eɪ/: great and steak.
bear(able). The pronunciation of bearable is just based on the pronunciation of the root verb bear. In this word, the Middle English vowel /ɛː/ ended up developing to /eə/ (the r-controlled "long a" sound), partly due to the influence of the following "r". But in other words such as "fear" we see the r-controlled "long e" sound instead ([ɪə(r)]), so the pronunciation of the trigraph "ear" is somewhat unpredictable. A rule of thumb that might be useful is that if a verb has an irregular past-tense form with the /ɔː(r)/ sound, and a present-tense form spelled with "ear", then you can expect that it will be pronounced with /eə(r)/ in the present tense.
You can see a more detailed discussion of break, great and steak in my answer to the following ELU question: Why is “great” pronounced as “grate”, but spelled with “ea”?
I cannot tell the difference between my pronunciation of "ea" in "bread" and "pageant", except to the extent that "pageant"'s primary stress is on the first syllable. I speak American English.
@Jasper: Since the second vowel in "pageant" is unstressed, its quality is variable. Some speakers might pronounce it more like the vowel in "bread", some speakers might pronounce it more like the vowel in "mint", some speakers might pronounce it more like the vowel in "grunt"--since it doesn't make a difference to the meaning, variation is possible.
@Jasper: I assume you would use the same vowel in the last syllable of "student", right? Here is a blog post written by someone who reports hearing pronunciations like yours: http://languagelore.net/2015/01/19/desyllabication-of-n-in-consonant-clusters/ (the author's explanation/analysis seems pretty iffy, though).
Right. I can tell the difference between "bread"'s or "Brent"'s vowel and "mint"'s vowel, but "pageant" and "student" are in-between. My pronunciation of "pageant" is closer to "Brent"; my pronunciation of "student" is closer to "mint". I don't follow the blog author's observations, let alone his explanations.
How do I resolve this exception: "Syntax error in INSERT INTO statement"?
private void save_Click(object sender, EventArgs e)
{
ACCOUNT.oo.Open();
string QRY = "insert into size(SIZENO,SIZE,COVERAGE,WEIGHT) " +
    "values('" + size_id.Text + "','" + txt_size.Text + "','" + txt_coverage.Text +
    "','" + txt_weight.Text + "')";
OleDbCommand ODB = new OleDbCommand(QRY,ACCOUNT.oo);
ODB.ExecuteNonQuery();
MessageBox.Show("Inserted Sucessfully..");
ACCOUNT.oo.Close();
}
If you set a break point, what is the actual value of QRY? Also look into parameterizing your queries. String concatenation is the worst.
SIZENO,SIZE,COVERAGE,WEIGHT all numerics? remove the single quotes you have added or at least for the columns that are numeric.
It looks like your database columns are for numeric data and the values you're inserting are quoted. Removing the single quotes will fix that.
You should also parameterize your query, e.g.:
private void save_Click(object sender, EventArgs e)
{
ACCOUNT.oo.Open();
string QRY = "insert into size(SIZENO,SIZE,COVERAGE,WEIGHT) values(?,?,?,?)";
using(OleDbCommand ODB = new OleDbCommand(QRY, ACCOUNT.oo))
{
// change the OleDbType based on your actual data types
ODB.Parameters.Add("SIZENO", OleDbType.Integer).Value = int.Parse(size_id.Text);
ODB.Parameters.Add("SIZE", OleDbType.Integer).Value = int.Parse(txt_size.Text);
ODB.Parameters.Add("COVERAGE", OleDbType.Integer).Value = int.Parse(txt_coverage.Text);
ODB.Parameters.Add("WEIGHT", OleDbType.Integer).Value = int.Parse(txt_weight.Text);
ODB.ExecuteNonQuery();
}
MessageBox.Show("Inserted Sucessfully..");
ACCOUNT.oo.Close();
}
OleDbCommand doesn't support named parameters (e.g. @size_id) as MS SQL does. However, you can access a parameter by its index. So the SQL fragment will be ... values(?, ?, ?, ?); and the C# one ... ODB.Parameters.Add(size_id.Text); ...
Put IDisposable instances into using: using (OleDbCommand ODB = new ...) {...}
Your SQL when printed as a single line is not readable. Use the @"..." string format to make your query text multiline and through that more comprehensible.
@DmitryBychenko I've edited the answer based on your feedback.
This would add the parameters as string types, which is what you have stated is the original problem. The AddWithValue method will infer the type of the parameter from the value being added. Since you are adding TextBox.Text, the type of the parameter will be inferred as text. I would be inclined to use a type with the parameter, and also cast the text value to the correct type. e.g. ODB.Parameters.Add("SIZENO", OleDbType.Integer).Value = int.Parse(size_id.Text);
See also - Can we stop using AddWithValue() already?
A closed rectangular box has volume 32 cm$^3$.
Find the lengths of the edges giving the minimum surface area of following the three steps below:
Step1: Let the length, width, and height of the box be $x,y,z$. Write $z$ in terms of x and y using the condition that the volume of the box is 32cm$^3$.
I have $xyz=32\,\text{cm}^3$, which then gives me $z=\frac{32}{xy}$.
Step2: Write the surface $S$ as a function of $x,y,z$, then replace z with the expression in Step1 to write $S$ as a function of $x$ and $y$.
I don't know what to do here
Step3: Find the critical point(s) of the function $S(x,y)$ in step 2 and determine the local minimum.
This will be the partial derivatives of the function in step2
Step4: Is the local minimum the global minimum? Make your conclusion on the values of $x,y$ and $z$ that minimize the surface area.
I don't know what to do here
I also don't know what bounds I should be operating in here
I believe it's asking for surface area. Can you determine what the surface area should be?
(That is, as a function of $x,y,z$?)
The surface of your box has two $x \times y$ rectangles, two $x \times z$ rectangles, and two $y \times z$ rectangles, so $S=2(xy+xz+yz)$
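Putting the steps together (a sketch of the remaining work, using the hints above): substituting $z = \frac{32}{xy}$ into $S = 2(xy+xz+yz)$ gives

$$S(x,y) = 2\left(xy + \frac{32}{y} + \frac{32}{x}\right), \qquad x,y>0.$$

The critical-point equations $S_x = 2\left(y-\frac{32}{x^2}\right)=0$ and $S_y = 2\left(x-\frac{32}{y^2}\right)=0$ give $y=\frac{32}{x^2}$ and $x=\frac{32}{y^2}$, hence $x^3=32$ and $x=y=z=\sqrt[3]{32}=2\sqrt[3]{4}\,$cm, i.e. a cube. Since $S\to\infty$ as $x$ or $y$ tends to $0$ or $\infty$, this local minimum is also the global minimum on the domain.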
Differentiable Complex Functions
Let $f(z) = u(x,y) + iv(x,y)$ be differentiable for all $z = x + iy$. If $$v(x,y) = x + xy + y^2 - x^2$$ for all $(x,y)$, find $u(x,y)$ and express $f(z)$ in terms of $z$. I'm lost because it never says what $f(z)$ is explicitly equal to, just that it equals $u + iv$ so I'm not sure how to start.
On a side note, how do you format questions in Latex? I couldn't find the option.
Here is a MathJax tutorial. Enclose mathematical expressions in $$
Since $f$ is complex differentiable, $u$ and $v$ have to satisfy the Cauchy-Riemann differential equations. Write them down, and integrate them. The solution is unique up to a constant.
You know that$$\frac{\partial u}{\partial x}(x,y)=\frac{\partial v}{\partial y}(x,y)=x+2y$$and that$$\frac{\partial u}{\partial y}(x,y)=-\frac{\partial v}{\partial x}(x,y)=2x-y-1.$$Can you take it from here?
I think I know what to do next but I just want to make sure. If you integrate the partial of $u$ w.r.t. $x$ over $x$, you get $u(x,y)$ plus some function of $y$, and then that function of $y$ is determined from the partial of $u$ w.r.t. $y$?
It's something like that. If you integrate $u_x$ with respect to $x$, you get that $u(x,y)=\frac12x^2+2xy+C(y)$ and so on.
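Filling in the "and so on" (a sketch of the remaining step):

```latex
u_y(x,y) = 2x + C'(y) = 2x - y - 1
\;\Longrightarrow\; C'(y) = -y - 1
\;\Longrightarrow\; C(y) = -\tfrac12 y^2 - y + c,
```

giving $u(x,y)=\tfrac12(x^2-y^2)+2xy-y$ up to a real constant $c$.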
I ended up getting my answer but I'm lost on the last part. I have that u(x,y) = 1/2(x^2 - y^2) + 2xy - y, but I'm not sure how to express that in terms of z if z = x + iy.
You have$$f(x+yi)=\frac12(x^2-y^2)+2xy-y+(x+xy+y^2-x^2)i=\frac12(x+yi)^2-i(x+yi)^2+i(x+yi)$$and therefore$$f(z)=\left(\frac12-i\right)z^2+iz.$$
Thanks so much!!
Intersection between a set and its boundary is a closed set?
Is it possible to show that for a neither open nor closed subset $ A \subset \mathbb{R}^n$, $\partial A \cap A$ is a closed set in $\mathbb{R}^n$?
No. $A=\mathbb{Q}$ has $\mathbb{R}$ as its boundary (every real number is a limit both of rationals and of irrationals), so $A \cap \partial A = A$ is neither open nor closed.
Node ExpressJS - events.js: 167 throw er Unhandled error event
I have installed mocha, chai, sinon, and supertest in a working Express.js application using the following command:
npm install mocha chai sinon supertest --save-dev
When I run the application with npm run start, I get the following error:
events.js:167
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE :::8001
at Server.setupListenHandle [as _listen2] (net.js:1334:14)
at listenInCluster (net.js:1382:12)
at Server.listen (net.js:1469:7)
at Object.<anonymous> (folderName/app.js:33:24)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
How do I fix this?
You have another application using port 8001. Change that or change your own app to start on another port.
It works. I killed the process using the port I was trying to run the server on. Thanks for the help.
you're welcome. This is a common issue amongst Node.js developers - it happens to me a lot. The key is to notice the 'EADDRINUSE' message which is stating that the address is already in use. Please mark my answer as accepted if it helped you.
Python join a list of strings
This seems easy, but I'm getting an error and I don't know how to get rid of it:
counter = 0
list1 = [''] * 11
list1[1] = '000'
list1[6] = counter
list1[10] = '999'
print(list1)
a = "^".join(list1)
print(a)
The error I get is
a = "^".join(list1)
TypeError: sequence item 6: expected str instance, int found
I initialized the list to empty strings. I need the counter to assign a unique number in each iteration.
How can I fix this error?
list1[6] = counter - Why are you putting an int in your list?
a = "^".join(map(str,list1)) should be used.
The error message tells you exactly what the problem is, so what's still confusing you?
I had initialized the list to empty strings, so I made the mistake of assuming that the integer would be converted to a string in Python.
Why would it? You told Python to replace the empty string with a number, so it did.
You have to convert integer values to strings before using .join(...).
list1[6] = str(counter)
It looks like some of the elements in the list are not strings.
You could try
a = "^".join(map(str, list1))
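For illustration (not part of the original answer), here is the fixed snippet end to end, with a hypothetical counter value of 7:

```python
counter = 7                     # hypothetical counter value for this iteration
list1 = [''] * 11
list1[1] = '000'
list1[6] = counter              # an int — str.join() rejects non-string items
list1[10] = '999'

# map(str, ...) converts every element to a string before joining
a = "^".join(map(str, list1))
print(a)  # ^000^^^^^7^^^^999
```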
Stuck at javascript promises inside forEach loops
Let's consider a dictionary that maintains scores of various types, like
{a:0,b:0,c:0}
I have 3 MongoDB collections: A, B, and C.
performing,
A.find({<key>:<value>})
.then(res=>{
//gets an array of the response <RES1>
})
Now, for each element in the array, I do:
B.find({_id : <RES1[i].some_id>})
.then(res=>{
//again a new array of responses <RES2>
})
Finally, I get the type for which scores need to be updated in collection C, and I again query in a loop like
C.find({_id : RES2[i].some_key})....
I am not sure where I am wrong! Embedding my code below:
getScoreForEachTypes() {
User.find({ company: req.params.company })
.then(users => {
var responsesOfAllUsersArr = [];
usersForThisCompany = users;
users.forEach(user => {
responsesOfAllUsersArr.push(Response.find({ email: user.email }));
});
return Promise.all(responsesOfAllUsersArr);
})
.then(responsesOfAllUsersArr => {
var data = { _E: 0, _M: 0, _A: 0, _Q: 0, _E: 0 };
responsesOfAllUsersArr.forEach(el => {
el.forEach(_el => {
var j = getQuestionType(_el.questionId);
data[j] += _el.responseChoice;
})
})
});
}
function getQuestionType(qid) {
return Question.findOne({ _id: qid })
.then(el => {
return el.quesType;
})
.catch(err => {
console.log("err while fetching type of question", err);
});
}
How do you use getScoreForEachTypes()? That function doesn't have a return value.
Yeah, that's where I am stuck; I need its return value to use further. I need to return the updated data, which would be something like { _E: 1, _M: 4, _A: 6, _Q: 3, _E: 0 }
What if you write return just before User.find? (first line inside the function)
@Zyigh It will return a promise in pending state which in the logs is like Promise { <pending> }
Yep, sorry, I wasn't clear at all... What if you return the Promise (User.find) but write the .then() after the call of the function?
Yeah, that might work, but the real pain is getting the value out of a promise inside the forEach loop, as data[j] again adds a new key [object Promise] with value NaN,
because getQuestionType also returns a Promise
Finally I solved my problem. I just had to maintain an array.
function a() {
  var data = { PE: 0, RM: 0, BA: 0, AIQ: 0, FE: 0 };
  var responsesAll = [];
  var xyz = [];
  return User.find({ company: req.params.company })
    .then(users => {
      var responsesOfAllUsersArr = [];
      usersForThisCompany = users;
      users.forEach(user => {
        responsesOfAllUsersArr.push(Response.find({ email: user.email }));
      });
      return Promise.all(responsesOfAllUsersArr);
    })
    .then(responsesOfAllUsersArr => {
      responsesOfAllUsersArr.forEach(el => {
        el.forEach(_el => {
          responsesAll.push(_el);
          xyz.push(getQuestionType(_el.questionId));
        });
      });
      return Promise.all(xyz);
    })
    .then(types => {
      // getQuestionType resolves to the type string itself, so use it as the key
      for (var i = 0; i < types.length; i++) {
        data[types[i]] += parseInt(responsesAll[i].responseChoice, 10);
      }
      return data;
    });
}
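The same collect-then-Promise.all pattern reads more simply with async/await. The find functions below are stand-ins for the Mongoose queries (hypothetical data, not the real models):

```javascript
// Stand-ins for User.find / Response.find / Question.findOne (hypothetical data)
const findUsers = () => Promise.resolve([{ email: 'a@x.io' }, { email: 'b@x.io' }]);
const findResponses = (email) =>
  Promise.resolve([{ questionId: 'q-' + email, responseChoice: 2 }]);
const findQuestionType = (qid) => Promise.resolve('PE');

async function scoreByType() {
  const users = await findUsers();
  // map (not forEach + push) collects the promises so they can be awaited together
  const perUser = await Promise.all(users.map(u => findResponses(u.email)));
  const responses = perUser.flat();
  const types = await Promise.all(responses.map(r => findQuestionType(r.questionId)));
  const data = { PE: 0, RM: 0, BA: 0, AIQ: 0, FE: 0 };
  types.forEach((t, i) => { data[t] += responses[i].responseChoice; });
  return data;
}

scoreByType().then(console.log); // { PE: 4, RM: 0, BA: 0, AIQ: 0, FE: 0 }
```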
Container-based virtual machine solution like Docker?
I'm trying to find a solution to build a consistent development environment across different machines: Windows PC, Windows laptop, MacBook …
Recently I learned about Docker, and the concept attracts me. But after asking a question on SO, I found it isn't intended to solve my issue: it would be hard and unnatural to use Docker as a virtual machine. It's a container for one application.
But I really like how Docker works:
Use a Dockerfile to define an image, which makes it really easy to rebuild the same image.
Docker Hub: it's amazing how many official Docker images are contributed by the Docker community, which makes it really simple to create a new image starting from those base images.
It’s very easy to create a new image from a container if I’ve done some changes in it.
In the question I asked, Mark pointed out the new project LXD, which is intended to make containerized virtual machines. I looked into their website and searched for introductions and tutorials online; it seems it doesn't have the features mentioned above, so I'm wondering if there's some other open source project that meets the requirement?
You might consider using Vagrant, which is used often together with VirtualBox, although it supports other solutions as well.
Their motto is:
Create and configure lightweight, reproducible, and portable development environments.
Regarding your requirements:
An image can be defined through a file called Vagrantfile
There are community-provided Vagrant boxes
I am not sure what you mean by the third requirement, but you can base a Vagrant box on an existing one, as far as I understand their documentation
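For example, a minimal Vagrantfile might look like this (the box name and packages are illustrative, not from the answer):

```ruby
# Vagrantfile: defines a reproducible Ubuntu VM from a community base box.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"        # community-provided base box
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update -y
    apt-get install -y git build-essential
  SHELL
end
```

Running vagrant up on each of your machines then builds the same environment from this one file.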
Namespace assigned to SOAPEnvelope get overriden
Goal: In our Spring application we have to call an external service which uses the namespace "http://www.w3.org/2003/05/soap-envelope".
Issue: Currently, when I add the following code snippet:
webServiceTemplate.marshalSendAndReceive((Object) (SubmitTransaction2) request, new WebServiceMessageCallback() {
private static final String PREFERRED_PREFIX = "SoapTest";
@Override
public void doWithMessage(WebServiceMessage message) {
try {
if (message instanceof SaajSoapMessage) {
SaajSoapMessage saajSoapMessage = (SaajSoapMessage) message;
SOAPMessage soapMessage = saajSoapMessage.getSaajMessage();
SOAPEnvelope envelope = soapMessage.getSOAPPart().getEnvelope();
SOAPHeader header = envelope.getHeader();
SOAPBody body = soapMessage.getSOAPBody();
SOAPElement soapElement = envelope.addNamespaceDeclaration(PREFERRED_PREFIX, "http://www.w3.org/2003/05/soap-envelope");
envelope.setPrefix(PREFERRED_PREFIX);
header.setPrefix(PREFERRED_PREFIX);
body.setPrefix(PREFERRED_PREFIX);
soapMessage.saveChanges();
}
} catch (SOAPException soape) {
soape.printStackTrace();
}
}
});
The SOAP Envelope I get is
<SoapTest:Envelope
xmlns:SoapTest="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
But if I just change this one line
envelope.setPrefix("SOAP-ENV");
I get this response
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:SoapTest="http://www.w3.org/2003/05/soap-envelope">
Can anyone suggest why the SoapTest namespace value gets overridden as soon as I assign it to the SOAPEnvelope?
The namespace for SOAP 1.1 is http://schemas.xmlsoap.org/soap/envelope/. The namespace for SOAP 1.2 is http://www.w3.org/2003/05/soap-envelope. The external service uses SOAP 1.2, but Spring defaults to SOAP 1.1, so you need to configure Spring to use SOAP 1.2. Attempting to override the SOAP namespace is not the way.
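As a sketch (assuming Spring-WS's SaajSoapMessageFactory; adapt to however your WebServiceTemplate is actually wired), the switch to SOAP 1.2 looks like:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.ws.client.core.WebServiceTemplate;
import org.springframework.ws.soap.SoapVersion;
import org.springframework.ws.soap.saaj.SaajSoapMessageFactory;

public class SoapConfig {

    // Message factory that produces SOAP 1.2 envelopes
    // (http://www.w3.org/2003/05/soap-envelope) instead of the 1.1 default.
    @Bean
    public SaajSoapMessageFactory messageFactory() {
        SaajSoapMessageFactory factory = new SaajSoapMessageFactory();
        factory.setSoapVersion(SoapVersion.SOAP_12);
        return factory;
    }

    @Bean
    public WebServiceTemplate webServiceTemplate(SaajSoapMessageFactory messageFactory) {
        return new WebServiceTemplate(messageFactory); // template now emits SOAP 1.2
    }
}
```

With this in place there is no need to touch prefixes or namespace declarations in the WebServiceMessageCallback at all.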
Thanks Andreas. You pointed me exactly what I was doing wrong. It worked.
Question about the Off-topic close reason vs migration close reasons
Looking at an example here: https://serverfault.com/questions/526087/how-do-i-direct-inbound-network-traffic-to-a-specific-internal-ip-based-on-the-r
The question I have is, I know voretaq's close reason here: https://meta.serverfault.com/a/5606/7861 made the new list and seems to be used quite often now.
But since this has been implemented, it's been the go-to instead of the old "migration" found here:
In the example at the top, it seems like the better choice would be to simply migrate it rather than to force the user to decide if they want to pursue it on SU or not.
Is the idea to force most of these questions back on the user to figure out if they want to ask on Super User or just let the question die, or is there a valid reason for picking the "close" vs. the "migration" choice?
The same applies to the SO choices...
If this is a duplicate or already been answered, please let me know. I searched Meta here and on Meta.SE and couldn't find anything.
FWIW, I'm not a fan of those close reasons for the issues you have cited here.
Yes, the idea is to force the question back onto the asker to figure out what to do.
When we were rebelling against the crap being migrated in to us from SO, one thing I found was that only a minority of people would follow their migrated questions here. So migrated questions, even if they were good ones, would languish - no replies if there were comments seeking clarification, no accept if a good answer was given...
Because of this and because many of us don't want to migrate crap to another site, we treat "close without migrating" as the default.
I'll only pick a "Belongs on..." close reason if a question is clearly on-topic on the other site and clearly really good. If there's any doubt in my mind about either the topicality or quality, I'll find some other close reason. Before it was taken away, I'd pick the generic "off-topic" reason, now it's either one of the "May be suitable for ..." or one of the other reasons. Anything in preference to possibly migrating crap.
Makes sense, especially the "languish" part. I can see a good reason to migrate being if the question would have lasting value for future visitors as well, even if the OP never bothers to come back and accept the answer after migration.
+1 -- the "You should ask on" close reasons are certainly not ideal (they will probably go away), but migrating has its own problems.
Accepted - and the question is pretty irrelevant now with the new close reasons set.
Different packageRules for release and prelease versions
Is there a way to configure Renovate with packageRules so that I get an MR with automerge disabled for pre-release versions, like v1.0.0-alpha.1, and an MR with automerge enabled for patch versions, like v1.0.0?
I have enabled unstableVersion support in Renovate, but I want different behaviors for release/prerelease versions. My current configuration looks like the following, but I am unsure if this works, because the Renovate documentation states that prerelease is not a valid value for matchUpdateTypes.
{
"matchDatasources": [
"git-tags"
],
"matchManagers": [
"ansible-galaxy"
],
"matchUpdateTypes": [
"patch"
],
"enabled": true,
"automerge": true,
"platformAutomerge": true
},
{
"matchDatasources": [
"git-tags"
],
"matchManagers": [
"ansible-galaxy"
],
"matchUpdateTypes": [
"prerelease"
],
"enabled": true,
"automerge": false,
"platformAutomerge": false
},
What I want is that renovate automerges
v1.0.0 -> v1.0.1
v1.0.2-alpha.1 -> v1.0.2
but no automerge (only MR) for
v1.0.0 -> v1.0.1-alpha.1
v1.0.1-alpha.1 -> v1.0.1-alpha.2
According to https://docs.renovatebot.com/modules/versioning/#regular-expression-versioning, you could set "ignoreUnstable": true
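For illustration only (this is my reading of the linked answer, not a verified configuration — check the Renovate docs for the exact placement of the option), that could sit in a rule like:

```json
{
  "matchDatasources": ["git-tags"],
  "matchManagers": ["ansible-galaxy"],
  "ignoreUnstable": true
}
```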