Gaps in FASTA formatted genome?
I'm just getting interested in genetic structure, and I decided to find a FASTA formatted genome (human), which I found here. However, I'm confused. I know that A can only match up to T and C to G, but it appears as though T is matching up to T, A to A, C to A, and so on. How is this possible? Can there be gaps where T, A, C, or G is connected to nothing? Small part of the genome: ATTTAGAAATGACTAACAATTATGTAGGTTTATTTCTCTCAGTATAGAATGTTCATATAGAATT
Your question is based on a misunderstanding. You have a single strand; let's assume it's the red sequence in the image. The linear sequence along this strand is not limited by base-pairing rules.
The base-pairing rules (A-T, G-C) apply to the complementary strand, given in blue in the image.
It's conventional to give the sequence of only one strand, because the base-pairing rules allow us to infer the complementary sequence.
If I understand your question correctly, you are referring to the base pairing of the genome. Here you have been misled as to how base pairing is presented. ATTT does not refer to base pairs between A and T or T and T, but rather to the sequence of letters along the sense strand of the human genome; between adjacent letters there are stacking interactions, not base pairs. Whenever you navigate databases, remember that whatever genome you are viewing, you will always be presented with the sense strand of that genome. To get the base-paired strand, the anti-sense strand, of that particular genome, you need to take the reverse complement of that sequence of letters.
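To illustrate that last point, taking the reverse complement is a simple string operation. Here is a minimal Python sketch (not tied to any particular database's tooling):

```python
# Reverse-complement a DNA sequence: complement each base, then reverse the string.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

sense = "ATTTAGAA"
print(reverse_complement(sense))  # -> TTCTAAAT
```

Applying the function twice returns the original sequence, which is a handy sanity check.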
| common-pile/stackexchange_filtered |
Angular 17: Module used by 'node_modules/xyz' is not ESM
The application runs perfectly fine with npm start, but when it is built using ng build, it gives the following warnings:
▲ [WARNING] Module 'amazon-quicksight-embedding-sdk' used by './dashboard.component.ts' is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module 'qrcode' used by<EMAIL_ADDRESS>is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module 'ts-access-control' used by './permission.service.ts' is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module 'style-dictionary/lib/utils/deepExtend.js' used by<EMAIL_ADDRESS>is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module 'style-dictionary/lib/utils/flattenProperties.js' used by<EMAIL_ADDRESS>is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module 'lodash/kebabCase.js' used by<EMAIL_ADDRESS>is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module 'style-dictionary/lib/utils/references/usesReference.js' used by<EMAIL_ADDRESS>is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module 'google-libphonenumber' used by './phone.service.ts' is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module '@aws-crypto/sha256-js' used by<EMAIL_ADDRESS>is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module 'lodash/pickBy.js' used by<EMAIL_ADDRESS>is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
▲ [WARNING] Module 'lodash/merge.js' used by<EMAIL_ADDRESS>is not ESM
CommonJS or AMD dependencies can cause optimization bailouts.
For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies
I am willing to provide any further information required to fix this issue.
Add all the CommonJS modules for which you are getting warnings to the allowedCommonJsDependencies list in your angular.json file, as shown below.
The Angular CLI outputs warnings if it detects that your browser application depends on CommonJS modules. To disable these warnings, add the CommonJS module names to the allowedCommonJsDependencies option in the build options in your angular.json file.
"build": {
"builder": "@angular-devkit/build-angular:browser",
"options": {
"allowedCommonJsDependencies": [
"amazon-quicksight-embedding-sdk",
"@aws-crypto/sha256-js",
"qrcode",
"ts-access-control",
"lodash/kebabCase.js",
....... and the list goes on
]
…
}
…
}
Source: https://angular.io/guide/build#configuring-commonjs-dependencies
Swift func - Does not conform to protocol "Boolean Type"
Yes it's man vs compiler time and the compiler winning yet again!
In the func getRecordNumber I am returning a Bool and a Dictionary
func getRecordNumber(recordNumber: Int32) -> (isGot: Bool, dictLocations: Dictionary <String, Double>)
...
return (isGot, dictLocations)
However, after I have called the func and test the returned Boolean isGot, I get the error message
(isGot: Bool, dictLocations: Dictionary <String, Double>) Does not conform to protocol "Boolean Type"
Any ideas what I have left out?
You don't need to label the return values like this: (isGot: Bool, dictLocations: Dictionary <String, Double>). You just need to tell the compiler what type the function will return.
Here is the correct way to achieve that:
func getRecordNumber(recordNumber: Int32) -> (Bool, Dictionary <String, Double>)
{
let isGot = Bool()
let dictLocations = [String: Double]()
return (isGot, dictLocations)
}
Generation of realistic real-valued sequences using Wasserstein GAN fails
My goal is to generate artificial sequences of real-valued data (e.g. time series) with GANs. Starting simple I tried to generate realistic sine-waves using a Wasserstein GAN. But even on this simple task it fails to generate any useful samples.
This is my model:
Generator
from keras.models import Sequential
from keras.layers import LSTM, Dense, Reshape, Conv1D, MaxPooling1D, LeakyReLU, Flatten

model = Sequential()
model.add(LSTM(20, input_shape=(50, 1)))
model.add(Dense(40, activation='linear'))
model.add(Reshape((40, 1)))
Critic
model = Sequential()
model.add(Conv1D(64, kernel_size=5, input_shape=(40, 1), strides=1))
model.add(MaxPooling1D(3, strides=2))
model.add(LeakyReLU(alpha=0.2))
model.add(Conv1D(64, kernel_size=5, strides=1))
model.add(MaxPooling1D(3, strides=2))
model.add(LeakyReLU(alpha=0.2))
model.add(Flatten())
model.add(Dense(1))
Is this model capable of learning such a task or should I use a different model architecture?
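For context, the "real" samples such a setup trains on can be generated with plain NumPy. This is only a sketch of the kind of data implied by the critic's input_shape=(40, 1); the frequency and phase ranges are my own assumptions, not taken from the question:

```python
import numpy as np

def sample_sine_batch(batch_size=32, seq_len=40, rng=None):
    """Return a (batch_size, seq_len, 1) batch of random sine waves.

    Frequency and phase ranges are arbitrary illustration choices.
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, 1.0, seq_len)                  # shared time axis
    freq = rng.uniform(1.0, 5.0, size=(batch_size, 1))  # cycles per window
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(batch_size, 1))
    waves = np.sin(2.0 * np.pi * freq * t + phase)      # broadcast to (batch, seq_len)
    return waves[..., np.newaxis]                       # add the channel dimension

batch = sample_sine_batch()
print(batch.shape)  # (32, 40, 1)
```

Visually inspecting generator outputs against batches like these (rather than loss curves alone) is usually the quickest way to judge whether the GAN is learning anything.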
Which type of memory is faster than average for server use?
I am building a server computer which will be used for SQL Server and I am planning to use like 32GB+ of RAM and putting the databases in memory. (I know all about data loss issues when power is gone). I haven't been up to date with the new types of memory sticks out there.
What kind of memory should I get which is faster than average and not very expensive? I am buying a lot of ram so I am looking for memory that's above average but below high end if high end is very expensive.
(I will be using Windows Server 2008 R2 Standard or Windows HPC Server 2008 R2)
What system board will you be using?
I haven't decided on the MB. Even the memory type and capacity could drive my MB choice. All I know for now is it will hold a quad core CPU.
You may want to look at this: http://msdn.microsoft.com/en-us/library/aa366778%28VS.85%29.aspx Standard Edition won't support more than 32GB.
PC3-10600 (DDR3-1333 - peak transfer of 10667 MB/s) is plenty fast, but not the cash burner that PC3-12800 is. (DDR3-1600 - peak transfer of 12800 MB/s).
One step below PC3-10600 is PC3-8500 (DDR3-1066 - peak transfer of 8533 MB/s). That's seen in most off the shelf desktop PCs these days, and is considered an average speed.
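Those peak-transfer figures follow directly from the module ratings: a DDR3 DIMM has a 64-bit (8-byte) data bus, so peak MB/s is simply the transfer rate (MT/s) times 8. A quick arithmetic check, using the nominal DDR3 rates:

```python
# Peak transfer for DDR3: transfers per second (MT/s) x 8 bytes per transfer.
def peak_mb_per_s(mega_transfers_per_s):
    return mega_transfers_per_s * 8

for name, mts in [("DDR3-1066", 1066.67), ("DDR3-1333", 1333.33), ("DDR3-1600", 1600)]:
    print(f"{name}: ~{round(peak_mb_per_s(mts))} MB/s")
```

The results match the PC3-8500 / PC3-10600 / PC3-12800 numbers quoted above (the "PC3-" names round the same figures slightly differently for marketing).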
Be warned that RAM prices are high nowadays.
+1, exactly what I'd say, easily the best performance/price available today.
"You like me! You really like me!" =)
You will probably be looking at three different types of memory for a server today:
DDR2 800 MHz
DDR3 1333 MHz
DDR3 1600 MHz
Don't expect there to be any cheap way out of this. You generally get the memory you pay for. A lot of servers are configured to use common memory (non-ECC, unregistered, unbuffered), and such a server will save you some money at the cost of stability.
What you can possibly do is to get memory with a lower clocking, but you will lose performance then. If you want to get better than average performance you will be paying much more than average.
I don't think I'd even consider DDR2 for a new server... Also, almost all the servers I've looked at lately (midrange) require registered RAM, not common desktop memory.
pehrs and Wesley 'Nonapeptide' have pretty much nailed the speed/type options, but it is also important to pay attention to the correct sizing for your particular server, as that can limit speed and total bandwidth significantly. Intel Xeon 5500 and 5600 series systems have three on-die memory controllers, and memory bandwidth is maximized when they are all utilized, so slightly strange RAM sizes like 6, 12, 18 etc. are going to have up to 50% more effective bandwidth than sizes of 4, 8 etc. on those systems. Also relevant is the maximum permitted DIMM speed for the number of DIMMs you populate per memory channel - most (all?) Xeon 5500s limit 1333 MHz speeds to configs with no more than 1 DIMM per channel.
Great points! I'm pretty much oblivious to this. Shame on me. =)
If by "put databases in RAM" you are referring to your previous question about RAM disks on Windows Server 2008, the answer is again the same: SQL Server has excellent caching, you don't need to mess with RAM disks.
I don't know if cheap memory and putting the database in memory really go together well. It depends on how mission-critical your server will be. Have a look at the prices for ECC memory. It is more expensive, but it could be worth the premium. It is not faster, as far as I know.
If you opt for ECC then you just have to make sure the CPU you pick supports it. Intel Core (desktop) CPUs usually don't, so that leaves AMD and the Intel Xeon line - though not every Xeon either. I have made a list for my purposes, so just ask if you're going in that direction and I'll look it up.
Set variable with time to use as random
If I set a variable as follows:
$randNum = md5(time());
and use that to create:
$tempFileName = $randNum.'_temp.'.$type;
and then, later in the script:
$newFileName = $randNum.'_new.'.$type;
Will it be generating a new random number the second time around or will $randNum value be the same as the 1st?
Are you using $randNum = md5(time()); again after using it once ?
Same as the first if you don't call $randNum = md5(time()); again
make a function that generates the random number and call the function over and over. For instance function RandomNumber() { return md5(time()); }, test case: for ($i=0; $i<10; $i++) { echo RandomNumber(), "\n"; }
If you think that this may produce two different numbers, you may be more interested in learning lazy languages like Haskell, in which this may actually work this way. Not so in classic imperative languages like PHP. Also, there's virtually no point in hashing the time value, you're not going to add any more entropy to it. You may as well be using the time() number as is.
it randomly changes its value, for no apparent reason. that's why it's called a variable.
There's a race condition here. time() only has a granularity of 1 second. If the script is run twice in a second then the second time it's run it'll overwrite the files from the first.
@KarolyHorvath That is absolutely not why it is called a variable. It is called so because the contents of it can be varied by whoever or whatever wants to vary them. If all variables were random, coding would be a lot more freaking difficult!
Also keep in mind that using time() is not a good idea, as it may collide under concurrent access; instead, use uniqid()
I should have said, in my original script, I use a random number function, I was using md5(time()); in the question to save time typing :)
The $randNum value will be the same as the first. If in doubt, just test it:
$randNum = md5(time());
echo $randNum;
sleep(5);// sleep for 5 seconds
echo $randNum;
The values will be the same.
"Should"? You can change that to a "must". If they're not the same, something is seriously wrong with your PHP. :)
I would say you have to change that to a must.
Yes, I've changed the wording. Thanks for noticing.
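The same behaviour is easy to demonstrate in any imperative language. Here is a Python analogue of the PHP snippet (hashlib.md5 standing in for PHP's md5(), and hypothetical file names for illustration):

```python
import hashlib
import time

# Assigned once; the value does not change on later reads.
rand_num = hashlib.md5(str(int(time.time())).encode()).hexdigest()

temp_file_name = rand_num + "_temp.txt"
time.sleep(0.1)  # ...later in the script...
new_file_name = rand_num + "_new.txt"

# Both names share the exact same hash prefix.
print(temp_file_name.split("_")[0] == new_file_name.split("_")[0])  # True
```

As the comments above note, hashing the time adds no entropy; the variable simply holds whatever it was assigned.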
Try this,
$newFileName = $randNum.'_new.'.$type;
instead of
$newFileName == $randNum.'_new.'.$type;
output:
87bc7ff76220f988dd191bd804482bff_temp.
87bc7ff76220f988dd191bd804482bff_new.
sorry, typo in question. I'm not using == :)
Convert ListView to RecyclerView
I have a ListView in my app.
I want to change it to a RecyclerView, but even with different tutorials I don't get it.
I was using this: https://www.spreys.com/listview-to-recyclerview/
I failed with the "getView" part here.
Here is my code of the ListActivity and LeagueArrayAdapter.
Thanks for your help.
ListActivity:
public class ListActivity extends AppCompatActivity implements AdapterView.OnItemClickListener {
private ArrayAdapter adapter;
private final int REQUEST_CODE_EDIT = 1;
private static LeagueDAO leagueDAO;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_list);
leagueDAO = MainActivity.getLeagueDAO();
List<League> allLeagues1 = leagueDAO.getLeagues();
if (allLeagues1.size() == 0) {
leagueDAO.insert(new League("HVM Landesliga 19/20","https://hvmittelrhein-handball.liga.nu/cgi-bin/WebObjects/nuLigaHBDE.woa/wa/groupPage?championship=MR+19%2F20&group=247189"));
allLeagues1 = leagueDAO.getLeagues();
}
adapter = new LeagueArrayAdapter(this, R.layout.league, allLeagues1);
ListView lv = findViewById(R.id.league_list);
lv.setAdapter(adapter);
lv.setOnItemClickListener(this);
registerForContextMenu(lv);
Button btn_league_add = findViewById(R.id.btn_league_add);
btn_league_add.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent intent = new Intent(ListActivity.this, EditActivity.class);
startActivity(intent);
}
});
}
@Override
public void onCreateContextMenu(ContextMenu contMenu, View v,
ContextMenu.ContextMenuInfo contextmenuInfo) {
super.onCreateContextMenu(contMenu, v, contextmenuInfo);
getMenuInflater().inflate(R.menu.team_list_context_menu, contMenu);
}
@Override
public boolean onContextItemSelected(MenuItem item) {
AdapterView.AdapterContextMenuInfo acmi=
(AdapterView.AdapterContextMenuInfo) item.getMenuInfo();
League league = (League)adapter.getItem(acmi.position);
switch (item.getItemId()) {
case R.id.evliconit_edit:
editEntry(league, acmi.position);
return true;
case R.id.evliconit_del:
leagueDAO.delete(league);
adapter.remove(league);
adapter.notifyDataSetChanged();
return true;
default:
return super.onContextItemSelected(item);
}
}
@Override
public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
League team = (League)adapter.getItem(position);
editEntry(team, position);
}
private void editEntry(League league, int position) {
Intent intent = new Intent(this, EditActivity.class);
intent.putExtra("team", league);
intent.putExtra("position", position);
startActivityForResult(intent, REQUEST_CODE_EDIT);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
super.onActivityResult(requestCode, resultCode, intent);
if (requestCode == REQUEST_CODE_EDIT && resultCode == RESULT_OK && intent != null) {
Bundle extras = intent.getExtras();
int position = extras.getInt("position");
League league = (League) adapter.getItem(position);
league.setLeague_name(extras.getString("league_name"));
league.setLeague_url(extras.getString("league_url"));
adapter.notifyDataSetChanged();
}
}
@Override
protected void onRestart() {
super.onRestart();
leagueDAO = MainActivity.getLeagueDAO();
List<League> allLeagues1 = leagueDAO.getLeagues();
adapter = new LeagueArrayAdapter(this, R.layout.league, allLeagues1);
ListView lv = findViewById(R.id.league_list);
lv.setAdapter(adapter);
lv.setOnItemClickListener(this);
registerForContextMenu(lv);
}
LeagueArrayAdapter:
public class LeagueArrayAdapter extends ArrayAdapter<League> {
private List<League> leagues;
private Context context;
private int layout;
public LeagueArrayAdapter(Context context, int layout, List<League> leagues) {
super(context, layout, leagues);
this.context = context;
this.layout = layout;
this.leagues = leagues;
}
@Override
public View getView(int position, View convertView, ViewGroup parent) {
League league = leagues.get(position);
if (convertView == null)
convertView = LayoutInflater.from(context).inflate(layout, null);
TextView tv_name = convertView.findViewById(R.id.tv_name);
TextView tv_url = convertView.findViewById(R.id.tv_url);
tv_name.setText(league.getLeague_name());
tv_url.setText(league.getLeague_url());
return convertView;
}
}
Update: app crashed when opened the activity
public class ListActivity extends AppCompatActivity implements AdapterView.OnItemClickListener {
private ArrayAdapter adapter;
private final int REQUEST_CODE_EDIT = 1;
private static LeagueDAO leagueDAO;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_list);
leagueDAO = MainActivity.getLeagueDAO();
List<League> allLeagues1 = leagueDAO.getLeagues();
if (allLeagues1.size() == 0) {
leagueDAO.insert(new League("HVM Landesliga 19/20","https://hvmittelrhein-handball.liga.nu/cgi-bin/WebObjects/nuLigaHBDE.woa/wa/groupPage?championship=MR+19%2F20&group=247189"));
allLeagues1 = leagueDAO.getLeagues();
}
// adapter = new LeagueArrayAdapter(this, R.layout.league, allLeagues1);
RecyclerView lv = findViewById(R.id.league_list);
lv.setHasFixedSize(true);
lv.setLayoutManager(new LinearLayoutManager(this));
lv.setAdapter(new LeagueArrayAdapter(allLeagues1));
// lv.setOnItemClickListener(this);
registerForContextMenu(lv);
Button btn_league_add = findViewById(R.id.btn_league_add);
btn_league_add.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent intent = new Intent(ListActivity.this, EditActivity.class);
startActivity(intent);
}
});
}
what's the problem?
You have to extend RecyclerView.Adapter and not ArrayAdapter
I reconstructed your ListView Adapter using RecyclerView.Adapter.
onCreateViewHolder is called for every visible container on your screen. If your screen can show 10 rows of data, RecyclerView makes 11-12 containers (ViewHolders).
onBindViewHolder updates those containers with new data when you scroll.
MyViewHolder is the object that holds the views for each row of data (container).
It is a static class with a bind() function inside, to avoid memory leaks in your adapter.
We have access to Context in RecyclerView.Adapter using itemView and parent
itemView is the inflated View for each container (ViewHolder).
Initialize your Views inside ViewHolder's constructor so they get assigned once.
public class LeagueArrayAdapter extends RecyclerView.Adapter<LeagueArrayAdapter.MyViewHolder> {
private ArrayList<League> leagues;
public LeagueArrayAdapter(ArrayList<League> leagues) {
this.leagues = leagues;
}
@NonNull
@Override
public MyViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View itemView = LayoutInflater.from(parent.getContext()).inflate(R.layout.row_league, parent, false);
return new MyViewHolder(itemView);
}
@Override
public void onBindViewHolder(@NonNull MyViewHolder holder, int position) {
holder.bind(leagues.get(position));
}
@Override
public int getItemCount() {
return leagues.size();
}
static class MyViewHolder extends RecyclerView.ViewHolder {
TextView tv_name;
TextView tv_url;
public MyViewHolder(@NonNull View itemView) {
super(itemView);
tv_name = itemView.findViewById(R.id.tv_name);
tv_url = itemView.findViewById(R.id.tv_url);
}
void bind(League league) {
tv_name.setText(league.getLeague_name());
tv_url.setText(league.getLeague_url());
}
}
}
Your Activity:
LinearLayoutManager for a linear layout
GridLayoutManager for a grid layout
setHasFixedSize() enhances your RecyclerView's speed if you are sure the RecyclerView itself won't change width or height.
public class LeagueActivity extends AppCompatActivity {
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.YOUR_ACTIVITY_LAYOUT_ID);
...
RecyclerView recyclerView = findViewById(R.id.YOUR_RECYCLER_VIEW_ID);
recyclerView.setHasFixedSize(true);
recyclerView.setLayoutManager(new LinearLayoutManager(this));
recyclerView.setAdapter(new LeagueArrayAdapter(SEND_YOUR_ARRAY_OF_LEAGUE));
}
}
Thanks for your help. After switching to RecyclerView.Adapter I have to change the ListActivity, but I don't get that part.
@Philipe check out the updated answer, hope it helps.
Thanks! Updated my post; the app crashed when I opened the activity.
I suggest checking out the Groupie library; it works with RecyclerView and makes life much easier for you. Here is an example of how you can use it (I'm writing the code in Kotlin, but it shouldn't be that different from Java).
first, add those lines to your Build.gradle (app) file
implementation 'com.xwray:groupie:2.3.0'
implementation 'com.xwray:groupie-kotlin-android-extensions:2.3.0'
then inside your activity create an adapter then set the adapter to your recyclerView
val myAdapter = GroupAdapter<ViewHolder>()
recyclerView.adapter = myAdapter
and just like this you set the adapter to your recyclerView, but you have to create "Items" to put inside the recyclerView. you can create an Item like so
class MyItem: Item() {
override fun getLayout() = R.layout.layout_seen
override fun bind(viewHolder: ViewHolder, position: Int){
}
}
In the getLayout method, you return the layout that you want to display inside the RecyclerView.
You can use the bind method to apply any kind of modifications to the layout we are displaying.
lastly, we can add our items to the adapter this way
myAdapter.add(MyItem())
myAdapter.notifyDataSetChanged()
for more details check the library, in here I just explained how you can simply add an item to your RecyclerView
Number of flops for $AA^{T}$
Suppose $A\in\mathbb{R}^{n\times n}$, it is known that a matrix-matrix multiplication between any two matrices accounts for $2n^{3}-n^{2}$ flops. Now I am not assuming anything special about $A$ that is to say consider its entries to be all non-zero in the worst case. Would computing $AA^{T}$ have number of flops less than $2n^{3}-n^{2}$? What I noticed for $n=2$ is that the first row of $A$ is multiplied with first column of $A^{T}$ which means that the first row of $AA^{T}$ is just multiplying first row of $A$ by itself but I can't seem to establish a general connection on how it may or may not reduce total number of flops.
Since you know that multiplication of two general $n \times n$ matrices requires $(2 n - 1) n^2$ operations, I will assume you know that any one entry requires $2 n - 1$ operations.
The matrix $A A^\mathrm{T}$ is a symmetric matrix. This means that the entry at row $i,$ column $j$ is the same as the entry at row $j,$ column $i.$ In other words, if you flip the matrix about the main diagonal, the entries are the same. This is because the entry is given by $$\sum_{k = 1}^n a_{i, k} a_{j, k}$$ whose value is clearly unchanged if we switch $i$ and $j.$
That being said, if we calculate the entries on the diagonal and the entries below the diagonal, then the entries above the diagonal have already been calculated; just take the entry whose row and column position is the reverse of the entry above the diagonal, and copy it in its place. Thus, we only have to calculate the entries on or below the main diagonal.
Since there are $n (n + 1)/2$ such entries and $2 n - 1$ operations per entry, we have $$(2 n - 1) \frac{n (n + 1)}{2}$$ operations to do.
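The two counts can be checked numerically. Here is a small Python sketch that counts multiplications and additions while visiting only the entries on or below the main diagonal of $AA^{T}$:

```python
def flops_symmetric_product(n):
    """Count flops to form A @ A.T exploiting symmetry: only entries with i >= j."""
    flops = 0
    for i in range(n):
        for j in range(i + 1):   # entries on or below the main diagonal
            flops += n           # n multiplications per entry
            flops += n - 1       # n - 1 additions per entry
    return flops

# Agrees with (2n - 1) * n * (n + 1) / 2 for every n checked.
for n in (1, 2, 3, 10):
    assert flops_symmetric_product(n) == (2 * n - 1) * n * (n + 1) // 2

print(flops_symmetric_product(10))  # 1045
```

Compare this with the full product's $(2n - 1)n^{2}$: for $n = 10$ that is 1900 flops, so the symmetric version saves nearly half the work.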
Quick question: I was trying to write down an algorithm that computes $AA^{T}$ using matrix-vector operations. I know that computing it with vector-vector operations would be more optimal to maintain the least number of flops, but can we preserve this least number of flops if we perform matrix-vector multiplications? I am afraid I might have unnecessary multiplications. What I did is first compute the diagonal entries, which are simply $a_{i,1:n}^{T}a_{i,1:n}$, but can the computation of the lower triangular part be done as a matrix-vector multiplication?
@TYWQ Unfortunately, no. You will always end up calculating repeated values if you multiply $A$ by a column of $A^\mathrm{T}.$ You could, however, write the algorithm so that it only calculates the last $n - i + 1$ entries of $A \cdot (\text{$i$-th column of $A^\mathrm{T}$}).$ This approach is not substantially different from what I wrote in the answer though.
alright thank you once more!
iOS: Default Battery Percentage Level for individual services to switch off
We all know iOS switches off services one by one when the battery reaches critical level. But is there any fixed battery level to cut down individual services. For example, Bluetooth will be switched off at 20%, Mobile data at 15%, Wifi at 10%, etc. I am searching these in iOS documentation and Googling as well. But I am not able to find such info. Any help please.
I've never heard of such a thing. Do you have any sources, such as an Apple KB article, that say that iOS turns off services when the battery is low?
Hi Andrew, that is what my question is. If anybody has sources, it would be helpful. I know that in Android, when the battery level comes down to 10%, the brightness is set to minimum. No services are shut down until the battery completely drains. I am looking for some reliable sources for iOS.
On iOS, services don't automatically turn off until the device runs completely out of battery (and at that point the device is shutting down).
Electron: Length of an array set to 0 after going outside a query
I'm starting a project, a kind of interactive text game, on Electron. I have a database with 4 tables to use ONLY at the start of the project, in order to get info about characters, places and stuff like that.
The problem is that, outside the query callback, the array where I put the character data says its length is 0, but it shows the correct info.
In this photo you can see that the array above says it has more than five hundred items (this comes from another version of the project with other techs), but the line under it, the console.log of the length, says it is 0
https://i.sstatic.net/6go8C.png
This is the code of the query
connection.query("SELECT * FROM persona", function (error, rows) {
if (error) {
throw error;
} else {
obtenerPersonas(rows);
}
})
function obtenerPersonas(rows) {
rows.forEach(row => {
/* I tried not to introduce the data directly, but to use local variables that I previously created. It doesn't work. */
auxPer = new Persona(
row.NombreClave,
row.NombreMostrar,
row.Apodo,
row.Localizacion,
row.Genero,
row.Sexualidad,
row.ActDep,
row.ComePrimero,
row.ActPreHombres,
row.ActPreMujeres,
row.Prota
);
listaPersonas.push(auxPer);
});
}
/*This are the console.log of the picture, if I put them inside the connection query, they show all perfecty, the array content and length*/
console.log(listaPersonas);
console.log(listaPersonas.length);
If you need more info, tell me, please. I need to solve this. Without the length, I can't advance more
Have you tried returning the listaPersonas array after the loop, but inside the obtenerPersonas() method, and then referencing the method in your log?
connection.query is an asynchronous function, so its callback runs after console.log(listaPersonas); console.log(listaPersonas.length); have already executed. However, listaPersonas fills up later, which is why console.log(listaPersonas) still displays the contents. If you need the length, you have to write your code after calling the obtenerPersonas function.
connection.query("SELECT * FROM persona", function (error, rows) {
if (error) {
throw error;
} else {
obtenerPersonas(rows);
// write code here
}
})
Thanks for your answer. Could I ask you for a little advice on the code I have to write? Right now, I don't have any idea what it has to be
Just write there whatever you want to do with listaPersonas once it is filled up.
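The ordering pitfall is easier to see in a runnable sketch. This is a Python asyncio analogue (not the actual MySQL driver; query here is a hypothetical stand-in) of why the outer console.log sees length 0:

```python
import asyncio

# Hypothetical stand-in for connection.query: simulate I/O, then invoke the callback.
async def query(sql, callback):
    await asyncio.sleep(0.01)   # simulated database round-trip
    callback(["row1", "row2"])  # rows only arrive *later*

lista_personas = []
events = []

def obtener_personas(rows):
    lista_personas.extend(rows)
    events.append(("inside the callback, length =", len(lista_personas)))

async def main():
    task = asyncio.create_task(query("SELECT * FROM persona", obtener_personas))
    # This line runs BEFORE the callback, exactly like the outer console.log:
    events.append(("right after the call, length =", len(lista_personas)))
    await task  # only after awaiting is the list filled

asyncio.run(main())
print(events)
# [('right after the call, length =', 0), ('inside the callback, length =', 2)]
```

The fix is the same in both languages: anything that depends on the filled array must run inside (or after) the callback, never on the line following the asynchronous call.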
Powerful router or access points
Why don't we use more powerful routers that have better signal reception instead of adding access points to the network?
It's not clear what you hope someone will tell you... perhaps read the NE Q checklist...
I'm not sure I really understand the question, but I think you are confused. Why would routers have better signal reception than WAPs?
A router with Wi-Fi capability merely has a WAP added to it.
WAPs may be placed where you wouldn't place a router (e.g., in the ceiling in the middle of a room), moving the WAP closer to the clients. Multiple WAPs can all be on the same LAN, allowing roaming without reauthentication.
I know routers aren't more powerful than APs, but why don't we use more powerful routers/APs so we can install fewer of them?
More powerful doesn't mean better. In fact, it is often better to turn the power down. There are many factors involved. If you have more users per WAP (remember, a router is not more powerful than a WAP in any sense of Wi-Fi), you have worse performance. It is often better to have many WAPs with lower power to get better performance and coverage.
@MarcoMenardi, you must also remember that Wi-Fi communications are two-way. You could have an AP with a very powerful radio, but the client could be too far away, or have radio or physical interference, for the client's radio to reach back to the WAP. Rest assured, many very smart people have worked this stuff out. A prerequisite for proper Wi-Fi deployment is the wireless site-survey to determine proper WAP placement and radio power. This will determine the best performance and eliminate radio shadows.
So I think it's important here to get the terminology correct, even though it's VERY common that people use the wrong terms. Properly, an access point allows wireless clients to connect to the existing network (which is usually, but not exclusively, a wired ethernet network). This is distinct from a router, which forwards packets from one computer network to another, and often includes functionality such as network address and/or port translation. The confusion arises because many very common devices, especially home "wifi routers" put all of this functionality into a single device.
The "power" of a router can mean a lot of things, and it's not at all clear what's being asked here. At least one important distinction would be whether we're talking about "how good is this system at sending and receiving wireless signals" but another might be "how good is this device at doing packet forwarding?" - these might well be very different things for a given device.
In large deployments, you can have lots of access points that cover a large area (think about a large commercial building) but none of these access points are actually doing the routing or NAT/PAT functionality.
So... all that said, it will really help us answer the question if you describe in better detail what you're asking and use the terminology most appropriate. I briefly looked over the relevant wikipedia pages and they seem to be reasonably well written:
[1] https://en.wikipedia.org/wiki/Wireless_access_point
[2] https://en.wikipedia.org/wiki/Router_(computing)
[3] https://en.wikipedia.org/wiki/Network_address_translation
Can you change the way the GET method makes links for Django?
I'm using django for a website that has a searchbar setup with a simple form:
<form method="get" action="/browse">
<div class="input-group col-md-12">
<input type="text" name="searchquery" class="form-control input-lg" placeholder="Search" style="margin-right:1vw; border-radius: 5px;"/>
<span class="input-group-btn">
<button class="btn btn-primary btn-lg" type="submit">
{% fontawesome_icon 'search' color='white' %}
</button>
</span>
</div>
</form>
This creates URLs like this:
http://<IP_ADDRESS>:8000/browse/?searchquery=<searchquery>
However I've setup my django url like this:
http://<IP_ADDRESS>:8000/browse/<searchquery>/
I would like to use the second url (as it just looks a lot better in my opinion).
Is there a way I can make my form do this?
This isn't a question about Django. The browser simply can't do this with an HTML form. The action attribute of the form is set when it is loaded.
You could possibly write some JavaScript to make it do this. But that would be the wrong thing to do. Queries like search should be part of the querystring, not the URL.
Alright, thanks. Now I have the URL like http://<IP_ADDRESS>:8000/browse/?searchquery=test, but is there a way I can get rid of the slash after browse and have it like http://<IP_ADDRESS>:8000/browse?searchquery=test
Yes, don't put one in the URL pattern in urls.py.
That does not seem to work... I now get an error that Django has no URL for it...
HZX D882 transistor not blocking 12V DC current
I am not too savvy with electronics so I apologize in advance for any major ignorance on my part. I am trying to use a transistor (listed in the title) as a digital switch for my Raspberry Pi 4 to control. Here is the circuit diagram.
Here is a picture of how I hooked up the circuit:
I inserted an LED in the circuit (the 5mm one that comes with an Arduino board) and it is always on. I am using an adapter that I found laying around as the power source. It reads 12V, 1.5A on the back. I would like to only have the LED turn on when the base terminal is given the proper voltage. The LED should be off when the Raspberry Pi is not providing any power to the circuit. Am I not understanding something? Should I be using different resistors? Thank you for any help!
Well for one your emitter and collector are swapped. If that's actually what happened, it's likely your base-emitter junction is acting like a zener, and your rpi input is sinking current.
I realize that the arrow on the transistor and the LED collide but I used a 5V power source with a 330 ohm resistor between the power source and the LED in this configuration and the circuit worked as expected. Would this still be a problem?
That makes me even more certain that this is the problem. Swap the collector and emitter, your transistor's hooked up wrong.
I attached the wire that went to the collector to the emitter and the wire that went to the emitter to the collector. I turned the LED around also to allow current to flow through it but I still get the same problem. Did I misunderstand what you said?
The LED? I'm not sure what you mean by turning the LED around, everything other than the transistor should be the same.
My apologies. I meant that I swapped the long leg and the short leg of the LED around. It stayed in the same spot in the circuit. I did what you said but nothing changed. Is it possible that I need more or less resistors?
The LED was the right way around, only the transistor needed to change. I'm very confused as to how it's on at all if you swapped it... At that voltage it should have blown.
That's what I thought too. I will attach a picture of the circuit on the post. Hopefully that will shed some light on the problem.
The part number you have given indicates that it is an NPN transistor, but the way you connected the circuit seems incorrect.
In a switch configuration of NPN transistor, the emitter is usually connected to the ground and the collector is connected to the power source.
The emitter section is identified with the help of the arrow mark marked on the pin in the schematic. The direct opposite section of the emitter is the collector section.
Since you did connect the base of the transistor correctly to the Raspberry Pi, I hope that you are clear on the base side of the transistor.
The picture below is drawn as you did.
Picture A depicts that your Raspberry Pi is not supplying anything to the transistor and the LED is in an OFF state.
Picture B depicts that your Raspberry Pi is supplying power to the transistor and the LED is in an ON state.
To add more, I suspect the resistor value you have chosen and connected in the collector section of the transistor. The resistor connected there is to limit the current flow through the LED. If you did the calculations correctly, leave it.
If the current flow is higher than the LED needs, it will burn out the LED and you will see no result from it.
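Regarding that resistor calculation, a quick sketch of the standard series-resistor formula may help; the 2 V forward drop and 20 mA here are assumed typical values for a 5 mm indicator LED, not measured ones:

```java
public class LedResistor {
    // Series resistor for an LED: R = (Vsupply - Vf) / If
    static double seriesResistorOhms(double vSupply, double vForward, double currentA) {
        return (vSupply - vForward) / currentA;
    }

    public static void main(String[] args) {
        // 12 V supply, ~2 V LED forward drop, 20 mA target current: about 500 ohms
        System.out.println(seriesResistorOhms(12.0, 2.0, 0.020));
    }
}
```

By the same formula, a 330 Ω resistor on the 12 V supply would pass roughly 30 mA, above the usual 20 mA rating, so something in the 470-560 Ω range is the safer choice for the LED branch here.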
Have a nice day!
Thank you! I will try that out right now.
ArrayLists and Advanced Functions
I've got an ArrayList of ArrayLists of Strings that I need to process with an advanced function. I am modeling that function off of what I learned in "Learn Powershell Toolmaking in a Month of Lunches."
In order to make the functions flexible, they tell you in that book to set it up so that a parameter can be piped in, but also taken as an array. So basically, you make the advanced function accept ValueFromPipeline for a particular parameter, but then you also make that parameter type an array and add a ForEach-Object in your Process block.
What I am running into is that if I configure my function this way, it no longer works to pipe the values in, because then it treats them one at a time, and the ForEach-Object breaks each ArrayList into its constituent strings for processing. Is there an easy way to prevent this?
See code example below. When I pass $OutsideArray to Get-Data as a normal parameter (Get-Data $OutsideArray), it works. When I remove the ForEach-Object in the function, it works. But when I pipe $OutsideArray in as is, it no longer works.
Have tried stepping through to see if I can understand why it acts this way, but so far no luck.
function Get-Data {
[CmdletBinding()]
param (
[Parameter(Mandatory = $True, ValueFromPipeline = $True)]
[System.Collections.ArrayList]$ModuleBlocks
)
PROCESS {
$ModuleBlocks | % {
$test = $_
}
}
}
$OutsideArray = New-Object System.Collections.ArrayList
For ($i = 0; $i -lt 10; $i++) {
$InsideArray = New-Object System.Collections.ArrayList
For ($j = 0; $j -le 10; $j++) {
$InsideArray.Add("$i This is a test $j") | Out-Null
}
$OutsideArray.Add($InsideArray) | Out-Null
}
$OutsideArray | Get-Data
How about ValueFromPipelineByPropertyName?
Can you please expand on that a bit? Never heard of it.
I recommend running help about_Functions_Advanced_Parameters and reading the built-in help.
How do you know whether it works or not? Your Get-Data function never returns or emits anything
Removed $test = from your code. I'm getting the same output when running $OutsideArray | Get-Data and Get-Data $OutsideArray
I'm just stepping through it in an IDE to see how it's treating the data within the function.
How about ValueFromPipelineByPropertyName? You can pipe the arraylist in as a property of another object. I posted a similar answer here (unscored): Piping an arraylist from first script to second script
EDIT: I thought of a second way. Pass it in using the comma operator:
$a = [collections.arraylist](1,2,3)
,$a | foreach gettype
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True ArrayList System.Object
Why can't I view my UDP-received image in Android Studio?
I succeeded in sending a string from Android Studio Java UDP Client to Visual Studio C# UDP Server, and a response back from the Server to the Client.
Now I am trying to send an image back from the server instead of a string response. So on a client button click, I am supposed to get an image back from the server. There are no syntax errors and no weird behavior; it's just that nothing happens, I can't see the received image in my ImageView, and when I step over it my final Bitmap ReceivedImage is null.
Do I need to convert into something other than Bitmap? Is this the right way? Maybe something else.. Can anyone help me point out my mistake? Thanks
C# Server:
class UdpServer
{
static void Main(string[] args)
{
byte[] dataReceived = new byte[1024];
UdpClient serverSocket = new UdpClient(15000);
string path = "C:\\Users\\kkhalaf\\Desktop\\Capture.PNG"; // image path
FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read); // prepare
BinaryReader br = new BinaryReader(fs); // prepare
byte[] dataToSend = br.ReadBytes(Convert.ToInt16(fs.Length)); //Convert image to byte[]
int i = 0;
while (true) // this while for keeping the server "listening"
{
Console.WriteLine("Waiting for a UDP client..."); // display stuff
IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0); // prepare
dataReceived = serverSocket.Receive(ref sender); // receive packet
string stringData = Encoding.ASCII.GetString(dataReceived, 0, dataReceived.Length); // get string from packet
Console.WriteLine("Response from " + sender.Address); // display stuff
Console.WriteLine("Message " + i++ + ": " + stringData + "\n"); // display client's string
// Here I am sending back
serverSocket.Send(dataToSend, 8, sender);
}
}
}
Java Client: On button click this function gets called to send-receive-display
public void SendUdpMsg(final String msg)
{
Thread networkThread = new Thread() {
// No local Host <IP_ADDRESS> in Android
String host = "<IP_ADDRESS>"; // Server's IP
int port = 15000;
DatagramSocket dsocket = null;
public void run() {
try {
// Get the Internet address of the specified host
InetAddress address = InetAddress.getByName(host);
// wrap a packet
DatagramPacket packetToSend = new DatagramPacket(
msg.getBytes(),
msg.length(),
address, port);
// Create a datagram socket, send the packet through it.
dsocket = new DatagramSocket();
dsocket.send(packetToSend);
// Here, I am receiving the response
byte[] buffer = new byte[1024]; // prepare
DatagramPacket packetReceived = new DatagramPacket(buffer, buffer.length); // prepare
dsocket.receive(packetReceived); // receive packet
byte[] buff = packetReceived.getData(); // convert packet to byte[]
final Bitmap ReceivedImage = BitmapFactory.decodeByteArray(buff, 0, buff.length); // convert byte[] to image
runOnUiThread(new Runnable() {
@Override
public void run() {
// this is executed on the main (UI) thread
final ImageView imageView = (ImageView) findViewById(R.id.imageView);
imageView.setImageBitmap(ReceivedImage);
}
});
dsocket.close();
} catch (Exception e) {
e.printStackTrace();
}//catch
}//run
};// Networkthread
networkThread.start();//networkThread.start()
}
UDP is not designed for bulk transfers of data that needs to stay in order and arrive intact. UDP packets sent in some order are not guaranteed to arrive in that order, or even arrive at all (for example if you send packets A,B,C,D,E you may get A,B,D,E,C or even A,E,C,D). Your string fit into one packet, but for the image, a single lost packet can spell disaster. Use TCP instead (using a normal Socket instead of a DatagramSocket).
@hexafraction I think I am aware of this, but here I am testing with a 900-byte image, which should fit in my byte[] buffer = new byte[1024], no? However, my result is null, not misordered or truncated.
Isn't much of an answer, but since I can't comment, yolo
You should use TCP, it is so much better. Using UDP, I even lose data when it is 400 bytes.
It might be
serverSocket.Send(dataToSend, 8, sender);
Change it to
serverSocket.Send(dataToSend, dataToSend.Length, sender);
Also, try first sending the length of the image (dataToSend.Length) to the client, then have the client use the length you sent it to create a byte array. Kind of like
byte[] buffer = new byte[receivedLengthOfImage]
That's what I've been told: TCP. I am really a noob student trying to achieve big things, which is why I am excited and motivated. Would you like to share your goal and how you achieved it? Maybe part of the code? Anything to help me learn or move forward. And thanks for passing by, you helped anyway. Maybe now you can comment :)
Actually, this worked. I received the image without sending the Image length. Just wanted to tell you good job for catching it. However, on a new click I don't receive a new image. Do you know why?
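One more Java-side pitfall worth checking, separate from the Send length fix above: DatagramPacket.getData() returns the entire 1024-byte backing buffer, so decoding buff with buff.length feeds BitmapFactory trailing garbage; only the first getLength() bytes belong to the datagram. A minimal plain-Java (non-Android) loopback sketch of that idea; the class and method names are illustrative, not from the code above:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.Arrays;

public class UdpLengthDemo {

    // Sends a payload to a local socket and shows that the received
    // datagram must be trimmed to getLength(), not the raw buffer size.
    public static byte[] roundTrip(byte[] payload) {
        try (DatagramSocket receiver = new DatagramSocket(0);
             DatagramSocket sender = new DatagramSocket()) {
            InetAddress loop = InetAddress.getLoopbackAddress();
            sender.send(new DatagramPacket(payload, payload.length,
                    loop, receiver.getLocalPort()));

            byte[] buffer = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            receiver.setSoTimeout(2000);
            receiver.receive(packet);

            // Trim to the actual datagram size instead of decoding the raw buffer.
            return Arrays.copyOf(packet.getData(), packet.getLength());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] received = roundTrip("hello".getBytes());
        System.out.println(received.length); // 5, not 1024
    }
}
```

In the Android client, the equivalent would be decoding with BitmapFactory.decodeByteArray(buff, 0, packetReceived.getLength()). None of this removes the fundamental UDP caveats already mentioned: for anything larger than one datagram, TCP is the right tool.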
Changing Locations, please help?
I have been trying to figure out how to change the location, because it always thinks I am in Europe. Is there a way to change the location to what you want? For example, I want the computer to think I am in Paris, or I want it to look like Florida. How can I do this? If anybody has the answer, explain it like I am a 5-year-old.
Thanks.
what are you using, Tor Browser?
You can edit your torrc file, but you'll only be able to change to a specific country, or to a specific exit relay. If you wanted a particular city, you'd have to find a relay in that city by checking the IP addresses of all relays. https://www.torproject.org/docs/faq.html.en#ChooseEntryExit
Thanks Ron for helping, but I am still lost. I understand the torrc file needs to be edited, but how? More to the point, what examples can I use to understand the entry and exit points of Tor? Still a tad confused. Thanks.
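For reference, the country-pinning approach from the FAQ link above comes down to two torrc lines. This is only a sketch: {fr} is the ISO country code for France (use {us} for the USA), the file path shown is the usual Tor Browser layout but may differ on your install, and selection works per country or per specific relay fingerprint, never per city.

```
# Tor Browser torrc (typically Browser/TorBrowser/Data/Tor/torrc)
ExitNodes {fr}
StrictNodes 1
```

StrictNodes 1 makes Tor fail rather than silently fall back to exits in other countries; leave it out if you prefer a working circuit over a guaranteed location.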
Is there a mathematical subset relation in Java?
I want to express a mathematical subset relation in Java.
In Mathematics we have nice relations like: the integers are a subset of the real numbers.
I try to somehow express that in java with a Point class with generics:
public class Point<T extends Number> {
    protected Number xCoord;

    public T getXCoordinate() {
        // do some stuff
        return (T) this.xCoord;
    }
}
Unfortunately I have to cast to T here, and then I get the usual unchecked-conversion warning, which I want to avoid.
Here it would be nice if Number were instantiable, but unfortunately it is abstract. So I wonder if there is a common instantiable superclass of Integer and Double (with an explicit zero). I didn't find one. But mathematically we really do have N as a subset of R. How can I reflect this in Java? Here I am at the end of my knowledge of generics, because I cannot define a subclass (say InstNumber) of Number that is instantiable and a superclass of Integer and Double, since I am not able to change the java.lang package.
So does anyone have an idea how to "make Points of Integer a subset of Points of Double", or how to find a common instantiable class InstNumber? If not, why is type safety so hard at this point?
In the end I am trying to implement an algorithm for encryption with elliptic curves. To do so you often need that each point in Fp×Fp (p prime) ⊂ N×N is also a point in R×R. So I want to compute with these "numbers" even if I don't know the current subtype. Since mathematically a field is what matters here, it would be nice to have a kind of "abstract" but instantiable subclass of Number called Field, which is the parent class of all the children of Number. In Field it would be sufficient to have a concrete 0 and 1, which are present in most subclasses of Number. Thus Field could be some "mathematical field", i.e. a set with an explicit 0 and 1 (like Z/2Z, which is indeed a mathematical field). Now let, for instance, Integer be a subclass of Field. The only "additional job" of the Integer class is to map the "abstract" 0 and 1 of Field to the concrete 0 and 1 interpretation of Integer. This could also be implemented for the other subclasses.
I know that associativity is not really achievable here. But for most calculations and decisions in programming it would be nice to test whether a field element is 0 or 1 (specifically, in order to invert a field element). So I want to restrict myself to calculations with existing numbers in R, not things like NaN.
I am sorry that I could not give you more code, but I think it's too much code to post here. In short: I have a ProjectivePoint class which has to do the projective addition of points in R×R. But it would be nice if the calculations in this class were also available on Fp×Fp ⊂ R×R, which is mathematically more or less trivial, but I see no current solution in Java.
Very interesting question. I've wondered about stuff like that myself and I don't know that there are any really satisfying answers. Java's type system is probably not a great place to start for any kind of subtle question -- have you looked at Haskell? But if working with Java is a requirement, you'll probably find you'll have to make some compromises in order to get something working. PS. By the way, I don't understand why you didn't say T xCoord instead of Number xCoord. Maybe you can edit your post to explain. PPS. Probably cs.stackexchange.com is more suitable for this question.
In short: my first implementation was with Number, and I programmed a static class Calculator to calculate with Numbers based on their real subtype using the instanceof operator. But this Calculator class is very repetitive, in that I repeat the if (x instanceof Double || y instanceof Double) condition multiple times in the implementations of the operations Calculator.add, .sub, .mult and .div. So to tell the truth, I am not happy with my Calculator implementation either...
Could you please provide more code so it is clearer which problem you are trying to solve? It is currently not clear to me how you are planning to create these Point objects or why you want Number to be instanciable.
Sorry for the missing code, I have given more Information in the main thread.
Maybe you just just go ahead and define new classes which fit mathematical concepts better. You could have methods plus, times, inverse, etc. in a base class which are then redefined for each subclass. You might try to find out if anyone has already created such classes.
Unfortunately my base class has to be instantiable, otherwise I could again use java.lang.Number. I think I found a logical cat that bites its own tail here... Suppose for example I have a class Field which is the parent of all possible fields (like Z/2Z, Fp, Q and R). How should I define the elements in Field? Unfortunately they have to have a type, and this type should be overridden in the subclasses with elements like int or double. I think this is somehow impossible, or I don't understand enough about inheritance in Java. Even if I use an interface, the methods still have to have a return type... a logical cat!
May I ask why protected Number xCoord; is not protected T xCoord;?
The problem is that the formulas for projective points are "kind of the same" whether you calculate over Fp×Fp or over R×R. Hence I want to express this calculation somehow. To express a calculation I need methods like addition, multiplication, inversion and so on; hence T is too abstract to calculate with. Moreover, if you define such a finite set T and the needed operations, you also need to "embed" this set T into R (since mathematically Fp has an injection into R). The problem is that R is uncountable and hence in particular infinite, and Fp can be large for a large prime p.
Did you take a look in existing Java projects and how they do structure the hierarchy? https://github.com/kredel/java-algebra-system/tree/master/src/edu/jas/structure (java-algebra-system) or https://github.com/Hipparchus-Math/hipparchus/blob/master/hipparchus-core/src/main/java/org/hipparchus/FieldElement.java (hipparchus)
Thanks axelclk for this hint. I found @SuppressWarnings("unchecked") directives in there too. Fortunately, in the meantime I have constructed a solution myself, which I will post as soon as possible.
Thanks for all your support and ideas (especially the idea of defining an instantiable superclass, based on Robert's comment, drove me further); it's been a pleasure to discuss such things with all of you.
In short: I used polymorphism multiple times to do the calculations, and used an instantiable parent class Z2 of Fp and R.
I have constructed an inheritance tree where R extends Fp and Fp extends Z2 (all instantiable).
The calculations inside Z2 are easy to code. Fp, in contrast, uses integers to calculate values modulo p (where p is a prime passed to the constructor). R (or RealNumber) is a class that encapsulates a double and uses "doubled methods" (methods with double arguments and return type) for its calculations. In each inherited class I override the calculation methods with different parameters (i.e. Fp.add(int,int) or R.add(double,double)). Because of the instantiable class Z2 I can now define a ProjectivePoint and do all needed calculations on Z2. The polymorphism is located in my static Calculator class, where for example add(Z2,Z2) is defined. In my first implementation I use the instanceof operator to decide whether the Z2 argument of add is really a Z2 instance, an Fp instance or an R instance. (By "really" I mean explicitly; hence an Fp object is not an instance of R.) In a better version I think I can replace this by polymorphism.
The methods intValue() and doubleValue() are common from Z2 to R and implemented in the expected way, since each class encapsulates its own value.
Now my calculations in Calculator can be done "polymorphically" over Z2. For safety, my Calculator class has a reference Z2 instanceType = new Z2|Fp|R, depending on which field I want to calculate over. With all my add, mult, invert and so on methods implemented polymorphically in Calculator, I can now do the calculations for the ProjectivePoint on Z2. Again polymorphism decides which calculations are really done in Calculator (because R extends Fp extends Z2). Moreover Z2 is instantiable, and that is what made this succeed, since every calculation can be done polymorphically on Z2.
Note that 4x in a formula for x in Z2 can also be written as Calculator.add(Calculator.add(x,x),Calculator.add(x,x));
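For comparison with the instanceof-based Calculator, the interface pattern used by the libraries linked earlier (java-algebra-system, hipparchus) can be sketched as below. All names here (FieldElement, Fp, FieldDemo) are illustrative and not taken from either library; the recursive bound T extends FieldElement<T> is what lets generic code like square compile once and run over any field:

```java
interface FieldElement<T extends FieldElement<T>> {
    T add(T other);
    T mul(T other);
    T inverse();
    boolean isZero();
    boolean isOne();
}

// Integers modulo a prime p form a field.
final class Fp implements FieldElement<Fp> {
    final int value;
    final int p;

    Fp(int value, int p) {
        this.p = p;
        this.value = ((value % p) + p) % p;
    }

    public Fp add(Fp o) { return new Fp(value + o.value, p); }
    public Fp mul(Fp o) { return new Fp(value * o.value, p); }

    public Fp inverse() {
        // Fermat's little theorem: a^(p-2) mod p; fine for small p.
        int r = 1, b = value, e = p - 2;
        while (e > 0) {
            if ((e & 1) == 1) r = r * b % p;
            b = b * b % p;
            e >>= 1;
        }
        return new Fp(r, p);
    }

    public boolean isZero() { return value == 0; }
    public boolean isOne() { return value == 1; }
}

public class FieldDemo {
    // A generic routine that works over any field, like the projective formulas.
    static <T extends FieldElement<T>> T square(T x) {
        return x.mul(x);
    }

    public static void main(String[] args) {
        Fp three = new Fp(3, 7);
        System.out.println(FieldDemo.square(three).value);      // 2 (9 mod 7)
        System.out.println(three.mul(three.inverse()).isOne()); // true
    }
}
```

With this shape, a ProjectivePoint<T extends FieldElement<T>> never needs instanceof checks: the field operations travel with the element type, and an R-style class wrapping a double would implement the same interface.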
Understanding oscillation frequency for SAF-C509 processor
Good day.
For the time being, I'm reversing an automotive ECU which has an Intel 8051-architecture processor: the SAF-C509 by Infineon/Siemens. Its command execution has 6 states with 2 phases for each state. Hence a single machine cycle is executed within 12 semi-phases.
One can add an external oscillator to set the frequency of the processor. Let's assume that this frequency is 16 MHz.
The user's manual on the processor says:
XTAL1 and XTAL2 are the output and input of a single-stage on-chip
inverter which can be configured with off-chip components as a Pierce oscillator.
The oscillator, in any case, drives the internal clock generator.
The clock generator provides the internal clock signals to the chip at half
the oscillator frequency.
These signals define the internal phases, states and machine cycles.
Now, I can't see in any obvious way what exactly oscillates at which frequency. Is this 16MHz/2 the frequency of the semi-phases, or of something smaller?
Rephrasing the question: what's the frequency of machine cycles with 16MHz oscillator connected to the processor?
A 8051 machine cycle is 12 clocks. At 12MHz the machine cycle would be 1us. At 16MHz it would be 750ns.
You can measure the ALE signal, if external memory is used, to be sure.
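To make the arithmetic above concrete: if the classic 8051 timing applies here, as the answer says, one machine cycle is 6 states x 2 phases = 12 oscillator periods, and the internal f_osc/2 clock the manual mentions is just how those states are derived internally. A quick sketch (class and method names are illustrative):

```java
public class MachineCycle {
    // Classic 8051 timing: one machine cycle = 6 states x 2 phases
    // = 12 oscillator clock periods.
    static double machineCycleNs(double oscHz) {
        return 12.0 / oscHz * 1e9;
    }

    static double machineCycleHz(double oscHz) {
        return oscHz / 12.0;
    }

    public static void main(String[] args) {
        System.out.println(machineCycleNs(16e6)); // 750 ns per machine cycle
        System.out.println(machineCycleHz(16e6)); // ~1.33 MHz machine-cycle rate
    }
}
```

So with a 16 MHz crystal, machine cycles run at roughly 1.33 MHz (750 ns each), matching the 1 us cycle at 12 MHz quoted above.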
@Kartman Thanks for your response! So, when they refer to f_osc in the manual, is it the clock itself (the one connected to the processor) or the machine-cycle frequency? I wouldn't have asked about this if they had stated the names explicitly somewhere in the manual. Oh, and the other thing here: where did the half frequency go? I mean, the manual states "The clock generator provides the internal clock signals to the chip at half the oscillator frequency." Is this halving for internal purposes, and are the phases clocked at the original oscillator frequency?
I want to create an output with all the Azure AD applications, which returns a) a listing of all apps and b) a listing of expiring apps
This is what I have so far, but the $expirationlist variable doesn't return a listing of ALL the expiring applications.
$date= get-date
$expirationdate= $date.AddDays(30)
$ADApplications = Get-AzADApplication
$result = foreach ($application in $ADApplications)
{
$credentials = Get-AzADAppCredential -ApplicationId $application.ApplicationId
$StartDate = $credentials.StartDate
$EndDate = $credentials.EndDate
[PSCustomObject]@{
ApplicationName = $application.DisplayName
ApplicationID = $application.ApplicationId
ObjectID = $application.ObjectId
CreationDate = $StartDate
ExpirationDate = $EndDate
}
if($EndDate -lt $expirationdate)
{
$expirationlist = [PSCustomObject]@{
ApplicationName = $application.DisplayName
ApplicationID = $application.ApplicationId
ObjectID = $application.ObjectId
CreationDate = $StartDate
ExpirationDate = $EndDate
}
}
}
$expirationlist
The error comes from casting a PSCustomObject with a $null key:
Example:
PS /> [pscustomobject]@{ $null = 'asd' }
A null key is not allowed in a hash literal.
At line:1 char:19
+ [pscustomobject]@{ $null = 'asd' }
+ ~~~~~
+ CategoryInfo : InvalidOperation: (System.Collecti...deredDictionary:OrderedDictionary) [], RuntimeException
+ FullyQualifiedErrorId : InvalidNullKey
Since these variables are not defined they are basically null:
$ApplicationName, $ApplicationID, $ObjectID, etc.
Try this, it should work; I also added a minor efficiency improvement:
$ADApplications = Get-AzADApplication
$result = foreach ($application in $ADApplications)
{
$credentials = Get-AzADAppCredential -ApplicationId $application.ApplicationId
foreach($credential in $credentials)
{
$StartDate = $credential.StartDate
$EndDate = $credential.EndDate
[PSCustomObject]@{
ApplicationName = $application.DisplayName
ApplicationID = $application.ApplicationId
ObjectID = $application.ObjectId
CreationDate = $StartDate
ExpirationDate = $EndDate
}
}
}
$result | Where-Object {[datetime]$_.EndDate -lt [datetime]::Now.AddDays(30)}
Thank you this worked! I've updated the original question with another section of the code, that doesn't work. I used the same method you had, but it's not exactly working. Do you have any ideas as to why?
I tried that but it returned all of them instead
I tried the following instead and it worked! $result | Where-Object {[datetime]$_.ExpirationDate -lt [datetime]:: Now.AddDays(30)}
Is there any way to show ALL the different credentials per application? Currently, if there is more than one, it just outputs [System.Object] for that app.
@TechGeek I edited the answer, it should give you the expected output now.
Referenced DLLs (built from referenced projects) are locked after each build in a Xamarin Forms Portable project?
I'm building my first Xamarin Forms app (I've built one before, but it's a Xamarin Android app). There is one weird issue that prevents me from debugging my app each time. The main Xamarin Forms Portable project references 2 other portable library projects. So after each build of the Android project (just as an example), the 2 output DLLs (from the referenced projects) are locked (by devenv.exe). That means I cannot simply rebuild while keeping VS open. I have to close VS and delete the output DLLs manually before reopening my projects and building again. That's completely unacceptable.
I've found out that the output DLLs are somehow cached in a folder named ProjectAssemblies, which is usually located at C:\Users\[UserAccount]\AppData\Local\Microsoft\VisualStudio\14.0\ProjectAssemblies (for Visual Studio 2015).
BUT that's just a strange signal I've noticed. I mean, it seems not to be the actual problem, because even when I deleted all the files there, after restarting VS the output DLLs are somehow still loaded into devenv.exe (I used Process Hacker to find those DLLs and confirm that they are loaded by devenv.exe). The only way to prevent that is to delete the output DLLs in the bin folder. But that works just once; when rebuilding the projects I have to redo all those steps (closing VS, deleting output DLLs, reopening VS).
The only remaining way may be to just add the output DLLs as references (not the projects), but that would not allow debugging code in the referenced projects easily (the referenced projects are part of the main solution; they are not ready-to-use separate libraries).
Update
By using Process Hacker, I know that some essential DLLs for a Xamarin Forms project (such as Xamarin.Forms.Core.dll) are also loaded into devenv.exe, BUT they are loaded from the path in ProjectAssemblies (as mentioned before). So those DLLs are not locked; they can be cleaned fine from the bin folder. But somehow my project DLLs are not treated the same way. There seem to be 2 versions loaded into devenv.exe: one from ProjectAssemblies and the other from the bin folder (of the platform-specific projects, not of the main portable project). That's why they are locked and cannot be cleaned/overwritten while rebuilding.
I had a similar problem with VS2017 and Xamarin.Forms project. I fixed it by deleting all of the bin and obj folders in all of the referenced projects and the main app project. Shut down VS, in windows explorer or command prompt go through all of the referenced projects, delete their bin and obj folders. After restarting VS and doing a complete rebuild, my builds have worked ever since. This was on VS2017, so it's a little different, so not sure if VS2015 can be worked around in the same way.
@PedroSilva thank you for your input. Well actually I found out that the problem raises only after a build is failed (such as for some syntax error). So once that happens, I need to close VS, delete the dlls in bin folder and reopen VS. But if I carefully avoid syntax errors before building and debugging, it should work fine.
I think you are hitting this issue ("Compilation fails because process cannot access dll") that was fixed in version 15.2 (26430.12). Check out the release notes.
That particular bug was a nightmare for many devs, me included.
If that´s the case, updating VisualStudio will fix it.
I'm not sure if this is my case (see my comment to Pedro Silva on my question), but still, thank you for the answer. Currently I will avoid updating VS (the version I have is 14.0 (23107.0)) because of how time-consuming it may be. I'll give this answer an upvote, but I'm still not sure about accepting it.
ADFS Implementation On Angular
I don't have any idea how to implement ADFS authentication with username and password from a frontend application. Node.js will be the backend.
Please guide me on this..
Hi Dinesh, welcome to Stack Overflow! Your question as it stands isn't really answerable on here. Generally speaking, here on SO, we ask questions when we've tried something, gotten stuck, and been unable to find an answer elsewhere. It seems like this question doesn't fulfil those criteria. I'd advise you do some research online (there are a bunch of tutorials for implementing ADFS on Angular) and come back if you get stuck somewhere and can't find an answer. How does that sound?
Hey Will... I had done research on this, but I didn't get an answer. Please share your ideas.
It's great if you've done research. The best way of getting an answer is by showing that research, and showing why what you found doesn't work for you.
Please provide enough code so others can better understand or reproduce the problem.
2 TextWatchers for 1 EditText
I'm working on a simple Android app project for myself. I found an article that made me decide to use TextWatcher. So, simply, I added a TextWatcher to my code. But here's the thing: I have designed 2 RadioButtons and 1 EditText. The RadioButtons determine the format of my one and only EditText.
Here's how I want the app to behave:
If the user presses Button A, then the input format for the EditText will be like this:
ABCD 1234 EFGH 5678
So, it has a space after every 4 characters.
If the user presses Button B, then the input format for the EditText will be like this:
ABCD1234EFGH5678
So, it has no spaces between the 4-character groups.
Even if the user switches between the buttons repeatedly, the EditText format should follow the selected condition.
This is What I've tried:
I used an if-else condition and even created 2 TextWatchers to determine the case, but still, here's what I've got:
The user presses Button B; the EditText format will be like ABCD1234EFGH5678
Then the user presses Button A; the EditText format will be like ABCD 1234 EFGH 5678
When the user presses Button B again, the EditText format stays ABCD 1234 EFGH 5678 instead of ABCD1234EFGH5678
How am I supposed to achieve this behavior?
Here's my MainViewGenerator.java:
this.editTextNumberSpaceTextWatcher = new EditTextNumberSpaceTextWatcher() {
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
super.onTextChanged(s, start, before, count);
editTextString = s.toString();
}
};
this.editTextNumberNormalTextWatcher = new EditTextNumberNormalTextWatcher() {
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
super.onTextChanged(s, start, before, count);
editTextString = s.toString().replaceAll(" ", "");
}
};
withId(R.id.my_main_editText, new Anvil.Renderable() {
@Override
public void view() {
if (!btnRadioGroupValue.isEmpty()){
if (btnRadioGroupValue.equals(valueButtonA)){
onTextChanged(editTextNumberSpaceTextWatcher);
text(editTextString);
} else if (btnRadioGroupValue.equals(valueButtonB)){
onTextChanged(editTextNumberNormalTextWatcher);
text(editTextString);
}
}
}
});
Here's my EditTextNumberSpaceTextWatcher.java:
public abstract class EditTextNumberSpaceTextWatcher extends PhoneTextWatcher {
private static final String MY_NUMBER_FORMAT = "**** **** **** **** **** **** **** ****";
public EditTextNumberSpaceTextWatcher() {
super(MY_NUMBER_FORMAT);
}
@Override
public char[] getSeparatorCharacters() {
return new char[]{' '};
}
}
Here's my EditTextNumberNormalTextWatcher.java:
public abstract class EditTextNumberNormalTextWatcher extends PhoneTextWatcher {
private static final String MY_NUMBER_FORMAT = "********************************";
public EditTextNumberNormalTextWatcher() {
super(MY_NUMBER_FORMAT);
}
}
I think you can use just one TextWatcher:
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
//here evaluate the value of radio button
if(mRadioButton1.isChecked()){
//set pattern
}else if(mRadioBUtton2.isChecked()){
//set another pattern
}
}
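Whatever watcher arrangement you use, the actual formatting step must be safe to apply to text that is already formatted — that is exactly what goes wrong in the question, where the spaced format is re-applied to already-spaced text. The string logic itself is easy to isolate; here is a sketch in Python, purely to illustrate the idea (not Android code):

```python
def strip_spaces(s):
    """Remove all spaces, recovering the raw input (Button B format)."""
    return s.replace(" ", "")

def group_by_four(s):
    """Insert a space after every 4 characters (Button A format).

    Always strip first, so re-formatting text that already contains
    spaces is safe (idempotent)."""
    raw = strip_spaces(s)
    return " ".join(raw[i:i + 4] for i in range(0, len(raw), 4))
```

Because `group_by_four` strips first, pressing Button A repeatedly (or after Button B) always yields the same result, avoiding the stuck-format behaviour described above.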
I've done this before, but still, once Button A is pressed, the EditText keeps following Button A's format even after I press Button B @Igmer Rodriguez
Wristbands and nausea
Acupressure wristbands are widely sold as a remedy for nausea caused by motion sickness. I have no doubt that they work for some people, because I experience exactly the opposite effect: elasticized wristbands, such as those found on sweatshirts, often make me feel extremely nauseous. There are some kinds of garments I simply can't wear; for others, stretching the wristbands out until they are saggy is good enough.
My two part question is this: first, how widely is my problem experienced by others? And second, is there any scientific understanding of this (apparently very real) connection between wrist pressure and the feeling of nausea?
How to Format Excel Bar Chart Date Axis Tied to Pivot
In the bar chart below I want to format the X-axis to be "7/21". You can see that I've applied the format in my pivot chart, but that didn't help. Then in the Format Axis panel, at the bottom under the Number section, the formatting options for Date are there, but nothing happens if I press the Add button.
Any ideas?
I just checked, and I have exactly the same issue, whatever I change nothing happens (I have Excel for Office 365 16.0.12527.20880 64 bit)
seems to be a bug.
I have version 16.0.130001.20338 64-bit, and it seems to be fine. Can you update yours and check if the issue persists?
I've a work pc, unfortunately I don't control updates.
I just tried it on 16.0.13426.20308 and nothing happens when I change the formatting using this method.
You can try to set it this way:
Right-click on the date button on the pivot chart, then select Field settings.
On the windows that opens, click on Number format.
Set the date format, then click Ok, and OK once more. The date format should be set as you selected.
This approach doesn't have any effect for me. The formatting remains no matter what I select.
I just had the same problem - the number format button would not show up in the field list. So I reset the pivot data source to just the rows containing the data (I had previously selected the whole columns ($A:$J)). With that done, the button reappeared and the date-format changes worked for the graph as well. Then, after formatting the whole date column to "Date" in the source data, it also worked with whole columns as the source, although that reformatting step had not been successful before changing the pivot data source. So there seems to be a bug which keeps a column from being recognised as "date" as long as something else sits there.
How to add a MouseListener to a randomly drawn image in Java?
I am building a game in Java and I need to add a MouseListener to a randomly drawn image in my game.
I made the image appear at random places every x seconds; when I click the image I would like to add 1 point to the scoreboard.
My code for the random image adding is:
Random rand = new Random();
private int x = 0;
private int y = 0;
Timer timer = new Timer(700, new ActionListener(){
@Override
public void actionPerformed(ActionEvent e) {
x = rand.nextInt(400);
y = rand.nextInt(330);
repaint();
}
});
public void mousePressed (MouseEvent me) {
// Do something here
repaint();
}
public void paintComponent(Graphics g) {
super.paintComponent(g);
g.drawImage(achtergrond, 0, 0, this.getWidth(), this.getHeight(), null);
//g.drawImage(muisje, 10, 10, null);
g.drawImage(muisje, x, y, 100, 100, this);
}
I looked on Google and found that I need to add a new class with a mouse event, but how do I add this? It's not clear to me because I'm just a beginner in Java.
You need to set your mouse listener implementation on the component you're drawing the image at. Since you seem to be implementing everything in the same class, I guess you could do this.addMouseListener(this);
http://docs.oracle.com/javase/tutorial/uiswing/events/mouselistener.html
As you can see I have two images, achtergrond (background) and muisje (mouse/rat). I only need the mouse event on the second image (muisje), so I don't think a MouseListener on the whole JPanel alone will do it.
You could solve it both ways: drawing the images to different components and adding your mouse listener to both of them, or adding your mouse listener to the component that draws everything and checking the images boundaries to figure out which image the mouse is over (like @StuPointerException said).
You need to register a MouseListener for your Gamevenster class. Instead of making the class implement MouseListener, just use a MouseAdapter, where you only have to implement the method mouseClicked. So in your constructor, you do something like this:
private int score = 0;
private JLabel scoreLabel = new JLabel("Score: " + score);
public Gamevenster() {
scoreLabel.setFont(new Font("impact", Font.PLAIN, 30));
scoreLabel.setForeground(Color.WHITE);
add(scoreLabel);
addMouseListener(new MouseAdapter(){
@Override
public void mouseClicked(MouseEvent e) {
Point p = e.getPoint();
int clickX = (int)p.getX();
int clickY = (int)p.getY();
if(clickX > x && clickX < x + 100 && clickY > y && clickY < y + 100) {
score++;
scoreLabel.setText("Score: " + score);
}
}
});
}
This helped me a lot :) When I click the image, I get a score now! When I click somewhere other than the image, nothing is done with the score! Thanks a lot.
You know where the image is drawn (x,y) and you know the size of the image (100,100), therefore to tell if the mouse click is inside the image you can do something like this:
public void mousePressed (MouseEvent me) {
int clickX = me.getX(); // component-relative coordinates, matching the (x, y) the image is drawn at
int clickY = me.getY();
if(clickX > x && clickX < x+100 && clickY > y && clickY < y+100) {
//the image has been clicked
}
repaint();
}
The class you're writing can then implement MouseListener.
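That bounding-box check is plain geometry and can be verified independently of Swing. Here is a minimal sketch in Python, purely for illustration (the image is assumed to be drawn at `(x, y)` with size 100×100, as in the question):

```python
def image_clicked(click_x, click_y, x, y, size=100):
    """Return True when the click lands strictly inside the
    size-by-size image drawn at (x, y)."""
    return x < click_x < x + size and y < click_y < y + size

# Example: image drawn at (10, 10); a click at (50, 50) hits it,
# a click at (5, 50) misses to the left.
```

Note the strict inequalities: a click exactly on the top-left corner is treated as a miss, mirroring the `>`/`<` comparisons in the Java snippet above.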
EDIT in response to comment:
You don't need to link the code to the image, the component that you're writing should implement the mouse listener since this maintains the state and knows where the image is drawn. I would start out by looking at this link and implementing a basic MouseListener to print out the x and y co-ordinates of mouse clicks on your component:
http://docs.oracle.com/javase/tutorial/uiswing/events/mouselistener.html
Example component implementing Mouse Listener:
public class TestComponent extends JComponent implements MouseListener {
public TestComponent() {
this.addMouseListener(this);
}
@Override
public void mouseClicked(MouseEvent e) {
int clickedX = e.getX();
int clickedY = e.getY();
System.out.println("User Clicked: " + clickedX + ", " + clickedY);
}
@Override
public void mousePressed(MouseEvent e) {}
@Override
public void mouseReleased(MouseEvent e) {}
@Override
public void mouseEntered(MouseEvent e) {}
@Override
public void mouseExited(MouseEvent e) {}
}
I don't know how to link this event to the image, I'm new to Java. It's not that easy to understand the code.
I really can't figure out how to do this :/
I've added a basic example to the answer.
I've added your example code to my project and tried it, but I don't get a system.out.
How can I use nested loop for widget?
I understand I can use a single loop to build widgets, like this:
final icon =["lock.png","map.png","stake.png","cheer.png","sushi.png"];
// omit
Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
for(int i = 0; i < icon.length; i++) ... {
selectIcon(id: 1, iconPass: icon[i]),
},
],
),
But when those widgets structured as nested like:
final icon =["lock.png","map.png","stake.png","cheer.png","sushi.png","drink.png","toy.png","christmas.png","newyear.png","flag.png"];
//omit,
child: Column(
children: [
Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
selectIcon(id: 1, iconPass: "lock.png"),
selectIcon(id: 1, iconPass: "map.png"),
selectIcon(id: 1, iconPass: "sushi.png"),
selectIcon(id: 1, iconPass: "stake.png"),
selectIcon(id: 1, iconPass: "cheer.png"),
],
),
Container(
margin: const EdgeInsets.only(top:10),
child: Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
selectIcon(id: 1, iconPass: "drink.png"),
selectIcon(id: 1, iconPass: "toy.png"),
selectIcon(id: 1, iconPass: "christmas.png"),
selectIcon(id: 1, iconPass: "newyear.png"),
selectIcon(id: 1, iconPass: "flag.png"),
],
),
),
]
);
How can I build nested widgets with loops in Flutter/Dart? Or is it not possible?
Where did you find such a strange form as ... { selectIcon() }? Instead just use selectIcon() after the for loop.
I don't see any "nested" data structure here, only a simple Array.
@pskink it is selfdefied class, and sorry for inadequate explanation
@Tuan Nesting things are just for widgets but not data itself, sorry
You can use collection package to split your icon into lists that have 5 items in them like this:
class _CustomTitleWidgetState extends State<CustomTitleWidget> {
final icon = [
"lock.png",
"map.png",
"stake.png",
"cheer.png",
"sushi.png",
"drink.png",
"toy.png",
"christmas.png",
"newyear.png",
"flag.png"
];
Widget selectIcon({required int id, required String iconPass}) {
return Container(
child: Text(iconPass),
);
}
@override
Widget build(BuildContext context) {
var iconChunks = icon.slices(5).toList();
return Scaffold(
appBar: AppBar(),
backgroundColor: Colors.white,
body: Column(
children: iconChunks
.map((e) => Padding(
padding: const EdgeInsets.only(bottom: 10.0),
child: Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children:
e.map((e) => selectIcon(id: 1, iconPass: e)).toList(),
),
))
.toList()),
);
}
}
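The `slices(5)` call from the collection package simply partitions the icon list into consecutive runs of at most 5, one run per Row. The same chunking logic, sketched in Python for illustration:

```python
def chunks(items, size=5):
    """Split items into consecutive runs of at most `size` elements,
    mirroring the collection package's slices() used above."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```

With 16 items this yields 4 chunks: three full rows of 5 and a final row of 1.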
Thanks for the comment, and may I ask: should I use map() instead of for, or is it that I cannot use a for statement in the case above?
I recommend using map; I don't see a need for a for loop in this situation.
icon is already a list - so it is simple to produce a list of Widgets using map and toList. For is useful when you don't have a list to begin with - for example - you need to put 5 Text widgets in children. You can use List.generate, but for perhaps looks better.
Thanks a lot. If the number of items in icon[] is unpredictable, how should I use getRange(X, Y) in a case like this?
It depends on how you want to split them into two lists. Do you want to split them in half and give them to those rows? @SugiuraY
I'm sorry, I should explain more accurately:
1. I cannot predict the number of items in icon[]
2. a Row object contains five items at most
3. so if there are more than five items in icon[], they should be structured into several rows
You mean if the icons have 15 items, you want 3 rows each containing 5 items? @SugiuraY
for example, if 16 items exist in unpredictable icon[], there are 4 rows and five items for 1st to 3rd row and one item for 4th row.
Yes, I do mean that. I expressed it as nested widgets, but sorry, that was not a good way to ask.
I updated my answer, check it out @SugiuraY
this is really impressive way for me and I should say super super thank you! I could not imagine in this way..
and thanks for complete my selfdefined class which I have not disclosed!
You can try to use List.generate to generate a list of widgets.
final icon = [
"lock.png",
"map.png",
"stake.png",
"cheer.png",
"sushi.png"
];
Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: List.generate(
icon.length,
(i) => selectedIcon(
id: 1,
iconPass: icon[i]
)
)
);
Thanks for your comment; this is the first time I've seen how to generate a list of widgets!
Here's a way to do it dynamically that works with an item list of any size. It will group your items in groups of 5:
Column(
children: [
for(int i = 0; i < icon.length / 5; i++)
if (i == 0)
Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
for (int j = 5 * i; j < min(icon.length, 5 * i + 5); j++) selectIcon(id: 1, iconPass: icon[j])
],
)
else
Container(
margin: const EdgeInsets.only(top:10),
child: Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
for (int j = 5 * i; j < min(icon.length, 5 * i + 5); j++) selectIcon(id: 1, iconPass: icon[j])
],
),
),
]
)
Thanks for your comments, and sorry for my bad explanation, as I wanted to generate each row using a loop as well.
@SugiuraY it was clear to me. And I do believe my answer also works for your situation. But of course eamirho3ein's also works well
Getting values of an object after iteration
I have this object I copied from my console:
Object { input_name: "hi", input_type: "world", input_number: "200" }
which i had earlier put together this way
var post = {
input_name: name,
input_type: type,
input_number: number
};
console.log(post);
I am passing the data as an array to another function which does the inserting into MongoDB. I need to get the first, second and third values in separate variables in order to insert them into the database.
I have tried this
for (var key in post) {
var one = post[key];
console.log(one);
break;
}
and have only got the first value (the break stops the loop after one iteration). How can I hold the three values, each in its own variable?
The order of iteration of the keys in a for...in loop is not guaranteed. So don't do that. You know what the properties are called, so access them as post["input_name"] or post.input_name.
You don't need to have new variable per each value. Just access object properties by it's keys:
console.log(post.input_name);
console.log(post.input_type);
console.log(post.input_number);
Remove the break from your loop; your code should be:
for (var key in post) {
var one = post[key];
console.log(one);
}
The order of iteration is not guaranteed, however.
Yeah, that's a thing though.
Is it possible to define a Sonarqube portfolio with multiple criteria?
For our reporting purposes, I've created a few portfolios in Sonarqube. They are configured to associate projects by regular expression. However, we have now deprecated a number of projects, and would like to remove those from the portfolio, but keep them for historical purposes. Is it possible to add a secondary configuration criterion to say exclude those with a "DEPRECATED" tag? We are running Sonarqube enterprise 8.4.
Thanks
Received a response from support: It is not possible.
animated sprite from texturepacker xml
I'm new to Android dev and AndEngine in general. I'm trying to animate a sprite using the AndEngineTexturePackerExtension, but I'm unsure how the TiledTextureRegion gets created for the animated sprite. Below is what I'm trying, which I have gotten from guides and other posts in this forum. I'm creating the xml, png and java from TexturePacker.
private TexturePackTextureRegionLibrary mSpritesheetTexturePackTextureRegionLibrary;
private TexturePack texturePack;
try
{
TexturePackLoader texturePackLoader = new TexturePackLoader(activity.getTextureManager());
texturePack = texturePackLoader.loadFromAsset(activity.getAssets(), "spritesheet.xml");
texturePack.loadTexture();
mSpritesheetTexturePackTextureRegionLibrary = texturePack.getTexturePackTextureRegionLibrary();
}
catch (TexturePackParseException e)
{
// TODO Auto-generated catch block
e.printStackTrace();
}
TexturePackerTextureRegion textureRegion = mSpritesheetTexturePackTextureRegionLibrary.
get(spritesheet.00000_ID);
TiledTextureRegion tiledTextureRegion = TiledTextureRegion.create(texturePack.getTexture(),
textureRegion.getSourceX(), textureRegion.getSourceY(),
textureRegion.getSourceWidth() , textureRegion.getSourceHeight() ,
COLUMNS, ROWS);
AnimatedSprite sprite = new AnimatedSprite((activity.CAMERA_WIDTH - tiledTextureRegion.getWidth()) / 2,
(activity.CAMERA_HEIGHT - tiledTextureRegion.getHeight()) / 2,
tiledTextureRegion, activity.getVertexBufferObjectManager());
The problem is that I don't understand where the values of COLUMNS and ROWS come from. The sprite sheet itself has uneven rows and columns, as it includes rotated sprites etc., so I'm confused as to where these values come from. Any help on getting this working would be great, thanks.
Edit: OK, I can get the sprite sheet animation working if I just use the basic algorithm within TexturePacker and not the MaxRects algorithm. But this doesn't make use of all the space within a sheet, so I would rather get it working using a MaxRects-generated sprite sheet. I see within the xml that it passes a bool for being rotated or not, so the information is there to make this work; I just can't figure out how. How do I use a TexturePackerTextureRegion to make an animated sprite when some of the textures are rotated on the sheet?
I have also had this issue. I disabled the Trim option and used the basic algorithm instead of MaxRects, and it is fine right now.
TexturePacker doesn't know how many columns and rows your sprites have, nor even whether they are tiled or not; it just packs everything into a single spritesheet (a png file, for example). Also, its goal isn't to create TiledSprites from separated Sprites.
So, in order to get back a TiledSprite (or AnimatedSprite) from a spritesheet, you have to know how many columns and rows (it can be hardcoded somewhere) it had before being put into the spritesheet, since TexturePacker won't give you that kind of information.
I personally use a TextureRegionFactory which looks like this:
public class TexturesFactory {
public static final String SPRITESHEET_DIR = "gfx/spritesheets/";
public static final String TEXTURES_DIR = SPRITESHEET_DIR+"textures/";
private TexturePack mTexturePack;
private TexturePackTextureRegionLibrary mTextureRegionLibrary;
/**
*
* @param pEngine
* @param pContext
* @param filename
*/
public void loadSpritesheet(Engine pEngine, Context pContext, String filename) {
try {
this.mTexturePack = new TexturePackLoader(
pEngine.getTextureManager(), TEXTURES_DIR).loadFromAsset(
pContext.getAssets(), filename);
this.mTextureRegionLibrary = this.mTexturePack.getTexturePackTextureRegionLibrary();
this.mTexturePack.getTexture().load();
} catch (TexturePackParseException ex) {
Log.e("Factory", ex.getMessage(), ex);
}
}
public TextureRegion getRegion(int id) {
return this.mTextureRegionLibrary.get(id);
}
public TiledTextureRegion getTiled(int id, final int rows, final int columns) {
TexturePackerTextureRegion packedTextureRegion = this.mTextureRegionLibrary.get(id);
return TiledTextureRegion.create(
packedTextureRegion.getTexture(),
(int) packedTextureRegion.getTextureX(),
(int) packedTextureRegion.getTextureY(),
(int) packedTextureRegion.getWidth(),
(int) packedTextureRegion.getHeight(),
columns,
rows);
}
}
Edit: About the rotated problem, it is written inside the xml generated by TexturePacker, so you can get it by calling
TexturePackerTextureRegion packedTextureRegion = this.mTextureRegionLibrary.get(id);
packedTextureRegion.isRotated()
Then, you can create the tiledTextureRegion according to that value with:
TiledTextureRegion.create(
packedTextureRegion.getTexture(),
(int) packedTextureRegion.getTextureX(),
(int) packedTextureRegion.getTextureY(),
(int) packedTextureRegion.getWidth(),
(int) packedTextureRegion.getHeight(),
columns,
rows,
packedTextureRegion.isRotated());
Also, I hope it is clear to you that TexturePacker isn't meant to create tiled sprites. You must create your tiled sprites (nice fit or rows and columns) before using TexturePacker.
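For reference, once you know the packed region's position/size and the hardcoded rows and columns, the per-frame rectangles are simple arithmetic. Here is a Python sketch of that computation — an illustration of the idea only, not AndEngine's actual implementation, and it ignores rotation:

```python
def tile_rects(tex_x, tex_y, width, height, columns, rows):
    """Split a packed region into row-major frame rectangles (x, y, w, h)."""
    tile_w, tile_h = width // columns, height // rows
    return [(tex_x + c * tile_w, tex_y + r * tile_h, tile_w, tile_h)
            for r in range(rows) for c in range(columns)]
```

A rotated region would additionally need its width/height swapped and the frame coordinates transposed, which is presumably what the isRotated() overload of TiledTextureRegion.create shown above handles for you.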
But TexturePacker doesn't give the spritesheet in rows and columns, as it rotates some images to make more fit on a given page. So for me to even hardcode a value would be wrong, as the sheet is not structured that way.
Wait, do you mean I have to imagine how the sheet would look before TexturePacker gets its hands on it, and work out the rows and columns that way?
I use the free version of TexturePacker so my sprites are not rotated, and I don't know if this information is written inside the generated xml or not. What i meant above is that you have all information about your sprites inside the generated xml (position, width/height..), but you don't have the rows/columns of each sprite. So if you pack an explosion tiled sprite that have 8 columns and 2 rows, it won't be written on your xml. When you create the explosion animated sprite from texture packer, you have to manually indicate this explosion is a tiled sprite of 8 columns, 2 rows...
Yeah, I get you; that's where my problem is. Some of my images are rotated, so they don't fit in a nice rows-and-columns arrangement. It's damn frustrating.
I think I love you man!!! That handles the rotated sprites as it should, thanks for clearing that up for me, I was going mad here! Just 1 last thing: say the animation doesn't fill the last row of the rows and columns numbers that I pass in. Currently it will display blank as it tries to display the empty space from within the sheet. Any way around that?
Actually I got around it by making a change to AndEngine that takes in the number of frames and handles the loop that way. Thanks for the help again, much appreciated!
LaTeX incorrectly displays a URL with Cyrillic characters
Hello everyone
\documentclass[a4paper,12pt]{article}
%%% Работа с русским языком
\usepackage{cmap} % поиск в PDF
\usepackage[T2A]{fontenc} % кодировка
\usepackage[utf8]{inputenc} % кодировка исходного текста
\usepackage[english,russian]{babel} % локализация и переносы
%%% ссылки
\usepackage[unicode, pdftex]{hyperref}
\begin{document} % Конец преамбулы, начало текста.
\begin{thebibliography}{2}
\bibitem{Доступное объяснение ROC и AUC!}
\textbf{Доступное объяснение ROC и AUC!:} \url{https://www.youtube.com/watch?v=4jRBRDbJemM}
\bibitem{Вычисление ROC и AUC.}
\textbf{Вычисление ROC и AUC:} \url{https://dyakonov.org/2017/07/28/auc-roc-площадь-под-кривой-ошибок/}
\end{thebibliography}
\end{document}
This won't work. You will have to percent-encode the URL, or use \href to separate the URL from the printed text. With the Unicode engines you can use ideas from here: https://tex.stackexchange.com/q/355136/2388
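For the pdfTeX route, the non-ASCII part of the URL has to be percent-encoded by hand. The UTF-8 percent-encoding can be generated with any tool; for example, a small Python sketch:

```python
from urllib.parse import quote

# Percent-encode the path; ASCII letters, digits, '-' and '/' stay as-is,
# while Cyrillic characters become UTF-8 percent escapes.
path = "/2017/07/28/auc-roc-площадь-под-кривой-ошибок/"
encoded = quote(path)
url = "https://dyakonov.org" + encoded
```

The resulting `url` string is all ASCII and can then be used with \url in pdfLaTeX.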
I would like to recommend very strongly that you switch to either XeLaTeX or LuaLaTeX to compile your code and that you employ a suitable Opentype font. You may also want to load the xurl package; then specify \urlstyle{same} unless you can also load a mono-spaced font that features both Latin and Cyrillic characters.
Addendum: As @UlrikeFischer has pointed out in a comment, this approach isn't guaranteed to work for all conceivable pdf viewers and web browsers. I can report (happily...) that the approach does appear to work for the Adobe Reader / Safari browser combination.
% !TEX TS-program = xelatex
\documentclass[a4paper,12pt]{article}
\usepackage{fontspec} % 'fontspec' requires LuaLaTeX or XeLaTeX
%% see https://fonts.google.com/noto for "Noto" font families
\setmainfont{Noto Serif} % or some other suitable font
\setsansfont{Noto Sans}
\setmonofont{Noto Mono}
\usepackage[english,russian]{babel} % локализация и переносы
%%% ссылки
\usepackage{xurl} % allow line breaks anywhere in URL strings
%\urlstyle{same} % optional
\usepackage[colorlinks,urlcolor=blue]{hyperref}
\begin{document} % Конец преамбулы, начало текста.
\begin{thebibliography}{2}
\bibitem{Доступное объяснение ROC и AUC!}
\textbf{Доступное объяснение ROC и AUC!:}
\url{https://www.youtube.com/watch?v=4jRBRDbJemM}
\bibitem{Вычисление ROC и AUC.}
\textbf{Вычисление ROC и AUC:}
\url{https://dyakonov.org/2017/07/28/auc-roc-площадь-под-кривой-ошибок/}
\end{thebibliography}
\end{document}
While this will print the link correctly, it will not give a valid link in the pdf. It will work with some pdf viewers, but there is no guarantee.
@UlrikeFischer - thanks. My one and only pdf viewer is Adobe Reader, and my web browser is Safari (I'm a MacOS person...). With this combination, there doesn't seem to be a problem with the link being invalid. I'll readily concede that other combinations of pdf viewer and web browser may not work as well. I'll add a note to this effect to my answer.
it worked for me after I changed the font names:
\usepackage{noto-serif}
\usepackage{noto-sans}
\usepackage{noto-mono}
@DarMaster - Glad to hear that you're able to make use of the proposed solution. :-) About font naming conventions: that's definitely still an area that could stand a lot of consolidation and standardization across computer systems. On my particular system (macOS 12.0.1 "Monterey", MacTeX 2021), Noto Serif and noto-serif are both valid, under both XeLaTeX and LuaLaTeX. On other systems, though, there may well be a need to choose one or the other font-naming convention. As I said, we'd all warmly welcome some more standardization in this field...
git shows dropbox synced files as modified
I sync a directory with an R project among some computers using Dropbox. The computers are synced. On 3 PCs (Ubuntu and Debian) git status shows no changes. On one Debian machine, all R scripts show as modified with git status. When I do git diff, in all the files I see old mode 100644 new mode 100755
What does this mean and how can I fix it?
Is someone changing the permissions on the files? That's what you're seeing in your git diff.
this could be it. One server has execute permission on the files. Any fix?
On the server that has executable permissions, what happens if you try to remove the executable permissions, either with git checkout . or chmod -x file1 file2 ?
Git already does a great job in synchronising files across local copies, why are you using Dropbox on top of it?
The "fix" is make your files consistent, or don't work on all 3 machines together. Why are you using both dropbox and git? Shouldn't one of those be the thing that is used as the source?
Guys, I know, but at the present time I cannot get rid of Dropbox and I have to use git. I have to use all the computers (but not simultaneously). Would git config core.filemode false be the fix?
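For what it's worth, the 100644/100755 in the diff are octal Unix file modes, and the only difference between them is the executable permission bits. A quick Python illustration (not a fix, just showing what git is reacting to):

```python
import stat

OLD, NEW = 0o100644, 0o100755  # the modes shown in the git diff

def exec_bits(mode):
    """Return only the user/group/other executable bits of a file mode."""
    return mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```

`exec_bits(OLD)` is 0 while `exec_bits(NEW)` is 0o111, i.e. the files were marked executable on that machine. Setting git config core.filemode false, as asked above, tells git to ignore the executable bit; the alternative is to chmod -x the affected files.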
Introducing custom line spacing between chapter number and the title
The university requires that the following conditions be met for the thesis. I use the sectsty package to get the required font size and am using the standard report class. How do I get the proper spacing between the top margin and the chapter number, and between the chapter number and the chapter title?
Package titlesec can be of help here. \titleformat and \titlespacing are the commands to read about.
As Johannes_B writes: Use the package titlesec instead of sectsty because titlesec has more advanced features, including the ability to define custom spacing between the elements of a chapter.
You use the command
\titlespacing*{command}{left}{before-sep}{after-sep}[right-sep]
To define the spacing, see section 3.2, page 4 in the manual. For you, the correct values may be:
\titleformat{\chapter}[display]
{\filcenter\normalfont\LARGE\bfseries}
{\chaptertitlename\ \thechapter}{25mm}{\LARGE} %% \LARGE gives 17.28 pt if main
%% font size is 10 or 11 pt, which
%% should be close enough
\titlespacing*{\chapter}{0mm}{75mm}{25mm}
The space between the chapter number and the chapter title is defined by the 25mm argument in the \titleformat command. The space before and after the chapter title is set by \titlespacing. See section 9.2 in the manual to see the definition of all section commands in the standard class. You can then just redefine those to get the result you desire.
You will learn more if you read the manual and experiment and come back and ask more questions (if you search for titlesec, you will probably find most problem already solved).
Thanks for your suggestion, but I am working with LyX, and when I enter the above-mentioned commands in the preamble, it gives me the following error: ! Package titlesec Error: Not allowed in `easy' settings. ! LaTeX Error: Missing \begin{document}.
@Curious I do not know Lyx, but I am pretty sure it is possible to enter LaTeX-code in the preamble. Maybe some LyX-experts may help you.
@Curious Looking into your error message, it seems that you have chosen one of the easy-options described in section 2.1 in the manual. Try loading titlesec without any options and see if it works better.
I didn't specify any option. I just copy paste the code written above to see whether it works or nor.
@Curious Did you remember to load titlesec in your preamble? I do not have LyX installed, and cannot test myself.
I did add \usepackage {titlesec} and then only added the above code.
Let us continue this discussion in chat.
@Curious I rechecked my code, and there were errors in it. See the updated example.
Thanks for the update. When I add the code in the preamble, it gives me an error saying that "can't use \eqno in math mode". This is happening only in the case of a numbered formula. Does not happen when the same formula is inline. This is strange. If I removed the titlesec code, then everything works fine.
@Curious My only guess is that titlesec interferes with another package in your setup. I have no idea why you get such a strange error message.
@Curious Do you have an equation in the chapter title?
Sorry for the late reply. I solved it. The text was defined as a paragraph instead of standard.
---
Multiple-valued analytic functions
Although our definition requires all analytic functions to be single-valued, it is possible to consider such multiple-valued functions as $\sqrt{z}$, $\log z$, or $\arccos z$, provided that they are restricted to a definite region in which it is possible to select a single-valued and analytic branch of the function.
For instance, we may choose for $\Omega$ the complement of the negative real axis $z\le 0$; this set is indeed open and connected. In $\Omega$ one and only one of the values of $\sqrt{z}$ has a positive real part. With this choice $w=\sqrt{z}$ becomes a single-valued function in $\Omega$; let us prove that it is continuous.
(The set $\Omega$ is the open set on which $f$ is defined.)
I don't really understand what all this means. Why is the negative real axis described by $z\le 0$ (shouldn't it be $x\le 0$?) Isn't it always the case that one value of $\sqrt{z}$ has a positive real part, since the two values are negative of each other? And why does $w=\sqrt{z}$ become a single-valued function, when we restrict the domain but not the range?
We do restrict the range here. But "slicing the complex plane along the negative real axis" is needed to allow us to restrict the range in such a way that we get a continuous function.
$z \leqslant 0$ is another way to express $\Re z \leqslant 0 \land \Im z = 0$. $x \leqslant 0$ would be $\Re z \leqslant 0$, which is the closed left half plane. Except for $z \leqslant 0$, one of the two square roots of $z$ has positive real part, and the other has negative real part. The point is that selecting the one with positive real part gives a continuous function on $\Omega$ (selecting the other one would too). You can't select one of the two square roots in such a way that you get a continuous function on all of $\mathbb{C}\setminus\{0\}$.
Why is the negative real axis described by $z\le 0$ (shouldn't it be $x\le 0$?)
Whenever a complex number appears in an inequality, the inequality implicitly says that the number is real. Hence, $z\le 0$ is indeed the negative real axis. The inequality $x\le 0$, on the other hand, would be understood as describing the left half-plane (since it does not say anything about $y$).
Isn't it always the case that one value of $\sqrt{z}$ has a positive real part
True (if "positive" means $\ge 0$). But after removing the negative semiaxis we can say the same, but with "positive" being $>0$. The difference is substantial. Strict inequalities are stable under small perturbations; thus, choosing the root with positive real part gives us a continuous function.
And why does $w=\sqrt{z}$ become a single-valued function, when we restrict the domain but not the range?
The range is determined by the domain. Whenever we talk about the restriction of some function, it's the domain that is being restricted.
There are still two values of square root to choose from, but it's now possible to make the choice continuously, (and, consequently, in a holomorphic way).
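To make the branch selection concrete (a standard polar-coordinate computation, not taken from the thread itself): writing $z$ in polar form on $\Omega$,
$$z = r e^{i\theta},\qquad r>0,\ \theta\in(-\pi,\pi),\qquad \sqrt{z} := \sqrt{r}\,e^{i\theta/2},$$
so that $\theta/2\in(-\pi/2,\pi/2)$ and hence $\Re\sqrt{z}=\sqrt{r}\cos(\theta/2)>0$. Removing the ray $z\le 0$ is exactly what keeps $\theta$ away from $\pm\pi$, where the two branches would collide.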
---
"stray '\302' in program error" when compiling
For some weird reason, the following code doesn't compile. I get a "stray '\302' in program" error around volatile unsigned int encoderPos = 0;, and I have no idea what the issue is. I've been trying to figure this out for over 40 minutes, and nothing works. It doesn't make any sense.
#include <U8g2lib.h>
#include <SPI.h>
//Pin definitions:
const int control_PWM = A3; //PWM output for the delay
const int btn_1 = 1; //Button for mode 1
const int btn_2 = 4; //Button for mode 2
const int btn_3 = 5; //Button for mode 3
const int r_A = 2; //Rotary encoder A's data
const int r_B = 3; //Rotary encoder A's data
const int r_SW = 0; //Rotary encoder's button data
const int oled_CLK = 9; //SPI cloack
const int oled_MOSI = 8; //MOSI pin
const int oled_CS = 7; //Chip Select pin
const int oled_DC = 6; //OLED's D/C pin
U8G2_SH1106_128X64_NONAME_F_4W_HW_SPI u8g2(U8G2_R0, /* cs=*/ 10, /* dc=*/ 9, /* reset=*/ 8);
int mode = 1; //1: RGB, 2: HSL, 3: Distance control
int value_selection = 1; //Actual value selectrion
int value1 = 0; //red in mode 1; tint in mode 2
int value2 = 0; ////green in mode 1; saturation in mode 2
int value3 = 0; //blue in mode 1; luminosity in mode 2
volatile unsigned int encoderPos = 0; // rotary encoder's current position
unsigned int lastReportedPos = 1; // rotary encoder's previous position
static boolean rotating=false; // is the encoder activity status
// interrupter variables
boolean A_set = false;
boolean B_set = false;
boolean A_change = false;
boolean B_change= false;
void setup() {
}
void loop() {
}
What IDE version?
Could you please edit your question to include the exact error message.
@canadiancyborg: Your edit obscured the problem.
@IgnacioVazquez-Abrams I just translated the french comments to english, how did it change anything?
@canadiancyborg: You removed the non-ASCII characters in the source, which were the cause of the error.
@IgnacioVazquez-Abrams wait, so non-ASCII characters in comments also affect the program?
0302 is 0xc2. Somewhere in your source you have one or more non-breaking spaces (0xa0) encoded in UTF-8 (0xc2 0xa0). Use od or a similar tool to find them, and then replace them with normal spaces. Since you have non-ASCII Latin-1 characters in your source, those characters are encoded as two bytes with the first being 0xc2 or 0xc3. Remove all non-ASCII characters before proceeding.
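To locate the offending bytes without od, a short script can scan the source file's bytes directly (a generic sketch, not tied to any particular file):

```python
def find_non_ascii(data: bytes):
    """Return (offset, byte_value) pairs for every byte outside 7-bit ASCII.
    GCC reports such bytes in octal, e.g. 0xC2 shows up as "stray '\\302'"."""
    return [(i, b) for i, b in enumerate(data) if b > 0x7F]

# A line containing a UTF-8 no-break space (0xC2 0xA0) after the semicolon:
src = "int x = 0;\u00a0// oops".encode("utf-8")
hits = find_non_ascii(src)
print(hits)                # [(10, 194), (11, 160)] -> the bytes 0xC2 0xA0
print(oct(194), oct(160))  # 0o302 0o240, matching the compiler's \302 \240
```

Reading a real sketch would just be `find_non_ascii(open("sketch.ino", "rb").read())` (the filename is illustrative).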
You can probably just copy the code from your post and paste it over the original code in the IDE, as I can't detect any weird characters in the text above (I think browsers replace NBSPs with regular spaces). Invisible characters can be quite a pain for some compilers.
One cause of the '\302' error is copying and pasting code from a word processor: non-ASCII characters that look like spaces get copied into your code. Go through each identified line and remove any extra spaces at the beginning and end of the line. Then (Arduino IDE) go to Tools, Auto Format. At least, this cleared up the problem for me.
---
is there any ResponseCacheAttribute for WebAPI Core?
I'm currently working with WebAPI 2 and considering to upgrade to ASP.Net Core.
As I've reached the HTTP caching topic, I've noticed that ASP.NET Core only has a ResponseCacheAttribute (which is an MVC attribute) and no parallel attribute for Web API.
My questions are:
a. Due to the shift from ApiController and MvcController to a unified Controller, will the MVC attribute work on WebAPI actions?
b. if not, is there an implemented alternative for WebApi?
Answers to your questions:
a. There's no such thing as MVC and WebAPI anymore. As you noted, the products have been unified into just MVC. Actually, the team usually just refer to everything as just "ASP.NET Core", since it's mostly different middleware composed together anyway. This also means that there's no such thing as a "WebAPI action". It's all just MVC actions. Which again means that yes, ResponseCacheAttribute will work.
b. See above. BTW, there's also a response caching middleware being worked on for v1.1 of ASP.NET Core.
---
I want to find out which IDs have both visit 2 and visit 4, e.g. My dataset looks like the following and the excel I have available is from 2015
I want to find out which IDs have both visit 2 and visit 4. My dataset looks like the following, and the Excel version I have available is from 2015. This is an example of how my dataset looks:
| ID  | Visit | unique id | Variable |
|-----|-------|-----------|----------|
| 101 | 2     | 101-v2    | 1234     |
| 101 | 3     | 101-v3    | 1234     |
| 102 | 2     | 102-v2    | 1234     |
| 102 | 4     | 102-v4    | 12234    |
| 103 | 2     | 103-v2    | 12234    |
| 103 | 3     | 103-v3    | 1234     |
| 103 | 4     | 103-v4    | 12234    |
I tried the following:
=IF(AND(COUNTIFS(A:A, 101, B:B,2)\>0,COUNTAIFS(A:A,"ID",B:B,4)\>0) "yes","no")
The formula didn't work at all...
After this I want to remove all the IDs that did not have paired values of visit 2 and 4.
What is the formula returning? #error, #value,... ? I'm also not sure what \ is doing in your formula.
COUNTIFS as entered for the whole range B:B would give a single value, not a row by row count. Also, removing the backslash (\\) may get your formula working.
It may be easier if you use a table, I think:
Overview of Excel tables - Microsoft Support
Insert a table for your data (select the range and choose Insert > Table)
Use filter button to add your conditions
Copy the data after filtering
=AND(1-ISNA(MATCH(A2&"-v"&{2,4},$C$2:$C$8,0)))
Or using table references: =AND(1-ISNA(MATCH([@ID]&"-v"&{2,4},[unique id],0)))
(Unsure if it requires being entered with ctrl+shift+enter)
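For comparison, the same pairing check expressed outside Excel: a Python sketch over hypothetical rows mirroring the table above.

```python
ROWS = [  # (ID, Visit) pairs from the sample data
    ("101", 2), ("101", 3),
    ("102", 2), ("102", 4),
    ("103", 2), ("103", 3), ("103", 4),
]

def ids_with_visits(rows, needed=(2, 4)):
    """Return the sorted IDs whose set of visits contains all `needed` visits."""
    visits = {}
    for id_, visit in rows:
        visits.setdefault(id_, set()).add(visit)
    return sorted(i for i, vs in visits.items() if set(needed) <= vs)

print(ids_with_visits(ROWS))  # ['102', '103'] (101 lacks visit 4)
```

Keeping only these IDs is exactly the "remove unpaired IDs" step from the question.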
---
Insert many JSON files inside one document in MongoDB
I have a question:
How can I insert many JSON files inside one collection document?
I have a Ruby script connected to MongoDB which generates JSON files for each product ID.
In Mongo I would like a structure like this:
Id(document's name) : {
many JSON objects for the same ID
}
How can I get this structure in Ruby?
DB's name is "test_db" and collection's name is "test_coll"
the code is:
Client = Mongo::Client.new(['<IP_ADDRESS>:27017'], :database => 'paperino')
json_array ={}
my_hash =Hash.new{}
my_array = Array.new
coll = Client[:ID_product]
# ... add sensitive data here to populate the JSON files ...
my_hash = JSON.parse(json_array.to_json)
my_array = my_hash
coll.update_one({id: single_id_product}, {"$push" => {json: my_array}}, {upsert: true})
The Ruby script generates several JSON files for one id_product.
I want to save all the JSON files for one product inside one document.
I tried the classic insert_many and insert_one but they don't work, because they generate one document per JSON.
I tried the $push method to append the whole JSON array inside one document, but that doesn't work either.
How can I do this?
Thanks
Can you show a more actual structure than this...sketch?
The actual structure of the JSON for each ID_product, which is generated by the Ruby script, is:
{
number:10
price:9
}
{
number:9
price:9
}
{
number:10
price:9
}
{
number:10
price: 10
}
and this structure is updated over time for that ID_product.
In MongoDB I want a structure like this:
{
ID_product{
number:10
price:9
}
{
numero:9
price:9
}...
In other words, I want to insert several JSON files into a single document identified by ID_product.
Please [edit] your question and include the Ruby code that you are using to create the MongoDB document. You don't have to include any sensitive data but without code, we can hardly tell what's wrong.
I updated the question, I hope you can help
Documents in MongoDB are composed of field and value pairs. You typically have an _id field with an ObjectID value to identify the document and additional fields holding the document's data, e.g.:
doc = {
_id: BSON::ObjectId('650aeca18772677bb6613a89'),
name: "foo"
}
The above document can then be inserted into a collection via insert_one:
coll.insert_one(doc)
The same works for embedded data, e.g. an array called data:
doc = {
_id: BSON::ObjectId('650aee198772677bb6613a8a'),
data: [
{ name: 'foo' },
{ name: 'bar' }
]
}
coll.insert_one(doc)
To append a new value to an existing array there's the $push operator:
coll.update_one(
{ _id: BSON::ObjectId('650aee198772677bb6613a8a') },
{ '$push': { data: { name: 'baz' } } }
)
To append multiple values to an array, you can use $push with the $each modifier:
coll.update_one(
{ _id: BSON::ObjectId('650aee198772677bb6613a8a') },
{ '$push': { data: { '$each': [ { name: 'qux' }, { name: 'quux' } ] } } }
)
The document after the above changes:
coll.find(_id: BSON::ObjectId('650aee198772677bb6613a8a')).first
#=> {
# "_id"=>BSON::ObjectId('650aee198772677bb6613a8a'),
# "data"=>[
# {"name"=>"foo"},
# {"name"=>"bar"},
# {"name"=>"baz"},
# {"name"=>"qux"},
# {"name"=>"quux"}
# ]
# }
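The effect of $push with $each can be emulated in plain Python to see the resulting document shape (no MongoDB needed; the field names simply mirror the example above):

```python
def push_each(doc, field, values):
    """Emulate MongoDB's {'$push': {field: {'$each': values}}}:
    append every element of `values` to the array stored at `field`."""
    doc.setdefault(field, []).extend(values)
    return doc

doc = {"_id": "650aee198772677bb6613a8a",
       "data": [{"name": "foo"}, {"name": "bar"}]}
push_each(doc, "data", [{"name": "baz"}])                    # $push, one value
push_each(doc, "data", [{"name": "qux"}, {"name": "quux"}])  # $push with $each
print([d["name"] for d in doc["data"]])
# ['foo', 'bar', 'baz', 'qux', 'quux'] (one document, one growing array)
```

This is the key point of the answer: everything lands in a single document whose array grows, rather than in one document per JSON object.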
OK, I tried to use insert_one(json_array), but Mongo stores them in different documents; it makes one document for each element of the JSON array.
I don't understand why it doesn't work, I tried every possible command but Mongo doesn't store this JSON array in one document.
@JACKoJONS you seem to be passing an array to insert_one. The method is supposed to be called with a hash holding the data for a single document. See my example above: doc is a hash, not an array.
OK thanks, but when I pass "my_hash", which is a hash of hashes, I have the same problem: the first line of the hash is saved in one document, and the same for the other lines of the hash.
And I tried insert_one(my_hash)
How does my_hash look like? (don't paste a huge hash in the comments, update your question instead). Also, did you try to run my code? Does it work as expected? If so, you can use it as a starting point for your code.
I can't share sensitive data, but the "my_hash" structure is like: {"production" => {"value" => "0", "price" => "0"}, "type" => "1"}, {"production1" => {"value" => "1", "price" => "1"}, "type" => "2"} etc., for thousands of objects
But when I try to do insert_one(my_hash), Mongo saves one JSON object per document; for example, in this case Mongo stores document1: {{"production" => {"value" => "0", "price" => "0"}, "type" => "1"}}, document2: {{"production1" => {"value" => "1", "price" => "1"}, "type" => "2"}}
and doesn't regroup all these objects inside one doc
Okay, and does my code above work for you? In particular the last example with '$push': { data: { '$each': [...] } }
I tried the "embedded data, e.g. an array called data" example, but the terminal tells me "BSON::InvalidKey: hash instances are not allowed as keys in a BSON document"
I just wrote all the doc content inside insert_one
and now I'm trying '$push': { data: { '$each': [...] } }
Regarding the error: double check that you are running the very same code. Such error should not occur as I didn't use hashes as keys in my examples – the only keys are _id and data.
The first example works, but when I run the second example the db can't be created
but the script works
Now that you have a working script that can push multiple elements to an embedded document, it should be relatively easy to adapt it to your situation. Replace data: with the field name of your document's array and replace [...] with the actual array you want to append.
Hi Stefan, an update on this work: with the second example the script creates the db, but when I look, MongoDB stores only the second name, "bar". So strange; I hope you can help.
And this also happens when I replace the variables with my own variable and array; when I did this, MongoDB stored only the last JSON file. I tried to change the position of this code, but when I run the script there is a "duplicateErrorCollection"
@JACKoJONS please edit your question and add the exact script you are running in a way that I can copy and paste it. Also include the document before and after the script.
OK Stefan, I tried, changed a part of the code and everything works fine, thank you so much. But now another problem occurs: the document exceeds the maximum allowed BSON object size after serialization
Any suggestion for that?
@JACKoJONS if this answer solved the problem in your question, you should accept it. With enough reputation, you can even upvote useful answers. Regarding your other problem: post a new question for that.
---
Rails Query - how to compare the months of two dates?
I know Rails supports BETWEEN queries.
[Example]
Performance (have a start_date, end_date)
I want to bring in the performances by month.
First performance (2017.07.01~ 2017.08.31)
Second performance (2017.08.08~ 2017.08.20)
Third performace (2017.08.09~ 2018.08.09)
I just want to get the 2017-08 performances.
Rails Query - how to compare the months of two dates?
I think
@performances = Performance.where("? BETWEEN start_date(* change month?) AND end_date(* change month?)", @filter_date(* "2017-08 DATE").month)
UPDATED
beginning_of_month = @filter_date.beginning_of_month
end_of_month = @filter_date.end_of_month
@performances = @performances.where("(start_date BETWEEN ? AND ?) OR (end_date BETWEEN ? AND ?) OR (start_date < ? AND end_date > ?)", beginning_of_month, end_of_month, beginning_of_month, end_of_month, beginning_of_month, end_of_month)
Use beginning_of_month and end_of_month
I know the beginning_of_month and end_of_month methods. Are you saying "2017-08-01 .. 2017-08-31 between start date and end date"? That is multiple conditions :(
greater than beginning_of_month and less than end_of_month
I do not know Rails Query, but how about this way?
(start_date between 2017.08.01 and 2017.08.31) or
(end_date between 2017.08.01 and 2017.08.31) or
(start_date < 2017.08.01 and end_date > 2017.08.31)
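The three OR branches of that last answer amount to a standard interval-overlap test. A Python sketch with the thread's example dates (the Rails where(...) in the UPDATED section encodes the same three conditions):

```python
from datetime import date

def overlaps_month(start, end, month_first, month_last):
    """True iff [start, end] intersects the month: the performance starts
    in the month, ends in it, or spans it entirely (the three OR branches)."""
    return (month_first <= start <= month_last
            or month_first <= end <= month_last
            or (start < month_first and end > month_last))

first, last = date(2017, 8, 1), date(2017, 8, 31)
perfs = [(date(2017, 7, 1), date(2017, 8, 31)),   # first performance
         (date(2017, 8, 8), date(2017, 8, 20)),   # second
         (date(2017, 8, 9), date(2018, 8, 9))]    # third
print([overlaps_month(s, e, first, last) for s, e in perfs])  # [True, True, True]
```

Note that the three branches collapse to the single condition start <= month_last AND end >= month_first, which would also make for a shorter SQL WHERE clause.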
---
Definition of continuous function with $\delta-\epsilon$
The definition says: Let $(X,d_{1})$ and $(Y,d_{2})$ be metric spaces. A map $f:X\rightarrow Y$ is called continuous if for every $x\in X$ and $\epsilon>0$ there exists a $\delta>0$ such that:
$d_{1}(x,y)<\delta\Rightarrow d_{2}(f(x),f(y))<\epsilon$
does it mean that $x,y\in X$ and $f(x),f(y)\in Y$?
Yes, that's exactly what that means.
@DMcMor ok, so the use of elements $x,y$ was confusing, thanks!
Your wording is a bit unclear but here's a rephrasing of the definition that may be a bit more clear:
Fix $x\in X$. We say that $f:X\to Y$ is continuous at $x$ if for every $\epsilon>0$, there exists $\delta>0$ such that the following condition holds: if $y\in X$ such that $d_1(x,y)<\delta$, then $d_2(f(x),f(y))<\epsilon$.
We say $f$ is continuous if it is continuous at every $x\in X$.
If I've interpreted your question wrong, let me know and I can try to update my answer.
$\;\;\;\;f $ is continuous on $X $
$$\iff $$
$(\forall x\in X )\;\;\;f $ is continuous at $x $
$$\iff $$
$$(\forall x\in X) \;\;(\forall \epsilon>0)\;\;(\exists \delta>0 )\;:\;\color {green}{(\forall y\in X )}$$
$$(d_1 (x,y)<\delta\implies d_2 (f (x),f (y))<\epsilon).$$
Of course $f (x) $ and $f (y) $ are in $Y $ since $f :X\to Y $.
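A concrete instance of this quantifier string (a standard illustration, not from the thread itself): take $X=Y=\mathbb{R}$ with the usual metric $d_1(x,y)=d_2(x,y)=|x-y|$ and $f(x)=2x+1$. Then
$$d_2(f(x),f(y)) = |(2x+1)-(2y+1)| = 2|x-y| < \epsilon \quad\text{whenever}\quad |x-y| < \delta := \frac{\epsilon}{2},$$
so for every $x$ and every $\epsilon>0$ the choice $\delta=\epsilon/2$ witnesses continuity at $x$; here $\delta$ does not even depend on $x$.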
---
HAProxy mail forward rules
I use HAProxy as mail frontend (IMAPS) in SSL termination mode (mail clients configured to imap server <IP_ADDRESS> (haproxy host)).
Config:
frontend ft_imaps
mode tcp
bind <IP_ADDRESS>:993 ssl crt /etc/pki/tls/certs/cert.pem
default_backend bk_imaps
log global
timeout client 1m
option tcplog
backend bk_imaps
mode tcp
log global
option tcplog
timeout server 1m
timeout connect 30s
server SRV1 <IP_ADDRESS>:993 check maxconn 20 ssl verify none
All ok if all virtual domains served by server <IP_ADDRESS>. But if I want route mail traffic to virtual domain 'domain1.local' to <IP_ADDRESS> and 'domain2.local' - to another (for example, <IP_ADDRESS>), how can I filter this on HAProxy?
As I see it, it is impossible, because HAProxy can't analyze which virtual domain each mail connection is for. From configuration.txt:
mode tcp is for SSL, SSH, SMTP. And in tcp mode "no layer 7 examination will be performed". So SMTP headers are not accessible in tcp mode.
So, if I need to use one entry point for mail traffic, should I try an acl based on src (client IP address), or make several frontends (:1994 -> <IP_ADDRESS>, :1995 -> <IP_ADDRESS>) and set up the client mail software accordingly?
'mode http' is not suitable for this?
Does this answer your question? haproxy multihost with ssl acl
The upvoted post (not the accepted one) shows how to handle multiple backends with SSL for different domains. However, it uses SNI, which is AFAIK not implemented in the SMTP protocol. I don't think this is possible with the SMTP protocol due to this, but I'm not an expert in this area and could be wrong.
Interesting idea about SNI!
If the email client is configured with the HAProxy address (and only the login points to the email domain), SNI will not work, because the domain (which SNI will see) is the HAProxy address?
In order to use SNI, HAProxy should receive mail traffic, but this traffic should be addressed not to the HAProxy host, but to certain services "behind" HAProxy? https://www.haproxy.com/blog/enhanced-ssl-load-balancing-with-server-name-indication-sni-tls-extension/
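The per-domain fallback mentioned above (one frontend port per mail domain, with clients configured accordingly) could be sketched roughly like this; ports, certificate paths, and backend names are illustrative, not a tested configuration:

```
frontend ft_imaps_domain1
    mode tcp
    bind <IP_ADDRESS>:1994 ssl crt /etc/pki/tls/certs/domain1.pem
    default_backend bk_imaps_domain1

frontend ft_imaps_domain2
    mode tcp
    bind <IP_ADDRESS>:1995 ssl crt /etc/pki/tls/certs/domain2.pem
    default_backend bk_imaps_domain2

backend bk_imaps_domain1
    mode tcp
    server SRV1 <IP_ADDRESS>:993 check ssl verify none

backend bk_imaps_domain2
    mode tcp
    server SRV2 <IP_ADDRESS>:993 check ssl verify none
```

The trade-off is that every mail client then has to be configured with the port belonging to its own domain.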
---
Making the UI buttons function with a keyboard cursor
Here's what it looks like so far
I've been trying to make a character select screen, and I'm getting close to the final result. The problem is that it doesn't recognize my cursors, which are sprites and GameObjects at the same time.
For now I just want my cursor 1 to move through the buttons of the UI elements that Unity provides.
This is my script
public class CharacterSelect : MonoBehaviour
{
public Button[] WindowsChar = new Button[9];
public Vector3[] Buttondir = new Vector3[9];
public Sprite TextPlayr1, TextPlayr2;
public Canvas orderImage;
public Transform[] posButton, tempos;
public GameObject Cursor1;
public GameObject Cursor2;
int i,v;
// Use this for initialization
void Start()
{
orderImage.sortingOrder = -1; //*This is my intent to make it display the cursors in the canvas*//
Cursor1.GetComponent<Canvas>().sortingOrder = -2;
Cursor2.GetComponent<Canvas>().sortingOrder = -2;
foreach (Button element in WindowsChar) //*For each element of the canvas of the class button, activate it*//
{
Debug.Log("Estoy entrando a esta verga"); //it enters here successfully//
WindowsChar[i].gameObject.SetActive(true);
}
Cursor1.GetComponent<SpriteRenderer>().sprite = TextPlayr1; //*Initiates the sprites*//
Cursor2.GetComponent<SpriteRenderer>().sprite = TextPlayr2;
Cursor1.transform.position = WindowsChar[0].transform.position;//* And then they are put in a default position *//
Cursor2.transform.position = WindowsChar[1].transform.position;
Cursor1.gameObject.SetActive(true);
Cursor2.gameObject.SetActive(true);
}
// Update is called once per frame
void Update()
{
WindowsChar.GetLength(i);
foreach (Button element in WindowsChar) //*For each element of type button in windowschar do this with the condition above*//
{
if (Input.GetKey(KeyCode.RightArrow) && element.FindSelectable(Buttondir[i]) & Cursor1 & i<=3)
//*If the right arrow key is pressed then move the cursor, also fulfilling the requisite that it*//
//finds the selectable button next to it, increments i and moves the cursor, saving that in a tempPos first//
{
Debug.Log("Oh, tambien estoy entrando a esta cosa");
i = i + 1; //*Increment the index to i+1*//
WindowsChar[i].transform.position = posButton[i].position; //*now store the position of i in a temporary variable*//
posButton[i].position = tempos[i].position;
Cursor1.transform.position = tempos[i].position; //*assigning it to the cursor and moving it to the required button*//
}
}
}
}
First you should add GameObject -> UI -> Event System to your scene, and add an Event Trigger component to it
Thank you very much for answering. So then I should put the EventSystem on player 1, and then add a public void method that does the work in Update, isn't it, so that the cursor can move.
yes but you can put event system directly in the scene it doesn't have to be a child of player
Great! I will see if that works then.
---
Basic math question about factorials
What is the mathematical term for this? It's something like a factorial.
I honestly don't even know how to phrase this question, so let me demonstrate it with an example:
If the count is 1 then the formula is x = 1y
If the count is 2 then the formula is x = 1y + 2y
If the count is 3 then the formula is x = 1y + 2y + 3y
If the count is 4 then the formula is x = 1y + 2y + 3y + 4y
etc..
What is this called? And bonus question: is there a way to do this in Excel?
Could you clarify what the "count" is? And could you clarify what the variables $x,y$ indicate?
I think this question can be interpreted as the recursive definition of $x$ based on the expression of $x$ at a lower count (number of terms) and not about factorial. This question is asking, "in mathematics, the ____ of a non-negative integer n, is the sum of all positive integers less than or equal to n."
@S.C.B. the "count" is the number of terms in the final formula. X and Y don't represent anything in particular, they are just there to illustrate the concept. In my specific usage, Y represents a combination of velocity and time and x represents the total distance.
Just for fun: your x=1y, x=1y+2y, etc. looks like a triangle :-)
Factor out all the $y$'s and you get
$$x_n=y(1+2+3+\dots+n)=yT_n$$
where $T_n$ is a triangle number. We know that $T_n=\frac{n(n+1)}2$, thus,
$$x_n=\frac{yn(n+1)}2$$
Is there a name for this process?
Follow the link on triangle numbers and you will find all the answers!
What if your range isn't 1 through n but rather (n-5) through n?
You take one triangle number and subtract a smaller triangle number. What do you get?
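Numerically, both the full sum and the windowed variant from the follow-up question are one-liners (a sketch; the window width 6 corresponds to the range (n-5)..n):

```python
def triangle(n):
    """T_n = n(n+1)/2, i.e. the sum 1 + 2 + ... + n."""
    return n * (n + 1) // 2

def x(n, y):
    """x_n = 1y + 2y + ... + ny = y * T_n."""
    return y * triangle(n)

def x_window(n, y, width=6):
    """Sum of (n-width+1)y + ... + ny: one triangle number minus a smaller one."""
    return y * (triangle(n) - triangle(n - width))

print(x(4, 1))          # 10 = 1 + 2 + 3 + 4
print(x_window(10, 1))  # 45 = 5 + 6 + 7 + 8 + 9 + 10
```

For the bonus question: with y in cell A1 and n in cell A2 (hypothetical references), the Excel formula for x_n would be =A1*A2*(A2+1)/2.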
---
Asynctask causes exception 'Can't create handler inside thread that has not called Looper.prepare()'
I am working on an android application.In my activity I am using the following code.
LocationResult locationResult = new LocationResult(){
@Override
public void gotLocation(Location location){
//Got the location!
Drawable marker = getResources().getDrawable(
R.drawable.currentlocationmarker);//android.R.drawable.btn_star_big_on
int markerWidth = marker.getIntrinsicWidth();
int markerHeight = marker.getIntrinsicHeight();
marker.setBounds(0, markerHeight, markerWidth, 0);
MyItemizedOverlay myItemizedOverlay = new MyItemizedOverlay(marker);
currentmarkerPoint = new GeoPoint((int) (location.getLatitude() * 1E6),
(int) (location.getLongitude() * 1E6));
currLocation = location;
mBlippcoordinate = currentmarkerPoint;
mBlippLocation = location;
myItemizedOverlay.addItem(currentmarkerPoint, "", "");
mBlippmapview.getOverlays().add(myItemizedOverlay);
animateToCurrentLocation(currentmarkerPoint);
}
};
MyLocation myLocation = new MyLocation();
myLocation.getLocation(this, locationResult);
I am using the above code to find the location from GPS or network. The animateToCurrentLocation(currentmarkerPoint); method contains an AsyncTask, so I am getting
java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare()
Thanks in advance.
I bet you have some sort of dialog or toast in your asynctask :) You need to rearrange your code so that you don't do network operations from the UI
You get this error when you are trying to create and run an AsyncTask from a thread that has no Looper attached to it. The AsyncTasks needs a Looper to post its "task completed" message back on the thread that started the AsyncTask.
Now your real question: how to get a thread with a Looper? It turns out you already have one: the main thread. As the documentation also states, you should create and .execute() your AsyncTask from the main thread. The doInBackground() will then run on a worker thread (from the AsyncTask threadpool) and you can access the network there. The onPostExecute() will then be run on your main thread, after it has been posted there via the main thread's Handler/Looper.
@baske... I understood something, but I have no idea how to fix my current problem. Right now my AsyncTask is called from a thread. How can I make my AsyncTask work from the main thread?
If I understand you correctly you are now creating and executing your AsyncTask from a worker thread and you want to know how you can do that instead on the main thread? Well the cheesiest way I can think of is using the Activity's runOnUiThread() method.
Your current code (mockup code, not actually working ;-):
...
task = new AsyncTask();
task.execute()
...
Your new code:
...
runOnUiThread(new Runnable() {
    @Override
    public void run() {
        task = new AsyncTask();
        task.execute();
    }
});
...
Sorry for the crappy markup.. new to StackOverflow, so I still have to get used to all the stuff the forum has to offer. Hope you can understand it. What it basically does is wrap your existing code in a Runnable and post this Runnable on the main thread's handler.
---
Which is the best OS for ForgeRock products (i.e. AM, IDM, DS and IG) to be used in Docker container?
Currently Forgerock is providing different OS for different products for docker container (below list):
AM(v6.5.2) available on Debian-9
IDM(v6.5.0) is available on CentOS 7.7.1908
DS(v6.5.2) is available on Ubuntu 18.04.2
IG(v6.5.1) is available On Alpine 3.9.4
We want to use single flavor of OS for these deployments on ForgeRock products.
So my question is: which OS should I go with for preparing the Docker image?
Probably the best is Centos/Red Hat since there is more documentation of deployments on this environment.
To be honest, in terms of the ForgeRock stack (apart from the WebAgents) - it doesn't really matter because they all run within a JVM. What JDK you choose will have more bearing, but in terms of distribution, it's up to you and whatever you feel aligns with your deployment best.
---
Best way to ensure latest F# FAKE?
What is the best way to ensure that all developers and the build server are using the latest version of FAKE?
If a build.cmd like the one from FSharp.Data is used, the developer will not be on the latest until they delete FAKE from the packages folder or just delete the whole packages folder.
If you add FAKE as a dependency in .nuget\packages.config, your build.fsx script must include the version information and be updated each time you change versions. You will not automatically get the latest version.
With NuGet 2.8.1 you can remove the "if not exists" parts - NuGet will check (very slowly) if the latest FAKE is installed.
---
Entity Framework change tracking
Is there any way of tracking EF entities changes between contexts for ASP.NET applications?
Self-tracking entities don't work well for me, as they are primarily designed for WCF. And all approaches for tracking changes for POCOs I have found are oriented toward a shared context.
No, you have to track changes yourself, or you have to use STEs and store them in ViewState/Session between postbacks.
Edit: If you work with a simple entity you can use some methods to track changes for you, but first you have to load the entity from the database (= additional database query). Then you can use, for example, the ApplyCurrentValues method of the ObjectContext instance. This approach doesn't work for updating complex object graphs.
I suggest using the Change Tracking built into SQL Server (http://technet.microsoft.com/en-us/library/cc280462(v=sql.105).aspx). This framework allows you to identify whether a given row has changed, or even if a column has changed. You do need to somehow manage the question of changed since when. The Change Tracking in SQL server is accomplished by passing the table and desired 'since when' revision number into the ChangeTable function (http://technet.microsoft.com/en-us/library/bb934145.aspx). You can use the results of this function to determine when a table was last changed, and the primary key of the table. As far as I know you can only use this with tables that have a primary key defined. You can then create a table valued function or a view that returns the records that have changed. Both of these are easy to consume using the Entity Framework.
(http://msdn.microsoft.com/en-us/data/hh859577.aspx mapping to a table valued function) Link to how to use a table valued function from Entity.
---
SQL - invalid column name
I have the following query:
SELECT o.outcode AS lead_postcode, v.outcode AS venue_postcode, 6 * o.lat AS distance
FROM venue_postcodes v, uk_postcodes o
WHERE o.outcode = 'CF3'
GROUP BY v.outcode
HAVING SUM(distance)>100
ORDER BY distance
This stopped working when I added the part GROUP BY v.outcode HAVING SUM(distance)>100
It says Server was unable to process request. ---> Invalid column name 'distance'.
Any ideas why?
The "alias" distance only just defined within the query as "6*o.lat" can not yet be used within the query but only afterwards.
alternative solution is
SELECT i.*
FROM (
SELECT o.outcode AS lead_postcode, v.outcode AS venue_postcode, 6 * o.lat AS distance
FROM venue_postcodes v, uk_postcodes o
WHERE o.outcode = 'CF3'
) i
GROUP BY i.venue_postcode
HAVING SUM(i.distance)>100 ORDER BY i.distance
distance is a column alias and you can't refer to a column alias in a HAVING clause. But you can use aliases in an ORDER BY.
Try changing to:
HAVING SUM(6 * o.lat)>100
ORDER BY distance
I believe you need to use SUM(6* o.lat), because not every database server can use aliased columns in having clause (It has to do with query planning, parsing etc.). Depends on what DB you use.
Use ORDER BY 6 * o.lat. You cannot use the clause AS for an ORDER BY
Not totally true... see dogbane's comment.
| common-pile/stackexchange_filtered |
Rails, Has and belongs to many, match all conditions
I have two models Article and Category
class Article < ApplicationRecord
has_and_belongs_to_many :categories
end
I want to get Articles that have category 1 AND category 2 associated.
Article.joins(:categories).where(categories: {id: [1,2]} )
The code above won't do it, because an Article with category 1 OR category 2 associated will be returned, and that's not the goal. Both must match.
I faced exact same problem. Solution should be Article.joins(:categories).where(categories: { id: 1 }).where(categories: { id: 2 }) but it's not working.
The way to do it is to join the same table multiple times. Here is an untested class method on Article:
def self.must_have_categories(category_ids)
scope = self
category_ids.each do |category_id|
scope = scope.joins("INNER JOIN articles_categories as ac#{category_id} ON articles.id = ac#{category_id}.article_id").
where("ac#{category_id}.category_id = ?", category_id)
end
scope
end
You can query only those articles of the first category, which are also the articles of the second category.
It's going to be something like this:
Article.joins(:categories)
.where(categories: { id: 1 })
.where(id: Article.joins(:categories).where(categories: { id: 2 }))
Note, that it can be:
Category.find(1).articles.where(id: Category.find(2).articles)
but it makes additional requests and requires additional attention to the cases when category can't be found.
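Outside of ActiveRecord, the "must have all categories" requirement is just a subset test; here is a minimal plain-Ruby sketch of that idea (hypothetical in-memory data, not the actual ActiveRecord queries above):

```ruby
# Hypothetical in-memory illustration of "articles must have ALL of the
# given categories" -- the subset test that the multiple-join / nested
# WHERE queries above express in SQL. article_categories maps an
# article id to the array of category ids attached to it.
def must_have_categories(article_categories, required_ids)
  article_categories.select { |_id, cat_ids| (required_ids - cat_ids).empty? }
                    .keys
end

# must_have_categories({ 1 => [1, 2, 3], 2 => [1], 3 => [2, 1] }, [1, 2])
# selects articles 1 and 3: both carry categories 1 and 2.
```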
| common-pile/stackexchange_filtered |
Could Magneto control Wolverine?
Whether in comics, films, or a new animated tv series, Magneto has always been shown to be exceptionally powerful, whether or not it's lifting a submarine or ripping Adamantium from bone. However, I have always wondered: could Magneto control Wolverine? I know that, on several occasions, he has been known to halt or throw Wolverine, but I have never seen him take control of the berserker X-Man, even though Wolverine's entire skeletal structure is coated in the indestructible metal. What I want to know is why. Surely somebody would have some advice. NOTE: Any information from any continuity is acceptable
Do you mean simply to control Wolverine's whole body in the way Magneto controls his claws? Or do you mean control as in "animate convincingly", e.g. to make him move around and walk like a natural person?
Yes. In the X-men films, Magneto controls wolverine's upper body on several occasions.
Also in the comics he tosses Logan around like a doll often and in the 1990's he ripped all of his Adamantium from Wolvie's body.
| common-pile/stackexchange_filtered |
PendingIntent for Incoming call not getting launched for Twilio Android client
In the sample application for the Twilio Android client, I am not able to receive incoming calls. I am using Node.js for the server. The request is being routed between my server and Twilio, and I can even see the call status in the Twilio call logs. But when I call a Twilio client, nothing is seen on the Android device. I think there is a problem with the PendingIntent getting launched. Can anybody help me out?
//MonkeyPhone.java
public void onInitialized() {
Log.d(TAG, "Twilio SDK is ready");
try {
capabilityToken = HttpHelper.httpGet("http://My server/token?client=jenny");
device = Twilio.createDevice(capabilityToken, this);
Intent intent = new Intent(context, HelloMonkeyActivity.class);
PendingIntent pendingIntent = PendingIntent.getActivity(context, 0,
intent, PendingIntent.FLAG_UPDATE_CURRENT);
device.setIncomingIntent(pendingIntent);
} catch (Exception e) {
Log.e(TAG,
"Failed to obtain capability token: "
+ e.getLocalizedMessage());
    }
}
//HelloMonkeyActivity.java
@Override
protected void onNewIntent(Intent intent) {
super.onNewIntent(intent);
setIntent(intent);
}
@Override
protected void onResume() {
super.onResume();
Intent intent=getIntent();
Device device=intent.getParcelableExtra(Device.EXTRA_DEVICE);
Connection connection= intent.getParcelableExtra(Device.EXTRA_CONNECTION);
Log.i("Hello monkey", "device and connection are:"+device+" :"+connection);
if(device!=null && connection!=null){
intent.removeExtra(Device.EXTRA_DEVICE);
intent.removeExtra(Device.EXTRA_CONNECTION);
phone.handleIncomingConnections(device, connection);
}
}
I have implemented code for incoming successfully and I'm getting incoming calls too when I come to a particular activity. But I get Device and connection null when I implement that code into background Service. Have you also done this?
@anshul..Unfortunately I have not implemented the code for background service.
Are you getting incoming call successfully? I'm getting an error when I accept it 07-21 00:34:55.811: E/PJSIP(1388): 00:34:55.820 pjsua_acc.c ....SIP registration failed, status=302 (Moved Temporarily) 07-21 00:34:55.931: A/PJSIP(1388): 00:34:55.941 pjsua_acc.c .....Unable to create/send REGISTER: Object is busy (PJSIP_EBUSY) [status=171001]
| common-pile/stackexchange_filtered |
How to prove the property of an independent set that is contained in any maximum independent set of graph $G$?
Let $S$ be an independent set that is contained in every maximum independent set of a graph $G = (V,E)$. Prove that every maximum independent set of $G$ contains at least one vertex $w \in N(u)-N[S]$ for each child $u \in N(S)$.
Some explanation:
A maximum independent set is an independent set in the graph $G$ with the largest possible cardinality.
For a set $X$ of vertices, let $N(X)$ denote the neighbors of $X$, i.e., the vertices $y \in V-X$ adjacent to a vertex in $X$, and denote $N(X)\cup X$ by $ N[X]$. Specifically, we use $v$ to denote the set $\{v\}$ of a single vertex $v$.
For an independent set $S$ of $G$, a vertex $u\in N(S)$ is called a child of $S$ if it has a unique neighbor $s \in S$(i.e., $|N(u)\cap S|=1$), where $s$ is called the parent of $u$.
I've tried contradiction but failed. Could any one help me with this question?
Proof.
Choose some child $u\in N(S)$ with its parent $s\in S$. Let $M\subseteq V$ be a maximum independent set. We have $s\in M$ (because $S$ is by assumption contained in every maximum independent set). If no vertex in $N(u)-N[S]$ is in $M$, then we can define the following other maximum independent set by replacing $s$ with $u$:
$$M':=M-s+u.$$
This works because all neighbors of $u$ are either $s$, or in $N(S)$, or in $N(u)-N[S]$, and none of these is in $M'$. Moreover, $M'$ has the same cardinality as $M$, so it is again maximum. However, $M'$ is a maximum independent set without $S\subseteq M'$, in contradiction to the choice of $S$.
$\square$
Maybe the following sketch can help to understand why this exchange is possible. The red vertices (and vertex sets) are part of the independent set. You can see that $u$ has (by assumption) no neighbors in the independent set except $s$.
Could you please explain why $S \subseteq M$ since I'm new to graph theory and cannot really get something obvious.
@Mark It was your assumption that $S$ is part of any maximal independent set.
I am sorry, I wrote "matching" instead of "independent set" everywhere! I hopefully replaced all mistakes now. Now $I$ would be a better name for the set instead of $M$, but I will leave it because of consistency.
| common-pile/stackexchange_filtered |
Concept about marginal probability $p(y)$to conditional probability $p(y|x)$ transformation?
I have a function like the following,
$p\left( y \right) = \int\limits_x {\int\limits_z {(Q({x^2} + y) + yz + z)dxdz} } $
Where $Q(x) = \frac{1}{{2\pi }}\int\limits_x^\infty {{e^{ - \frac{{{t^2}}}{2}}}dt}$ and $x,y,z \in R$. I would like to find $p(y|x)$ and $p(y|x = 0)$.
For, $p(y|x = 0)$ I put $x=0$. But I think I am wrong.
$p(y|x = 0) = \int\limits_z {(Q(y) + yz + z)dz} $
What have you tried?
I edited the question description what I tried but I think this is not the correct way of doing.
Just to be sure, is $Q(x^2 + y) + yz + z$ the conditional distribution of Y given $(X,Z)$? (That is what it seems to me). If so, then just integrate that wrt $z$ to obtain distribution of $Y|X$ and then plug in $x = 0$ in that to obtain the distribution for $Y| X = 0$. (In short, you seem to be on the right track)
Your definition of $Q(x)$ is a function of $t$, not $x$. Check the lower bound of the integral. $$\int_{\color{red}{t}}^\infty (\mathrm e^{-t^2/2}/{2\pi})~\mathrm d t=\operatorname{erfc}(\color{red}{t}/\sqrt{2\pi}) /(2\sqrt{2\pi})$$
| common-pile/stackexchange_filtered |
How to make a div responsive which works in old browser too
Suppose I have images inside a div where I specify the div width as 100%, but the image sizes are not responsive.
Here is a sample. Please see it and guide me on what CSS I need to add to the div so that whatever is inside the div will be responsive to the screen size.
<div class="headerCarouselwrapperOuter" style="width: 100%">
<div class="headerCarouselwrapper">
<img src="Images/new-bba-header-image1dyna.jpg" alt="" />
<img src="images/new-bba-header-image2dyna.jpg" alt="" />
<img src="images/new-bba-header-image3dyna.jpg" alt="" />
<img src="images/new-bba-header-image4dyna.jpg" alt="" />
</div>
</div>
Can you post your CSS too please?
In what way do you want the images to respond? What does "old browser" mean in this context?
why people give negative vote for this post ??
#headerCarouselwrapperOuter, .headerCarouselwrapper {width:100%;}
.headerCarouselwrapper img {display:block;width:100%;}
Add this to your CSS and this should make your images responsive
The default div width is 100%; to make the image responsive, add
img{
max-width:100%;
height:auto;
}
in your css.
| common-pile/stackexchange_filtered |
Prevent the user from moving, resizing and rotating input field in Excel 2016
I'm using Excel 2016 and I have an Excel worksheet with an input field. Every time I click on this input field, I'm able to move it, rotate it and resize it, but I don't want the end-user to be able to do any of these things.
How can I prevent the user from rotating, resizing and moving the field? I just want him to only be able to edit and cancel what he has edited in that field.
I have the same issue on Mac Excel 2016. It seems design mode is activated by default
It sounds like your spreadsheet is still in design mode.
Under the developer tab in the ribbon, check whether the design mode button is activated (if it has a yellow/orange background it is active). If it is, click the button to toggle design mode.
Once out of design mode, it should no longer be possible to make layout changes to the control.
No, it's not activated. By the way I'm using excel 2016 and if I click the design mode button the background color turns dark grey not yellow/orange.
| common-pile/stackexchange_filtered |
jqXHR object removes itself and container from an array
I am storing jquery ajax objects inside an array so that I can manually cancel the call,if it has not returned, based on user triggered events. These are ajax calls that I expect to run long.
JavaScript
var ajaxCalls = [];
ajaxCalls[0] = $.ajax({...});
$("#cancelButton").on("click", function() { ajaxCalls[0].abort(); });
The problem that exists is that the jqXHR object disappears from the array after the abort() method is called.
Now this looks great when it comes to managing the memory of my objects as it is magically removing itself and cleaning up nicely. However, it leaves an anti-pattern in my code where it looks like I am not cleaning up after myself. It also defeats the purpose of me creating a remove companion method to my add method.
Is there a reason for this? If my array contains more than one object, it re-sizes itself completely appropriately.
This array if the letters were jqXHR objects
['a', 'b', 'c', 'd']
Becomes this array when jqXHR object 'b' completes
['a', 'c', 'd']
Reading the code, one would expect that this would occur
['a', null/undefined, 'c', 'd']
WHY?
EDIT:
Here is a fiddle that I was able to get to reproduce what I am doing.
As a direct array element:
[ jqXHR, jqXHR, jqXHR ]
http://jsfiddle.net/itanex/uHX8q/
As a Key Value object element in the array:
[ { key: "key", value: jqXHR }, { key: "key", value: jqXHR } ]
http://jsfiddle.net/itanex/7DSrg/
If I understand you correctly, I'm not seeing that behavior here: http://jsfiddle.net/KXjam/.
I've done this several times in different projects, and never had that issue. I've also tested it every way I can think of the last 15 minutes here -> http://jsfiddle.net/6Rq7J/ and can not reproduce that issue at all, the XHR object is always available after an abort, and is never removed from the array automagically ?
I have had a hard time trying to get an example of this without setting up a service to call that purposefully has a long duration to be able to appropriately demonstrate this.
jsFiddle has that built in, just see the above Fiddle in my previous comment ?
OK I see that now. Learned something new about jsfiddle. Let me see if I can reproduce everything as an example now.
It does not make sense since by looking at the code above, xhr object has no reference to array at all. It is impossible for it to remove itself from container array. You must give us more context code before and after.
The complete callback runs when you abort the call. So this happens:
ajaxCalls.pop();
And there goes your jqXHR object.
Added a console.log to show the pop:
http://jsfiddle.net/ycpZc/
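One way around this (a sketch, not the fiddle's exact code) is to key each call and have its complete callback remove only its own entry, instead of pop(), which removes whichever element happens to be last:

```javascript
// Sketch: track in-flight calls by key so a complete callback can
// remove exactly the call that finished, rather than popping the
// last array element. The xhr values here are plain stand-ins.
function makeTracker() {
  const calls = {};
  return {
    add(key, xhr) { calls[key] = xhr; },    // register a call
    complete(key) { delete calls[key]; },   // run this in the call's complete handler
    get(key) { return calls[key]; },
    size() { return Object.keys(calls).length; }
  };
}

// e.g. tracker.add('search', $.ajax({ /* ... */ complete: () => tracker.complete('search') }));
```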
So I am just getting tangled into the 'Great Weave' that I made. Guess it is a combination of the weekend and a clear view on the code that shows that with all the async behaviors happening, my object was being cleaned up as I had set up.
| common-pile/stackexchange_filtered |
Removing (Presumably) Extraneous Network Adapters from Device Manager (eg WAN Miniport)
Can anyone shed some light on the default items in the Network Adapters branch of the Windows Device Manager? In addition to the network card, there are always a bunch of other things that I cannot find any useful information on such as RAS Asynch Adapter and all the WAN Miniports (IKEv2, IP(v6), L2TP, Network Monitor, PPPOE, PPTP, SSTP).
I would like to trim it down and uninstall whatever possible but cannot find out exactly what these items are responsible for (and therefore whether or not they are needed on my system). Most of the pages found with Google are either people trying to fix an error with such an item or someone asking what it is and being given an unhelpful, pat response like “just leave them alone” or “they’re necessary”. I highly doubt that is the case and I’m certain that at least some items can be removed because even if they become necessary in the future they can be added again (for example installing Network Monitor or Protowall reinstalls the miniport drivers anyway).
Just leave them alone, they're necessary.
@Dennis, cute. :-D
They're not extraneous. Those are essentially drivers for various types of network connections. For example, the WAN Miniport (PPTP) adapter (driver) is used when making a VPN connection to a PPTP VPN server. The WAN Miniport (PPPoE) adapter (driver) would be used when your computer is connected directly to a PPPoE broadband modem. Removing these adapters (drivers) would break the functionality that these adapters (drivers) provide.
Yes, but like I said, they are not necessarily necessary for every system. I am trying to determine exactly what each one does so tha I can determine which ones are required for my system. I don’t use VPN, so then I suppose I can uninstall the PPTP right? I don’t use IPv6, so then I can uninstall the IPv6 Miniport driver right? Isn’t the RAS Async Adapter related to dial-up? I don’t use dial-up. etc.
I’m not looking for pat answers, I’m looking for information.
Remove the ones you don't think you need and see what happens. In addition, I didn't give you a pat answer.
How do you remove ones that aren't in the Control Panel / Network Connections, but appear in the list when you do ipconfig ?
@SteveC Control_Panel/Device_Manager -> Network_adapters. Tick from View menu Show hidden devices then make a right click on "non needed" and disable it.
I was getting Event ID Error 3 WiFiSession errors when viewing the Event Viewer and I don't even have wifi on this desktop. Searching why I was getting this error I was led to believe it was the Network Adapters (googling the error) and when in Device Manager then showing Hidden Devices I too had listed under my LAN adapter (GBE Realtek) there were 8 WAN Mini-port items greyed/lite blued out.. when I uninstalled them those errors in Event Viewer stopped.
Correct, you shouldn’t need them all in every environment. You might like to make a system image backup before making major changes.
Best as I can find out:
PPPOE is for connecting your machine directly to DSL without using a router. Or for creating a hot spot.
PPP might be for using your computer to extend your network.
I don’t know what the RAS Async Adapter is. I suspect it might be related to Azure server, but I cannot find out for sure.
I also don’t know about the network monitors.
IKEv2, L2TP, SSTP, PPTP, and GRE are VPN/remote access configurations.
I would tend to upgrade, then disable, and see if security or performance improves. Some of this you may never need, but you may later decide to, say, load a security solution containing a VPN so you can take your laptop or mobile device to a coffee shop or hospital and secure your connection there.
| common-pile/stackexchange_filtered |
Spring boot STOMP websocket works locally, but not on deployed server
I have a STOMP client and Spring backend, the code works fine when local but not when deployed to server, failed to connect to server.
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
config.enableSimpleBroker("/topic");
config.setApplicationDestinationPrefixes("/app");
}
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
registry.addEndpoint("/tracker").setAllowedOrigins("*");
}
}
Javascript client initiates connection with :
var socket = new WebSocket("ws://localhost:8080/tracker");
When I try this after deployment
WebSocket("wss://myurl/tracker")
or
WebSocket("wss://myurl:8080/tracker")
The connection fails
Can you provide more details? What is the exact error code and message?
Is it possible that you have a reverse proxy or a load balancer in front of your deployment server? This could be blocking the websocket from connecting in the deployment environment. In that case, you need to configure the proxy/balancer to allow for websockets. In NGINX, these are the lines you are looking for:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
Websocket connection starts with an HTTP Upgrade request to upgrade the protocol to WS or WSS depending on security. The lines above instruct NGINX to pass that request further to the server.
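For context, a fuller location block might look like the following (hypothetical paths and upstream address; adjust to your deployment):

```nginx
location /tracker {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;               # WebSocket requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
```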
I recommend you have an in-depth read here in NGINX's guide.
| common-pile/stackexchange_filtered |
Put caption on the left of the figure
How can I put the caption of a figure on the left of the figure, as in the attached capture?
See http://tex.stackexchange.com/questions/29143/caption-on-the-side-of-a-figure
This may be helpful:
http://tex.stackexchange.com/questions/275131/align-caption-to-the-left
Do note that floatrow is extremely buggy. So don't try anything too adventurous with it ....
using floatrow do not work for me
| common-pile/stackexchange_filtered |
Another proof of the divergence of $\int_1^\infty\!\frac{\ln{f(x)}}{x^2}\mathrm{d}x$, and the asymptotic behaviour of its integrand?
Reportedly, the last problem of this contest is:
Given a positive decreasing sequence $\displaystyle{\{a_n\}}_{n\geqslant1}$ satisfying $\displaystyle\lim_{n\to\infty}a_n=0$, show that if the series $\displaystyle\sum_{n=1}^\infty{a_n}$ diverges, then the integral $\displaystyle\!\int_1^{+\infty}{\!\dfrac{\ln{\operatorname{\it{f}}{(x)}}}{x^2}\operatorname{d}{\!x}}$ diverges as well, where $\displaystyle\operatorname{\it{f}}{(x)}≔\sum_{n=1}^\infty{a_n^nx^n}$.
Its suggested solution (in S̲i̲m̲p̲l̲i̲f̲i̲e̲d̲ Chinese) can be found in the link above. However, I attempt to solve this via another approach, which is as follows:
It is well known that $\displaystyle\sum_{n=1}^\infty{\mspace{-1.5mu}n^{-n\mspace{-0.5mu}}x^n}=\int_0^1{\mspace{-1mu}{xt^{-xt}\mspace{1.5mu}}\mathrm{d}t}$ (see below) and then $\displaystyle{(\sum_{n=1}^\infty{n^{-n}x^n\!})^{1/x}}\sim{\mspace{-1.5mu}\exp{\!\left(\mspace{-0.75mu}\frac1{\mathrm{e}}\mspace{-0.75mu}\right)}}$ (This is AMM11982.) as $x\to+\infty$ and hence $\displaystyle\int_1^{+\infty}{\mspace{-3.5mu}\dfrac{1}{x^2}\mspace{0.5mu}\ln{\mspace{-5mu}\left(\sum_{n=1}^\infty{\mspace{-1.5mu}\frac{x^n}{n^n}}\mspace{-2.5mu}\right)\mspace{-4.5mu}}\operatorname{d}{\!x}}$ diverges. The main method is: For all sufficiently large $x\ge1$, $\operatorname{\it{f}}{(x)}$ should be positive, so, if one can prove that $\sqrt[x]{\operatorname{\it{f}}{(x)}}=\operatorname{\mathcal{O\!}}{\left(\mathrm{e}^{x^{\varepsilon}}\right)}$ (which means $\dfrac{\ln{\operatorname{\it{f}}{(x)}}}{x^2}=\dfrac1x\ln\!\left(f(x)\right)^{\mspace{-1mu}1/x}=\operatorname{\mathcal{O\!}}{\left(\mspace{-1.5mu}\dfrac1{x^{1-\varepsilon}}\!\right)}$) as $x\to+\infty$ for some non-negative real number $\varepsilon$ (depending on $\{a_n\}$), the respective integral must diverge.
Now, since $$\begin{align}\operatorname{\it{f}}{(x)}&=\sum_{n=1}^\infty{a_n^nx^n}=\sum_{n=0}^\infty{a_{n+1}^{n+1}\frac{x^{n+1}}{n!}\operatorname{\!\Gamma\!}{\left(n+1\right)}}=\sum_{n=0}^\infty{a_{n+1}^{n+1}\frac{x^{n+1}}{n!}\mspace{-4.5mu}\int_0^1{\mspace{-4.5mu}\ln^n\!\!\left(\mspace{-2.25mu}\frac1y\mspace{-2.25mu}\right)\mspace{-4.5mu}\operatorname{d}{\!y}}}\text{, }\\&=x\sum_{n=0}^\infty{\frac{x^n}{n!}\mspace{-3.5mu}\int_0^{+\infty}{\mspace{-4mu}u^n\mathrm{e}^{\mspace{-1.5mu}-\frac{\mspace{-4.5mu}u}{a_{\mspace{2.5mu}n+1\mspace{7mu}}}\mspace{-5mu}}\operatorname{d}{\!u}}}=x\sum_{n=0}^\infty{\frac{x^n}{n!}\mspace{-3.5mu}\int_{0^+}^1{\left(-t\ln{t}\right)^{n}t^{a_{n+1}^{-1}\mspace{2mu}-\left(n+1\right)}\operatorname{d}{\!t}}}\text{,}\end{align}$$ provided that the termwise integration is allowed, we have $$\operatorname{\it{f}}{(x)}=x\mspace{-0.5mu}\int_0^1{\sum_{n=0}^\infty{\mspace{-1mu}\frac{\left(-x\mspace{1mu}t\ln{t}\right)^{n}}{n!}\mspace{-2.5mu}\cdot\underline{t^{a_{n+1\mspace{5.5mu}}^{-1}-\left(n+1\right)}}\operatorname{d}{\!t}}}\text{.}$$ When $a_n=n^{-1}$, direct evaluation leads to the above equation, and using one theorem (see below), its divergence can be proven. But for other $\{a_n\}$ (where we cannot ignore the underline part), I am not able to obtain similar result as a smallest divergent series is nonexistent (cf. Nonexistence of boundary between convergent and divergent series? and Is there a slowest rate of divergence of a series?)! What can we say for any positive decreasing sequence $\{a_n\}$ such that $\lim_{n\to\infty}a_n=0$ and $\sum_{n=1}^\infty{a_n}=+\infty$?
Why I write $f(x)$ in that form? A theorem is: $${{g,h\in\operatorname{\mathscr{C\!}}{\left(\left[a,b\right]\right)}}\land{h>0}}\implies{\displaystyle\lim_{n\to\infty}\sqrt[n]{\int_a^b{\left\lvert\operatorname{\it{g}}{(x)}\right\rvert^{n}h(x)\operatorname*{d\!}{x}}}=\max_{{a}\leqslant{x}\leqslant{b}}\left\lvert\operatorname{\it{g}}{(x)}\right\rvert}\tt.$$ And ignoring the underline part, $\displaystyle\left(x\int_0^1\sum_{n=0}^\infty{\mspace{-1mu}\frac{\left(-x\mspace{1mu}t\ln{t}\right)^{n}}{n!}}\times\left(\textrm{?}\right)dx\right)^{\frac1x}\overset{?}\approx{x^{1/x}\left(\int_0^1\left(\frac1{t^t}\right)^{x}\times\left(\textrm{¿}\right)dx\right)^{1/x}}$ is just similar to the left hand side of this theorem. Unfortunately, I do not know how to use those condition of $\displaystyle\{a_n\}$ in next steps (based on my “another” approach). Is it really unworkable in general? I have no idea.
Besides, Although I reckon that the problem at the begining is not new at least in MSE (or other forums like AoPS), I can't find such old posts after searching. So it can be still suitable for posting these questions… I'd be grateful for any help one may provide.
| common-pile/stackexchange_filtered |
Shiny app with two sliders: second slider is only selecting the endpoints, not range
I have written the following R Shiny app. The user selects farm from a drop-down menu and a single value of year from a slider. I would like the user to be able to select a range of values for month, but the second slider is only selecting the end points of month, not the range.
Here is what the month slider is selecting, as an example with farm == 1 and year == 2 from the data.frame fruit created within the app below:
> fruit[fruit$farm == 1 & fruit$year == 2 & fruit$month %in% c(4,9),]
farm year month apples cherries
16 1 2 4 70.23774 82.18512
21 1 2 9 38.99675 66.07546
Here is what I would like the month slider to select:
> fruit[fruit$farm == 1 & fruit$year == 2 & fruit$month %in% c(4:9),]
farm year month apples cherries
16 1 2 4 70.23774 82.18512
17 1 2 5 37.17340 83.47027
18 1 2 6 36.00925 73.27322
19 1 2 7 31.20337 98.30440
20 1 2 8 33.93355 63.92046
21 1 2 9 38.99675 66.07546
How can I modify this code so all selected months are included in the resulting plot? Here is my Shiny app:
library(shiny)
set.seed(1234)
n.farms <- 5
n.years <- 3
fruit <- data.frame(
farm = rep(1:n.farms, each = 12*n.years),
year = rep(rep(1:n.years, each = 12), n.farms),
month = rep(1:12,n.farms * n.years),
apples = runif(n.farms*12*n.years, 20, 80),
cherries = runif(n.farms*12*n.years, 0, 100)
)
ui <- fluidPage(
titlePanel("Subset and Plot Fruit Data"),
sidebarLayout(
sidebarPanel(
selectInput("codeInput1", label = "Choose Farm", choices = unique(fruit$farm)),
sliderInput("codeInput2", label = "Select Year",
min = min(fruit$year), max = max(fruit$year),
value = median(fruit$year), step = 1),
sliderInput("codeInput3", label = "Select Month Range",
min = min(fruit$month), max = max(fruit$month),
value = unique(fruit$month)[c(4,9)], step = 1)
),
mainPanel(
plotOutput("view")
)
)
)
server <- function(input, output) {
dataset <- reactive({
return(subset(fruit, (farm == input$codeInput1 &
year == input$codeInput2 &
month %in% input$codeInput3)))
})
output$view <- renderPlot(plot(dataset()$apples, dataset()$cherries))
}
shinyApp(ui = ui, server = server)
If you select a range, sliderInput returns the minimum and the maximum, but not all values in this range. So you have to generate the range from the two returned values:
month %in% (input$codeInput3[1]:input$codeInput3[2])
Edit
As Mark points out, my solution only works for integers; the correct way to test for a range (with included borders) is:
month >= input$codeInput3[1] & month <= input$codeInput3[2]
Your answer worked in this case but not in a subsequent case in which I was told I needed to use: (month >= input$codeInput3[1] & month <= input$codeInput3[2]). Your answer probably is fine in this case because month is an integer.
| common-pile/stackexchange_filtered |
How to filter an object by data from array in JS
I have an array
const newChecked = ["recreational", "charity"]
and an object
const allActivities = [
{activity: '...', type: 'social', },
{activity: '...', type: 'social', },
{activity: '...', type: 'charity', },
{activity: '...', type: 'recreational', },
]
And I want to get a new array that does not contain objects whose type is contained in newChecked. In other words, newActivitiesList should only contain the entries with a type of "social".
const newActivitiesList = allActivities.filter((item) =>
newChecked.map((i) => {
return item.type !== i;
})
);
You can use Array#filter in conjunction with Array#includes.
const newChecked = ["recreational", "charity"]
const allActivities = [
{activity: '...', type: 'social', },
{activity: '...', type: 'social', },
{activity: '...', type: 'charity', },
{activity: '...', type: 'recreational', },
];
const res = allActivities.filter(({type})=>!newChecked.includes(type));
console.log(res);
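As a small variant: if newChecked grows large, a Set gives O(1) membership tests instead of a linear Array#includes scan on every element. Using the question's sample data:

```javascript
// Variant of the filter above using a Set for constant-time membership
// tests. Same sample data as in the question.
const newChecked = ["recreational", "charity"];
const allActivities = [
  { activity: '...', type: 'social' },
  { activity: '...', type: 'social' },
  { activity: '...', type: 'charity' },
  { activity: '...', type: 'recreational' },
];

const checkedSet = new Set(newChecked);
const newActivitiesList = allActivities.filter(({ type }) => !checkedSet.has(type));
// newActivitiesList now holds only the two 'social' entries
```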
You can use Array.prototype.reduce function to create a new array
const newChecked = ["recreational", "charity"]
const allActivities = [
{activity: '...', type: 'social', },
{activity: '...', type: 'social', },
{activity: '...', type: 'charity', },
{activity: '...', type: 'recreational', },
];
const newActivitiesList = allActivities.reduce((accumulator, current) => {
if(newChecked.indexOf(current.type) === -1) {
return accumulator.concat(current);
} else {
return accumulator;
}
},[]);
console.log(newActivitiesList);
| common-pile/stackexchange_filtered |
log4j add prefix to all messages from one abstract class
I am trying to use log4j to add prefix to all log commands.
Previously I have been using @Slf4j in front of all classes, but now I need to add certain prefix to all messages, so this is no longer possible.
My class structure is as follows:
abstract class MyAbstractClass {
protected final MyLogger log;
MyAbstractClass(String foodType) {
this.log = new MyLogger(Logger.getLogger(this.getClass()), foodType);
}
}
class MyClass1 extends MyAbstractClass {
MyClass1(String foodType) { super(foodType); }
public void myMethod1() {
log.info("hehe");
}
}
class MyClass2 extends MyAbstractClass {
MyClass2(String foodType) { super(foodType); }
public void myMethod2() {
log.info("hohohoho");
}
}
/* MyLogger.java */
@Slf4j
public class MyLogger {
private final Logger LOGGER;
private String PREFIX;
public MyLogger(Logger logger, String foodType) {
LOGGER = logger;
PREFIX = foodType + ": ";
}
public void info(final String str) {
log.info(PREFIX + str);
}
public void warn(final String str) {
log.warn(PREFIX + str);
}
public void debug(final String str) { log.debug(PREFIX + str); }
public void error(final String str) {
log.error(PREFIX + str);
}
public void error(final String str, Exception e) {
log.error(PREFIX + str, e);
}
...
// i know.. it's so inconvenient because I have to override all the methods that I want to use from log4j Logger. I wish there is an alternative.
}
If I were to do something like this,
public static void main() {
MyClass1 c1 = new MyClass1("burger");
MyClass2 c2 = new MyClass2("pizza");
c1.myMethod1();
c2.myMethod2();
}
I would get something like this.
/* console */
[INFO] MyLogger#info - burger: hehe
[INFO] MyLogger#info - pizza: hohoho
BUT, i would like to get something like this to be able to trace where exactly it came from (original Log4j does this).
/* console */
[INFO] MyClass1#MyMethod1 - burger: hehe
[INFO] MyClass2#MyMethod2 - pizza: hohoho
Is there way for me to accomplish this?
Thanks!
Rather than wrapping the log4j logger, why don't you do something like log.info(foodType + ": hehe")? Also, I don't see where the #info is generated for [INFO] MyLogger#info. Is it defined in a PatternLayout? If so, post that too.
@bradimus I could do the log.info(food type + ":" + msg) method, but this is very repetitive when I have many log calls. I just wanted a more clean way to do this haha. Also, I am not aware of PatternLayout, could you give some more info on this?. #info is generated because the log4j log calls are made from MyLogger.info. log4j seem to be tracking its log calls to the MyLogger functions rather than the actual methods that call the log functions (MyClass1.myMethod1 and MyClass2.myMethod2)
PattenLayout javadoc.
@bradimus here is the full pattern layout (i revised a bit and still cant get burger and pizza to print): %d{yyyy-MM-dd HH:mm:ss,SSS} [%-5p] %c{1}#%M - %m%n
problem solved? With the pattern above, I got the correct output.
Similar question: https://stackoverflow.com/q/38159124/65839
I solved this by using the MDC Pattern. I no longer need MyLogger class. I can just use the Log4j 's Logger class.
You can do something like
abstract class MyAbstractClass {
protected final Logger log;
MyAbstractClass(String foodType) {
MDC.put("myfood", foodType);
this.log = LoggerFactory.getLogger(getClass());
}
}
and in the pattern, you can add on %X{myfood} to access the variables you set :)
More on MDC here: http://logback.qos.ch/manual/mdc.html
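To make the MDC value actually show up, the layout needs a %X conversion. Here is a minimal logback.xml sketch; the appender name and the rest of the pattern are illustrative, not taken from the original post:

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- %X{myfood} prints whatever was stored with MDC.put("myfood", ...) -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} [%-5level] %logger{1}#%M - %X{myfood}: %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE" />
  </root>
</configuration>
```

One caveat worth knowing: MDC is thread-local, so putting the value in a constructor means the last-constructed instance wins for all subsequent logging on that thread. With both c1 and c2 alive on one thread, c1's later log lines would show "pizza".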
If you are using PatternLayout (and you should) you can do this by defining the pattern to be
%C{1}:%M - %m%n
Could you please expand on this? I am not very familiar with PatternLayout.
@Blee here is a nice tutorial on layouts. https://logging.apache.org/log4j/2.x/manual/layouts.html
This doesn't work; it still prints MyLogger#info.
Here is the pattern i made: %d{yyyy-MM-dd HH:mm:ss,SSS} [%-5p] %c{1}#%M - %m%n
I should also add that I want to be able to control Logging level for individual classes in XML. Currently, in my log4j.xml, I just have this:<logger name="com.logger.MyLogger"> <level value="INFO" /> </logger>
And I have no control on individual classes (MyClass1, MyClass2)
@Blee It's a capital C; in your pattern you have a lowercase c. %c prints the logger's category name (which is why you see MyLogger), while %C prints the fully qualified class name of the caller.
Adding new variable to dataframe equally
Using RStudio, I have this:
GROUP NUM
A 45
A 78
A 79
B 45
B 47
B 99
C 28
C 78
C 54
I want to add a new variable, named AGENT, which is:
AGENT=c("John", "Maria", "Pamela")
But the problem is that I want each of my agents to be spread equally across the initial dataframe within each group. Basically, I want this:
GROUP NUM AGENT
A 45 John
A 78 Maria
A 79 Pamela
B 45 John
B 47 Maria
B 99 Pamela
C 28 John
C 78 Maria
C 54 Pamela
My example here is basic because I have as many groups as agents. However, in my real case I might have 70 of each letter (70 A, 70 B and 70 C) and still only 3 agents, and I still want them spread as equally as possible.
For example, if I had 6 A, I would have :
GROUP NUM AGENT
A 45 John
A 78 Maria
A 79 Pamela
A 48 John
A 97 Maria
A 59 Pamela
...
And if I had 7, then 7th would be assigned randomly, or just the next on the list.
Any ideas? I've been torturing myself over this. Thanks in advance! :P
If "or just the next on the list." is appropriate for any overflow when the group is larger, you can take advantage of vector recycling and just do it in one assignment:
dat$newvar <- with(dat, ave(1:nrow(dat), GROUP, FUN=function(x) AGENT) )
dat
# GROUP NUM newvar
#1 A 45 John
#2 A 78 Maria
#3 A 79 Pamela
#4 B 45 John
#5 B 47 Maria
#6 B 99 Pamela
#7 C 28 John
#8 C 78 Maria
#9 C 54 Pamela
Just ignore any warnings you might get when the groups are not neatly matched to the size of AGENT
data.table could be used too, in a similar fashion:
library(data.table)
setDT(dat)
dat[, newvar2 := AGENT, by=GROUP]
I found this answer for a similar problem. It works for me as well, but the vector I'd like to add contains dates. When I add it this way, the dates are transformed into numeric variables. Simply wrapping with(...) in as.Date() doesn't work. It throws the error: 'origin' must be supplied.
Any ideas perhaps?
@Tingolfin - you can specify the default origin= manually as.Date(ave(...), origin="1970-01-01"), or use a Date object in the initial ave call like ave(rep(AGENT[1],nrow(dat)), dat$GROUP, FUN=function(x) AGENT)
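For comparison, the round-robin-within-group behaviour that the recycling produces (including the overflow handling) can be sketched outside R. This is a hypothetical plain-Python version, assuming rows of the same group are contiguous as in the example data:

```python
from itertools import cycle

def assign_agents(groups, agents):
    """Assign agents round-robin, restarting the cycle at each new group."""
    assignment = []
    current, agent_iter = object(), None  # sentinel forces a restart on the first row
    for g in groups:
        if g != current:                  # new group: restart the agent cycle
            current, agent_iter = g, cycle(agents)
        assignment.append(next(agent_iter))
    return assignment

groups = ["A", "A", "A", "B", "B", "B", "C", "C", "C"]
print(assign_agents(groups, ["John", "Maria", "Pamela"]))
# → ['John', 'Maria', 'Pamela', 'John', 'Maria', 'Pamela', 'John', 'Maria', 'Pamela']
```

An uneven group (say, seven A rows) simply keeps cycling, which matches the "just the next on the list" behaviour the question asks for.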
Try this:
# Data
df <- data.frame("GROUP" = c("A","A","A","A","B","B","C","C","C"),
"NUM" = c(45,78,79,45,47,99,28,78,54))
AGENT=c("John", "Maria", "Pamela")
# Assign agents
df$agent <- NA
groups <- levels(factor(df$GROUP))
lapply(groups, function(x)
{
df[df$GROUP == x, "agent"] <<-
c(rep(AGENT, as.integer(length(df[df$GROUP == x, "NUM"]) / 3)),
AGENT[0:(length(df[df$GROUP == x, "NUM"]) %% 3)])
})
If there are more than 3 agents, replace the 3 in the script with length(AGENT).
I'm not the downvoter, but I suspect the use of <<- might set off alarms for some folk.
I came up with a somewhat complicated way to do it using the row index. There might be a much easier way. Here is the code:
library(dplyr)
AGENT <- c("John", "Maria", "Paul")
fun <- function(x){
x %>% mutate(agent=AGENT[((1:nrow(.) - 1) %% 3) + 1])
}
df %>%
split(.$GROUP) %>%
lapply(fun) %>%
bind_rows()
GROUP NUM agent
1 A 45 John
2 A 78 Maria
3 A 79 Paul
4 B 45 John
5 B 47 Maria
6 B 99 Paul
7 C 28 John
8 C 78 Maria
9 C 54 Paul
Even if there isn't much data, or the number of rows in a GROUP is not a multiple of the number of agents, the variable is still assigned following the order of AGENT.
df1
GROUP NUM
1 A 45
2 A 78
3 B 45
4 C 28
df1 %>%
split(.$GROUP) %>%
lapply(fun) %>%
bind_rows()
GROUP NUM agent
1 A 45 John
2 A 78 Maria
3 B 45 John
4 C 28 John
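The index arithmetic in the mutate call, ((1:nrow(.) - 1) %% 3) + 1, is just a zero-based modulo lookup once translated out of R's 1-based indexing. A hypothetical Python equivalent:

```python
def agent_for_rows(n_rows, agents):
    # R's ((1:n - 1) %% k) + 1 under 1-based indexing is simply i % k under 0-based indexing
    return [agents[i % len(agents)] for i in range(n_rows)]

print(agent_for_rows(4, ["John", "Maria", "Paul"]))
# → ['John', 'Maria', 'Paul', 'John']
```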
New three-tiered badge idea: Explainer → Refiner → Illuminator
The difference between a poor or meh question and a stellar question can often simply be someone understanding it and providing it a great answer. I can't begin to count the number of times I've justified re-opening a question as a moderator by saying:
Look at the answer it got, though. This isn't something we want deleted, this is something we want fixed, because it's obviously valuable. Someone with domain knowledge can easily edit that question based on the answer it received.
This doesn't only apply to borderline-poor or poor questions – sometimes a good edit means the difference between 100 and 10,000 people finding something through typing things into Google. Good titles are hard.
We think we've come up with a way to provide incentive for good, opportunistic edits to happen more often, ideally without questions needing to go through closure or collecting a centimeter of dust before they do. What if we simply rewarded folks that answered and edited questions with a special, relatively difficult to earn badge?
Let's say that you answer something, and:
You edit the question 12 hours before or after answering it. This allows you to edit now, answer later — or answer now and edit later, when you have the time.
Your edit isn't rolled back, or outright rejected if it was a suggested edit
The question is not closed for any reason, even simply being a duplicate
Your answer has a score of 1 or higher
… then you've done something that we probably want to recognize. You understood something, you provided the knowledge that you have, and then you provided an edit to make sure that more people in need of this knowledge can find it, while raising the overall quality of the site.
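The criteria above boil down to a simple per-question predicate. A sketch in Python, purely to pin down the logic; the field names are illustrative, not an actual implementation:

```python
from datetime import timedelta

def qualifies(answer, question, edits, window_hours=12):
    """Sketch of the proposed badge criteria; field names are illustrative."""
    window = timedelta(hours=window_hours)
    edited_nearby = any(
        e["editor"] == answer["author"]        # the answerer made the edit
        and not e["rolled_back"]               # and it stuck
        and abs(e["when"] - answer["when"]) <= window  # within the 12-hour window
        for e in edits
    )
    return edited_nearby and not question["closed"] and answer["score"] >= 1
```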
Here's the proposed tiers for the badge (within the above stated criteria):
Explainer (Bronze): Answered & edited 10 questions
Refiner (Silver): Answered & edited 50 questions
Illuminator (Gold): Answered & edited 500 questions
We think this is going to be pretty hard to game. You can't go leaving streaks of terse snark all over the place on questions you know are going to be closed anyway, and junk edits aren't going to fly.
We're talking about rewarding folks that take a tiny bit more time out of their day just to write a more descriptive title so that folks can find the awesome answer that they wrote. Just doing that alone can make a big difference. Sure, it's a tad hard to get at the gold level, but it's supposed to be. If you routinely edit questions that you answer and have done so for some time, you'd probably get it the day we roll it out.
While this is technically something that we came up with on the quality initiative (more on MSE | more on MSO) — this badge could be earned on all of our sites.
What do you think? What did we miss? What could be better about the idea?
Could you please share the stats on the number of people who'd get the badges at feature introduction? Are we talking about dozens or hundreds of Keeper badges?
Interesting idea. It can really be great to turn a not-so-good closable/closed question into a really interesting, on-topic thing, and a badge as a reward would add to the feeling of simply having improved the site. Yet to motivate exactly that behaviour and prevent abuse (or, more mildly put, over-awarding the badge), there would need to be some measure of the low quality/closability of the original question, because otherwise you'd have millions of those badges. I edit nearly every question, and they're not necessarily bad ones (my edits still improve them, of course, but maybe only slightly).
Very nice idea. Would/should tag edits count towards these badges? Proper tagging does make an answer/question easier to find but awarding badges for it could lead to a lot of not so useful tag additions.
Oh, I'm all about this. When I have some time to myself, I'll try to tease some things out of this - see if I could improve it/find flaws with it. May not be for at least 18 hours, though.
@Harry: Probably that's why it has this stipulation: Your edit isn't rolled back, or outright rejected if it was a suggested edit.
@Harry Yes. But the same editing norms apply, try to fix as much as you possibly can in your edit. If it's just missing a language tag, that's fine - add the tag. But please don't leave 'error in listview' as the title. Remember, the edit has to stick.
@AndriyM: It is perfectly possible to add a not so useful tag without it getting rolled back mate. Somebody could go on and just add a "loops" tag to any question that remotely is about loops and not many would really want to roll it back. But I think there is not much that can be done about it. To get something positive, we have to end up accepting some negatives too.
@DeerHunter One of the reasons I started this as a discussion was to get feedback on the perceived difficulty of the higher tiers. I plan to circle back when folks have chimed in about practical concerns, when we think we've got it all worked out. In short, I'm not yet ... there yet.
For: One of the most frustrating things about SO is seeing perfectly viable questions closed by people who don't understand them. Against: everyone who answers a question will then attempt to "polish" the question in silly little ways.
Will there be an "quality controls" on the edits beyond just "not rolled back or rejected"? If I just answer and then immediately edit <!--html comments--> or zerowidthspaces into the question, will I still be eligible for the badge? (I tend to believe that such invisible edits won't be rolled back, or even really noticed.)
I think this is a fantastic idea, but I, too, share the concern that it will lead to numerous inconsequential edits for the sole purpose of achieving the badges.
@TimPost: Since this badge is different from Copy Editor and is introduced with more emphasis on increasing the usefulness/searchability of the question, wouldn't adding a parameter for the number of views (total/post-edit) make sense? I think that could be a better (though not the best) indicator of the usefulness of the edit.
I suggest a stronger condition: The question needs to go from negative votes to positive after the edit.
People rarely up-vote questions, as a general rule. People certainly don't tend to upvote questions with a negative score, even if they are now "stellar". So I'm in favour of merely the edit being enough.
I do not understand the "12 hour" criteria. Does it matter if a Q&A are both improved within one hour or whether the improvements are one week apart? Both improve the overall quality of the site.
@Mysticial how often do you see that kind of turn around with an edit? I doubt many would ever achieve a gold badge if that was the criteria.
How to differentiate between junk / formatting edits vs content / meaning edits will be difficult enough to automate without human intervention. Still, you have a good idea.
I think the edit should be substantial, editing a tag or a title is not enough. Also, I think the answer should be accepted.
@Tanner Not often enough. But a badge may encourage people to actually try. Of course the counts can be adjusted to make the difficulty more reasonable.
@Mysticial what you suggest could make a slightly different set of badges, and limits would have to be lower, and names would be more of medical / healing nature. Therapist (1) -> Surgeon (10) -> Savior (or... Dentist:) (50)
@gnat Dentist over surgeon and on the level of Savior? Interesting...
@BAR yeah, that's only because dentists deal with gold :)
Oooh. For the first time, a badge I might actually deliberately try for!
@Sklivvz Sometimes a title edit is substantial, if you completely rewrite the title based on your answer. And yeah, we're saying normal editing guidelines apply. However, I don't want to do anything that starts up the accept-my-answer-grrr badgering again ;)
I don't think the 12 hour rule makes sense. I won't necessarily see that my answer is a great one until I see it upvoted. My highest voted answers are the ones where it's worth going back and editing the question, surely, and I can't assess that within 12 hours. Or does this defeat your point of wanting it done soon?
@RobieBasak You bring up a very interesting point I hadn't considered: you can't always jump in and edit unless you're really certain you understood what they were asking. 24 hours might be better, and would allow for an OP that's a little slower to respond. Thinking about it.
What is the purpose or benefit of the 12 (or 24) hour rule? My earlier comment about the 12 hour rule did not get any response.
Do we really want to award users for editing a question to match an answer, let alone their own? Sure, there are some cases where this is beneficial, e.g., if the asker only clarified something by accepting an answer or the asker vanished without clarifying. But many unclear questions can only be clarified by the asker and clarifying them without the asker’s consent can lead to more confusion and even disgruntle the asker, if it was against their intent. Such edits are very rare as far as I can tell and I would prefer it to stay that way.
You want to make a 12 hour window between edit and answer to avoid gaming it.
But honestly - 12 hours on Stack Overflow with 7000 questions each day - after the first 10 minutes a question does not get much attention anymore. And the only ones editing the posts will be the users wanting the badge.
My suggestion is similar to Mysticial's, but less restrictive: add the requirement Question upvoted at least once after the edit, by someone other than the answerer. This would make badges better correlate with tangible improvement of questions, rather than with editing for the sake of the badge.
@TimPost what about asking that the question has no other undeleted answers? The intent is to single out questions that have no answer because are not good, get improved and answered. More in general, why not give a badge for edits that lead to upvoted answers (by others too!)? That would still be a improve-quality signal. Just ideas I'm putting out here.
To be honest I'd rather you spent the time fixing stuff that's broken and would ultimately benefit everyone. For example, making search better would presumably get people directed to existing answers so they didn't have to ask crappy new questions. It's also help with things like http://meta.stackexchange.com/questions/232242/help-us-find-duplicates-efficiently and http://meta.stackexchange.com/questions/232131/boost-duplicate-post-search-results-by-incoming-link-count
@Sklivvz We considered part of the criteria being that the question itself was subsequently up-voted, by someone other than the editor. But we're already getting a little deep in criteria, and we really need more incentive for people that have proven they've figured out what a question is about by their answer to make it clear to everyone else through an edit. This .. does that, as simply as we could put it together.
@TimPost specifically on skeptics I'm pretty sure the current criteria would openly encourage bad behavior: change a question so it's a strawman or trivial, then answer it.
How long does an edit need to "stick" to qualify? Or, I guess a better question is: How long do the criteria need to be "maintained" until the badge is awarded? It has to be more than a few minutes, or even hours (on smaller sites) but days seems to be too much.
Any thoughts on griefing? In other words, User "Asshat" sees that user "HelpfulGuy" has answered a question and edited it, so to prevent "HelpfulGuy" from getting credit toward one of these badges "Asshat" makes a subsequent edit to the question.
@FishBelowtheIce I'm guessing that a subsequent edit isn't counted as your edit still stands? It's only for edit rollback/rejection.
@DavidG: A malicious rollback, then.
@FishBelowtheIce Are edit rollbacks checked in the same way that serial downvoting is? I guess that should take care of it?
@FishBelowtheIce That wouldn't work. He would have to actually roll back the edit that HelpFulGuy made, not just edit on top of whatever HelpFulGuy did. Folks abusing this would quickly come under scrutiny by moderators. There's a potential to abuse almost any feature we have, even voting, but most users won't do that, and it's easy to nab the few that do.
Isn't this essentially a way to vote on edits?
@Sklivvz "change a question so it's a strawman or trivial, then answer it" --- then get your edit rolled back and your answer downvoted. Answering for sake of badges is a rare behavior, anywhere.
I want to thank everyone so far that has provided feedback, you folks are just amazing. So far, I'm pretty convinced that we'd have to drop the bronze requirements down a bit, those need to be a bit easier to earn. I need to think about concerns regarding folks gaming to get the badge (and the annoyance that can create), as well as the time window being a bit too narrow. Thank you, everyone for making this as productive as its been today - and please keep at it for any ideas or concerns that have not come up yet. I think most like the idea, I'm going to take another look at the mechanics.
One thing I like about the time window is that, trilogy sites aside, the question will already be on the front page when you do the second action (answer or edit). One thing that can be frustrating is seeing trivial-ish edits that bump questions, where the editor benefits from the exposure (has an answer, in this case). So let's not encourage people to go through all their old answers and finding something to edit in those questions. 12-24 hours should be fine for this. But it shouldn't be forever.
@MonicaCellio Very good point; over at Math I advocated relatively small edits (focusing on titles) to be done along with answering. Frustrating to see multiple 2K+ users answering a question with a really bad title ("Prove or Disprove") and not editing it... By the way, people keep mentioning "trilogy" as if these are still the biggest sites... Math currently gets more questions per day than Super User, Server Fault, and Ask Ubuntu combined.
@CareBear "trilogy", for suitably large values of "3", then. :-) (And wow, I didn't realize Math was that big. Congrats.)
The requirement that the edit be within any specific amount of time really needs to be thought through a bit. While there are tags that get large numbers of questions per day, there are also tags where the traffic is listed in questions per month. All of the lower traffic tags also have all the same answer-respose-edit-etc loop issues as have been mentioned. However, the timing on the loop is significantly expanded compared to tags which see large numbers of questions per day. This should be accounted for in some way, or a compromise amount of time chosen.
The answer by @Shog9 clarifies the purpose of the 12 hour rule, thank you. The question says "You edit the question 12 hours before or after answering it. This allows you to edit now, answer later - or answer now and edit later, when you have the time." which does not explain that it is referring to edits made within a 12 hour period. It is trying to say that edits made outside that window would not count to the badge. Assuming my understanding is correct then could someone clarify the text in the question?
+1, I'm not sure about the mechanism, but yes we need to encourage answerers taking ownership of the questions they answer. If I find a clunky question and can answer it well, I usually definitely maintain the title and do a bit of work on the question as well.
@Mysticial Just saw your comment about requiring the post go from negative to positive. We talked about that. One of our hopes is that this can help prevent questions that only needed a little editing from ever going negative in the first place, which not only helps quality, but also lends to a better experience for (mostly) new-ish users.
I think we should be doing this anyway. Badges? whatever...
I edit lots of questions. I also answer lots of questions, and many of them are questions that I edited first. But almost all of my edits are just cosmetic: poor use of markdown, badly indented code, tables that don't line up, image links instead of inline images, etc. Do I really merit a special new badge because I made the question readable so I could answer it?
@Barmar Yes, because we need more users to recognize what you are doing, and (hopefully) do it themselves.
So the quality of questions falls, and instead of (or besides) thinking of ways to prevent bad questions from being asked, you thought that for new shiny badges users will try to improve their quality as much as possible. Brilliant.
@Shog can you please post answer with the final implementation details and ask Tim to accept so it's on top? Currently it's not clear what exactly was done. (from what I see on SO, the requirements are also different from what is proposed here)
Fine, @Sha: http://meta.stackexchange.com/questions/239898/new-three-tiered-badge-idea-explainer-refiner-illuminator/240302#240302
Whoa, 500? Now that's what I call a gold badge!
@juergend how do you find the usage statistics (i.e. 7000 questions per day) for Stack Overflow? I've been looking around, but all I can find is the total number of questions and answers.
@ahorn http://stackexchange.com/sites#questionsperday
@ahorn: https://stackexchange.com/sites?view=list#traffic
@TimPost, would a gold badge for unanswered questions also qualify for [se-quality-project]?
The text in the badge makes it sound like you have to be Dr. Manhattan to get it: "Edit and answer 500 questions (both actions within 12 hours, answer score > 0)."
@MerlynMorgan-Graham If "both" could refer to 500 objects... but it can't. "Both" means two things, which here can be only: editing and answering a question.
@normalHuman I was confused by this for a bit, then realized my error. I don't think my error is unique or surprising. My interpretation was based off of: I consider editing 500 questions a separate "action" from answering 500 questions. Yes, you might consider "both" a significant enough disambiguation, but I didn't for my "fast" reading, and really I still don't. Something less ambiguous (though maybe flows worse?): "Edit a question you answered, or vice-versa, within 12 hours. Repeat 500 times. Answer score must be > 0."
@NormalHuman Or with less change to style, "Edit and answer 500 questions (both actions on the same question within 12 hours, answer score > 0)."
See my answer below, @Masked
It's an interesting suggestion, but as per the comments, what this would do is incentivise answerers to always edit every question they answered, with some edit, however minor. And that would encourage more bad behaviour than good behaviour.
So it's a nice idea, but doesn't quite achieve what it sets out to. Perhaps delving a bit more into what is trying to be achieved, will help.
I think the observed phenomenon that has prompted the question is this:
A badly-phrased question can be salvaged by a well-phrased answer.
AIUI, the suggestion is to reward the rephrasing of the question, so that it does justice to the answer.
Symptoms of positive behaviour:
a question with an answer is closed, edited, then reopened.
that question goes from being a bad question to a good question; to put that in a way that can be tested for automatically, it would go from having a net negative score to a net positive score.
Should it make any difference whether the edit was done by the answerer or not? I don't think so. Isn't it the quality of the edit that counts?
So, there are a couple of criteria there, that don't appear in the suggestion in the question - the edit that leads to a reopening, and/or leads to the question's net score going from negative to positive.
But what if there are several editors active between closure and reopening, and/or between negative and positive net scores? Who among the editors gets the credit towards the badges? If I see a really good edit that turns a bad question into a good one, am I being incentivised to do an irrelevant minor edit on that question in order to share the credit for turning the question around? (if you're tied to the concept of these badges only applying to people who'd answered the question, then imagine this happening on a question where more than one person had answered it, and the question-editors, substantive and trivial, are among the answerers).
Can the relative substantiveness of edits be established automatically (not easily, I'd wager - it just wouldn't be worth the programming effort). Could there be community voting on the usefulness of individual edits in such cases (again this sounds like more hassle than it's worth)?
I don't think it would encourage junk edits. Remember, the edit has to be accepted, or at least built upon in order for it to count - and the question has to remain open. This means, if you're editing something that really needs it, you need to be thorough. Sure, there's going to be folks that get caught up in the mechanics rather than intent, and annoy people with a bunch of minor edits but I'm sure that's going to be a very small minority, and mods can have a side bar with users doing this as needed. A lot of questions could (at the least) use a better title, I think we're okay.
@TimPost: Even if you are right, a lot of those edits won’t turn those questions from closeworthy to keepworthy. I would probably earn the silver version of that badge on German Language, but I cannot remember a single edit that came close to having that impact. It’s not that those edits were useless, but we already have badges for that – so there is no need to tie it to answering that question.
Can the system tell if the edit is done by a reviewer?
@TimPost I'm absolutely with the answer and Wrzlprmft here. I do millions of edits (but I'm more from the school of "there's rarely a too-minor edit at all" anyway), and it is rather rare that I would say they significantly improved the question from closable to reopenable. This isn't done for malicious reasons or to game the system, but I still don't think I should get a badge for that (other than the existing edit-based ones) when deciding to answer them, too. There has to be some criterion, like an edit done after closing and before reopening (or after close-voting and before retracting).
Thinking about it, that last idea edit after closing and before reopening (without the answer necessity) might make for a great badge proposal on its own. This would be much more strict and rare, but it's something that might deserve more motivation, too. Maybe I'll ask that as a separate question, since it is quite different from the approach presented here.
Folks, don't forget - not all questions in need of a little love are closeable questions. This also helps questions that are okay, but not .. well .. easy for folks to find in the future. While this can help questions that could be great get even better, it will also help questions that are pretty much great get seen by people likely to benefit from reading them.
@TimPost Sure, but in this case it somehow loses the whole "improved as far as to be answerable" aspect and the connection to gaining an answer. It would in this case simply be one of the generally improving edits for which there already exist other badges.
@TimPost Sometimes people have formatting/style preferences which they'll try to apply to every post they edit. This would encourage them to apply them to posts they wouldn't otherwise edit, resulting in trivial edits. These kinds of things aren't usually controversial enough to warrant rollbacks (or debates or edit wars) but they're also not necessarily improvements, so I'm not sure we'd want to encourage them. And I'm not sure it's a small thing - I've seen cases where even without this kind of encouragement, it was annoying other users on the site.
Uh, yeah - that's exactly what we're trying to encourage. If you're taking the time to answer a question without even considering an edit, you're potentially wasting your time; if you put in a useless edit to the question, it's still mostly your effort that will go to waste if no one else finds your answer. We're hoping the badges wake a few folks up to this - if some folks are determined to get a badge for trivial edits while letting their comparatively major efforts answering go to waste, it's their loss. That said, note the score requirement for the answer...
@Shog9 I'm not talking about encouraging people to fix teeny mistakes they would otherwise have left in place - I agree, that's great. I'm talking about encouraging people to make edits that don't actually improve the post at all, but also don't harm it (so they won't get rolled back).
See my answer @Jefromi.
Is there any way to prove the first paragraph?
We now accept every minor edit that improves a question or answer even slightly, so yes, this will be easily abused.
I started writing the SQL to determine what posts would be eligible for this badge; it's horrible:
select p.id, p.owneruserid
from posts p
join posts q
    on p.parentid = q.id
    and p.owneruserid <> q.owneruserid
join posthistory ph
    on p.parentid = ph.postid
    and p.owneruserid = ph.userid
    and p.creationdate between dateadd(hour, -12, ph.creationdate)
                           and dateadd(hour, 12, ph.creationdate)
join votes v
    on p.id = v.postid
    and p.creationdate between dateadd(hour, -12, v.creationdate)
                           and dateadd(hour, 12, v.creationdate)
where p.posttypeid = 2                  -- answers
    and ph.posthistorytypeid in (4, 5)  -- title/body edits
    and q.posttypeid = 1                -- questions
    and q.closeddate is null
    and v.votetypeid in (2, 3)          -- up/down votes
    and not exists ( select 1
                     from votes dv      -- inner alias: don't shadow the outer v
                     where dv.postid = p.id
                       and dv.votetypeid = 6
                       and dv.creationdate <= dateadd(hour, 12, p.creationdate)
                   )
group by p.id, p.owneruserid
having sum(case when v.votetypeid = 2 then 1 else -1 end) >= 1
This probably means that the conditions are overly complex... as currently described the following badges would be granted:
badges count
------ ----
bronze 1764
silver 280
gold 6
This is pretty low, but I guess the badge is to incentivize behaviour which might not be happening much currently; low isn't necessarily a bad thing.
However, I also think this disincentivizes positive behaviour. If the question was closed when the badge script was run but was subsequently reopened, for instance, then why should the editor/answerer who may have got the post into this state be penalised? Surely this is an excellent outcome for both the questioner and future viewers?
In short, I think that the time restrictions actively work against this proposal; as others have said. Removing these time restrictions changes the number of awarded badges to the following:
badges count
------ ----
bronze 3500
silver 830
gold 24
I'd also argue that changing the score to 2 or greater would be more beneficial than having time restrictions on the order of hours. SE's currently trying to improve the visibility of potentially high quality questions, which is going to mean that the lower quality questions are theoretically going to disappear even faster. If one gets a good answer and becomes higher quality due to the efforts of the answerer then it may be some time before the answer gets the upvotes necessary. It makes little sense to disincentivize people from improving less visible questions.
Your SQL is perfectly wonderful in my opinion. ;-) The results of the counts present a strong argument for removing the time restrictions. But doesn't removing the time restrictions also open up the possibility of going back and making trivial edits to all the questions you've ever answered to get a silver and gold badge? That would be pretty irritating.
@JonEricson The whole idea that ridiculous edits can bring yet another series of badges to people is disturbing. No offence towards Tim, but this simply isn't a good design.
I agree Jon; I make the point but not necessarily as well as I could have. I think maybe keep the time restrictions on edits and remove them elsewhere. If someone gets an upvote a week later or a question is opened after 13/25/whatever hours then everyone has a satisfactory outcome, the OP the answerer and the community. This is behaviour we should be incentivising and if the chance of it happening is lessened by other (good) initiatives underway then there's a risk that this incentivises little. Even the script run time will cause issues/complaints for little reason.
Perhaps I'm missing or misunderstanding something, but I'm not sure why you have a date restriction in your query based on the date of the votes. I read it to say that only the date of the edit had to be in the 12 hour window.
That's how I read the question, combined with educated guesses about the scripts that award the badges, @James; implementation details may be different.
Weird. When I try to load your first query, the tab freezes in Chrome.
@tohecz: The time restriction greatly reduces the odds that people will make trivial edits to question for the sake of the badge. Furthermore, edits within a narrow time window of the question being answered are not inherently disruptive. Edits on years-old questions that have already been answered might be disruptive. We are literally trying to encourage people to improve questions that they are answering, so this badge is purposely focused on those two actions occurring in short succession. Note: truly ridiculous edits risk getting the question closed or deleted which don't count.
@JonEricson Can you weigh in on whether or not there is a restriction on when the answer receives votes? If the votes can be cast at any time then the results of the two queries move much closer together, considering only the edit is restricted at that point.
@benisuǝqbackwards your query breaks my browser :(
@JamesMontagne: Actually, I get a slightly different result when I write a query from scratch. I don't think we need to join the Votes table since the criteria is actually total answer Score (net +/- votes) which is denormalized in Posts. My version does not account for rollbacks, however, since I don't immediately know how to do that.
If you answered a question without fixing it and it gets closed, then going back and fixing it later is... Good. But I don't want to reward that; you had a chance to fix it before it got closed, and if you'd done that you'd have saved a whole heap of people a lot of time and effort... So that's the behavior we're looking to reward here.
@benisuǝqbackwards When you have v.votetypeid = 6, you haven't aliased the inner votes table to v so you are doing this filter on the original votes table which is already restricted to only values of 2 or 3. So I think this not exists is always true.
Yup, thanks @James. But it doesn't seem to make any difference, which is very strange...
@benisuǝqbackwards postid = p.id is p the answer? Maybe this should be q as the answer can't be closed.
Yup, unsurprisingly it goes down a tiny amount...
For comparison, on cooking there are just 7 bronze badges, with counts ranging from 12 to 29. I suspect this is partially because we're a much smaller site but also partially because there are a lot of willing editors (who don't mind editing things they won't answer) so there's not always as much need for fixing up questions as you answer.
You mean something like this @Shog? Makes sense, but it doesn't deal with the situation where someone edited a question, got it reopened and then answered it well... or is that too confusing/out of scope for this?
If you haven't answered it until you've edited it, that's a different scenario @ben - one this badge also rewards.
Here's a version of the query that shows the distribution of how many users would get a badge per chosen threshold
A few quick notes, since there's some good discussion buried in comments here:
The primary goal here is to reward folks for doing something that benefits themselves as well as others, but which isn't immediately obvious. Think of any question you answer as being effectively the introduction to an instructional blog post you're writing: would you spend time writing a great explanation only to hide it behind a broken English introduction and a nondescript title? Even in cases where the question wouldn't be closed, by failing to edit you're potentially making it harder for your work to be found - and that's a shame.
I disagree that tag-only edits should count toward this. Not that tag edits aren't useful - but if you're only touching the tags, you're missing a huge opportunity. Also, our existing editing badges (Editor, Strunk & White, Copy Editor, Excavator and Archaeologist) don't count tag edits, so we should be consistent there.
You do actually have to write a useful answer. Hence the score requirement on the answer - this is probably the biggest potential for abuse here, and it's entirely possible for the community to mitigate this by downvoting useless answers.
I don't think additional quality controls for the edits are necessary here. In fact, I don't think we even need to take rollbacks into account - if your edit was approved / applied, and the question it was applied to stays open (and visible - deleted questions shouldn't count either!), then it counts. Anyone abusing this on a regular basis will be exposing both the question and their answers to a fair bit of extra scrutiny; that's probably not something you want to do frivolously. At the end of the day, it's your answer, your work that's on the line here: if you're not making the most of your edits, you're selling yourself short. Checking for rollbacks is expensive, so if we must do so we should limit the check to rollbacks within a few hours of the edit - but if at all possible, that criteria should be discarded entirely.
The 12-hour window exists to both discourage abuse and encourage good behavior - fixing a question a week later is never bad if you actually fix it, but fixing it while the topic is fresh in your mind is what we're hoping to encourage here. Strictly-speaking, we could count any edit up to 12 hours after the answer is posted and still accomplish this, but "within 12 hours" seems easier to explain.
By the same token, we do not want to encourage folks to answer and then wait around to see if their answer gets traction before editing - at the point you've posted an answer, you've already invested a fair bit of work into the post; you're not gonna know if it was worthwhile unless people see it... And the best way to make sure the right folks see your answer is to make the question look good.
I tend to agree that 10 is too high for a bronze badge - it should be 1, to encourage "just in time learning" - the first time you edit a question you've answered, you'll be informed that this is behavior we explicitly encourage! Silver and gold badge levels should be a lot harder.
And to address some of the comments on this answer now... The purpose of these badges is not to encourage higher-quality edits; rather, it is to encourage some of the people best able to make high-quality edits - those who actually purport to understand the questions they're answering - to at least consider doing so on a regular basis. If you want a badge that rewards heroic editing, that's an awesome idea - but the criteria are gonna look very different, so post a separate proposal.
It would look terribly unfair to count edits removed during the grace period, as well as edits after which the question score decreases. You'd better either discard these, or add an explanation of why discarding would do more harm than good (something similar to the explanation given for edits that are rolled back)...
...Answer score 1 sucks; a single upvote is almost guaranteed as gratitude from the asker. Even score 2 is shaky: a single random upvote from someone (coupled with one from the asker) doesn't look like a sufficiently reliable indication of answer quality. Score 3 sounds about right; after all, there was a reason why it was decided to guarantee keeping rep from deleted posts. As for giving bronze for a single action to encourage "just in time learning", it makes so much sense! Thanks for catching this.
@gnat Answers with +1 count toward tag badges, which now have some real consequences, unlike the badges proposed here. And edits that are accompanied by no answer at all count toward a gold badge [Copy Editor] already. I don't see why this particular badge should have such a stringent upvote requirement on answers. Waiting for 3 upvotes would disconnect the award from the action, making its effect weaker. If I understand correctly, the idea of the proposal is to get more users into the habit of editing questions as they answer them, not to identify the greatest contributors.
@CareBear raising the requirement to score 3 should of course be accompanied by lowering the numbers, like 10 (20) for the silver badge and 50 (100) for gold (note how tag and Copy Editor badges require 1000 upvotes / edits). But even then, you make a good point; my original position is too rigid. It would better be spelled as "either raise the score requirement to 3, or provide a compelling explanation why doing so would cause more harm than good" (FWIW your comment seems to make a good starting point for building such an explanation).
On SO there are too many robo-reviewers, and we now accept any minor edit that improves a question regardless of everything else that could use improvement; quality control on edits should be required. It's too easy to just edit one little thing that will be approved (and it's easy to know what will be).
That's really a separate issue, @Jonathan. The numbers ben cooked up should tell you, the behavior that would be encouraged by these proposed badges is... Pretty rare currently. I know it's too much to expect everyone to start immediately making awesome edits to the crappy questions they're answering, but if nothing else it should be a little reminder to pull the beam from your own eye...
oh, missed this @gnat - but gratitude is hardly guaranteed (if only...) That said, if even just the asker likes your answer, you've at least made one person happy - given how incredibly rare these edits are thus far, we really just need the score high enough to prevent blatant abuse of the answering system, we don't need this to double as an answering badge.
@Shog9 that makes good sense, thanks for explaining. Overall, the more I ponder possible vulnerabilities in these badges, the more it looks good enough, even with the original action counts (including your correction) of 1-50-500. You will probably have to explain that the gold version will be very hard to attain on smaller sites, but IIRC something like this was already clarified for gold tag badges / dupehammer.
A note about tag-only edits; usually if I do a tag edit, there's almost always something substantially wrong with the question. Good question askers usually know which tags to use/not use (and how to read tag wikis!)
@Qix I think that might be site-dependent. Certainly on the TeX site there are lots of questions where the tags get tidied up but there is little else required in terms of editing.
Something that just came up for me (and then went back down after my answer was upvoted), am I correct in assuming this does not include zero-score accepted answers? I've earned rep from it but not score, technically, so my score is not > 0 as the requirements state, however it is a good answer if it helps the OP such that they accept the answer.
Acceptance is irrelevant to this, @TylerH.
Final requirements:
Edited n questions within 12 hours of posting an answer (that's 12 hours before or after answering), where:
The question was asked by someone other than the answerer
Neither the questions nor the answers are deleted
The questions are not closed
The answers have a score > 0
The question edits changed either body, titles, or both
If n >= 1, an Explainer badge (bronze) is awarded.
If n >= 50, a Refiner badge (silver) is awarded.
If n >= 500, an Illuminator badge (gold) is awarded.
Each badge can be awarded only once per person, per site.
If you're interested in the implementation, this SEDE query is roughly what's being run.
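To make the final criteria above concrete, here's a rough sketch of the logic against an in-memory SQLite stand-in for the public data-dump schema (table and column names follow the dump: Posts, PostHistory). This is a hedged approximation for illustration only, not the actual badge script linked above; the schema is reduced to just the columns the criteria need, and the sample data is hypothetical.

```python
import sqlite3

# In-memory stand-in for the SEDE schema, reduced to the relevant columns.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    posttypeid INTEGER,        -- 1 = question, 2 = answer
    parentid INTEGER,          -- for answers: the question id
    owneruserid INTEGER,
    score INTEGER,
    creationdate TEXT,
    closeddate TEXT,           -- NULL = open
    deletiondate TEXT          -- NULL = visible
);
CREATE TABLE posthistory (
    postid INTEGER,
    posthistorytypeid INTEGER, -- 4 = edit title, 5 = edit body
    userid INTEGER,
    creationdate TEXT
);
""")

# Sample data: user 42 answers an open question asked by user 7,
# then edits the question body two hours later.
db.executemany("INSERT INTO posts VALUES (?,?,?,?,?,?,?,?)", [
    (1, 1, None, 7,  3, '2014-01-01 10:00:00', None, None),
    (2, 2, 1,    42, 1, '2014-01-01 12:00:00', None, None),
])
db.execute("INSERT INTO posthistory VALUES (1, 5, 42, '2014-01-01 14:00:00')")

# Count questions each user both answered and edited per the criteria.
rows = db.execute("""
    SELECT a.owneruserid, COUNT(DISTINCT q.id) AS n
    FROM posts a
    JOIN posts q        ON q.id = a.parentid
    JOIN posthistory ph ON ph.postid = q.id
                       AND ph.userid = a.owneruserid
                       AND ph.posthistorytypeid IN (4, 5)
                       -- edit within 12 hours either side of the answer
                       AND ph.creationdate
                           BETWEEN datetime(a.creationdate, '-12 hours')
                               AND datetime(a.creationdate, '+12 hours')
    WHERE a.posttypeid = 2
      AND a.score > 0                     -- answer score > 0
      AND q.posttypeid = 1
      AND q.owneruserid <> a.owneruserid  -- asked by someone else
      AND q.closeddate IS NULL            -- question not closed
      AND q.deletiondate IS NULL          -- neither post deleted
      AND a.deletiondate IS NULL
    GROUP BY a.owneruserid
""").fetchall()

def badge(n):
    """Map a qualifying count to the highest badge earned (1/50/500)."""
    if n >= 500: return "Illuminator"
    if n >= 50:  return "Refiner"
    if n >= 1:   return "Explainer"
    return None

print(rows)               # [(42, 1)]
print(badge(rows[0][1]))  # Explainer
```

Note the real implementation also has to handle suggested edits (where the recorded edit date is the approval date, as a later comment points out), which this sketch ignores.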
If the edit is updated (overruled) by someone else within the 12 hour window, is that post still eligible? I am not referring to rollbacks or rejections, but to someone coming along afterwards and editing the question again. Tks.
Doesn't matter what happens after you edit (as long as the post stays open and visible). If the edit was recorded, the edit counts.
What if the answer gets accepted but without any upvote? Wouldn't it be fair to be taken into consideration as well?
No, @fedorqui. If this happens to you a lot, consider editing the questions you're answering such that others with similar problems can find your answer...
I received Explainer on SO, but got no notification. Was that a deliberate change from previous "backdated" introduced badges?
Within the badge notification system, you're classified as a "veteran" user, @Mark - as a result, you don't get notified of a bunch of different bronze badges anymore, the assumption being that you're pretty familiar with how things work by now and don't need the constant inbox noise. Not a great assumption in cases where a new badge is introduced, but that's how it's set up.
Has that changed since the Yearling notifications on the new Meta Stackoverflow earlier this year?
No, @Mark - Yearling is one of the badges that don't get ignored, even for veteran users.
Shog, how do these badges coexist with Reversal? Recent discussion at MSO made me wonder if these are somewhat in conflict: "refinement badges encourage answerer to improve the question while Reversal makes answerer wish it sink down further..."
@Shog9, I reused several of the small conditions in my proposal for a gold version of Necromancer.
@Shog9 So you have to answer 500 questions within 12 hours (for Illuminator)? Holy crap, and people have that badge... That's amazing. Even providing for the +/-12, that's an answer every 3m, isn't it?
No, 500 for all time, @dave - but each question edited within 12 hours of answering.
@Shog9 Ah, OMG, I was freaking out trying to understand how it was even possible. lol Is there a progress query for this? I'm curious now.
It seems that if I edit and then answer, I'm not awarded the badge. Is this still valid?
If you are a less than 2K user on a site then you may be interested in reading Delay in approval leads to missing out on being awarded an "Explainer" badge as it will explain why some of your question edit and answer combinations do not qualify for the explainer/refiner/illuminator badge. Currently, for a less than 2K user, the edit date is taken as the edit approved date and not the suggested edit date.
This is quite similar to the Editor (1 edit) / Strunk & White (80 edits) / Copy Editor badges, in the sense that all curators will have the Copy Editor badge, all archivists will have the Strunk & White badge, and all keepers will have the Editor badge.
In any case, I would avoid mixing "answerer" and "editor" rewards. Why would an "editor" receive more reward if he/she is also one of the "answerers" of the question? This is not fair for those who did not answer.
As for the "good answer salvage bad question" aspect, there is the gold "reversal" badge for this (provided answer of +20 score to a question of -5 score). Adding silver and bronze badges to this group could help reward answerers to spend time even if the question is not well rated, but this counteracts the motivation of the answerer to edit the question.
+1. Re your last paragraph: how about changing the "reversal" badge to three badges like this? Bronze "salvage" (answer +5 to question -1), silver "reversal" (answer +10 to question -3), gold "lead to gold" (answer +20 to question -5, the current gold "reversal" badge).
Maybe 'transmutation' instead of 'lead to gold'?
I don't think we need more badges that encourage answering horrible questions without improving them. This happens often enough already.
Reversal is nice, but isn't it better if that +20 score answer ends up on a +5 score question thanks to editing?
@Jefromi No. You have many people almost never upvoting questions. 20/5 is quite typical, isn't it?
@Jefromi: A exceptionally good answer on a bad question doesn't necessarily make the question good.
@tohecz The point is if you catch it before it's swung to -5, and you can edit it so that it'll get upvoted instead of downvoted, reversal encourages you not to do so. This badge encourages you to do the right thing.
@Cerbrus Yes, not every question can be saved, but assuming it can be, badges might as well encourage you to.
@JonathanLeffler Transmutation? There's only one possible name for "lead to gold": the Alchemist badge...
I think the best thing about this idea is its simplicity. You answer a question, you make some kind of change (presumably positive) to the question, you get rewarded. This is a behavioural pattern we should be encouraging - answer and edit.
It's inevitable some of these edits will be trivial, but even trivial edits are normally improvements. It will be interesting to see what happens to the edit queue if this is implemented - it may explode. If so, consider offering the badge only to those already with the edit privilege.
Some other users have suggested we need a net-negative to net-positive score swing in order to attain the badge. I don't think this is appropriate, since a question with a negative score rarely obtains a positive score, even after an astoundingly good edit.
"even trivial edits are normally improvements" - Sure, of course, but badge-worthy apart from the edit badges?
@ChristianRau I think these badges will encourage a positive behaviour for all users of the site. Some people will try to game the system, but that's always the case. We have that already for the edit badges. But I'm convinced that the site will be a better place due to this kind of incentive.
It's not so much about gaming the system, but about the question of whether we need another pack of badges that basically reward and motivate exactly the same behaviour as the existing edit badges already do (well, apart from posting an answer in addition to editing, which to me seems rather unconnected to editing anyway)
The thing is, while junk edits won't fly, really trivial ones will. You can fix a one-character typo in a question and no one will ever roll it back. And you can even make a change to suit your personal style that doesn't really improve the question; as long as it doesn't make it worse, it won't get rolled back. And encouraging people to go out and edit a bunch of questions while simply not making them worse... meh.
I'm not sure how best to address this, but at the very least, what if the edits only counted toward the badge if they exceeded some number of changed characters, similar to suggested edits requiring 6 characters, except maybe a bit higher of a limit?
Yes, it would ignore some actually helpful tiny edits (little spell corrections and so on). But while those are good, they're not exactly the behavior the badge is trying to encourage. Surely any edit that really substantially helps the question is going to exceed whatever character limit you choose. And if someone has to work a little harder to get the badges, great.
See also Shokhet's answer with similar sentiments.
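The changed-character threshold floated above could be checked mechanically. Here's a rough sketch using Python's `difflib`; the 6-character figure for suggested edits comes from the paragraph above, while the higher threshold of 20 and the function names are purely hypothetical choices for illustration:

```python
import difflib

def changed_chars(before: str, after: str) -> int:
    """Count characters inserted or deleted between two revisions."""
    matcher = difflib.SequenceMatcher(None, before, after)
    changed = 0
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            # count both removed and added characters
            changed += (i2 - i1) + (j2 - j1)
    return changed

# Hypothetical threshold: higher than the 6 characters required for
# suggested edits, so one-character typo fixes wouldn't count.
THRESHOLD = 20

def counts_toward_badge(before: str, after: str) -> bool:
    return changed_chars(before, after) >= THRESHOLD

# A one-character typo fix falls under the threshold:
print(counts_toward_badge("How do I pars JSON?", "How do I parse JSON?"))  # False
```

As the discussion notes, this deliberately excludes some genuinely helpful tiny edits; that's the trade-off being debated.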
Some of my best edits have basically been adding single semicolons. See http://serverfault.com/posts/605661/revisions and http://superuser.com/posts/804496/revisions.
You know what's worse than getting credit for a badge just for fixing a typo? Taking time to answer a question and then leaving an embarrassing typo in the title. I've seen people do this, repeatedly, and... It's just sad. Would you publish a blog post with "javasript" in the title? Of course not - but folks will cheerfully publish their work on SO under such embarrassments.
@Shog9 Fair enough, encouraging people to fix typos is good. (My first thought is to give tiny edits fractional credit for the badge, but still, yes, it should be encouraged.) It's really the latter case, changing the style just because you can, that drives me crazy and I'd rather not see further rewarded. I suppose it's a problem with the existing editing badges too, though, and most people aren't like that?
Not entirely sure what you're referring to by "personal style", @Jefromi - as long as you're respectful of the original author, I don't see an issue with (for instance) re-writing text plagued by unusual grammar or spelling (not to pick on anyone specifically, but in some countries English education tends to result in a fairly archaic style that can come off as strange or even insulting to other readers).
@Shog9 For example, I have seen people who felt the need to replace straight quotes with curly quotes. I've seen someone leave off the last period in every paragraph in their own writing and when editing.
Yeah, there are very good technical reasons to discourage the use of "curly quotes" @Jefromi - but in all cases, it's probably worth leaving a comment for folks making such edits informing them of the problem; even if they don't get "credit" for the edits, if they keep making them it becomes a burden.
@Shog9 I think it's maybe a somewhat separate issue from this badge, but problems like that are actually really hard to sort out - people will often genuinely believe that their edits are an improvement, or even just that they're not harming anything, and continue making them even if the problem is pointed out, unless things get fairly confrontational. Honestly, if someone really wants to make edits like that, they're going to until a mod gets involved.
Sometimes, that becomes necessary @Jefromi - it's unfortunate, but hardly a new problem.
@Shog9 & Jefromi: I suppose a counter-balance measure would be to encourage people to rollback or reject purely stylistic or idiosyncratic edits that don't improve readability and SEO. Right now my impression is that most people don't want to start an edit war, and so let it go. E.g., a recent post of mine was edited to turn "P.S." into "Moreover," and remove double spaces between sentences in the Markdown (which has zero impact on the rendered HTML). It bugged me, but not enough to rollback.
I can certainly see trivial edits being a concern with this. One way to combat that would be the requirement:
The question gets an upvote after your edit (from somebody other than yourself.)
This rule would:
Validate that the question is a good question (and presumably better now that it has been edited)
Eliminate the need for the time period that was being used to prevent everybody from going back and editing every question they had ever answered. Now editing old questions won't help much, because old questions rarely get more upvotes.
So the problem with "just an upvote" is that it's abusable on popular questions. You edit and answer it. Because it's popular, both the question and the new answer will get votes regardless. We already have a problem with people posting late answers on popular questions in an attempt to "farm" them for rep. And those answers are very difficult to moderate because they get upvoted by passersby. (Non-moderators can only delete answers that have a negative score, and moderators do not delete low-quality answers that are, nevertheless, legitimate answers.)
@Mysticial A possible compromise: edit counts if the question had score <5 prior to edit and got an upvote (from someone else) after the edit. This would exclude edits on already-popular posts (that are probably already in good shape).
@CareBear I'm in favor of that. My suggestion in the comments (negative score) is essentially the same but with a different threshold (-1 vs. 4). At the risk of making it too complicated, maybe the threshold should be relative instead of fixed. (I once answered and then did a fairly important edit on a +8 question which went to +400 by the end of the month. But that's still only one instance.)
Edits bump the question and make new views likely.
If the end goal is to encourage edits that improve poor questions with good answers, to make those answers easier to find, why not explicitly make that the measure?
E.g., The badge would be awarded if you
edit a negative-scored question,
which has or later receives an answer that scores X or better, and
after your edit the question receives Y number of upvotes, or receives a popular/notable/famous question badge (since you want to encourage edits for SEO purposes).
X, Y, and the level of question popularity badge would change according to whether this is a bronze/silver/gold badge. No time limits, so you don't penalize the less visible tags (or questions which are less visible because they were so badly downvoted).
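A sketch of how that alternative criterion might be checked. The X and Y parameters are the placeholders named in the list above; the threshold values, data structure, and function names here are purely hypothetical choices for illustration (and the question-popularity-badge condition is omitted for simplicity):

```python
from dataclasses import dataclass

# Placeholder thresholds per badge level; X and Y are the parameters
# named in the list above, with values chosen purely for illustration.
LEVELS = {
    "bronze": {"answer_score_x": 2,  "upvotes_after_edit_y": 2},
    "silver": {"answer_score_x": 5,  "upvotes_after_edit_y": 5},
    "gold":   {"answer_score_x": 10, "upvotes_after_edit_y": 10},
}

@dataclass
class EditedQuestion:
    score_before_edit: int    # must be negative to qualify at all
    upvotes_after_edit: int   # the Y measure
    best_answer_score: int    # the X measure

def earned_levels(q: EditedQuestion):
    """Return which badge levels this edit would qualify for."""
    if q.score_before_edit >= 0:  # only negative-scored questions count
        return []
    return [level for level, t in LEVELS.items()
            if q.best_answer_score >= t["answer_score_x"]
            and q.upvotes_after_edit >= t["upvotes_after_edit_y"]]

q = EditedQuestion(score_before_edit=-3, upvotes_after_edit=6,
                   best_answer_score=7)
print(earned_levels(q))  # ['bronze', 'silver']
```

The appeal of this shape is that every condition is directly observable in the data, so no subjective "improvement" judgment is needed.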
For rewarding good answers which have a similar impact, @radouxju's proposal for creating bronze and silver levels of the Reversal badge is solid.
Mixing the answer and edit incentives into one badge somewhat confounds the goals—as others have said—encouraging minor edits, or worse, editing the question to match the answer regardless of the OP's intent. There is nothing in the current badge definitions that actually measures an improvement to the question.
Although I have on occasion edited a question I'm answering to use more standard terminology or more clearly reflect the real problem discovered in their code, that is rare. Generally for substantive improvements to questions, I rely on comments to ask for more information and also suggest that the OP edit for clarity. More often, if I am editing a question it is for minor things like code formatting, typos, and embedding external-linked images or code. And while I wouldn't mind getting a badge for the above, I don't think it matches the goals described in this post.
I'm not sure if this scheme would support Tim Post's recent edits/comments that he wants to encourage improvements from okay questions to great (not just bad to good). If you removed the requirement that the question was initially negatively-scored, then you would need some way of calculating whether the later upvotes/popularity were due to the edit or would have been earned anyway.
And since the calculation of "improved quality" is impossible, the whole original idea is built on water.
Sounds like a good idea but, as EnergyNumbers pointed out in his answer, these badges create the temptation to make a lot of "junk edits," just for the badges, which would "encourage more bad behaviour than good behaviour."
So, as a suggestion for his point that we need to figure out how a "question goes from being a bad question to a good question; to put that in a way that can be tested for automatically" -- what if these badges will only be awarded for questions that were flagged (by other people, to prevent abuse) as "Very Low Quality"?
I'm not dismissing that as a possibility. My problem with the VLQ flag is, while it's very helpful, it's often misused. We'd need to consider maybe 2 or more of them, raised after the edit. I need to think about it. I don't think abuse of this would be a major problem to begin with, people don't take kindly to crap edits - and mods routinely contact people (or more) for trying to game badges. Most people just try to follow the intent of the badge while earning it. Still, very seldom is defensive a bad thing. It comes down to a matter of complexity, mostly. I'll consider it.
Related answer: http://meta.stackexchange.com/a/239924/266359
On cooking, at least, there are tons of questions that really really need cleanup but aren't flagged as VLQ. I mean, look at the description - it says "This question is unlikely to be salvageable through editing". If people are using it responsibly, the kinds of things people can save via edit will never be flagged.
@Jefromi: See this question of mine on flagging questions as VLQ.
I have to say that, after reading the first three paragraphs of the suggestion, I expected something completely different (no, not a Monty Pythonesque pun) than what it turned out to be after finishing reading it. I agree with most comments and answers suggesting it might end up with loads of badge-chasing trivial edits. It also seems like a great idea was lost to the simplicity of writing SQL queries for it. But I'm not writing this answer to reiterate those points, or to bash (or is it shellshock now?) the proposal with vague arguments.
What I want to describe is what I believe would be a better way of rewarding substantial improvements of the questions themselves;
First of all, lose the requirement that these new badges can only be awarded to editors who also answered the questions they edited. Such badges should be awarded regardless of whether the editors could answer the questions they edited to substantially improve them, or indeed felt the questions required additional answers beyond those already received.
Award these badges based on the improved question's approval or visibility, i.e. evidence that the question was indeed improved: it either enjoys a substantial increase in site-wide visibility, or it clearly started gaining more upvotes / fewer downvotes per number of views by registered members.
Also award these badges for edits that saved the question, i.e. for questions that were already closed but were reopened due to (or after) the edit.
Badges should be awarded solely by the quality of edits, not their quantity. So lose awarding them by number of edits and instead establish qualitative metric. That means you could gain them multiple times, but each time your edit should improve the question to some measurable effect. Bronze badges for helpful edits of questions that aren't yours, silver for great edits that made a substantial difference, and gold badges for those exemplary edits that moved the question from meh to fantastic on your qualitative scale.
Whatever you decide qualitative metric should be, award these badges also retroactively. And yes, if the editor merely copy/pastes OP's comment to the question's body itself and that helped it along, that should do, too. It might even communicate to our members better what the purpose of comments under contributions is.
The increased visibility point might be a bit moot, since we already award badges (Announcer, Booster and Publicist as bronze, silver and gold, respectively) for sharing a link to a question that was visited by certain amount of unique IPs, but if we limited that to only track new views (even revisits) by registered members, it should create an altogether different incentive.
Now, writing SQL queries to support awarding of such badges might be a bit less trivial, and I'm not sure it's even possible, but that's for the dev team to answer. I just wanted to suggest one way (or a few of them actually, depending on your interpretation) of rewarding our editors for qualitatively and measurably better non-trivial edits substantially improving our questions. Thoughts?
Maybe you expected something like this related (but unfortunately far less noticed) proposal? This was at least what I expected from this proposal (and seems to match some of your ideas), but got enlightened later about its entirely different motivation by all the comments here.
I would like to tie some quality metric to the editor badges. But the Keeper series is about something different: it is very much by design that this badge rewards a dual action, answering and curating. I see a rift between answerers and curators, especially on SO. Rewarding answerers who also participate in moderation would be a good thing.
Why set the rates so high? 10 seems pretty high for bronze.
What about 5, 40, and 100?
The initial limits were based on some arbitrary numbers, part of the reason I'm tossing this out as a discussion instead of just announcing it after implementation. I'd like to wait for a few more folks to chime in on them before settling on a set. They're not carved in stone, just a starting point for discussion. The gold needs to be hard to earn, and part of the feedback I hope to get here is precisely about the perceived difficulty.
Good point. 100 might then be a little low for gold. Maybe 5, 50, 250 is better.
Good point on lowering the count for the bronze badge but I think the gold badge should be left at 500. So 5, 50 and 500 would be my choice.
Lowering the threshold for bronze might not be a bad idea, especially since it's likely to fall out of grasp a few times for some folks due to duplicates being closed. I originally wanted to count questions if they were simply duplicates, but ... there could be a problem with providing incentive for a different type of behavior many frown upon, which unfortunately makes this a tad harder to earn. I'm eager to see what more folks think of it before I go crazy on running numbers, it first needs to be perceived as reasonably likely to be attainable at the levels it's offered.
@TimPost Is there a general expectation for attainment of badges within the populace? I.e. 5% of users typically obtain gold, 25% silver and 60% bronze or something? If so, can't you set the thresholds based (at least partly) on whichever values trigger that spread of users receiving the badges on day one?
Could do it like Mortarboard/Epic/Legendary and go with 1/50/150.
It doesn't make sense to speak about the precise values unless we have the statistics of how many people reached what number of answered&edited.
After sleeping on this a bit, I agree that bronze is too high - bronze badges are traditionally a way of saying, "you did something important for the first time - thanks!" - worrying about folks gaming bronze badges isn't really necessary. I suggest we set bronze at 1, leave the others as proposed. @Tim
After reading this good answer by ben, I think the specifics definitely should be changed.
Here is what I think:
Answers that are eligible for this badge must have received at least 3 upvotes and 0 downvotes within the first 24 hours of being answered.
The edit to the question must happen between the 12 hour time period before the answer was posted and the 12 hour time period after the answer was posted.
Also, the badge structure/levels should change. With the currently proposed rules, it will have people editing each and every question that they answer. And a lot of questions receive more than 1 answer, so you'll have so many edits on the question that are probably not really needed, but only edited for the sake of trying to get the badge. To make it less likely for people to be abusing this, and better user experience for everyone, I suggest the badge structure should be as follows :
bronze badge: answered & edited 1 question
silver badge: answered & edited 10 questions
gold badge: answered & edited 50 questions.
The only issue I see with my suggestions, is the 3 upvotes, since some less popular tags may be at a greater disadvantage. But isn't that the case with many other badges, anyways? So instead of 3 upvotes, I'd be fine with 2, but I'd prefer 3.
Your upvote requirement is going to be nearly impossible to achieve on smaller sites.
@FishBelowtheIce well, it's either make it terribly easy on bigger sites like SO or make it terribly harder on smaller sites. If a good answer to a good question can't get 2 upvotes in the first 24 hours on a smaller site, then maybe it shouldn't be a site?
@FishBelowtheIce: In my experience, it’s quite the opposite: On smaller sites, there are some enthusiastic users which will see every question and also are quite likely to vote on it if it deserves it. On bigger sites, perfectly good questions can easily get no attention at all. On certain small sites, the tumbleweed badge has not been awarded once.
Not just smaller sites, the less busy tags on SO don't attract a lot of upvotes.
I've gone in and edited a closed question so it's more understandable and was re-opened a number of times. I think that having the requirement that the question was closed and re-opened after your edit (and perhaps flag) should be part of the badge. Answering should be optional.
So basically: have a closed question re-opened because of your edit. To be fair, all editors to the question between being closed and re-opened (which I doubt would be many in the majority of cases) would be eligible.
this answer would be more appropriate in another recent request: Reopener (Edit) Badge Idea
A gnat said -- this may be a good idea, but it's a rather different idea. The proposal here is meant to get users to improve questions as they answer them.
Thanks for heads up...I like that one because it's less likely to be abused.
Answer a question with less than 2 upvotes.
Answer receives at least 2 upvotes.
Edit question within 12 hours after answering.
For the next 12 hours, further edits to the question are only minor.
Question upvotes surpass 2.
(with "upvotes" the total score is meant)
Bronze: 1
Silver: 25
Gold: 250
Although the exact values for silver and gold should be ascertained statistically (so you don't have a thousand people getting a gold badge at once or - respectively - no one). I like the badge names very much btw.
+10 upvotes? Let's be realistic, most questions and answers will never get that many. (I would know: three of my answers have 10+ upvotes, out of 1045). It's still worthwhile to edit the question into shape.
@CareBear What would you propose? The adjective stellar was used, after all. (Btw. My guess would be that your situation springs from the fact that questions on Math SE get buried instantly. I, for instance, have 3 out of 76 - on a beta site.)
"stellar" was a bit of over-the-top rhetoric, in my opinion. The comment by Shog9 is closer to what the badges might actually achieve: more users looking at the question they just answered and fixing the misspelled "javasript" in the title. Nothing heroic, just something that should be a habit of answerers.
I see (and agree). I changed it a bit in that direction.
I have answered quite a few bad questions in the past, producing useful answers to questions whose requirements other users couldn't even understand (I know this from the comments). Yet, I cannot remember an instance where I have re-written the question. Personally I don't think it's right to change a user's question. Instead, we should encourage them to improve it. I have no problem with fixing formatting, spelling, tags, etc... I just don't agree with re-wording.
Anyway, that's just my opinion. The more important issue I foresee is as follows:
In this proposal, a 'good' answer is defined by at least one upvote, but on what grounds do users decide that an answer is worthy of an upvote when they don't even understand the question?
I believe that in instances where a question is unclear but a 'good' answer has been upvoted, it is down to a lot of voters thinking "oh... so that's what the question means".
Just because someone posts a seemingly good answer, who is to say that they actually did understand the question (as per the OP's requirements)? I think only the OP can truly be the one to say "yeah, that's what I meant". Until the OP confirms your presumptions, what gives you the right to reword a question?
So on that basis surely only accepted answers should be included? This would make it much harder to earn the badges (the limits could be lowered to account for this), but at the same time would make it much harder to cheat your way to the badges, and thus prevent a lot of non-useful question edits.
http://meta.stackoverflow.com/questions/258432/can-a-question-with-an-accepted-answer-be-closed-as-unanswerable
Define nested object with "optional" parameters leaf key values as undefined in typescript
This is a follow up question to this.
Here the object can have optional parameters, and undefinedAllLeadNodes will behave like below.
Input:
class Person {
name: string = 'name';
address: {street?: string, pincode?: string} = {};
}
const undefperson = undefinedAllLeadNodes(new Person);
console.log(undefperson);
Output:
Person: {
"name": undefined,
"address": undefined
}
As you can see as address has no properties, it should return as undefined.
How can I make sure the Undefine type (defined here) handles this?
Currently it accepts undefperson.address.street = '';
But I want it to throw an error saying "address may be undefined".
Update:
export function undefineAllLeafProperties<T extends object>(obj : T) {
const keys : Array<keyof T> = Object.keys(obj) as Array<keyof T>;
if(keys.length === 0)
return undefined!;//This makes sure address is set to undefined. Now how to identify this with typescript conditionals so that when accessing undefperson.address.street it should say address may be undefined.
keys.forEach(key => {
if (obj[key] && typeof obj[key] === "object" && !Array.isArray(obj[key])) {
obj[key] = undefineAllLeafProperties(<any>obj[key]) as T[keyof T];
} else if(obj[key] && Array.isArray(obj[key])) {
obj[key] = undefined!;
} else {
obj[key] = undefined!;
}
});
return obj;
}
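For reference, here is a plain-JavaScript sketch of the runtime behavior described above (the names are illustrative, not the asker's exact implementation): every leaf value becomes undefined, and an object with no own keys collapses to undefined itself.

```javascript
// Sketch of the intended runtime behavior: leaves become undefined,
// and an object with no own keys (e.g. address: {}) becomes undefined itself.
function undefineAllLeafProperties(obj) {
  const keys = Object.keys(obj);
  if (keys.length === 0) return undefined; // empty object collapses to undefined
  for (const key of keys) {
    const value = obj[key];
    if (value && typeof value === "object" && !Array.isArray(value)) {
      obj[key] = undefineAllLeafProperties(value); // recurse into nested objects
    } else {
      obj[key] = undefined; // leaves (and arrays) are blanked out
    }
  }
  return obj;
}

const person = { name: "name", address: {} };
const result = undefineAllLeafProperties(person);
console.log(result); // { name: undefined, address: undefined }
```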
please share undefinedAllLeadNodes
First we'll need a couple of helper types:
type HasNoKeys<T extends object> = keyof T extends never ? 1 : 0
type RequiredOnly<T> = {
[K in keyof T as T[K] extends Required<T>[K] ? K : never]: T[K]
}
The first one checks whether the passed T object has no keys. The second one is a bit more complex: we're using the key remapping in mapped types feature to remove optional fields.
And finally, combine those types to produce undefined only when the T object has no required fields (or no fields at all, because an object having no fields has no required fields as well):
type UndefinedIfHasNoRequired<T> =
HasNoKeys<RequiredOnly<T>> extends 1 ? undefined : never
And the final type will look like:
type Undefine<T extends object> = {
[K in keyof T]: T[K] extends object
? Undefine<T[K]> | UndefinedIfHasNoRequired<T[K]>
: T[K] | undefined
}
playground link
Here we're adding | undefined to the type of the object field only if it has no required fields.
Though now you'll have to assure typescript that your inner properties are not undefined when trying to assign values to their fields:
class OptionalPerson {
name: string = 'name';
address: {street?: string, pincode?: string} = {street: 'street', pincode: '44555'};
}
const undefOptionalPerson = undefineAllLeafProperties(new OptionalPerson())
undefOptionalPerson.address.street = '' // error
undefOptionalPerson.address!.street = ''
// or
if (undefOptionalPerson.address) {
undefOptionalPerson.address.street = ''
}
In the first case we're using the non-null assertion operator to make TypeScript believe our object's address field is not undefined. Keep in mind, though, that if it is in fact still undefined, you'll get a runtime error here.
In the second case we're using legitimate type narrowing to actually check whether the field has a truthy value. Since its type is object | undefined, this check discards the | undefined part.
I see a problem. Here address should only be undefined if it has optional/no parameters only.
aleksxor = GOD SENT!
I will go through it in detail but a quick check seems to work for my case! Thanks a lot!
@aleksxor this https://stackoverflow.com/questions/68205975/how-can-i-have-one-explicit-and-one-inferred-type-parameter/68206980#68206980 question might be interesting for you. Maybe you will find some better approach
What happens if this (UndefinedIfHasNoRequired<T[K]>) resolves to never in Undefine<T[K]> | UndefinedIfHasNoRequired<T[K]> ?
Ex: Undefine<T[K]> | never =====> Undefine<T[K]> ?
You may think of | as + (plus) on the type level. And never is kind of like 0 (zero) on the type level. So any type plus never is the same type: string | never ~= string, for example.
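A quick compile-and-run illustration of that point (a sketch; the alias names A and B are made up for the example):

```typescript
// "never" acts like 0 in a union: string | never collapses to string.
type A = string | never;   // same as string
type B = string;

const check: A = "hello";
const sameType: B = check; // assignable because A and B are the same type

console.log(sameType);
```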
Sweet! The highlight of the solution is the usage of HasNoKeys to achieve an emptiness check. I was banging my head trying to compare T[K] with {[index: string]: never} (which effectively means no keys).
Well, I believe the part keyof T extends never is pretty obvious. Only a T object with no keys is an empty object; otherwise keyof T will be a union of its keys, and something cannot extend nothing (never).
type Undefine<T extends object> = {
[K in keyof T]: T[K] extends object
? HasNoKeys<T[K]> extends 1 ? undefined : Undefine<T[K]>
: T[K] | undefined
}
Any reason why this isn't working? I removed UndefinedIfHasNoRequired and inlined it to have another conditional.
First, you've omitted RequiredOnly check and second they're not quite equal. Since it's pretty much to explain in a comment I made a playground https://tsplay.dev/Nal6ow
One last question. How can i exclude array types here and only consider maps alone?
Include a check for array type. Something like T extends any[]
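A possible sketch of that suggestion, simplified from the accepted answer's Undefine type (the UndefinedIfHasNoRequired part is omitted here for brevity): arrays are treated as leaves instead of being recursed into.

```typescript
// Hedged sketch of the array exclusion from the comment above:
// arrays are unioned with undefined as leaves, not recursed into.
type Undefine<T extends object> = {
  [K in keyof T]: T[K] extends any[]
    ? T[K] | undefined              // arrays: leaf, not recursed
    : T[K] extends object
      ? Undefine<T[K]>
      : T[K] | undefined
};

// Arrays may now be assigned undefined directly, while plain objects
// are still recursed into.
const sample: Undefine<{ tags: string[]; info: { note: string } }> = {
  tags: undefined,
  info: { note: undefined },
};

console.log(sample.tags); // undefined
```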
You can update your class Person to the following and it should work.
class Person {
name: string = 'name';
address: { street?: string, pincode?: string } | undefined = { street: 'street', pincode: '44555' };
}
Find the Playground Link
Update
You can do it differently like below as well
type Undefine<T extends object> = Id<{
[K in keyof T]: T[K] extends object ? Undefine<T[K]> | undefined : T[K] | undefined
}>
Find the Playground Link
Note:
type Person = {
name?: string
address?: {
street?: string
pincode?: string
}
}
is equivalent to
type Person = {
name: string | undefined
address: {
street: string | undefined
pincode: string | undefined
} | undefined
}
In TypeScript, ? denotes an optional property, which basically means it may be undefined.
Actually I don't want to declare undefined for address. I'm looking for a conditional type that evaluates to undefined if no required properties exist (here address has no required properties; all are optional).
When no properties exist and you define an address like address: {}, it will be considered an object, because an object can have no properties. TypeScript compiles your code to JavaScript, and in JavaScript a blank object is possible and is not undefined. So if undefined is a possible state, you need to declare it in the types.
How to identify address:{} with conditional types in typescript?
Hey, In the example please use street and pincode as optional parameters
Getting something more than "Error while parsing query. Please check the syntax" in SQL Query Activity
I am creating a Query Activity in the Automation Studio. The query gives the following message in red:
An error occurred while checking the query syntax. Errors: Error while
parsing query. Please check the syntax
One usually gets more information about this in the Activity tab of the automation. But what should I do when I cannot even save this query (the step before choosing the destination Data Extension)?
I have gone through the steps in Adam Spriggs' blog, but none of them seem to apply here.
I just need to know more about the query. It seems to work fine on the SQL Server database where it originally comes from.
UPDATE: The query code
SELECT DISTINCT a.PassengerId,
a.BookingKey,
a.FlightId,
a.Email,
a.FirstDepartureDateTime,
a.Origin,
a.Destination,
a.FirstName,
a.LastName,
a.Gender,
a.FrequentFlyerId,
a.HasAncillaries,
a.HasAncillaryBioFuel
FROM
(SELECT DISTINCT t8.BookingKey,
t8.Email,
t8.PassengerId,
t8.FirstDepartureDateTime,
t8.FirstName,
t8.LastName,
t8.FrequentFlyerId,
t9.refFirst_FlownTransactionID AS FlightId,
Gender,
HasAncillaries,
HasAncillaryBioFuel,
Destination,
Origin
FROM
(SELECT DISTINCT t1.BookingKey,
t3.Email,
t3.PassengerID,
t2.FirstDepartureDateTime,
t3.FirstName,
t3.LastName,
t3.FrequentFlyerId,
t3.Gender,
t2.HasAncillaries,
t2.HasAncillaryBioFuel,
t2.Destination,
t2.FirstDepartureStation AS Origin
FROM [Flights] AS t1
INNER JOIN [Bookings] AS t2 ON (t1.BookingKey = t2.BookingKey)
INNER JOIN [Customers] AS t3 ON (t1.PassengerID = t3.PassengerID)
WHERE 1 = 1
AND convert(date,t2.BookingDateTime) = convert(date, dateadd(DAY, -2, getdate()))
AND IsBooked = 1
AND IsPassenger = 1
AND t1.IsCancelled = 0
AND t1.IsCrew = 0
AND t1.RBD NOT IN ('I',
'Z',
'R')
AND t1.PassengerType NOT IN ('CHD')
AND t2.SalesChannel <> 'Travel Agency'
AND t3.IsBRAMember = 1
AND t3.CommunicationOptIn = 1
AND t3.FrequentFlyerId = t1.FrequentFlyerId
AND DATEDIFF(DAY, t2.BookingDateTime, t2.FirstDepartureDateTime) >= 3
AND t3.FirstBookingEverKey = t1.BookingKey ) AS t8
INNER JOIN
(SELECT DISTINCT BookingKey,
PassengerID ,
FIRST_VALUE(FlownTransactionID) OVER (PARTITION BY BookingKey,
PassengerID
ORDER BY BookingKey,
PassengerID,
ScheduledDepartureDateTime) AS refFirst_FlownTransactionID
FROM [Flights]) AS t9 ON (t8.BookingKey = t9.BookingKey
AND t8.PassengerID = t9.PassengerID)
UNION SELECT DISTINCT t8.BookingKey,
t8.Email,
t8.PassengerId,
t8.FirstDepartureDateTime,
NULL AS FirstName,
NULL AS LastName,
NULL AS FrequentFlyerId,
refFirst_FlownTransactionID AS FlightId,
Gender,
HasAncillaries,
HasAncillaryBioFuel,
Destination,
Origin
FROM
(SELECT DISTINCT BookingKey,
Email,
FIRST_VALUE(PassengerID) OVER (PARTITION BY BookingKey
ORDER BY BookingKey,
first3CharMatch DESC, CountTrips12M DESC, PassengerID DESC) AS PassengerId,
FirstDepartureDateTime,
NULL AS FirstName,
NULL AS LastName,
NULL AS FrequentFlyerId,
Gender,
HasAncillaries,
HasAncillaryBioFuel,
Destination,
Origin
FROM
(SELECT DISTINCT t1.BookingKey,
t2.BookingDateTime,
t2.email_booker,
t3.Email,
t2.FirstDepartureDateTime,
CASE
WHEN LEFT(replace(replace(replace(replace(lower(t1.PassengerFirstName), 'ø', 'o'), 'å', 'a'), 'ä', 'a'), 'ö', 'o'), 3) = LEFT(t2.email_booker, 3) THEN 1
ELSE 0
END AS first3CharMatch,
t3.CountTrips12M,
t3.IsBRAMember,
t1.PassengerID,
t1.PassengerFirstName,
t3.Gender,
t2.HasAncillaries,
t2.HasAncillaryBioFuel,
t2.Destination,
t2.FirstDepartureStation AS Origin
FROM [Flights] AS t1
INNER JOIN [Bookings] AS t2 ON (t1.BookingKey = t2.BookingKey)
INNER JOIN [Customers] AS t3 ON (t1.PassengerID = t3.PassengerID)
WHERE 1 = 1
AND convert(date, t2.BookingDateTime) = convert(date, dateadd(DAY, -2, getdate()))
AND t1.IsBooked = 1
AND t1.IsPassenger = 1
AND t1.IsCancelled = 0
AND t1.IsCrew = 0
AND t1.RBD NOT IN ('I',
'Z',
'R')
AND t1.PassengerType NOT IN ('CHD')
AND t2.SalesChannel <> 'Travel Agency'
AND t3.IsBRAMember = 0
AND t3.Email IS NOT NULL
AND t3.Email = t2.email_booker
AND DATEDIFF(DAY, t2.BookingDateTime, t2.FirstDepartureDateTime) >= 3
AND t3.FirstBookingEverKey = t1.BookingKey) AS t1) AS t8
INNER JOIN
(SELECT DISTINCT BookingKey,
PassengerID ,
FIRST_VALUE(FlownTransactionID) OVER (PARTITION BY BookingKey,
PassengerID
ORDER BY BookingKey,
PassengerID,
ScheduledDepartureDateTime) AS refFirst_FlownTransactionID
FROM [Flights]) AS t9 ON (t8.BookingKey = t9.BookingKey
AND t8.PassengerID = t9.PassengerID)) a
INNER JOIN [Flights] b ON (a.FlightId=b.FlownTransactionID and a.PassengerId = b.PassengerID)
Can you put the query here? It's hard to tell without looking at it..
@RachidMamai Sure, sorry! I added the query text.
Hey DisasterKid, incredible query here. Is this designed to run daily and target recently booked travelers (*with criteria) for an email? If so - I think this query could be simplified.
@CameronRobert lol. you're damn right. this is someone else's query to be honest.
@CameronRobert if simplifying the query greenlights in the marketing cloud, feel free to give me a shortcut.
@Disasterkid, I also work in travel :). Given your data is only coming from 3 tables (Bookings, Customers & Flights), I think you can simplify the query by starting with eligible Flights/Bookings (each day), and then JOIN in the other relevant info. Without seeing the schema & relationships of these 3 tables, I can only speculate how this would look. However, you can also try changing the CONVERT() functions to CAST(). EG: CAST(t2.BookingDateTime as datetime) = CAST(dateadd(DAY, -2, getdate()) as datetime)
@CameronRobert thanks, but I doubt the CONVERT is causing the issue because it was already there when things went well. I just have no clue why this query, which works fine in the local database, is not working here.
@Disasterkid, Salesforce doesn't allow all SQL functions in Query Activities; some very common SQL functions are not allowed in Marketing Cloud. I recommend stepping through your query to find what part has the syntax error. My bet is the FIRST_VALUE or CONVERT functions are to blame.
CAST and CONVERT are both supported - I would try running it without FIRST_VALUE to see if it validates
@CameronRobert I will check and come back. I think FIRST_VALUE was there before this occurred.
@zuzannamj I will check and come back. I think FIRST_VALUE was there before this occurred.
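One way to act on that suggestion is to validate progressively smaller pieces of the query in a scratch Query Activity: check the innermost SELECT first, then re-add the window function. The snippets below reuse the table and column names from the question and are only a bisection sketch, not a fix.

```sql
-- Step 1: validate the innermost SELECT on its own (no window function)
SELECT DISTINCT BookingKey, PassengerID
FROM [Flights]

-- Step 2: re-add FIRST_VALUE in a separate attempt; if this is the piece
-- that fails to save, the window function (or its OVER clause) is what
-- the parser rejects
SELECT DISTINCT BookingKey,
       PassengerID,
       FIRST_VALUE(FlownTransactionID) OVER (
         PARTITION BY BookingKey, PassengerID
         ORDER BY BookingKey, PassengerID, ScheduledDepartureDateTime
       ) AS refFirst_FlownTransactionID
FROM [Flights]
```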
Does this kitchen drain plumbing look correct?
I'm not a plumber by any means, but my kitchen renovation is already very over-budget so figured I'd attempt to connect my sink plumbing myself. I haven't glued this together, it's just dry fitted, but I wanted to see if everything looked right first or if any adjustments would be needed.
Before, there had been an S trap, which I know isn't up to code. I did a lot of research and learned that I needed an AAV to work with a P trap since the drain is in the floor and not the wall. From what I can tell, the only requirements were that it should be placed as high as possible (with a min. of 6") from the horizontal P trap pipe and I think I saw somewhere the pipe needed to be at least 4" from the P trap to the sanitary tee. Does this all look right?
Plus one for the pictures with the measuring tape. Will let the experts see everything.
It looks correct but I don't like the glue-in trap adapter. A slip joint for the trap arm will make it:
easier to fit (2 more degrees of freedom - in/out, and rotate to vertical)
easier to snake (one less 90 and entering from the front not the bottom)
easier to replace with a different disposal when the time comes (same 2 degrees of freedom plus easy to add extension or cut the arm)
Also, it looks like you're being fairly meticulous, so I'll add something I saw recently --- perfectionist plumbers have the writing on the pipes always face out (for easy inspection, which I doubt you'll be doing here), or always in (so it looks nicer, which doesn't matter under a sink ... or does it?)
The key is to make all the writing not just face the same direction but also read in the same direction, and that direction should be with the fall of the pipe.
The biggest thing here is to have the AAV higher than the flood rim of the sink, so that on a backup the sink floods the floor before crud floods the AAV. Looks like there might be enough room to push the AAV up between the sink and the back of the cabinet a little bit higher.
Ok thank you! I'll keep the writing in mind when I glue everything in. I see what you're saying about the slip joint. That's how I've seen them set up before in the past, but I must've bought the wrong kind of kit because it only came with a glue-on adapter. I'll go back out tomorrow and find a different one.
@FreshCodemonger I was able to move some of the faucet lines around to move the AAV a bit, so I think this should be good now that it's above the sink drain? https://imgur.com/a/b54flNS
@pdd I love it. Brilliant. Something to aspire to. If your coworkers don't defenestrate you.
@FreshCodemonger I have seen "flood level" advice a lot but I did some reading for this question and code seems to be 4 inches top of trap weir to bottom of AAV, and specifically says flood level doesn't matter because trapped air will be compressed and prevent water or debris from reaching the AAV. It seems like a good idea, certainly more useful than imagining the pipes are directional :) per pdd. But it's not required by code or (some) manufacturer instructions, and Oatey's instructions include illustration exactly as OP has done it.
@DylanL aav looks way better up high like that. My plumbers always put it in with the max height. There certainly are some codes that require it above the flood level and it makes sense that if you can do it you should. Good work ! https://terrylove.com/forums/index.php?threads/air-admittance-valve-above-the-flood-level-rim.14452/#
TVirtualInterface fails with an interface that contains "function of object" property
I have the interface:
TOnIntegerValue = function: integer of object;
ITestInterface = interface(IInvokable)
['{54288E63-E6F8-4439-8466-D3D966455B8C}']
function GetOnIntegerValue: TOnIntegerValue;
procedure SetOnIntegerValue(const Value: TOnIntegerValue);
property OnIntegerValue: TOnIntegerValue read GetOnIntegerValue
write SetOnIntegerValue;
end;
and in my tests i have:
.....
FTestInterface: ITestInterface;
.....
procedure Test_TestInterface.SetUp;
begin
FTestInterface := TVirtualInterface.Create(TypeInfo(ITestInterface)) as ITestInterface;
end;
.....
and get the error: "Range check error".
Any idea? Or does TVirtualInterface not support "function of object" and "procedure of object" types?
Thanks!!
It seems that TVirtualInterface works fine with method pointers, but doesn't like properties. Here's a quick sample to demonstrate:
{$APPTYPE CONSOLE}
uses
SysUtils, Rtti;
type
TIntegerFunc = function: integer of object;
IMyInterface = interface(IInvokable)
['{8ACA4ABC-90B1-44CA-B25B-34417859D911}']
function GetValue: TIntegerFunc;
// property Value: TIntegerFunc read GetValue; // fails with range error
end;
TMyClass = class
class function GetValue: Integer;
end;
class function TMyClass.GetValue: Integer;
begin
Result := 666;
end;
procedure Invoke(Method: TRttiMethod; const Args: TArray<TValue>; out Result: TValue);
begin
Writeln(Method.ToString);
Result := TValue.From<TIntegerFunc>(TMyClass.GetValue);
end;
var
Intf: IMyInterface;
begin
Intf := TVirtualInterface.Create(TypeInfo(IMyInterface), Invoke) as IMyInterface;
Writeln(Intf.GetValue()); // works fine
// Writeln(Intf.Value()); // fails with range error
Readln;
end.
This programs works as expected. However, uncommenting the property is enough to make it fail. It's clearly an RTTI bug. I see no ready way for anyone other than Embarcadero to fix it.
It seems that the combination of a property whose type is a method pointer is the problem. The workaround is to avoid such properties. I suggest that you submit a QC report. The code from this answer is just what you need.
Yes, I removed properties in my Interface and works fine. Thanks!
The problem is not the property itself but a property of an event (method pointer) type. There is some RTTI generated for it that causes problems.
@StefanGlienke That's what I mean by my final paragraph.
@DavidHeffernan You wrote "it seems" and I confirmed it. :)
As David already mentioned the problem is the compiler generating wrong RTTI for properties that return a method type.
So for the property
property OnIntegerValue: TOnIntegerValue;
the compiler generates RTTI for a method that would look like this:
function OnIntegerValue: Integer;
but it does not include the implicit Self parameter for this method. This is the reason why you get the range check error because while reading the RTTI to create a TRttiInterfaceType this line of code gets executed:
SetLength(FParameters, FTail^.ParamCount - 1);
This should never happen as all valid methods have the implicit Self parameter.
There is another problem with that wrong RTTI, as it messes up the virtual method indices because of the invalid methods it generates. If the method type has a parameter, you do not get the range check error but a wrong TRttiMethod instance, which causes all following methods to have a wrong virtual index, which in turn makes the virtual interface invocation fail.
Here is a unit I wrote that you can use to fix wrong RTTI.
unit InterfaceRttiPatch;
interface
uses
TypInfo;
procedure PatchInterfaceRtti(ATypeInfo: PTypeInfo);
implementation
uses
Windows;
function SkipShortString(P: Pointer): Pointer;
begin
Result := PByte(P) + PByte(P)^ + 1;
end;
function SkipAttributes(P: Pointer): Pointer;
begin
Result := PByte(P) + PWord(P)^;
end;
procedure PatchInterfaceRtti(ATypeInfo: PTypeInfo);
var
typeData: PTypeData;
table: PIntfMethodTable;
p: PByte;
entry: PIntfMethodEntry;
tail: PIntfMethodEntryTail;
methodIndex: Integer;
paramIndex: Integer;
next: PByte;
n: UINT_PTR;
count: Integer;
doPatch: Boolean;
function IsBrokenMethodEntry(entry: Pointer): Boolean;
var
p: PByte;
tail: PIntfMethodEntryTail;
begin
p := entry;
p := SkipShortString(p);
tail := PIntfMethodEntryTail(p);
// if ParamCount is 0 the compiler has generated
// wrong typeinfo for a property returning a method type
if tail.ParamCount = 0 then
Exit(True)
else
begin
Inc(p, SizeOf(TIntfMethodEntryTail));
Inc(p, SizeOf(TParamFlags));
// if Params[0].ParamName is not 'Self'
// and Params[0].Tail.ParamType is not the same typeinfo as the interface
// it is very likely that the compiler has generated
// wrong type info for a property returning a method type
if PShortString(p)^ <> 'Self' then
begin
p := SkipShortString(p); // ParamName
p := SkipShortString(p); // TypeName
if PIntfMethodParamTail(p).ParamType^ <> ATypeInfo then
Exit(True);
end;
end;
Result := False;
end;
begin
if ATypeInfo.Kind <> tkInterface then Exit;
typeData := GetTypeData(ATypeInfo);
table := SkipShortString(@typeData.IntfUnit);
if table.RttiCount = $FFFF then Exit;
next := nil;
for doPatch in [False, True] do
begin
p := PByte(table);
Inc(p, SizeOf(TIntfMethodTable));
for methodIndex := 0 to table.Count - 1 do
begin
entry := PIntfMethodEntry(p);
p := SkipShortString(p);
tail := PIntfMethodEntryTail(p);
Inc(p, SizeOf(TIntfMethodEntryTail));
for paramIndex := 0 to tail.ParamCount - 1 do
begin
Inc(p, SizeOf(TParamFlags)); // TIntfMethodParam.Flags
p := SkipShortString(p); // TIntfMethodParam.ParamName
p := SkipShortString(p); // TIntfMethodParam.TypeName
Inc(p, SizeOf(PPTypeInfo)); // TIntfMethodParamTail.ParamType
p := SkipAttributes(p); // TIntfMethodParamTail.AttrData
end;
if tail.Kind = 1 then // function
begin
p := SkipShortString(p); // TIntfMethodEntryTail.ResultTypeName
Inc(p, SizeOf(PPTypeInfo)); // TIntfMethodEntryTail.ResultType
end;
p := SkipAttributes(p); // TIntfMethodEntryTail.AttrData
if doPatch and IsBrokenMethodEntry(entry) then
begin
WriteProcessMemory(GetCurrentProcess, entry, p, next - p, n);
count := table.Count - 1;
p := @table.Count;
WriteProcessMemory(GetCurrentProcess, p, @count, SizeOf(Word), n);
count := table.RttiCount;
p := @table.RttiCount;
WriteProcessMemory(GetCurrentProcess, p, @count, SizeOf(Word), n);
p := PByte(entry);
end;
end;
p := SkipAttributes(p); // TIntfMethodTable.AttrData
next := p;
end;
end;
end.
JavaScript validation to force user to enter # as first character in password
I have two text boxes (UserName, Password) and one Submit button. I want JavaScript to force the user to enter # as the first character in the password textbox (alert: "password should start with #"). Any help is appreciated.
Enforcing # as the first character of all passwords arguably reduces their strength. Why would you do that?
Why not just have them enter any password they want and then prepend #? As Frederic says - if everyone has to do it then what's the point?
It's the client's requirement. It seems they want to group users by having each group start the password field with a different character, the group's designated first character.
This code will make sure that the first character in the field with ID passfield is a hash (#). I hope that your "password" field isn't requesting a real password, because forcing a password to start with a certain character makes no sense.
function validate(){
    var pass = document.getElementById("passfield");
    if(pass.value.charAt(0) != "#"){
        alert("Password should start with a #.");
        return; // invalid input - stop here
    }
    // No early return, so the input is valid - continue
    // ...
}
| common-pile/stackexchange_filtered |
Android onActivityResult Query
I have two Activities A,B
From Activity A, I open my gallery, and I want that when the picture is selected from the gallery it should go to Activity B and not back to Activity A.
Is this possible?
share_picture.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View arg0) {
// TODO Auto-generated method stub
Intent choosePic = new Intent(Intent.ACTION_PICK,
MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(choosePic, LOAD_IMAGE_GALLERY);
}
});
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
// TODO Auto-generated method stub
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == LOAD_IMAGE_GALLERY && resultCode == RESULT_OK
&& null != data) {
Uri selectedImage = data.getData();
String[] filePathColumn = { MediaStore.Images.Media.DATA };
Cursor cursor = getContentResolver().query(selectedImage,
filePathColumn, null, null, null);
cursor.moveToFirst();
int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
picturePath = cursor.getString(columnIndex);
//I want to call Activity B from here: after the picture is selected it should go to Activity B and not stay on A.
}
}
Thanks
Put in intent extra your filePathColumn.
Finish Activity A;
And call Activity B with intent;
Just write this code in onActivityResult after picturePath = cursor.getString(columnIndex);
// used to show HD images
BitmapFactory.Options bounds = new BitmapFactory.Options();
// downsample the bitmap (inSampleSize should be a power of 2: 2, 4, 8, etc.)
bounds.inSampleSize = 4;
// decode the bitmap from the selected file path with those options
Bitmap bmp = BitmapFactory.decodeFile(picturePath, bounds);
imageView1.setImageBitmap(bmp);
Now here write Intent code
Intent intent = new Intent(A.this, B.class);
startActivity(intent);
You just need to pass the intent from your onActivityResult() inside ActivtyA to ActivityB passing picturePath through intent
ActivityA.java
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
//Insert it once you got the picturePath through Content Resolver
picturePath = cursor.getString(columnIndex);
Intent forwardToB=new Intent(getApplicationContext(),ActivityB.class);
forwardToB.putExtra("PATH", picturePath);
startActivity(forwardToB);
}
ActivityB.java
Intent i=getIntent();
String pathToImage=i.getStringExtra("PATH");
OR
Bundle extras = this.getIntent().getExtras();
if (extras != null)
{
String value = extras.getString("PATH");
}
Now you can do whatever once pathToImage is inside your ActivityB
| common-pile/stackexchange_filtered |
How to remove door panel from Bosch dishwasher with buttons on top?
I have a Bosch dishwasher with buttons on top (SHP65T55UC) and am trying to remove the front panel from the door.
I've found several guides that all appear to say the same thing about Bosch dishwashers:
Open the door
Using a TORX bit, remove the 6 (or 8) screws on the side of the door (not the screws at the top, which hold the control panel in place).
Close the door completely
Pull the bottom of the panel away from the unit.
With the bottom pulled away, pull down on the panel to remove.
When I do that, the panel still seems to hold on somewhere. It's not attached to the top panel, but pulling on it deforms the top panel.
After nearly breaking the door thinking something was stuck, I finally noticed that the inside of the grip recess has some more screws.
I pulled three more screws from there and the door panel removed easily. I don't know if this model is unusual, but none of the guides I was using mentioned screws in that area.
| common-pile/stackexchange_filtered |
Getting Windows 10 Tablet to run a Wamp Server
I have a very basic Asus tablet (VivoTab) I just upgraded to Windows 10. I want to run an offline demo of my website on it, so my thought process is to install WAMP server and go from there. I've done it on my PC, but as you can imagine it's a little different with a tablet, and I can't seem to find much online information on the topic.
If there is a 'better' way to go about this, not using WAMP, or if someone has done this there help would be much appreciated!!
Preferably I don't want to jailbreak, but if its necessary I don't mind.
what the actual heck are you talking about? you want to run your tablet on your server? wtf does that even mean?
Why don't you go ahead with WAMP? WAMP for Windows works on Windows! So if it's available, then why don't you use it? WAMP wouldn't be affected by the display size or resolution, so what makes a tablet so different from a PC in this case?
It doesn't matter if it is on a desktop or a tablet; as long as you are on full Windows that can run Windows desktop programs, it works the same. For lightweight staging, try XAMPP Lite.
@Pamblam... He is talking about hosting a website on localhost - basically converting the tablet to a website host (webserver), using tools like WAMP and XAMPP.
Thanks everyone! After playing with some configs with WAMP it worked. I had to uninstall it a couple of times - the tablet didn't play nicely with 3.0 but worked with 2.5. The difficulty was installing the correct MSVCR. For 2.5, Apache runs on MSVCR 2010 for 32-bit, but for some reason you need the 2008 revision installed as well. Also had to disable IIS, since it runs on port 80 as well.
| common-pile/stackexchange_filtered |
How can I use an Angular Material Datepicker with a reactive / model-driven form?
I have a reactive (model-driven) angular form with multiple form groups (formGroupName) and controls. The form-fields on this form are controlled and populated by a FormBuilder in the associate TypeScript file.
I have a text field on this form that I'm using to collect a date from the user. I wanted to associate an Angular Material datepicker on this field but as soon as I add a "formControlName" attribute to this input field, I get the error:
Error: More than one custom value accessor matches form control with path: ' -> '
I can't use ngModel (switch to a template driven form) as everything else on this form is model-driven and this control sits within a formGroupName block. I did try putting an ngModel on here just to see what would happen and Angular complained that I can't mix an ngModel in here.
How can I data-bind this field (set the initial value for this field on the TypeScript side) and then get any changes made by the user on the template/view when the form is submitted while using Angular Material's datepicker? If Angular Material's date picker doesn't support Model-driven forms, I'm willing to use another date-picker that supports this feature.
My model looks like this (the field that's relevant to this discussion is step1.visitDate):
this.form = this.fb.group({
step1: this.fb.group({
representative: [null],
visitDate: [new Date()],
category: ["Retail Store Visit"],
contactType: ["Store Visit"],
goPakStore: [null],
goPakLocation: [null],
eventType: ["Outreach"],
eventDescription: [null],
servicePlaza: [null],
inventoryLocation: [null]
}),
step2: this.fb.group({
inventoryItemLocation: [null],
quantityAvailable: [null],
quantityDistributed: [null],
})
});
My HTML template looks like this (the relevant parts):
<form [formGroup]="form" (ngSubmit)="onSubmit(form.value)" novalidate>
<md-card class="users-listing" formGroupName="step1">
<md-card-content>
<div>
<md-input-container style="width:98%;">
<input mdInput placeholder="Visit Date" [mdDatepicker]="visitDatePicker" formControlName="visitDate" required trim maxlength="10">
<button mdSuffix [mdDatepickerToggle]="visitDatePicker"></button>
</md-input-container>
<md-datepicker #visitDatePicker></md-datepicker>
</div>
</md-card-content>
</md-card>
This should work; could you please reproduce the issue in a plunker?
For a working example using material2 have a look here.
| common-pile/stackexchange_filtered |
Filling NA values with a sequence in R data.table
I have a data table that looks something like the following. Note that the flag is 1 when vals is 0 and missing elsewhere.
dt <- data.table(vals = c(0,2,4,1,0,4,3,0,3,4))
dt[vals == 0, flag := 1]
> dt
vals flag
1: 0 1
2: 2 NA
3: 4 NA
4: 1 NA
5: 0 1
6: 4 NA
7: 3 NA
8: 0 1
9: 3 NA
10: 4 NA
I would like the output to look like the seq column below. That is, the column needs to contain a set of sequences beginning at 1 whenever vals is 0 and counting up until the next row when vals is 0. The flag is only helpful if it helps attain the goal described.
> dt
vals seq
1: 0 1
2: 2 2
3: 4 3
4: 1 4
5: 0 1
6: 4 2
7: 3 3
8: 0 1
9: 3 2
10: 4 3
Originally, I was thinking about using cumsum() somehow, but I can't figure out how to use it effectively.
My current solution is pretty ugly.
dt <- data.table(vals = c(0,2,4,1,0,4,3,0,3,4))
dt[vals == 0, flag := 1]
dt[, flag_rleid := rleid(flag)]
# group on the flag_rleid column
dt[, flag_seq := seq_len(.N), by = flag_rleid]
# hideous subsetting to avoid incrementing the first appearance of a 1
dt[vals != 0, flag_seq := flag_seq + 1]
# flag_seq is the desired column
> dt
vals flag flag_rleid flag_seq
1: 0 1 1 1
2: 2 NA 2 2
3: 4 NA 2 3
4: 1 NA 2 4
5: 0 1 3 1
6: 4 NA 4 2
7: 3 NA 4 3
8: 0 1 5 1
9: 3 NA 6 2
10: 4 NA 6 3
Any improvements are appreciated.
We can use a logical index with cumsum to create the grouping variable and then, based on that, we get the sequence column
dt[, flag_seq := seq_len(.N), cumsum(vals ==0)]
dt
# vals flag flag_seq
# 1: 0 1 1
# 2: 2 NA 2
# 3: 4 NA 3
# 4: 1 NA 4
# 5: 0 1 1
# 6: 4 NA 2
# 7: 3 NA 3
# 8: 0 1 1
# 9: 3 NA 2
#10: 4 NA 3
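The same restart-on-zero logic can be sketched in plain Python (for illustration only; the data.table one-liner above is the actual answer):

```python
def flag_seq(vals):
    """1-based counter that restarts every time vals hits 0 (mirrors grouping on cumsum(vals == 0))."""
    seq = []
    counter = 0
    for v in vals:
        if v == 0:
            counter = 0  # a zero opens a new group
        counter += 1
        seq.append(counter)
    return seq

print(flag_seq([0, 2, 4, 1, 0, 4, 3, 0, 3, 4]))  # [1, 2, 3, 4, 1, 2, 3, 1, 2, 3]
```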
| common-pile/stackexchange_filtered |
Angular 6: date disappears on refresh
I have a textarea which allows the user to submit comments. I want to grab the date and time the comment is submitted and save it to JSON together with the added comment.
Unfortunately, when I submit the comment, the comment and date display as expected, but when I refresh the page the date is gone.
Note: I am using json-server.
after comment is submitted in json file I would like to have something like this:
"comment": [
{
"id": 1,
"localTime": "2018-10-27T13:42:55",
"description": "Lorem ipsum dolor sit amet enim. Etiam ullamcorper. Suspendisse a pellentesque dui, non felis. Maecenas malesuada elit lectus felis, malesuada ultricies. Curabitur et lig\n"
}
]
Problem: right now, when the comment is submitted, I have the following in json-server:
"comment": [
{
"id": 1,
"localTime": null,
"description": "Lorem ipsum dolor sit amet enim. Etiam ullamcorper. Suspendisse a pellentesque dui, non felis. Maecenas malesuada elit lectus felis, malesuada ultricies. Curabitur et lig\n"
}
]
Here is what I have tried so far to grab the date from the entered comment.
HTML :
<form class="add-comments" [formGroup]="addForm" (keyup.enter)="addComments()">
<input type="hidden" id="localTime" name="localTime">
<div class="form-group">
<textarea class="form-control" rows="1" placeholder="Add comments" formControlName="description" id="description"></textarea>
</div>
</form>
Here is method on compoents ts.
addComments(task_id) {
const formData = this.addForm.value;
formData.task_id = task_id;
this.userService.addComments(formData)
.subscribe(data => {
this.comments.push(this.addForm.value);
});
const date = new Date();
const d = date.getUTCDate();
const day = (d < 10) ? '0' + d : d;
const m = date.getUTCMonth() + 1;
const month = (m < 10) ? '0' + m : m;
const year = date.getUTCFullYear();
const h = date.getUTCHours();
const hour = (h < 10) ? '0' + h : h;
const mi = date.getUTCMinutes();
const minute = (mi < 10) ? '0' + mi : mi;
const sc = date.getUTCSeconds();
const second = (sc < 10) ? '0' + sc : sc;
const loctime = `${year}-${month}-${day}T${hour}:${minute}:${second}`;
this.addForm.get('localTime').setValue(loctime);
}
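As an aside, the manual zero-padding above is the fiddly part; the equivalent construction in Python (illustration only, not part of the Angular code) shows the target format:

```python
from datetime import datetime

def local_time_string(dt):
    """Zero-padded YYYY-MM-DDTHH:MM:SS, the format the component builds by hand."""
    return dt.strftime("%Y-%m-%dT%H:%M:%S")

print(local_time_string(datetime(2018, 10, 27, 13, 42, 55)))  # 2018-10-27T13:42:55
```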
Here is the service for adding comments to the server:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import {Status } from '../model/statuses.model';
import { Comment } from '../model/comments.model';
import { User } from '../model/user.model';
@Injectable({
providedIn: 'root'
})
export class UserService {
status: Status[];
constructor(private http: HttpClient) { }
statusUrl = 'http://localhost:3000/statuses';
commentsUrl = 'http://localhost:3000/comment';
usersUrl = 'http://localhost:3000/users';
addComments(comments: Comment) {
return this.http.post(this.commentsUrl, comments);
}
getComments(id: number) {
return this.http.get<Comment[]>(this.commentsUrl);
}
}
Here is the class model:
export class Comment {
id: number;
username: string;
email: string;
days: number;
localTime: Date;
description: string;
}
What do I need to change to get what I want?
Your data will be cleared on refresh. You can try and use localStorage or sessionStorage to achieve your purpose.
Why session storage while I am using json-server?
ugh, why are you trusting the client to send your server the correct date ?
@Stavm is that question for me or @sarthank?
After this line in your code...
const date = new Date();
... the date variable will already contain the current date and time. Instead of your custom function calls for constructing the date string, you might as well just assign this date to your comment instance before posting it to the server - as your comment will be serialized to a JSON the date attribute will automatically be converted to a date string conforming to the format you desire: "2018-10-27T13:42:55".
You could just move above date assignment into your addComments method instead:
addComments(comments: Comment) {
comments.localTime = new Date();
return this.http.post(this.commentsUrl, comments);
}
let me check I will be back
I changed as per your answer but I get the following error : `Failed to compile.
src/app/service/user.service.ts(27,5): error TS2322: Type 'number' is not assignable to type 'Date'.`
Sorry, I made a mistake while adjusting the code. It should be new Date(), I will edit my answer.
Use localStorage.
// Set localStorage
localStorage.setItem('nameOfYourKey','data you will save');
// Retrieve data from
localStorage.getItem('nameofYourKey');
Hi, thank you for the suggestion, but I don't want to use local storage.
| common-pile/stackexchange_filtered |
NOT NULL constraint failed: forum_question.user_id (django)
I'm trying to save an object using CBVs (I'm new to using them). I'm trying to save the object using CreateView, but I'm getting this error:
"NOT NULL constraint failed: forum_question.user_id"
I would appreciate beginner friendly explanation on how to fix this and maybe tips as well, thank you!
models.py:
class Question(VoteModel, models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
title = models.CharField(max_length=30)
detail = models.TextField()
tags = models.TextField(default='')
add_time = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.title
forms.py:
class QuestionForm(ModelForm):
class Meta:
model = Question
fields = ['title', 'detail', 'tags']
views.py:
class AskForm(CreateView):
def post(self):
user = self.request.user
model = Question
form_class = QuestionForm
template_name = 'forum/ask-question.html'
if form_class.is_valid():
form_class.save()
exceptions?:
edit 3:
extra info:
Traceback (most recent call last):
File "/home/titanium/.local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/home/titanium/.local/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/base.py", line 101, in dispatch
return handler(request, *args, **kwargs)
File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/edit.py", line 174, in post
return super().post(request, *args, **kwargs)
File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/edit.py", line 144, in post
return self.form_valid(form)
File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/edit.py", line 127, in form_valid
self.object = form.save()
File "/home/titanium/.local/lib/python3.8/site-packages/django/forms/models.py", line 466, in save
self.instance.save()
File "/home/titanium/.local/lib/python3.8/site-packages/vote/models.py", line 67, in save
super(VoteModel, self).save(*args, **kwargs)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 743, in save
self.save_base(using=using, force_insert=force_insert,
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 780, in save_base
updated = self._save_table(
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 885, in _save_table
results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 923, in _do_insert
return manager._insert(
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/query.py", line 1301, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1441, in execute_sql
cursor.execute(sql, params)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 99, in execute
return super().execute(sql, params)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/sqlite3/base.py", line 416, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: NOT NULL constraint failed: forum_question.user_id
[14/Apr/2022 09:58:02] "POST /ask/ HTTP/1.1" 500 175023
When you create the QuestionForm using the Question model, you need to add a User because you made it a ForeignKey relation AND you haven't allowed it to be NULL; by default it is required (NOT NULL).
A forum question instance must have a non-null user field, but you are not specifying the user related to the object you're creating. In case you don't want to add the user, update your model's user field to be:
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, blank=True, null=True)
or, in your AskForm view, you override form_valid() in order to add the user, sort of like this (note: I have not tested this directly; follow the documentation here):
class AskForm(CreateView):
    model = Question
    form_class = QuestionForm
    template_name = 'forum/ask-question.html'

    def form_valid(self, form):
        # attach the logged-in user before saving; the model's field is named "user"
        form.instance.user = self.request.user
        return super().form_valid(form)
it didn't work, the same error comes up :(
Could you provide the full exception or point to which line is causing the exception, that will help in understanding how to fix the problem.
hello, do you mean this?
The above exception (NOT NULL constraint failed: forum_question.user_id) was the direct cause of the following exception:
response = await sync_to_async(response_for_exception, thread_sensitive=False)(request, exc)
return response
return inner
else:
@wraps(get_response)
def inner(request):
try:
response = get_response(request) …
except Exception as exc:
response = response_for_exception(request, exc)
return response
return inner
Also I think this is worth mentioning, I'm using a customuser model, thanks again!
@TitaniumTronic sadly, the error messages do not help as much as i would have thought, but I did find a question that is similar to yours which may help you solve this problem. Heres the link : https://stackoverflow.com/questions/63546367/not-null-constraint-failed-blog-userpost-user-id
@TitaniumTronic The exceptions you provided say "The above exception (NOT NULL constraint failed: forum_question.user_id)", and then you provide the rest of the trace, but what is it referring to by "the above exception"? You have to follow the stack trace back up the stack until you get to your own code. When you get to your code, that will likely be the source of the error and is what will tell us what the problem is with exactly
hey, I edited the question above to include a pic on the traceback
And if all else fails, try changing that line in the Question class to "user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, blank=True, null=True)" and see if it at least fixes that error. It may not be exactly want you want right now, but the important thing is to make sure nothing else is causing problems on top of the current one
@TitaniumTronic Is that the full thing? I did see that, but if you look at the location of the files those exceptions were coming from, and their code for that matter, its all the location of the error in Django, not your code.
Yo, so uhh I tried the change to the user ForeignKey and unfortunately it still didn't work :(
I asked around on a few discord servers already and existing stack overflow questions, but cant seem to find the solution
@TitaniumTronic I do hope you know the primary part of that line to change the user to does not have to do with changing to a foreignkey, the important part was adding the parameters ", blank=True, null=True" after "on_delete=models.CASCADE,"
I'm not sure if this is still useful, however, I ran into the same error. You can fix the error by deleting your migration files and the database.
The error is due to sending NULL data (no data) to a NOT NULL field in the database, usually after that field has been modified or deleted.
| common-pile/stackexchange_filtered |
Salomaa's axiomatisation of regular languages and the use of regular expression in it
I am reading the classical article of A. Salomaa where he gives two axiom systems for regular sets and proves consistency and completeness.
As I have understood it, an axiomatic system in some logic (let's suppose first-order predicate logic) is a set of axioms formulated in the language of the logic, i.e. well-formed formulas together with primitive notions (constant, predicate or function symbols). And a (set-theoretical) model is an interpretation of this.
For example, consider the theory of groups. The primitive notions are multiplication, inversion and identity, mostly written as $(G, \cdot, {}^{-1}, 1)$, and the axioms would be
\begin{align*}
& \forall x,y,z : (xy)z = x(yz) \\
& \forall x \exists y : xy = 1 \\
& \forall x : x1 = x \land 1x = x.
\end{align*}
The existence of certain groups shows its consistency, and as there are models such that for example the sentence $\forall x \forall y : xy = xy$
is either true or false, it is not complete. But essential here is that when talking about the theory we have just the axioms in mind, without bearing to any actual realisation/model.
Now to come back to Salomaa's paper: in his system $F_1$ he lists $11$ axioms. Now it is easy to see that regular expressions (defined as terms over some alphabet) are a model for these axioms, but besides that there might be other models. When dealing with questions about this axiom system in general, we cannot argue from one specific model, can we?
To be more specific, in Lemma 4 of his paper he shows that every regular expression has an equational characterisation (i.e. a set of equations this expression fulfils), and this is essential for the completeness proof. And the proof goes by induction over the construction of regular expressions, so it works just for this specific model. But in fact he must show that everything (not just regular expressions) obeying the axioms has such an equational characterisation, so he must argue more generally than by using the specific model of regular expressions?
Am I right? Or why does this work out? Or am I confusing something here: in what sense do regular expressions enter the axiom system, such that we can use this model in proving statements about the axiom system (I guess this is not the only model, is it?).
I can't access the paper so I don't really know but: Yes, there may be other models, like there are other models of Peano in first order logic. But non-standard models of Peano all contain the standard model (meaning that there is a subset of each non-standard model that's isomorphic to the standard model to them). I'd guess that that's what he's proving here: All models of his theory contain a subset isomorphic to the "standard" model: regular expressions.
If the axioms are "algebraic", it's probably equivalent to proving that for any given model, there is an injective morphism from regular expressions to the model, which he seems to describe by giving a set of equations characterizing the image of a regular expression by the morphism. And it's perfectly fine to define a function from regular expressions to something else by induction on regular expressions. (You can sometimes prove everything in one model, but only after you've proved that all other models are isomorphic to it. And the theory of a model is always complete.)
| common-pile/stackexchange_filtered |
How to remove isolated nodes and get a clear layout of non-overlapping nodes of a connected graph with networkx
def gene_network_graph(self, edges, s, e):
G = nx.MultiDiGraph(seed = 1)
nodes = list(range(s,e))
G.add_nodes_from(nodes)
G.add_weighted_edges_from(edges)
edge_labels=dict([((u,v,),d['weight'])
for u,v,d in G.edges(data=True)])
pos=nx.spring_layout(G,k=0.2)
node_sizes = 400
M = G.number_of_edges()
edge_colors = range(2, M + 2)
edge_alphas = [(5 + i) / (M + 4) for i in range(M)]
nx.draw(G,pos, with_labels=True)
nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels)
pl.show()
I have a total of 57 nodes. I need to remove isolated nodes and need a clear layout with non-overlapping nodes. Also my edge-weight labels are overlapping with each other.
Isolated nodes are the connected components that consist of a single node. You could use connected_components to get the connected components and from that select the ones that have cardinality one.
I just saw that your graph is directed. It is, right? Those are little arrow heads. In that case you compute the connected components of the undirected graph. The latter you can obtain from to_undirected.
Please share a Minimal, Complete, and Verifiable example
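For the "remove isolated nodes" part specifically: an isolated node is one with degree zero, and networkx exposes them directly via `nx.isolates`, so `G.remove_nodes_from(list(nx.isolates(G)))` should do it (a sketch based on the standard API, not tested against the code above). The same filter in plain Python:

```python
def drop_isolates(nodes, edges):
    """Keep only nodes that are an endpoint of at least one edge (degree > 0)."""
    touched = {n for e in edges for n in e[:2]}  # first two entries of each (u, v, weight) tuple
    return [n for n in nodes if n in touched]

print(drop_isolates([0, 1, 2, 3], [(0, 1, 0.5), (1, 2, 0.7)]))  # [0, 1, 2]
```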
| common-pile/stackexchange_filtered |
What is the time complexity of Trsm and other BLAS operations?
I am accelerating a model by replacing all its linear algebra operations with cuBLAS functions, and I want to get the time complexity or FLOPs of the model to evaluate its performance in the roofline model.
There are two kinds of operation in the model: Gemm and Trsm.
I know the FLOPs of Gemm are about 2 * k * m * n from the question: How to compute the achieved FLOPS of a MPI program which calls cuBlas function:
The standard BLAS GEMM operation is C <- alpha * (A dot B) + beta * C and for A (m by k), B (k by n) and C (m by n), each inner product of a row of A and a column of B multiplied by alpha is 2 * k + 1 flop and there are m * n inner products in A dot B and another 2 * m * n flop for adding beta * C to that dot product. So the total model FLOP count is (2 * k + 3) * (m * n) when alpha and beta are both non-zero.
But for Trsm, I have no idea about its computation complexity. All the documents I found say it's about O(n^3) which isn't clear enough to get the computation complexity.
Sincerely thank you for your answers!
O(n^3) IS the computational complexity. If you want to know the actual runtime, you should do some benchmarking.
@TimRoberts I want to know a more specific operation count for the time complexity, like 2 * m * n * k for gemm, because to get the work of the model I need to sum up the operations' FLOPs.
But for Trsm, I have no idea about its computation complexity. All the documents I found say it's about O(n^3) which isn't clear enough to get the computation complexity.
It's about O(n^2) not O(n^3).
TRSM has a number of different implementations depending on the form of the matrix, but for the canonical B <- alpha * inv(A) dot B where B is (m by n) and A is a non-unit upper triangular matrix, the operation count should be something like n * (3/2*m + 1) [n diagonal scaling operations plus 1/2 * 3 * m * n operations over all the non-zero row entries]
If you want to know the exact FLOP count for the reference implementation of any of the BLAS or LAPACK routines, the obvious way to do it is inspect the reference serial code and count the floating point operations, either by hand or by instrumentation of the code itself.
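For a rough roofline accounting, the counts quoted in this thread can be wrapped in tiny helpers (a sketch; the TRSM figure is the one given above for the left-sided, non-unit upper-triangular case, not a general formula):

```python
def gemm_flops(m, n, k):
    """C <- alpha*(A @ B) + beta*C with nonzero alpha and beta: (2k + 3) flops per output entry."""
    return (2 * k + 3) * m * n

def trsm_flops(m, n):
    """B <- alpha*inv(A) @ B, A non-unit upper triangular: n diagonal scalings plus 3/2*m*n flops."""
    return n + 3 * m * n // 2

print(gemm_flops(2, 3, 4))  # 66
print(trsm_flops(4, 2))     # 14
```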
| common-pile/stackexchange_filtered |
I am creating a program that counts up by an increment. I am really close, but I can't get the program to go up by the chosen increment.
This is the code. Any suggestions are welcome but I am really trying to learn C++ for a class and not just get answers. So if you could explain how you got to the solution that would be amazing! Thanks for the help!!
//Looping
#include <iostream>
using namespace std;
int main()
{
int increment;
int input_variable;
bool stop_printing = true;
cout << "Input: ";
cin >> input_variable;
cout << "Increment: ";
cin >> increment;
do
{
int counter = 0;
counter = increment++;
cout << "Result: " << counter << endl;
increment++;
if (increment >= input_variable)
{
stop_printing = false;
}
} while (stop_printing == true);
return 0;
}
You seem to have your counter and increment confused - counter should change, increment shouldn't. Also "keep printing while stop printing is true and stop printing once stop_printing is false" - that's just awful naming!
Do we get a pizza party if we ace the midterm?
Haha I agree I wasn't allowed to change the naming. I was given variable names.
Do you see that increment++ appears twice in this code? Does that follow your logic?
Sure it is on me! hahah
Do I not need it twice?
You need to be able to tell us what the difference is. What does it do? What does it do twice? The trick with programming is being able to be certain about what each instruction does - so you can be certain about what a collection of instructions does.
Change:
counter = increment++;
to:
counter += increment;
variable++ is for adding 1 to the variable. variable += value is for adding the second value to the first variable (it's equivalent to variable = variable + value).
You have other problems: You're setting counter back to 0 every time through the loop; that should be before the loop. You're doing increment++ a second time in the loop, this isn't needed. And you're testing increment against the input variable instead of testing counter.
int main()
{
int increment;
int input_variable;
bool stop_printing = true;
cout << "Input: ";
cin >> input_variable;
cout << "Increment: ";
cin >> increment;
int counter = 0;
do
{
counter += increment;
cout << "Result: " << counter << endl;
if (counter >= input_variable)
{
stop_printing = false;
}
} while (stop_printing == true);
return 0;
}
You'll also need to pull int counter = 0 outside the loop, no?
Thank you for the explanation. I forgot about the different types
I think it's worth noting that increment and decrement are operators that return a value, and that they have pre and post forms to that effect, in case someone wanted a variable to match another's incrementation (++increment returns the incremented result, whereas increment++ returns the value of increment before it was increased).
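That return-value difference can be sketched like this (illustrative demo functions, not from the exercise):

```cpp
#include <cassert>

// increment++ yields the old value; the variable still ends up increased.
int postIncrementDemo() {
    int i = 5;
    int seen = i++;        // seen == 5, i is now 6
    return seen * 100 + i; // encodes both values: 506
}

// ++increment yields the already-increased value.
int preIncrementDemo() {
    int i = 5;
    int seen = ++i;        // seen == 6, i is now 6
    return seen * 100 + i; // 606
}
```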
I didn't want to make his head explode, so I didn't think it necessary to mention those details.
| common-pile/stackexchange_filtered |
How to get JS fiddle to work with simple width and height percentages
I'm trying to post a problem I ran into with percentages and I'm having a lot of trouble reducing it to the simplest case in JS Fiddle. I don't want to have to extrapolate too much from the issue but JS Fiddle is forcing me to.
css:
.content-container {
width: 100%;
height: 100%;
outline: 1px solid;
}
.inner-box {
width: 56%;
height: 56%;
position: relative;
outline: 1px solid;
}
html:
<div class = "content-container">
<div class = "inner-box">
</div>
</div>
js fiddle:
http://jsfiddle.net/XMBEq/1/
the exact same code.
http://bonkmas.com/examplejs.html
You need -
html, body {
width: 100%; height: 100%;
}
Working Demo
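Putting it together, the full stylesheet would look something like this (the `margin: 0` on the root elements is an extra assumption here, added to avoid scrollbars; percentage heights only resolve when every ancestor up to the root has an explicit height):

```css
html, body {
  width: 100%;
  height: 100%;
  margin: 0;
}
.content-container {
  width: 100%;
  height: 100%;
  outline: 1px solid;
}
.inner-box {
  width: 56%;
  height: 56%;
  position: relative;
  outline: 1px solid;
}
```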
| common-pile/stackexchange_filtered |
How Do we Know the Cardinality of a Set?
I recently had a discussion in a comment section here about the cardinality of the empty set. He claimed that cardinality equals zero. I claimed that we didn't know that. My claim was merely that people just assume the empty set to have zero cardinality. As far as I can tell, we only know the cardinality of a set because of a bijection, a surjection, or an injection. I do not see how we can have any function either from or to the empty set, since it does not have any members. How do we know the cardinality of a set?
Added question:
Some of the answers here talk about the axiom of choice and other principles, which as far as I can tell from the arguments made, constructivist mathematicians do not accept. How would a constructivist go about evaluating the cardinality of the empty set, or does that not come as possible?
The empty set is itself a function. Edit: and itself a cardinal, if I'm not mistaken.
Can you give your definition of cardinality? For finite sets I use the definition of how many elements are in the set; in the case of the empty set it's $0$.
@Belgi And how do you know how many elements are in a set? I find that definition very ambiguous. A cardinal number is an ordinal number that isn't equinumerous to any ordinal number inferior to itself. Having established this the cardinality of a set can be defined as the only cardinal that is equinumerous to the given set.
@GitGud - I studied only basic set theory, since the question was tagged as such I explained it in the way it was explained to me. I realize this isn't very good as a definition, but I hoped it would of helped the OP
@Belgi Fair enough.
A set has cardinality $\ge1$ if and only if it has a member. Ergo $\varnothing$ has cardinality $0$.
@anon From your statement I only infer that the empty set does not have cardinality greater than 1. You'd need an argument that shows that if a set has a cardinality less than 1, then it will have cardinality 0 for your argument to go through to completion.
What do you think the word "cardinality" means?
@DougSpoonwood Come again?
@anon There is no definition of cardinality in general, unless you have a predetermined notion of what it means for an element to belong to a set. Consider a universe of discourse such as {1, 3, 5, 6} and a subset {(1, .2), (3, 1), (5, .4), (6, 0)} :=A, where ".2" "1" and ".4" indicate the values of the membership function. One notion of cardinality for a fuzzy subset A implies that A has cardinality of 1.6. Set A still has at least 3 elements. Thus, the notion of cardinality I consider as relative to a function of some sort. But, the empty set has no subsets other than itself. So...
either the notion of a characteristic function for evaluating classical sets breaks down at some point (I don't see how it breaks down for infinite sets), or the empty set has no cardinality at all. All the arguments I've seen here seem rather indirect. The characteristic function way of evaluating the cardinality of sets does come as direct. I don't see how it breaks down. Consequently, at least so far, I remain convinced that the concept of cardinality simply does not apply to the empty set in any proper sense. We can still assume it equal to 0, but I see no way to prove it properly.
@PeterTamaroff See my comments above.
This isn't the first time you've taken evasive maneuvers to a different arena with different rules so you can comment on straw issues that don't apply to the original context as part of a broader contrarian campaign to feign dumb about basic concepts. Oh well, I don't know what I expected. I'll leave you to your whatever you're actually trying to do.
@anon I assure you I'm not feigning. I believe that either the arguments in the answers below make subtle errors (perhaps by assuming certain equivalences without understanding the precise conditions between them) in reasoning or it comes as inconsistent to accept the cardinality of the empty set as 0 on the basis of constructivist principles. Either way, the hypothesis that the empty set does NOT have any cardinality at all remains in place. What consequences follow from that hypothesis?
Since you want constructivist principles, etc., please write down in full the system that you are working in. This should include any non-classical logic laws (and the list of the ones omitted from classical logic as well); and what sort of axioms your set theory has (because talking about sets requires you to have some theory about sets from the start). Then it might be possible to give you a satisfactory answer.
@GitGud If there does exist such a thing as the empty set, I think it can viewed as a function between (a certain class of indicator) functions.
@DougSpoonwood I'm afraid I can not discuss this with you. To me the existence of the empty set is an axiom and I'm sure you know this, So your objections are deeper than my knowledge.
@GitGud Every assumption or axiom can get questioned. If the empty set qualifies as a function, and a function can get represented by a set of ordered pairs, then what I've written below in some sense shows you what the empty set with respect to a finite universe of discourse looks like.
@AsafKaragila As I understand things, it isn't necessarily required to write down the full system to adhere to constructivist principles. If you can compute something, you've worked under constructivist principles. Some other people's answers here may have allowed a computation of cardinality for (an) empty set. I know my answer did allow such computations up to the point where I argued via induction. Though, such an argument makes it clear that many more computations can get made.
Doug, if your name is Nelson, then people probably know your axiomatic system. Sadly your name is Doug, so people don't know, and this means that you have to tell them what you are thinking about, otherwise it is impossible to guess.
@AsafKaragila I did tell them here, after I had thought of it.
From a standard set-theoretic point of view:
The empty set, which in this answer I will (mostly) denote by $0$, is transitive and well-ordered by $\in$, so by definition it is an ordinal. If $\alpha\ne 0$ is an ordinal, then $0\in\alpha$, so $0<\alpha$, and $0$ is therefore the smallest ordinal. If $A$ can be well-ordered, the cardinality of $A$ is by definition the smallest ordinal $\alpha$ such that there is a bijection between $A$ and $\alpha$. Let $A=0$. Then $A\times 0$ is vacuously a bijection from $A$ to $0$, and $0$ is the smallest of all ordinals, so $|A|=0$, i.e., $|\varnothing|=0$.
Why the single use of $\varnothing$ instead of $0$ for the empty set? I don't mind either notation, but surely one should be consistent.
@Ilmari: Because Doug asked about the cardinality of the empty set, and at the end I wanted for absolute clarity to emphasize that it was indeed the empty set as we normally think of and represent it about which I had been talking right along. The question was sufficiently naïve that I thought it a good idea to do so.
While the answers given with reference to ordinals are correct, here is one without them.
Recall that if the cardinality of $x$ is larger than the cardinality of $y$, then there is an injection from $y$ into $x$.
If the cardinality of the empty set is not zero, then it is at least $1$, meaning there is an injection from $\{1\}$ into the empty set. Call this injection $f$, therefore $f(1)\in\varnothing$ which is a contradiction. Therefore the cardinality of the empty set is indeed $0$.
Eh? If it's not zero, then it's at least one? Where's that coming from? What are zero and one without ordinals?
Oh lord, how did generations of mathematicians used the natural numbers and compared sizes before the ordinals??? :(
The OP is, after all, asking how to prove that the empty set has $0$ elements....
I forgot, after all, that Cantor invented sets and "zero". Oh, wait, von Neumann invented zero. But Zermelo invented the empty set... :|
From the "keep it simple" perspective: "The empty set has no elements. That is, it has $0$ of them. In fancy words, its cardinality is $0$".
I do NOT see how it follows that if the cardinality of the empty set is not zero, then it comes as equal to at least 1. It seems to me that it could have just not have cardinality at all. I see no reason to assume that every set has cardinality. That all said, at least this proof is very clearly not a constructive one. Also, if you assume every set has cardinality, and we have a linear ordering for the cardinality of sets, I do think your argument works.
@Doug: If you want a complete answer then you will have to write what is your definition of cardinality. Generally speaking, cardinality is just an equivalence relation on sets. So every set must have a cardinality.
The usual definition of a function $f : X \rightarrow Y$ is: a relation $f \subseteq X \times Y$ such that for all $x \in X$ there exists unique $y \in Y$ such that $(x,y) \in f.$ Under this definition, there is a unique function $f : \emptyset \rightarrow Y$ for each set $Y,$ namely the empty relation.
Furthermore, we can define that a function $f : X \rightarrow Y$ is injective iff for all $y \in Y$, there is at most one $x \in X$ such that $(x,y) \in f$. Under this definition, the unique function $f : \emptyset \rightarrow Y$ is indeed injective.
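Spelled out, the injectivity check for the empty function is vacuous: with $X = \emptyset$, the "at most one $x$" condition unwinds to

```latex
\forall y \in Y \;\; \forall x_1, x_2 \in \emptyset \; \bigl( (x_1, y) \in f \wedge (x_2, y) \in f \;\rightarrow\; x_1 = x_2 \bigr),
```

and since $x \in \emptyset$ is always false, the quantified implication holds for every $y$.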
Finally, I think a good definition of $X \lesssim Y$ is the existence of an injection $X \rightarrow Y$.
So, if you accept the above definitions, then you must agree that, since there is an injection $\emptyset \rightarrow Y$ for all sets $Y$, thus by definition we have $\emptyset \lesssim Y$.
Now. Let $X \cong Y$ mean that there exists a bijection $X \rightarrow Y.$ You need to show the following.
$\lesssim$ is compatible with $\cong$
So too are the operations $\uplus$ and $\times,$ defined in the usual way.
If you've done that, you're nearly there! Let $|*|$ denote a function such that $X \cong Y$ iff $|X| = |Y|$, and enforce the definitions of order, addition and multiplication.
$$X \lesssim Y \iff |X| \leq |Y|$$
$$|X \uplus Y| = |X| + |Y|$$
$$|X \times Y| = |X| \cdot |Y|$$
This is legal because we proved compatibility with $\cong.$
Finally, after all that work, you can (and should) prove that an initial portion of the cardinal numbers satisfies the Peano axioms. To do this, you'll have to take $|\emptyset|$ as your $0.$ Furthermore, it shouldn't be too hard to show that no other choice will work, since after all we already know that $\emptyset \lesssim Y$ for every set $Y$, so in general we have $|\emptyset| \leq |Y|.$
It seems to me that you're just defining equinumerosity or cardinality between sets, but I don't see how you're actually defining a cardinal.
@GitGud, the cardinal numbers can be viewed as the image of the $|*|$ function, which is any function such that $X \cong Y$ iff $|X|=|Y|$. Von Neumann's approach is the standard example of such a function.
I understand now. But this is dependent on the function $|*|$. I prefer the approach I described in the comments to the question.
@GitGud, yes that's the standard approach, and for good reason. However, here's how I would phrase it. Assuming the axiom of choice, every set is well-orderable. So we can define $|X|$ as the least ordinal number $\alpha$ such that we can find a surjective function $f : \alpha \rightarrow X.$ We can then prove that $|X| = |Y|$ iff $X \cong Y$, which is the property we really need of $|*|$.
@GitGud, regarding the non-uniqueness of $||$ as defined in my answer, a question I asked the other day is pertinent. If we can extend the universe as we please, then the uniqueness issue essentially disappears. The trick is, you dynamically extend with a class of cardinal numbers $\mathrm{Card}$ disjoint from $\mathrm{Set}$ and assert that $|| : \mathrm{Set} \rightarrow \mathrm{Card}$ and that $|X|=|Y|$ iff $X \cong Y$.
@GitGud, lol. So wait, you like the idea of extending with a new class $\mathrm{Card}$? In my experience, mathematicians trained in a more classical school of thought generally don't like this idea.
No, I like that you worry about this sort of thing. I prefer the classical idea.
"The usual definition of a function f:X→Y is: a relation f⊆X×Y such that for all x∈X there exists unique y∈Y such that (x,y)∈f. Under this definition, there is a unique function f:∅→Y for each set Y, namely the empty relation." I don't accept this definition. Suppose Y and X both the empty set. For all x∈X there does NOT exist a unique y∈Y. So, I don't see how the definition makes any sense.
@DougSpoonwood, presumably it depends on the background logic / axiomatic system. However, if we confine our discussion to classical first-order ZFC, everything goes through as I've described. In particular, let $X$ and $f$ denote empty sets (no need to prove they're equal) and suppose $Y$ is fixed but arbitrary. We will show that $f$ is a function $X \rightarrow Y$. We need to check two conditions, namely that [0] $f \subseteq X \times Y$, and that [1] $\forall x \in X, \exists! y \in Y : (x,y) \in f$.
Condition [0]. Suppose $p \in f$. Then since $f$ is empty, this entails a contradiction. But classical logic is subject to the principle of explosion, so its certainly true that $p \in X \times Y.$ Discharging, we have that $\forall p(p \in f \rightarrow p \in X \times Y)$, which is just longhand for $f \subseteq X \times Y$, as required.
Condition [1]. We wish to show that $\forall x \in X ,\exists ! y \in Y : (x,y) \in f.$ This is shorthand for $\forall x(x \in X \rightarrow \mbox{ something }),$ where the something is irrelevant. Now in general, $x \in X$ is false, since $X$ is empty. Thus the statement of interest is equivalent to $\forall x(\bot \rightarrow \mbox{ something })$. But in classical logic, false implies everything, or more precisely, $(\bot \rightarrow \mbox{ whatever })$ is equivalent to $\top$. So the statement of interest is just longhand for $\forall x \top,$ which is classical equivalent to $\top$.
Of course, if you don't accept classical logic and/or ZFC, that's fine too. Perhaps in your favoured system, $\emptyset$ simply has no cardinality.
@user18921 "But classical logic is subject to the principle of explosion" The principle of explosion can get formalized as CKpNpq. Since you have a contradiction, and since the rule of uniform substitution applies, I have the option of inferring the negation of the statement you've made also. That is, by the principle of explosion I can infer it is not the case that "p∈X×Y". It is thus not certainly true that "p∈X×Y", though you may infer it at such a point...
"But in classical logic, false implies everything, or more precisely, (⊥→ whatever ) is equivalent to ⊤ . So the statement of interest is just longhand for ∀x⊤, which is classical equivalent to ⊤ ." Sure, you can do that. But, in classical logic (⊥→ whatever), and that "whatever" can come as ⊥. Thus, in classical logic I can use a case-by-base analysis that you've given me to infer that the empty set has no cardinality at all. And thus since the case-by-case analysis can lead to a contradiction either way, I infer a contradiction from some unstated assumption.
@DougSpoonwood, I'm not sure I really get what you're saying. I mean sure, by the principle of explosion you can infer $p \notin X \times Y,$ and thus that $\forall p(p \in f \rightarrow p \notin X \times Y)$. So? It doesn't help us prove that $f \subseteq X \times Y$, so its not very helpful.
Cardinality is determined by an equivalence relation on sets, and the emptyset is also a set.
It just happens that the equivalence class of $\emptyset$ happens to have just one element, and the only function from $\emptyset\to\emptyset$ is an "empty function" (thinking of functions from $X\to Y$ as special subsets of $X\times Y$).
If the intent of the answer is to give something better than intuition, I don't think I follow. An equivalence relation is a binary relation on a set. Which set is this you're thinking about?
@GitGud - I think rschwieb means the relation that $A~B$ iff there exist a bijection between them, although it isn't defined on a set (but rather on the class (I think thats the term that I'm abusing here) of all sets)
@Belgi That makes sense intuitively, but I don't know to formalize it exactly because we're dealing with a proper class.
Dear @GitGud : Belgi's description is what I mean. There is no real restriction against thinking of this as an equivalence relation on the class of sets. The Cartesian product of a proper class with itself can still be defined (it's just not a set is all) and you can still think about subclasses of it satisfying the axioms for an equivalence relation.
Yes, there is a restriction against equivalence relations being on proper classes. However, an equivalence relation can be a proper class, which is all you need here.
@BrianM.Scott How do you define a relation in the context of proper classes?
@BrianM.Scott I don't see how your last sentence meshes with what your first sentence claims. Can you add details about what the restriction is? Thanks.
@GitGud: By a formula. In this case $x\sim y$ means the rather long, involved formula that expresses ‘there is a bijection between $x$ and $y$’.
@BrianM.Scott Ah! Thanks.
@rschwieb: $|x|=|y|$ can be thought of as $\langle x,y\rangle\in\mathbf{EQ}$, where $\mathbf{EQ}$ is a relation on sets that is itself a proper class, but that’s an abbreviation for $\varphi_{\mathbf{EQ}}(x,y)$ for some formula $\varphi_{\mathbf{EQ}}(x,y)$ that says that there’s a bijection between $x$ and $y$. The point is that even if you think of it as a proper class, it’s a proper class of sets. There is no mechanism for forming a proper class of proper classes, which is what you’d need in order to have a relation on proper classes.
Dear @BrianM.Scott : So it's fine to do this with a proper class of sets, but there are classes (not of sets) for which it is not OK? Maybe one can make some clever paradox otherwise?
There’s no mechanism in $\mathsf{ZF}$ (or, so far as I know, any of the other relatively standard axiomatizations) even for talking about a collection of proper classes.
Dear @BrianM.Scott I'm still trying to see why a class of sets should be exceptional compared to other classes with regards to what we're doing :S
There are no other proper classes in any of the usual axiomatizations. Every class, proper or not, has only sets as elements.
Dear @BrianM.Scott : That's what I thought, and why I was confused. But now I'm seeing that it looks like you meant the restriction is not on entities are used but on what is done. Is it that the "cross product" of the classes can't be formed? We're just not equipped with a way to make a collection of pairs of sets for a class of sets? Maybe this is where my intuition was breaking down about what can't be done for classes.
The restriction is on entities. If $x$ and $y$ are entities such that $x\in y$, then $x$ must be a set.
Dear @BrianM.Scott : That was a yes/no question. Is that a "no you can't do cross products of classes because that would entail membership between things that aren't sets"? This is also a yes/no question.
There is no problem forming $A\times B$ for proper classes $A$ and $B$, but that doesn’t help you to form a relation on proper classes; it merely allows you to form a relation on sets that is itself a proper class.
Dear @BrianM.Scott : Now that really has me flattened. What's the difference between a relation on sets and a relation on the proper class of sets?
That’s not what I was contrasting. You can form a relation on sets, even on a proper class of sets. You cannot form a relation on proper classes, i.e., one that relates proper classes to one another. (You can sometimes express an instance of a relation: for example, if in $\mathsf{ZF}$ you have proper classes $\mathbf{A}$ and $\mathbf{B}$ defined respectively by formulas $\varphi(x)$ and $\psi(x)$, then $\varphi(x)\to\psi(x)$ can be thought of as expressing the relationship $\mathbf{A}\subseteq\mathbf{B}$.)
| common-pile/stackexchange_filtered |
Is a black hole a perfect sphere?
I know all about how black holes form and why their gravity is so strong. However, is the gravity equally powerful in all directions? Will the event horizon appear as a perfect sphere?
One can not directly see the event horizon, but one can see radiation from the infall of material, which might give visual cues to the event horizon. The event horizon would be a perfect sphere, but it would not appear as a perfect sphere because of gravitational lensing. If you were stationary or moving directly towards or away from the black hole, it would appear as a perfect circle that was apparently larger than the actual black hole, also because of lensing.
@Aabaakawad What about a spinning black hole? Surely that would have some effect on the event horizon, maybe making it more oval-shaped?
you might think that because you are imagining the mass of the black hole to be distributed throughout the volume within the event horizon. It is not. All the mass is either concentrated into a point in the center of the black hole (a singularity), or an ultracompact object being held up from being a singularity by forces not yet known (not likely, but who knows). Yes, a singularity can have spin, and I do not know how that works. It can also have an electric charge and/or a magnetic moment.
See this Physic.SE Q&A, or even the wikipedia page.
You cannot see the event horizon. That being said:
A non-rotating black hole, free of external influences, has perfect spherical symmetry. All its properties are exactly the same in any direction, period. This is the Schwarzschild metric.
https://en.wikipedia.org/wiki/Schwarzschild_metric
Even if it is electrically charged, if it's non-rotating, and free of external influences, it is still perfectly spherically symmetrical - this is the Reissner–Nordström metric.
https://en.wikipedia.org/wiki/Reissner%E2%80%93Nordstr%C3%B6m_metric
As soon as it begins rotating, the black hole is no longer in perfect spherical symmetry. It acquires an ergosphere, which is shaped like a flattened ball. This is the Kerr metric.
https://en.wikipedia.org/wiki/Kerr_metric
Significant external factors can distort the shape of the event horizon. Two black holes merging will go through a process where spherical symmetry is lost temporarily:
https://www.youtube.com/watch?v=JOQMvyLYmd4
Technically, anything near a BH would slightly distort the metric of spacetime, but in practice this would not be easily measurable unless the external factor is very large (another very massive object). So for most practical purposes, the metrics described above would apply.
The Schwarzschild metric by definition can be written in a spherically symmetric form, the Kerr metric is not spherically symmetric, but in Boyer–Lindquist coordinates the event horizon has a constant radial coordinate.
For most people this will be good enough to say the event horizon of Kerr black holes (including Schwarzschild black holes) are perfect spheres. A physical black hole has a quasi-neutral charge, but the more general case of a charged Kerr-Newman black hole still has a perfectly spherical event horizon.
Now of course no physical situation actually possess the exact symmetries of Kerr black holes. However the outer Kerr metric is stable, which means that the event horizon mantains a near-perfect spherical shape, except in extreme situations such as when two black holes are undergoing a merger.
As I understand it, general relativity says that there's gravitation everywhere in the universe, and this gravitation creates dips in space, so to speak, often represented in 2D as a weight on a rubber sheet.
So, a black hole might generate a perfectly spherical event horizon, but it generates it on a not perfectly flat 3-dimensional surface, and I think the gravitation from other objects makes it not quite a perfect sphere. For example, a star or planet that orbits a black hole would drag around a ripple on the event horizon as it orbits the black hole.
Precisely what shape that ripple would be . . . I'm not sure. That said, if you had a black hole as the only object in a universe, then I think the event horizon might be a perfect sphere, or as perfect as possible, given quantum fluctuation, hawking radiation and the impossibility of precise observation and all that good stuff.
Objects other than black holes tend to have lumpy/inconsistent gravity - see here. Even neutron stars have some inconsistency, but black holes probably avoid that because the matter is condensed to a point, so there's equal gravitational pull from all directions from the singularity.
Now a Kerr black hole, that's a whole different question. I'm not smart enough to try to answer that one. That might not be a sphere at all.
I would avoid using the rubber sheet analogy here. It's misleading in a number of aspects, and doesn't help your points too much.
@HDE226868 are you saying the gravitational field doesn't effect the perfect sphere of the event horizon?
So, the rubber sheet analogy is not ideal, but is it wrong? Can someone say why it's wrong rather than just voting down? It's not bad logic. The Kerr metric is obviously bigger and I said that. Why is this wrong for a non rotating black hole? Inquiring minds want to know. It might be wrong, but what's the explanation?
The rubber sheet analogy isn't wrong, per se, but it can be misleading. Eg, what causes bodies to move down into the depression, some kind of meta-gravity?
"there's equal gravitational pull from all directions from the singularity". It's a bit more subtle than that. We can't get any information from inside the EH (event horizon), and that includes mapping the shape of the black hole's core; gravity cannot escape from the EH anymore than light can. The gravitational response of bodies outside the EH is completely determined by the spacetime curvature that was created as the BH was formed. For further details, please see http://math.ucr.edu/home/baez/physics/Relativity/BlackHoles/black_gravity.html
@PM2Ring Thank you for that. This is perhaps a question that novices like me shouldn't try to answer.
Because the gravitational pull of a non-rotating black hole in empty space is equal on all sides (as a result of it being a point), the event horizon should in theory be a perfect sphere. Because of the nature of a black hole not letting light escape, you would need a background for it to be seen. Also, in real-world physics a black hole would need to interact with light to be "seen", which (correct me if I'm wrong) has its own very small gravitational pull, so it would slightly bend the sphere a bit; the observer will have the same effect.
If rotating, it would (as pointed out before) form an ergosphere around the event horizon, connecting to it at the poles and being farthest over the equator, thus forming a more oval shape.
Also, you know who to subscribe to right?
| common-pile/stackexchange_filtered |
Methods of Deleting Duplicates in SQL Server with SSDT
I'm currently setting up a database which has an upload process in SSDT. It pulls data over time from an Excel sheet. My issue is that the Excel sheet is appended onto as time continues but the upload process uploads the entire sheet every time the process is run. This results in exact, duplicate rows for the data which were in there previously.
I have attempted to solve this in a number of ways. I have attempted to use the Sort function included in SSDT but have not been able to get it to work for whatever reason. I am considering writing an SQL task to clean the database after each upload but am only so confident in my ability to do so. Is there some method I am not thinking of which would make this easier, or a way to get the sort transformation to work? Thanks for the help in advance.
Cleanup your dupes programmatically after or staging table -> perm table and don't merge the dupes.
The only problem I have with this is that you have to have a little over double the amount of space in the database to create the temporary table before the merge, although I do not think any of the other suggestions have figured out a way around this yet. Perhaps I will try this.
Could purge the table before loading, but if load fails you're gonna be without the data as you just purged it. Safer loading the data and manipulating it.
You can delete duplicate rows in SQL Server using a cte with row_number() window function.
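A sketch of that approach (the table and column names here are placeholders — adjust them for your schema, and list every column that defines a duplicate in the `PARTITION BY`):

```sql
-- Keep one copy of each fully-duplicated row, delete the rest.
WITH numbered AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY col_a, col_b, col_c  -- columns that define a duplicate
               ORDER BY (SELECT NULL)            -- no preference which copy survives
           ) AS rn
    FROM dbo.UploadTarget
)
DELETE FROM numbered
WHERE rn > 1;
```

SQL Server allows the `DELETE` to target the CTE directly when it reads from a single table, so no staging copy of the data is needed.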
This was the SQL task to execute I was thinking of writing. It seems like it might be the better way to go. Will attempt it later today.
| common-pile/stackexchange_filtered |
I have problem with my query using laravel
I am building a simple private chat app with laravel.
I have a messages table with columns: id, sender_id, receiver_id and body.
I am trying to show the chat history for whichever user I select, but unfortunately my query is not working properly: all users are seeing all messages. Please help me resolve this. Thank you.
Controller
Message::where('sender_id', Auth::user()->id)
->orWhere('receiver_id', Auth::user()->id)
->where('sender_id', $userId)
->orWhere('receiver_id', $userId)
->get();
that orWhere isn't scoped correctly. You will want to use ->where(function($q) { $q->where()->orWhere(); });
please can you update my query ? thank u.
the StackOverflow coding service has already provided 2 answers below. Please consider becoming a sponsor; also like comment and subscribe
Does this answer your question? How do you wrap Laravel Eloquent ORM query scopes in parentheses when chaining?
Just add ->where(function($q){ ... }), because an unscoped orWhere breaks out of all the other conditions
try this code
Message::where(function ($q) {
$q->where('sender_id', Auth::user()->id)
->orWhere('receiver_id', Auth::user()->id);
})->where(function ($q) {
$q->where('sender_id', $userId)
->orWhere('receiver_id', $userId);
})->get();
i have used this query but chat messages is not showing .
is there any error ? provide me with more details
i am not getting error just messages is not showing on chat please check https://ibb.co/KbvHBRZ
try to dd($userId,Auth::user()->id) is there data ?
after this dd(Auth::user()->id); getting auth user id 177
I Updated my answer could you try again
i have updated your code but why all chat message is showing to all users please check https://ibb.co/4WXv2PZ
Let us continue this discussion in chat.
make subquery where like this:
Message::where(function($q){
$q->where('sender_id', Auth::user()->id)
->orWhere('receiver_id', Auth::user()->id);
})->where(function($q) use ($userId) {
$q->where('sender_id', $userId)
->orWhere('receiver_id', $userId);
})->get();
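To see why the grouping matters beyond Laravel, here is a language-neutral sketch in Python (hypothetical in-memory data, not Eloquent) of the two predicates the answers contrast. In SQL, AND binds tighter than OR, so the ungrouped version matches rows from unrelated conversations:

```python
# Hypothetical messages table; "me" is the logged-in user, "them" the
# selected chat partner.
messages = [
    {"id": 1, "sender_id": 177, "receiver_id": 42},  # me -> them
    {"id": 2, "sender_id": 42, "receiver_id": 177},  # them -> me
    {"id": 3, "sender_id": 99, "receiver_id": 42},   # someone else -> them
]
me, them = 177, 42

# Ungrouped: sender=me OR (receiver=me AND sender=them) OR receiver=them,
# mirroring how SQL parses the original query's bare orWhere chain.
ungrouped = [m for m in messages
             if m["sender_id"] == me
             or (m["receiver_id"] == me and m["sender_id"] == them)
             or m["receiver_id"] == them]

# Grouped: (sender=me OR receiver=me) AND (sender=them OR receiver=them),
# which is what the closure-wrapped query produces.
grouped = [m for m in messages
           if (m["sender_id"] == me or m["receiver_id"] == me)
           and (m["sender_id"] == them or m["receiver_id"] == them)]

print([m["id"] for m in ungrouped])  # [1, 2, 3] -- leaks message 3
print([m["id"] for m in grouped])    # [1, 2]   -- only the private chat
```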
Post data to a web server with a web service
Hello, I am trying to save data on a web server with a web service implemented in PHP.
I am trying the code below to do it. I am getting a response from the server, but the data is not getting saved. I have wasted 5-6 hours of the day googling and trying code found on the net, but nothing seems to work :(
NSDictionary *someInfo = [NSDictionary dictionaryWithObjectsAndKeys:
txtTreasureName.text, @"name",
txtDescription.text, @"description",
txtMaterials.text, @"materials",
@"77.3833", @"longitude",
@"29.0167", @"latitude",
categoryIdStr, @"categoryId",
nil];
NSError *error;
NSData *jsonData = [NSJSONSerialization dataWithJSONObject:someInfo
options:NSJSONWritingPrettyPrinted
error:&error];
if (! jsonData) {
DLog(@"Got an error: %@", error);
} else {
NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
NSString *urlString = @"http://www.myurl.php";
NSURL *url = [NSURL URLWithString:urlString];
[request setURL:url];
[request setHTTPMethod:@"POST"];
[request setValue:@"application/json"
forHTTPHeaderField:@"Content-Type"];
[request setValue:@"application/json"
forHTTPHeaderField:@"Accept"];
[request setValue:[NSString stringWithFormat:@"%lu",
(unsigned long)[jsonData length]]
forHTTPHeaderField:@"Content-length"];
[request setHTTPBody:jsonData];
DLog(@"%@", request);
[[NSURLConnection alloc]
initWithRequest:request
delegate:self];
// Print json
DLog(@"JSON summary: %@", [[NSString alloc] initWithData:jsonData
encoding:NSUTF8StringEncoding]);
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[NSURLConnection sendAsynchronousRequest:request
queue:queue
completionHandler:^(NSURLResponse *response,
NSData *data, NSError *error) {
if ([data length] &&
error == nil) {
DLog(@"%@", [[NSString alloc] initWithData:data
encoding:NSUTF8StringEncoding]);
if ([self shouldDismiss]) {
[self dismissViewControllerAnimated:YES
completion:nil];
}
}
}];
}
What is the error message? What is the line? What language is that?
I am not getting any error. I am getting a response from the server, but the data is just not getting saved on the server.
So the data is getting to the server but not being saved by the server ?
As a suggestion, I strongly advise you to take a look at RestKit; this framework makes it really easy to serialize data to JSON and to configure the calls to your web service methods.
Put your server-side PHP script here as well, so we can review what you are doing there.
@Wain you got it exactly. The call to the server is being made, but somehow the server is not treating it as a valid POST request
So we really need to see the server setup, because your code above looks OK. I agree with @ararog about using RestKit to make life easier, but I don't think the app code is your problem. Does the server actually accept JSON body data? If not, then the answer from @DipenPanchasara is a good bet.
@Wain you are right the problem is solved now. Problem was with server side code. Thank you everybody for your time, quick responses and help :)
Set the request URL in the function below.
You have already created the data dictionary
NSDictionary *someInfo = [NSDictionary dictionaryWithObjectsAndKeys:
txtTreasureName.text, @"name",
txtDescription.text, @"description",
txtMaterials.text, @"materials",
@"77.3833", @"longitude",
@"29.0167", @"latitude",
categoryIdStr, @"categoryId",
nil];
Add this function to your implementation file and invoke it; the rest will be done by this function
[self postWith:someInfo];
Add this
- (void)postWith:(NSDictionary *)post_vars
{
#warning Add your Webservice URL here
NSString *urlString = [NSString stringWithFormat:@"YourHostString"];
NSURL *url = [NSURL URLWithString:urlString];
NSString *boundary = @"----1010101010";
// define content type and add Body Boundry
NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@",boundary];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
[request setHTTPMethod:@"POST"];
[request addValue:contentType forHTTPHeaderField: @"Content-Type"];
NSMutableData *body = [NSMutableData data];
[body appendData:[[NSString stringWithFormat:@"--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
NSEnumerator *enumerator = [post_vars keyEnumerator];
NSString *key;
NSString *value;
NSString *content_disposition;
while ((key = (NSString *)[enumerator nextObject])) {
value = (NSString *)[post_vars objectForKey:key];
content_disposition = [NSString stringWithFormat:@"Content-Disposition: form-data; name=\"%@\"\r\n\r\n", key];
[body appendData:[content_disposition dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[value dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
}
//Close the request body with Boundry
[body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]];
[request setHTTPBody:body];
[request addValue:[NSString stringWithFormat:@"%lu", (unsigned long)body.length] forHTTPHeaderField: @"Content-Length"];
NSData *returnData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
NSString *returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding];
NSLog(@"%@", returnString);
}
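As a side note on what the loop above is constructing: a multipart/form-data body frames each field between boundary lines, with a Content-Disposition header per field and a closing `--boundary--` delimiter. A minimal sketch in Python (stdlib only; the field values are hypothetical) of the same framing:

```python
# Minimal multipart/form-data body builder, mirroring the Objective-C loop
# above: each field is framed by --boundary lines and a Content-Disposition
# header, and the body ends with the closing --boundary-- delimiter.
def build_multipart(fields, boundary="----1010101010"):
    parts = []
    for name, value in fields.items():
        parts.append("--%s" % boundary)
        parts.append('Content-Disposition: form-data; name="%s"' % name)
        parts.append("")  # blank line separates part headers from the value
        parts.append(value)
    parts.append("--%s--" % boundary)  # closing delimiter
    return "\r\n".join(parts).encode("utf-8")

body = build_multipart({"name": "treasure", "latitude": "29.0167"})
print(body.decode())
```

This is only meant to show the body layout; in practice the Content-Type header must carry the same boundary string, as the Objective-C code does.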
This could be an appropriate answer to the question of how to send a multipart form request. However, as it looks, this won't work, since the server apparently expects application/json as the content type. The OP should be more specific about what errors are returned and what the server expects.
Flask/Python socketio Modules Compatibility with Flask-SocketIO 5.3.6 and Socket.IO 4.7.4
I have a very basic Flask Socket-IO project setup:
from flask import Flask, render_template
from flask_socketio import SocketIO, emit
app: Flask = Flask(__name__)
app.config['SECRET_KEY'] = 'top-secret'
app.debug = True # for development
socketio: SocketIO = SocketIO(app, cors_allowed_origins='*')
@app.route('/join', methods=['GET'])
def join():
return render_template('index.html')
@socketio.on('connect')
def connect():
# TODO: implement connection logic
emit('after connect', {'data': 'connected'})
@socketio.on('disconnect')
def disconnect():
# TODO: implement disconnection logic
pass
if __name__ == '__main__':
socketio.run(app, port='8000', allow_unsafe_werkzeug=True)
When I run it, I get this error message:
The client is using an unsupported version of the Socket.IO or Engine.IO protocols
I've read this post and understand this as a compatibility issue between client and server Socket.IO versions.
However, I'm essentially using the latest versions of all the dependencies, which according to the book and the other book is supposed to work. I don't know which dependency I should downgrade, nor by how much.
My current versions on server:
Flask-SocketIO 5.3.6
python-engineio 4.9.0
python-socketio 5.11.1
I'm using https://cdn.socket.io/4.7.4/socket.io.min.js in my index.html, so I'd assume I'm using 4.7.4 for client version.
Since it's a fresh project, I'm trying to use the newest versions of the dependencies. Does anyone know what would be the minimum (or close to minimum) downgrade required to get this working?
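For reference, the pairing of server and JS client versions can be written as a small lookup table. The values below are my reading of the Flask-SocketIO documentation's version-compatibility table, so verify them against the docs before relying on them:

```python
# Major versions of the JS socket.io client supported by each Flask-SocketIO
# major release (per my reading of the Flask-SocketIO docs' compatibility
# table -- verify against the docs).
SUPPORTED_JS_CLIENTS = {
    4: (1, 2),  # Flask-SocketIO 4.x speaks the older Socket.IO protocol
    5: (3, 4),  # Flask-SocketIO 5.x expects JS client 3.x or 4.x
}

def compatible(server_major, client_major):
    return client_major in SUPPORTED_JS_CLIENTS.get(server_major, ())

print(compatible(5, 4))  # True
```

By this table the versions in the question should be compatible, which would point at something else, e.g. a cached or second, older socket.io script being loaded by the page.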
Please include complete server logs in your question.
less-middleware doesn't work - "Express 500 Error: Unrecognised input"
I'm using the Express framework with less-middleware and the Jade template engine.
When I try to get my css file in the browser ("/css/style.css") I get the error
"Express 500 Error: Unrecognised input"
Here is basic setup in app.js
var app = express();
// all environments
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
app.set('view options', {
layout: false
});
app.use(express.favicon());
app.use(express.logger('dev'));
app.use(express.bodyParser());
app.use(express.methodOverride());
app.use(require('less-middleware')({ src: __dirname + '/public' }));
app.use(express.static(path.join(__dirname, 'public')));
app.use(app.router);
// development only
if ('development' == app.get('env')) {
app.use(express.errorHandler());
}
app.get("/api/places", api.getAllPlaces);
app.get("/api/place/:id", api.getOnePlace);
app.all('*', routes.index);
http.createServer(app).listen(app.get('port'), function(){
console.log('Express server listening on port ' + app.get('port'));
});
Any help appreciated! Thanks.
Can you post your filesystem layout under the public directory and the content of public/css/style.less?
I think I've resolved this issue just after asking it. It was simply a typo in the less file. Thanks
OK, please answer your own question and accept it.
FYI if ('development' == app.get('env')) is synonymous with app.configure('development', function() { ... }). Also, it's generally considered better, performance wise, to configure the express router before the static file server (at least if you don't provide a named route for it). Finally, you shouldn't need to call app.use(app.router) explicitly, it is implicitly added when you register your first route (in your case app.get("/api/places", api.getAllPlaces);)
That was just a typo in the *.less file. Thanks!
Number of integer solutions $(x, y)$ of $x(x+6) = y^2 + k$ for different integer values of $k$
Let $n$ be the number of pairs $(x, y)$ of integer solutions to the following equation:$$x(x+6) = y^2 + k$$
Given an integer $m$, can $k$ be assigned an integer value so that $n = m$?
Shouldn't $n$ depend on $k$ in the first place?
A hint: Put $z=x+3$ and simplify to an equation containing $z^2-y^2$. You can then apply a well-known analysis of differences between squares equal to a given integer, as in answers to this question.
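Working out the hint's substitution, as a sketch: putting $z = x+3$ gives
$$x(x+6) = (z-3)(z+3) = z^2 - 9,$$
so the equation becomes $z^2 - y^2 = k + 9$, i.e. $(z-y)(z+y) = k+9$. Writing $k+9 = de$ with $z-y = d$ and $z+y = e$ forces $z = (d+e)/2$ and $y = (e-d)/2$, so each factorization of $k+9$ into two factors of the same parity yields one solution; counting $n$ then reduces to counting such factorizations.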
jQuery multiple filters by class
I am building a filter UI with multiple filter options. I want to filter on classes, but I can't get it working. I managed to create multiple filters, and these filters can also be combined. But if you then change one of these filters, the results do not update:
First, change filter 1.
Then change filter 2 (location).
If you now change filter 1 again (with filter 2 unchanged), the results do not update.
$('select').change(function() {
var current = this.value;
$.each($('#FilterContainer').find('div.all').not('.hidden').not('.' + current), function() {
$(this).addClass('hidden');
});
$.each($('#FilterContainer').find('div.all').is('.' + current), function() {
$(this).removeClass('hidden');
});
});
.hidden { display: none; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<p>Filter:</p>
<select class="filterby">
<option value="all"><h5>Show All</h5></option>
<option value="1"><h5>One</h5></option>
<option value="2"><h5>Two</h5></option>
<option value="3"><h5>Three</h5></option>
</select>
<p>Location:</p>
<select class="filterby">
<option value="all"><h5>All Locations</h5></option>
<option value="nj"><h5>NJ</h5></option>
<option value="ny"><h5>NY</h5></option>
<option value="pa"><h5>PA</h5></option>
</select>
<div id="FilterContainer">
<div class="all 1 nj">Test One NJ</div>
<div class="all 1 ny">Test One NY</div>
<div class="all 1 pa">Test One PA</div>
<div class="all 2 nj">Test Two NJ</div>
<div class="all 2 ny">Test Two NY</div>
<div class="all 2 pa">Test Two PA</div>
<div class="all 3 nj">Test Three NJ</div>
<div class="all 3 ny">Test Three NY</div>
<div class="all 3 pa">Test Three PA</div>
</div>
(or on jsfiddle)
Please always try to add a [mcve] in the question itself (preferable as snippet -> <> / Ctrl+M) and not only a link to an external resource that may be not available (offline, blocked, ...).
The "filter" selects have invalid markup. <option> nodes can only have Text (text nodes and phrasing content) but no heading elements.
Hi Andreas,
Sorry, I am new here and still discovering how things work. Thanks for your help!
@morrits assign ids to your select dropdowns to make life easier. otherwise the value of 'this' can confuse things. Here's a fiddle: http://jsfiddle.net/RachGal/ejsnu3bq/
No problem :) You are welcome
Hi @RachelGallen, it seems that it is not working properly.
If I click filter 1 first, nothing happens; if I first use another filter and then filter 1, it works.
And finally, if I want a third filter (for example countries), how can I achieve this? I guess it works the same way, but I can't get it working.
Sorry, hit enter too early; see my comment above. You can check the option with a country filter here: https://stackoverflow.com/questions/56314940/jquery-multiple-filters-by-class?answertab=oldest#tab-top
@morrits it's only meant to be a starting point (otherwise I'd have posted an answer). SO isn't a code-writing service; I'm sure you can improve on it. I'm making food at the minute. I'm sure you take the point that assigning individual IDs makes it easier to retrieve accurate values anyway.
@RachelGallen yes, you are right. I'm going to play with it! Thanks again.
You might want to reset all the elements on each user interaction by removing .hidden class. At the top of your .change() function, try this:
.removeClass('hidden');
Once you remove .hidden from all the elements, you can add the class to the elements that need it.
In the JSFiddle example, you can write this this way:
$('select').change(function () {
var current = this.value;
$.each($('#FilterContainer').find('div.all'), function () {
$(this).removeClass('hidden');
});
$.each($('#FilterContainer').find('div.all').not('.' + current), function () {
$(this).addClass('hidden');
});
});
Thank you very much for your quick response. Appreciate it!
Sorry, I included the wrong code in the post. See the jsfiddle for an example or the (new) code above.
| common-pile/stackexchange_filtered |
Is the multiplicative order of a number always equal to the order of its multiplicative inverse?
Is it true that $ord_{n}(a)=ord_{n}(\bar{a})$ $\forall n$?
Here, $\bar{a}$ refers to the multiplicative inverse of $a$ modulo $n$ and $ord_{n}(a)$ refers to the multiplicative order of $a$ modulo $n$.
Please define the notation that you're using. What does $\bar a$ mean? Does ord${}_n(a)$ mean the order of $a$ (in the multiplicative group) modulo $n$?
Yes, it is. Since $a \bar{a}=1$, it follows that for any positive integer $k$ we have $a^k (\bar{a})^k=1$. It follows that $a^k=1$ if and only if $(\bar{a})^k=1$. In particular, if $k$ is the smallest positive integer such that $a^k=1$, then $k$ is the smallest positive integer such that $(\bar{a})^k=1$.
The answer is yes.
Hint: note that $a^m (\bar a)^n = a^{m-n}$.
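The claim is also easy to sanity-check numerically. A small pure-Python sketch (the helper name is made up; pow(a, -1, n) needs Python 3.8+):

```python
from math import gcd

def mult_order(a, n):
    """Smallest k >= 1 with a**k == 1 (mod n); requires gcd(a, n) == 1."""
    assert gcd(a, n) == 1
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

# Check ord_n(a) == ord_n(a^{-1}) for every unit modulo n.
n = 15
for a in range(1, n):
    if gcd(a, n) != 1:
        continue
    a_inv = pow(a, -1, n)  # modular inverse (Python 3.8+)
    assert mult_order(a, n) == mult_order(a_inv, n)
print("ord_n(a) == ord_n(a_inv) holds for every unit mod", n)
```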
Mail_Queue PHP Crond - mail sends but PHP process hangs
This has been causing me a problem since I upgraded to PHP 5.3. The problem didn't happen with PHP 5.2.
I have a PHP script (which is pretty much the stock Mail_Queue script) to send messages.
I'm executing the script via cron:
php /home/public_html/send_messages.php
Email is successfully sent; however, the PHP process hangs, along with the cron job and postfix. Killing the PHP process solves the problem.
I get this output when executing the cron job (via webmin):
PHP Notice: Error in sending mail: Mail Queue Error: Cannot initialize container in /usr/share/pear/PEAR.php on line 873
When I execute the PHP script from a browser this problem doesn't happen.
Javascript class import call expects exactly one argument
So I read a couple of questions similar to mine, but unfortunately they weren't that similar, or I just couldn't figure it out.
without further ado, this my export in Book.js
export default Book;
and my import in script.js
import Book from "./Book.js";
and link in html
<script type="module" src="script.js"> </script>
<script type="module" src="Book.js"> </script>
So basically, how do I fix "class import call expects exactly one argument"?
Or just: how do I import the class and use it in the main script properly?
Hope this gets approved and posted like it is supposed to be,
thanks in advance.
EDIT: SOLVED => for some reason it was not a code issue; I reset the dev tools in Chrome and Firefox and everything worked.
What's the question?
Sorry, I thought it was clear:
how do I fix the "class import call expects exactly one argument" error in the console?
It's just that my code works in Safari but not in Chrome or Firefox.
Solved it: I reset the dev tools in both browsers. Thanks for your time!
how can I change the mod key in awesome wm from windows key to meta?
I recently got a Happy Hacking Keyboard Professional 2, which has a Meta key instead of Windows keys. Since I used to use the Windows key to switch between screens in awesome window manager (local modkey = "Mod4"), how can I change that to use Meta instead?
right now I have in my rc.lua
local modkey = "Mod4"
local altkey = "Mod1"
Changing it to local modkey = "Mod1" should work
Add bank account to stripe connect custom accounts
I'm using Stripe Connect with Node.js and TypeScript on a platform that handles payments for third-party services we call "partners". We opted to use Stripe Connect's custom accounts to have full control over the user experience and all the related aspects concerning our partners in order to create a Stripe Connect ID for them. We already have the necessary information to create the custom account, but there's something we don't understand: How do we connect the bank account to this custom Stripe Connect account?
What data is needed for the bank account to be valid? And how can I verify if this data is correct before it's sent to the server?
I only have the following
const { id } = await stripe.accounts.create({
...
type: 'custom',
country,
email,
business_type: 'individual',
...
});
I need help with this Stripe flow.
Did you have a look at the documentation?
Have a look at the various tabs and drilldowns on https://docs.stripe.com/connect/payouts-bank-accounts
You can build frontend forms, either using Stripe's Financial Connection product that links bank accounts, or your own basic HTML forms for collecting the bank account numbers, creating tokens with https://docs.stripe.com/js/tokens/create_token?type=bank_account and passing to https://docs.stripe.com/api/accounts/create#create_account-external_account
Is this the proper way to hang an exterior door?
My contractor's team is building us a new detached garage.
This is how the pre-hung door is attached to the frame.
Is this acceptable? It seems like it could be easily kicked in (which is a real concern in this city).
Is this to code?
What can I suggest to have them improve this before they hang drywall and trim over it?
What, specifically, is concerning to you?
Is this well constructed? Is this to code? What improvements do I need to ask them to do if either of the first two questions is "no"?
Well constructed? Maybe. Did they put a long screw through each hinge into the frame? (They should.) They clearly used long screws at the deadbolt strike. (Good.) Code doesn’t really cover install details afaik. They could improve the install with a can of window and door spray foam. (Glues everything together.) Doors, frankly, can be kicked in most any time. Motion activated lights outside would be one of many ways to protect your stuff.
@AloysiusDefenestrate technically, we can't see how long the strike screws are—we can only see that they went into the OSB jamb a distance equal to the taper on the tip of the screw or more.
Fair comment, @Huesmann, and relatively easy to check. I should have also noted that the reveals and plumb/level should be good before spray foaming.
The limited pics show what appears to be a normal door installation. If your concern is security, you should have a discussion with the contractor about that and how he may be able to enhance the security of the door. Be prepared to pay a bit more unless extra security was addressed in your contract.
Thank you all for your feedback. I appreciate the advice!
Can't get amCharts WordPress dataloader to work
I am trying to use Data Loader from amCharts in WordPress, but with no success. I am starting from a default Stock Chart, replacing the JS dataSets structure with the one from the GitHub site, and then changing the corresponding field values, but I always get an empty page. I am using a CSV file on the same server to make sure it is not an external-source access problem.
Does someone maybe have a complete code?
Here is my non-working code so far:
var chart = AmCharts.makeChart("chartdiv", {
"type": "stock",
"color": "#fff",
"dataSets": [{
"title": "MSFT",
"fieldMappings": [{
"fromField": "Open",
"toField": "open"
}, {
"fromField": "High",
"toField": "high"
}, {
"fromField": "Low",
"toField": "low"
}, {
"fromField": "Close",
"toField": "close"
}, {
"fromField": "Volume",
"toField": "volume"
}],
"compared": false,
"categoryField": "Date",
/**
* data loader for data set data
*/
"dataLoader": {
"url": "uploads/2015/12/table.csv",
"format": "csv",
"showCurtain": true,
"showErrors": true,
"async": true,
"reverse": true,
"delimiter": ",",
"useColumnNames": true
},
}],
//"dataDateFormat": "YYYY-MM-DD",
"panels": [{
"title": "Value",
"percentHeight": 70,
"stockGraphs": [{
"type": "candlestick",
"id": "g1",
"openField": "open",
"closeField": "close",
"highField": "high",
"lowField": "low",
"valueField": "close",
"lineColor": "#fff",
"fillColors": "#fff",
"negativeLineColor": "#db4c3c",
"negativeFillColors": "#db4c3c",
"fillAlphas": 1,
"comparedGraphLineThickness": 2,
"columnWidth": 0.7,
"useDataSetColors": false,
"comparable": true,
"compareField": "close",
"showBalloon": false,
"proCandlesticks": true
}],
"stockLegend": {
"valueTextRegular": undefined,
"periodValueTextComparing": "[[percents.value.close]]%"
}
},
{
"title": "Volume",
"percentHeight": 30,
"marginTop": 1,
"columnWidth": 0.6,
"showCategoryAxis": false,
"stockGraphs": [{
"valueField": "volume",
"openField": "open",
"type": "column",
"showBalloon": false,
"fillAlphas": 1,
"lineColor": "#fff",
"fillColors": "#fff",
"negativeLineColor": "#db4c3c",
"negativeFillColors": "#db4c3c",
"useDataSetColors": false
}],
"stockLegend": {
"markerType": "none",
"markerSize": 0,
"labelText": "",
"periodValueTextRegular": "[[value.close]]"
},
"valueAxes": [{
"usePrefixes": true
}]
}
],
panelsSettings: {
color: "#fff",
plotAreaFillColors: "#333",
plotAreaFillAlphas: 1,
marginLeft: 60,
marginTop: 5,
marginBottom: 5
},
chartScrollbarSettings: {
graph: "g1",
graphType: "line",
usePeriod: "WW",
backgroundColor: "#333",
graphFillColor: "#666",
graphFillAlpha: 0.5,
gridColor: "#555",
gridAlpha: 1,
selectedBackgroundColor: "#444",
selectedGraphFillAlpha: 1
},
categoryAxesSettings: {
equalSpacing: true,
gridColor: "#555",
gridAlpha: 1
},
valueAxesSettings: {
gridColor: "#555",
gridAlpha: 1,
inside: false,
showLastLabel: true
},
chartCursorSettings: {
pan: true,
valueLineEnabled: true,
valueLineBalloonEnabled: true
},
legendSettings: {
color: "#fff"
},
stockEventsSettings: {
showAt: "high",
type: "pin"
},
balloon: {
textAlign: "left",
offsetY: 10
},
periodSelector: {
position: "bottom",
periods: [{
period: "DD",
count: 10,
label: "10D"
}, {
period: "MM",
count: 1,
label: "1M"
}, {
period: "MM",
count: 6,
label: "6M"
}, {
period: "YYYY",
count: 1,
label: "1Y"
}, {
period: "YYYY",
count: 2,
selected: true,
label: "2Y"
}, {
period: "YTD",
label: "YTD"
}, {
period: "MAX",
label: "MAX"
}]
}
});
}
That is a lot of code. Can you tell us in what way it is not working? What were the desired results? Can you detail any error messages you may have encountered?
Hi Martynasma, glad to see you on board. Where can I find the error messages? Sorry, I am a noob/beginner with WP/amCharts. I have now picked your Data Loader example from https://github.com/amcharts/dataloader/blob/master/examples/stock_csv_data_and_events.html
and put your CSV files from data on my server. I pasted the JavaScript section into the amCharts JavaScript section in WordPress and made sure dataloader.min is in the resources. The amCharts preview shows me an empty chart widget with all buttons but no graph. What I finally want to do is load Yahoo Finance data into amCharts.
If you can't share a link to the page with the chart, first thing to do is to check browser console for any errors. To open browser console press F12 and select Console tab.
responsive.min.js.map?ver=1.0.13:1 Uncaught SyntaxError: Unexpected token :
amcharts.js?ver=1.0.13:28 Uncaught TypeError: Cannot read property 'translate' of undefinedd.ChartCursor.d.Class.update @ amcharts.js?ver=1.0.13:28e.AmRectangularChart.e.Class.update @ serial.js?ver=1.0.13:3e.AmSerialChart.e.Class.update @ serial.js?ver=1.0.13:7d.update @ amcharts.js?ver=1.0.13:1
I am sorry, I accidentally loaded responsive.min. Now, with dataloader.min, the error looks this way:
Uncaught TypeError: Cannot read property 'chart' of undefinede.AmStockChart.e.Class.parseEvents @ amstock.js?ver=1.0.13:1e.AmStockChart.e.Class.parseStockData @ amstock.js?ver=1.0.13:1e.AmStockChart.e.Class.updateData @ amstock.js?ver=1.0.13:1e.AmStockChart.e.Class.validateDataReal @ amstock.js?ver=1.0.13:4(anonymous function) @ amstock.js?ver=1.0.13:4
amcharts.js?ver=1.0.13:28
Uncaught TypeError: Cannot read property 'translate' of undefinedd.ChartCursor.d.Class.update @ amcharts.js?ver=1.0.13:28e.AmRectangularChart.e.Class.update @ serial.js?ver=1.0.13:3e.AmSerialChart.e.Class.update @ serial.js?ver=1.0.13:7d.update @ amcharts.js?ver=1.0.13:1
If someone had a working WordPress example with the Data Loader, I guess it would help me a lot!
Pandas extract columns and rows by ID from a distance matrix
I have a distance matrix with IDs as column and row names:
A B C D
A 0 1 2 3
B 1 0 4 5
C 2 4 0 6
D 3 5 6 0
How to efficiently extract values from a large matrix, e.g. for IDs A and C to get this matrix:
A C
A 0 2
C 2 0
Edit: missing IDs in the matrix should be ignored.
Use DataFrame.loc to get values by labels:
vals = ['A','C']
df = df.loc[vals, vals]
print (df)
A C
A 0 2
C 2 0
EDIT: If some values do not match and you need to omit them, add Index.intersection:
vals = ['J','A','C']
new = df.columns.intersection(vals, sort=False)
df = df.loc[new, new]
print (df)
A C
A 0 2
C 2 0
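The select-with-intersection logic can also be sketched without pandas, using a plain dict-of-dicts in place of the DataFrame (the helper name is made up), to show how missing IDs are ignored:

```python
# Distance matrix from the question as a dict of rows.
dist = {
    "A": {"A": 0, "B": 1, "C": 2, "D": 3},
    "B": {"A": 1, "B": 0, "C": 4, "D": 5},
    "C": {"A": 2, "B": 4, "C": 0, "D": 6},
    "D": {"A": 3, "B": 5, "C": 6, "D": 0},
}

def submatrix(matrix, ids):
    # Keep only requested IDs that actually exist (mirrors
    # Index.intersection), preserving the matrix's own ordering.
    wanted = set(ids)
    keep = [i for i in matrix if i in wanted]
    return {r: {c: matrix[r][c] for c in keep} for r in keep}

print(submatrix(dist, ["J", "A", "C"]))
# {'A': {'A': 0, 'C': 2}, 'C': {'A': 2, 'C': 0}}
```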
Thanks, I have some IDs in my list which are not in the matrix. How do I ignore them? Otherwise I get a KeyError.
bad URI(is not URI?): {"message":"d64","type":"success"}
I am using HTTParty to send an API request, and I want to get the response after the request is sent.
url = HTTParty.post("https://example.com/api/sendhttp.php",
:query => { :authkey => "authkeyvalue",
:mobiles => mobileNos,
:message => messages,
:sender => "senderid",
:route => "routeid",
:response => 'json'
})
response = HTTParty.get(url)
puts response.body, response.code, response.message, response.headers.inspect
But when I run the above code it throws a bad URI(is not URI?): {"message":"d64","type":"success"} error. How can I solve it and get the response?
Does your real url contain spaces? : https://github.com/jnunemaker/httparty/issues/180
This looks like you are running a POST to https://example.com/api/sendhttp.php with JSON format, this succeeds, and you're getting a response of:
{"message":"d64","type":"success"}
So your url variable now contains {"message":"d64","type":"success"}, which is clearly not a valid URL, so when you try to do a GET on it, you get an error. You've already got a response from the first POST; you should perhaps parse this? You don't need to do
response = HTTParty.get(url)
unless you're expecting a second GET request to a URL which is returned by the first.
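The same point, language-agnostically: the POST already handed back the JSON payload, so the fix is to parse it rather than fetch it. A minimal Python stdlib sketch using the exact response string from the question:

```python
import json

# The body the POST came back with -- it is data, not a URL.
raw = '{"message":"d64","type":"success"}'
resp = json.loads(raw)

if resp["type"] == "success":
    print("message id:", resp["message"])  # message id: d64
```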
Combine RGB Of Different Dimension
I am a newbie to Matlab programming.
I have R, G, B values of different sizes (for example R is 30000x1 and G is 35000x1) and want to make them the same size in order to use cat(3, RColor, GColor, BColor); to combine them and produce an image.
You can't, not unless you drop some elements or interpolate the vectors to be of the same size
@Amro can I store them in, for example, a 40000x1 matrix and have 0 for the empty indexes?
yes you can do that too (padding by zeros)
It might help to add something about the meanings of these vectors, and what you hope to achieve visually by combining them. Also it never hurts to show the code you have so far, because it focuses the question, making it easier to answer, and for you to get what you really need to help, not a best guess.
You could resample all your R, G and B vectors to have the same length.
Choose an arbitrary target length like m = 4000; resample then interpolates each vector by a factor of m and decimates it by a factor of its original length.
m = 4000;
R = double(R);
G = double(G);
B = double(B);
R = resample(R,m,length(R));
G = resample(G,m,length(G));
B = resample(B,m,length(B));
ImageRGB = cat(3,R,G,B);
Then you could change them back to R = uint8(R);, if you wish.
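The resample-to-a-common-length idea is language-agnostic; here is a pure-Python sketch using simple linear interpolation (a stand-in for MATLAB's polyphase resample, with a made-up helper name and toy data):

```python
def resample_linear(xs, m):
    """Linearly interpolate the list xs to length m (m >= 2)."""
    n = len(xs)
    if n == 1:
        return [xs[0]] * m
    out = []
    for i in range(m):
        t = i * (n - 1) / (m - 1)  # fractional position in the original
        lo = int(t)
        hi = min(lo + 1, n - 1)
        frac = t - lo
        out.append(xs[lo] * (1 - frac) + xs[hi] * frac)
    return out

R = resample_linear([0, 10, 20], 5)   # channel of length 3 -> 5
G = resample_linear([5, 5, 5, 5], 5)  # channel of length 4 -> 5
rgb = list(zip(R, G, [0] * 5))        # per-pixel analogue of cat(3, ...)
print(R)  # [0.0, 5.0, 10.0, 15.0, 20.0]
```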
Nice, however, you should add a bit of information of what you are proposing, not just a piece of code!
Thanks, but I get "The input signal X must be a double-precision vector" on resample
@Arash, Is your data uint? or double?
@Arash, You should change it to double.