Dataset schema:
- qid: int64 (values 1 to 74.7M)
- question: string (15 to 58.3k chars)
- date: string (10 chars)
- metadata: list
- response_j: string (4 to 30.2k chars)
- response_k: string (11 to 36.5k chars)
7,083,568
Trying to use SQL Server with a jQuery autocomplete. I can get the following to work when just checking the term entered with a matching domain, but I would also like the autocomplete to check if a match is found for the contact name (first name and last name). Is there a way (like in mySQL) to concat the fname and lname? Domain only: ($term = data entered in autocomplete box) ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%'; ``` My attempt at matching name as well: ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%' OR pers_firstname + ' ' + pers_lastname LIKE '%".$term."%'; ```
2011/08/16
[ "https://Stackoverflow.com/questions/7083568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/708831/" ]
1. Yes, some pages will need different combinations of style sheets. Each combination must be cached individually. Unfortunately, the browser won't know that there isn't a difference between `?stylesheets=a.css,b.css` and `?stylesheets=b.css,a.css` so both will need to be cached. 2. That's used to make sure the browser doesn't accidentally cache the dynamically generated stylesheet. It's unnecessary if you are using a decent minifier. Usually, the GUID is found by hashing the `last-modified` times of each file in the list. Like I said, most minifiers will automatically check for new versions of files and discard the old cached version. I would suggest [PHP Minify](http://code.google.com/p/minify/). Installation is as easy as copying the folder into your doc root. It also supports JavaScript compression with the Google Closure Compiler.
A1: Yes, I think the best solution is to generate one for every combination. You could cache every file separately and combine them, but I think it's faster when you cache them together. A2: You can calculate the GUID/hash from the checksums of the files, like: ``` sha1( sha1(file1) + sha1(file2) + ... ) ``` It's also a good idea to cache this ID generation, with a very short TTL. This simple implementation has some interesting functions, like ETag generation: <http://rakaz.nl/projects/combine/combine.phps>
7,083,568
Trying to use SQL Server with a jQuery autocomplete. I can get the following to work when just checking the term entered with a matching domain, but I would also like the autocomplete to check if a match is found for the contact name (first name and last name). Is there a way (like in mySQL) to concat the fname and lname? Domain only: ($term = data entered in autocomplete box) ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%'; ``` My attempt at matching name as well: ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%' OR pers_firstname + ' ' + pers_lastname LIKE '%".$term."%'; ```
2011/08/16
[ "https://Stackoverflow.com/questions/7083568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/708831/" ]
1. Yes, some pages will need different combinations of style sheets. Each combination must be cached individually. Unfortunately, the browser won't know that there isn't a difference between `?stylesheets=a.css,b.css` and `?stylesheets=b.css,a.css` so both will need to be cached. 2. That's used to make sure the browser doesn't accidentally cache the dynamically generated stylesheet. It's unnecessary if you are using a decent minifier. Usually, the GUID is found by hashing the `last-modified` times of each file in the list. Like I said, most minifiers will automatically check for new versions of files and discard the old cached version. I would suggest [PHP Minify](http://code.google.com/p/minify/). Installation is as easy as copying the folder into your doc root. It also supports JavaScript compression with the Google Closure Compiler.
For your first question, I would just minify all your (often used) CSS files together into one file and not bother with all possible combinations. That way, every subsequent request can simply use the same CSS file from the local cache instead of fetching a different minified file based on whatever combination is required. Use proper ID and class names to ensure that you don't get style collisions when it all ends up in one file. As for your second question, this is usually done through some sort of build system (e.g., the same system you call the CSS minifier from). Each time the CSS files are rebuilt, a build number or version number is generated (it could be simply incremental, or the MD5 hash of the file), which is then used by your application in the generated stylesheet link URLs.
7,083,568
Trying to use SQL Server with a jQuery autocomplete. I can get the following to work when just checking the term entered with a matching domain, but I would also like the autocomplete to check if a match is found for the contact name (first name and last name). Is there a way (like in mySQL) to concat the fname and lname? Domain only: ($term = data entered in autocomplete box) ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%'; ``` My attempt at matching name as well: ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%' OR pers_firstname + ' ' + pers_lastname LIKE '%".$term."%'; ```
2011/08/16
[ "https://Stackoverflow.com/questions/7083568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/708831/" ]
1. Yes, some pages will need different combinations of style sheets. Each combination must be cached individually. Unfortunately, the browser won't know that there isn't a difference between `?stylesheets=a.css,b.css` and `?stylesheets=b.css,a.css` so both will need to be cached. 2. That's used to make sure the browser doesn't accidentally cache the dynamically generated stylesheet. It's unnecessary if you are using a decent minifier. Usually, the GUID is found by hashing the `last-modified` times of each file in the list. Like I said, most minifiers will automatically check for new versions of files and discard the old cached version. I would suggest [PHP Minify](http://code.google.com/p/minify/). Installation is as easy as copying the folder into your doc root. It also supports JavaScript compression with the Google Closure Compiler.
Take a look at <https://github.com/web-developer/Resource-Handler>. It allows you to merge all your CSS into one file, and it also compresses, minifies, and even caches your CSS. It works for your JS as well. It requires the least configuration and hassle of the options suggested here. **[RESOURCE HANDLER](https://github.com/web-developer/Resource-Handler)**
7,083,568
Trying to use SQL Server with a jQuery autocomplete. I can get the following to work when just checking the term entered with a matching domain, but I would also like the autocomplete to check if a match is found for the contact name (first name and last name). Is there a way (like in mySQL) to concat the fname and lname? Domain only: ($term = data entered in autocomplete box) ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%'; ``` My attempt at matching name as well: ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%' OR pers_firstname + ' ' + pers_lastname LIKE '%".$term."%'; ```
2011/08/16
[ "https://Stackoverflow.com/questions/7083568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/708831/" ]
[Assetic](https://github.com/kriswallsmith/assetic) seems to be good at bundling JS/CSS assets and minifying them while putting them in cache.
For your first question, I would just minify all your (often used) CSS files together into one file and not bother with all possible combinations. That way, every subsequent request can simply use the same CSS file from the local cache instead of fetching a different minified file based on whatever combination is required. Use proper ID and class names to ensure that you don't get style collisions when it all ends up in one file. As for your second question, this is usually done through some sort of build system (e.g., the same system you call the CSS minifier from). Each time the CSS files are rebuilt, a build number or version number is generated (it could be simply incremental, or the MD5 hash of the file), which is then used by your application in the generated stylesheet link URLs.
7,083,568
Trying to use SQL Server with a jQuery autocomplete. I can get the following to work when just checking the term entered with a matching domain, but I would also like the autocomplete to check if a match is found for the contact name (first name and last name). Is there a way (like in mySQL) to concat the fname and lname? Domain only: ($term = data entered in autocomplete box) ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%'; ``` My attempt at matching name as well: ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%' OR pers_firstname + ' ' + pers_lastname LIKE '%".$term."%'; ```
2011/08/16
[ "https://Stackoverflow.com/questions/7083568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/708831/" ]
[Assetic](https://github.com/kriswallsmith/assetic) seems to be good at bundling JS/CSS assets and minifying them while putting them in cache.
Take a look at <https://github.com/web-developer/Resource-Handler>. It allows you to merge all your CSS into one file, and it also compresses, minifies, and even caches your CSS. It works for your JS as well. It requires the least configuration and hassle of the options suggested here. **[RESOURCE HANDLER](https://github.com/web-developer/Resource-Handler)**
7,083,568
Trying to use SQL Server with a jQuery autocomplete. I can get the following to work when just checking the term entered with a matching domain, but I would also like the autocomplete to check if a match is found for the contact name (first name and last name). Is there a way (like in mySQL) to concat the fname and lname? Domain only: ($term = data entered in autocomplete box) ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%'; ``` My attempt at matching name as well: ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%' OR pers_firstname + ' ' + pers_lastname LIKE '%".$term."%'; ```
2011/08/16
[ "https://Stackoverflow.com/questions/7083568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/708831/" ]
A1: Yes, I think the best solution is to generate one for every combination. You could cache every file separately and combine them, but I think it's faster when you cache them together. A2: You can calculate the GUID/hash from the checksums of the files, like: ``` sha1( sha1(file1) + sha1(file2) + ... ) ``` It's also a good idea to cache this ID generation, with a very short TTL. This simple implementation has some interesting functions, like ETag generation: <http://rakaz.nl/projects/combine/combine.phps>
For your first question, I would just minify all your (often used) CSS files together into one file and not bother with all possible combinations. That way, every subsequent request can simply use the same CSS file from the local cache instead of fetching a different minified file based on whatever combination is required. Use proper ID and class names to ensure that you don't get style collisions when it all ends up in one file. As for your second question, this is usually done through some sort of build system (e.g., the same system you call the CSS minifier from). Each time the CSS files are rebuilt, a build number or version number is generated (it could be simply incremental, or the MD5 hash of the file), which is then used by your application in the generated stylesheet link URLs.
7,083,568
Trying to use SQL Server with a jQuery autocomplete. I can get the following to work when just checking the term entered with a matching domain, but I would also like the autocomplete to check if a match is found for the contact name (first name and last name). Is there a way (like in mySQL) to concat the fname and lname? Domain only: ($term = data entered in autocomplete box) ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%'; ``` My attempt at matching name as well: ``` SELECT distinct comp_companyid, comp_name, comp_emailaddress, comp_website, pers_firstname, pers_lastname, addr_address1, addr_address2, addr_city, addr_state, addr_postcode FROM company, person, address, address_link WHERE pers_companyid = comp_companyid AND addr_addressid = adli_addressid AND adli_companyid = comp_companyid AND comp_website LIKE '%".$term."%' OR pers_firstname + ' ' + pers_lastname LIKE '%".$term."%'; ```
2011/08/16
[ "https://Stackoverflow.com/questions/7083568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/708831/" ]
A1: Yes, I think the best solution is to generate one for every combination. You could cache every file separately and combine them, but I think it's faster when you cache them together. A2: You can calculate the GUID/hash from the checksums of the files, like: ``` sha1( sha1(file1) + sha1(file2) + ... ) ``` It's also a good idea to cache this ID generation, with a very short TTL. This simple implementation has some interesting functions, like ETag generation: <http://rakaz.nl/projects/combine/combine.phps>
Take a look at <https://github.com/web-developer/Resource-Handler>. It allows you to merge all your CSS into one file, and it also compresses, minifies, and even caches your CSS. It works for your JS as well. It requires the least configuration and hassle of the options suggested here. **[RESOURCE HANDLER](https://github.com/web-developer/Resource-Handler)**
1,330,330
I have a sheet with the following data: ``` Column A | Column B | Column C - E | Column F ==========|============|===================|============ Customer | Amount | Additional data | Paid (Y/N) ``` I would like to create a new worksheet with just the customers who haven't paid and the data from that row.
2018/06/11
[ "https://superuser.com/questions/1330330", "https://superuser.com", "https://superuser.com/users/913625/" ]
You could use a **Pivot Table**. * Simply select your data (headers included), then in the ribbon, click on `Insert > Pivot Table`. * Excel will create a new sheet and a pivot table, that you need to configure. Simply drag the relevant fields in the rows/columns space, and drag the field `Paid` in the filters space. * Finally, filter the pivot table and select `N`, to display only people who didn't pay. --- When you add/remove data, don't forget to update the pivot table: * By right-clicking the pivot table and selecting `Refresh` * In the ribbon, in the Data tab, by clicking `Refresh All` --- [More information for setting up and using Pivot Tables](https://www.excel-easy.com/data-analysis/pivot-tables.html)
Try this array formula: ``` =IFERROR(INDEX(Sheet2!$A:$A,SMALL(IF(Sheet2!$D$2:$D$8="N",ROW(Sheet2!$2:$8)),ROW(A1))),"") ``` Press **Ctrl+Shift+Enter** [![enter image description here](https://i.stack.imgur.com/sitI8.gif)](https://i.stack.imgur.com/sitI8.gif)
1,330,330
I have a sheet with the following data: ``` Column A | Column B | Column C - E | Column F ==========|============|===================|============ Customer | Amount | Additional data | Paid (Y/N) ``` I would like to create a new worksheet with just the customers who haven't paid and the data from that row.
2018/06/11
[ "https://superuser.com/questions/1330330", "https://superuser.com", "https://superuser.com/users/913625/" ]
You could use a **Pivot Table**. * Simply select your data (headers included), then in the ribbon, click on `Insert > Pivot Table`. * Excel will create a new sheet and a pivot table, that you need to configure. Simply drag the relevant fields in the rows/columns space, and drag the field `Paid` in the filters space. * Finally, filter the pivot table and select `N`, to display only people who didn't pay. --- When you add/remove data, don't forget to update the pivot table: * By right-clicking the pivot table and selecting `Refresh` * In the ribbon, in the Data tab, by clicking `Refresh All` --- [More information for setting up and using Pivot Tables](https://www.excel-easy.com/data-analysis/pivot-tables.html)
Use a filter: 1. Select your entire table. 2. Click `Filter` on the Data ribbon. 3. Set the filter on column F by clicking the drop-down triangle button in the column header. Make sure only "No" is checked and then click OK. This will show you all the rows in your table where the Paid value is "No". To turn off the filter, you can either reset the filter on column F (i.e., make it show all values again, not just "No"), or you can click the `Filter` button in the Data ribbon again to remove the filter altogether.
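The filter step described above boils down to a one-line predicate. For comparison, this sketch applies the same logic in code to rows shaped like the question's table; the sample data and the column index for Paid are assumptions.

```javascript
// Rows follow the question's layout: A=Customer, B=Amount,
// C-E=additional data, F=Paid (Y/N). Keep only unpaid customers.
const rows = [
  ['Acme Ltd', 120, '-', '-', '-', 'Y'],
  ['Bolt Inc',  80, '-', '-', '-', 'N'],
  ['Crane Co', 200, '-', '-', '-', 'N'],
];

const PAID_COLUMN = 5; // column F

const unpaid = rows.filter(row => row[PAID_COLUMN] === 'N');

console.log(unpaid.map(row => row[0])); // [ 'Bolt Inc', 'Crane Co' ]
```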
1,330,330
I have a sheet with the following data: ``` Column A | Column B | Column C - E | Column F ==========|============|===================|============ Customer | Amount | Additional data | Paid (Y/N) ``` I would like to create a new worksheet with just the customers who haven't paid and the data from that row.
2018/06/11
[ "https://superuser.com/questions/1330330", "https://superuser.com", "https://superuser.com/users/913625/" ]
Use a filter: 1. Select your entire table. 2. Click `Filter` on the Data ribbon. 3. Set the filter on column F by clicking the drop-down triangle button in the column header. Make sure only "No" is checked and then click OK. This will show you all the rows in your table where the Paid value is "No". To turn off the filter, you can either reset the filter on column F (i.e., make it show all values again, not just "No"), or you can click the `Filter` button in the Data ribbon again to remove the filter altogether.
Try this array formula: ``` =IFERROR(INDEX(Sheet2!$A:$A,SMALL(IF(Sheet2!$D$2:$D$8="N",ROW(Sheet2!$2:$8)),ROW(A1))),"") ``` Press **Ctrl+Shift+Enter** [![enter image description here](https://i.stack.imgur.com/sitI8.gif)](https://i.stack.imgur.com/sitI8.gif)
57,450,967
Before upgrading from react native 0.59 to 0.60, all was good. Xcode 10.3 build error: ``` /bin/sh -c myApp.build/Script-6C5554832063F4750081EA9D.sh error: Could not get GOOGLE_APP_ID in Google Services file from build environment ** ARCHIVE FAILED ** ``` Again everything worked before upgrading firebase and react native. I had a working /ios/GoogleService-Info.plist In my project's Build Phases > Run Script ``` "${PODS_ROOT}/Fabric/run" ``` File: package:json ``` "react": "^16.9.0", "react-native": "^0.60.4", "react-native-firebase": "^5.5.6", ``` File: Podfile ``` pod 'Firebase/Core', '~> 6.3.0' pod 'Firebase/Messaging', '~> 6.3.0' pod 'Fabric', '~> 1.10.2' pod 'Crashlytics', '~> 3.13.2' ``` Before upgrading from react native 0.59 to 0.60, all was good. Can anyone advise what else I can check?
2019/08/11
[ "https://Stackoverflow.com/questions/57450967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245646/" ]
``` _CrtDumpMemoryLeaks(); ``` At this point, both of your `P1` and `P2` shared pointers still exist and are in scope, and they maintain a reference count. They have not been destroyed (but your dynamically-allocated pointer has been `delete`d and destroyed). So you shouldn't be surprised that your debugging library still finds some allocated memory: you called this debugging function before `P1` and `P2` **get destroyed when your `main()` returns**. Destroy `P1` and `P2` first, then try again. The easiest way to do this is to simply put them in an inner scope. ``` { SharedPtr P1(DBG_NEW double(10)); //Leak SharedPtr P2(DBG_NEW double(20)); //Leak // The rest of your code } _CrtDumpMemoryLeaks(); ```
Objects created in a certain scope are destroyed at the end of that scope. So `P1` and `P2` will be destroyed at the end of `main()`, after the call to `_CrtDumpMemoryLeaks()`. So it will report them as leaks since they haven't been destroyed yet. Move the test into a separate function, i.e. `test()` and call it from `main()` like this: ``` void test() { SharedPtr P1(DBG_NEW double(10)); SharedPtr P2(DBG_NEW double(20)); SharedPtr* P = DBG_NEW SharedPtr(DBG_NEW double(30)); delete P; //No leak } int main() { test(); _CrtDumpMemoryLeaks(); } ```
43,903,767
I have the following express endpoint for uploading to Google Cloud storage. It works great and the response from the google api gives me a unique file name that I want to pass back to my front end: ``` app.post('/upload', (req, res) => { var form = new formidable.IncomingForm(), files = [], fields = []; form .on('field', function(field, value) { fields.push([field, value]); }) .on('file', function(field, file) { files.push([field, file]); }) .on('end', function() { console.log('-> upload done'); }); form.parse(req, function(err, fields, files){ var filePath = files.file.path; bucket.upload(filePath, function(err, file, apiResponse){ if (!err){ res.writeHead(200, {'content-type': 'text/plain'}); res.end("Unique File Name:" + file.name); }else{ res.writeHead(500); res.end(); } }); }); return; }); ``` I reach this endpoint by calling a short function which passes the file to it: ``` function upload(file) { var data = new FormData(); data.append('file', file); return fetch(`upload`,{ method: 'POST', body: data }); } const Client = { upload }; export default Client; ``` This function is called from my front end like this: ``` Client.upload(this.file).then((data) => { console.log(data); }); ``` This final `console.log(data)` logs the response to the console. However, I don't see anywhere the response that I wrote in (`"Unique File Name:" + file.name`) How I can retrieve this info from the response body on the client-side? The `data` looks like this when I console.log it: [![Screenshot of the data console.log](https://i.stack.imgur.com/0wl4a.png)](https://i.stack.imgur.com/0wl4a.png) This is the response I get when I POST a file to my endpoint using Postman: [![Screen shot of response using Postman](https://i.stack.imgur.com/pEsyV.png)](https://i.stack.imgur.com/pEsyV.png)
2017/05/10
[ "https://Stackoverflow.com/questions/43903767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5775341/" ]
Notice you're dealing with a [Response object](https://developer.mozilla.org/en-US/docs/Web/API/Response). You need to basically **read** the response stream with `Response.json()` or `Response.text()` (or via other methods) in order to see your data. Otherwise your response body will always appear as a locked readable stream. For example: ``` fetch('https://api.ipify.org?format=json') .then(response=>response.json()) .then(data=>{ console.log(data); }) ``` If this gives you unexpected results, you may want to inspect your response with [Postman](https://www.getpostman.com/).
I had a typo in my code as pointed out by [GabeRogan](https://stackoverflow.com/users/4833285/gabe-rogan) in [this comment](https://stackoverflow.com/questions/43903767/read-the-body-of-a-fetch-promise#comment74842732_43904074): > > Ok awesome. To be quite honest I have absolutely no clue why you're getting undefined, except that it might be some sort of misspelling error? > > > Here's my updated code for the front end which returns the response body text: ``` Client.upload(this.file).then(response => response.text()) .then((body) => { console.log(body); }); ``` `body` is a string that reads `Unique File Name: [FILE-NAME]` Here's a good explanation of the Fetch API and reading the response you get from the promise object: [Css tricks: Using Fetch](https://css-tricks.com/using-fetch/).
43,903,767
I have the following express endpoint for uploading to Google Cloud storage. It works great and the response from the google api gives me a unique file name that I want to pass back to my front end: ``` app.post('/upload', (req, res) => { var form = new formidable.IncomingForm(), files = [], fields = []; form .on('field', function(field, value) { fields.push([field, value]); }) .on('file', function(field, file) { files.push([field, file]); }) .on('end', function() { console.log('-> upload done'); }); form.parse(req, function(err, fields, files){ var filePath = files.file.path; bucket.upload(filePath, function(err, file, apiResponse){ if (!err){ res.writeHead(200, {'content-type': 'text/plain'}); res.end("Unique File Name:" + file.name); }else{ res.writeHead(500); res.end(); } }); }); return; }); ``` I reach this endpoint by calling a short function which passes the file to it: ``` function upload(file) { var data = new FormData(); data.append('file', file); return fetch(`upload`,{ method: 'POST', body: data }); } const Client = { upload }; export default Client; ``` This function is called from my front end like this: ``` Client.upload(this.file).then((data) => { console.log(data); }); ``` This final `console.log(data)` logs the response to the console. However, I don't see anywhere the response that I wrote in (`"Unique File Name:" + file.name`) How I can retrieve this info from the response body on the client-side? The `data` looks like this when I console.log it: [![Screenshot of the data console.log](https://i.stack.imgur.com/0wl4a.png)](https://i.stack.imgur.com/0wl4a.png) This is the response I get when I POST a file to my endpoint using Postman: [![Screen shot of response using Postman](https://i.stack.imgur.com/pEsyV.png)](https://i.stack.imgur.com/pEsyV.png)
2017/05/10
[ "https://Stackoverflow.com/questions/43903767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5775341/" ]
Notice you're dealing with a [Response object](https://developer.mozilla.org/en-US/docs/Web/API/Response). You need to basically **read** the response stream with `Response.json()` or `Response.text()` (or via other methods) in order to see your data. Otherwise your response body will always appear as a locked readable stream. For example: ``` fetch('https://api.ipify.org?format=json') .then(response=>response.json()) .then(data=>{ console.log(data); }) ``` If this gives you unexpected results, you may want to inspect your response with [Postman](https://www.getpostman.com/).
You can also use async/await. When the response is JSON content: ``` Client.upload(this.file).then(async r => console.log(await r.json())) ``` or when reading it as plain text: ``` Client.upload(this.file).then(async r => console.log(await r.text())) ```
43,903,767
I have the following express endpoint for uploading to Google Cloud storage. It works great and the response from the google api gives me a unique file name that I want to pass back to my front end: ``` app.post('/upload', (req, res) => { var form = new formidable.IncomingForm(), files = [], fields = []; form .on('field', function(field, value) { fields.push([field, value]); }) .on('file', function(field, file) { files.push([field, file]); }) .on('end', function() { console.log('-> upload done'); }); form.parse(req, function(err, fields, files){ var filePath = files.file.path; bucket.upload(filePath, function(err, file, apiResponse){ if (!err){ res.writeHead(200, {'content-type': 'text/plain'}); res.end("Unique File Name:" + file.name); }else{ res.writeHead(500); res.end(); } }); }); return; }); ``` I reach this endpoint by calling a short function which passes the file to it: ``` function upload(file) { var data = new FormData(); data.append('file', file); return fetch(`upload`,{ method: 'POST', body: data }); } const Client = { upload }; export default Client; ``` This function is called from my front end like this: ``` Client.upload(this.file).then((data) => { console.log(data); }); ``` This final `console.log(data)` logs the response to the console. However, I don't see anywhere the response that I wrote in (`"Unique File Name:" + file.name`) How I can retrieve this info from the response body on the client-side? The `data` looks like this when I console.log it: [![Screenshot of the data console.log](https://i.stack.imgur.com/0wl4a.png)](https://i.stack.imgur.com/0wl4a.png) This is the response I get when I POST a file to my endpoint using Postman: [![Screen shot of response using Postman](https://i.stack.imgur.com/pEsyV.png)](https://i.stack.imgur.com/pEsyV.png)
2017/05/10
[ "https://Stackoverflow.com/questions/43903767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5775341/" ]
Notice you're dealing with a [Response object](https://developer.mozilla.org/en-US/docs/Web/API/Response). You need to basically **read** the response stream with `Response.json()` or `Response.text()` (or via other methods) in order to see your data. Otherwise your response body will always appear as a locked readable stream. For example: ``` fetch('https://api.ipify.org?format=json') .then(response=>response.json()) .then(data=>{ console.log(data); }) ``` If this gives you unexpected results, you may want to inspect your response with [Postman](https://www.getpostman.com/).
If you are getting "undefined" for data and you are doing something with the response, make sure to return the result of `response.text()`. I was doing something like this ``` fetch(URL) .then(response => { response.text(); console.log(response.statusText); }) .then(data => console.log(data)); // got "undefined" ``` Return the body promise: **return response.text();** ``` fetch(URL) .then(response => { console.log(response.statusText); return response.text(); }) .then(data => console.log(data)); // got data ```
43,903,767
I have the following express endpoint for uploading to Google Cloud storage. It works great and the response from the google api gives me a unique file name that I want to pass back to my front end: ``` app.post('/upload', (req, res) => { var form = new formidable.IncomingForm(), files = [], fields = []; form .on('field', function(field, value) { fields.push([field, value]); }) .on('file', function(field, file) { files.push([field, file]); }) .on('end', function() { console.log('-> upload done'); }); form.parse(req, function(err, fields, files){ var filePath = files.file.path; bucket.upload(filePath, function(err, file, apiResponse){ if (!err){ res.writeHead(200, {'content-type': 'text/plain'}); res.end("Unique File Name:" + file.name); }else{ res.writeHead(500); res.end(); } }); }); return; }); ``` I reach this endpoint by calling a short function which passes the file to it: ``` function upload(file) { var data = new FormData(); data.append('file', file); return fetch(`upload`,{ method: 'POST', body: data }); } const Client = { upload }; export default Client; ``` This function is called from my front end like this: ``` Client.upload(this.file).then((data) => { console.log(data); }); ``` This final `console.log(data)` logs the response to the console. However, I don't see anywhere the response that I wrote in (`"Unique File Name:" + file.name`) How I can retrieve this info from the response body on the client-side? The `data` looks like this when I console.log it: [![Screenshot of the data console.log](https://i.stack.imgur.com/0wl4a.png)](https://i.stack.imgur.com/0wl4a.png) This is the response I get when I POST a file to my endpoint using Postman: [![Screen shot of response using Postman](https://i.stack.imgur.com/pEsyV.png)](https://i.stack.imgur.com/pEsyV.png)
2017/05/10
[ "https://Stackoverflow.com/questions/43903767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5775341/" ]
I had a typo in my code as pointed out by [GabeRogan](https://stackoverflow.com/users/4833285/gabe-rogan) in [this comment](https://stackoverflow.com/questions/43903767/read-the-body-of-a-fetch-promise#comment74842732_43904074): > > Ok awesome. To be quite honest I have absolutely no clue why you're getting undefined, except that it might be some sort of misspelling error? > > > Here's my updated code for the front end which returns the response body text: ``` Client.upload(this.file).then(response => response.text()) .then((body) => { console.log(body); }); ``` `body` is a string that reads `Unique File Name: [FILE-NAME]` Here's a good explanation of the Fetch API and reading the response you get from the promise object: [Css tricks: Using Fetch](https://css-tricks.com/using-fetch/).
You can also use async/await: When returning json content: ``` Client.upload(this.file).then(async r => console.log(await r.json())) ``` or just returning in textual form: ``` Client.upload(this.file).then(async r => console.log(await r.text())) ```
43,903,767
I have the following express endpoint for uploading to Google Cloud storage. It works great and the response from the google api gives me a unique file name that I want to pass back to my front end: ``` app.post('/upload', (req, res) => { var form = new formidable.IncomingForm(), files = [], fields = []; form .on('field', function(field, value) { fields.push([field, value]); }) .on('file', function(field, file) { files.push([field, file]); }) .on('end', function() { console.log('-> upload done'); }); form.parse(req, function(err, fields, files){ var filePath = files.file.path; bucket.upload(filePath, function(err, file, apiResponse){ if (!err){ res.writeHead(200, {'content-type': 'text/plain'}); res.end("Unique File Name:" + file.name); }else{ res.writeHead(500); res.end(); } }); }); return; }); ``` I reach this endpoint by calling a short function which passes the file to it: ``` function upload(file) { var data = new FormData(); data.append('file', file); return fetch(`upload`,{ method: 'POST', body: data }); } const Client = { upload }; export default Client; ``` This function is called from my front end like this: ``` Client.upload(this.file).then((data) => { console.log(data); }); ``` This final `console.log(data)` logs the response to the console. However, I don't see anywhere the response that I wrote in (`"Unique File Name:" + file.name`) How I can retrieve this info from the response body on the client-side? The `data` looks like this when I console.log it: [![Screenshot of the data console.log](https://i.stack.imgur.com/0wl4a.png)](https://i.stack.imgur.com/0wl4a.png) This is the response I get when I POST a file to my endpoint using Postman: [![Screen shot of response using Postman](https://i.stack.imgur.com/pEsyV.png)](https://i.stack.imgur.com/pEsyV.png)
2017/05/10
[ "https://Stackoverflow.com/questions/43903767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5775341/" ]
I had a typo in my code as pointed out by [GabeRogan](https://stackoverflow.com/users/4833285/gabe-rogan) in [this comment](https://stackoverflow.com/questions/43903767/read-the-body-of-a-fetch-promise#comment74842732_43904074): > > Ok awesome. To be quite honest I have absolutely no clue why you're getting undefined, except that it might be some sort of misspelling error? > > > Here's my updated code for the front end which returns the response body text: ``` Client.upload(this.file).then(response => response.text()) .then((body) => { console.log(body); }); ``` `body` is a string that reads `Unique File Name: [FILE-NAME]` Here's a good explanation of the Fetch API and reading the response you get from the promise object: [Css tricks: Using Fetch](https://css-tricks.com/using-fetch/).
If you are getting "undefined" for data and you are doing something with the response, make sure to return the result of `response.text()`. I was doing something like this ``` fetch(URL) .then(response => { response.text(); console.log(response.statusText); }) .then(data => console.log(data)); // got "undefined" ``` Return the body promise: **return response.text();** ``` fetch(URL) .then(response => { console.log(response.statusText); return response.text(); }) .then(data => console.log(data)); // got data ```
43,903,767
I have the following express endpoint for uploading to Google Cloud storage. It works great and the response from the google api gives me a unique file name that I want to pass back to my front end: ``` app.post('/upload', (req, res) => { var form = new formidable.IncomingForm(), files = [], fields = []; form .on('field', function(field, value) { fields.push([field, value]); }) .on('file', function(field, file) { files.push([field, file]); }) .on('end', function() { console.log('-> upload done'); }); form.parse(req, function(err, fields, files){ var filePath = files.file.path; bucket.upload(filePath, function(err, file, apiResponse){ if (!err){ res.writeHead(200, {'content-type': 'text/plain'}); res.end("Unique File Name:" + file.name); }else{ res.writeHead(500); res.end(); } }); }); return; }); ``` I reach this endpoint by calling a short function which passes the file to it: ``` function upload(file) { var data = new FormData(); data.append('file', file); return fetch(`upload`,{ method: 'POST', body: data }); } const Client = { upload }; export default Client; ``` This function is called from my front end like this: ``` Client.upload(this.file).then((data) => { console.log(data); }); ``` This final `console.log(data)` logs the response to the console. However, I don't see anywhere the response that I wrote in (`"Unique File Name:" + file.name`) How I can retrieve this info from the response body on the client-side? The `data` looks like this when I console.log it: [![Screenshot of the data console.log](https://i.stack.imgur.com/0wl4a.png)](https://i.stack.imgur.com/0wl4a.png) This is the response I get when I POST a file to my endpoint using Postman: [![Screen shot of response using Postman](https://i.stack.imgur.com/pEsyV.png)](https://i.stack.imgur.com/pEsyV.png)
2017/05/10
[ "https://Stackoverflow.com/questions/43903767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5775341/" ]
You can also use async/await. When the response returns JSON content: ``` Client.upload(this.file).then(async r => console.log(await r.json())) ``` or when it returns plain text: ``` Client.upload(this.file).then(async r => console.log(await r.text())) ```
If you are getting "undefined" for data and you are doing something with the response, make sure to return the result of `response.text()`. I was doing something like this ``` fetch(URL) .then(response => { response.text(); console.log(response.statusText); }) .then(data => console.log(data)); // got "undefined" ``` Return the body promise: **return response.text();** ``` fetch(URL) .then(response => { console.log(response.statusText); return response.text(); }) .then(data => console.log(data)); // got data ```
26,600,506
I accidentally amended merge commit instead of creating new one. Now I don't know how to extract the changes to normal commit which I can push. The changes will show up in gitk, but will not appear in format-patch. Please, help.
2014/10/28
[ "https://Stackoverflow.com/questions/26600506", "https://Stackoverflow.com", "https://Stackoverflow.com/users/90096/" ]
You have 2 SHAs that are of interest here - the original merge commit, and the amended merge commit. What you want to do is `git reset` your `HEAD` to the original merge commit, while preserving your index and working directory. You can then create a new commit hanging off the merge commit. Use `git reflog` to find the SHA of the original merge commit, then reset to that commit with `git reset ORIGINAL_MERGE_COMMIT_SHA`, or directly from the reflog with `git reset HEAD@{X}`, where X is 1 or whatever position in the reflog represents the merge commit. You should now be ready to `git commit` your original changes - don't pass in `--amend` here, and you will create a new commit.
I've found one way which works: ``` git diff HEAD~1 > p.patch git checkout master git checkout -b branch-name ``` Manually edit p.patch to remove unrelated changes from merge. ``` git apply p.patch ``` But I suspect there is a much easier/better way to do it.
26,600,506
I accidentally amended merge commit instead of creating new one. Now I don't know how to extract the changes to normal commit which I can push. The changes will show up in gitk, but will not appear in format-patch. Please, help.
2014/10/28
[ "https://Stackoverflow.com/questions/26600506", "https://Stackoverflow.com", "https://Stackoverflow.com/users/90096/" ]
I've found one way which works: ``` git diff HEAD~1 > p.patch git checkout master git checkout -b branch-name ``` Manually edit p.patch to remove unrelated changes from merge. ``` git apply p.patch ``` But I suspect there is a much easier/better way to do it.
This worked for me: * Get the SHAs of both the original merge commit and the amended merge commit * `git reset --hard xxx` where `xxx` is the amended merge commit SHA * `git reset --soft yyy` where `yyy` is the original merge commit SHA That left me with my accidentally-amended changes staged.
26,600,506
I accidentally amended merge commit instead of creating new one. Now I don't know how to extract the changes to normal commit which I can push. The changes will show up in gitk, but will not appear in format-patch. Please, help.
2014/10/28
[ "https://Stackoverflow.com/questions/26600506", "https://Stackoverflow.com", "https://Stackoverflow.com/users/90096/" ]
You have 2 SHAs that are of interest here - the original merge commit, and the amended merge commit. What you want to do is `git reset` your `HEAD` to the original merge commit, while preserving your index and working directory. You can then create a new commit hanging off the merge commit. Use `git reflog` to find the SHA of the original merge commit, then reset to that commit with `git reset ORIGINAL_MERGE_COMMIT_SHA`, or directly from the reflog with `git reset HEAD@{X}`, where X is 1 or whatever position in the reflog represents the merge commit. You should now be ready to `git commit` your original changes - don't pass in `--amend` here, and you will create a new commit.
This worked for me: * Get the SHAs of both the original merge commit and the amended merge commit * `git reset --hard xxx` where `xxx` is the amended merge commit SHA * `git reset --soft yyy` where `yyy` is the original merge commit SHA That left me with my accidentally-amended changes staged.
72,795,617
I have here this function in my service.ts ``` public getUploadPeriod(){ let baseYear = 2020; let currYear = new Date().getFullYear(); let years = []; for (var i = baseYear; i <= currYear; i++) { years.push(i); } return years } ``` And this is the ts of my project I am trying to call out the function that I create in service.ts ``` getUploadPeriod() { const years = this.uploadFacilityService.getUploadPeriod(); console.log(years) return years } ``` And this is my html ``` <mat-select placeholder="Select Time Period"> <mat-option value=""> {{ years }} </mat-option> </mat-select> ``` The years are already generated with the function that I created but theres no display in my dropdown list. Can someone help me with this. Thank you!
2022/06/29
[ "https://Stackoverflow.com/questions/72795617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19440121/" ]
As seen in the documentation for the Angular select: > > To add options to the select, add `<mat-option>` elements to the > `<mat-select>`. Each `<mat-option>` has a value property that can be used > to set the value that will be selected if the user chooses this > option. The content of the `<mat-option>` is what will be shown to the > user. > > > Remember that inside a select, the options must be generated iteratively if you want to fetch them from a collection. Thus, in your use case, it would look like this: ``` <mat-label>Select time period</mat-label> <mat-select [(ngModel)]="year" name="year"> <mat-option *ngFor="let year of years" [value]="year"> {{year}} </mat-option> </mat-select> ``` Read more about select and `<mat-select>`: <https://material.angular.io/components/select/overview> **Edit:** After taking a look at the rest of your code, `years` should have a higher scope than the one you are giving it. So, initialize the variable in the component constructor and set its value inside your method (remember to set `this.years`). This way, the variable will be accessible from the component's template.
The `years` variable should be declared outside the method scope so the template has access to it. ```js years: any; getUploadPeriod() { this.years = this.uploadFacilityService.getUploadPeriod(); console.log(this.years); } ``` Use `*ngFor` to iterate over the items in the template. ```html <div *ngFor="let year of years"> {{ year }} </div> ``` Working example: <https://stackblitz.com/edit/angular-ivy-4qd3ao?file=src%2Fapp%2Fapp.component.ts>
17,739,842
I am using "UNIX" (on my virtual machine) and generating ".docx" file there using "C", after getting the file into Windows, when I am opening the file it's saying "the file is corrupted, can't be opened" and then its not opening. I am using MS-Word 2010. Here is the piece of code I am using:- ``` Write_to_file(){ FILE *fp; if((fp=fopen("hello.docx","w"))==(FILE*)NULL){ printf("Error opening file"); return 0; } fprintf(fp,"Hello World"); fclose(fp); } ```
2013/07/19
[ "https://Stackoverflow.com/questions/17739842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/661801/" ]
Just giving a file an extension name (doc, docx) does not make it an MS Word file. Your code is only writing a text file. You can detect this with the `file` command under Linux. Please reference this <http://msdn.microsoft.com/en-us/library/cc313105(v=office.12).aspx>, and write a **REAL** MS document file.
A `doc` file is not a simple text file. You'd want to use the `txt` format: ``` fopen("hello.txt", "w"); ``` To actually read/write a `doc` file, you'd need to use a library designed specifically to read them and write them. [The spec for MS-DOC files is pretty lengthy, so I wouldn't implement my own reader/writer if I were you.](http://msdn.microsoft.com/en-us/library/cc313153%28v=office.12%29.aspx)
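To make the distinction concrete, here is a minimal C sketch of what the question's code actually does - it writes plain bytes, regardless of the extension. The file and function names (`write_text`, `read_text`, `hello.txt`) are illustrative, not from the original program:

```c
#include <stdio.h>
#include <string.h>

/* Write msg to path as plain bytes; returns 0 on success, -1 on error.
   Whatever the extension, stdio just writes the characters given --
   there is no Word structure involved. */
int write_text(const char *path, const char *msg) {
    FILE *fp = fopen(path, "w");
    if (fp == NULL)
        return -1;
    fputs(msg, fp);
    fclose(fp);
    return 0;
}

/* Read the file back into buf (at most n-1 bytes, NUL-terminated). */
int read_text(const char *path, char *buf, size_t n) {
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return -1;
    size_t got = fread(buf, 1, n - 1, fp);
    buf[got] = '\0';
    fclose(fp);
    return 0;
}
```

Renaming `hello.txt` to `hello.docx` changes nothing about its contents; Word rejects it because it lacks the zipped OOXML structure a real `.docx` carries.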
33,308,549
I am trying to use this Makefile for a C Program. Could you please share with me a way how I can understand the Make utility better? I am trying the following: ``` # stack/Makefile CC := gcc CFLAGS += -std=c99 CFLAGS += -Wall CFLAGS += -Wextra CFLAGS += -g VPATH = main src all: bin/main.exe clean run bin/main.exe: bin/main.o bin/stack.o $(CC) -o $@ $^ $(LDFLAGS) $(LDLIBS) bin/main.o: main.c bin/stack.o: stack.c stack.h bin/%.o: %.c $(CC) $(CFLAGS) -c -o $@ $< demo: ../bin/main.exe clean: rm -f bin/*.o run: bin/main.exe .PHONY: all clean run ``` And I getting this message: ``` gcc -std=c99 -Wall -Wextra -g -c -o bin/main.o main.c error: unable to open output file 'bin/main.o': 'No such file or directory' 1 error generated. make: *** [bin/main.o] Error 1 ```
2015/10/23
[ "https://Stackoverflow.com/questions/33308549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5057040/" ]
The error stems from the fact that your Makefile wants to generate the executable and object files in subdirectory `bin` but it contains no rule to ensure that `bin` exists when it is needed. As @JonathanLeffler comments, you can solve that just by manually creating `bin` yourself. But it is often desired that a Makefile itself will ensure that a subdirectory, or some other resource, exists when it is needed, and you probably assumed that the pattern-rule ``` bin/%.o: %.c ``` would create `bin` as needed. It won't. Your Makefile can ensure the existence of `bin` if you amend it like this: Somewhere below the `all` rule, add a new target: ``` bin: mkdir -p $@ ``` This is to make the `bin` subdirectory if it doesn't exist. Then change the rule: ``` bin/%.o: %.c $(CC) $(CFLAGS) -c -o $@ $< ``` to: ``` bin/%.o: %.c | bin $(CC) $(CFLAGS) -c -o $@ $< ``` The additional `| bin` is an example of an [order-only prequisite](https://www.gnu.org/software/make/manual/html_node/Prerequisite-Types.html) It means: If any of the targets (the `bin/%.o` things) needs to be remade from any of preceding prequisites (the ones before `|`, i.e. the `%.c` things), then `bin` must be made first. So, as soon as anything needs to be made in `bin`, `bin` will be made first. There is another more basic flaw in your Makefile. `all` is dependent on `clean`, so every time you successfully build your program the object files are deleted. I understand that you *intend* this, but it entirely defeats the purpose of `make`, which is to *avoid the need to rebuild everything* (object files, executables) every time by instead just rebuilding those things that have become out-of-date with respect to their prerequisites. So `all` should not depend on `clean`, and then an object file will be recompiled only if it *needs* to be recompiled, i.e. is older than the corresponding source file. If and when you want to clean out the object files, run `make clean`. 
Finally, your `demo` rule: ``` demo: ../bin/main.exe ``` is inconsistent with the others. The others assume that the `bin` where the executable is put is in the current directory. The `demo` rule assumes it is in the *parent* of the current directory. If you correct the `demo` rule then it will be identical to the `run` rule, so it is superfluous, but if it were not superfluous then it should be added to the `.PHONY` targets.
The best way to learn how to use makefiles properly is to read the [manual](http://www.gnu.org/software/make/manual/make.html). Also, you can Google for some [simple tutorials](https://www.google.com/?gws_rd=ssl#safe=off&q=make%20tutorial).
30,170,109
After inserting elements to my linked list, i would like to retrieve them. This is the struct: ``` typedef struct node { int data; struct node *next; } Element; typedef Element* ROW; ``` I start my list with : ``` void startRow(ROW *row){ *row = NULL; } ``` After that, i use the function to insert at the end: ``` void insertEnd(ROW *row, int value){ if(*row==NULL){ *row = (ROW) (malloc(sizeof(Element))); if(*row==NULL){ return; }else{ (*row)->data = value; (**row).next = NULL; } } } ``` And to list the values, i use this function (I plan on removing recursion, just playing arround): ``` void listValues(ROW row){ if(row==NULL){ return; } printf("%d\n", row->data); listValues(row->next); } ``` I call it like this, but it **only outputs the first element**! ``` int main(){ ROW row; startRow(&row); insertEnd(&row, 10); insertEnd(&row, 20); listValues(row); return 0; } ``` Why? Shouldn't it output the complete list?
2015/05/11
[ "https://Stackoverflow.com/questions/30170109", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1536010/" ]
`*row` is not null after the first insert. So the next insert never happens. ``` void insertEnd(ROW *row, int value){ if(*row==NULL){ printf("Inserting %d\n", value); *row = (ROW) (malloc(sizeof(Element))); if(*row==NULL){ return; }else{ (*row)->data = value; (**row).next = NULL; } } } ``` Add a debug printf like this and it'll become obvious what's happening. I think there are more fundamental problems though with the design! You might want to keep a pointer to the start of the list around, otherwise it'll be mighty difficult to find it again. And for a fast insert at the end you might want to keep a pointer to the end of the list as well. I'm assuming that you are trying to figure this out for yourself, so I won't give you the answer :)
The problem is in the method insertEnd. You should link the elements of the list, not override them :) like this: ``` void insertEnd(ROW *row, int value){ ROW newElement = malloc(sizeof(Element)); if(newElement==NULL){ return; } newElement->data = value; newElement->next = NULL; if(*row==NULL){ /* empty list: the new element becomes the head */ *row = newElement; } else { /* walk to the last element and hang the new one off it */ ROW last = *row; while(last->next!=NULL){ last = last->next; } last->next = newElement; } } ``` Like this you will link all your list elements.
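For repeated appends, a common alternative design also keeps a pointer to the last node, making each insert O(1) instead of requiring a walk of the list - this is the tail pointer the first answer hints at. A minimal sketch reusing the question's `Element` type (the `List` wrapper and function names are invented for illustration):

```c
#include <stdlib.h>

typedef struct node {
    int data;
    struct node *next;
} Element;

/* Hypothetical wrapper holding both ends of the list. */
typedef struct {
    Element *head;
    Element *tail;
} List;

void list_init(List *l) {
    l->head = NULL;
    l->tail = NULL;
}

/* O(1) append: no traversal needed because we remember the tail. */
int list_append(List *l, int value) {
    Element *e = malloc(sizeof *e);
    if (e == NULL)
        return -1;
    e->data = value;
    e->next = NULL;
    if (l->tail == NULL)
        l->head = e;       /* first element: it is also the head */
    else
        l->tail->next = e; /* hang it off the current last node */
    l->tail = e;
    return 0;
}
```

Traversal for printing stays the same: start at `head` and follow `next` until NULL.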
39,693,312
I installed a new fresh copy of Laravel 5.3 using composer but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in config directory specify > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
You need to have **.env** in your application folder, then run: ``` $ php artisan key:generate ``` If you don't have **.env**, copy from **.env.example**: ``` $ cp .env.example .env ```
Check your .env file: if **APP\_KEY** is not set, that is the issue. Now run `php artisan key:generate`, then run `php artisan config:cache`; it will set an **APP\_KEY** key in your .env file. If **APP\_KEY** is already set, run the same commands. It will update this key.
39,693,312
I installed a new fresh copy of Laravel 5.3 using composer but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in config directory specify > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
For laravel version 5.4 PHP 7.4 1. To solve **run this command** `php artisan key:generate` This will set a value for `APP_KEY=` in your `.env` file something like this: `APP_KEY=base64:trp5LQ9/TW85+17o0T7F0bZ/Ca1J9cIMgvyNIYl0k/g=` 2. Clean cache to get everything working again with the following commands: `php artisan config:clear` then `php artisan config:cache` Hope this helps.
The artisan commands did it for me Run `php artisan key:generate`. Do `php artisan config:clear`, Then `php artisan config:cache` Thank me later!
39,693,312
I installed a new fresh copy of Laravel 5.3 using composer but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in config directory specify > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
Ok, this has basically already been answered, but I found a couple caveats that had me consternated, or constipated, one of those two... First, as has already been said, you should ensure you have a valid `.env` file, which you can accomplish in the terminal by copying the existing `.env.example` file as such: `$ cp .env.example .env` Then, generate your application key: `$ php artisan key:generate` Once this is done, make sure to open your .env file and ensure that the APP\_KEY line looks correct - this is where my consternation came from: `APP_KEY=base64:MsUJo+qAhIVGPx52r1mbxCYn5YbWtCx8FQ7pTaHEvRo=base64:Ign7MpdXw4FMI5ai7SXXiU2vbraqhyEK1NniKPNJKGY=` **You'll notice that the key length is wrong: for some unknown reason (probably from running key:generate multiple times) it has two `base64:` keys in there. Removing one is the fix to the problems I was having, and this appears to be an Artisan/Laravel bug.** Hope this answer helps anyone who may be struggling with the same problems or annoying bug.
If the artisan command doesn't work and you get the same error in the command line, that means composer didn't do a good job of getting all the files; you should delete the vendor folder and run `composer update` again.
39,693,312
I installed a new fresh copy of Laravel 5.3 using composer but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in config directory specify > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
In the `.env` file give this key and you are done: ``` APP_KEY=ABCDEF123ERD456EABCDEF123ERD456E ``` Still not working? If you are working from the cli, reboot the server and it will. Want an explanation? Ok, as the error message says: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. > > > The key length for `AES-128-CBC` is 16 characters, e.g. ABCDEF123ERD456E. The key length for `AES-256-CBC` is 32 characters, e.g. ABCDEF123ERD456EABCDEF123ERD456E. Make sure in `config/app.php` the `cipher` is set to the appropriate cipher like the two above and the key points to the `.env` file's `APP_KEY` variable. My app has the `AES-256-CBC` cipher set, so I gave it a 32-character key like `APP_KEY=ABCDEF123ERD456EABCDEF123ERD456E` and everything worked just fine after that.
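The length arithmetic behind those ciphers is simply key bytes = cipher bits / 8. This throwaway C check is purely illustrative - it is not Laravel's validation code, just a sketch of the rule:

```c
#include <string.h>

/* Returns 1 if key has the byte length required by an AES-xxx-CBC
   cipher of the given bit size (128 -> 16 bytes, 256 -> 32 bytes). */
int key_fits_cipher(const char *key, int cipher_bits) {
    return strlen(key) == (size_t)(cipher_bits / 8);
}
```

So a 16-character string satisfies AES-128-CBC and a 32-character string satisfies AES-256-CBC, which is exactly what the error message is complaining about.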
For laravel version 5.4 PHP 7.4 1. To solve **run this command** `php artisan key:generate` This will set a value for `APP_KEY=` in your `.env` file something like this: `APP_KEY=base64:trp5LQ9/TW85+17o0T7F0bZ/Ca1J9cIMgvyNIYl0k/g=` 2. Clean cache to get everything working again with the following commands: `php artisan config:clear` then `php artisan config:cache` Hope this helps.
39,693,312
I installed a new fresh copy of Laravel 5.3 using composer but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in config directory specify > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
You need to have **.env** in your application folder, then run: ``` $ php artisan key:generate ``` If you don't have **.env**, copy from **.env.example**: ``` $ cp .env.example .env ```
If you newly create a laravel project with a command like `composer create-project --prefer-dist laravel/laravel market` and deploy the new repo to the application path with the cp command, you may get this issue. **I use laravel 5.4** ``` roofe@www:~/market$ php artisan --version Laravel Framework 5.4.33 ``` When you create the laravel project, you can see the logs that create the key like this: > Generating autoload files > Illuminate\Foundation\ComposerScripts::postUpdate > php artisan optimize > Generating optimized class loader > The compiled services file has been removed. > php artisan key:generate > Application key [base64:exxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/k=] set successfully. By default the key config in config/app.php is as follows: it uses `AES-256-CBC`, and the key generated when creating the project is stored in the `.env` file. If you use a command like `cp -r ./* /var/www/market/`, the `.env` file will not be copied to the application path (the shell glob `./*` does not match dot-files). ``` /* |-------------------------------------------------------------------------- | Encryption Key |-------------------------------------------------------------------------- | | This key is used by the Illuminate encrypter service and should be set | to a random, 32 character string, otherwise these encrypted strings | will not be safe. Please do this before deploying an application! | */ 'key' => env('APP_KEY'), 'cipher' => 'AES-256-CBC', ``` When I changed my deploy command to `cp -r ./* ./.env /var/www/market/`, this issue was gone. You can also refer to this github [issue](https://github.com/laravel/framework/issues/9080).
39,693,312
I installed a new fresh copy of Laravel 5.3 using composer but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in config directory specify > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
I also had this issue. I checked my environment variable value for "APP\_KEY" using `echo $APP_KEY`. For me it was "lumen", which was set for another lumen project, and that's why it was not working. I updated the "APP\_KEY" value using `export APP_KEY=[your app_key value from .env file]` and cleared the cache with `php artisan config:cache`, and it worked for me.
Another thing to just check is that your .env file is in the www-data group (or httpd or whatever your web server group is) and that the group has read permission. On linux my permissions looked like this when I got this error: `-rw-rw-r-- 1 kevin kevin 618 Mar 16 09:32 .env` I then just removed read permission for all and removed write permission for group. `chmod 640 .env` Then I changed the group to www-data `chown kevin:www-data .env` My permissions now look like this: `-rw-r----- 1 kevin www-data 516 Mar 16 09:35 .env`
39,693,312
I installed a new fresh copy of Laravel 5.3 using composer but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in config directory specify > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
Run `php artisan key:generate`, then `php artisan config:clear`, then `php artisan config:cache`, and things will start working!
Ok, this has basically already been answered, but I found a couple caveats that had me consternated, or constipated, one of those two... First, as has already been said, you should ensure you have a valid `.env` file, which you can accomplish in the terminal by copying the existing `.env.example` file as such: `$ cp .env.example .env` Then, generate your application key: `$ php artisan key:generate` Once this is done, make sure to open your .env file and ensure that the APP\_KEY line looks correct - this is where my consternation came from: `APP_KEY=base64:MsUJo+qAhIVGPx52r1mbxCYn5YbWtCx8FQ7pTaHEvRo=base64:Ign7MpdXw4FMI5ai7SXXiU2vbraqhyEK1NniKPNJKGY=` **You'll notice that the key length is wrong; for some unknown reason (probably from running key:generate multiple times) it has two `base64:`-prefixed keys in there. Removing one is the fix to the problems I was having, and this appears to be an Artisan/Laravel bug.** Hope this answer helps anyone who may be struggling with the same problems or annoying bug.
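For context on why the doubled key breaks things: `AES-256-CBC` expects a 32-byte key, and Laravel stores it base64-encoded behind a `base64:` prefix. A small stdlib-only Python sketch (the checker function and sample keys are hypothetical, not part of Laravel itself) illustrates the length check that the doubled key fails:

```python
import base64

def laravel_key_ok(app_key: str, expected_len: int = 32) -> bool:
    """Return True if APP_KEY decodes to the byte length AES-256-CBC expects."""
    if not app_key.startswith("base64:"):
        return False
    try:
        raw = base64.b64decode(app_key[len("base64:"):], validate=True)
    except Exception:
        # Non-alphabet characters or stray padding -> not a valid key string.
        return False
    return len(raw) == expected_len

good = "base64:" + base64.b64encode(b"\x00" * 32).decode()
doubled = good + good  # the duplicated-key bug described above
print(laravel_key_ok(good))     # True
print(laravel_key_ok(doubled))  # False -- the second "base64:" is not valid base64
```

A doubled key either fails to decode at all or decodes to the wrong length, which is exactly the "correct key lengths" complaint in the error message.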
39,693,312
I installed a fresh new copy of Laravel 5.3 using Composer, but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in the config directory specifies > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
Ok, this has basically already been answered, but I found a couple caveats that had me consternated, or constipated, one of those two... First, as has already been said, you should ensure you have a valid `.env` file, which you can accomplish in the terminal by copying the existing `.env.example` file as such: `$ cp .env.example .env` Then, generate your application key: `$ php artisan key:generate` Once this is done, make sure to open your .env file and ensure that the APP\_KEY line looks correct - this is where my consternation came from: `APP_KEY=base64:MsUJo+qAhIVGPx52r1mbxCYn5YbWtCx8FQ7pTaHEvRo=base64:Ign7MpdXw4FMI5ai7SXXiU2vbraqhyEK1NniKPNJKGY=` **You'll notice that the key length is wrong; for some unknown reason (probably from running key:generate multiple times) it has two `base64:`-prefixed keys in there. Removing one is the fix to the problems I was having, and this appears to be an Artisan/Laravel bug.** Hope this answer helps anyone who may be struggling with the same problems or annoying bug.
If you newly create a Laravel project with a command like `composer create-project --prefer-dist laravel/laravel market` and deploy the new repo to the application path with the cp command, you may get this issue. **I use Laravel 5.4** ``` roofe@www:~/market$ php artisan --version Laravel Framework 5.4.33 ``` When you create the Laravel project, you can see the logs that create the key, like this: > > Generating autoload files > > > > > > > Illuminate\Foundation\ComposerScripts::postUpdate > > php artisan optimize Generating optimized class loader The compiled services file has been removed. > > php artisan key:generate Application key [base64:exxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/k=] set successfully. > > > > > > > > > By default the key config in config/app.php is as follows; it uses `AES-256-CBC`, and the key generated when creating the project is stored in the `.env` file. If you use a command like `cp -r ./* /var/www/market/`, the `.env` file will not be moved to the application path. > > > ``` > /* > |-------------------------------------------------------------------------- > | Encryption Key > |-------------------------------------------------------------------------- > | > | This key is used by the Illuminate encrypter service and should be set > | to a random, 32 character string, otherwise these encrypted strings > | will not be safe. Please do this before deploying an application! > | > */ > > 'key' => env('APP_KEY'), > > 'cipher' => 'AES-256-CBC', > > ``` > > When I changed my deploy command to `cp -r ./* ./.env /var/www/market/`, this issue was gone. You can also refer to this GitHub [issue](https://github.com/laravel/framework/issues/9080).
39,693,312
I installed a fresh new copy of Laravel 5.3 using Composer, but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in the config directory specifies > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
You need to have **.env** on your appication folder then run: ``` $ php artisan key:generate ``` If you don't have **.env** copy from **.env.example**: ``` $ cp .env.example .env ```
Run these commands in your terminal: **php artisan config:clear**, then **php artisan config:cache**
39,693,312
I installed a fresh new copy of Laravel 5.3 using Composer, but I'm getting this error: > > The only supported ciphers are AES-128-CBC and AES-256-CBC with the > correct key lengths. Even though my app.php file in the config directory specifies > > 'cipher' => 'AES-128-CBC' > > >
2016/09/26
[ "https://Stackoverflow.com/questions/39693312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1576294/" ]
Run `php artisan key:generate`, then `php artisan config:clear`, then `php artisan config:cache`, and things will start working!
Check your .env file; if **APP\_KEY** is not set, that is the issue. Now run `php artisan key:generate`, then run `php artisan config:cache`; this will set an **APP\_KEY** key in your .env file. If **APP\_KEY** is already set, run the same commands; they will update the key.
12,650,060
How can I send an iframe in an email body? This is how I am simply sending a mail. ``` string getemail = textbox_email.Text; System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage(); message.To.Add(getemail); message.Subject = "Hello"; message.From = new System.Net.Mail.MailAddress("sendingemail"); message.Body = "This is message body"; System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.gmail.com"); smtp.Credentials = new System.Net.NetworkCredential("sendingemail", "password"); smtp.EnableSsl = true; smtp.Send(message); Response.Write("Sent"); ``` This is the iframe in HTML. Actually, the iframe will contain a YouTube video link. ``` <iframe id="iframe" runat="server" src="http://www.w3schools.com" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:100px; height:21px;" allowTransparency="true"></iframe> ``` How can I send it in the email body? Thanks in advance.
2012/09/29
[ "https://Stackoverflow.com/questions/12650060", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1697157/" ]
You can use either the [IsBodyHtml](http://msdn.microsoft.com/en-us/library/system.net.mail.mailmessage.isbodyhtml%28v=vs.100%29.aspx) property or [AlternateViews](http://msdn.microsoft.com/en-us/library/system.net.mail.mailmessage.alternateviews%28v=vs.100%29.aspx) to send HTML. If you expect that the client will render content inside your iframe, it is very likely not to happen, due to security restrictions in mail clients (similar to the blocking of external images).
While it IS extremely unlikely that your email recipient's client will render the iframe, depending on your intended goal, you may want to look into the DownloadString method of the WebClient class.
11,868,906
below i have a control template that comprises of two text box. The control template is used as a grid's cell display template and the datacontext is a bindinglist of my model object (which implement INotifyPropertyChanged). The second text box is initially Collapsed. However when "Ask" and "Bid" price updates happen, i want to 'flash" this text box by toggling visibility for about 1 second. The problem i'm seeing is that on initial load of the form, i do see the 2nd text box flash, but after that...nothing. Interestingly if i click on a cell grid (which activates the edit template) then click out of the cell (which reverts back to the display template, which is the template shown below), the 2nd textbox then flashes. Can anyone see an issue why the 2nd textbox would not 'flash' when AskPrice or BidPrice changes? I do see that the converter is being invoked as expected, but the update is not triggering the storyboard animation. ``` <Style x:Key="PriceSpreadAlertStyle" TargetType="TextBlock"> <Setter Property="Visibility" Value="Collapsed" /> <Setter Property="Background" Value="Red" /> <Style.Triggers> <EventTrigger RoutedEvent="Binding.TargetUpdated" > <EventTrigger.Actions> <BeginStoryboard> <Storyboard> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00.2" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.4" Value="{x:Static Visibility.Collapsed}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.6" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.8" Value="{x:Static Visibility.Collapsed}"/> <DiscreteObjectKeyFrame KeyTime="00:00:01.0" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:01.2" Value="{x:Static Visibility.Collapsed}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </BeginStoryboard> </EventTrigger.Actions> </EventTrigger> </Style.Triggers> </Style> <ControlTemplate 
x:Key="QuoteDisplayTemplate" > <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding QuotePrice}" /> <TextBlock Style="{StaticResource PriceSpreadAlertStyle}"> <TextBlock.Text> <MultiBinding Converter="{StaticResource AskBidToSpreadConverter}"> <Binding Path="AskPrice" UpdateSourceTrigger="PropertyChanged" NotifyOnTargetUpdated="True" /> <Binding Path="BidPrice" UpdateSourceTrigger="PropertyChanged" NotifyOnTargetUpdated="True" /> </MultiBinding> </TextBlock.Text> </TextBlock> </StackPanel> </ControlTemplate> ```
2012/08/08
[ "https://Stackoverflow.com/questions/11868906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/480118/" ]
Set your image to `img { display: block; }`. And please, next time, add your code here on SO to prevent link rot and let us easily review your code here.
Give your image a bottom vertical alignment and the space goes away. **[jsFiddle example](http://jsfiddle.net/j08691/7Knyx/10/)** ``` img { vertical-align:bottom; } ```
11,868,906
below i have a control template that comprises of two text box. The control template is used as a grid's cell display template and the datacontext is a bindinglist of my model object (which implement INotifyPropertyChanged). The second text box is initially Collapsed. However when "Ask" and "Bid" price updates happen, i want to 'flash" this text box by toggling visibility for about 1 second. The problem i'm seeing is that on initial load of the form, i do see the 2nd text box flash, but after that...nothing. Interestingly if i click on a cell grid (which activates the edit template) then click out of the cell (which reverts back to the display template, which is the template shown below), the 2nd textbox then flashes. Can anyone see an issue why the 2nd textbox would not 'flash' when AskPrice or BidPrice changes? I do see that the converter is being invoked as expected, but the update is not triggering the storyboard animation. ``` <Style x:Key="PriceSpreadAlertStyle" TargetType="TextBlock"> <Setter Property="Visibility" Value="Collapsed" /> <Setter Property="Background" Value="Red" /> <Style.Triggers> <EventTrigger RoutedEvent="Binding.TargetUpdated" > <EventTrigger.Actions> <BeginStoryboard> <Storyboard> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00.2" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.4" Value="{x:Static Visibility.Collapsed}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.6" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.8" Value="{x:Static Visibility.Collapsed}"/> <DiscreteObjectKeyFrame KeyTime="00:00:01.0" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:01.2" Value="{x:Static Visibility.Collapsed}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </BeginStoryboard> </EventTrigger.Actions> </EventTrigger> </Style.Triggers> </Style> <ControlTemplate 
x:Key="QuoteDisplayTemplate" > <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding QuotePrice}" /> <TextBlock Style="{StaticResource PriceSpreadAlertStyle}"> <TextBlock.Text> <MultiBinding Converter="{StaticResource AskBidToSpreadConverter}"> <Binding Path="AskPrice" UpdateSourceTrigger="PropertyChanged" NotifyOnTargetUpdated="True" /> <Binding Path="BidPrice" UpdateSourceTrigger="PropertyChanged" NotifyOnTargetUpdated="True" /> </MultiBinding> </TextBlock.Text> </TextBlock> </StackPanel> </ControlTemplate> ```
2012/08/08
[ "https://Stackoverflow.com/questions/11868906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/480118/" ]
Give your image a bottom vertical alignment and the space goes away. **[jsFiddle example](http://jsfiddle.net/j08691/7Knyx/10/)** ``` img { vertical-align:bottom; } ```
You could also use the margin-top property to achieve that: <http://jsfiddle.net/meetravi/7Knyx/14/> P.S. It's always good to avoid negative values in CSS if you are planning to support relatively old browsers.
11,868,906
below i have a control template that comprises of two text box. The control template is used as a grid's cell display template and the datacontext is a bindinglist of my model object (which implement INotifyPropertyChanged). The second text box is initially Collapsed. However when "Ask" and "Bid" price updates happen, i want to 'flash" this text box by toggling visibility for about 1 second. The problem i'm seeing is that on initial load of the form, i do see the 2nd text box flash, but after that...nothing. Interestingly if i click on a cell grid (which activates the edit template) then click out of the cell (which reverts back to the display template, which is the template shown below), the 2nd textbox then flashes. Can anyone see an issue why the 2nd textbox would not 'flash' when AskPrice or BidPrice changes? I do see that the converter is being invoked as expected, but the update is not triggering the storyboard animation. ``` <Style x:Key="PriceSpreadAlertStyle" TargetType="TextBlock"> <Setter Property="Visibility" Value="Collapsed" /> <Setter Property="Background" Value="Red" /> <Style.Triggers> <EventTrigger RoutedEvent="Binding.TargetUpdated" > <EventTrigger.Actions> <BeginStoryboard> <Storyboard> <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetProperty="(UIElement.Visibility)"> <DiscreteObjectKeyFrame KeyTime="00:00:00.2" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.4" Value="{x:Static Visibility.Collapsed}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.6" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:00.8" Value="{x:Static Visibility.Collapsed}"/> <DiscreteObjectKeyFrame KeyTime="00:00:01.0" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:01.2" Value="{x:Static Visibility.Collapsed}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </BeginStoryboard> </EventTrigger.Actions> </EventTrigger> </Style.Triggers> </Style> <ControlTemplate 
x:Key="QuoteDisplayTemplate" > <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding QuotePrice}" /> <TextBlock Style="{StaticResource PriceSpreadAlertStyle}"> <TextBlock.Text> <MultiBinding Converter="{StaticResource AskBidToSpreadConverter}"> <Binding Path="AskPrice" UpdateSourceTrigger="PropertyChanged" NotifyOnTargetUpdated="True" /> <Binding Path="BidPrice" UpdateSourceTrigger="PropertyChanged" NotifyOnTargetUpdated="True" /> </MultiBinding> </TextBlock.Text> </TextBlock> </StackPanel> </ControlTemplate> ```
2012/08/08
[ "https://Stackoverflow.com/questions/11868906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/480118/" ]
Set your image to `img { display: block; }`. And please, next time, add your code here on SO to prevent link rot and let us easily review your code here.
You could also use the margin-top property to achieve that: <http://jsfiddle.net/meetravi/7Knyx/14/> P.S. It's always good to avoid negative values in CSS if you are planning to support relatively old browsers.
49,570,351
Safari 11 is crashing when I open the Web Inspector. The odd thing is, it works without failing if I don't open the console panel. My HTML5 application works well in all other browsers. There are no memory leaks, as it does not go beyond 50MB when taking memory heap snapshots. And there is no use of console methods. Is this a known issue in Safari 11?
2018/03/30
[ "https://Stackoverflow.com/questions/49570351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1248195/" ]
The only thing I can do to fix this is free up RAM. If I close a few applications and then try it again, it always works. Also, after it crashes a few times, you will not be able to inspect your device until you quit Safari completely and open it again.
Some say it seems to be a hardware issue, happening only when the battery is at 100%. Have a closer look here: <https://forums.developer.apple.com/thread/92290> Maybe it helps to keep your flashlight and camera on, then establish a cable connection and start the Web Inspector. **Anyway: this wasn't working for me.** **Another maybe**: connect via network instead of a cable connection. ***Things that might be too late by now***: **Advice 1**: do not upgrade above 10.x and use Safari 10.x along with iOS 10.x. **Advice 2**: put your devices in the trash or sell them anyway and stop using Apple altogether. 99% of my upsets lead to issues with iOS and Mac.
63,911,439
``` logged_at type name values 2020-08-17 00:02:22 weak AA 55 2020-08-17 00:12:20 weak AA 54 2020-08-17 00:22:24 weak AA 53 2020-08-17 00:32:25 weak AA 50 2020-08-17 00:42:28 strong AA 44 2020-08-17 00:52:22 strong AA 33 2020-08-17 01:02:20 strong AA 32 2020-08-17 01:22:24 weak AA 56 2020-08-17 01:32:25 weak AA 55 2020-08-17 01:42:28 weak AA 43 ``` Above is the dataframe. What I want to do is to treat each run of rows for a name, until the type changes, as a subset, and take the difference between the first and last values of each subset. The final desired shape looks like this: ``` logged_at type name values rank 2020-08-17 00:02:22 weak AA 5 1 2020-08-17 00:42:28 strong AA 12 2 2020-08-17 01:22:24 weak AA 13 3 ``` Whenever the type changes, a rank value should be assigned so that each subset can be distinguished. What I can think of now is iterating with itertuples, raising the rank and storing it in a new list whenever a type different from the previous one appears, and then adding that list as a new column... I don't know if this is efficient. There are about 300 million rows, so I am looking for the most efficient method.
2020/09/16
[ "https://Stackoverflow.com/questions/63911439", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12329709/" ]
You can break it into stages: ``` # get the first value of each run: top = df.loc[df["type"].ne(df["type"].shift())] # get the last value of each run: bottom = df.loc[df["type"].ne(df["type"].shift(-1))] # get the difference in values and generate the rank: top.assign(values=top["values"].array - bottom["values"].array, rank=range(1, 1 + len(top))) logged_at type name values rank 0 2020-08-17 00:02:22 weak AA 5 1 4 2020-08-17 00:42:28 strong AA 12 2 7 2020-08-17 01:22:24 weak AA 13 3 ``` You can reset the index. However, with 300 million rows, I do not think pandas is the right choice.
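The same run-boundary idea can be sketched without pandas, using only the standard library: group consecutive rows by type, take first-minus-last per run, and number the runs. The sample data below is taken from the question; variable names are illustrative only.

```python
from itertools import groupby

# (type, value) pairs in time order, taken from the question's sample data.
rows = [
    ("weak", 55), ("weak", 54), ("weak", 53), ("weak", 50),
    ("strong", 44), ("strong", 33), ("strong", 32),
    ("weak", 56), ("weak", 55), ("weak", 43),
]

result = []
# groupby only merges *consecutive* equal keys, which is exactly the
# "subset until the type changes" semantics from the question.
for rank, (typ, run) in enumerate(groupby(rows, key=lambda r: r[0]), start=1):
    values = [v for _, v in run]
    # first value minus last value of each consecutive run of one type
    result.append((typ, values[0] - values[-1], rank))

print(result)  # [('weak', 5, 1), ('strong', 12, 2), ('weak', 13, 3)]
```

This matches the desired output in the question (5, 12, 13 with ranks 1-3); the vectorised `shift()` comparison above is the same boundary test expressed column-wise.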
If the dataframe is sorted by time, just remove duplicates as follows (note that `drop_duplicates` takes a `subset` argument, not `by`). ``` df.drop_duplicates(subset='logged_at', keep='first') ```
32,885,532
I am doing a setbit operation in Redis to mark which users have been online on a particular day. I am doing a Redis get operation to get the value of the key. ``` coffee> redis.setbit "a",7,1 true coffee> redis.setbit "d",4,1 true coffee> redis.setbit "g",1,1 true coffee> redis.setbit "h",0,1 ``` And the output is ``` coffee> redis.get "a",(err,res)->console.log res.toString().charCodeAt(0) true coffee> 1 coffee> redis.get "d",(err,res)->console.log res.toString().charCodeAt(0) true coffee> 8 coffee> redis.get "g",(err,res)->console.log res.toString().charCodeAt(0) true coffee> 64 coffee> redis.get "h",(err,res)->console.log res.toString().charCodeAt(0) true coffee> 65533 ``` My problem is with the "h" key, where I am setting the 0th bit to 1. It should return 128 but returns 65533. Why is this so? My end goal is to get the bitmap from Redis in binary so that I can see which users were active on that particular day.
2015/10/01
[ "https://Stackoverflow.com/questions/32885532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4381216/" ]
This error occurs because of UTF-8 encoding. When we set the 0th bit to 1, the resulting byte does not follow UTF-8 rules. Now when we try to get it, we get the replacement character > > U+FFFD � replacement character used to replace an unknown or > unrepresentable character > > > and when we do a charCodeAt on it we get 65533. Read here: [UTF-8](https://en.wikipedia.org/wiki/UTF-8) and [Specials Unicode Block](https://en.wikipedia.org/wiki/Specials_(Unicode_block))
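The effect is easy to reproduce outside Redis. A small Python sketch (illustrative only; it does not talk to a Redis server) shows that the single byte produced by `SETBIT key 0 1` is invalid UTF-8 and decodes to U+FFFD, whose code point is 65533:

```python
# SETBIT key 0 1 turns on the first (most significant) bit of byte 0,
# so the stored value is the single byte 0x80.
raw = bytes([0b10000000])
assert raw == b"\x80"

# 0x80 is a UTF-8 continuation byte with no lead byte, so it is invalid
# on its own; lenient decoders substitute U+FFFD, the replacement character.
text = raw.decode("utf-8", errors="replace")
print(text, ord(text))  # � 65533
```

The keys "a", "d", and "g" happen to decode to valid ASCII bytes (1, 8, 64), which is why only "h" misbehaves; reading the value as raw bytes, as the other answer suggests, avoids the decoding step entirely.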
The secret is to put "return\_buffers: true" when you connect. This causes buffers rather than strings to be returned on any function calls that use that redis client. ``` const client = redis.createClient({ host: '192.168.1.89', port: 6379, return_buffers: true }); ```
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
Good unit tests are independent. The order in which the tests are run (or even if other tests are run at all) should not matter. Set up your fixture for each test, then tear it down (restore the state) afterwards. Most of the time you won't need a teardown method (implementation depends on the test framework) if you keep your variables local to the test method. If your library keeps track of state, you may have other issues (such as thread safety).
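A minimal sketch of this fixture pattern using Python's stdlib `unittest` (the `ArtifactStore` class and the test names are hypothetical stand-ins for the asker's library, not real APIs):

```python
import unittest

class ArtifactStore:
    """Toy stand-in for the library under test: holds named artifacts."""
    def __init__(self):
        self._items = set()

    def create(self, name):
        self._items.add(name)

    def remove(self, name):
        self._items.discard(name)

    def exists(self, name):
        return name in self._items

class ArtifactTests(unittest.TestCase):
    def setUp(self):
        # Each test builds its own fresh, known-clean world...
        self.store = ArtifactStore()
        self.store.create("artifact")

    def tearDown(self):
        # ...and state is restored afterwards, so test order never matters.
        self.store = None

    def test_create_sub_artifact(self):
        self.store.create("sub-artifact")
        self.assertTrue(self.store.exists("sub-artifact"))

    def test_remove_artifact(self):
        self.store.remove("artifact")
        self.assertFalse(self.store.exists("artifact"))

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ArtifactTests)
)
print(result.wasSuccessful())  # True
```

Because `setUp` rebuilds the world per test, the creation test never depends on the removal test having run (or vice versa), and a failure in one cannot "break the world" for the others.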
I don't know what language you are programming in. In Java, you can create TestSuites with the JUnit API. You add single sub-tests to the TestSuite and run the TestSuite as a whole.
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
Going by the strict definition of unit testing used by Kent Beck and co, it could well **actually be a bad idea** to create unit tests in your situation. What you definitely and unarguably need for a library to be used by anyone outside shouting range of your workstation is a full set of user-level API system tests. Ones that exercise the complete system in the same way client code would, not piece-by-piece in isolation. Once you have those, then depending on: * execution time (influenced by your language and the nature of what you are doing) * internal hidden complexity * size of the API * number of simultaneous developers on the library * build time it could well be that you can set up a complete set of system tests that covers everything that needs covering and executes in seconds. You could supplement this by a small set of focused unit tests on the lowest-level and most complex 10% of the code, but full coverage of everything at unit level could well be wasted effort. Note: in the case of a system that is a library, a unit test tool like JUnit is perfectly well suited to creating such complete-system tests. So a lot of people would end up calling them unit tests. Which is fine, as long as you don't fall for the argument that giving them that name changes anything about the way they should work. The qualities that make good unit tests not only do not apply, they may well be actively harmful.
Here are some best practices that will help: * Make tests independent, so they can be run alone, and in any sequence. * Start every test in a known, clean state. * Build up the environment using fixtures. Again look at db:unit. I like to have a good "canonical" state of the database, and restore it before each test (or roll back). I disagree with one assumption you have: that a test must RESTORE its state before it finishes. You can also say it's the responsibility of the test runner to do this; tools like db:unit do this. They allow you to write your tests more freely. Your sequential test makes sense, but it becomes difficult in practice. (I've tried.) As you mentioned, you don't get good reports of the failures. In addition, as the program grows beyond 4 tests, you'll need to figure out where in the sequence to insert a test so that the data is just right for it. It will quickly become too hard to reason about. (Think 100s or 1000s of tests.) That's why test fixtures (or factories) make sense. Kent Beck's Test Driven Development: By Example discusses this topic thoroughly.
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
Good unit tests are independent. The order in which the tests are run (or even if other tests are run at all) should not matter. Set up your fixture for each test, then tear it down (restore the state) afterwards. Most of the time you won't need a teardown method (implementation depends on the test framework) if you keep your variables local to the test method. If your library keeps track of state, you may have other issues (such as thread safety).
Here are some best practices that will help: * Make tests independent, so they can be run alone, and in any sequence. * Start every test in a known, clean state. * Build up the environment using fixtures. Again, look at DbUnit. I like to have a good "canonical" state of the database and restore it before each test (or roll back). I disagree with one assumption you have: that a test must RESTORE its state before it finishes. You can also say it's the responsibility of the test runner to do this; tools like DbUnit do. They allow you to write your tests more freely. Your sequential test makes sense, but it becomes difficult in practice (I've tried). As you mentioned, you don't get good reports of the failures. In addition, as the program grows beyond 4 tests, you'll need to figure out where in the sequence to insert a test so that the data is just right for it. It will quickly become too hard to reason about (think 100s or 1000s of tests). That's why test fixtures (or factories) make sense. Kent Beck's *Test Driven Development: By Example* discusses this topic thoroughly.
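The fixture idea above can be sketched with Python's stdlib `unittest` (the in-memory store, artifact names, and test names here are hypothetical, chosen only to mirror the question's create/remove scenario):

```python
import unittest

# Hypothetical in-memory store standing in for a real database; the
# "canonical" state is restored before every test by setUp.
CANONICAL_STATE = {"artifacts": ["seed"]}

class ArtifactStoreTest(unittest.TestCase):
    def setUp(self):
        # Start each test from the same known, clean state; restoring it
        # is the fixture's job, not the previous test's.
        self.db = {k: list(v) for k, v in CANONICAL_STATE.items()}

    def test_create_artifact(self):
        self.db["artifacts"].append("a1")
        self.assertEqual(self.db["artifacts"], ["seed", "a1"])

    def test_remove_artifact(self):
        # Independent of test_create_artifact: the fixture, not a prior
        # test in a sequence, provides the artifact to remove.
        self.db["artifacts"].remove("seed")
        self.assertEqual(self.db["artifacts"], [])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ArtifactStoreTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each test rebuilds its own state, the two tests pass when run alone, together, or in either order.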
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
By "restore the state of the world" you mean clean up a database perhaps, or something similar? In which case you probably already have some unit tests which mock out your persistence layer. This would let you run unit tests in isolation, which would not depend on preserving state between tests. So that leaves you with your other tests, which sound a bit more like "black box" tests, or integration tests, or whatever you want to call them. They rely on external state which has to be set up, monitored, torn down etc. You should definitely expect them to be more brittle than unit tests. My personal opinion is that once you get to these tests... it really depends. What you're suggesting doesn't sound unreasonable. I tend to build a strong suite of isolated unit tests, and rely on the tests you've described for final "glue testing". So they're not too detailed and don't try and exercise every facet of a library.
Don't know what language you are programming with. In Java, you can create TestSuites with the JUnit API. You add single sub-tests to the TestSuite and run the TestSuite as a whole.
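The JUnit `TestSuite` approach described above has a direct analogue in Python's stdlib `unittest`; this sketch (with hypothetical test classes) shows the same pattern of adding single sub-tests to a suite and running the suite as a whole:

```python
import unittest

class CreateArtifactTest(unittest.TestCase):
    def test_create(self):
        # Placeholder assertion standing in for a real creation check.
        self.assertTrue(True)

class RemoveArtifactTest(unittest.TestCase):
    def test_remove(self):
        # Placeholder assertion standing in for a real removal check.
        self.assertTrue(True)

# Add single sub-tests to the suite, then run the suite as a whole,
# mirroring JUnit's TestSuite API.
suite = unittest.TestSuite()
suite.addTest(CreateArtifactTest("test_create"))
suite.addTest(RemoveArtifactTest("test_remove"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```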
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
Going by the strict definition of unit testing used by Kent Beck and co., it could well **actually be a bad idea** to create unit tests in your situation. What you definitely and unarguably need for a library that will be used by anyone outside shouting range of your workstation is a full set of user-level API system tests: ones that exercise the complete system in the same way client code would, not piece by piece in isolation. Once you have those, then depending on: * execution time (influenced by your language and the nature of what you are doing) * internal hidden complexity * size of the API * number of simultaneous developers on the library * build time it could well be that a complete set of system tests covers everything that needs covering and executes in seconds. You could supplement this with a small set of focused unit tests on the lowest-level and most complex 10% of the code, but full coverage of everything at the unit level could well be wasted effort. Note: in the case of a system that is a library, a unit test tool like JUnit is perfectly well suited to creating such complete-system tests, so a lot of people would end up calling them unit tests. Which is fine, as long as you don't fall for the argument that giving them that name changes anything about the way they should work. The qualities that make good unit tests not only do not apply here, they may well be actively harmful.
Read the book "xUnit Test Patterns" by Gerard Meszaros (<http://xunitpatterns.com/>). It covers the subject well.
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
By "restore the state of the world" you mean clean up a database perhaps, or something similar? In which case you probably already have some unit tests which mock out your persistence layer. This would let you run unit tests in isolation, which would not depend on preserving state between tests. So that leaves you with your other tests, which sound a bit more like "black box" tests, or integration tests, or whatever you want to call them. They rely on external state which has to be set up, monitored, torn down etc. You should definitely expect them to be more brittle than unit tests. My personal opinion is that once you get to these tests... it really depends. What you're suggesting doesn't sound unreasonable. I tend to build a strong suite of isolated unit tests, and rely on the tests you've described for final "glue testing". So they're not too detailed and don't try and exercise every facet of a library.
Read the book "xUnit Test Patterns" by Gerard Meszaros (<http://xunitpatterns.com/>). It covers the subject well.
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
By "restore the state of the world" you mean clean up a database perhaps, or something similar? In which case you probably already have some unit tests which mock out your persistence layer. This would let you run unit tests in isolation, which would not depend on preserving state between tests. So that leaves you with your other tests, which sound a bit more like "black box" tests, or integration tests, or whatever you want to call them. They rely on external state which has to be set up, monitored, torn down etc. You should definitely expect them to be more brittle than unit tests. My personal opinion is that once you get to these tests... it really depends. What you're suggesting doesn't sound unreasonable. I tend to build a strong suite of isolated unit tests, and rely on the tests you've described for final "glue testing". So they're not too detailed and don't try and exercise every facet of a library.
Here are some best practices that will help: * Make tests independent, so they can be run alone, and in any sequence. * Start every test in a known, clean state. * Build up the environment using fixtures. Again, look at DbUnit. I like to have a good "canonical" state of the database and restore it before each test (or roll back). I disagree with one assumption you have: that a test must RESTORE its state before it finishes. You can also say it's the responsibility of the test runner to do this; tools like DbUnit do. They allow you to write your tests more freely. Your sequential test makes sense, but it becomes difficult in practice (I've tried). As you mentioned, you don't get good reports of the failures. In addition, as the program grows beyond 4 tests, you'll need to figure out where in the sequence to insert a test so that the data is just right for it. It will quickly become too hard to reason about (think 100s or 1000s of tests). That's why test fixtures (or factories) make sense. Kent Beck's *Test Driven Development: By Example* discusses this topic thoroughly.
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
Going by the strict definition of unit testing used by Kent Beck and co., it could well **actually be a bad idea** to create unit tests in your situation. What you definitely and unarguably need for a library that will be used by anyone outside shouting range of your workstation is a full set of user-level API system tests: ones that exercise the complete system in the same way client code would, not piece by piece in isolation. Once you have those, then depending on: * execution time (influenced by your language and the nature of what you are doing) * internal hidden complexity * size of the API * number of simultaneous developers on the library * build time it could well be that a complete set of system tests covers everything that needs covering and executes in seconds. You could supplement this with a small set of focused unit tests on the lowest-level and most complex 10% of the code, but full coverage of everything at the unit level could well be wasted effort. Note: in the case of a system that is a library, a unit test tool like JUnit is perfectly well suited to creating such complete-system tests, so a lot of people would end up calling them unit tests. Which is fine, as long as you don't fall for the argument that giving them that name changes anything about the way they should work. The qualities that make good unit tests not only do not apply here, they may well be actively harmful.
Good unit tests are independent. The order in which the tests are run (or even if other tests are run at all) should not matter. Set up your fixture for each test, then tear it down (restore the state) afterwards. Most of the time you won't need a teardown method (implementation depends on the test framework) if you keep your variables local to the test method. If your library keeps track of state, you may have other issues (such as thread safety).
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
Going by the strict definition of unit testing used by Kent Beck and co., it could well **actually be a bad idea** to create unit tests in your situation. What you definitely and unarguably need for a library that will be used by anyone outside shouting range of your workstation is a full set of user-level API system tests: ones that exercise the complete system in the same way client code would, not piece by piece in isolation. Once you have those, then depending on: * execution time (influenced by your language and the nature of what you are doing) * internal hidden complexity * size of the API * number of simultaneous developers on the library * build time it could well be that a complete set of system tests covers everything that needs covering and executes in seconds. You could supplement this with a small set of focused unit tests on the lowest-level and most complex 10% of the code, but full coverage of everything at the unit level could well be wasted effort. Note: in the case of a system that is a library, a unit test tool like JUnit is perfectly well suited to creating such complete-system tests, so a lot of people would end up calling them unit tests. Which is fine, as long as you don't fall for the argument that giving them that name changes anything about the way they should work. The qualities that make good unit tests not only do not apply here, they may well be actively harmful.
Don't know what language you are programming with. In Java, you can create TestSuites with the JUnit API. You add single sub-tests to the TestSuite and run the TestSuite as a whole.
1,394,636
Context ------- I'm struggling to write a set of unit-tests for a library/framework I'm designing. For context, please, think of my library as an object layer over a hierarchical set of related objects. Question -------- Basically, I'm trying to adhere to the principles and best practices regarding unit testing as seen in [some posts](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [here](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?"), but they seem conflicting with regards to specifically unit testing a library or framework. For instance, a basic test concerns itself with "creating an artifact". And another one with "removing an artifact". But, since a unit test is supposed to be standalone and restore the state of the world after its completion, both those tests seem somewhat related anyway : when testing the artifact creation, we need to cleanup the state at the end of the test by actually removing it. This means that artifact removal is itself implicitly tested. And the same reasoning applies to testing artifact removal : in order to setup the world so that artifact removal is testable, we need to create a new artifact first. The situation is compounded when we need to unit test the creation and removal of related sub-artifacts, for which we will need to setup the world accordingly. What I'm leaning towards is perform a set of related unit tests in sequence, so that each unit test is discrete (i.e. only tests one thing and one thing only), but depends on previous tests in the sequence to progressively setup the world. 
Then, my sequence could look like this: [create artifact]->[create sub artifact]->[remove sub artifact]->[remove artifact] With this principle, the entire library/framework is unit-tested, and the state of the world is restored at the end of the entire run of the test suite. However, this means that any failure in the middle of the test suite "breaks the world". What are some best practices and guidelines that may be useful to reconcile those conflicting needs? References ---------- [What makes a good unit test?](https://stackoverflow.com/questions/61400/what-makes-a-good-unit-test "What makes a good unit test?") [Is it bad form to have unit test calling other unit tests?](https://stackoverflow.com/questions/1368900/unit-testing-is-it-bad-form-to-have-unit-test-calling-other-unit-tests "Is it bad form to have unit test calling other unit tests?")
2009/09/08
[ "https://Stackoverflow.com/questions/1394636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18865/" ]
Good unit tests are independent. The order in which the tests are run (or even if other tests are run at all) should not matter. Set up your fixture for each test, then tear it down (restore the state) afterwards. Most of the time you won't need a teardown method (implementation depends on the test framework) if you keep your variables local to the test method. If your library keeps track of state, you may have other issues (such as thread safety).
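A minimal sketch of the set-up/tear-down pattern described above, using Python's stdlib `unittest` with a temporary file as the stateful "world" to restore (the artifact and test names are hypothetical):

```python
import os
import tempfile
import unittest

class FileArtifactTest(unittest.TestCase):
    def setUp(self):
        # Build the fixture fresh for this test alone.
        fd, self.path = tempfile.mkstemp()
        os.close(fd)

    def tearDown(self):
        # Restore the state of the world even if the test body failed.
        if os.path.exists(self.path):
            os.remove(self.path)

    def test_write_artifact(self):
        with open(self.path, "w") as f:
            f.write("artifact")
        with open(self.path) as f:
            self.assertEqual(f.read(), "artifact")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FileArtifactTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because `tearDown` runs whether or not the test passes, a mid-suite failure cannot "break the world" for the tests that follow.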
Read the book "xUnit Test Patterns" by Gerard Meszaros (<http://xunitpatterns.com/>). It covers the subject well.
33,501,217
I want to order some command output (using pipeline), taking into account some field from the output. For example, if I run `l` command, I have: ``` -rw-r----- 1 matias matias 67843408 sep 11 08:55 file1 -rw-r----- 1 matias matias 1952 oct 23 12:05 file2 -rw-r----- 1 matias matias 965 oct 23 10:14 asd.txt -rw-r----- 1 matias matias 892743 sep 3 08:36 aaa.txt -rw-r----- 1 matias matias 892743 ago 18 08:09 qwe ``` I want to order this output according for example by the day of the month field, so the output should be: ``` -rw-r----- 1 matias matias 892743 sep 3 08:36 aaa.txt -rw-r----- 1 matias matias 67843408 sep 11 08:55 file1 -rw-r----- 1 matias matias 892743 ago 18 08:09 qwe -rw-r----- 1 matias matias 1952 oct 23 12:05 file2 -rw-r----- 1 matias matias 965 oct 23 10:14 asd.txt ``` How can I do this? I usually use `grep`, `cat`, `l`, `ls`, `ll`, but I can't figure out how to achieve this.
2015/11/03
[ "https://Stackoverflow.com/questions/33501217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/977081/" ]
You can use `sort` with the 7th column: ``` $ sort -k7 -n file -rw-r----- 1 matias matias 892743 sep 3 08:36 aaa.txt -rw-r----- 1 matias matias 67843408 sep 11 08:55 file1 -rw-r----- 1 matias matias 892743 ago 18 08:09 qwe -rw-r----- 1 matias matias 1952 oct 23 12:05 file2 -rw-r----- 1 matias matias 965 oct 23 10:14 asd.txt ``` From `man sort`: ``` -n, --numeric-sort compare according to string numerical value -k, --key=KEYDEF sort via a key; KEYDEF gives location and type ``` However, this is quite fragile and, in general, [you should not parse the output of `ls`](http://mywiki.wooledge.org/ParsingLs).
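The same key-based ordering that `sort -k7 -n` performs can be sketched in Python (sample lines borrowed from the question; the same parsing-`ls` caveat applies to any real script):

```python
# Order lines by the numeric 7th field (the day of the month).
lines = [
    "-rw-r----- 1 matias matias 67843408 sep 11 08:55 file1",
    "-rw-r----- 1 matias matias 1952 oct 23 12:05 file2",
    "-rw-r----- 1 matias matias 892743 sep 3 08:36 aaa.txt",
]
# split() collapses runs of whitespace, so 1-based field 7 is index 6
ordered = sorted(lines, key=lambda line: int(line.split()[6]))
for line in ordered:
    print(line)
```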
Use `ls -t` or `ls -tr`. You can google this.
26,909,861
I'm getting an exception when trying to use a decimal value with FunScript. It can be reproduced simply by using: ``` Globals.window.alert(Globals.JSON.stringify(3M)) ``` The exception says: ``` System.Exception was unhandled Message: An unhandled exception of type 'System.Exception' occurred in FunScript.dll Additional information: Could not compile expression: Call (None, MakeDecimal, [Value (13), Value (0), Value (0), Value (false), Value (0uy)]) ``` I suspect this is a FunScript limitation, but I just wanted to check. In that case, How could a decimal value be used in the context of FunScript code? Or How could Funscript be extended in order to fix this?
2014/11/13
[ "https://Stackoverflow.com/questions/26909861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2660176/" ]
The decimal primitive is a [TODO feature](https://github.com/ZachBray/FunScript/blob/master/src/main/FunScript/PrimitiveTypes.fs#L23). I guess the best way to tackle it would be reimplementing the System.Decimal structure using the recently open sourced [.NET Framework reference source](http://referencesource.microsoft.com/#mscorlib/system/decimal.cs) and then adding the appropriate expression replacements to the compiler, like it's done for other types which do not translate directly from .NET to JavaScript like DateTime, TimeSpan, Regex, F# option or list, etc. I guess the feature hasn't been prioritized. If you need it, can you please open an issue in the [Github page](https://github.com/ZachBray/FunScript/issues) so one of the contributors (maybe myself) can start implementing it? If you think you can do it yourself, please feel free to submit a pull request. Thanks!
It is rather a JavaScript limitation, because JavaScript has only binary floating point. One solution would be creating your own type containing two integers: one for the integer part and one for the fractional part.
46,017,228
I have a JSON response body that I convert to an array using `json_decode()`, but when I try to loop over a single array I receive the error > > Illegal String Offset > > > This is my JSON response ``` array(5) { ["StartDate"]=> string(10) "2016-08-29" ["EndDate"]=> string(10) "2016-09-01" ["Currency"]=> string(3) "IDR" ["StartBalance"]=> string(12) "100000000.00" ["Data"]=> array(20) { [0]=> array(6) { ["TransactionDate"]=> string(5) "29/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(10) "9000000.00" ["TransactionName"]=> string(17) "TRSF E-BANKING DB" ["Trailer"]=> string(58) "2808/ACDFT/WS950519000000.00 REK KORAN DARI GIRO KE TAPRES" } [1]=> array(6) { ["TransactionDate"]=> string(5) "29/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "C" ["TransactionAmount"]=> string(8) "25000.00" ["TransactionName"]=> string(17) "TRSF E-BANKING CR" ["Trailer"]=> string(26) "08/28 95031 FOR CORPDUMMY1" } [2]=> array(6) { ["TransactionDate"]=> string(5) "29/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(9) "200000.00" ["TransactionName"]=> string(17) "BA JASA E-BANKING" ["Trailer"]=> string(36) "2908/TRCHG/WS95051BIAYA TRANSFER SME" } [3]=> array(6) { ["TransactionDate"]=> string(5) "29/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(10) "1000000.00" ["TransactionName"]=> string(17) "BYR VIA E-BANKING" ["Trailer"]=> string(51) "29/08 WSID95051 PENERIMAAN NEGARA 115110002341111" } [4]=> array(6) { ["TransactionDate"]=> string(5) "29/08" ["BranchCode"]=> string(4) "0061" ["TransactionType"]=> string(1) "C" ["TransactionAmount"]=> string(9) "900000.00" ["TransactionName"]=> string(13) "SETORAN TUNAI" ["Trailer"]=> string(14) "PEMBUKAAN REK." 
} [5]=> array(6) { ["TransactionDate"]=> string(5) "30/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(8) "10000.00" ["TransactionName"]=> string(17) "BA JASA E-BANKING" ["Trailer"]=> string(36) "3008/TRCHG/WS95051BIAYA TRANSFER SME" } [6]=> array(6) { ["TransactionDate"]=> string(5) "30/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(10) "5000000.00" ["TransactionName"]=> string(17) "TRSF E-BANKING DB" ["Trailer"]=> string(61) "3008/FTRTG/00001000110042 other bank TRANSFER VIA RTGS Dummy4" } [7]=> array(6) { ["TransactionDate"]=> string(5) "30/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "C" ["TransactionAmount"]=> string(11) "70000000.00" ["TransactionName"]=> string(17) "TRSF E-BANKING CR" ["Trailer"]=> string(53) "3008/FTSCY/WS95051 70000000.00 Ticket Payment DUMMY13" } [8]=> array(6) { ["TransactionDate"]=> string(5) "30/08" ["BranchCode"]=> string(4) "0061" ["TransactionType"]=> string(1) "C" ["TransactionAmount"]=> string(10) "5000000.00" ["TransactionName"]=> string(8) "NK - LLG" ["Trailer"]=> string(0) "" } [9]=> array(6) { ["TransactionDate"]=> string(5) "30/08" ["BranchCode"]=> string(4) "0998" ["TransactionType"]=> string(1) "C" ["TransactionAmount"]=> string(9) "800000.00" ["TransactionName"]=> string(15) "SETORAN VIA CDM" ["Trailer"]=> string(25) "3008 WSID:Z9991 DUMMY16" } [10]=> array(6) { ["TransactionDate"]=> string(5) "31/08" ["BranchCode"]=> string(4) "0015" ["TransactionType"]=> string(1) "C" ["TransactionAmount"]=> string(10) "1000000.00" ["TransactionName"]=> string(13) "NK - KU MASUK" ["Trailer"]=> string(0) "" } [11]=> array(6) { ["TransactionDate"]=> string(5) "31/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(8) "30000.00" ["TransactionName"]=> string(9) "BIAYA ADM" ["Trailer"]=> string(0) "" } [12]=> array(6) { ["TransactionDate"]=> string(5) 
"31/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "C" ["TransactionAmount"]=> string(8) "11100.00" ["TransactionName"]=> string(5) "BUNGA" ["Trailer"]=> string(0) "" } [13]=> array(6) { ["TransactionDate"]=> string(5) "31/08" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(7) "2220.00" ["TransactionName"]=> string(11) "PAJAK BUNGA" ["Trailer"]=> string(0) "" } [14]=> array(6) { ["TransactionDate"]=> string(4) "PEND" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(9) "100000.00" ["TransactionName"]=> string(17) "TRSF E-BANKING DB" ["Trailer"]=> string(56) "0109/FTSCY/WS95051 100000.00 Online Transfer PT DUMMY2" } [15]=> array(6) { ["TransactionDate"]=> string(4) "PEND" ["BranchCode"]=> string(4) "0061" ["TransactionType"]=> string(1) "C" ["TransactionAmount"]=> string(10) "3000000.00" ["TransactionName"]=> string(8) "NK - LLG" ["Trailer"]=> string(0) "" } [16]=> array(6) { ["TransactionDate"]=> string(4) "PEND" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(9) "250000.00" ["TransactionName"]=> string(17) "TRSF E-BANKING DB" ["Trailer"]=> string(44) "0109/FTSCY/WS95051 250800.00 Transfer DUMMY1" } [17]=> array(6) { ["TransactionDate"]=> string(4) "PEND" ["BranchCode"]=> string(4) "0000" ["TransactionType"]=> string(1) "D" ["TransactionAmount"]=> string(9) "100000.00" ["TransactionName"]=> string(17) "BA JASA E-BANKING" ["Trailer"]=> string(36) "0109/TRCHG/WS95051BIAYA TRANSFER SME" } [18]=> array(6) { ["TransactionDate"]=> string(4) "PEND" ["BranchCode"]=> string(4) "0101" ["TransactionType"]=> string(1) "C" ["TransactionAmount"]=> string(8) "10000.00" ["TransactionName"]=> string(11) "KR OTOMATIS" ["Trailer"]=> string(20) "DUMMY7 039903811112" } [19]=> array(6) { ["TransactionDate"]=> string(4) "PEND" ["BranchCode"]=> string(4) "0038" ["TransactionType"]=> string(1) "D" 
["TransactionAmount"]=> string(9) "100000.00" ["TransactionName"]=> string(13) "TARIKAN TUNAI" ["Trailer"]=> string(0) "" } } } ``` This is how I convert the JSON to an array ``` $output = curl_exec($ch); // This is API Response curl_close($ch); $result = json_decode($output,true); return view("bca.cekmutasi", [ "result" => $result ]); ``` This is my view ``` @if (isset($result)) @foreach ($result as $key => $value) <div class="col-xs-6"> <ul class="list-unstyled" style="line-height: 2"> <li><span class="fa fa-circle text-success"></span> TransactionDate : <b>{{ $value['StartDate']['EndDate']['Currency']['StartBalance']['StartBalance']['Data'][0]['TransactionDate'] }}</b></li> </ul> </div> @endforeach @endif ``` If I just put `{{ $result['Data'][0]['TransactionDate'] }}` without the foreach, it shows the result. How do I fix this?
2017/09/02
[ "https://Stackoverflow.com/questions/46017228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8012110/" ]
You are looping over the wrong level: the transactions live under `$result['Data']`, so iterate that in the `foreach` ``` @if (isset($result)) @foreach ($result['Data'] as $key => $value) <div class="col-xs-6"> <ul class="list-unstyled" style="line-height: 2"> <li><span class="fa fa-circle text-success"></span> TransactionDate : <b>{{ $value['TransactionDate'] }}</b></li> </ul> </div> @endforeach @endif ```
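The nested-array fix is the same in any language; here is a sketch in Python, with a trimmed-down, made-up version of the payload (`json.loads` playing the role of `json_decode($output, true)`):

```python
import json

# Hypothetical trimmed-down version of the API response from the question.
output = '''
{"StartDate": "2016-08-29", "Currency": "IDR",
 "Data": [
   {"TransactionDate": "29/08", "TransactionAmount": "9000000.00"},
   {"TransactionDate": "30/08", "TransactionAmount": "25000.00"}
 ]}
'''
result = json.loads(output)

# Iterate the nested list under "Data", not the top-level mapping.
dates = [row["TransactionDate"] for row in result["Data"]]
print(dates)
```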
You have to close it like this ``` { "StartDate": "2016-09-01", "EndDate": "2016-09-01", "Currency": "IDR", "StartBalance": "94163880.00", "Data": [{ "TransactionDate": "PEND", "BranchCode": "0000", "TransactionType": "D", "TransactionAmount": "100000.00", "TransactionName": "TRSF E-BANKING DB", "Trailer": "0109/FTSCY/WS95051 100000.00 Online Transfer PT DUMMY2" }, { "TransactionDate": "PEND", "BranchCode": "0061", "TransactionType": "C", "TransactionAmount": "3000000.00", "TransactionName": "NK - LLG", "Trailer": "" } ] } ```
41,667,637
I just started coding in Swift and I am at the point that I can get a single value out of the JSON, but I can't seem to get all the values out of it by looping through the array. So my question is: how do I get all the values out and view them as float or string? Here is my code: ``` let url = URL(string: "http://api.fixer.io/latest") let task = URLSession.shared.dataTask(with: url!) { (data, response, error) in if error != nil { print ("ERROR") } else { if let content = data { do { //Array let myJson = try JSONSerialization.jsonObject(with: content, options: JSONSerialization.ReadingOptions.mutableContainers) as AnyObject //print(myJson) for items in myJson [AnyObject] { print(items) } //here is the single value part, it looks for the rates then it puts it in label. if let rates = myJson["rates"] as? NSDictionary{ if let currency = rates["AUD"]{ print(currency); self.label.text=String(describing: currency) } } } catch { } } } } task.resume() ```
2017/01/15
[ "https://Stackoverflow.com/questions/41667637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4966014/" ]
Try this code: ``` import UIKit class ViewController: UIViewController { override func viewDidLoad() { super.viewDidLoad() self.getJson() } func getJson(){ let url = URL(string: "http://api.fixer.io/latest") let task = URLSession.shared.dataTask(with: url!) { (data, response, error) in if error != nil { print ("ERROR") } else { if let content = data { do { //Dic guard let myJson:[String:Any] = try JSONSerialization.jsonObject(with: content, options: JSONSerialization.ReadingOptions.mutableContainers) as? [String:Any] else {return} //print(myJson) for items in myJson { print(items) } //here is the single value part, it looks for the rates then it puts it in label. if let rates = myJson["rates"] as? NSDictionary{ if let currency = rates["AUD"]{ print(currency); // self.label.text=String(describing: currency) } } } catch { } } } } task.resume() } } ``` And the result in the console is like below: [![enter image description here](https://i.stack.imgur.com/QX9lJl.jpg)](https://i.stack.imgur.com/QX9lJl.jpg) The `myJson` is the dictionary what you want.
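For comparison, the same decode-then-iterate pattern can be sketched language-agnostically (here in Python); the payload below is a made-up fixer.io-style sample, not live data:

```python
import json

# Made-up fixer.io-style payload for illustration.
payload = '{"base": "EUR", "rates": {"AUD": 1.4167, "USD": 1.0637}}'
my_json = json.loads(payload)

# Single value, like myJson["rates"]["AUD"] in the Swift code.
aud = my_json["rates"]["AUD"]

# Loop over every currency/value pair.
for key, value in my_json["rates"].items():
    print(key, float(value))
```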
try this out: ``` if let currency = rates["AUD"] as? NSDictionary{ for(key,value) in currency { // the key will be your currency format and value would be your currency value } } ```
41,667,637
I just started coding in Swift and I am at the point that I can get a single value out of the JSON, but I can't seem to get all the values out of it by looping through the array. So my question is: how do I get all the values out and view them as float or string? Here is my code: ``` let url = URL(string: "http://api.fixer.io/latest") let task = URLSession.shared.dataTask(with: url!) { (data, response, error) in if error != nil { print ("ERROR") } else { if let content = data { do { //Array let myJson = try JSONSerialization.jsonObject(with: content, options: JSONSerialization.ReadingOptions.mutableContainers) as AnyObject //print(myJson) for items in myJson [AnyObject] { print(items) } //here is the single value part, it looks for the rates then it puts it in label. if let rates = myJson["rates"] as? NSDictionary{ if let currency = rates["AUD"]{ print(currency); self.label.text=String(describing: currency) } } } catch { } } } } task.resume() ```
2017/01/15
[ "https://Stackoverflow.com/questions/41667637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4966014/" ]
I strongly recommend that you use [SwiftyJSON](https://github.com/SwiftyJSON/SwiftyJSON#loop) to deal with JSON. It's extremely easy to learn and use. First, you should install `SwiftyJSON` via CocoaPods (or any other way you like). Then you can code it simply like below: ``` let url = URL(string: "http://api.fixer.io/latest") let task = URLSession.shared.dataTask(with: url!) { (data, response, error) in if error != nil { print ("ERROR") } else { if let content = data { // Initialization let myJson = JSON(data: content) // Getting a string using a path to the element self.label.text = myJson["rates"]["AUD"].stringValue // Loop test for (key,value):(String, JSON) in myJson["rates"] { print("key is :\(key), Value:\(value.floatValue)") } } } } task.resume() ```
try this out: ``` if let currency = rates["AUD"] as? NSDictionary{ for(key,value) in currency { // the key will be your currency format and value would be your currency value } } ```
16,381,649
I wonder why the output is 0 (zero) for the code snippet below. Can anyone please clarify why this code outputs zero? ``` <?php function a($number) { return (b($number) * $number); } function b(&$number) { ++$number; } echo a(5); // output 0(zero) ? ?> ```
2013/05/05
[ "https://Stackoverflow.com/questions/16381649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/754402/" ]
You never return any value from the function, and you're trying to `echo` the return value. ``` function b(&$number) { return ++$number; } ``` Note that this is a silly example for a function that takes its parameter by reference, since you don't have a reference to the original value `5`. Something like this would be more appropriate: ``` function b( &$number) { ++$number; } $num = 5; b( $num); echo $num; // Prints 6 ```
You are echoing the result of `a()`, but `b()` never returns anything, so `b($number)` evaluates to `NULL` and the multiplication `NULL * 5` is coerced to `0`. You must either return a value: ``` return ++$number; ``` or echo the variable directly: ``` $number = 5; b($number); echo $number; ```
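The same pitfall can be sketched in Python: a function with no `return` yields `None` (PHP's `NULL`), and PHP silently coerces `NULL` to `0` in arithmetic, which is why the echo printed zero:

```python
def b_broken(number):
    number += 1  # increments a local copy and returns nothing (None)

def b_fixed(number):
    number += 1
    return number

print(b_broken(5))  # None -- the Python counterpart of PHP's NULL
print(b_fixed(5))   # 6
```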
2,319
They're both similar in execution and used as counters against hip techniques. Are there any reasons to prefer using one instead of the other in various situations?
2013/10/30
[ "https://martialarts.stackexchange.com/questions/2319", "https://martialarts.stackexchange.com", "https://martialarts.stackexchange.com/users/1437/" ]
I can’t comment, so I'll answer it here instead. As @Dave Liepmann said, it depends on your strength and your ability. Utsuri Goshi is **hard** to do. The hardest move I've done, seriously. Using the “right” move is always a decision based on your opponent's reaction. If you start and manage to raise him off his feet (which is the first step of both moves) as set-up for ushiro goshi, 99% of the competitors will use their right leg to "hook" the inside of yours to prevent you from shifting correctly and easily over your shoulder/to your side, and often your grip isn't ideal and they will slip, turn, and try/manage to fall on you if you still try to get the move off. I was a huge Harai goshi user, and got ushiro goshi used *a lot* against me... And you train yourself *a lot* to counter it. It's obvious when someone tries it, and not too hard to counter. I'd say in 14 or 15 years of competitions, it has been successfully used less than 10 times against me, and I threw it about 2 or 3 times for anything higher than yuko. On the other hand, Utsuri goshi gives you more control over your opponent, since once you manage to raise him off his feet, his first reflex will be to press or push against you, thus making it easier to pass in front of him, and once you do, you cannot miss scoring Ippon. Plus it's a hugely spectacular move to pull off. But most of all, it's counterintuitive for someone to really block you correctly, and it has a big surprise factor. I was training with the elite team here, and I've never seen anyone practice how to counter or dodge utsuri goshi, because it is *seriously* hard to do.
I don't have an *utsuri goshi*, but I like *ushiro goshi* when my opponent hasn't broken my balance forward. After a successful hip block, *ushiro goshi* often feels like just an extra little "pop" of my hips to send someone up. My understanding is that *utsuri goshi* is a somewhat more difficult and skillful throw, but it seems to afford a greater deal of control over the person being thrown. That is, *uke* can be controlled longer during *utsuri goshi* than with *ushiro goshi*, since *uke* can be carried on the hip and connected to your center further along the trip to the ground. Then again, like I said, I don't use the throw.
52,294,299
I have an application that requires opening port `80`. In accordance with [this](https://superuser.com/questions/710253/allow-non-root-process-to-bind-to-port-80-and-443#892391), I gave the binary capabilities to open low ports. I also gave the capabilities to `gdb` itself. When I run the binary, the port is opened successfully, but when I run it under GDB I get an error with `errno = 13`. **IMPORTANT**: Running the application with `sudo` is exactly the thing that I want to avoid.
2018/09/12
[ "https://Stackoverflow.com/questions/52294299", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1132871/" ]
You haven't included signal.h. You're including some C++ standard library headers, and as a side effect on MacOS, these happen to include signal.h. However, that isn't specified to happen so you can't rely on it working in different implementations of those headers. Try adding: ``` #include <signal.h> ``` at the top.
On Linux the header file to include is ``` #include <signal.h> ``` On macOS the equivalent C++ header to include is ``` #include <csignal> ``` Header file names can vary between operating systems, but they should both do the same thing.
29,899,794
I am following the instructions here: <https://spark.apache.org/docs/latest/quick-start.html> to create a simple application that will run on a local standalone Spark build. In my system I have Scala 2.9.2 and sbt 0.13.7. When I write in my `simple.sbt` the following: `scalaVersion := "2.9.2"` after I use `sbt package`, I get the error: `sbt.ResolveException: unresolved dependency: org.apache.spark#spark-core_2.9.2;1.3.1: not found` However, when I write in `simple.sbt`: `scalaVersion := "2.10.4"` sbt runs successfully and the application runs fine on Spark. How can this happen, since I do not have scala 2.10.4 on my system?
2015/04/27
[ "https://Stackoverflow.com/questions/29899794", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2711645/" ]
Scala is not a package, it is a library that executes on top of the Java runtime. Likewise, the Scala compiler `scalac` runs on top of a Java runtime. The fact that you have a version of Scala installed in your "system" is a convenience, but is not in any way required. Therefore, it is entirely possible to launch `sbt` from one version of Scala (2.9.2) but instruct it to run other commands (compilation) using an entirely different version of Scala (2.10.x) by passing the appropriate flags such as `-classpath`. See: [Can java run a compiled scala code?](https://stackoverflow.com/questions/3626002/can-java-run-a-compiled-scala-code)
As @noahlz said, you don't need Scala on your system as sbt will fetch it for you. The issue you're having is that there is no `spark-core` version `1.3.1` for Scala 2.9.2. From what I can see in Maven Central (searching for [spark-core](http://search.maven.org/#search|ga|1|spark-core)) there are only builds of `spark-core` for Scala 2.10 and 2.11. Therefore I would recommend you use this setup: ``` scalaVersion := "2.11.6" libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.1" ``` If for whatever reason that doesn't work for you, use Scala 2.10.5: ``` scalaVersion := "2.10.5" libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.1" ```
8,518,999
I have an .aspx page that inherits from a master page, and I am dynamically adding a web user control inside a placeholder on the .aspx page. I want to access a label that is inside the master page from my web user control. Thanks
2011/12/15
[ "https://Stackoverflow.com/questions/8518999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1034944/" ]
There's a pretty nice [article](https://web.archive.org/web/20211020103719/https://www.4guysfromrolla.com/articles/062409-1.aspx#postadlink) on this subject by 4guysfromrolla with tons of useful links like this [one](http://odetocode.com/code/450.aspx).
``` ((Label)this.Page.Master.FindControl("YourLabelNameHere")).Text = "Hello"; ```
207,557
We're using InnoDB tables as the back end of a web application, and everything was fine for about two years until we had to restart MySQL a few weeks ago. (We hadn't disabled reverse DNS lookups, even though we weren't really using them, but our hosting system suddenly stopped responding to those requests. They're now disabled.) Unfortunately, the configuration file had changed, and we don't have a copy of its original state for comparison. After fixing the most significant problems, we're left with a real puzzler: Under high load, the database queries start taking much longer than usual. During such times, we have several hundred open connections from our seven apache servers. Running SHOW PROCESSLIST reveals that half or more of those connections are in the "Sending data" state, frequently with times of a few hundred seconds. Almost all of their queries are SELECT, with similar queries tending to clump together. In fact, the lowest clump in the list has tended to be the exact same query (I would expect it to be in the query cache), returning 1104 rows of two integers each. Other frequent offenders are lists of a few hundred single-integer rows, several single-integer rows, or even a single COUNT(\*) result. We tried shutting down the web servers during one of these periods, but the problem returned within a minute of restarting them. However, completely restarting mysqld resolved the problem until the next day. What could the problem be, and how can we verify and/or fix it?
2010/11/30
[ "https://serverfault.com/questions/207557", "https://serverfault.com", "https://serverfault.com/users/28981/" ]
Well, do note that if I recall correctly (it's been a while since I did DB work) COUNT(\*) queries without a WHERE clause on innodb tables are notoriously slower than on MyISAM and Memory tables. Also, is this by any chance a Xen DomU? What is the frontend language? If PHP, is it using MySQL or MySQLi? Are they using persistent connections? You have not mentioned the underlying operating system, but in the case of Linux I would start by staring at the output of `free -m`, paying special attention to the last two lines to see if memory is tight overall. ``` [0:504] callisto:cyanotype $ free -m total used free shared buffers cached Mem: 3961 3816 144 0 184 1454 -/+ buffers/cache: 2177 1784 Swap: 2898 0 2898 ``` Here we have a system that's healthy (it is my workstation). The second column excludes buffers and cache, so I am in fact using 2177mb of memory, and have 1784 megabytes readily available. The last line shows that I don't use swap at all so far. Then running `vmstat(8)` to see if your system is thrashing like mad would be useful, too. ``` [0:505] callisto:cyanotype $ vmstat 5 10 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 134116 189828 1499948 0 0 11 3 44 49 1 1 98 0 0 0 0 143112 189836 1489688 0 0 0 6 526 2177 1 1 98 0 0 0 0 139268 190504 1491864 0 0 512 4 663 4704 2 1 96 1 2 0 0 136688 191084 1493484 0 0 473 5 641 3039 1 1 97 1 0 0 0 52636 191712 1518620 0 0 5066 4 1321 6600 8 2 86 4 5 0 0 72992 193264 1377324 0 0 10742 31 1602 7441 12 3 80 5 2 1 0 84036 193896 1202012 0 0 10126 43 2621 4305 31 2 57 10 3 0 0 42456 195812 1060904 0 0 3970 75 55327 9806 43 5 41 10 8 1 0 34620 197040 942940 0 0 3554 64 50892 12531 43 6 44 6 ^C [0:506] callisto:cyanotype $ ``` (My desktop really isn't doing all that much here, sorry. What a waste of 8 perfectly good cores) If you see a lot of processes spending time in the 'b' column, that means they are blocked, waiting for something.
Often that is IO. The important columns here are `si` and `so`. Check if they're populated with high values. If so, this may be your problem -- something is consuming a lot of memory, more than you can actually afford. Using `top(1)` and ordering the columns by memory % (shift+m while in top) might show the culprit(s). It's not impossible that your system is thrashing to and from swap, and saturating the disks, causing blocked threads and processes. The tool `iostat(8)` (part of the `sysstat` package, usually) should be given a whirl to see if you have processes that are blocked, stuck on IO\_WAIT. A saturated disk can spell bad news for the whole system under high load, especially if the system is swapping a lot. You might run iostat with extended stats, every five seconds, for instance: ``` [0:508] callisto:cyanotype $ iostat -x 5 Linux 2.6.35-23-generic (callisto) 2010-11-30 _x86_64_ (8 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 16,55 0,12 2,70 2,60 0,00 78,02 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sdc 0,00 2,00 1,00 0,80 27,20 22,40 27,56 0,01 3,33 3,33 0,60 sdd 0,00 12,60 67,60 4,80 4222,40 139,20 60,24 0,62 8,62 3,29 23,80 sde 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 sdf 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 avg-cpu: %user %nice %system %iowait %steal %idle 32,02 0,10 1,83 0,44 0,00 65,61 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sdc 0,60 3,20 11,00 0,80 265,60 32,00 25,22 0,05 3,90 2,88 3,40 sdd 0,00 8,20 0,00 3,00 0,00 89,60 29,87 0,02 8,00 7,33 2,20 sde 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 sdf 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 avg-cpu: %user %nice %system %iowait %steal %idle 49,26 0,22 3,12 0,12 0,00 47,28 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sdc 6,20 3,00 7,40 3,80 208,00 54,40 23,43 0,09 7,86 2,50 2,80 sdd 0,00 15,20 0,20 4,00 1,60 152,00 36,57 0,03 6,67 6,19
2,60 sde 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 sdf 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 avg-cpu: %user %nice %system %iowait %steal %idle 16,00 0,54 1,05 1,07 0,00 81,35 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sdc 4,20 0,00 31,40 0,00 3204,80 0,00 102,06 0,17 4,90 2,68 8,40 sdd 0,00 28,20 0,20 2,60 1,60 246,40 88,57 0,02 7,14 7,14 2,00 sde 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 sdf 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 ^C ``` This should allow you to easily see if your volumes are being saturated. For instance here, you can see that my disks are terribly underutilized, that the system spends most of its cpu cycles idling, etc. If that percentage is mostly in the % IOWAIT column, well, you have an IO bottleneck here. You probably already know all this, but just covering all the bases to make sure. The idea is that your config file changed, and you have no history of it (putting your config files under version control is a great idea for that very reason) -- and it is not impossible the size of a buffer suddenly changed, thus making expensive queries like COUNT(\*) without a WHERE clause suddenly start to gobble up resources. Based on what you have learned from the previous use of the tools above -- you should probably inspect the configuration file (being the only thing that changed, it is very likely the culprit) to see if the buffer values are sane for your average load. How large are the buffers, like the `query_cache_size` value, and especially the `sort_buffer` sizes? (If that doesn't fit in memory, it will be performed on disk, at a massive cost as I'm sure you can imagine). How large is the `innodb_buffer_pool_size`? How large is the `table_cache` and, most importantly, does that value fit within the system limits for file handles? (both open-files-limit in [mysqld] and at the OS level).
Also, I don't remember off the top of my head if this is still true, but I'm fairly certain that innodb actually locks the entire table whenever it has to commit an auto-increment field. I googled and I could not find whether that was still true or not. You could also use `innotop(1)` to see what's going on in more detail, too. I hope this helps somehow or gives you a starting point :)
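One quick sanity check on `innodb_buffer_pool_size` is the buffer-pool hit ratio, a common heuristic derived from two counters that `SHOW GLOBAL STATUS` reports; a sketch with made-up numbers:

```python
# InnoDB buffer-pool hit ratio: the fraction of logical reads served from
# memory rather than disk. The counter values below are made up for
# illustration -- substitute your own SHOW GLOBAL STATUS output.
status = {
    "Innodb_buffer_pool_read_requests": 1000000,  # logical read requests
    "Innodb_buffer_pool_reads": 12000,            # reads that missed the pool
}
hit_ratio = 1 - (status["Innodb_buffer_pool_reads"]
                 / status["Innodb_buffer_pool_read_requests"])
print(f"buffer pool hit ratio: {hit_ratio:.2%}")
```

A ratio well below ~99% on a read-heavy workload often suggests the pool is undersized for the working set.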
There are likely many reasons for this. In our particular case, it was happening when the number of queries spiked during a short time frame, causing thrashing in the CPU as the number of threads exceeded 4x our core count on the server. Our issue is that the query spikes are actually normal for our application, and the POSIX implementation worked acceptably "most" of the time, but would intermittently grind to a halt. After a lot of investigation, we stumbled across an Oracle mySQL enterprise plugin called thread-pool which offers an alternative implementation for handling threads. Even better - Percona server has implemented this natively (no plugin needed) and the change was a one-liner to test in our cnf file. The results have been dramatically better heavy-load performance. While this is not likely to be the issue with many implementations, it's my hope that it may be for some, and this easy change is worth testing. [Percona Thread-pool mySQL5.7](https://www.percona.com/doc/percona-server/5.7/performance/threadpool.html) and here is another use case example: [Percona 100k connections](https://www.percona.com/blog/2019/02/25/mysql-challenge-100k-connections/)
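For reference, the "one-liner" mentioned above is presumably the thread-pool switch documented by Percona; a hypothetical `my.cnf` fragment (verify the variable name against your server version's docs):

```ini
[mysqld]
# Switch from one-thread-per-connection to the built-in thread pool
thread_handling = pool-of-threads
```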
207,557
We're using InnoDB tables as the back end of a web application, and everything was fine for about two years until we had to restart MySQL a few weeks ago. (We hadn't disabled reverse DNS lookups, even though we weren't really using them, but our hosting system suddenly stopped responding to those requests. They're now disabled.) Unfortunately, the configuration file had changed, and we don't have a copy of its original state for comparison. After fixing the most significant problems, we're left with a real puzzler: Under high load, the database queries start taking much longer than usual. During such times, we have several hundred open connections from our seven apache servers. Running SHOW PROCESSLIST reveals that half or more of those connections are in the "Sending data" state, frequently with times of a few hundred seconds. Almost all of their queries are SELECT, with similar queries tending to clump together. In fact, the lowest clump in the list has tended to be the exact same query (I would expect it to be in the query cache), returning 1104 rows of two integers each. Other frequent offenders are lists of a few hundred single-integer rows, several single-integer rows, or even a single COUNT(\*) result. We tried shutting down the web servers during one of these periods, but the problem returned within a minute of restarting them. However, completely restarting mysqld resolved the problem until the next day. What could the problem be, and how can we verify and/or fix it?
2010/11/30
[ "https://serverfault.com/questions/207557", "https://serverfault.com", "https://serverfault.com/users/28981/" ]
This turned out to be a flaw in the combination of [`innodb_file_per_table`](http://dev.mysql.com/doc/refman/5.0/en/innodb-parameters.html#sysvar_innodb_file_per_table), `default-storage-engine = innodb`, and a frequently-accessed page that created a temporary table. Each time a connection closed, it would drop the table, [discarding pages from the buffer pool LRU](http://www.mysqlperformanceblog.com/2011/02/03/performance-problem-with-innodb-and-drop-table/). This would cause the server to stall for a bit, but never on the query that was actually causing the problem. Worse, the `innodb_file_per_table` setting had been languishing in our `my.cnf` file for months before the server had to be restarted for an entirely unrelated reason, during which time we had been using those temporary tables without a problem. (The NOC suddenly took down the DNS server, causing every new connection to hang because we hadn't enabled `skip-name-resolve`, and wouldn't admit for hours that anything had changed.) Fortunately, we were able to rewrite the offending page to use an even faster set of queries that loaded most of the work onto the front-end web servers, and haven't seen a problem since.
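For reference, the combination of settings described above looks roughly like this in `my.cnf`. This is a sketch of the problematic configuration being described, not a recommendation; only the settings actually named in this answer are shown:

```ini
[mysqld]
default-storage-engine = innodb   # temp tables created by the web page land in InnoDB
innodb_file_per_table             # each (temporary) table gets its own tablespace, so
                                  # dropping it discards pages from the buffer pool LRU
skip-name-resolve                 # avoid reverse-DNS lookups on every new connection
```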
There are likely many reasons for this. In our particular case, it happened when the number of queries spiked over a short time frame, causing CPU thrashing as the number of threads exceeded 4x the core count on the server. For our application, these query spikes are actually normal, and the POSIX implementation worked acceptably "most" of the time but would intermittently grind to a halt. After a lot of investigation, we stumbled across an Oracle MySQL enterprise plugin called thread-pool, which offers an alternative implementation for handling threads. Even better, Percona Server has implemented this natively (no plugin needed), and the change was a one-liner to test in our cnf file. The result has been dramatically better performance under heavy load. While this is unlikely to be the issue for many installations, it's my hope that it may be for some, and this easy change is worth testing. [Percona Thread-pool mySQL5.7](https://www.percona.com/doc/percona-server/5.7/performance/threadpool.html) and here is another use-case example: [Percona 100k connections](https://www.percona.com/blog/2019/02/25/mysql-challenge-100k-connections/)
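For anyone wanting to try this, the "one-liner" referred to above is the following `my.cnf` setting (assuming Percona Server, where the thread pool is built in):

```ini
[mysqld]
thread_handling = pool-of-threads   # the default is one-thread-per-connection
```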
556,386
I have a "Recovery" partition which I mistakenly thought was redundant after reinstalling everything to C:. "Recovery" was previously the "Active" partition. I set C: as the "Active" directory in disk manager (I am using Windows 7). When attempting to boot, the laptop now returns "BOOTMGR is missing". I can go to BIOS and mess around with some stuff, but haven't found a way to change the active partition. I can disable various SATA drives (four are listed) and doing that sequentially changes the error message on booting, but no combination lets it boot. I am travelling and don't have a USB key or bootable CD with me. I do have an external HD, but this other computer that I'm on right now (which is unusably slow) doesn't recognise it. I think that the easiest solution will be to get hold of a USB key, make it bootable, and sort out the active partition from DOS. Any glaring shortcuts, alternative solutions or likely obstacles I'm missing? Edit: I now have a USB key, can boot to DOS and run fdisk, which I expected to enable the active partition to be set. Unfortunately fdisk will not set NTFS partitions as active, and I haven't found any alternatives that run from DOS and will set NTFS partitions as active. At this stage it looks as though I will need to get Windows CDs as Olivier mentioned below.
2013/02/23
[ "https://superuser.com/questions/556386", "https://superuser.com", "https://superuser.com/users/201796/" ]
In the end I solved this by creating a bootable USB with the Windows ISO on it, and ran the "Repair" tool twice. I essentially followed the instructions at: <http://en.community.dell.com/support-forums/software-os/w/microsoft_os/3316.2-1-microsoft-windows-7-official-iso-download-links-digital-river.aspx> * In my case, I was using Windows 7 Professional (x86), so I downloaded that ISO. * Then I used the "[Microsoft USB/DVD download tool](https://www.microsoft.com/en-us/download/windows-usb-dvd-download-tool)" to create the bootable USB. Many thanks for the help.
If you only have Windows, you have to boot from a CD. There is no other option; I have searched a lot for alternatives, but they don't exist. The only other way to solve problems on Windows is to press F8 while booting and open the recovery console, but that won't work in your case.
556,386
I have a "Recovery" partition which I mistakenly thought was redundant after reinstalling everything to C:. "Recovery" was previously the "Active" partition. I set C: as the "Active" directory in disk manager (I am using Windows 7). When attempting to boot, the laptop now returns "BOOTMGR is missing". I can go to BIOS and mess around with some stuff, but haven't found a way to change the active partition. I can disable various SATA drives (four are listed) and doing that sequentially changes the error message on booting, but no combination lets it boot. I am travelling and don't have a USB key or bootable CD with me. I do have an external HD, but this other computer that I'm on right now (which is unusably slow) doesn't recognise it. I think that the easiest solution will be to get hold of a USB key, make it bootable, and sort out the active partition from DOS. Any glaring shortcuts, alternative solutions or likely obstacles I'm missing? Edit: I now have a USB key, can boot to DOS and run fdisk, which I expected to enable the active partition to be set. Unfortunately fdisk will not set NTFS partitions as active, and I haven't found any alternatives that run from DOS and will set NTFS partitions as active. At this stage it looks as though I will need to get Windows CDs as Olivier mentioned below.
2013/02/23
[ "https://superuser.com/questions/556386", "https://superuser.com", "https://superuser.com/users/201796/" ]
If you only have Windows, you have to boot from a CD. There is no other option; I have searched a lot for alternatives, but they don't exist. The only other way to solve problems on Windows is to press F8 while booting and open the recovery console, but that won't work in your case.
A factory reset will also fix this problem, but be sure to back up your files when prompted to.
556,386
I have a "Recovery" partition which I mistakenly thought was redundant after reinstalling everything to C:. "Recovery" was previously the "Active" partition. I set C: as the "Active" directory in disk manager (I am using Windows 7). When attempting to boot, the laptop now returns "BOOTMGR is missing". I can go to BIOS and mess around with some stuff, but haven't found a way to change the active partition. I can disable various SATA drives (four are listed) and doing that sequentially changes the error message on booting, but no combination lets it boot. I am travelling and don't have a USB key or bootable CD with me. I do have an external HD, but this other computer that I'm on right now (which is unusably slow) doesn't recognise it. I think that the easiest solution will be to get hold of a USB key, make it bootable, and sort out the active partition from DOS. Any glaring shortcuts, alternative solutions or likely obstacles I'm missing? Edit: I now have a USB key, can boot to DOS and run fdisk, which I expected to enable the active partition to be set. Unfortunately fdisk will not set NTFS partitions as active, and I haven't found any alternatives that run from DOS and will set NTFS partitions as active. At this stage it looks as though I will need to get Windows CDs as Olivier mentioned below.
2013/02/23
[ "https://superuser.com/questions/556386", "https://superuser.com", "https://superuser.com/users/201796/" ]
If you only have Windows, you have to boot from a CD. There is no other option; I have searched a lot for alternatives, but they don't exist. The only other way to solve problems on Windows is to press F8 while booting and open the recovery console, but that won't work in your case.
Solution: The best and easiest option is to remove your hard drive and connect it to an external HDD enclosure, then connect the external HDD to another computer. * Right-click My Computer and click Manage * Select Disk Management * Right-click the recovery partition * Select "Mark Partition as Active" Eject the external HDD from that computer, remove the drive from the enclosure, and refit it in the computer that was missing the boot manager. The computer should start without any issues.
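If you can get to any Windows command prompt (for example from a recovery environment), the same "mark active" step can be done with `diskpart` instead of the Disk Management GUI. The disk and partition numbers below are examples only; pick the correct ones from the `list` output:

```shell
diskpart
DISKPART> list disk
DISKPART> select disk 0
DISKPART> list partition
DISKPART> select partition 1
DISKPART> active
DISKPART> exit
```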
556,386
I have a "Recovery" partition which I mistakenly thought was redundant after reinstalling everything to C:. "Recovery" was previously the "Active" partition. I set C: as the "Active" directory in disk manager (I am using Windows 7). When attempting to boot, the laptop now returns "BOOTMGR is missing". I can go to BIOS and mess around with some stuff, but haven't found a way to change the active partition. I can disable various SATA drives (four are listed) and doing that sequentially changes the error message on booting, but no combination lets it boot. I am travelling and don't have a USB key or bootable CD with me. I do have an external HD, but this other computer that I'm on right now (which is unusably slow) doesn't recognise it. I think that the easiest solution will be to get hold of a USB key, make it bootable, and sort out the active partition from DOS. Any glaring shortcuts, alternative solutions or likely obstacles I'm missing? Edit: I now have a USB key, can boot to DOS and run fdisk, which I expected to enable the active partition to be set. Unfortunately fdisk will not set NTFS partitions as active, and I haven't found any alternatives that run from DOS and will set NTFS partitions as active. At this stage it looks as though I will need to get Windows CDs as Olivier mentioned below.
2013/02/23
[ "https://superuser.com/questions/556386", "https://superuser.com", "https://superuser.com/users/201796/" ]
In the end I solved this by creating a bootable USB with the Windows ISO on it, and ran the "Repair" tool twice. I essentially followed the instructions at: <http://en.community.dell.com/support-forums/software-os/w/microsoft_os/3316.2-1-microsoft-windows-7-official-iso-download-links-digital-river.aspx> * In my case, I was using Windows 7 Professional (x86), so I downloaded that ISO. * Then I used the "[Microsoft USB/DVD download tool](https://www.microsoft.com/en-us/download/windows-usb-dvd-download-tool)" to create the bootable USB. Many thanks for the help.
The active partition is the partition which the system will use to boot from. In your case, you've changed it so that the required bootmgr could not be loaded. You need to set the partition named 'Reserved by system' to active. This partition usually has no letter assigned to it. You can set this with a bootable live USB running a partitioning program. Below I've placed a link which describes how to make such a live USB running easeUS: <http://www.partition-tool.com/resource/manage-partition/usb-partition-manager.htm> After you do this, everything should work again.
556,386
I have a "Recovery" partition which I mistakenly thought was redundant after reinstalling everything to C:. "Recovery" was previously the "Active" partition. I set C: as the "Active" directory in disk manager (I am using Windows 7). When attempting to boot, the laptop now returns "BOOTMGR is missing". I can go to BIOS and mess around with some stuff, but haven't found a way to change the active partition. I can disable various SATA drives (four are listed) and doing that sequentially changes the error message on booting, but no combination lets it boot. I am travelling and don't have a USB key or bootable CD with me. I do have an external HD, but this other computer that I'm on right now (which is unusably slow) doesn't recognise it. I think that the easiest solution will be to get hold of a USB key, make it bootable, and sort out the active partition from DOS. Any glaring shortcuts, alternative solutions or likely obstacles I'm missing? Edit: I now have a USB key, can boot to DOS and run fdisk, which I expected to enable the active partition to be set. Unfortunately fdisk will not set NTFS partitions as active, and I haven't found any alternatives that run from DOS and will set NTFS partitions as active. At this stage it looks as though I will need to get Windows CDs as Olivier mentioned below.
2013/02/23
[ "https://superuser.com/questions/556386", "https://superuser.com", "https://superuser.com/users/201796/" ]
The active partition is the partition which the system will use to boot from. In your case, you've changed it so that the required bootmgr could not be loaded. You need to set the partition named 'Reserved by system' to active. This partition usually has no letter assigned to it. You can set this with a bootable live USB running a partitioning program. Below I've placed a link which describes how to make such a live USB running easeUS: <http://www.partition-tool.com/resource/manage-partition/usb-partition-manager.htm> After you do this, everything should work again.
A factory reset will also fix this problem, but be sure to back up your files when prompted to.
556,386
I have a "Recovery" partition which I mistakenly thought was redundant after reinstalling everything to C:. "Recovery" was previously the "Active" partition. I set C: as the "Active" directory in disk manager (I am using Windows 7). When attempting to boot, the laptop now returns "BOOTMGR is missing". I can go to BIOS and mess around with some stuff, but haven't found a way to change the active partition. I can disable various SATA drives (four are listed) and doing that sequentially changes the error message on booting, but no combination lets it boot. I am travelling and don't have a USB key or bootable CD with me. I do have an external HD, but this other computer that I'm on right now (which is unusably slow) doesn't recognise it. I think that the easiest solution will be to get hold of a USB key, make it bootable, and sort out the active partition from DOS. Any glaring shortcuts, alternative solutions or likely obstacles I'm missing? Edit: I now have a USB key, can boot to DOS and run fdisk, which I expected to enable the active partition to be set. Unfortunately fdisk will not set NTFS partitions as active, and I haven't found any alternatives that run from DOS and will set NTFS partitions as active. At this stage it looks as though I will need to get Windows CDs as Olivier mentioned below.
2013/02/23
[ "https://superuser.com/questions/556386", "https://superuser.com", "https://superuser.com/users/201796/" ]
The active partition is the partition which the system will use to boot from. In your case, you've changed it so that the required bootmgr could not be loaded. You need to set the partition named 'Reserved by system' to active. This partition usually has no letter assigned to it. You can set this with a bootable live USB running a partitioning program. Below I've placed a link which describes how to make such a live USB running easeUS: <http://www.partition-tool.com/resource/manage-partition/usb-partition-manager.htm> After you do this, everything should work again.
Solution: The best and easiest option is to remove your hard drive and connect it to an external HDD enclosure, then connect the external HDD to another computer. * Right-click My Computer and click Manage * Select Disk Management * Right-click the recovery partition * Select "Mark Partition as Active" Eject the external HDD from that computer, remove the drive from the enclosure, and refit it in the computer that was missing the boot manager. The computer should start without any issues.
556,386
I have a "Recovery" partition which I mistakenly thought was redundant after reinstalling everything to C:. "Recovery" was previously the "Active" partition. I set C: as the "Active" directory in disk manager (I am using Windows 7). When attempting to boot, the laptop now returns "BOOTMGR is missing". I can go to BIOS and mess around with some stuff, but haven't found a way to change the active partition. I can disable various SATA drives (four are listed) and doing that sequentially changes the error message on booting, but no combination lets it boot. I am travelling and don't have a USB key or bootable CD with me. I do have an external HD, but this other computer that I'm on right now (which is unusably slow) doesn't recognise it. I think that the easiest solution will be to get hold of a USB key, make it bootable, and sort out the active partition from DOS. Any glaring shortcuts, alternative solutions or likely obstacles I'm missing? Edit: I now have a USB key, can boot to DOS and run fdisk, which I expected to enable the active partition to be set. Unfortunately fdisk will not set NTFS partitions as active, and I haven't found any alternatives that run from DOS and will set NTFS partitions as active. At this stage it looks as though I will need to get Windows CDs as Olivier mentioned below.
2013/02/23
[ "https://superuser.com/questions/556386", "https://superuser.com", "https://superuser.com/users/201796/" ]
In the end I solved this by creating a bootable USB with the Windows ISO on it, and ran the "Repair" tool twice. I essentially followed the instructions at: <http://en.community.dell.com/support-forums/software-os/w/microsoft_os/3316.2-1-microsoft-windows-7-official-iso-download-links-digital-river.aspx> * In my case, I was using Windows 7 Professional (x86), so I downloaded that ISO. * Then I used the "[Microsoft USB/DVD download tool](https://www.microsoft.com/en-us/download/windows-usb-dvd-download-tool)" to create the bootable USB. Many thanks for the help.
A factory reset will also fix this problem, but be sure to back up your files when prompted to.
556,386
I have a "Recovery" partition which I mistakenly thought was redundant after reinstalling everything to C:. "Recovery" was previously the "Active" partition. I set C: as the "Active" directory in disk manager (I am using Windows 7). When attempting to boot, the laptop now returns "BOOTMGR is missing". I can go to BIOS and mess around with some stuff, but haven't found a way to change the active partition. I can disable various SATA drives (four are listed) and doing that sequentially changes the error message on booting, but no combination lets it boot. I am travelling and don't have a USB key or bootable CD with me. I do have an external HD, but this other computer that I'm on right now (which is unusably slow) doesn't recognise it. I think that the easiest solution will be to get hold of a USB key, make it bootable, and sort out the active partition from DOS. Any glaring shortcuts, alternative solutions or likely obstacles I'm missing? Edit: I now have a USB key, can boot to DOS and run fdisk, which I expected to enable the active partition to be set. Unfortunately fdisk will not set NTFS partitions as active, and I haven't found any alternatives that run from DOS and will set NTFS partitions as active. At this stage it looks as though I will need to get Windows CDs as Olivier mentioned below.
2013/02/23
[ "https://superuser.com/questions/556386", "https://superuser.com", "https://superuser.com/users/201796/" ]
In the end I solved this by creating a bootable USB with the Windows ISO on it, and ran the "Repair" tool twice. I essentially followed the instructions at: <http://en.community.dell.com/support-forums/software-os/w/microsoft_os/3316.2-1-microsoft-windows-7-official-iso-download-links-digital-river.aspx> * In my case, I was using Windows 7 Professional (x86), so I downloaded that ISO. * Then I used the "[Microsoft USB/DVD download tool](https://www.microsoft.com/en-us/download/windows-usb-dvd-download-tool)" to create the bootable USB. Many thanks for the help.
Solution: The best and easiest option is to remove your hard drive and connect it to an external HDD enclosure, then connect the external HDD to another computer. * Right-click My Computer and click Manage * Select Disk Management * Right-click the recovery partition * Select "Mark Partition as Active" Eject the external HDD from that computer, remove the drive from the enclosure, and refit it in the computer that was missing the boot manager. The computer should start without any issues.
556,386
I have a "Recovery" partition which I mistakenly thought was redundant after reinstalling everything to C:. "Recovery" was previously the "Active" partition. I set C: as the "Active" directory in disk manager (I am using Windows 7). When attempting to boot, the laptop now returns "BOOTMGR is missing". I can go to BIOS and mess around with some stuff, but haven't found a way to change the active partition. I can disable various SATA drives (four are listed) and doing that sequentially changes the error message on booting, but no combination lets it boot. I am travelling and don't have a USB key or bootable CD with me. I do have an external HD, but this other computer that I'm on right now (which is unusably slow) doesn't recognise it. I think that the easiest solution will be to get hold of a USB key, make it bootable, and sort out the active partition from DOS. Any glaring shortcuts, alternative solutions or likely obstacles I'm missing? Edit: I now have a USB key, can boot to DOS and run fdisk, which I expected to enable the active partition to be set. Unfortunately fdisk will not set NTFS partitions as active, and I haven't found any alternatives that run from DOS and will set NTFS partitions as active. At this stage it looks as though I will need to get Windows CDs as Olivier mentioned below.
2013/02/23
[ "https://superuser.com/questions/556386", "https://superuser.com", "https://superuser.com/users/201796/" ]
Solution: The best and easiest option is to remove your hard drive and connect it to an external HDD enclosure, then connect the external HDD to another computer. * Right-click My Computer and click Manage * Select Disk Management * Right-click the recovery partition * Select "Mark Partition as Active" Eject the external HDD from that computer, remove the drive from the enclosure, and refit it in the computer that was missing the boot manager. The computer should start without any issues.
A factory reset will also fix this problem, but be sure to back up your files when prompted to.
27,633,006
I am trying to search for a particular string in every line of a log file, and if that matches, I need to be able to get the Host information from that particular error. Consider the log entries below: ``` 05-05-2014 00:02:02,771 [HttpProxyServer-thread-1314] ERROR fd - Empty user name specified in NTLM authentication. Prompting for auth again. Host=tools.google.com, Port=80, Client ip=/10.253.168.128, port=37271, User-Agent: Google Update/1.3.23.9;winhttp;cup-ecdsa 05-05-2014 00:02:02,771 [HttpProxyServer-thread-2156] ERROR fd - Empty user name specified in NTLM authentication. Prompting for auth again. Host=tools.google.com, Port=80, Client ip=/10.253.168.148, port=37273, User-Agent: Google Update/1.3.23.9;winhttp;cup-ecdsa 05-05-2014 00:02:02,802 [HttpProxyServer-thread-604] ERROR fd - Empty user name specified in NTLM authentication. Prompting for auth again. Host=tools.google.com, Port=80, Client ip=/10.253.168.92, port=37280, User-Agent: Google Update/1.3.23.9;winhttp;cup ``` This is my code: ``` for line in log_file: if bool(re.search( r'Empty user name specified in NTLM authentication. Prompting for auth again.', line)): host = re.search(r'Host=(\D+.\D+.\D+,)', line).group(1) ``` The problem is that the Host information is not on the same line as the error; it is on the next line. How do I get re.search(r'Host=(\D+.\D+.\D+,)', line).group(1) to search the line after the one that "line" currently holds?
2014/12/24
[ "https://Stackoverflow.com/questions/27633006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4383881/" ]
Just insert a ``` line = next(log_file) ``` between the two statements you currently have in the `for` loop.
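Putting that together with the code from the question, a minimal sketch looks like this. An in-memory sample stands in for the real log file, the `Host=` regex is simplified to "everything up to the first comma", and it is assumed that a `Host=` line always follows the error line:

```python
import io
import re

# In-memory stand-in for the real log file (same shape as the question's sample)
log_file = io.StringIO(
    "05-05-2014 00:02:02,771 [HttpProxyServer-thread-1314] ERROR fd - "
    "Empty user name specified in NTLM authentication. Prompting for auth again.\n"
    "Host=tools.google.com, Port=80, Client ip=/10.253.168.128, port=37271\n"
)

hosts = []
for line in log_file:
    if 'Empty user name specified in NTLM authentication' in line:
        line = next(log_file)          # advance the same iterator to the Host= line
        m = re.search(r'Host=([^,]+),', line)
        if m:
            hosts.append(m.group(1))

print(hosts)  # ['tools.google.com']
```

Because `for line in log_file` and `next(log_file)` share one iterator, the `Host=` line is consumed here and will not be visited again by the outer loop.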
Either write a regex that matches two successive lines, so you can extract the Host info from each match and loop over the matches instead of reading the file line by line, or add a flag that gets set when a line matches the error; when that flag is set for a given line, extract the host info and reset the flag instead of testing for the error.
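The flag-based variant can be sketched like this (again with an in-memory sample in place of the real file, and the hostname regex simplified to "everything up to the first comma"):

```python
import io
import re

log_file = io.StringIO(
    "05-05-2014 00:02:02,771 [HttpProxyServer-thread-1314] ERROR fd - "
    "Empty user name specified in NTLM authentication. Prompting for auth again.\n"
    "Host=tools.google.com, Port=80, Client ip=/10.253.168.128, port=37271\n"
    "some unrelated line\n"
)

hosts = []
pending = False                 # True when the previous line contained the error
for line in log_file:
    if pending:
        m = re.search(r'Host=([^,]+),', line)
        if m:
            hosts.append(m.group(1))
        pending = False         # reset regardless, so only the next line is checked
    if 'Empty user name specified in NTLM authentication' in line:
        pending = True

print(hosts)  # ['tools.google.com']
```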
27,633,006
I am trying to search for a particular string in every line of a log file, and if that matches, I need to be able to get the Host information from that particular error. Consider the log entries below: ``` 05-05-2014 00:02:02,771 [HttpProxyServer-thread-1314] ERROR fd - Empty user name specified in NTLM authentication. Prompting for auth again. Host=tools.google.com, Port=80, Client ip=/10.253.168.128, port=37271, User-Agent: Google Update/1.3.23.9;winhttp;cup-ecdsa 05-05-2014 00:02:02,771 [HttpProxyServer-thread-2156] ERROR fd - Empty user name specified in NTLM authentication. Prompting for auth again. Host=tools.google.com, Port=80, Client ip=/10.253.168.148, port=37273, User-Agent: Google Update/1.3.23.9;winhttp;cup-ecdsa 05-05-2014 00:02:02,802 [HttpProxyServer-thread-604] ERROR fd - Empty user name specified in NTLM authentication. Prompting for auth again. Host=tools.google.com, Port=80, Client ip=/10.253.168.92, port=37280, User-Agent: Google Update/1.3.23.9;winhttp;cup ``` This is my code: ``` for line in log_file: if bool(re.search( r'Empty user name specified in NTLM authentication. Prompting for auth again.', line)): host = re.search(r'Host=(\D+.\D+.\D+,)', line).group(1) ``` The problem is that the Host information is not on the same line as the error; it is on the next line. How do I get re.search(r'Host=(\D+.\D+.\D+,)', line).group(1) to search the line after the one that "line" currently holds?
2014/12/24
[ "https://Stackoverflow.com/questions/27633006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4383881/" ]
Just insert a ``` line = next(log_file) ``` between the two statements you currently have in the `for` loop.
Try this: ``` >>> import re >>> fp = open('log_file') >>> line = fp.readline() >>> while line: ... if 'Empty user name specified in NTLM authentication. Prompting for auth again.' in line: ... host = re.search(r'Host=(\D+.\D+.\D+,)', fp.readline()).group(1) ... # ^^^^^^^^^^^^^^ ... # this makes re search in the next line ... print host ... line = fp.readline() ... tools.google.com, tools.google.com, tools.google.com, ```
27,633,006
I am trying to search for a particular string in every line of a log file, and if that matches, I need to be able to get the Host information from that particular error. Consider the log entries below: ``` 05-05-2014 00:02:02,771 [HttpProxyServer-thread-1314] ERROR fd - Empty user name specified in NTLM authentication. Prompting for auth again. Host=tools.google.com, Port=80, Client ip=/10.253.168.128, port=37271, User-Agent: Google Update/1.3.23.9;winhttp;cup-ecdsa 05-05-2014 00:02:02,771 [HttpProxyServer-thread-2156] ERROR fd - Empty user name specified in NTLM authentication. Prompting for auth again. Host=tools.google.com, Port=80, Client ip=/10.253.168.148, port=37273, User-Agent: Google Update/1.3.23.9;winhttp;cup-ecdsa 05-05-2014 00:02:02,802 [HttpProxyServer-thread-604] ERROR fd - Empty user name specified in NTLM authentication. Prompting for auth again. Host=tools.google.com, Port=80, Client ip=/10.253.168.92, port=37280, User-Agent: Google Update/1.3.23.9;winhttp;cup ``` This is my code: ``` for line in log_file: if bool(re.search( r'Empty user name specified in NTLM authentication. Prompting for auth again.', line)): host = re.search(r'Host=(\D+.\D+.\D+,)', line).group(1) ``` The problem is that the Host information is not on the same line as the error; it is on the next line. How do I get re.search(r'Host=(\D+.\D+.\D+,)', line).group(1) to search the line after the one that "line" currently holds?
2014/12/24
[ "https://Stackoverflow.com/questions/27633006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4383881/" ]
Try this: ``` >>> import re >>> fp = open('log_file') >>> line = fp.readline() >>> while line: ... if 'Empty user name specified in NTLM authentication. Prompting for auth again.' in line: ... host = re.search(r'Host=(\D+.\D+.\D+,)', fp.readline()).group(1) ... # ^^^^^^^^^^^^^^ ... # this makes re search in the next line ... print host ... line = fp.readline() ... tools.google.com, tools.google.com, tools.google.com, ```
Either write a regex that matches two successive lines, so you can extract the Host info from each match and loop over the matches instead of reading the file line by line, or add a flag that gets set when a line matches the error; when that flag is set for a given line, extract the host info and reset the flag instead of testing for the error.
358,704
A textbook proposed that "when comparing fractions, if the compared fractions are such that the numerator is smaller than the denominator, then the fraction with the greater (absolute) difference between numerator and denominator is the smallest among the fractions compared," and I found many textbooks support this. Even in the video <http://www.youtube.com/watch?v=rJz-f7uCBns#t=6m35s> he uses this idea, though for a case where the numerator is greater than the denominator. But consider $2/3$ and $20/30$: as per the proposed rule, $2/3 > 20/30$, but actually they are the same. Taking a slightly more complex case, if $2/7 = 0.285714$, we can certainly find another ratio with a different difference but the same value: $3/0.285714 = 10.5$, so $3/10.5 = 0.285714$; here the difference is $7.5$ but the value of the ratio is still $0.285714$. So am I going wrong in understanding this concept preferred in many popular textbooks? If so, please spot the error and help me by giving the conditions under which this fact holds good.
2013/04/11
[ "https://math.stackexchange.com/questions/358704", "https://math.stackexchange.com", "https://math.stackexchange.com/users/30423/" ]
I have never seen this claim in any textbook; in any case it's **wrong**. The claim seems to be that if you have two fractions $\frac{a}{b}$ and $\frac{c}{d}$ with $a < b$ and $c < d$, then $|a - b| < |c - d| \iff \frac{a}{b} < \frac{c}{d}$. This is false. We can write $\frac{a}{b} = 1 - \frac{b-a}{b}$ and $\frac{c}{d} = 1 - \frac{d-c}{d}$, so you're comparing $\frac{b-a}{b}$ and $\frac{d-c}{d}$ (whichever is greater, the corresponding fraction is smaller). The first numerator may be smaller than the second, but the actual comparison of these fractions can of course go either way. For example, * Here is one with $b - a < d - c$ but $\frac{a}{b} > \frac{c}{d}$: consider $\frac{2}{3} > \frac{3}{5}$. * Here is one with $b - a < d - c$ but $\frac{a}{b} = \frac{c}{d}$: consider $\frac{2}{3} = \frac{4}{6}$. * Here is one with $b - a < d - c$ but $\frac{a}{b} < \frac{c}{d}$: consider $\frac{2}{3} < \frac{5}{7}$. So all results are possible; the test is nonsense. Edit: Just for fun/completeness, here is a table showing pairs $(\frac{a}{b}, \frac{c}{d})$ with each possible combination of the two comparisions: $$\begin{array}{c|c|c|c} & \frac{a}{b}<\frac{c}{d} & \frac{a}{b}=\frac{c}{d} & \frac{a}{b}>\frac{c}{d}\\ \hline \\ b-a<d-c & \frac23,\frac57 & \frac23,\frac46 & \frac23,\frac35\\ \hline \\ b-a=d-c & \frac23,\frac34 & \frac23,\frac23 & \frac23,\frac12\\ \hline \\ b-a>d-c & \frac57,\frac23 & \frac46,\frac23 & \frac35,\frac23 \\ \end{array}$$ (If you want examples involving fractions greater than $1$, turn each of the fractions upside down. Each of the inequalities between the fractions will reverse direction, so you'll still have a complete set of examples.) 
--- Edit: On looking at that segment of the video, it's possible (not very clear) that what he may have been saying is equivalent to the following claim, which *is* true: if you have two fractions $\frac{a}{b}$ and $\frac{c}{d}$ with $a > b$ and $c > d$, and if $a - b < c - d$ **and** $b > d$, then $\frac{a}{b} < \frac{c}{d}$. Proof: $$\frac{a}{b} = 1 + \frac{a-b}{b} < 1 + \frac{c-d}{b} < 1 + \frac{c-d}{d} = \frac{c}{d}$$ With fractions less than $1$, the corresponding statement would be that if you have two fractions $\frac{a}{b}$ and $\frac{c}{d}$ with $a < b$ and $c < d$, and if $b - a < d - c$ **and** $b > d$, then $\frac{a}{b} > \frac{c}{d}$: $$\frac{a}{b} = 1 - \frac{b-a}{b} > 1 - \frac{b-a}{d} > 1 - \frac{d-c}{d} = \frac{c}{d}$$ But these are so many conditions on the hypothesis that I wonder how often it will be useful.
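For completeness, the three counterexamples in the table's first row can be verified mechanically. This sketch checks that every pair has the smaller numerator-denominator gap on the left ($b-a < d-c$) while the comparison still goes all three ways:

```python
from fractions import Fraction

# Each pair satisfies b - a < d - c, yet the comparison goes all three
# ways, so "bigger gap means smaller fraction" cannot be a valid rule.
pairs = [((2, 3), (3, 5)),   # expect 2/3 > 3/5
         ((2, 3), (4, 6)),   # expect 2/3 = 4/6
         ((2, 3), (5, 7))]   # expect 2/3 < 5/7

results = []
for (a, b), (c, d) in pairs:
    assert b - a < d - c                      # the left gap really is smaller
    x, y = Fraction(a, b), Fraction(c, d)     # exact rational comparison
    results.append('>' if x > y else '=' if x == y else '<')

print(results)  # ['>', '=', '<']
```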
The statement is incorrect, even without considering possible special cases. Take for example $$\frac{901}{1000}<\frac{9011}{10000}.$$
358,704
A textbook proposed that "when comparing fractions, if the compared fractions are such that the numerator is smaller than the denominator, then the fraction with the greater (absolute) difference between numerator and denominator is the smallest among the fractions compared", and I found that many textbooks support this. Even in the video <http://www.youtube.com/watch?v=rJz-f7uCBns#t=6m35s> the presenter uses this idea, but for a case where the numerator is greater than the denominator. But consider the case $2/3$ & $20/30$: as per the proposed theory, $2/3 > 20/30$, but actually they are the same. Taking a slightly more involved case, if $2/7 = 0.285714$, we can certainly find another pair with a different difference but the same value: $3/0.285714 = 10.5$, so $3/10.5 = 0.285714$; here the difference is $7.5$ but the value of the ratio is still $0.285714$. So am I going wrong in understanding this concept preferred in many popular textbooks? If so, please spot the error and help me by giving conditions under which this fact holds.
2013/04/11
[ "https://math.stackexchange.com/questions/358704", "https://math.stackexchange.com", "https://math.stackexchange.com/users/30423/" ]
The statement is incorrect, even without considering possible special cases. Take for example $$\frac{901}{1000}<\frac{9011}{10000}.$$
Here is the comparison criterion for fractions in the unit interval. You omit a $\rm\,\color{#c00}{key\ hypothesis}.$ **Lemma** $\rm\ \ \color{#c00}{ A>a},\,\ \color{blue}{b\!-\!a > B\!-\!A}\ \Rightarrow\ A/B > a/b,\ \ $ for $\rm\ \ A/B,\,\color{#0a0}{a/b \,\in\, (0,1)},\ \ B,b> 0\:$ **Proof** $\rm\ \ \ \ \displaystyle \frac{A}B-\frac{a}b\, =\, \frac{A(b\!-\!a)-a(B\!-\!A)}{bB} \,=\, \frac{(\color{#c00}{A\!-\!a})(\color{#0a0}{b\!-\!a}) + a(\color{blue}{b\!-\!a\!-\!(B\!-\!A))}}{bB}\, >\, 0$ **Remark** $\ $ It is a special case of the [continued fraction comparison algorithm.](https://math.stackexchange.com/a/14314/23500) $ $Indeed $$\rm\displaystyle \frac{A}B\, >\, \frac{a}b \iff \frac{b}a\, >\, \frac{B}A\ \stackrel{X\to\ X-1\!} {\iff}\ \frac{b-a}a\, >\, \frac{B-A}A \iff b\!-\!a\, >\, \frac{a}A\, (B-A)$$
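The lemma also lends itself to a brute-force sanity check over small numerators and denominators; a quick Python sketch (the range bounds are arbitrary):

```python
from fractions import Fraction

# Check the lemma: for fractions a/b and A/B in (0, 1) with A > a and
# b - a > B - A, we must have A/B > a/b.  Exhaustive over small values.
checked = 0
for b in range(2, 25):
    for a in range(1, b):              # a/b in (0, 1)
        for B in range(2, 25):
            for A in range(1, B):      # A/B in (0, 1)
                if A > a and (b - a) > (B - A):
                    assert Fraction(A, B) > Fraction(a, b)
                    checked += 1

print(checked, "cases verified")
```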
358,704
A textbook proposed that "when comparing fractions, if the compared fractions are such that the numerator is smaller than the denominator, then the fraction with the greater (absolute) difference between numerator and denominator is the smallest among the fractions compared", and I found that many textbooks support this. Even in the video <http://www.youtube.com/watch?v=rJz-f7uCBns#t=6m35s> the presenter uses this idea, but for a case where the numerator is greater than the denominator. But consider the case $2/3$ & $20/30$: as per the proposed theory, $2/3 > 20/30$, but actually they are the same. Taking a slightly more involved case, if $2/7 = 0.285714$, we can certainly find another pair with a different difference but the same value: $3/0.285714 = 10.5$, so $3/10.5 = 0.285714$; here the difference is $7.5$ but the value of the ratio is still $0.285714$. So am I going wrong in understanding this concept preferred in many popular textbooks? If so, please spot the error and help me by giving conditions under which this fact holds.
2013/04/11
[ "https://math.stackexchange.com/questions/358704", "https://math.stackexchange.com", "https://math.stackexchange.com/users/30423/" ]
The statement is incorrect, even without considering possible special cases. Take for example $$\frac{901}{1000}<\frac{9011}{10000}.$$
The test is as follows: 1. Given two Proper Fractions n/d & N/D where D>d, If (D-N) ≤ (d-n) then N/D>n/d 2. Given two Improper Fractions n/d and N/D where D>d, If (N-D) ≤ (n-d) then n/d>N/D (Note: the conditions are sufficient but not necessary).
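Rule 1 above can likewise be confirmed by brute force (rule 2 checks out the same way with the inequalities flipped); a small Python sketch:

```python
from fractions import Fraction

# Rule 1: for proper fractions n/d and N/D with D > d,
# (D - N) <= (d - n) implies N/D > n/d.
tested = 0
for d in range(2, 20):
    for n in range(1, d):                  # proper: n < d
        for D in range(d + 1, 20):         # D > d
            for N in range(1, D):          # proper: N < D
                if (D - N) <= (d - n):
                    assert Fraction(N, D) > Fraction(n, d)
                    tested += 1

print(tested, "cases confirm rule 1")
```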
358,704
A textbook proposed that "when comparing fractions, if the compared fractions are such that the numerator is smaller than the denominator, then the fraction with the greater (absolute) difference between numerator and denominator is the smallest among the fractions compared", and I found that many textbooks support this. Even in the video <http://www.youtube.com/watch?v=rJz-f7uCBns#t=6m35s> the presenter uses this idea, but for a case where the numerator is greater than the denominator. But consider the case $2/3$ & $20/30$: as per the proposed theory, $2/3 > 20/30$, but actually they are the same. Taking a slightly more involved case, if $2/7 = 0.285714$, we can certainly find another pair with a different difference but the same value: $3/0.285714 = 10.5$, so $3/10.5 = 0.285714$; here the difference is $7.5$ but the value of the ratio is still $0.285714$. So am I going wrong in understanding this concept preferred in many popular textbooks? If so, please spot the error and help me by giving conditions under which this fact holds.
2013/04/11
[ "https://math.stackexchange.com/questions/358704", "https://math.stackexchange.com", "https://math.stackexchange.com/users/30423/" ]
I have never seen this claim in any textbook; in any case it's **wrong**. The claim seems to be that if you have two fractions $\frac{a}{b}$ and $\frac{c}{d}$ with $a < b$ and $c < d$, then $|a - b| < |c - d| \iff \frac{a}{b} < \frac{c}{d}$. This is false. We can write $\frac{a}{b} = 1 - \frac{b-a}{b}$ and $\frac{c}{d} = 1 - \frac{d-c}{d}$, so you're comparing $\frac{b-a}{b}$ and $\frac{d-c}{d}$ (whichever is greater, the corresponding fraction is smaller). The first numerator may be smaller than the second, but the actual comparison of these fractions can of course go either way. For example, * Here is one with $b - a < d - c$ but $\frac{a}{b} > \frac{c}{d}$: consider $\frac{2}{3} > \frac{3}{5}$. * Here is one with $b - a < d - c$ but $\frac{a}{b} = \frac{c}{d}$: consider $\frac{2}{3} = \frac{4}{6}$. * Here is one with $b - a < d - c$ but $\frac{a}{b} < \frac{c}{d}$: consider $\frac{2}{3} < \frac{5}{7}$. So all results are possible; the test is nonsense. Edit: Just for fun/completeness, here is a table showing pairs $(\frac{a}{b}, \frac{c}{d})$ with each possible combination of the two comparisons: $$\begin{array}{c|c|c|c} & \frac{a}{b}<\frac{c}{d} & \frac{a}{b}=\frac{c}{d} & \frac{a}{b}>\frac{c}{d}\\ \hline \\ b-a<d-c & \frac23,\frac57 & \frac23,\frac46 & \frac23,\frac35\\ \hline \\ b-a=d-c & \frac23,\frac34 & \frac23,\frac23 & \frac23,\frac12\\ \hline \\ b-a>d-c & \frac57,\frac23 & \frac46,\frac23 & \frac35,\frac23 \\ \end{array}$$ (If you want examples involving fractions greater than $1$, turn each of the fractions upside down. Each of the inequalities between the fractions will reverse direction, so you'll still have a complete set of examples.)
--- Edit: On looking at that segment of the video, it's possible (not very clear) that what he may have been saying is equivalent to the following claim, which *is* true: if you have two fractions $\frac{a}{b}$ and $\frac{c}{d}$ with $a > b$ and $c > d$, and if $a - b < c - d$ **and** $b > d$, then $\frac{a}{b} < \frac{c}{d}$. Proof: $$\frac{a}{b} = 1 + \frac{a-b}{b} < 1 + \frac{c-d}{b} < 1 + \frac{c-d}{d} = \frac{c}{d}$$ With fractions less than $1$, the corresponding statement would be that if you have two fractions $\frac{a}{b}$ and $\frac{c}{d}$ with $a < b$ and $c < d$, and if $b - a < d - c$ **and** $b > d$, then $\frac{a}{b} > \frac{c}{d}$: $$\frac{a}{b} = 1 - \frac{b-a}{b} > 1 - \frac{b-a}{d} > 1 - \frac{d-c}{d} = \frac{c}{d}$$ But these are so many conditions on the hypothesis that I wonder how often it will be useful.
Here is the comparison criterion for fractions in the unit interval. You omit a $\rm\,\color{#c00}{key\ hypothesis}.$ **Lemma** $\rm\ \ \color{#c00}{ A>a},\,\ \color{blue}{b\!-\!a > B\!-\!A}\ \Rightarrow\ A/B > a/b,\ \ $ for $\rm\ \ A/B,\,\color{#0a0}{a/b \,\in\, (0,1)},\ \ B,b> 0\:$ **Proof** $\rm\ \ \ \ \displaystyle \frac{A}B-\frac{a}b\, =\, \frac{A(b\!-\!a)-a(B\!-\!A)}{bB} \,=\, \frac{(\color{#c00}{A\!-\!a})(\color{#0a0}{b\!-\!a}) + a(\color{blue}{b\!-\!a\!-\!(B\!-\!A))}}{bB}\, >\, 0$ **Remark** $\ $ It is a special case of the [continued fraction comparison algorithm.](https://math.stackexchange.com/a/14314/23500) $ $Indeed $$\rm\displaystyle \frac{A}B\, >\, \frac{a}b \iff \frac{b}a\, >\, \frac{B}A\ \stackrel{X\to\ X-1\!} {\iff}\ \frac{b-a}a\, >\, \frac{B-A}A \iff b\!-\!a\, >\, \frac{a}A\, (B-A)$$
358,704
A textbook proposed that "when comparing fractions, if the compared fractions are such that the numerator is smaller than the denominator, then the fraction with the greater (absolute) difference between numerator and denominator is the smallest among the fractions compared", and I found that many textbooks support this. Even in the video <http://www.youtube.com/watch?v=rJz-f7uCBns#t=6m35s> the presenter uses this idea, but for a case where the numerator is greater than the denominator. But consider the case $2/3$ & $20/30$: as per the proposed theory, $2/3 > 20/30$, but actually they are the same. Taking a slightly more involved case, if $2/7 = 0.285714$, we can certainly find another pair with a different difference but the same value: $3/0.285714 = 10.5$, so $3/10.5 = 0.285714$; here the difference is $7.5$ but the value of the ratio is still $0.285714$. So am I going wrong in understanding this concept preferred in many popular textbooks? If so, please spot the error and help me by giving conditions under which this fact holds.
2013/04/11
[ "https://math.stackexchange.com/questions/358704", "https://math.stackexchange.com", "https://math.stackexchange.com/users/30423/" ]
I have never seen this claim in any textbook; in any case it's **wrong**. The claim seems to be that if you have two fractions $\frac{a}{b}$ and $\frac{c}{d}$ with $a < b$ and $c < d$, then $|a - b| < |c - d| \iff \frac{a}{b} < \frac{c}{d}$. This is false. We can write $\frac{a}{b} = 1 - \frac{b-a}{b}$ and $\frac{c}{d} = 1 - \frac{d-c}{d}$, so you're comparing $\frac{b-a}{b}$ and $\frac{d-c}{d}$ (whichever is greater, the corresponding fraction is smaller). The first numerator may be smaller than the second, but the actual comparison of these fractions can of course go either way. For example, * Here is one with $b - a < d - c$ but $\frac{a}{b} > \frac{c}{d}$: consider $\frac{2}{3} > \frac{3}{5}$. * Here is one with $b - a < d - c$ but $\frac{a}{b} = \frac{c}{d}$: consider $\frac{2}{3} = \frac{4}{6}$. * Here is one with $b - a < d - c$ but $\frac{a}{b} < \frac{c}{d}$: consider $\frac{2}{3} < \frac{5}{7}$. So all results are possible; the test is nonsense. Edit: Just for fun/completeness, here is a table showing pairs $(\frac{a}{b}, \frac{c}{d})$ with each possible combination of the two comparisons: $$\begin{array}{c|c|c|c} & \frac{a}{b}<\frac{c}{d} & \frac{a}{b}=\frac{c}{d} & \frac{a}{b}>\frac{c}{d}\\ \hline \\ b-a<d-c & \frac23,\frac57 & \frac23,\frac46 & \frac23,\frac35\\ \hline \\ b-a=d-c & \frac23,\frac34 & \frac23,\frac23 & \frac23,\frac12\\ \hline \\ b-a>d-c & \frac57,\frac23 & \frac46,\frac23 & \frac35,\frac23 \\ \end{array}$$ (If you want examples involving fractions greater than $1$, turn each of the fractions upside down. Each of the inequalities between the fractions will reverse direction, so you'll still have a complete set of examples.)
--- Edit: On looking at that segment of the video, it's possible (not very clear) that what he may have been saying is equivalent to the following claim, which *is* true: if you have two fractions $\frac{a}{b}$ and $\frac{c}{d}$ with $a > b$ and $c > d$, and if $a - b < c - d$ **and** $b > d$, then $\frac{a}{b} < \frac{c}{d}$. Proof: $$\frac{a}{b} = 1 + \frac{a-b}{b} < 1 + \frac{c-d}{b} < 1 + \frac{c-d}{d} = \frac{c}{d}$$ With fractions less than $1$, the corresponding statement would be that if you have two fractions $\frac{a}{b}$ and $\frac{c}{d}$ with $a < b$ and $c < d$, and if $b - a < d - c$ **and** $b > d$, then $\frac{a}{b} > \frac{c}{d}$: $$\frac{a}{b} = 1 - \frac{b-a}{b} > 1 - \frac{b-a}{d} > 1 - \frac{d-c}{d} = \frac{c}{d}$$ But these are so many conditions on the hypothesis that I wonder how often it will be useful.
The test is as follows: 1. Given two Proper Fractions n/d & N/D where D>d, If (D-N) ≤ (d-n) then N/D>n/d 2. Given two Improper Fractions n/d and N/D where D>d, If (N-D) ≤ (n-d) then n/d>N/D (Note: the conditions are sufficient but not necessary).
26,876,461
I want to pass an SQL command to a variable and execute an aggregate function with it. But when I run it, this error is shown and nothing appears for the @cnt value: *Must declare the variable '@cnt'* What's my mistake? ``` DECLARE @ret varchar(300); set @ret = ''; declare @cnt int; set @ret = 'select @cnt = count(*) from TBL1' EXEC (@ret) print @cnt ```
2014/11/11
[ "https://Stackoverflow.com/questions/26876461", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3466755/" ]
You could use sp\_executesql to access a variable inside of your dynamic SQL String: ``` DECLARE @SQLString nvarchar(500); DECLARE @ParmDefinition nvarchar(500); DECLARE @cnt varchar(30); SET @SQLString = N'SELECT @cntOUT = count(1) from tbl1'; SET @ParmDefinition = N'@cntOUT varchar(30) OUTPUT'; EXECUTE sp_executesql @SQLString, @ParmDefinition, @cntOUT=@cnt OUTPUT; SELECT @cnt; ```
The exec statement signals the end of a batch, so the print statement doesn't know about @cnt. Try this instead: ``` DECLARE @ret varchar(300) set @ret = '' set @ret = 'declare @cnt int select @cnt = count(*) from [Load].RINData print @cnt' EXEC (@ret) ```
114,839
I am preparing myself for a job interview for a lecturer position. According to many articles on academic job interviews, a common question to prepare for is "What can you do to enhance our department?" I'm not sure what exactly the committee would mean by that. I understand that I should tailor my own answer, but the question is not really clear to me.
2018/08/06
[ "https://academia.stackexchange.com/questions/114839", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/41930/" ]
Well, what can you do that nobody else in the department can? Some possible examples: 1. If you've had experience teaching electronically, and the department doesn't currently offer e-learning, you could point this out. 2. If you have expertise in a certain subject that nobody else in the department has, you could point this out. 3. If you've seen the content, homework assignments, etc, of the current courses the department offers, and you think you can do better, you could point this out. For example maybe in your experience students like to learn about [topic] which would be a really good addition to the department's [course], and you are very capable of teaching [topic].
It is an obviously open ended question that is intended to be hard. It will be used differently by different people. Often it is used because they can't think of anything very specific to ask you. In the book Siddhartha by Hesse, Siddhartha is asked the question by a potential employer. His response is "I can think. I can wait. I can fast" which seems a bit non-responsive. The statement has been widely analyzed. A similarly orthogonal answer is sometimes fine, or not, but it depends on your reading of the situation. But the answer should somehow be meaningful to you. One specific piece of advice, though, is not to use the question to appear arrogant. "I play well with others." Fine, even if a bit silly. "I have a 180 measured IQ." Not fine. It is good to think about the question. It is also good to think about why you want *this* job. When you think about such questions, think beyond your professional capabilities. Think about your goals, your other interests, etc.
114,839
I am preparing myself for a job interview for a lecturer position. According to many articles on academic job interviews, a common question to prepare for is "What can you do to enhance our department?" I'm not sure what exactly the committee would mean by that. I understand that I should tailor my own answer, but the question is not really clear to me.
2018/08/06
[ "https://academia.stackexchange.com/questions/114839", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/41930/" ]
Well, what can you do that nobody else in the department can? Some possible examples: 1. If you've had experience teaching electronically, and the department doesn't currently offer e-learning, you could point this out. 2. If you have expertise in a certain subject that nobody else in the department has, you could point this out. 3. If you've seen the content, homework assignments, etc, of the current courses the department offers, and you think you can do better, you could point this out. For example maybe in your experience students like to learn about [topic] which would be a really good addition to the department's [course], and you are very capable of teaching [topic].
This is an opportunity to sell yourself. The question is vague which allows you to take the conversation to whatever topic you want to bring up. Teaching? Research? Potential collaborators? Here is a short list of ideas that I used when I got questions like this: * I can teach X, which I think could help the department. * My research interests overlap with Y's. * No one in the department is currently doing research in Z. Another technique is to transform vague questions into questions that you can answer. I did this a lot. See my [list](http://austinhenley.com/resources/facultyinterviewquestions.html) of questions that I was asked during my faculty interviews for more ideas.
114,839
I am preparing myself for a job interview for a lecturer position. According to many articles on academic job interviews, a common question to prepare for is "What can you do to enhance our department?" I'm not sure what exactly the committee would mean by that. I understand that I should tailor my own answer, but the question is not really clear to me.
2018/08/06
[ "https://academia.stackexchange.com/questions/114839", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/41930/" ]
This is an update for those who are planning to go down this route: I had my interview. I was asked a lot of questions regarding my teaching philosophy and experience. * I was asked why I would like to teach at their university (to protect my privacy the name of the university will remain anonymous, but let's say it's a prestigious research university). * One particular question I was asked was what the most difficult part of teaching I ever faced was, and how I dealt with it. * They also named a few courses and asked if I've taught them before. * They asked if I have ever taught classes as an instructor and not a TA. * They asked a follow-up question about whether I did the A to Z of course preparation myself, e.g. writing the syllabus, making exams, etc. At any rate, while I didn't get the cliché of why-are-you-a-good-fit?, the committee aimed to get to know me and investigate my teaching skills in detail by asking straightforward questions.
It is an obviously open ended question that is intended to be hard. It will be used differently by different people. Often it is used because they can't think of anything very specific to ask you. In the book Siddhartha by Hesse, Siddhartha is asked the question by a potential employer. His response is "I can think. I can wait. I can fast" which seems a bit non-responsive. The statement has been widely analyzed. A similarly orthogonal answer is sometimes fine, or not, but it depends on your reading of the situation. But the answer should somehow be meaningful to you. One specific piece of advice, though, is not to use the question to appear arrogant. "I play well with others." Fine, even if a bit silly. "I have a 180 measured IQ." Not fine. It is good to think about the question. It is also good to think about why you want *this* job. When you think about such questions, think beyond your professional capabilities. Think about your goals, your other interests, etc.
114,839
I am preparing myself for a job interview for a lecturer position. According to many articles on academic job interviews, a common question to prepare for is "What can you do to enhance our department?" I'm not sure what exactly the committee would mean by that. I understand that I should tailor my own answer, but the question is not really clear to me.
2018/08/06
[ "https://academia.stackexchange.com/questions/114839", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/41930/" ]
This is an update for those who are planning to go down this route: I had my interview. I was asked a lot of questions regarding my teaching philosophy and experience. * I was asked why I would like to teach at their university (to protect my privacy the name of the university will remain anonymous, but let's say it's a prestigious research university). * One particular question I was asked was what the most difficult part of teaching I ever faced was, and how I dealt with it. * They also named a few courses and asked if I've taught them before. * They asked if I have ever taught classes as an instructor and not a TA. * They asked a follow-up question about whether I did the A to Z of course preparation myself, e.g. writing the syllabus, making exams, etc. At any rate, while I didn't get the cliché of why-are-you-a-good-fit?, the committee aimed to get to know me and investigate my teaching skills in detail by asking straightforward questions.
This is an opportunity to sell yourself. The question is vague which allows you to take the conversation to whatever topic you want to bring up. Teaching? Research? Potential collaborators? Here is a short list of ideas that I used when I got questions like this: * I can teach X, which I think could help the department. * My research interests overlap with Y's. * No one in the department is currently doing research in Z. Another technique is to transform vague questions into questions that you can answer. I did this a lot. See my [list](http://austinhenley.com/resources/facultyinterviewquestions.html) of questions that I was asked during my faculty interviews for more ideas.
48,717,580
Angular coders may recognize this code as a modified [Tour of Heroes](https://angular.io/tutorial) project. How do I set the width of the "badge" CSS class dynamically such that all badges in the `ul` are the same width, fitting the longest id? So, the badge width property should be 2ch with the given dummy data which currently has a max length of 2. Template ``` <ul class="contacts"> <li *ngFor="let contact of contacts"> <a routerLink="/detail/{{contact.id}}"> <span class="badge">{{contact.id}}</span>{{contact.name}} </a> <button title="Delete contact" (click)="delete(contact)">x</button> </li> </ul> ``` CSS ``` .contacts .badge { color: white; background-color: #607D8B; justify-content: flex-end; width: 2ch; padding: 0 1ch 0 1ch; } ``` Dummy data ``` const contacts = [ { id: 1, name: 'John McClane' }, { id: 2, name: 'Holly Gennero' }, { id: 3, name: 'Harry Ellis' }, { id: 4, name: 'Karl Vreski' }, { id: 5, name: 'Al Powell' }, { id: 6, name: 'Tony Vreski' }, { id: 7, name: 'Theo' }, { id: 8, name: 'Agent Johnson' }, { id: 9, name: 'Special Agent Johnson' }, { id: 10, name: 'Dwayne T. Robinson' }, { id: 11, name: 'Yoshinobu Takagi' }, { id: 12, name: 'Hans Gruber'} ]; ``` I've tried to style the page based on this [SO answer](https://stackoverflow.com/a/13904086/3853136) but no such luck. If possible, I prefer keep the styling in CSS rather than setting the width programmatically in javascript.
2018/02/10
[ "https://Stackoverflow.com/questions/48717580", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
> > I am wondering how the compiler handles this situation. Are the macro parameters still used for compiling? > > > Macros are processed in the pre-processing stage. If the pre-processor is able to deal with the extra arguments used in the usage of the macro, the only thing it can do is drop those parameters in the macro expansion. > > E.g. will one be able to see the "logging started" string on reverse engeneering? > > > No, the code processed by the compiler will not have that line at all. --- If you have the option to change those lines of code, I would recommend changing the non-debug definition of the macro to be a noop expansion. E.g.: ``` #define Logger(level, input) (void)level; ```
As per the law, a macro function must declare as many parameters as you call it with. So calling your macro ``` #define Logger() ``` as `Logger(LogLevel::Info, "logging started")` results in an error. MSVC probably allows it because it isn't standard-compliant. There's not much to reason further (which is my answer to the actual question). You either ``` #define Logger(unused,unused2) ``` and leave the replacement part empty or ``` #define Logger(...) ``` and suffer the consequences of being able to call with any number of arguments. [Hint: First one is recommended.]
22,447,481
What is wrong with my query? Can somebody help me please? I get this error message > > "Unknown column 'wp\_gdsr\_data\_article. (SUM(user\_voters)+SUM(visitor\_voters))' in 'order clause'" > > > but the '`wp_gdsr_data_article`' column exists ``` SELECT * FROM `wp_posts` INNER JOIN wp_term_relationships ON wp_term_relationships.object_id = ID AND wp_term_relationships.term_taxonomy_id = 1 INNER JOIN wp_gdsr_data_article ON post_id = ID WHERE `post_status` = 'publish' AND `post_type` = 'post' ORDER BY `wp_gdsr_data_article`.`(SUM(user_voters)+SUM(visitor_voters))` DESC LIMIT 1 , 30 ```
2014/03/17
[ "https://Stackoverflow.com/questions/22447481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2306309/" ]
You are using an expression in the `ORDER` clause which is not a table's column. Hence you can't use a table identifier on the outcome of an expression. This is wrong. ``` ORDER BY `wp_gdsr_data_article`.`(SUM(user_voters)+SUM(visitor_voters))` DESC ``` Change it to: ``` ORDER BY (SUM(user_voters)+SUM(visitor_voters)) DESC ``` And you can't directly use any aggregate function in an `ORDER BY` clause like that. Calculate the `SUM...` parts separately and then use them in the `ORDER BY`. ``` SELECT * from ( SELECT *, (SUM(user_voters)+SUM(visitor_voters)) total FROM `wp_posts` INNER JOIN wp_term_relationships ON wp_term_relationships.object_id = ID AND wp_term_relationships.term_taxonomy_id = 1 INNER JOIN wp_gdsr_data_article ON post_id = ID WHERE `post_status` = 'publish' AND `post_type` = 'post' LIMIT 1 , 30 ) results ORDER BY total ``` **Refer to**: [*How to ORDER BY a SUM() in MySQL?*](https://stackoverflow.com/questions/1309841/how-to-order-by-a-sum-in-mysql)
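The alias-then-order pattern this answer relies on can be tried outside MySQL as well; here is a minimal illustration using Python's built-in sqlite3 (table and column names are invented for the demo):

```python
import sqlite3

# Compute the aggregate once, give it an alias, and ORDER BY the alias --
# never ORDER BY `table`.`(expression)`, which names a non-existent column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (post_id INTEGER, user_voters INTEGER, visitor_voters INTEGER)")
conn.executemany("INSERT INTO votes VALUES (?, ?, ?)",
                 [(1, 3, 4), (2, 10, 2), (1, 2, 2), (3, 0, 5)])

rows = conn.execute("""
    SELECT post_id, SUM(user_voters) + SUM(visitor_voters) AS total
    FROM votes
    GROUP BY post_id
    ORDER BY total DESC
""").fetchall()

print(rows)  # [(2, 12), (1, 11), (3, 5)]
```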
Remove the backquotes from the column and table names in your statement. Besides, the `ORDER BY` part should be based on a column or an evaluated value in your statement; the way you use it is wrong. You will need to evaluate the part you use in the `ORDER BY` section before ordering by it, something like this (untested): ``` SELECT wp_posts.*, SUM(wp_gdsr_data_article.user_voters) + SUM(wp_gdsr_data_article.visitor_voters) AS total_voters FROM wp_posts INNER JOIN wp_term_relationships ON wp_term_relationships.object_id = ID AND wp_term_relationships.term_taxonomy_id = 1 INNER JOIN wp_gdsr_data_article ON post_id = ID WHERE post_status = 'publish' AND post_type = 'post' ORDER BY total_voters DESC LIMIT 1 , 30 ```
22,447,481
What is wrong with my query? Can somebody help me please? I get this error message > > "Unknown column 'wp\_gdsr\_data\_article. (SUM(user\_voters)+SUM(visitor\_voters))' in 'order clause'" > > > but the '`wp_gdsr_data_article`' column exists ``` SELECT * FROM `wp_posts` INNER JOIN wp_term_relationships ON wp_term_relationships.object_id = ID AND wp_term_relationships.term_taxonomy_id = 1 INNER JOIN wp_gdsr_data_article ON post_id = ID WHERE `post_status` = 'publish' AND `post_type` = 'post' ORDER BY `wp_gdsr_data_article`.`(SUM(user_voters)+SUM(visitor_voters))` DESC LIMIT 1 , 30 ```
2014/03/17
[ "https://Stackoverflow.com/questions/22447481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2306309/" ]
You are using an expression in the `ORDER` clause which is not a table's column. Hence you can't use a table identifier on the outcome of an expression. This is wrong. ``` ORDER BY `wp_gdsr_data_article`.`(SUM(user_voters)+SUM(visitor_voters))` DESC ``` Change it to: ``` ORDER BY (SUM(user_voters)+SUM(visitor_voters)) DESC ``` And you can't directly use any aggregate function in an `ORDER BY` clause like that. Calculate the `SUM...` parts separately and then use them in the `ORDER BY`. ``` SELECT * from ( SELECT *, (SUM(user_voters)+SUM(visitor_voters)) total FROM `wp_posts` INNER JOIN wp_term_relationships ON wp_term_relationships.object_id = ID AND wp_term_relationships.term_taxonomy_id = 1 INNER JOIN wp_gdsr_data_article ON post_id = ID WHERE `post_status` = 'publish' AND `post_type` = 'post' LIMIT 1 , 30 ) results ORDER BY total ``` **Refer to**: [*How to ORDER BY a SUM() in MySQL?*](https://stackoverflow.com/questions/1309841/how-to-order-by-a-sum-in-mysql)
Try this ``` SELECT *, (SUM(wp_gdsr_data_article.user_voters)+SUM(wp_gdsr_data_article.visitor_voters)) AS someSum FROM `wp_posts` INNER JOIN wp_term_relationships ON wp_term_relationships.object_id = ID AND wp_term_relationships.term_taxonomy_id = 1 INNER JOIN wp_gdsr_data_article ON post_id = ID WHERE `post_status` = 'publish' AND `post_type` = 'post' ORDER BY someSum DESC LIMIT 1 , 30 ```
1,610,587
I recently switched from Ubuntu to macOS Big Sur 11.1, and Apple is using `zsh` as its default shell. The `.zsh_sessions` folder is taking up about 100 MB and I want to disable it completely, although I could delete it manually. I have added the following to my `.zshrc` but it was of no use: ```bash set hist_ignore_all_dumps setopt hist_ignore_space setopt HIST_NO_FUNCTIONS SHELL_SESSION_HISTORY=0 ``` I want to disable the creation of the `.zsh_sessions` folder completely.
2020/12/17
[ "https://superuser.com/questions/1610587", "https://superuser.com", "https://superuser.com/users/1252304/" ]
The code that sets up macOS's "Save/Restore Shell State" feature for Zsh can be found in `/etc/zshrc_Apple_Terminal`. As explained in that file, to disable this feature, you need to do the following: 1. In your home dir, create a plain text file named `.zprofile`. 2. In this file, add the following: ```bash export SHELL_SESSIONS_DISABLE=1 ``` Why we need to put the variable in `~/.zprofile` ------------------------------------------------ When Zsh is started as an *interactive* shell (login or not), it will source `/etc/zshrc` and `~/.zshrc`, in that order. If (and only if) macOS’s `/etc/zshrc` runs inside Terminal.app, it calls `/etc/zshrc_Apple_Terminal`, which starts the “Save/Restore” feature. This means that we cannot set the variable to disable this feature in `~/.zshrc`, because that file is read only *after* the feature has already been started and “restored” your previous session. However, whenever you open a new tab or window in Apple’s Terminal.app, it starts a new interactive *login* shell. When a Zsh login shell starts up, it sources `~/.zprofile` and does so *before* sourcing `/etc/zshrc`. Why we need to `export` the variable ------------------------------------ Interactive shells that are descendants of a login shell are not automatically themselves login shells. Thus, if we would disable the feature in each login shell by only *setting* the variable, then any subshell started with, say, `exec zsh`, would still start up with the “Save/Restore” feature enabled. To fix this, we `export` the variable. This puts into the *environment.* Each child process inherits its parent’s environment, with all the variables in it. This way we make sure that the “Save/Restore” feature is also disabled in interactive shells that are descendants of the login shell, but not necessarily login shells themselves.
You can switch to bash, and bash also logs sessions in an analogous `~/.bash_sessions` folder (everything in UNIX is actually a file). To prevent the creation of session files in `/bin/bash`, run `touch ~/.bash_sessions_disable`. If you don't like the message about Apple switching to zsh, you don't have to see it if you switch from Apple's Terminal to iTerm2. iTerm2 is by far better, and while you are at it, get brew, then `brew install bash`, and don't use Apple's bash at all. I suggest getting iTerm2, forgetting Apple's Terminal altogether, and for that matter getting the open-source bash and pointing to it in iTerm2 as either your login or non-login shell. To this question specifically, look at these two files and see how this sessions feature is implemented; they are sourced in the normal Apple Terminal start-up sequence: ``` /etc/bashrc_Apple_Terminal /etc/zshrc_Apple_Terminal ``` Do a `man bash` and/or `man zsh`.
18,603,702
I'm not sure about the title, I tried my best. I have a table displayed with information from a database using this file display.php ``` <?php mysql_connect("localhost", "root", "root") or die(mysql_error()); mysql_select_db("tournaments") or die(mysql_error()); $result = mysql_query("SELECT * FROM tournies") or die(mysql_error()); echo '<table id="bets" class="tablesorter" cellspacing="0" summary="Datapass"> <thead> <tr> <th>Tournament <br> Name</th> <th>Pot</th> <th>Maximum <br> Players</th> <th>Minimum <br> Players</th> <th>Host</th> <th></th> <th></th> </tr> </thead> <tbody>'; while($row = mysql_fetch_array( $result )) { $i=0; if( $i % 2 == 0 ) { $class = ""; } else { $class = ""; } echo "<tr" . $class . "><td>"; echo $row['tour_name']; $tour_id = $row['tour_name']; echo "</td><td>"; echo $row['pot']," Tokens"; echo "</td><td class=\"BR\">"; echo $row['max_players']," Players"; echo "</td><td class=\"BR\">"; echo $row['min_players']," Players"; echo "</td><td class=\"BR\">"; echo $row['host']; echo "</td><td>"; echo "<input id=\"delete_button\" type=\"button\" value=\"Delete Row\" onClick=\"SomeDeleteRowFunction(this)\">"; echo "</td><td>"; echo "<form action=\"join.php?name=$name\" method=\"POST\" >"; echo "<input id=\"join_button\" type=\"submit\" value=\"Join\">"; echo "</td></tr>"; } echo "</tbody></table>"; ?> ``` Basically I want the user to press a button from a row of the table and they go to a new page called join.php. I need the persons username and the name of the tournament from the row the clicked. For example here's my page: ![my page](https://i.stack.imgur.com/eG0g9.png) When they click the join button at the end of row one it should send them to 'join.php?name=thierusernamehere&tourname=dfgdds' Any help much appreciated. Thanks.
2013/09/04
[ "https://Stackoverflow.com/questions/18603702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2288508/" ]
``` echo '<td><a href="join.php?name='.$row['name'].'&tourname='.$row['tour_name'].'">Join</a></td>' ```
There are many ways to approach this. The easiest is just `echo '<a href="join.php?name=' . $row['col_name'] . '">JOIN</a>';`, or you can use a form with a hidden input and a submit button. BUT your code is really a mess; try to make it more maintainable and readable. And do NOT use any `mysql_*` functions; they are deprecated. Read more about PDO:

* <http://php.net/manual/en/book.pdo.php>
* <http://net.tutsplus.com/tutorials/php/why-you-should-be-using-phps-pdo-for-database-access/>
37,853,548
I need to scrape roughly 30GB of JSON data from a website API as quickly as possible. I don't need to parse it -- I just need to save everything that shows up on each API URL. * I can request quite a bit of data at a time -- say 1MB or even 50MB 'chunks' (API parameters are encoded in the URL and allow me to select how much data I want per request) * the API places a limit of 1 request per second. * I would like to accomplish this on a laptop and 100MB/sec internet connection Currently, I am accomplishing this (synchronously & too slowly) by: -pre-computing all of the (encoded) URL's I want to scrape -using Python 3's requests library to request each URL and save the resulting JSON one-by-one in separate .txt files. Basically, my synchronous, too-slow solution looks like this (simplified slightly): ``` #for each pre-computed encoded URL do: curr_url_request = requests.get(encoded_URL_i, timeout=timeout_secs) if curr_url_request.ok: with open('json_output.txt', 'w') as outfile: json.dump(curr_url_request.json(), outfile) ``` What would be a better/faster way to do this? Is there a straight-forward way to accomplish this asynchronously but respecting the 1-request-per-second threshold? I have read about grequests (no longer maintained?), twisted, asyncio, etc but do not have enough experience to know whether/if one of these is the right way to go. **EDIT** Based on Kardaj's reply below, I decided to give async Tornado a try. Here's my current Tornado version (which is heavily based on one of the examples in their docs). It successfully limits concurrency. The hangup is, how can I do an overall rate-limit of 1 request per second globally across all workers? (Kardaj, the async sleep makes a worker sleep before working, but does not check whether other workers 'wake up' and request at the same time. When I tested it, all workers grab a page and break the rate limit, then go to sleep simultaneously). 
``` from datetime import datetime from datetime import timedelta from tornado import httpclient, gen, ioloop, queues URLS = ["https://baconipsum.com/api/?type=meat", "https://baconipsum.com/api/?type=filler", "https://baconipsum.com/api/?type=meat-and-filler", "https://baconipsum.com/api/?type=all-meat&paras=2&start-with-lorem=1"] concurrency = 2 def handle_request(response): if response.code == 200: with open("FOO"+'.txt', "wb") as thisfile:#fix filenames to avoid overwrite thisfile.write(response.body) @gen.coroutine def request_and_save_url(url): try: response = yield httpclient.AsyncHTTPClient().fetch(url, handle_request) print('fetched {0}'.format(url)) except Exception as e: print('Exception: {0} {1}'.format(e, url)) raise gen.Return([]) @gen.coroutine def main(): q = queues.Queue() tstart = datetime.now() fetching, fetched = set(), set() @gen.coroutine def fetch_url(worker_id): current_url = yield q.get() try: if current_url in fetching: return #print('fetching {0}'.format(current_url)) print("Worker {0} starting, elapsed is {1}".format(worker_id, (datetime.now()-tstart).seconds )) fetching.add(current_url) yield request_and_save_url(current_url) fetched.add(current_url) finally: q.task_done() @gen.coroutine def worker(worker_id): while True: yield fetch_url(worker_id) # Fill a queue of URL's to scrape list = [q.put(url) for url in URLS] # this does not make a list...it just puts all the URLS into the Queue # Start workers, then wait for the work Queue to be empty. for ii in range(concurrency): worker(ii) yield q.join(timeout=timedelta(seconds=300)) assert fetching == fetched print('Done in {0} seconds, fetched {1} URLs.'.format( datetime.now() - tstart, len(fetched))) if __name__ == '__main__': import logging logging.basicConfig() io_loop = ioloop.IOLoop.current() io_loop.run_sync(main) ```
2016/06/16
[ "https://Stackoverflow.com/questions/37853548", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3580621/" ]
You are parsing the content and then serializing it again. You can just write the content directly to a file. Note that `Response.content` is bytes in Python 3, so the file must be opened in binary mode:

```
curr_url_request = requests.get(encoded_URL_i, timeout=timeout_secs)
if curr_url_request.ok:
    with open('json_output.txt', 'wb') as outfile:  # 'wb': .content is bytes
        outfile.write(curr_url_request.content)
```

That probably removes most of the processing overhead.
`tornado` has a very powerful asynchronous client. Here's some basic code that may do the trick: ``` from tornado.httpclient import AsyncHTTPClient import tornado.gen import tornado.ioloop URLS = [] http_client = AsyncHTTPClient() loop = tornado.ioloop.IOLoop.current() def handle_request(response): if response.code == 200: with open('json_output.txt', 'ab') as outfile: # 'ab': response.body is bytes outfile.write(response.body) @tornado.gen.coroutine def queue_requests(): results = [] for url in URLS: nxt = tornado.gen.sleep(1) # 1 request per second res = http_client.fetch(url, handle_request) results.append(res) yield nxt yield results # wait for all requests to finish loop.add_callback(loop.stop) loop.add_callback(queue_requests) loop.start() ``` This is a straight-forward approach that may lead to too many connections with the remote server. You may have to resolve such a problem using a sliding window while queuing the requests. In case of request timeouts or specific headers required, feel free to read the [doc](http://tornadokevinlee.readthedocs.io/en/latest/httpclient.html#tornado.httpclient.HTTPRequest)
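The global one-request-per-second limit the question's edit asks about can be enforced with a single shared "next allowed send time" that every worker updates under a lock, so no two workers can claim the same slot. Here is a hedged `asyncio` sketch of that idea (not Tornado, but it ports directly); all names are made up, and the real HTTP fetch is replaced by recording a timestamp so the pacing is visible:

```python
import asyncio
import time

async def rate_limited_worker(queue, results, lock, state, min_interval):
    # Each worker reserves the next global send slot under the lock, so the
    # rate limit holds across ALL workers, not per worker.
    while True:
        try:
            url = queue.get_nowait()
        except asyncio.QueueEmpty:
            return
        async with lock:
            delay = state["next_time"] - time.monotonic()
            if delay > 0:
                await asyncio.sleep(delay)
            state["next_time"] = time.monotonic() + min_interval
        # The real HTTP fetch would go here; we record the send time instead.
        results.append((url, time.monotonic()))

async def scrape(urls, concurrency=3, min_interval=1.0):
    queue = asyncio.Queue()
    for url in urls:
        queue.put_nowait(url)
    results, lock = [], asyncio.Lock()
    state = {"next_time": time.monotonic()}
    workers = [rate_limited_worker(queue, results, lock, state, min_interval)
               for _ in range(concurrency)]
    await asyncio.gather(*workers)
    return results
```

With `min_interval=1.0` this paces requests at one per second across all workers, which is exactly the behavior the question's Tornado version lacked.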
43,453,236
In html when creating tabs, I can set Paris tab active(hard coded in html with "style="display: block;"). So, when page is loaded the active tab(Paris) would show. **Problem** When I click on different tab (London) and hit refresh it shows Paris again. How can I hit refresh and show me the information of the tab that is currently active? Instead of bringing me back to the defined tab. Maybe javascript or jquery can fix my problem? ``` <!DOCTYPE html> <html> <head> <div class="tab"> <button class="tablinks" onclick="openCity(event, 'London')">London</button> <button class="tablinks" onclick="openCity(event, 'Paris')">Paris</button> <button class="tablinks" onclick="openCity(event, 'Tokyo')">Tokyo</button> </div> <div id="London" class="tabcontent"> <h3>London</h3> <p>London is the capital city of England.</p> </div> <div id="Paris" class="tabcontent" style="display: block;> <h3>Paris</h3> <p>Paris is the capital of France.</p> </div> <div id="Tokyo" class="tabcontent"> <h3>Tokyo</h3> <p>Tokyo is the capital of Japan.</p> </div> <script> function openCity(evt, cityName) { var i, tabcontent, tablinks; tabcontent = document.getElementsByClassName("tabcontent"); for (i = 0; i < tabcontent.length; i++) { tabcontent[i].style.display = "none"; } tablinks = document.getElementsByClassName("tablinks"); for (i = 0; i < tablinks.length; i++) { tablinks[i].className = tablinks[i].className.replace(" active", ""); } document.getElementById(cityName).style.display = "block"; evt.currentTarget.className += " active"; } </script> </body> </html> ``` [Plunker View](https://plnkr.co/edit/lP9d9F7OtAvOuGxD50KD?p=preview) **Example Image** [![Example](https://i.stack.imgur.com/XrZ89.png)](https://i.stack.imgur.com/XrZ89.png) Currently on London, If I hit refresh it should display London's details. Should also work if it were Paris, or Tokyo.
2017/04/17
[ "https://Stackoverflow.com/questions/43453236", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7032335/" ]
To avoid the default margins in general, you can add `margin: 0;` to `html` and `body`. To place your absolutely positioned menu *behind* the `h2`element, you can apply `z-index: -1`, which moves it behind its parent element. In my snippet below I also changed the text-centering to right alignment and added a padding-right on the `ul`. You can play around with those values so they fit your needs. ```css html, body { margin: 0; } #mainMenu { font-family:Arial, Times, sans-serif; list-style-type:none; padding-right: 30px; } #mainMenu a { text-decoration:none; margin:5px; padding:2px; color:SeaGreen; font-weight:bold; } #mainMenu a:hover { color:Teal; } #menu { text-align:right; width:100%; height:50px; background-color:paleGoldenRod; position: absolute; z-index: -1; } li { display:inline; } footer { background-color:SlateGray; height:150px; width:100%; position:absolute; bottom:0; left:0; } ``` ```html <!DOCTYPE html> <html> <head> <title>Miko</title> <link href="#" rel="stylesheet" type="text/css"> </head> <body> <div id="menu"> <ul id="mainMenu"> <li><a href="#">HOME</a></li> <li><a href="#">ABOUT</a></li> <li><a href="#">CONTACT ME</a></li> </ul> </div> <h2>About The Page</h2> <p>To Be Added</p> <footer> <p>Web Design</p> </footer> </body> </html> ```
Add `padding-top: 50px` (the menu height) to `body`. ```css body { padding-top: 50px; } #mainMenu { font-family:Arial, Times, sans-serif; list-style-type:none; padding:0; } #mainMenu a { text-decoration:none; margin:5px; padding:2px; color:SeaGreen; font-weight:bold; } #mainMenu a:hover { color:Teal; } #menu { text-align:center; width:100%; height:50px; background-color:paleGoldenRod; position:absolute; left:0; top:0; } li { display:inline; } footer { background-color:SlateGray; height:150px; width:100%; position:absolute; bottom:0; left:0; } ``` ```html <!DOCTYPE html> <html> <head> <title>Miko</title> <link href="#" rel="stylesheet" type="text/css"> </head> <body> <div id="menu"> <ul id="mainMenu"> <li><a href="#">HOME</a></li> <li><a href="#">ABOUT</a></li> <li><a href="#">CONTACT ME</a></li> </ul> </div> <h2>About The Page</h2> <p>To Be Added</p> <footer> <p>Web Design</p> </footer> </body> </html> ``` [JSBin](https://jsbin.com/zokuwujase/1/edit?html,css,output)
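To make a refresh keep the active tab, one common approach is to persist the last opened tab name in the browser's `localStorage` and restore it on load. The sketch below is an assumption-laden illustration: the `"lastTab"` key and both helper names are made up, and it is meant to be wired into the question's `openCity` function.

```javascript
// Sketch: remember the last opened tab so a page refresh restores it.
// "lastTab" is an arbitrary storage key chosen for this example.
function rememberTab(cityName) {
  localStorage.setItem("lastTab", cityName);
}

// Returns the saved tab name, or the given default on the first visit.
function restoreTab(defaultCity) {
  return localStorage.getItem("lastTab") || defaultCity;
}
```

You would call `rememberTab(cityName)` at the end of `openCity`, and on page load open `restoreTab('Paris')` instead of relying on the hard-coded `display: block`.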
43,453,236
In html when creating tabs, I can set Paris tab active(hard coded in html with "style="display: block;"). So, when page is loaded the active tab(Paris) would show. **Problem** When I click on different tab (London) and hit refresh it shows Paris again. How can I hit refresh and show me the information of the tab that is currently active? Instead of bringing me back to the defined tab. Maybe javascript or jquery can fix my problem? ``` <!DOCTYPE html> <html> <head> <div class="tab"> <button class="tablinks" onclick="openCity(event, 'London')">London</button> <button class="tablinks" onclick="openCity(event, 'Paris')">Paris</button> <button class="tablinks" onclick="openCity(event, 'Tokyo')">Tokyo</button> </div> <div id="London" class="tabcontent"> <h3>London</h3> <p>London is the capital city of England.</p> </div> <div id="Paris" class="tabcontent" style="display: block;> <h3>Paris</h3> <p>Paris is the capital of France.</p> </div> <div id="Tokyo" class="tabcontent"> <h3>Tokyo</h3> <p>Tokyo is the capital of Japan.</p> </div> <script> function openCity(evt, cityName) { var i, tabcontent, tablinks; tabcontent = document.getElementsByClassName("tabcontent"); for (i = 0; i < tabcontent.length; i++) { tabcontent[i].style.display = "none"; } tablinks = document.getElementsByClassName("tablinks"); for (i = 0; i < tablinks.length; i++) { tablinks[i].className = tablinks[i].className.replace(" active", ""); } document.getElementById(cityName).style.display = "block"; evt.currentTarget.className += " active"; } </script> </body> </html> ``` [Plunker View](https://plnkr.co/edit/lP9d9F7OtAvOuGxD50KD?p=preview) **Example Image** [![Example](https://i.stack.imgur.com/XrZ89.png)](https://i.stack.imgur.com/XrZ89.png) Currently on London, If I hit refresh it should display London's details. Should also work if it were Paris, or Tokyo.
2017/04/17
[ "https://Stackoverflow.com/questions/43453236", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7032335/" ]
To avoid the default margins in general, you can add `margin: 0;` to `html` and `body`. To place your absolutely positioned menu *behind* the `h2`element, you can apply `z-index: -1`, which moves it behind its parent element. In my snippet below I also changed the text-centering to right alignment and added a padding-right on the `ul`. You can play around with those values so they fit your needs. ```css html, body { margin: 0; } #mainMenu { font-family:Arial, Times, sans-serif; list-style-type:none; padding-right: 30px; } #mainMenu a { text-decoration:none; margin:5px; padding:2px; color:SeaGreen; font-weight:bold; } #mainMenu a:hover { color:Teal; } #menu { text-align:right; width:100%; height:50px; background-color:paleGoldenRod; position: absolute; z-index: -1; } li { display:inline; } footer { background-color:SlateGray; height:150px; width:100%; position:absolute; bottom:0; left:0; } ``` ```html <!DOCTYPE html> <html> <head> <title>Miko</title> <link href="#" rel="stylesheet" type="text/css"> </head> <body> <div id="menu"> <ul id="mainMenu"> <li><a href="#">HOME</a></li> <li><a href="#">ABOUT</a></li> <li><a href="#">CONTACT ME</a></li> </ul> </div> <h2>About The Page</h2> <p>To Be Added</p> <footer> <p>Web Design</p> </footer> </body> </html> ```
Position in CSS is a tricky thing. Everyone uses absolute positioning for placing elements, but before using it you need to know what it actually does. With `position: absolute`, the element is taken out of the normal HTML flow and acts as if it is floating on top of the other elements. You have used absolute positioning for both the menu links and the footer, so those elements are floating on top of everything else. Use `absolute` (or `fixed`) positioning only when you want to stick an element to a specific position.

```
#mainMenu {
  font-family:Arial, Times, sans-serif;
  list-style-type:none;
  padding:0;
}
#mainMenu a {
  text-decoration:none;
  margin:5px;
  padding:2px;
  color:SeaGreen;
  font-weight:bold;
}
#mainMenu a:hover {
  color:Teal;
}
#menu {
  text-align:center;
  width:100%;
  height:50px;
  background-color:paleGoldenRod;
}
li {
  display:inline;
}
footer {
  background-color:SlateGray;
  height:150px;
  width:100%;
  position:absolute;
  bottom:0;
  left:0;
}
```

If you still want to use `position: absolute` for the menu, you need to give the `h2` tag a proper margin so that it will not be hidden behind the menu links.
1,893,955
> > What is the angle between the hour and minute hands of a clock at 6:05? > > > I have tried this Hour Hand: 12 hour = 360° 1 hr = 30° Total Hour above the Clock is $\frac{73}2$ hours In Minute Hand: 1 Hour = 360° 1 minutes = 6° Total Minutes covered by $6\times 5= 30$ $\frac{73}{2} \cdot30-30=345^\circ$ Is it Correct?
2016/08/16
[ "https://math.stackexchange.com/questions/1893955", "https://math.stackexchange.com", "https://math.stackexchange.com/users/359877/" ]
> > Here is how much the hour hand travels per hour, minute, second: > > > * Per Hour: > $$\text{H}=\frac{360^{\circ}}{12\space\text{hours}}=30^{\circ}\text{/}\space\text{hour}$$ > * Per Minute: > $$\text{M}=\frac{\text{H}^{\circ}}{60\space\text{minutes}}=\left(\frac{1}{2}\right)^{\circ}\text{/}\space\text{minute}$$ > * Per Second: > $$\text{S}=\frac{\text{M}^{\circ}}{60\space\text{seconds}}=\left(\frac{1}{120}\right)^{\circ}\text{/}\space\text{second}$$ > > > --- So, when it is $6:05$, we get: $$6\cdot30^{\circ}+5\cdot\left(\frac{1}{2}\right)^{\circ}=182.5^{\circ}$$ But, for the 'minute hand' we got $30^{\circ}$ too much, so: $$\text{angle}=182.5^{\circ}-30^{\circ}=152.5^{\circ}$$
The time in minutes (let's ignore the seconds, as there are none here) is 6\*60 + 5 = 365 minutes. The angle for a hand is `360° * time / (time per 360°)` The angle for the hour hand is ``` 360° * [365 / (12*60)] = 182.5 ``` The angle for the minute hand is ``` (360° * 365 / 60) % 360 = 30 ``` The angle between the 2 is 152.5°
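The same arithmetic is easy to check with a few lines of code; `clock_angle` below is nothing standard, just the hand-speed formulas from the answers above written out:

```python
def clock_angle(hour, minute):
    # Hour hand: 30 deg per hour plus 0.5 deg per minute.
    hour_deg = (hour % 12) * 30 + minute * 0.5
    # Minute hand: 6 deg per minute.
    minute_deg = minute * 6
    # Take the smaller of the two angles between the hands.
    diff = abs(hour_deg - minute_deg) % 360
    return min(diff, 360 - diff)

print(clock_angle(6, 5))  # 152.5
```

At 6:05 the hour hand is at 182.5° and the minute hand at 30°, giving the 152.5° computed above.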
1,893,955
> > What is the angle between the hour and minute hands of a clock at 6:05? > > > I have tried this Hour Hand: 12 hour = 360° 1 hr = 30° Total Hour above the Clock is $\frac{73}2$ hours In Minute Hand: 1 Hour = 360° 1 minutes = 6° Total Minutes covered by $6\times 5= 30$ $\frac{73}{2} \cdot30-30=345^\circ$ Is it Correct?
2016/08/16
[ "https://math.stackexchange.com/questions/1893955", "https://math.stackexchange.com", "https://math.stackexchange.com/users/359877/" ]
> > Here is how much the hour hand travels per hour, minute, second: > > > * Per Hour: > $$\text{H}=\frac{360^{\circ}}{12\space\text{hours}}=30^{\circ}\text{/}\space\text{hour}$$ > * Per Minute: > $$\text{M}=\frac{\text{H}^{\circ}}{60\space\text{minutes}}=\left(\frac{1}{2}\right)^{\circ}\text{/}\space\text{minute}$$ > * Per Second: > $$\text{S}=\frac{\text{M}^{\circ}}{60\space\text{seconds}}=\left(\frac{1}{120}\right)^{\circ}\text{/}\space\text{second}$$ > > > --- So, when it is $6:05$, we get: $$6\cdot30^{\circ}+5\cdot\left(\frac{1}{2}\right)^{\circ}=182.5^{\circ}$$ But, for the 'minute hand' we got $30^{\circ}$ too much, so: $$\text{angle}=182.5^{\circ}-30^{\circ}=152.5^{\circ}$$
Your approach is correct, but the hour is $6+\frac{5}{60}=\frac{73}{12}$, not $\frac{73}{2}$. This yields an angle of $\frac{73}{12}\cdot30^\circ$ for the hour hand, and $6^\circ\cdot 5=30^\circ$ for the minute hand. Thus the end result is $\frac{73}{12}\cdot30^\circ-30^\circ=152.5^\circ$
1,893,955
> > What is the angle between the hour and minute hands of a clock at 6:05? > > > I have tried this Hour Hand: 12 hour = 360° 1 hr = 30° Total Hour above the Clock is $\frac{73}2$ hours In Minute Hand: 1 Hour = 360° 1 minutes = 6° Total Minutes covered by $6\times 5= 30$ $\frac{73}{2} \cdot30-30=345^\circ$ Is it Correct?
2016/08/16
[ "https://math.stackexchange.com/questions/1893955", "https://math.stackexchange.com", "https://math.stackexchange.com/users/359877/" ]
> > Here is how much the hour hand travels per hour, minute, second: > > > * Per Hour: > $$\text{H}=\frac{360^{\circ}}{12\space\text{hours}}=30^{\circ}\text{/}\space\text{hour}$$ > * Per Minute: > $$\text{M}=\frac{\text{H}^{\circ}}{60\space\text{minutes}}=\left(\frac{1}{2}\right)^{\circ}\text{/}\space\text{minute}$$ > * Per Second: > $$\text{S}=\frac{\text{M}^{\circ}}{60\space\text{seconds}}=\left(\frac{1}{120}\right)^{\circ}\text{/}\space\text{second}$$ > > > --- So, when it is $6:05$, we get: $$6\cdot30^{\circ}+5\cdot\left(\frac{1}{2}\right)^{\circ}=182.5^{\circ}$$ But, for the 'minute hand' we got $30^{\circ}$ too much, so: $$\text{angle}=182.5^{\circ}-30^{\circ}=152.5^{\circ}$$
Let $\alpha$ be the angle in degrees of the hour hand, measured with reference to $12$ and $\beta$ be the angle in degrees of the minute hand measured with reference to $12$. At 6:05, the minute hand has moved $\frac{1}{12}$ of the way around the clock. Thus, $\beta=\frac{360^{\circ}}{12}=30^{\circ}$. The hour hand has moved $\frac{1}{12}$ of the way from $6$ to $7$. In other words, $\frac{1}{12}\cdot\frac{1}{12}=\frac{1}{144}$ of the way from the $6$. Thus, $\alpha=180+\frac{360}{144}=182.5^{\circ}$. Therefore, the angle between the two hands is $\alpha-\beta=182.5^{\circ}-30^{\circ}=152.5^{\circ}$.
1,893,955
> > What is the angle between the hour and minute hands of a clock at 6:05? > > > I have tried this Hour Hand: 12 hour = 360° 1 hr = 30° Total Hour above the Clock is $\frac{73}2$ hours In Minute Hand: 1 Hour = 360° 1 minutes = 6° Total Minutes covered by $6\times 5= 30$ $\frac{73}{2} \cdot30-30=345^\circ$ Is it Correct?
2016/08/16
[ "https://math.stackexchange.com/questions/1893955", "https://math.stackexchange.com", "https://math.stackexchange.com/users/359877/" ]
Think of it this way: five minutes after six, the minute hand is $\frac1{12}$ of the circle ahead from 12, while the hour hand has advanced $\frac1{12}$ of the way towards 7 from 6, or $\frac1{144}$ of the circle ahead of 6. The initial angle between the two hands is $\frac12$ of the circle, so the solution is $$\left(\frac12-\frac1{12}+\frac1{144}\right)\cdot360^\circ=\frac{61}{144}\cdot360^\circ=152.5^\circ$$
Let $\alpha$ be the angle in degrees of the hour hand, measured with reference to $12$ and $\beta$ be the angle in degrees of the minute hand measured with reference to $12$. At 6:05, the minute hand has moved $\frac{1}{12}$ of the way around the clock. Thus, $\beta=\frac{360^{\circ}}{12}=30^{\circ}$. The hour hand has moved $\frac{1}{12}$ of the way from $6$ to $7$. In other words, $\frac{1}{12}\cdot\frac{1}{12}=\frac{1}{144}$ of the way from the $6$. Thus, $\alpha=180+\frac{360}{144}=182.5^{\circ}$. Therefore, the angle between the two hands is $\alpha-\beta=182.5^{\circ}-30^{\circ}=152.5^{\circ}$.
1,893,955
> > What is the angle between the hour and minute hands of a clock at 6:05? > > > I have tried this Hour Hand: 12 hour = 360° 1 hr = 30° Total Hour above the Clock is $\frac{73}2$ hours In Minute Hand: 1 Hour = 360° 1 minutes = 6° Total Minutes covered by $6\times 5= 30$ $\frac{73}{2} \cdot30-30=345^\circ$ Is it Correct?
2016/08/16
[ "https://math.stackexchange.com/questions/1893955", "https://math.stackexchange.com", "https://math.stackexchange.com/users/359877/" ]
> > Here is how much the hour hand travels per hour, minute, second: > > > * Per Hour: > $$\text{H}=\frac{360^{\circ}}{12\space\text{hours}}=30^{\circ}\text{/}\space\text{hour}$$ > * Per Minute: > $$\text{M}=\frac{\text{H}^{\circ}}{60\space\text{minutes}}=\left(\frac{1}{2}\right)^{\circ}\text{/}\space\text{minute}$$ > * Per Second: > $$\text{S}=\frac{\text{M}^{\circ}}{60\space\text{seconds}}=\left(\frac{1}{120}\right)^{\circ}\text{/}\space\text{second}$$ > > > --- So, when it is $6:05$, we get: $$6\cdot30^{\circ}+5\cdot\left(\frac{1}{2}\right)^{\circ}=182.5^{\circ}$$ But, for the 'minute hand' we got $30^{\circ}$ too much, so: $$\text{angle}=182.5^{\circ}-30^{\circ}=152.5^{\circ}$$
The hours hand revolves $360°/(12\cdot60)=0.5°$ per minute and the minutes hand $360°/60=6°$ per minute, so that the angle increases by $5.5°$ per minute. Hence working modulo $360°$, $$(6\cdot60+5)\cdot5.5°=2007.5°\equiv-152.5°=-152°30'.$$ The negative sign is because the hours hand is ahead.
1,893,955
> > What is the angle between the hour and minute hands of a clock at 6:05? > > > I have tried this Hour Hand: 12 hour = 360° 1 hr = 30° Total Hour above the Clock is $\frac{73}2$ hours In Minute Hand: 1 Hour = 360° 1 minutes = 6° Total Minutes covered by $6\times 5= 30$ $\frac{73}{2} \cdot30-30=345^\circ$ Is it Correct?
2016/08/16
[ "https://math.stackexchange.com/questions/1893955", "https://math.stackexchange.com", "https://math.stackexchange.com/users/359877/" ]
Your approach is correct, but the hour is $6+\frac{5}{60}=\frac{73}{12}$, not $\frac{73}{2}$. This yields an angle of $\frac{73}{12}\cdot30^\circ$ for the hour hand, and $6^\circ\cdot 5=30^\circ$ for the minute hand. Thus the end result is $\frac{73}{12}\cdot30^\circ-30^\circ=152.5^\circ$
The time in minutes (let's ignore the seconds, as there are none here) is 6\*60 + 5 = 365 minutes. The angle for a hand is `360° * time / (time per 360°)` The angle for the hour hand is ``` 360° * [365 / (12*60)] = 182.5 ``` The angle for the minute hand is ``` (360° * 365 / 60) % 360 = 30 ``` The angle between the 2 is 152.5°
1,893,955
> > What is the angle between the hour and minute hands of a clock at 6:05? > > > I have tried this Hour Hand: 12 hour = 360° 1 hr = 30° Total Hour above the Clock is $\frac{73}2$ hours In Minute Hand: 1 Hour = 360° 1 minutes = 6° Total Minutes covered by $6\times 5= 30$ $\frac{73}{2} \cdot30-30=345^\circ$ Is it Correct?
2016/08/16
[ "https://math.stackexchange.com/questions/1893955", "https://math.stackexchange.com", "https://math.stackexchange.com/users/359877/" ]
The hours hand revolves $360°/(12\cdot60)=0.5°$ per minute and the minutes hand $360°/60=6°$ per minute, so that the angle increases by $5.5°$ per minute. Hence working modulo $360°$, $$(6\cdot60+5)\cdot5.5°=2007.5°\equiv-152.5°=-152°30'.$$ The negative sign is because the hours hand is ahead.
The time in minutes (let's ignore the seconds, as there are none here) is 6\*60 + 5 = 365 minutes. The angle for a hand is `360° * time / (time per 360°)` The angle for the hour hand is ``` 360° * [365 / (12*60)] = 182.5 ``` The angle for the minute hand is ``` (360° * 365 / 60) % 360 = 30 ``` The angle between the 2 is 152.5°