| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
88,134
|
I am a total guitar beginner trying to learn metal riffs and songs like Slipknot. I have a Squier Strat starter pack that has an SSS pickup config.
Will it sound more metal if I get a different guitar, maybe one with high-gain humbuckers, or if I use the Strat with the proper settings on something like a Katana Mini amp?
Also, I can't play too loud because I might disturb my neighbors.
|
2019/08/27
|
[
"https://music.stackexchange.com/questions/88134",
"https://music.stackexchange.com",
"https://music.stackexchange.com/users/62376/"
] |
Ever heard the old joke about the farmer being asked how to get to a nearby famous landmark? There are a hundred versions of it, but there's one commonality...
His reply is always, "Well, you don't want to start from here…"
---
The thing about GAS\* is that it's the wrong place to start.
You have a Strat - live with it for now. By the time you get 'good', you'll know **why** you might need another guitar & what that guitar ought to be.
In the meantime, learn how to play what you have.
Half of any guitarist's sound is **the guitarist**, not the gear he uses.
So you need to learn to walk before you can run.
As for an amp - you know you can get an app for your phone for **free** that will be as good as anything with its own speakers that costs less than a few hundred bucks/quid/shekels. Here's one for starters - [IK Multimedia's AmpliTube](https://www.ikmultimedia.com/products/index.php?R=INIT&FV=product-type-menu-apps&CV=Product&PSEL=Apps) in its many forms.
Now, to go this route, you will need an [interface](https://www.ikmultimedia.com/products/index.php?R=INIT&FV=product-type-menu-interfaces&CV=Product&PSEL=Interfaces), starting at around 40 bucks, but you already saved the other 40 on the mini amp & you're in no danger of irritating your neighbours.
The last thing… is that you can't properly get that 'metal' sound at low volume no matter what gear you own, because to do it 'properly' there needs to be acoustic feedback between amp & guitar. So, in the meantime, use a modelling amp that will at least go some way towards the sound.
I've done entire commercial albums using modelling amps - they do not suck.
Slam it through enough modelled valve distortion - even with a fuzz box in front - and you're on your way.
Keep it up & never look back.
btw, I'm not even going to start the list of very very serious guitarists who are famous for playing Strats. That would be mean ;)
\*Gear Acquisition Syndrome - everybody suffers from it at some point in their lives. There's nothing worse than a bad case of GAS.
|
A Squier Strat is probably fine as a starter guitar for just about any style of music you want to play (including metal). I would make sure that it is properly intonated and has a reasonable, buzz-free action to begin with.
|
21,422,234
|
I want to calculate the distance the user has walked, no matter how little, during an activity (such as running) using GPS. Even if GPS is inaccurate, I want a function that can calculate the distance accurately.
In my code below, when I comment out my `onLocationChanged` code, my app does not crash, but the two TextViews and the Toast do not display the current location.
When I include my `onLocationChanged` code, the app crashes a while after starting. I could not figure out the error.
Could someone tell me whether or not the way I am using `Location.distanceBetween` is correct? I feel something is wrong but do not know what.
Updated Java code:
```
public class MainActivity extends Activity {
    protected LocationManager locationManager;
    EditText userNumberInput;
    EditText userTextInput;
    TextView distanceText;
    TextView latitude;
    TextView longitude;
    double lat1, lon1, lat2, lon2, lat3, lon3, lat4, lon4;
    long dist;
    float[] result;
    private static final long MINIMUM_DISTANCE_CHANGE_FOR_UPDATES = 0; // in meters
    private static final long MINIMUM_TIME_BETWEEN_UPDATES = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        distanceText = (TextView) findViewById(R.id.Distance);
        latitude = (TextView) findViewById(R.id.currentLat);
        longitude = (TextView) findViewById(R.id.currentLon);
        locationManager = (LocationManager) getSystemService(LOCATION_SERVICE);
        locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER,
                MINIMUM_TIME_BETWEEN_UPDATES, MINIMUM_DISTANCE_CHANGE_FOR_UPDATES, myLocationListener);
        Log.d("GPS Enabled", "GPS Enabled");
        Criteria criteria = new Criteria();
        criteria.setAccuracy(Criteria.ACCURACY_FINE);
        String provider = locationManager.getBestProvider(criteria, true);
        Location location = locationManager.getLastKnownLocation(provider);
        // Display current location in a Toast
        String message = String.format(
                "Current Location \n Longitude: %1$s \n Latitude: %2$s",
                location.getLongitude(), location.getLatitude()
        );
        Toast.makeText(MainActivity.this, message, Toast.LENGTH_LONG).show();
        // Display current location in the TextViews
        latitude.setText("Current Latitude: " + String.valueOf(location.getLatitude()));
        longitude.setText("Current Longitude: " + String.valueOf(location.getLongitude()));
        lat1 = location.getLatitude();
        lon1 = location.getLongitude();
        lat3 = 123.0;
        lon3 = 234.0;
        lat4 = 123.0;
        lon4 = 224.0;
        float[] results = new float[1];
        Location.distanceBetween(lat3, lon3, lat4, lon4, results);
        System.out.println("Distance is: " + results[0]);
    }

    private CharSequence ToString(double latitude2) {
        // TODO Auto-generated method stub
        return null;
    }

    LocationListener myLocationListener = new LocationListener() {
        public void onLocationChanged(Location loc2) {
        }

        public void onProviderDisabled(String provider) {
            Toast.makeText(MainActivity.this, "GPS is disabled", Toast.LENGTH_LONG).show();
        }

        public void onProviderEnabled(String provider) {
            Toast.makeText(MainActivity.this, "GPS is enabled", Toast.LENGTH_LONG).show();
        }

        public void onStatusChanged(String provider, int status, Bundle extras) {
            Toast.makeText(MainActivity.this, "GPS status changed", Toast.LENGTH_LONG).show();
        }
    };

    @Override
    protected void onPause() {
        super.onPause();
        locationManager.removeUpdates(myLocationListener);
    }

    @Override
    protected void onResume() {
        super.onResume();
        locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER,
                MINIMUM_TIME_BETWEEN_UPDATES, MINIMUM_DISTANCE_CHANGE_FOR_UPDATES, myLocationListener);
    }
}
```
The logcat:
```
01-29 15:37:24.070: I/Process(20836): Sending signal. PID: 20836 SIG: 9
01-29 15:37:38.305: D/GPS Enabled(24434): GPS Enabled
01-29 15:37:38.305: D/AndroidRuntime(24434): Shutting down VM
01-29 15:37:38.305: W/dalvikvm(24434): threadid=1: thread exiting with uncaught exception (group=0x40fe32a0)
01-29 15:37:38.305: E/AndroidRuntime(24434): FATAL EXCEPTION: main
01-29 15:37:38.305: E/AndroidRuntime(24434): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.validationapp/com.example.validationapp.MainActivity}: java.lang.NullPointerException
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2100)
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2125)
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.app.ActivityThread.access$600(ActivityThread.java:140)
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1227)
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.os.Handler.dispatchMessage(Handler.java:99)
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.os.Looper.loop(Looper.java:137)
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.app.ActivityThread.main(ActivityThread.java:4898)
01-29 15:37:38.305: E/AndroidRuntime(24434): at java.lang.reflect.Method.invokeNative(Native Method)
01-29 15:37:38.305: E/AndroidRuntime(24434): at java.lang.reflect.Method.invoke(Method.java:511)
01-29 15:37:38.305: E/AndroidRuntime(24434): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1006)
01-29 15:37:38.305: E/AndroidRuntime(24434): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:773)
01-29 15:37:38.305: E/AndroidRuntime(24434): at dalvik.system.NativeStart.main(Native Method)
01-29 15:37:38.305: E/AndroidRuntime(24434): Caused by: java.lang.NullPointerException
01-29 15:37:38.305: E/AndroidRuntime(24434): at com.example.validationapp.MainActivity.onCreate(MainActivity.java:57)
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.app.Activity.performCreate(Activity.java:5206)
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1083)
01-29 15:37:38.305: E/AndroidRuntime(24434): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2064)
01-29 15:37:38.305: E/AndroidRuntime(24434): ... 11 more
01-29 15:38:51.680: D/GPS Enabled(28843): GPS Enabled
01-29 15:38:51.685: D/AndroidRuntime(28843): Shutting down VM
01-29 15:38:51.685: W/dalvikvm(28843): threadid=1: thread exiting with uncaught exception (group=0x40fe32a0)
01-29 15:38:51.685: E/AndroidRuntime(28843): FATAL EXCEPTION: main
01-29 15:38:51.685: E/AndroidRuntime(28843): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.validationapp/com.example.validationapp.MainActivity}: java.lang.NullPointerException
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2100)
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2125)
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.app.ActivityThread.access$600(ActivityThread.java:140)
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1227)
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.os.Handler.dispatchMessage(Handler.java:99)
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.os.Looper.loop(Looper.java:137)
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.app.ActivityThread.main(ActivityThread.java:4898)
01-29 15:38:51.685: E/AndroidRuntime(28843): at java.lang.reflect.Method.invokeNative(Native Method)
01-29 15:38:51.685: E/AndroidRuntime(28843): at java.lang.reflect.Method.invoke(Method.java:511)
01-29 15:38:51.685: E/AndroidRuntime(28843): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1006)
01-29 15:38:51.685: E/AndroidRuntime(28843): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:773)
01-29 15:38:51.685: E/AndroidRuntime(28843): at dalvik.system.NativeStart.main(Native Method)
01-29 15:38:51.685: E/AndroidRuntime(28843): Caused by: java.lang.NullPointerException
01-29 15:38:51.685: E/AndroidRuntime(28843): at com.example.validationapp.MainActivity.onCreate(MainActivity.java:57)
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.app.Activity.performCreate(Activity.java:5206)
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1083)
01-29 15:38:51.685: E/AndroidRuntime(28843): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2064)
01-29 15:38:51.685: E/AndroidRuntime(28843): ... 11 more
```
I ran it again and it did not crash, but it also did not display any output. The logcat is:
```
01-29 16:22:12.795: D/GPS Enabled(28011): GPS Enabled
01-29 16:22:12.800: I/System.out(28011): Distance is: 607178.94
01-29 16:22:12.820: E/Dynamiclayout(28011): java.lang.IndexOutOfBoundsException: charAt: 0 >= length 0
01-29 16:22:12.820: E/Dynamiclayout(28011): java.lang.IndexOutOfBoundsException: charAt: 0 >= length 0
01-29 16:22:12.820: E/Dynamiclayout(28011): java.lang.IndexOutOfBoundsException: charAt: 0 >= length 0
01-29 16:22:12.915: E/Dynamiclayout(28011): java.lang.IndexOutOfBoundsException: charAt: 0 >= length 0
01-29 16:22:12.915: E/Dynamiclayout(28011): java.lang.IndexOutOfBoundsException: charAt: 0 >= length 0
01-29 16:22:12.920: D/SensorManager(28011): registerListener :: handle = 0 name= LSM330DLC 3-axis Accelerometer delay= 200000 Listener= android.view.OrientationEventListener$SensorEventListenerImpl@41c3a408
01-29 16:22:12.960: E/Dynamiclayout(28011): java.lang.IndexOutOfBoundsException: charAt: 0 >= length 0
01-29 16:22:12.960: E/Dynamiclayout(28011): java.lang.IndexOutOfBoundsException: charAt: 0 >= length 0
```
Whenever I type a letter into my EditText, the logcat keeps growing with lines like:
```
01-29 16:23:42.500: D/TextLayoutCache(28011): TextLayoutCache::replaceThai, prevBuffer[0] is 64
01-29 16:23:42.500: D/TextLayoutCache(28011): TextLayoutCache::replaceThai, prevBuffer[1] is 67
01-29 16:23:42.500: D/TextLayoutCache(28011): TextLayoutCache::replaceThai, prevBuffer[2] is 67
01-29 16:23:42.505: E/Dynamiclayout(28011): java.lang.IndexOutOfBoundsException: charAt: 0 >= length 0
01-29 16:23:42.505: E/Dynamiclayout(28011): java.lang.IndexOutOfBoundsException: charAt: 0 >= length 0
01-29 16:23:42.520: D/TextLayoutCache(28011): TextLayoutCache::replaceThai, prevBuffer[0] is 64
01-29 16:23:42.520: D/TextLayoutCache(28011): TextLayoutCache::replaceThai, prevBuffer[0] is 64
01-29 16:23:42.520: D/TextLayoutCache(28011): TextLayoutCache::replaceThai, prevBuffer[1] is 67
```
Each letter I type adds more of these lines, so I can't paste them all here.
|
2014/01/29
|
[
"https://Stackoverflow.com/questions/21422234",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3196446/"
] |
You can set up your `LocationManager` like this:
```
locationManager = (LocationManager) getSystemService(LOCATION_SERVICE);
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER,
        MIN_TIME_BW_UPDATES, MIN_DISTANCE_CHANGE_FOR_UPDATES, this);
Log.d("GPS Enabled", "GPS Enabled");
criteria.setAccuracy(Criteria.ACCURACY_FINE);
provider = locationManager.getBestProvider(criteria, true);
```
Now you can display the latitude and longitude like this:
```
latitude.setText(String.valueOf(location.getLatitude()));
longitude.setText(String.valueOf(location.getLongitude()));
```
Now, find the distance between two points like below:
```
float[] results = new float[1];
Location.distanceBetween(
        des_pos.latitude, des_pos.longitude,
        cur_pos.latitude, cur_pos.longitude, results);
System.out.println("Distance is: " + results[0]);
```
Try this
**Update:**
See the documentation: [More about Location.distanceBetween(..)](http://developer.android.com/reference/android/location/Location.html#distanceBetween%28double,%20double,%20double,%20double,%20float%5b%5d%29)
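For intuition about what this call computes: `Location.distanceBetween` returns the approximate distance in meters between two latitude/longitude points, computed on the WGS84 ellipsoid. A rough spherical (haversine) version of the same idea, sketched here in Python purely as an illustration - the function name and sample coordinates are mine, and this is an approximation, not Android's exact algorithm:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters on a spherical Earth."""
    r = 6371000.0  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km
print(round(haversine_m(0.0, 0.0, 1.0, 0.0)))  # → 111195
```

This also shows why latitudes like `123.0` in the question's test values are suspicious: valid latitudes lie in [-90, 90].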
**Update2:**
You just need to remove updates in your Activity's `onPause()`:
```
@Override
protected void onPause() {
    super.onPause();
    locationManager.removeUpdates(this);
}
```
and resume them in your Activity's `onResume()`:
```
@Override
protected void onResume() {
    super.onResume();
    locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER,
            MIN_TIME_BW_UPDATES, MIN_DISTANCE_CHANGE_FOR_UPDATES, this);
}
```
|
You can set the [accuracy](http://developer.android.com/reference/android/location/Criteria.html#ACCURACY_HIGH) in the [Criteria object](http://developer.android.com/reference/android/location/Criteria.html).
|
8,479,424
|
Is there a possibility to make the default input-values dependent on one of the input-values? I tried this:
```
def getHigh(pricedata, start=min(pricedata), end=max(pricedata)):
    ## do something
```
But it doesn't work, because `pricedata` is not yet defined.
|
2011/12/12
|
[
"https://Stackoverflow.com/questions/8479424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/655635/"
] |
Keep it simple :)
```
def getHigh(pricedata, start=None, end=None):
    start = min(pricedata) if start is None else start
    end = max(pricedata) if end is None else end
```
|
Two ways to do what you asked are:
```
def getHigh(pricedata, start=None, end=None):
    if start is None:
        start = min(pricedata)
    if end is None:
        end = max(pricedata)
```
and...
```
def getHigh(pricedata, **kwargs):
    start = kwargs.get("start", min(pricedata))
    end = kwargs.get("end", max(pricedata))
```
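As an illustrative, hypothetical completion of the `None`-sentinel pattern (the function body and the sample price data are my own additions; the original question leaves the body as `## do something`):

```python
def getHigh(pricedata, start=None, end=None):
    # Defaults are resolved at call time from pricedata itself,
    # which an ordinary default expression cannot do.
    if start is None:
        start = min(pricedata)
    if end is None:
        end = max(pricedata)
    # Hypothetical body: the highest price within [start, end]
    return max(p for p in pricedata if start <= p <= end)

print(getHigh([3, 9, 4, 7]))         # 9 (full range)
print(getHigh([3, 9, 4, 7], end=8))  # 7 (9 is filtered out)
```

One caveat about the `**kwargs` variant: `dict.get` evaluates its default eagerly, so `min(pricedata)` and `max(pricedata)` are computed even when both bounds are supplied; the `None`-check version avoids that cost.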
|
8,479,424
|
Is there a possibility to make the default input-values dependent on one of the input-values? I tried this:
```
def getHigh(pricedata, start=min(pricedata), end=max(pricedata)):
    ## do something
```
But it doesn't work, because `pricedata` is not yet defined.
|
2011/12/12
|
[
"https://Stackoverflow.com/questions/8479424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/655635/"
] |
Keep it simple :)
```
def getHigh(pricedata, start=None, end=None):
    start = min(pricedata) if start is None else start
    end = max(pricedata) if end is None else end
```
|
In the general case, you can do this using `None` and testing for it.
```
def getHigh(pricedata, start=None, end=None):
    start = min(pricedata) if start is None else start
    end = max(pricedata) if end is None else end
```
In the particular case where you're setting a range and will be processing only items in that range, you can use defaults that include all possible values:
```
def getHigh(pricedata, start=float("-inf"), end=float("+inf")):
    # consider only items between start and end, inclusive
    pricedata = [datum for datum in pricedata if start <= datum <= end]
```
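A quick sketch of how the infinite-bound defaults behave in practice (the sample data and the `max` return value are my additions, purely to make the snippet runnable):

```python
def getHigh(pricedata, start=float("-inf"), end=float("+inf")):
    # With infinite bounds as defaults, one comparison handles both
    # the bounded and unbounded cases -- no sentinel checks needed.
    in_range = [datum for datum in pricedata if start <= datum <= end]
    return max(in_range)

print(getHigh([3, 9, 4, 7]))         # 9
print(getHigh([3, 9, 4, 7], end=8))  # 7
```

Note this trick only works when the items are numeric and comparable with infinities; for arbitrary comparable data, the `None`-sentinel approach is the more general one.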
|
103,723
|
I've seen similar questions about Pyrex, but as I understand it Pyrex manufactured in the United States is now made from [tempered soda-lime glass](https://icedteapitcher.myshopify.com/#:~:targetText=Unfortunately%20when%20Corning%2C%20Inc.,continues%20to%20use%20Borosilicate%20Glass.), so I am not asking about that.
**UPDATE:**
To address a similar question: the earlier [question on Pyrex](https://cooking.stackexchange.com/questions/17970/is-pyrex-safe-to-use-on-a-gas-burner) was, first of all, a generic question about Pyrex (not just borosilicate Pyrex); its answer was open-ended, suggesting personal testing on borosilicate, which doesn't answer my specific question; and it attracted broader answers (many completely oblivious to the materials used). I'm really looking for a specific answer from personal experience with borosilicate.
**The Question:**
I specifically would like to know if anyone has used a *(verified) borosilicate glass* pot on a gas burner stove top, and if it is safe for regular use, and by "regular use" I would specifically mean:
1. Is it safe to take a chilled pot from the refrigerator to a full open flame?
2. Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water splash)?
**OR** are there limiting factors, and if so, what are they?
I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan. I've done a decent amount of research, and according to the [stated temperature limits](https://glassshop.yale.edu/helpful-glass-facts), it would seem that it might be okay, but I've not seen enough information on the effects of rapid temperature change. So I'm really interested in actual experiences.
I once spat on a hot incandescent light bulb, and I just want to make sure this pot won't explode the way that light bulb did.
|
2019/11/26
|
[
"https://cooking.stackexchange.com/questions/103723",
"https://cooking.stackexchange.com",
"https://cooking.stackexchange.com/users/69108/"
] |
>
> I specifically would like to know if anyone has used a (verified)
> borosilicate glass pot on a gas burner stove top
>
>
>
I have used borosilicate glass vessels on a number of different heat sources, both in laboratory circumstances and on a standard home gas stove.
>
> and if it is safe for regular use, and by "regular use" I would
> specifically mean:
>
>
> 1. Is it safe to take a chilled pot from the refrigerator to a full open flame?
> 2. Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water
> splash)?
>
>
>
Generally speaking, no. I mean, it depends on what you mean by "safe" -- borosilicate glass is less likely to shatter in explosive ways compared to some other types of glass. Glass vessels designed specifically for cooking are probably even less likely to fail in a way that is *dangerous*.
But will the vessel survive doing such things repeatedly over an extended period? Probably not. It may eventually crack. Depending on how extreme a temperature difference it is exposed to, it may even break more violently. But most likely, cooking vessels like this will simply fail by developing a crack (which will then leak their contents).
>
> OR are there limiting factors, and if so, what are they?
>
>
>
I would consult the specific documentation that comes with any glass cooking vessel for guidelines appropriate to it. (And in case this isn't clear, I would NOT recommend using borosilicate glass vessels on the stove top unless they are specifically designed for such use. Even lab glassware will eventually crack if not used over an even heat source, which is why labs tend to use various heat diffusion devices when employing an open flame on a glass vessel.)
For example, the Pyrex company had a set of glass pots for stove top use called Flameware, which was discontinued (I believe) in the late 1970s. Instructions can be found [here](https://archive.lib.msu.edu/DMC/sliker/msuspcsbs_pyre_corninggla10/msuspcsbs_pyre_corninggla10.pdf), which state (among other things):
>
> Heat uniformly, using low to moderate flame [...] Do not heat when dry
> or cook to dryness. [...] Add water before food is put in dish and
> stir occasionally while boiling. [...] When dish is hot, avoid pouring
> in cold liquids. [...] Avoid setting dish on damp or cold surfaces.
>
>
>
All of these warnings have to do with avoiding stress created through rapid temperature changes, which will eventually cause a pot to fail. Note these are warnings for a line of Pyrex specifically designed for stove top cooking.
For a standard producer of borosilicate glass for stove top use today, see JENA's Trendglas, whose products here are [specifically designed for direct heat](http://www.trendglas-jena.com/en/quality-warranty):
>
> The very even wall thickness ensures outstanding thermodynamic
> qualities so that our glass products can be placed directly on heat
> sources. [...] Our glass products [...] show a high resistance to
> sudden temperature changes with a temperature difference of
> 140°C/284°F (three times as high as normal glass or lead crystal
> glass)
>
>
>
These glass products have [instructions here](http://www.trendglas-jena.com/shop_ordered/55862/pic/GA2018web.pdf). And that document includes similar warnings to the Pyrex products discussed above:
>
> * [...] Always use evenly distributed heat and never heat your Trendglas JENA product empty. The heat source and the glass shall be
> heated together.
> * Avoid sudden cooling, don't place the hot glass under cold water or on wet hot pads. [...]
> * Only heat water-containing foods on the stove, never use it for heating solid foods or oily liquids.
> * Make sure that the hob is not smaller than the bottom of the glass product. [...] When using a gas stove, we recommend using a gas stove
> grate.
>
>
>
Basically, glass is a substance that can crack from thermal shock. No amount of engineering is going to make a glass vessel as durable and resistant to thermal shock as a metal pot. That said, scientists use borosilicate glass all the time in complex procedures involving heating and cooling, and exploding glassware is rare unless you do something very stupid. But cracks and failures of glassware that is used over and over for applications involving thermal stress and rapid temperature changes are also to be expected.
If you follow the manufacturer's instructions for your pot and take advice like that listed above (avoiding thermal stress, heating gradually, etc.), some glass pots can last for many years or decades. And you may get away with occasionally forgetting and making a mistake. But even if cracks don't appear immediately, those stresses can build up over time and eventually cause failure.
---
**EDIT:** After reading over some of the comments that have appeared, I wanted to incorporate some ideas and re-emphasize a point I stated above: **I would NOT recommend using borosilicate glass vessels on the stove top *unless they are specifically designed for such use*.**
Some of the answers on previous questions about borosilicate versions of Pyrex (still to be found, particularly outside the U.S.) say it may be possible to attempt to heat borosilicate containers (Pyrex or otherwise) on the stove top, but I would strongly advise against this unless it's actually a vessel labeled for stove top use or described in its instructions as appropriate for stove top use.
It's not just borosilicate glass itself that makes a pot or other vessel safe for the stove top. Glass intended for use on the stove top usually has higher standards for production. It is often more even and thinner (like lab glassware) than some other types of glass bakeware and kitchenware, as thicker glass will have a larger thermal gradient between the surface and interior that can make the glass vulnerable to fracture. It may employ a somewhat different formulation of ingredients in the glass (to respond better to thermal stress, rather than to emphasize durability). It may also have specific design features in terms of its shape and variance in thickness to account for the type of expansion that occurs during stovetop heating (i.e., heating from the bottom) vs. bakeware that will usually expand more at the top (because food inside will be cooler in the initial stages of baking, while the top of a glass baking dish may get hotter without food in direct contact with it). And I'm sure there are other design features that make glass safer for stove top use.
User SiHa linked a [video](https://www.youtube.com/watch?v=xbuvcQrAOSk) showing what can happen if borosilicate glassware is inappropriately heated. That measuring cup is specifically labeled as not for stove top use, and it's not designed for that sort of stress. A measuring cup is probably only expected to take the variation in temperature caused by things like introducing boiling water, not an open flame applied to it. But also note that the catastrophic failure in that video (and mild explosion) likely happens due to two things:
1. the uneven heating of a blowtorch, which already inflicts a huge amount of stress even without the introduction of water; the third cup breaks even before they apply water to it
2. the thickness of the cup, which is designed to be durable and to resist breaking when handled roughly or dropped, not for heating; the thickness allows more energy to build up in the cup before fracture, leading to the explosive burst, rather than the milder fracture that would likely occur in a thinner vessel
I don't mean to scare anyone too much here, only to emphasize the importance of only using glassware designed for direct heat on the stove top. As I mentioned in my original answer above, when borosilicate glassware designed for direct heating fails, it is more likely to crack (perhaps crack a *lot*, but still simply crack into pieces) if handled appropriately. That can create a mess, but it's usually not dangerous or explosive unless one does something stupid with it.
|
Chemex coffee pots are made of borosilicate glass, and the Chemex web site explicitly says that you can use a low gas flame to keep the coffee warm. But I would worry about drastic temperature changes like putting a hot pot under running water.
|
103,723
|
I've seen similar questions about Pyrex, but as I understand it Pyrex manufactured in the United States is now made from [tempered soda-lime glass](https://icedteapitcher.myshopify.com/#:~:targetText=Unfortunately%20when%20Corning%2C%20Inc.,continues%20to%20use%20Borosilicate%20Glass.), so I am not asking about that.
**UPDATE:**
To address a similar question: the earlier [question on Pyrex](https://cooking.stackexchange.com/questions/17970/is-pyrex-safe-to-use-on-a-gas-burner) was, first of all, a generic question about Pyrex (not just borosilicate Pyrex); its answer was open-ended, suggesting personal testing on borosilicate, which doesn't answer my specific question; and it attracted broader answers (many completely oblivious to the materials used). I'm really looking for a specific answer from personal experience with borosilicate.
**The Question:**
I specifically would like to know if anyone has used a *(verified) borosilicate glass* pot on a gas burner stove top, and if it is safe for regular use, and by "regular use" I would specifically mean:
1. Is it safe to take a chilled pot from the refrigerator to a full open flame?
2. Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water splash)?
**OR** are there limiting factors, and if so, what are they?
I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan. I've done a decent amount of research, and according to the [stated temperature limits](https://glassshop.yale.edu/helpful-glass-facts), it would seem that it might be okay, but I've not seen enough information on the effects of rapid temperature change. So I'm really interested in actual experiences.
I once spat on a hot incandescent light bulb, and I just want to make sure this pot won't explode the way that light bulb did.
|
2019/11/26
|
[
"https://cooking.stackexchange.com/questions/103723",
"https://cooking.stackexchange.com",
"https://cooking.stackexchange.com/users/69108/"
] |
>
> I specifically would like to know if anyone has used a (verified)
> borosilicate glass pot on a gas burner stove top
>
>
>
I have used borosilicate glass vessels on a number of different heat sources, both in laboratory circumstances and on a standard home gas stove.
>
> and if it is safe for regular use, and by "regular use" I would
> specifically mean:
>
>
> 1. Is it safe to take a chilled pot from the refrigerator to a full open flame?
> 2. Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water
> splash)?
>
>
>
Generally speaking, no. I mean, it depends on what you mean by "safe" -- borosilicate glass is less likely to shatter in explosive ways compared to some other types of glass. Glass vessels designed specifically for cooking are probably even less likely to fail in a way that is *dangerous*.
But will the vessel survive doing such things repeatedly over an extended period? Probably not. It may eventually crack. Depending on how extreme a temperature difference it is exposed to, it may even break more violently. But most likely, cooking vessels like this will simply fail by developing a crack (which will then leak their contents).
>
> OR are there limiting factors, and if so, what are they?
>
>
>
I would consult the specific documentation that comes with any glass cooking vessel for guidelines appropriate to it. (And in case this isn't clear, I would NOT recommend using borosilicate glass vessels on the stove top unless they are specifically designed for such use. Even lab glassware will eventually crack if not used over an even heat source, which is why labs tend to use various heat diffusion devices when employing an open flame on a glass vessel.)
For example, the Pyrex company had a set of glass pots for stove top use called Flameware, which was discontinued (I believe) in the late 1970s. Instructions can be found [here](https://archive.lib.msu.edu/DMC/sliker/msuspcsbs_pyre_corninggla10/msuspcsbs_pyre_corninggla10.pdf), which state (among other things):
>
> Heat uniformly, using low to moderate flame [...] Do not heat when dry
> or cook to dryness. [...] Add water before food is put in dish and
> stir occasionally while boiling. [...] When dish is hot, avoid pouring
> in cold liquids. [...] Avoid setting dish on damp or cold surfaces.
>
>
>
All of these warnings have to do with avoiding stress created through rapid temperature changes, which will eventually cause a pot to fail. Note these are warnings for a line of Pyrex specifically designed for stove top cooking.
For a standard producer of borosilicate glass for stove top use today, see JENA's Trendglas, whose products here are [specifically designed for direct heat](http://www.trendglas-jena.com/en/quality-warranty):
>
> The very even wall thickness ensures outstanding thermodynamic
> qualities so that our glass products can be placed directly on heat
> sources. [...] Our glass products [...] show a high resistance to
> sudden temperature changes with a temperature difference of
> 140°C/284°F (three times as high as normal glass or lead crystal
> glass)
>
>
>
These glass products have [instructions here](http://www.trendglas-jena.com/shop_ordered/55862/pic/GA2018web.pdf). And that document includes similar warnings to the Pyrex products discussed above:
>
> * [...] Always use evenly distributed heat and never heat your Trendglas JENA product empty. The heat source and the glass shall be
> heated together.
> * Avoid sudden cooling, don't place the hot glass under cold water or on wet hot pads. [...]
> * Only heat water-containing foods on the stove, never use it for heating solid foods or oily liquids.
> * Make sure that the hob is not smaller than the bottom of the glass product. [...] When using a gas stove, we recommend using a gas stove
> grate.
>
>
>
Basically, glass is a substance that can crack from thermal shock. No amount of engineering is going to make a glass vessel as durable and resistant to thermal shock as a metal pot. That said, scientists use borosilicate glass all the time in complex procedures involving heating and cooling, and exploding glassware is rare unless you do something very stupid. But cracks and failures of glassware that is used over and over for applications involving thermal stress and rapid temperature changes are also to be expected.
If you follow the manufacturer's instructions for your pot and take advice like that listed above (avoiding thermal stress, heating gradually, etc.), some glass pots can last for many years or decades. And you may get away with occasionally forgetting and making a mistake. But even if cracks don't appear immediately, those stresses can build up over time and eventually cause failure.
---
**EDIT:** After reading over some of the comments that have appeared, I wanted to incorporate some ideas and re-emphasize a point I stated above: **I would NOT recommend using borosilicate glass vessels on the stove top *unless they are specifically designed for such use*.**
Some of the answers on previous questions about borosilicate versions of Pyrex (still to be found, particularly outside the U.S.) say it may be possible to attempt to heat borosilicate containers (Pyrex or otherwise) on the stove top, but I would strongly advise against this unless it's actually a vessel labeled for stove top use or described in its instructions as appropriate for stove top use.
It's not just borosilicate glass itself that makes a pot or other vessel safe for the stove top. Glass intended for use on the stove top usually has higher standards for production. It is often more even and thinner (like lab glassware) than some other types of glass bakeware and kitchenware, as thicker glass will have a larger thermal gradient between the surface and interior that can make the glass vulnerable to fracture. It may employ a somewhat different formulation of ingredients in the glass (to respond better to thermal stress, rather than to emphasize durability). It may also have specific design features in terms of its shape and variance in thickness to account for the type of expansion that occurs during stovetop heating (i.e., heating from the bottom) vs. bakeware that will usually expand more at the top (because food inside will be cooler in the initial stages of baking, while the top of a glass baking dish may get hotter without food in direct contact with it). And I'm sure there are other design features that make glass safer for stove top use.
User SiHa linked a [video](https://www.youtube.com/watch?v=xbuvcQrAOSk) showing what can happen if borosilicate glassware is inappropriately heated. That measuring cup is specifically labeled as not for stove top use, and it's not designed for that sort of stress. A measuring cup is probably only expected to take the variation in temperature caused by things like introducing boiling water, not an open flame applied to it. But also note that the catastrophic failure in that video (and mild explosion) likely happens due to two things:
1. the uneven heating of a blowtorch, which already inflicts a huge amount of stress even without the introduction of water; the third cup breaks even before they apply water to it
2. the thickness of the cup, which is designed to be durable and to resist breaking when handled roughly or dropped, not for heating; the thickness allows more energy to build up in the cup before fracture, leading to the explosive burst rather than the milder fracture that would likely occur in a thinner vessel
I don't mean to scare anyone too much here, only to emphasize the importance of only using glassware designed for direct heat on the stove top. As I mentioned in my original answer above, when borosilicate glassware designed for direct heating fails, it is more likely to crack (perhaps crack a *lot*, but still simply crack into pieces) if handled appropriately. That can create a mess, but it's usually not dangerous or explosive unless one does something stupid with it.
|
I would be thinking about the worst case failure mode, however unlikely. The pot shatters, the contents are no longer supported by the sides of the pot.
Could that result in the cook being seriously burned? If it's food that is liquid, such as soup, the answer is yes. If it's basically solid, no.
If the pot is in an oven or microwave, that provides some containment. A big mess, but not a pint of boiling soup headed for the cook's groin.
|
103,723
|
I've seen similar questions about Pyrex, but as I understand it Pyrex manufactured in the United States is now made from [tempered soda-lime glass](https://icedteapitcher.myshopify.com/#:~:targetText=Unfortunately%20when%20Corning%2C%20Inc.,continues%20to%20use%20Borosilicate%20Glass.), so I am not asking about that.
**UPDATE:**
To address a similar question: the linked [question on Pyrex](https://cooking.stackexchange.com/questions/17970/is-pyrex-safe-to-use-on-a-gas-burner) was, first of all, a generic question about Pyrex (not just borosilicate Pyrex); it left an open-ended answer suggesting personal testing on borosilicate, which doesn't answer my specific question, and it attracted broader answers (many completely oblivious to the materials used). I'm really looking for a specific answer from personal experience with borosilicate.
**The Question:**
I specifically would like to know if anyone has used a *(verified) borosilicate glass* pot on a gas burner stove top, and if it is safe for regular use, and by "regular use" I would specifically mean:
1. Is it safe to take a chilled pot from the refrigerator to a full open flame?
2. Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water splash)?
**OR** are there limiting factors, and if so, what are they?
I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan. I've done a decent amount of research, and according to the [stated temperature limits](https://glassshop.yale.edu/helpful-glass-facts), it would seem that it might be okay, but I've not seen enough information on the effects of rapid temperature change. So I'm really interested in actual experiences.
I once spat on a hot incandescent light bulb, and I just want to make sure this pot won't explode the way that light bulb did.
|
2019/11/26
|
[
"https://cooking.stackexchange.com/questions/103723",
"https://cooking.stackexchange.com",
"https://cooking.stackexchange.com/users/69108/"
] |
>
> I specifically would like to know if anyone has used a (verified)
> borosilicate glass pot on a gas burner stove top
>
>
>
I have used borosilicate glass vessels on a number of different heat sources, both in laboratory circumstances and on a standard home gas stove.
>
> and if it is safe for regular use, and by "regular use" I would
> specifically mean:
>
>
> 1. Is it safe to take a chilled pot from the refrigerator to a full open flame?
> 2. Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water
> splash)?
>
>
>
Generally speaking, no. I mean, it depends on what you mean by "safe" -- borosilicate glass is less likely to shatter in explosive ways compared to some other types of glass. Glass vessels designed specifically for cooking are probably even less likely to fail in a way that is *dangerous*.
But will the vessel survive doing such things repeatedly over an extended period? Probably not. It may eventually crack. Depending on how extreme a temperature difference it is exposed to, it may even break more violently. But most likely, cooking vessels like this will simply fail by developing a crack (which will then leak contents).
>
> OR are there limiting factors, and if so, what are they?
>
>
>
I would consult the specific documentation that comes with any glass cooking vessel for guidelines appropriate to it. (And in case this isn't clear, I would NOT recommend using borosilicate glass vessels on the stove top unless they are specifically designed for such use. Even lab glassware will eventually crack if not used over an even heat source, which is why labs tend to use various heat diffusion devices when employing an open flame on a glass vessel.)
For example, the Pyrex company had a set of glass pots for stove top use called Flameware, which was discontinued (I believe) in the late 1970s. Instructions can be found [here](https://archive.lib.msu.edu/DMC/sliker/msuspcsbs_pyre_corninggla10/msuspcsbs_pyre_corninggla10.pdf), which state (among other things):
>
> Heat uniformly, using low to moderate flame [...] Do not heat when dry
> or cook to dryness. [...] Add water before food is put in dish and
> stir occasionally while boiling. [...] When dish is hot, avoid pouring
> in cold liquids. [...] Avoid setting dish on damp or cold surfaces.
>
>
>
All of these warnings have to do with avoiding stress created through rapid temperature changes, which will eventually cause a pot to fail. Note these are warnings for a line of Pyrex specifically designed for stove top cooking.
For a standard producer of borosilicate glass for stove top use today, see JENA's Trendglas, whose products here are [specifically designed for direct heat](http://www.trendglas-jena.com/en/quality-warranty):
>
> The very even wall thickness ensures outstanding thermodynamic
> qualities so that our glass products can be placed directly on heat
> sources. [...] Our glass products [...] show a high resistance to
> sudden temperature changes with a temperature difference of
> 140°C/284°F (three times as high as normal glass or lead crystal
> glass)
>
>
>
These glass products have [instructions here](http://www.trendglas-jena.com/shop_ordered/55862/pic/GA2018web.pdf). And that document includes similar warnings to the Pyrex products discussed above:
>
> * [...] Always use evenly distributed heat and never heat your Trendglas JENA product empty. The heat source and the glass shall be
> heated together.
> * Avoid sudden cooling, don't place the hot glass under cold water or on wet hot pads. [...]
> * Only heat water-containing foods on the stove, never use it for heating solid foods or oily liquids.
> * Make sure that the hob is not smaller than the bottom of the glass product. [...] When using a gas stove, we recommend using a gas stove
> grate.
>
>
>
Basically, glass is a substance that can crack from thermal shock. No amount of engineering is going to make a glass vessel as durable and resistant to thermal shock as a metal pot. That said, scientists use borosilicate glass all the time in complex procedures involving heating and cooling, and exploding glassware is rare unless you do something very stupid. But cracks and failures of glassware that is used over and over for applications involving thermal stress and rapid temperature changes are also to be expected.
If you follow the manufacturer's instructions for your pot and take advice like that listed above (avoiding thermal stress, heating gradually, etc.), some glass pots can last for many years or decades. And you may get away with occasionally forgetting and making a mistake. But even if cracks don't appear immediately, those stresses can build up over time and eventually cause failure.
---
**EDIT:** After reading over some of the comments that have appeared, I wanted to incorporate some ideas and re-emphasize a point I stated above: **I would NOT recommend using borosilicate glass vessels on the stove top *unless they are specifically designed for such use*.**
Some of the answers on previous questions about borosilicate versions of Pyrex (still to be found, particularly outside the U.S.) say it may be possible to attempt to heat borosilicate containers (Pyrex or otherwise) on the stove top, but I would strongly advise against this unless it's actually a vessel labeled for stove top use or described in its instructions as appropriate for stove top use.
It's not just borosilicate glass itself that makes a pot or other vessel safe for the stove top. Glass intended for use on the stove top usually has higher standards for production. It is often more even and thinner (like lab glassware) than some other types of glass bakeware and kitchenware, as thicker glass will have a larger thermal gradient between the surface and interior that can make the glass vulnerable to fracture. It may employ a somewhat different formulation of ingredients in the glass (to respond better to thermal stress, rather than to emphasize durability). It may also have specific design features in terms of its shape and variance in thickness to account for the type of expansion that occurs during stovetop heating (i.e., heating from the bottom) vs. bakeware that will usually expand more at the top (because food inside will be cooler in the initial stages of baking, while the top of a glass baking dish may get hotter without food in direct contact with it). And I'm sure there are other design features that make glass safer for stove top use.
User SiHa linked a [video](https://www.youtube.com/watch?v=xbuvcQrAOSk) showing what can happen if borosilicate glassware is inappropriately heated. That measuring cup is specifically labeled as not for stove top use, and it's not designed for that sort of stress. A measuring cup is probably only expected to take the variation in temperature caused by things like introducing boiling water, not an open flame applied to it. But also note that the catastrophic failure in that video (and mild explosion) likely happens due to two things:
1. the uneven heating of a blowtorch, which already inflicts a huge amount of stress even without the introduction of water; the third cup breaks even before they apply water to it
2. the thickness of the cup, which is designed to be durable and to resist breaking when handled roughly or dropped, not for heating; the thickness allows more energy to build up in the cup before fracture, leading to the explosive burst rather than the milder fracture that would likely occur in a thinner vessel
I don't mean to scare anyone too much here, only to emphasize the importance of only using glassware designed for direct heat on the stove top. As I mentioned in my original answer above, when borosilicate glassware designed for direct heating fails, it is more likely to crack (perhaps crack a *lot*, but still simply crack into pieces) if handled appropriately. That can create a mess, but it's usually not dangerous or explosive unless one does something stupid with it.
|
**No.**
>
> Is it safe to take a chilled pot from the refrigerator to a full open flame? Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water splash)?
>
>
> ...
>
>
> I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan.
>
>
>
The answer to all of these questions is no; it is absolutely not safe. You would be playing Russian roulette: **the question is not if but when it will break** and leave you with a dangerous mess. The material was not designed to be used this way and cannot be treated like a metal pot. Yes, it's intended to be able to withstand high amounts of thermal shock, like being moved from a freezer to an oven, but open flames or other stovetop cooking methods are an entirely different matter.
>
> I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan. I've done a decent amount of research, and according to the stated temperature limits ...
>
>
>
The link shows you exactly why you cannot -- borosilicate should only be used at temperatures around 400 °C for short periods and begins softening at 800 °C. A gas burner can reach temperatures up to 2000 °C, and even the cooler parts of the flame are well above 800 °C.
|
103,723
|
I've seen similar questions about Pyrex, but as I understand it Pyrex manufactured in the United States is now made from [tempered soda-lime glass](https://icedteapitcher.myshopify.com/#:~:targetText=Unfortunately%20when%20Corning%2C%20Inc.,continues%20to%20use%20Borosilicate%20Glass.), so I am not asking about that.
**UPDATE:**
To address a similar question: the linked [question on Pyrex](https://cooking.stackexchange.com/questions/17970/is-pyrex-safe-to-use-on-a-gas-burner) was, first of all, a generic question about Pyrex (not just borosilicate Pyrex); it left an open-ended answer suggesting personal testing on borosilicate, which doesn't answer my specific question, and it attracted broader answers (many completely oblivious to the materials used). I'm really looking for a specific answer from personal experience with borosilicate.
**The Question:**
I specifically would like to know if anyone has used a *(verified) borosilicate glass* pot on a gas burner stove top, and if it is safe for regular use, and by "regular use" I would specifically mean:
1. Is it safe to take a chilled pot from the refrigerator to a full open flame?
2. Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water splash)?
**OR** are there limiting factors, and if so, what are they?
I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan. I've done a decent amount of research, and according to the [stated temperature limits](https://glassshop.yale.edu/helpful-glass-facts), it would seem that it might be okay, but I've not seen enough information on the effects of rapid temperature change. So I'm really interested in actual experiences.
I once spat on a hot incandescent light bulb, and I just want to make sure this pot won't explode the way that light bulb did.
|
2019/11/26
|
[
"https://cooking.stackexchange.com/questions/103723",
"https://cooking.stackexchange.com",
"https://cooking.stackexchange.com/users/69108/"
] |
**No.**
>
> Is it safe to take a chilled pot from the refrigerator to a full open flame? Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water splash)?
>
>
> ...
>
>
> I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan.
>
>
>
The answer to all of these questions is no; it is absolutely not safe. You would be playing Russian roulette: **the question is not if but when it will break** and leave you with a dangerous mess. The material was not designed to be used this way and cannot be treated like a metal pot. Yes, it's intended to be able to withstand high amounts of thermal shock, like being moved from a freezer to an oven, but open flames or other stovetop cooking methods are an entirely different matter.
>
> I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan. I've done a decent amount of research, and according to the stated temperature limits ...
>
>
>
The link shows you exactly why you cannot -- borosilicate should only be used at temperatures around 400 °C for short periods and begins softening at 800 °C. A gas burner can reach temperatures up to 2000 °C, and even the cooler parts of the flame are well above 800 °C.
|
Chemex coffee pots are made of borosilicate glass, and the Chemex web site explicitly says that you can use a low gas flame to keep the coffee warm. But I would worry about drastic temperature changes like putting a hot pot under running water.
|
103,723
|
I've seen similar questions about Pyrex, but as I understand it Pyrex manufactured in the United States is now made from [tempered soda-lime glass](https://icedteapitcher.myshopify.com/#:~:targetText=Unfortunately%20when%20Corning%2C%20Inc.,continues%20to%20use%20Borosilicate%20Glass.), so I am not asking about that.
**UPDATE:**
To address a similar question: the linked [question on Pyrex](https://cooking.stackexchange.com/questions/17970/is-pyrex-safe-to-use-on-a-gas-burner) was, first of all, a generic question about Pyrex (not just borosilicate Pyrex); it left an open-ended answer suggesting personal testing on borosilicate, which doesn't answer my specific question, and it attracted broader answers (many completely oblivious to the materials used). I'm really looking for a specific answer from personal experience with borosilicate.
**The Question:**
I specifically would like to know if anyone has used a *(verified) borosilicate glass* pot on a gas burner stove top, and if it is safe for regular use, and by "regular use" I would specifically mean:
1. Is it safe to take a chilled pot from the refrigerator to a full open flame?
2. Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water splash)?
**OR** are there limiting factors, and if so, what are they?
I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan. I've done a decent amount of research, and according to the [stated temperature limits](https://glassshop.yale.edu/helpful-glass-facts), it would seem that it might be okay, but I've not seen enough information on the effects of rapid temperature change. So I'm really interested in actual experiences.
I once spat on a hot incandescent light bulb, and I just want to make sure this pot won't explode the way that light bulb did.
|
2019/11/26
|
[
"https://cooking.stackexchange.com/questions/103723",
"https://cooking.stackexchange.com",
"https://cooking.stackexchange.com/users/69108/"
] |
**No.**
>
> Is it safe to take a chilled pot from the refrigerator to a full open flame? Is it safe to take a pot that has been on a full open flame (for an hour or so) to the bottom of the sink (risking a little cool water splash)?
>
>
> ...
>
>
> I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan.
>
>
>
The answer to all of these questions is no; it is absolutely not safe. You would be playing Russian roulette: **the question is not if but when it will break** and leave you with a dangerous mess. The material was not designed to be used this way and cannot be treated like a metal pot. Yes, it's intended to be able to withstand high amounts of thermal shock, like being moved from a freezer to an oven, but open flames or other stovetop cooking methods are an entirely different matter.
>
> I just want to make sure I can treat a borosilicate pot the same way I might a regular pot or pan. I've done a decent amount of research, and according to the stated temperature limits ...
>
>
>
The link shows you exactly why you cannot -- borosilicate should only be used at temperatures around 400 °C for short periods and begins softening at 800 °C. A gas burner can reach temperatures up to 2000 °C, and even the cooler parts of the flame are well above 800 °C.
|
I would be thinking about the worst case failure mode, however unlikely. The pot shatters, the contents are no longer supported by the sides of the pot.
Could that result in the cook being seriously burned? If it's food that is liquid, such as soup, the answer is yes. If it's basically solid, no.
If the pot is in an oven or microwave, that provides some containment. A big mess, but not a pint of boiling soup headed for the cook's groin.
|
42,948,619
|
I had a fresh Lumen 5.4 installation and followed this [tutorial](https://iwader.co.uk/post/tymon-jwt-auth-with-lumen-5-2). Login and the other routes work fine, but the logout doesn't seem to work properly. What I mean is: if I try to invalidate a token, it doesn't give me an error, but if the same token (the one that was just invalidated) is re-used, it should be rejected as expired, yet it still goes through and gets me the data. In simple terms, I believe it is not invalidating the token at all. Below is my code:
`UserController` code:
```
class UserController extends Controller
{
protected $jwt;
public function __construct(JWTAuth $jwt)
{
$this->jwt = $jwt;
}
public function Signin(Request $request)
{
$this->validate($request, [
'email' => 'required|email|max:100',
'password' => 'required|min:6',
]);
if (!$token = $this->jwt->attempt($request->only('email', 'password'))) {
return response()->json(['The credentials provided are invalid.'], 500);
}
return response()->json(compact('token'));
}
public function LogoutUser(Request $request){
$this->jwt->invalidate($this->jwt->getToken());
return response()->json([
'message' => 'User logged off successfully!'
], 200);
}
}
```
`routes`:
```
$app->group(['prefix' => 'api'], function($app){
$app->post('/signup', [
'uses' => 'UserController@Signup'
]);
$app->group(['middleware' => 'auth:api'], function($app){
$app->post('/logout',[
'uses' => 'UserController@LogoutUser'
]);
});
});
```
`config/auth.php`:
```
'defaults' => [
'guard' => env('AUTH_GUARD', 'api'),
],
'guards' => [
'api' => [
'driver' => 'jwt',
'provider' => 'users'
],
],
'providers' => [
'users' => [
'driver' => 'eloquent',
'model' => \App\User::class,
],
],
'passwords' => [
//
],
```
Any help will be greatly appreciated.
|
2017/03/22
|
[
"https://Stackoverflow.com/questions/42948619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6779637/"
] |
I have now got it working and am leaving the steps behind in case anybody else faces the same issue. The fix was to use `CACHE_DRIVER=file` in the `.env` file. I am not exactly sure why or how this fixes it, but some research and trial and error led me to it.
|
I found something that may cause this problem.
See `Tymon\JWTAuth\Providers\Storage\Illuminate`:
```
public function add($key, $value, $minutes)
{
// If the laravel version is 5.8 or higher then convert minutes to seconds.
if ($this->laravelVersion !== null
&& is_int($minutes)
&& version_compare($this->laravelVersion, '5.8', '>=')
) {
$minutes = $minutes * 60;
}
$this->cache()->put($key, $value, $minutes);
}
```
My cache driver is Redis, and when I invalidate a token, this function is invoked.
Obviously `$this->laravelVersion` is null in Lumen, so the minutes-to-seconds conversion never happens and the blacklist entry expires far too early.
Here is how I solved my issue:
copy `Tymon\JWTAuth\Providers\Storage\Illuminate` to another folder
(in my case, I copied it to `app\Providers\Storage\Illuminate`)
and modify the `add` function:
```
public function add($key, $value, $minutes)
{
$seconds = $minutes * 60;
$this->cache()->put($key, $value, $seconds);
}
```
Remember to change the namespace.
Then modify `config/jwt.php`:
```
#'storage' => Tymon\JWTAuth\Providers\Storage\Illuminate::class,
'storage' => App\Providers\Storage\Illuminate::class,
```
Hope it helps
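The bug described above is a unit mismatch: a TTL expressed in minutes is handed to a cache backend that (in newer framework versions) expects seconds, and the version check that should convert it is skipped when the version cannot be detected, as in Lumen. A hedged Python sketch of that logic (the function name and version-tuple representation are illustrative, not the package's real API):

```python
# Illustrative sketch of the TTL bug described above: when the
# framework version cannot be detected, the minutes-to-seconds
# conversion is skipped and the blacklist entry expires 60x too early.
def ttl_seconds(minutes, framework_version):
    """Mirror of the quoted add() logic: convert only when the
    detected version is 5.8 or higher; otherwise pass minutes
    through unchanged (the bug path when framework_version is None)."""
    if framework_version is not None and framework_version >= (5, 8):
        return minutes * 60
    return minutes

assert ttl_seconds(60, (5, 8)) == 3600  # correct: one hour in seconds
assert ttl_seconds(60, None) == 60      # bug: entry lives only 60 seconds
```

The fix quoted above sidesteps the version check entirely by always multiplying by 60.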
|
38,881,970
|
I'm collecting statistics into a Dictionary:
```
var metrics = [String: Any]()
```
These metrics can be Ints, Doubles, Strings, so I'm trying to use Any.
I initialize a metric like this:
```
metrics["sentBeacons"] = UInt(0)
```
But now if I try to modify it, the compiler throws an error:
```
metrics["sentBeacons"]! += 1 // Error: can't apply Int to Any?
```
Which I understand, it can't know that that key has an Int.
I tried:
```
if metrics["sentBeacons"] is UInt {
metrics["sentBeacons"]! += 1
}
```
But no dice. I also tried typecasting:
```
if var sentBeacons = metrics["sentBeacons"] as? UInt {
sentBeacons += 1
}
```
But this doesn't work either, because (I think) the Dictionary is a value type and I'm only accessing a copy of the variable.
How can I ask the compiler: "If this 'Any' is now an Int, then add '1' to it (in place)"?
|
2016/08/10
|
[
"https://Stackoverflow.com/questions/38881970",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3254629/"
] |
You could update your dictionary with your new calculated value:
```
if var sentBeacons = metrics["sentBeacons"] as? UInt {
sentBeacons += 1
metrics["sentBeacons"] = sentBeacons
}
```
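The same read/modify/write pattern (copy the value out of the dictionary, mutate the copy, write it back under the same key) is not Swift-specific; here is a hedged Python sketch of the equivalent on a dict holding mixed value types, with illustrative names:

```python
# Read/modify/write on a dict holding mixed value types, mirroring the
# Swift pattern above: pull the value out, check its type, then store
# the updated value back under the same key.
metrics = {"sentBeacons": 0, "runLabel": "beacon-test"}

value = metrics.get("sentBeacons")
if isinstance(value, int):              # only increment numeric values
    metrics["sentBeacons"] = value + 1  # write the new value back

assert metrics["sentBeacons"] == 1
assert metrics["runLabel"] == "beacon-test"  # other entries untouched
```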
|
I made myself an extension for this, to encapsulate the read/modify/write steps when dealing with dictionaries:
```
extension Dictionary {
    // Return a copy of the dictionary with the value for `key` replaced
    // by the result of applying `valueTransformer` to the old value.
    func transformValue(forKey key: Key, valueTransformer: (Value?) -> Value) -> Dictionary {
        var copy = self
        copy[key] = valueTransformer(self[key])
        return copy
    }
}

metrics = metrics.transformValue(forKey: "sentBeacons") { ($0 as! UInt) + 1 }
```
|
260
|
Consider the question [Introductory books on nature sciences behind bioinformatics](https://cs.stackexchange.com/questions/1156/introductory-books-on-nature-sciences-behind-bioinformatics). Besides the fact that it is a book question, and those are arguably always problematic, it seems to be more about biology than computer science. How should we deal with such questions?
|
2012/04/10
|
[
"https://cs.meta.stackexchange.com/questions/260",
"https://cs.meta.stackexchange.com",
"https://cs.meta.stackexchange.com/users/98/"
] |
I would say that recommendations for any reference material *must involve computer science at some level* and, more importantly, must be focused from a computer science vantage point.
I think the question referenced, *in its current form*, is not appropriate for this forum. It seems that it needs to be refocused to ask "**what level of biology background is necessary for study in bioinformatics**". That way it is both asked from the viewpoint of computer science and answerable within this forum.
I think the person who answered the question did a good job keeping it in line with CS; the question should be modified to reflect this.
|
The question at hand clearly does not ask for a computer-science-related answer, nor is it likely that a computer scientist can provide a better answer than a biologist. Therefore I think that particular question should be closed.
In general, there may be questions worth asking but I think it is elementary that a computer science viewpoint is essential for answering the questions. Examples that come to mind are of didactic nature, e.g. "What is a good analogy to explain [math/physics/... concept] to CS students?", or questions that specifically ask for cases where something moved across borders, e.g. "What inspired the naming of 'phase transitions' in the context of SAT hardness?".
We should be very quick to close questions like "Why is [other science concept X] not used in CS?" or "Problems with my minors: How to calculate this [other science exercise]?".
|
260
|
Consider the question [Introductory books on nature sciences behind bioinformatics](https://cs.stackexchange.com/questions/1156/introductory-books-on-nature-sciences-behind-bioinformatics). Besides the fact that it is a book question, and those are arguably always problematic, it seems to be more about biology than computer science. How should we deal with such questions?
|
2012/04/10
|
[
"https://cs.meta.stackexchange.com/questions/260",
"https://cs.meta.stackexchange.com",
"https://cs.meta.stackexchange.com/users/98/"
] |
if best\_persons\_to\_answer(question) ∩ Computer\_scientists ≫ ∅
then Is\_fine\_here(question)
else Should\_be\_closed(question)
|
The question at hand clearly does not ask for a computer-science-related answer, nor is it likely that a computer scientist can provide a better answer than a biologist. Therefore I think that that particular question should be closed.
In general, there may be questions worth asking but I think it is elementary that a computer science viewpoint is essential for answering the questions. Examples that come to mind are of didactic nature, e.g. "What is a good analogy to explain [math/physics/... concept] to CS students?", or questions that specifically ask for cases where something moved across borders, e.g. "What inspired the naming of 'phase transitions' in the context of SAT hardness?".
We should be very quick to close questions like "Why is [other science concept X] not used in CS?" or "Problems with my minors: How to calculate this [other science exercise]?".
|
260
|
Consider the question [Introductory books on nature sciences behind bioinformatics](https://cs.stackexchange.com/questions/1156/introductory-books-on-nature-sciences-behind-bioinformatics). Besides the fact that it is a book question, and those are arguably always problematic, it seems to be more about biology than computer science. How should we deal with such questions?
|
2012/04/10
|
[
"https://cs.meta.stackexchange.com/questions/260",
"https://cs.meta.stackexchange.com",
"https://cs.meta.stackexchange.com/users/98/"
] |
if best\_persons\_to\_answer(question) ∩ Computer\_scientists ≫ ∅
then Is\_fine\_here(question)
else Should\_be\_closed(question)
|
I would say that recommendations for any reference material *must involve computer science at some level* and, more importantly, be focused from a computer science vantage point.
I think the question referenced, *in its current form*, is not appropriate for this forum. It seems it needs to be refocused to ask "**what level of biology background is necessary for study in bioinformatics**". This way it is both asked from the viewpoint of computer science and answerable within this forum.
I think the person who answered the question did a good job keeping it in line with CS; the question should be modified to reflect this.
|
28,760,034
|
I have a strange problem happening intermittently with my apps where textbox controls would disappear intermittently. I have narrowed the cause of this down to having image files (small logos etc) on my page.
I have managed to create a simple project which contains an xaml page with an image and 2 textblocks (these are in grids).
I have found on two test tablets that I can re-create the problem by going to task manager and creating a dump file for the running app.
After I do this twice and resume the app, the two textblocks disappear.
This exact problem is happening intermittently in my live apps.
Does anyone have any ideas why this might be happening, or what I should try next? I have no idea why creating a dump file forces the issue.
You can see a video of me re-creating the issue here:
<https://onedrive.live.com/redir?resid=DF2BE823348DEA6C!74381&authkey=!AIvSU05r0363S3Y&ithint=video%2cMOV>
The test project in the video can be downloaded from here:
<https://onedrive.live.com/redir?resid=DF2BE823348DEA6C!74382&authkey=!AIGHSdezFcCbEZQ&ithint=file%2czip>
So far I can re-create it with the exact same steps on my two different tablets - both running Windows 8.1 Pro 32bit.
If you are familiar with sideloading apps and you have a 64bit tablet I would be really appreciative if someone was able to test out the exact same steps as seen in the video.
Any help would be extremely appreciated as I am clueless as to where to go next.
|
2015/02/27
|
[
"https://Stackoverflow.com/questions/28760034",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2682616/"
] |
I finally got to the bottom of this. The problem was down to the Intel graphics driver that was installed on the tablets.
An updated driver was released around April which seemed to resolve the issue.
|
I have observed this problem of disappearing controls on views that do not contain image files but rather a background image. After reading your method of how to reproduce the issue, I've since removed the background image and replaced with a LinearGradientBrush and retested with the dump file process. The problem seems to have disappeared. (I'm running 32-bit as well.)
|
10,107,395
|
[I read this coding rule](http://www.appperfect.com/support/java-coding-rules/garbagecollection.html#rule6) that states you should prefer the use of `long[]` over `Date[]`, but there is no reference provided as to *why*.
Why should I do this? What is the advantage of using a `long` over `Date`? What effect will this have on garbage collection? Or is this just a bunch of nonsense?
|
2012/04/11
|
[
"https://Stackoverflow.com/questions/10107395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/482702/"
] |
It depends what you're trying to achieve. For example, `long[]` has a smaller memory requirement than `Date[]` (because `Date` is an Object and `long` is a primitive), but is that really important for your purpose?
|
I believe it is because you can get a lot of problems with databases using the Date.
|
10,107,395
|
[I read this coding rule](http://www.appperfect.com/support/java-coding-rules/garbagecollection.html#rule6) that states you should prefer the use of `long[]` over `Date[]`, but there is no reference provided as to *why*.
Why should I do this? What is the advantage of using a `long` over `Date`? What effect will this have on garbage collection? Or is this just a bunch of nonsense?
|
2012/04/11
|
[
"https://Stackoverflow.com/questions/10107395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/482702/"
] |
If anyone else stumbles upon this question:
>
> I'm not trying to be a smartass :) But the fact that you have only one object (which in fact is an array of 10000 longs) or if you have 10001 objects (one being the array, and 10000 Dates) is a huge difference for the garbage collector
>
>
>
from [Lukas Eder](https://stackoverflow.com/users/521799/lukas-eder)
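The allocation difference Lukas Eder describes can be transposed into a small TypeScript sketch. This is illustrative only: the original point is about the JVM, but the same boxing effect exists for JS arrays of objects versus arrays of numbers.

```typescript
// Sketch of the quoted point, transposed from Java to TypeScript: a number[]
// is a single array with primitive slots, while a Date[] allocates one extra
// heap object per element (n + 1 objects for the collector to track).
const n = 10000;
const timestamps: number[] = new Array(n);
const dates: Date[] = new Array(n);
for (let i = 0; i < n; i++) {
  timestamps[i] = i;      // primitive slot: no per-element allocation
  dates[i] = new Date(i); // one Date wrapper object per element
}
```

Either representation round-trips the same instant; the difference is purely in how many objects the garbage collector has to track.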
|
I believe it is because you can get a lot of problems with databases using the Date.
|
10,107,395
|
[I read this coding rule](http://www.appperfect.com/support/java-coding-rules/garbagecollection.html#rule6) that states you should prefer the use of `long[]` over `Date[]`, but there is no reference provided as to *why*.
Why should I do this? What is the advantage of using a `long` over `Date`? What effect will this have on garbage collection? Or is this just a bunch of nonsense?
|
2012/04/11
|
[
"https://Stackoverflow.com/questions/10107395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/482702/"
] |
It depends what you're trying to achieve. For example, `long[]` has a smaller memory requirement than `Date[]` (because `Date` is an Object and `long` is a primitive), but is that really important for your purpose?
|
If anyone else stumbles upon this question:
>
> I'm not trying to be a smartass :) But the fact that you have only one object (which in fact is an array of 10000 longs) or if you have 10001 objects (one being the array, and 10000 Dates) is a huge difference for the garbage collector
>
>
>
from [Lukas Eder](https://stackoverflow.com/users/521799/lukas-eder)
|
15,164,717
|
We're developing a Javascript SDK which is used by embedded applications (injected in my site as IFRAMEs) to help them use some resources like loading some dialogs: e.g. an authorization dialog or a share box(like Facebook).
Our SDK is using easyXDM to work more or less like this:
**HTML Page**
```
http/https: www.mysite.com/embedded-app/
(Some JS classes)
|
|--- (IFRAME)
|    https://www.some-embedded-app-domain.com/page.html
|    (loads and instantiates the SDK from http/https: www.mysite.com/sdk.js,
|     which uses easyXDM)
|    |
|    |--- (IFRAME injected by easyXDM)
|    |    http/https: www.mysite.com/embedded_provider.html
|    |    (communicates with window.top to use the window.top JS classes)
```
My site can be loaded both using http/https, but the embedded application must be served using **always** HTTPS. In order to allow the inner iframe injected by easyXDM to communicate with my site, the host and protocol **must match** in both urls, otherwise a same origin policy violation will arise.
Problem: how would I tell the SDK code, which is loaded from an external app's URL, whether the outer window (my site) is using http or https, so that it renders embedded\_provider.html with the same protocol and thus allows JS communication between the two?
The only solution I can think of is to somehow inform the embedded app whether we're currently browsing over http or https, and then it can instantiate the SDK properly (using a flag like use\_https), but I'd prefer not to force the app to know the protocol we are using.
Do you know any other alternative?
Thanks!
|
2013/03/01
|
[
"https://Stackoverflow.com/questions/15164717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752503/"
] |
The only solution I can think of (I haven't tested this, though) is to have easyXDM put in 2 test iframes - one on <http://www.mysite.com/url_test> and one on <https://www.mysite.com/url_test>.
Then have your /url\_test webpage try to access window.top.location.href. If /url\_test can see the location, then it must be on the same domain. Then the successful /url\_test page will communicate to the parent iframe (on some-embedded-app.com) via easyXDM to create the iframe you really want on the correct host.
Note: the /url\_test page which is on the incorrect host will start to dump same-origin iframe warnings in the console. :)
Note #2: if this approach works, you could iteratively improve it by saying, "because of the product domain, i know that 80% of these embeds will be on http. I'll make a /url\_test on http first, then try https as a backup."
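The probe step above can be sketched as a plain function. This is a sketch under assumptions: in a real browser, cross-origin access to `window.top.location` throws a SecurityError; here a throwing getter stands in for that behavior, and the function name is illustrative.

```typescript
// Sketch of the same-origin probe the /url_test page could run. Cross-origin
// access to window.top.location throws in browsers; the try/catch turns that
// into a boolean the page can report back to the parent via easyXDM.
interface TopWindow { top: { location: { href: string } } }

function probeSameOrigin(win: TopWindow): boolean {
  try {
    // Succeeds only when protocol and host match the parent page.
    return typeof win.top.location.href === 'string';
  } catch {
    // SecurityError: wrong protocol/host — the other test iframe should win.
    return false;
  }
}
```

The /url\_test page that returns true would then tell the parent which protocol to use for the real embedded\_provider.html iframe.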
|
Use `//` instead of `http://` or `https://`.
The browser will sort it out.
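Concretely, a protocol-relative URL in the embedding snippet lets the browser pick the parent page's scheme (the path here is illustrative):

```html
<!-- Resolves to http://… or https://… to match the page that includes it -->
<script src="//www.mysite.com/sdk.js"></script>
```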
|
143,197
|
I've been talking to some people at work who believe some versions of a database store their data in discrete tables. That is to say you might open up a folder and see one file for each table in the database then several other supporting files. They do not have a lot of experience with databases but I have only been working with them for a little over a half year so I am not a canonical source of info either.
I've been touting the benefits of SQL Server over Access (and before this, Access over Excel. Great strides have been made :) ). But, other people were of the impression that the/one of the benefit(s) of using SQL Server over Access was that all the data was not consolidated down into one file. Yet, SQL Server packs everything into a single .mdf file (plus the log file).
My question is, is there an RDBMS which holds its data in multiple discrete files instead of one master file? And if the answer is yes, why do it one way over the other?
**edit**
Thank you everyone for your answers. This has really helped clarify the whole situation.
|
2012/04/05
|
[
"https://softwareengineering.stackexchange.com/questions/143197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33180/"
] |
Many databases keep their data in one file per table. MySQL does, for example, for MyISAM tables. Whether the data is all kept in one file or not isn't very relevant. It just depends on the storage system being used.
Basically, it's among the last things I'd care about when selecting a database *server*. For desktop databases, single-file data might be more convenient.
|
MySQL does store data in a separate file per table, AFAIK. I don't think there is a good reason for doing it one way or the other; a lot of it depends on the implementation details of your storage, I'd think. But I don't write database servers, so I can't say for sure.
Also, SQL Server doesn't have a lock file. It has a transaction log file. Locks are for desktop DBs.
|
143,197
|
I've been talking to some people at work who believe some versions of a database store their data in discrete tables. That is to say you might open up a folder and see one file for each table in the database then several other supporting files. They do not have a lot of experience with databases but I have only been working with them for a little over a half year so I am not a canonical source of info either.
I've been touting the benefits of SQL Server over Access (and before this, Access over Excel. Great strides have been made :) ). But, other people were of the impression that the/one of the benefit(s) of using SQL Server over Access was that all the data was not consolidated down into one file. Yet, SQL Server packs everything into a single .mdf file (plus the log file).
My question is, is there an RDBMS which holds its data in multiple discrete files instead of one master file? And if the answer is yes, why do it one way over the other?
**edit**
Thank you everyone for your answers. This has really helped clarify the whole situation.
|
2012/04/05
|
[
"https://softwareengineering.stackexchange.com/questions/143197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33180/"
] |
Databases usually use one big file for performance reasons. The DB engine can use its own internal structure and not have to restrict itself to file-system limits on block size, buffer size, or fragmentation policies. I know some systems used to (and possibly still do) provide their own block device drivers. For those systems, you'd simply hand them a raw disk partition and let the database handle its own storage management, cutting out a level of indirection and speeding up all disk operations. It's also easier to keep the database in contiguous blocks on disk when there are fewer files to fragment.
|
MySQL does store data in a separate file per table, AFAIK. I don't think there is a good reason for doing it one way or the other; a lot of it depends on the implementation details of your storage, I'd think. But I don't write database servers, so I can't say for sure.
Also, SQL Server doesn't have a lock file. It has a transaction log file. Locks are for desktop DBs.
|
143,197
|
I've been talking to some people at work who believe some versions of a database store their data in discrete tables. That is to say you might open up a folder and see one file for each table in the database then several other supporting files. They do not have a lot of experience with databases but I have only been working with them for a little over a half year so I am not a canonical source of info either.
I've been touting the benefits of SQL Server over Access (and before this, Access over Excel. Great strides have been made :) ). But, other people were of the impression that the/one of the benefit(s) of using SQL Server over Access was that all the data was not consolidated down into one file. Yet, SQL Server packs everything into a single .mdf file (plus the log file).
My question is, is there an RDBMS which holds its data in multiple discrete files instead of one master file? And if the answer is yes, why do it one way over the other?
**edit**
Thank you everyone for your answers. This has really helped clarify the whole situation.
|
2012/04/05
|
[
"https://softwareengineering.stackexchange.com/questions/143197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33180/"
] |
Many databases keep their data in one file per table. MySQL does, for example, for MyISAM tables. Whether the data is all kept in one file or not isn't very relevant. It just depends on the storage system being used.
Basically, it's among the last things I'd care about when selecting a database *server*. For desktop databases, single-file data might be more convenient.
|
In Oracle, you can define "External Tables". These tables are just separate text files but can be accessed via SQL. In the xBase family of databases, each table has its own file (.dbf) and each index has its own file (.ndx) (extensions vary among products).
The drawbacks of having several files representing your database are security and consistency of the database. The major bad thing about MS Access specifically is that the database file stores not only data but also some code. If the database file is corrupted, you get a problem not only in accessing the data but also in accessing the code.
|
143,197
|
I've been talking to some people at work who believe some versions of a database store their data in discrete tables. That is to say you might open up a folder and see one file for each table in the database then several other supporting files. They do not have a lot of experience with databases but I have only been working with them for a little over a half year so I am not a canonical source of info either.
I've been touting the benefits of SQL Server over Access (and before this, Access over Excel. Great strides have been made :) ). But, other people were of the impression that the/one of the benefit(s) of using SQL Server over Access was that all the data was not consolidated down into one file. Yet, SQL Server packs everything into a single .mdf file (plus the log file).
My question is, is there an RDBMS which holds its data in multiple discrete files instead of one master file? And if the answer is yes, why do it one way over the other?
**edit**
Thank you everyone for your answers. This has really helped clarify the whole situation.
|
2012/04/05
|
[
"https://softwareengineering.stackexchange.com/questions/143197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33180/"
] |
Many databases keep their data in one file per table. MySQL does, for example, for MyISAM tables. Whether the data is all kept in one file or not isn't very relevant. It just depends on the storage system being used.
Basically, it's among the last things I'd care about when selecting a database *server*. For desktop databases, single-file data might be more convenient.
|
I can see no reason why you would want one file per table. In SQL Server you can partition the files if you want for performance, but typically you don't need to do that until you have many millions of records.
|
143,197
|
I've been talking to some people at work who believe some versions of a database store their data in discrete tables. That is to say you might open up a folder and see one file for each table in the database then several other supporting files. They do not have a lot of experience with databases but I have only been working with them for a little over a half year so I am not a canonical source of info either.
I've been touting the benefits of SQL Server over Access (and before this, Access over Excel. Great strides have been made :) ). But, other people were of the impression that the/one of the benefit(s) of using SQL Server over Access was that all the data was not consolidated down into one file. Yet, SQL Server packs everything into a single .mdf file (plus the log file).
My question is, is there an RDBMS which holds its data in multiple discrete files instead of one master file? And if the answer is yes, why do it one way over the other?
**edit**
Thank you everyone for your answers. This has really helped clarify the whole situation.
|
2012/04/05
|
[
"https://softwareengineering.stackexchange.com/questions/143197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33180/"
] |
Databases usually use one big file for performance reasons. The DB engine can use its own internal structure and not have to restrict itself to file-system limits on block size, buffer size, or fragmentation policies. I know some systems used to (and possibly still do) provide their own block device drivers. For those systems, you'd simply hand them a raw disk partition and let the database handle its own storage management, cutting out a level of indirection and speeding up all disk operations. It's also easier to keep the database in contiguous blocks on disk when there are fewer files to fragment.
|
Many databases keep their data in one file per table. MySQL does, for example, for MyISAM tables. Whether the data is all kept in one file or not isn't very relevant. It just depends on the storage system being used.
Basically, it's among the last things I'd care about when selecting a database *server*. For desktop databases, single-file data might be more convenient.
|
143,197
|
I've been talking to some people at work who believe some versions of a database store their data in discrete tables. That is to say you might open up a folder and see one file for each table in the database then several other supporting files. They do not have a lot of experience with databases but I have only been working with them for a little over a half year so I am not a canonical source of info either.
I've been touting the benefits of SQL Server over Access (and before this, Access over Excel. Great strides have been made :) ). But, other people were of the impression that the/one of the benefit(s) of using SQL Server over Access was that all the data was not consolidated down into one file. Yet, SQL Server packs everything into a single .mdf file (plus the log file).
My question is, is there an RDBMS which holds its data in multiple discrete files instead of one master file? And if the answer is yes, why do it one way over the other?
**edit**
Thank you everyone for your answers. This has really helped clarify the whole situation.
|
2012/04/05
|
[
"https://softwareengineering.stackexchange.com/questions/143197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33180/"
] |
Databases usually use one big file for performance reasons. The DB engine can use its own internal structure and not have to restrict itself to file-system limits on block size, buffer size, or fragmentation policies. I know some systems used to (and possibly still do) provide their own block device drivers. For those systems, you'd simply hand them a raw disk partition and let the database handle its own storage management, cutting out a level of indirection and speeding up all disk operations. It's also easier to keep the database in contiguous blocks on disk when there are fewer files to fragment.
|
In Oracle, you can define "External Tables". These tables are just separate text files but can be accessed via SQL. In the xBase family of databases, each table has its own file (.dbf) and each index has its own file (.ndx) (extensions vary among products).
The drawbacks of having several files representing your database are security and consistency of the database. The major bad thing about MS Access specifically is that the database file stores not only data but also some code. If the database file is corrupted, you get a problem not only in accessing the data but also in accessing the code.
|
143,197
|
I've been talking to some people at work who believe some versions of a database store their data in discrete tables. That is to say you might open up a folder and see one file for each table in the database then several other supporting files. They do not have a lot of experience with databases but I have only been working with them for a little over a half year so I am not a canonical source of info either.
I've been touting the benefits of SQL Server over Access (and before this, Access over Excel. Great strides have been made :) ). But, other people were of the impression that the/one of the the benefit(s) of using SQL Server over Access was that all the data was not consolidated down into one file. Yet, SQL Server packs everything into a single .mdf file (plus the log file).
My question is, is there an RDBMS which holds it's data in multiple discrete files instead of one master file? And if the answer is yes, why do it one way over the other?
**edit**
Thank you everyone for you're answers. This has really helped clarify the whole situation.
|
2012/04/05
|
[
"https://softwareengineering.stackexchange.com/questions/143197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33180/"
] |
Databases usually use one big file for performance reasons. The DB engine can use its own internal structure and not have to restrict itself to file-system limits on block size, buffer size, or fragmentation policies. I know some systems used to (and possibly still do) provide their own block device drivers. For those systems, you'd simply hand them a raw disk partition and let the database handle its own storage management, cutting out a level of indirection and speeding up all disk operations. It's also easier to keep the database in contiguous blocks on disk when there are fewer files to fragment.
|
I can see no reason why you would want one file per table. In SQL Server you can partition the files if you want for performance, but typically you don't need to do that until you have many millions of records.
|
48,583,020
|
I am trying to display a number format in my Angular 4 application. Basically, what I am looking at is: if the number is 12.23 million, then it should display, e.g.,
12.2M (one decimal place)
If the number is 50,000.123, then 50.1K.
How do I achieve that in Angular? Do I need to write a directive? Are there any inbuilt pipes in Angular?
structure
```
export interface NpvResults {
captiveInsYear: number[];
captiveInsPremiumPaid: number[];
captiveInsTaxDeduction: number[];
captiveInsLoanToParent: number[];
captiveInsCapitalContribution: number[];
captiveDividentDistribution: number[];
captiveInsTerminalValue: number[];
}
```
The array is initialized to the following value
```
this.sourceResults.captiveInsPremiumPaid = [1,2,3,4,5];
```
The html
```
<td *ngFor= "let item of NpvResults.captiveInsPremiumPaid" >{{item}}</td>
```
|
2018/02/02
|
[
"https://Stackoverflow.com/questions/48583020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4607841/"
] |
You can create a **pipe**:
```
import { Pipe, PipeTransform } from '@angular/core';

@Pipe({
  name: 'thousandSuff'
})
export class ThousandSuffixesPipe implements PipeTransform {
  transform(input: any, args?: any): any {
    const suffixes = ['k', 'M', 'G', 'T', 'P', 'E'];
    if (Number.isNaN(input)) {
      return null;
    }
    if (input < 1000) {
      return input;
    }
    const exp = Math.floor(Math.log(input) / Math.log(1000));
    return (input / Math.pow(1000, exp)).toFixed(args) + suffixes[exp - 1];
  }
}
```
implement in the view
```
{{ model | thousandSuff }} <!-- 0 decimals -->
{{ model | thousandSuff : 2 }} <!-- X decimals -->
```
result
>
> {{ 22600000 | thousandSuff }} -> 23M
>
>
> {{ 22600000 | thousandSuff : 2 }} -> 22.60M
>
>
>
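For a quick check outside Angular, the same transform logic can be exercised as a plain function (a sketch; `thousandSuff` here is a standalone function for illustration, not the registered pipe):

```typescript
// Standalone version of the pipe's suffix logic, for illustration/testing.
function thousandSuff(input: number, decimals: number = 0): string {
  const suffixes = ['k', 'M', 'G', 'T', 'P', 'E'];
  if (Number.isNaN(input) || input < 1000) {
    return String(input);
  }
  // exp is the power of 1000: 1 -> k, 2 -> M, 3 -> G, ...
  const exp = Math.floor(Math.log(input) / Math.log(1000));
  return (input / Math.pow(1000, exp)).toFixed(decimals) + suffixes[exp - 1];
}
```

For example, `thousandSuff(22600000)` gives `"23M"` and `thousandSuff(22600000, 2)` gives `"22.60M"`, matching the pipe's output in the template.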
|
Here I'll just give you an idea. First, create:
**Html**
`{{number | shortNumber}}`
You can create your own `custom pipe filter`: pass your number through the pipe, and put logic like the code below in your custom pipe.
**Pipe filter**
```
getformat(number: number) {
  if (number === 0) {
    return 0;
  }
  else {
    // hundreds
    if (number <= 999) {
      return number;
    }
    // thousands
    else if (number >= 1000 && number <= 999999) {
      return (number / 1000).toFixed(1) + 'K'; // one decimal, e.g. 50000.123 -> "50.0K"
    }
    // millions
    else if (number >= 1000000 && number <= 999999999) {
      return (number / 1000000).toFixed(1) + 'M'; // e.g. 12230000 -> "12.2M"
    }
    // billions
    else if (number >= 1000000000 && number <= 999999999999) {
      return (number / 1000000000).toFixed(1) + 'B';
    }
    else
      return number;
  }
}
```
You can do it like this. For creating a custom pipe, you can refer to [this site](https://angular.io/guide/pipes).
|
48,583,020
|
I am trying to display a number format in my Angular 4 application. Basically, what I am looking at is: if the number is 12.23 million, then it should display, e.g.,
12.2M (one decimal place)
If the number is 50,000.123, then 50.1K.
How do I achieve that in Angular? Do I need to write a directive? Are there any inbuilt pipes in Angular?
structure
```
export interface NpvResults {
captiveInsYear: number[];
captiveInsPremiumPaid: number[];
captiveInsTaxDeduction: number[];
captiveInsLoanToParent: number[];
captiveInsCapitalContribution: number[];
captiveDividentDistribution: number[];
captiveInsTerminalValue: number[];
}
```
The array is initialized to the following value
```
this.sourceResults.captiveInsPremiumPaid = [1,2,3,4,5];
```
The html
```
<td *ngFor= "let item of NpvResults.captiveInsPremiumPaid" >{{item}}</td>
```
|
2018/02/02
|
[
"https://Stackoverflow.com/questions/48583020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4607841/"
] |
Here I'll just give you an idea. First, create:
**Html**
`{{number | shortNumber}}`
You can create your own `custom pipe filter`: pass your number through the pipe, and put logic like the code below in your custom pipe.
**Pipe filter**
```
getformat(number: number) {
  if (number === 0) {
    return 0;
  }
  else {
    // hundreds
    if (number <= 999) {
      return number;
    }
    // thousands
    else if (number >= 1000 && number <= 999999) {
      return (number / 1000).toFixed(1) + 'K'; // one decimal, e.g. 50000.123 -> "50.0K"
    }
    // millions
    else if (number >= 1000000 && number <= 999999999) {
      return (number / 1000000).toFixed(1) + 'M'; // e.g. 12230000 -> "12.2M"
    }
    // billions
    else if (number >= 1000000000 && number <= 999999999999) {
      return (number / 1000000000).toFixed(1) + 'B';
    }
    else
      return number;
  }
}
```
You can do it like this. For creating a custom pipe, you can refer to [this site](https://angular.io/guide/pipes).
|
I wrote the pipe below; it handles all of the negative scenarios (**negative numbers, strings, blank**) as well, and works best in all of the cases. [Complete article here.](https://medium.com/@nimishgoel056/display-number-in-billion-million-thousand-using-custom-pipe-in-angular-b95bf388350a)
```
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({
name: 'numberSuffix'
})
export class NumberSuffixPipe implements PipeTransform {
transform(input: any, args?: any): any {
let exp;
const suffixes = ['K', 'M', 'B', 'T', 'P', 'E'];
const isNegativeValues = input < 0;
if (Number.isNaN(input) || (input < 1000 && input >= 0) || !this.isNumeric(input) || (input < 0 && input > -1000)) {
  if (!!args && this.isNumeric(input) && !(input < 0) && input != 0) {
    return input.toFixed(args);
  } else {
    return input;
  }
}
if (!isNegativeValues) {
exp = Math.floor(Math.log(input) / Math.log(1000));
return (input / Math.pow(1000, exp)).toFixed(args) + suffixes[exp - 1];
} else {
input = input * -1;
exp = Math.floor(Math.log(input) / Math.log(1000));
return (input * -1 / Math.pow(1000, exp)).toFixed(args) + suffixes[exp - 1];
}
}
isNumeric(value): boolean {
if (value < 0) value = value * -1;
if (/^-{0,1}\d+$/.test(value)) {
return true;
} else if (/^\d+\.\d+$/.test(value)) {
return true;
} else {
return false;
}
}
}
```
|
48,583,020
|
I am trying to display a number format in my Angular 4 application. Basically, what I am looking at is: if the number is 12.23 million, then it should display, e.g.,
12.2M (one decimal place)
If the number is 50,000.123, then 50.1K.
How do I achieve that in Angular? Do I need to write a directive? Are there any inbuilt pipes in Angular?
structure
```
export interface NpvResults {
captiveInsYear: number[];
captiveInsPremiumPaid: number[];
captiveInsTaxDeduction: number[];
captiveInsLoanToParent: number[];
captiveInsCapitalContribution: number[];
captiveDividentDistribution: number[];
captiveInsTerminalValue: number[];
}
```
The array is initialized to the following value
```
this.sourceResults.captiveInsPremiumPaid = [1,2,3,4,5];
```
The html
```
<td *ngFor= "let item of NpvResults.captiveInsPremiumPaid" >{{item}}</td>
```
|
2018/02/02
|
[
"https://Stackoverflow.com/questions/48583020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4607841/"
] |
You can create a **pipe**:
```
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({
name: 'thousandSuff'
})
export class ThousandSuffixesPipe implements PipeTransform {
transform(input: any, args?: any): any {
var exp, rounded,
suffixes = ['k', 'M', 'G', 'T', 'P', 'E'];
if (Number.isNaN(input)) {
return null;
}
if (input < 1000) {
return input;
}
exp = Math.floor(Math.log(input) / Math.log(1000));
return (input / Math.pow(1000, exp)).toFixed(args) + suffixes[exp - 1];
}
}
```
implement in the view
```
{{ model | thousandSuff }} <!-- 0 decimals -->
{{ model | thousandSuff : 2 }} <!-- X decimals -->
```
result
>
> {{ 22600000 | thousandSuff }} -> 23M
>
>
> {{ 22600000 | thousandSuff : 2 }} -> 22.60M
>
>
>
|
The easiest way is to make a pipe and a service. You can do the actual formatting in the service using the numeral.js library, then import that service into the pipe and apply the formatting in its `transform` method.
```
import { Pipe, PipeTransform } from '@angular/core';
import { NumberFormatService } from '../utils/number-format.service';
@Pipe({
name: 'numberFormat'
})
export class NumberFormatPipe implements PipeTransform {
constructor(private nF: NumberFormatService) {}
transform(value: any, args?: any): any {
if(args == "commas"){
return this.nF.addCommas(value);
}
else if(args == "kmb"){
return this.nF.addKMB(value);
}
return value;
}
}
import { Injectable } from '@angular/core';
import * as numeral from 'numeral';
@Injectable({
providedIn: 'root'
})
export class NumberFormatService {
constructor() { }
public addCommas(val){
return numeral(val).format('0,0');
}
public addKMB(val){
return numeral(val).format('0[.]00a');
}
}
html template file
{{dailyData.avgDaily | numberFormat: 'commas'}}
{{dailyData.avgDaily | numberFormat: 'kmb'}}
```
1. You can get the numeral.js documentation from the official site: [link](http://numeraljs.com/).
2. You can install numeral.js in the Angular application using `npm install numeral --save`.
|
48,583,020
|
I am trying to format numbers for display in my Angular 4 application. Basically, if the number is 12.23 million then it should display, e.g.:
12.2M (one decimal place)
If the number is 50,000.123 then 50.1K.
How do I achieve that in Angular? Do I need to write a directive? Are there any built-in pipes in Angular?
structure
```
export interface NpvResults {
captiveInsYear: number[];
captiveInsPremiumPaid: number[];
captiveInsTaxDeduction: number[];
captiveInsLoanToParent: number[];
captiveInsCapitalContribution: number[];
captiveDividentDistribution: number[];
captiveInsTerminalValue: number[];
}
```
The array is initialized to the following value
```
this.sourceResults.captiveInsPremiumPaid = [1,2,3,4,5];
```
The html
```
<td *ngFor= "let item of NpvResults.captiveInsPremiumPaid" >{{item}}</td>
```
|
2018/02/02
|
[
"https://Stackoverflow.com/questions/48583020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4607841/"
] |
You can create a **pipe**:
```
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({
name: 'thousandSuff'
})
export class ThousandSuffixesPipe implements PipeTransform {
transform(input: any, args?: any): any {
var exp, rounded,
suffixes = ['k', 'M', 'G', 'T', 'P', 'E'];
if (Number.isNaN(input)) {
return null;
}
if (input < 1000) {
return input;
}
exp = Math.floor(Math.log(input) / Math.log(1000));
return (input / Math.pow(1000, exp)).toFixed(args) + suffixes[exp - 1];
}
}
```
implement in the view
```
{{ model | thousandSuff }} <!-- 0 decimals -->
{{ model | thousandSuff : 2 }} <!-- X decimals -->
```
result
>
> {{ 22600000 | thousandSuff }} -> 23M
>
>
> {{ 22600000 | thousandSuff : 2 }} -> 22.60M
>
>
>
|
I wrote the pipe below; it handles all of the edge cases (**negative numbers, strings, blank values**) as well. [It works in all of these cases; complete article here.](https://medium.com/@nimishgoel056/display-number-in-billion-million-thousand-using-custom-pipe-in-angular-b95bf388350a)
```
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({
name: 'numberSuffix'
})
export class NumberSuffixPipe implements PipeTransform {
transform(input: any, args?: any): any {
let exp;
const suffixes = ['K', 'M', 'B', 'T', 'P', 'E'];
const isNagtiveValues = input < 0;
if (Number.isNaN(input) || (input < 1000 && input >= 0) || !this.isNumeric(input) || (input < 0 && input > -1000)) {
if (!!args && this.isNumeric(input) && !(input < 0) && input != 0) {
return input.toFixed(args);
} else {
return input;
}
}
if (!isNagtiveValues) {
exp = Math.floor(Math.log(input) / Math.log(1000));
return (input / Math.pow(1000, exp)).toFixed(args) + suffixes[exp - 1];
} else {
input = input * -1;
exp = Math.floor(Math.log(input) / Math.log(1000));
return (input * -1 / Math.pow(1000, exp)).toFixed(args) + suffixes[exp - 1];
}
}
isNumeric(value): boolean {
if (value < 0) value = value * -1;
if (/^-{0,1}\d+$/.test(value)) {
return true;
} else if (/^\d+\.\d+$/.test(value)) {
return true;
} else {
return false;
}
}
}
```
|
48,583,020
|
I am trying to format numbers for display in my Angular 4 application. Basically, if the number is 12.23 million then it should display, e.g.:
12.2M (one decimal place)
If the number is 50,000.123 then 50.1K.
How do I achieve that in Angular? Do I need to write a directive? Are there any built-in pipes in Angular?
structure
```
export interface NpvResults {
captiveInsYear: number[];
captiveInsPremiumPaid: number[];
captiveInsTaxDeduction: number[];
captiveInsLoanToParent: number[];
captiveInsCapitalContribution: number[];
captiveDividentDistribution: number[];
captiveInsTerminalValue: number[];
}
```
The array is initialized to the following value
```
this.sourceResults.captiveInsPremiumPaid = [1,2,3,4,5];
```
The html
```
<td *ngFor= "let item of NpvResults.captiveInsPremiumPaid" >{{item}}</td>
```
|
2018/02/02
|
[
"https://Stackoverflow.com/questions/48583020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4607841/"
] |
The easiest way is to make a pipe and a service. You can do the actual formatting in the service using the numeral.js library, then import that service into the pipe and apply the formatting in its `transform` method.
```
import { Pipe, PipeTransform } from '@angular/core';
import { NumberFormatService } from '../utils/number-format.service';
@Pipe({
name: 'numberFormat'
})
export class NumberFormatPipe implements PipeTransform {
constructor(private nF: NumberFormatService) {}
transform(value: any, args?: any): any {
if(args == "commas"){
return this.nF.addCommas(value);
}
else if(args == "kmb"){
return this.nF.addKMB(value);
}
return value;
}
}
import { Injectable } from '@angular/core';
import * as numeral from 'numeral';
@Injectable({
providedIn: 'root'
})
export class NumberFormatService {
constructor() { }
public addCommas(val){
return numeral(val).format('0,0');
}
public addKMB(val){
return numeral(val).format('0[.]00a');
}
}
html template file
{{dailyData.avgDaily | numberFormat: 'commas'}}
{{dailyData.avgDaily | numberFormat: 'kmb'}}
```
1. You can get the numeral.js documentation from the official site: [link](http://numeraljs.com/).
2. You can install numeral.js in the Angular application using `npm install numeral --save`.
|
I wrote the pipe below; it handles all of the edge cases (**negative numbers, strings, blank values**) as well. [It works in all of these cases; complete article here.](https://medium.com/@nimishgoel056/display-number-in-billion-million-thousand-using-custom-pipe-in-angular-b95bf388350a)
```
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({
name: 'numberSuffix'
})
export class NumberSuffixPipe implements PipeTransform {
transform(input: any, args?: any): any {
let exp;
const suffixes = ['K', 'M', 'B', 'T', 'P', 'E'];
const isNagtiveValues = input < 0;
if (Number.isNaN(input) || (input < 1000 && input >= 0) || !this.isNumeric(input) || (input < 0 && input > -1000)) {
if (!!args && this.isNumeric(input) && !(input < 0) && input != 0) {
return input.toFixed(args);
} else {
return input;
}
}
if (!isNagtiveValues) {
exp = Math.floor(Math.log(input) / Math.log(1000));
return (input / Math.pow(1000, exp)).toFixed(args) + suffixes[exp - 1];
} else {
input = input * -1;
exp = Math.floor(Math.log(input) / Math.log(1000));
return (input * -1 / Math.pow(1000, exp)).toFixed(args) + suffixes[exp - 1];
}
}
isNumeric(value): boolean {
if (value < 0) value = value * -1;
if (/^-{0,1}\d+$/.test(value)) {
return true;
} else if (/^\d+\.\d+$/.test(value)) {
return true;
} else {
return false;
}
}
}
```
|
57,865,653
|
I will try to make my question more clear.
Because I used the setting below on the HTML body, I need to scroll down to get to the bottom of the page.
```
body { height: 180vh;}
```
And I want to set a div's `top` property relative to the document height, so that I can control its position at a place only visible when I scroll down. I prefer to set it as a percentage value so that the code adapts to different device sizes.
But by default the `top` property is relative to the viewport, so I cannot achieve this by setting a `top` value.
Is there a way to do what I want, even without the `top` property?
|
2019/09/10
|
[
"https://Stackoverflow.com/questions/57865653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9940069/"
] |
If you give the body a `position: relative` and the div a `position: absolute`, you can set the `top` property as a percentage, where `top: 100%` will position it at the bottom of the page:
```css
body {
height: 180vh;
background: lightblue;
position: relative;
}
div{
height: 30px;
width: 140px;
border: solid 2px gray;
background: white;
position: absolute;
top: 100%;
}
```
```html
<div></div>
```
|
First you need to make sure that the `position` property of the body is `relative`, `absolute`, or `fixed`.
You can then set `position: absolute` on the element to position it relative to the nearest ancestor that has one of the three `position` values mentioned above.
Finally, you can set your `top`, `left`, `right`, and `bottom` properties after that.
```css
body {
height: 180vh;
position: relative;
}
#down {
position: absolute;
  bottom: 25px; /* or a percentage, e.g. bottom: 10% */
}
```
```html
<div id="down">Here</div>
```
|
57,865,653
|
I will try to make my question more clear.
Because I used the setting below on the HTML body, I need to scroll down to get to the bottom of the page.
```
body { height: 180vh;}
```
And I want to set a div's `top` property relative to the document height, so that I can control its position at a place only visible when I scroll down. I prefer to set it as a percentage value so that the code adapts to different device sizes.
But by default the `top` property is relative to the viewport, so I cannot achieve this by setting a `top` value.
Is there a way to do what I want, even without the `top` property?
|
2019/09/10
|
[
"https://Stackoverflow.com/questions/57865653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9940069/"
] |
If you give the body a `position: relative` and the div a `position: absolute`, you can set the `top` property as a percentage, where `top: 100%` will position it at the bottom of the page:
```css
body {
height: 180vh;
background: lightblue;
position: relative;
}
div{
height: 30px;
width: 140px;
border: solid 2px gray;
background: white;
position: absolute;
top: 100%;
}
```
```html
<div></div>
```
|
If I understand your question correctly, there are two ways:
* use `margin-top`
* use `position: absolute` and `top`
code **snippet** below:
```css
body {
height: 180vh;
}
.myDiv1 {
margin-top: 110vh;
height: 100px;
width: 100%;
background: lightblue;
}
.myDiv2 {
position: absolute;
top: 150vh;
height: 100px;
width: 98%;
background: lightpink;
}
```
```html
<body>
<div class='myDiv1'></div>
<div class='myDiv2'></div>
</body>
```
|
57,865,653
|
I will try to make my question more clear.
Because I used the setting below on the HTML body, I need to scroll down to get to the bottom of the page.
```
body { height: 180vh;}
```
And I want to set a div's `top` property relative to the document height, so that I can control its position at a place only visible when I scroll down. I prefer to set it as a percentage value so that the code adapts to different device sizes.
But by default the `top` property is relative to the viewport, so I cannot achieve this by setting a `top` value.
Is there a way to do what I want, even without the `top` property?
|
2019/09/10
|
[
"https://Stackoverflow.com/questions/57865653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9940069/"
] |
If I understand your question correctly, there are two ways:
* use `margin-top`
* use `position: absolute` and `top`
code **snippet** below:
```css
body {
height: 180vh;
}
.myDiv1 {
margin-top: 110vh;
height: 100px;
width: 100%;
background: lightblue;
}
.myDiv2 {
position: absolute;
top: 150vh;
height: 100px;
width: 98%;
background: lightpink;
}
```
```html
<body>
<div class='myDiv1'></div>
<div class='myDiv2'></div>
</body>
```
|
First you need to make sure that the `position` property of the body is `relative`, `absolute`, or `fixed`.
You can then set `position: absolute` on the element to position it relative to the nearest ancestor that has one of the three `position` values mentioned above.
Finally, you can set your `top`, `left`, `right`, and `bottom` properties after that.
```css
body {
height: 180vh;
position: relative;
}
#down {
position: absolute;
  bottom: 25px; /* or a percentage, e.g. bottom: 10% */
}
```
```html
<div id="down">Here</div>
```
|
28,852,736
|
I have the code below for getting a full-screen view of the Excel workbook. But when I minimize and then maximize the workbook, the full-screen feature is automatically disabled. I need full screen under all conditions for this workbook. Please advise.
```
Application.DisplayFullScreen = True
Application.CommandBars("Full Screen").Enabled = False
'Disable the Esc key.
Application.OnKey "{ESC}", ""
```
|
2015/03/04
|
[
"https://Stackoverflow.com/questions/28852736",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3671568/"
] |
The best solution for me: handle the `WindowResize` event in your workbook.
It is not necessary to disable the Esc key in this situation.
```
Private Sub Workbook_WindowResize(ByVal Wn As Window)
Application.DisplayFullScreen = True
End Sub
```
|
Try this.
```
Private Sub Workbook_Open()
'this sub is automatically called on opening the workbook
Application.WindowState = xlMaximized
'the active window is expanded to the biggest size available
End Sub
```
|
66,151,339
|
I need some help with datetime.
Here is my code:
```
import datetime
x = int(input()).datetime.datetime()
print(x.strftime("%B, %d, %Y"))
```
My custom input: 12 25 1990
but I always got an error `ValueError: invalid literal for int() with base 10: '12 25 1990'`
I just want the output to be "December 25, 1990". Can anybody help? Thank you.
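For the date-formatting goal itself, `datetime.strptime` can parse the space-separated input directly instead of `int()`; a minimal sketch (assuming the input is in month-day-year order):

```python
import datetime

raw = "12 25 1990"  # e.g. raw = input()
x = datetime.datetime.strptime(raw, "%m %d %Y")  # parse month day year
print(x.strftime("%B %d, %Y"))  # -> December 25, 1990
```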
|
2021/02/11
|
[
"https://Stackoverflow.com/questions/66151339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15189209/"
] |
I already ran login before and it is supposed to store the credentials. When I reran login it always gave me:
```
2021-02-11 07:20:42,052 - INFO - Instabot version: 0.117.0 Started
Traceback (most recent call last):
File "main.py", line 42, in <module>
insta.login(username=username,password=password)
File "/Users/user/.virtualenvs/v3/lib/python3.8/site-packages/instabot/bot/bot.py", line 443, in login
if self.api.login(**args) is False:
File "/Users/user/.virtualenvs/v3/lib/python3.8/site-packages/instabot/api/api.py", line 240, in login
self.load_uuid_and_cookie(load_cookie=use_cookie, load_uuid=use_uuid)
File "/Users/user/.virtualenvs/v3/lib/python3.8/site-packages/instabot/api/api.py", line 199, in load_uuid_and_cookie
return load_uuid_and_cookie(self, load_uuid=load_uuid, load_cookie=load_cookie)
File "/Users/user/.virtualenvs/v3/lib/python3.8/site-packages/instabot/api/api_login.py", line 354, in load_uuid_and_cookie
self.cookie_dict["urlgen"]
KeyError: 'urlgen'
```
This is what I thought was the feedback for when you are already logged in. I am still not sure why my first login seemed to fail, or why it doesn't give proper feedback.
After deceze's answer I started doubting that the login was successful, and I found [this](https://stackoverflow.com/questions/66000898/instabot-keyerror-urlgen)
I just ended up running:
```
rm -rf config
```
And then the code properly with login:
```
insta = Bot()
insta.login(username=username,password=password)
insta.upload_photo(photo_path,caption ="just try")
```
And everything worked! Thanks deceze for pointing me in the right direction. I hope this detailed explanation helps people with similar issues, since those error codes are not really helpful in finding the true cause.
Currently I have to delete the config folder every time. Not the best solution but works for now. If anybody knows a better way please post it and I will accept the answer
|
I solved it by just deleting the `uuid_and_cookie.json` before `bot.login()`
```
import os
try:
os.remove("config/<your_username>_uuid_and_cookie.json")
except:
pass
```
There's a bug when the file already exists.
|
66,151,339
|
I need some help with datetime.
Here is my code:
```
import datetime
x = int(input()).datetime.datetime()
print(x.strftime("%B, %d, %Y"))
```
My custom input: 12 25 1990
but I always got an error `ValueError: invalid literal for int() with base 10: '12 25 1990'`
I just want the output to be "December 25, 1990". Can anybody help? Thank you.
|
2021/02/11
|
[
"https://Stackoverflow.com/questions/66151339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15189209/"
] |
You can pass the `use_cookie` parameter as `False` in the `login()` method. This parameter controls whether the bot uses cookies so that it does not have to re-login every time it needs to push a request; it defaults to `True`.
```
bot.login(username="",password="",use_cookie=False)
```
|
I already ran login before and it is supposed to store the credentials. When I reran login it always gave me:
```
2021-02-11 07:20:42,052 - INFO - Instabot version: 0.117.0 Started
Traceback (most recent call last):
File "main.py", line 42, in <module>
insta.login(username=username,password=password)
File "/Users/user/.virtualenvs/v3/lib/python3.8/site-packages/instabot/bot/bot.py", line 443, in login
if self.api.login(**args) is False:
File "/Users/user/.virtualenvs/v3/lib/python3.8/site-packages/instabot/api/api.py", line 240, in login
self.load_uuid_and_cookie(load_cookie=use_cookie, load_uuid=use_uuid)
File "/Users/user/.virtualenvs/v3/lib/python3.8/site-packages/instabot/api/api.py", line 199, in load_uuid_and_cookie
return load_uuid_and_cookie(self, load_uuid=load_uuid, load_cookie=load_cookie)
File "/Users/user/.virtualenvs/v3/lib/python3.8/site-packages/instabot/api/api_login.py", line 354, in load_uuid_and_cookie
self.cookie_dict["urlgen"]
KeyError: 'urlgen'
```
This is what I thought was the feedback for when you are already logged in. I am still not sure why my first login seemed to fail, or why it doesn't give proper feedback.
After deceze's answer I started doubting that the login was successful, and I found [this](https://stackoverflow.com/questions/66000898/instabot-keyerror-urlgen)
I just ended up running:
```
rm -rf config
```
And then the code properly with login:
```
insta = Bot()
insta.login(username=username,password=password)
insta.upload_photo(photo_path,caption ="just try")
```
And everything worked! Thanks deceze for pointing me in the right direction. I hope this detailed explanation helps people with similar issues, since those error codes are not really helpful in finding the true cause.
Currently I have to delete the config folder every time. Not the best solution but works for now. If anybody knows a better way please post it and I will accept the answer
|
66,151,339
|
I need some help with datetime.
Here is my code:
```
import datetime
x = int(input()).datetime.datetime()
print(x.strftime("%B, %d, %Y"))
```
My custom input: 12 25 1990
but I always got an error `ValueError: invalid literal for int() with base 10: '12 25 1990'`
I just want the output to be "December 25, 1990". Can anybody help? Thank you.
|
2021/02/11
|
[
"https://Stackoverflow.com/questions/66151339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15189209/"
] |
You can pass the `use_cookie` parameter as `False` in the `login()` method. This parameter controls whether the bot uses cookies so that it does not have to re-login every time it needs to push a request; it defaults to `True`.
```
bot.login(username="",password="",use_cookie=False)
```
|
I solved it by just deleting the `uuid_and_cookie.json` before `bot.login()`
```
import os
try:
os.remove("config/<your_username>_uuid_and_cookie.json")
except:
pass
```
There's a bug when the file already exists.
|
68,250,923
|
I'm getting data from Wordpress, which contains HTML with inline styles. Before I insert this HTML into my React website, I would like to remove all inline styles. For example, I get something like:
```
<div style='height: 500px'>
<img style='height: 300px; width: 250px'/>
... more elements with inline styles
</div>
```
I'm looking for a clean way of doing this, because the other solutions I've tried on SO use jQuery and other JS methods that would not work, since I'm using React to build my website.
|
2021/07/05
|
[
"https://Stackoverflow.com/questions/68250923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16108670/"
] |
You can select all the HTML elements using `document.querySelectorAll('*')`, iterate through each one of them, and remove the `style` attribute using `removeAttribute`.
```js
const htmlString = `<div style='height: 500px'>
<img style='height: 300px; width: 250px'/>
<p> text</p>
<div style="color: red">Red text</div>
... more elements with inline styles
</div>`;
const htmlNode = document.createElement('div');
htmlNode.innerHTML = htmlString;
htmlNode.querySelectorAll('*').forEach(function(node) {
node.removeAttribute('style');
});
console.log(htmlNode.innerHTML);
```
|
You can find all tags in the body, then use `removeAttribute` to remove the `style` attribute.
```js
function remove(){
var allBodyTag = document.body.getElementsByTagName("*");
for (var i = 0; i < allBodyTag.length; i++) {
allBodyTag[i].removeAttribute("style");
}
}
```
```html
<button onClick="remove()">Remove </button>
<div style='height: 500px'>
This is text
<img style='height: 50px; width: 250px'/>
</div>
```
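If the HTML can be cleaned before it ever reaches the browser (e.g. on the server that proxies the WordPress content), the same idea works outside the DOM as well. A minimal Python sketch using the standard library's `html.parser` (class and function names are illustrative; valueless attributes are not handled in this sketch):

```python
from html.parser import HTMLParser

class StyleStripper(HTMLParser):
    """Re-serializes HTML while dropping every inline style attribute."""

    def __init__(self):
        super().__init__()
        self.out = []

    def _open(self, tag, attrs, self_closing):
        # keep every attribute except style
        kept = " ".join(f'{k}="{v}"' for k, v in attrs if k != "style")
        end = "/>" if self_closing else ">"
        self.out.append(f"<{tag}{' ' + kept if kept else ''}{end}")

    def handle_starttag(self, tag, attrs):
        self._open(tag, attrs, self_closing=False)

    def handle_startendtag(self, tag, attrs):
        self._open(tag, attrs, self_closing=True)

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def strip_styles(html: str) -> str:
    parser = StyleStripper()
    parser.feed(html)
    parser.close()
    return "".join(parser.out)

print(strip_styles('<div style="height: 500px"><img style="height: 300px"/>hi</div>'))
# -> <div><img/>hi</div>
```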
|
72,520,369
|
I want to be able to quickly figure out how many Deployments in a namespace don't have a container named "main".
This is as close as I have got so far, using jq, which gives me a list of all container names:
```bash
kubectl get deploy -o json | jq '.items[].spec.template.spec.containers[].name'
"main"
"main"
"healthchecker"
"main"
"main"
"service"
"main"
"main"
```
The problem with that is, I can't see which containers belong to which Deployments.
|
2022/06/06
|
[
"https://Stackoverflow.com/questions/72520369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11272609/"
] |
The below command would print the deployment name and the container names. `grep -v` would filter out whatever you need to remove.
```
kubectl get deployment -o custom-columns='"DEPLOYMENT-NAME":.metadata.name,"CONTAINER-NAME":.spec.template.spec.containers[*].name'
DEPLOYMENT-NAME CONTAINER-NAME
foo httpd
foobar nginx
foobar007 nginx
foobar123 nginx
zoo nginx,main
zoo1 busybox,main
```
The above command may be further modified to trim the output header.
```
kubectl get deployment --no-headers -o custom-columns='"":.metadata.name,"":.spec.template.spec.containers[*].name'
foo httpd
foobar nginx
foobar007 nginx
foobar123 nginx
zoo nginx,main
zoo1 busybox,main
```
|
You need to include `.metadata.name`:
```
kubectl get deploy -o json |
jq -r '.items[] | "\(.metadata.name) \(.spec.template.spec.containers[].name)"'
```
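If you prefer to post-process the JSON outside of jq, the same filter is straightforward in any language; here is a minimal Python sketch over `kubectl get deploy -o json` output (the sample data and variable names are illustrative — in practice `raw` would come from running kubectl via `subprocess`):

```python
import json

# Stand-in for: kubectl get deploy -o json
raw = """
{"items": [
  {"metadata": {"name": "foo"},
   "spec": {"template": {"spec": {"containers": [{"name": "main"}]}}}},
  {"metadata": {"name": "bar"},
   "spec": {"template": {"spec": {"containers": [{"name": "nginx"}]}}}}
]}
"""

deploys = json.loads(raw)["items"]
# Deployments whose pod template has no container named "main"
missing_main = [
    d["metadata"]["name"]
    for d in deploys
    if "main" not in {c["name"] for c in d["spec"]["template"]["spec"]["containers"]}
]
print(missing_main)  # -> ['bar']
```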
|
236,983
|
I would like the solutions of `FindRoot` (using `NIntegrate` under it) to be drawn in different colours by `Plot`, like in the last plot but without defining a separate function for each element in the solution list of `FindRoot`:
```
Clear[w, x, y, z, plt1, plt2, plt3, w1, w2]
w[x_?NumericQ] := {y, z} /.
FindRoot[{x*Exp[y] - y,
x*(Exp[z] - z)}, {{y, 0.45, 0, 1}, {z, 0.52, 0, 1}}];
plt1 = Plot[w[x], {x, 0, 1}, Evaluated -> True];
plt2 = Plot[Evaluate[w[x]], {x, 0, 1}];
w1[x_?NumericQ] := w[x][[1]]
w2[x_?NumericQ] := w[x][[2]]
plt3 = Plot[{w1[x], w2[x]}, {x, 0, 1}];
GraphicsRow[{plt1, plt2, plt3}]
```
Is there some simple plot option that does this? `Evaluate` as in [this question](https://mathematica.stackexchange.com/questions/1731/plot-draws-list-of-curves-in-same-color-when-not-using-evaluate) does not seem to work for me.
Mathematica 11.3 on Ubuntu 18.04.
|
2020/12/22
|
[
"https://mathematica.stackexchange.com/questions/236983",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/6211/"
] |
This *is* a case for [`ReplaceList`](https://reference.wolfram.com/language/ref/ReplaceList.html):
```
expr = Exp[-x^2]^(Log[a + 3]) Sin[x] y^(x) Sin[Cos[b]];
pattern = ___ base_^pwr_ Sin[arg_] :> {base, pwr, arg};
ReplaceList[expr, pattern]
```
>
>
> ```
> {{E^-x^2, Log[3 + a], x},
> {E^-x^2, Log[3 + a], Cos[b]},
> {y, x, x},
> {y, x, Cos[b]}}
>
> ```
>
>
This also works:
```
Map[Flatten] @ Tuples @ Values @
GroupBy[List @@ expr, Head, ReplaceAll[ {Sin[x_] :> x, Power[a_, b_] :> {a, b}}]]
```
>
> same result
>
>
>
And this:
```
DeleteDuplicates @ SequenceCases[List @@ expr,
{OrderlessPatternSequence[Power[a_, b_], Sin[c_], ___]} :> {a, b, c},
Overlaps -> All]
```
>
> same result
>
>
>
|
This is only a partial answer to explain the result that you see.
```
expr1 = Exp[-x^2]^(Log[a + 3])*Sin[x]*y^(x)*Sin[Cos[b]];
expr1 /. base_^pwr_*Sin[arg_] -> {base, pwr, arg}
(* {E^-x^2 y^x Sin[Cos[b]], y^x Log[3 + a] Sin[Cos[b]], x y^x Sin[Cos[b]]} *)
```
The pattern matched `(E^(-x^2))^(Log[3+a])* Sin[x]` and replaced it with the list `{E^(-x^2), Log[3+a], x}` This gave
```
{E^(-x^2), Log[3 + a], x}*y^x*Sin[Cos[b]]
(* {E^-x^2 y^x Sin[Cos[b]], y^x Log[3 + a] Sin[Cos[b]], x y^x Sin[Cos[b]]} *)
```
If you want it to continue until there are no more matches you would need to use [`ReplaceRepeated`](https://reference.wolfram.com/language/ref/ReplaceRepeated.html)
```
expr1 //. base_^pwr_*Sin[arg_] -> {base, pwr, arg}
(* {{E y^x, -x^2 y^x, y^x Cos[b]}, {y Log[3 + a], x Log[3 + a],
Cos[b] Log[3 + a]}, {x y, x^2, x Cos[b]}} *)
```
I would guess that instead you might want to use [`Cases`](https://reference.wolfram.com/language/ref/Cases.html); however, you have not clearly defined what your desired result should be. And you should be looking at the [`FullForm`](https://reference.wolfram.com/language/ref/FullForm.html) of `expr1`
|
39,725,968
|
This is for a Linux system, in C. It involves network programming. It is for a file transfer program.
I've been having a problem where this piece of code works unpredictably: it is either completely successful, or the while loop in the client never ends. I discovered that this is because the fileLength variable would sometimes be a huge (negative or positive) value, which I attributed to some mistake with ntohl. When I put in a print statement, it seemed to work perfectly, without error.
Here is the client code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
//socket file descriptor
int sockfd;
if (argc != 2) {
fprintf (stderr, "usage: client hostname\n");
exit(1);
}
//...creates socket file descriptor, connects to server
//create buffer for filename
char name[256];
//receive filename into name buffer, bytes received stored in numbytes
if((numbytes = recv (sockfd, name, 255 * sizeof (char), 0)) == -1) {
perror ("recv");
exit(1);
}
//Null terminator after the filename
name[numbytes] = '\0';
//length of the file to receive from server
long fl;
memset(&fl, 0, sizeof fl);
//receive filelength from server
if((numbytes = recv (sockfd, &fl, sizeof(long), 0)) == -1) {
perror ("recv");
exit(1);
}
//convert filelength to host format
long fileLength = ntohl(fl);
//check to make sure file does not exist, so that the application will not overwrite existing files
if (fopen (name, "r") != NULL) {
fprintf (stderr, "file already present in client directory\n");
exit(1);
}
//open file called name in write mode
FILE *filefd = fopen (name, "wb");
//variable stating amount of data received
long bytesTransferred = 0;
//Until the file is received, keep receiving
while (bytesTransferred < fileLength) {
printf("transferred: %ld\ntotal: %ld\n", bytesTransferred, fileLength);
//set counter at beginning of unwritten segment
fseek(filefd, bytesTransferred, SEEK_SET);
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//receive segment from server
if ((numbytes = recv (sockfd, buf, sizeof buf, 0)) == -1) {
perror ("recv");
exit(1);
}
//first byte of buffer, stating number of bytes of data in received segment
//converting from char to short requires adding 128, since the char ranges from -128 to 127
short bufLength = buf[0] + 128;
//write buffer into file, starting after the first byte of the buffer
fwrite (buf + 1, 1, bufLength * sizeof (char), filefd);
//add number of bytes of data received to bytesTransferred
bytesTransferred += bufLength;
}
fclose (filefd);
close (sockfd);
return 0;
}
```
This is the server code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
if (argc != 2) {
fprintf (stderr, "usage: server filename\n");
exit(1);
}
//socket file descriptor, file descriptor for specific client connections
int sockfd, new_fd;
//...get socket file descriptor for sockfd, bind sockfd to predetermined port, listen for incoming connections
//...reaps zombie processes
printf("awaiting connections...\n");
while(1) {
//...accepts any incoming connections, gets file descriptor and assigns to new_fd
if (!fork()) {
//close socket file descriptor, only need file descriptor for specific client connection
close (sockfd);
//open a file for reading
FILE *filefd = fopen (argv[1], "rb");
//send filename to client
if (send (new_fd, argv[1], strlen (argv[1]) * sizeof(char), 0) == -1)
{ perror ("send"); }
//put counter at end of selected file, and find length
fseek (filefd, 0, SEEK_END);
long fileLength = ftell (filefd);
//convert length to network form and send it to client
long fl = htonl(fileLength);
//Are we sure this is sending all the bytes??? TEST
if (send (new_fd, &fl, sizeof fl, 0) == -1)
{ perror ("send"); }
//variable stating amount of data unsent
long len = fileLength;
//Until file is sent, keep sending
while(len > 0) {
printf("remaining: %ld\ntotal: %ld\n", len, fileLength);
//set counter at beginning of unread segment
fseek (filefd, fileLength - len, SEEK_SET);
//length of the segment; 255 unless last segment
short bufLength;
if (len > 255) {
len -= 255;
bufLength = 255;
} else {
bufLength = len;
len = 0;
}
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//Set first byte of buffer as the length of the segment
//converting short to char requires subtracting 128
buf[0] = bufLength - 128;
//read file into the buffer starting after the first byte of the buffer
fread(buf + 1, 1, bufLength * sizeof(char), filefd);
//Send data to client
if (send (new_fd, buf, sizeof buf, 0) == -1)
{ perror ("send"); }
}
fclose (filefd);
close (new_fd);
exit (0);
}
close (new_fd);
}
return 0;
}
```
Note: I've simplified the code a bit, to make it clearer I hope.
Anything beginning with //... represents a bunch of code
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39725968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6887828/"
] |
You seem to be assuming that each `send()` will either transfer the full number of bytes specified or will error out, and that each one will pair perfectly with a `recv()` on the other side, such that the `recv()` receives exactly the number of bytes sent by the `send()` (or errors out), no more and no less. Those are not safe assumptions.
You don't show the code by which you set up the network connection. If you're using a datagram-based protocol (i.e. UDP) then you're more likely to get the send/receive boundary matching that you expect, but you need to account for the possibility that packets will be lost or corrupted. If you're using a stream-based protocol (i.e. TCP) then you don't have to be too concerned with data loss or corruption, but you have no reason at all to expect boundary-matching behavior.
You need at least three things:
* An application-level protocol on top of the network layer. You've got parts of that already, such as in how you transfer the file length first to advise the client how much content to expect, but you need to do the same for all transferred data that is not of a pre-determined, fixed length. Alternatively, invent another means to communicate data boundaries.
* Every `send()` / `write()` that aims to transfer more than one byte must be performed in a loop to accommodate transfers being broken into multiple pieces. The return value tells you how many of the requested bytes were transferred (or at least how many were handed off to the network stack), and if that's fewer than requested you must loop back to try to transfer the rest.
* Every `recv()` / `read()` that aims to transfer more than one byte must be performed in a loop to accommodate transfers being broken into multiple pieces. I recommend structuring that along the same lines as described for `send()`, but you also have the option of receiving data until you see a pre-arranged delimiter. The delimiter-based approach is more complicated, however, because it requires additional buffering on the receiving side.
Without those measures, your server and client can easily get out of sync. Among the possible results of that are that the client interprets part of the file name or part of the file content as the file length.
|
Even though you removed it from that code I'll make an educated guess and assume that you're using TCP or some other stream protocol here. This means that the data that the server sends is a stream of bytes, and the `recv` calls will not necessarily correspond with the `send` calls in the amount of data they get.
It is equally legal for your first `recv` call to just get one byte of data, as it is to get the file name, file size and half of the file.
You say
>
> When I put in a print statement,
>
>
>
but you don't say where. I'll make another educated guess here and guess that you did it on the server before sending the file length. And that happened to shake things enough that the data amounts that were sent on the connection just accidentally happened to match what you were expecting on the client.
You need to define a protocol. Maybe start with a length of the filename, then the filename, then the length of the file. Or always send 256 bytes for the filename regardless of how long it is. Or send the file name as a 0-terminated string and try to figure out the data from that. But you can never assume that just because you called `send` with X bytes that the `recv` call will get X bytes.
|
39,725,968
|
This is for a Linux system, in C. It involves network programming. It is for a file transfer program.
I've been having a problem where this piece of code works unpredictably. It is either completely successful, or the while loop in the client never ends. I discovered that this is because the fileLength variable would sometimes be a huge (negative or positive) value, which I attributed to some mistake with ntohl. When I put in a print statement, it seemed to work perfectly, without error.
Here is the client code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
//socket file descriptor
int sockfd;
if (argc != 2) {
fprintf (stderr, "usage: client hostname\n");
exit(1);
}
//...creates socket file descriptor, connects to server
//create buffer for filename
char name[256];
//receive filename into name buffer, bytes received stored in numbytes
if((numbytes = recv (sockfd, name, 255 * sizeof (char), 0)) == -1) {
perror ("recv");
exit(1);
}
//Null terminator after the filename
name[numbytes] = '\0';
//length of the file to receive from server
long fl;
memset(&fl, 0, sizeof fl);
//receive filelength from server
if((numbytes = recv (sockfd, &fl, sizeof(long), 0)) == -1) {
perror ("recv");
exit(1);
}
//convert filelength to host format
long fileLength = ntohl(fl);
//check to make sure file does not exist, so that the application will not overwrite existing files
if (fopen (name, "r") != NULL) {
fprintf (stderr, "file already present in client directory\n");
exit(1);
}
//open file called name in write mode
FILE *filefd = fopen (name, "wb");
//variable stating amount of data received
long bytesTransferred = 0;
//Until the file is received, keep receiving
while (bytesTransferred < fileLength) {
printf("transferred: %ld\ntotal: %ld\n", bytesTransferred, fileLength);
//set counter at beginning of unwritten segment
fseek(filefd, bytesTransferred, SEEK_SET);
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//receive segment from server
if ((numbytes = recv (sockfd, buf, sizeof buf, 0)) == -1) {
perror ("recv");
exit(1);
}
//first byte of buffer, stating number of bytes of data in received segment
//converting from char to short requires adding 128, since the char ranges from -128 to 127
short bufLength = buf[0] + 128;
//write buffer into file, starting after the first byte of the buffer
fwrite (buf + 1, 1, bufLength * sizeof (char), filefd);
//add number of bytes of data received to bytesTransferred
bytesTransferred += bufLength;
}
fclose (filefd);
close (sockfd);
return 0;
}
```
This is the server code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
if (argc != 2) {
fprintf (stderr, "usage: server filename\n");
exit(1);
}
//socket file descriptor, file descriptor for specific client connections
int sockfd, new_fd;
//...get socket file descriptor for sockfd, bind sockfd to predetermined port, listen for incoming connections
//...reaps zombie processes
printf("awaiting connections...\n");
while(1) {
//...accepts any incoming connections, gets file descriptor and assigns to new_fd
if (!fork()) {
//close socket file descriptor, only need file descriptor for specific client connection
close (sockfd);
//open a file for reading
FILE *filefd = fopen (argv[1], "rb");
//send filename to client
if (send (new_fd, argv[1], strlen (argv[1]) * sizeof(char), 0) == -1)
{ perror ("send"); }
//put counter at end of selected file, and find length
fseek (filefd, 0, SEEK_END);
long fileLength = ftell (filefd);
//convert length to network form and send it to client
long fl = htonl(fileLength);
//Are we sure this is sending all the bytes??? TEST
if (send (new_fd, &fl, sizeof fl, 0) == -1)
{ perror ("send"); }
//variable stating amount of data unsent
long len = fileLength;
//Until file is sent, keep sending
while(len > 0) {
printf("remaining: %ld\ntotal: %ld\n", len, fileLength);
//set counter at beginning of unread segment
fseek (filefd, fileLength - len, SEEK_SET);
//length of the segment; 255 unless last segment
short bufLength;
if (len > 255) {
len -= 255;
bufLength = 255;
} else {
bufLength = len;
len = 0;
}
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//Set first byte of buffer as the length of the segment
//converting short to char requires subtracting 128
buf[0] = bufLength - 128;
//read file into the buffer starting after the first byte of the buffer
fread(buf + 1, 1, bufLength * sizeof(char), filefd);
//Send data to client
if (send (new_fd, buf, sizeof buf, 0) == -1)
{ perror ("send"); }
}
fclose (filefd);
close (new_fd);
exit (0);
}
close (new_fd);
}
return 0;
}
```
Note: I've simplified the code a bit, to make it clearer I hope.
Anything beginning with //... represents a bunch of code
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39725968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6887828/"
] |
Even though you removed it from that code I'll make an educated guess and assume that you're using TCP or some other stream protocol here. This means that the data that the server sends is a stream of bytes, and the `recv` calls will not necessarily correspond with the `send` calls in the amount of data they get.
It is equally legal for your first `recv` call to just get one byte of data, as it is to get the file name, file size and half of the file.
You say
>
> When I put in a print statement,
>
>
>
but you don't say where. I'll make another educated guess here and guess that you did it on the server before sending the file length. And that happened to shake things enough that the data amounts that were sent on the connection just accidentally happened to match what you were expecting on the client.
You need to define a protocol. Maybe start with a length of the filename, then the filename, then the length of the file. Or always send 256 bytes for the filename regardless of how long it is. Or send the file name as a 0-terminated string and try to figure out the data from that. But you can never assume that just because you called `send` with X bytes that the `recv` call will get X bytes.
|
Well, after some testing, I discovered that the issue causing the problem did have something to do with htonl(), though I had still read the data incorrectly in the beginning. It wasn't that htonl() wasn't working at all, but that I didn't realize a 'long' has different lengths depending on system architecture (thanks @tofro). That is to say, the length of a 'long' integer on 32-bit and 64-bit operating systems is 4 bytes and 8 bytes, respectively. And the htonl() function (from arpa/inet.h) is defined for 4-byte integers. I was using a 64-bit OS, which explains why the value was being fudged. I fixed the issue by using the int32\_t type (from stdint.h) to store the file length. So the main issue in this case was not that it was becoming out of sync (I think). But as for everyone's advice towards developing an actual protocol, I think I know exactly what you mean, I definitely understand why it's important, and I'm currently working towards it. Thank you all for all your help.
**EDIT:** Well now that it has been several years, and I know a little more, I know that this explanation doesn't make sense. All that would result from `long` being larger than I expected (8 bytes rather than 4) is that there's some implicit casting going on. I used `sizeof(long)` in the original code rather than hardcoding it to assume 4 bytes, so that particular (faulty) assumption of mine shouldn't have produced the bug I saw.
The problem is almost certainly what everyone else said: one call to `recv` was not getting all of the bytes representing the file length. At the time I doubted this was the real cause of the behaviour I saw, because the file name (of arbitrary length) I was sending through was never partially sent (i.e. the client always created a file of the correct filename). Only the file length was messed up. My hypothesis at the time was that `recv` mostly respected message boundaries, and while `recv` *can possibly* return only part of the data, it was more likely that it was delivering it all and there was another bug in my code. I now know this isn't true at all, and [TCP doesn't care](https://stackoverflow.com/questions/51661519/what-is-meant-by-record-or-data-boundaries-in-the-sense-of-tcp-udp-protocol).
I'm a little curious as to why I didn't see other unexpected behaviour as well (e.g. the file name being wrong on the receiving end), and I wanted to investigate further, but despite managing to find the files, I can't seem to reproduce the problem now. I suppose I'll never know, but at least I understand the main issue here.
|
39,725,968
|
This is for a Linux system, in C. It involves network programming. It is for a file transfer program.
I've been having a problem where this piece of code works unpredictably. It is either completely successful, or the while loop in the client never ends. I discovered that this is because the fileLength variable would sometimes be a huge (negative or positive) value, which I attributed to some mistake with ntohl. When I put in a print statement, it seemed to work perfectly, without error.
Here is the client code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
//socket file descriptor
int sockfd;
if (argc != 2) {
fprintf (stderr, "usage: client hostname\n");
exit(1);
}
//...creates socket file descriptor, connects to server
//create buffer for filename
char name[256];
//receive filename into name buffer, bytes received stored in numbytes
if((numbytes = recv (sockfd, name, 255 * sizeof (char), 0)) == -1) {
perror ("recv");
exit(1);
}
//Null terminator after the filename
name[numbytes] = '\0';
//length of the file to receive from server
long fl;
memset(&fl, 0, sizeof fl);
//receive filelength from server
if((numbytes = recv (sockfd, &fl, sizeof(long), 0)) == -1) {
perror ("recv");
exit(1);
}
//convert filelength to host format
long fileLength = ntohl(fl);
//check to make sure file does not exist, so that the application will not overwrite existing files
if (fopen (name, "r") != NULL) {
fprintf (stderr, "file already present in client directory\n");
exit(1);
}
//open file called name in write mode
FILE *filefd = fopen (name, "wb");
//variable stating amount of data received
long bytesTransferred = 0;
//Until the file is received, keep receiving
while (bytesTransferred < fileLength) {
printf("transferred: %ld\ntotal: %ld\n", bytesTransferred, fileLength);
//set counter at beginning of unwritten segment
fseek(filefd, bytesTransferred, SEEK_SET);
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//receive segment from server
if ((numbytes = recv (sockfd, buf, sizeof buf, 0)) == -1) {
perror ("recv");
exit(1);
}
//first byte of buffer, stating number of bytes of data in received segment
//converting from char to short requires adding 128, since the char ranges from -128 to 127
short bufLength = buf[0] + 128;
//write buffer into file, starting after the first byte of the buffer
fwrite (buf + 1, 1, bufLength * sizeof (char), filefd);
//add number of bytes of data received to bytesTransferred
bytesTransferred += bufLength;
}
fclose (filefd);
close (sockfd);
return 0;
}
```
This is the server code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
if (argc != 2) {
fprintf (stderr, "usage: server filename\n");
exit(1);
}
//socket file descriptor, file descriptor for specific client connections
int sockfd, new_fd;
//...get socket file descriptor for sockfd, bind sockfd to predetermined port, listen for incoming connections
//...reaps zombie processes
printf("awaiting connections...\n");
while(1) {
//...accepts any incoming connections, gets file descriptor and assigns to new_fd
if (!fork()) {
//close socket file descriptor, only need file descriptor for specific client connection
close (sockfd);
//open a file for reading
FILE *filefd = fopen (argv[1], "rb");
//send filename to client
if (send (new_fd, argv[1], strlen (argv[1]) * sizeof(char), 0) == -1)
{ perror ("send"); }
//put counter at end of selected file, and find length
fseek (filefd, 0, SEEK_END);
long fileLength = ftell (filefd);
//convert length to network form and send it to client
long fl = htonl(fileLength);
//Are we sure this is sending all the bytes??? TEST
if (send (new_fd, &fl, sizeof fl, 0) == -1)
{ perror ("send"); }
//variable stating amount of data unsent
long len = fileLength;
//Until file is sent, keep sending
while(len > 0) {
printf("remaining: %ld\ntotal: %ld\n", len, fileLength);
//set counter at beginning of unread segment
fseek (filefd, fileLength - len, SEEK_SET);
//length of the segment; 255 unless last segment
short bufLength;
if (len > 255) {
len -= 255;
bufLength = 255;
} else {
bufLength = len;
len = 0;
}
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//Set first byte of buffer as the length of the segment
//converting short to char requires subtracting 128
buf[0] = bufLength - 128;
//read file into the buffer starting after the first byte of the buffer
fread(buf + 1, 1, bufLength * sizeof(char), filefd);
//Send data to client
if (send (new_fd, buf, sizeof buf, 0) == -1)
{ perror ("send"); }
}
fclose (filefd);
close (new_fd);
exit (0);
}
close (new_fd);
}
return 0;
}
```
Note: I've simplified the code a bit, to make it clearer I hope.
Anything beginning with //... represents a bunch of code
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39725968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6887828/"
] |
You seem to be assuming that each `send()` will either transfer the full number of bytes specified or will error out, and that each one will pair perfectly with a `recv()` on the other side, such that the `recv()` receives exactly the number of bytes sent by the `send()` (or errors out), no more and no less. Those are not safe assumptions.
You don't show the code by which you set up the network connection. If you're using a datagram-based protocol (i.e. UDP) then you're more likely to get the send/receive boundary matching that you expect, but you need to account for the possibility that packets will be lost or corrupted. If you're using a stream-based protocol (i.e. TCP) then you don't have to be too concerned with data loss or corruption, but you have no reason at all to expect boundary-matching behavior.
You need at least three things:
* An application-level protocol on top of the network layer. You've got parts of that already, such as in how you transfer the file length first to advise the client how much content to expect, but you need to do the same for all transferred data that is not of a pre-determined, fixed length. Alternatively, invent another means to communicate data boundaries.
* Every `send()` / `write()` that aims to transfer more than one byte must be performed in a loop to accommodate transfers being broken into multiple pieces. The return value tells you how many of the requested bytes were transferred (or at least how many were handed off to the network stack), and if that's fewer than requested you must loop back to try to transfer the rest.
* Every `recv()` / `read()` that aims to transfer more than one byte must be performed in a loop to accommodate transfers being broken into multiple pieces. I recommend structuring that along the same lines as described for `send()`, but you also have the option of receiving data until you see a pre-arranged delimiter. The delimiter-based approach is more complicated, however, because it requires additional buffering on the receiving side.
Without those measures, your server and client can easily get out of sync. Among the possible results of that are that the client interprets part of the file name or part of the file content as the file length.
|
Well, after some testing, I discovered that the issue causing the problem did have something to do with htonl(), though I had still read the data incorrectly in the beginning. It wasn't that htonl() wasn't working at all, but that I didn't realize a 'long' has different lengths depending on system architecture (thanks @tofro). That is to say, the length of a 'long' integer on 32-bit and 64-bit operating systems is 4 bytes and 8 bytes, respectively. And the htonl() function (from arpa/inet.h) is defined for 4-byte integers. I was using a 64-bit OS, which explains why the value was being fudged. I fixed the issue by using the int32\_t type (from stdint.h) to store the file length. So the main issue in this case was not that it was becoming out of sync (I think). But as for everyone's advice towards developing an actual protocol, I think I know exactly what you mean, I definitely understand why it's important, and I'm currently working towards it. Thank you all for all your help.
**EDIT:** Well now that it has been several years, and I know a little more, I know that this explanation doesn't make sense. All that would result from `long` being larger than I expected (8 bytes rather than 4) is that there's some implicit casting going on. I used `sizeof(long)` in the original code rather than hardcoding it to assume 4 bytes, so that particular (faulty) assumption of mine shouldn't have produced the bug I saw.
The problem is almost certainly what everyone else said: one call to `recv` was not getting all of the bytes representing the file length. At the time I doubted this was the real cause of the behaviour I saw, because the file name (of arbitrary length) I was sending through was never partially sent (i.e. the client always created a file of the correct filename). Only the file length was messed up. My hypothesis at the time was that `recv` mostly respected message boundaries, and while `recv` *can possibly* return only part of the data, it was more likely that it was delivering it all and there was another bug in my code. I now know this isn't true at all, and [TCP doesn't care](https://stackoverflow.com/questions/51661519/what-is-meant-by-record-or-data-boundaries-in-the-sense-of-tcp-udp-protocol).
I'm a little curious as to why I didn't see other unexpected behaviour as well (e.g. the file name being wrong on the receiving end), and I wanted to investigate further, but despite managing to find the files, I can't seem to reproduce the problem now. I suppose I'll never know, but at least I understand the main issue here.
|
39,725,968
|
This is for a Linux system, in C. It involves network programming. It is for a file transfer program.
I've been having a problem where this piece of code works unpredictably. It is either completely successful, or the while loop in the client never ends. I discovered that this is because the fileLength variable would sometimes be a huge (negative or positive) value, which I attributed to some mistake with ntohl. When I put in a print statement, it seemed to work perfectly, without error.
Here is the client code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
//socket file descriptor
int sockfd;
if (argc != 2) {
fprintf (stderr, "usage: client hostname\n");
exit(1);
}
//...creates socket file descriptor, connects to server
//create buffer for filename
char name[256];
//receive filename into name buffer, bytes received stored in numbytes
if((numbytes = recv (sockfd, name, 255 * sizeof (char), 0)) == -1) {
perror ("recv");
exit(1);
}
//Null terminator after the filename
name[numbytes] = '\0';
//length of the file to receive from server
long fl;
memset(&fl, 0, sizeof fl);
//receive filelength from server
if((numbytes = recv (sockfd, &fl, sizeof(long), 0)) == -1) {
perror ("recv");
exit(1);
}
//convert filelength to host format
long fileLength = ntohl(fl);
//check to make sure file does not exist, so that the application will not overwrite existing files
if (fopen (name, "r") != NULL) {
fprintf (stderr, "file already present in client directory\n");
exit(1);
}
//open file called name in write mode
FILE *filefd = fopen (name, "wb");
//variable stating amount of data received
long bytesTransferred = 0;
//Until the file is received, keep receiving
while (bytesTransferred < fileLength) {
printf("transferred: %ld\ntotal: %ld\n", bytesTransferred, fileLength);
//set counter at beginning of unwritten segment
fseek(filefd, bytesTransferred, SEEK_SET);
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//receive segment from server
if ((numbytes = recv (sockfd, buf, sizeof buf, 0)) == -1) {
perror ("recv");
exit(1);
}
//first byte of buffer, stating number of bytes of data in received segment
//converting from char to short requires adding 128, since the char ranges from -128 to 127
short bufLength = buf[0] + 128;
//write buffer into file, starting after the first byte of the buffer
fwrite (buf + 1, 1, bufLength * sizeof (char), filefd);
//add number of bytes of data received to bytesTransferred
bytesTransferred += bufLength;
}
fclose (filefd);
close (sockfd);
return 0;
}
```
This is the server code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
if (argc != 2) {
fprintf (stderr, "usage: server filename\n");
exit(1);
}
//socket file descriptor, file descriptor for specific client connections
int sockfd, new_fd;
//...get socket file descriptor for sockfd, bind sockfd to predetermined port, listen for incoming connections
//...reaps zombie processes
printf("awaiting connections...\n");
while(1) {
//...accepts any incoming connections, gets file descriptor and assigns to new_fd
if (!fork()) {
//close socket file descriptor, only need file descriptor for specific client connection
close (sockfd);
//open a file for reading
FILE *filefd = fopen (argv[1], "rb");
//send filename to client
if (send (new_fd, argv[1], strlen (argv[1]) * sizeof(char), 0) == -1)
{ perror ("send"); }
//put counter at end of selected file, and find length
fseek (filefd, 0, SEEK_END);
long fileLength = ftell (filefd);
//convert length to network form and send it to client
long fl = htonl(fileLength);
//Are we sure this is sending all the bytes??? TEST
if (send (new_fd, &fl, sizeof fl, 0) == -1)
{ perror ("send"); }
//variable stating amount of data unsent
long len = fileLength;
//Until file is sent, keep sending
while(len > 0) {
printf("remaining: %ld\ntotal: %ld\n", len, fileLength);
//set counter at beginning of unread segment
fseek (filefd, fileLength - len, SEEK_SET);
//length of the segment; 255 unless last segment
short bufLength;
if (len > 255) {
len -= 255;
bufLength = 255;
} else {
bufLength = len;
len = 0;
}
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//Set first byte of buffer as the length of the segment
//converting short to char requires subtracting 128
buf[0] = bufLength - 128;
//read file into the buffer starting after the first byte of the buffer
fread(buf + 1, 1, bufLength * sizeof(char), filefd);
//Send data to client
if (send (new_fd, buf, sizeof buf, 0) == -1)
{ perror ("send"); }
}
fclose (filefd);
close (new_fd);
exit (0);
}
close (new_fd);
}
return 0;
}
```
Note: I've simplified the code a bit, to make it clearer I hope.
Anything beginning with //... represents a bunch of code
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39725968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6887828/"
] |
You seem to be assuming that each `send()` will either transfer the full number of bytes specified or will error out, and that each one will pair perfectly with a `recv()` on the other side, such that the `recv()` receives exactly the number of bytes sent by the `send()` (or errors out), no more and no less. Those are not safe assumptions.
You don't show the code by which you set up the network connection. If you're using a datagram-based protocol (i.e. UDP) then you're more likely to get the send/receive boundary matching that you expect, but you need to account for the possibility that packets will be lost or corrupted. If you're using a stream-based protocol (i.e. TCP) then you don't have to be too concerned with data loss or corruption, but you have no reason at all to expect boundary-matching behavior.
You need at least three things:
* An application-level protocol on top of the network layer. You've got parts of that already, such as in how you transfer the file length first to advise the client how much content to expect, but you need to do the same for all transferred data that is not of a pre-determined, fixed length. Alternatively, invent another means to communicate data boundaries.
* Every `send()` / `write()` that aims to transfer more than one byte must be performed in a loop to accommodate transfers being broken into multiple pieces. The return value tells you how many of the requested bytes were transferred (or at least how many were handed off to the network stack), and if that's fewer than requested you must loop back to try to transfer the rest.
* Every `recv()` / `read()` that aims to transfer more than one byte must be performed in a loop to accommodate transfers being broken into multiple pieces. I recommend structuring that along the same lines as described for `send()`, but you also have the option of receiving data until you see a pre-arranged delimiter. The delimiter-based approach is more complicated, however, because it requires additional buffering on the receiving side.
Without those measures, your server and client can easily get out of sync. Among the possible results of that are that the client interprets part of the file name or part of the file content as the file length.
|
I believe the issue is actually a compound of everything you and others have said. In the server code you send the name of the file like this:
```
send (new_fd, argv[1], strlen (argv[1]) * sizeof(char), 0);
```
and receive it in the client like this:
```
recv (sockfd, name, 255 * sizeof (char), 0);
```
This will cause an issue when the filename length is anything less than 255. Since TCP is a stream protocol (as mentioned by @Art), there are no real boundaries between the `send`s and `recv`s, which can cause you to receive data in odd places where you are not expecting it.
My recommendation would be to first send the length of the filename, eg:
```
// server
uint32_t namelen = htonl(strlen(argv[1]));
send (new_fd, &namelen, 4, 0);
send (new_fd, argv[1], strlen (argv[1]) * sizeof(char), 0);
// client
uint32_t namelen;
recv (sockfd, &namelen, 4, 0);
namelen = ntohl(namelen);
recv (sockfd, name, namelen * sizeof (char), 0);
```
This will ensure that you are always aware of exactly how long your filename is and makes sure that you aren't accidentally reading your file length from somewhere in the middle of your file (which is what I expect is happening currently).
edit.
Also, be cautious when you are sending multi-byte numbers. If you use `sizeof` on them, the two ends may disagree about how many bytes to send and receive, since types like `long` vary in size across platforms. This is why I hard-coded the sizes in the `send` and `recv` for the name length, so that there is no confusion on either side.
|
39,725,968
|
This is for a Linux system, in C. It involves network programming. It is for a file transfer program.
I've been having a problem where this piece of code works unpredictably. It is either completely successful, or the while loop in the client never ends. I discovered that this is because the fileLength variable would sometimes be a huge (negative or positive) value, which I attributed to some mistake with ntohl. When I put in a print statement, it seemed to work perfectly, without error.
Here is the client code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
//socket file descriptor
int sockfd;
if (argc != 2) {
fprintf (stderr, "usage: client hostname\n");
exit(1);
}
//...creates socket file descriptor, connects to server
//create buffer for filename
char name[256];
//recieve filename into name buffer, bytes recieved stored in numbytes
if((numbytes = recv (sockfd, name, 255 * sizeof (char), 0)) == -1) {
perror ("recv");
exit(1);
}
//Null terminator after the filename
name[numbytes] = '\0';
//length of the file to recieve from server
long fl;
memset(&fl, 0, sizeof fl);
//receive filelength from server
if((numbytes = recv (sockfd, &fl, sizeof(long), 0)) == -1) {
perror ("recv");
exit(1);
}
//convert filelength to host format
long fileLength = ntohl(fl);
//check to make sure file does not exist, so that the application will not overwrite existing files
if (fopen (name, "r") != NULL) {
fprintf (stderr, "file already present in client directory\n");
exit(1);
}
//open file called name in write mode
FILE *filefd = fopen (name, "wb");
//variable stating amount of data received
long bytesTransferred = 0;
//Until the file is received, keep receiving
while (bytesTransferred < fileLength) {
printf("transferred: %d\ntotal: %d\n", bytesTransferred, fileLength);
//set counter at beginning of unwritten segment
fseek(filefd, bytesTransferred, SEEK_SET);
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//receive segment from server
if ((numbytes = recv (sockfd, buf, sizeof buf, 0)) == -1) {
perror ("recv");
exit(1);
}
//first byte of buffer, stating number of bytes of data in received segment
//converting from char to short requires adding 128, since the char ranges from -128 to 127
short bufLength = buf[0] + 128;
//write buffer into file, starting after the first byte of the buffer
fwrite (buf + 1, 1, bufLength * sizeof (char), filefd);
//add number of bytes of data received to bytesTransferred
bytesTransferred += bufLength;
}
fclose (filefd);
close (sockfd);
return 0;
}
```
This is the server code:
```
//...here includes relevant header files
int main (int argc, char *argv[]) {
if (argc != 2) {
fprintf (stderr, "usage: server filename\n");
exit(1);
}
//socket file descriptor, file descriptor for specific client connections
int sockfd, new_fd;
//...get socket file descriptor for sockfd, bind sockfd to predetermined port, listen for incoming connections
//...reaps zombie processes
printf("awaiting connections...\n");
while(1) {
//...accepts any incoming connections, gets file descriptor and assigns to new_fd
if (!fork()) {
//close socket file descriptor, only need file descriptor for specific client connection
close (sockfd);
//open a file for reading
FILE *filefd = fopen (argv[1], "rb");
//send filename to client
if (send (new_fd, argv[1], strlen (argv[1]) * sizeof(char), 0) == -1)
{ perror ("send"); }
//put counter at end of selected file, and find length
fseek (filefd, 0, SEEK_END);
long fileLength = ftell (filefd);
//convert length to network form and send it to client
long fl = htonl(fileLength);
//Are we sure this is sending all the bytes??? TEST
if (send (new_fd, &fl, sizeof fl, 0) == -1)
{ perror ("send"); }
//variable stating amount of data unsent
long len = fileLength;
//Until file is sent, keep sending
while(len > 0) {
printf("remaining: %d\ntotal: %d\n", len, fileLength);
//set counter at beginning of unread segment
fseek (filefd, fileLength - len, SEEK_SET);
//length of the segment; 255 unless last segment
short bufLength;
if (len > 255) {
len -= 255;
bufLength = 255;
} else {
bufLength = len;
len = 0;
}
//buffer of 256 bytes; 1 byte for byte-length of segment, 255 bytes of data
char buf[256];
//Set first byte of buffer as the length of the segment
//converting short to char requires subtracting 128
buf[0] = bufLength - 128;
//read file into the buffer starting after the first byte of the buffer
fread(buf + 1, 1, bufLength * sizeof(char), filefd);
//Send data to client
if (send (new_fd, buf, sizeof buf, 0) == -1)
{ perror ("send"); }
}
fclose (filefd);
close (new_fd);
exit (0);
}
close (new_fd);
}
return 0;
}
```
Note: I've simplified the code a bit, to make it clearer I hope.
Anything beginning with //... represents a bunch of code
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39725968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6887828/"
] |
I believe the issue is actually a compound of everything you and others have said. In the server code you send the name of the file like this:
```
send (new_fd, argv[1], strlen (argv[1]) * sizeof(char), 0);
```
and receive it in the client like this:
```
recv (sockfd, name, 255 * sizeof (char), 0);
```
This will cause an issue when the filename length is anything less than 255. Since TCP is a stream protocol (as mentioned by @Art), there are no real boundaries between the `send`s and `recv`s, which can cause you to receive data in odd places where you are not expecting them.
My recommendation would be to first send the length of the filename, eg:
```
// server
long namelen = htonl(strlen(argv[1]));
send (new_fd, &namelen, 4, 0);
send (new_fd, argv[1], strlen (argv[1]) * sizeof(char), 0);
// client
long namelen;
recv (sockfd, &namelen, 4, 0);
namelen = ntohl(namelen);
recv (sockfd, name, namelen * sizeof (char), 0);
```
This will ensure that you are always aware of exactly how long your filename is and makes sure that you aren't accidentally reading your file length from somewhere in the middle of your file (which is what I expect is happening currently).
edit.
Also, be cautious when you are sending sized numbers. If you use the `sizeof` call on them, you may be sending and receiving different sizes. This is why I hard-coded the sizes in the `send` and `recv` for the name length so that there is no confusion on either side.
|
Well, after some testing, I discovered that the issue causing the problem did have something to do with htonl(), though I had still read the data incorrectly in the beginning. It wasn't that htonl() wasn't working at all, but that I didn't realize a 'long' has different lengths depending on system architecture (thanks @tofro). That is to say, the length of a 'long' integer on 32-bit and 64-bit operating systems is 4 bytes and 8 bytes, respectively, and the htonl() function (from arpa/inet.h) is for 4-byte integers. I was using a 64-bit OS, which explains why the value was being fudged. I fixed the issue by using the `int32_t` type (from stdint.h) to store the file length. So the main issue in this case was not that it was becoming out of sync (I think). But as for everyone's advice towards developing an actual protocol, I think I know exactly what you mean, I definitely understand why it's important, and I'm currently working towards it. Thank you all for all your help.
**EDIT:** Well now that it has been several years, and I know a little more, I know that this explanation doesn't make sense. All that would result from `long` being larger than I expected (8 bytes rather than 4) is that there's some implicit casting going on. I used `sizeof(long)` in the original code rather than hardcoding it to assume 4 bytes, so that particular (faulty) assumption of mine shouldn't have produced the bug I saw.
The problem is almost certainly what everyone else said: one call to `recv` was not getting all of the bytes representing the file length. At the time I doubted this was the real cause of the behaviour I saw, because the file name (of arbitrary length) I was sending through was never partially sent (i.e. the client always created a file of the correct filename). Only the file length was messed up. My hypothesis at the time was that `recv` mostly respected message boundaries, and while `recv` *can possibly* only send part of the data, it was more likely that it was sending it all and there was another bug in my code. I now know this isn't true at all, and [TCP doesn't care](https://stackoverflow.com/questions/51661519/what-is-meant-by-record-or-data-boundaries-in-the-sense-of-tcp-udp-protocol).
I'm a little curious as to why I didn't see other unexpected behaviour as well (e.g. the file name being wrong on the receiving end), and I wanted to investigate further, but despite managing to find the files, I can't seem to reproduce the problem now. I suppose I'll never know, but at least I understand the main issue here.
|
64,360,347
|
I am writing SQL Server integration tests using xUnit.
My tests look like this:
```
[Fact]
public void Test()
{
using (IDbConnection connection = new SqlConnection(_connectionString))
{
connection.Open();
connection.Query(@$"...");
//DO SOMETHING
}
}
```
Since I am planning to create multiple tests in the same class, I was trying to avoid creating a new `SqlConnection` every time. I know that the xUnit framework creates a new instance of the test class for each running test inside it.
This means I could make the creation of the `SqlConnection` in the constructor since anyway a new one will be created every time a new test run.
The problem is **disposing** it. Is it good practice or do you see any problem in disposing manually the `SqlConnection` at the end of each test?
Such as:
```
public class MyTestClass
{
private const string _connectionString = "...";
private readonly IDbConnection _connection;
public MyTestClass()
{
_connection = new SqlConnection(_connectionString);
}
[Fact]
public void Test()
{
_connection.Open();
_connection.Query(@$"...");
//DO SOMETHING
_connection.Dispose();
}
}
```
|
2020/10/14
|
[
"https://Stackoverflow.com/questions/64360347",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2335172/"
] |
>
> I was trying to avoid creating a new SqlConnection every time.
>
>
>
It's okay to create and dispose a new SqlConnection over and over. The connection instance is disposed, but the underlying connection is pooled. Under the hood it may actually use the same connection repeatedly without closing it, and that's okay.
See [SQL Server Connection Pooling](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling).
So if that's your concern, don't worry. What you are doing in the first example is completely harmless. And however your code is structured, if you're creating a new connection when you need it, using it, and disposing it, that's fine. You can do that all day long.
If what concerns you is the repetitive code, I find this helpful. I often have a class like this in my unit tests:
```
static class SqlExecution
{
public static void ExecuteSql(string sql)
{
using (var connection = new SqlConnection(GetConnectionString()))
{
using (var command = new SqlCommand(sql, connection))
{
connection.Open();
command.ExecuteNonQuery();
}
}
}
public static T ExecuteScalar<T>(string sql)
{
using (var connection = new SqlConnection(GetConnectionString()))
{
using (var command = new SqlCommand(sql, connection))
{
connection.Open();
return (T)command.ExecuteScalar();
}
}
}
public static string GetConnectionString()
{
// This may vary
}
}
```
How you obtain the connection string may vary, which is why I left that method empty. It might be a configuration file, or it could be hard-coded.
This covers the common scenarios of executing something and retrieving a value, and allows me to keep all the repetitive database code out of my tests. Within the test is just one line:
```
SqlExecution.ExecuteSql("UPDATE WHATEVER");
```
You can also use dependency injection with xUnit, so you could write something as an instance class and inject it. But I doubt that it's worth the effort.
If I find myself writing more and more repetitive code then I might add test-specific methods. For example the test class might have a method that executes a certain stored procedure or formats arguments into a query and executes it.
---
You can also do dependency injection with xUnit so that dependencies are injected into the class like with any other class. There are lots of [answers about this](https://stackoverflow.com/questions/50921675/dependency-injection-in-xunit-project). You may find it useful. Maybe it's the reason why you're using xUnit. But something simpler might get the job done.
---
Someone is bound to say that unit tests shouldn't talk to the database. And they're mostly right. But it doesn't matter. Sometimes we have to do it anyway. Maybe the code we need to unit test is a stored procedure, and executing it from a unit test and verifying the results is the simplest way to do that.
Someone else might say that we shouldn't have logic in the database that needs testing, and I agree with them too. But we don't always have that choice. If I have to put logic in SQL I still want to test it, and writing a unit test is usually the easiest way. (We can call it an "integration test" if it makes anyone happy.)
|
There's a couple different things at play here. First, SqlConnection uses connection pooling so it should be fine to create/dispose the connections within a single test.
Having said that, disposing of the connections on a per-test-class basis would also be doable. XUnit will dispose of test classes that are IDisposable, so either would work. I'd suggest it's cleaner to create/dispose them within the test.
```
public class MyTestClass : IDisposable
{
const string ConnectionStr = "";
SqlConnection conn;
public MyTestClass()
{
this.conn = new SqlConnection(ConnectionStr);
}
public void Dispose()
{
this.conn?.Dispose();
this.conn = null;
}
[Fact]
public void Test()
{
using (var conn = new SqlConnection(ConnectionStr))
{
}
}
}
```
|
722,521
|
I have this input file on a Linux machine where there are multiple lines such as:
```
123, 'John, Nesh', 731, 'ABC, DEV, 23', 6, 400 'Text'
123, 'John, Brown', 140, 'ABC, DEV, 23', 6, 500 'Some other, Text'
123, 'John, Amazing', 1, 'ABC, DEV, 23', 8, 700 'Another, example, Text'
```
etc.
And I want to remove any `,` that is found within a single quoted field. Expected output:
```
123, 'John Nesh', 731, 'ABC DEV 23', 6, 400 'Text'
123, 'John Brown', 140, 'ABC DEV 23', 6, 500 'Some other Text'
123, 'John Amazing', 1, 'ABC DEV 23', 8, 700 'Another example Text'
```
|
2022/10/26
|
[
"https://unix.stackexchange.com/questions/722521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/546735/"
] |
bash 5.2 has a new loadable module `dsv` for parsing "delimiter-separated" values:
```bash
$ echo $BASH_VERSION
5.2.0(2)-release
$ cat input.csv
'123','ABC, DEV 23','345','534.202','NAME'
$ enable dsv
$ dsv -S -p -a fields "$(head -1 input.csv)"
$ declare -p fields
declare -a fields=([0]="'123'" [1]="'ABC, DEV 23'" [2]="'345'" [3]="'534.202'" [4]="'NAME'")
$ fields=( "${fields[@]//,/}" ) # remove commas from all elements
$ (IFS=,; echo "${fields[*]}")
'123','ABC DEV 23','345','534.202','NAME'
```
The help text for the `dsv` command:
>
> dsv: dsv [-a ARRAYNAME] [-d DELIMS] [-Sgp] string
>
> Read delimiter-separated fields from STRING.
>
> Parse STRING, a line of delimiter-separated values, into individual
> fields, and store them into the indexed array ARRAYNAME starting at
> index 0. The parsing understands and skips over double-quoted strings.
> If ARRAYNAME is not supplied, "DSV" is the default array name.
> If the delimiter is a comma, the default, this parses comma-separated
> values as specified in RFC 4180.
>
> The -d option specifies the delimiter. The delimiter is the first
> character of the DELIMS argument. Specifying a DELIMS argument that
> contains more than one character is not supported and will produce
> unexpected results. The -S option enables shell-like quoting: double-
> quoted strings can contain backslashes preceding special characters,
> and the backslash will be removed; and single-quoted strings are
> processed as the shell would process them. The -g option enables a
> greedy split: sequences of the delimiter are skipped at the beginning
> and end of STRING, and consecutive instances of the delimiter in STRING
> do not generate empty fields. If the -p option is supplied, dsv leaves
> quote characters as part of the generated field; otherwise they are
> removed.
>
> The return value is 0 unless an invalid option is supplied or the
> ARRAYNAME argument is invalid or readonly.
|
With `perl`:
```
perl -pe "s{'.*?'}{\$& =~ s/,//gr}ge" < your-file
```
It assumes quoted strings never span several lines, and that there are no escaped `'`s within the `'...'` quoted strings (though it would still work if they were escaped as `''`, as is common in CSVs).
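The same idea can be sketched in Python under the same assumptions; the function name here is my own:

```python
import re

def strip_commas_in_quotes(line):
    # Replace each '...' span with the same text minus its commas,
    # leaving commas outside the quotes untouched.
    return re.sub(r"'.*?'", lambda m: m.group(0).replace(",", ""), line)

print(strip_commas_in_quotes("123, 'John, Nesh', 731, 'ABC, DEV, 23', 6, 400 'Text'"))
# prints: 123, 'John Nesh', 731, 'ABC DEV 23', 6, 400 'Text'
```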
|
26,042,409
|
I would like to count how many rows there are per `type` if they meet the condition `x == 0`. Sort of like a group by in SQL
Here is an example of the data
```
type x
search 0
NULL 0
public 0
search 1
home 0
home 1
search 0
```
|
2014/09/25
|
[
"https://Stackoverflow.com/questions/26042409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990099/"
] |
I am assuming that you want to find the number of rows when a particular condition (a variable having some value) is met.
If this is the case, then I suppose you have "x" as a variable represented in a column. "x" can take multiple values. Suppose you want to find how many rows there are in your data when x is 0. This can be done with:
```
nrow(subset(data, x == "0"))
```
'data' is the object name for your dataset in R
**EDIT:**
I am seeing your edited dataframe now. You could use this to solve your problem:
```
table(data$type, data$x)
```
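For comparison outside R, the same conditional group-by count can be expressed in a few lines of plain Python (data hard-coded from the example above):

```python
from collections import Counter

rows = [("search", 0), ("NULL", 0), ("public", 0), ("search", 1),
        ("home", 0), ("home", 1), ("search", 0)]

# Count rows per type, keeping only those where x == 0
# (the GROUP BY + WHERE idea from the question).
counts = Counter(t for t, x in rows if x == 0)
print(counts["search"], counts["home"])  # prints: 2 1
```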
|
Given the data frame,
`df=data.frame(type=c('search','NULL','public','search','home','home','search'),x=c(0,0,0,1,0,1,0))`
If you want to know how many of each value in column 1 have a value in column 2 of zero then you can use:
`table(df)[,1]`
as long as you are only working with 1's and 0's to get the answer:
```
home NULL public search
1 1 1 2
```
|
26,042,409
|
I would like to count how many rows there are per `type` if they meet the condition `x == 0`. Sort of like a group by in SQL
Here is an example of the data
```
type x
search 0
NULL 0
public 0
search 1
home 0
home 1
search 0
```
|
2014/09/25
|
[
"https://Stackoverflow.com/questions/26042409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990099/"
] |
You could also use the sqldf package:
```
library(sqldf)
df <- data.frame(type=c('search','NULL','public','search','home','home','search'),x=c(0,0,0,1,0,1,0))
sqldf("SELECT type, COUNT(*) FROM df WHERE x=0 GROUP BY type")
```
which gives the following result:
```
type COUNT(*)
1 NULL 1
2 home 1
3 public 1
4 search 2
```
|
Given the data frame,
`df=data.frame(type=c('search','NULL','public','search','home','home','search'),x=c(0,0,0,1,0,1,0))`
If you want to know how many of each value in column 1 have a value in column 2 of zero then you can use:
`table(df)[,1]`
as long as you are only working with 1's and 0's to get the answer:
```
home NULL public search
1 1 1 2
```
|
26,042,409
|
I would like to count how many rows there are per `type` if they meet the condition `x == 0`. Sort of like a group by in SQL
Here is an example of the data
```
type x
search 0
NULL 0
public 0
search 1
home 0
home 1
search 0
```
|
2014/09/25
|
[
"https://Stackoverflow.com/questions/26042409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990099/"
] |
I am assuming that you want to find the number of rows when a particular condition (a variable having some value) is met.
If this is the case, then I suppose you have "x" as a variable represented in a column. "x" can take multiple values. Suppose you want to find how many rows there are in your data when x is 0. This can be done with:
```
nrow(subset(data, x == "0"))
```
'data' is the object name for your dataset in R
**EDIT:**
I am seeing your edited dataframe now. You could use this to solve your problem:
```
table(data$type, data$x)
```
|
You could also do this with the `dplyr` package:
```
library(dplyr)
df2 <- df %>% group_by(x,type) %>% tally()
```
which gives:
```
x type n
1 0 home 1
2 0 NULL 1
3 0 public 1
4 0 search 2
5 1 home 1
6 1 search 1
```
|
26,042,409
|
I would like to count how many rows there are per `type` if they meet the condition `x == 0`. Sort of like a group by in SQL
Here is an example of the data
```
type x
search 0
NULL 0
public 0
search 1
home 0
home 1
search 0
```
|
2014/09/25
|
[
"https://Stackoverflow.com/questions/26042409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990099/"
] |
I am assuming that you want to find the number of rows when a particular condition (a variable having some value) is met.
If this is the case, then I suppose you have "x" as a variable represented in a column. "x" can take multiple values. Suppose you want to find how many rows there are in your data when x is 0. This can be done with:
```
nrow(subset(data, x == "0"))
```
'data' is the object name for your dataset in R
**EDIT:**
I am seeing your edited dataframe now. You could use this to solve your problem:
```
table(data$type, data$x)
```
|
You could also use the sqldf package:
```
library(sqldf)
df <- data.frame(type=c('search','NULL','public','search','home','home','search'),x=c(0,0,0,1,0,1,0))
sqldf("SELECT type, COUNT(*) FROM df WHERE x=0 GROUP BY type")
```
which gives the following result:
```
type COUNT(*)
1 NULL 1
2 home 1
3 public 1
4 search 2
```
|
26,042,409
|
I would like to count how many rows there are per `type` if they meet the condition `x == 0`. Sort of like a group by in SQL
Here is an example of the data
```
type x
search 0
NULL 0
public 0
search 1
home 0
home 1
search 0
```
|
2014/09/25
|
[
"https://Stackoverflow.com/questions/26042409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990099/"
] |
I am assuming that you want to find the number of rows when a particular condition (a variable having some value) is met.
If this is the case, then I suppose you have "x" as a variable represented in a column. "x" can take multiple values. Suppose you want to find how many rows there are in your data when x is 0. This can be done with:
```
nrow(subset(data, x == "0"))
```
'data' is the object name for your dataset in R
**EDIT:**
I am seeing your edited dataframe now. You could use this to solve your problem:
```
table(data$type, data$x)
```
|
Given your data is structured as a data frame, the following code has a better running time than the answers given above:
```
nrow(data[data$x == "0", ])
```
You can test your run time using:
```
ptm <- proc.time()
nrow(subset(data, x == "0"))
proc.time() - ptm
ptm <- proc.time()
nrow(data[data$x == "0", ])
proc.time() - ptm
```
In my case, the running time was about 15 times faster, with 1 million rows.
|
26,042,409
|
I would like to count how many rows there are per `type` if they meet the condition `x == 0`. Sort of like a group by in SQL
Here is an example of the data
```
type x
search 0
NULL 0
public 0
search 1
home 0
home 1
search 0
```
|
2014/09/25
|
[
"https://Stackoverflow.com/questions/26042409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990099/"
] |
You could also use the sqldf package:
```
library(sqldf)
df <- data.frame(type=c('search','NULL','public','search','home','home','search'),x=c(0,0,0,1,0,1,0))
sqldf("SELECT type, COUNT(*) FROM df WHERE x=0 GROUP BY type")
```
which gives the following result:
```
type COUNT(*)
1 NULL 1
2 home 1
3 public 1
4 search 2
```
|
You could also do this with the `dplyr` package:
```
library(dplyr)
df2 <- df %>% group_by(x,type) %>% tally()
```
which gives:
```
x type n
1 0 home 1
2 0 NULL 1
3 0 public 1
4 0 search 2
5 1 home 1
6 1 search 1
```
|
26,042,409
|
I would like to count how many rows there are per `type` if they meet the condition `x == 0`. Sort of like a group by in SQL
Here is an example of the data
```
type x
search 0
NULL 0
public 0
search 1
home 0
home 1
search 0
```
|
2014/09/25
|
[
"https://Stackoverflow.com/questions/26042409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990099/"
] |
You could also use the sqldf package:
```
library(sqldf)
df <- data.frame(type=c('search','NULL','public','search','home','home','search'),x=c(0,0,0,1,0,1,0))
sqldf("SELECT type, COUNT(*) FROM df WHERE x=0 GROUP BY type")
```
which gives the following result:
```
type COUNT(*)
1 NULL 1
2 home 1
3 public 1
4 search 2
```
|
Given your data is structured as a data frame, the following code has a better running time than the answers given above:
```
nrow(data[data$x == "0", ])
```
You can test your run time using:
```
ptm <- proc.time()
nrow(subset(data, x == "0"))
proc.time() - ptm
ptm <- proc.time()
nrow(data[data$x == "0", ])
proc.time() - ptm
```
In my case, the running time was about 15 times faster, with 1 million rows.
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
I have managed to find a solution:
Go to [http://tmtheme-editor.herokuapp.com](http://tmtheme-editor.herokuapp.com/#/theme/Solarized%20%28light%29) (someone has built a web-based theme editor). Once you have tweaked the colour syntax you can download the themename.tmTheme file. Move that file to /Users/username/Library/Application Support/Sublime Text 3/Packages/User/ (if you are on a Mac).
Load in the syntax theme from the top menu: Sublime Text > Preferences > Color Scheme > themename.
If you want to tweak your syntax further, you can upload the custom theme back into the web-based theme editor and save it out again.
Hope this helps someone else.
|
This should be much easier than it is.
I used parts of all the other answers to make this work.
**Important notes before you begin:**
1. I had used this menu item to select a colour scheme: Sublime Text > Preferences > Color Scheme
2. I chose "Monokai.tmTheme"
3. Therefore the file you need containing the colours is: Monokai.tmTheme
4. You will NOT have easy access to that file yet!!
**Overall Steps:**
The overall steps I found to work are:
1. Get a copy of the Monokai.tmTheme text file
2. Place it in your "/Users/XXX/Library/Application Support/Sublime Text 3/Packages/User" directory
(replace XXX with your username)
3. Again use this menu: Sublime Text > Preferences > Color Scheme
4. Choose the new entry "Monokai - User"
5. Any changes to your Monokai.tmTheme file will immediately be seen by Sublime Text
**Getting the Monokai.tmTheme text file :**
This is the tricky part.
You have two options
*Option A. Use PackageResourceViewer to open the resource:*
A1. Tools -> Command Palette
A2. Type "PackageResourceViewer"
A3. Choose "PackageResourceViewer: Open Resource"
A4. Navigate to "Color Scheme - Default"
A5. Navigate to "Monokai.tmTheme"
A6. This will open the contents of the file but it is NOT a real file on your disk! You must copy the contents into a new text document and save it into "/Users/XXX/Library/Application Support/Sublime Text 3/Packages/User/Monokai.tmTheme" as above
*Option B. Use the web app to create your .tmTheme file:*
B1. Go to <http://tmtheme-editor.herokuapp.com>
B2. Edit the colours
B3. Download the .tmTheme file
B4. Put it into "/Users/XXX/Library/Application Support/Sublime Text 3/Packages/User/Monokai.tmTheme" as above
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
I have managed to find a solution:
Go to [http://tmtheme-editor.herokuapp.com](http://tmtheme-editor.herokuapp.com/#/theme/Solarized%20%28light%29) (someone has built a web-based theme editor). Once you have tweaked the colour syntax you can download the themename.tmTheme file. Move that file to /Users/username/Library/Application Support/Sublime Text 3/Packages/User/ (if you are on a Mac).
Load in the syntax theme from the top menu: Sublime Text > Preferences > Color Scheme > themename.
If you want to tweak your syntax further, you can upload the custom theme back into the web-based theme editor and save it out again.
Hope this helps someone else.
|
Editing is way simpler than the advice above.
1. Go to the Sublime installation folder and find the **Packages** subfolder there.
2. Open **Color Scheme - Default.sublime-package** as a zip archive (I use Total Commander and `Ctrl`+`PgDn` keys).
3. Find any scheme you like (`*.tmTheme`), copy it to disk, and edit it with any XML editor.
4. Pack modified file back (with Total Commander just copy file to opened archive).
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
I have managed to find a solution:
Go to [http://tmtheme-editor.herokuapp.com](http://tmtheme-editor.herokuapp.com/#/theme/Solarized%20%28light%29) (someone has built a web-based theme editor). Once you have tweaked the colour syntax you can download the themename.tmTheme file. Move that file to /Users/username/Library/Application Support/Sublime Text 3/Packages/User/ (if you are on a Mac).
Load in the syntax theme from the top menu: Sublime Text > Preferences > Color Scheme > themename.
If you want to tweak your syntax further, you can upload the custom theme back into the web-based theme editor and save it out again.
Hope this helps someone else.
|
As of May 2018 using Sublime Text 3:
I followed the *Overall Steps* and *Option A.* from **@davidfrancis** with great success to customize the Mariana color scheme.
However, I had to **change the file extension** from `<name>.tmTheme` to `<name>.sublime-color-scheme`. Also, the filename can be anything you want (don't need to keep the default), and it will appear under **Preferences > Color Scheme**.
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
This should be much easier than it is.
I used parts of all the other answers to make this work.
**Important notes before you begin:**
1. I had used this menu item to select a colour scheme: Sublime Text > Preferences > Color Scheme
2. I chose "Monokai.tmTheme"
3. Therefore the file you need containing the colours is: Monokai.tmTheme
4. You will NOT have easy access to that file yet!!
**Overall Steps:**
The overall steps I found to work are:
1. Get a copy of the Monokai.tmTheme text file
2. Place it in your "/Users/XXX/Library/Application Support/Sublime Text 3/Packages/User" directory
(replace XXX with your username)
3. Again use this menu: Sublime Text > Preferences > Color Scheme
4. Choose the new entry "Monokai - User"
5. Any changes to your Monokai.tmTheme file will immediately be seen by Sublime Text
**Getting the Monokai.tmTheme text file :**
This is the tricky part.
You have two options
*Option A. Use PackageResourceViewer to open the resource:*
A1. Tools -> Command Palette
A2. Type "PackageResourceViewer"
A3. Choose "PackageResourceViewer: Open Resource"
A4. Navigate to "Color Scheme - Default"
A5. Navigate to "Monokai.tmTheme"
A6. This will open the contents of the file but it is NOT a real file on your disk! You must copy the contents into a new text document and save it into "/Users/XXX/Library/Application Support/Sublime Text 3/Packages/User/Monokai.tmTheme" as above
*Option B. Use the web app to create your .tmTheme file:*
B1. Go to <http://tmtheme-editor.herokuapp.com>
B2. Edit the colours
B3. Download the .tmTheme file
B4. Put it into "/Users/XXX/Library/Application Support/Sublime Text 3/Packages/User/Monokai.tmTheme" as above
|
As of May 2018 using Sublime Text 3:
I followed the *Overall Steps* and *Option A.* from **@davidfrancis** with great success to customize the Mariana color scheme.
However, I had to **change the file extension** from `<name>.tmTheme` to `<name>.sublime-color-scheme`. Also, the filename can be anything you want (don't need to keep the default), and it will appear under **Preferences > Color Scheme**.
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
I'd recommend using [PackageResourceViewer](https://github.com/skuroda/PackageResourceViewer) to open the file. You could make your modifications there, and it would save in the proper location to override the built-in files. However, I would recommend copying the contents of the file and creating a custom version in your User folder. That way, you can easily move it around and modify it without worrying about the built-in color schemes.
The default color scheme files are located in `Color Scheme - Default.sublime-package`. In ST2, this was extracted to Packages/Color Scheme - Default. PackageResourceViewer will display these folders (as they would normally appear in the Packages folder in ST2). You could navigate to your color scheme from that.
|
***Sublime version, OS & OS version independent way:***
In Sublime, at the top menu bar go to "Sublime Text 2/3" -> Preferences -> "Browse Packages...".
This will open the "Packages" folder correctly in Finder/File Explorer/Nautilus/... depending on your OS and OS version.
[](https://i.stack.imgur.com/RXeeq.png)
Find your theme and edit away..
[](https://i.stack.imgur.com/MOihV.png)
[Source](https://stackoverflow.com/a/9812619/496289)
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
Editing is way simpler than the advice above.
1. Go to the Sublime installation folder and find the **Packages** subfolder there.
2. Open **Color Scheme - Default.sublime-package** as a zip archive (I use Total Commander and `Ctrl`+`PgDn` keys).
3. Find any scheme you like (`*.tmTheme`), copy it to disk and edit it with any XML editor.
4. Pack modified file back (with Total Commander just copy file to opened archive).
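A `.sublime-package` file is just a zip archive, so step 2 can also be done with the standard library instead of an archive tool; a minimal Python sketch (the helper name is illustrative):

```python
import zipfile

# Illustrative sketch: a .sublime-package file is an ordinary zip archive,
# so a bundled colour scheme can be read out of it and written to a plain
# file elsewhere, ready for editing.
def extract_member(package_path, member, dest_path):
    with zipfile.ZipFile(package_path) as pkg:
        data = pkg.read(member)          # e.g. member = "Monokai.tmTheme"
    with open(dest_path, "wb") as out:
        out.write(data)
    return data
```

The extracted file can then be edited and placed in the User packages folder as described in the other answers.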
|
As of May 2018 using Sublime Text 3:
I followed the *Overall Steps* and *Option A.* from **@davidfrancis** with great success to customize the Mariana color scheme.
However, I had to **change the file extension** from `<name>.tmTheme` to `<name>.sublime-color-scheme`. Also, the filename can be anything you want (you don't need to keep the default name), and it will appear under **Preferences > Color Scheme**.
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
I have managed to find a solution:
Go to [http://tmtheme-editor.herokuapp.com](http://tmtheme-editor.herokuapp.com/#/theme/Solarized%20%28light%29) (someone has built a web-based theme editor). Once you have tweaked the colour syntax you can download the themename.tmTheme file. Move that file to /Users/username/Library/Application Support/Sublime Text 3/Packages/User/ (if you are on a Mac).
Load the syntax theme from the top menu: Sublime Text > Preferences > Color Scheme > themename.
If you want to tweak your syntax further, you can upload the custom theme back into the web-based theme editor and save it out again.
Hope this helps someone else.
|
***Sublime version, OS & OS version independent way:***
In Sublime, at the top menu bar go to "Sublime Text 2/3" -> Preferences -> "Browse Packages...".
This will open the "Packages" folder correctly in Finder/File Explorer/Nautilus/... depending on your OS and OS version.
[](https://i.stack.imgur.com/RXeeq.png)
Find your theme and edit away..
[](https://i.stack.imgur.com/MOihV.png)
[Source](https://stackoverflow.com/a/9812619/496289)
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
***Sublime version, OS & OS version independent way:***
In Sublime, at the top menu bar go to "Sublime Text 2/3" -> Preferences -> "Browse Packages...".
This will open the "Packages" folder correctly in Finder/File Explorer/Nautilus/... depending on your OS and OS version.
[](https://i.stack.imgur.com/RXeeq.png)
Find your theme and edit away..
[](https://i.stack.imgur.com/MOihV.png)
[Source](https://stackoverflow.com/a/9812619/496289)
|
As of May 2018 using Sublime Text 3:
I followed the *Overall Steps* and *Option A.* from **@davidfrancis** with great success to customize the Mariana color scheme.
However, I had to **change the file extension** from `<name>.tmTheme` to `<name>.sublime-color-scheme`. Also, the filename can be anything you want (you don't need to keep the default name), and it will appear under **Preferences > Color Scheme**.
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
I'd recommend using [PackageResourceViewer](https://github.com/skuroda/PackageResourceViewer) to open the file. You could make your modifications there, and it would save in the proper location to override the built-in files. However, I would recommend copying the contents of the file and creating a custom version in your User folder. That way, you can easily move it around and modify it without worrying about the built-in color schemes.
The default color scheme files are located in `Color Scheme - Default.sublime-package`. In ST2, this was extracted to Packages/Color Scheme - Default. PackageResourceViewer will display these folders (as they would normally appear in the Packages folder in ST2). You could navigate to your color scheme from that.
|
As of May 2018 using Sublime Text 3:
I followed the *Overall Steps* and *Option A.* from **@davidfrancis** with great success to customize the Mariana color scheme.
However, I had to **change the file extension** from `<name>.tmTheme` to `<name>.sublime-color-scheme`. Also, the filename can be anything you want (you don't need to keep the default name), and it will appear under **Preferences > Color Scheme**.
|
18,746,993
|
I am trying to edit some of the syntax colours in Sublime Text 3. I'm using the Solarized (Light) built in colour scheme but I only want to change a few of the colours. Where is the settings file (on a Mac)?
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18746993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1353451/"
] |
I'd recommend using [PackageResourceViewer](https://github.com/skuroda/PackageResourceViewer) to open the file. You could make your modifications there, and it would save in the proper location to override the built-in files. However, I would recommend copying the contents of the file and creating a custom version in your User folder. That way, you can easily move it around and modify it without worrying about the built-in color schemes.
The default color scheme files are located in `Color Scheme - Default.sublime-package`. In ST2, this was extracted to Packages/Color Scheme - Default. PackageResourceViewer will display these folders (as they would normally appear in the Packages folder in ST2). You could navigate to your color scheme from that.
|
I have managed to find a solution:
Go to [http://tmtheme-editor.herokuapp.com](http://tmtheme-editor.herokuapp.com/#/theme/Solarized%20%28light%29) (someone has built a web-based theme editor). Once you have tweaked the colour syntax you can download the themename.tmTheme file. Move that file to /Users/username/Library/Application Support/Sublime Text 3/Packages/User/ (if you are on a Mac).
Load the syntax theme from the top menu: Sublime Text > Preferences > Color Scheme > themename.
If you want to tweak your syntax further, you can upload the custom theme back into the web-based theme editor and save it out again.
Hope this helps someone else.
|
702,891
|
Currently I'm updating my `iptables` rules using a bash script, where I call the command:
```
iptables -F
```
then I apply the rules.
The problem being that I need to update the rules to gain access to port 80, then I drop everything, in a cron job every 10 minutes. So every 10 minutes I call `iptables -F` to delete old rules and open all ports (the thing I don't want).
I want to not have to flush the rules every 10 minutes, just edit or update the existing rules.
|
2022/05/17
|
[
"https://unix.stackexchange.com/questions/702891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/526225/"
] |
You can `print` the existing line (`$0`) and a new field like this; the `,` places the output field separator (OFS) between the arguments.
```
awk -v n=1100 'length($2)!=4 {print $0,++n}' file
```
Output:
```
d1001 100 1101
d1002 10 1102
d1003 1 1103
```
If you need any additional formatting of the output, you can use the [`printf`](https://www.gnu.org/software/gawk/manual/html_node/Printf-Examples.html) function. Here is an example for alignment:
```
$ awk -v n=1100 'length($2)!=4 {printf "%s %4s %s\n", $1, $2, ++n}' file
d1001 100 1101
d1002 10 1102
d1003 1 1103
$ awk -v n=1100 'length($2)!=4 {printf "%s %-4s %s\n", $1, $2, ++n}' file
d1001 100 1101
d1002 10 1102
d1003 1 1103
```
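For comparison, here is a rough Python equivalent of the same filter (assuming the same two-column, whitespace-separated input; the function name is illustrative):

```python
# Rough Python equivalent of the awk one-liner above: keep lines whose
# second field is not exactly 4 characters long and append an
# incrementing counter starting from n+1.
def add_counter(lines, n=1100):
    out = []
    for line in lines:
        fields = line.split()
        if len(fields) >= 2 and len(fields[1]) != 4:
            n += 1
            out.append(f"{fields[0]} {fields[1]} {n}")
    return out
```

Calling it on the sample rows yields the same three output lines as the awk version.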
|
```
awk -v k=1100 '{if(length($2) !=4 && length($2)>0){k=k+1;print $1,$2,k }}' file.txt
```
output
```
d1001 100 1101
d1002 10 1102
d1003 1 1103
```
|
702,891
|
Currently I'm updating my `iptables` rules using a bash script, where I call the command:
```
iptables -F
```
then I apply the rules.
The problem being that I need to update the rules to gain access to port 80, then I drop everything, in a cron job every 10 minutes. So every 10 minutes I call `iptables -F` to delete old rules and open all ports (the thing I don't want).
I want to not have to flush the rules every 10 minutes, just edit or update the existing rules.
|
2022/05/17
|
[
"https://unix.stackexchange.com/questions/702891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/526225/"
] |
You can `print` the existing line (`$0`) and a new field like this; the `,` places the output field separator (OFS) between the arguments.
```
awk -v n=1100 'length($2)!=4 {print $0,++n}' file
```
Output:
```
d1001 100 1101
d1002 10 1102
d1003 1 1103
```
If you need any additional formatting of the output, you can use the [`printf`](https://www.gnu.org/software/gawk/manual/html_node/Printf-Examples.html) function. Here is an example for alignment:
```
$ awk -v n=1100 'length($2)!=4 {printf "%s %4s %s\n", $1, $2, ++n}' file
d1001 100 1101
d1002 10 1102
d1003 1 1103
$ awk -v n=1100 'length($2)!=4 {printf "%s %-4s %s\n", $1, $2, ++n}' file
d1001 100 1101
d1002 10 1102
d1003 1 1103
```
|
Using **Raku** (formerly known as Perl\_6)
```perl
raku -ne 'state $i; if .words[1].match( /^ \d**0..3 $/ ) { print .words~" "; put ++$i + 1100 };'
```
This question is similar to a recent question, and you can slightly modify that question's Raku answer.
Briefly (compared to the previous question), ask if `.words[1]`, the second column, can match a string containing nothing other than `\d**0..3` (zero-to-three digits); if so, print out all `.words`, followed by a third column that returns the `$i` counter, starting from a basal value of `1100` and incremented via `++`.
Sample Input:
```
d1000 1000
d1001 100
d1002 10
d1003 1
```
Sample Output:
```
d1001 100 1101
d1002 10 1102
d1003 1 1103
```
<https://unix.stackexchange.com/a/703436/227738>
<https://raku.org>
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
>
> in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them... Why is this?
>
>
>
IMO it's because they're thinking about the program in a simplistic, non-object-oriented way. Most of the developers that I work with would go the other way, and declare the variable to be `Map...` instead of `HashMap...` because when they're using the object, they're only interested in knowing how it will behave---not at all interested in what type of object it actually *is*.
It's not a big deal when you're talking about a private variable that only exists within a single class, but it becomes a very big deal when it becomes part of a public API. My co-workers probably would reject your example in code review if you declared the variable as `HashMap...`. Not because a private variable declaration mattered to any other part of the system, but because they'd consider it to be a shallow way of thinking, and they'd want you to get in the habit of thinking on a deeper level.
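The same habit can be sketched outside Java too; here is a minimal Python analogue (illustrative names, assuming Python 3.9+), where the annotation records that only mapping behaviour is relied on while the concrete `dict` stays an implementation detail:

```python
from collections.abc import MutableMapping

# Python sketch of "declare the interface, assign the implementation":
# the annotation promises only mapping behaviour; dict is the detail.
class PayrollInfo:
    def __init__(self):
        self.employees: MutableMapping[str, str] = {}  # concrete dict behind the abstract type

    def hire(self, emp_id, name):
        self.employees[emp_id] = name
```

Swapping `{}` for another mapping implementation later would not change anything the rest of the class is allowed to rely on.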
|
In addition to Ewan's dependency injection scenario.
If you use The Separated Interface Pattern -
>
> The pattern prescribes that you use different packages for the interface and any of its
> implementations. Only the definition of the interface is known to any
> packages that need to consume the functionality.
>
>
>
in such a situation the client/consumer will not have the reference of the concrete implementation at all.
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
A programmer doing this is probably just following the best practice of using interface types instead of concrete types; however, there is no real advantage to doing that here, because the field is private, as you noted. (In fact, it is possible that this results in a minor performance penalty when using the field later on.)
Any implementation change of the two lines of code in question (the line that declares and the line that initializes) will be versioned together, so it would be easy to change the type to something else as needed.
This is an area where it might be nice to be able to declare fields using `var` (C# anyway).
Using `var` keeps the most type information possible: a local variable declared as `var employees = new HashMap()` would be a reference to a `HashMap` instance that also supports the `Map` interface, without actually having to violate the best practice of using interfaces for declarations.
See also: [Does the best practice of 'programming to interfaces' apply to local variables?](https://stackoverflow.com/q/10627750/471129). As you can read for yourself, there are opinions in both directions on this. I think some fear that if programmers don't do this for locals, they will forget to follow this best practice when it really counts, such as for public method parameter & return value types. YMMV.
|
>
> programmers normally declare variables to be of the exact type of the
> class that will be assigned to them
>
>
>
In my experience, there are several reasons for this:
1. The programmer lacks the understanding/insight to recognize that `Map` is an interface implemented by `HashMap` and fails to see that it is even a consideration or option.
2. The programmer is focused on the need to implement a `Map`, not on providing flexibility for *possible* future uses that are not obvious.
3. The flexibility gains are unnecessary or could even add complexity to an implementation without adding enough value to justify it (a form of pre-optimization).
The third situation is the more important, but I rarely see it. I see a lot of very smart programmers that write very "smart" and "flexible" code and do something like what you illustrate. Then when the code is maintained or needs to be modified at a future date, all of those valuable and needed but not-so-smart programmers out there have no idea what happened or how to take advantage of it. It adds complexity during those future upgrade and/or maintenance phases that is costly.
So, it might be that a very forward looking programmer that was perfectly capable of making a more flexible implementation, instead opted for a more direct and easier to maintain implementation until future demands required a more flexible approach - and better determined the appropriate way to implement that flexibility (e.g. maybe a wrapper class or custom `Map` implementation is ultimately needed).
>
> does it make sense to declare their type to be of the interface that
> you are using?
>
>
>
In some cases, it is very useful. However, doing so without an immediate need is unlikely to "improve" the code. In your simplified example the change in the declaration to an interface should be easy and obvious when needed, and fully compatible. In more complex cases, it is probably still best to implement only what is required.
If, however, you are writing system level or API code, you are probably going the right direction if you intend to expose access to this variable in some way and the "end user" of the code may have needs beyond what you can currently define. And it's these cases where it makes a huge difference in the quality of the code to have a quality programmer asking good questions like yours.
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
It makes a great deal of sense if you assign a value to them from outside the class, i.e.:
```
public class Basket
{
private IPurchaseService _pServ;
public Basket(IPurchaseService pServ)
{
_pServ = pServ;
}
}
```
|
>
> programmers normally declare variables to be of the exact type of the
> class that will be assigned to them
>
>
>
In my experience, there are several reasons for this:
1. The programmer lacks the understanding/insight to recognize that `Map` is an interface implemented by `HashMap` and fails to see that it is even a consideration or option.
2. The programmer is focused on the need to implement a `Map`, not on providing flexibility for *possible* future uses that are not obvious.
3. The flexibility gains are unnecessary or could even add complexity to an implementation without adding enough value to justify it (a form of pre-optimization).
The third situation is the more important, but I rarely see it. I see a lot of very smart programmers that write very "smart" and "flexible" code and do something like what you illustrate. Then when the code is maintained or needs to be modified at a future date, all of those valuable and needed but not-so-smart programmers out there have no idea what happened or how to take advantage of it. It adds complexity during those future upgrade and/or maintenance phases that is costly.
So, it might be that a very forward looking programmer that was perfectly capable of making a more flexible implementation, instead opted for a more direct and easier to maintain implementation until future demands required a more flexible approach - and better determined the appropriate way to implement that flexibility (e.g. maybe a wrapper class or custom `Map` implementation is ultimately needed).
>
> does it make sense to declare their type to be of the interface that
> you are using?
>
>
>
In some cases, it is very useful. However, doing so without an immediate need is unlikely to "improve" the code. In your simplified example the change in the declaration to an interface should be easy and obvious when needed, and fully compatible. In more complex cases, it is probably still best to implement only what is required.
If, however, you are writing system level or API code, you are probably going the right direction if you intend to expose access to this variable in some way and the "end user" of the code may have needs beyond what you can currently define. And it's these cases where it makes a huge difference in the quality of the code to have a quality programmer asking good questions like yours.
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
>
> Does it make sense to declare private fields using an interface as their type?
>
>
>
Yes it does. For many reasons.
>
> However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
>
>
>
Until they've fully written the method they don't know what all it will need access to. You could start with the most restrictive interface and switch to less restrictive as needed. But that will work against code completion tools.
Starting with HashMap as HashMap and later switching to the most restrictive interface theoretically produces the same result. And it means, as you think of new things you need from HashMap, they're right there at your fingertips. Problem is, people who work this way sometimes forget to make the switch.
For some it's easier to think of a HashMap as just a HashMap when writing code. After all you have many other things to think about. Even some well trained OOP programmers do this, not because they think it's good practice but because they don't want to be distracted while writing code. This is selfish because it doesn't hurt them nearly as much as it hurts those who come after. Sometimes they meant to take the time to refactor to the interface but never got around to it. Don't look at code written like this as if it was meant to be this way. This is quick and dirty.
But it's private! No getters. No setters.
=========================================
It's true that being private means this variable's type and its very existence are not visible to the outside world unless it is passed into some outside object. Even in that case, the parameter of the method / constructor receiving it is the one that most needs to be as high up the inheritance chain as possible. Not the argument (our variable) being passed in.
However, good coding principles aren't just for between objects. Some can save you a lot of hassle just within one method.
Relevant design principles
==========================
1. Program to an interface, not an implementation
2. Role interfaces
3. Make decisions in one place
Program to an interface
=======================
Say this variable is used in at least one long hard to read method. If it is declared as a map and not something concrete I can instantly tell that I could refactor to any map implementation without even having to read the long method.
Role interface
==============
If the long method can get by with the variable at an even higher level it should. Do you really need `Map`? Can you get by with one of its super interfaces that exposes less? (Well, `Map` doesn't have a super interface, but imagine we were talking about `List`.) The less exposed, the higher up it is, the more options you have for hassle-free change.
Make decisions in one place
===========================
Since I've already shown that I don't want to even read this long ugly method, should it come as a surprise that I don't want to go through it changing a bunch of hashmaps to map? (or some other implementation if I feel like passing on the misery). I don't. I really don't. Please don't make me. Please?
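The role-interface idea can also be sketched with Python's abstract container types (illustrative names, assuming Python 3.9+): each function declares the narrowest role it needs rather than the concrete list.

```python
from collections.abc import Sequence, MutableSequence

# "Role interface" sketch: each function asks only for the role it needs.
def total(amounts: Sequence[float]) -> float:
    # read-only role: any sequence will do, including tuples
    return sum(amounts)

def apply_raise(amounts: MutableSequence[float], pct: float) -> None:
    # mutating role: requires item assignment, so a list qualifies but a tuple does not
    for i, a in enumerate(amounts):
        amounts[i] = a * (1 + pct)
```

Because `total` only claims the read-only role, a caller can tell at a glance that it is safe to pass shared data to it without copying.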
|
In addition to Ewan's dependency injection scenario.
If you use The Separated Interface Pattern -
>
> The pattern prescribes that you use different packages for the interface and any of its
> implementations. Only the definition of the interface is known to any
> packages that need to consume the functionality.
>
>
>
in such a situation the client/consumer will not have the reference of the concrete implementation at all.
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
Sometimes I do this because the implementation tells the user of the class something important.
One example would be the C# class called ConcurrentDictionary. Depending on how I'm using it, I could use the dictionary through the IDictionary interface, but doing this makes it less clear to later programmers that the dictionary is thread-safe. (This exact example does not apply to Java, since ConcurrentMap there is an interface, but the idea still applies.)
|
In addition to Ewan's dependency injection scenario.
If you use The Separated Interface Pattern -
>
> The pattern prescribes that you use different packages for the interface and any of its
> implementations. Only the definition of the interface is known to any
> packages that need to consume the functionality.
>
>
>
in such a situation the client/consumer will not have the reference of the concrete implementation at all.
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
In addition to Ewan's dependency injection scenario.
If you use The Separated Interface Pattern -
>
> The pattern prescribes that you use different packages for the interface and any of its
> implementations. Only the definition of the interface is known to any
> packages that need to consume the functionality.
>
>
>
in such a situation the client/consumer will not have the reference of the concrete implementation at all.
|
The general answer is this:
With `HashMap obj = new HashMap()`, the writer is required to think about HashMap.
With `Map obj = new HashMap()` he is required to think about Map and HashMap.
Does the scenario justify the added cost? What's the plot?
(The rookies who go *"yay now we don't have to think about the class!"* ship with additional bugs in their programs.)
**Does the added cost result in savings?**
---
In the case of (`java.util.`) `Map` vs `HashMap`, it is unequivocally no. `Map` is so big and `HashMap` so small that you have nothing to save and only overhead to add if you use both of them within private code.
In the general case of `A` vs `B`, if they have a significant difference in size and/or there is a significant amount of private code using `A` instead of `B`, then by employing both you gain to the extent of their difference and the amount of code using it.
For example, `java.util.Queue` vs `.LinkedList`:
```
private LinkedList/Queue obj = new LinkedList();
obj. //..
```
Would it be better to leave the variable as LinkedList or to use Queue?
With one line of usage, it can go both ways. But it quickly gets obvious which is a better choice when the amount of code increases:
```
private LinkedList/Queue obj = new LinkedList();
obj. //..
//..
//..
obj. //..
//..
//..
obj. //..
//..
//..
obj. //..
//..
//..
obj. //..
//..
```
The code usage could grow so much that you'd need to factor it into its own function:
```
obj. //..
//..
//..
obj. //..
//..
//..
Function1(obj);
//..
//..
}
private void Function1(Queue obj){
//..
//..
obj. //..
//..
//..
}
```
..or even multiple functions:
```
private void Function1(Queue obj){
//..
//..
obj. //..
//..
//..
Function2(obj);
//..
Function3(obj);
//..
Function4(obj);
//..
}
```
When writing all of that code in those functions, the writer never once has to think in terms of LinkedList. The only times it's required to think in terms of LinkedList are at the call sites of those functions.
Since the size difference between Queue and LinkedList is significant and the usage of Queue here is plentiful, the overhead of having to think in two separate constructs (compared to simply using LinkedList throughout) is relatively small.
Tl;dr: For `Map` vs `HashMap`, nope.
---
And it's interesting that this question keeps appearing every now and then, yet I see no one ever talking about AbstractMap, as if it doesn't even exist; and indeed, it doesn't need to. If only `Map` were 10 characters longer and `HashMap` 10 characters shorter, no one would talk about using `Map` for private code either, just as no one talks about `AbstractMap`: those extra keystrokes would make people realize how obviously redundant `Map obj = new HashMap()` is for private code. It reminds me of developers who show up with **an interface for every single method and every single combination of it** ..a tale for another day..
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
A programmer doing this is probably just following the best practice of using interface types instead of concrete types; however, here there is no real advantage to doing that, because the field is private, as you noted. (In fact, it is possible that this results in a minor performance penalty when using the field later on.)
Any implementation change of the two lines of code in question (the line that declares and the line that initializes) will be versioned together, so it would be easy to change the type to something else as needed.
This is an area where it might be nice to be able to declare fields using `var` (C# anyway).
Using `var` keeps the most possible amount of type information, a local variable declared that way: `var employees = new HashMap()` would be a reference to a `HashMap` instance, that also supports the `Map` interface, without actually having to violate best practice of using interfaces for declarations.
See also: [Does the best practice of 'programming to interfaces' apply to local variables?](https://stackoverflow.com/q/10627750/471129). As you can read for yourself, there are opinions in both directions on this. I think some fear that if programmers don't do this for locals, they will forget to follow this best practice when it really counts, such as for public method parameter & return value types. YMMV.
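For what it's worth, Java 10+ offers the same thing for locals via `var` (still not for fields); a minimal sketch with illustrative names:

```java
import java.util.HashMap;
import java.util.Map;

public class VarDemo {
    public static void main(String[] args) {
        // 'var' infers the concrete type HashMap, which of course still implements Map.
        var employees = new HashMap<String, String>();
        employees.put("e1", "Alice");

        System.out.println(employees.getClass().getSimpleName()); // HashMap
        System.out.println(employees instanceof Map);             // true
    }
}
```

The full concrete type is retained for the local, without having to write it twice.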
|
The general answer is this:
With `HashMap obj = new HashMap()`, the writer is required to think about HashMap.
With `Map obj = new HashMap()`, he is required to think about both Map and HashMap.
Does the scenario justify the added cost? What's the plot?
(The rookies who go *"yay now we don't have to think about the class!"* ship with additional bugs in their programs.)
**Does the added cost result in savings?**
---
In the case of (`java.util.`) `Map` vs `HashMap`, it is unequivocally no. `Map` is so big and `HashMap` so small, that you have nothing to save but much overhead to lose if you use both of them within private code.
In the general case of `A` vs `B`, if they have a significant difference in size and/or there is a significant amount of private code using `A` instead of `B`, then by employing both you gain to the extent of their difference and the amount of code using it.
For example, `java.util.Queue` vs `.LinkedList`:
```
private LinkedList/Queue obj = new LinkedList();
obj. //..
```
Would it be better to leave the variable as LinkedList or to use Queue?
With one line of usage, it can go both ways. But it quickly gets obvious which is a better choice when the amount of code increases:
```
private LinkedList/Queue obj = new LinkedList();
obj. //..
//..
//..
obj. //..
//..
//..
obj. //..
//..
//..
obj. //..
//..
//..
obj. //..
//..
```
The usage could grow so much that you'd need to factor it into its own function:
```
obj. //..
//..
//..
obj. //..
//..
//..
Function1(obj);
//..
//..
}
private void Function1(Queue obj){
//..
//..
obj. //..
//..
//..
}
```
..or even multiple functions:
```
private void Function1(Queue obj){
//..
//..
obj. //..
//..
//..
Function2(obj);
//..
Function3(obj);
//..
Function4(obj);
//..
}
```
When writing all of the code in those functions, the writer never once has to think in terms of LinkedList. The only places where it's required to think in terms of LinkedList are the call sites of those functions.
Since the size difference between Queue and LinkedList is significant and the usage of Queue here plentiful, the overhead of having to think in two separate constructs (compared to simply using LinkedList throughout) is relatively small.
TL;DR: For `Map` vs `HashMap`, nope.
---
And it's interesting that this question reappears every now and then, yet I never see anyone talking about AbstractMap, as if it doesn't even exist; and indeed it doesn't need to. If only `Map` were 10 characters longer and `HashMap` 10 characters shorter, no one would be talking about using `Map` for private code either, just as no one talks about `AbstractMap`: those extra keystrokes would make people realize the obvious, that `Map obj = new HashMap()` for private code is senselessly redundant. It reminds me of developers who show up with **an interface for every single method and every single combination of them** ..a tale for another day..
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
It makes a great deal of sense if you assign a value to them from outside the class. Ie.
```
public class Basket
{
private IPurchaseService _pServ;
public Basket(IPurchaseService pServ)
{
_pServ = pServ;
}
}
```
|
The general answer is this:
With `HashMap obj = new HashMap()`, the writer is required to think about HashMap.
With `Map obj = new HashMap()`, he is required to think about both Map and HashMap.
Does the scenario justify the added cost? What's the plot?
(The rookies who go *"yay now we don't have to think about the class!"* ship with additional bugs in their programs.)
**Does the added cost result in savings?**
---
In the case of (`java.util.`) `Map` vs `HashMap`, it is unequivocally no. `Map` is so big and `HashMap` so small, that you have nothing to save but much overhead to lose if you use both of them within private code.
In the general case of `A` vs `B`, if they have a significant difference in size and/or there is a significant amount of private code using `A` instead of `B`, then by employing both you gain to the extent of their difference and the amount of code using it.
For example, `java.util.Queue` vs `.LinkedList`:
```
private LinkedList/Queue obj = new LinkedList();
obj. //..
```
Would it be better to leave the variable as LinkedList or to use Queue?
With one line of usage, it can go both ways. But it quickly gets obvious which is a better choice when the amount of code increases:
```
private LinkedList/Queue obj = new LinkedList();
obj. //..
//..
//..
obj. //..
//..
//..
obj. //..
//..
//..
obj. //..
//..
//..
obj. //..
//..
```
The usage could grow so much that you'd need to factor it into its own function:
```
obj. //..
//..
//..
obj. //..
//..
//..
Function1(obj);
//..
//..
}
private void Function1(Queue obj){
//..
//..
obj. //..
//..
//..
}
```
..or even multiple functions:
```
private void Function1(Queue obj){
//..
//..
obj. //..
//..
//..
Function2(obj);
//..
Function3(obj);
//..
Function4(obj);
//..
}
```
When writing all of the code in those functions, the writer never once has to think in terms of LinkedList. The only places where it's required to think in terms of LinkedList are the call sites of those functions.
Since the size difference between Queue and LinkedList is significant and the usage of Queue here plentiful, the overhead of having to think in two separate constructs (compared to simply using LinkedList throughout) is relatively small.
TL;DR: For `Map` vs `HashMap`, nope.
---
And it's interesting that this question reappears every now and then, yet I never see anyone talking about AbstractMap, as if it doesn't even exist; and indeed it doesn't need to. If only `Map` were 10 characters longer and `HashMap` 10 characters shorter, no one would be talking about using `Map` for private code either, just as no one talks about `AbstractMap`: those extra keystrokes would make people realize the obvious, that `Map obj = new HashMap()` for private code is senselessly redundant. It reminds me of developers who show up with **an interface for every single method and every single combination of them** ..a tale for another day..
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
It makes a great deal of sense if you assign a value to them from outside the class. Ie.
```
public class Basket
{
private IPurchaseService _pServ;
public Basket(IPurchaseService pServ)
{
_pServ = pServ;
}
}
```
|
A programmer doing this is probably just following the best practice of using interface types instead of concrete types; however, here there is no real advantage to doing that, because the field is private, as you noted. (In fact, it is possible that this results in a minor performance penalty when using the field later on.)
Any implementation change of the two lines of code in question (the line that declares and the line that initializes) will be versioned together, so it would be easy to change the type to something else as needed.
This is an area where it might be nice to be able to declare fields using `var` (C# anyway).
Using `var` keeps the most possible amount of type information, a local variable declared that way: `var employees = new HashMap()` would be a reference to a `HashMap` instance, that also supports the `Map` interface, without actually having to violate best practice of using interfaces for declarations.
See also: [Does the best practice of 'programming to interfaces' apply to local variables?](https://stackoverflow.com/q/10627750/471129). As you can read for yourself, there are opinions in both directions on this. I think some fear that if programmers don't do this for locals, they will forget to follow this best practice when it really counts, such as for public method parameter & return value types. YMMV.
|
317,832
|
For the fields that you have as encapsulated members of a class, does it make sense to declare their type to be of the interface that you are using? For example:
```
public class PayrollInfo
{
private Map<String, Employee> employees;
public PayrollInfo()
{
employees = new HashMap<>();
}
}
```
This gives you maximum flexibility as you can change the implementation of the class easily. However, in my experience programmers normally declare variables to be of the exact type of the class that will be assigned to them, e.g. HashMap in this case. Why is this?
|
2016/05/07
|
[
"https://softwareengineering.stackexchange.com/questions/317832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124966/"
] |
Always have extensibility in mind when coding. Using interfaces for private fields can be a time-saver later on when you...
1. want to use a setter for the field. This way the user of the setter only needs to conform to the interface.
2. want to change the implementation inside the class. Imagine one day you find a more performant Map implementation than HashMap. When refactoring, you only need to change the constructor; no worries about any usage of implementation-specific methods.
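Point 1 in code, as a minimal hypothetical sketch (names are made up): because the setter accepts the `Map` interface, callers are free to supply any implementation, e.g. a `TreeMap`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class Registry {
    private Map<String, Integer> scores = new HashMap<>();

    // Callers only need to supply *some* Map; the implementation is their choice.
    public void setScores(Map<String, Integer> scores) {
        this.scores = scores;
    }

    public int size() {
        return scores.size();
    }

    public static void main(String[] args) {
        Registry r = new Registry();
        Map<String, Integer> sorted = new TreeMap<>();
        sorted.put("a", 1);
        r.setScores(sorted);          // a TreeMap is fine -- the field is typed as Map
        System.out.println(r.size()); // prints 1
    }
}
```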
---
My rule of a thumb:
**If you need some features of the concrete implementation, use it, otherwise stick to the interface.**
Note that this applies to built-in types and third-party libs only. For your own code you need to know where your API to the outside world is. To save myself from changing both interfaces and implementations, I use implementations only, until I know I will gain some benefit from extracting interfaces from classes.
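A hedged illustration of that rule of thumb (class names are invented for the example): here the code needs `lastEntry()`, which plain `Map` doesn't offer, so the field keeps the concrete `TreeMap` type instead of the interface:

```java
import java.util.TreeMap;

public class Leaderboard {
    // We need lastEntry(), a TreeMap/NavigableMap feature that Map lacks,
    // so per the rule of thumb the field stays concrete.
    private final TreeMap<Integer, String> byScore = new TreeMap<>();

    void record(int score, String player) {
        byScore.put(score, player);
    }

    String best() {
        return byScore.lastEntry().getValue(); // not available on java.util.Map
    }

    public static void main(String[] args) {
        Leaderboard lb = new Leaderboard();
        lb.record(10, "Ann");
        lb.record(25, "Bob");
        System.out.println(lb.best()); // prints Bob
    }
}
```

Had the class only ever called `put`/`get`, declaring the field as `Map` (or sticking to the concrete type throughout, per the answer above) would both be fine; the concrete type is *required* only because of the implementation-specific method.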
|
In addition to Ewan's dependency injection scenario.
If you use The Separated Interface Pattern -
>
> The pattern prescribes that you use different packages for the interface and any of its
> implementations. Only the definition of the interface is known to any
> packages that need to consume the functionality.
>
>
>
in such a situation the client/consumer will not have a reference to the concrete implementation at all.
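A rough sketch of the Separated Interface idea, collapsed into one file so it runs; in practice `Notifier` would live in an api package and `EmailNotifier` in a separate implementation package, with the consumer importing only the former (all names here are illustrative):

```java
// Consumer: compiles against the Notifier interface only.
public class Client {
    public static void main(String[] args) {
        // In real code the concrete instance would be obtained via a factory or DI,
        // so this file would not even mention EmailNotifier.
        Notifier n = new EmailNotifier();
        System.out.println(n.send("hi"));
    }
}

// Would live in e.g. com.example.api -- the only type consumers ever see.
interface Notifier {
    String send(String msg);
}

// Would live in e.g. com.example.impl, unknown to consumers.
class EmailNotifier implements Notifier {
    public String send(String msg) {
        return "email:" + msg;
    }
}
```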
|
55,757,933
|
We have an application that gets created using helm. Every time we do a release it creates a service with the release name in it. How do we handle this in alb-ingress if the service keeps changing?
ex: for alb ingress(under kops) I have below rule
```
- host: pluto.example.com
paths:
- path: /
backend:
serviceName: pluto-service
servicePort: 8080
```
With a different helm release, pluto-service will have a new name. How do we handle the ingress?
|
2019/04/19
|
[
"https://Stackoverflow.com/questions/55757933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4178978/"
] |
You can also try the `--reuse-values` flag with the `helm upgrade` command. This will reuse the last release's values.
|
You can create a service in helm where you pass a different value for the name of the service; most likely you use the release name right now. For example, create a helm chart for your application where you pass the name as a value:
```
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.nameOverride }}
spec:
type: NodePort
ports:
- name: http-service
targetPort: 5000
protocol: TCP
port: 80
selector:
app: <MyApp>
```
And in the values.yaml of the chart you can specify the name of your service: `nameOverride: MyService`
|
55,757,933
|
We have an application that gets created using helm. Every time we do a release it creates a service with the release name in it. How do we handle this in alb-ingress if the service keeps changing?
ex: for alb ingress(under kops) I have below rule
```
- host: pluto.example.com
paths:
- path: /
backend:
serviceName: pluto-service
servicePort: 8080
```
With a different helm release, pluto-service will have a new name. How do we handle the ingress?
|
2019/04/19
|
[
"https://Stackoverflow.com/questions/55757933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4178978/"
] |
Is the ingress declared with helm too?
If so, and if the service uses `{{ .Release.Name }}-service` as its name, you can also use `{{ .Release.Name }}-service` as the ingress' service name. You can also write your own tpl function (and add it to the `_helpers.tpl` file) to determine the service name.
If not, maybe you should ...
|
You can create a service in helm where you pass a different value for the name of the service; most likely you use the release name right now. For example, create a helm chart for your application where you pass the name as a value:
```
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.nameOverride }}
spec:
type: NodePort
ports:
- name: http-service
targetPort: 5000
protocol: TCP
port: 80
selector:
app: <MyApp>
```
And in the values.yaml of the chart you can specify the name of your service: `nameOverride: MyService`
|
42,356,596
|
I implemented a simple logistic regression. Before running the training algorithm, I created a placeholder for my weights where I initialized all the weights to 0...
```
W = tf.Variable(tf.zeros([784, 10]))
```
After initializing all my variables correctly, the logistic regression is implemented (which I've tested and runs correctly)...
```
for epoch in range(training_epochs):
avg_cost = 0
total_batch = int(mnist.train.num_examples/batch_size)
# loop over all batches
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
# compute average loss
avg_cost += c / total_batch
# display logs per epoch step
if (epoch + 1) % display_step == 0:
print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
```
My issue is, I need to extract the weights used in the model. I used the following for my model...
```
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
```
I tried extracting the following way...
```
var = [v for v in tf.trainable_variables() if v.name == "Variable:0"][0]
print(sess.run(var[0]))
```
I thought that the trained weights would be located in `tf.trainable_variables()`, however when I run the `print` function, I get an array of zeros.
What I want is all the sets of weights. But for some reason I am getting arrays of zeros instead of the actual weights of the classifier.
|
2017/02/21
|
[
"https://Stackoverflow.com/questions/42356596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4333347/"
] |
The following works independently of which Python version you are using:
```
os.execv(sys.executable,[sys.executable.split("/")[-1]]+sys.argv)
```
|
I feel like I may be misunderstanding something, but I think you just want to pass `'python2.7'` as the first element of your list of arguments instead of `'python'`. Or if you're not sure what the executable name will be, pass `sys.executable`.
|
47,689,531
|
How are events and their phases triggered?
For example, objects are created in a tree with their handlers, and if you click on one button, all events attached along that branch of the tree are triggered.
I have made a program which has different sub-divs, each having one paragraph and a button.
I have come across different explanations of event bubbling, like [event bubbling explained](https://javascript.info/bubbling-and-capturing)
|
2017/12/07
|
[
"https://Stackoverflow.com/questions/47689531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8655469/"
] |
Basic Example
In HTML, elements are nested inside other elements, like so:
```
<div id="outer">
  <h1 id="inner">Inner</h1>
</div>
```
When an event is triggered on an inner element, it also reaches every outer element, as you might have read in the post you linked. This is bubbling. Event flow can be controlled by
```
var element = document.getElementById('inner');
element.addEventListener("event",yourFunction, false);
```
Here, `false` means the listener is registered for the bubbling phase: the event flows from the inner element (on which it is fired) out to the outer elements. This is the default in all browsers.
To stop bubbling you can use `stopPropagation()`.
|
```html
<html>
<head>
<title>Event Bubble and Phases</title>
<script>
function f1() {
console.log("f1");
console.log("Current Element " + event.currentTarget.id);
console.log("Target "+event.target.id);
console.log("Event Phase "+event.eventPhase);
console.log("Bubble Status "+event.bubbles);
console.log("Cancelable Status "+event.cancelable);
}
function f2() {
console.log("f2");
console.log("Current Element " + event.currentTarget.id);
console.log("Target "+event.target.id);
console.log("Event Phase "+event.eventPhase);
console.log("Bubble Status "+event.bubbles);
console.log("Cancelable Status "+event.cancelable);
}
function f3() {
console.log("f3");
console.log("Current Element " + event.currentTarget.id);
console.log("Target "+event.target.id);
console.log("Event Phase "+event.eventPhase);
console.log("Bubble Status "+event.bubbles);
console.log("Cancelable Status "+event.cancelable);
}
function f4() {
console.log("f4");
console.log("Current Element " + event.currentTarget.id);
console.log("Target "+event.target.id);
console.log("Event Phase "+event.eventPhase);
console.log("Bubble Status "+event.bubbles);
console.log("Cancelable Status "+event.cancelable);
}
</script>
</head>
<body>
<div id="div1" onclick="f1()" style="background-color: brown; color: aliceblue;padding:20px">
This is div at level1
<div id="div2" onclick="f2()" style="background-color: burlywood;padding:20px">
This is div at level2
<p id="paragraph1" onclick="f3()" style="background-color: appworkspace;padding:20px">
<button id="button1" onclick="f4()" style="background-color: beige;padding:20px">
Click
</button>
</p>
</div>
</div>
</body>
</html>
```
|
64,288,300
|
I'm trying to parse a json object that an application (AWS lambda) wrote in a log file, but this json has an stringify object nested. Something like that:
```
input = '{"object":"{\"base\":\"brn\",\"scope\":\"all\",\"channel\":\"sve\",\"service\":\"getAssociatesCards\",\"entity\":\"consolidate\",\"attribute\":\"cRelId\",\"qualifier\":\"45430608\"}"}'
```
And when I try to parse it, it fails:
```
JSON.parse(input)
```
With this error:
```
SyntaxError: Unexpected token b in JSON at position 13
```
What can I do to fix this and get a nice JSON object?
|
2020/10/09
|
[
"https://Stackoverflow.com/questions/64288300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3050002/"
] |
If you have control over the definition of the JSON like in the code in your question, declare it with `String.raw` so the backslashes are interpreted as literal backslashes.
The inner `object` property contains more JSON, so you can parse that as well if you want:
```js
const input = String.raw`{"object":"{\"base\":\"brn\",\"scope\":\"all\",\"channel\":\"sve\",\"service\":\"getAssociatesCards\",\"entity\":\"consolidate\",\"attribute\":\"cRelId\",\"qualifier\":\"45430608\"}"}`;
const parsed = JSON.parse(input);
console.log(JSON.parse(parsed.object));
```
|
If you are reading this from some log file and you cannot control how it is defined, you could always strip out the inner json and parse it on its own.
```js
var input = '{"object":"{\"base\":\"brn\",\"scope\":\"all\",\"channel\":\"sve\",\"service\":\"getAssociatesCards\",\"entity\":\"consolidate\",\"attribute\":\"cRelId\",\"qualifier\":\"45430608\"}"}';
var innerJSON = input.substring(11, input.length - 2);
console.log({
object: JSON.parse(innerJSON)
});
```
|
934,178
|
I use Cygwin as a POSIX environment, and I have a bunch of mapped network drives on my windows computer (set up by a sysadmin, not by me). I have found that all the network drives which are mapped on windows are automatically mounted in Cygwin. I can use `df` to see all the drives, and their mount points on `/cygdrive`. What I don't know is how. I looked at `/etc/fstab` and I didn't find anything there.
When and by whom is mount called?
Currently I'm having unrelated problems with permissions on certain data on these network drives, and the Internet told me to change how they are mounted, but I can't seem to find that information.
|
2015/06/29
|
[
"https://superuser.com/questions/934178",
"https://superuser.com",
"https://superuser.com/users/463991/"
] |
### How does cygwin mount mapped network drives
The mount table is set up somewhere in the bowels of `cygwin.dll` during cygwin startup.
Without looking at the source code I can't give a more precise answer.
You can `umount` and then remount using `mount` with different options if you need to change the default options.
`mount` will display the current mount table (with the mount options used).
Note:
>
> The mount program is used to map your drives and shares onto Cygwin's simulated POSIX directory tree, much like as is done by mount commands on typical UNIX systems. However, in contrast to mount points given in /etc/fstab, mount points created or changed with mount are not persistent. They disappear immediately after the last process of the current user exited.
>
>
>
Edit `/etc/fstab` or `/etc/fstab.d/$USER` to make permanent changes.
---
### Further reading
* [fstab](https://cygwin.com/ml/cygwin/2008-07/msg00293/fstab) for examples.
* [mount](https://cygwin.com/cygwin-ug-net/using-utils.html#mount) for more information about `mount`.
* [Special filenames](https://cygwin.com/cygwin-ug-net/using-specialnames.html) for a list of filename that are read by Cygwin before the mount table has been established.
|
Two docs pages describe Cygwin's mount mechanic:
* <https://cygwin.com/cygwin-ug-net/using.html> - The Cygwin Mount Table
* <https://cygwin.com/cygwin-ug-net/using-specialnames.html> - Special filenames
An important quotation:
>
> The /etc/fstab file is used to map Win32 drives and network shares into Cygwin's internal POSIX directory tree.
>
>
>
>
> The only two exceptions are the file system types `cygdrive` and `usertemp`. The `cygdrive` type is used to set the `cygdrive` prefix.
>
>
>
|
15,533,929
|
I will preface with Table Structures:
revshare r : contains info for a purchase including orderNo, sales, commission, itemid, EventDate
Products p: contains information around a product including a PID (Product ID) and is used to join to the Merchants table to get Merchant information.
Merchants m: contains information about the merchant the product was purchased from, including MerchantName
**Question**
I am trying to create a MySQL query to pull top 10 itemid's ordered by sum of commission for a given month. The entire data set I would like to get is from 2011-2013 so each year would populate 120 records (10 per month).
I created a query to pull one month's worth of data and planned on using a UNION ALL to create a record list with 10 records from each query (each individual query representing a month's top 10 itemids).
**Query1**
This query accurately returns me the top 10 itemid's based on total commission of those items in the given month period.
```
SELECT
m.MerchantName,
Count(r.OrderNo),
sum(r.commission)
FROM revshare r
LEFT JOIN Products p ON r.itemid = p.PID
LEFT JOIN Merchants m ON p.MID = m.MID
WHERE r.EventDate between '2011-01-01' and '2011-01-31'
GROUP by r.itemid
ORDER by 3 DESC LIMIT 10
```
When I try to UNION this query with another so that I can get records for the next month, between '2011-02-01' and '2011-02-31', I get an error: "ERROR: Incorrect usage of UNION and ORDER BY". I know this is because apparently you cannot use ORDER BY on any of a set of UNION'd queries but the last. I could pull the entire data set and then use Excel or Pentaho BI to show only the top 10, but that is not efficient given the huge data sets in the revshare table.
Below is the query with the UNION ALL that doesn't work. Does anyone have any better method of pulling this data?
Any help is greatly appreciated.
Regards,
-Chris
**Query 2 (doesn't work because of the ORDER BY statement)**
```
SELECT
m.MerchantName,
Count(r.OrderNo),
sum(r.commission)
FROM revshare r
LEFT JOIN Products p ON r.itemid = p.PID
LEFT JOIN Merchants m ON p.MID = m.MID
WHERE r.EventDate between '2011-01-01' and '2011-01-31'
GROUP by r.itemid
ORDER by 3 DESC LIMIT 10
UNION ALL
SELECT
m.MerchantName,
Count(r.OrderNo),
sum(r.commission)
FROM revshare r
LEFT JOIN Products p ON r.itemid = p.PID
LEFT JOIN Merchants m ON p.MID = m.MID
WHERE r.EventDate between '2011-02-01' and '2011-02-31'
GROUP by r.itemid
ORDER by 3 DESC LIMIT 10
```
|
2013/03/20
|
[
"https://Stackoverflow.com/questions/15533929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2192017/"
] |
Ok, try this....
```
SELECT * FROM (
SELECT
m.MerchantName,
Count(r.OrderNo),
sum(r.commission)
FROM
revshare r
LEFT JOIN Products p ON r.itemid = p.PID
LEFT JOIN Merchants m ON p.MID = m.MID
WHERE
r.EventDate between '2011-01-01' and '2011-01-31'
GROUP by
r.itemid
ORDER by
3 DESC LIMIT 10
) AS RESULT1
UNION ALL
SELECT * FROM (
SELECT
m.MerchantName,
Count(r.OrderNo),
sum(r.commission)
FROM
revshare r
LEFT JOIN Products p ON r.itemid = p.PID
LEFT JOIN Merchants m ON p.MID = m.MID
WHERE
r.EventDate between '2011-02-01' and '2011-02-31'
GROUP by
r.itemid
ORDER by
3 DESC LIMIT 10
) AS RESULT2
```
|
Since you already started down the path of `union`ing queries together, here is the right approach:
```
select t.*
from ((SELECT '2011-01' as yyyymm, m.MerchantName, Count(r.OrderNo) as cnt, sum(r.commission) as comm
FROM revshare r LEFT JOIN
Products p
ON r.itemid = p.PID LEFT JOIN
Merchants m
ON p.MID = m.MID
WHERE r.EventDate between '2011-01-01' and '2011-01-31'
GROUP by r.itemid
ORDER by comm DESC
LIMIT 10
) union all
(SELECT '2011-02' as yyyymm, m.MerchantName, Count(r.OrderNo) as cnt, sum(r.commission) as comm
FROM revshare r LEFT JOIN
Products p
ON r.itemid = p.PID LEFT JOIN
Merchants m
ON p.MID = m.MID
WHERE r.EventDate between '2011-02-01' and '2011-02-28'
GROUP by r.itemid
ORDER by comm DESC LIMIT 10
) union all
. . .
) t
order by 1, comm desc
```
In other words, you need to use subqueries for the `union all`. Note that I also added in `yyyymm` to identify the month.
|
752
|
Over the last year I have had to work on several commercial themes for clients, and in most cases they were so poorly coded it ended up costing money to re-engineer, it has been frustrating to say the least for both me and the clients.
I wanted to create a resource on what commercial themes (mainly theme shops) to avoid and why, or possibly good/bad experiences.
Would such a question be appropriate, I don't want to really toss anyone under the bus but it might be helpful and improve the situation.
Thoughts?
ps. I do understand creating a theme from scratch is best, but sometimes this is just a reality.
|
2011/09/30
|
[
"https://wordpress.meta.stackexchange.com/questions/752",
"https://wordpress.meta.stackexchange.com",
"https://wordpress.meta.stackexchange.com/users/1509/"
] |
I can't remember any network-wide *do/don't do this* rules about such questions, so this is mostly my personal take.
On the outside this seems like a simple flip of the *best of X* community wiki concept, which I am fond of. However, I think there are major nuances that make the negative approach much less viable.
It's inflammatory
-----------------
While there is little impact in praising someone for something, criticizing comes with a much larger responsibility. There is no value in a collection of simplistic rants; such write-ups must be bullet-proof and up-to-date.
What if it's been a year and everything got fixed? Who is going to follow up and clean up the answers? What if the site gets sued for badmouthing some business (unlikely, but not impossible)?
It's inefficient
----------------
Simply put, that would be *enumerating badness*, which is a poor approach to anything. Starting with a small selection of good choices is vastly safer and more productive than trying to whittle down all possibilities by subtracting known bad picks.
Ten best themes (not happening objectively, for the sake of example) is an immensely useful list. Ten worst themes is a useless drop in a sea of sub-par products.
So
--
Where such content could work is on the site's blog. Credible top users taking themes apart to provide excruciatingly honest feedback is an interesting format, and some blogs around do dabble in it.
Only we still don't have a blog. :( A lot of top users seem either highly busy or discouraged with the site in recent months. And frankly I'd prefer to see us deal with the dangerously growing pile of unanswered questions first.
|
I agree with Rarst on several points, but I just want to drive one home all the more.
Timelessness
============
StackExchange sites are valuable because much of their content is *timeless*. Meaning you can find an answer in Google results 6 months from now and it will still be relevant. This is also why users are encouraged to return and offer updated answers to old questions as technology/practices continue to evolve.
An "avoid X retailer because of Y" thread falls on its face in this paradigm because the company can change. A bad developer today might get better and become the standard against which all are measured tomorrow. Likewise, a company with shady practices now might turn over a new leaf and become legitimate in the future.
So I would advise ***strongly*** against this kind of question. Instead, take one of two approaches:
* Rather than asking who people should avoid, ask who they should look to for quality work
* Curate your own - *private* - list of avoidance-worthy shops on your own - *private* - blog
|
58,095
|
I just had somebody edit my answer [specifically so they could change their *upvote* to a *downvote*](https://stackoverflow.com/questions/3302645/optimization-problem-finding-a-maximum). Is this a valid tactic? It seems incredibly disrespectful and rude.
This has *seriously* pissed me off. I've been answering this person's questions over the course of the past 12 hours, and furthermore he's completely *wrong* about his reasoning.
|
2010/07/22
|
[
"https://meta.stackexchange.com/questions/58095",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/136258/"
] |
[Yes, it is](https://meta.stackexchange.com/questions/6460/if-you-just-witnessed-tactical-downvoting-is-it-a-reportable-offense/18046#18046).
The voting "window" thing was implemented to allay fears over "tactical down-voting" - adding and removing votes to affect the ranking of answers other than your own in order to maintain a high position in the list of answers. If you want to down-vote a post for legitimate reasons, editing it is perfectly acceptable (assuming you're comfortable with the resulting loss of anonymity).
As for being disrespectful / rude... People are often disrespectful and rude. If it bothers you too much for you to bear, then avoid future interactions with them.
|
Why not?
Editing a post to be able to change your vote is the official way to do so, and if he really thought your answer deserved a downvote, he should be able to cast it.
If you are getting *seriously* pissed off because of downvotes, you are doing it wrong™.
Just leave and help somebody that will be glad for it and that won't get on your nerves.
|
58,095
|
I just had somebody edit my answer [specifically so they could change their *upvote* to a *downvote*](https://stackoverflow.com/questions/3302645/optimization-problem-finding-a-maximum). Is this a valid tactic? It seems incredibly disrespectful and rude.
This has *seriously* pissed me off. I've been answering this person's questions over the course of the past 12 hours, and furthermore he's completely *wrong* about his reasoning.
|
2010/07/22
|
[
"https://meta.stackexchange.com/questions/58095",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/136258/"
] |
>
> I just had somebody edit my answer
> specifically so they could change
> their upvote to a downvote. Is this a
> valid tactic? It seems incredibly
> disrespectful and rude.
>
>
>
Yes. He upvoted you because he initially agreed with your answer. Upon realizing that he disagreed, he wished to change it to a downvote. Assume that he never initially agreed, but instead disagreed the whole time. Either way, you receive a downvote.
>
> This has seriously pissed me off. I've
> been answering this person's questions
> over the course of the past 12 hours,
> and further more he's completely wrong
> about his reasoning.
>
>
>
Excellent. One of you will be enlightened soon. If it is he, then you will receive your upvote back. If it is you, well, there's not much we can do.
|
Why not?
Editing a post to be able to change your vote is the official way to do so, and if he really thought your answer deserved a downvote, he should be able to cast it.
If you are getting *seriously* pissed off because of downvotes, you are doing it wrong™.
Just leave and help somebody that will be glad for it and that won't get on your nerves.
|
58,095
|
I just had somebody edit my answer [specifically so they could change their *upvote* to a *downvote*](https://stackoverflow.com/questions/3302645/optimization-problem-finding-a-maximum). Is this a valid tactic? It seems incredibly disrespectful and rude.
This has *seriously* pissed me off. I've been answering this person's questions over the course of the past 12 hours, and furthermore he's completely *wrong* about his reasoning.
|
2010/07/22
|
[
"https://meta.stackexchange.com/questions/58095",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/136258/"
] |
This is an abuse of the editing system, caused by a problem with the voting system.
Editing should be used to **FIX** things.
|
Why not?
Editing a post to be able to change your vote is the official way to do so, and if he really thought your answer deserved a downvote, he should be able to cast it.
If you are getting *seriously* pissed off because of downvotes, you are doing it wrong™.
Just leave and help somebody that will be glad for it and that won't get on your nerves.
|
58,095
|
I just had somebody edit my answer [specifically so they could change their *upvote* to a *downvote*](https://stackoverflow.com/questions/3302645/optimization-problem-finding-a-maximum). Is this a valid tactic? It seems incredibly disrespectful and rude.
This has *seriously* pissed me off. I've been answering this person's questions over the course of the past 12 hours, and furthermore he's completely *wrong* about his reasoning.
|
2010/07/22
|
[
"https://meta.stackexchange.com/questions/58095",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/136258/"
] |
Why not?
Editing a post to be able to change your vote is the official way to do so, and if he really thought your answer deserved a downvote, he should be able to cast it.
If you are getting *seriously* pissed off because of downvotes, you are doing it wrong™.
Just leave and help somebody that will be glad for it and that won't get on your nerves.
|
>
> [someone edited] my answer [to] change their upvote to a downvote. Is this a valid tactic?
>
>
>
As [mentioned](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58101#58101) by [other](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58099#58099) [answerers](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58098#58098), this is the [only way to do it](https://meta.stackexchange.com/questions/6460/if-you-just-witnessed-tactical-downvoting-is-it-a-reportable-offense/18046#18046). It is valid and permissible.
>
> It seems incredibly disrespectful and rude.
>
>
>
It *seems* that way because it *implies* that they modified your answer to disagree with it. This is a red herring; they were doing what they had to do in order to downvote an answer they felt was wrong.
It is important to remember that downvotes are very minor in the scheme of things, and that they are meant to indicate one user's *opinion* about the correctness of your post. Downvotes are not about disrespect; they are about accuracy.
>
> This has seriously pissed me off. I've been answering this person's questions over the course of the past 12 hours,
>
>
>
Anytime you are angered by something, the best response is just to walk away. It's -2. Your rep is quite healthy, so the -2 will not affect you. Getting angrier will not lead either party to any revelations, you'll just come to hate each other more.
>
> and further more he's completely wrong about his reasoning.
>
>
>
See the last line of the previous paragraph. *But*, assuming that you are correct, and he *is* wrong, and you are willing to stay and *politely* try to convince him, then you should be aware that there are stubborn people who won't change their mind without something drastic. For instance, you could write code that tests both methods and post it somewhere he can access; that would prove your point, and you could do it politely.
|
58,095
|
I just had somebody edit my answer [specifically so they could change their *upvote* to a *downvote*](https://stackoverflow.com/questions/3302645/optimization-problem-finding-a-maximum). Is this a valid tactic? It seems incredibly disrespectful and rude.
This has *seriously* pissed me off. I've been answering this person's questions over the course of the past 12 hours, and furthermore he's completely *wrong* about his reasoning.
|
2010/07/22
|
[
"https://meta.stackexchange.com/questions/58095",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/136258/"
] |
[Yes, it is](https://meta.stackexchange.com/questions/6460/if-you-just-witnessed-tactical-downvoting-is-it-a-reportable-offense/18046#18046).
The voting "window" thing was implemented to allay fears over "tactical down-voting" - adding and removing votes to affect the ranking of answers other than your own in order to maintain a high position in the list of answers. If you want to down-vote a post for legitimate reasons, editing it is perfectly acceptable (assuming you're comfortable with the resulting loss of anonymity).
As for being disrespectful / rude... People are often disrespectful and rude. If it bothers you too much for you to bear, then avoid future interactions with them.
|
>
> [someone edited] my answer [to] change their upvote to a downvote. Is this a valid tactic?
>
>
>
As [mentioned](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58101#58101) by [other](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58099#58099) [answerers](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58098#58098), this is the [only way to do it](https://meta.stackexchange.com/questions/6460/if-you-just-witnessed-tactical-downvoting-is-it-a-reportable-offense/18046#18046). It is valid and permissible.
>
> It seems incredibly disrespectful and rude.
>
>
>
It *seems* that way because it *implies* that they modified your answer to disagree with it. This is a red herring; they were doing what they had to do in order to downvote an answer they felt was wrong.
It is important to remember that downvotes are very minor in the scheme of things, and that they are meant to indicate one user's *opinion* about the correctness of your post. Downvotes are not about disrespect; they are about accuracy.
>
> This has seriously pissed me off. I've been answering this person's questions over the course of the past 12 hours,
>
>
>
Anytime you are angered by something, the best response is just to walk away. It's -2. Your rep is quite healthy, so the -2 will not affect you. Getting angrier will not lead either party to any revelations, you'll just come to hate each other more.
>
> and further more he's completely wrong about his reasoning.
>
>
>
See the last line of the previous paragraph. *But*, assuming that you are correct, and he *is* wrong, and you are willing to stay and *politely* try to convince him, then you should be aware that there are stubborn people who won't change their mind without something drastic. For instance, you could write code that tests both methods and post it somewhere he can access; that would prove your point, and you could do it politely.
|
58,095
|
I just had somebody edit my answer [specifically so they could change their *upvote* to a *downvote*](https://stackoverflow.com/questions/3302645/optimization-problem-finding-a-maximum). Is this a valid tactic? It seems incredibly disrespectful and rude.
This has *seriously* pissed me off. I've been answering this person's questions over the course of the past 12 hours, and furthermore he's completely *wrong* about his reasoning.
|
2010/07/22
|
[
"https://meta.stackexchange.com/questions/58095",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/136258/"
] |
>
> I just had somebody edit my answer
> specifically so they could change
> their upvote to a downvote. Is this a
> valid tactic? It seems incredibly
> disrespectful and rude.
>
>
>
Yes. He upvoted you because he initially agreed with your answer. Upon realizing that he disagreed, he wished to change it to a downvote. Assume that he never initially agreed, but instead disagreed the whole time. Either way, you receive a downvote.
>
> This has seriously pissed me off. I've
> been answering this person's questions
> over the course of the past 12 hours,
> and further more he's completely wrong
> about his reasoning.
>
>
>
Excellent. One of you will be enlightened soon. If it is he, then you will receive your upvote back. If it is you, well, there's not much we can do.
|
>
> [someone edited] my answer [to] change their upvote to a downvote. Is this a valid tactic?
>
>
>
As [mentioned](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58101#58101) by [other](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58099#58099) [answerers](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58098#58098), this is the [only way to do it](https://meta.stackexchange.com/questions/6460/if-you-just-witnessed-tactical-downvoting-is-it-a-reportable-offense/18046#18046). It is valid and permissible.
>
> It seems incredibly disrespectful and rude.
>
>
>
It *seems* that way because it *implies* that they modified your answer to disagree with it. This is a red herring; they were doing what they had to do in order to downvote an answer they felt was wrong.
It is important to remember that downvotes are very minor in the scheme of things, and that they are meant to indicate one user's *opinion* about the correctness of your post. Downvotes are not about disrespect; they are about accuracy.
>
> This has seriously pissed me off. I've been answering this person's questions over the course of the past 12 hours,
>
>
>
Anytime you are angered by something, the best response is just to walk away. It's -2. Your rep is quite healthy, so the -2 will not affect you. Getting angrier will not lead either party to any revelations, you'll just come to hate each other more.
>
> and further more he's completely wrong about his reasoning.
>
>
>
See the last line of the previous paragraph. *But*, assuming that you are correct, and he *is* wrong, and you are willing to stay and *politely* try to convince him, then you should be aware that there are stubborn people who won't change their mind without something drastic. For instance, you could write code that tests both methods and post it somewhere he can access; that would prove your point, and you could do it politely.
|
58,095
|
I just had somebody edit my answer [specifically so they could change their *upvote* to a *downvote*](https://stackoverflow.com/questions/3302645/optimization-problem-finding-a-maximum). Is this a valid tactic? It seems incredibly disrespectful and rude.
This has *seriously* pissed me off. I've been answering this person's questions over the course of the past 12 hours, and furthermore he's completely *wrong* about his reasoning.
|
2010/07/22
|
[
"https://meta.stackexchange.com/questions/58095",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/136258/"
] |
This is an abuse of the editing system, caused by a problem with the voting system.
Editing should be used to **FIX** things.
|
>
> [someone edited] my answer [to] change their upvote to a downvote. Is this a valid tactic?
>
>
>
As [mentioned](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58101#58101) by [other](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58099#58099) [answerers](https://meta.stackexchange.com/questions/58095/is-it-valid-to-edit-a-post-specifically-to-downvote-it/58098#58098), this is the [only way to do it](https://meta.stackexchange.com/questions/6460/if-you-just-witnessed-tactical-downvoting-is-it-a-reportable-offense/18046#18046). It is valid and permissible.
>
> It seems incredibly disrespectful and rude.
>
>
>
It *seems* that way because it *implies* that they modified your answer to disagree with it. This is a red herring; they were doing what they had to do in order to downvote an answer they felt was wrong.
It is important to remember that downvotes are very minor in the scheme of things, and that they are meant to indicate one user's *opinion* about the correctness of your post. Downvotes are not about disrespect; they are about accuracy.
>
> This has seriously pissed me off. I've been answering this person's questions over the course of the past 12 hours,
>
>
>
Anytime you are angered by something, the best response is just to walk away. It's -2. Your rep is quite healthy, so the -2 will not affect you. Getting angrier will not lead either party to any revelations, you'll just come to hate each other more.
>
> and further more he's completely wrong about his reasoning.
>
>
>
See the last line of the previous paragraph. *But*, assuming that you are correct, and he *is* wrong, and you are willing to stay and *politely* try to convince him, then you should be aware that there are stubborn people who won't change their mind without something drastic. For instance, you could write code that tests both methods and post it somewhere he can access; that would prove your point, and you could do it politely.
|
41,326,289
|
I have the following rules for phone number data entry:
* For USA phones, it must follow format (999) 999-9999
* For non-USA phones, it must start with a + followed by any combination of digits, hyphens, and spaces (basically, much less strict than the USA format)
I have regex validation that ensures that the entered phone number matches these rules. I want to use a phone format mask that follows the above rules.
For example, the simple case of just allowing the specific USA format, I can do:
```
<label for="phone">Enter Phone Number:</label>
<input type="text" id="phone" name="phone">
<script type="text/javascript">
$(function () {
$('.usa-phone').mask("(999) 999-9999");
});
</script>
```
This would display something like this in the textbox when the user clicks to focus the phone textbox:
```
(___) ___-____
```
But how do I then allow for international phones that start with "+" sign in the same input field?
For example `+99 (99) 9999-9999`.
|
2016/12/26
|
[
"https://Stackoverflow.com/questions/41326289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4544312/"
] |
There are several solutions to this, including having the user pick from a drop-down list which country they are in, so you can set up the format accordingly. A second way would be to intercept the user's individual keystrokes and then change the format when you see +44 or +39 representing the UK or Italian number format.
There are several packages out there that implement this, and since there are hundreds of countries, many with more than one or two formats each, you probably should not try to implement this yourself, but instead use one of those packages.
I think I have used this one here: <https://github.com/albeebe/phoneformat.js>
in an earlier project.
Since you are looking for a jQuery solution, the Stripe library looks promising:
<https://github.com/stripe/jquery.mobilePhoneNumber>
If you are OK with building a bit of code yourself, you may be able to combine the keyboard processing explained in [How to change Phone number format in input as you type?](https://stackoverflow.com/questions/17980061/how-to-change-phone-number-format-in-input-as-you-type) with the core code from either of the other libraries.
So, to grossly over-simplify the logic, you would dynamically change the format of the field based on input in a keyup event, like:
```
$("#phone").keyup(function(x){
var plus = $(x.target).val();
if (plus[0] == '+')
$(x.target).mask("\+99 9999999999")
else
$(x.target).mask("999 999-9999")
})
```
|
You can use the mask option below to achieve the format you mentioned, i.e. `+99 (99) 9999-9999`:
`data-inputmask="'mask': ['+99 (99) 9999-9999']"`
You can use the above option along with your input element. I have created a fiddle for this.
**JS Fiddle Link:**<https://jsfiddle.net/5yh8q6xn/1/>
**HTML Code:**
```
<div class="form-group">
<label>Intl US phone mask:</label>
<div class="input-group">
<div class="input-group-addon">
<i class="fa fa-phone"></i>
</div>
<input id="intl" type="text" class="form-control" data-inputmask="'mask': ['+99 (99) 9999-9999']" data-mask>
<!--'999-999-9999 [x99999]', -->
</div>
<!-- /.input group -->
</div>
```
**JS Code:**
```
$(function(){
$("[data-mask]").inputmask();
});
```
**Other Options**
You can also prepend `0` in the input mask, as some users prefer prefixing 0 to the contact number along with `+`.
To prepend `0` to the number, just use the following in the input element instead:
`data-inputmask="'mask': ['+099 (99) 9999-9999']"`
Similarly, for different inputs (of different countries), you can mask your input accordingly.
I hope this is what you were looking for!
|
41,326,289
|
I have the following rules for phone number data entry:
* For USA phones, it must follow format (999) 999-9999
* For non-USA phones, it must start with a + followed by any combination of digits, hyphens, and spaces (basically, much less strict than the USA format)
I have regex validation that ensures that the entered phone number matches these rules. I want to use a phone format mask that follows the above rules.
For example, the simple case of just allowing the specific USA format, I can do:
```
<label for="phone">Enter Phone Number:</label>
<input type="text" id="phone" name="phone">
<script type="text/javascript">
$(function () {
$('.usa-phone').mask("(999) 999-9999");
});
</script>
```
This would display something like this in the textbox when the user clicks to focus the phone textbox:
```
(___) ___-____
```
But how do I then allow for international phones that start with "+" sign in the same input field?
For example `+99 (99) 9999-9999`.
|
2016/12/26
|
[
"https://Stackoverflow.com/questions/41326289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4544312/"
] |
You could add a button that swaps between the two masks. For instance:
```js
$(document).ready(function() {
$("#phone").inputmask("mask", {
"mask": "(999)999-999"
});
var i = 0;
$('.plus-button').click(function() {
if (i === 0) {
$("#phone").inputmask("mask", {
"mask": "+99 (99) 9999-9999"
});
i = 1;
} else {
$("#phone").inputmask("mask", {
"mask": "(999)999-999"
});
i = 0;
}
});
});
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://rawgit.com/RobinHerbots/jquery.inputmask/3.x/dist/jquery.inputmask.bundle.js"></script>
<label for="phone">Enter Phone Number:
<button type="button" class="plus-button">+</button>
</label>
<input type="text" id="phone" name="phone" data-mask>
```
|
There are several solutions to this, including having the user pick from a drop-down list which country they are in, so you can set up the format accordingly. A second way would be to intercept the user's individual keystrokes and then change the format when you see +44 or +39 representing the UK or Italian number format.
There are several packages out there that implement this, and since there are hundreds of countries, many with more than one or two formats each, you probably should not try to implement this yourself, but instead use one of those packages.
I think I have used this one here: <https://github.com/albeebe/phoneformat.js>
in an earlier project.
Since you are looking for a jQuery solution, the Stripe library looks promising:
<https://github.com/stripe/jquery.mobilePhoneNumber>
If you are OK with building a bit of code yourself, you may be able to combine the keyboard processing explained in [How to change Phone number format in input as you type?](https://stackoverflow.com/questions/17980061/how-to-change-phone-number-format-in-input-as-you-type) with the core code from either of the other libraries.
So, to grossly over-simplify the logic, you would dynamically change the format of the field based on input in a keyup event, like:
```
$("#phone").keyup(function(x){
var plus = $(x.target).val();
if (plus[0] == '+')
$(x.target).mask("\+99 9999999999")
else
$(x.target).mask("999 999-9999")
})
```
|
41,326,289
|
I have the following rules for phone number data entry:
* For USA phones, it must follow format (999) 999-9999
* For non-USA phones, it must start with a + followed by any combination of digits, hyphens, and spaces (basically, much less strict than the USA format)
I have regex validation that ensures that the entered phone number matches these rules. I want to use a phone format mask that follows the above rules.
For example, the simple case of just allowing the specific USA format, I can do:
```
<label for="phone">Enter Phone Number:</label>
<input type="text" id="phone" name="phone">
<script type="text/javascript">
$(function () {
$('.usa-phone').mask("(999) 999-9999");
});
</script>
```
This would display something like this in the textbox when the user clicks to focus the phone textbox:
```
(___) ___-____
```
But how do I then allow for international phones that start with "+" sign in the same input field?
For example `+99 (99) 9999-9999`.
|
2016/12/26
|
[
"https://Stackoverflow.com/questions/41326289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4544312/"
] |
You could add a button that swaps between the two masks. For instance:
```js
$(document).ready(function() {
$("#phone").inputmask("mask", {
"mask": "(999)999-999"
});
var i = 0;
$('.plus-button').click(function() {
if (i === 0) {
$("#phone").inputmask("mask", {
"mask": "+99 (99) 9999-9999"
});
i = 1;
} else {
$("#phone").inputmask("mask", {
"mask": "(999)999-999"
});
i = 0;
}
});
});
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://rawgit.com/RobinHerbots/jquery.inputmask/3.x/dist/jquery.inputmask.bundle.js"></script>
<label for="phone">Enter Phone Number:
<button type="button" class="plus-button">+</button>
</label>
<input type="text" id="phone" name="phone" data-mask>
```
|
You can use the mask option below to achieve the format you mentioned, i.e. `+99 (99) 9999-9999`:
`data-inputmask="'mask': ['+99 (99) 9999-9999']"`
You can use the above option along with your input element. I have created a fiddle for this.
**JS Fiddle Link:**<https://jsfiddle.net/5yh8q6xn/1/>
**HTML Code:**
```
<div class="form-group">
<label>Intl US phone mask:</label>
<div class="input-group">
<div class="input-group-addon">
<i class="fa fa-phone"></i>
</div>
<input id="intl" type="text" class="form-control" data-inputmask="'mask': ['+99 (99) 9999-9999']" data-mask>
<!--'999-999-9999 [x99999]', -->
</div>
<!-- /.input group -->
</div>
```
**JS Code:**
```
$(function(){
$("[data-mask]").inputmask();
});
```
**Other Options**
You can also prepend `0` in the input mask, as some users prefer prefixing 0 to the contact number along with `+`.
To prepend `0` to the number, just use the following in the input element instead:
`data-inputmask="'mask': ['+099 (99) 9999-9999']"`
Similarly, for different inputs (of different countries), you can mask your input accordingly.
I hope this is what you were looking for!
|
54,300,543
|
**Scenario**
I have the following code to check how many times a certain value appears in a column. Here poRange is a range defined earlier and poValueCheck is a value defined earlier by the program.
`iVal = Application.WorksheetFunction.CountIf(Range(poRange), Trim(poValueCheck) & "*")`
**Problem**
Everything works fine except when I need to search for the occurrence of a number in that range. If I search for a pure number that exists in poRange, iVal comes back 0 even though a matching number exists.
Anyone can help?
|
2019/01/22
|
[
"https://Stackoverflow.com/questions/54300543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9151274/"
] |
I am responding to my own question. I found what is causing the issue in my code. In fact, I started using `npm agenda`, which uses `MongoDB`, for scheduling some tasks in my application. I later switched to `npm bull`, which is using `Redis`. I created the queue instance like the following.
```js
const Queue = require("bull");
const emailQueue = new Queue("emails");
```
while it should be
```js
const Queue = require("bull");
const emailQueue = new Queue("emails", {
redis: { port: 6379, host: "redis-server" }
});
```
Because I did not specify the Redis settings for the module, it was not able to connect to the server. I totally forgot that I had another module using Redis. The reason the app seemed to be working is that `express` connected to Redis while `bull` did not. I would have figured that out if I had tried to use a feature that `bull` was part of. I also simplified my `Dockerfile` and `docker-compose.yml` files to include only the necessary steps.
/Dockerfile
```
# base image with Node.js (alpine variant)
FROM node:alpine
# set (and create if needed) the working directory; later steps run in /usr/app
WORKDIR /usr/app
# copy package.json into /usr/app
COPY ./package.json ./
# install dependencies
RUN npm install
# add the rest of the files to /usr/app
COPY ./ ./
# run 'npm start' when the container starts
CMD ["npm", "start"]
```
/docker-compose.yml
```
version: "3"
services:
# Redis
redis-server: # the name of the redis service becomes the host name in my application
container_name: REDIS_SERVER
image: 'redis'
# API
main-api:
container_name: MAIN_API
build: ./
ports:
- "5555:5555"
```
I did not need anything else for the application to work. By simply giving the service name to `ioredis` in my application, the module will figure out where to find the server. That is explained [here](https://docs.docker.com/compose/compose-file/) in the `links` section. In my case, the service name is `redis-server`, so the hostname for `ioredis` and `bull` is also `redis-server`, and the port number is `6379`. Thank you @DavidMaze @yeaseol, and ...
|
The port is bound to `6378`, so you need to access `6378`.
And can you use hostname `redis`? (Did you set it?)
Try this:
`module.exports = new Redis('redis://{docker-server-hostname}:6378');`
or
The best option is to set a `link` in docker-compose:
```
main-api:
container_name: MAIN_API
build: ./
command: ["npm", "start"]
working_dir: /usr/src/MainAPI
ports:
- "5555:5555"
volumes:
- ./:/usr/src/MainAPI
links:
- {your-redis-container-name}
```
Try this:
`module.exports = new Redis('redis://{your-redis-container-name}:6379');`
|
3,686,204
|
I've used regexes in sed before and a bit of awk, but I'm unsure of the exact syntax I need here...
I want to do something like:
```
sed -i 's/restartfreq\([\s]\)*\([0-9]\)*/restartfreq\1$((\2/2))/2/g' \
my_file.conf
```
where the second match is divided by 2 and then put back in the inline edit.
I've read, though, that sed can't do math.
Can I do this cleanly with sed or awk alone? Suggestions please.
**Edit 1**
I thought the meaning of my inquiry was straightforward enough, but I guess I might not have given a good enough sample of the data I want to modify. Here's an example of the line in my \*.conf file I want to edit in place:
```
restartfreq 1250 ;# 2500steps = every 1.25 ps
```
I've posted a solution below. Both of the answers I received print the edited text to the terminal rather than editing the file in place. I try to avoid answering my own question, but in this case neither answer really did what I requested (edit the file, not just print the edited line), and both were substantially longer than my solution and/or required additional Linux programs besides just awk or sed.
I do appreciate the help and feedback, though! :)
NOTE: As my usual disclaimer, this is not a homework question, I am a chemical engineering researcher.
|
2010/09/10
|
[
"https://Stackoverflow.com/questions/3686204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/375828/"
] |
```
$ cat script.awk
/restartfreq *[0-9]+/{
$2 = $2/2
}
{print}
$ awk -f script.awk my_file.conf
```
If you always want the number to be an integer, change `$2/2` to `int($2/2)`.
---
To overwrite the file, you could either use sponge (if you have [moreutils](http://kitenet.net/~joey/code/moreutils/) available) or a temporary file.
The latter should be self-explanatory.
Sponge lets you do:
```
$ awk -f script.awk my_file.conf | sponge my_file.conf
```
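If `moreutils` isn't available, the temporary-file route looks like this (a sketch using the sample line from the question; the `.tmp` file name is arbitrary):

```shell
# Create a sample config file (content taken from the question's example line)
printf 'restartfreq         1250 ;# 2500steps = every 1.25 ps\n' > my_file.conf

# Halve the second field on matching lines, then overwrite via a temp file
awk '/restartfreq *[0-9]+/{ $2 = $2/2 } { print }' my_file.conf > my_file.conf.tmp &&
  mv my_file.conf.tmp my_file.conf

cat my_file.conf   # restartfreq 625 ;# 2500steps = every 1.25 ps
```

Note that reassigning a field makes awk rebuild the line with single spaces, so the original column alignment is lost.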
|
This may be along the lines of what you're looking for:
```
$ echo 'restart 4' | awk '{$2=$2/2; print}'
restart 2
```
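The same one-liner handles the question's actual line, since the trailing comment fields pass through untouched:

```shell
# Halve the second field; the remaining fields (the ;# comment) print as-is
echo 'restartfreq 1250 ;# 2500steps = every 1.25 ps' | awk '{$2=$2/2; print}'
# → restartfreq 625 ;# 2500steps = every 1.25 ps
```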
|
3,686,204
|
I've used regexes in sed before and a bit of awk, but am unsure of the exact syntax I need here...
I want to do something like:
```
sed -i 's/restartfreq\([\s]\)*\([0-9]\)*/restartfreq\1$((\2/2))/2/g' \
my_file.conf
```
where the second match is divided by 2 and then put back in the inline edit.
I've read, though, that sed can't do math.
Can I do this cleanly with sed or awk alone? Suggestions, please.
**Edit 1**
I thought the meaning of my inquiry was straightforward enough, but I guess I might not have given a good enough sample of the data I want to modify. Here's an example of the line in my \*.conf file I want to edit inline:
```
restartfreq 1250 ;# 2500steps = every 1.25 ps
```
I've posted a solution below. Both of the answers I received were about printing text to the terminal, not editing a file inline. I try to avoid answering my own question, but in this case neither of the answers I received really did what I requested (edit the file, not just print the edited line), and they were substantially longer than my solution and/or required additional Linux programs besides just awk or sed.
I do appreciate the help and feedback, though! :)
NOTE: As my usual disclaimer, this is not a homework question; I am a chemical engineering researcher.
|
2010/09/10
|
[
"https://Stackoverflow.com/questions/3686204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/375828/"
] |
This is how you do it with awk only:
```
awk '$1=="restartfreq"{$2=$2/2;}1' file > t && mv t file
```
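A quick end-to-end check of that command on a sample file (the file name and content are assumptions, taken from the question's example line):

```shell
printf 'restartfreq 1250 ;# 2500steps = every 1.25 ps\n' > file

# Halve $2 only on lines whose first field is exactly "restartfreq";
# the trailing `1` makes awk print every line, modified or not
awk '$1=="restartfreq"{$2=$2/2;}1' file > t && mv t file

cat file   # restartfreq 625 ;# 2500steps = every 1.25 ps
```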
|
This may be along the lines of what you're looking for:
```
$ echo 'restart 4' | awk '{$2=$2/2; print}'
restart 2
```
|
3,686,204
|
I've used regexes in sed before and a bit of awk, but am unsure of the exact syntax I need here...
I want to do something like:
```
sed -i 's/restartfreq\([\s]\)*\([0-9]\)*/restartfreq\1$((\2/2))/2/g' \
my_file.conf
```
where the second match is divided by 2 and then put back in the inline edit.
I've read, though, that sed can't do math.
Can I do this cleanly with sed or awk alone? Suggestions, please.
**Edit 1**
I thought the meaning of my inquiry was straightforward enough, but I guess I might not have given a good enough sample of the data I want to modify. Here's an example of the line in my \*.conf file I want to edit inline:
```
restartfreq 1250 ;# 2500steps = every 1.25 ps
```
I've posted a solution below. Both of the answers I received were about printing text to the terminal, not editing a file inline. I try to avoid answering my own question, but in this case neither of the answers I received really did what I requested (edit the file, not just print the edited line), and they were substantially longer than my solution and/or required additional Linux programs besides just awk or sed.
I do appreciate the help and feedback, though! :)
NOTE: As my usual disclaimer, this is not a homework question; I am a chemical engineering researcher.
|
2010/09/10
|
[
"https://Stackoverflow.com/questions/3686204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/375828/"
] |
```
$ cat script.awk
/restartfreq *[0-9]+/{
$2 = $2/2
}
{print}
$ awk -f script.awk my_file.conf
```
If you always want the number to be an integer, change `$2/2` to `int($2/2)`.
---
To overwrite the file, you could either use sponge (if you have [moreutils](http://kitenet.net/~joey/code/moreutils/) available) or a temporary file.
The latter should be self-explanatory.
Sponge lets you do:
```
$ awk -f script.awk my_file.conf | sponge my_file.conf
```
|
The following solution actually *edits* the file inline, only uses one command, and is a bit more elegant than the other proposed solutions (IMHO).
```
awk '{gsub(/restartfreq\s*[0-9]+/,$2/2,$2)}' my_file.conf
```
Apologies to Dennis et al. for answering my own question, but I feel this is the best solution and hopefully it will benefit others who read this...
**Edit 1**
Whoops, I lied. That just prints to the terminal too...
It took me a long time to find, but this works, using only sed and awk:
```
sed -i '/restartfreq/s/'`sed -n 's/restartfreq*/&/p' my_file.conf | awk '{print $2}'`'/'`sed -n 's/restartfreq*/&/p' my_file.conf | awk '{print $2/2}'`'/g' my_file.conf
```
Perhaps this can be streamlined, but at least it works! :)
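For readers untangling that one-liner: the two backquoted pipelines just extract the old value and compute the new one. The same thing reads more easily in separate steps (a sketch assuming GNU sed for `-i`, and that the old value's digits don't appear earlier on the line; variable names are made up):

```shell
printf 'restartfreq 1250 ;# 2500steps = every 1.25 ps\n' > my_file.conf

# Step 1: pull out the current value after "restartfreq"
old=$(awk '$1=="restartfreq"{print $2}' my_file.conf)

# Step 2: compute the halved value
new=$(awk '$1=="restartfreq"{print $2/2}' my_file.conf)

# Step 3: substitute old -> new in place, only on the restartfreq line
sed -i "/restartfreq/s/$old/$new/" my_file.conf

cat my_file.conf   # restartfreq 625 ;# 2500steps = every 1.25 ps
```

Unlike the pure-awk answers, this preserves the original whitespace on the line, since sed only replaces the matched digits.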
|
3,686,204
|
I've used regexes in sed before and a bit of awk, but am unsure of the exact syntax I need here...
I want to do something like:
```
sed -i 's/restartfreq\([\s]\)*\([0-9]\)*/restartfreq\1$((\2/2))/2/g' \
my_file.conf
```
where the second match is divided by 2 and then put back in the inline edit.
I've read, though, that sed can't do math.
Can I do this cleanly with sed or awk alone? Suggestions, please.
**Edit 1**
I thought the meaning of my inquiry was straightforward enough, but I guess I might not have given a good enough sample of the data I want to modify. Here's an example of the line in my \*.conf file I want to edit inline:
```
restartfreq 1250 ;# 2500steps = every 1.25 ps
```
I've posted a solution below. Both of the answers I received were about printing text to the terminal, not editing a file inline. I try to avoid answering my own question, but in this case neither of the answers I received really did what I requested (edit the file, not just print the edited line), and they were substantially longer than my solution and/or required additional Linux programs besides just awk or sed.
I do appreciate the help and feedback, though! :)
NOTE: As my usual disclaimer, this is not a homework question; I am a chemical engineering researcher.
|
2010/09/10
|
[
"https://Stackoverflow.com/questions/3686204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/375828/"
] |
This is how you do it with awk only:
```
awk '$1=="restartfreq"{$2=$2/2;}1' file > t && mv t file
```
|
The following solution actually *edits* the file inline, only uses one command, and is a bit more elegant than the other proposed solutions (IMHO).
```
awk '{gsub(/restartfreq\s*[0-9]+/,$2/2,$2)}' my_file.conf
```
Apologies to Dennis et al. for answering my own question, but I feel this is the best solution and hopefully it will benefit others who read this...
**Edit 1**
Whoops, I lied. That just prints to the terminal too...
It took me a long time to find, but this works, using only sed and awk:
```
sed -i '/restartfreq/s/'`sed -n 's/restartfreq*/&/p' my_file.conf | awk '{print $2}'`'/'`sed -n 's/restartfreq*/&/p' my_file.conf | awk '{print $2/2}'`'/g' my_file.conf
```
Perhaps this can be streamlined, but at least it works! :)
|