Overview:
A matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns. In this article I will introduce you to the mathematical concepts of a matrix and matrix addition. After that, I will explain what an array is and what a multidimensional array is. Moreover, I will show a demonstration of the addition of matrices. Finally, I will show a C program that adds two matrices, together with its output.
Table of contents:
- What is a Matrix?
- What is Matrix Addition?
- What is an Array?
- What is a multidimensional array?
- Demonstration of the addition of matrices
- Logic for matrix addition
- C program for matrix addition
- Conclusion
What is a Matrix?
A matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns. A matrix with r rows and c columns is known as an r × c matrix. Matrices are often denoted using capital Roman letters such as A, B and C.
What is Matrix Addition?
Matrix addition is the operation of adding two matrices by adding the corresponding entries together; that is, it is done element-wise (entry-wise). The sum of two matrices A and B of size m × n is defined by:
(A + B)ij = Aij + Bij (where 1 ≤ i ≤ m and 1 ≤ j ≤ n)
What is an Array?
An array is a container for a collection of items that stores those items at contiguous memory locations. An array is a variable that can store multiple values. For example, if you want to store 100 integers in a single variable, you can create an array for it.
What is a multidimensional array?
A multidimensional array is an array of arrays. You can declare a two-dimensional (2D) array, for example: float x[3][4]; Similarly, you can declare a three-dimensional (3D) array, for example: float y[2][4][3];
Demonstration of the addition of matrices
Logic for matrix addition
To add two matrices, i.e., compute their sum and print it, the user inputs their order (number of rows and columns) and the elements of the two matrices. For example, suppose the order is 2 × 3, i.e., two rows and three columns, and the matrices are as follows:
First matrix A =
1 1 2
3 1 4
Second matrix B=
4 1 5
-1 5 3
The Sum of these two matrices is:
1+4 1+1 2+5
3-1 1+5 4+3
=
5 2 7
2 6 7
The main logic for writing a C program for matrix addition is as follows:
- Step 1: Take 3 arrays such as first[][], second[][], and sum[][].
- Step 2: Accept the number of rows and columns for the two arrays.
- Step 3: Then, accept elements of these two arrays.
- Step 4: Finally, add the corresponding elements of the two arrays, store the results in the new array, and display its elements.
C program for matrix addition
#include <stdio.h>

int main() {
    int r, c, i, j;
    /* Fixed-size buffers: the entered order must not exceed 10 x 10. */
    int first[10][10], second[10][10], sum[10][10];

    printf("Enter the number of rows and columns of matrix\n");
    scanf("%d%d", &r, &c);

    printf("Enter the elements of first matrix\n");
    for (i = 0; i < r; i++)
        for (j = 0; j < c; j++)
            scanf("%d", &first[i][j]);

    printf("Enter the elements of second matrix\n");
    for (i = 0; i < r; i++)
        for (j = 0; j < c; j++)
            scanf("%d", &second[i][j]);

    printf("Sum of entered matrices:-\n");
    for (i = 0; i < r; i++) {
        for (j = 0; j < c; j++) {
            sum[i][j] = first[i][j] + second[i][j];
            printf("%d\t", sum[i][j]);
        }
        printf("\n");
    }
    return 0;
}
Conclusion:
This program demonstrates matrix addition by adding the corresponding elements of the matrices.

Source: https://www.onlineinterviewquestions.com/blog/c-program-for-matrix-addition/
Talk:Key:website
Simple vs. specific URLs
I object to the recommendation against using specific URLs. They might last longer, but could in many cases make the whole URL more or less useless. Consider the case of hiking route relations: a hiking route relation describes a hiking route, often defined by a destination company or hiking association. An example is this URL which points to a page about that specific route. The simple URL would be. If you go to the latter, you could spend quite some time trying to find the information you want, while if you go to the first you get there instantly. --Dittaeva 16:26, 22 February 2012 (UTC)
- As far as I am concerned, this recommendation means that you should choose simple URLs over complex URLs if they basically point to the same content. So for example, you would use instead of, because while both get you to the front page of that website, the second variant depends very much on the current implementation of their site and doesn't add any meaningful information. --Tordanik 17:13, 22 February 2012 (UTC)
separating multiple values with space
hi,
it seems reasonable to divide multiple values not with a semi-colon in this case but with a space. I just tried it and was surprised that it actually already works on osm.o: (see
1941952571 (XML, iD, JOSM, Potlatch2, history))
--Shmias 22:42, 1 October 2012 (BST)
- Multiple values are unnecessary for a website tag, though, because it is meant to contain "the official website". This might be a useful suggestion for Key:url. --Tordanik 12:50, 2 October 2012 (BST)
As far as I can see, this feature is deprecated. I also had a problem with multiple values, because URLs are usually long and values may not be longer than 255 characters. Does anyone have a better idea where to place that stuff? --Shmias 10:44, 3 October 2012 (BST)
We shouldn't encourage crippled URLs
I really don't like the idea of omitting the http:// because it cripples the URL. The article should not be written in a way that encourages users to omit the "http://" (or "https://"). Remember, the string is not necessarily the same string which is presented to the user; it is stored in a file, waiting to be processed by software. There are quite a few programs which expect to parse URLs. Not crippled URLs, almost-URLs or "urls without the scheme"[1]. The good thing about URLs is that most programs already understand how they work; no extra code has to be written. The purpose of a URL is to get munched by a program directly. It is generally a bad idea to require applications to give OSM a "special treatment", because extra code has to be written just because OSM doesn't give a damn about Internet standards. What comes next? Suggesting to drop the "www."?
There should be at least a pointer to RFC1738 section 3.1, section 3.3 (for the http scheme) and RFC2818 section 2.4 in the article for reference. (Update: I just did it anyways --Wuzzy 23:44, 28 November 2012 (UTC)) And there should be a clarification that the value should be a valid URL. You really need only to read these three relevant sections to understand URLs and these are rather short. You don't even need to fully understand URLs to use them: Just copy the frikking address bar of your browser! Nothing can be easier than that. ;-)
I know at least one program which interprets the value always as an URL and fails badly if it is not an URL. It is a plugin in JOSM which adds an entry to the context menu for the website tag. I think this plugin does it absolutely right, and this definition made it wrong.
Here's my suggested new definition for the website key:
Just copy the frikking address bar of your browser! ;-)
I personally consider it an error if I encounter a "website" tag without the proper scheme. I always add the scheme in front of it, check if it works and if it does, I upload the change, if it doesn't, I remove the website key. By the way: Is there any quality assurance tool in the wild which checks URLs for (syntactic) validity?
If there are no serious objections within a week, I would change this wiki page in a way that it clearly expresses that only full URLs should be used and not crippled ones. I also would remove the "no scheme if http" exception then. --Wuzzy 23:15, 28 November 2012 (UTC)
- ↑ Oh, by the way, it is wrong that "http://" is the scheme, as the article says. "http" is the scheme, because ":" is a separator and "//" is the start of the common Internet scheme syntax
- Disagree with the suggested "Just copy the frikking address bar of your browser!" definition because of the #Simple vs. specific URLs issue. But otherwise, yes, the http:// should imo be included in the value. I've always read that section as a suggestion to application authors for a potentially useful fallback, not as a guideline for mappers. So I would support a clarification in that direction. --Tordanik 17:03, 29 November 2012 (UTC)
- Okay, my friends. The time is over! I just removed the section in discussion, with respect to Tordaniks little objection. --Wuzzy 20:27, 6 December 2012 (UTC)
- If this were a generic URL field I would fully agree, but it is not. It is a tag to specify a website. Apart from the very, very few websites that can only be reached via https, all of them use http. Adding the protocol does not give OSM any advantage and is redundant. The argument about "some software" that needs to parse URLs is also invalid. The contents of the field are not full URLs, so no URL parser is expected to be able to parse them. A scheme of "crippled" URLs (they are not URLs actually, the wording is wrong) is used for wikipedia articles too, works very well and has been so for a long time. The removal of the section explaining all this (IIRC) and abandoning the scheme by a single user is... at least impolite, Wuzzy. --Stefanct 23:01, 13 January 2013 (UTC)
- The old definition (before I touched it) was too ambiguous. I am aware that https-only websites are rare, but https-enabled websites are not. Even Google has HTTPS! And if I have the choice between the HTTPS and the HTTP version of a certain website, I'd rather link to the HTTPS version (except when it's broken). And there are way more HTTPS-enabled websites than you'd believe, so I don't think that HTTPS is a strange side case. Don't believe me? Look at HTTPS Everywhere! Also note that the old definition in fact already included the abbreviation "URL" some times, so one may legitimately think to use a full URL as the value. On the other hand, the definition allowed dropping the "http://" part. This was a contradiction, so the definition HAD to be changed. I chose full URLs because URLs are very common and standardized. I call the other definition "crippled" URLs because the definition implied (sort of) that one could "omit the 'http://' part". Sorry, but this sounds very much like "URL without scheme", so I fail to see how this definition does not relate to URLs. My software argument is not invalid. The big advantage of strictly using full URLs only for this scheme is that software may not have to parse the URL at all; it could simply blindly hand the value to a browser (for instance). I also don't think it is good practice to mix well-defined standardized URLs with "crippled" URLs (or call them "non-URLs" or "strings that look like URLs but aren't" or whatever).
- The wikipedia argument is a false analogy, because the wikipedia tagging scheme is not even similar to a URL: the common format is "lg:Pagename" with lg = language code and Pagename = name of the page (usually with spaces). You never have real spaces (read: unencoded) in a URL, and the ":" character has a completely different meaning in URLs. Most importantly, the wikipedia scheme is clear and unambiguous, therefore I didn't touch it.
- In my view, the current definition is clear and unambiguous. The old definition was ambiguous. Someone had to change that. I hoped some more people would discuss with me, but only one was interested, who also agreed on the important part; then I got tired of having an ambiguous definition and just went on. Yes, I may be bold. But "be bold" is a common wiki motto. And I didn't simply edit; no, my edit was based on a rational decision. As there are still no serious counter-arguments (I have just dismantled your counter-arguments) after about one and a third months, I do not regret this decision.
- If you are still not convinced, please keep in mind that a simple revert would not help, because you’d revert to all the problems the old version had. --Wuzzy 20:48, 17 January 2013 (UTC)
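For application authors, the fallback discussed in this thread (treating a scheme-less "crippled" value as plain HTTP) could be sketched roughly as follows. This is only an illustration; the class and method names are hypothetical and not part of any existing OSM tool:

```java
import java.net.URI;
import java.net.URISyntaxException;

// Hypothetical helper for data consumers: accepts both full URLs and
// scheme-less values, normalizing the latter to http.
public class WebsiteTagNormalizer {

    // Returns an absolute URL string, or null if the value cannot be
    // parsed as a URL at all.
    public static String normalize(String tagValue) {
        if (tagValue == null || tagValue.trim().isEmpty()) {
            return null;
        }
        String value = tagValue.trim();
        try {
            URI uri = new URI(value);
            if (uri.getScheme() == null) {
                // Scheme-less value: fall back to plain HTTP.
                uri = new URI("http://" + value);
            }
            return uri.toString();
        } catch (URISyntaxException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(normalize("www.example.com"));       // http://www.example.com
        System.out.println(normalize("https://example.com/a")); // https://example.com/a
    }
}
```

Note that values containing characters that are illegal in URLs (e.g. unencoded spaces) are rejected rather than guessed at.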
Deprecate this tag when it is used to provide contact information in favor of contact=*?
Reason: contact=* is a clearer and more precise way to link URLs with world/POI objects. Xxzme (talk) 06:52, 20 July 2014 (UTC)
- When I look at the relative popularity, I think it makes much more sense to deprecate contact:website. --Tordanik 14:06, 20 July 2014 (UTC)
- Yes, but you are looking at it in the wrong way. contact=* is not a single tag but a group of tags; to compare them fairly you have to sum all contact tags against something. You would have to use website=* three times vs. three precise ways of contact: contact:twitter=*, contact:facebook=*, contact:webcam=*. Contact is not limited to websites. IMO website=* should be deprecated when it is used to provide contact information; contact=* does this job better. Xxzme (talk) 06:50, 22 July 2014 (UTC)
- Alright, if you want to look at contact as a group, let's compare the full range of contact keys:
- ¹ I used url:webcam here
- I don't think this looks better for the contact keys. Except with the rarely used facebook and webcam keys, the usage counts are clearly in favour of the alternatives. --Tordanik 18:52, 25 July 2014 (UTC)
- Well, okay, they are 10 times more popular. But again, my point is still valid. The contact: namespace is a precise way to specify information linked with a phone number or facebook page. Some phone=* values are not contact phones, but emergency phones. I personally use the delivery: namespace to specify a delivery phone (quite often this is different from the contact phone where you can book a table). To avoid confusion we need to use namespaces for each purpose; contact: is just one of them. Xxzme (talk) 05:06, 19 August 2014 (UTC)
- Most phone numbers are contact phone numbers. It makes much more sense to use emergency:phone for an exception. Also, if you care that much about contact then you should remove 50% of the Social Media pages, because you can't actually reach most businesses that way. --AndiG88 (talk) 20:49, 21 October 2014 (UTC)
Administrative vs Tourist Website
Many countries, states and cities have both an administrative website and a tourist website. I understand website= should be the admin website, but how can we also store the tourism website? Aharvey (talk) 11:02, 16 June 2017 (UTC)

Source: http://wiki.openstreetmap.org/wiki/Talk:Key:website
Details
- Type: Umbrella
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.6.0
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
We should take a look at how to integrate Hama's BSP Engine to Hadoop's nextGen application platform.
Can be currently found in the 0.23 branch.
Issue Links
- is related to GIRAPH-158: Support YARN (next generation MapReduce) (Resolved)
Activity
Let's see how they integrate MPI:
Excellent! Can help in any way possible.
Let's see how they integrate MPI:
No need, we can do it independently. If you can write up an initial summary of any design you may have in mind, we can take it forward together. I will read up on the execution flow of HAMA myself in the meanwhile.
Couple pointers that can help.
- Implementation of MapReduce itself over YARN:
- Sharad Agarwal's presentation at a HUG on writing a custom ApplicationMaster: hadoop_contributors_meet_07_01_2011.pdf
- You might also want to follow MAPREDUCE-2719 and MAPREDUCE-2720.
Thank you very much. I'll take a look at it.
Sorry for my late feedback and thank you for your help and information.
Currently our codebase has a majority of the "old" architecture of Hadoop. We changed parts of the computation model, but task execution and the job lifecycle stay the same as in the "old" Hadoop architecture. We put a synchronization service on top of it which is working (most of the time it is not working) with ZooKeeper. In addition we have RPC connections between the servers to message each other.
I suggest implementing our BSPMaster as an "Application Master". It must take care of allocating new containers, which will then each be a "Groom" in our namespace. Each Groom needs a ZNode and some kind of identifier.
But there is a question of security in my mind. Do you mind when we don't care about security in the first version? I'm not an expert in these authentication systems like Kerberos.
So everything is actually implemented in some way, but we need to port this code to YARN. I have a lot of time tomorrow, so I will just start. I also think we are going to split this task up into several smaller pieces, so our other developers can contribute to it, too.
But I have a more general question:
Should we make this task part of our framework? Like another maven module which can be plugged-in into Hadoop?
But there is a question of security in my mind. Do you mind when we don't care about security in the first version? I'm not an expert in these authentication systems like Kerberos.
You can just leave a TODO. Maybe I can help that part a bit.
Should we make this task part of our framework? Like another maven module which can be plugged-in into Hadoop?
Creating a "new package" in 'hama-core' would be best, I think. The biggest advantage is that all the code is in one place, which means easier administration and easier maintenance.
Great so let's start with some kind of tutorial:
svn checkout
Follow the building rules in "BUILDING.txt".
Most of the time you'd just need to run:
mvn compile -e -DskipTests
This will retrieve the dependencies.
If you don't have protobuf in your path, the build fails while compiling yarn-api. This is caused by the exec plugin which compiles the protobuf files (generates sources).
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2:exec (generate-sources) on project hadoop-yarn-api: Command execution failed. Process exited with an error: 1(Exit value: 1) -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2:exec (generate-sources) on project hadoop-yarn-api: Command execution failed. at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
This pom.xml tries to run an executable called "protoc", so make sure you have installed protobuf correctly.
You can download it here:
Follow the steps in INSTALL.txt.
./configure
make
make install
Maybe "configure" fails because you don't have g++ installed, so just install it via "apt-get install g++" and then start the whole process again.
Be careful what the output of install says. For me it said that it laid the shared objects into "/usr/local/lib". You then have to edit your "/etc/ld.so.conf", add the path of the protobuf shared objects to it, and reload with "ldconfig".
Now you can try to run "protoc" on your shell, it should tell you "missing input file".
Back to yarn you can just call
mvn clean install -e -rf :hadoop-yarn-api -DskipTests
to rerun the build process.
@Thomas,
This might be of help:
Great, thanks for the link.
But as you can see, I figured it out for myself.
Okay so our first subtask is the security.
I set up a code project for that.
Checked in first version of BSPAppMaster and the job stuff (event/implementation etc).
We need to rethink our states and transitions, but I think we can barely cut them down to Setup->Compute->Cleanup.
A more sophisticated way would be to split the compute state into the supersteps, so each superstep gets handled by a state transition. That could introduce a lot of overhead, but it seems to be cleaner than the handling via ZooKeeper. E.g. we can wait until every task has reached this transition with a cyclic barrier, just like in the LocalBSPRunner.
And we need to write the event dispatchers for them.
My next goal would be to rewrite the BSPJobImpl class for our needs, e.g. remove the map and reduce task counters, and extract an interface.
Today and yesterday I did a bit on tasks, scheduling, lifecycle management.
TODO:
- Container things and how to get the BSPs running in reality
- sync service with ZooKeeper
Long comment, the leisure of the weekend
Good to see the ball rolling.
I had a browsing session on the current HAMA code(let's call this HamaV1 code) and the mapreduce-integration branch (actually this should be Yarn-integration, let's call this HamaV2).
Some thoughts follow. Some of the following may be naive as I am new around here
Regarding the Job and Task state machines: Yes, it does look like you don't need a lot of states and their corresponding transitions here, from what I can see from HamaV1's JobInProgress and TaskInProgress. Is that because you don't have good failure handling in HamaV1 (as I read in one of the presentations)? If that isn't true, ignore what follows. Otherwise, I think it is the right time to think about fault tolerance (if at all) and write down the state machines to include the faulty scenarios.
Implementation of barrier synchronization: Not sure of the problems you ran with ZooKeeper in HamaV1, but can't we use the ApplicationMaster(AM) in HamaV2 as a barrier synchronization service? Each BSPPeer could periodically poll the AM if it can proceed to the next superstep. If and when the AM goes down, all the BSPPeers just wait there spinning till AM is restarted by the Yarn ResourceManager.
– Pros: Avoiding ZooKeeper frees BSP from the ZK external dependency, one less service needed for running HAMA apps.
– Cons: It robs HAMA of the notification push via ZK's watcher mechanism (notification push vs periodic pull) (This should be agreeable, no?).
Thoughts?
Regarding use of MR classes:
- Reuse of MRV2 classes: I was appalled by the amount of Hadoop MapReduce code (kinda) forked in HamaV1. Glad that with Yarn and HamaV2, most of the forking will be gone. Still, one look at the HamaV2 code you have at Google Code tells me you are trying to mimic MRV2 (MapReduce over YARN) internals. IMO, that isn't needed, as the Job, Task, TaskAttempt etc. in MR have concepts specific to MapReduce like Map/Reduce tasks. I think we can redesign the objects needed for HAMA with far more ease. And that's cleaner too.
- Code reuse from MRV2: OTOH,.
Meta comment: Instead of jumping into writing the implementation, I think it helps to spend some time developing the design till it reaches some level of stability and then writing down the module structure(like BspAppMaster module, BspChild module etc.), followed by the interfaces of all the data objects and the components and finally wiring them together. Once we have all the interfaces and communication patterns in place, implementation can be done in parallel. It did help us writing MRV2 a lot cleaner, am sure it will help us here too.
General infra thought: I think having this branch at apache svn helps HAMA's incubation status. Also it will be easy for anyone else from the current hama-dev interested in working on this to use apache lists, svn etc. (Oh, BTW, I am looking to collaborate too). What do you think?
Wow, that's a wall of text.
I'm no contributor (yet?), so I don't have SVN access; that was the main reason I chose the Google Code repo.
Yes, we took a lot of Hadoop's old code for HamaV1. These days we don't have failure recovery; detection should be on its way (HAMA-370).
Fault tolerance in HamaV2 should basically just check if a container is available through some kind of heartbeat. If a task isn't responding, we should roll back to the state it was in before. The task is responsible for saving state every superstep, e.g. the messages received from other peers. This should be planted in HDFS along with the task-id so the AM can rerun the task with this input. -> we need some kind of task attempts.
Implementation of barrier synchronization:
I would be very glad if we can get away from ZooKeeper's sync service; we had a lot of ideas for how to make it run (see HAMA-387), but it didn't help. Edward asked a question on their user list, but they offered just the same ideas we had tried out before.
This should be agreeable, no?
Polling is totally agreeable. I suspect that ZooKeeper is internally polling as well.
Reuse of MRV2 classes
As you might see, I totally reuse your classes. It's cool, but it is more work to cut down your state machine handling to something simpler than to rewrite it from scratch.
+1, that would be great.
Instead of jumping into writing the implementation,I think it helps to spend some time developing the design till it reaches some level of stability and then writing down the module structure [...]
You are right.
Some thoughts based on what I know so far, but I may be wrong (or miss the point) and probably do not see the whole forest.
Each BSPPeer could periodically poll the AM if it can proceed to the next superstep. ...
With polling, it seems that chances are the polling would not reach agreement (there could always be one process missing) in an unfortunate timing case. Also, as the number of processes increases, it would probably increase the load on the master to deal with the polling tasks.
In addition, my understanding is that the integration with MRV2 would just be additional support so that MR jobs/applications can be submitted without rewriting to use Hama for computation.
With polling, it seems that chances are the polling would not reach the agreement (there could always have 1 process missing) in an unfortunate timing case. Also, as the processes increase probably it would increase the loading for master to deal with polling tasks.
This is correct, but it depends highly on the polling interval. As far as I understood, each BSPJob gets its own ApplicationMaster, so there is no "master machine" anymore like our BSPMaster or JobTracker.
We have two options:
- fix the barrier sync with zookeeper and use it in the AM and Peers
- do the polling in Application Master
In addition, my understanding is the integration with MRV2 would be just an additional support so that MR job/ application can be submitted without rewriting to use hama for computation.
That is right as well. I think we should make this a configuration-based decision, depending on whether YARN (or the URL) has been set or not.
Thanks Thomas for your replies.
I'm no contributor (yet?), so I don't have SVN access, that was the main reason I choose the Google Code repo.
We can work with patches, but that won't scale. I think we need to get commit privileges with a promise to restrict ourselves to a branch. I noticed Edward is off for a week; maybe he can pull some strings when he's back?
Fault tolerance in HamaV2 should basically ..
If this isn't already there in V1, it makes sense to take this up as a follow up to the first cut of V2.
As you might see I totally reuse your classes. It's cool, but it is more work to cut down your statemachine handling to something simpler than rewriting it from scratch.
Yes, I propose that we start afresh. As you mentioned, it is a lot less work than trying to cut down the state machine and peeling off MR-specific stuff.
ChiaHung,
With polling, it seems that chances are the polling would not reach the agreement (there could always have 1 process missing) in an unfortunate timing case. Also, as the processes increase probably it would increase the loading for master to deal with polling tasks.
Regarding the missing processes, which we call stragglers in mapreduce, isn't the API such that there should be no progress till all the processes perform the barrier sync?
Regarding the load, even the MR AM, which uses a Hadoop RPC server, has similar requirements, on the order of tens of thousands of tasks. That amount of scalability should be enough for Hama's case also. And as Thomas mentioned, each BSPMaster only needs to serve its own job's BSPPeers, so that should help too.
In addition, my understanding is the integration with MRV2 would be just an additional support so that MR job/ application can be submitted without rewriting to use hama for computation.
It is not clear to me. But if you are talking of the ability to run the current BSP jobs without rewriting them, then yes, we will support API level compatibility.
Thanks Vinod,
We can work with patches, but that won't scale. I think we need to get commit privileges with a promise to restrict ourselves to a branch. I noticed Edward is off for a week, may be he can pull some strings when he's back?
He pulled some strings yesterday. My account is on its way. I guess we can start in 1-2 days.
If this isn't already there in V1, it makes sense to take this up as a follow up to the first cut of V2.
Yes, but I think we should take things like TaskAttempts into account.
The roll-back of an attempt will be another task and should be scheduled for V2. Extending the state machine to handle these events will be another task, too.
So our first version will solely be:
- Message passing between peers
- Barrier Sync with control of the ApplicationMaster
- Job submission via the current BSPJob Class.
And the statemachine will just be:
Setup->Computation->Cleanup
Anything else we should take into account?
@ChiaHung do you want to help us?
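The minimal Setup -> Computation -> Cleanup lifecycle proposed above could be sketched as a trivial state holder; this is purely illustrative (the class name is hypothetical, and a real implementation would use YARN's event-driven state machine infrastructure and add the fault states later):

```java
// Illustrative sketch only: the minimal BSP job lifecycle discussed
// in this thread, without any fault handling yet.
public class BspJobLifecycle {

    public enum State { SETUP, COMPUTATION, CLEANUP, FINISHED }

    private State state = State.SETUP;

    public State getState() {
        return state;
    }

    // Moves the job to its next phase and returns the new state.
    public State advance() {
        switch (state) {
            case SETUP:       state = State.COMPUTATION; break;
            case COMPUTATION: state = State.CLEANUP;     break;
            case CLEANUP:     state = State.FINISHED;    break;
            default: throw new IllegalStateException("Job already finished");
        }
        return state;
    }
}
```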
Vinod,
Regarding the missing processes, which we call stragglers in mapreduce, isn't the API such that there should be no progress till all the processes perform the barrier sync?
Yes, in that case there would be no progress. However, it differs from the barrier sync with ZooKeeper in that there could always be different stragglers that do not poll each round, due to network load etc. For instance, with a time interval of e.g. 1 sec, each GroomServer polls to check if it can proceed; unfortunately, due to network congestion, the master server may always receive only part of the responses (not responses from all GroomServers). So the rate of barrier syncs with no progress could be higher than expected. Or we will have the master help coordinate between stragglers, but this seems to be the task that should be handled by the ZooKeeper service. In addition, if there are going to be multiple masters, replicating the poll information should also be taken into account.
I was just thinking of some issues that we may need to consider beforehand if we decide to work in this direction. Thanks Vinod, that inspires me a lot.
Regarding the barrier sync:
My idea is that we have two RPC calls, enterBarrier() and leaveBarrier().
In the ApplicationMaster we can handle each superstep via a CyclicBarrier[1] on the number of tasks.
So the RPC call is going from the client container to the ApplicationMaster, this call is going through the barrier, causing the clients to wait until the barrier is tripped. Then the RPC call returns to the clients, they send their messages and the whole thing is repeated with leaveBarrier().
In this case we don't have to poll for completion, and it is a higher-level construct that works (see the LocalBSPRunner).
[1]
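A minimal sketch of that idea, assuming hypothetical enterBarrier()/leaveBarrier() RPC handlers inside the ApplicationMaster backed by java.util.concurrent.CyclicBarrier (class and method names are illustrative, not the actual Hama code):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Sketch of an ApplicationMaster-side sync service: each RPC handler
// thread blocks in await() until all numTasks peers have arrived.
// CyclicBarrier resets itself after tripping, so the same two barriers
// can be reused for every superstep.
public class BarrierSyncService {

    private final CyclicBarrier enter;
    private final CyclicBarrier leave;

    public BarrierSyncService(int numTasks) {
        this.enter = new CyclicBarrier(numTasks);
        this.leave = new CyclicBarrier(numTasks);
    }

    // Called via RPC by each BSPPeer at the start of the sync phase.
    public void enterBarrier() throws InterruptedException, BrokenBarrierException {
        enter.await();
    }

    // Called via RPC once the peer has delivered its outgoing messages.
    public void leaveBarrier() throws InterruptedException, BrokenBarrierException {
        leave.await();
    }
}
```

One caveat of this approach is that every waiting peer occupies an RPC handler thread in the ApplicationMaster for the duration of the barrier.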
And it works:
So no need for polling or zookeeper.
What do we need then?
- I propose to use my barrier sync code, so we don't introduce a dependency on ZooKeeper.
- ApplicationMaster
- BSPJob that has the ultimate state machine
Okay Vinod, I need your professional help.
I started playing around with the ApplicationMaster and I think we need a ContainerLauncher right?
The deal with the ContainerLauncher is that you need an AppContext to launch it. Unfortunately the AppContext needs an M/R JobID and M/R Job for its methods.
Is there a chance to refactor this? Or should we just import it but not use those parts?
Attached are roughly sketched state graphs (via Graphviz). The Task state is a bit unclear to me, so there may be something missing. Please help correct the diagram (or anything related); I can then update it with the fault scenario.
Thanks for the state machine. Looks good.
A fault scenario would be great, too.
I would create the state machine for fault cases in the first implementation, but with no action triggered, so we can easily add this later.
Let's assemble what daemons we have to launch from the application master:
- n BSP Tasks
- a checkpointer per host? [1]
- the sync daemon [2]
[1] I see a real problem here; maybe we have to integrate this into a running task / BSPPeer.
[2] Based on where YARN schedules the container, it might have to check for a free port. How do we get the hostname:port of this machine then?
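One common answer to the free-port question, sketched here as an assumption rather than as what YARN actually provides: bind to port 0 so the OS picks a free port, then advertise hostname:port.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

// Sketch only: bind to port 0, let the OS choose a free port, and build the
// hostname:port string the daemon would advertise to its peers.
public class FreePortProbe {
    public static String advertiseAddress() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            // Note: there is a small race between closing this probe socket
            // and the daemon re-binding the port; having the daemon itself
            // bind to port 0 and report the chosen port avoids that.
            String host = InetAddress.getLocalHost().getHostName();
            return host + ":" + socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(advertiseAddress());
    }
}
```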
I started playing around with the ApplicationMaster and I think we need a ContainerLauncher right?
Don't import or copy over the ContainerLauncher. All you need is the code in there to start and stop containers.
Regarding the state machines, we will need some kind of representation for the barrier sync in the job too.
a checkpointer per host? the sync daemon
What is the purpose of these two?
I'll be hanging around at #hama channel at freenode, we can sync up w.r.t implementation details. (My timezone is IST)
Okay, looked at your code on github. Seems like Sync daemon can be started by the ApplicationMaster itself.
Still not sure about the checkpointer.
Don't import or copy over the ContainerLauncher. All you need is the code in there to start and stop containers.
Oh okay, I'll remove them later. Can you provide a tiny code example of what is needed to launch a container and how it should look in our case?
Regarding the state machines, we will need some kind of representation for the barrier sync in the job too.
I would not track this via the state machine. But it could be possible if we integrate the sync daemon into the application master.
So the application master will take care of the sync. Aren't the RPC services blocking each other then? I tested an integration with our Groom and BSPMaster, and it totally failed, so I had to put this into another process.
Seems like Sync daemon can be started by the ApplicationMaster itself.
Yes, there is still the question of how to get the host:port of this daemon after it has been launched. Is there some kind of communication channel between the starter and the container?
Still not sure about the checkpointer.
Me neither, so we can leave this point open and revisit the checkpointing mechanism later. I don't want to be inconsistent with the current features, but I think each task has to do its own checkpointing.
I'll be hanging around at #hama channel at freenode, we can sync up w.r.t implementation details. (My timezone is IST)
I was a few days off so I wasn't there, you probably noticed, I'm sorry.
Yes, checkpointing at the moment saves data to HDFS per host. The primary reason for having a separate checkpointing process is to ensure the BSP task continues to make progress even in the presence of a checkpointing service failure. Although we could combine the checkpointing process with the BSP task, chances are that if the checkpointing process fails this may propagate to the BSP task, resulting in its collapse. I think that Joe Armstrong's paper [1] explains this well.
[1]. Making reliable distributed systems in the presence of software errors.
Yes that is correct.
But I don't see the improvement: if a task fails, the checkpointer within the task also fails. If you separate the checkpointer into its own process which guards several tasks, it can fail the tasks it guards if the process is not working properly. Armstrong is just referring to the need for redundancy to absorb failure, but with a single process guarding several tasks you have introduced another point of failure, one which can have a lot more impact than a single failing task.
Each task attempt should write the checkpoints with its taskID, attemptID and superstep (as name?) into HDFS so it can be restarted from outside.
That's just my opinion on that, but you're the fault-tolerance professional
But I would leave this outside for now and we can open another issue that will add this. In this issue we can talk about the benefits of another process.
Thanks for the tutorial in the YARN site module.
Although it is not complete, and sometimes there are variables which were never declared, it is really helpful.
I just added the container allocation and start of the sync server as a daemon inside of the application master.
Then I added the BSPTaskLauncher which will then spawn the BSP tasks.
I decided not to follow the state-machine handling of MapReduce, because I think this event handling is worse than GOTOs. It is not transparent (maybe not only to me) from which point the event handler is handling the events and so on.
I don't think we need this actor model now and we should have a simple first snapshot which works and is easy to develop with.
BSP does not have this task execution like MapReduce and we don't need the capability to schedule Tasks during the computation (excluding failure).
TODO:
- the real launching of the tasks within the BSPTaskLauncher
- task/job/overall cleanup
- the client integration
- a lot of testcases...
Indeed, 3 checkpointer processes were one solution implemented previously. It was just that having 3 processes doing the same thing seemed too redundant. Thus the implementation was changed, because the first goal is to ensure the BSP task works smoothly. We can discuss this in other threads/issues if needed. And yes, at the moment the checkpointed data is written to HDFS as jobid/supersteps/taskid, so that data recovery is possible. :)
Progress update:
- Cleanup for everything has been implemented.
- Launching of tasks, too. But it has some flaws, e.g. we should use the container's methods to pass the classpath and jars, as well as the configurations.
I guess we can split this task into the client integration and the checkpointing integration. Any opinions?
Just scripted the client, so we have a first complete snapshot.
I'd like to run this now, but I'm having trouble assembling the Hadoop packages and running YARN.
Is there a complete pre-release tarball? mvn assembly:assembly does not work on Hadoop Main.
The best part is that this module is now quite independent of the Hama core; I just use the interfaces and the taskid/jobid classes.
So we can re-roll the effort of splitting the modules... if we want to.
Just a quick note:
I would be glad if someone would review what I've written.
I will!
I think an abstract BSPPeer class which summarizes the methods we have in common could be worth the work.
I should refactor this.
I have a problem launching the APP master.
I'm putting the classpath together via the suggested code:
//);
The app master gets started with the following start command:
Start command: ${JAVA_HOME}/bin/java -cp $CLASSPATH:./*: org.apache.hama.bsp.BSPApplicationMaster file:/home/thomasjungblut/Desktop/application_1318323647317_0012/job.xml 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
Which is correct, assuming that $CLASSPATH has been set properly by YARN.
But I think that it is not:
11/10/11 11:37:03 INFO ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/avro/ipc/Server
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:169)
        at org.apache.hadoop.yarn.ipc.YarnRPC.create(YarnRPC.java:53)
        at org.apache.hama.bsp.BSPApplicationMaster.<init>(BSPApplicationMaster.java:104)
        at org.apache.hama.bsp.BSPApplicationMaster.main(BSPApplicationMaster.java:233)
Caused by: java.lang.ClassNotFoundException: org.apache.avro.ipc)
        ... 5 more
Is there a suggestion on how to solve this?
FYI, I'm submitting the dummy job via "yarn/bin/yarn jar xyz.jar". The generated start command contains all the jars that should be needed, so the classpath should be set properly.
Oh sorry, that is the Avro dependency. I was distracted by the YARN class. I'll try with the Avro jar in the CP.
Putting Avro into the CP worked fine.
But I'm facing relatively big issues with our project now.
In every one of our modules, we have hadoop-20.2 as a dependency.
The SERVER module depends on API and CORE, and hadoop-20.2 now ships within every jar export / snapshot build. Old classes override the new ones during the build process.
Should I change the hadoop 20.2 dependency to 23.0 in every module to fix this? I see several compatibility issues in our main packages, especially with ZooKeeper.
I'm still thinking about just making the whole module depend on a Hama-0.4.0 SNAPSHOT and developing further. The integration totally sucks.
Looks good to me.
By the way, you'll use syncServer instead of Zookeeper?
Thanks, yes. I don't really know if it works better than Zookeeper.
Somehow I'm facing classpath issues when starting the containers... I'll give you more information later on.
Okay I fixed the last issues.
I would now open some subtasks, for example to clean up the TODOs or to test with a distributed YARN (I'm currently working in pseudo-distributed mode).
Profiling the ApplicationMaster would be a task too, especially since its container gets killed if less than 2 GB is allocated; I don't think it is using that much memory (maybe it's a misconfiguration).
Then we should take a look at the module splitting. Currently the server package just sits on top of API/Core, which in trunk is solely core.
Although I think it is not a bad design, we have several new classes for the YARN stuff, e.g. the YARNBSPJob. I don't know if we really should integrate YARN into our BSPJob.
Then we have to catch up the sources to the current trunk.
BTW you can build the server module with mvn install package and use the shaded jar to run on yarn with: yarn/bin/yarn jar <JAR>.jar org.apache.hama.bsp.YarnSerializePrinting
It is currently not working as expected; I'm still facing some conf and classpath issues. But I hope I'll finish them today.
Hello BSP on YARN
So for me the first snapshot is done. We can go back to our patch review process again
I'm proposing follow up issues:
- fix TODOs
- integrate the SuperSteps in verbose mode of the YARNBSPJob (not working properly somehow)
- test Hama-YARN in fully distributed mode (check if configurations and jars are getting copied correctly)
- check examples compatibility with YARN
- reintegrate to trunk (I need some opinions on that please, split modules etc)
- integrate checkpointing again
- run findbugs over it
- refactor BSPPeer so that we have an abstract version of it which runs on YARN and Hama
- LOTS of testcases
That is quite a lot, but they are very small tasks, so this could be done very quickly.
Thanks, yes. I don't really know if it works better than Zookeeper.
If there are no big benefits, ZooKeeper is better, I think. Whatever we choose, it should be designed as a common module providing the sync service, so that we can improve both our own cluster version and the YARN version.
Well, like I already told you in the other issue:
It is more readable and clearer code in BSPPeer: 1 line vs. 15 with synchronization stuff. And it may be faster, because it has less overhead.
And there is no additional process to spawn.
But I'm not aware of whether it has the same issues ZooKeeper has. We'll just have to test this. From the code I've run, it works totally fine, but I don't own a 1k-node cluster to test it.
So there might be scalability and stability issues.
As you know, Zookeeper provides
- Fault-Tolerance
- Scalability (1,000+ clients per cell)
- High Performance
- Easy to use
And, it's already verified. I would like to suggest that we need to investigate more about FT, HA, and split-brain issues.
And again, whatever we chose, it should be designed as a common module.
You're right.
Note that we get fault tolerance in the YARN sync because it is part of the app master; it can simply be restarted.
And easy to use is a joke isn't it?
This:
protected boolean enterBarrier() throws KeeperException, InterruptedException {
  if (LOG.isDebugEnabled()) {
    LOG.debug("[" + getPeerName() + "] enter the enterbarrier: "
        + this.getSuperstepCount());
  }
  synchronized (zk) {
    createZnode(bspRoot);
    final String pathToJobIdZnode = bspRoot + "/"
        + taskid.getJobID().toString();
    createZnode(pathToJobIdZnode);
    final String pathToSuperstepZnode = pathToJobIdZnode + "/"
        + getSuperstepCount();
    createZnode(pathToSuperstepZnode);
    BarrierWatcher barrierWatcher = new BarrierWatcher();
    Stat readyStat = zk.exists(pathToSuperstepZnode + "/ready", barrierWatcher);
    zk.create(getNodeName(), null, Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    List<String> znodes = zk.getChildren(pathToSuperstepZnode, false);
    int size = znodes.size(); // may contains ready
    boolean hasReady = znodes.contains("ready");
    if (hasReady) {
      size--;
    }
    LOG.debug("===> at superstep :" + getSuperstepCount()
        + " current znode size: " + znodes.size() + " current znodes:"
        + znodes);
    if (LOG.isDebugEnabled())
      LOG.debug("enterBarrier() znode size within " + pathToSuperstepZnode
          + " is " + znodes.size() + ". Znodes include " + znodes);
    if (size < jobConf.getNumBspTask()) {
      LOG.info("xxxx 1. At superstep: " + getSuperstepCount()
          + " which task is waiting? " + taskid.toString() + " stat is null? "
          + readyStat);
      while (!barrierWatcher.isComplete()) {
        if (!hasReady) {
          synchronized (mutex) {
            mutex.wait(1000);
          }
        }
      }
      LOG.debug("xxxx 2. at superstep: " + getSuperstepCount()
          + " after waiting ..." + taskid.toString());
    } else {
      LOG.debug("---> at superstep: " + getSuperstepCount()
          + " task that is creating /ready znode:" + taskid.toString());
      createEphemeralZnode(pathToSuperstepZnode + "/ready");
    }
  }
  return true;
}
is just a thoroughly non-easy way to use ZooKeeper at all. And it does not work correctly without throwing exceptions the whole time.
Even if you set the logging aside, it is just a concurrency nightmare.
And again, whatever we chose, it should be designed as a common module.
I suggest making the BSPPeer (or BSPPeerImpl, whatever it is called now) an abstract class and subclassing a ZooKeeper sync peer and an RPC sync peer. Let the user decide.
I think this is just a discussion between ">I< don't like ZooKeeper" and "all the other projects use it". It is not something which will lead us towards a solution anyway.
I like the fact that I can simply add quorum servers. Anyway, the sync service is a totally separate issue from YARN. So, if we design it as a common module, we can improve it later if needed.
What do you think?
What do you want to put into it?
Do you think it is possible to just use BSPPeer in the task runner? If so, let's do it like that for the moment.
If we can keep the complexity of the syncServer low and there are no big differences in performance, let's get rid of ZooKeeper.
Sorry, which BSPPeer in which Taskrunner?
What is your opinion on the abstract version of BSPPeer? Which just adds abstract methods for enter/leave-barrier and getAllPeerNames?
This would be quite enough.
big differences about performance
How should we measure this?
re-integration patch to the trunk
just some minor refactoring in names of bsppeer and impl stuff.
Please apply this to rev 1182452 and check whether everything is fine again. Then we can go back to patches and sort out the sync/checkpoint issues via the mailing list.
tiny fixes and renames. This works
New patch to catch up to trunk.
To apply you should first do:
svn move core/src/main/java/org/apache/hama/bsp/BSPPeer.java core/src/main/java/org/apache/hama/bsp/BSPPeerImpl.java
and
svn move core/src/main/java/org/apache/hama/bsp/BSPPeerInterface.java core/src/main/java/org/apache/hama/bsp/BSPPeer.java
Then you can safely apply the patch; it will ask where core/src/main/java/org/apache/hama/bsp/BSPPeerInterface.java has gone, but just ignore the message.
Then do a
mvn clean install package
and
mvn eclipse:eclipse
and it should be just fine.
Can someone please check this in the near future? I don't want to keep updating it.
Could you please remove tab spaces in yarn/pom.xml file?
Let's put this into trunk.
oh crap. That was the eclipse formatter. I'll fix that.
I'm going to commit this then.
Great, branch is deleted and it is committed.
Next steps:
-fix TODOs
-integrate the SuperSteps in verbose mode of the YARNBSPJob (not working properly somehow)
-test Hama-YARN in fully distributed mode (check if configurations and jars are getting copied correctly)
-add YARN serialize printing to the examples package and make it dependent on the YARN module.
Minor things:
-findbugs run, add target and .*files to SVN ignore.
Things to discuss:
BSPPeer problems
-Zookeeper yes/no/yes
-Checkpointing as a separate process?
general
-module layout -> client/api/common/core yes/no/who wants to refactor this?
I'm going to create subtasks for the next steps and minor things. The discussion part must be transferred to the mailing list.
After the svn update, the BSPPeer.getBSPPeerConnection() method always returns null on my 16-node Hama cluster.
Will you fix this problem?
As I already said in the chat, I can't think of a problem here.
Let's see the diff:
Index: core/src/main/java/org/apache/hama/bsp/BSPPeerImpl.java
===================================================================
--- core/src/main/java/org/apache/hama/bsp/BSPPeerImpl.java (Revision 1182784)
+++ core/src/main/java/org/apache/hama/bsp/BSPPeerImpl.java (Arbeitskopie)
@@ -59,9 +59,9 @@
 /**
  * This class represents a BSP peer.
  */
-public class BSPPeer implements Watcher, BSPPeerInterface {
+public class BSPPeerImpl implements Watcher, BSPPeer {

-  public static final Log LOG = LogFactory.getLog(BSPPeer.class);
+  public static final Log LOG = LogFactory.getLog(BSPPeerImpl.class);

   private final Configuration conf;
   private BSPJob jobConf;
@@ -73,7 +73,7 @@
   private final String bspRoot;
   private final String quorumServers;

-  private final Map<InetSocketAddress, BSPPeerInterface> peers = new ConcurrentHashMap<InetSocketAddress, BSPPeerInterface>();
+  private final Map<InetSocketAddress, BSPPeer> peers = new ConcurrentHashMap<InetSocketAddress, BSPPeer>();
   private final Map<InetSocketAddress, ConcurrentLinkedQueue<BSPMessage>> outgoingQueues = new ConcurrentHashMap<InetSocketAddress, ConcurrentLinkedQueue<BSPMessage>>();
   private ConcurrentLinkedQueue<BSPMessage> localQueue = new ConcurrentLinkedQueue<BSPMessage>();
   private ConcurrentLinkedQueue<BSPMessage> localQueueForNextIteration = new ConcurrentLinkedQueue<BSPMessage>();
@@ -192,7 +192,7 @@
   /**
    * Protected default constructor for LocalBSPRunner.
    */
-  protected BSPPeer() {
+  protected BSPPeerImpl() {
     bspRoot = null;
     quorumServers = null;
     messageSerializer = null;
@@ -208,7 +208,7 @@
    * @param umbilical is the bsp protocol used to contact its parent process.
    * @param taskid is the id that current process holds.
    */
-  public BSPPeer(Configuration conf, TaskAttemptID taskid,
+  public BSPPeerImpl(Configuration conf, TaskAttemptID taskid,
       BSPPeerProtocol umbilical) throws IOException {
     this.conf = conf;
     this.taskid = taskid;
@@ -312,7 +312,7 @@
     Entry<InetSocketAddress, ConcurrentLinkedQueue<BSPMessage>> entry = it
         .next();
-    BSPPeerInterface peer = peers.get(entry.getKey());
+    BSPPeer peer = peers.get(entry.getKey());
     if (peer == null) {
       try {
         peer = getBSPPeerConnection(entry.getKey());
@@ -587,19 +587,19 @@
   @Override
   public long getProtocolVersion(String arg0, long arg1) throws IOException {
-    return BSPPeerInterface.versionID;
+    return BSPPeer.versionID;
   }

-  protected BSPPeerInterface getBSPPeerConnection(InetSocketAddress addr)
+  protected BSPPeer getBSPPeerConnection(InetSocketAddress addr)
       throws NullPointerException, IOException {
-    BSPPeerInterface peer;
+    BSPPeer peer;
     synchronized (this.peers) {
       peer = peers.get(addr);
       int retries = 0;
       while (peer != null) {
-        peer = (BSPPeerInterface) RPC.getProxy(BSPPeerInterface.class,
-            BSPPeerInterface.versionID, addr, this.conf);
+        peer = (BSPPeer) RPC.getProxy(BSPPeer.class,
+            BSPPeer.versionID, addr, this.conf);
         retries++;
         if (retries > 10) {
As you can see, this is just a simple renaming action.
Tested after removing the while loop, and it works well. But I don't know why... (-_-)
I'm committing that code at the moment.
Integrated in Hama-Nightly #328 (See)
HAMA-431 integration of the branch for YARN.
tjungblut :
Files :
- /incubator/hama/trunk/CHANGES.txt
- /incubator/hama/trunk/core/src/main/java/org/apache/hama/bsp/BSPPeer.java
- /incubator/hama/trunk/core/src/main/java/org/apache/hama/bsp/BSPPeerImpl.java
- /incubator/hama/trunk/core/src/main/java/org/apache/hama/bsp/BSPPeerInterface.java
- /incubator/hama/trunk/core/src/main/java/org/apache/hama/bsp/BSPTask.java
- /incubator/hama/trunk/core/src/main/java/org/apache/hama/bsp/GroomServer.java
- /incubator/hama/trunk/core/src/main/java/org/apache/hama/bsp/LocalBSPRunner.java
- /incubator/hama/trunk/core/src/main/java/org/apache/hama/bsp/Task.java
- /incubator/hama/trunk/core/src/main/java/org/apache/hama/checkpoint/Checkpointer.java
- /incubator/hama/trunk/core/src/test/java/org/apache/hama/bsp/BSPSerializerWrapper.java
- /incubator/hama/trunk/core/src/test/java/org/apache/hama/checkpoint/TestCheckpoint.java
- /incubator/hama/trunk/pom.xml
- /incubator/hama/trunk/yarn
- /incubator/hama/trunk/yarn/pom.xml
- /incubator/hama/trunk/yarn/src
- /incubator/hama/trunk/yarn/src/main
- /incubator/hama/trunk/yarn/src/main/java
- /incubator/hama/trunk/yarn/src/main/java/org
- /incubator/hama/trunk/yarn/src/main/java/org/apache
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/BSPApplicationMaster.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/BSPClient.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/BSPRunner.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/BSPTaskLauncher.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/Job.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/JobImpl.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/YARNBSPJob.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/YARNBSPPeerImpl.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/YarnSerializePrinting.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/sync
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/sync/StringArrayWritable.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/sync/SyncServer.java
- /incubator/hama/trunk/yarn/src/main/java/org/apache/hama/bsp/sync/SyncServerImpl.java
- /incubator/hama/trunk/yarn/src/main/resources
- /incubator/hama/trunk/yarn/src/main/resources/log4j.properties
Made this into an umbrella issue and unassigned it.
I'm scheduling this for the 0.6 roadmap.
Yes, we need to push this to the newest version and make it consistent with our latest release again.
I just installed YARN on my test machines. Let's do this in 0.7.
We will continue on a new YARN jira.
So let's do this
https://issues.apache.org/jira/browse/HAMA-431
With the rise of test-driven development (TDD) in the 1990s as part of the eXtreme Programming (XP) movement, the role of example-based testing became fixed in the culture of software development1. The original idea was to drive development of software products based on examples of usage of the product by end users. To support this, Kent Beck and others at the centre of the XP community created test frameworks. They called them unit test frameworks, presumably because they were being used to test the units of code that were being constructed. This all seemed to work very well for the people who had been in on the start of this new way of developing software. But then XP became well known and fashionable: programmers other than the original cabal began to claim they were doing XP and using its tool TDD. Some of them even bought the books written by Kent Beck and others at the centre of the XP community. Some of them even read said books.
Labels are very important. The test frameworks were labelled unit test frameworks. As all programmers know, units are functions, procedures, subroutines, classes, modules: the units of compilation. (Interpreted languages have much the same structure despite not being compiled per se.) Unit tests are thus about testing the units, and the tools for this are unit test frameworks. Somewhere along the line, the connection between these tests and the end user scenarios got lost. Testing became an introvert thing. The whole notion of functional testing and ‘end to end’ testing seemed to get lost because the label for the frameworks was ‘unit test’.
After a period of frustration with the lack of connection between end user scenarios and tests, some people developed the idea of acceptance testing, along with frameworks and workflows to support it. (Acceptance testing has been an integral part of most engineering disciplines for centuries; it took software development a while to regenerate the ideas.) FitNesse [FitNesse] and Robot [Robot] are examples of the sort of framework that came out of this period.
However, the distance between acceptance testing and unit testing was still a yawning chasm2. Then we got a new entrant into the game: behaviour-driven development (BDD). This was an attempt by Dan North and others to recreate the original way of using tests during development. The TDD of XP had lost its meaning for far too many programmers, so the testing frameworks for BDD were called JBehave, Cucumber, etc., and had no concept of unit even remotely associated with them.
Now whilst BDD reasserted the need for programmers and software developers to be aware of end user scenarios and at least pretend to care about user experience whilst implementing systems, we ended up with even more layers of tests and test frameworks.
And then came QuickCheck [QuickCheck], and the world of test was really shaken up: the term ‘property-based testing’ became a thing.
QuickCheck [Hackage] first appeared in work by John Hughes and others in the early 2000s. It started life in the Haskell [Haskell] community but has during the 2010s spread rapidly into the milieus of any programming language that even remotely cares about having good tests.
Example required
Waffling on textually is all very well, but what we really need is code; examples are what exemplify the points, exemplars are what we need. At this point it seems entirely appropriate to make some reuse, which, as is sadly traditional in software development, is achieved by cut and paste. So I have cut and paste3 the following from a previous article for Overload [Winder16]:
For this we need some code that needs testing: code that is small enough to fit on the pages of this august journal, but which highlights some critical features of the test frameworks.
We need an example that requires testing, but that gets out of the way of the testing code because it is so trivial.
We need factorial.
Factorial is a classic example, usually of the imperative vs. functional way of programming, and so is beloved of teachers of first-year undergraduate programming courses. I like this example, though, because it allows investigating techniques of testing, and allows comparison of test frameworks.
Factorial is usually presented via the recurrence relation:
f₀ = 1
fₙ = n · fₙ₋₁
This is a great example, not so much for showing software development or algorithms, but for showing testing4, and the frameworks provided by each programming language.
Given the Haskell heritage of property-based testing, it seems only right, and proper, to use Haskell for the first example. (It is assumed that GHC 7.10 or later (or equivalent) is being used.)
Haskell implementation…
There are many algorithms for realizing the Factorial function: iterative, naïve recursive, and tail recursive are the most obvious. So as we see in Listing 1 we have three realizations of the Factorial function. Each of the functions starts with a type signature followed by the implementation. The type signature is arguably redundant since the compiler deduces all types. However, it seems to be idiomatic to have the type signature, not only as documentation, but also as a check that the function implementation is consistent with the stated signature. Note that in Haskell there are no function call parentheses – parentheses are used to ensure correct evaluation of expressions as positional arguments to function calls. It is also important to note that in Haskell functions are always curried: a function of two parameters is actually a function of one parameter that returns a function of one parameter. Why do this? It makes it really easy to partially evaluate functions to create other functions. The code of Listing 1 doesn’t make use of this, but we will be using this feature shortly.
The iterative and naïveRecursive implementations are just matches with an expression: each match starts with a | and is an expression of Boolean value, then a = followed by the result expression to evaluate for that match expression. Matches are tried in order, and otherwise is the ‘catch all’ “Boolean” that always succeeds; it should, of course, be the last in the sequence. The error function raises an exception to be handled elsewhere. The tailRecursive function has a match and also a ‘where clause’ which defines the function iteration by pattern matching on the parameters. The ‘where clause’ definitions are scoped to the function of definition5,6.
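A sketch consistent with this description (the function names iterative, naïveRecursive and tailRecursive come from the text; the bodies are assumptions and may differ from the original listing):

```haskell
-- Three realizations of Factorial over Integer, each guarding against
-- negative arguments with an error, as described in the text.

iterative :: Integer -> Integer
iterative n
  | n < 0 = error "Factorial not defined for negative values."
  | otherwise = product [1..n]

naïveRecursive :: Integer -> Integer
naïveRecursive n
  | n < 0 = error "Factorial not defined for negative values."
  | n == 0 = 1
  | otherwise = n * naïveRecursive (n - 1)

tailRecursive :: Integer -> Integer
tailRecursive n
  | n < 0 = error "Factorial not defined for negative values."
  | otherwise = iteration n 1
  where
    iteration 0 accumulator = accumulator
    iteration i accumulator = iteration (i - 1) (i * accumulator)
```

Loaded into GHCi, tailRecursive 5 evaluates to 120, as do the other two.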
…and example-based test
Kent Beck style TDD started in Smalltalk with sUnit7 and then transferred to Java with JUnit8. A (thankfully fading) tradition seems to have grown that the first test framework in any language is constructed in the JUnit3 architecture – even if this architecture is entirely unsuitable, and indeed not idiomatic, for the programming language. Haskell seems to have neatly side-stepped the problem from the outset since, although the name is HUnit [HUnit] as required by the tradition, the architecture is nothing at all like JUnit3. Trying to create the JUnit3 architecture in Haskell would have been hard and definitely not idiomatic; HUnit is definitely idiomatic Haskell.
Listing 2 shows the beginnings of a test using a table-driven (aka data-driven) approach. It seems silly to have to write a new function for each test case, hence the use of a table (positiveData) to hold the inputs and outputs, and the creation of all the tests with a generator (testPositive, a function of two parameters: the function to test and a string unique to the function so as to identify it). The function test takes a list argument with all the tests; here the list is being constructed with a list comprehension: the bit before the | is the value to calculate in each case (a fairly arcane expression, but let’s not get too het up about it) and the expression after is the ‘loop’ that drives the creation of the different values, in this case creating a list entry for each pair in the table. Then we have a sequence (thanks to the do expression9) of three calls to runTestTT (a function of one parameter) which actually runs all the tests.
Of course, for anyone saying to themselves “but he hasn’t tested negative values for the arguments of the Factorial functions”: you are not being silly; you are being far from silly, very sensible in fact. I am avoiding this aspect of the testing here simply to avoid some Haskell code complexity10 that adds nothing to the flow of this article. If I had used Python or Java (or, indeed, almost any language other than Haskell) we would not have this issue. For those wishing to see the detail of a full test, please see my Factorial repository on GitHub [Winder].
And the proposition is…
The code of Listing 2 nicely shows that what we are doing is selecting values from the domain of the function and ensuring the result of executing the function is the correct value from the image of the function11. This is really rather an important thing to do but are we doing it effectively?
Clearly to prove the implementation is correct we have to execute the code under test with every possible value of the domain. Given there are roughly 264 (about 18,446,744,073,709,551,616) possible values to test on a 64-bit machine, we will almost certainly decide to give up immediately, or at least within just a few femtoseconds. The test code as shown in Listing 2 is sampling the domain in an attempt to give us confidence that our implementation is not wrong. Have we done that here? Are we satisfied? Possibly yes, but could we do more quickly and easily?
The proposition of proposition-based testing is to make propositions about the code and then use random selection of values from the domain to check the propositions are not invalid. In this case of testing the Factorial function, what are the propositions? Factorial is defined by a recurrence relation comprising two rules. These rules describe the property of the function that is Factorial with respect to the domain, the non-negative integers. If we encode the recurrence relation as a predicate (a Boolean valued function) we have a representation of the property that can be tested by random selection of non-negative integers.
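Stripped of any particular framework, the idea can be sketched in Python — the names below are illustrative, not code from the article’s listings: encode the recurrence relation as a predicate, then check it against randomly drawn values from the domain.

```python
import random

def factorial(n):
    # A candidate implementation under test (illustrative only).
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_property(f, n):
    # The recurrence relation that defines Factorial, encoded as a
    # predicate: f(0) = 1, and f(n) = n * f(n - 1) for n > 0.
    if n == 0:
        return f(0) == 1
    return f(n) == n * f(n - 1)

# Check the proposition against randomly selected non-negative integers.
for _ in range(100):
    assert factorial_property(factorial, random.randint(0, 300))
```

Any function claiming to implement Factorial must satisfy the predicate; an incorrect implementation will, with high probability, be caught by the random sampling.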
Listing 3 shows a QuickCheck test of Factorial. The function
f_p is the predicate representing the property being tested. It is a function of two parameters, a function to test and a value to test, with the result being whether the recurrence relation that defines Factorial is true for that value and that function: the predicate is an assertion of the property that any function claiming to implement the Factorial function must satisfy. Why is this not being used directly, but instead
factorial_property is the predicate being tested by the calls to
quickCheck? It is all about types and the fact that values are automatically generated for us based on the domain of the property being tested.
f_p is a predicate dealing with
Integer, the domain of the functions being tested, values of which can be negative. Factorial is undefined for negative values [12]. So the predicate called by
quickCheck,
factorial_property, is defined with
Natural as the domain, i.e. for non-negative integers [13]. So when we execute
quickCheck on the function under test, it is non-negative integer values that are generated: The predicate never needs to deal with negative values, it tests just the Factorial proposition and worries not about handling the exceptions that the implementations raise on being given a negative argument. Should we test for negative arguments and that an exception is generated? Probably. Did I mention ignoring this for now?
Earlier I mentioned currying and partial evaluation. In Listing 3, we are seeing this in action. The argument to each
quickCheck call is an expression that partially evaluates
factorial_property, binding a particular implementation of
Factorial, and returning a function that takes only a
Natural value. This sort of partial evaluation is a typical and idiomatic technique of functional programming, and increasingly any language that supports functions as first class entities.
By default QuickCheck selects 100 values from the domain, so Listing 3 is actually 300 tests. In the case we have here there are no failures: all 300 tests pass. Somewhat splendidly, if a proposition does fail, QuickCheck sets about ‘shrinking’, which means searching for the smallest value in the domain for which the proposition fails to hold. Proposition testing is being implemented in many languages, and implementations without shrinking are generally seen as not being production ready. Shrinking is such a boon when taking the results of the tests and deducing (or, more usually, inferring) the cause of the problem that it is now seen as essential for any property-based testing framework.
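Shrinking can be thought of as a greedy search for a smaller failing input. Assuming integer inputs, a naive version might look like the sketch below — real frameworks use far richer candidate sets and handle many data types:

```python
def shrink_int(value, fails):
    # Given a failing integer input, repeatedly try 'smaller'
    # candidates, keeping any that still make the property fail.
    current = value
    improved = True
    while improved:
        improved = False
        for candidate in (0, current // 2, current - 1):
            if candidate != current and abs(candidate) < abs(current) and fails(candidate):
                current = candidate
                improved = True
                break
    return current

# A deliberately broken property that fails for every n >= 13:
assert shrink_int(1000, lambda n: n >= 13) == 13
```

Reporting 13 rather than some arbitrary large failing value is what makes the cause of a failure so much easier to infer.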
Figure 1 shows the result of running the two test programs: first the HUnit example based testing – 18 hand picked tests for each of the three implementations; and second the QuickCheck property-based testing – 100 tests for each case, all passing so no need for shrinking.
But who uses Haskell?
Well, quite a lot of people. However, one of the major goals of Haskell is to ‘Avoid success at all costs’ [14]. The point here is not un-sensible. Haskell is a language for exploring and extending ideas and principles of functional programming. The Haskell committee therefore needs to avoid having to worry about backward compatibility. This puts it a bit at odds with many commercial and industrial operations who feel that, once written, a line of code should compile (if that is appropriate) and execute exactly the same for all time without any change. Clearly this can be achieved easily in any language by never upgrading the toolchain. However, the organizations that demand code works for all time usually demand that toolchains are regularly updated. (Otherwise the language is considered dead and unusable. There is irony in here somewhere, I believe.) There is no pleasing some people. Successful languages, in the sense of having many users, clearly have to deal with backward compatibility. Haskell doesn’t. Thus Haskell, whilst being a very important language, doesn’t really have much market traction.
Frege makes an entry
Frege [Frege], though, is actually likely to get more traction than Haskell. Despite the potential for having to update codebases, using ‘Haskell on the JVM’ is an excellent way of creating JVM-based systems. And because the JVM is a polyglot platform, bits of systems can be in Java, Frege, Kotlin [Kotlin], Ceylon [Ceylon], Scala [Scala], Apache Groovy [Groovy], etc. For anyone out there using the Java Platform, I can strongly recommend at least trying Frege. To give you a taste, look at Listing 4, which shows three Frege implementations of the Factorial function, and that Frege really is Haskell. The tests (see Listing 5) are slightly different from the Haskell ones, not because the languages are different but because the context is: instead of creating a standalone executable as happens with Haskell, Frege creates a JVM class to be managed by a test runner. So instead of a
main function calling the test executor, we just declare property instances for running using the
property function, and assume the test runner will do the right thing when invoked. The three examples here show a different way of constraining the test domain to non-negative integers than we saw with Haskell. Function composition (
. operator, must have spaces either side to distinguish it from member selection) of the property function (using partial evaluation) with a test data generator (
NonNegative.getNonNegative; dot as selector, not function composition) shows how easy all this can be. Instead of just using the default generator (which would be
Integer for this property function
factorial_property), we are providing an explicit generator so as to condition the values from the domain that get generated.
The result of executing the Frege QuickCheck property-based tests are seen in Figure 2. As with the Haskell, 100 samples for each test with no fails and so no shrinking.
But…
With Haskell trying not to have a user base reliant on backward compatibility, and Frege not yet having quite enough traction to be deemed popular, it behoves us to consider the proposition of proposition testing in one or more languages that have already gained real traction.
First off let us consider…Python.
Let’s hypothesize Python
Python [Python_1] has been around since the late 1980s and early 1990s. During the 2000s it rapidly gained in popularity. And then there was the ‘Python 2 / Python 3 Schism’ [15]. After Python 3.3 was released, there were no excuses for staying with Python 2. (Well, except two – and I leave it as an exercise for the reader to ascertain what these two genuine reasons are for not immediately moving your Python 2 code to Python 3.) For myself, I use Python 3.5 because Python now has function signature type checking [Python_2] [16].
Listing 6 shows four implementations of the Factorial function. Note that the function signatures are advisory, not strongly type checked. Using the MyPy [MyPy] program the types will be checked, but on execution it is just standard Python as people have known it for decades.
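To see what ‘advisory’ means here: the annotations in the sketch below (a standalone illustration, not Listing 6 itself) are visible to tools such as MyPy, but the interpreter ignores them at run time.

```python
def factorial(n: int) -> int:
    # Annotated for tools such as MyPy, but CPython does not enforce
    # the annotations at run time.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

assert factorial(5) == 120
# The annotations are just metadata attached to the function object:
assert factorial.__annotations__ == {'n': int, 'return': int}
# A call that violates the annotation is not rejected at run time:
assert factorial(True) == 1
```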
I suspect the Python code here is sufficiently straightforward that almost all programmers [17] will be able to deduce or infer any meanings that are not immediately clear in the code. But a few comments to help: the
range function generates a range ‘from up to but not including’. The
if expression is of the form:
<true-value> if <boolean-expression> else <false-value>
The nested function iterate in
tail_recursive is scoped to the else block.
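Listing 6 is not reproduced here, but implementations along these lines would match the description — a naive recursion, a tail recursion with a nested iterate function scoped to the else block, an iteration, and a reduce. (Error handling for negative arguments is omitted, and the listing’s exact code may differ.)

```python
from functools import reduce

def naive_recursive(n: int) -> int:
    return 1 if n == 0 else n * naive_recursive(n - 1)

def tail_recursive(n: int) -> int:
    if n == 0:
        return 1
    else:
        # The nested function is scoped to this else block.
        def iterate(i: int, accumulator: int) -> int:
            return accumulator if i == 0 else iterate(i - 1, i * accumulator)
        return iterate(n, 1)

def iterative(n: int) -> int:
    result = 1
    for i in range(2, n + 1):  # 'from, up to but not including'
        result *= i
    return result

def using_reduce(n: int) -> int:
    return reduce(lambda accumulator, i: accumulator * i, range(2, n + 1), 1)
```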
But are these implementations ‘correct’? To test them let’s use PyTest [Pytest]. The test framework that comes as standard with Python (unittest, aka PyUnit) could do the job, but PyTest is just better [18]. PyTest provides an excellent base for testing but it does not have property-based testing. For this we will use Hypothesis [Hypothesis] (which can be used with PyUnit as easily as with PyTest, but PyTest is just better).
Listing 7 shows a fairly comprehensive test – not only are we testing non-negative and negative integers, we also test other forms of error that are possible in Python. Tests are functions with the first four characters of the name being t, e, s, t. Very JUnit3, and yet these are module-level functions. There are no classes or inheritance in sight: that would be the PyUnit way. The PyTest way is to dispense with classes as necessary infrastructure, swapping them for some infrastructure that has to be imported in some way or other. (This is all handled behind the scenes when pytest.main executes.) PyTest is in so many ways more Pythonic [19] than PyUnit.
PyTest has the
@mark.parametrize decorator that rewrites your code so as to have one test per item of data in an iterable. In all the cases here, it is being used to generate tests for each algorithm [20].
The
@given decorator, which comes from Hypothesis, does not rewrite functions to create new test functions. Instead it generates code to run the function it decorates with a number (the default is 100) of randomly chosen values using the generator given as argument to the decorator, recording the results to report back. This automated data generation is at the heart of property-based testing, and Hypothesis, via the supporting functions such as
integers,
floats, and
text (for generating integers, floats, and strings respectively), does this very well. Notice how it is so easy to generate just negative integers or just non-negative integers. Also note the use of the ‘with statement’ [21] and the
raises function for testing that code does, in fact, raise an exception.
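Hypothesis itself is a third-party package, so here is a stdlib-only toy sketch of the mechanics just described — a decorator that feeds a test function generated values, with generator factories easily constrained to just negative or just non-negative integers. This is illustrative code, not the Hypothesis implementation (which also shrinks failures, replays them, and much more).

```python
import random

def integers(min_value=-10**6, max_value=10**6):
    # Toy generator factory, standing in for a strategies-style API.
    return lambda: random.randint(min_value, max_value)

def given(strategy):
    # Toy stand-in for a @given-style decorator: wrap the test so that
    # calling it runs the body against 100 generated values.
    def decorate(test):
        def runner():
            for _ in range(100):
                test(strategy())
        return runner
    return decorate

def factorial(n):
    if n < 0:
        raise ValueError('undefined for negative values')
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

@given(integers(min_value=0, max_value=300))
def test_recurrence(x):
    assert factorial(x) == (1 if x == 0 else x * factorial(x - 1))

@given(integers(min_value=-300, max_value=-1))
def test_negative_raises(x):
    try:
        factorial(x)
        assert False, 'expected ValueError'
    except ValueError:
        pass

test_recurrence()
test_negative_raises()
```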
All the test functions have a parameter
a that gets bound by the action of the
@mark.parametrize decorator, and a parameter
x that gets bound by the action of the
@given decorator. This is all very different from the partial evaluation used in Haskell and Frege: different language features lead to different idioms to achieve the same goal. What is Pythonic is not Haskellic/Fregic, and vice versa. At least not necessarily.
The
pytest.main function, when executed, causes all the decorators to undertake their work and executes the result. The output from an execution will look very much as in Figure 3. You may find when you try this that the last line is green [22].
Doing the C++ thing
There are many other example languages we could present here to show the almost complete coverage of property-based testing in the world: Kotlin [Kotlin], Ceylon [Ceylon], Scala [Scala], Apache Groovy [Groovy], Rust [Rust], D [D], Go [Go],… However, given this is an August [23] ACCU journal and, historically at least, ACCU members have had a strong interest in C++, we should perhaps look at C++. Clearly people could just use Haskell and QuickCheck to test their C++ code, but let’s be realistic here, that isn’t going to happen [24]. So what about QuickCheck in C++? There are a number of implementations, for example CppQuickCheck [QuickCheck_2] and QuickCheck++ [QuickCheck_3]. I am, though, going to use RapidCheck [RapidCheck] here because it seems like the most sophisticated and simplest to use of the ones I have looked at to date [25].
There is one thing we have to note straight away: Factorial values are big [26]. Factorial of 30 is a number bigger than can be stored in a 64-bit integer. So all the implementations of Factorial used in books and first year student exercises are a bit of a farce because they are shown using hardware integers: the implementations work for arguments [0..20] and then things get worrisome. “But this is true for all languages and we didn’t raise this issue for Haskell, Frege and Python,” you say. Well, for Haskell (and Frege, since Frege is just Haskell on the JVM) the
Int type is a hardware number but
Integer, the type used in the Haskell and Frege code, is an integer type the values of which can be of effectively arbitrary size. There is a limit, but then in the end even the universe is finite [27]. What about Python? The Python [28]
int type uses hardware when it can or an unbounded (albeit finite [27]) integer when it cannot. What about C++? Well, the language and standard library have only hardware-based types, which could be taken as rather restricting. GNU has, however, conveniently created a C library for unbounded (albeit finite [27]) integers, and it has a rather splendid C++ binding [GNU].
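The arithmetic behind the [0..20] limit is easy to check in a language with unbounded integers, Python being the handy one here:

```python
import math

# 64-bit hardware integers give out quickly:
assert math.factorial(20) < 2**63 <= math.factorial(21)   # signed 64-bit limit
assert math.factorial(30) > 2**64                         # beyond even unsigned
# An unbounded integer type just keeps going (Python's int, like
# Haskell's Integer or a GMP integer):
assert len(str(math.factorial(900))) > 2000
```

So 20 is the largest argument whose factorial fits a signed 64-bit integer, which is exactly why the student-exercise implementations stop being honest at that point.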
So using the GMP C++ API, we can construct implementations of the Factorial function that are not restricted to arguments in the range [0..20] but are more generally useful. Listing 8 shows the functions being exported by the
Factorial namespace. We could dispense with the
long overloads, but it seems more programmer friendly to offer them.
Listing 9 presents the implementations. I suspect that unless you already know C++ (this code is C++14) you have already moved on. So any form of explanatory note is effectively useless here [29]. We will note, though, that there is a class defined in there as well as implementations of the Factorial function.
Listing 10 presents the RapidCheck-based test code for the Factorial functions. There is a vector of function pointers [30] so that we can easily iterate over the different implementations. Within the loop we have a sequence of the propositions. Each check has a descriptive string and a lambda function. The type of each parameter to the lambda function will cause (by default 100) values of that type to be created and the lambda executed for each of them. You can have any number of parameters – zero has been chosen here, which might seem a bit strange at first, but think generating random integers. Some of them are negative and some non-negative and we have to be careful to separate these cases as the propositions are so very different. Also some of the calculations for non-negative integers will result in big values. The factorial of a big number is stonkingly big. Evaluation will take a while… a long while… a very long while… so long we will have read War and Peace… a large number of times. So we restrict the integers of the domain sample by using an explicit generator. In this case for the non-negative integers we sample from [0..900]. For the negative integers we sample from a wider range as there should only ever be a very rapid exception raised; there should never actually be a calculation.
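Framework aside, the separation just described — sampling [0..900] for the recurrence proposition, and a much wider negative range for the ‘rapid exception’ proposition — can be sketched in stdlib Python (illustrative names; the actual tests are the C++ of Listing 10):

```python
import random

def factorial(n):
    if n < 0:
        raise ValueError('Factorial is undefined for negative arguments')
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def recurrence_holds(n):
    # Proposition for the non-negative part of the domain.
    return factorial(n) == (1 if n == 0 else n * factorial(n - 1))

def rejects_negative(n):
    # Proposition for negative arguments: a rapid error, never a calculation.
    try:
        factorial(n)
        return False
    except ValueError:
        return True

assert all(recurrence_holds(random.randint(0, 900)) for _ in range(100))
assert all(rejects_negative(random.randint(-10**9, -1)) for _ in range(100))
```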
So that is the Factorial functions themselves tested. I trust you agree that what we have here is a very quick and easy test that provides good coverage. But, you ask, what about that class? Should we test the class? An interesting question. Many would say “No” because it is internal stuff, not exposed as part of the API. This works for me: why test anything that is not observable from outside? Others will say “Yes”, mostly because it cannot hurt. For this article I say “Yes” because it provides another example of proposition-based testing. We do not test any examples, we test only properties of the class and its member functions. See Listing 11. By testing the properties, we are getting as close to proving the implementation not wrong as it is possible to get in an easily maintainable way. QED.
And to prove that point, see Figure 4, which shows the Factorial tests and class test executed. So many useful (passing) tests, so little effort.
The message
Example-based testing of a sample from the domain tells us we are calculating the correct value(s). Proposition-based testing tells us that our code realizes the relationships that should exist between different values from the domain. They actually tell us slightly different things and so arguably good tests do both, not one or the other. However if we have chosen the properties to test correctly then zero, one, or two examples are likely to be sufficient to ‘prove’ the code not incorrect. Hypothesis, for example, provides an
@example decorator for adding those few examples. For other frameworks in other languages we can just add one or two example-based tests to the property-based tests.
But, some will say, don’t (example-based) tests provide examples of use? Well yes, sort of. I suggest that these examples of use should be in the documentation, that users should not have to descend to reading the tests. So for me property-based testing (with as few examples as needed) is the future of testing. Examples and exemplars should be in the documentation. You do write documentation, don’t you…
An apology
Having just ranted about documentation, you may think I am being hypocritical since the code presented here has no comments. A priori, code without comments, at least documentation comments [31], is a Bad Thing™ – all code should be properly documentation commented. All the code in the GitHub repository that holds the originals from which the code presented here was extracted is properly commented. So if you want to see the properly commented versions, feel free to visit. If you find any improperly commented code, please feel free to nudge me about it and I will fix it post haste [32].
Acknowledgements
Thanks to Fran Buontempo for being the editor of this august [33] journal, especially this August august journal [34], and letting me submit a wee bit late.
Thanks to Jonathan Wakely for not laughing too much when I showed him the original C++ code, and for making suggestions that made the code far more sensible.
Thanks to the unnamed reviewers who pointed out some infelicities of presentation as well as syntax. Almost all the syntactic changes have been made – I disagreed with a few. Hopefully the changes made to the content have fully addressed the presentation issues that were raised.
Thanks to all those people working on programming languages and test frameworks, and especially for those working on property-based testing features, without whom this article would have been a very great deal shorter.
References
[Catch]
[Ceylon]
[FitNesse]
[Frege]
[GNU]
[Go]
[Groovy]
[Hackage]
[Haskell]
[HUnit]
[Hypothesis]
[Kotlin]
[MyPy]
[Pytest]
[Python_1]
[Python_2]
[QuickCheck]
[QuickCheck_2]
[QuickCheck_3]
[RapidCheck]
[Robot]
[Rust]
[Scala]
[Winder] The full Haskell example can be found in the author’s Factorial repository on GitHub.
[Winder16] Overload, 24(131):26–32, February 2016. PDF and HTML versions are available.
1. Into the culture of cultured developers, anyway.
2. Yes there is integration testing and system testing as well as unit testing and acceptance testing, and all this has been around in software, in principle at least, for decades, but only acceptance testing and unit testing had frameworks to support them. OK, technically FitNesse is an integration testing framework, but that wasn’t how it was being used, and not how it is now advertised and used.
3. Without the footnotes, so if you want those you’ll have to check the original. We should note though that unlike that article of this august journal, this is an August august journal issue, so very august.
4. OK so in this case this is unit testing, but we are creating APIs which are just units so unit testing is acceptance testing for all intents and purposes.
5. If you need a tutorial introduction to the Haskell programming language then and are recommended.
6. If you work with the JVM and want to use Haskell, there is Frege; see or Frege is a realization of Haskell on the JVM that allows a few extensions to Haskell so as to work harmoniously with the Java Platform.
7. The name really does give the game away that the framework was for unit testing.
8. Initially called JUnit, then when JUnit4 came out JUnit was renamed JUnit3 as by then it was at major version 3. Now of course we have JUnit5.
9. Yes it’s a monad. Apparently monads are difficult to understand, and when you do understand them, they are impossible to explain. This is perhaps an indicator of why there are so many tutorials about monads on the Web.
10. Involving Monads. Did I mention how once you understand monads, you cannot explain them?
11. Pages such as and may be handy if you are unused to the terminology used here.
12. And also non-integral types, do not forget this in real testing.
13. If you are thinking we should be setting up a property to check that all negative integers result in an error, you are thinking on the right lines.
14. A phrase initially spoken by Simon Peyton Jones a number of years ago that caught on in the Haskell community.
15. We will leave any form of description and commentary on the schism to historians. As Python programmers, we use Python 3 and get on with programming.
16. This isn’t actually correct: Python allows function signatures as of 3.5 but doesn’t check them. You have to have a separate parser-type-checker such as MyPy. This is annoying; Python should be doing the checking.
17. We will resist the temptation to make some facetious, and likely offensive, comment about some programmers who use only one programming language and refuse to look at any others. “Resistance is futile.” Seven of Nine.
18. For reasons that may, or may not, become apparent in this article, but relate to PyUnit following the JUnit3 architecture – remember the fading tradition – and PyTest being Pythonic.
19. See
20. There are ways of parameterizing tests in PyUnit (aka unittest), but it is left as an exercise for the reader to look for these. PyTest and @pytest.mark.parametrize are the way this author chooses to do parameterized tests in Python.
21. Context managers and the ‘with statement’ are Python’s way of doing RAII (resource acquisition is initialization), amongst other great things.
22. Whilst this is an August august journal (and so very august), it is monochrome. So you will have to imagine the greenness of the test output. Either that or actually try the code out for yourself and observe the greenness first hand.
23. Or should that be august. Well actually it has to be both.
24. Not least because Haskell’s avowed aim is never to be successful.
25. Also it uses Catch [Catch] for its tests.
26. Factorials are big like space is big, think big in Hitchhiker’s Guide to the Galaxy terms: “Space,” it says, “is big. Really big. You just won’t believe how vastly, hugely, mindbogglingly big it is. I mean, you may think it’s a long way down the road to the chemist, but that’s just peanuts to space. Listen…?”
27. Space may be big (see above) but the universe (space being the same thing as the universe as far as we know) is finite – assuming the current theories are correct.
28. Python 3 anyway. Python 2 has effectively the same behaviour, but with more types. It is left as an exercise for the reader whether to worry about this.
29. There was some thought of introducing the acronym RTFC (read the fine code), but this temptation was resisted. “Resistance is futile.” Seven of Nine.
30. Well, actually pairs, with the first being the function pointer and the second being a descriptive string.
31. Debating the usefulness or otherwise of non-documentation comments is left as an exercise for the readership.
32. And request Doctor Who or someone to perform appropriate time travel with the corrections so that the situation has never been the case.
33. And, indeed, August.
34. “This joke is getting silly, stop this joke immediately.” The Colonel.
On Thu, Sep 12, 2002 at 10:42:08AM +0200, Marcelo E. Magallon wrote: > >> Anthony Towns <aj@azure.humbug.org.au> writes: > > > Well, that doesn't really work for things like apt and apache, where > > you break backwards compatability and can't just keep the old version > > around. I don't think anyone came up with a way of doing it for the > > C++ stuff, either. > > It seemed to me that we had reached an agreement on the namespace > solution (put libraries linked using the new G++ ABI in, say, > /usr/lib/g++-3.2/ and modify the C++ front-end to pass the appropiate > flags to the linker). At least everyone seemed to be saying "yes, that > sounds ok" and noone raised major objections. We did put the idea of > "let's make the dynamic linker second-guess us" aside pretty fast -- > some people kept musing on that for a while but AFAIR it was going > nowhere. It is not my intention to get in the way of the g++3.2 transition, this proposal has been written way before g++3.2 came around. My motivation is mainly that woody is out and it is the time for such proposals. So, if the transition is already planned (sorry, I am not as up to date as I would like) just ignore my proposal with regards to it and do as planned. Anyway I still think it could be usefull not only for libraries, but for package splits or any other changes which may break other packages.. | https://lists.debian.org/debian-devel/2002/09/msg00837.html | CC-MAIN-2015-48 | refinedweb | 258 | 79.6 |
Note this HowTo assumes that you are using a cluster provided by the AMPLab team, where port 8888 has already been opened. If you have spun up your own cluster using their AMI, you need to go to the Security Groups tab and open that port for traffic. See here for full details.
You can run the IPython Notebook interface as a more friendly way to interact with your AMPCamp EC2 cluster. The detailed instructions on how to run a public IPython Notebook Server are here, but the basics are:
Create a certificate file for your cluster by typing at the command line:
cd /root
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
Let's make sure there's a default IPython profile ready for us to use:
ipython profile create default
Now create a hashed password for the notebook server and store it in a file:
python -c "from IPython.lib import passwd; print passwd()" \
    > /root/.ipython/profile_default/nbpasswd.txt
Verify the password file has a string like
sha1:16a8a30fb9b6:82c0... in it (your actual value will differ). If you don't get this, repeat the prior step:
cat /root/.ipython/profile_default/nbpasswd.txt
sha1:16a8a30fb9b6:82c030d3989b0069b9ed603822949a954a2beb21
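The value in that file is IPython’s salted-SHA-1 password format: algorithm, salt, and hex digest separated by colons. Purely to illustrate the format — the real hashing is done by `IPython.lib.passwd`, which generates a random 12-hex-digit salt; one is fixed here only so the output is reproducible:

```python
import hashlib

def notebook_passwd(passphrase, salt='16a8a30fb9b6'):
    # Format: '<algorithm>:<salt>:<hexdigest of passphrase + salt>'.
    digest = hashlib.sha1((passphrase + salt).encode('utf-8')).hexdigest()
    return ':'.join(('sha1', salt, digest))

hashed = notebook_passwd('my secret phrase')
assert hashed.startswith('sha1:16a8a30fb9b6:')
assert len(hashed.split(':')) == 3
```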
Put the following into the file
/root/.ipython/profile_default/ipython_notebook_config.py:
# Configuration file for ipython-notebook.
c = get_config()

# Notebook config
c.NotebookApp.certfile = u'/root/mycert.pem'
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False

# It is a good idea to put it on a known, fixed port
c.NotebookApp.port = 8888

PWDFILE="/root/.ipython/profile_default/nbpasswd.txt"
c.NotebookApp.password = open(PWDFILE).read().strip()
Put the following into the file
/root/.ipython/profile_default/startup/00-pyspark-setup.py:
# Configure the necessary Spark environment
import os
os.environ['SPARK_HOME'] = '/root/spark/'

# And Python path
import sys
sys.path.insert(0, '/root/spark/python')

# Detect the PySpark URL
CLUSTER_URL = open('/root/spark-ec2/cluster-url').read().strip()
That's it! You can now start the notebook server by typing the following command:
ipython notebook
Note: I strongly recommend you do this inside a
screen or
tmux session so it's persistent. This will let it survive cleanly if you lose your connection to your cluster.
You can then connect to the server via
https://[YOUR INSTANCE URL HERE]:8888. Once you type your password, you should be able to start running code!
Warning: the URL for your notebook must start with
https, not
http.
The config file above creates a variable called
CLUSTER_URL which you can use to create your
SparkContext:
print CLUSTER_URL
spark://ec2-50-16-173-245.compute-1.amazonaws.com:7077
Now let's create the context:
from pyspark import SparkContext
sc = SparkContext(CLUSTER_URL, 'pyspark')
And test it by creating a trivial RDD:
sc.parallelize([1,2,3])
<pyspark.rdd.RDD at 0x1e16d90>
Because of how PySpark works, the above context will hog all your cluster resources. If you are going to do new work and are done with this tutorial, remember to shut it down from the dashboard so you free the cluster for other work.
What's the difference between:
import * as jslibname from 'jslibname'
declare var jslibname: any;
declare var firebase: any;
import * as moment from 'moment';
When you do
import * as library from 'library';
you are really importing the library and you can start using it. If you try to use it without importing it first, you would get an error.
declare var library: any;
This is telling the TS compiler that the library exists, so it stops throwing a compilation error (and the IDE stops complaining).
The link at "Adding the Windows 10 platform to your Cordova project" goes to a login screen.
The link is an internal website.
Thanks, Raymond and aarayas. It's supposed to be a bookmark link to the section with that title, but it looks like it got mangled. I'm working on fixing it. Until it gets fixed, just refer to the section below with that title.
Polita
Thanks everyone. We've removed the bad link and now you can refer to the section below with that title.
– Visual Studio Team
Hi Polita,
I'm trying LocationApp (from Channel 9 at BUILD 2015). It has been several days. I add the "Geolocation" plugin but it is unsuccessful; it says "Couldn't download plugin".
I've tried to look in case there is an incorrect setting I made, but no luck.
Is there any complete instruction on how to add plugins?
Need your help please!
Jannen Siahaan
@humaNiT
"Couldn't download plugin" usually occurs if prerequisites are not correctly installed or configured. The easiest way to see if this is so is via the Compatibility Checker. In the RC, this happens when the package is loaded into VS, which means that in order to invoke it, you need to restart VS and then open your project again.
Please verify that the prerequisites are installed. If this fails, please send an email to vscordovatools@microsoft.com so we can further help diagnose the issue.
Hi,
In Windows Phone 8.1 the Cordova plugins were written in C#. We are now looking at the commit tree for Windows 10 and find that plugins are written in JavaScript (e.g. git-wip-us.apache.org/…/asf). Does this in turn use WinJS?
Also, can you clarify that there will be no Cordova native C# classes? We have customizations to the Windows 8.1 plugins in C# which will be a throw-away, and we will need to figure out from scratch how to implement them in JS. It would be good to know where we can find more details on writing Cordova plugins in JS for the Windows 10 platform. We have custom native plugins and can't figure out how to integrate them.
I would appreciate a response.
Thanks.
Regards,
Kaushik
@Kaushik: Windows Phone 8.1 introduced a new, unified app platform model with the Desktop Windows 8.1 SKUs that is becoming even more unified in Windows 10. That unified app model architecture provides a consistent HTML application developer experience. By implementing Cordova in HTML and JavaScript, we’re improving the platform because developers don’t need to use two competing garbage collectors and JIT environments like they did in Windows Phone 8. We will continue these investments as we believe that Cordova implemented in HTML and JavaScript provides the best environment for developers and customers.
It is possible to still use C# components for Cordova plugins in Windows Phone 8.1 and in Windows 10. However, you will either have to write a custom Cordova Bridge component (that's the implementation of the Cordova "exec()" function), or write an initial proxy in JavaScript, which then calls into your C# component. Your C# component must be a Windows Runtime Component, e.g., must be a .winmd file; once those are referenced by your project, you should have access to the namespaces and types in that .winmd.
As to the question about it using WinJS: The Cordova platform is committed to providing at least baseline support for WinJS when running on the Windows platform, which means that plugins running on Windows get access to the set of functionality in base.js (Promise being the most-used one). You can see in cordova.js the loader that explicitly loads WinJS based on the version of Windows being targeted.
Hi,
I just follow this tutorial, and it fail ; I’m using Microsoft Visual Studio Community 2015, and my version of Cordova is 5.4.1.
When I try to build for Windows 10 (anyCPU, local machine), I get this error :
Cannot import the key file ‘CordovaApp_TemporaryKey.pfx’. The key file may be password protected. To correct this, try to import the certificate manually into the current uesr’s personal certificate store. [… path to the project …]
The certificate specified is not valid for signing. For more information about valid certificates, see. [… path to the project …]
For the moment, no custom code wrote ; just the default template, and it fails, using default Cordova Windows platform or your custom “windows@”
Do you know why this happen ?
Thanks
I am failing to install Apache Cordova in windows 10. I have tried installing it many a times but the cursor goes on rotating. I have node.js and git bash installed. I also have Windows Visual Studio 2015. Please help me in this | https://blogs.msdn.microsoft.com/visualstudio/2015/04/30/introducing-the-windows-10-apache-cordova-platform/ | CC-MAIN-2017-17 | refinedweb | 793 | 66.94 |
J C Calvarese wrote:
>Agreed. std isn't so bad for std.string and some things like that. I happen
>>to also think things like std.math and std.c.math would be better as d.math
>>and c.math but definitely the double std modules are the least appealing.
>
> Most of the double std modules could be easily fixed, by making a few
> module-specific changes...
>
> std.stdio -> std.io
> std.stdint -> std.integer
> std.stdarg -> std.vararg
>
> But I'm repeating myself, so I'll leave it there.
Is *that* the problem here ? The two "std" ? :-O
But "stdint" is not just about any integer, it's about
porting from C/C++ ? Kinda like "stdbool", which is in
object.d (and inside the D compiler) at the moment...
(but I won't bring my old stdutf and stdfloat suggestions up again...)
And what about the "std.c.windows.windows" or
"std.c.unix.unix". Are those also too repeated ?
(I don't see the problem with them, being used to:
#include <Carbon/Carbon.h> #import <Cocoa/Cocoa.h>)
Changing traditional stdio and stdarg just doesn't seem worth it, to me.
Besides, doesn't "io" already belong to *another* planet already ? ;-)
--anders
I just wanted to say that althought I was being nice to TZ, I ABSOLUTELY agree
that he had a bad attitude, and ALSO that adding GUI code into the LANGUAGE is
not only retarded, but against the whole concept of a system level language.
I use D cause it's modern, system level, and fast. Not because it has "time
saving" gui features. Perhaps Visual Basic is the language he should go use.. I
started there, and thats perhaps why I can appricate having a real language to
write with at a system level...
Anyhoo, just wanted to say that I am glad that we can discuss this stuff, I
diddn't mean to dicousrage it.. Only to ask what people were thinking about the
reality of such changes.
I am glad this newsgroup isn't full of jerks like EFNet IRC rooms are... I can
ask a dumb question at 5:00am and not get ridiculed... To me, that is a good
comunity.
So yes, we should keep talking about this stuff, but mostly stick together and
stick with D.
I admit to having a fear that causes loss of sleep sometimes that D will die
like BeOS did, or that my software I write being open source will somehow screw
me over in the end... But something about this newsgroup helps me sleep.
So thanks for participating and reading the crap I write from time to time.
D = my language of choice, and I hope it to be for a very long time.
Thanks,
Trevor Parscal
trevorparscal@hotmail.com
"Trevor Parscal" <trevorparscal@hotmail.com> wrote in message
news:d7luvv$2d8$1@digitaldaemon.com...
> Are we still on this subject cause nothing has happened, or because the
> opposition to changing things feels the attack is not over?
The battle is not over - send in the clones! :-)
Seriously, though, I think people said what they wanted to say and that's
that. Walter can do what he wants.
> Is D really done by committee? I thought we only had the power of
> suggestion. And to me, argument leads to only more fear of change (in
> Walter's eyes i suppose) as you don't want to anger the loyal
> programmers..
Interesting thoughts. I hope and suspect Walter isn't afraid to change
things that have been around for a while. He's said the core parts of D are
set so there are some things he won't change but I assumed before posting
that the std name wasn't one of those things.
[snip]
> Here it is. I love the D community, and I look forward to every day of
> programing and socializing over this newsgroup... But I have to ask
> myself, what is actually being done everyday. I mean, we should be talking
> about our projects, asking and answering questions, etc. Not arguing so
> very much.
This newsgroup does tend to be the place where language topics come up that
grow out of frustrations developed while working on a project. For example
with me all the std.stdio and std.c.stdio typing that I've been doing to add
FILE* streams lead me to remember how in the past some people complainded
about std and so I thought I post and see what people thought. I don't think
this thread was particularly argumentative - I tend to stop reading those
that look like the posters have started arguing.
> So maybe Walter should have stuck with d instead of std. Lets just be
> content with whatever Walter decides for D, let him do what he WANTS to
> do, and not restrict him so much with out demands of traditionalism.
I don't understand what is meant by demands of traditionalism but aside from
that I agree that Walter should do what he wants. The newsgroup is an
important place for him to get feedback on D so I think the more posts about
what's bothering people the better.
> Lets all work on a project of out own, or one on D source.
>
> Anyone with me?
I think one can work on projects and contribute to newsgroup discussions at
the same time.
> And yes, I like to type.. So sue me!
>
> --
> Thanks,
> Trevor Parscal
>
> trevorparscal@hotmail.com
"Trevor Parscal" <Trevor_member@pathlink.com> wrote in message
news:d7jpo4$uad$1@digitaldaemon.com...
> In article <d7jp8f$u1o$1@digitaldaemon.com>, J C Calvarese says...
>>
>>In article <1o6qpoefvv20w$.i85textgh7d9$.dlg@40tude.net>, Derek Parnell
>>says...
>>>
>>>On Tue, 31 May 2005 23:35:26 -0700, Andrew Fedoniouk wrote:
>>>
>>>> d is distance, discriminant, data.
>>>
>>>d is dumb ;-)
>>
>>It's not dumb. It's just the letter I use when a, b, and c are already
>>used. :)
>>
>>jcc7
>
> why use a, b, c, d... ? Arrays are SOOO much more flexible, powerful,
> manageable, and in most cases, faster when compiled.
>
> Eh.. whatever, It's not the point.. I won't name my varaible that way..
>
> and why would you have a user data type that's all lowercase?
>
> foreach(distance d; ...
>
> Wouldnt that be
>
> foreach(DISTANCE distance; ...
>
> or
>
> foreach(Distance distance; ...
>
> Well... I appricate the feedback anyhoo.
Naming is personal.
I am trying to use:
DISTANCE - enum and its value, constants, template typenames.
Distance - class and structure with semantic of class.
distance - aliased or typedefed ints, doubles, etc. and
structures with semantic of value:
e.g. rect, point, size - they are just basic values
treated, as a rule, as one single entity. | http://forum.dlang.org/thread/d7j1ir$82d$1@digitaldaemon.com?page=6 | CC-MAIN-2014-15 | refinedweb | 1,112 | 77.23 |
I'm interested, especially as I've recently written a very lightweight SAX/DOM-like Java parser
for fast parsing in Android/Java
called Ssx (Super Simple XML). Long dissatisfied with the verbosity and mistakes in the DOM
API, and having created a couple
simplified XML APIs in the past, Ssx has a most-concise DOM/mini-XPath API that is directly
aimed at application data use. The Ssx
code has the ability to switch between SAX parsers on the fly, one of which is internal.
One method that I needed and that I think should be part of DOM-like APIs is a call like toXml()
which returns parseable XML for any
element. It should either assume a given set of namespace declarations (meaning it is more
of a fragment) or it should generate
namespace declarations for everything active at the point in the tree. Ssx does the latter
so far in an efficient way.
I can't publish Ssx yet, but hopefully soon.
Additionally, we've begun the process of getting interest here in an OpenEXI incubator project.
EXI is the W3C Efficient XML
Interchange proposed standard for compact, efficient-to-process binary XML. We have two open
source code bases (one just open
sourced) that we are currently combining that will form the basis for the project. When a
little more complete, we will continue
that discussion.
Stephen
On 1/21/11 9:04 AM, Eric Johnson wrote:
> I've previously mentioned our GenXDM project on this mailing list. And I posted an incubator
proposal (gXML at the time). As a
> quick reminder, GenXDM defines a Java API for the XQuery Data Model, via a layer of indirection,
in such a way that you can choose
> different XML tree implementations at runtime, with minimal overhead.
>
> At the time, it appeared we didn't attract enough interest to go through with incubating
at Apache. We're still hoping to do
> that, though.
>
> Since I posted our proposal, we've been busy. Of particular note to this mailing list,
as a proof of concept, we've done a
> complete port of the XML Security Java library (Santuario) to the GenXDM APIs. And we've
now released that over at the Apache
> Extras site.
>
> We kept the port fully backwards compatible (all existing tests pass unmodified!), and
added to the API, so that you can use
> Santuario with non DOM XML trees.
>
> As we are still interested in incubating GenXDM at Apache, I wanted to mention our port
here, as several people mentioned at the
> time that they wanted to see more, before deciding whether it made sense to get involved.
>
> The projects:
>
>
>
> We welcome you to stop by, kick the tires, and join our mailing list, as you see fit!
>
> Thanks.
>
> -Eric.
>
> ---------------------------------------------------------------------
> | http://mail-archives.eu.apache.org/mod_mbox/incubator-general/201101.mbox/%3C4D39C8C9.4050507@lig.net%3E | CC-MAIN-2019-47 | refinedweb | 463 | 60.45 |
Pythonista 3 error on iPadOS: cannot connect to host discord.com:443 ssl=True
I'm coding a Discord.py bot in python and I'm trying to move to pythonista but getting this error and i haven't found anyone else with this error on pythonista:
ssl.CertificateError: hostname 'IP-ADDRESS' doesn't match either of 'ssl764977.cloudflaressl.com', '*.discord.com', 'discord.com'
please help me with this as i spent $15 on this and would really like to be able to use it. Thank you!
OK, I looked at the issue you sent, and that's a different error. This one is hostname doesn't match, and the other one is certificate expired... 🤔
It is still a certificate error. -- did you try the change in http.py? You will need to force quit pythonista and try again
Can you post your full traceback?
- Unique_Name
Going back to version 1.5.0 fixed it for me, seems like pythonista just doesn’t like 1.5.1 for some reason
You could also try (before importing discord -- need to restart pythonista then try this)
import ssl ssl.match_hostname = lambda cert, hostname: True
For some reason, ssl.match_hostname is being called with 'IP-ADDRESS' instead of an actual hostname. I'm not sure where that is coming from exactly, I guess the underlying _sslobj from OpenSSL. Pythonista uses an old OpenSSL version iirc, so this could explain it maybe. | https://forum.omz-software.com/topic/6734/pythonista-3-error-on-ipados-cannot-connect-to-host-discord-com-443-ssl-true/11 | CC-MAIN-2022-33 | refinedweb | 238 | 75.71 |
Not sure whether this is meant to be a feature or not, but I have a bunch of matplotlib scripts which output some graphs as pdf files. I have them set up to just save to the current folder, i.e. something like.
Hi there!
I have just send a tweet to john about this.It happens to me on Windows all the time, the files go to C:\Windows\System32.I don't now where are they going in OSX, but you can figure it out by using os.getcwd()
You can add a few lines on top of your scripts to fix this:
# Need to import os machinery
import os
# You can also do:
# from os import chdir
# from os.path import dirname
# Get path of current file
cwd = os.path.dirname(__file__)
# Change current working directory to that one
os.chdir(cwd)
It's a bit annoying but.. no other way right now.
Hope it helps!
Hi ratnushock, the easiest way I've found to fix this is to go to the Python build script (should be under Python in wherever the packages for ST are stored), and make sure it says:
{
"cmd": "python", "-u", "$file"],
"file_regex": "^ ]*File \"(...*?)\", line ([0-9]*)",
"working_dir": "$file_path",
"selector": "source.python"
}
i.e. add the "working_dir" property.
I guess I'm not sure why this isn't the standard behavior.
Kudos to you jesse!
Your solution is so simple and great that now I'm feeling stupid for not seeing it before
Many thanks! | https://forum.sublimetext.com/t/python-scripts-running-in-the-wrong-folder/1636/2 | CC-MAIN-2017-22 | refinedweb | 252 | 83.05 |
SCTP_RECVMSG(3) Linux Programmer's Manual SCTP_RECVMSG(3)
sctp_recvmsg - Receive a message from a SCTP socket.
#include <sys/types.h> #include <sys/socket.h> #include <netinet/sctp.h> int sctp_recvmsg(int sd, void * msg, size_t len, struct sockaddr * from, socklen_t * fromlen, struct sctp_sndrcvinfo * sinfo, int * msg_flags);
sctp_recvmsg is a wrapper library function that can be used to receive a message from a socket while using the advanced features of SCTP. sd is the socket descriptor on which the message pointed to by msg of length len is received. If from is not NULL, the source address of the message is filled in. The argument fromlen is a value-result parameter. initialized to the size of the buffer associated with from , and modified on return to indicate the actual size of the address stored. sinfo is a pointer to a sctp_sndrcvinfo structure to be filled upon receipt of the message. msg_flags is a pointer to a integer that is filled with any message flags like MSG_NOTIFICATION or MSG_EOR. The value of msg_flags pointer should be initialized to 0 to avoid unexpected behavior; msg_flags is also used as an input flags argument to recvmsg function.
On success, sctp_recvmsg_RECV) | https://www.man7.org/linux/man-pages/man3/sctp_recvmsg.3.html | CC-MAIN-2022-33 | refinedweb | 196 | 55.84 |
Contents
- Abstract
- Background
- Proposal
- Key Benefits
- A path of introduction into Python
- New Ways of Using Classes
- Rejected Design Options
- History
- References
Abstract, a hook to initialize descriptors, and a way to keep the order in which attributes are defined.
Those hooks should at first be defined in a metaclass in the standard library, with the option that this metaclass eventually becomes the default type metaclass.
The new mechanism should be easier to understand and use than implementing a custom metaclass, and thus should provide a gentler introduction to the full power initalization of descriptors and keeping the order in which class attributes were defined.
Those three use cases can easily be performed by just one metaclass. If this metaclass is put into the standard library, and all libraries that wish to customize class creation use this very metaclass, no combination of metaclasses is necessary anymore.
The three use cases are achieved as follows:
- The metaclass contains an __init_subclass__ hook that initializes all subclasses of a given class,
- the metaclass calls an __init_descriptor__ hook for all descriptors defined in the class, and
- an __attribute_order__ tuple is left in the class in order to inspect the order in which attributes were defined.
For ease of use, a base class SubclassInit is defined, which uses said metaclass and contains an empty stub for the hook described for use case 1.
As an example, the first use case looks as follows:
class SpamBase(SubclassInit): # this is implicitly a @classmethod def __init_subclass__(cls, **kwargs): # This is invoked after a subclass is created, but before # explicit decorators are called. # The usual super() mechanisms are used to correctly support # multiple inheritance. # **kwargs are the keyword arguments to the subclasses' # class creation statement super().__init_subclass__(cls, **kwargs) class Spam(SpamBase): pass # the new hook is called on Spam
The base class SubclassInit __init_descriptor__ initializer (this is an insanely simplified, yet working example):
import weakref class WeakAttribute: def __get__(self, instance, owner): return instance.__dict__[self.name] def __set__(self, instance, value): instance.__dict__[self.name] = weakref.ref(value) # this is the new initializer: def __init_descriptor__(self, owner, name): self.name = name
The third part of the proposal is to leave a tuple called __attribute_order__ in the class that contains the order in which the attributes were defined. This is a very common usecase, many libraries use an OrderedDict to store this order. This is a very simple way to achieve the same goal..
A path of introduction into Python
Most of the benefits of this PEP can already be implemented using a simple metaclass. For the __init_subclass__ hook this works all the way down to Python 2.7, while the attribute order needs Python 3.0 to work. Such a class has been uploaded to PyPI [3].
The only drawback of such a metaclass are the mentioned problems with metaclasses and multiple inheritance. Two classes using such a metaclass can only be combined, if they use exactly the same such metaclass. This fact calls for the inclusion of such a class into the standard library, let's call it SubclassMeta, with the base class using it called SubclassInit. Once all users use this standard library metaclass, classes from different packages can easily be combined.
But still such classes cannot be easily combined with other classes using other metaclasses. Authors of metaclasses should bear that in mind and inherit from the standard metaclass if it seems useful for users of the metaclass to add more functionality. Ultimately, if the need for combining with other metaclasses is strong enough, the proposed functionality may be introduced into Python's type.
Those arguments strongly hint to the following procedure to include the proposed functionality into Python:
- The metaclass implementing this proposal is put onto PyPI, so that it can be used and scrutinized.
- Once the code is properly mature, it can be added to the Python standard library. There should be a new module called metaclass which collects tools for metaclass authors, as well as a documentation of the best practices of how to write metaclasses.
- If the need of combining this metaclass with other metaclasses is strong enough, it may be included into Python itself.
While the metaclass is still in the standard library and not in the language, it may still clash with other metaclasses. The most prominent metaclass in use is probably ABCMeta. It is also a particularly good example for the need of combining metaclasses. For users who want to define a ABC with subclass initialization, we should support a ABCSubclassInit class, or let ABCMeta inherit from this PEP's metaclass.
Extensions written in C or C++ also often define their own metaclass. It would be very useful if those could also inherit from the metaclass defined here, but this is probably not possible.
New Ways of Using Classes
This proposal has many usecases like the following. In the examples, we still inherit from the SubclassInit base class. This would become unnecessary once this PEP is included in Python directly.
Subclass registration
Especially when writing a plugin system, one likes to register new subclasses of a plugin baseclass. This can be done as follows:
class PluginBase(SubclassInit): subclasses = [] def __init_subclass__(cls, **kwargs): super().__init_subclass__(**kwargs) cls.subclasses.append(cls) __get__(self, instance, owner): return instance.__dict__[self.key] def __set__(self, instance, value): instance.__dict__[self.key] = value def __init_descriptor__(self, owner, name): self.key = name class Int(Trait): def __set__(self, instance, value): # some boundary check code here super().__set__(instance, value)
Rejected Design Options
Calling the hook on the class itself
Adding an __autodecorate__ hook that would be called on the class itself was the proposed idea of PEP 422. Most examples work the same way or even better if the hook is called on the subclass. In general, it is much easier to explicitly call the hook on the class in which it is defined (to opt-in to such a behavior) than to opt-out,.
Other variants of calling the hook).
Defining arbitrary namespaces
PEP 422 defined a generic way to add arbitrary namespaces for class definitions. This approach is much more flexible than just leaving the definition order in a tuple. The __prepare__ method in a metaclass supports exactly this behavior. But given that effectively the only use cases that could be found out in the wild were the OrderedDict way of determining the attribute order, it seemed reasonable to only support this special case.
The metaclass described in this PEP has been designed to be very simple such that it could be reasonably made the default metaclass. This was especially important when designing the attribute order functionality: This was a highly demanded feature and has been enabled through the __prepare__ method of metaclasses. This method can be abused in very weird ways, making it hard to correctly maintain this feature in CPython. This is why it has been proposed to deprecated this feature, and instead use OrderedDict as the standard namespace, supporting the most important feature while dropping most of the complexity. But this would have meant that OrderedDict becomes a language builtin like dict and set, and not just a standard library class. The choice of the __attribute_order__ tuple is a much simpler solution to the problem..
History
This used to be a competing proposal to PEP 422 by Nick Coughlan and Daniel Urban. It shares both most of the PEP text and proposed code, but has major differences in how to achieve its goals. In the meantime, PEP 422 has been withdrawn favouring this approach. | http://legacy.python.org/dev/peps/pep-0487/ | CC-MAIN-2017-39 | refinedweb | 1,256 | 60.75 |
Modern Apps: A Look at the Hub Project and Control in Windows Store Apps
Rachel Appel | March 2014
When it comes to development on Windows with Visual Studio, the built-in project templates are a good place to start. If you’re new to Windows Store (or any Microsoft stack) development, the templates can serve as a learning tool. In this article, I’ll look at the Hub control, but in context of the Hub project template. I’ll examine all the important things to know about the Hub project and control for both HTML and XAML apps.
The Hub project in particular enables you to deliver a large volume of content to the user while using a modern UX. This is because you can break the app’s content into parts called HubSections, so the app doesn’t overwhelm the user visually with large amounts of data. While this is just my opinion, I find the Hub project to be the most aesthetically interesting of all the Windows Store app templates. The content layout is in distinct sections that are easy to digest. You can parade a favorite piece of content in the front-and-center “hero” section of the hub, while the remaining content items are easily accessible in groups.
Of course, it’s not mandatory that you use the templates—you can start from a blank project. However, for many developers, it’s far easier to customize and expand upon the templates, as the code is set up for you.
The Hub Project Template
Visual Studio 2013 contains Hub project templates for both HTML and XAML. Upon creating a new HTML project using the template, you’ll see some familiar project folders such as the css, images and js folders. In addition to the customary folders are the Hub-specific folders: pages\hub, pages\item and pages\section. As you might expect, each of these folders contains files that correspond to their purpose in the app. In the project root is the file for the package manifest as well as default.html, the app’s starting point, which loads default.js and performs functions related to the app and lifecycle management. Default.html contains references to not just the \js\default.js file but also \js\data.js, which contains sample data, and \js\navigator.js, which performs navigation. For a refresher on navigation, see my August 2013 column, “Navigation Essentials in Windows Store Apps,” at msdn.microsoft.com/magazine/dn342878. In short, the Hub project template, like other templates, is a quick way to publish visually interesting modern apps.
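Under the hood, the sample data in \js\data.js is nothing magical: it's an array of items tagged with group information, plus a few lookup helpers that hub.html, section.html and item.html call into. The framework-free sketch below models that shape; the group keys and item titles here are invented for illustration and aren't the template's exact strings:

```javascript
// Flat list of sample items, each tagged with the group it belongs to --
// a simplified stand-in for the array the template's data.js builds.
var sampleItems = [
  { group: "section3", title: "Item 1", text: "Description of item 1" },
  { group: "section3", title: "Item 2", text: "Description of item 2" },
  { group: "section4", title: "Item 3", text: "Description of item 3" }
];

// Build a group index so a page can ask for "all items in a group,"
// which is what the section page needs to render one HubSection's data.
function buildGroups(items) {
  var groups = {};
  items.forEach(function (item) {
    if (!groups[item.group]) {
      groups[item.group] = { key: item.group, items: [] };
    }
    groups[item.group].items.push(item);
  });
  return groups;
}

// Lookup helper in the spirit of data.js's group/item resolution functions.
function getItemsInGroup(groups, key) {
  return groups[key] ? groups[key].items : [];
}

var groups = buildGroups(sampleItems);
console.log(getItemsInGroup(groups, "section3").length); // 2
```

Binding a ListView to `getItemsInGroup(groups, someKey)` is then all a section page needs to show its slice of the data.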
Of course, the centerpiece of the Hub project is the Hub control. While default.html is the project starting point in an app built with the Windows Library for JavaScript (WinJS), once it loads, it immediately navigates to the hub.html file. Hub.html contains the Hub control and lives in the \pages\hub directory. The Hub control is what you use to create a modern layout that’s more than just boring groups of squares. Instead, the Hub control, coupled with asynchronous data fetching, enables you to present large amounts of data—or data that has distinct groups—in an organized yet fashionable manner.
The Hub template implements the hub, or hierarchical, navigational pattern. This means that from the starting point (that is, hub page), the user can navigate to a page containing all the members of a particular section, or the user can navigate to an individual item from the hub page. The template also contains navigation to an item page from a section page. While the template contains navigation code only between section 3 and its groups and items (see Figure 1), you can use the ListView or Repeater controls to do the same type of navigation for other sections if it makes sense for your app. Figure 1 illustrates what the default Hub app with sample data looks like at run time.
Figure 1 The Hub Control at Run Time for Both HTML and XAML Apps
With the reimagining of Windows came the notion of putting content front and center, and, as you can see, this template does just that.
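The hierarchical pattern just described boils down to a back stack: navigate forward from hub to section to item, then pop your way back up. The plain-JavaScript sketch below models that state machine. In the real template this work is done by navigator.js and the WinJS.Navigation API (with page controls and animations layered on top), so treat this as an illustration of the pattern rather than the template's code; only the page paths match the template's pages\hub, pages\section and pages\item folders:

```javascript
// Minimal model of hierarchical (hub) navigation: a back stack plus a
// current location -- the essence of what WinJS.Navigation tracks.
function createNavigator(startPage) {
  var backStack = [];
  var current = { page: startPage, state: null };

  return {
    navigate: function (page, state) {
      backStack.push(current);
      current = { page: page, state: state || null };
    },
    back: function () {
      if (backStack.length === 0) { return false; }
      current = backStack.pop();
      return true;
    },
    location: function () { return current; }
  };
}

// Hub -> section -> item, then back up the hierarchy.
var nav = createNavigator("/pages/hub/hub.html");
nav.navigate("/pages/section/section.html", { groupKey: "section3" });
nav.navigate("/pages/item/item.html", { itemIndex: 0 });
console.log(nav.location().page); // "/pages/item/item.html"
nav.back();
console.log(nav.location().state.groupKey); // "section3"
```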
The XAML Hub template project works the same conceptually as does the HTML template, relying on the hub as the main entry point, being navigable to sections and details. Of course, the implementation is different, and you can see this by examining the folder structure, which reveals the following directories: Assets, Common, DataModel and Strings. These folders contain what you might expect: assets such as graphics, data in the DataModel folder and localized strings in the Strings folder. In the root of the project lies the following working files needed so the app can run:
- App.xaml/.cs: This is the XAML equivalent of default.html. It has a tiny bit of code that assists in navigation and general tasks.
- HubPage.xaml/.cs: This is the crowning jewel of the app, containing the Hub control.
- ItemPage.xaml/.cs: This contains the individual items you can navigate to from the hub or section pages.
- SectionPage.xaml/.cs: This shows all individual data members that belong to a particular group.
- Package.appmanifest: This contains the app settings.
The XAML Hub project template's HubPage.xaml file reveals that the Hub control sits inside a Grid control that serves as the root container for both the page and the Hub.
In the DataModel folder is a file named SampleData.json containing sample data. Also in the folder is a SampleDataSource.cs file that transforms the JSON data into usable classes for C# or Visual Basic .NET consumption and XAML data binding. You can replace this with your own data, much like the data.js file in WinJS apps.
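Conceptually, SampleDataSource.cs just deserializes SampleData.json into group and item objects that the XAML binds to. The same transformation is sketched here in JavaScript against a simplified, assumed shape of that file (the real file carries more fields, such as subtitles, image paths and descriptions, and the real class does this asynchronously):

```javascript
// A trimmed-down stand-in for SampleData.json; the Groups -> Items
// nesting is the part that matters.
var sampleJson = JSON.stringify({
  Groups: [
    {
      UniqueId: "Group-1",
      Title: "Group Title: 1",
      Items: [
        { UniqueId: "Group-1-Item-1", Title: "Item Title: 1" },
        { UniqueId: "Group-1-Item-2", Title: "Item Title: 2" }
      ]
    }
  ]
});

// Deserialize and project into plain view-model objects -- the moral
// equivalent of SampleDataSource building group and item instances
// for XAML data binding.
function loadGroups(json) {
  return JSON.parse(json).Groups.map(function (g) {
    return {
      uniqueId: g.UniqueId,
      title: g.Title,
      items: g.Items.map(function (i) {
        return { uniqueId: i.UniqueId, title: i.Title };
      })
    };
  });
}

var parsedGroups = loadGroups(sampleJson);
console.log(parsedGroups[0].items.length); // 2
```

Swapping in your own data means replacing the JSON and keeping the projection, which is why the template isolates it in one file.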
The Common folder contains several files that perform a variety of tasks such as navigation and other generally app-related tasks for working with data in view models. In addition, the Common folder contains the SuspensionManager.cs file, which performs process lifecycle tasks. Finally, the Strings folder contains localized strings for publishing in different locales.
The Hub Control
Both HTML and XAML project templates use the Hub control. In HTML apps, the Hub control works just like any other WinJS control. Use the data-win-control attribute of an HTML element, usually a <div>, to define it as a Hub control, as this code shows:
<div class="hub" data-win-control="WinJS.UI.Hub"></div>
This means the WinJS.UI.Hub object is the brains behind the Hub control. The Hub control acts as a container for the HubSection controls, which define sections or groups of data. HubSections can contain any valid HTML tags, such as <div> or <img>, or a WinJS control, such as the ListView control. By default, the hub.html file’s Hub control encloses five sections, one named hero and four more designated by their class attributes (such as section1, section2 and so on). In the HubSections, the <div> and <img> tags are the most common child elements, but any valid HTML or WinJS controls will work to display data in a different layout. Changing the layout is a great way to personalize your app, but don’t forget to adhere to the Windows UX guidelines at bit.ly/1gBDHaW. Figure 2 shows a complete sample of the necessary HTML (you’ll see its CSS later) to create a Hub control with five sections. Inspecting the code in Figure 2 shows section 3 is the navigable section, while the rest are not navigable.
Figure 2 The HTML for a Hub Control with Five Sections (elided attribute values shown as …)

<div class="hub" data-win-control="WinJS.UI.Hub">
  <div class="hero" data-win-control="WinJS.UI.HubSection"></div>
  <div class="section1" data-win-control="WinJS.UI.HubSection" data-win-res="…">
    <img src="/images/gray.png" width="420" height="280" />
    <div class="subtext win-type-x-large" data-win-res="…"></div>
    <div class="win-type-medium" data-win-res="…"></div>
    <div class="win-type-small">
      <span data-win-res="…"></span>
      <span data-win-res="…"></span>
      <span data-win-res="…"></span>
    </div>
  </div>
  <div class="section2" data-win-control="WinJS.UI.HubSection" data-win-res="…">
    <div class="item-title win-type-medium" data-win-res="…"></div>
    <div class="article-header win-type-x-large" data-win-res="…"></div>
    <div class="win-type-xx-small" data-win-res="…"></div>
    <div class="win-type-small">…</div>
  </div>
  <div class="section3" data-win-control="WinJS.UI.HubSection" data-win-res="…">
    <div class="itemTemplate" data-win-control="WinJS.Binding.Template">
      <img src="#" data-win-bind="…" />
      <div class="win-type-medium" data-win-bind="…"></div>
      <div class="win-type-small" data-win-bind="…"></div>
    </div>
    <div class="itemslist win-selectionstylefilled" data-win-control="WinJS.UI.ListView" data-win-options="…">
    </div>
  </div>
  <div class="section4" data-win-control="WinJS.UI.HubSection" data-win-res="…">
    <div class="top-image-row">
      <img src="/images/gray.png" />
    </div>
    <div class="sub-image-row">
      <img src="/images/gray.png" />
      <img src="/images/gray.png" />
      <img src="/images/gray.png" />
    </div>
    <div class="win-type-medium" data-win-res="…"></div>
    <div class="win-type-small">
      <span data-win-res="…"></span>
      <span data-win-res="…"></span>
    </div>
  </div>
</div>
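What turns markup like Figure 2 into live controls is WinJS's declarative processing: WinJS.UI.processAll walks the DOM, reads each data-win-control attribute, resolves the dotted type name and instantiates it on the element. Here's a drastically simplified, DOM-free model of that resolution step (the real processAll also parses data-win-options, supports asynchronous activation and much more):

```javascript
// A tiny control registry standing in for the WinJS.UI namespace lookup.
var registry = {
  "WinJS.UI.Hub": function (element) { return { type: "Hub", element: element }; },
  "WinJS.UI.HubSection": function (element) { return { type: "HubSection", element: element }; }
};

// For each element, resolve its data-win-control name against the
// registry and attach the instantiated control as element.winControl --
// the essential contract of WinJS's declarative processing.
function processAll(elements) {
  elements.forEach(function (el) {
    var ctor = registry[el.attributes["data-win-control"]];
    if (ctor) {
      el.winControl = ctor(el);
    }
  });
  return elements;
}

var page = [
  { attributes: { "data-win-control": "WinJS.UI.Hub" } },
  { attributes: { "data-win-control": "WinJS.UI.HubSection" } },
  { attributes: {} } // plain markup is left alone
];
processAll(page);
console.log(page[0].winControl.type); // "Hub"
```

This is also why, after processAll runs, app code can reach any declared control through its element's winControl property.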
In XAML, the Hub control uses a <Hub> element that contains <Hub.Header> and <HubSection> elements. In turn, the child headings and sections contain Grid and other XAML controls, such as the StackPanel, as well as text blocks. Figure 3 shows the XAML required to create the Hub control used in the Visual Studio templates.
<Hub SectionHeaderClick="Hub_SectionHeaderClick"> <Hub.Header> <!-- Back button and page title --> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="80"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Button x: <TextBlock x: </Grid> </Hub.Header> <HubSection Width="780" Margin="0,0,80,0"> <HubSection.Background> <ImageBrush ImageSource="Assets/MediumGray.png" Stretch="UniformToFill" /> </HubSection.Background> </HubSection> <HubSection Width="500" x: <DataTemplate> <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <Image Source="Assets/MediumGray.png" Stretch="Fill" Width="420" Height="280"/> <TextBlock Style="{StaticResource SubheaderTextBlockStyle}" Grid. <TextBlock Style="{StaticResource TitleTextBlockStyle}" Grid. <TextBlock Style="{StaticResource BodyTextBlockStyle}" Grid. </Grid> </DataTemplate> </HubSection> <HubSection Width="520" x: <DataTemplate> <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <TextBlock Style="{StaticResource TitleTextBlockStyle}" Margin="0,0,0,10" x: <TextBlock Style="{StaticResource SubheaderTextBlockStyle}" Grid. <TextBlock Style="{StaticResource SubtitleTextBlockStyle}" Grid. <TextBlock Style="{StaticResource BodyTextBlockStyle}" Grid. 
</Grid> </DataTemplate> </HubSection> > <HubSection x: <DataTemplate> <!-- width of 400 --> <StackPanel Orientation="Vertical"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="130"/> <ColumnDefinition Width="5"/> <ColumnDefinition Width="130"/> <ColumnDefinition Width="5"/> <ColumnDefinition Width="130"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="270"/> <RowDefinition Height="95"/> <RowDefinition Height="Auto" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <Image Source="Assets/MediumGray.png" Grid. <Image Source="Assets/MediumGray.png" Grid. <Image Source="Assets/MediumGray.png" Grid. <Image Source="Assets/MediumGray.png" Grid. <TextBlock Style="{StaticResource TitleTextBlockStyle}" Grid. <TextBlock Style="{StaticResource BodyTextBlockStyle}" Grid. </Grid> </StackPanel> </DataTemplate> </HubSection> </Hub>
As you can see, XAML syntax is a bit more verbose than HTML. That’s because you code layout and styles right in the XAML page (though XAML styles can be placed in a resource dictionary), while in HTML the layout and style rules are CSS (more on styling later).
Data Binding and the Hub Control
Arrays or JSON (which usually serializes to an array anyway) are the customary ways to work with data in WinJS, as well as in many other Web or client languages. This is no different with the Hub project. You can replace the data in \js\data.js with custom data broken into however many groups you plan to use. You’ll find two arrays as sample data in the data.js file, one for grouping and one for individual items that tie into a specific group. If you’re familiar with some of the other WinJS project templates, then you’ll notice this is the same sample data.
In the \pages\hub\hub.js file are the calls to the members of the Data namespace that obtain group and item data:
The section3Group and section3Items are global objects. Figure 2 shows the data-binding syntax for the ListView control. In hub.js, after the ready function, the code sets section3DataSource, a property of the Hub control:
The Hub control uses the preceding code to data bind to the ListView (Figure 2 shows the data-bound ListView code).
In XAML apps using C#, you have the same basic occurrences, as code from the HubPage.xaml.cs file indicates the following declaration for a view model of type ObservableDictionary, along with its corresponding property declaration (this is where you can return your own data):
Later in the file, code sets a page-level view model by calling GetGroupAsync, which, as its name implies, runs asynchronously:
Although the call obtains Group-4 data, you assign it to a view model named Section3Items to assign it to those items. Consider the hero section as Section 0, meaning the Section 3 items will align with the Group-4 data:
This is all you need in the codebehind. In XAML, notice the DataContext attribute binds to Section3Items.The other attributes aren’t necessary for data binding, but act as an aid for the design tools in Visual Studio or Blend, as designated by the “d” namespace:
While working with local sample data, you have many options for data access, including File IO, SQLite, Web Storage, IndexedDB, REST services and Windows Azure, to name a few. If you want to review what data options are available, see my March 2013 article, “Data Access and Storage Options in Windows Store Apps,” at msdn.microsoft.com/magazine/jj991982.
Styling the Hub Control
In Windows Store apps built with JavaScript, you can style the Hub control with CSS. The \hub\hub.css file contains all the default CSS related to the Hub control. Feel free to add your own styles to change the size of the elements or their layout. Figure 4 shows the complete CSS in hub.css. Notice there’s a .hubpage class selector that uses HTML5 semantic role attributes such as header[role=banner] and section[role=main] to designate the general styles for the hub. After that, the CSS in Figure 4 shows the “.hubpage .hub .hero” descendant selector, which creates the featured (hero) section of the Hub control. The hero fills roughly half of the left side of the viewable part of screen with a light gray background and, of course, it’s a great way to put a special piece of content where no user can miss it! You can fill it with lots of data, and graphic data or multimedia works quite nicely to show off here.
.hubpage header[role=banner] { position: relative; z-index: 2; } .hubpage section[role=main] { -ms-grid-row: 1; -ms-grid-row-span: 2; z-index: 1; } .hubpage .hub .win-hub-surface { height: 100%; } .hubpage .hub .hero { -ms-high-contrast-adjust: none; background-image: url(/images/gray.png); background-size: cover; margin-left: -80px; margin-right: 80px; padding: 0; width: 780px; } .hubpage .hub .hero:-ms-lang( ar, dv, fa, he, ku-Arab, pa-Arab, prs, ps, sd-Arab, syr, ug, ur, qps-plocm) { margin-left: 80px; margin-right: -80px; } .hubpage .hub .hero .win-hub-section-header { display: none; } .hubpage .hub .section1 { width: 420px; } .hubpage .hub .section1 .win-hub-section-content { overflow-y: hidden; } .hubpage .hub .section1 .subtext { margin-bottom: 7px; margin-top: 9px; } .hubpage .hub .section2 { width: 440px; } .hubpage .hub .section2 .win-hub-section-content { overflow-y: hidden; } .hubpage .hub .section2 .item-title { margin-top: 4px; margin-bottom: 10px; } .hubpage .hub .section2 .article-header { margin-bottom: 15px; } .hubpage .hub .section3 { } .hubpage .hub .section3 .itemslist { height: 100%; margin-left: -10px; margin-right: -10px; margin-top: -5px; } .hubpage .hub .section3 .win-container { margin-bottom: 36px; margin-left: 10px; margin-right: 10px; } .hubpage .hub .section3 .win-item { height: 229px; width: 310px; } .hubpage .hub .section3 .win-item img { height: 150px; margin-bottom: 10px; width: 310px; } .hubpage .hub .section4 { width: 400px; } .hubpage .hub .section4 .win-hub-section-content { overflow-y: hidden; } .hubpage .hub .section4 .top-image-row { height: 260px; margin-bottom: 10px; width: 400px; } .hubpage .hub .section4 .top-image-row img { height: 100%; width: 100%; } .hubpage .hub .section4 .sub-image-row { margin-bottom: 20px; display: -ms-flexbox; -ms-flex-flow: row nowrap; -ms-flex-pack: justify; } .hubpage .hub .section4 .sub-image-row img { height: 95px; width: 130px; }
As you can see, the CSS in Figure 4 shapes and styles the Hub control, and most of it deals with the layout and sizing of the HubSections. Elements and WinJS controls inside the HubSections apply the styles from ui-light.css or ui-dark.css, until you overwrite them with your own styles.
HTML apps rely on CSS for styling. XAML apps rely on XAML for styling. This means that XAML has several attributes you apply to tags to enforce styling definitions called resources. For example, the code that styles a TextBlock is the Style attribute and it references a built-in (static resource dictionary) style named SubheaderTextBlockStyle:
The layout of a page is also XAML, as all the Hubs, Grids and other elements contain inline coordinates for their on-screen position as well as size. You can see throughout Figure 3 there are margins, positioning, and row and column settings that position elements, all inline in the XAML. HTML is originally a Web technology, and conserving bandwidth by using CSS instead of HTML is a real benefit. Here in the land of XAML, it’s all client-side, so UI caching isn’t so much of an issue and styles can go inline. A nice upside of XAML is that you need to do very little to ensure a responsive design. Just be sure to set two <RowDefinition> elements to a height of “Auto” and “*”:
The rows will automatically respond to app view state changes, making the layout fluid while saving extra code. Figure 3 shows a few references to auto-height row definitions.
Samples Available
Once you’ve modified the Hub control, performed data retrieval and binding, and set styles, you’re good to go. Don’t forget to add modern touches such as tiles, search and other Windows integration to your app. The Hub project template is an easy way to build and publish apps quickly, whether in HTML or XAML. Using the hub navigational pattern with the Hub control enables you to build an effective and rich UX that adheres to modern UI principles. You can download Hub control samples covering many aspects of Windows app development at the following locations:
- HTML sample: bit.ly/1m0sWTE
- XAML sample: bit.ly/1eGsVAH expert for reviewing this article: Frank La Vigne (Microsoft)
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/en-us/magazine/dn605880.aspx | CC-MAIN-2019-39 | refinedweb | 3,032 | 56.15 |
Developing a Software Renderer Part 1Software Rendering ·
Today software rendering has mostly been replaced by GPUs but there are still places where it can be useful.
One example is software based occlusion culling (Software Occlusion Culling and Masked Occlusion Culling) where a software renderer is used to create a hierarchical z-buffer which is in turn used to test object visibility and prevent invisible stuff from being sent to the GPU.
I implemented my own compact software renderer/rasterizer with some nice features like pixel and vertex shaders in C++ and in this article I describe how I did it.
Setting Up The Environment
We need to create a window where we can render our stuff into. For this we will use SDL2. It works under Windows and Linux. My blog post Using SDL2 with CMake describes all the necessary steps required to setup SDL2 with CMake.
Once you have set this up you can remove the code which uses the
SDL_Renderer
and use the following to render directly to the screen without a renderer:
SDL_Surface *screen = SDL_GetWindowSurface(window); SDL_FillRect(screen, 0, 0); SDL_UpdateWindowSurface(window);
Drawing Pixels
Ultimately we want to rasterize a triangle. For this we need to be able to fill
every pixel inside the triangle with some color. We need a
putpixel function
to accomplish this.
An implementation of such a function can be found here or here.
You can then draw a lot of pixels like this:
for (int i = 0; i < 10000; i++) { int x = random() % 640; int y = random() % 480; int r = random() % 255; int g = random() % 255; int b = random() % 255; putpixel(screen, x, y, SDL_MapRGB(screen->format, r, g, b)); }
Rasterizing a Triangle
There are a lot of resources about triangle rasterization available online but I feel like a lot of that information is not very good.
Fortunately I was able to find two valuable resources about triangle rasterization that helped greatly.
The interested reader should read both of those resources to understand the inner workings of the rasterizer.
Simple Filling
I decided to implement a rasterizer based on edge equations. We can encapsulate
the edge related operation in an
EdgeEquation class.
struct EdgeEquation { float a; float b; float c; bool tie; EdgeEquation(const Vertex &v0, const Vertex &v1) { a = v0.y - v1.y; b = v1.x - v0.x; c = - (a * (v0.x + v1.x) + b * (v0.y + v1.y)) / 2; tie = a != 0 ? a > 0 : b > 0; } /// Evaluate the edge equation for the given point. float evaluate(float x, float y) { return a * x + b * y + c; } /// Test if the given point is inside the edge. bool test(float x, float y) { return test(evaluate(x, y)); } /// Test for a given evaluated value. bool test(float v) { return (v > 0 || v == 0 && tie); } };
We also want to interpolate colors across the triangle. Later we also want to
interpolate texture coordinates and in the general case arbitrary per vertex
attributes. For this we create a
ParameterEquation class.
struct ParameterEquation { float a; float b; float c; ParameterEquation( float p0, float p1, float p2, const EdgeEquation &e0, const EdgeEquation &e1, const EdgeEquation &e2, float area) { float factor = 1.0f / (2.0f * area); a = factor * (p0 * e0.a + p1 * e1.a + p2 * e2.a); b = factor * (p0 * e0.b + p1 * e1.b + p2 * e2.b); c = factor * (p0 * e0.c + p1 * e1.c + p2 * e2.c); } /// Evaluate the parameter equation for the given point. float evaluate(float x, float y) { return a * x + b * y + c; } };
Then we can go on and rasterize the triangle. We compute the bounding box of the triangle, restrict it to the scissor rectangle, cull backfacing triangles and then fill all the pixels inside the triangle while obeying the fill rule with the tie-breaker as described in the references.
void drawTriangle(const Vertex& v0, const Vertex &v1, const Vertex &v2) { // Compute triangle bounding box. int minX = std::min(std::min(v0.x, v1.x), v2.x); int maxX = std::max(std::max(v0.x, v1.x), v2.x); int minY = std::min(std::min(v0.y, v1.y), v2.y); int maxY = std::max(std::max(v0.y, v1.y), v2.y); // Clip to scissor rect. minX = std::max(minX, m_minX); maxX = std::min(maxX, m_maxX); minY = std::max(minY, m_minY); maxY = std::min(maxY, m_maxY); // Compute edge equations. EdgeEquation e0(v0, v1); EdgeEquation e1(v1, v2); EdgeEquation e2(v2, v0); float area = 0.5 * (e0.c + e1.c + e2.c); ParameterEquation r(v0.r, v1.r, v2.r, e0, e1, e2, area); ParameterEquation g(v0.g, v1.g, v2.g, e0, e1, e2, area); ParameterEquation b(v0.b, v1.b, v2.b, e0, e1, e2, area); // Check if triangle is backfacing. if (area < 0) return; // Add 0.5 to sample at pixel centers. for (float x = minX + 0.5f, xm = maxX + 0.5f; x <= xm; x += 1.0f) for (float y = minY + 0.5f, ym = maxY + 0.5f; y <= ym; y += 1.0f) { if (e0.test(x, y) && e1.test(x, y) && e2.test(x, y)) { int rint = r.evaluate(x, y) * 255; int gint = g.evaluate(x, y) * 255; int bint = b.evaluate(x, y) * 255; Uint32 color = SDL_MapRGB(m_surface->format, rint, gint, bint); putpixel(m_surface, x, y, color); } } }
Conclusion
The simple implementation works, but performance is not great. It can be improved by a block based approach that allows us to discard blocks outside the triangle faster and skip some test when the block is completely inside the triangle. This will be covered in the next part of this series.
Other improvements are the support of texture coordinates and other per vertex parameters, perspective correct parameter interpolation, multi-threaded rasterization and a pixel shader framework that allows us to configure the per pixel operations in a flexible manner to support texture mapping, alpha blending and other stuff. These are also topics that will be covered in the next posts.
Continue reading on Part 2. | https://trenki2.github.io/blog/2017/06/06/developing-a-software-renderer-part1/ | CC-MAIN-2020-40 | refinedweb | 988 | 59.19 |
Opened 5 years ago
Closed 3 years ago
Last modified 3 years ago
#15644 closed Bug (fixed)
django.core.files.base.File enhancement / fix
Description
Hi,
I think that django.core.files.base.File should be expanded to handle a wider range of file objects. In my specific case, StringIO/cStringIO and tempfile.SpooledTemporaryFile objects.
Here is a simple demonstration of where the File class breaks:
from tempfile import SpooledTemporaryFile from django.core.files import File f = SpooledTemporaryFile( max_size = 1024,# 1kb mode = 'w+b',# must be open in binary mode bufsize = -1, suffix = '', prefix = 'tmp', dir = None ) f.write("hello") print len(File(f))
Here is the result (on Windows using Python2.6):
Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\...\django\core\files\base.py", line 33, in __len__ return self.size File "C:\...\django\core\files\base.py", line 39, in _get_size elif os.path.exists(self.file.name): File "C:\...\lib\tempfile.py", line 559, in name return self._file.name AttributeError: 'cStringIO.StringO' object has no attribute 'name'
It should be noted that not only does the current implementation fail, but it breaks in the wrong code block because it doesn't verify that the name attribute is available.
I propose that the file objects seek and tell method be used as an additional fallback before throwing the attribute error as follows:
def _get_size(self): if not hasattr(self, '_size'): if hasattr(self.file, 'size'): self._size = self.file.size elif hasattr(self.file, 'name') and os.path.exists(self.file.name): self._size = os.path.getsize(self.file.name) elif hasattr(self.file, 'tell') and hasattr(self.file, 'seek'): pos = self.file.tell() self.file.seek(0,os.SEEK_END) self._size = self.file.tell() self.file.seek(pos) else: raise AttributeError("Unable to determine the file's size.") return self._size
My proposed patch fixes the problems mentioned above.
Attachments (3)
Change History (16)
Changed 5 years ago by nickname123
comment:1 Changed 5 years ago by nickname123
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Version changed from 1.2 to SVN
comment:2 Changed 5 years ago by lukeplant
- Type set to New feature
comment:3 Changed 5 years ago by lukeplant
- Severity set to Normal
comment:4 Changed 5 years ago by julien
- Component changed from Uncategorized to File uploads/storage
- Needs documentation set
- Needs tests set
- Triage Stage changed from Unreviewed to Accepted
comment.
Changed 4 years ago by claudep
Patch with tests
comment:13 Changed 4 years ago by claudep
- Needs documentation unset
- Needs tests unset
- Type changed from New feature to Bug
I've reviewed the File documentation (), but didn't find any complement to add related to this fix.
Changed Type to bug because of the error raised by _get_size when the underlying file object has no name (see also #16946 which is a duplicate).
comment:14 Changed 4 years ago by Michael Palumbo <michael.palumbo87@…>
That would be nice to make the File object more file-like object so that we could use the File Storage API with any file-like objects (e.g. with what urllib2.urlopen returns)
Because as for now, this does not work for the main reason said previously (elif os.path.exists(self.file.name):
import urllib2 from django.core.files.base import File from django.core.files.storage import FileSystemStorage url = '' fs = FileSystemStorage(location='./here/') f = File(urllib2.urlopen(urllib2.Request(url))) fs.save("downloaded_file.html", f)
Is it planned to be changed ? Maybe for django 1.4 ? How can we help ?
Thanks.
Changed 4 years ago by Michael Palumbo <michael.palumbo87@…>
comment:15 Changed 4 years ago by Michael Palumbo <michael.palumbo87@…>
- Cc michael.palumbo87@… added
I added a new patch because the chunks method of the File object did not work with urllib2.urlopen().
Indeed, this method was using the size which is unknown in this case.
I completed claudep's patch.
comment:16 Changed 4 years ago by anonymous
- Triage Stage changed from Accepted to Ready for checkin
I think this is good to go.
comment:17 Changed 3 years ago by claudep
- Resolution set to fixed
- Status changed from new to closed
patch | https://code.djangoproject.com/ticket/15644 | CC-MAIN-2015-40 | refinedweb | 703 | 58.99 |
Details
- Type:
Bug
- Status: Closed
- Priority:
Major
- Resolution: Won't Fix
- Affects Version/s: 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6
- Fix Version/s: None
- Component/s: groovy-jdk
-
- Environment:windows xp
Description
Hi,
If I put the "+" in the new line when I try to concatenate String/GString I got groovy.lang.MissingMethodException
def test = "Hello"
+"World"
produce the following exception:
groovy.lang.MissingMethodException: No signature of method: java.lang.String.positive() is applicable for argument types: () values: []
Possible solutions: notify(), size(), size(), tokenize(), tokenize()
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:55)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unaryPlus(ScriptBytecodeAdapter.java:764)
at com.cwh.aladdinslegacy.actions.ReplayActionBeanTest.groovyTest(ReplayActionBeanTest.groovy:37):182)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Process finished with exit code -1
I saw the similar problem by googling it. So I thought it is already been fixed, but apparently it is not fixed the version 1.8.0 up
Can you take a look into it since it's blocking the release of our product.
Thank you very much
Activity
- All
- Work Log
- History
- Activity
- Transitions
As per Paul's comment I close the issue as "Won't Fix". If you did mean something else, than the mere syntax issue, then feel free to reopen the issue or leave an comment
Just as a final comment, if you really did need that syntax you could roll your own support with some tricky metaprogramming or AST manipulation though the earlier mentioned supported options will be much easier.
Hi guys, I looked around and I saw your reasoning for not adding "+" at the beginning of the new line
"The reason why we don't support that is because it is not clear to the compiler what it means. With having to have a semicolon as line ending, the parser can know the expression does not end till the semi is reached and therefore put everything in the expression, even if there a re new lines. Actually new lines in an expression don't have all that much a meaning to the parser in Java. In Groovy a new line can terminate an expression and on top of that most expressions can be used where you would normally use a statement. +x is valid on its own, therefore in Groovy we cannot know if putting a +x on the next line belongs to the line before or not. We decided for having the style with the operator on the line before as a guide to ensure there will be no problem and that works. You can also use \ at then end to help the parser. But since there is no general working solution without this helping there won't be ever support for this unless a grammar change is made, that makes this case no longer having multiple meanings."
I would agree that it's hard for you guys to do it ... However, you guys have made so much effort in making groovy a friendly language to java, and this "+" is such a basic syntax in java, I wonder if you guys could let me know if you guys thinking of putting the support for the syntax in the near future like less than 3 months
It's not just a question of whether it is hard or not. Firstly, Groovy possibly has better ways of supporting what you are trying to do. Why not use multi-line Strings? Secondly, it was a decision made a while ago and changing it now could potentially break existing scripts/code. People can use the positive and negative methods in DSLs, e.g.
use(MovieRating) { name = 'The Spy Who Loved Me' audience = '18+' +"action packed" -"a little long" }
Would currently call: setName, setAudience, positive, negative.
After such a change it would call just the first two plus some string concatenations.
That syntax isn't supported. You can either join the two lines or use a "\" at the end of the first line, or at least have the "+" on the same line, or use a multi-line String. Lookahead is done for dots, so ".plus('World')" on the second line would be ok. | https://issues.apache.org/jira/browse/GROOVY-5393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-48 | refinedweb | 733 | 53.41 |
Information in this article comes from:
ghc -threaded ...Assumed in this article. Without
-threaded, you lose concurrency during FFI calls.
foreign import …defaults to
foreign import … safe.
foreign import … safeallows the C call to call Haskell, is concurrent if
-threaded, and incurs a bit more cost.
foreign import … unsafeis the opposite.
+RTS -N 2means that all your thousands of Haskell threads are cramped into 2 OS threads. OS threads running safe C calls do not have capabilities assigned.
Without much effort from the Haskell programmer, multiple Haskell
threads calling C together already works: they don't block
each other, and they don't block unrelated Haskell threads. The Haskell
programmer only needs to add
-threaded and delete
unsafe.
Here is how GHC does it. So an OS thread with a capability is happily churning along Haskell threads. Suddenly one unbound Haskell thread safe-calls C. (The story of the bound case is in the next section.) This Haskell thread is suspended, this OS thread loses its capability and runs the C code, and some other OS thread gains the capability and picks up the other Haskell threads. Everyone is happy.
An unsafe C call does not involve a transfer of capability. Therefore many other Haskell threads, including garbage collection threads, are put on hold as collateral damage.
When the C call finishes, eventually the original OS thread re-gains a capability to resume the caller Haskell thread (and picks up other Haskell threads).
Some of the above are probably technical details we don't have to worry about. For example, we don't mind which OS thread is chosen to run C, and we don't mind which OS thread is chosen to resume the caller Haskell thread. The point we care about is that one OS thread runs C and another OS thread runs Haskell.
The following example spawns 2 Haskell threads to make 2 slow C calls; meanwhile the main thread still has something to say. All of them have their say at the scheduled times. We also hear that two OS threads run the two C calls.
To compile on Linux:
ghc -threaded main.hs slow.c
main.hs:
import Control.Concurrent import Control.Exception(finally) import Foreign.C mforkIO action = do done <- newEmptyMVar forkIO (action `finally` putMVar done ()) return (takeMVar done) main = do w1 <- mforkIO (thread_code 3) threadDelay 100000 w2 <- mforkIO (thread_code 2) threadDelay 1000000 putStrLn "haskell thread here" w2 w1 thread_code :: CUInt -> IO () thread_code n = do ht <- myThreadId putStrLn (show ht ++ " starts") slow n putStrLn (show ht ++ " ends") foreign import ccall safe slow :: CUInt -> IO ()
slow.c (Linux only):
#define _GNU_SOURCE #include <sys/types.h> #include <sys/syscall.h> #include <unistd.h> #include <stdio.h> unsigned get_ostid(void) { return syscall(SYS_gettid); } /* yes, I gamble that pid_t is essentially a word. */ void slow(unsigned n) { printf("slow sleeps in OS thread %u for %u seconds\n", get_ostid(), n); sleep(n); }
Result:
ThreadId 4 starts slow sleeps in OS thread 4323 for 3 seconds ThreadId 5 starts slow sleeps in OS thread 4324 for 2 seconds [1 second later] haskell thread here [1 second later] ThreadId 5 ends [1 second later] ThreadId 4 ends
C calls happening in unpredictably chosen OS threads defeat some C libraries; such a library requires you to choose one OS thread and make all your library C calls there. This is the sole cause of all of the visible complications of GHC threading.
The complication is nicely contained by bound Haskell threads. When a bound Haskell thread is created, it is associated with an OS thread permanently. Every C calls from this bound Haskell thread are run in that associated OS thread. So, if you make all library C calls from this bound Haskell thread, they all go to the same OS thread. The library is happy.
(Nominally, Haskell code in this thread may still be run in whatever OS threads bearing capabilities, but unlikely in current GHC. So beware that a bound Haskell thread costs more for switching context.)
Three ways to obtain a bound Haskell thread:
Why does forkOS always create a fresh OS thread for the association? For concurrency: two forkOS'ed Haskell threads calling C at the same time necessitates two OS threads.
The following example first shows that an unbound Haskell thread can make C calls in different OS threads at different times. (I force it by exploiting a technical detail in the previous section.) Then it tests that a forkOS bound Haskell thread makes two C calls in the same OS thread (immune to my exploit); meanwhile another forkOS bound Haskell thread butts in.
To compile on Linux:
ghc -threaded main.hs slow.c
main.hs:
import Control.Concurrent import Control.Exception(finally) import Foreign.C mforkIO action = do done <- newEmptyMVar forkIO (action `finally` putMVar done ()) return (takeMVar done) mforkOS action = do done <- newEmptyMVar forkOS (action `finally` putMVar done ()) return (takeMVar done) main = do wait_ibm <- mforkIO ibm wait_ibm wait_ibm <- mforkOS ibm threadDelay 500000 forkOS (do x <- get_ostid putStrLn ("another forkOS calls C in " ++ show x) ) wait_ibm -- ibm = I've Been Moved! ibm = do b <- isCurrentThreadBound let msg = "ibm " ++ (if b then "" else "un") ++ "bound calls C in " x <- get_ostid putStrLn (msg ++ show x) wait_sleep <- mforkIO (sleep 2 >> return ()) threadDelay 1000000 x <- get_ostid putStrLn (msg ++ show x) wait_sleep foreign import ccall safe get_ostid :: IO CUInt foreign import ccall safe sleep :: CUInt -> IO CUInt
slow.c (Linux only):
#define _GNU_SOURCE #include <sys/syscall.h> #include <unistd.h> unsigned get_ostid(void) { return syscall(SYS_gettid); } /* yes, I gamble that pid_t is essentially a word. */
Result:
ibm unbound calls C in 5193 ibm unbound calls C in 5194 ibm bound calls C in 5196 another forkOS calls C in 5197 ibm bound calls C in 5196
C calling Haskell works without extra effort from the Haskell programmer (or the C programmer). Firstly, multiple C OS threads calling Haskell is concurrent. Secondly, if the called Haskell calls C, i.e., C → Haskell → C, the 2nd C code is run in the same OS thread as the 1st C code. So C libraries with thread-locality requirements are happy.
The most popular use case of C → Haskell → C is with GUI libraries and OpenGL: the 1st C is the event loop, the Haskell is an event handler you supply, and the 2nd C is your event handler giving commands to the library. The library requires the event loop and the commands to be in the same OS thread.
Here is how GHC does it. When C calls Haskell, the GHC RTS creates a fresh bound Haskell thread associated with the calling OS thread, to run the called Haskell. From what we now know about bound threads, everything just works when the called Haskell calls C.
This mechanism is also how multiple bound Haskell threads end up sharing the same OS thread. For example if we have this call chain:
C → Haskell → C → Haskell → C → Haskell → C → Haskell
then we have 4 bound Haskell threads associated with the same OS thread. This is harmless because at least 3 of them are suspended; only the last one is active and may make yet another C call. In fact, we also understand that it is important that all 4 C calls and any further ones are in the same OS thread, stacked upon each other.
The following example has a C function and a Haskell function recursively calling each other, showing that every call into Haskell is another bound Haskell thread, and they all use the same OS thread for C calls.
To compile on Linux:
ghc -threaded main.hs slow.c
main.hs:
import Control.Concurrent
import Foreign.C
import Foreign.Ptr

foreign import ccall safe get_ostid :: IO CUInt

hthreadinfo prefix = do
  t <- myThreadId
  putStr (prefix ++ ": haskell " ++ show t)
  b <- isCurrentThreadBound
  if b
    then do n <- get_ostid
            putStrLn (" bound to os thread " ++ show n)
    else putStrLn " unbound"

main = do
  -- recall that main is also run in a bound thread
  haskell 5

foreign import ccall safe cfunc :: FunPtr (IO ()) -> IO ()
foreign import ccall "wrapper" ptr_for_cfunc :: IO () -> IO (FunPtr (IO ()))

haskell 0 = return ()
haskell n = do
  hthreadinfo ("T minus " ++ show n)
  ptr <- ptr_for_cfunc (haskell (n-1))
  cfunc ptr
  freeHaskellFunPtr ptr
  ht <- myThreadId
  putStrLn (show ht ++ " done")
slow.c (Linux only):
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>
#include <HsFFI.h>

unsigned get_ostid(void) {
    return syscall(SYS_gettid);
    /* yes, I gamble that pid_t is essentially a word. */
}

void cfunc(HsFunPtr callback) {
    callback();
}
Result:
T minus 5: haskell ThreadId 3 bound to os thread 3942
T minus 4: haskell ThreadId 4 bound to os thread 3942
T minus 3: haskell ThreadId 5 bound to os thread 3942
T minus 2: haskell ThreadId 6 bound to os thread 3942
T minus 1: haskell ThreadId 7 bound to os thread 3942
ThreadId 7 done
ThreadId 6 done
ThreadId 5 done
ThreadId 4 done
ThreadId 3 done
I have shown using Haskell as the main program. You can use C as the main program too; in fact, you can create OS threads on the C side, and from them call Haskell. All of the above still work.
I have more Haskell Notes and Examples | http://www.vex.net/~trebla/haskell/ghc-conc-ffi.xhtml | CC-MAIN-2014-52 | refinedweb | 1,523 | 71.14 |
jaa has asked for the wisdom of the Perl Monks concerning the following question:
my ( %hash ) = split /\s*[=,]\s*/, "apples=green,pears=brown,oranges=orange";
Thanks in advance!
Regards,
Jeff
my $string = join(",", map({"$_=$hash{$_}"} keys %hash));
Will that work?
They say that time changes things, but you actually have to change them yourself.
Andy Warhol
All the above solutions work, just remember that when you join your data back, you won't necessarily obtain a string identical to the original one. If you want your hash to remember the order in which you put elements into it, read perldoc -q order.
$_ .= "$k=$v," while ($k,$v) = each %hash;
my ( %hash ) = split /\s*[=,]\s*/, "apples=green,pears=brown,oranges=orange";
my ($string) = join(",", (map { my $a = "$_=$hash{$_}" } keys(%hash)) );
print "[$string]\n";
Out of curiosity: Why are you bothering to assign to a variable inside your map statement?
I have not tested it, but I am pretty sure that if you run with warnings and strict you will be warned that $a is used only once. Beyond that, you really should not use $a or $b, as they are special variables used for sort.
It also made me read the JOIN doc more carefully - here is an inefficient, but fun OOlternative :).
use strict;
my %hash = (
apple => 'pie',
banana => 'custard',
cherry => 'ripe',
);
# use map to alternate $delim between = and ,
my $delim = '';
my $str = join $delim, map { $delim = ($delim eq ',' ? '=' : ','); $_ } %hash;
print "result: [$str]\n";
# use class stringification overload to alternate
my $str2 = join( flipper->new(), %hash );
print "result: [$str2]\n";
#----------------------------------
package flipper;
use overload '""' => \&toString;
sub new {
my $class = shift;
my $self = { counter => 0 };
bless $self, $class;
}
sub toString {
my $self = shift;
return ( $self->{counter}++ % 2 ? '=' : ',' );
}
Does anyone have a more elegant use of the fact that JOIN stringifies the EXPR for each element? perhaps an anon-sub closure??
Control.Concurrent.Broadcast
Description
A Broadcast variable is a mechanism for communication between threads. Multiple reader threads can wait until a broadcaster thread writes a signal. The readers block until the signal is received. When the broadcaster sends the signal all readers are woken.
All functions are exception safe. Throwing asynchronous exceptions will not compromise the internal state of a Broadcast variable.
This module is designed to be imported qualified. We suggest importing it like:
import           Control.Concurrent.Broadcast ( Broadcast )
import qualified Control.Concurrent.Broadcast as Broadcast ( ... )
Synopsis
Documentation
A broadcast variable. It can be thought of as a box, which may be empty or full.
Instances
newWritten :: α -> IO (Broadcast α)
read :: Broadcast α -> IO α
tryRead :: Broadcast α -> IO (Maybe α)
readTimeout :: Broadcast α -> Integer -> IO (Maybe α)

Read the value of a Broadcast variable if it is available within a given amount of time.

Like read, but with a timeout. A return value of Nothing indicates a timeout occurred.

The timeout is specified in microseconds. A timeout of 0 μs will cause the function to return Nothing without blocking in case the Broadcast was empty. Negative timeouts are treated the same as a timeout of 0 μs.
write :: Broadcast α -> α -> IO ()
- C++ customers mostly develop native code applications. As part of this, you would like to see renewed emphasis on tools for writing native code.
- While firmly rooted in native code, many of you want to extend your applications to take advantage of managed functionality (especially WPF, WCF and workflow).
- You are using C++/CLI to bridge between native and managed code.
The more that you focus your efforts on native code, the better.
Managed code is great, but 100% of my managed code development is done in C#. When transitioning to managed code, customers ask me to rewrite the code in C#. They do not want native / managed interop.
Each release of VC++ has improved compliance with the standard. However, there are many issues still remaining. I think these should be a high priority. The older standard should be fully implemented before implementing the currently proposed update. For starters, export is still missing. (Like it or not, it is part of the standard and it is not being removed.)
I can not speak for everyone, but in my experience, any work done to further interop will be useless to me. The more done to improve native code the better. Make it the most compliant compiler on the market.
IDE Pain Points
Each version of VS
– loads more slowly
– compiles more slowly
– the IDE is more unstable
– library searches produce more irrelevant hits
– introduces more on screen clutter
Dynamic help isn’t and probably never was.
Closing a file opens the branch in solution explorer. Never found the collapse all.
Would be nice if the current IDE could drive the older compiler chains.
Debugger sometimes loses the selected stack frame wrt watch variables.
Remote debugging setup stupidly manual.
CRT deprecation system is broken for legacy and cross platform code.
Platform Pain Points
Concurrency/Throughput
– some interesting technologies and APIs (e.g. IOCP), but…
Portability
– the great debacle for server type apps
– older Windows and Unix platforms have severe API differences
– least common denominator or multiple implementation?
– mixed results with common open source libraries
Standards
– kudos for improvements, keep it up
– IMO, forget export, it was DOA
Top 3 requests:
– C++0x support – would really like to see this get into a future VC++
– IDE performance (browsing, opening, editing, …) – can save a lot of time and electricity :-p
– Standard compliance – VC++ is too tolerant of bad (old) C++ code
Larry,
We hear your concerns, especially regarding code bloat in VS and the impact it has on performance (such as load time and compile time). We’ve worked hard in VS2008 to hold the line on performance and to improve it in many areas, and we’ll continue to do so in the future.
In the mean time, if you have any specific performance concerns (especially for the VS2008 Beta) you’re welcome to e-mail us: DevPerf@Microsoft.com.
Best Regards,
David Berg
Developer Division Performance Engineering Team
David, my post ended up more negative than I had intended. I do like the VS editor and debugger very much. But I spend quite a bit of time with it and a few things just end up driving you crazy.
I tried beta 1 only briefly but did run into an issue. VS will refuse to (directly) open solution files if the previously open solution file’s directory is removed. I haven’t had a chance to install beta 2 yet. Is there a reason you need to register to submit bug reports?
Larry,
Thanks for the input, I’ll forward it to the appropriate team. Registration for Bug Reports is primarilly so we can follow up with the individual submitting the bug if we need more information or need help in verifying a fix.
Regards,
Dave
Larry,
Can you e-mail me details on how to reproduce the open solution issue you mentioned: David.Berg@Microsoft.com?
Thanks,
Dave
I was waiting for Orcas for one little C++ thing: a valid TR1 implementation.
It seems that I will have to wait for the release of C++0x in order to get all these nifty classes (other solution: DIY or buy a separate implementation… Well, DIY is quite fun (especially since compiler support hurts a bit), and there is not much available solutions out there).
Other than that, the C++ compiler is roughly similar to VC++ 2005 (it still accepts the same bogus code, especially wrt two-phase lookup), so is there a specific reason to update?
Thanks everyone for the comments. We definitely hear you with regard to improved C++ conformance (both compiler and library) as well as IDE performance and scalability. There are things that we are actively working to address for Orcas+1.
Bill Dunlap
Visual C++ Development Team
I really would like to see more convinience libraries like the .NET classes over in C++. There are many similar looking things but they drive you crazy if you try to use them in the same way like in C#.
The ATL CPath class is very cool but CAtlFile is a nightmare to use.
CAtlFile OpenFile(const CString &file)
…
CAtlFile openedFile = OpenFile("C:\\foo.txt");
If I try to use it as simply as this, I get compiler warnings that I am using a non-standard compiler feature (conversion of a value to a reference).
If the open fails, do I now check the file handle for NULL or for INVALID_HANDLE_VALUE?
After fumbling around I went back to good old HANDLE values, which I can much more easily check for correctness.
A new library with a consistent error handling approach would be very cool to make silly coding errors where you check for the wrong return code occur less often. Yes exceptions can help but any new library should work as seamless as possible with the Windows APIs for interop.
Yours,
Alois Kraus
I’d like to put in a vote for continuing to address interoperability smoothly. I am working on a very large legacy C++ project, so native code is critical. But as we build more data interoperability the XML and webby friendliness of the CLR libraries are too much to resist. And large pieces of the new interface ought to be built using managed code, but will clearly need to call back and forth with the unmanaged components — including unmanaged interfaces in libraries like Qt. In short, managed code should represent a practical way to augment existing C++ code without having to flip a switch and recompile the whole thing.
I really like VS 2005. I develop mostly C++ native apps. The performance of C++ Intellisense is the number one pain for me. I have 100,000 lines of code spread out over 10 projects, and normally keep them all in the same solution. I use boost and stl quite a bit. I have had to turn off Intellisense (via renaming feacp.dll) because my CPU goes nuts every time I change a header file. I have a dual core machine, not sure if that makes it worse or not.
In constrast to many of the folks who have posted comments, managed/native interop is EXTREMELY IMPORTANT to my group and to others who are developing high-performance critical applications. My application is medical imaging, and our front-end is .NET but Managed C++ for performance reasons. (We heavily utilize OpenMP and there is no way we can get the same performance using C# – believe us, we’ve tried).
Forms support in VS C++ 2005 basically sucks. Half the time the designer can’t load the form and how come it takes 10 minutes for a form to load while in C# it takes a couple of seconds? Sometimes we’ve had to resort to manually editing the header file to move buttons around!
Also, the designer generates atrocious looking header files. Why not borrow from C# 2.0 and use "partial class" definitions.
More OpenMP/multi-core enhancements would be appreciated, fix the forms support, I personally don’t care about export but do care about C++ TR01. My group is also very interested in STL/CLR.
As an ISV working in Visual Studio extensibility we work with every version of the Visual Studio family from VC6 and eVC4 through to VS2008. You wouldn’t believe some of the howlers we’ve seen in the IDE interfaces – the Visual C++ automation interfaces (VCProjectEngine) in particular.
We still develop mostly in VS2003. Why? Simply because developing add-ins in VS2005 is a royal pain – the environment is sluggish (especially when the IDE is loaded under a debugger, which we of course do regularly) frequently locks files it shouldn’t (especially Satellite DLLs) and can’t even successfully toggle the load settings of add-ins loaded through HKLM (this bug, incidentally was reported to MS in VS2005 Beta 2 and closed as "won’t fix").
As such we are stuck using earlier versions of the Windows SDK and with a real problem on dev machines running Vista.
We’d love to believe that VS2008 will be the platform that will allow us to move to the latest Visual C++ libraries, but the fact is that unless VS2008 addresses these issues *and* makes the runtime library easier to install (hint: provide a way to circumvent those SxS manifest requirements where appropriate) it just won’t happen.
+1 for the vote about allowing VS to open and work with previous versions of solution/project files too. We maintain dual VS2003 and VS2005 (the latter primarily for occasional work on Vista) solution and project files, and keeping the settings in sync is a royal pain. How about providing a tool for automated conversion in both directions?
Oh and if you are serious about really getting VC++ up to speed please do something about getting the toolset in sync with other Visual Studio supported languages. Tools like Sandcastle need to be able to handle native code too.
I’d love to see some tighter integration between VS debugger and virtualization technologies. Actually, that applies not only to VC++ but the whole VS in general. With the vastly improved performance of Virtual PC, I tend to do development & testing on a virtual machine more and more. The way I debug my applications in virtual machine is using Remote Debugger (or WinDbg). Well, it works, though I am absolutely positive that you guys can achieve some more seamless experience.
I presume improvements in this direction will be welcome even more when employing virtualization for development needs will become mainstream (if it hasn’t become such already).
We have a 1M lines C++ code base CAD application suite and very active ongoing UI intensive development. My main issue is developer productivity – I want my team to be able to use the state of the art in building complex UI using all the new .net, XML, etc tools, but I need to do it in C++ rather than in C#. We also have large MFC legacy (over 500 dialogs) so transitioning to a friendlier platform for forms work is needed. In summary, our need is the developer friendliness of C# with the performance of C++.
I found a problem in resource processing (from VS2003 and after). I have Russian Windows and Italian resources. Each time I save an Italian resource file, all accented symbols are lost. The only workaround is to run Windows under the Italian locale.
Can you resolve such the bug?
1) Please fix the broken CRT in VS 8.0 – you can’t just build an app then copy the executables to another machine without getting that dreadful "this app is not configured properly". Its a *binary* app – it doesn’t need configuring. For this reason we cannot and will not port our projects from VS 6.0 to VS 8.0.
2) Same comment for the MFC libraries
3) Please can we have more than 1 watch window in VS8? VS 6 gives me 4 – so much more usable.
4) If the app is just a native app DO NOT ask me if I want to do managed debugging – just straight into the debugger as fast as possible. Some bugs I never get to see because by the time I’ve got through the GUI interventions asking me which debugging I want, oh dear the app is no longer running, thus I can’t see the bug. How useless is that?
5) Please re-instate the MFC class wizard.
6) If you have (as we do) 11 million lines (across 39 products) of C++ targetting MFC you will appreciate that we are not going to port this to C# and WPF. Its just not going to happen. Therefore we want and need first class support for C++ and MFC in Visual Studio (without that #1 issue above) before we can use Visual Studio 8.0.
You need to treat C++ equally to C# if you want us on board for the ride (and that includes the GUI design tools such as class wizard).
All that said, thank you for fixing in VS 8.0 the key bindings so that now F7 does actually do a build etc. Also nice that the project settings are always available (as they are implicitly selected) and that I don’t have to select the project before I edit the settings (like in VS 7.0 – so broken!)
Have now moved a large C++ project to VS2005 from VC6. Occasionally have to go back to VC6 to maintain an old version. Its like a breath of fresh air, like seeing an old friend again, makes you realize how slow VS2005 really is (or least for native C++) and no really nice extras such as re-factoring to compensate!
Major areas that seem to have gone down hill
Resource dialog editing – interface is terrible compared to VC6
Compile/load time
Help – nearly unusable in VS2005 both speed wise and feature wise compared to VC6
What would be nice to see:
Seamless integration with managed framework.
The Documentation facilities that are in C#
Re-factoring
Better/faster dialog resource editing
Pop Context when browsing through definitions (or maybe I can’t find it in VS2005!)
Overall speed up
Help – more like VC6 – The ‘Locate’ button was heaven!
Please make your tools case-sensitive. VC++ 2005 converts file names to lowercase during compilation, debugging, and source code browsing ("Find all references" etc). Sometimes it even *renames* the files. Typical scenario: you typed something in the file MySource.cpp opened by debugger and pressed "save" – oops, now it is mysource.cpp. This makes development difficult. There are case-sensitive file systems and development tools (preprocessors, compilers, source control systems, log analyzers, etc.) that expect the file name to be unchanged.
My recommendation: let your tools show and use file names in the same form as in Windows Explorer.
It would be nice for VSTS to present a list of files updated when we do a "Get Latest". Keep improving the interop as we’d like to figure out ways to use WCF and WPF in our large native C++ application.
We’ve been waiting a long time for Class Designer for C++. We’d like to see static code analysis become more configurable, perhaps allowing us to add our own coding standard variable naming conventions. We would really like to see the VSTS Unit Test framework directly support native C++ rather than having to jump through hoops.
We appreciate all of your hard work. Keep the improvements coming – Visual Studio is the best IDE in the industry.
I agree with Stephen Kellett
And
I am very very concern by the interest from Msoft for C#, for various reasons, not all technical.
VC++ is becoming "poor parent", and C++ Looks like going to be in futur what Macro assembler is today; The language of choice for speed, not more.
But C++ is the language of choice (aside perl, etc ..) in the Unix world.
A dangerous attractor (dangerous?) to switch to a non-Windows OS, as free community or collaborative code is done with gcc, make, for the BSD or Linux runtime.
A good path would be to help VC++ being a *real* bridge to the Unix world. Take make, compile, run!
CRT surely require a revamp for that, Third party ? I won’t take that chance
Being Posix etc … is *not* the solution. Msoft have a long legacy of defacto standardization, same happens with C++ under unix which establish it’s set of defacto std.
I would love a more open env with a pre-compiler allowing perl script or PHP code to be converted to C++ and compiled. Speed *is* back as an issue with today's requirements on servers.
At the same time, asking an engineer to switch to ASP technology is an irreversible path (too much to learn every minute) so they reluctantly take that path. (ASP faster?)
So
New guys use .Net VB, C#, Java
Old guys design considering need for Unix compatibility.
Too bad: VC++ is the best IDE I know (so far), but its future will follow C++'s future or be put aside.
We don’t need an incremental VC++ version++ !!
We need the swiss army knife with the same level of innovation as VC++ 2.0 gave, years ago.
Take your chance guys!!
Yes there will be flaws on first versions.
Yes you will have to consider being compatible with a specific version first.
But being static is NOT a sign of life.
I'll look at 2008 and maybe change my mind, but I'm not sure, given the clumsiness and over-emphasis put on new features.
PS: Don’t know how you work at Msoft for P&L but VC++ is way too expensive. (‘express’ version is a patch)
Losing ground with small developers versus choosing big, rich-enough companies is not OK during a mutation time, like the current Web mutation era.
Not to mention endorsing dangerous switch to Silverlight, WPx etc .. versus legacy Flash
I like VC++ so move it please
Will GUI designers still use fixed coordinates instead of sizers (a la wxWindows, er… wxWidgets) or anchors (a la Delphi and C#)? Anything is better than manually resizing things.
Visual C++ really looks like a second class citizen when it comes to GUI tools and well-designed library support.
Bill Dunlap wrote:
"Thanks everyone for the comments. We definitely hear you with regard to improved C++ conformance (both compiler and library) as well as IDE performance and scalability. There are things that we are actively working to address for Orcas+1."
What does that mean exactly? Are we talking about TR1 conformance or 0x conformance? I certainly hope it's the latter. I do of course understand that you might not have time to implement all of it, but there are significant parts (all of TR1 and more) that have not been changed since they were made part of the draft, and probably won't change either.
I’m currently looking into a library licence from Dinkumware, but the cost combined with the licence for VS is quite high (considering that most of it I already got with VS)
I’ll reiterate what I said to Microsoft Canada when they asked me for my feedback on the VS development toolset.
1) I believe the development environment should be rock solid stable – like the OS. A debugger crash doesn’t give me confidence in the product I’m using. VS2002+ has this issue and it hasn’t been resolved. How many more crash reports do I have to send before VS2002, VS2003 and VS2005 will be fixed?
2) Which leads to the next issue – service packs. My end-users expect product hotfixes and updates because it tells them that my company stands by what we produce. What has the VS team been telling me for the last several years? Speaking as your customer, it is not confidence boosting to get a service pack and the issues fixed aren’t a pain point – see point 1.
3) MFC is important to many companies. Please continue to support, enhance and keep it alive. I’m working on a 2M LOC product – it will never be to ported to C#. There is no business case that could ever be created to justify rewriting 15 years of code.
4) Somewhere between VS6 and VS2002 the help system got seriously broken. It is impossible to find anything. Google has become the preferred help system.
5) A major pain point I have with MS dev tools since Microsoft C6 is that every release changes project files. As far as I can determine the only compatibility difference between a VC2002, 2003 and 2005 vcproj file is the file version number. Do you know how tiresome it is to change 100+ project files when you know all you have to do is change the file version number?
6) Oh, and more pain points. Can you guys do something about SxS? It doesn’t work. Very simple – it does not work. Period. Has anyone in the VS team gone through the fun of figuring out why a COM+ app won’t run because some SxS module is missing? Or contact a vendor to get an updated merge module so you can distribute your product?
Thanks for the opportunity to speak. I’m looking forward to playing with VS2008.
We use VC++ entirely for legacy application work. Our new systems are C#.net for most everything which calls via a RPC a few legacy computation engines written years ago in C++. Our main issue is that C++ foundered with half-adopted measures such as STL and Boost to address know shortcomings instead of a much needed POSIX v2 standard class library. Ideally, we would like to see a native set of file, directory, socket, rpc, memory classes from .NET 2.x framework make it into a C++ POSIX v2 ISO standard. This would greatly help us for both Unix side and Windows side C++ legacy code.
You should have titled this: "The future of Visual C++" or something. Futures and C++ mean something very different:
One of the biggest reasons to upgrade to Visual C++ 8 was improved conformance. I have told the rest of my team that I would be very surprised if Visual C++’s C++0x / TR1 support was lacking.
Please don’t make me out to be a liar!
The main thing that bugs me currently is that the STL seems much slower than necessary. I’ve seen tests showing STLPort’s vector class to be a full 4 times faster than the MS one. (and that’s when disabling bounds checking, checked iterators and all the other stuff)
Speaking of which, I think that is a mistake to enable by default as well. Here’s why:
– One of the major design philosophies for C++ is "you don’t pay for what you don’t use". So why enable bounds-checking on operator[] by default? We don’t expect it, it isn’t there in other implementations. And when we want it, we can enable it easily enough.
– Along the same lines is "the principle of least surprise". We expect [] to be fast, and .at() to do bounds checking.
– And finally, I think it might not actually reduce the number of bugs at all. Imagine an intermediate C++ programmer who isn’t aware that you decided to do bounds checking and all that extra stuff. He just uses std::vector because that’s what people say you should use. So he profiles a bit, and finds out the vector is *much* slower than an array. Guess what he does then? That’s right, rolls his own. So now you have people using home-made, buggy, insecure array wrappers instead of std::vector which is at least bug-free. If you want secure code, I think giving people the *option* of enabling bounds checking would be much better than stealthily enabling it by default.
I think the whole "Secure SCL" is a mistake, at least by default.
Ok, another suggestion: Please please please improve support for property sheets. They’re awesome. Why don’t the standard project types use them? I hate how when I create a Win32 project, it overrides the output/intermediate paths, instead of just inheriting a value from a property sheet. So if I want to change it, I can’t just add a property sheet with my values, I have to *manually* clear out the overridden values in project settings. Can’t all the default project settings be specified in a property sheet?
And of course, the project wizard API itself… Yuck. Give me a nice clean C# interface like the cool people use, not an ancient, mess of javascript and html.
As people mentioned above, the whole manifest thing is silly too. I (think I) understand the reasons for it, but it is a major pain in the ass that when I build an executable, I can’t just deploy it on other computers. There’s got to be a better solution.
And finally, though this isn’t C++ specific, but Visual Studio in general, it’s slooooooow. Amazingly slow. And single-threaded… Does the entire program *really* need to freeze completely while waiting for Help to load? While waiting for intellisense to update? While doing *any* of the vast number of actions that takes 30+ seconds?
Jeff,
Can you send me some details about what scenarios you’re seeing where SxS doesn’t work? Are these design time issues or run time issues? What specific components? I’d like to make sure the right people are following up on this.
Thanks,
Dave Berg
David.Berg@Microsoft.com.
I am right now more C# than C++, but I have found a lot of good things on the C# side that can jump to the C++ side without being managed code. In libraries like MFC and ATL there are a lot of data structures (for instance lists and dictionaries) that could be included in basic C++, allowing handling of information like in C#, but available to any application, not exclusively MFC; sometimes I need this in a console application.
Win32 native windows programming is complex, and the visual tools for windows are not good; the language and tools are not object oriented, nor even event oriented, so it would be great to improve on this side.
Jeff's comment about SxS reminded me: why on earth isn't the Microsoft Visual C++ Redistributable Package sent out through Windows Update? I've developed dozens of small tools that should only require a quick copy and run, no fancy installations or such, but when I try to use them on a computer they don't run because the runtime files are not installed. And on some computers the user might not have permissions to install them either. The installer is less than 3MB; I can see no reason why it should not be installed on every computer running Windows. If the ability to run applications developed on other machines isn't a feature, I don't know what is!
– allow circumvention of SxS
– add refactoring capabilities
– add unit test framework that directly supports native C++
"’friction-free’ interop between native & managed code" should be secondary at this point.
One other request: Please provide a way to tell the IDE to NOT, in any way, shape, or form, pretend to know about a source code control system. When integrated source control works, it’s great, but there are times that it just doesn’t work, and the best I can do so far is get down to one dialog offering to unbind from source control (which sets all files to writeable if you choose ‘yes’). Don’t offer to "unbind", "work offline", try to overwrite writeable files, and so on; just treat read-only files as read only, and let us manage source control, if we so choose.
That said, there are some good improvements in VS2005 C++, such as the call browser and code definition windows, greatly improved error/warning display, and the Data Tips and Visualizers. Keep your team focused on what C++ developers need, and you can make VS the leader of the pack again.
First off, Soma please do more channel 9 interviews ( you rock ! ).
For me, I am evaluating (and would really like to use) managed-to-unmanaged code in my applications (communication-oriented programs). I am also hoping that Visual C++ will take advantage of the C++0x standard, which is set to release soon (’09?).
I think both managed and unmanaged code are needed in today’s world. I just hope the VC++ team keeps up the good work and continues to build the best tools available for us grunts in the trenches!
Thanks again for all of your comments! We are reading each one and will factor your views into our product planning for the next release. A couple of high level bullets:
1) IDE performance/scalability – we hear you loud and clear. This is something we absolutely are looking to address in Orcas+1. We know that many of you are working with MLOC and we need to update the IDE to support this better.
2) IntelliSense isn’t very good – we are working on this one as well. As indicated previously, our goal is to provide a "C#-like" IntelliSense experience in Orcas+1. We’re working on a front-end parser re-architecture right now that will facilitate this (and a whole lot more).
3) MFC – we are working on a huge update to MFC that should knock your socks off. I can’t tell you too much right now, but this is closer than you might think <g>.
4) TR1 – we’re doing a lot of work in this space as well. I can’t give our exact plans right now, but we do understand the importance of this library. As above, we’d like to release this sooner rather than later.
5) C++0x – we are carefully tracking the latest C++ standard. Our commitment to standards compliance still stands and we will work to support once all aspects of C++0x are finally approved.
6) Deployment – this is not as easy as it should be. We’re going to be looking at this for Orcas+1 as well.
Keep the comments coming!
Bill Dunlap
Visual C++ Development Team
I would definitely love the feature of ‘total deoptimization’ 🙂 It should generate such code, that no hackers will ever be able to understand what’s going on inside :-)))
I think the major problems with VS2005 have been well addressed here, except for one: F1 help. This is such a joke in VS2005, and worked so much better in VS6, that it is a constant source of amusement in our development shop – sometimes someone will forget how bad and how slow F1 help is in VS2005 and hit the F1 key, and say "Damn! just hit F1!" and everyone laughs. Google is an order of magnitude faster and more accurate than F1 help in VS2005. Please, please, fix this.
MSDN help needs attention, both in DVD form and via msdn.microsoft.com. We’ve reverted to searching for API help in this order: 1) Google Usenet groups, 2) Google web search limited to microsoft.com, 3) local MSDN copy, 4) msdn.microsoft.com, 5) unrestricted Google web search. The main contention is that MSDN produces far too many false leads and far too many topics that have no sample code. We keep the last MSDN released before .NET as a second major source of documentation, since much useful content was removed instead of being updated once .NET was released.
You said you were going to "hold the line" in terms of performance. Does this mean you’ve given up on improving it because you recognize that it’s a lost cause?
For native code developers, the VS2005 IDE just plain sucks. It’s clear that native code developers have become an afterthought. How is the VC team going to address what is clearly the domain of the MS marketing nazis?
MSDN is completely worthless (to all developers, but especially the native code developers), and has been since the mid 90’s. It may as well not even be shipped with Visual Studio.
Yes, I’m bitter. I think .Net is NOT appropriate for desktop apps, and MS’s aim is to turn native code developers (that *got them where they are today*) into their mind-controlled robotic demon spawn of the CLR.
Okay, so maybe that’s a little over the top, but I think you understand how I feel…
Simply put, just take the new C++ compiler and make it work with VC++ 6.0. The new compiler is good, but the IDE of VC++ 6.0 was the best, most suitable, and most stable for C++. Just make the new one the same as VC++ 6.0; so simple. Class Wizard was way better than this property editor, and faster too.
Also, why do IDE add-ins stop working with each new version? I am the author of one of the popular add-ins, but I will not keep porting it to each new version. Shame on you for that. You should be able to make old add-ins work with the new IDE. I can see people on CodeProject actually implementing add-ins that work across multiple versions. Why can’t you?
The product is way too expensive for its quality. I use the latest versions at work; the company is silly enough to pay for that. But the tool is way overpriced and won’t make it to my home. I could afford it easily, but it’s just not stable or convenient enough for me to buy.
Stop reinventing the wheel where de-facto standards exist. E.g. XML comments are so primitive for documentation – why not support the doxygen format natively? Also, why not support CVS natively? The source control interface is so bad I don’t know where to start on that…
The new IDE has looked, from the beginning in 2002, like a reworked Visual InterDev from VS6. It even calls workspaces "solutions" – a dead giveaway. Well, Visual InterDev was one of the worst tools I ever worked with, and now it’s calling itself VS2003–2005. It’s primitive and looks like it was made by some VB developer on a weekend. I guess that makes VB developers quite happy.
Annoying message boxes must stop. I drag a project onto the IDE and it tells me I can’t and that I should open a solution instead!? Or while I am debugging I try to type code and it tells me to stop debugging first! It tells me what to do instead of giving me a button that actually does it. It’s totally counter-productive and makes me think that the VS development team doesn’t understand the first thing about GUI design and can’t make a functional product.
I don’t care much about IntelliSense; I use Visual Assist for that. If you want to see what to do in that area, then look at Visual Assist and other add-ins (ReSharper etc.) and just do that.
I am not sure why I even bother to type all that. Nothing will probably change because we seem to be living in some sort of IT dark age where "simple" and "easy" are keywords to cover for arrogance and ignorance of modern programmers.
I don’t even care about submitting feedback anymore. MS failed to listen to C++ devs between 2002 and 2005. MS even publicly solicited feedback on the IDE in February 2005[1], and failed to act on it in 2005, and again has failed to act on it in 2007. So it’s been 9 years since the last good IDE was made and we’ll have to wait until what, 2010 for Orcas+1? Then _maybe_ you’ll fix some things in the IDE? No thanks, I’ll only let you abuse me for so long. I’ll stick with VC6 where I can actually get work done.
[1]
Thanks, everyone, for the fantastic feedback. Your passion and enthusiasm for the product has certainly gotten our attention — in fact, this very thread was the major topic of hallway conversation here on the VC++ team today. 🙂
I’d like to take a moment to touch on some of the topics raised here…
>>You said you were going to "hold the line" in terms of performance. Does this mean you’ve given up on improving it because you recognize that it’s a lost cause?<<
I won’t put words into Dave’s mouth, but I can say that the efforts of Dave’s perf team have definitely paid off for us. Right now our VS2008 perf tests show us beating VS2005 in several of the key performance scenarios that are critical for VC++ developers. There’s still much more that we want to do here, but I see it as a major step forward that we’ve reversed the regression trend.
>>For native code developers, the VS2005 IDE just plain sucks. It’s clear that native code developers have become an afterthought. How is the VC team going to address what is clearly the domain of the MS marketing nazis?<<
Native code developers are absolutely the focus of the VC++ team. Bill and I discuss this in detail in the Channel9 video that Soma references above. Also, as Soma mentions, we already have a good chunk of our team working to create a fantastic IDE experience to ship in the next major VS release after VS2008 "Orcas."
>>MSDN help needs attention…<<
Absolutely agreed. I admit that I do not have my finger on the pulse of what the help team is doing to improve the F1 experience, but the volume of feedback I’ve heard here has convinced me to make it a priority.
>>I’ll stick with VC6 where I can actually get work done.<<
Indeed, VC6 is a common theme in this thread. While I actually believe VS2005 and VS2008 are superior products, I can also see why VC6 remains a touchstone for many: it’s fast, it’s simple, and it takes a totally C++-centric view of the world. This feedback is well taken, and we have been paying a lot of mind to the things VC6 did well as we design the Orcas + 1 IDE experience.
>>Jeff’s comment about SxS reminded me, why on earth isn’t the Microsoft Visual C++ Redistributable Package sent out through Windows Update?<<
That’s a good suggestion. We are actually looking right now at what the redist deployment story should be for Orcas + 1, and we know that the current SxS system has caused some headaches.
>>Will GUI designers still use fixed coordinates instead of sizers (a la wxWindows, er… wxWidgets) or anchors (a la Delphi and C#)?<<
We don’t have any specific plans here, but — as Bill mentioned — we are doing some major new work in MFC that we’ll announce shortly.
>>Other than that, the C++ compiler is roughly similar to VC++ 2005 (it still accepts the same bogus code, especially wrt double phase lookup), so is there a specific reason to update?<<
You can learn more about our VS2008 feature set on the VC++ team blog as well as some of the channel9 videos we did recently. Specific to the compiler, we’ve made some good improvements in build throughput on multicore machines and for managed/mixed code. Regarding compiler bugs, we’ve addressed them according to the bar we published back in June 06 at.
Thanks again,
Steve Teixeira
Group Program Manager, VC++
I have almost the exact same concerns as the first post from Larry. In my career I have been using VS for the last 10 years for mostly MFC-based applications. I believe I started out using VS 5 (possibly 4.x?) and now I am using .NET 2003. Since we have an academic license with maintenance, I have used all versions of VS (from VS 5 to VS 2005) to write and maintain the approximately 500K lines of mostly MFC code I have worked on in that time. Most of this time was spent in VS 6, which, although the compiler lacked features (templates), had by far the best IDE for building my MFC apps.

My biggest complaints with the .NET versions are that they all seem to have a buggy GUI and that the help system gets progressively worse. With .NET 2003 (which I use mostly now) it is so very rare that the help (F1 when a keyword is selected) actually finds something useful that I generally opt to do a Google web search instead, as that actually has a chance of finding something relevant. This is very annoying, as the help for VS 6 was much better than this, although it was annoying that I could not get rid of the FoxPro docs and still be able to see the SDK help.

Then there are the GUI bugs. Clicking on a class in the class view (in VS 2003) and then selecting "add a member function or variable" crashes VS 2003 probably 20% of the time, so I have learned to avoid that feature, which I used and liked a lot in VS 6; and this is not hardware related, as it has happened on several different machines. Also, the built-in tools for editing message maps and adding handlers are significantly harder to use (and more awkward) than they were in VS 6.
Now the improvements I would like to see after the bugs I have mentioned above (are fixed) are the following:
1) I would like to see a large improvement in the build system to at least make better use of multi core processors while building and possibly extending this to include some type of pooled build like gcc does with distcc. Also possibly implementing something like ccache as well.
2) Better tab support in the GUI (like wndtabs does).
3) Implement saved window layouts, where choosing a button on the toolbar restores a window layout, including the sizes and positions of the docked windows. Since I program and remotely debug at very different screen resolutions, I find myself updating the window layout in VS2003 50 times a day, and this is very annoying.
I liked the workspace pane (the slide-out with tabs for files, classes, and resources) better than all the slide-outs that keep popping out on me every time I move my mouse.
But I like the tabs in the editor better than VC6’s way of switching between and selecting the file being edited.
And what happened to the tip of the day, the splash screen, the great help output, and the Class Wizard? Those are the things I miss.
Thanks for adding BRIEF key mappings back into VS5, and please have a BRIEF aficionado actually use it. VS6 is as good a BRIEF implementation as there can be and VS5 is possibly the worst. I get the feeling that nearly no-one uses BRIEF, so this may only be my problem….
First, it sounds like y’all have a good idea of the direction for future improvements. In our environment, we have some managed code, in C#, a lot of native code, in C++, and a little bit of glue code in managed C++. This mix is unlikely to change significantly. (In case it’s of interest, one of the main reasons we use C# instead of C++/CLI for the managed code is to avoid the problem of too much mental overlap between managed and native code. Easier to mentally switch gears from C++ to C# mode.)
Second, I agree with all the comments about needing serious support for high-productivity usage of GUI stuff from C++. This seems to have been left rotting over the last couple releases.
Finally, a minor request: add an easy way to make different activity-based configurations the same — in other words, make it possible to transfer GUI rearrangements from coding mode to debugging mode. It’s much easier to customize the minor changes to make one mode more useful than it is to redo the same major toolbar etc. customizations multiple times.
Responding to Dave Berg and to the list:
My complaints about SxS are not really about whether the technology works but about the extra burden it places on developing software.
What did me in with SxS was when I encountered a problem with a debug COM+ component that wouldn’t load, and the only way I could get it to work was to create a setup program for it. I think the issue was that a dependent DLL needed one of the runtimes, but the error that was happening wasn’t useful.
The other scenario that makes SxS difficult to diagnose is with managed code. If your assembly is dependent on a native code DLL and you’re missing runtimes it is very, very difficult to figure out that you’re missing a library. The .NET runtime doesn’t provide too much assistance to help. Previously you placed the DLLs in the directory or looked in the system directory. How do you find out if you’re missing a DLL now?
SxS seems to me an overly complex way to deal with something that’s been pretty much resolved. We haven’t had problems with DLL versioning for a number of years; not since MS started forcing versioning as a requirement for getting software certified.
I like Erik’s suggestion that the runtimes being distributed via WUS would go a long way to dealing with the many scenarios where things don’t run and all that is missing is the runtime install wasn’t done.
But there are a couple other things that make installation rather challenging.
The first is having to chase down merge modules. Can’t this be worked out with vendors? Especially if they have some of their components distributed with the VS development system – i.e. the VS versions come with MMs, but the full-version product MMs are hard to get.
The other issue is the clunky way you have to search a system to figure out whether some library is installed. There seems to be no clear way to do this.
Thanks for the soap box.
It would greatly help us to have tools to convert COM-based applications into a single statically linked executable (i.e., remove all of the COM plumbing and COM calls), since a good portion of our legacy C++ systems were written by ex-VB6 developers overly used to plopping in VB6 COM components for everything, even when the application is the only user of the component and both are on the same machine. We’re incrementally improving the stability of our systems by making them statically linked and free of any RPC, COM, or CORBA calls. Our applications are hard hit by any OS patch, OS upgrade, or new version of our product, because it affects 100+ production machines. Multiple binaries (DLL, COM, or EXE), registry settings, and COM registration make it much harder to ensure all components get set up and installed correctly.
I agree with some of the previous statements – please work to keep VC++ the platform of choice for developing *native* code. Keep the dependencies to a minimum so that native applications can be deployed (e.g. no CRT or MFC runtime conflicts – re-think SxS) with minimal fuss.
A huge percentage of us develop the third-party native code applications that keep the Windows platform alive and vital. Please stop pretending that we should all switch to WinForms / WPF / WhateverNewThingComesToMind.
If I’m writing managed code, I’m using straight C# – and have very little interest in native / managed interop.
With regard to Intellisense being poor – who cares? Every C++ developer should own a copy of Whole Tomato Software’s Visual Assist (no I don’t work for them). Why focus Microsoft’s internal efforts on Intellisense issues when a third party app solves them adequately and inexpensively?
Please keep progressing MFC, bring back the class wizard, and improve resource editing for native apps. What’s there today in VS2005 is just pathetic.
Hello. Catching up with the last set of comments. Here are some high level thoughts:
1) MSDN/documentation – I agree that this is a problem. It’s not that there isn’t good content, it’s that separating the useful from the irrelevant is too difficult. While we don’t own this technology, we will definitely share your comments with that team.
2) IntelliSense revisited – I see a number of people telling us "don’t worry about IntelliSense…just use Visual Assist’. I agree that Visual Assist is a very fine tool. But the bottom line is that fast, accurate coding tools should be like air or water. There’s no reason you should need to turn to another tool for this level of support. In addition, don’t think of IntelliSense as the "end game". Think of it as the beginning. Once we have our new engine in place – and an object model on top of our parser – there could be a whole set of source analysis tools we (or third parties) could provide to help you better manage large-scale software projects.
3) Interop – we know that most "pure" .NET applications are being built in C# and VB. That’s one of the reasons we should be focusing on native – we’re the only player in that particular space. But interop is important because there are lots of people who want to extend their native apps to use .NET code. We need to ensure that this is an easy thing to do.
5) Do we listen to feedback – We honestly do. Certainly I can’t make it so everybody’s favorite feature goes into the product, but the high-level trends we hear do influence where the product is going. That’s why it’s important to share your perspective. And now that we have a focus (once again) on native code, we should be able to show bigger leaps in progress.
Thanks again for your comments!
Bill Dunlap
Visual C++ Development Team
Some other issues that are sort of C++ related…
COM+ components continue to be a pain to distribute commercially. Is it possible it will ever be possible to write and deploy a COM+ app in a standard setup program without some extra effort?
Oh, and COM+ stuff again. The attributed-programming thingie to generate IDL is a great idea. I haven’t used it in a while because it didn’t work as I expected. The scenario that caused me an issue was wanting to pass a COM object as a parameter, i.e. an ADO object as a function parameter. I had all sorts of problems trying to figure out how to include the typelib in the IDL so that it could build properly. What I had to do was save off the generated IDL, hand-edit it, and go back to the old-fashioned way, using MIDL myself. A workaround, but I wasn’t happy with the attributed COM programming thing.
This is a more general comment on the whole VS development IDE. It is evident that the development platform has migrated to scenarios where the developer is working and deploying software in-house.
This needs to be rebalanced to work for those of us who write and sell software for a living. The problem is huge in the .NET world. For example, ClickOnce deployment for commercial distribution is very difficult to do, and web applications and BizTalk applications are a problem too.
While it may not seem to be a C++ issue it is when you’re trying to distribute apps with .NET application UIs and native DLL and COM component back-ends.
I think MS should offer tools and technologies capable of distributing in many different scenarios.
My wish list
1. Stop removing features from the IDE (Connect ID 105507 for example). It makes us less productive when we "upgrade".
2. Stop breaking features in the IDE (Connect ID 139752 for example). It makes us less productive when we "upgrade".
3. Add multi-targeting support for MSVCRT/MFC just like for .NET since the project and solution files are incompatible across all IDE versions since 4.2. Without it, it is more difficult to "upgrade" to a newer version, since everyone on the team has to do it at once, but it can’t be done until the project has been built and tested on the new compiler.
4..
6. Complete and correct support for standard C++.
> But interop is important because there are lots of people
> that want to extend their native apps to use .NET code.
Let me be blunt: customers that I work with who want to move existing applications to .NET specifically ask me to rewrite the code in C# and specifically not to use interop. In other words, it is either all native or all managed, but nowhere in between.
The only use that I have for C++ is for native code. (Just as the only use for C# is managed code.) I’m sure that Microsoft has their own internal agenda and goals. But please do not waste too much time and effort on interop. I have absolutely no use for it. And looking at some of the comments above, a number of other people seem to be of similar opinion.
I would suggest roughly 90% be spent on improving standard compliance. While there has been much improvement lately, significant portions of ISO/IEC 14882:1998 and 2003 are still not implemented or need further attention. The other 10% could be spread out over other miscellaneous IDE features, proprietary libraries, interop etc.
I’m looking forward to Orcas+1.
Good luck.
I will be upgrading to Orcas specifically because I have heard that it has an integrated class diagram feature. This is important to me.
We have a large existing code base in C++. Now we would like to start using some of the new Windows features that are available in WinXP SP2+, but without using managed code or interop. I hope this will be possible in the future.
Bill, that’s the best news, and possibly the only mission statement you need for VS. If you did this, we’d all be very happy. Can you make VS as fast as VC6 (e.g. I start the debugger… and wait; I edit a resource and… wait) and use as little memory as VC6? I can open 4, 5, 6 VC6 IDEs with no problem whatsoever. I open 2 VS2005 IDEs and it’s welcome to the land of swap.
Interop needs to be more seamless: you should be able to write GUI apps in Jav… sorry, C#, and connect them to back-end native services easily. Similarly, using managed libraries should not need so much changed syntax for stuff the compiler should be able to figure out (I’m still annoyed at the two destructors you have to provide in C++/CLI, and I think the "managed pointer" is just laziness on the part of the C++/CLI team, not being able to add better support for a C++ that can be used as if it were C++ and not a bastardized extension language).
The help is pants. Bring back the technology used for VC4; that was the best system I remember using. And Visual InterDev is quite the worst IDE MS ever wrote; it’s such a shame VS2005 feels so much like it.
Anyway, I’m sure we’re all waiting impatiently for proper support while we’re thinking about just trying out the competing IDEs out there 🙂
I’d like to see proper compiler performance diagnostics. When writing template-heavy code, I often hit a kind of "inline" limit where simple static passthrough functions don’t get inlined any more, so I wind up with ASM functions that basically contain just a single call instruction.
Trying to figure out why the heck the compiler didn’t inline these is no fun today: you can set __forceinline on everything, see where it still can’t inline, and start guessing, but the warning messages are worse than bad (pointing into STL code where there is no STL usage anywhere).
Take a look at where the Intel compiler is going, giving you verbose information about which loop was unrolled etc., and at Sun Studio (see for example their compiler commentary). VC8 is far behind that in every respect, even though the optimizer is not that bad.
Second, give users more options. If I want super-expensive inlining where the compiler spends hours finding out which one-liners can be directly expanded, let me do it; the same goes for loop unrolling etc. Don’t force people to get used to the compiler; rather, allow the compiler to be driven by the user with loads of options.
Third, try to help people who use modern development methods. Writing template static dispatchers is not easy to begin with; together with a buggy, sluggish IntelliSense that displays bogus stuff half the time and a compiler that does not inline single call statements, I’m often using VS as a better text editor with a third-party compiler in the background.
BTW What’s the recommended way to get in touch with the compiler dev team to share such kind of feedback, somehow I get the feeling that commenting blog entries is surely not the right way?
We use VC6 as an IDE for third-party cross compilers targeting various micros via external makefiles. With Visual Assist and Window Tabs it actually makes it halfway pleasant to write embedded C code – as much as it can be when you’re targeting a 4K device, anyway. Most cross compilers can be run from a command prompt or from an external makefile, and their "IDE"s leave a lot to be desired.
We’d love to be able to do this in VS2K5 or VS2K8. You should market an express version that these compiler writers could target with their products – the ‘eclipse’ of the embedded market.
Hello Tom
Re your comment: “I will be upgrading to Orcas specifically because I have heard that it has an integrated class diagram feature. This is important to me.”
The VC++ Class Designer team recently asked for feedback. Although the survey may be closed now, you can still add comments/suggestions/feedback on the VC blog page if you like.
Thanks
Damien Watkins
[Jalf]
> The main thing that bugs me currently is that
> the STL seems much slower than necessary.
There are definitely areas in which the performance of our Standard Library implementation can be improved. If you can create self-contained test cases that clearly demonstrate areas in which we could be faster, please file bugs with Microsoft Connect.
> I’ve seen tests showing STLPort’s vector class
> to be a full 4 times faster than the MS one.
Missed inlining opportunities can often trigger such massive performance differences. Fortunately, these are often easy to fix, with a minor simplification of code.
> So why enable bounds-checking on operator[] by default?
Security.
> And when we want it, we can enable it easily enough.
It’s best to err on the side of security rather than the side of performance. (At least when the performance cost is small enough.) People who notice performance problems can easily disable _SECURE_SCL and friends, while noticing security problems is a lot harder (and more painful).
> And finally, I think it might not actually reduce the number of bugs at all.
_HAS_ITERATOR_DEBUGGING’s purpose is to find bugs during development so that you can squash them.
_SECURE_SCL’s purpose is to serve as a last line of security defense – your program may be buggy and trigger a heap overrun on the user’s machine, but at least the program will die instantly rather than compromising the machine.
I know we haven’t communicated their purposes as clearly as we might have ("iterator debugging" and "iterator checking" sound so similar).
>.
It’s "when you use Standard algorithms, the bounds checks are lifted out". Even in the absence of _SECURE_SCL, it’s a really good idea to use Standard algorithms whenever possible.
> I think the whole "Secure SCL" is a mistake, at least by default.
There is a tradeoff here. Performance is very, very important, but so is security.
[Craig Vayes]
> I would suggest roughly 90% be spent on improving standard compliance.
> While there has been much improvement lately, significant portions of
> ISO/IEC 14882:1998 and 2003 are still not implemented or need further
> attention.
2003 supersedes 1998. 🙂
Exactly which portions do you consider to be "not implemented or need[ing] further attention" in the compiler and standard library? (Two-phase name lookup, exception specifications, and export are well-known to be missing. If I had my way, export would be erased from human memory…)
Thanks,
Stephan T. Lavavej
Visual C++ Libraries Developer
Hi ~AVA:
The behavior you are observing is not by design in the Visual C++ IDE. The problem you mention about the case changing to lower case when debugging, compiling, and browsing is a product bug. We are currently working on a VS 2005 hotfix for the latter (browsing). Coincidentally, I happen to be testing it. It is currently in the customer-testing stage; basically, at this stage our PSS guys work with the customer who reported the issue to make sure our fix addresses their problems. After that, we’ll test it one last time and make it available.
Can you provide us with examples on how compiling or debugging changes the case of a file name? I think for the debugging scenario, it might be a side effect of the same problem we are fixing for browsing. Compiling would be a whole other story. Can you elaborate?
Thanks for your feedback. Keep posted for that VS 2005 hotfix.
Alvin Chardon
Visual C++ IDE Team
Stephan,
> If I had my way, export would be erased from
> human memory…)
Take a look at SC22/WG21/N1459, section 6.3
Export is in the standard and it is not going anywhere. The Comeau and Borland compilers support it. Because VC++ does not implement it, more and more of our code base does not cross-compile as is. I keep looking for it with each new release of VC++, but to no avail. How about simply announcing in which version and year it will be implemented, so that I can check back then?
[Allan Richards]
> Export is in the standard and it is not going anywhere.
I don’t always get my way. 🙂 (Let’s not even mention the eternal enemy, vector<bool>.)
> Comeau and Borland compilers support it.
Borland supports export now? (I don’t follow news for that compiler.) For a very long time, only EDG-powered compilers supported export.
> Because VC++ does not implement it
Neither does GCC.
> more and more of our code base does not cross-compile as is
If you want your code to be actually-portable (rather than theoretically-portable), you should avoid using export. Support for it is not widespread. A great majority of C++ developers get along just fine by ignoring the existence of export, and some (e.g. me, and anyone else who takes the position advocated by "Why We Can’t Afford Export") are convinced that it doesn’t actually buy anything, and carries significant complexity costs.
How exactly does export help you?
Thanks,
Stephan T. Lavavej
Visual C++ Libraries Developer
The fact is that export is part of the standard, so I should reasonably be able to assume that it will be available on standard-conforming compilers. Suggesting that developers avoid standard-conforming code to be portable does not seem right. Perhaps I am wrong, but I have always thought that being standard conformant was a high priority of VC++.
In my experience, most developers "get along just fine" without export because they have never had a chance to use it. It took me a little bit to get used to, but now I find it quite useful when writing templates. Since VC++ (and gcc, as you pointed out) does not support it, though, I always need to sprinkle some preprocessor magic around it to enable "portability".
We’re writing code that compiles on half a dozen platforms with even more compilers (or compiler versions).
Standard conformance is our main concern. If all compilers were standard conforming, we’d lose most of our headaches instantly.
>>Exactly which portions do you consider to be "not implemented or need[ing] further attention" in the compiler and standard library? (Two-phase name lookup, exception specifications, and export are well-known to be missing.<<
Two-phase lookup is what I miss most, as VC lets truly and hilariously absurd bugs slip through without it. I am actually forced to write much of my template code on the Mac (which I hate, since it forces me to grab the mouse for just about everything), as its compiler catches these errors.
>>If I had my way, export would be erased from human memory […] some (e.g. me, and anyone else who takes the position advocated by "Why We Can’t Afford Export") are convinced that it doesn’t actually buy anything, and carries significant complexity costs. How exactly does export help you?<<
A big problem we have with template code is the cascading dependencies. If I want to give users of my code the template T, I also have to give them access to T1, which is used in T’s implementation. However, T1 needs T2, which needs T3, which… This means pulling in hundreds of LOC and polluting namespaces with irrelevant identifiers. And if the minor helper template irrelevant::detail::helper<T> changes, we need to re-compile millions of LOC. (That’s no theoretical scenario. We spent a lot of money on IncrediBuild to "solve" this problem by throwing raw processing power at it.) I have been told by people who (implemented or) work with export that it solves the problem of cascading dependencies. Which is why we would be happy to have it.
So there seems to be general consensus that VC++ has fallen behind in a number of important ways. It’s obvious that the product hasn’t received the attention it needs for quite some time, and now we’re hearing that this is going to change.
This is great news indeed, but when is Orcas+1 actually going to be delivered? I would just like to remind MS that it’s very hard for commercial application developers to wait around; our software products must constantly evolve to remain competitive in our respective markets. If VC++ cannot evolve with us (and it hasn’t), we must pursue other native code development options.
My suggestion at this point? Compensate. Get us the capabilities we’ve been lacking sooner rather than later, whatever it takes.
Please cancel Orcas.
I’m serious. Based on trying the beta (VERY disappointing) and reading blogs, it appears that it’s nothing more than a Visual Studio 2005 SP2. In the last month you and several other Microsoft employees have made it very clear that almost ALL the important changes and fixes are for a mythical product called "Orcas+1". Since that’s what I want and thought Orcas was going to be, why should I waste my time with Orcas? Answer: I won’t. Seriously.
I’ve been planning a major update and migration for our entire company next spring in an effort to have a single development platform (we use three versions of VS C++.) I am on the verge of canceling this migration entirely AND PERMANENTLY.
So, congratulations, in regards to C++ you guys are now acting like Borland. (In case you didn’t get the point–your middle finger is blinding me.)
Brian said:
"Help – more like VC6 – The ‘Locate’ button was heaven!"
I think you are talking about the ‘Locate in the TOC’ feature – ? It is still there, and it wasn’t obvious to me at first, either – it is a book icon with a left arrow and right arrow, just left of the Ask a Question Icon/Button on the default toolbar.
Other features I miss from VS6 are Search Titles Only, Search in These Results and more…how about you?
…
Some of you will remember me from my time on the VC++ team. I left VC++ to work on help. It hasn’t turned out to be as straightforward as I thought it might be, but please know I’m lobbying for the changes, resources and investments needed to improve the help! I am also working with the content teams’ managers to prioritize and work on documentation-specific improvements. Thanks for all of the feedback here, it does get attention!!
– April Reagan
Developer Division User Education Program Manager
I really like to push standards compliance in blog comments, but i am much more excited about C++0x than export.
Hell, even C99 has a few nice features (specially those that bring C closer to C++ and those that are coming in C++0x)
Wish list:
1. Stable IDE for ISO C++ *and* C++/CLI
2. See 1.
Thanks – Mike
Interop is important to me!
[Allan Richards]
> "It took me a little bit to get used to, but now I find it quite useful when writing templates."
As I asked earlier, what exactly does it do for you? "Why We Can’t Afford Export" addressed every claimed advantage of export that I’ve heard of (but perhaps there are others).
[Hendrik Schober]
> "Standard conformance is our main concern."
Mine too!
> "I have been told by people who (implemented or) work with export that it solves the problem of cascading dependencies."
But it doesn’t – see "Phantom advantage #2: Fast builds, reduced dependencies" in the paper above.
[ikk]
> "I really like to push standards compliance in blog comments, but i am much more excited about C++0x than export."
That perfectly captures my state of mind.
Stephan T. Lavavej
Visual C++ Libraries Developer
I have read a few comments, and many here seem to think that C++/CLI is only useful for migration, and that good .NET-Apps should be written in C# exclusively anyway.
This may be true for some apps, but it’s definitely not a general rule: We’re building a .NET app from scratch, and we’re building lots of C++/CLI code, mainly for performance-critical parts and for code parts that need to use complex 3rd-party libraries (with lots of pointers…). IMHO that’s one of the key advantages of the .NET framework: You get all the benefits of a managed environment, but it’s still very easy to program closer to the metal where you have to. I hope future development of C++/CLI continues to support this.
And, btw, I also think that SxS hell (the replacement for DLL hell) should be improved. What I really would like to see would be a tool like Depends that can tell you exactly what other assemblies and P/Invoke’d DLLs some assembly will load at runtime, and where it will look for them. Currently I have tools like Reflector that tell me what an assembly references, but that doesn’t include P/Invoke DLLs, and it doesn’t tell me where it will look for them. I also have Depends, but that can only tell me what DLLs get loaded while a program is running, so I need to manually test all the functions of the program to find out which assemblies it uses, because they are loaded on demand. It’s also not a great help if assemblies aren’t found. Fuslogvw can sometimes help when an assembly can’t be found, but it doesn’t help much if a P/Invoke’d DLL or one of its dependencies is missing.
I’ve been using VC since VC. Most of my complaints against VC2005 have already been said by others.
IntelliSense is terribly slow and useless 90% of the time (several coworkers have asked me how to disable it, and they have a Core 2 Duo 2.66 GHz!). Useful (and more importantly, correct) refactoring tools would be nice.
Standards compliance is among VC’s strengths, but compared to the latest GCC versions it is clearly inferior (e.g. template friends). Even if not strictly required by the standard, GCC is also able to check non-instantiated template code and detect several errors that VC2005 does not see (e.g. an undeclared variable name). And of course there is C++0x.
_SECURE_SCL should not be on by default. Most people do not know it exists and wonder why their applications are 4 times slower.
Also, disabling _SECURE_SCL implies recompiling *ALL* used C++ libraries. E.g. linking an application compiled with _SECURE_SCL=0 with Boost, which is compiled by default with _SECURE_SCL=1, will result in weird and random crashes that are hard to diagnose.
I do not buy the argument that _SECURE_SCL should be on by default because of security. _SECURE_SCL mostly affects the STL, and code using the STL is already inherently more secure than the typical C-buffer-overflow code. Checked iterators are a good thing; I’m actually very glad that they’ve been implemented in VC2005. I just do not think it’s a very bright idea to enable them by default for Release builds.
The VC2005 STL implementation is significantly slower than the one that ships with the latest GCCs (even with checked iterators disabled). In my experience, the areas where GCC is significantly faster (i.e. 2-4 times faster) are strings and streams. I’ve also found that their containers implementation tends to be faster due to better memory/cache allocation and management.
For example, our OBJ 3D Model loader (that basically linearly parses an ASCII file) is still two times faster on a Linux/GCC old Athlon XP 2.1 GHz than on a Windows/VS2005 Core 2 Duo 2.66 GHz.
The build system on multicore is not very logical. VS2005 does build projects in parallel whenever possible, but will only run one thread for compiling each project. Makefiles have solved that problem for decades: one project’s compilation should still use multiple available CPUs by compiling several files at the same time.
As it was already proposed, the VC redist runtime should be on Windows Update to avoid useless deployment hassles.
I’ve been using VC since VC.
Make that VC6. 😉
You know what I would like to see? Stop breaking existing code.
For example: Service Pack 1 for VS.NET 2003 broke the editor. When you create a new document, the cursor doesn’t display until you’ve entered a second line. If you do a select all and then delete, the cursor stays on the same line as its last position, even though the text has been deleted.
In VC6, you could use block indent/outdent to increase/decrease the tab level of a block of code. If some of that code was partially indented with spaces, it still worked. VS.NET 2002 and 2003 broke that. They substitute tab characters, period. That forces the user to go back and replace the spaces that were removed.
I know these sound like nits. The point is, these are things that simply ought to be _right_, with no questions.
I’m leaving a comment to emphasize that "F1 isn’t king anymore".
I loved the context sensitive help integration in VC6. You just put your cursor on something, pressed F1 and got the information you needed.
An by "information you needed" I mean reference information! No community content or whatever.
If I’m looking for community content I’m using my web browser to search for that.
And as already mentioned here before, the performance of the help system is bad. It takes much more time to display any result after pressing F1. During the wait time I can easily start IE and do a Google search. I always thought VS hosts IE to display help content, so why is it much slower than IE itself, which must transfer the contents via TCP/IP? I stopped using F1 after VC6.
Also I would like to see some "Intelli-Content". Meaning: if I’m working on a native MFC project, it’s very likely I want to see native C++ samples for the native API function, not some C# for the Compact Framework. If VS knows what type of project I’m working on, please use only the subset of help topics I need to view for my project.
And please do this for VS2005 SP2. The missing help is a major issue to me
Thanks
Jan
[Tanguy Fautre]
> _SECURE_SCL should not be on by default.
Think carefully – which is actually better?
1. Having extra security barriers enabled by default, making developers of applications whose performance is significantly degraded have to go figure out that the security barriers exist and then figure out how to work around them or turn them off – probably during development, since the performance will be noticed then – OR
2. Having extra security barriers disabled by default, making developers of applications that contain security bugs have to go figure out – *after a security exploit* – that the security barriers exist and then figure out how to turn them on – probably after development, since security exploits won’t be noticed until then, so now they have to ship a security patch and suffer the hit to their reputation.
Having to choose a default for all VC users worldwide, we chose "on". Can you blame us?
> weird and random crashes that are hard to diagnose.
In my latest VCBlog entry, I mentioned that we are investigating how to diagnose _SECURE_SCL mismatch between statically linked translation units in VC10.
Yes, in VC9 things are quite painful if you are not extremely careful.
> code using the STL is already inherently more secure
> than the typical C-buffer-overflow-code.
Yes! But more security is better, and _SECURE_SCL usually has only a small performance cost.
Stephan T. Lavavej
Visual C++ Libraries Developer
I have been coding in C++ since the late 80’s, and have written numerous books on the subject (mostly early 90’s.)
At first I was of the opinion as many here are that MFC is important and the focus for new work on Visual C++ should be on cleaning up the many problems it has.
But recently, after using WinForms in C# for a while, I’ve changed my mind. Not having to worry about resource ID’s, (a fatal flaw of MFC) and being able to move code for dialogs, controls around with ease between projects makes the WinForm approach much superior, not to mention all the fancy new dressings that are available to keep your app in fashion.
Having had a stint with C# and WinForms for a while, and being much more productive at building GUIs with it, I’m moving back to C++/CLI, but will stick with WinForms, either in C# or C++/CLI.
I’m of the opinion it’s much easier to learn WinForms, etc, using C# than C++/CLI mainly because of cleaner syntax, and no longer having to hassle with header files. But having learned the gist of how it’s done in C#, it’s time to go back to my favorite language of choice, C++.
C#’s screwed-up memory management is, in my opinion, an almost fatal flaw of the language: it appears the designers of C# were severely confused about what destructors are for (which is to clean up after the object, not necessarily just to free memory), and thus they discarded deterministic destructors. The workarounds for this are much worse than the disease; the Dispose() pattern is confusing, convoluted and error-prone. If only the C# designers had implemented destructors right in the first place, C# would be a fine language.
Thus, I vote for making C++/CLI work better with WinForm’s, etc. For example, have the code generator create both headers and source files for the designer code, rather than just jamming it in a header file. Make the generated code more readable and formatted better. Follow C#’s approach and split the class definition and implementation into code generated pieces and user crafted pieces.
Better yet, figure out some mechanism for eliminating the need for header files in the first place. The #using approach of C# is much superior. I found that going back to C++ was a real headache, having to maintain both a header and a source file. I’m not saying get rid of headers altogether; they are part of the C++ standard and should be kept in. Just add the module approach of C# somehow (and I have no idea how.)
Don’t worry about what kind of extensions you need to make to C++ to do this — who cares if you add keywords for gui purposes, gui code is rarely portable anyway, so who cares if it "breaks" the language this way. Just be sure to keep the ability to write standard C++ code for the non-gui parts.
Microsoft has already done a lot of this with the managed extensions to C++ in C++/CLI. And I think the effort along these lines has been commendable. (And thank heavens you guys got destructors right in the managed C++ world.)
As far as making STL work better in VC++, this may sound heretical, but I don’t care one bit. I personally think STL is an abomination. I avoid it like the plague. It’s not that having standard libraries is a bad thing, but STL has gone way overboard. And really, having to include thousands and thousands of lines of code every time you include a header to use STL is really dumb dumb dumb. Sure, doing meta-programming with templates is a fascinating adventure, but for production code and maintenance, it is a nightmare. (Sorry Stroustrup.)
STL wouldn’t be so bad if the whole implementation of templates weren’t severely broken in the language. Not being able to separate implementation from definition for templates is almost a fatal flaw. It’s not completely fatal, if you use templates sparingly and use them simply. Unfortunately, that doesn’t sound like STL, does it?
>I left VC++ to work on help.
>It hasn’t turned out to be as straightforward as I thought it might be,
>but please know I’m lobbying for the changes, resources and investments needed to improve the help!
>
>- April Reagan
Please April Wan Kenobi … you’re our only hope 😉
Time to summarize some of the big things we’ve heard in the last set of comments:
1) Improve "fit & finish" – I totally agree that this is something we need to do moving forward. The features we add need to feel right for C++ developers.
2) C++/CLI – I want to assure people that C++/CLI is not going away. It’s very important to our strategy and we will continue work in this area. Understand, however, that we are not pursuing the path of making VC++ a "first-class" .NET development environment. C# and VB are perfect for this role. There’s nothing to stop you from using the C++/CLI language facilities to build a full-fledged managed application, but – moving forward – our feature set will not be geared towards this.
3) Orcas – Please reserve judgment on Orcas until after launch. As I hinted at previously, we are working on some major library updates around MFC and TR1. Stay tuned.
Thanks again for the very interesting posts. I know that some of you are frustrated – and you have a right to be. Hopefully you’ll see that our transparency (both here and in our blog) represents a new dynamic. Our strategy is firmly in place and we know we need to supplant VC6 as the "world’s greatest native development tool". Hang with us and we’ll do you right.
Bill Dunlap
Visual C++ Development Team
I’ll offer a slight dissenting view to Bryan Flamig. I find WinForms vastly inferior to the traditional resource-file (MFC, in his parlance) method, since it mixes interface and implementation.
Likewise, I prefer the header/source separation of C++. While I easily adjust to the C# model, I sometimes get annoyed by having my member definitions getting lost in all the code. Nevertheless, I do agree that it is jarring to switch back to header/source when doing C++/CLI since my mindset is usually more in the C# camp than in the traditional C++ camp.
I totally agree that the lack of deterministic destructors (or at least of a method that’s guaranteed to be called when a class goes out of scope) is a big problem with C#. It actually creates resource leaks, which are far worse, in my own experience, than memory leaks.
I also agree that STL is an abomination. I personally find the non-inheritable model to be beyond stupid. Plus, all too often it’s like killing mosquitoes with a sledgehammer. Beyond that, I just don’t like the way it, and boost for that matter, were designed. (Long before STL and Boost, like many C++ developers, I wrote my own very lightweight collections library. It still blows STL and Boost away for what I need.)
Incidentally, a huge problem with C++/CLI is that the documentation is simply terrible and there is very little comprehensive sample code.
[Bryan Flamig]
> thus the designers discarded deterministic destructors
Indeed, deterministic destruction is underappreciated. Modern C++ avoids resource management problems *and* the complexities introduced by garbage collection.
It also appears that shared_ptr in C++0x will have a cycle collection ability. (This has not yet been voted into the Working Paper, but it’s still under development.)
> Not being able to separate implementation from definition for templates is almost a fatal flaw.
It is a fundamental consequence of how templates work. The object code (which is just machine code with a few loose ends not tied up) generated by a template can be massively different depending on how that template is instantiated. Therefore, the definition of a template is required at the point where it is instantiated.
[Joe]
> I personally find the non-inheritable model to be beyond stupid.
Inheritance has been overused in C++. ("Object-Oriented Programming" was all the rage once upon a time.) The problem is that inheritance introduces extremely strong coupling – the relationship between a base and derived class is stronger than any other relationship except friendship. Runtime polymorphism also causes programs to forget types, and dynamically discover them through virtual function calls. Often, loose coupling without forgetting types is desired. This is what templates enable.
The simplest example of this is containers. In olden days, people used "based object" containers, by making every object derive from an Object and then having a container of Object *. The problem with this is that you have to cast pointers upon getting them out of containers. You also have to deal with resource management (something that wasn’t completely solved until shared_ptr – which is a template). And you can accidentally make your containers heterogeneous, because you have relaxed your type safety.
The STL did away with this true abomination and made containers work with value types. This is a much simpler model that is also more robust, and works with builtin types as easily as user-defined types. (int, after all, does not derive from Object.)
Note that when you combine shared_ptr with the STL, you can now put polymorphic objects in containers – the best of both worlds. Container-of-shared_ptr is an extremely powerful construct.
Stephan T. Lavavej
Visual C++ Libraries Developer
Hello
Regarding your comment (Saturday, August 18, 2007 4:17 AM by Anonymous):
I am sorry to hear that we have not been as attentive or responsive as we should have been; thanks for taking the time to let us know. Going forward, please drop by the VC blog whenever you get the chance; we really enjoy posting about what we are planning/doing and soliciting input/feedback on all aspects of VC++:
Thanks
Damien Watkins
Visual C++
Regarding the comment about "Incidentally, a huge problem with C++/CLI is that the documentation is simply terrible and there is very little comprehensive sample code."
I totally agree.
I will for sure pass this along to the doc folks. Hopefully, in the areas where it applies, we should have samples in all languages.
Thanks,
Ayman Shoukry
Lead Program Manager
VC++ Team
Firstly I would like to thank everyone who has taken the time and gone to the effort of writing to this thread; we find your first-hand comments invaluable in directing and prioritizing our work.
With this in mind, we’ve been working throughout the month of August on fixing some of the performance issues reported. Bear in mind, however, that due to the complex nature of this component we are not able to address every issue. We are looking first at increasing the responsiveness of the IDE during expensive IntelliSense operations, especially when using it with large-scale codebases.
We plan to release these fixes for both VC2005 and Orcas as a downloadable hotfix this autumn. Once the download is available, we will post the link on blogs.msdn.com/vcblog. Please stay tuned as we look forward to your feedback on this hotfix.
Thanks,
Marian Luparu
Visual C++ IDE
Pass this on to the help people:
Help is becoming more and more useless. For example, if I search for some API, I get all kinds of .NET trash, even though I’m working on native Apps. If I set the filters to concentrate only on VC++ Native, I *still* get .NET trash, all kinds of CLI methods. These would only be of interest if I was programming using CLI, which I don’t. The search in VS2005 is truly useless.
NEVER drop a KB article! If it becomes obsolete, then its content should change to say "The former contents of this KB article are no longer relevant starting with VS2005" (or XP Pro, or Vista), but the problem is that it is *still* relevant to VS6, running under Windows 2000.
I often either receive code or write code based on a KB article and see, or include as my comments, "as per KB article XXXXXXX". But a few years later, when I go back to look at the article to see why the code looks like it looks, the KB article is simply gone. This is inexcusable. We need to understand legacy code, code that is a decade old, perhaps written under VS4.0. If a workaround is no longer needed, we need to know that the code we have can be rewritten. If a special hack is no longer required, we need to know that we can remove it. But since we don’t know why the code is there, we don’t know anything about its significance. There is absolutely no excuse for deleting an article because it has become obsolete; we need to know why it had relevance in the first place!
Hello Joseph,
Will for sure pass your feedback to our help folks.
Thanks for sharing such issues.
Regards,
Ayman Shoukry
Lead Program Manager
In Orcas we have a new Index/TOC filter for native development, so that should help at least in the TOC/Index pane.
KB articles are owned by CSS, so I’ve passed on your feedback to them.
Orcas includes a major F1 fix that makes F1 work better for .c files. While I’m not sure it will address all the concerns raised here, it should improve the situation, especially for C code.
The Help system in general is in need of a dedicated, sustained improvement effort. So far that hasn’t happened, but feedback like this helps us make the case that it’s needed.
Thanks!
Gordon Hogenson, Visual C++ Documentation Manager
Gordon,
For far too much of the MSDN API documentation, the entries are just a minimally reformatted function header from a source code file.
The API/library calls that do include sample code usually have no real-world error handling. Good example code a) identifies whether an error occurred and b) breaks the error down into specific cases.
For example, to open a file:

    if (open file failed) then
    {   // this is really bad example code
        print 'open file failed'
        return;
    }
    else
    {
        do processing
    }

Versus better sample code:

    if (open file failed) then
    {
        if (failure code is 'file is in use by someone else') then
        {
            ...
        }
        if (failure code is 'file does not exist') then
        {
            ...
        }
        // and so on until the major errors are handled
    }
    else
    {
        do processing
    }
I find that I have to sandbox test most of the library/API calls because the sample code either does not exist or does no error handling.
My real-world systems need to return a meaningful error code to the end user and, more importantly, to our technical support staff. A generic ‘file could not be opened’ is OK, but much less useful than ‘file ABC.txt could not be opened (code E_0345)’. E_0345 would refer to a specific block of C++ code and a specific API/library error code.
Stephen Kellett wrote:
“Please can we have more than 1 watch window in VS8? VS 6 gives me 4 – so much more usable.”
VC8 also has 4 Watch windows available just like VC6. Default shortcuts are Ctrl+Alt+W, ‘n’ where n is between 1..4.
Steven Ackerman wrote:
“Most cross compilers can be run from a command prompt or from an external makefile, and their ‘IDE’s leave a lot to be desired. We’d love to be able to do this in VS2K5 or VS2K8”
Starting with VC2005, you can use the IDE for makefile projects. Use File > Project > New Project from Existing Code, specify the files that are part of the project and the build, rebuild and clean command lines you use, and you are ready to go.
Sergey Kartashov wrote:
“I have Russian Windows and Italian resources. Each time I’m saving Italian resource file all accented symbols are lost.”
Can you try saving the RC file as Unicode? On VC2005 SP1 or above, select “View Code” in the context menu, then File > Save As…, and in the Save dialog, select from the Save split button “Save with Encoding”. In that dialog, pick Encoding: “Unicode – Codepage 1200”. If you are still seeing data loss after this, please open an incident on connect.microsoft.com and follow up with me on email at mluparu at ms.
John M. Drescher wrote:
“I would like to see a large improvement in the build system to at least make better use of multi core processors while building”
Tanguy Fautre wrote:
“VS2005 does build projects in parallel whenever possible, but will only run one thread for compiling one project”
As Tanguy already noted, VC2005 already supports multi-proc builds at the project level. Coming with VC2008 there is also the ability to build multiple files in the same project in parallel. All you need to do is add /MP to your cl.exe switches. For details please refer to Peter’s VCBlog entry:
On the same topic of build system improvements, Managed Incremental Build is another new feature coming with VC2008 that improves throughput in scenarios where you’re building managed C++ projects. More details here:
Thanks for all the feedback. And keep it coming!
Marian Luparu
Visual C++ IDE Team
Interop technologies are extremely important. The majority of us cannot ‘rewrite’ everything just so it is all in C#. There is no good business reason to disrupt application stability and take away resources from being able to add new functionality for our customers.
Of course, the ideal scenario would be ‘true’ support for continued native development. Why are new features such as WCF, WPF and LINQ only available to ‘managed’ code? And then to make it worse, the support of C++/CLI by Microsoft is dismal. At all Microsoft events, conferences and magazine articles, the only decent level of support from Microsoft representatives is for C#. Why no representation for C++/CLI? Why such poor marketing of the C++/CLI capabilities? It appears that C++ was abandoned, and therefore we must now change everything we’re doing if we want support from Microsoft.
I find it disgusting. If I must now rewrite everything, why in the world would I choose to follow Microsoft again? They have already demonstrated that they only care about the ‘new and fluffy’. Significantly more time is spent in maintenance. Very little time (comparatively) is spent in the ‘new and fluffy’ startup code.
If the ‘future’ is all about running in a virtual machine, then I would rather go with Java. At least Java isn’t constantly pulling the rug out from underneath developers.
And ‘now’ you say that you care about us and want to know what we want so that ‘maybe’ in the future something ‘might’ become available to help us with our C++ code base.
You do have us stuck, what can we do? Rewriting everything in Java makes about as much as sense as rewriting everything in C#. Basically, it doesn’t make business sense.
[Stephan T. Lavavej]
> "I have been told by people who (implemented or) work with export that it solves the problem of cascading dependencies."
But it doesn’t – see "Phantom advantage #2: Fast builds, reduced dependencies" in the paper above.
I think you might like to read another paper, written by Jean-Marc Bourguet on this issue:
Please also note that, although the previously cited paper was presented to the C++ standards committee, and was quite debated there, the decision was not to remove export – not for compatibility reasons, I believe, but for its own benefits.
[Stephan T. Lavavej]
>> _SECURE_SCL should not be on by default.
>
>Think carefully – which is actually better?
>
> 1. Having extra security barriers enabled by default […]
> 2. Having extra security barriers disabled by default […]
I can understand the importance you give to security. However, all C++ programmers I personally know do always disable it. Hence, for all these programmers (myself included), having it enabled by default is counter-productive.
I realize that alone I’m not really statistically relevant. But it may be good to know actually how many C++ programmers do leave it enabled versus how many disable it.
>> code using the STL is already inherently more secure
>> than the typical C-buffer-overflow-code.
>
> Yes! But more security is better, at a small performance cost.
> _SECURE_SCL usually has a small performance cost.
Depends on the application. I’ve always found the performance impact of _SECURE_SCL to be non-negligible.
I’ve just submitted the bug report. Its ID is 294551.
Submitted another STL performance bug report. ID 294554.
If the only useful fixes are coming in "Orcas+1" drop Orcas altogether and go straight to "Orcas+1".
I will gladly pay you Tuesday for a hamburger today – Wimpie
[Tanguy Fautre]
"all C++ programmers I personally know do always disable it."
If any of their applications have security exposure (and the ones that don’t are few and far between), that makes me nervous.
It’s really tempting to hit the "go faster" button, but you should think really hard about what you’re giving up.
Thanks for submitting the two perf bugs! They’ve been ported to our internal database and I’ve tagged them so I can track them. The /MT versus /MD one is especially intriguing.
Stephan T. Lavavej
Visual C++ Libraries Developer
Amid all this discussion I have something else to say.
People here are talking about all sorts of VC++ topics with regard to commercial applications – and sure, those matter – but let’s not forget the open-source developers too, especially people like me who are trying to port open-source software to Windows while trying to avoid MinGW or Cygwin.
Here is my wish list. I know this will be like second priority for you guys but please put some thoughts.
1. nmake compatibility: As of now, VC makefiles and GNU makefiles are not cross-compatible. Please provide us with tools or build support for converting makefiles (or nmake files, for that matter) to vcproj files.
Currently we set up a Cygwin command-line environment using GNU make, setting environment variables like CC=cl and LD=link, but this requires lots of rework and tweaking in the rules files. The CMake folks have done great work, but it is very slow.
2. CVS/SVN Support: Even a very basic support will help us a lot. Please consider this if this is possible.
3. Project From Existing Code: This wizard is not very helpful. What I’m looking for from this wizard is (a) automatic library dependency scanning, and if a required library is not available it should suggest meaningful library names (not just the missing header files).
4. Support for POSIX libraries (I know I’m asking too much): libc, libstdc++, libcrypt (OpenSSL), and libpthread. OK, I know this is a somewhat random request, but it helps with portability.
5. Backward compatibility with Visual Studio plug-ins: It’s not there. If it’s for marketing reasons, I can’t get very far arguing about it.
6. Support for Backporting: VS should support earlier version of common Microsoft libraries (mfc/atl/crt) and some IDE option for enabling or disabling earlier version support. Just like we have dotNet target version stuff in 2008.
7. IntelliSense: I won’t go into much detail, but just to give a quick idea: every time I open the KDE4 (kdelibs) project in Visual Studio with IntelliSense enabled, my computer freezes and CPU usage hits 100%. Please do something about that.
8. Document removed features in a better way. It helps us port applications to newer versions of the IDE.
9. Side-by-side assemblies: They break compatibility. Please give us a tool option to disable SxS.
10. Disable _SECURE_SCL by default. If we want it, we will enable it. Write a KB article about this.
11. Missing-library suggestion system: Instead of just reporting missing header files, Visual Studio should go one step further in detecting the dependencies of the project and suggest library names to the user.
——————————————–
Some minor stuff:
12. Please have the same code beautification as we have with C#. For example, in C#, entering a=34+45 is reformatted to a = 34 + 45. It improves code readability.
13. vcproj and sln file backward compatibility. If you are really adding a lot of stuff to these files with each version increment then it’s OK; otherwise it looks very unnecessary.
[Kunal De.
Stephan T. Lavavej
Visual C++ Libraries Developer
[Stephan T. Lavavej]
>> > "Standard conformance is our main concern."
Mine too!<<
Good. Can I have two-phase lookup by yesterday please? The lack of it is embarrassing. (Having export by tomorrow would be early enough.)
>> > "I have been told by people who (implemented or) work with export that it solves the problem of cascading dependencies."
But it doesn’t – see "Phantom advantage #2: Fast builds, reduced dependencies" in the paper above.<<
Well. I’ve known Herb’s paper for a long time. AFAIK, when he wrote it, he did not have access to a compiler providing ‘export’. So it’s all what he /thought/ about it by then. (To quote P.J.Plauger: "[…] Sutter gave a glib summary from incomplete data and he was soundly countered by those of us who had some real experience.")
OTOH, Daveed Vandevoorde, who implemented ‘export’, and P.J. Plauger, who uses it, repeatedly said in newsgroup discussions (which, with their threading and ability to quote, are so much better than this abomination here, BTW) that it is indeed able to solve the problem of cascading dependencies. (And also that it prevents cluttering namespaces with irrelevant symbols, which Plauger once said he considers its greatest advantage.)
What do you think I trust more – a many year old theoretical thought or concrete experience?
>> > _SECURE_SCL should not be on by default.
Think carefully – which is actually better?
1. Having extra security barriers enabled by default […]
2. Having extra security barriers disabled by default […]<<
No need to think here. The biggest project here is currently at 2.6 MLoC. (When you shipped iterator debugging etc. it was probably 2.0 MLoC.) Iterator debugging found us about half a dozen bugs in there which had slipped through so far. Let’s assume it helped us catch another few, so we’re up to a dozen now. A dozen bugs /which nobody ever noticed/. TTBOMK, no report of one of your runtime security features kicking in ever came from the field. (We do get other bugs, though.) OTOH, we have to put a lot of effort into speeding up our applications, as /this/ is what customers demand. (Not to forget the gazillions of spurious warnings we get when we compile a well-tested piece of platform-independent 3rd-party code on VC. I think we get a four-digit number of warnings from compiling 3rd-party code – code we cannot fix, which is well-tested and very portable. This makes warnings more or less useless, as we can’t find the important ones anymore. And all this because you are unwilling to tell those who write unsafe code that they might have to take a book or two and learn a few basic techniques.)
>>Intellisense<<
Some of us here have simply turned it off. (Did I say "simply"? Having to rename an installed DLL…) Mine’s still on, but don’t ask me why. Visual Assist is so much better. Incidentally, these guys also prove that you don’t need to wait for a new compiler in order to improve on the current situation. Obviously, a new compiler isn’t what is needed for this. All it takes is a few determined bright minds.
One last thing: this small edit window is ridiculous. So is the quoting ability. And the fact that there’s no threading here. Can’t you start this discussion in the newsgroups? It’s what they are for, and it’s what they are good at.
Here is my wish list for the next versions of Visual C++ (I know many of these have already been asked for; I just repeat them to give them more weight):
Editor:
– IntelliSense that works as well as it does for C# (I’ve been quite happy to read it’s on its way), plus refactoring
– An improved help system: faster, with more useful filters (separating C++ from Win32 from C++/CLI from the .NET Framework)
– Please keep the possibility to use the same keyboard shortcuts as in VC6. Modified shortcuts are a real pain for all developers.
– A UML tool with real-time code update on graph modification and real-time graph modification on code change (something like the Together software) would make my wildest dreams come true!
Compilation:
– Support for all of C++98 and TR1. A strict mode that checks for non-standard extensions, but still enables the use of Windows headers.
– Optional support for C++0x (at least partially, mostly for the features that are not very likely to change before the official standard)
– Improved build times, especially wrt templates
– Improved compilation diagnostics – not with the horrible error list, which hides interesting information, but by enhancing the output window: optional filters for template error messages, use of color to differentiate errors, warnings, #pragma todo, …, and an additional shortcut to go to the next error, not only to the next (file/line) info about the current error.
Project/Solutions
– The possibility for project properties to define a default for all configurations, and then a per-configuration delta from this default. With the current system, it is quite common to edit the project in debug mode but forget to apply the same modification in release mode. Property sheets are a step in the right direction.
– Recursive solutions. For instance, a developer works on a solution that handles his part of the project, and an integrator includes this solution in the daily-build solution. Or a unit-test solution includes the main solution.
Integration with other languages:
– A possibility to automatically provide a .NET C++/CLI wrapper for a C++ class, just like what exists with tlbimp for COM.
– I personally don’t care much for MFC. For GUI, I either use .net after creating a wrapper (that’s why I’d like simplified wrapper creation) or with something like Qt.
And some misc stuff:
– More intelligent syntax coloring, for instance it would be nice to be able to color the line starting a function definition
– Autolink for windows libraries
– And everything else I forgot 🙂
I am very much in favor of parallel compiling. It may not be as advanced as IncrediBuild, but it would save a lot of time on every project. Some people here are asking for increased compiler performance. That’s nice, but it can give only a small performance increase. Every developer has at least a dual-core machine, and it’s a shame he cannot employ it. (And I don’t mean that parallel thing in VS2005, which looks like some student’s work. I want to compile a single project in parallel!)
Tomas,
The capability to compile multiple files in a single project in parallel exists in the upcoming Visual Studio 2008 "Orcas" using the /MP switch. You can test it out here:
Thanks!
Steve Teixeira
Group Program Manager, VC++
[Hendrik Schober]
> Can I have two-phase lookup by yesterday please?
I’d like two-phase name lookup too. Unfortunately, it is perceived as being able to break a lot of customer code while providing little value to customers. While my sympathy for non-conformant code is almost nonexistent, and my desire for conformance overwhelming, I must admit – my only counter to this objection is "but you could have a compiler option".
If you (and others) can clearly make a case for why two-phase name lookup support would significantly help you, above merely checking for non-conformant code, that would help address the "little value to customers" perception. (Note that by compiling code with multiple compilers, you get the union of their conformance checks – although you also have to endure the union of their bugs and quirks.)
I can think of lots of things I want more than two-phase name lookup, as well.
> What do you think I trust more – a many year old theoretical thought or concrete experience?
Fair enough. I’m more inclined to believe Vandevoorde and Plauger, instead of the dozens of random programmers I’ve seen who think they want export (but haven’t used it yet).
On the other hand, I notice GCC’s conspicuous lack of movement towards export.
[Loïc Joly]
> Support for all of C++98
C++03, please! 🙂
> Autolink for windows libraries
Oh, that would be *nice*. I love Boost’s autolinking.
Stephan T. Lavavej
Visual C++ Libraries Developer
As a full-time software engineer and a part-time academic, I have spent over one and a half decades using the Microsoft C++ compilers. In that time I have been trying to build metrics-gathering tools, refactoring engines, and debugging tools, as well as delivering software products to our customers. We have a product-line architecture at work that has received person-centuries of developer effort and weighs in at more than 10 million lines of C++.
Building tools for this volume of code is hard. So I thought I would list the MS offered options and try to build a bigger picture.
Remember the browser database? Well there was an API to read the file called BSCKit. It’s still there if you want it, but it breaks and is old and does not scale to multi million lines of code. What it offered is a wonderful symbolic engine. It listed every reference to a variable, every call site for a function and, as it was symbolic, would cope with virtual functions in the correct way – by referencing the known base type at the call site.
What the BSCKit lacked was type information. So we can turn to the DIA SDK. This is an API that allows you to crack the PDB debugger information. (Search for “DIA SDK” and an article by me is the first hit). There is such a wealth of type information in the PDB file; I have written tools that generate complete C++ headers just from an (unstripped) PDB file.
Then we must consider MSR. They developed an alternate backend to VC6 called ASTKit. It was research and broken (it did not cope with templates, did not scale, and did not track service packs), but it allowed you access to the entire abstract syntax tree that MSVC created during the compile.
More recently there has been Phoenix – MSR’s compiler writer’s toolkit. It’s early days, but Phoenix completely replaces the ASTKit with an environment for writing all sorts of spelunking / dev tools. However, it lacks a full symbolic engine as offered by the BSCKit, and does not have the rich type information of the DIA SDK.
Now we hear about the improvements to the database behind IntelliSense, and I fear yet again we will get half a story. A C++ parser is very hard to write; it must expose as much information to the developer as possible. Writing real refactoring tools for C++ is key to the language’s survival (look at what Eclipse does for Java). The C++ community requires industrial-grade tool suites, not just the industrial-grade compiler.
If we are to build such tools we require symbol engines, type information and binary re-writers. IBM Visual Age promised such information with their Montana project, GCC-XML is starting to offer parts of this, as are the Edison Design Group.
Please allow the community to build a wonderful and rich ecosystem for you. To do so offer us the information you have locked away in your compiler & debugger developer divisions. You have the power to re-energise the C++ community to make the non Windows developers even more envious. Please …
Cheers,
James.
James Westland Cain, Ph.D.
Principal Software Architect
Quantel Ltd.
james.cain@quantel.com
[Stephan T. Lavavej]
>>If you (and others) can clearly make a case for why two-phase name lookup support would significantly help you, above merely checking for non-conformant code, that would help address the "little value to customers" perception.<<
Consider this:
struct X { template< typename T > void f() {h();} };
here, ‘X::f()’ calls a function ‘h()’ that’s not declared. VC never catches this silly error. Any compiler implementing two-phase lookup (GCC, CW, Intel, Comeau…) does. (The reason is that ‘h’ is not a dependent name, so the compiler should check the call when it first sees the template. If it were dependent, it shouldn’t be checked until ‘X::f()’ is instantiated with a specific template argument.)
Everyone who writes template code to be used by others will regularly run into this problem: you write your code, it compiles, it even works – but you haven’t instantiated /all/ the templates in your code. Simple spelling errors like the one above (you meant to call ‘g()’, which is declared somewhere, but mistyped it as ‘h()’) will slip through, as VC doesn’t find them. Then someone compiles your code using a conforming compiler and sends you an embarrassing list of compiler errors.
But there’s more to this: even if ‘h()’ were the right function and I made sure ‘X::f()’ was instantiated during my tests, ‘h()’ might be declared where I /instantiate/ ‘X::f()’ but not where it is /defined/. VC will happily accept this (as it only parses the code when the template is instantiated), while other compilers would (rightly) flag an error in ‘X::f()’. /This/ can be a real show-stopper, as you might have to refactor code due to the previously undiscovered dependency of ‘X’ on ‘h()’. I ran into this several times, which is why I now use other compilers when I write non-trivial template code.
(Note that ‘export’ would make it possible for me to move the implementation of ‘X::f()’ into some cpp file and not clutter my header with it – thus reducing such declaration dependencies.)
Of course, at first glance this only affects you should your code ever be compiled using a conforming compiler. However, unless Microsoft openly announces that it will never attempt to implement two-phase lookup, some day VC will bark at this. The longer you procrastinate fixing this issue, the more code with such (undetected) errors will be out there the day Microsoft comes out with it – and the bigger the customers’ problems will be. IME this holds for all bugs that make a compiler accept faulty code: the longer the compiler accepts it, the harder it will be to fix without seriously annoying everybody. (Almost ten years after the release of VC6, many still haven’t upgraded because they can’t afford to fix their code so that it compiles using modern compilers.)
Resource Editor – support anchored controls in MFC Forms & Dialogs.
I want:
– IDE optimization. Each new version gets more bugs and is slower! Have you tried to manage a form with 500 controls? What a pain. Add to that IntelliSense updating itself every 30 seconds and the AutoRecover function, and you can’t code – the computer is halted.
– A button to update the damn IntelliSense MANUALLY – not when it wants to, but when I want it to.
– Distributed cluster compilation ( ala Incredibuild ).
– Windows Presentation Foundation with FULL Managed C++ support ( currently in VS2008 beta is only C# ).
– SSE3 optimization compiler option ( /arch:SSE3), SSE4 support too.
– New languages support: Digital Mars D, C++ 0x
Oh sorry, I forgot…
– Support for GCC .a/.so libraries.
– Allow inserting images in code comments… For example, imagine I want to attach to the file an image with a diagram explaining how to calculate a geometric reflection vector from an input vector and a surface normal. Currently you have to make ugly ASCII art for this. I should be able to insert images in C++ comments, perhaps with a #pragma imagecomment or in the project file settings.
– Official support for Subversion source control, not only for Source Safe/Team.
Interested to know … What IDE/compiler tools does the VS Dev team use ?
Thanks
I was disappointed that profiling (performance testing) was removed and then later added to Team System. I would like it back (but I don’t expect you will give us this one).
I would like faster compiling. Every new version seems to double the compile time. I do understand this is partly necessary, but some work here would be good.
(Current) standards conformance is important. Getting C++ TR1, and the upcoming C++0x which will probably be done before Orcas+1, is important too.
A better help system (i.e. only look in the C++ standard library, or only MFC, or only Win32, or a selection of these in a specified order)
– Faster compilation times, ala VS6
– More optimizations and support for SSE3, SSE4, etc…
– Better intellisense
– Standard C++0X
– More frequent updates to visual studio
– Don’t mind about .NET integration, but mixed types could be useful.
I really appreciated Refactor!
Here is my wishlist:
* fix intellisense
don’t consider it fixed unless it works with stl, boost, wxWidgets and similar toolsets
actually test it with these libraries
* add refactoring tools
* add a c++ class designer
* improve help
throw away current help and start from scratch
both the help mechanism and the contents of the help files are badly broken
* stop changing things from one version to the next just for the sake of it (projects/solutions/help/addin)
* decide on an architecture and keep it stable for 10 years, unless you can deliver really dramatic improvements for programmers
* make ‘property sheets’ for projects usable
It’s a joke that, when I want to switch from static linking to DLL for example, I have to click my way through a ton of dialogs.
* deliver a solid, robust profiler. Make it available from the Professional versions on upward. Take a hint from the IBM VisualAge profilers from the 1990s, especially their way of visualising multithreaded execution. You could actually, visually/graphically, see deadlocks, race conditions, and performance bottlenecks. They are still the best profiling tools I’ve seen.
I’ve been using VS since version 6 and have lately wondered if I might ever have to switch to a different toolset.
As MS focuses on managed code and .NET, I get the impression that the resources for VS C++ development have dwindled. I’ve seen a similar thing happen to IBM’s excellent (back then) VisualAge C++ compiler 4 (for OS/2). Resources are assigned elsewhere and the product is left to die.
That had me wonder for how long Visual Studio C++ will remain a reasonable solution for C++ development.
Hopefully Orcas+1 can counter that impression. Microsoft has lost a lot of credibility.
.NET 2002 -> horrible
.NET 2003 -> soso
VS 2005 -> slow and broken
Compilation ERROR messages with templates:
These are not much better than they used to be with VC6 back then. We need to do better than that. I know this is not an IDE issue, but a compiler issue.
MSDN Help has abandoned C++! Today, you go to Google or msdn.microsoft.com and all you get are C# hits!! Not to mention that the current pages are much harder to navigate than the old ones. If it’s not broken, don’t fix it!! Please bring back the native-code MSDN pages.
I’d vote for a tick box to remove all managed (.net framework) help from the help,so if you’re developing native it doesn’t even try and look for .net classes etc
I’d also like to thank the folks who put the BRIEF keyboard option back into VS2005. I wish it *worked* perfectly, but that’s a different problem altogether. 🙂
I’ve got a ton of code written in MFC and a rapidly aging interface. We don’t *want* to port the back end code out of MFC, but we’d desperately like to be able to put a modern looking interface on the client application without going absolutely insane. And we’re still an MDI app and likely to stay one for a while.
In other notes:
It’d be nice to have a set of serialization macros that supported schemas for virtual base classes that contain data values. I hacked my own, but I hate having to do that.
I would *love* to see a cookbook for porting an MFC app using views and dialogs (and things like DDX/DDV) to use some of the .NET style interface controls.
Could we *please* have an object that combines a Mutex (single writer) with a Semaphore (many readers) in MFC? Once again, I rolled my own, but I’d rather not have to.
CMultiLock doesn’t execute the Lock function of the underlying objects. I *understand* this is by design, but you should at least *document* it.
Will Class Designer finally work for VC++?
That’s the only thing I really want for XMas.
We use VC8 and C++/CLI to compile up a legacy MFC project with over a million lines of C++ code into a single .EXE
My #1 request — get incremental linking working with /clr!
When we compile the project as managed (/clr), it always does a full link, which takes over 4 minutes on a fast machine and severely reduces our productivity. When we compile the code as unmanaged, we get incremental linking and life is good again. We want to take advantage of all the new .NET stuff, but the increased edit-compile-link times are killing us.
My #2 request — Fix intellisense
We use a lot of #ifdef’s in our code and VC8 frequently gets confused about what is compiled and what is not. I’ll change a #define, wait for the intellisense update and code I know is now in the project is still grayed out. Sometimes making a change to the file seems to force intellisense to recheck the file and enable the code, but the flakiness is annoying.
For VS2005, my #1 request is fix bugs before adding features, and don’t make us upgrade to VS2008 to get the fixes. On a large project team, the cost to upgrade is a major problem, and we all need to run the same version.
Right now, my #1 quality problem is that mixed mode debugging often randomly hangs the IDE (have to kill process on devenv.exe) when single stepping in C# or C++. I’ve given up trying to use it. (I’m on a multi-core machine)
Also, why is there no list of hotfixes published since SP1?
I would love to see enforced code formattting in C++ (like there is in C#). I’ll also put another vote in for Intelli-sense and interop.
[Hendrik Schober]
> VC never catches this silly error.
That’s a missed chance to detect invalid code. While I’d like VC to detect this, regularly compiling your code with multiple compilers will catch this.
There is a second, more lethal type of nonconformance: the type that causes you to rewrite perfectly valid code in order to work around a compiler’s deficiencies. There are ways to "take advantage" of two-stage name lookup in this manner, but they are all rather obscure.
[jogshy]
> SSE3 optimization compiler option ( /arch:SSE3), SSE4 support too.
[Thomas]
> More optimizations and support for SSE3, SSE4, etc…
VC9 has intrinsics for everything except AMD’s very recently announced SSE5. This includes SSE3, SSSE3, SSE4 (both SSE4.1 and SSE4.2), and AMD’s SSE4A.
/arch hasn’t been extended to automatically use these new instructions.
[Digby]
> What IDE/compiler tools does the VS Dev team use ?
Speaking for myself: I don’t use an IDE. Never have, and probably never will. I use a Notepad clone (Metapad) for editing, and Source Insight for browsing large codebases.
The VC compiler and libraries are, of course, built with themselves. We bootstrap them on the command line, which is now mostly powered by MSBuild. (Andreea Isac, our MSBuild guru, blogged about this at .)
[Thomas]
> Faster compilation times, ala VS6
[Malcolm D]
> Would like faster compiling.
Orcas includes a wonderful bugfix that will significantly accelerate the compilation of large projects that are template-heavy and use PCHs. I remember the speedup as being around 20% for a real example.
> Getting C++TR1
We’re working on it! 🙂 See Bill Dunlap’s comments above.
[Thomas]
> More frequent updates to visual studio
Goodies are "closer than you might think", to use Bill’s words.
[vick]
> compilation ERROR messages with templates:
Agreed! I love-love-love templates, but even I must admit that template error messages can be rather frightening.
Whenever you find a code construct that triggers a warning or error that could be significantly better, PLEASE file a bug in Microsoft Connect.
As you can imagine, this sort of thing is really difficult to test for, so customer examples are super valuable to us.
[Mike Hudgell]
> I’d vote for a tick box to remove all
> managed (.net framework) help from the help
I’ll second that! 🙂
[Bill Roper]
> we’d desperately like to be able to put a
> modern looking interface on [MFC]
We hear you! Some Vista support was added to MFC in VC9 (see Pat Brenner’s post at ). On top of that, to quote Bill again, "we are working on a huge update to MFC that should knock your socks off".
Stephan T. Lavavej
Visual C++ Libraries Developer
Re: What IDE/compiler tools does the VS Dev team use ?
I can’t speak for everyone inside Visual Studio but I am using the Beta-2 release of the IDE and for tools I am using a set of tools that was built on August 15th.
Especially from the tools side I always like to ensure that I am using a really "fresh" set of tools – doing so makes it easier to catch potential problems sooner.
Jonathan Caves
Visual C++ Compiler Team
I hope the decision not to make C++/CLI a "first-class .NET language" is not based on someone’s personal affinity for a homegrown language (C#). That decision will only pigeonhole .NET itself as a RAD environment for in-house projects that aren’t required to stand up to commercial scrutiny. And that’s why Hungarian notation IS a good thing: what happens underneath IS important.
If Microsoft starts placing more emphasis on native C++ development and leaves C++/CLI to languish in obscurity, then why use .NET at all? There are some great tools and technologies in MSVC 2005 and .NET, and it would be a shame to limit them to fluff languages that hide details from the programmer.
C# was Microsoft’s answer to Java. Java never lived up to its hype. So let C++/CLI be the killer language it could be for ALL Windows development.
I really liked working with VS2005 until I stumbled upon lots of bugs and broken features. The dependency settings are broken. While debugging, VS locks the built executable; when I try to rebuild the application, the linker is unable to update the exe (access denied). Grrrr. The VS 2k5 SP1 installation is just too painful: on the third time I tried to install it, it crashed, and the only way I could fix my Visual Studio installation was with a clean Windows install. You guys are unnecessarily bloating the IDE, releasing revision upon revision where none of them is any good. I see good improvements in the compiler, but the IDE tends to get worse. I don’t see why a company should waste money and developer time upgrading to newer versions of VS when the returns are so minimal. Until a couple of years ago I used to feel that VS was the best app from MS. I don’t feel so anymore.
1. I’d like to see functional IntelliSense (like in C#). Often it doesn’t work properly, or at all (the *.ncb file is more than 40 MB).
2. The IDE should be faster (like VS .NET 2003).
3. Fix the annoying bug in the cooperation between SourceSafe and the VS IDE: right-click on solution -> Get Latest Version, then switch to another app (not VS nor VSS), and if you have a confirmation dialog from VSS you cannot choose anything but to hit the Escape key.
4. VS has a problem with resources which are modified by an external program; you have to manually recompile resources.
5. Sometimes code generated by VS is wrong and a rebuild is needed. Why?
As pointed out by many people:
*Intellisense
The exact problem: if you change a header file, the IDE will often freeze for quite some time. The same happens when first loading a project. Number-one pain point.
*Clipboard ring
Where did it go? It was in the toolbox in VS 2003.NET.
I realize leaving out the clipboard ring discourages "cut and paste" programming… but it was quite useful to be able to cut and move large sections into different source/header files, or to copy sections to utility files.
It was also useful to select and paste the same comment into multiple places in the code
Thanks.
Chad Lehman
Chad
The clipboard ring still exists in VS2005 and VS2008, it just no longer publishes the text to the toolbox. If you use Ctrl-Shift-V, you can cycle through your recent sections of copied text.
-Meghan
1. support C++ TR1
2. enhance IntelliSense to at least the level of the C# editor, or even beyond Visual Assist^^
3. class diagram like it has been in rational rose but not in visio
I’d like to see the dialog editor as competent as the Windows Forms designer. I don’t see why this functionality wasn’t part of VS as far back as 2002 or 2003; after all, the interface is separate from the code it generates. The auto guides alone would greatly improve the dialog editor.
Hello, Jens Winslow, Dwight and Kirchhoff,
Thank you for your interest in C++ Class Designer. In 2008, the scope of support for C++ is restricted to read-only visualization. That is, C++ classes can be viewed in the diagram, but they cannot be created or modified in the diagram. We’ll make improvements in the future.
Hello, lxxxk,
re your comment "3. class diagram like it has been in rational rose but not in visio", I’ll reflect this to the feature team. Thank you!
I would like the profiler to be available from the Pro version upwards. (Even if only in a restricted version – e.g. function timing/coverage. That is so useful for finding which bit of code is slow.)
What I would also find useful is the resurrection of the component gallery AND a vastly extended ‘wizard’ for generating a class stub.
eg.
I want a property sheet/3 pages, modal, not wizard style, no apply/cancel/whatever buttons…
or
I want a new class, called Cxxx, it is to be a singleton wrapping a text file.
or
I want a new object, pure virtual, and I want to use a linked list collection to contain it…
1. Fully agree with all the guys here about the IDE – it has too many memory leaks as well…
2. The Team System Profiler is terrible: not intuitive and practically useless.
Strange as it might sound, usually the performance problems are not(!) ‘for’ loops…
If you would like to see how a profiler should be, I suggest you look at the new "BoundsChecker" profiler – graphical and easy to use. We are using it and it’s a tool that really solves performance problems.
I’ve been using Microsoft’s C++ development environment since 1.0, both in large team development and for solo projects. Every so often I look at the competition to Visual Studio and end up saying “what competition?”
I used to use Intel’s compiler engine with MS’s IDE because it produced slightly faster code. But the continual good relationship with Intel means that MS’s compiler is still near as damnit as good.
I agree with the other comments here about focusing less on managed C++. Don’t worry, guys: We hard-nosed C++ hackers all use C# too, and know when it’s the appropriate tool.
So: A wish-list.
1. Handle multiple monitors better: VS should support two full-screen MDI windows, not just one, which you either have to stretch across the monitors, or else have undocked windows on the secondary monitor.
2. Tightly integrate XML doc (with pretty XSL stylesheets for those of us who are lazy) for C++, not just C#.
3. “Push/pop” cursor caret: So I can “Go to Definition” of a symbol, then “pop back” to the reference. I shouldn’t need to code this basic IDE functionality in macros.
4. I should NEVER, EVER, have to use Google or Google Desktop to find MSDN documentation, but it’s usually more effective than using MSDN’s own navigation and search facilities! Get it together, guys!
5. size_t should be blue in the source code editor (optionally), like wchar_t is.
6. Configuration manager should load/unload projects too, or at least indicate clearly in the Solution Explorer which projects are going to be built and which are excluded from the config.
7. Better error reporting from the compiler for templated code. Just ask anyone who uses BOOST.
Josh Greifer
[Stephan T. Lavavej]
>>That’s a missed chance to detect invalid code. […] There is a second, more lethal type of nonconformance: the type that causes you to rewrite perfectly valid code in order to work around a compiler’s deficiencies.<<
I know that your emphasis was on making VC compile conforming code, not to reject non-conforming code and I understand this policy and think it was sound as long as you were so far behind std C++. However, AFAICS VC by now doesn’t give us any more such problems than the other compilers we use. You have done a pretty good job over the last couple of years and have caught up a lot. How about now moving your emphasis regarding std compliance more towards what hinders productivity? The lack of two-phase lookup severely does (and so does export, as I explained earlier).
But the current ‘Visual C++’ filter gives you managed and unmanaged documentation. If you select ‘Platform SDK’ then you only get the Platform SDK and not the C++ language stuff [basic, but sometimes necessary].
A ‘Visual C++ [unmanaged]’ filter would work fine…
I’ve been developing 24×7 telecom server solutions with C/C++ for as far back as I can remember. My main development platforms in the last 10 years have been VC++ 6.0 (a lot of time here), VC++ 2003 and VC++ 2005; others have passed by me, but I will focus on the experience with these three compilers/IDEs.
Mainly native-code development, with some incursions into managed code.
Missing:
– More Native ATL type libraries examples:
o Some WS infrastructure that I can use instead of DCOM (great, but doesn’t play well with others; out of fashion) or WCF (great, but not a native technology, so I’m not upgrading to .NET just to use this). My current solution is to use ATL Server with some changes so it can be used as an HTTP/TCP server, or to use gSOAP. Not “upgrading” the communication layer for native apps causes “stress” in the development of distributed native apps.
o Keep up with trends in development. For example a good and fast XML library. We already have XMLLite that’s great, but how about a little more functionality and a class that integrates the API with STL programming concepts!?
– Key technologies missing, don’t want/need C++/CLI to keep up with C# innovation but I want to keep programming in C++ instead of having to use C# (ex: WPF)
Must have:
– C++0x
Please do more:
– Static analysis
– PGO
– OpenMP (still using Win32 threads, but see a lot of potential here)
– C++/CLI (a little hard to swallow at first, but comparing it to MC++ and to C# no reason for me to use C# anymore)
– STL.NET (just great, how about keeping the non supported VS2005 compatibility?!)
Problems with the IDE:
– A lot slower & buggier
Problems with the help system:
– Have to crawl through a lot of .NET stuff to be able to find Win32 API stuff. Other stuff simply has just vanished from the local help (for ex: some OLE…. APIs)
Forget …
vastly improved … experience, rich-client user interfaces, Windows platform innovation,
just make it fast.
My fully loaded multi-core machine running the VS 2005 IDE today is slower and less stable than my 100 MHz single-core machine running the VC++ 6 IDE was in the 90s.
VS IDE is …
– slow to launch
– slow to shutdown
– slow to open a solution
– slow to close a solution
– slow to start debugging
– slow to stop debugging
– crashes more than it should
– is a resource hog.
BTW – the comments refer only to the IDE. The compiler is excellent.
Josh –
Regarding your request above
<3. “Push/pop” cursor caret: So I can “Go to Definition” of a symbol, then “pop back” to the reference. I shouldn’t need to code this basic IDE functionality in macros.>
Have you tried the "Go Back" functionality? There is a drop down on the main menu bar that will allow you to go forward and backward in time from where you have visited and there is also a key binding associated with it. For some profiles this is Ctrl + ‘-‘. I think this will do exactly what you want.
-Meghan
Hi all,
I’m excited to see the enthusiasm for Visual C++ and the passionate conversations. The C++ team is actively watching what you are asking for and incorporating that feedback into future plans. I look forward to continued collaboration with you all as we continue to make improvements to the C++ compiler and IDE support. Keep talking, we are listening.
-somasegar
Meghan – thanks for the ‘Ctrl -‘ info. (frantically revises project timeframes…)
Since posting, I’ve watched Bill Dunlap and Steve Teixeira’s webcast. It was good to hear that the MSVC team are focussing more on native and interop support. Having Herb Sutter so closely involved is also a good sign that MSVC will keep up with standards development.
Generally, if my source code in C# doesn’t have any blue wavy lines, it will at least build ok. C++ would benefit from this important productivity gain.
So When I’m coding C++, I’d like the IDE to do something similar to C#’s reference binding intellisense feature, e.g. If I type
SOCKET s;
The IDE should prompt to insert "#include <winsock2.h>" — I shouldn’t need to compile the code to find this kind of error.
In fact, better management by the IDE of #includes, #imports and namespaces presents a lot of potential productivity gains.
I should be able to inject code design patterns (e.g. pimpl). I don’t think of this as refactoring or source code templates.
I guess C++ needs a smarter internal representation of the program at code-time — the "compiler" needs to be active, and to "know what I’m coding" *while* I’m coding.
Josh
Here’s a simple request (tweak):
Starting with VS 2005, if you try to clear all bookmarks in a document you are asked "Are you sure you want to delete all of the bookmark(s)?" It would be great if there were a way to turn this question off.
Same for breakpoints as well – delete them all without being asked "are you sure".
Thanks
dig
Want to know the single dumbest thing about the Visual Studio IDE?
If you are using Microsoft’s TFS for source control, the IDE knows a simple one-to-one mapping between the working directory on the file system and paths within TFS. This is how it is possible to type ‘tf get’ from the command line with no other arguments, and it just works. Great.
So why, when I open a file from Source Control Explorer and start editing it, is the IDE so dumb that it can’t automatically check out the file for editing? The inevitable answer is that all the headers should be "bound" to projects, but what about very large numbers of headers for shared libraries? What’s the point? Why not just make the IDE marginally less dumb?
Note that by right-clicking on the tab for a file, I can choose ‘Open Containing Folder’ – a very helpful feature. So why not ‘highlight in Source Control Explorer’? That would be a step in the right direction. But seamless automatic check out would be better still.
Currently I’m working solo, and the "solution" I’m working on contains 48 projects (of which 17 are unit-test applications). I’m doing my best to manage it, but VS could make it a lot easier for me:
Better integration with VSS, please.
I still need to launch VSS (6) to label projects. I should be able to do that from a context menu.
C++ development would benefit from more target-focused concepts: there should be VS entities called "Applications", "Executables" and "Libraries" (between "Solutions" and "Projects"?). E.g. for an Application I want to —
Deploy/Create Installation
Update Version (no editing of VERSION resources)
Update Version History (Readme)
etc
Applications, Executables and Libraries (source, object, dll, COM), not "Projects" are what we build in C++.
Josh
Hi guys,
There’s a lot I love about the IDE – it’s still the best out there – but there are things I want and need that it can’t do right now.
Firstly, we write in C++ to get performance and control – threads, management of small resource pools (such as WinMobile) and access to lower level libs makes the job a necessity – managed code is bloated, the overhead is just too big for some platforms.
That’s my justification out of the way; now what I want is:
1. I know this sounds simple and dumb, but it’s often the case that there’s a library out there that does something I want, but I won’t know unless I move in certain circles. XMLLite being a case in point. Improve the help – make it component-centric, not function-centric – i.e. list anything to do with networks under networking.
2. MAKE WHAT YOU HAVE MORE ACCESSIBLE ! – trying to incorporate something like a simple SAX XML reader into a C++ project is, to be honest, a nightmare. I’d love to see a better component library so I can add a reader without having to read pages and pages of web. Easy in managed, not in native.
3. Update your sample code – you rely too much on CodeProject. Most samples don’t build under VS2005/8 and there’s no upgrader.
4. The property editor for dialog design drives me nuts – it’s a pain. I spend half my time looking up and down the list for something simple that’s badly named.
5. You ditched VSS – huzzah!, well done.
6. The IDE could be better: a better image and icon editor, more support for 24/32-bit colour, transparency and image types. The text editor is decent (although I’d like section expand/collapse to be off by default). And it’s slow…
7. Whilst I realise Vista is important, most people still use XP and will for some time – targeting specific features for XP and Vista is hard work.
Rant over, you’re doing a good job – we’re here to stay BTW…
Chris
I would like better support for environment variables. That is, in addition to the built-in variables like $(ProjectDir), I would like to be able to use environment variables – for example, to set an additional include path such as $ENV{MyProjectRootEnvVariable}/include.
Also when starting devenv /useenv, it would be nice if variables could be set separately for different platforms. Perhaps instead of just INCLUDE, LIB and PATH one could have INCLUDE_x64, INCLUDE_x86 environment variables.
One suggestion for the next Visual Studio 200x editor: like Outlook, it could use Word as the code editor, so it would be possible to mix code with objects (tables, images, formulas and so on). This feature would allow high-level code documentation.
Thanks.
While I agree VC6 was really good, and VC8 is OK,
I still want to have more features.
Some while ago, I used Eclipse + Java.
This was heaven – of course not everything was good,
but following features stood really out.
1. Quickfix (Ctrl+1)
2. Refactoring … renaming a java class in eclipse is no problem – it takes me less than a minute.
In C++? Renaming a class? I won’t do it without a very very good reason… it’s just too painful.
Good refactoring support, really changes how you develop programs! So please add it.
3. IntelliSense that just works… in VC8 and VC6 it sometimes doesn’t work, and there is no feedback about why it doesn’t work…
other wishes:
4. Of course, C++0x support. 😉
5. Fix the .NET Framework Distribution problem.
Nobody has the latest .NET framework installed.
The binaries are too large to distribute with your product.
And telling customers to download 20 MB+ from microsoft.com is also not helpful in convincing them of your product.
JE
I develop console-based, computationally intensive (3–4 hour run times) programs for 32/64-bit Windows and Linux systems. The same code must compile on each operating system seamlessly. VS2005 is my development platform and it does what I need reasonably well. We would like to move this application to a multi-threaded / computer-cluster environment, but Windows/Linux code compatibility is frustrating our efforts. I like how C# does this, but the performance of C# is a major drawback.
C++ needs to be a full citizen from now on. When I was at a recent MSDN event, the question was asked how many VB programmers do we have here, then how many C# programmers, and that was it. I wanted to yell out C++. I still think 2005 (VC8) is better than 2003 (or 2002, yuk) as far as C++ is concerned.
The help seems broken compared to VC6, and the integration of C++ is not nearly as seamless as C#’s. Interop with managed code is a must, as are more tools to do the same stuff in unmanaged C++.
Keep fighting the good fight!
I second Glen’s comments regarding C++ being a full citizen. With C++/CLI, why should we have to bother with annoying C# syntax?
Here’s a simple one: the Find In Files output should not be cleared at the start of each search operation. Yes, we have two Find Results windows we can alternate between, but we wouldn’t really need them if the search results were simply appended to the results of the previous search. Appending search results provides a very useful search history. I often find myself using Textpad’s file search mechanism for this reason alone. And if possible, please include this in the next service pack for VS2005. Thanks. 🙂
Besides inteli-lack-of-sense being too slow, I find it "digs" too deep.
90% of the time the function I want is "in" the current class, not at some parent class level, but IntelliSense insists on listing all functions, making it impossible to find the function you want – unless you already know its name (and hence do not need IntelliSense).
I would love to have IntelliSense list the current class’s member functions, then a line, then all the "other" parent stuff. I.e. for class B, IntelliSense shows:
MyB()
—–
MyA()
…
for

class A {
    int MyA(void);
};

class B : public A {
    int MyB(void);
};
I wanted to respond to recent comments by Glen and Cygnus…
>>I hope the decision not to make C++/CLI a "first class .Net language" is not based on someone’s personal affinity for a homegrown language (C#).<<
The Visual C++ team is responsible for determining how C++/CLI fits into the overall Visual C++ product strategy. Please rest assured that this issue has absolutely nothing to do with any emotional attachment for C# and everything to do with the feedback we receive from Visual C++ customers.
What we have heard from an overwhelming majority of customers is that C++ is most important to them in native code and multiplatform scenarios. We also find that C++ developers are perfectly comfortable working with C# for .NET-specific parts of their application, such as UI code generated by IDE designers. We on the VC++ team feel that we can offer customers far more value by focusing on those native code scenarios that ONLY the VC++ team can deliver on, as opposed to diverting resources to provide C++/CLI experiences for designer-oriented scenarios that are already handled well by C#.
>>That decision will only pigeonhole .Net itself as a RAD environment for in-house projects that aren’t required to stand up to commercial scrutiny.<<
I’m going to have to agree to disagree with you on this point. My feeling is that the success of .NET as a platform does not rest on whether or not certain .NET designers or features are surfaced with C++/CLI support.
>.<<
I agree. This is why C++/CLI is an important piece of our product strategy. C++/CLI offers by far the best native/managed interop experience as well as the greatest control over runtime characteristics. I encourage everyone to leverage C++/CLI in cases where these things are important.
>>If Microsoft starts placing more emphasis on native C++ development and leaves C++/CLI to languish in obscurity, then why use .Net at all?<<
Hopefully I’m helping to paint a more accurate picture of the role of C++/CLI. In a nutshell, we’re focused on areas where we can add maximal value, and we’re deliberately avoiding investments in "me too" areas where we can offer little value beyond a "C# with a preprocessor" experience. In practical terms, this will often mean leaving the UI designer space in the capable hands of C# and VB.NET while VC++ focuses on ensuring developers can get the most out of the "guts" of their software.
>>C++ needs to be a full citizen from now on. When I was at a recent MSDN event, the question was asked how many VB programmers do we have here, then how many C# programmers, and that was it. I wanted to yell out C++.<<
Assuming you mean a full citizen of Visual Studio, I wholeheartedly agree! I also agree with you that the message that goes out through our field-focused organizations can be overly .NET-centric. However, we’re working with these field-focused organizations, such as Developer & Platform Evangelism, to provide more balanced managed/native messaging going forward.
Thanks!
Steve Teixeira
Group Program Manager, VC++
Clean solution/ rebuild solution – delete the .aps file!!!
Steve, thanks for your comments. They are appreciated. Can I ask, then, if you intend to leave new .Net UI development to C# and VB but fully support C++/CLI where system performance matters, why don’t you support Web Services and WCF in C++/CLI like you do in C#?
And besides, we have enough technologies out there to keep track of. It makes it much easier if we can work with them all in the same language.
[Josh Greifer]
> 5. size_t should be blue in the source code editor (optionally), like wchar_t is.
Remember that size_t is a typedef, while wchar_t is an actual type. The distinction occasionally matters, such as when overloading.
Stephan T. Lavavej
Visual C++ Libraries Developer
Hello Digby
Re your question: What IDE/compiler tools does the VS Dev team use?
Although Jonathan (Caves) answered you above, he has just posted a much more specific description of his setup on the VC blog if you are interested in much more detail. See here:
Thanks
Damien Watkins
Visual C++
Hi Cygnus,
>>Can I ask, then, if you intend to leave new .Net UI development to C# and VB but fully support C++/CLI where system performance matters, why don’t you support Web Services and WCF in C++/CLI like you do C#?<<
I think it’s fair to say that this is a hole today. I agree that we need to improve the VC++ web services story.
>.<<
While RAD tools may make it easier to write bad UI code, there is nothing inherently bad or inefficient about writing UI in C# or VB.NET. One can certainly write good UI code in C# and VB.NET as well, and I work with a lot of customers that do this on a very large scale.
>>And besides, we have enough technologies out there to keep track of. It makes it much easier if we can work with them all in the same language.<<.
Thanks again!
Steve Teixeira
Group Program Manager, VC++
I see lots of comments about the help filter for C++. Gordon mentioned that the Orcas release has a new C++ (native) filter. I will see if we can put out a file and registration for a filter to fix up your earlier releases/libraries. If so, I’ll post back here and also on my blog.
April
Our application area is CAD and with 10 million lines of code in C++ & C#, Visual Studio 2005 is good, but it’s not great. Some improvements that we’d like to see:
In contrast to some of the comments here, interoperability, improved C++ standard support, esp. C++0x, and STL improvements are high on the wishlist. That said, we often find ourselves using STLport rather than Dinkumware, as it’s faster and offers debug support. Further improvements to C++/CLI are also high, as we’d rather use a proven language as a basis for .NET over one of Microsoft’s inventions that we continue to experience fundamental flaws with, such as non-deterministic object lifetime (vs non-deterministic memory cleanup, which is much easier to manage).
Help separation: if I want to search for a class in Win32, finding 100 .NET uses of the word is really irritating. The same goes for C++ vs C++/CLI. They’re two different languages; can we please avoid mixing them up?
On top of that, knowing what features apply to what codebase is so important and yet glossed over by the help. Often a given function or setting is listed as ‘this applies to the .NET framework 2.0’ – when it’s a C runtime function, I worry! A clear separation between native and managed is critical. I can remember looking up manifests and assembly configuration file settings and nowhere did it cover the absolutely critical differences between managed and native settings.
Help improvements: I end up using Google for most of my help, it searches MSDN far more effectively. I think that safely covers how effective the searching is in the current help. The Visual Studio 6 help by comparison actually finds what I want in a tenth of the time.
Downloadable/cacheable help: The Internet is cool, but please can we have a feature that has help on the hard drive? Something simple like a help equivalent of offline files would be nice, where I have a cached copy available when the Internet is inaccessible, or slow, or MSDN is down, again.
IDE performance: VS2005 is very, very slow when you give it large solutions (100 projects, each with on average 100 source files). IntelliSense kicking in when you least expect it still freezes the application – even after the Service Pack that was supposed to stop this – and is deeply annoying. The general responsiveness of the application seems to drop dramatically under this amount of loading.
Text editing: Can we please not have managed keywords highlighted in native code? It’s really disconcerting when you happen to name a variable ‘property’ and have it go blue, when you’re writing native code only. It’s things like this that present an appearance of a focus by Microsoft on managed code only.
IDE integration: How on earth did we end up with two totally different IDEs within one, one for C++ and one for .NET? In C++ I get a separate window for project properties, I get property sheet support, and generally the feel of a fully fledged IDE. In C# I get a docking window with a minimal set of properties and only half the features that the C++ IDE offers. Please integrate these so we can get used to a single set of actions.
Property sheet support: when dealing with hundreds of projects, being able to use property sheets to define common settings is a godsend. Sadly, the IDE only partially recognises these useful features. There’s no proper support for saving them, or for source control over them, making a useful feature a pain to work with. Worse, .NET doesn’t use them or their user macros at all!
IDE Properties: Back in VS6 when I selected multiple configurations, I’d get the valid combination of things like libs, preprocessor, and if I edited those, I didn’t lose everything else. Today, if I select All Configurations, far too many boxes go empty, instead of displaying the valid intersection of the configurations.
This means that if I need to change a single preprocessor entry in 12 different configurations, I have to go through each and every one manually and pray that I didn’t miss one.
On top of that, can it please display by some means in the main IDE properties window, inherited libs, preprocessor settings etc. Often someone will say, how come this project doesn’t have X defined – in fact it is, it’s just buried 3 levels down in a property sheet that the project inherited. Worse, someone will say ‘it can’t do that, it doesn’t have that setting set’ and in fact it does have it inherited. The user should be able to see the result, not just the project-level settings, in the main properties window.
Crashes: Despite sending hundreds of error reports when the compiler or linker crashes, we’re still getting common repeatable crashes with incremental compilation and linking. Sadly we can’t send the reproduction information to Microsoft, as it’s all our copyrighted source code!
Debugging: getting debugging to work with managed/native transitions, and crash-dump processing with managed code present, is still very complex and doesn’t always work. In particular, when the startup project is native but the launching exe is managed, the Auto debugging mode doesn’t work out that it needs Mixed mode, or that the exe (even when it’s in the same solution) needs to have unmanaged debugging turned on. In short, what should be a single-click debugging session turns into a hunt-the-checkbox game which usually ends up not working anyway.
Runtime checking: Please make the various checks such as stack corruption faster. It took an enormous amount of political battling to get these turned on even in debug only because they slow the process down so much. I know debug is for checking, but when senior developers who claim they never write buggy code are wanting a fast debug experience, it’s hard to convince them to have the checks on, and I’d rather we found bugs during development than later. Btw, if anyone hasn’t used these, do so, they can be a lifesaver.
Files: It’s all very well having the .proj and .sln files in XML, it’s a marked improvement over the old unreadable VS6 files, but please can you stop rearranging them every time there is a simple edit? A single change in dependencies in a solution reorders almost everything. Clearly whoever designed this never subsequently had to look at these changes in source control. Have a simple rule – one change, one line in the file changes (ok, it might not be this simple, but you get the idea).
Source control: please provide better support (which may be to the vendors rather than users) for other source control products. Far too often one cannot use the source control plugins written for VS because they don’t work in the way the source control application expects. If it requires liaising with the likes of CVS and Perforce, then please do it.
Replies to some comments:
‘One can certainly write good UI code in C# and VB.NET as well’
Sure you can, but that UI has to do more than just be a UI, it has to call non-UI code. Therein lies the problem and it’s just too much of a pain doing this interoperably at the moment.
‘4..’
Hear hear. This is so important it ought to go top of the list.
Consider that quite a few developers are creating portable engines that then have an OS specific bottom and top (system and GUI) and having to somehow have managed transitions for all of the GUI is an incredibly costly process in terms of maintenance and design.
Asking someone to rewrite everything in a different language is just a non-starter, so something somewhere has to enable this concept to work.
Just stick to the standard.
Ask yourself what can you do to fulfill it.
I am really glad you are making this opportunity available for all users to provide feedback. There will be lots of personal pet peeves of course, and you can never make everyone happy, but there is always hope for a company that listens to its customers.
I also want to say I’m very happy with the move towards full public betas and the express editions. This allows small developers to anticipate and prepare for API changes just like the big boys.
I’m a busy C++ developer. I don’t have the time or inclination to spend hours in the registry or installing scripts etc. I really don’t have time to write this note, but I feel I can’t complain if I don’t vote!
So here goes (VS 2005 feedback):
1) Make the most common tasks fast and reliable. Here’s a secret trick: try to develop the classic asteroids game using the latest VS, but on an old 386 or 486 system. If you can’t stand it, find out why. Performance really does matter. No, really do this. Take a week and do it.
2) Less integration, more inter-operation. A simple example: the start page has html links (I got to this blog via one of them). That’s great. But when I click on a link, the page is displayed inside VS. Isn’t it more likely that I already have a preferred web browser, and in fact am already an expert at using it? The problem with integrating such functionality into the IDE is that you don’t automatically integrate the user’s domain knowledge of how to use that functionality. The user has to learn new ways to do things he already knows how to do. I’d prefer that all context sensitive tools could be unbundled this way (the help system sort of works like this, but I’m not allowed to choose a different help program).
3) Make all dialog boxes re-sizable, and remember where I last put them and how big they were. You do this sometimes (Window->Windows…), but not always (Project->Properties). I like big dialog boxes for text fields and scrolling lists.
4) The task list window should show comment tasks from the file, project, or entire solution. VS 2005 only displays them from the current file. I don’t use this feature because of that.
Stephan, yes, size_t is not a keyword like wchar_t. Which is why I added "(optionally)". For me, the syntax coloring of typedefs which are essentially aliases for built-in types leads to easier to read code. I guess I’m looking for something similar to the useful cyan coloring you get when you make aliases in C#:
using size_t = System.UInt32;
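The native counterpart of that C# alias, the kind of typedef the commenter would like highlighted, is a sketch along these lines (`size32_t` is a placeholder name, not an existing type):

```cpp
#include <cassert>
#include <stdint.h>

// A typedef that is "essentially an alias for a built-in type"; the
// request is for the editor to (optionally) color names like this the
// way C# colors its using-aliases.
typedef uint32_t size32_t;

size32_t answer() { return 42u; }
```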
I’d like to see the ability to skip comments in search & replace.
I think you are missing a lot of potential users. I like to build platform neutral apps with ACE. But VC++ is always tough to make work with makefiles. It would be nice to support both so I could drop eclipse and EMACS and all of that stuff on the Linux side. I would love to be able to just scp and build on Linux once I’m done testing on NT.
Also, make the heap faster. It gets too slow when apps get busy. For a lot of my high-performance apps I am afraid to use std::string too much, which is a shame.
[Jeff H]
> STL improvements are high on the wishlist.
Which improvements, specifically?
> That said, we often find ourselves using STLport
> rather than Dinkumware, as it’s faster
If you find specific test cases where STLport (or libstdc++, or whatever) is significantly faster than VC’s Standard Library implementation, please file bugs in Microsoft Connect. We’ve already gotten two bugs from this thread, one of which appears to be a real problem that we’re investigating, and the other one appears to be a place where we’re actually significantly faster than libstdc++ (at least on the same OS, which is all we can ask for).
> and offers debug support.
How, specifically? VC8’s Standard Library implementation enables _HAS_ITERATOR_DEBUGGING by default in debug mode, which performs powerful correctness checks. Additionally, we’ve taught the IDE debugger to visualize STL containers and iterators. What does STLport do better?
> please can we have a feature that has help on the hard drive?
Doesn’t Visual Studio come with a copy of the MSDN docs? (I’ve never installed them myself.)
> Can we please not have managed keywords highlighted in native code?
That’s a bug – file it in Microsoft Connect, please.
> Please make the various checks such as stack corruption faster.
You mean /RTCs? /RTC is documented as slowing down code by 5% at most. If you’ve found a case where it’s more expensive than that, please file a bug (at the very least, the documentation would have to be updated).
[Josh Greifer]
> For me, the syntax coloring of typedefs which are
> essentially aliases for built-in types leads to easier to read code.
Makes sense. If I regularly edited with syntax highlighting, I’d want that level of customization. I think you can file suggestions in Microsoft Connect.
(Aside: I implemented hit highlighting in Outlook 2007, so any sort of highlighting has a special place in my heart. 🙂)
[Josh]
> But VC++ is always tough to make work with makefiles.
How so?
> Also, make the heap faster.
I’m going to sound like a broken record, but please file a bug in Microsoft Connect. "Make the heap faster" is not as actionable as "here is a test case that demonstrates how your std::string is slower than it should be".
Thanks,
Stephan T. Lavavej
Visual C++ Libraries Developer
I can’t help but comment on:
."
That is the very attitude that will eventually kill VS for C++ users. If the majority can use C#, then some can’t. And who writes code that some coworkers can’t maintain or upgrade?
The biggest problem with intellisense isn’t in the CPU cycles – we know that is done in a background thread at low priority. It’s the fact that it uses a lot of memory (either directly or indirectly by the OS during file caching), which causes paging that slows devstudio and everything else to a crawl. This is on a 2 gig RAM machine with dual procs. Along this same line, I had devstudio take over five minutes to shut down. Again, with a 2 gig machine and nothing else open. Why? Again, it’s because of the paging as devstudio frees every piece of memory that’s been allocated. Now I’m in the habit of using task manager and kill process. Bam – less than a second. How about adding a command to do that!
Echoing others, help has gone downhill ever since you moved to html help. With no perceived benefits that I see, help just got more and more useless and much too slow. I now use google for pretty much all my help needs.
Bring back the profiler! What a crock that you made it only for the team system.
We work with dual screens / desktops, and it would be really nice if devstudio worked better on very wide screens. Now when you open a file, the default window is way too wide. How about an option to set the default width & height? Even better, let us move the text windows outside of devstudio like the other windows (output, call stack, etc.).
Thanks,
Leigh Stivers
Still about IntelliSense.
In VC 2003, when I press F12 on a member function declaration in a base class, a dialog pops up listing the function definition in the base class as well as the function definitions in all the descendant classes, so that you can choose one of them. This may not be accurate, since IntelliSense cannot determine which class we want, but it is a great feature. We are often faced with a virtual member function invocation via a pointer to the base class, and when reading the code we don’t know in which class this member function will be invoked. To better understand the code, it is quite useful to see all the possible implementations of the virtual function, and VC 2003 provides this great feature: when we press F12 we see a dialog listing all the possible implementations. While VC 2005 makes IntelliSense more accurate (when you press F12 on a function declaration of a base class you are navigated to the definition in the base class), it fails to recognise the purpose of reading code – to understand code. I hope this feature will be considered in future.
Thanks,
John Tang
Bring back Brief emulation.
TIA 🙂
I develop my applications in VS C++ mixing managed and unmanaged code. Only the presentation layer is written in VS C#. That is because VS C# has always had better design support than VS C++. The superiority in power of the VC++ language over the C# language is indisputable (for those who know both).
The biggest problem of VS C++ right now is the lack of support for designer tools like the ones for VS C#.
A small feature I’d like to see improved on is the Configuration Manager in VC++.
It would be great for us who develop on multiple platforms to be able to add new platforms easily without having to write a VS-plugin for every one we want to build on.
Right now, with the default config, only Win32 and x64 are available in the choices. It’d be nice to be able to manage our GC, Wii, PS2, PS3, etc. projects the same way, easily.
Perfect feature to fix for an internship!
Thanks,
Fred
Daniel,
re: support for TFS versioning of files that are not in a versioned solution/project.
Yes, I agree, this annoys me too and we plan to address it in the future. The IDE integration for version control has historically been focused around projects and solutions, and we did not realize until late that this was thinking about the problem too narrowly. I hope to get this addressed in our Rosario release. Thanks for the feedback.
Brian Harry
Product Unit Manager, TFS
Hello Feri T.
Re your comment: The biggest problem of VS C++ right now is the lack of support for designer tools like the ones for VS C#.
If your comment is specifically about designers/tools/features for targeting .NET, then we agree that languages like C# and VB that specifically target .NET provide incredible features for working with .NET. Our advice is definitely to use these tools from these languages whenever you need/want to. Going forward our goal will not be to provide parity with these languages/environments when specifically targeting .NET.
If your comment is about providing such functionality but for native development, then we would also agree that we do not have parity today (Intellisense being one such example often mentioned in this thread), but rest assured that our goal going forward will be to provide similar (or better!) functionality for targeting native code development.
Steve and Bill cover our motivation and reasoning in their Channel 9 video, referenced above, but I will include it here again:
Thanks
Damien Watkins
Visual C++? I didn’t even KNOW about C++/CLI until about 6 months ago, and it was just by accident that I found out. Prior to that, I had just tuned out .Net development because I knew C# and VB wouldn’t cut it where I work.
Hello Eric
Re your comment: “I am really glad you are making this opportunity available for all users to provide feedback. There will be lots of personal pet peeves of course, and you can never make everyone happy, but there is always hope for a company that listens to its customers.”
And there is absolutely no hope for a company that does not!
Re your comment: “I also want to say I’m very happy with the move towards full public betas and the express editions. This allows small developers to anticipate and prepare for API changes just like the big boys.”
Thanks, nice to hear that these initiatives are working well for you.
Re your comment; “I’m a busy C++ developer. I don’t have the time or inclination to spend hours in the registry or installing scripts etc. I really don’t have time to write this note, but I feel I can’t complain if I don’t vote!”
Absolutely, the only way to be heard is to speak up – furthermore, please feel free to use our VC blog on occasion too. And I hope you do not always have to “complain” too 🙂
Re your comment: “So here goes (VS 2005 feedback):
1) Make the most common tasks fast and reliable.
2) Less integration, more inter-operation.
3) Make all dialog boxes re-sizable,
4) The task list window should show comment tasks from”
Thanks for taking the time to tell us about these. A number of us are reading this blog almost daily and we are recording the suggestions. We are already addressing some, such as the amply identified Intellisense issue(s), but this thread reiterates why completing that work in particular is so vital. For other issues mentioned here, customer feedback helps us decide what priority order to put things in. As you say above, we unfortunately cannot address every issue for every customer, but we have to make sure we address the right ones first and foremost.
Thanks
Damien Watkins
Visual C++
I’d like to have a way of choosing which function/method to step into when debugging a line of code. Example line of code:
myclass.DoSomething(someObject.ToString(), someOtherObject.GetValue());
When debugging this line of code and pressing F11 to step into it, I always end up in the calls made to get the parameter values and I have to step out of those and then eventually I can step into the outermost call (DoSomething).
It would be nice to have some way of choosing which call to step into. Right clicking the method or parameter and choosing step into is an option. Also having some shortcut key "step into outermost call" could be an option.
This goes for all languages, managed or unmanaged; it would always be useful.
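Until such a feature exists, a common workaround is to hoist the argument expressions into locals, so F11 lands directly in the outer call. A minimal sketch (the class and method names here are stand-ins for the example above):

```cpp
#include <cassert>
#include <string>

// Stand-in types mirroring the example line in the comment above.
struct SomeObject  { std::string ToString() const { return "obj"; } };
struct OtherObject { int GetValue() const { return 42; } };
struct MyClass {
    std::string DoSomething(const std::string& s, int v) {
        return s + ":" + std::to_string(v);
    }
};

// Hoisting the argument expressions into locals means the helper calls
// can be stepped over individually, and F11 enters DoSomething directly.
std::string demo() {
    SomeObject someObject;
    OtherObject someOtherObject;
    MyClass myclass;
    std::string s = someObject.ToString();   // F10 to step over
    int v = someOtherObject.GetValue();      // F10 to step over
    return myclass.DoSomething(s, v);        // F11 now enters DoSomething
}
```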
Thanks
Fredrik Stahre
Standard conformance is critical.
It would also be nice to have implementations of the new tr1 and standard features out quickly.
Frankly, most of the opinions expressed here are ignorant. You have a lot of moronic customers who have no idea a) how to get best use out of the toolset and b) who plainly feel a sense of entitlement based on their experiences with earlier versions such as VC6/7. Half of the recommendations on this page are for features that are already supported, or to fix performance issues that are only an issue on sub-standard hardware. I’ve been with VC++ since the beginning and each version is a tremendous improvement on previous versions in almost every area. Developers whine about a lack of support for native development when VC++ is the single most capable native development platform in the world, bar none.
And the developer who suggested you take VS2005 and develop asteroids on it as a way of demonstrating the need for "performance".. he’s out of his mind. These little kiddies need to learn that new operating systems and new development environments require new hardware PERIOD.
Responding to Chris Hansen’s message about our stated intention not to seek parity with C# and VB.NET in the .NET designer space for C++/CLI:
>>That is the very attitude that will eventually kill VS for C++ users. If the majority can use C#, then some can’t. And who writes code that some coworkers can’t maintian or upgrade.<<
The recommendation we’re making here is that if you’re going to make a technology decision to build a user interface using one of the managed UI frameworks, you need to invest in building C# and/or VB.NET expertise on your development staff. I’ve yet to talk to a customer for whom this is a serious problem, but I admit I haven’t talked personally to all of them. 🙂
The bigger picture here is that leaving this space in the capable hands of C# and VB.NET gives us the ability organizationally to focus on the native UI space (which I fully admit we haven’t done a good job at in recent versions). I fully expect that we’re going to take some heat for stating our intentions and recommendations in this space so bluntly, but I believe it’s the right thing to do because it enables customers to plan their technology investments accordingly.
And replying to Cygnus’ message:
>?<<
We rely on a number of sources of customer input, ranging from the very anecdotal (e.g., meeting customers at conferences) to the semi-representative (e.g., discussions in broad-reach forums such as this) to true statistical instruments (we have a few internal market analysis tools that we’ve been using and refining over many years).
BTW, I really hope you don’t feel we’re deflecting requests about C++/CLI. Our intent is to be very up-front about our plans for C++/CLI and to help everyone understand how our plans for C++/CLI are informed by the realities of our business.
Thanks!
Steve Teixeira
Group Program Manager, VC++
+ try to break the speed (and responsiveness) of Visual Studio 6.0. The responsiveness of Visual C 2005 is way too slow.
+ get better stability of the editor, like it was in Visual Studio 6, and less resource consuming
+ change back to the help system of Visual Studio 6.0. current "dexplore" is too slow, too much flickering and too resource consuming.
+ show relevant help items when hitting the F1-key,
for all relevant C / Platform SDK / WDK keywords
(sprintf, SHCreateFolder, IoCreateDevice)
My desired improvements:
1) True long double support (not an 8 byte typedef) when extra precision is required for engineering & scientific applications. Optimizations and enhancements for floating point intensive applications. Most of your competitors are far ahead of you in this area. It seems like not much has been done recently for those of us who develop these applications yet they really are computationally expensive.
2) Improved IDE performance, esp. debugging large solutions with many projects and files.
3) Improve C++ IntelliSense. I think it is actually worse in 2005 than earlier versions. IntelliSense rarely works on my large solution.
Standards conformance has come a long way since VC++ 6.
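The long double point above can be checked with a small portability probe; on MSVC, long double is the same 64-bit format as double, while some other x86 compilers map it to the 80-bit extended format (this sketch just reads the standard <cfloat> macros):

```cpp
#include <cassert>
#include <cfloat>

// Does long double actually carry more decimal digits of precision
// than double on this compiler? On MSVC the answer is no (both are
// 64-bit); on compilers using x87 extended precision it is yes.
bool long_double_is_wider() {
    return LDBL_DIG > DBL_DIG;
}
```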
Better configuration of multi-target projects. We have projects we want to target to Win32, Win64, and various different Windows CE and Windows Mobile platforms.
Unfortunately, this requires a lot of manual work. When creating a new project, you can’t choose all of the above, so you must add the other configurations later (this is also the case for existing projects when expanding what they target). When you add a new configuration, you can make it blank or copy setting from an existing configuration, but you can’t copy from the default settings that you would get if you created a project from scratch with that configuration. That means, to create (for example) a Windows Mobile Pro config, I must either start from scratch and manually change all the options or I must copy from Win32 and then fix all the defines and other options that aren’t appropriate.
It turns out that the easiest way is actually to just create another project and then manually edit the XML of the original project file to add the new configuration, copied from the new project…
Small, but powerful, feature from the old Smalltalk days:
Right-click paste has a sub-menu off of it that shows you a list of your last, say, five clipboard contents (up to the first 20 or so characters of each). Much more intuitive and friendly than the clipboard ring. One often gets sidetracked when about to paste something; this allows one to easily dig back out without starting over. All text editing widgets should do this…
Improve the editor. Recently the editor destroyed my project and the compiler couldn’t build some of the pages of my project. I keep thinking: if I had a big project and the compiler corrupted it, what would I do?
Offer a list of control events as parameters in the editor pages (as VB and C# do) so the events don’t need to be typed by hand.
Don’t change the symbols. For example, the pointer symbol in VC++ 2003 is * but in 2005 it is ^. Overall I think VC++ 2005 has more problems than the last version.
I really agree with the people that keep missing VC6.
It was the IDE that was really superior to all others at that time. Now with VS2005 I’m not so sure anymore… I’ve worked with 5/6/2003/2005 and simply can’t say that 2005 improved much for me…
– It is slow(er).
– Intellisense is slow, a CPU-hog and often simply doesn’t work (SP1 improved that a bit)
– Caller graphs and "find all references" simply don’t work.
– There are still quite a few errors in it!
– for the docs: If space allows it, bring back the Win32-API documentation. proper.
– Where’s refactoring?
Why not improve 2005 first? VC6 had 6! service packs afaik and 2005 only got one, still has errors and is already being replaced by 2008… I’m not a friend of switching IDEs and installing redistributables on numerous machines. I’d only do that for serious improvements…
Imo there’s lots of people (including us) that use C++ and need to keep it that way because they’re doing cross-platform development (Win/Linux/Mac) and/or need highest possible performance…
Nice of you to provide VC++ Express. Though it lacks some stuff of the regular version it is a really powerful IDE.
regards
Kim Rosenbohm
I use both VS2005 and VC++ 6.0. At our company, we make most of our applications in C#. However, for our embedded devices (we have both CE and XPE flavors) they are totally in native code, and I don’t expect that to change. However, if MS could provide a toolkit to help communicate with WCF services straight from native code, it would be great. Then we can make all our communications the same. We have moved some of our managed code to use WCF. The only stopper to using it across the board is having to use it in our embedded applications (and it has to support the same code base for both WinCE and XPe). So by providing a standard C++ toolkit that doesn’t have many dependencies, you could possibly use WCF from native code on any windows platform, and that would be the coolest !
What about "#define" (macros) debugging? I have a lot of C++ code inserted as #define because I have a lot of similar methods, but It’s very hard to debug and find any error when it resides in macros. It would be nice if we have to debug #defines as if they were methods or any alternative way, such as automatic code expansion of them (like C++ preprocesor does in compile time).
Regards,
Sen
I’ve been using Microsoft’s C and C++ compilers since 1989 and I’m currently the only developer on an application with around 450k lines of C++ and MFC code. C# is not an option. Last year, I enrolled in the Empower ISV program expecting great things from VS2005, but was thoroughly disappointed and removed the compiler from my PC some weeks after installing it. I did not renew my Empower subscription.
Two comments which echo Stephen Kellett:
1. Bring back the Class Wizard – it is a fast and efficient way to manage an application’s boilerplate code and can be driven entirely by the keyboard. The VS2005 property-based UI is slow, buggy and requires me to remove my hands from the keyboard, grab the mouse, issue multiple clicks, some of them over tiny areas on the screen. I estimate it’s around 10% or 15% of the speed of the previous interface and it’s annoying as hell.
2. Throw away the broken CRT and MFC library installation procedure, or fix it. Many of my clients are banks who run isolated or locked down PC’s which are rarely up to date with service packs and almost never have internet access, and frequently, not even administrator access. I cannot install VC2005-compiled software on these machines.
I look forward to the promised update to MFC – what’s planned here, folks? Other than that, I really hope that the next VS can provide the efficiency of the 1998 environment together with the newer compiler’s runtime checks.
Try to make it worth my while to upgrade, folks. Please!
Intellisense/Autocomplete and Help needs to be more tightly integrated. When I type a function or object, and hit F1 on it, it should take me to THE (not one, not many) page that documents that function or object.
Also would be nice: an "Assist" pane in the VS IDE. When Intellisense/Autocomplete sees that I’m typing a function, the assist pane is a little HTML document that shows:
1. The function I’m typing, with up-down arrows for all the overloads.
2. Documentation of the arguments. (See Excel’s function wizard, for example).
3. Hyperlinks to THE MSDN documentation for that function (don’t bother with links to communities).
I know, Help and documentation is mind-numbingly boring. But this is something Microsoft should start investing in more heavily, despite what Charles Petzold says about Intellisense.
Wish #2, related:
Wish #1 is really an elaboration of the tooltip that is part of Intellisense. The problem with this tooltip is that it "gives up" on me. Expecting the customer to keep the caret in the argument list as a requirement to see the tooltip is too limiting. It needs to come up whenever I place the caret in a function, not just when I’m actively entering arguments.
Wish #3:
Please keep J# alive, and support 64-bit. If you can’t support intellisense, etc on it, for the time being, then disable those features.
Wish #4:
Please make configuration management more intuitive and robust. This Any CPU/Mixed Platform/x86/x64 stuff all has its reasons, but it’s just too complicated. (Fix J# 32-bit-onlyness.) I should be able to always code under "Any CPU" and enjoy the safety of not accidentally switching configurations.
Related to this: the configuration dropdown is lying to me. If I have a multi-project solution, the Configuration management dropdown is where the actual configurations per project live. As per paragraph #1, I really don’t care to be managing my experience at this level in the first place. I don’t really care about configurations!
Make debug/release a separate project property, and stop trying to cartesian-product-out your configuration matrix with the debug/release dimension. Managed code doesn’t have as many Debug-specific settings as native code does, but that doesn’t have to mean it inherits the messy configurability.
Other than that, Visual Studio is simply awesome that will keep my loyalty for a long time to come.
I agree with Brian. MSDN has so many problems; it isn’t well organized. Sometimes when I can’t work out how to use new controls I refer to MSDN, but I can’t get a clear answer. I think Microsoft must think about this.
I second the suggestion Michael H made. Please include true support for long double in Orcas..
I’ve read a few more comments from other users. I’d like to re-iterate a few of their requests:
* The ‘assist pane’ is a great idea.
* Once intellisense doesn’t lock the GUI, have intellisense pop up whenever the caret is placed in the argument list.
* macro debugging would be really nice.
* An easy way to store stack-frame information would be really nice as well. Stack frames in .NET exceptions are very useful when debugging. Some way of easily storing the same information in C++ (and C) would make life a LOT easier!
* C++/CLI support is key for .NET interop. It is so much easier (and maintainable) than C# p-Invokes. Please get tool support for C++/CLI up to par with C# tools. And don’t let C++/CLI fall behind technologically as a .NET language.
* Please support C++/CLI on Windows CE. I hear there are technological reasons that this isn’t possible. Please fix these technological problems in the next CE OS, then get compiler support for C++/CLI for WinCE.
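The stack-frame wish in the list above can be roughly approximated today with a scope-guard macro that records file/line/function into a trail, giving exceptions .NET-style context. A minimal, portable sketch (all names here are invented for illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// A scope guard that pushes the current file/line/function onto a
// per-thread trail on entry and pops it on exit, so an exception
// handler can snapshot the trail for .NET-style stack information.
struct TraceFrame {
    static std::vector<std::string>& trail() {
        static thread_local std::vector<std::string> t;
        return t;
    }
    TraceFrame(const char* file, int line, const char* func) {
        trail().push_back(std::string(func) + " (" + file + ":"
                          + std::to_string(line) + ")");
    }
    ~TraceFrame() { trail().pop_back(); }
};
#define TRACE_SCOPE() TraceFrame trace_frame_(__FILE__, __LINE__, __func__)

int inner() { TRACE_SCOPE(); return (int)TraceFrame::trail().size(); }
int outer() { TRACE_SCOPE(); return inner(); }
```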
Small request: more than two "Find Results" windows.
Mixed debugging of 64-bit code.
I’m part of a game development team that has just started to migrate to a C++ / C++/CLI / C# workflow (writing our engine code in C++ and game code in C#). We think that C# and C++/CLI are excellent, but there is something we’d really like –
Native/CLI code IDE improvements. The C# intellisense is amazing, and we’d like even a fraction of the improvements to be backported into C++. Things like the context sensitive list suggestions that you get in C# would be a godsend to C++ development.
Oh really? I’d like to see Intellisense working, without hanging VS.
In VC6 it is possible to share a pch across projects (in my case ~60 projects, from libraries to exes). In VC 2005, after a long war, I realized that this is not possible. Please allow sharing pchs across projects, for two reasons:
1) speed up compilation time
2) decrease disk usage (each pch is many MB and is the same in every project!)
Pepito Sbarzeguti, I opened a bug on supporting shared pch’s. But I think the IDE should just do the right thing and automatically use pch’s without user involvement.
I would like to see such basic IDE usability improvements as
– Switching between header/cpp with a keystroke, regardless if they’re in the same directory or not.
– The ability to add a new header/cpp pair to a new project with a single click. Newly added units should not be completely blank. It would be nice to customize the content, but every unit needs some kind of a header guard, a namespace, and the cpp usually needs to include stdafx and the corresponding header. It is currently too painful to add a new C++ unit/header pair, it works so much better with C#. Sometimes I wonder how people add new units to a project, because the IDE’s usability is poor when it comes to C++.
– It is a major pain that the IDE renames files to lowercase. ThisIsMyFileName.cpp is easy to read; thisismyfilename.cpp is not. It’s annoying that I have to constantly rename files back. Save should never rename, that’s what Save As is for.
– The C++ forms editor is not a first class .NET citizen. At the minimum, it shouldn’t litter the header file. Partial class support would be even nicer. I gave up on doing GUI development in C++/CLI.
Like others have mentioned, C# intellisense is so helpful. I love C++ with a passion, yet I enjoy writing C# code, because it’s so fluent.
Keep up the work on the ever improving C++ standards compliance.
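The default-content request above might stamp out something like the following pair (a sketch only; all names are placeholders, and this is not an existing VS template):

```cpp
// --- MyNewUnit.h (placeholder name) ---
#pragma once                        // or, for older compilers, a classic guard:
#ifndef MYPROJECT_MYNEWUNIT_H
#define MYPROJECT_MYNEWUNIT_H

namespace myproject {
    // declarations for the new unit go here
}

#endif // MYPROJECT_MYNEWUNIT_H

// --- MyNewUnit.cpp (placeholder name) ---
// #include "stdafx.h"              // precompiled header first, if the project uses one
// #include "MyNewUnit.h"
```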
I apologise if this has been mentioned, but I really would like to see an easier way of changing the compiler itself while retaining the IDE and (most of) its features. All of the projects I work on are cross-platform, requiring either special VS integration to compile the different binaries, or our own error-prone makefile projects, or even, in the worst case, editing in VS and building offline.
The "Makefile Project" gets us about halfway there but it does require hand-maintaining the makefile itself. If that part was automated (generating a makefile from the file/project/solution properties), and being able to swap out with a different compiler, it would make things go much smoother.
If there is an (easy) way of doing this already, I have not found it.
My wish list? only ONE thing: my main and only problem is the slowness and clumsy interoperability between Windows Forms and native code. The new marshalling functions of Beta 2 are a good start, and I would finally move to .NET without hesitation if the performance was OK, but it’s just not. When do we use C++? When performance matters. Why then is a C++ Windows Forms application so much slower than its counterpart in C# or C++ with Codegear’s VCL?
Right now, I still have to use CodeGear’s vastly inferior solution as my main tool, with VC used only to develop the core high performance libraries.
I can see I’m not the first or the only one here to mention that… Maybe there’s a reason?
Did you guys consider offering as a new alternative to MFC, basically a native version of Windows Forms? (hint: buy CodeGear’s C++ VCL – it’s not like they’re gonna ask for real money hehe).
I fully understand the .NET rationale, and agree with it 100%. But I can’t bear the performance hit, nor can I go back to MFC.
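For reference, the Beta 2 marshalling helpers mentioned above are the msclr marshal library that ships with VS 2008; a fragment like this (C++/CLI, compiled with /clr) shows the shape of the conversions:

```cpp
// C++/CLI fragment (requires /clr); msclr/marshal is the library the
// comment above refers to as "the new marshalling functions of Beta 2".
#include <msclr/marshal_cppstd.h>
#include <string>

void example(System::String^ managed) {
    // managed -> native
    std::string native = msclr::interop::marshal_as<std::string>(managed);
    // native -> managed
    System::String^ back = msclr::interop::marshal_as<System::String^>(native);
}
```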
My top complaints about VS are IDE related, this is where I spend most of my time. All of my code is in C++, and I don’t see this changing in the foreseeable future.
Gripes:
#1: Splitter window support when using dual head displays: I used the tab views and find it very annoying that the splitter keeps moving around. I set the splitter at the inside edges between my dual screens, but entering the debugger changes the position, so now my break point marks are on the left screen. Using the toolbox, solution explorer, class view, and other auto hideable, dockable views also keep moving the splitter. Let me pin the splitter to the inside edges of my dual heads and don’t move it!
#2: Problem introduced with the SP to VS2005: VS fails to open all windows when re-opening a solution if you had a resource open in the IDE at exit. When VS2005 with the SP applied (and the beta Orcas) reopens a project it only loads the files you have opened until it hits a resource, like a dialog or menu, and then it stops restoring the windows.
#3: Icon and toolbar editing is very painful, need to work on this. Better support for colors.
#4: Also agree with comments in favor of the old ClassWizard. Attaching messages is much clumsier now.
I agree with many of the comments on VS6 being the pinnacle of VisualStudio. I do think things have improved from VS2003 to VS2005.
I am using VS2005 because of code generation that I need. It took my group a long time to move from VS6 to VS2005 because of all of the things that broke in VS2003.
Thanks for asking for the feedback. I really hope you use it.
Hey John,
With reference to your F12 issue, I’m happy to let you know that we have fixed it in VS 2008.
Hello
For everyone who expressed an interest in the future of MFC I just wanted to point out Ale’s session at TechEd Developers – 05-09 November 2007, Barcelona, Spain.
Please drop by and say hello to Ale!
Thanks
Damien
Hello
Re: Monday, September 17, 2007 4:33 AM by anonymous
> Oh really? I’d like to see Intellisense working, without hanging VS.
Firstly, I just want to say that we hear all of you loud and clear on Intellisense, and to reiterate what Marian (Luparu) said above.
A working Intellisense “without hanging VS” (as you put it) is a top/major priority of ours going forward and with the work on the front-end rewrite that is already underway we are confident that Orcas +1 will be a different experience.
Thanks
Damien
This is in answer to Peter Westerström regarding mixed debugging on 64-bit. We are aware of the importance of such a feature, and it is a high priority on our interop feature list and investments.
Thanks,
Ayman Shoukry
Lead Program Manager
VC++ Team
Hello
Re: Tuesday, September 18, 2007 4:17 AM by Phidias
> I fully understand the .NET rationale, and agree with it 100%. But I can’t bear the performance hit, nor can I go back to MFC.
Always after a little feedback/understanding of our customer’s requirements – so I am just curious what prohibits you going back to MFC?
Thanks
Damien
one word: metaprogramming
IntelliSense is a great feature, and it works perfectly in languages like C#, so it would be extremely helpful if you could get IntelliSense working as well in C++. When I add a dozen new members to a class, I have to keep switching back to the class’s header file to remember what I named them! I would love it if IntelliSense would update straight away.
I’d love to see a first-class visual SQL designer for any ODBC source. Personally, the more cross-platform database support the better. Embedded SQL and the works.
I use a lot of DB2 and DB2/400 and feel it would be nice to have better support tools in VC when working with databases.
I use VS2003 for native C++ code, and am moving to VS2005.
Improve remote debugging.
1. The compiler should automatically transfer the executable when I make changes to the solution and rebuild. Why does it also check and complain that debugging information is invalid whenever I change files?
This should be transparent to me. Maybe copy the file to a temporary file on the remote system and run it.
2. Make JIT (just-in-time) compiling better. Usually I just change some source lines, and continue debugging. Provide some kind of capability to add variables when using JIT. Make the buffer for adding code larger, so I can continually make changes and not eventually run out of memory.
3. JIT (just-in-time) is not usable when remote debugging and you are using the network to hold the source files.
—> I’ve seen where you suggest transferring the WHOLE source directory to the target to enable it. But this is silly in the NETWORK-connected world. Why not make JIT work with network access to the source files?
—————————–
Give the capability to use the VS200x interface to create VS2003, VS2005, VS2008 code, which will run with those libraries.
—> This will allow me to use the current environment to continually support legacy code.
—> For this feature, I just want the ability to set the libraries built with to VS200x, so it can be run on a system with the same VS200x "dll’s"
————————
As everyone else has said, Speed up the IDE.
I’m just an old-time systems programmer (IBM SDD) who would like to find a C++ compiler (and link-editor, loader, debugger) with a learning curve whose slope is considerably less than tan(π/2).
I’ve purchased Visual C++ .NET 2003 and find myself looking at an IDE which has been designed to do almost anything for almost anyone. In complexity it can be compared to the console of an F-15 (whose original software I helped to design and implement using multiple 8080s). Don’t you guys have something that resembles the dashboard of a Volkswagen Beetle?
I already know what I want to do; know how to do it and require a simple tool that I can easily and quickly use. A Swiss knife is a beautiful thing, but not if what you really need is a very strong screwdriver.
Can you help me? I’d be glad to purchase whatever simple development tool you’ve got available.
Eric (learning_one@msn.com)
My issues are:
1- Why doesn’t Help default to the gigabytes of crap that MSDN has installed on my machine? I can’t make it work. It always goes to the web??? I have messed with the settings and I can’t make it work.
2- Intellisense is horribly flakey.
3- Compiling is horrifically slow. So is debugging. It would be acceptable if I was getting some great functionality for this cost but I recently came from VC ++6 and I don’t feel like I have done anything but slow down.
Nobody really cares about new features when the current ones don’t work or work poorly.
We have multiple large applications in native C++, many with ATL and MFC.
First, all the speed/correctness mentioned before. This is necessary, but not sufficient for continued use of MS products and platforms. (MS wake up– the pressure from open source and other platforms is very real.)
What we would like would be:
– Actual real reasons to upgrade tools, not for "not wanting to be left behind", but for real productivity gains. Make our customers notice that we must have upgraded because we develop better/faster/prettier, not because we disappeared for 6 months trying to get stuff just to work again.
– Make it easy for us to get snazzy interfaces without suffering. Note that this does not mean nonsense like HTML dialogs, but things like resizeable dialogs or decent printing support.
– Don’t make us drink the Kool-Aid for whatever the MS push du jour is. Seriously, we cannot by fiat make our customers install .NET, move to Vista, etc. To the extent new features require these things, any time MS spends on them is wasted from our perspective.
– Keep our developers from writing code that should be delivered in libraries– parsings, plumbings, wrappers. These just waste development, build, and testing time.
Thanks.
One thing I’d like to see in Visual C++ going forward is support for data breakpoints in managed, or at least mixed-mode, projects. I have an application that uses IJW to allow me to use the .NET XML facilities for input/output, and a native C++ kernel which does computational fluid dynamics calculations. The "Set New Data Breakpoint" dialog is grayed out, even in my native code. I’ve seen from Help that this feature is only supported for native C++, but apparently not for the native portions of IJW projects. I’ve tried setting them in files compiled with and without /clr: no dice.
Additionally, I hope this feature will work smoothly with OpenMP.
parkzone — the Help setting you want is in Tools -> Options -> Help -> Online in VS 2005. Try "local only", not "Online".
Responding to Eric Roll – have you tried Visual C++ Express? This is a free tool that’s designed to be a slimmed down easy to use version of Visual C++. It doesn’t have a lot of the useful features you get with other versions like ATL, MFC, the profiler etc, but it may be just what you’re looking for.
Ben Anderson
Visual C++ Libraries QA
I’m not sure this is C++ or not but the Debugger when in C++ mode.
I’d really like much more official support for the autoexp.dat file (maybe even take it out of that file and give a UI for setting this stuff globally and per project)
*) the ability to stop the debugger from stepping into certain things (like string constructors). Maybe a pragma for this instead of autoexp.dat?
*) the ability to get it to print more useful info in the watch window or when hovering the mouse (for example, right now if I hover over an STL map all I get is ???).
Supposedly those things are handled by autoexp.dat but I’ve never had any luck with it.
Also, Array inspector windows (like CodeWarrior) and the ability to cast a memory window to any type or array of types (like CodeWarrior)
When parsing files with extensions that don’t match the expected file format (make.inc for example, or any .x file from a unix ONC/RPC code base), some of the parsers choke up and go off into never-never land. Give me a button or some other way to stop the parsing, so that it doesn’t hang the interface.
When I hit "save all", please save all; including my window and tab layouts in the editing pane. Save the entire solution. Every time the above parsing problem happens, I smack my forehead, because I know that all of the work I’ve put into opening a file set, and arranging my tabs and such will be lost.
Put the intellisense parser into a background thread and set the priority to "low". When I create a new project that parses a large code base (which I do about 3 times a week), the intellisense parser consumes so much CPU that the interface becomes unresponsive. I now create those projects before I go home for the evening, hoping that it will be done by the time I get back in the morning (and that it hasn’t crashed before completing, too).
I keep running into lost delayed write issues because my projects are stored on a Network Attached Storage system which can sometimes close my CIFS connection. Other applications deal with this just fine, but Visual Studio doesn’t. When it happens, the intellisense database that is kept open ends up causing lost delayed writes. Does that database need to stay open all day long? Can’t the updates to it be batched up and committed all at once, or something? This may be a CIFS redirector problem, not necessarily a VS issue, but it could also be something that VS is doing differently than many other office-type applications.
I have a small improvement in mind that would solve an annoying thing which many people here complain about. The problem is that when you are debugging code and stepping into functions, you usually want to skip all uninteresting things like string constructors or your own already-debugged functions.
In Eclipse (although it is a terrible IDE overall) there is a smart feature where you select, by double-clicking, the name of the function you want to step into, and hitting F11 steps directly into that function. Why not add this feature to VS? When nothing is selected, the old behaviour could be preserved.
Responding to Ben Anderson – RE: simplified programming environment.
Thanks Ben. I’ll take a look before going over to gcc altogether.
If some of Microsoft’s compiler writers (programmers) would sit down and design a simple development package which caters to designer/programmers like me — there are lots of us — then I believe they would find themselves with a ready-made, profitable market niche.
Eric Roll
I would like to see a simpler way to handle File I/O. Such as creating outlines of files, and the connected classes to fill them, then creating functions to easily write to these files.
.
.
.
.
.
Proper Multiple Monitor Support
.
.
.
.
PLEASE
.
.
.
I have filed soo many bugs concerning multiple monitor support with VS2005 and they all have been closed as "will not fix".
It is a real pain in the ass to use multiple monitors with VS2005. Windows forget their positions, you get "invalid property" dialog boxes, etc…
I would like to see a servicing plan more responsive than "release one service pack two years later and fix all other bugs in the next product." When I am experiencing bugs in Visual Studio that are causing major pain in daily development, I can’t wait two years and I don’t feel I should have to pay several hundred dollars in upgrade fees to get bug fixes. Upgrading also doesn’t work because there are always new bugs and new features, breaking changes in the project formats and IDE, etc.
I write native code applications in C++. Looking at the feature list for VS2008, there is very little added value for me, and yet I don’t know if I can wait until Orcas+1 for a dialog editor that selects the control I actually click on, an editor that doesn’t constantly thrash my disk while rebuilding the Intellisense database, and a project system that doesn’t reorder the file list in the .vcproj file every time someone edits it.
I realize that Visual Studio is a huge application and doing bug fixes and QA is a massive undertaking, but it really hurts to see my favorite development environment have a constantly high level of problems from version to version. I still think Visual Studio is the best IDE for what I do, but since seeing what has happened to it since VS2002 I am seriously keeping an eye out for alternatives.
My number 1 wish would be a complete C99 support.
A working TR1 implementation wouldn’t also be bad.
C++/CLI isn’t so important for me since I rarely mix C++ and managed code.
It would be useful if newer versions of Visual C++ could recognize older versions of C++ (all the way back to the first version).
I think you can look at text processing languages like Perl and Python and try to do similar capabilities.
Thanks
Pain points 🙂
1. Library pains
a. A native windowing library? MFC is antiquated, period. It doesn’t make full use of the language, and it’s quite limited. Where’s the next application framework? Right now the only modern libraries are oriented around non-Microsoft platforms.
b. Fix the dynamic CRT (which is the default). The CRT will crash the app on startup – intentionally I might add, if it wasn’t loaded via native-fusion (SXS). Being able to xcopy deploy applications is a requirement for most internal tools at companies.
2. IDE pains.
a. Intellisense reliability. It’s slow, it uses a lot of CPU horsepower, and it’s not 100% reliable.
b. Intellisense on projects not built using the project system. (Why can’t intellisense figure everything out from a PDB?)
3. General feature requests
Time-travel debugging! You get a crash, you want to know how things got to where they did; you can step backwards, run backwards, see the state the program was in when it last wrote to a memory address, etc., etc.
A COM Visualizer!
It is a pain now (even though we have been given so many luxuries, like support for STL while debugging native unmanaged code) 🙂
I was desperately looking for it in the current release 😀
Hi,
Just a couple of aesthetic points:
– Bring back tabbing of code files. I know VS 2005 has its own unique way of organizing the code files that you open, and after a certain number you have to use the drop-down arrow (or Ctrl+Tab to see the same list) and choose a file from the list. Major annoyance! Sometimes it’s good to keep what worked well from a previous version (VS2003).
– Intellisense support. VS 2005 has had a LOT of intellisense issues and I have personally tuned it out now and exclusively use VAX. Personally, I have great faith that MS can produce intellisense which can match or surpass VAX.
Thanks and best of luck for the project!!
I write native code in C++, although I can see us using C# and .NET for GUI in the future. Our application uses compute-intensive proprietary algorithms that are implemented in C++ and assembly language, and this code will likely be married to the hardware for the foreseeable future. We need Microsoft to pay attention to native C++.
And, we don’t want weird extensions that aren’t portable either, we want the C++ standard.
We don’t need the "export" keyword, but most of the rest of the standard should be met. On that note, kudos to Microsoft for already meeting this requirement; the compiler is very good both in compliance and in generated code. I only mention this because I fear that the big push towards .NET will result in Microsoft dropping attention to the importance of these features for many users (despite marketing claims to the contrary – I never believe such claims unless they come directly from product managers – tides shift!)
I agree with the comments about Intellisense. Perhaps this feature should run out-of-process? It’s incredibly annoying.
Given the complexity of maintaining the help system, which must allow adding help for add-ins and new SDKs, I can understand why it is the way it is. You obviously won’t be able to go through your help system and find all the inaccurate data. You could add a simple way for people to report errors over the Internet, and implement a simple way for help to be updated automatically (perhaps once a day, or once a week, during off hours).
The blogs from the Visual C++ group are very definitive in stating a renewal in development of MFC, and a dedication to development of interop between native C++ and C++/CLI. However, they are not definitive in stating a commitment to further development of WinForms.
Question: Will WinForms continue to receive active development in C++/CLI land? If I am starting a new app for .NET and native C++ ‘interop’, would I be wrong in deciding to build it using WinForms?
1. C++ 0x! I know that writing a compiler for such a language is not an easy task, still it would be great if in re-writing the IntelliSense you kept in mind the upcoming changes in C++ such as new ‘auto’ semantics, so when the compiler comes, you would just "uncomment" the piece of code you already had at hand.
2. If not C++ 0x at this time – maybe some code-snippet-like support for "auto" or "decltype"? It would be great if IntelliSense, which is already capable of interpreting types, could resolve:
map<int, int> m;
$auto i = m.begin(); // a suggestion for the code-snippet syntax
to
map<int, int>::iterator i = m.begin(); // code expanded at newline, semicolon or compile request.
3. A simple automatic code generation tool like "implement methods" – the user selects some code in the header file and VS generates stub implementations for the selected member function declarations in a user-defined .cpp file. Much developer time could be saved for more important tasks.
4. I would VERY much like to see a C++/CLI for non-win32 platforms. For me it is a bit of a misunderstanding, that mobile devices, where resources are far more critical than on normal desktop machines, are missing such a great technology that would allow to easily put memory-efficient unmanaged code in operation.
C++ compilation time is a major strike against the entire language. The solution on two big projects has been to get Xoreax’s Incredibuild. It makes my current project compile in 10 minutes, down from 45 minutes. I also believe that Incredibuild saved Madden 2006 for XBox 360. It made Madden’s compile time go from 1.5 hours to 20 minutes.
I have fought the compilation time problem for most of my career and would love if distributed building was an integral part of developing for C++. Instead I have to convince each individual project to buy Incredibuild and that takes lots of time and effort.
Suggest improvements to the user experience to be able to get into the source, make a change, and get out quickly. Some very simple examples, I’m sure you can think of others:
– be able to create a cpp/h pair and (optionally) keep them in sync — so if I add a function or change the declaration in the cpp the h is updated (like refactoring, but just make it work).
– make an option to "correct" the case of variables I type in to match the declaration (like VB6). In most cases I don’t want to be case sensitive, and it just wastes my time to figure it out.
– In MFC, the old class wizard wasn’t the greatest, but now it’s much worse. Simply adding a menu item to existing code just took me an hour when the same thing could have been done in VB in about 10 minutes.
Also, I would like to see a managed version of MFC. Might be hard, but I have a legacy product that is heavily Doc/View in MFC, needs to go to managed code, and will cost too much to rewrite from scratch in VB or C# WinForms.
Some bugs?
In the IDE, the "find next" across documents often does not work right (not finding things that I know are there).
You can’t set a breakpoint on a goto statement because the compiler emitted two jumps and the debugger maps to the second one.
I’d love to see the class wizard back in action. It was so much better than right-clicking a grey-looking graphic, selecting a method or property, and then manually having to add events and such. Whereas with the class wizard I could add events and even select timers, button-down events, etc., fast and easily. Now I’m forced to create applications in VC6, add all the events, methods and props there, then import into 2005. I do like 2005’s attach-to-process way better than VC6’s, and tabs 🙂
So please, please bring back the class wizard 🙂
Why is it that if you have 40 projects in the IDE, it takes ages to open, but if you look at the task manager the CPU is hardly getting hit?
why does the ide crash so often when opening the form editor?
We are prevented from making full use of an excellent third-party toolkit because of VC++’s implementation decision on initialization of local static variables in a multi-threaded application.
Please restore the profiler. It has been part of this product since it was C under DOS.
If you want to make better tools for extra cost, wonderful, but don’t take away a tool that has been there for, what, 15 years?
I don’t know if the following topic has been posted or not!
I hardly have time to read all the comments. Sorry if my comment is irrelevant or redundant.
I would like to see some extension, which is specific to GPU based computers.
something like
GPU float4 texture(1024,768);
which makes a 1024x768x4 float array in GPU memory, which can be used just like any other array, e.g.
texture.setColor(x, y, RED);
texture1 = texture2; // for deep copying, etc.
1) Allow pasting of pictures into the code pane.
Give the developer the option of externally linking images so that source control isn’t screwed up; but the option to encode the images as text (à la email) has the benefit of never losing your images with your code, plus it still works as a text source-control file.
2) Allow developer to have his/her own meta-data area within the code window (which is currently done via comments)… however, various products all want to use the comments area for their own special markup. Let’s formalize the meta-data/comments area and make it much more useful… and leave the comments – just for comments.
3) Don’t add new features until existing bugs are sorted out.
[Stephan T. Lavavej]
>> STL improvements are high on the wishlist.
> Which improvements, specifically?
Apologies for not replying sooner. Aside from the obvious (hopefully continual) performance improvements, things like unsorted hash_maps (requiring an operator< is actually quite a problem, when third party code only provides a hash and equality method) and the further TR1 improvements come to mind.
[Stephan T. Lavavej]
> What does STLport do better?
This blog was the first I’d heard of the VS STL having a checked mode, so without having time to sit down and compare each feature by feature, it’s hard to say which is better. Perhaps such things get lost in the deluge of C# marketing.
That said, by STLport being a portable plug-in, it offers extras that enable unusual usages, such as not using the std:: namespace, which can be incredibly useful in cases where third parties have (probably incorrectly) exposed STL in their public headers and you want to maintain a separation between your internal STL and the one you’re forced to compile and link with in std.
One related really obvious advantage STLport has over the VS STL is that it can be used with any compiler, so where we have no choice but to use VC6 or VC2003 etc. (again, unsurprisingly, due to third party code) it offers something that is compatible, yet maintained with bug fixes and improvements, whereas the only way to use the latest VS STL is to upgrade compiler (or at least that’s what’s been communicated).
I suppose a long-term goal that comes to mind that would solve both of these is to arrange matters such that C++ results from older MS compilers e.g. LIBs and DLLs are compatible and usable with later MS compilers and libraries. However, this would be quite a sea-change for Microsoft, so I’m not going to hold my breath 🙂
[Stephan T. Lavavej]
> Doesn’t Visual Studio come with a copy of the MSDN docs?
Good question – it’s possible our default install simply assumed Internet-only, but I’ve not seen an obvious option for a local copy on reinstalls. However, the help options that someone else pointed out (which I would never have found myself) inside VS itself imply it can be done.
[Stephan T. Lavavej]
> …mentions various things being bugs and to put them on Connect…
Will do when I get time, but the hard part (as it always is in debugging) is giving precise reproduction information. For example, the RTC issue I mentioned; we’ve struggled to identify where the problem arises, but have been able to observe that commonly run code can take up to 50% longer.
[JMD]
>These little kiddies need to learn that new operating systems and new development environments require new hardware PERIOD.
Not quite sure why you’re bothering to flame, but your flame is also inaccurate. We’re running VS2005 on top-of-the-range machines, well above the advertised recommended system specifications, as the applications we design need a lot more than VS2005 does. We still experience slowdowns, mysterious delays, and the various other issues others have reported. Simply blaming these problems on older hardware is just plain wrong.
Another issue that has come to mind since my first post:
Whatever happened to OLEView in VS6? I’ve not been able to find an equivalent in any later Visual Studio. If it’s there and I’m just failing to notice it, please point me to where it is, it’s such a useful tool; if it’s not there, please consider re-adding it.
Other than that, I have to say, well done on the continuing conformance to the C++ standard, and despite the pains, Visual Studio still beats other IDEs out there, so keep up the good work! It’s gratifying to see people at Microsoft listening 🙂
1) Add distributed build feature (just like Incredibuild). Building time may take hours in HUGE projects without it.
2) Integrate a wrapper generator (such as swig).
We may be the exception or the minority, but our team DUMPED our prior development environment for Visual Studio C++ so that we could migrate our existing native C++ applications to .NET managed applications.
1) I was extremely disappointed that the Beta1 of Orcas and Beta2 of Orcas didn’t have (for C++ anyways) any designer for the latest platform GUI.
2) Additionally, I was disappointed that Orcas Beta1 and Beta2 didn’t include performance improvements in the designers. Compared to our prior drag-and-drop environment that we dumped, our developers on NEWER and horsier machines are LESS productive… Why? Because they spend more time WAITING on the GUI to even render in the designer so they can make even trivial changes. I really hoped that this would be addressed in Orcas, but it doesn’t feel like it from what I looked at.
3) I’m excited (can’t really express HOW excited) to hear that Orcas+1 will have refactoring; I’ve longed for this for a long time. HOPEFULLY it will also improve the performance of the IDE and GUI designer, add support for WPF, etc.
Keep up the good work with the tools…
I personally would appreciate shorter compile times, by having VC use more than one thread to compile ‘large’ projects without installing third-party software.
No ‘huge’ distributed build system; anything that can use more than 25% of my processor would add to productivity.
Also, it seems that VC8 doesn’t do much parallel disk I/O: 25% disk usage is the absolute best case, and it’s more often lower.
Thanks,
I only write native C/C++ apps. What I find particularly lacking is a profiler. There used to be one in VC6. I would like to see it returned.
I am especially interested in profiling floating-point code, as I do lots of signal processing.
I would also like a tool to help write MMX/SSE code based on intrinsics. The tool should analyze dependencies and warn me if there is a pipeline stall (for multiple CPUs), or give hints where there might be performance problems. Of course, this has nothing to do with C/C++, but many signal processing developers use MMX/SSE intrinsics intermixed with C/C++.
Kind regards,
Niels.
I never EVER want to see assembler when I step into a function. This happens every time any of the function’s parameters is passed by value. The only workaround I have found is to pass const X& instead. This obviously can’t be done for third-party code.
Hello,
Our project has a few sets of project arguments that launch the app in different modes during development. We currently have to copy and paste the desired set of command-line arguments from a text file.
It would be great if the command arguments field were an MRU combo box instead of an edit box, so we could get a list of recently used arguments.
If possible, allow us to add pictures to the code window (like you can in Word). It makes it easier to visualize the problems when writing games. I guess this would be a lot of work, but I’m sure people will appreciate it 🙂
cheers
Pranay
I’d like to see a couple of things:
Sometimes after I stop debugging and I try to do a build, the linker cannot save the DLL to disk. Process Explorer shows me that devenv has a handle to the DLL opened. Sometimes I can hit build again (once or twice) and the DLL can be saved. Other times I have to close VS and reopen in order to link. I do a lot of edit&continue debugging and I won’t swear it but I believe I only see this problem when doing E&C.
Speaking of closing … The time to close a project in 2005 can be atrocious. It is much faster in 2003 and dang near instantaneous in VC 6. I blame part of the slowness on the TFS integration but that is just a guess.
Intellisense can become intelli-non-sense. Sometimes it just quits working for no apparent reason. Also sometimes the update just never finishes as it seems to enter a never ending cycle (90 files to go .. 20 files to go … 120 files to go .. 90 files to go ..). I have to exit, delete the ncb and reopen the project and hope it finishes. I have sent gigs worth of full dumps to MS support and all that was determined is that it appears to be in a constant loop.
Where is support for 24 and 32 bit (alpha channel) bitmap files in the resource editor? Png? I want to see this message go away: "The bitmap imported correctly, but because it contains > 256 colors it cannot be loaded in the bitmap editor". Please, the bitmap editor needs to leave the 20th century. In the very least I know these bitmaps could easily be displayed so I am guessing that the only reason this has not been updated is due to the lack of editing tools. At least allow me to view the images if not edit them.
Fix the MSDN search tool. How bad does it look when I do a search for something using the MSDN search command and come up empty only to go to google and do a search and find the entry … in the MSDN!
Some of this is more IDE than VC++, and some are just bugs (that rarely seem to ever get fixed, as there rarely seems to be such a thing as a Visual Studio service pack). But just as bugs in Windows or MFC are bugs in our products as far as our customers are concerned, so too are such non-VC++ bugs VC++ bugs.
By the way, there are lots of features in 2005 that I love. "Ask a question" is great. Well, except when a reply is posted telling me I should use "connect.com" or "this belongs in a newsgroup". My thoughts are along the lines of: "I clicked the button and this is where I was taken. If it belonged somewhere else, tell that to the IDE developers." Besides, I have yet to discover the window in the IDE that shows newsgroups. Perhaps a newsgroup command (and view) is in order.
[Dan Konigsbach]
> Please restore the profiler.
[Niels Moseley]
> What I find particularly lacking is a profiler.
VC still has a profiler. It’s just in the uber Team SKU (along with /analyze, etc.).
[Jeff H]
> unsorted hash_maps
VC8 already has <hash_map> and <hash_set> in the stdext namespace. TR1 will bring <unordered_set> and <unordered_map> in the std::tr1 namespace.
> third party code only provides a hash and equality method
Usually, operator<() is easy to write, while a hash function is more involved. (I am continually amazed at how people always think "hash" when they hear "associative container" – I know that in other languages, associative containers are usually implemented with hashes, but it remains surprising.) Still, the stdext containers should serve you well until TR1 arrives.
I have not used hash_map and hash_set myself, but they are supported and the compiler itself uses them.
> This blog was the first I’d heard of the VS STL having a checked mode
I’ve written a couple of posts about STL checking on VCBlog (linked at the end of Soma’s post above), which you might like to read.
> the hard part (as it always is in debugging)
> is giving precise reproduction information.
Agreed, but a precise repro is necessary to investigate anything. (We can deal with complex repros, although simple ones are nicer – but we need one to begin with.)
Stephan T. Lavavej, Visual C++ Libraries Developer
[Dan Konigsbach]
Actually, given the current C++ Standard you can’t argue both ways – the Standard assumes a single-threaded environment. Also, I am completely certain that, given the strong C++ principle of "if you don’t use it, you don’t pay for it", making the assumption that every application would run in a multi-threaded environment (and hence that the initialization of local static variables would need to be guarded) would not go down well with a lot of developers.
I don’t even think that the new C++ Standard (which does include multi-threaded considerations) is considering changing this aspect of local static variables. They want users to be explicit about the guarding and unguarding of variables – after all, if the compiler does this automatically then 50% of users are always going to be convinced that the compiler made the wrong choice of implementation.
Hi Dan,
>>Please restore the profiler. It has been part of this product since it was C under DOS.<<
Actually, we do have a profiler. It works for native, managed, and mixed code and is available in the "Team System" edition of Visual Studio.
Steve Teixeira
Group Program Manager, VC++
[George P Boutwell]
> I’m excited (can’t really express HOW excited) to hear that Orcas+1 will have refactoring
We’re glad that our reinvigorated interest in native code is something that will add value to your development scenarios.
> I’ve longed for this for a long time and HOPEFULLY improve on performance of the IDE and GUI designer and add support for WPF, etc
Adding WPF designer support is not happening in Orcas and is unlikely in the near future. To fully allow us to focus our efforts in native code and native/managed interop scenarios that are unique to the C++ developer, we’re deprioritizing the scenarios for using C++ for pure managed development.
[PranayKamat]
> would be great if the command arguments field was an mru combobox instead of an edit box, so we can get a list of recently used arguments.
This is indeed a great idea. I’ll see what we can do to enable this scenario after Orcas.
many new improvements 🙂
but which version of VS?
because if it’s only in Team System, that is not an improvement…
what will be in the Standard/Professional versions?
Separate IntelliSense and the Class Wizard.
Like anyone who uses 3rd party libraries, I am not able to work with IntelliSense turned on, so I have to delete (rename) feacp.dll. This causes the Class Wizard to stop creating C++ classes.
IDE configurations are a mess. We have a large C++ workspace that contains about 70 related projects and we have configurations that build a subset of the projects (this was ported from VS6). When I add a new project, the new project gets added and enabled in every configuration, and I have to go through manually and disable the project in all configurations where it doesn’t belong. I’d much rather have the project be added to all the configs, but disabled and then I can turn it on in the configurations where it belongs. Alternatively, another configuration view from a project’s point of view might be useful — a grid listing all the configurations that a project belongs to. That would be ideal. I get more griping from co-workers about the configuration system than anything else, since I’m the one that championed moving from VS6 to VS8. I’ve even resorted to editing the .sln files manually.
One thing that really upsets me in several recent releases of VC++ is the deteriorating support for browse info. First, the call graph view was gone, then it became impossible to do searches for symbols prefixed with the class name using browse info. Why?! How do you expect me to look for CMyClass::Initialize in a project of 5M LOC? The search for Initialize takes minutes and returns hundreds or thousands of results.
Including all files into the project to generate the ncb (which I suspect is being touted as a "replacement" for browse info) for the product I work on is completely impossible – there are tens of thousands of them spread over a crazy directory structure. Go try to add them one file or one directory at a time, as the VS UI forces you to do.
Oh, and if I do include some files in ncb, I get duplicate results when I search for symbols that are declared in those files (one hit for ncb, one for bsc). This is very annoying.
Please provide a nice, working, performant and easy-to-use symbol browsing solution with your next version. I really hate switching to Source Insight which the majority of my colleagues have done already.
I concur with many on this thread on that VC++ has become a "poor parent" of C#. Really, my productivity as a C++ developer has not improved since version 4.2. Man, I miss it.
Hi Pavel Ivlev,
We realize we have IntelliSense issues. As a matter of fact, we already have plans to focus on such issues, since we do believe they affect the overall productivity of VC++ developers.
Feel free to contact me directly at aymans at microsoft dot com and I can connect you to the owners directly.
Thanks,
Ayman Shoukry
Lead Program Manager
Visual C++ Team
C99 complete support
Is there support (planned) for Extension Methods in C++/CLI? It appears that VB and C# will both support it, but I can find no mention of it for C++/CLI
I simply find handling C# a nicer experience. Refactoring support, intellisense always correct, etc. If the VC++ IDE could just close the gap a bit more I would be happy. Oh, and the SxS stuff tends to make me link everything as static to work around it – shame. Surely the CRT updates could go out on Windows Update?
Hello
For those interested in VC++ Documentation (and we have seen many comments on just that here), you can view a Channel 9 video with Gordon Hogenson on “Documenting Development Technologies” at:
Thanks
Damien
Although a ‘Studio’ sounds nice to marketing folks that somehow still desire feature-bloated can-do-all software, productivity-enhancing software doesn’t work like that.
I’d vote for somehow separating all the non-C++ nonsense into a different software package. There’s so much stuff that is in the way (judging from the memory & resource use) that even on a fast machine things are so much slower than VS6 was. Hardly progress.
I think it’s hard to convince MS folks of any improving designs though, since the basic rule seems to be: "every next version more complex and bloated" (see Office & Vista for example). Still, there has to be some smart people there who can see that all these complex solutions for simpler problems (SxS for example) lead us to none other than other OSes and other software. Unbelievable.
Revert back to a compliant C++ compiler with a VC6-like IDE in terms of memory usage and performance, make that compatible with the Platform SDK and you’re back in business.
Somehow, though, I think MS’ future is doomed in the software department. Too much marketing pressure to do the wrong thing. 🙁
One more pain we endure is with the resource editor. It needs to allow one to specify the ID BEFORE writing to the include file.
So for instance, I do not want to create a bitmap and then go to the properties window to change the ID from, e.g., "ID_BITMAP1" to something meaningful like "ID_CANCEL_BITMAP" only to have the include file insert definitions for both ID_BITMAP1 AND ID_CANCEL_BITMAP.
The issue we have is that we have to manually edit the include file and remove ID_BITMAP1. Failure to do so can result in compiler errors.
It seems like this is something easily done. Don’t update the include file until I compile the resources or otherwise save the resource changes.
Also, again today I have a project that IntelliSense simply cannot update. Give me a command to turn it off in the current project/workspace so I don’t have to endure the update constantly hogging my CPU as I try to modify code in this project. A fix for the problem would be nice, but in lieu of a fix, let me turn it off and on without hiding the feacp.dll file. Even with a hotfix we got (two actually) from MS support to run with a lower thread priority, it still causes the IDE to stutter as I try to edit or navigate through files.
For MFC coders, there could be a lot more support by the dialog editor. In VB/Foxpro, for years, it was easy to write static text in multiple fonts. Doing this in an MFC app requires custom coding. It would be great if:
-static text could either go beyond 255 characters, or the edit box for that property could be limited to 255 characters (so you know when to stop typing and create a new static text item).
-static text could support full font properties.
When a dialog box is created, it is possible to copy and paste it from one project to another. Unfortunately, only the dialog is copied. Why not copy the resources involved (both resource.h and res files) to the other project as well? This would save time for this operation and make reusable dialog components more practicable.
The help search engine constrains the developer to "OR" searches for each keyword typed. This only expands the results. Developers are able to understand boolean algebra and certainly bringing back AND, OR, NEAR, and NOT would save a considerable amount of time.
Also, the ability to save searches as "help favorites" would be useful, and in the encapsulated browser, it would be great if excellent browser plug-ins like IE7Pro would function.
Compiler performance is definitely affected by adjusting the priority of the IDE/CL.exe under task manager. It would be handy if there was a way to control the priority of the compiler from within it. Somedays you need to multitask during a long compile, and somedays you need to compile as fast as possible (or the machine might be dedicated to compilation at that time).
Remember the "turbo" button on 1980s PCs? It was popular! Why not do the same with software priorities?
Debugging apps without having a debugger is problematic because it’s non-trivial to get the context of the crash. It used to be possible to enable a number of critical error interrupts or traps, get control in a critical error handler, and dump out developer-produced contextual information.
This is hampered now because:
1. Common calls such as sin() unconditionally produce floating point critical errors each time they are called.
2. MFC calls Dr. Watson by default and this typically produces standardized output that does not offer any opportunity for developer-produced contextual information.
3. The call stack is not available from a function call as it is for .NET applications.
Note: I can still create a critical exception handler that bypasses Dr. Watson, but there’s no support for getting context data which is so important at that time.
It would be great to ship self-diagnosing applications. A little help from the CRT/MFC to walk the callstack would go a long way here.
If the resource files are read-only and you attempt to create a resource, a warning dialog says the files are read-only. However, the .ncb file is now corrupted, because it thinks the resource got created and adds it to that file. Deleting and re-creating the .ncb file is then necessary, as well as making the resource files read/write.
How about heading off this issue "at the pass" by offering to make the resource files r/w?
VSS 6.0 still sells. I own a personal license, my company owns many licenses.
The IDE interacts with VSS more than ever before. I think there are more warning dialogs than in the VSS IDE itself. There are many problems with the VSS interface, partly due to limitations within VSS.
For example, one cannot have different projects within a workspace on different drives. I am frequently forced to check out a project though I don’t want to make any changes in it.
Looking at project file "changes" directly by diffing the file, I frequently see useless numeric ids that have changed that really have nothing to do with the project from a developer sense. This data should be removed, moved to an external file, or otherwise gotten out of the way so that only when the developer changes a setting on the project is it ever checked out.
Other issues happen with dialogs that offer choices like: fix the server bindings or keep the server bindings and allow local files to be overwritten. There’s no data presented that indicates what would change — how many files are affected, etc. This makes it a little scary to make a decision without any further information.
Sometimes one cannot continue without checkout. If I want to make local changes only, then I save the file, the VSS interface sometimes pops-up and forces me to check out a file — I can’t remember the specifics of the context, but it’s there.
Lastly, the "pending check-ins" often get confused. States like "content" and "add file" are often totally incorrect. Many times I’ve used VSS diff on "apparently different" files and the "files are identical" dialog comes back. I’ve also seen files checked in initially into an odd VSS project folder configuration that doesn’t match the rest of the project. Right now I have a file showing with Change Type "File not available", but the file is neither on my hard drive in the indicated location nor does it show up in the VSS IDE in the indicated location. Looking at the file history of the folder (an include file folder), the file doesn’t even show up. What’s most likely is that it was deleted at some point; however, why should it show up on the "pending checkins" screen?
I have seen bindings labelled as "invalid" but that’s not very helpful as to determining what’s wrong and how to fix it.
Ever seen this? "Microsoft Visual Studio needs to reopen source control database connections for projects in the solution. Your source control provider may prompt for your credentials." Just hit cancel after looking at the "change source control" dialog. Why this comes up I have no idea. The project is already connected and valid in this case.
You have controls on some VSS dialogs to "not show this dialog again" — but is there any interface to change one’s mind about these dialogs?
If a project is in two workspaces, each with different relativized locations to the project, this information is stored in the project file itself. Changing the project relativization may suit developer B on workspace B, but not developer A on workspace A. In addition to potentially causing a problem between development configurations because of differences in locating a project, it is completely unnecessary to keep this data in the project file — couldn’t this be moved to the .vssscc file or .sln file, and thus be individualized per workspace?
There are more problems with the interface than I am reporting here — I kind of hate to just say "could you test this interface further and resolve more issues and limitations with it," but could you test this interface further and resolve more issues, make it easier to use, and not require redundant, unnecessary checkouts and checkins on files that shouldn’t, or haven’t changed?
These are all on the dialog editor & its code generator:
Could you change the Dialog Editor such that overrides are available while in the main editor, rather than (or in addition to) having to have the class selected in the Class View?
While dialog units and twips are admirable solutions to resolution independence, I think I can safely represent most native C++ programmers in saying we deal in pixels. Would it be possible to add this as a unit of measure in the dialog editor? Right now it’s a pain to create a dialog box that is exactly 1024×768 pixels. I use a custom overlay created in a program like paint that is sized how I need it, then just scale the DU-based dialog until they match in size. Then I can create my layout based on the intended final resolution of the dialog.
Also, would it be possible to have an "Add Item" in addition to "Add and Edit" so we can add a bunch of interface items and THEN go and code?
Typically, the dialog editor adds a "public:" statement always. Can the editor call somewhere into the code state machine and find out if the permissions are public and intelligently add a public just once (or not at all)? If this is not easy, what about changing the editor to add a feature to eliminate this redundancy automatically?
Would it be possible to actually delete a deleted message handler rather than commenting it out?
Would it be possible to eliminate obsolete properties ("3D Look")?
Also, does it make sense to you to integrate contextual help editing into the dialog editor?
Alpha transparency is a bigger part of windows than it used to be. Some basic bitmap tools to assist here would be invaluable. When editing images, would it be possible to add support for at least screen-door alpha transparency, even on a file that doesn’t have it in the native format (there would of course need to be a conversion to a format that supported alpha or to create an independent bitmask file).
And lastly (but not least), the set of MFC controls has been relatively unchanged compared to C#. It would be great if there was "official" support for a data grid and other basics that aren’t covered yet in MFC, yet have been around for years in Microsoft’s other windows-based languages. The WYSIWYG support is also weak (perhaps the proper term is "ancient") compared with other languages. WYSIWYG is perhaps the chief benefit of the editor, however merely placing controls is different from laying them out, which if done through coding, can be quite a trial and error process. The more complex the control is, and the more customized the dialog, the less WYSIWYG support there is for the details in layout, at least in native C++.
I only work with unmanaged code, and as of now I’ve got to deal with tedious tasks everyday, tasks that could be highly simplified.
1. Simplify the choice between the standard and secure SCL. This could easily be done by having a tickbox (even enabled by default, if you will) when creating the project. The average Joe will just leave it on or google its meaning, while people who know what it means will make their choice.
After all you’re developing a programming environment for an unmanaged language, so your first goal shouldn’t be saving users from their own stupidity like it happens in Office or Vista etc.
2. Improve project creation through wizards. There’s no reason in this world not to have a quick wizard letting the project creator select between UNICODE and MBCS, and likewise between C++ and C (in fact – weird as it seems – some people still use pure C from time to time…).
3. Improve F1 searches in MSDN. It often happens that searching for STL functions via F1 finds homonyms in other classes/namespaces etc. Code samples for advanced constructs on MSDN could be improved and expanded too, but that’s a minor point I wouldn’t really die for.
4. Release more frequent updates, rather than one massive Service Pack ages after the release and then abandoning the product.
5. Offer native compilation as an option for C++.NET and make it a visible one, not one hidden by 55 prompts. Why? It’s the only way you’ll get more standard C++ developers to move over to the .NET version, because native compilation would eliminate interpretation overhead and decompilation ability (which is a concern of *EVERYONE* I’ve ever heard talking about .NET for consumer applications – and please don’t reply talking about obfuscators, because obfuscated code is still 1000x more readable than plain assembly, let alone a well-packed program). I’m pretty sure this won’t even be taken into consideration, but hey, I had to let you know that it’s something desirable.
Along with the above, everything you’ve already pinpointed as on your list surely deserves, in my opinion, quite a lot of attention. Finally, try to keep in touch with the developers’ community as much as you can, all along the development and testing phases, because it’s not by closing 5 clever people in a room to think of improvements (which seems to be pretty much what you’ve done in the last 3 versions of Visual C++) that you’ll meet real-world demands and requirements.
While doing the above may help strengthen Visual C++’s position on the market, failing to do so may result in the loss of a great deal of market share.
Best regards
I would like to see an option to preserve the clipboard content’s formatting when applying formatting by pasting.
A simple request, and then a more complex one:
Thanks.
A dark, dreary night, with but a pale sliver of moon occasionally sneaking through the deep, dark clouds. Her face was still in my thoughts, sending a cold chill down my spine; so cold that the frigid wind seemed warm in comparison. She had seemed so kind, pleasant and gentle back then, as did her request. "A simple project wizard. That’s all I want. Just something with a few linked pages to create a c++ project in vs 2005. I’m sure you can have that ready in a couple of days?" Her smile was endearing; not so much that of the attractive young woman she once was, but something almost motherly, in spite of the nearness of our ages.
"But what kind of mother would put me through this!", I howled in anger into the vast emptiness that was that night. It had seemed such a simple thing. Just do it in c++ instead of c#, like I usually do.
"Sure Karyn, I’ll have it for you Friday morning."
After all, I just need to click on another option or something, and the UI of the wizard will all be done for me. It’s not like I will need to learn an entirely new programming language, double my knowledge of html, and scan vainly through MSDN for real documentation on how some obscure feature works to do something that is trivial in c#. I had once laughed in my head at the absurdity of such thoughts. Now I laughed loudly and bitterly into the night at the horror of their reality. There I had sat, alone in the darkness with my laptop, going malevolently about the task of violently tearing app wizards apart in the hopes of figuring out how they worked. Having long ago given up on real understanding, like Frankenstein building his monster I was now blindly stitching together the severed remains of my cyber-victims in the hopes that the monstrosity I was creating, with java script guts protruding from html lesions, would somehow live, call me master, and do my bidding by morning.
I pondered in horror the fate of my app wizard victims, sacrificed in my relentless efforts to appease my colleague. "Why couldn’t they have at least given an example of something more useful than a one page console wizard!" I cried in anguish before collapsing in overpowering sobs for the third time in as many hours.
I have one request. I recently made the transition from Visual Studio 6 to 2005.
One of the features I really miss is the ability to browse to a method that is within commented code. In VS 6 you could comment out a block of code, but still use F12 to browse freely. Now you can’t.
I don’t know why VS 2005 had to get rid of this feature. I used it all the time, for example in the method header comments I would have a See Also: section to document the significant callers or other methods of interest.
Please add this back if you can. It was a mistake to consider commented areas as not browseable.
Thanks for the opportunity to comment. I’ve been using MS dev tools since the IBM PC was first introduced. I used to be very happy with them, but not any longer. I have to eat what you serve, but I’m here to tell you I’m not enjoying it any more.
I’ve been working for ISVs for the last 17 years. I only code C++, un-managed, no MFC, and I use STL and ATL. I’ve written or contributed to over a dozen retail Windows products in that time. Several have won industry awards. Thanks to MS I’ve been making a nice living.
Visual Studio quality is down (crashes, bad behavior, bugs, second-rate tools). MSDN grows steadily more pathetic (really my biggest complaint). Platform SDK samples diminish when they should be burgeoning. The complexity of the target platforms continues to increase, yet the dev environment doesn’t keep pace. It’s obvious MS has almost abandoned the C++ community. That translates into lower productivity, lower quality products, frustration and anger.
Some comments from MS employees in this blog seem to indicate a renewed commitment. Unfortunately, MS has a history of talking optimistically in public, but the delivery has usually not matched the words. Not that it matters. You’ll still collect your tribute, no matter what you deliver. We peons living outside the wall are at your mercy. And the white shining knight that is Linux rides a gnu, so I won’t be joining his crusade.
Most of the dozens of things that bother me have already been mentioned in previous comments so I’ll just contribute one that I didn’t see in the comments I read –
I use a pretty high end workstation for development. Compiles with VS2005 VC++ are slowwww. I’ve investigated and watched Task Manager as the compile runs on one processor while the other processor sits idle. It appears that the granularity of the compiler threading is at the .vcproj level, not the .cpp level. After some testing I discovered I could improve my compile performance significantly by removing all the dependencies from my project and doing multiple passes on my solution (yet I want dependencies, because using them avoids debugging red herrings if I forget to build enough times). So, I’d like to see the compiler’s threading granularity designed in a way that saturates all my machine’s resources.
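For what it's worth, finer-grained parallelism did arrive later: the /MP compiler switch (shipped undocumented in VS2005 and documented from VS2008) compiles one project's source files in parallel, independent of project-level scheduling, and later MSBuild versions add per-project parallelism with /m. A hedged sketch of the two knobs (the flags are real; the file and solution names are hypothetical):

```shell
# Fan one project's translation units out across cores (/MP):
cl /MP /c alpha.cpp beta.cpp gamma.cpp

# Build independent projects concurrently while honoring dependencies
# (later MSBuild versions; /m defaults to the machine's core count):
msbuild MySolution.sln /m
```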
First, after using VS 2002 and 2003 I really feel 2005 is a step up from previous versions and I look forward to the future versions.
Second, one word I did not see in any of the other comments is "Agile."
Unfortunately, I am not old enough to have written anything in VC++ 6 so I do not have a favorite feature that I would like reinstated and I have no benchmark on what the compiler performance used to be like, but I really think that if VS wants to be the C++ IDE of the future it has to move to another level. There is more than compiling, linking, and debugging now:
The features that I find lacking the most in VS2005 are those that enable me to be a more agile developer, which I really believe is the future development style no matter what language you work in. I think several posts have already touched on the missing features for native C++ to enable agile developers: Integrated Unit Testing and Refactoring.
I went to a conference a couple weeks ago and saw some Java developers do things in Eclipse that just made me drool. Right-click on a failed test and get a new class or method created for you! Holy-crap! Now you write tests to generate code; that’s the way it is supposed to be.
There do exist third party tools to enable some of this, but I don’t think they can compete at the same level as the things Eclipse (and other Java IDEs) can do. Plus, VS integration would have saved me tons of time trying to find and implement the third party solutions. And I know that C++ is a much more complicated language than Java (which is why even Eclipse doesn’t have those features for C++). Despite this complexity, there will have to be some innovation that enables the C++ developers of the future. Someone will make the next C++ IDE, the Agile C++ IDE.
During the Visual Studio 2005 beta, I posted the following to microsoft.public.vc.language:
// Start of quote
> Jerry Coffin <jcof…@taeus.us> wrote in
> news:MPG.1cab3ccbcfa1a79698981b@msnews.microsoft.com:
> > While VS 2005 _does_ seem to be better than 2003, at least the betas
> > I’ve seen so far are still _clearly_ inferior to VS 6.
> Jerry – I work on the Visual C++ IDE and my main goal in life now is to
> make VS2005 a great release for those who loved the VC6 IDE 🙂
> Can you elaborate on why you find VS2005 clearly inferior to VS6?
I’d love to.
For the moment, I’ll try to stick to a reasonably short list of the
most crucial items.
1) creating a blank file: right now, I go to File | New | File… |
(on the left side) Select Visual C++ | (on the right side) Select C++
file. This is a common enough operation that I think I should be able
to do it with ONE button press, just like I can in VS 6.
2) compiling that code: right now, I have to save that file, then
create a new project and then add that file to the project to compile
it. In VS 6, I could just click the compile button, and it would
create a new project that included the file, so I could compile it
with two mouse clicks, where the new IDE requires a dozen or so.
3) finding and replacing is sufficiently common that it should NOT be
hidden in a sub-menu — it should either be available directly in the
Edit menu, or else put into a Find menu (or whatever) of its own.
4) I’d like to see the settings for the tools (e.g. compiler, linker,
etc.) go back to a skeleton format something like PWB used to use. To
go with this, I’d like to be able to create dialogs and link them up
with these skeleton files, so I could run tools of my choice without
having to write macros or add-ins to do the job. Granted, this one
can be worked around, but it’s currently much more work to add a tool
than there’s any real reason for.
5) in the help system, specifically the index, the drop-down list box
in the "look for" entry is (much) worse than useless — without it, I
could get close to the right spot in the index, and then use the
cursor keys to move to the right entry. Now, using the cursor key
instead goes back to some previous search, which is essentially NEVER
what I wanted to have happen. I’m not sure I’ve EVER wanted what this
drop-down does, but hardly a day goes by that I don’t end up opening
something I didn’t want because of it. If this "feature" was dropped
completely, I, for one, am pretty sure I’d never miss it, but if it’s
retained, it should be moved to a separate button where it isn’t a
constant irritation.
6) once upon a time, we could make up our own filters for the "filter
by" used in the help index and/or searching. I.e. I could pick out
whatever sections I wanted in the TOC, give a name to that group, and
from then I could use that filter just by picking that name. Now in
the index, I get a small set of filters that are, quite frankly,
mostly useless. In the search, as far as I can tell, I don’t get any
named filters at all — each search, I have to manually pick the
language, technology and topic type.
7) Once I can add my own filters to the index/search pages, I’d like
the named filters to be saved with the project, so anytime I open
that project, I get those filters again. I should also be able to
make a filter global, so it’s visible at all times.
8) I also think it was better when the search window opened as a tab
along with the contents and index — right now, when I start to do a
search, it covers up the help page I’m currently looking at. I often
look something up in the index, then do a search on some terms in the
page I brought up, to find everywhere those things are mentioned.
Covering up the page I’m looking at makes that unnecessarily
difficult.
9) (Something VS 6 didn’t have either). I think the default for
internal help should be that the help window open as a separate tab
group from the source code (etc.) windows. I can make it a separate
tab group, but anytime I close and re-open it, it comes back in the
same tab group, so looking at the help covers up my source code, and
vice versa.
10) The profiler is still needed. I know the compiler has profile-
guided optimization, but this isn’t really a replacement.
11) The Add Member Variable Wizard is currently quite poorly laid
out. In particular, selections should be more or less hierarchical
from left to right, top to bottom. Right now, the "Category" drop-
down is off to the right of the "Variable Type" drop down that it
controls, which is quite counter-intuitive.
Personally, I see little point in a separate "Category" drop-down at
all — I think it would be better to just have a Control class (e.g.
CEdit) as one of the types that could be selected from the Variable
Type drop-down.
12) In the AppWizard, it should be made more apparent that the list
down the left really is a list of items that control the selections
available on the right — perhaps this changes with the color scheme
chosen, but at least with the colors I have on my machine and the IDE
in (I believe) a default configuration, it’s essentially impossible
to tell which item on the left is currently selected except by
reading the label at the top and matching it up with the item on the
left.
13) Right now, when I right-click on a class in the class view, I get
an ‘Add’ sub-menu that currently has entries for Function and
Variable. IMO, another entry should be added for "Message handler"
that would bring up a dialog much like the VS 6 ClassWizard where I
can add a message handler. Alternatively, add a "messages" list to
the dialog that comes up to add a function, so when I add a function
I can also select a message for it to handle.
// end of quote
As far as I can see, the same problems all remain in the Orcas beta…
As an in-Microsoft user who develops exclusively in C++ but has to debug managed code, all I want is improved performance in booting, loading symbols and attaching to a process.
1. I would like to see a new GUI model other than MFC (which is broken and cumbersome). Perhaps whatever is in C# can be ported to C++.
2. A new preprocessor token __FUNCTION__ which translates to the current function name in the code, like __FILE__
Thanks
NATIVE CODE SUPPORT WOULD BE AWESOME!
If I want to write something for C#, I’ll use C#. But if I want to write my own hardcore application, I’ll write it in C++. I just love performance.
1) There has been a discrepancy between intellisense and the compiler. Often, intellisense will show a type for a variable, yet the compiler will say it’s undeclared. It’s difficult to locate the missing declaration when it shows up as being there in intellisense.
2) With VS2003, the Solution Explorer would track the current file. If I had 50 files open, opening the project would bounce the Solution Explorer between all 50 of the files. That was wasteful. The really annoying part was closing a file would bounce the Solution Explorer to some other file, hardly ever the one I wanted to look at next. Then in VS2005, the Solution Explorer stopped tracking the current file, but the problem there was it’s harder to locate the current file. Ideally, the Solution Explorer could have a trigger, such as a double-click in the current file or an item on the right-click menu, that would bring the current file into focus in the Solution Explorer.
Thanks so much to everyone for their comments. It’s great to see the enthusiasm you all have for Visual C++. We’ll definitely take your thoughts into account as we plan the next release.
To summarize what we’ve heard, I’m providing the following consolidated view:
* IntelliSense doesn’t work
Performance is slow and results are spotty
* Help system doesn’t provide useful information
Loads of irrelevant hits
Far too many topics don’t include sample code
API content is “minimally reformatted headers”
Content isn’t helpful
Want a powerful local help solution
* IDE is slow and buggy
* Build speed is slow
Many people asking for distributed build/link
* Better debugger stability
* Improved scalability in IDE and debugger
* Improved ISO standards compliance / TR1 implementation
While the points listed above were mentioned so many times they receive top ranking, there’s a “second tier” of issues that were repeated more than once. These include:
* Improve interop support & kill interop support <g>
* MFC enhancements
* Bring back the Class Wizard
* Better dialog editing
* Better support for parallel programming
* Round-trip support for project files between versions
* Refactoring
* More static code analysis features
Thanks again for the comments!
Bill Dunlap
Visual C++ Development Team
The ability to mark and hide lines in the find window. I often do a find, and then investigate each hit quickly, and then revisit the ones that need further investigation. It would be nice to filter the list as I go.
And more than 2 find windows (let me name them and keep them around).
>> * Refactoring
>> * More static code analysis features
in team system or professional?
> Content isn’t helpful
I would say the content is actually extremely helpful, at least in terms of the Win32 API. Most of the functions I’ve looked at were rather well documented.
Also, the Index feature in the local help lets me instantly find any function/windows message/constant I’m after with no hassle. Don’t see what people are complaining about.
If you know exactly what you are looking for, the index is way faster than the search.
I hope you guys don’t just throw away all the good documentation you have in the MSDN and start rewriting it all.
Note: My comments only relate to native win32 API as this is the only aspect I use.
* IDE performance is poor, more if I compare with other C++ IDEs.
* Better conformance of C++ Standards
* C++ 0x support
* I do remember the old help files with VC++ used to be more useful, because right now I get thousands of alternatives when searching for something. Earlier versions fulfilled the request with a clear, simple definition plus a sample when required.
A really nice C++ feature has always been the chance of platform portability. That’s why native code support is a definite must.
The installer should install at least native-language help. MSDN is cool but it has a lot of material that needs to be filtered carefully so as not to waste time.
Years ago we decided to build a front-end using VC++, and it was a pain to implement all the features that a Windows application based on VB could do with a few simple lines of code. Today those things have not changed, and I see VC++ oriented to other scenarios, as it has always been.
I see a great challenge ahead for you guys: try to have a good balance between managed and native code. But let’s face it, native code modules are for specific purposes just like managed code is. I’d like to see the IDE behaving as quickly as with C# and VB.
Other things:
– A pattern wizard or advisor would be a nice help.
– Refactoring
Last week I decided to go back to VC++ 6 and wrote a native code application, because the IDE is faster and the help is quite a bit better, without lots of extra stuff that steals my time.
Provide the ease of use of the .Net FW in native C++ by writing useful high level libraries. XML serialisation may be a useful topical one – even if only done in a similar way to MFC’s serialization facilities.
Hi Norm,
These are great ideas and we hope to make searching way better and way more customizable in the next release.
Regarding the GUI in CLI,
I don’t know what the problem is: when we change the header file of the form, it’s very slow when we come back to the designer, while my 2-core CPU and 2 GB of RAM are still free, but my hard disk keeps reading for 5 minutes.
Is there any way to use both my cores and all 2 GB of RAM to speed this up? Otherwise adding 1 button or 1 event to the form is too painful.
Anyway, CLI C++ is the super feature of VS, at least for game developer. Most of the code for game is C++, and we really need CLI to build the tools that support the game.
CLI C++ please keep moving.
Thanks.
Regarding Visual C++ changes
I would like to see an improved linker options page within the IDE. The compiler options should actually be switches (checkbox style) in the IDE.
Linker option help descriptions should be easily available from these switches via the F1 help key. Perhaps switch conflicts could be identified within dynamic help.
Also, entering paths to third-party libraries could be improved. Perhaps a separate pop-up form for each library to be linked, to describe the path and comments about the library versions used in the build.
A working "sourcesafe" system. Something like svn. Because sourcesafe is not as safe for your code as you would expect. Especially when used by a team of developers it hurts. it hurts. it ehm well you know it doesn’t work. I don’t want to check out files. I want to check them in (like svn) and if the file in the database is newer… hey tell me (like svn). You know what. Just make a svn compatible implementation. btw with compatible I mean compatible not "compatible + ms extra’s to break compatibility".
Would be nice.
For the rest I like VC a lot.
Keep up the hard/good working.
2 things:
1. incremental linking in mixed code
2. edit & continue in mixed code
The #1 thing I’d like to see is hierarchical project settings.
I have a product with approx 1m lines of legacy C++ code. To keep it manageable and portable, I need to be able to restrict where project settings are applied to be mostly global. I was forced to use gmake because VS2005 project configurations do not seem to share settings.
For example:
1. I want our company-specific defines done globally.
2. Next, I want project-specific settings to merge with them (add defines, override defines, prepend/append include paths)
3. Next, per-file settings, if present. I’d rather restrict those
4. Perhaps at each stage have it be shared+configuration-specific (i.e. global+globalDebug or global+globalRelease)
Now, if someone sets preprocessor includes on an individual file for Debug, they don’t get set for Release and the build breaks.
I would love to see the following:
– Working forms designer with event hook-ins like we used to have
– integration of third party single user source code control. My biggest gripe here is the lack of an easy way of version stamping executables (both numerically and in text) so that the MSI can self-update and the process of turning a release build into a distribution is easy.
– integrate code signing into the IDE for EXEs, now that it is so important for Vista.
You are completely right choosing native development as a 1st priority. C++/CLI, WPF, LINQ and other cool stuff are important from a marketing point of view, but the real work is in native code. I’d like to note that I rarely see problems with Intellisense, crashes and similar things, but that’s because I work very carefully: delete the ncb when it’s big, have very neat project settings, all files in the project are writeable and so on. Also I preventively restart VS when I’ve worked too long. I delete Intermediate directories periodically. The IDE is pretty fast for me, but other people, who do not care about tuning, experience terrible delays and slowness. Why should developers have to bother with disabling the last-access bit in NTFS and other similar stuff?
0. More bugs should be fixed
– My company legally purchases all our tools, and it’s bad that you don’t service them the way, for instance, we service our products. There are pretty many bugs in VC++ 8.0, but we got only SP1, after which your team switched to Orcas and Orcas + 1.
– There is one particular not C++ related bug (Find does not work) which, if I were at Microsoft, I would like to fix myself: it’s so intriguing that VS Team cannot fix it.
1. Tools performance
– I have everything tuned, antivirus not installed, optimized and overclocked, with RAID 0 and a 2-core (so far) CPU. It works pretty fast, but some things just don’t scale well in VS. Here we have bad handling of huge workspaces and lengthy tasks which block the whole IDE without the ability to cancel.
– It’s simple: preinitialize dialogs and other facilities in the background. The macro engine is slow to initialize. Many IDE dialogs are slow when opened the first time; the second time they are fast. Preinitialize them and the impression will be that VS is very fast.
– Compiler is slow. 50 MB of C++ code with medium-to-heavy template usage and MFC containers reimplemented via STL (workaround for Windows CE 4.x-5.x HeapAlloc). And we use /Zm in some places. It’s a pain to compile. But VC6(==EVC4) is 2+ times faster for same code.
– Local F1 is slow (probably because it doesn’t find anything when it finally opens). It’s slow, I don’t understand why. It’s only a help system. I have an almost empty RAID 0 and defrag it every night.
2. Intellisense-related
– Want folder browsing tooltip when typing in include paths.
– If I change configuration or platform, Intellisense reparses all the files, but it could have parsed them beforehand for all configurations/platforms (drive space is cheap, my time is not)
– Add open this file’s header/cpp pair command (I have a macro for this assigned to Ctrl+’)
– Order F12 hints by likelihood, not by ABC
– VS2008 Beta2 Intellisense support for heavy template usage is better but it fails anyway
– Improve Find All References. Ideally, I would like to be 100% sure that it lists everything so I don’t have to double-check with grep.
– My application is being debugged, CPU consumption is only 1%, Intellisense thread is stuck. Why?
2. Project settings
– PLEASE, PLEASE backward/forward compatibility between VS version N and N+1. I work on N+1 version for 6-8 months after VS update while all other team works with VS N. It’s a pain for me to merge their changes to N+1 vcprojs.
– DO NOT (X) reorder vcproj files. Aaaaaaargh!!!! Aaaaaaargh!!!!
– Options intersection like in VS6. But, on the other hand, also leave an ability to clear a setting in all multiply selected projects/configurations.
– Drag’n’drop and clipboard support for vsprops. Want to add a vsprop to multiple projects with 1 click. We have to edit xml manually.
– Macro API to manipulate vsprops (didn’t find one).
3. Compiler and libraries
– We have Rational Purify; it complains about uninitialized memory reads (UMR) inside MS STL or the CRT. I debugged the CRT complaint and yes, your vsprintf series (harmlessly) does UMR in some conditions (the bug is in Connect)
– PREFast – more warnings, naming rules, better control
– Standard: exception specifications, and forget about export templates.
– ARM binaries are larger than ones compiled with EVC even without RTTI and exceptions. Hard to squeeze into Windows CE virtual memory.
– C++/CLI support (or better interop support) on Windows CE 6.0 and above.
4. IDE
– IDE is a face of the product. But MS delegated its development to lower-class, less-educated people. I understand that: I would rather like buggy Resource Editor than buggy Debugger or runtime, but it’s a face of the product anyway
– More abilities to drag’n’drop items within VS IDE. It is very important. I don’t use drag’n’drop between applications, but within IDE it’s natural and it does not work most of the time.
– There can be a lot of suggestions about the IDE but you have lots of C++ devs and I’m sure you can get feedback pretty easily.
– Wizard generated code MUST generate forward slashes, not back slashes in C++ code include paths.
5. Licensing
– Consider Professional+ edition with PREFast and Profiler. Add only $100-200 to Prof price. This will make Microsoft(!) Windows(!) applications better.
What:
Typeof operator for compile-time type evaluation (not typeid). Often implemented as a compiler extension (IBM XL C++ V8, GCC) and likely for C++0x.
Why:
Valuable for templates, safer macros, and refactoring.
Examples:
Safe max that evaluates its parameters only once:
#define max(a,b) \
({ typeof(a) _a = (a); \
   typeof(b) _b = (b); \
   _a > _b ? _a : _b; })
Declare a variable of the type x points to:
typeof(*x) y;
Iterators (until the auto keyword appears in C++0x):
std::vector<int> vec;
for (typeof(vec)::iterator i = vec.begin(); …
Current workarounds:
A combination of template specialization, function pointer overloading, and a macro, as shown by Steve Dewhurst in Dr. Dobb’s Journal, by Bill Gibbons in 2000, and in the Boost library. Lack of built-in compiler support currently forces each type to be preregistered before use.
Feasibility:
Realistic, since the parser already knows the type of each expression.
Thank you
Guys,
A few comments regarding VC++ 2005.
I think MS should focus on fixing bugs in VS2005 instead of offering another buggy IDE. I have been using it for about 6 months and honestly I haven’t seen such crap in my life. Before, I used VC 6.0 and it was 10000x better; I had to switch to the newer IDE just because I changed jobs. Comments are:
+ linker is incredibly slow. the same solution which took no more than 15 secs to link under VC6.0 NOW TAKES up to 20 MINUTES under VC++ 2005. I cannot believe how MS could release such crap, really. I had to raise an incident with MS just because it was not possible to work with the new IDE. No fix so far, of course.
+ ide is buggy – floating windows jumping around, suddenly changing positions. Guys, if you do not have means and abilities to develop decent software, please do not introduce new ideas. Test more!!!
+IDE crashes very often – not sure why. I have been using VC6.0 for about 6 years now and it has never ever crashed. I have about 5-6 crashes with VC2005 EVERY DAY (about 10 hours dev a day). This is unacceptable.
+IDE is very, very slow, again in comparison to VC6.0 – a quad core computer with 3 GB of RAM has difficulties coping with two instances of VC2005. Takes up to 2 mins to close a solution with 50 projects; just a few secs for VC6.0.
+Once you decide to add new features – do it properly. Look at the class viewer and code completion and compare them with Visual Assist X (Whole Tomato). Perhaps you should send a couple of guys for consulting there. Same with "PARALLEL COMPILING" – learn from the Incredibuild guys, a simple light plugin which does magic.
Conclusions – focus on fixing the existing bugs instead of trying to cheat developers by selling them beta software.
One more thing – make sure you distribute the next version of VS as a trial version, our company is not going to risk 60 licenses paying for ….
Lex
Layout/UML – it exists on the managed side as the class diagram view. Symantec had it in 1996 or earlier.
Edit and continue in debug. As someone said about native DLLs, we still need depend.exe.
Find and Replace: go back to the VC6 format with the original key commands. The current version is too much like the VB6 version, which was too mouse-dependent.
I would like to see optimizations around inline assembly, as well as inline assembly for x86-64 and ia64 (Itanium). The C runtime library could use a few optimizations in some of the basic functions such as memcpy(). In fact, it took me all of five minutes to produce a version of memcpy() that performed significantly faster than the SSE2 implementation in the CRT.
Micro-optimization? Perhaps.
Almost 25% faster? Definitely.
Considering the almost global use of the CRT, making some of the components faster may add a slight performance boost to most applications in a small way. Again, not big, but everywhere.
I would personally enjoy seeing better integration of DirectX into VC++. It seems that after all these years, installing the SDK doesn’t integrate it into VC++ the way you would expect. For example, there is a lack of templates and no linking to the correct libraries without manually adding them. For the libraries, a wizard that lets you select which libraries you want linked would be very nice, as would a way to change them later via a wizard rather than manually.
Lex, if you have a specific solution where you can show linker regressions please use this guide to get us a reproducible case.
If you have already filed a bug please give me reference to it so I can follow-up.
Rick Benge
Visual C/C++ Program Manager
Dwayne: this feature will be added to C++-0x – though the syntax will be decltype(x) instead of typeof(x). We are currently evaluating what C++-0x features to implement and when and this feature is definitely on our list.
Jonathan Caves
Visual C++ Compiler Team
I am satisfied with the compiler (good job!), but the IDE and frameworks are horrible.
1. Intellisense just plain doesn’t work. Each release is worse than the previous. It is too slow, often doesn’t find anything and messes up the project definitions if you have multiple projects in the solution.
2. Performance of the IDE. Again, each release is worse than the previous. What exactly is VS2005 written in? I am really curious. It’s way too slow and unstable.
3. The resource editor has been a joke for many years. Could we not at least be able to _look_ at bitmaps with over 256 colors?
Also, put me on the list of people that want a viable GUI framework/designer for native code. MFC is old and had design problems from the start. WTL looks interesting, but there is no official support so I dare not even consider using it commercially.
To use two languages (C#/C++), or two different "paradigms" (managed/unmanaged), in the same application is NOT ideal IMO. I totally fail to see the advantage of managed code, and I can’t pay the performance price even if I wanted to.
Why don’t you make a native framework and put a managed wrapper on top of it instead of doing it the other way around? Makes no sense to me except from a marketing perspective.
Oh, and frameworks need to be able to statically link. I can’t force customers to upgrade. Many big organizations are _very_ conservative and are still running Win2k. They also don’t appreciate downloading and installing the latest version of an enormous runtime. Many of their computers are not even connected to the internet for security reasons.
I really feel that MS entire idea has been to bring programming to the masses and lock them into a proprietary language. The professionals with large projects that need performance and interoperability are left out in the cold. I am sure it’s easy to write a widget (gadget?) or some standard database front-end in C# though.
Thanks for listening; I hope something comes out of it.
Hello
Many people on this thread have highlighted issues with MSDN Documentation. The VS Documentation Team is currently conducting a user survey. Since we know that many of you have strong feelings and innovative ideas about what we need to do to move the documentation system forward, please feel free to take the survey and let them know how they can best serve you:
You can find more information on the survey at Kathleen’s blog:
Thanks
Damien
Why don’t you release the new Orcas +1 version as a service pack to Orcas (Visual Studio 2008)? The fact that we need to wait for another 2-3 years to get the next _most_issues_fixed Visual C++ doesn’t sound exciting for me.
Thanks,
Arun
Hi,
I tried the class diagram feature of Visual Studio 2008 for C++ (managed as well as native) and it didn’t work very well.
Could you please enhance the class diagram feature for C++ so that it will work with managed C++ as well as native C++.
I write all my code in C++, since I need to interface with device drivers as well as do quite a bit of native Win32 calls.
regards,
Yogesh
Hi, Yogesh,
Thank you for your interest in C++ Class Designer.
For some reason, managed C++ was not supported in VS 2008. However we did take that into consideration and we know it’s an important feature for C++ users.
We will do our best to make C++ Class Designer better in the future.
Thanks again!
Yang
I like VC but I but two things would make it much better.
1) It sometimes allows you to write code that is not exactly ANSI C++ compliant, allowing ‘bad’ code to compile that would not compile with other compilers such as GCC.
2) a data-display debugger that made graphical representations of the data and/or an omniscient debugger such that you could debug back in time.
Thanks
Ardavon
good integration (configuration?) with msdn!!!
e.g.:
i write native code in winapi
write ShowWindow and press F1
what opens in msdn?
COleControlSite::ShowWindow !!!
i know… in "f1 options" i can select correct function
but i MUST select it…
possibility to configure search during "f1 press"
e.g. – only in win32, not in mfc, not in dot.net, not in atl, not in…
I’d like an IDE that didn’t keep forgetting my settings of where the windows and panes go. About 20% of the times I launch an app for debugging, VS.NET squishes the build/debug area to an unreadably thin strip across the bottom of my screen. Very annoying.
I posted before, but forgot one item I’ve always wanted. It is a relatively simple feature compared to many things asked for here.
I wish "Find in Files" and the general "Find" command had a language-sensitive checkbox named "Ignore commented out text." Even when commented-out code uses C++-style commenting, making it possible to tell the line is commented out, where I work there is a huge amount of code like this, which makes it very difficult to find what I’m looking for.
While I like that the "Find" commands allow RegEx searching, it’s not easy to use, and it would be slower than a built in feature.
Some thoughts, based on a 1+ million line C89 codebase:
– I echo comments about F1 help – it is way too slow to start. Also, it’s too hard to restrict the results to the area of interest: e.g. try looking for info on Win32 calls and you’re bombarded with info about SQL, or about managed functions. If nothing else, provide a flag saying "native, managed, both" in the help.
– I work with 1million+ lines of C code – not C++. There’s little chance of this being updated to even C++ let alone C#. It would be great to see the C99 and C0x standards fully supported, and more work on C-capable browsers (rather than just class-based).
– Managed interop is of almost no interest to us.
– I frequently get tripped up because the C compiler is more forgiving than other compilers (esp. gcc 4), and it would be nice to have a "strict" setup that complained, for example, about // comments in C89 files, or about various typecasts. I do understand that you can’t "emulate" gcc, but there are things that could be done here.
– Please make Intellisense work in C mode. When I used to use C# it was great. In fact, make the C/C++ VS setup work as well as C# setup, (other than where it’s impossible 🙂 )
– I frequently work in an environment where one of my own projects (a DLL) has its solution open, but the overall app involves code from other DLLs (which have their own solutions) and the main executable (with its solution). When I’m debugging I usually open "my" solution, and the debugger will happily let me set breakpoints, etc, in the other source files, but various other things (e.g. Find All References) don’t work. It would be great if you could tell the debugger about all the components involved.
Note: each solution here is an _individual product_ with its own config and branching, so it’s not appropriate to have them all bound together in one solution. I suppose I’m asking for the ability to create on-the-fly super-solutions, consisting purely of other more detailed solutions.
– Provide improved ways to copy/duplicate subprojects from one solution to another where there are common libraries.
– when adding items to projects, have the open / add dialog remember previous settings – especially the file type. I frequently add "non-C" files to my projects, and find it irritating that I have to keep setting the file type to All files.
For that matter, adding a subtree would be great, too. That is, point an "Add Existing Tree" dialog at a root dir, and it adds all files found, using "Filters" to represent directories.
– STANDARDS – I hate it that on Mac, Linux we have C89 compatible functions (e.g. snprintf, ftime), but on VC we have _snprintf, _ftime. Why? When I learnt C, the convention was that leading underscore represented an internal, system function that you never call, but snprintf is a standard C library call! Please don’t say "but you’re supposed to use …" because we are constrained by the other platforms too.
– Improved visualisation support in the debugger. It’s got better over the years, but how about something like a visual graph of the call stack, or dynamically created & updated data relationship diagrams?… think out the box here!
Oh, and by the way can we have this all for last Tuesday 🙂
HTH,
Ruth
The two things that get me are speed of opening a project and stability.
My machine is relatively new, <1yr, but whenever I open a project I’m dead in the water for a minute while my disk spins and intellisense does its stuff. (I defrag my drive daily.) We do use boost, so there is a reasonable amount for intellisense to handle. If I touch my machine during this time, apart from it being totally unusable, the interaction appears to hang Visual Studio, which never completes the open process.
Earlier this week I tried very hard to debug a problem in an exe containing 1 cpp that linked to a couple of small static library files and a reasonably large dll. Every time I stepped into the dll VC crashed. I tried all the usual: deleting the solution file, the ncb, the Debug directory, to no avail. I keep hitting "send to Microsoft", so you should have plenty of dumps for this 🙂
When I exit VisualStudio, the devenv.exe process hangs out forever until I reboot my machine or crush it in task manager. When I notice things getting really bad I’ll go into task manager and see 4 or 5 instances of devenv.exe just sitting there.
VisualStudio routinely crashes on me. Usually when debugging, but not always.
Other annoyances are:
Help. This must get worse with every release. When I go to the help for an API call I end up in CE or MFC land. We don’t use CE and chastise anyone who writes new code with MFC. Ideally I’d like to prioritize the order since we do have legacy MFC. Otherwise I’d wish to disable MFC and CE help and never see them again.
I see the "Visual Studio is waiting for some internal operation to complete" dialog far too often; daily. Please complete this operation in a timely fashion.
On a good note, I am constantly surprised when I discover some new feature that is a joy to use. Alt-O is my latest joy.
Thanks,
Steve
Game developer here, pretty much only C++ dev for engine stuff.
I may not have looked hard enough, but I would like to see per-solution settings that can add/modify things in individual projects.
For example: I’d like my solution file to be able to #define UNIT_TESTS for all the projects present. gcc and make can do this with "-D UNIT_TESTS" but I haven’t found a way yet to do it per-solution.
Anything you can do to better manage 10 different projects with 16 different configurations per project on 3 different platforms would be nice. It sucks having to add an include path 180 times.
Other things:
– Make the platform|config name show during builds and when you’re running. Seems like a small issue, but it’s weird how many times I forgot while building or debugging something what I had selected. Instead of hiding those combo boxes (e.g. for "Debug, Win32"), just grey it out.
– Maybe not a VS thing, but perforce plugin integration does all sorts of weird checking out strange files and not modifying them. Just make sure the VS integration is tight with this stuff. Also, adding a new .cpp/.h file to a solution does not automatically add it to source control, which is really annoying and is the highest cause of broken builds around here. Could be a perforce plugin thing though.
– Echoed sentiments on the Visual Assist X; the find files… (Shift+Alt+O) has made going to the solution explorer completely unnecessary. Check this out, seriously. You go there, start typing part of the name of a file, scroll 2 or 3 down the list, and you’re in.
– Edit+continue == SUPER important with games. Make the output be a bit clearer when it can’t change the code instead of the dreaded "your 2 line change needs template ridiculousness XYZ…" insane output.
Also forgot the most important thing:
When I press Ctrl+Break, stop the freaking build. Don’t keep going, stop. Also, if there’s a syntax error in one project, stop everything right there, don’t keep going. At least make this an option.
00:00 Me: ctrl+break
00:01 VS: hang on hang on I’m doing something
00:05 Me: ctrl+break ctrl+break ctrl+break
00:10 VS: ok fine!…..
00:25 there! done.
00:26 Me: good work, now throw that away.
Ruth,
I represent Microsoft on the C standards committee. Visual C++’s C99 standards compliance has been driven primarily by user interest, so it’s great to hear from C users like you. Where we’ve received many requests for specific features, we’ve gone ahead and tried to implement them (or analogous features). A couple examples of these are variadic macros, long long, __pragma, __FUNCTION__, and __restrict. Apart from your general interest in seeing Visual C++ fully support C99 and any future revisions to the standard, are there particular features you would find most useful in your work?
Arjun Bijanki
Microsoft Visual C++ Team
My requests that are different than those already requested:
1) A code prettifier built in, with options. This should be integrated into SourceSafe so that when a programmer checks out the code they automatically see it formatted the way they like it. Code checked in should be formatted according to a company standard (if specified) so that if it is pulled out outside of VS it looks correct. This feature would spare many a manager from listening to their employees whine endlessly about how their co-workers format their code.
2) Integrated documentation like doxygen, but with intelligence about parameters, return values, etc. like in C# XML documentation. The output should also always be present as if it were part of MSDN documentation. Libraries built by the company are just as important as Microsoft and other standard libraries and coders should have quick access to the documentation. Integration of links to MSDN documentation (types, classes, etc) should be present. If code is manually refactored by a coder, a //TODO: comment should be added to update the documentation.
3) Several individuals have requested image insertion in code. I would take this a bit further. It would be really cool if you could embed a Visio document, e.g. I would want editing the embedded object integrated with SourceSafe, as well. Of course with images and other embedded objects you should have a +/- option to collapse or expand them. An alternative would be to do this within the documentation generated by 2).
4) Definitely one that was mentioned once that should be repeated over and over: Make the task list global across the project and not just relevant to the current file. Have the ability to prioritize the tasks and group by file or custom defined grouping, for example Bug #555. The format could be something as simple as:
//TODO: <BUG #555> Fix blah, blah, blah.
I’d definitely agree that native code should be a priority in Visual C++. Personally, I’d prefer some kind of plug-in handling for C#, VB, etc, since Visual Studio is becoming increasingly bloated. It would be really nice if the plug-in system could be configured by simply deselecting the plug-ins that you don’t want to load.
I also think that most experts who use Visual Studio for performance-intensive or low-level development are much more interested in IDE improvements than in seeing more of the CLI features. There are two simple reasons for Visual Studio to be as popular as it is: 1) a great editor/IDE, and 2) a great debugger. Don’t screw it up!
Some practical IDE suggestions:
– Dragging and dropping files onto the solution explorer to add them to a project stopped working in VS2003 (or was it VC2005?). I’d like it back.
– When working on larger projects one often finds that it would be nice to have a project merger/splitter. Imagine that you could select some of the files in a project and click “move to new project”. All include file paths would be adjusted according to some basic rules (I did not say ‘wizard’). This feature could be huge if you also add refactoring to it, so that namespaces are automatically changed, inserted and removed where applicable! Without this tool one has to do these steps manually, which takes hours for larger projects with hundreds of files.
– Help for the Win API should load instantly (“F1”). Now it takes 10 seconds, which is totally unacceptable! Some kind of plug-in handling for the different help systems could ensure this (see above).
If you guys aren’t careful, you are going to get smoked by the Linux people.
KDE is already better than Visual Studio for pure C++ development. The source formatting is better, the project management is better and the source control integration is better. Even their intellisense is getting pretty good. If they ever got KDE to run on Windows, you’d lose for C++ development…
It’s that you haven’t done anything real with Visual C++ in a long time, and other people are catching up. C++ for some people means C++: actually having portable code that compiles and works on different platforms.
1) Get rid of the ads in the environment. The start page is stupid. If you are going to keep the start page, then, have a button to clear the recent projects -on the page-.
2) Get your documentation in order. For C++, the F1 key is now basically useless under VS 2005. I am shocked, after installing VS Pro, to find that F1 on CreateWindow doesn’t work. How’d you manage that? In fact, MS help is now so useless that most people I know use intellisense to find things to google on. It’s terrible. And where did the OpenGL documentation go? There in VS 2003, gone in 2005.
3) Having the windows bound to the coding activity was a nice idea, but it ultimately fails. In general, the VS IDE is a train wreck of icons and blinking things… what I really like is the way KDE does it with their bars on the side.
There is something that I really would like to see in a future version of VC++ and that’s a debugger for built exe files.
That is because I am currently in a project that cannot be debugged because it depends on another application’s data input that cannot be sent manually…! Frustrating, eh? Ask me, I’m in the middle of developing it!…
I agree with most people here that VisualC++ 6 was the pinnacle. It was lean and mean.
Here’s a couple of features I would love to have.
* Customizable syntax highlighting. Let’s say I want syntax highlighting for ASM files and nvidia CG files. As a workaround I add all the keywords to usertype.dat and then add the extension to Options->Text Editor->File Extensions->C++. The problem is that all the keywords are then enabled for all cpp files. It’s annoying to see mov eax highlighted in a C file.
* Documentation for autoexp.dat. There are so many good things you can do with it, but it’s a shame that you have to rely on internet searches and arcane experimentation to achieve anything.
* The MASM support is now better than ever in VS2005, but it would be great if it was a first class citizen and popped up in the New File dialog.
* I would like to see folder support in the Solution Explorer. It would help a lot if you have lots of projects and they can be grouped in categories.
Thank you and keep up the good work!
Cheers,
Peter
Todd,
Thanks for your excellent feedback. I can’t address all of your concerns with this reply, but I thought I would mention a quick tip regarding the Start Page in case it improves your experience at all. If you find that the Start Page isn’t useful it can be disabled. First go to Tools then Options in the menu. Then, see Startup under Environment in the tree. Under "At startup:" choose "Show empty environment" and click OK.
To many others whose comments I read and could not address directly,
Thanks very much for your insightful feedback and taking the time to write it up. I’m hoping that we can use it to improve Visual C++ in the future.
Thanks,
Richard Russo
Microsoft Visual C++ Team
I think improving c++ is a complete waste of time.
We have an enterprise c++/mfc application with a large code base.
Ideally we would want to restart the application in .Net, but we would never get the authority to do this from our management.
Therefore what I want from c++/c# is to create a new nice .Net application shell and plug all the old c++ dialogs and views into it.
I don’t want to have to write a load of code to do this or convert each dialog.
I then want to write all new "dialogs" as windows forms.
I’d hoped to be working exclusively in .Net by now, but the interop is so poor, in my opinion, that this has held us up for ages.
As well as the actual difficulty of interop, there is the speed of building, linking and debugging.
It’s so slow it’s unbelievable. It’s always doing unnecessary full rebuilds/links (this may be addressed in 2008). It’s also incredibly slow to debug.
We have to create all new c# forms in a simple test project as trying to do it in our main project is just too slow. I can’t sit and wait for a full rebuild every time I want to check the line of c# code I’ve just added.
I’d also like some work done on CWinFormsDialog to ensure that it does actually work with the more complex .Net features.
In particular I’m talking about Visual Inheritance, where the user control in the CWinFormsDialog can be a derived one. I’m not convinced this works well.
I think the .Net designer needs to handle Visual Inheritance better to further encourage people to completely move to .Net.
Perhaps the fact that there are separate c++ and c# teams is why interop is so hard…
I think there is a certain brand of programmer who wants life to be unnecessarily difficult and would like to still be writing code in notepad/emacs if they could.
I just do not see the problem some people have in things being made easy for them in the new .Net languages.
In summary, having easy interop would completely remove the need for other c++ improvements.
Better debugging for compile-time issues.
Today I’m tracking down two different issues:
We have a dependency problem in the main source tree that is causing massive rebuilds. And I’ve got a #ifndef guarding some code that intellisense shows as active even though the guarding symbol is defined. No idea why, but the guarded definitions appear to be in force since code that uses them compiles when it shouldn’t.
Nothing I’ve found gives me any visibility into these problems at all: the only tool I have is Zen. And I can’t even do trial-and-error debugging on the dependency problem quickly because there is no analogue of the unix make "-n" or "-t" options.
While we’re at it, for configuration control I’d like to be able to see all my project settings for all my configurations and platforms in some kind of master view that helps with "which of these things is not like the others" analysis. As it stands I’m going to have to parse the XML and write a bunch of design rule checking code.
As a research guy, I would like the see the following:
1. OpenMP 3.0 support
2. OpenMP thread tracing & debugging
3. Less painful 64-bit migration
Hi,
I would appreciate this feature: in the C++ debugger watch window, be able to also sort the members in alphabetical order. We have some classes with really long lists of members.
An even more advanced function would allow filtering the members 🙂 by substring, for example.
Best Regards,
Petr
You know what would be really nice? Increasing the string literal limit from its current 2k limit so that compiler error C2026 isn’t hit all the time. A quick search shows that a lot of people hit this limit.
A post on microsoft.public.vc.language from a VC++ MVP indicates that the C++ standard suggests, but does not require, a 64k limit.
Meeting the 64k limit suggested by the standard would be really nice.
I am glad to hear that MFC is going to be updated in VC2008. It is such a sad story for Microsoft that we serious C++ programmers have had to wait several major releases for such an update. Meanwhile, Microsoft has invested so much in producing all kinds of shitty .NET managed stuff (C#, Managed C++, C++/CLI, Interop, Workflow) that seems forever half-finished. The IDE crashes for no apparent reason. Design views of forms won’t open, etc. etc. You should be ashamed of yourselves because you are wasting precious time for people on a large scale. Seriously, some key people at Microsoft should be fired for producing so much junk in the past several years.
My advice to Microsoft: Do not spend your time to produce all kinds of products/technologies that are unreliable or half-useful. Focus your attention to a few products/technologies that are useful and make them robust.
By the way, stop calling native C++ unmanaged C++. Instead, call the managed C++ or C++/CLI you produce bastard C++.
Well, in the 6 weeks since I last commented here, I’ve implemented CLR hosting in my native C++ DSP application, to support 3rd party .NET plugins.
The most obvious shortcoming in Visual Studio is the inability to debug managed code called by ExecuteInDefaultAppDomain() and
ExecuteInAppDomain(). These are the key functions used in unmanaged hosting.
The whole unmanaged hosting area needs to be rethought out and given first-class citizenship. It looks to me pretty ad-hoc (I guess most of the documented functionality is there because of the SQL Server team’s enhancement requests).
Josh.
Our application has around 60 different modules (dlls), each managed by a different development team. We still use the NMAKE utility for building our application and the modules because we can specify similar settings for all the projects and control them from one shared file that provides all the optimization and compiler switches. This way we can ensure all modules compile with the same compiler switches and optimizations. We definitely want to retire this system and embrace the new world, but it would help if there were a way to provide a template vcproj file that would specify the compile-time switches and other options, with all the project files linking to this template file. At compile time they would pick up the settings from the template file. This would be a huge help for every large-scale application that is modular.
abhijit said:
> if there was a way to provide a template vcproj file that would specify the compile time switches and other option and all the project file have a link to this template file
We added this feature in VS2005. The feature is called "project property sheets". Here’s a link to how it works:
Thanks
Tarek Madkour
Microsoft Visual C++ Team
I’d like to see in the next Visual Studio release a designer for windows in native C++ code. Similar to the Windows Forms designer, but with Win32 API functions and classes. I hope it will be possible. Thanks
Dear Friends, I have been a developer since Microsoft began. I am currently finishing the development of a very large program. It is composed of about 280 dialogs. When I use the Professional C++ ver 5.0 debugger everything is OK. However, when using the final EXE, every so often a message appears on the monitor saying that Microsoft has found an error and that the program will have to be started from the beginning again; this does not happen when using the debugger. I have searched to find out what this error could be, and I came finally to the conclusion that this fault is the making of Microsoft’s programmers, and I would like to see it corrected as soon as possible.
Will you do the correction of this error as soon as possible?
Yours
Frederick Post
Please make visual studio capable of producing identical binary (.obj, .lib, .exe )files when compiled from identical sources. The current state is that if a person checks out the source code from version control and builds everything, the binary will be different from the one that another user gets, preventing proper checks for build consistency, detecting what changed, etc.
I would also like to see an alternative to date-based dependencies – they don’t interact well with version control, and there are a lot of ways for them to screw up. I would like to see:
.objs, .libs, and .exes contain md5s of the source files and .libs (as well as compile options and compiler version number) which were used to produce them. When they do not match, that causes a build to happen. This information should be strippable from dlls, exes, and libs.
In addition, there would be a tool, which could look at a .lib, .dll, or .exe, and would check these checksums for errors. In particular, it would complain if a .exe was built out of source files which had been compiled with a different version of the same .h file.
It has probably been said before but I’ll say it again.
The help system needs to get a lot better. It used to be great in the good old days of MFC, but these days it has gotten terribly hard to find what you are looking for. The information is often in there somewhere, but for some reason you keep getting help about everything but the thing you are looking for. Even pressing F1 on a method in code doesn’t work well anymore.
If I’m in a C# code file with the cursor on a .NET Framework method and press F1, I’m probably not interested in some FoxPro documentation, right?
I have been recently working on a project involving managed/unmanaged code interop. Few thoughts:
1) There is no simple/elegant way of "marshalling" events from native C++ code to managed C++ (in VS 2003 one would write a nested unmanaged class – overriding virtual methods of the base native class – inside the managed one, and invoke the managed class’ methods when the unmanaged class’ virtual method is invoked). VS 2005 does not allow nested unmanaged classes within managed ones. This is a BIG PAIN.
2) Managed C++ classes do not allow multiple inheritance. I would welcome at least a way to inherit from one class and multiple interfaces (or pure virtual classes) – the way it is done in C#.
3) Would it be possible to "unify" exceptions in the next release of VC++ (native code)? Right now one must handle C++ exceptions and SEH exceptions separately (not to mention that SEH does not allow proper destruction of the classes instantiated locally within the same method that uses SEH – one has to write method "wrappers" to use SEH and smart pointers in the same code). It would be nice to have a compiler switch that would cause automatic translation of Windows SEH exceptions into native C++ exceptions before they reach the code proper.
4) I would really appreciate native-code version of the XML parser library included with the future VS C++ compiler.
5) The wizard generating C++ projects always uses managed runtime libraries (DLLs). It would be a nice touch if native runtime libraries were used by default when generating native-code binaries (exe, dll).
6) It would be an advantage if future VS included option (a compiler switch) to support writing code that is POSIX-compliant. It would help writing portable applications. (Is it just a dream???)
Thanks,
Peter
I have not read all the comments above but feel that my requests might be unique.
1) I use MSProject with the team to develop clean outlines of effort. I would like to put the project file in the solution and have it open in a tab.
2) I would like to open VSS in a tab in the IDE.
3) I would like to have a sound play at the end of a compile/build when it is success and a different when it is in error (this is a BIG deal to us!!).
4) I would like to have an editor I can use in DOS. I know this sounds antiquated, but for the most critical systems we have telnet/ssh running, and using a telnet option is something real we have to deal with. IDEs are cool, but C++ in a terminal has its upsides too. No intellisense, excellent performance … so very awesome! Please just kick this idea around.
beginthreadex, you can use EDIT.EXE to edit text files in DOS, or you can use *any* third party editor you want, such as vim (for dos).
Unfortunately, EDIT seems to be a 16-bit application, and as such runs in some sort of virtual machine under Windows; as a result it’s slow.
But you can use VIM for dos or MicroEmacs etc, and they should be fast.
Besides the obvious, mentioned several times above:
* better C++ conformance, keeping pace with TR1, better
* *much faster* intellisense (our largest solution’s .ncb is 100MB!)
There are 3 main things which would make us more productive. When working on maintenance of a quite large C++ product, we often have to jump from one bug to the next, awaiting code review of a proposed fix, or input from a key developer, or whatever. In the meantime, you must move to another bug, but then all the breakpoints and open windows related to the previous bug either interfere with the next bug, or you must close/remove them.
The ability to save and restore "sessions", which would remember window positions, breakpoints / watches, documents open, etc… would definitely help. Keep these around, and when a bug is reopened by testing, you jump right back into the thick of it, which helps a lot with the mental context switch required.
From the same maintenance-developer POV, and for general debugging, more powerful and robust custom views/viewers of app data types would help a lot during debugging. AutoExpand and the new visualizers help, but they’re almost undocumented, break the IDE if improperly defined, and could be much better. Similarly, a tutorial and more in-depth examples of writing more useful viewers (od-like, with byte-swapping features) in add-ins would help.
Finally, stop assuming a monolithic app that fits in a single solution. We have a main solution that weighs in at more than 500 KLOC, and over 30 small to mid size other solutions which are plugins for the main solution. And that’s not even counting the hundreds of plugins developed by our clients using our SDK. We’ve had to write scripts to generate startup .bat scripts for the IDE to properly configure the inter-solution dependencies. Intellisense doesn’t work across solutions (loading .ncb manually is too cumbersome), and dependencies between solutions have to be declared outside the IDE.
We need to have better inter-solution interoperability, and a way to declare some kind of SDK descriptor for a library or set of libraries to automatically adjust compile flags, include paths, lib paths, etc… for using this sets of libraries as a dependency in a project, i.e. a level of abstraction above raw settings you have to manage manually, similar to what you can do within a solution’s projects.
Thanks for hearing me out 😉 –DD
I have encountered a problem: the low efficiency in VC++ 9.0 nearly makes me crazy!
For instance, when we use the sort function in the STL to sort 200000 numbers, the time elapsed is nearly 6 seconds! But in VC 6.0 the time is only 0.2 seconds. Why is that?
Displaying iterator values in the "watch" window would be great.
1. Tooltip values for the reverse iterators are invalid.
2. "#pragma region r" doesn’t work – it is not collapsed
when the file is being opened.
3. It would be useful to be able to go to the form from the class.
I’m a native developer, but most of the time developing could be easier and faster with just some new features (as I know you are doing one of them, for the icon designer),
but what about new features to design dialog boxes? For example, more properties for controls, or other controls that currently must be created with APIs rather than the dialog editor… showing bitmaps on buttons, changing their colors, etc.
Add a drag and drop feature that allows adding a new source file to the Solution Explorer window _from_ the main Visual Studio container where a file is currently open (the user would select the tab containing the filename and then drag/drop it).
Thank you.
[Kevin]
"You know what would be really nice? Increasing the string literal limit from its current 2k limit so that compiler error C2026 isn’t hit all the time."
MSDN documentation indicates that the limit was 2K chars (or 1K wchar_ts) in VC7.1, but it is about 16K chars (or about 8K wchar_ts) in VC8 and VC9. While this is not the 64K suggested by the Standard, it’s certainly "pretty big".
[Sapphire]
"For instance,when we use sort function in stl that sort 200000 numbers,the time elapsed is nearly 6 seconds!But in VC 6.0,the time is only 0.2 seconds,that’s why?"
I was unable to reproduce this. I compared VC9, VC8 SP1, and GCC 4.2.1 std::sort() performance on a vector<unsigned int> of 200,000 values produced by Boost/TR1’s mt19937 random number generator seeded with 0x17012161UL. VC9 was 5% slower than VC8 SP1, and 36% slower than GCC 4.2.1; these are minor and significant, respectively, but nothing like 30x slower.
Are you comparing VC6 to VC9 in release mode with optimizations enabled? Note that if you’re compiling through the IDE instead of through the command line, there’s a bug in VC9 that drops the /O2 flag in Release configurations when a project is converted. (This will be present in VC9 RTM but fixed in VC9 SP1.)
Please post a self-contained repro to the MSDN forums, Microsoft Connect, or directly to me at stl@microsoft.com .
Thanks!
Stephan T. Lavavej, Visual C++ Libraries Developer
It would be nice if Visual C++ developers concentrated more on making it a RAD Development tool for native code. C++ does not really need CLI and managed code. C# works great for that! The Borland Turbo Explorer C++ is currently the best tool for fast native WIN32 application development. Next to that is VB6. (Assuming non-trivial GUI on a well polished application)
Writing GUI code in WIN32 or MFC is no fun at all. Scrap MFC and come up with a FAST, LIGHTWEIGHT, NATIVE win32 library. Otherwise .NET apps will start to be written just to run on other O/S’es.
Loading a solution created on a Windows install with a different language shouldn’t crash. I don’t want to change the OS localization settings every time I build the project.
— Fix the bugs! Especially the long lasting annoying ones. E.g., there has been a minor but annoying bug from day one of the IDE where it miscounts errors, especially in the link phase. It would not take a genius to correct this one. I suspect all it does is count the word error in the output. This is much too simplistic and reflects badly on Microsoft’s ability to implement technology.
— Each version is slower than the last — perhaps you could try rewriting it in C++ rather than the current C# to speed it up.
Bring back floating editor windows. Perhaps they are there, but I cannot find them (then it is a UI bug).
It has been said often, but I say it again:
– Please fix intellisense! This is such a major pain in VS2005. I have VAX and renamed feacp.dll to get rid of that uber-parsing. But this can only be a workaround; w/o that dll you cannot add ATL objects to ATL projects etc.
– Again an often mentioned topic: even better standards support (TR1, C++0x)
– Iterator debugging/slowdown in release builds via SECURE_SCL. I think it is a great thing in debug builds. But I agree with the others that it is dubious to have it enabled as default on release builds.
– The resource editor/class wizard: I’m surprised that no one in our UI team has gone postal yet. That piece of the IDE just got *evil* with 2003. I do not know how much the guys of the UI team investigated or if they just resigned. But it seems to be nearly unusable when you want to add new dialogs to an old MFC project that has a separate resource dll.
– Please give an option to use ‘standard’ regular expressions for search/replace. I hate that ‘:’ is used for keywords. Try searching for ClassName::Method, or even namespace::classname::method. Having to prefix ‘:’ with ‘\’ is really annoying.
– Refactoring for native c++ would be really great. But please a refactoring tool that can cope with large projects.
– The same for unit testing.
– I would really like something like ccache.
– Resource (CPU/memory) consumption of the IDE went from good (VC6) to ‘so so’ (2003) to insane (2005).
– An option for the debugger: Deferred loading of pdbs, much like windbg.
– The help system really has its quirks.
Haven’t tried Orcas yet but from a 2005 perspective this is what would be most important for me.
* TR1 support.
Really important features.
* Faster IDE.
It is bloated and slow, not like eclipse-slow but still.
* Better and more generalized build tools.
MSBuild is a step in the right direction, but build tools are generally quite primitive. It was quite a while ago that projects only included source code. Most companies have their own resources that need rebuilds based on dependencies.
Not so much c++, but I’d like to see the capability of dropping a sqlexpress instance into a project so that it would be painless to distribute with the app.
I don’t know how representative I and the C++ developers I know are but most seem to think these points are the most important ones:
>Standards compliance.
The visual studio compiler has been improved continuously. Good job and please continue.
>IDE speed.
It has got even worse in every release of visual studio.
>Intellisense is a disaster.
Both the clogging of the computer and the progressively worse search relevance in each release are frustrating.
>Customized build and dependency handling.
One-source-to-one-destination
Several-source-to-one-destination
Company specific build steps are so common (most products have them) that it should be possible to handle them completely within visual studio.
I know it can be handled with prelink/custom build steps, nmake and msbuild, but it’s cumbersome and unintuitive. It would be nice to just add files to the file list inside visual studio and set the dependencies: add the destination file and then specify the build tool and a list of the source files it depends on.
Thank you for your time.
I’d like to see support for inline assembly for 64-bit. Honestly, I’m surprised this was disallowed in the first place.
Have very large legacy C++ app. Love many of the changes in 2005/v8 – especially the ease of using managed objects in C++. Like using WinForms controls to enhance MFC dialogs, but would really like it if you would figure out how to allow us to use the designer for these controls. In our legacy app, it’s rare we get the opportunity to convert to a WinForms dialog. We do some new assemblies in C#, and love the ease of such integration, but will probably never be able to convert our C++ code entirely to C#.
Really, really hate the speed of compiling and linking large C++ projects. It’s almost intolerable and very expensive. Rebuilding Intellisense is nonsense and costs unacceptably large amounts of programmer productivity. Please, please allow us to set the priority on the intellisense thread. Truly hope 2005/v8 was the quick and dirty version and the next will be much more efficient. Would be very happy if performance improvement was all that happened in the next version.
See what Softvelocity is doing with Clarion 7: an application generator with lots of wizards, templates, and a diagrammer.
I would like Visual C++ for development of both business and system application needs.
Thanks.
I would like Visual C++ to produce native compiled code without need of the MFC libraries or ATL: lightweight code directly calling Windows API functions, native code similar to Clarion or Delphi applications (no need of MFC or ATL).
Create scripts, macros, or better libraries for faster development: a 5th-generation RAD tool.
Up until now, I had used MFC/ATL to develop native C++ applications for the Windows platform.
However, I have finally given up on Microsoft and doubt they will ever again give serious priority to C++ developers.
I now happily use Qt from Trolltech for all my development (and I can code for UNIX/LINUX and MAC OS, in addition to Windows).
Both Bob and I are using Visual Studio 2005 (not the .NET version). When an application runs and terminates
‘abnormally’ (which to us may mean deliberately, such as closing the execution window),
then Visual Studio thinks that we have a memory leak, and starts dumping info which (even if it made sense)
is still useless to us, and the only reliable way we can terminate the dump to the output window is Ctrl+Alt+Delete.
There seems to be no known way to shut off this useless and annoying feature.
"It’s "when you use Standard algorithms, the bounds checks are lifted out". Even in the absence of _SECURE_SCL, it’s a really good idea to use Standard algorithms whenever possible."
Oh, I agree with that, but one of the main points of std::vector in particular is random access. The standard library algorithms are of limited use for that particular scenario. If I need to access element #216 in a vector, no algorithm can really help me. So expecting people to use the std algorithms instead isn’t much of a solution in the general case.
"> I think the whole "Secure SCL" is a mistake, at least by default.
There is a tradeoff here. Performance is very, very important, but so is security."
Of course, I didn’t mean that security wasn’t important.
Rather, I think you’re *reducing* security with this approach.
Why? Because I have personally seen far too many C++ developers go "Hmm, my std::vector is too slow. That sucks. I’m gonna use an array instead."
How safe is that?
And then of course, the logical followup is "I’ve already determined that the standard library is too slow. Guess I’ll use char* instead of std::string too, just to be on the safe side."
I’d much rather have people use std::vector without bounds checking, than resorting to C arrays (which degenerate to pointers half the time). The former isn’t perfect security-wise, but it’s far better than the alternative.
The best (only) way to improve security in C++ is to make the standard library attractive to use. True, it’s not terribly secure, but it’s a hell of a lot better than what most people would write if they didn’t use it. If developers see it clearly outperformed by old-fashioned C constructs (C arrays, C strings), then at least some proportion of them will (and already do) take the easy way out, and switch to old buffer-overflowing, insecure, fragile C-style code.
I’ve seen this happen countless times already, and it worries me, because I quite like security too.
But when people program in C++, they also expect performance, and if they don’t immediately get that…. they might just stop using the "secure" C++ constructs entirely.
Which sucks for everyone.
If you used a less heavy-handed approach (perhaps only enabling SECURE_SCL in debug builds, by default), then developers would still see at least some of the benefits, but without the nasty side-effects that scare them completely away from the standard library.
Especially as C++ gets more and more relegated to performance-intensive applications, while everything else moves to .NET or other (vastly more secure) platforms, you shouldn’t underestimate the backlash you’ll get from slowing down C++. The slower its secure features get, the less they’ll be used.
That’s from a practical "I want secure C++ code too" point of view.
There’s also the much more fundamental "It goes against the principles of C++" approach.
One of the most fundamental ideas of C++ is "you don’t pay for what you don’t use". Why do I have to pay for bounds-checking on vectors, if I didn’t ask for it? If I specifically wanted it, I could use at() (or I could define _SECURE_SCL myself), but if I didn’t do those, why should I pay for these features being enabled?
Hello
Re: Tuesday, November 06, 2007 1:47 PM by Bill Buklis
>> I’d like to see support for inline assembly for 64-bit. Honestly, I’m surprised this was disallowed in the first place.
Thanks for taking the time to post your comment. There was a VC blog post on this topic recently, "New Intrinsic Support in Visual Studio 2008", which contains more specific details; however, in summary, there are no plans to support 64-bit inline assembler at the moment.
Thanks
Damien
Hello
For the many who responded to this post and requested "updating MFC" and "adding TR1 support", for some good news please see the following posts:
Thanks
Damien
Hello
And for all those who mentioned IntelliSense performance pain points, this post on the VC Blog may be good news. Unfortunately we realise that this will not solve every issue, and I should repeat that more improvements are planned for VS10 too.
Thanks
Damien
I’m developing mostly high-performance code and interested in advanced code optimizations like:
1) support for vectorization (as in the Intel Compiler or gcc) using SSE/MMX etc. instructions (I never saw bswap being generated for the ntoh series of functions). MMX registers can be used as general-purpose registers to avoid spilling to memory…
2) multi-target compilation – say code with SSE2 support and generic branch for compatibility (also as in Intel Compiler for C++)
3) Advanced debugger with support for multi-threading – context for each thread, TLS, support for fibers, etc.
4) Tools like Intel Thread Profiler & Intel Thread Checker being integrated in profiler with support for C++ & .NET
5) Scalable CRT library – multi-threading friendly implementations of memory allocation, I/O, etc.
6) support for multi-threading in STL – unfortunately not part of standard, but very useful. Say, like Intel Threading Building Blocks.
Thanks,
Vadim
Some recent observations on small points of irritation for large solutions in VS2005:
When you want to apply a right-click operation (eg compile file or compare with SCC) to an open file you have to locate it in the tree view. This can take a while in a large project. There should be a way to jump to the file’s representation in the current "Explorer" or "Manager" view from the edit window’s title bar or tab.
On a related point, when you right-click and "compile" a C++ file you get "build only" behavior; it does not build dependencies. But when you right-click and "build" a project in the same tree view you get a build with dependents; you have to navigate to "project only" build to get a dependency-free build. This is particularly irritating when the PCH is out of date or non-existent.
When you create a solution-specific property sheet file, SCC integration is badly broken. "Save all" does not save it; attempting to close the solution does. By then it was too late and I’d lost my changes. Subsequently I discovered that you can right-click-and-save in the property manager tree view, but it won’t automatically check the property sheet out for you, even if it is assigned as an "owned file" to one of the projects and resides inside the tree covered by one of the projects.
Finally, when you’ve created the solution-wide property sheet and you want to go through about a zillion projects and make almost all of each project’s properties inherited rather than overridden you discover that the UI for doing that is ridiculously inconsistent. The "<inherit …>" context menu item is sometimes first, sometimes last, and sometimes not available.
Also, when you have all or multiple configurations selected, you can’t tell the difference between the "inconsistently inherited", "inconsistently overridden", "inherited null", and "overridden null" value states for a property.
While we’re at it, when the origin of a symbol or the source of an error is in the property sheet jungle rather than the source, it would be very nice to have a right-click navigation link to it. I recognize that this is hard, but it would help with discoverability as well as reduce the Zen koan quality of the experience:
a symbol defined in the project settings’ "preprocessor definitions" page appears in a header. Hover over it and it has a value in the tooltip. Right-click "go to definition" and the symbol is undefined. Ah, grasshopper, the definition that is undefined is the ephemeral definition…
I have recently had to do more C++ programming after having used primarily C# on Windows. I understand that C++ is just not going to be treated as an equal .NET citizen because you don’t want to replicate what is being provided by other languages, and want to concentrate on tools for native development for C++. But the differential in tools between C++ and C# is horrific. Why can’t a C++ developer have code snippets, a built-in refactoring tool (that third-party tool from your web site won’t install on my machine properly), and a decent class diagram? Decent (that is, working at least 60% of the time) IntelliSense would be nice as well. And even if you aren’t going to add to C++ forms support, why can’t the built-in features (such as application settings in the designer) at least work? I am orders of magnitude more productive in C# just because of a lot of these features.
And please make it reasonably seamless to attach to gcc or some other compiler. I have to program for Linux quite a bit, but certainly don’t want to work in that @#%$# forsaken #$^# operating $$%^@ system any more than I absolutely have to.
Matt Brown
Please allow us to comment on your documentation. I spend so much of my time helping colleagues with undocumented "features" of Microsoft SDK’s or caveats, or providing the missing example code that they need to get the job done. Let us in there to fix your documentation, it’s so bad that as a previous post mentioned it’s more useful to use google or krugle to get the answer.
You can contribute directly to our documentation using the MSDN Wiki. Within a topic, you can click "Add Content . . ." to jump to the Community Content section, or you can just scroll to the bottom of the page and click "Add new Community Content" (you’ll need to log in with your Windows Live ID).
If you prefer to just let us know about an error/omission or any suggestions you have for improvement, you can send us feedback from within a topic. Also, please consider participating in the Visual Studio Content Survey, if you haven’t already done so.
–Kathleen McGrath
Please bring back the Class Wizard!
Five small requests:
1. When selecting a bunch of lines and hitting tab/shift+tab to indent the code, please do not mess with the spacing; simply add/subtract a tab from the beginning of each line. (Today all the lines must have tab+1 indentation.)
2. I know that the VS2005 optimizer is much, much better than VC6’s and enables whole program optimization.
Though, it would be nice if, even when /GL is turned on (whole program optimization), you could step through the code (F10, F11) in the debugger without the debugger jumping into crazy places or not advancing at all while you play the game: "What line is executed now?"
3. Seconding the request: change only one line in the .sln/.vcproj if only one setting has been changed in the solution/projects.
4. Please fix the bug when adding a single .def file to a project, please make it the default export definition file. (Yes, even if it is in the headers folder).
5. When working on a solution with a lot of projects (~100, most of them unit tests), please could you make changing the build configuration from ‘debug’ to ‘release’ and from ‘win32’ to ‘x64’ not take forever.
Thanks..
Reverse debugging: the ability to step back through code from the point of a bug would be nice.
First of all, thanks MS that you guys finally decided to do something to better support C++. You have to realize that most old C/C++ developers (DEVELOPERS, not leads/managers) hate the .NET crap, and further C++ abandonment pushes us away from MS platforms towards Unix, at least in our hearts & minds…
I may add only one stupid question to the above: have you guys ever thought of adding support for other object file formats and target platforms?
Thanks
I haven’t read everyone’s comments, so maybe it’s already been posted, but please work on improving error messages. When I’m developing code and get it to work fine on my development machine, upon delivering it to another machine to be used in a research experiment, most of the time I get a stupid error message to the effect that the application cannot run, please reinstall it or see a system administrator. I think it has to do with manifest files. Is that "managed code?" Anyway, it’s wasted a lot of my time. With VC++ 6.0 I never had these problems.
If you’re going to force this stuff on your users, you need to have better error messages and good resources to explain what the heck is going on and how to fix it.
I don’t think we should be forced to create some kind of msi just to allow an executable to run. I don’t know who came up with this but it’s very frustrating to the user/developer experience.
thanks.
I think it’d be valuable to have an integrated unit testing plugin in Visual Studio. Right now there isn’t a true heads-above-others solution (UnitTest++ is pretty good but…) so it’s an opportunity for MS…. for example, you can look at how well junit has done in Java.
Using VS2008 Beta 2 vs. VS2005 Express, VS2003, and VC 6:
VS2008 seems to bring back the speed and elegance missing from VS2005 Express and, to a lesser extent, from VS2003. However, nothing beats VC 6 for speed, not just in compiling but also in IDE performance. (I run both on a 2.8GHz dual-core Pentium with 1GB RAM under WinXP, so hardware speed is not a factor. What hardware platforms do VS developers use?)
My biggest gripe though is reserved for what has happened to BRIEF editor support within the IDE. The VC 5 & 6 implementation was correct as far as I was concerned, and the editor is very responsive. VS 2003, 2005 and 2008 all have damaged implementations in varying degrees. The most frustrating of these is the highlight-and-copy function. Once a selection is made, it is impossible to turn off the selection until text is either copied, moved or deleted. In VC6, clicking outside the highlighted area was enough to switch off the highlight (and this is how I remember it from using real BRIEF under MS-DOS). In use, this "feature" occurs regularly when I accidentally click the mouse at the wrong point in source code while source sits in the clipboard. The only solution seems to be to modify the source code and then undo the change, or to overwrite the clipboard and then return to the original point and copy the original source code back into the clipboard.
OK, rant over. I like VS 2008 and the promise of updated MFC. (I like .NET but I still have to overcome a mental hurdle to some of its assumptions.)
Pro: style, class diagrams
Con: Incomplete BRIEF emulation
Thanks for listening.
Class Designer anybody?
This would be about the most useful thing you could finally put in.
– NOT a bridge, BUT using C++/CLI (not C# or VB.NET) for developing WPF, WCF, WF and Silverlight, like any other language (Delphi, C#, VB.NET …), please! I don’t want to use ActionScript with MXML (Flex).
– Support for the future C++0x standard’s new language features and libraries as fast as possible.
– Expand and extend MFC (not ATL) for native high-level applications, not low-level applications (drivers, services, games …).
* Improve IntelliSense. See Whole Tomato’s Visual Assist X.
* Add more macro ability in the IDE. Kind of an advanced autocomplete for code: if I type "if", give me the options to choose from.
* Code Beautification – I’d love to see a feature that could clean up coding standards like brace placement, tabs, etc. There are a number of 3rd-party tools that do a third-rate job. Particularly I’d like to see code conform to my coding preferences while I work, and then be able to translate it into something that conforms to the coding standards of the organization.
* C++ Refactoring. Must Have.
* Allow for a non-manifest-deployed SxS version of the libraries. We have to support old versions of Windows that don’t support side-by-side distribution properly, so it would be nice if there was a way to compile my app to support old versions that don’t understand manifests.
* Cross compiling. Our code base is built on Windows and Linux, and in the future Mac. It would be great if there were a feature that could check the compilation on the other compiler via ssh or some other means. Pretty pie in the sky.
After switching from VC++ 2008 Beta 2 Express to the final release Express: projects using the dynamic RTL won’t run, and even a rebuild with the new release doesn’t help. Please test these things.
The most annoying problem with 2008: a bug in an application locks the debugger and the whole OS (XP SP2). I cannot even use the Task Manager. After the reboot the IDE starts to crash immediately and I need to uninstall and reinstall it. Happens about once a week, with Beta 2 and the release.
I ran into at least two code generator bugs in release mode. This happens with large systems and it is not practical to reduce the problem to a few-liner (the project depends on dozens of libraries and has dozens of MB of sources). I wish there was a debug or logging version of the compiler/linker, something that produces a trail sufficient for later analysis.
Other wishes:
* Better C++ compatibility.
* automatic background check for invalid file names in #include – why do I need to wait until compilation?
* Search and/or filtering capability for file names in Solution, ability to see which files were modified recently.
* Better tool-tip help. The IDE documentation in HTMLHelp became practically useless (for me) – too complicated, no clear structure.
* Renaming a configuration in Configuration Manager is buggy – it does not get reflected in the project properties dialog.
* Improved debugger. Right now it is almost useless with access violation errors – it doesn’t give any clue about the location, and the stack is not shown. Also the ability to use the debugger only for selected threads would be nice – debugging threaded code is currently a nightmare.
* More frequent patches; even unofficial "use at your own risk" ones would be useful.
* Ability not to bother with certain IDE properties. I’d like to state somewhere: this project doesn’t use XML or databases, don’t show options for these, I don’t care.
* A visible ability to quickly switch between the header and *.cpp file where this is unambiguous. It is probably available somewhere but I never managed to find the right shortcut.
* Background transparent and automatic constant recompilation/relinking of changed source files. With a huge project this would be a killer feature.
I’ve been using Microsoft IDEs for a …long… time. I was at the Programmer’s Workbench (PWB) launch at the Boston Computer Society many years back. I finally switched from VC6 to VS2005 because I needed Vista support.
I agree with most of the comments already mentioned.
and I’d like to add:
1. Let me perform Find Declaration or Find Definition on words in comments and #if sections and, PARTICULARLY, on words in the find box in the toolbar. All of these worked in VC6.
2. Document the various keybindings. There are several tasks mentioned in comments here that are possible with existing keybindings, but you only find that out if you read one of the "VS Secrets" web pages. Examples include "View.PopContext" (after Go To Definition, for example) and "open file name under cursor" (Ctrl-Alt-G – you’d never guess this from the shortcut name). Make sure these are documented somewhere that I can read linearly. Trying to browse through hundreds of keybindings four at a time in the current dialog is fruitless. I still can’t find the binding for "go to matching #ifdef/#endif/#else".
3. Speed up symbol loading when debugging. My app loads Extended MAPI, so loads over 50 DLLs during startup. It takes a LONG time to start up on a Core 2 Duo machine, and this is a small 1.5MB EXE.
4. Rethink how projects work. At least for me, VC6 did it right. I keep multiple projects in my solution that are tightly coupled, but each is a different product. It drives me nuts that changing to a different configuration doesn’t change my startup project (it did in VC6). It also drives me nuts that adding a new configuration adds it to every project and I have to go fix all the projects.
5. Help me with the pain of WinSxS and remote debugging. I need to do it one way with Win2K (put the MFC/CRT DLLs on the PATH), another way with WinXP with debugging symbols (copy some undocumented directories to my EXE’s directory), another way with release (everything should be installed properly in WinSxS), and let’s not even discuss Win98. It took me days to figure all of this out the first time.
6. I *love* the debugger support for STL in VS2005, but I really need STLport support for consistent cross-platform use. Please add debugger support for viewing STLport containers. Also provide better documentation on the language used in autoexp.dat.
7. Put back support for DELAYLOAD in pragmas. This worked in VC6 and hasn’t worked since. For example: #pragma comment(linker, "/DelayLoad:MAPI32.dll") Without this feature, I can’t create library modules that automatically set the right switches when linked in.
Thanks!
The F1 help has been beat to death, but let me add some thoughts.
First, the original help on Multimedia Viewer (Look at the MSDN Library betas) was incredibly fast. You hit F1 and the help opened. Almost no pause, even on a lowly 486. Every new iteration of viewer since then has gotten slower when running on faster hardware. ‘Nuf said.
Today, when I hit F1 on a keyword, I just want it to go to the online web page for that word. IMPORTANT – that page should include the Wiki discussion so I can have some help dealing with under-documented functions and classes.
I don’t want it to search for alternatives. I don’t want it to find community information. I don’t want it to open a table of contents that takes ten seconds to download. I just want to see the reference page. Immediately, if not faster.
Finally, this is the interesting part:
Tie the F1 help into the XML documentation for the current project. Really. (That does, of course, require you to implement XML documentation for C++.) And don’t make me wait 10 minutes to integrate changes into the help system. If I don’t need full text search, then opening the correct page should take a fraction of a second.
In addition, allow me to define a "search server" within my company. If I have dozens of developers on my project, then F1 should be able to query the server for a keyword and get a response back almost immediately.
Thanks.
One thing I think needs to be improved in the interop solutions is the annoying LoaderLock exception/breakpoint (from the Managed Debugging Assistants).
It can be disabled while you are debugging with VS2005, but when using the VS6 debugger it cannot, and in our older projects I get those very frequently.
We need to be able to turn off this "feature" for good and system wide, so it will never fire again ever.
I agree that the IntelliSense platform that you seem to be developing will be a great improvement. But just in case at least provide a way to disable it.
Also, I do not like the Visual Assist approach and I think you could do it so much better. The C# IntelliSense could be the idea to follow; I find it comfortable and simple.
Interop will be needed and needs to be improved, no matter that some folks are not and will not be using it.
It needs to be able to handle big and I mean really big projects.
And I think the MSDN help currently has mostly automatic method skeletons grabbed from the source code, and not a real help reference.
Juan,
Can you tell me what you mean by "really big projects?"
How big are the projects you’re currently working on? And how big do you anticipate them being in the future?
What are your biggest concerns / problems in working with big projects?
You can reply here, or e-mail us at DevPerf@Microsoft.com.
Thanks,
David Berg
Microsoft Developer Division, Performance Engineering
Hi David,
We have developed a development framework which is basically a main MFC executable with a lot of COM-based dependencies, some of which are optional (plug-in-like components).
At some point last year we came up with the idea to update the main executable to allow .Net assemblies to be attached as if they were COM components via interop.
But since most of the components need some sort of UI the main application had to be ported to C++/CLI in order to use the interop features of the new MFC.
Well, several things were difficult: we had to turn off IntelliSense because it was continually parsing; another thing is that since we have lots of classes, we could not organize classes into folders like we could in VC6. Finding a class in ClassView now takes a lot of time.
So what I meant is that the IDE needs also a system in which we could organize our classes (in ClassView mode) and not only the files into folders. Also the VS2005 ClassView has two areas: one for the classes and below the methods and properties of the selected class; I think it was more comfortable when we had the simple tree in VC6 ClassView.
I cannot foretell how big our projects will become in the future, but we have learned that we have to keep projects small for compiling time issues and maintenance.
Thanks.
Ability to write macros in a language other than VB, particularly C++ and maybe C#. Because the ability to parse C++ and run programs on the AST is a brilliant feature (well done) and ought to be made full use of.
I know that I am echoing the comments of many others, but these are important careabouts to us. If we don’t speak up, you can’t know what’s important to us, right?
We have a large commercial application…16 years of legacy…with ~50 projects and lots of code. Started life under VC++ 1.5 on Win 3.11. Feeling old…
1. Our late-model 4GB quad-core systems labor under the burden of IntelliSense when we load our master build solution. Gigs of data… yikes.
2. Porting to C# simply isn’t in the cards, but we would dearly love to have the UI improvements available to native code through MFC.
3. I dragged my feet as long as I was able in moving to VS2003 and VS2005, in large part due to the loss of the VC6-style Class Wizard. Others here disagree with me, but I found it far more intuitive and easier to use.
4. Similar feelings about control properties in dialog editing. Old dogs and new tricks perhaps, but I do not find a textual listing of 30 properties nearly as easy to use as the VS6 property pages.
5. Commercial application developers are probably in the substantial minority relative to corporate developers or web centric developers. It would therefore make sense that Microsoft would concentrate heavily on the tools and technologies appropriate to those functions.
For us in the first group however, our products are literally our financial lifeblood, not simply what I do at work. The apparent de-emphasis on native development leaves us feeling a bit…vulnerable. It would be a great encouragement to see some renewed investment in native development.
Thanks for your efforts and willingness to listen!!
Hi Juan,
you can organize your C++ code in namespaces and the class view will show your classes hierarchically within the namespaces. You can also put namespaces inside namespaces and see that in the class view. Very convenient!
—
SvenC
I would like the possibility of printing debug data like arrays and multi-dimensional arrays directly from the debugger.
Also printing memory dumps would be useful.
I would like to have MFC support in the Visual C++ Express edition.
I would like to see
=> improved build-time performance in VC++, such as IncrediBuild, a distributed build tool
=> Memory leak detection in the native and managed runtimes via APIs
=> Heavy support for Web Services via APIs
=> features that are unique to Visual C++ as compared to the managed world.
Srinivas
I agree that C++ is there for native development, for managed development C# is a much better choice.
If Microsoft had understood that sooner, we probably wouldn’t need 3rd-party software just to make our native applications look good.
But there is something else I would really like to see improved: the help. When I press F1 I really need instant help on the identifier I’m on. Online help costs me a lot of precious time. It takes like 10 seconds, and then the quality of the help is sometimes poor. I really hope you are able to provide us with local help, the sooner the better.
Jos
Absolutely, 110%, I would love to see a more open approach to cross-platform compiling, mixed build systems, and above all, the development of a "native" .NET framework.
I don’t want to use C# and .NET to make a simple GUI frontend. I don’t want to use WinAPI directly. I don’t want to use WxWidgets, or 1000 other frameworks I dislike.
Make us a GUI designer for native C++ using WinAPI. I don’t like MFC, and I don’t think I ever will. I am not an old-school C++ programmer like some developers, being only 22, but I love C++ and I think managed code is a horrible mistake in this industry.
Apart from that, improve the GUI, and add support for SVN/CVS without forcing me to use AnkhSVN and other such plugins.
Oh, and before I forget: refactoring and intellisense MUST get better, like Eclipse meets Tomato Software.
Thanks a lot for accepting our feedback. I’m looking forward to Orcas+1.
Hi Michael,
so you don’t want managed and don’t want WinAPI and don’t want WxWidgets and don’t want MFC, and you do want to build the best UI ever with a GUI designer? Can you give us or Microsoft a few details on how you want to do things, not only how you do not want to do things?
I get the feeling that you want to build Office-like apps by clicking and dragging some controls together. And my guess is: building those apps is more complex than you think…
—
Sven.
And you’re claiming that POSIX is not? It’s defined by IEEE. And the "I" stands for?
I’ve been porting code back and forth from Windows to *nix for 10 years, and the lack of POSIX support is the biggest problem – especially for threading.
Do some homework please.
Hi SvenC,
Sorry if I wasn’t clear enough. I know it may sound like I am a newbie programmer who wants the next best thing since sliced bread, and while my GUI work is limited, here are my views of what I want to see:
– .NET style WinForms GUI designer. This makes snapping together the basic GUI fast and easy. The only problem is I don’t like the 20% performance hit my entire application takes from being inside the CLR. If there is some way to create a WinForms-like native C++ library, I think you would find a lot of people who would love it. I hear people tell me all the time, "Making GUIs in .NET is so nice. Wish it was that easy in C++."
– Lightweight GUI library. While MFC is a powerful framework, I have read many opinions on it, seen many example programs, and I feel the entire library is too large, has an outdated programming model, and has too many dependencies. I admit I have not tried ATL or the new WTL, but I am actually going to give WTL a try sometime. I’ve heard good things about it.
– Exception-based programming model. I think that modern libraries should make use of exceptions. I read an excellent blog entry on exceptions versus return values and I have to agree with it. While I (and many others) are comfortable with traditional programming models, I think that modern libraries should use exceptions. It’s something that makes Java and .NET both very nice to use, even if exceptions aren’t perfect.
Also, let me clarify a few remarks:
I would like to see a library that makes it easier for us native C++ programmers to make GUIs, because doing straight WinAPI / MFC is difficult for most beginner/intermediate programmers. It’s not even about the learning curve so much as that designing the entire GUI in code is slow and unproductive. It’s the same reason I dislike making GUIs in Java: no GUI designer that is worth its lines of code.
About the ‘building Office apps’ comment, I agree with you. It’s surely not as easy as I made it sound. Making such complicated software takes a lot of work, but when the tools to create the GUI aren’t nice to use, this is only so much harder to do.
I’ve recently started programming with templates in C++, and am really loving C++. I’ve also done work in ASP.NET 1.1, C#/VB.NET, and Java, and I still keep coming back to C++ as my favorite language. RAII, stack-based variables, templates, and performance are some of the reasons.
So, sorry for not being as clear as I could have been. Sometimes I don’t feel like writing up a storm during finals week 🙂 I know there are other priorities, but a nice GUI framework would really give C++ on Windows an edge it has been missing for a long time.
Thanks for replying and thanks for letting us give feedback. It’s a really nice thing to see.
The VS Debugger is the best and ever was on Windows! Debugging is much fun. Especially to learn how things are working.
– IntelliSense must be enhanced (false hits are annoying). Maybe a way to go – VAssistX from Whole Tomato Software.
– Speed improvements as said
– most importantly, the editor -> just make it like in C#, with code folding that works like in C# and so on
In-place parsing and better code formatting like in C#, with syntax check and correction. I know it’s not that simple, but you should at least end up where C# began in this respect.
cu
Martin
What I would like to see in future versions of Visual C++:
Continue to track the C++ standards.
Continue support for native Windows SDK applications. I want to stay away from NET and managed code.
Continue to provide great documentation. Currently I rate the documentation as awesome!
Thanks for doing a great job.
Thanks for VS2008!
I just have a comment… Why C++/VB are not a first-class citizen programming Avalon/WPF/Silverlight? I see you can currently program it using C# …
Like other people,
1) I would like to see a new GUI designer for native C++ using the Windows API, without spending five times longer than in .NET just designing the interface. I want to stay away from managed code.
(yeah, but I think in my dream)
2) a new search engine for Local MSDN library.
Searching locally is a disaster. Can you find any word with an underscore ( _ ), or a combination of more than one word? No, I never can.
Using Google and typing "msdn" plus what you want solves that, but it is not local.
Two simple requests of IDE:
1. A shortcut to open the corresponding .h/.cpp file, e.g., if the current text window has manager.h, open manager.cpp if it exists in the project.
2. An open-project-file dialog. A lot of the time, source files are arranged in a hierarchy instead of flat, so navigating the solution explorer tree or the file directory is not so convenient.
Both of these features exist in Visual Assist.
Maybe these can be done by some macros?
Thanks for your time.
to clarify the 2nd request:
open a file inside the solution without having to type the full path name.
wish list:
– intelli-sense that is actually working
– better utilization of project dependencies while building in parallel (compile every project regardless of dependencies, only link as stated by project dependencies)
– link-time performance improvement (link-time >10min for release-builds of bigger projects is a pain)
– improve setting breakpoints in often instantiated template functions (setting a breakpoint can take >1min if instantiated often with many different template function parameters)
– no feature crap, this is c++, c++ developers choose it because it is rock-solid
Intellisense sometimes follows a different include path to a declaration than the compiler
VS2005 often crashes when closed (onExit by clicking the X in the top right corner) and ALWAYS defaults to restarting even when you are trying to close VS.
Debug runs too slow to be useful for debugging so we have been trying to build a Release_with_Debug hybrid like we had in Studio 2003. Linker just fails with a LNK1106 error and there’s no indication as to which file it doesn’t like (there are tens of thousands of them in the link).
I love the tooltip displays of the data structures in the VS2005 debugger, but sometimes when one is very wide and only one line tall, it’s really hard to run the mouse straight along it to the + at the beginning of the line to open the next level without the whole thing disappearing when you get the mouse about halfway over to the +.
Often setting up a new Release_with_Debug configuration hits incompatibility issues between LibCmt and LibCmtD, or maybe between NafxCw and NafxCwD, or some combination of those. It would be very helpful if the correct libs were automatically included based upon the current project’s link settings, rather than wrong libs being included based upon how libs from other projects were built.
There also seems to be an ordering issue between NafxCwD and LibCmtD where the auto gets them backwards so you get a duplicately defined error and have to exclude them both then re-specify them in the correct order.
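A common workaround for this kind of CRT/MFC library-order conflict is to exclude the defaults and relist the libraries explicitly so the MFC library is searched first. A configuration sketch (library names assume a debug, statically linked MFC build — adjust for your configuration):

```cpp
// In one .cpp file of the project (debug static-MFC build assumed):
// exclude the default CRT so the libraries can be relisted in the
// required order – NafxCwD before LibCmtD.
#pragma comment(linker, "/NODEFAULTLIB:libcmtd.lib")
#pragma comment(lib, "nafxcwd.lib")
#pragma comment(lib, "libcmtd.lib")
```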
In some cases while debugging, if the code throws an exception, the call stack gets wiped out (maybe caught vs. not caught?? not sure). Anyway, when that happens there’s no way to find who threw the exception. When running in debug, don’t EVER wipe the call stack on an exception. I had to track one of those down and it was about 47 levels deep in recursive Boost serialization code before I found it. It took days of debugging to get there. With a call stack intact, it would have been seconds.
I found a performance issue when debugging where if I open a solution on the local C: drive vs. opening an identical solution on a network drive we get about a 100 X increase in debugger performance when debugging the same executable (by setting the executable path on the debugger property page). Task Manager identifies the devenv.exe as doing hundreds of thousands of IO other operations when the solution was opened from a network drive that disappear when opening the solution from the local C: drive.
I see others complaining about the long time it takes to switch between Debug and Release configurations. I also have that problem, although I identified that the problem only exists if Intellisense is enabled.
I have many projects I use Intellisense with, and many it does not apply to because they are in another language unknown to Intellisense and are built via custom build rules. Intellisense can only be enabled/disabled globally, so I can’t turn it off for the projects it doesn’t apply to. It should be configurable per project, and also able to be restricted to a list of user-specified file extensions (types).
Right now I often have to wait for it to update in projects it doesn’t apply to. It also takes from 5 minutes up to 15 minutes to update so it can really kill my productivity.
What happened to the help compiler? I had to copy if from VS2003 into VS2005 and it’s working again, but it was missing from the VS2005 installation.
With VS2003, you could hover over a word in a comment and it would display the value for that variable name. VS2005 seems to ignore comment lines. I had gotten used to appending a name to the end of a comment line and then hovering over it to see its value. Now with VS2005, I have to move down to the watch window to type in a name, and that distracts from the code I am studying.
HELP: The help normally fails to tell what #include is necessary when you are looking at how to use a method or function. Then you get a compile error and it’s obvious looking at the documentation that I did the call right, so I figure out that I need a #include to prototype it, but I have no idea which one. The only way to find it is with google looking outside of MSDN.
HELP: There are several groupings in the help and you can supposedly filter on them but all that does is gray out some of them… It doesn’t really filter them out and you can’t search only C++ or only SDK or a multiple selection of C++ and SDK together…. I keep getting hits in other areas like VB, C# or some other product I’ve never even heard of. Can you make the help filters actually work? and allow multiple selection also?
When I set a breakpoint with VS2005 in a template, it locks up the machine for several minutes. Most of the time I don’t realize it’s a template until after I am forced to wait for those many minutes. In those cases, I want to abort setting the breakpoint and instead maybe open the file that will be called from the line I’m setting the breakpoint on. Maybe have an option in the VS settings where the behaviour of setting a breakpoint in a template can be selected as 1) set it, 2) prompt first. Granted, VS2005 is far better than VS2003, which prompted for which one of the 600 breakpoints you want to set (by memory address, so you couldn’t possibly know which was which). Keep setting the breakpoint in all instantiations of the template – just allow a prompt first (yes / no).
In the debug window "Locals" you need to be able to click the title and have it sort the entries, just like explorer sorts file names. Without this feature, it’s impossible to locate a variable in the "Locals" window which renders it totally useless for any but the simplest source file. It forces the use of the "Watch" windows as the only way to view a local variable.
I think it would be great if there were a "realloc" keyword that could be overriden the same way "new" and "delete" can be overriden for custom memory allocators. The default "realloc" behavior could just use the CRT.
With VS6, VS2003 & VS2005 every time you open the edit dialog for a custom build rule (or any other edit window from the properties page) the window shrinks from the previous opening by the size of the window frame. After a few times, the user has to manually enlarge the window again. It’s pretty annoying when copying settings from file to file in a large set of projects.
I would like to see the ability to combine projects that come out of different SourceSafe repositories into the same solution file. I don’t need more than one repository per project – that restriction is fine – but the restriction of one repository per solution causes me to have to duplicate code into 2 repositories, which means any code modifications have to be manually duplicated in each of the 2 copies of the source code.
David: I’m not completely sure what you are asking. My first thought would be: why don’t you use placement new for this? That is a pretty close C++ alternative to realloc.
[James Balster]
"When I set a break point with VS2005 in a template, it lockes up the machine for several minutes."
Please open a Microsoft Connect bug and provide precise repro steps. You can post the bug ID here.
Thanks,
Stephan T. Lavavej
Visual C++ Libraries Developer
I agree that most VC++ programs are targeted for native code, but I still do some things in VC++ where I want a managed output.
I have recently transitioned back to VC++ from C# and C++ Unix and embedded development. I am using C++ native features for multimedia. This is a different application domain than many of the comments above. My last heavy Windows experience with C++ was using VC6.
Since I am using a large body of VS languages, I like the integrated aspects of the environment. This does raise the complexity, but the integration benefits are great.
While digging heavily into VS2003, VS2005, and VS2008 features, I have noticed that the best improvements in VC++ are sometimes easy to miss. A set of overview documents (not detailed change notes) on the significant differences between Visual Studio versions would be a great help. Suggestions of best-practice changes between these tools could help alleviate many frustrations that occur when developers use new tools in old ways. Topics could include:
1. New managed/unmanaged C++ interop approaches, and how things have changed since VS2003.
2. How to manage large code bases using project and properties. This would include the Team system and the Pro series of tools.
3. How to migrate large code bases over to newer best practices.
4. How to configure stable portable GUI build systems between developers, that also port to a command line build.
The documentation is available, but VERY difficult to find in one place.
For some detailed changes (some of these have been documented elsewhere)
A. Support Doxygen better (and integrate Sandcastle with native code). This would allow current best-practice documentation with a transition to new practices. A tool to port between the two would be useful for a transparent experience.
B. The XML documentation format for .NET is ugly. It makes the code more difficult to read. If you want better (easier) approaches, look at the lightweight markup language ideas. XML may be a standard, but for just reading code it’s obtrusive. Support a simple markup form (Textile, Markdown with a few tags, or Doxygen) and make it easy to port to the XML monstrosity.
C. Better refactoring tools.
D. Get an HCI set of experts to look at switchable simplified coding views in Visual Studio. The current tool tries to place everything in view. How about some encapsulation in the tools? Make a very simplified view where the following tasks are easy, with little visual overhead.
– just plain coding
– just plain constructing code
– project setup
Do not use 500 screen wizards, do not have a billion GUI features on the screen. Just have simplified interface (without the visual clutter). C++ programmers can support a larger memory overhead than a newbie VB programmer. Just make it lean and mean (and transparent), with the option to toggle into the kitchen sink view we now have.
99% of the time, this would be the environment I would work in. Why clutter this to make the 1% tasks easier?
I wish the tool were more stable. I should not be able to cause it to crash. I experience a crash once every three days.
Closing VS has gotten slower and slower.
The interplay between tabbed source files and the solution explorer needs better engineering. Clicking on one should activate the other.
The algorithm for selecting which tabbed source files remain visible needs to be improved. VS seems to remove the ones that I access frequently.
Would like to see much faster on-line help.
If I get a linker error, e.g., 1106, I shouldn’t have to Google the Internet to find the solution.
Almost any application these days has need for a grid. While there are several third party options available, it would be a significant improvement to have a grid option in MFC classes for use in dialogs.
Similarly, it would also be a significant improvement to have a set of reporting classes built in. While one can make the current CView derivatives work, it would be relatively straightforward to extend a set of grid classes to incorporate reporting functions. Again, almost any application has need for reporting – why make developers everywhere reinvent reporting extensions of CView?
Thanks,
I agree with everyone else:
1.) Don’t eat every clock cycle and megabyte for gobs of useless "features" like forums, start pages, internet help, and flashing lights and sirens in the IDE. When programming, I need a few of the clock cycles myself, and if I want to surf, I’ll use a browser.
2.) HELP AND DOCUMENTATION
Provide help and documentation – not little sprinkles of this-n-that scattered throughout hundreds of thousands of pages of unrelated stuff.
In examples, make them as small and to-the-point as possible. Don’t add a bunch of unnecessary code that needs explaining as well.
Categorize the help based on required functionality – not on class names that don’t seem to correspond to their functional purpose.
Don’t rename algorithms and processes that already have been identified prior to Microsoft’s existence – I don’t know how many times I’ve been confused by concepts I understand well simply because Microsoft decided to create a new name for it.
3.) Provide options to the C++ developer to control the formatting of the source code. Borland C++ Builder does this well (but VS’s Intellisense is better and quicker than Borland’s) – let me define how to indent and where to place curly braces.
4.) Don’t push C# and managed code – it’s rinky dink – good for database/web page programming but that’s about it. I’ll take it more seriously whenever I see Microsoft use it to write an operating system.
When I check in my source code, Source Safe properly alters my source code by replacing the "$Revision: $" tag.
That string is in a source code line, rather than a comment and it is parsed and used as the minor version number for my application.
The problem is that after doing the check in, VS2005 thinks my source file is up to date so I have to do a REBUILD instead of a BUILD, even though Source Safe has altered the source code since the prior build.
VS2003 and VS6 had the same problem.
Although I very rarely crashed VS2003, I find that with VS2005 I crash it almost daily, sometimes 2 or 3 times a day. The most frequent crash (over 90% of the crashes I have encountered) occurs when a solution is being closed. With VS2005, it also takes a VERY long time to close a solution… at least as long as it takes to open it, possibly longer.
I’m wondering if possibly intellisense is kicking off during the closing of the solution and trying to access data structures that are no longer valid?
Good idea Thor!
The grid control as a built in sounds like a great idea.
We also use a 3rd party grid control and it’s just one more repository/solution/project that has to be carried around and built repeatedly.
In the VS2005 debugger, the values displayed in tooltip format when you hover over a variable sometimes are lacking in info.
For certain pointers to classes, it only displays the type of the class and not the address contained in the pointer.
This makes it difficult to determine if the pointer being looked at is valid or not.
I was just looking at one, and I busted open each of the + signs to look at the whole class hierarchy and found no information indicating for certain that it wasn’t a valid class pointer, yet it also didn’t contain any data values.
Finally, I entered the variable into a watch window and discovered that my pointer had a value of 0x00000000. It would have been far quicker if that was displayed in the tooltip window along with the type information.
How about always including the address in the display as well?
Debugging a VC++ application that contains nested objects is not easy or productive. For example, when an object’s internal object(s) are modified, the locals/watch windows highlight those in RED. But the tree in the locals/watch window needs to be expanded to see which objects changed.
This is very tedious and time consuming, and it distracts from solving the actual problem. I work on product routing algorithms that contain many nested objects (breadth- and depth-wise).
It would be much more productive if the VC++ debugger UI supports some sort of settings to automatically display the inner objects when they are changed.
Srinivas
I would like to be able to edit and sort the RECENT PROJECTS list shown on the Start Page. I’d also like to be able to lock it to prevent accidental changes (similar to how the quick launch bar on the start menu can be customized).
The limit of 24 solutions on the RECENT PROJECTS list is too small for our instance. Our source code base has 20 solutions for one contract and 7 more for another contract and we have both a shared version on a network drive plus a personal version on the C: drive of each solution. (24 + 7) * 2 = 62 solutions.
The large number of solutions being dealt with increases the need for being able to sort the RECENT PROJECTS list.
Hello James
Re your comment: Monday, December 17, 2007 2:20 PM by James Balster
Thanks for taking the time to post to us on your experiences. Recently we have done some work on Intellisense performance and correctness, have you seen these entries:
Have you installed the GDR? Was this an improvement?
Thanks
Damien
I would like to be able to include comments with both text and graphics in the source code, with a graphics editor something akin to Corel Designer or Corel Draw.
It would be ok (possibly better) if the actual graphics resides in a separate file or database, but it should appear seamless to the developer. Perhaps a GUID would be included in the source that references the graphic.
It amazes me that the best tools I have today for including diagrams in the source code is blocks of characters. It makes maintaining source code difficult when complex geometries are involved.
Hi Steve,
integrating a graphics editor so that a graphic does show up directly in your code seems a bit odd to me. But you could do something like this: add a comment with a URL. URLs in comments can be left clicked when you hold down <ctrl> and they will be opened in a new tab in VS. So when you can assign a URL to your graphics they can show up in VS. Try it – put this in your code, press <ctrl> and left-click the URL:
//
—
SvenC
How about integrating the "Find Results" windows better? If you could search the code base for a string, then edit the results to delete ones from files you are not interested in, using the remaining results to drive a partial rebuild of the code base would be very useful.
Either a "Build all files in find results" or a "Build selected files in find results" are 2 choices I’d like to have.
Now with C++ it may seem silly, but for those of us who still have some Fortran code, there’s no automatic dependency checking to know if a file needs to be rebuilt….
I also do miss the (handy) keyboard control of VC6.0.
Ctrl-J, Ctrl-K for jumping to the matching preprocessor directive was very convenient (I don’t even find the command in VS 2005).
Pressing Esc cleared all the find/output/build results off the screen… Now it requires many more movements (and it’s quite hard to do without turning to the mouse).
And the opportunity to disable the "Copy without selection" thingy… it appears strange to me, that it’s gone… 🙁
One more thing already mentioned above: F12 (Go to Definition) worked from everywhere (even half of a token in a comment), now it is much more scrupulous.
shared_ptr
-Improve IntelliSense’s reliability. It keeps on breaking with any larger project.
-The "Edit and Continue" feature also tends to break.
-Slow linking time is an issue with large projects, even with incremental linking.
-Make "Copy without selection" optional, as it was in the VC6 editor. I wouldn’t like the actual line to be copied to the clipboard if I accidentally press ctrl-c instead of ctrl-v. VS’s behaviour is not consistent with other applications.
I would like to see a more powerful scripting interface in the VC++ Immediate and Command debugger windows. I would like to be able to write scripting loops and function calls on the fly (maybe it’s possible and I just don’t know how). Something similar to how Python’s SWIG or Lua can integrate and cooperate very well with C++ code, but at the debugging level. Ideally, after a breakpoint is caught I would like to be able to manipulate the current scope of the breakpoint any way I want via a scripting language. And, possibly, navigate the stack as well (I wrote my own separate thread that can provide stack navigation in the debugger, but it needs code instrumentation and re-compilation; I obviously don’t want to recompile, hence the scripting language features). Don’t get me wrong, I like the Immed/Cmd windows, but they can be much more powerful (either that or I don’t know how to make the best out of them, maybe something similar might be achieved with VB?). Thanks.
Regarding BRIEF support, Philip Taylor (UK) is correct that it has been broken since VS2003. Another problem is ‘column block’ cut/copy and paste. Copy a column and then attempt to insert at end of line: it inserts after the CR/LF instead of before, so the inserted column begins at the start of the next line rather than on the line where the cursor was placed.
Also, same problem with the highlight not turning off after double-clicking on a word. The next time you place the cursor somewhere, the highlight extends from the last highlight to the current cursor position.
Annoying for MS. I know BRIEF is ‘really’ old, but the people that use BRIEF are probably the fastest typists on the planet. At one time, I could do over 130 wpm easily (obviously not always when writing code), as BRIEF truly has the most ergonomic ‘touch-typist’ key combos anywhere (minimal contortionist key combos, as are prevalent in other editors). Obviously many others may disagree, but since support has continued, why not keep it? What could it hurt until all us old-timers croak? Otherwise, VSxxxx has always been the BEST!
Currently VC++ has no IDE support in Visual Studio for programming these CLR technologies.
You did an excellent job of creating a first class .NET language with C++/CLI and you have done a general good job in interoperating between C++/CLI and standard C++, but you have done a terrible job of bringing .NET technologies to C++/CLI.
I believe there is a big miss in thinking that developers, especially c and c++ developers, want to "find a way to integrate" with .net technologies.
These developers want a way to get it done without using garbage collection in any manner (or, in the words of the verbally exquisite, "non-deterministic finalization").
These technology partners want to control memory, threads and processes and that is what they are paid to do.
Opening up new technologies, both current and future, with a C/C++ API would see these technologies achieve faster market saturation. Failure to do so will in fact limit their adoption and see competing products take your market, as is already happening.
I’d like to see all the Visual Studio debuggers capable of supporting highlighting which of the calls on a line will be stepped into next.
I frequently find many "trivial" function calls on a line (often these may become inlined in the release code) that can be really annoying to step through. The problem is further compounded when "fluent" style approaches are taken to classes or interfaces.
It would be nice if you could right click on the file in the editor window and select "compile" rather than have to hunt for it in the Solution explorer.
Coming from VS2003, at first I liked the fact that the VS2005 solution explorer didn’t follow the file selected in the editor… that was great while editing, but once I needed to compile the file, I had to hunt and hunt for it in the solution explorer… I ended up having to turn the follow option back on… and I lose the other benefit. If I could launch the compile without having to use solution explorer, then I could turn that follow option back off.
I see Rob Grainger asking for highlighting the next call on a line, but that won’t really address the problem… The problem is you want to be able to step over just one call, but not over the entire line…. For instance, if a call is passing 2 std::string variables as arguments to a method, then each of the 2 arguments will end up generating a call into <string>… both of which I want to step over before stepping into the main method call on the line.
I’ve been using VStudio since it was version 3.0 and while a lot has gotten better, we have some issues:
1) Intellisense (as many people have noted) doesn’t work well. I have a batch file on my desktop to delete the ncb file to cause a rebuild that I run almost daily.
2) The CArray constructs have run-time checking in release mode that you can’t turn off, and it just kills performance. We’ve had to subclass the arrays to get decent performance, and that’s dumb.
3) The fact that everything you’re inside selects when you click in the dlg editor is crazy. Try sliding six radio buttons to the right without sliding the enclosing box.
4) Improve the task manager stuff. You don’t even have a filter option. We’re trying to use this for task tracking during checkin and it’s just awful. Primitive would be kind.
5) Make it easy to generate custom workitem reports. The current report management is impossible. Why am I trying to figure out how to create SQL data readers?
6) Improve assert. There’s no robust way to report issues to the programmer other than the primitive assert function and in release mode they have zero value.
7) Track memory leaks. What you have works ok for DEBUG_FILE stuff (most of our source) but is useless for GDI and runtime library stuff.
8) Allow the MFC dialog wizard stuff to work with inheritance. Right now, if you don’t have resource.h included in the header and the class isn’t based on CDialog or CPropertyPage, you have no wizard.
9) Have some way to view hierarchy other than generating UML for the entire project.
10) Refactoring tools.
11) Code style enforcement tools.
Thanks,
Mark
I have used Visual C++ since 2.0. I read all of the people complaining about standards, but I have 1.5 million lines of C++ code that is very old and ported from UNIX to Windows for the "core" of the application. (All the UI is new of course, but that is the smallest part of the application). The main item for me is don’t break stuff that worked in the past, even if it was a "Microsoft-ism", we still have a significant amount of effort in our applications, I understand the need for standards, however you also need to understand that we have applications that do a little more than "Hello World".
Visual Studio 2005 is much better than past versions, in 2003 we actually used interop to our old 6.0 C++ dlls because it was pretty painful to convert to the new environment. With 2005 the conversion took about 1 week for all the C++ code, including interfaces to FORTRAN (it isn’t dead yet for structural analysis).
The biggest complaint I have is the slowness of stuff like ClassView, even though it is probably no slower than 6.0, I wish the "updating of SourceSafe status" was more transparent, intellisense & updating the sourcesafe status really take over your machine, to the point we go for a coffee break after opening a project. They also tend to "wake up" at times and take over, it would be nice if they were more of a "background" update and less intrusive.
Keep up the good work. I’ll wait until the Beta’s are real before trying 2008.
Thanks
Alan Anderson
Dir Applications Development
Varco Pruden Buildings
With VS2005, I tried disabling intellisense yet after doing so, I still end up waiting at least 10 minutes at a time for an hour glass mouse cursor while the status bar shows "updating intellisense". Why is it still rebuilding and updating the intellisense database after it’s turned off??
I’ve been working 4 days now, trying to do a single rebuild of all of our code, using 3 parallel machines. The intellisense updating continually locks up each workstation and makes it totally non-responsive.
I also found what appears to be a problem….One of the projects is working with a pre-processor and once the preprocessor finishes, the custom build rule executes the command
DEL *.h+
The prior line before the DEL command is running a program that merges 10 files matching *.h+ or more specifically $(InputName)*.h+ into a single file $(InputName).h
This operation works just fine on one of our networks, but on the other one, the DEL command acts up ONLY IF there is a 2nd copy of VS2005 running on that PC, even if the 2nd copy of VS2005 is idle and has not even loaded a solution file. What happens is that after a pause of about 10 minutes from the start of the DEL command, I receive a message something like this
Cannot access file – locked by another process
After I received that message, I went to another PC and, with Windows Explorer, also tried to delete the file; indeed I received the same error message. As soon as I close down the 2nd copy of VS2005 on the machine doing the build, the problem disappears and the compile speed picks back up to about 10 seconds per file.
I don’t know what kind of logging is enabled on that network, but I do know that there is some enabled as our IT guys have complained to us asking machines to be shutdown when not in use because of the size of the log files being up to a million lines per day from 10 to 15 workstations being booted up, even if they are not logged into.
Our other network does not have the logging enabled and the problem cannot be reproduced there. Our workstations are running XP on both networks and are kept up to date with service packs and updates.
As an interim solution, I have commented the DEL command out of the custom build rule and am leaving the clutter of tens of thousands of intermediate files for a later date.
Linq to work with native STL
WHAT VISUAL C++ LIBRARY THAT TARGETS THE WEB?
Currently VC++ has no IDE support in Visual Studio for programming these CLR technologies (why???).
What is the real reason for releasing ATL Server, the only web library, on CodePlex.com? Is it too hard to integrate some sophisticated web components, like those in ASP.NET, to make web development with our only C++ web library, ATL Server, easy to use?
MFC will support BCG control bars, but that is only for desktop applications, not for web applications.
Can we build 3-tier applications entirely in VC++?
You did an excellent job of creating a first-class .NET language with C++/CLI, and you have done a generally good job of interoperating between C++/CLI and standard C++, but you have done a terrible job of bringing .NET technologies to C++/CLI.
WHAT VISUAL C++ LIBRARY THAT TARGETS THE WEB?
I would point out that *.srf web pages are still used to open a connection to the mailbox, and *.srf pages are built using ATL Server technology. It's the same for .NET Passport web pages.
Can you explain this?
WHAT VISUAL C++ LIBRARY THAT TARGETS THE WEB?
With Silverlight 1.1 we can build interactive web applications using C# or VB.NET with the .NET Framework; is C++/CLI not a .NET language?
Is ASP.NET only for C# and VB.NET?
Isn't it supposed to be for all .NET languages?
Can you explain the real Visual C++ Futures?
WHAT VISUAL C++ 2008 LIBRARY THAT TARGETS THE WEB?
WHAT VISUAL C++ 2008 TOOLS THAT TARGETS WPF?
WHAT VISUAL C++ 2008 TOOLS THAT TARGETS WCF?
WHAT VISUAL C++ 2008 TOOLS THAT TARGETS LINQ?
WHAT VISUAL C++ 2008 TOOLS THAT TARGETS ASP. NET?
WHAT VISUAL C++ 2008 TOOLS THAT TARGETS SILVERLIGHT?
WHAT ABOUT THE FUTURE OF VISUAL C++?
WHAT KIND OF VISUAL STUDIO 2008 FOR C++ DEVELOPERS?
I posted earlier, but I neglected to mention that my number 1 desire is that Intellisense only makes me wait when opening a solution. I never, ever, should have to wait after that. Never!
Here are some ideas. Not all of them are well thought out. Some of this is already possible, but I think it should be improved.
Visual Studio:
* more relevant results with F1
* refactoring
* better designer support
* separation of user generated and computer generated code
* only show relevant items in IntelliSense (no private members when you can’t access them)
* SVN integration?
* Allow lower level development with VS (like developing and debugging an OS kernel in a virtual machine)
* make it easier to work with assembly (especially NASM)
* improved code size optimization (if I don’t use a particular class in MFC or a few of my functions, don’t compile them in)
* managed code interop is low priority for me
C++ in general:
* more libraries (such as something like System.AddIn in .NET)
* switch to exceptions as the error-handling model
* make it easier to develop high performance web applications in C++
Also, LLVM is showing real promise in the Apple/open-source world. Interoperability with LLVM would be great, or you could compete with them if you want.
I would like to see a list of the files which VS thinks are out of date and need to be rebuilt. When it takes hours (or even days) to do a full rebuild, but only about 9 minutes to compile a single file and relink, the decision as to whether or not to do a build or to try to use the debugger to stop at a breakpoint and then manually try to do a workaround could be based upon how many files would get rebuilt…
This could possibly be a separate window, such as the "pending checkins" window, or it could be integrated into the solution explorer much the same way source safe is integrated…. Maybe a very light gray background around the filename text if it would get rebuilt, rather than an icon next to it. That could prevent it from getting confused with the source safe icons. Hovering on the file could still display the tooltip like it does now, but rather than saying
"Checked-in"
it could say one of these
"Checked-in, up-to-date"
"Checked-in, out-of-date"
If done with highlighting, then the out-of-date highlighting should be propagated up to the project and solution level as well.
I would like a code tidier similar to the Rearranger/Code reformat in JetBrains IntelliJ Idea for Java.
It puts methods in a controllable standard order. It gives you very fine control of indenting and spaces.
This gives you a standard format before checking in. If everyone uses the corporate standard, you don’t get false deltas.
It saves you so many keystrokes not having to manually align code.
Hello everybody
I am using C#.
I want to know how to programmatically change my C# code in order to get it compiled by distcc.
Does anybody know how to use distcc with a C# sample program?
Or maybe how to change the compiler in C# and use distcc instead.
please help.
* Have a switch to allow exploring system include files in Class View. This was a life-saver for iostreams in VS2005.
* allow storing pin_ptr in a class for those who want the higher-level abstraction of .Net without the security. Maybe add an ‘unsafe’ switch for that. Or is there some other way of holding a reference to a CLR class?
Re. docking windows, allow ‘splits inside a page’ as well as the current ‘pages inside a split’. eg. so you can have one page with small windows like call stack/autos/locals visible together and then just flip the page and get the output window on its own page. Maybe allow duplicate windows too so that you can have different layouts of the same things.
I have worked for a week with VS08 and I have run into two designer showstoppers; showstoppers that I have no way of working around.
I am going back to VS05. It is far better to limp along, with the designer crashing the IDE every ten to fifteen minutes when I work with form layout, than to be dead in the water. I may play with VS08 again when, if ever, the first service pack comes along.
To me, that a product works properly and as documented is far more important than anything else. I can work around the lack of features and that the compiler mishandles some source code; I cannot work around error messages such as the two I have encountered with VS08. Note that it only has taken me about a week to run into an impasse with VS08. If my experience is representative, VS08 is going to be one of the more outstanding lead balloons in Microsoft history — if possible, even worse than Bob.
I have made a DVD with a snapshot of the present state of the project. Should Microsoft be interested in improving the abysmal quality of the designer in VS08, they need to provide me with a snail-mail address to which I can mail the DVD. (My connection does not have a good enough upload speed to make uploading a large amount of information practical.) Please note that the code is proprietary and copyrighted; do not disseminate it beyond Microsoft and do not use it for any purpose except fixing bugs in Microsoft software.
Hello
Re: # re: Visual C++ Futures
Sunday, January 06, 2008 12:25 PM by Peter Bergh
Please open a bug report at Connect (customer-reported bugs receive a high priority). They will take your information and send it on to the relevant group to look at.
If you have any problems/issues/questions after opening the bug report then feel free to contact me directly via email – “my first name” at MS.com.
Thanks
Damien
I have a small suggestion: we could have a clipboard in the IDE as well, so that we can copy and paste more than one code snippet at a time.
Overall I have very mixed feelings about VS. First of all, it is the best integrated environment (debugger, editor, etc.) out there for native C++ development. Linux does not have anything that can compete: Eclipse is a joke, KDevelop and Anjuta are plain useless. Sun Studio is too bulky and too out of date to be useful. Macintosh’s Xcode is probably the only real competitor you have, but even it is not as useful as VS. So for native C++ development there is only VS (it is even used in many places to develop Linux applications). You are the best, which unfortunately does not mean much if you do not have real competition. Since we switched to VS 2005 I am struggling to understand why Microsoft decided to cripple the application, and I dread the day we will have to switch to Orcas.
The UI in 2005 is not just slower, it is borderline useless.
1. The Find and Replace dialog takes seconds to show up. It is a showcase of bad Windows UI. For that matter, any new UI operation takes visible effort on the part of VS. The amount of Windows resources used by an instance of VS 2005 is mind-boggling.
2. Look-up functionality (F12) works 30% of the time. It used to work (2003) even in comment blocks; now it hardly works in existing code. All of that with .ncb sizes in the tens of megabytes.
3. Somebody decided to switch the nice tree view of a class’s functions to a hardly usable small window under the class view.
4. Source control integration with 3rd-party providers (Perforce) is very ugly. While I do realize that the provider is at fault here, your integration API is the source of the problem.
5. The very nice and intuitive XML editor introduced in 2003 has been replaced with something that is not much more useful than Notepad.
6. The resource editor has the same problems it has had since version 1.52. If a resource is edited through the UI, it replaces custom macros with hard-coded values and removes custom sections; it pretty much forces us to use a text editor to edit complex resource files.
7. If you have a solution with several projects in it, and some projects have a 64-bit configuration and others don’t, there is no way to add a 64-bit configuration to the projects that do not have it, because the “configuration already exists”.
8. Using a forward slash in include dialogs pretty much disables the look-up directory functionality – it used to work in 2003.
9. The macro substitution dialog works 40% of the time (OutDir, TargetDir etc.)
10. The manifest generator interface with WTL projects that need common controls 6.0.0.0 – I have yet to figure out how to make it work with 64-bit AMD. Again, it is my ignorance that is at fault here, but VS is of no use in this case; it actually aggravates the problem.
11. Crash dumps cannot find the appropriate symbols half of the time, with SymbolServer set up according to the instructions from MSDN.
12. MSDN (F1) takes several seconds to show up, but when it does, it usually finds nothing but a bunch of .NET references on the web (yes, I tried filtering) – try searching for “operator overloading” with the “Visual C++” filter.
13. A search for a native Windows API function or Windows message either finds nothing or points to an MFC/.NET article – try SendMessage or WM_SIZE.
14. A search for STL help does not work half of the time.
I can continue the list, but the main point is that if Orcas matches VS 2003 in performance and usability, it will already be a huge victory for your team. Adding new features is probably not that important, especially considering the smashing work your compiler team did. If we can get that new compiler with a robust and fast UI, I am almost sure most native developers will be happy.
C++ .NET has very limited value to native developers, and .NET people mostly use C#, so you might be wasting your time on that. Interop enhancements are useful, but even if they stay where they are, it is good enough.
P.S. Please change the behavior of the deprecation macro to _CRT_SECURE_NO_DEPRECATE – it is really insulting for native programmers to be forced to use sprintf_s instead of sprintf, considering
TJ – There is a feature called the "clipboard ring" that allows you to copy multiple items at a time and then paste. In some profiles the key combination is Ctrl-Shift-V, but you can bind it to whatever you would like. Just keep holding down Ctrl-Shift and press V multiple times to cycle through your clipboard contents until you find the item you want to paste.
problem
>>> 8. using forward slash in include dialogs pretty much disables look up directory functionality – it used to work in 2003.
>>> 9. Macro substitution dialog works 40% of the time (OutDir, TargetDir etc.)
Explain plz.
Hey, I really like the VS2005 IDE and have pushed for the last year at work for a switch from Borland-based apps to VS. We’re talking about it, so there’s progress.
I think that Visual Studio is the best MS application available.
The MSDN help leaves something to be desired, though; it’s difficult to navigate and it doesn’t always keep my filter settings.
Keep up the great work. I haven’t run into many problems with VS2005.
allow functions from the program being debugged to run from the debugger auto-expand rules!
I need to port a bunch of VC6 projects to VS 2008. Please make it "pain free".
# re: Visual C++ Futures
Wednesday, January 02, 2008 12:41 AM by Gabriel Litsani
> WHAT ABOUT THE FUTUR OF VISUAL C++?
Thanks for your posting. I’m not sure I quite understand all your questions, but if you want to understand the future of VC++ (and how it relates to some of our managed technologies), then this is a good place to start: Steve Teixeira and Bill Dunlap: Visual C++ Today and Tomorrow ()
Thanks
Damien
To VeroMaxx
thanks for paying attention to my complaints
problem
>>>Explain plz.
>>> 8. using forward slash in include dialogs pretty much disables look up directory functionality – it used to work in 2003.
1. Select Project in the solution explorer
2. Properties->C/C++/General/Additional Include Directory
3. Click on editable part, button appears – press the button
4. type ".." – at this point the drop-down shows all available subdirs in ".."
5. repeat with "../.." – nothing happens even though it is valid compiler syntax
>>> 9. Macro substitution dialog works 40% of the time (OutDir, TargetDir etc.)
This one is very annoying since it appears to work out of the box, but stops working from time to time
same as 8.
2. Librarian OR Linker/General/Output File
3. Select edit from drop down
4. press "Macros<<" button
5. At this point, if you press Insert, it might or might not insert the selected macro into the output string.
It usually happens with big solutions (10-15 projects) – that is the only consistency about this item.
ASP.net support for C++/CLI
This is a must!! The lack of this ability is simply a major hindrance to C++ adoption on the new platform. Because the web front end is forced to be written in C#, often the rest of the tiers also get done in C#.
Any project type that is creatable in C# should be creatable in C++/CLI, whether it is the new XNA game programming projects, Silverlight, or what have you. That’s the only way C++ can be first class in the .NET world.
When your end product is a DLL, you quite often need to test it with many different executables. It would be very nice if there was a sticky list of the last 4-8 executables (and the associated arguments) on the "Command" and "Command Arguments" in the Debugging section of the project properties tab.
Quit trying to turn C++ into .Net. Give us even better support for C++ standards. Quit trying to one up the standards bodies with Microsoft’s custom interpretation of how things should be. Change C++ enough and native C++ programmers will eventually get fed up and leave forever. Eclipse is becoming a rather feature rich IDE with decent support for C++. We’re not always going to eat the dog food you feed us.
Rich
"…Quit trying to turn C++ into .Net. .."
John Selbie
"…Please innovate by adding a time-travelling…"
Could not have said it better myself (no, really – I could not, because my English is not that good 🙂 ). But if I could, I would have signed under both of the posts.
Please, keep in mind us poor programmers that need to type every letter with our own fingers 😉
C# is nice in that it is a bit easier to write, and automatic code formatting is good, but it still cannot match Eclipse.
#1: I would like to see } on the next line whenever I type {
#2: I would like to see "for (|;;) {\n}" whenever I type "for ", with those ";" marked as "overwrite-me-if-typed", as Eclipse does for quite a lot of things.
#3: "do {\n\t|\n} while ();" for "do " or "dor".
#4: Change "ptr.field" to "ptr->field", "class.method" to "class::method", "nspace.class" to "nspace::class" if you can figure out (IntelliSense) what is needed (…why is C++ defined like this? …ahh yes, for easier parsing… poor programmer.)
#5: REFACTOR!!
#..: Just look at eclipse 😉
Integrate CCFinderX (or any such great tool) with your refactoring. Do you know about CCFinderX? If yes, try using it as the first step to show duplicated/cloned source-code pieces, and suggest converting them into functions in the refactoring stage… Currently, Extract Method refactoring is done manually… With CCFinderX it could become semi-automatic.
How about a multithreaded compiler, or at least the ability to have it run on multiple files at a time, kind of the way make -j4 does? It would be nice for large projects. Incremental builds help, but on fresh checkouts from code repositories the build time can just be so painful!
I’d like to be able to optionally exclude comments from the search field while doing a search or a search/replace.
CamelCaseDetection 🙂
If I use Ctrl to skip words, I want it to stop at C, then C, then D in the above example.
Other word detection:
CLSID_MYCLASS should stop at the C, then at M, not assume that the whole thing is one word.
More customizable syntax highlighting! I want to use VS for all sorts of other editing.
Multithreaded compiling: we have these 4 quad-core machines out there, why can’t we get all those 16 cores busy? At the very least, something similar to make -j4 in GNU’s make utility would be nice. Incremental compilation helps a lot, but when you first check out a large project from source control it can just be painful to wait for the initial build.
hmmm it said it failed on yesterdays, so I posted again, only to see todays work just fine, then see yesterdays. Odd….
I’d like to see a Selectable Undo:
Highlight a block of code.
Select Undo (or ctrl-z).
It starts undoing only changes made to that block of code.
Microsoft could certainly add a new dimension to the VS IDE by supporting image/video insertion. Just like in Microsoft Word, users should be able to insert any video clip or image file into a source code file. That would greatly improve code readability, and by allowing programmers to insert diagrams and program design sketches, the source code would become more comprehensible.
I believe this is certainly possible. =]
My opinion
– VC++ needs the same powerful IntelliSense that VC# has.
– A powerful class creator for native C++ code.
– An option that permits renaming functions and variables throughout the entire code with one click.
– And, the same as other people: faster compilation times and more powerful code optimizations 🙂
Thanks for all.
LLORENS
– I’d like to know where linker dependencies are coming from.
– I’d like Visual Studio’s Build option to be able to do parallel compilation like "make -j" since operating system and CPU architecture is making that significantly more efficient.
– I’d like to be able to have multiple precompiled headers per project.
– I’d like a way to have the editor insert the current tooltip/prototype, e.g.
See the following code, which sends an actor three kinds of messages: Op1 (a case object), Op2 (a String), and Op3 (a Symbol). Which of the three is the best practice for Akka messages?
package local

import akka.actor._

object Constant {
  case object Op1
  val Op2 = "msg"
  val Op3 = 'msg
}

object Local extends App {
  implicit val system = ActorSystem("LocalSystem")
  val localActor = system.actorOf(Props[LocalActor], name = "LocalActor")
  localActor ! Constant.Op1
  localActor ! Constant.Op2
  localActor ! Constant.Op3
}

class LocalActor extends Actor {
  def receive = {
    case Constant.Op1 =>
      println("1")
    case Constant.Op2 =>
      println("2")
    case Constant.Op3 =>
      println("3")
  }
}
Well... the term "best practices" is very subjective, and the context of your problem statement will generally have an impact on which practice is best for you. For this reason you should focus more on why something is a best practice than on what the best practice is.
Now let's discuss this in a generic context.
As you know, Akka actors use messages to communicate with each other, and whenever two parties communicate there needs to be a protocol between them that enables them to communicate unambiguously.
Both parties could, of course, decide to communicate in plain English without any protocol, but that introduces the possibility of misunderstanding each other.
This possibility of ambiguity is unwanted when we want to build reliable systems. That's why we set up protocols between our actors.
object LaughProtocol {
  sealed trait LaughMessage
  case object Lol extends LaughMessage
  case object Rofl extends LaughMessage
  case object Lmao extends LaughMessage
}

class LaughActor extends Actor with ActorLogging {
  import LaughProtocol._

  override def receive = {
    case msg: LaughMessage => handleLaugh(msg)
    case msg => log.info("unexpected message :: {}", msg)
  }

  def handleLaugh(msg: LaughMessage) = msg match {
    case Lol => println("L O L")
    case Rofl => println("R O F L")
    case Lmao => println("L M A O")
  }
}
This brings us the simplicity of being sure about the nature of the messages we are going to handle. It also clearly spells out, for any other actor communicating with LaughActor, the rules for communicating with it.
Now you might say: what was wrong with just defining these as Strings?
class LaughActor extends Actor with ActorLogging {
  override def receive = {
    case "Lol" => println("L O L")
    case "Rofl" => println("R O F L")
    case "Lmao" => println("L M A O")
    case msg => log.info("unexpected message :: {}", msg)
  }
}
You can argue that even in this case any developer can look at the code of this actor and figure out the protocol. While that is true, when you are looking at real-world code spanning dozens of files it becomes very, very difficult.
And not only that: you lose one of your most important helpers in writing correct code, namely the compiler. Just consider the following lines taken from a potential user of LaughActor.
// case 1 - with a sealed hierarchy of messages
laughActorRef ! LaughProtocol.Lol

// case 2 - with strings
laughActorRef ! "lol"
What if the developer made a mistake and wrote the following instead?
// case 1 - with a sealed hierarchy of messages
laughActorRef ! LaughProtocol.Loll

// case 2 - with strings
laughActorRef ! "loll"
In the first case the compiler will immediately point out the error, but in the second case the error will pass unnoticed and may cause a lot of headache to debug when hidden in a code base of tens of thousands of lines.
But again, you can avoid even this issue by using only pre-defined strings, like the following:
object LaughProtocol {
  val Lol = "Lol"
  val Rofl = "Rofl"
  val Lmao = "Lmao"
}

class LaughActor extends Actor with ActorLogging {
  import LaughProtocol._

  override def receive = {
    case msg if msg.equals(LaughProtocol.Lol) => println("L O L")
    case msg if msg.equals(LaughProtocol.Rofl) => println("R O F L")
    case msg if msg.equals(LaughProtocol.Lmao) => println("L M A O")
    case msg => log.info("unexpected message :: {}", msg)
  }
}
But consider a bigger application with dozens of actors, written by a team of 5-6 developers.
Notice that by now the developers rely solely on the defined members such as LaughProtocol.Lmao and do not actually look at the values those members hold.
Now, say another actor has a protocol like the following:
object LoveProtocol {
  val LotsOfLove = "Lol"
}
Now you should be able to see the problem. Consider an actor that is supposed to handle messages from both of these protocols. All it will ever receive is the String "Lol", and it will have no way of knowing whether that is LaughProtocol.Lol or LoveProtocol.LotsOfLove.
So now, whenever a developer adds a new message to any of the protocols, they need to make sure that no other protocol uses the same String. And that is simply not an option for teams working on large code bases.
These are just a few of the reasons why people prefer things like sealed protocols in their code bases.
# Tips and tricks from my Telegram-channel @pythonetc, February 2019

It is new selection of tips and tricks about Python and programming from my Telegram-channel @pythonetc.
[Previous publications](https://habr.com/ru/search/?q=%5Bpythonetc%20eng%5D&target_type=posts).
Structures comparing
--------------------
Sometimes you want to compare complex structures in tests while ignoring some values. Usually it is done by comparing particular values within the structure:
```
>>> d = dict(a=1, b=2, c=3)
>>> assert d['a'] == 1
>>> assert d['c'] == 3
```
However, you can create special value that reports being equal to any other value:
```
>>> assert d == dict(a=1, b=ANY, c=3)
```
That can be easily done by defining the `__eq__` method:
```
>>> class AnyClass:
... def __eq__(self, another):
... return True
...
>>> ANY = AnyClass()
```
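For reference, here is the whole trick as one self-contained snippet. As an aside, the standard library ships a ready-made object with exactly this behaviour: `unittest.mock.ANY`.

```python
class AnyClass:
    def __eq__(self, another):
        # report being equal to any value
        return True


ANY = AnyClass()

d = dict(a=1, b=2, c=3)
# ANY matches whatever it is compared with,
# so only the explicitly given values are really checked:
assert d == dict(a=1, b=ANY, c=3)
assert d == dict(a=ANY, b=ANY, c=ANY)
print('ok')
```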
sys.stdout
----------
`sys.stdout` is a wrapper that allows you to write strings instead of raw bytes. The string is encoded automatically using `sys.stdout.encoding`:
```
>>> _ = sys.stdout.write('Straße\n')
Straße
>>> sys.stdout.encoding
'UTF-8'
```
`sys.stdout.encoding` is read-only and is equal to Python default encoding, which can be changed by setting the `PYTHONIOENCODING` environment variable:
```
$ PYTHONIOENCODING=cp1251 python3
Python 3.6.6 (default, Aug 13 2018, 18:24:23)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.stdout.encoding
'cp1251'
```
If you want to write bytes to `stdout` you can bypass automatic encoding by accessing the wrapped buffer with `sys.stdout.buffer`:
```
>>> sys.stdout
<_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1251'>
>>> sys.stdout.buffer
<_io.BufferedWriter name='<stdout>'>
>>> _ = sys.stdout.buffer.write(b'Stra\xc3\x9fe\n')
Straße
```
`sys.stdout.buffer` is also a wrapper that does buffering for you. It can be bypassed by accessing the raw file handler with `sys.stdout.buffer.raw`:
```
>>> _ = sys.stdout.buffer.raw.write(b'Stra\xc3\x9fe')
Straße
```
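The same three-layer structure can be rebuilt on top of an in-memory stream to see what each layer does. This is only an illustration of the layering, not how `sys.stdout` is actually constructed at interpreter startup:

```python
import io

raw = io.BytesIO()                                      # raw byte storage
buffered = io.BufferedWriter(raw)                       # buffering layer
wrapper = io.TextIOWrapper(buffered, encoding='utf-8')  # str -> bytes layer

wrapper.write('Straße\n')
wrapper.flush()  # push the data through the buffer into `raw`
print(raw.getvalue())  # b'Stra\xc3\x9fe\n'
```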
Ellipsis constant
-----------------
Python has a very short list of built-in constants. One of them is `Ellipsis`, which can also be written as `...`. This constant has no special meaning for the interpreter, but is used in places where such syntax looks appropriate.
`numpy` supports `Ellipsis` as a `__getitem__` argument; e.g. `x[...]` returns all elements of `x`.
PEP 484 defines an additional meaning: `Callable[..., type]` is a way to define the type of callables with no argument types specified.
Finally, you can use `...` to indicate that a function is not yet implemented. This is completely valid Python code:
```
def x():
...
```
However, in Python 2 `Ellipsis` can't be written as `...`. The only exception is `a[...]`, which means `a[Ellipsis]`.
All of the following syntaxes are valid for Python 3, but only the first line is valid for Python 2:
```
a[...]
a[...:2:...]
[..., ...]
{...:...}
a = ...
... is ...
def a(x=...): ...
```
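One practical consequence (my example, not part of the original list): since `Ellipsis` is a built-in singleton, it works well as a sentinel default in cases where `None` is a legitimate value:

```python
def get_setting(settings, key, default=...):
    """Return settings[key]; fall back to default unless it was omitted."""
    if key in settings:
        return settings[key]
    if default is ...:  # no default was passed
        raise KeyError(key)
    return default


settings = {'timeout': None}
print(get_setting(settings, 'timeout'))     # None, a real stored value
print(get_setting(settings, 'retries', 3))  # 3, the fallback
```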
Modules reimporting
-------------------
Already imported modules will not be loaded again. `import foo` just does nothing. However, it proved to be useful to reimport modules while working in an interactive environment. The proper way to do this in Python 3.4+ is to use `importlib`:
```
In [1]: import importlib
In [2]: with open('foo.py', 'w') as f:
...: f.write('a = 1')
...:
In [3]: import foo
In [4]: foo.a
Out[4]: 1
In [5]: with open('foo.py', 'w') as f:
...: f.write('a = 2')
...:
In [6]: foo.a
Out[6]: 1
In [7]: import foo
In [8]: foo.a
Out[8]: 1
In [9]: importlib.reload(foo)
Out[9]: <module 'foo' from 'foo.py'>
In [10]: foo.a
Out[10]: 2
```
`ipython` also has the `autoreload` extension that automatically reimports modules if necessary:
```
In [1]: %load_ext autoreload
In [2]: %autoreload 2
In [3]: with open('foo.py', 'w') as f:
...: f.write('print("LOADED"); a=1')
...:
In [4]: import foo
LOADED
In [5]: foo.a
Out[5]: 1
In [6]: with open('foo.py', 'w') as f:
...: f.write('print("LOADED"); a=2')
...:
In [7]: import foo
LOADED
In [8]: foo.a
Out[8]: 2
In [9]: with open('foo.py', 'w') as f:
...: f.write('print("LOADED"); a=3')
...:
In [10]: foo.a
LOADED
Out[10]: 3
```
\G
--
In some languages, you can use the `\G` assertion. It matches at the position where the previous match ended. That allows writing finite automata that walk through a string word by word (where a word is defined by the regex).
However, there is no such thing in Python. The proper workaround is to manually track the position and pass the substring to regex functions:
```
import re
import json
text = '<b>foo</b><a>barbar</a>'
regex = '^(?:<([a-z]+)>|</([a-z]+)>|([a-z]+))'
stack = []
tree = []
pos = 0
while len(text) > pos:
error = f'Error at {text[pos:]}'
found = re.search(regex, text[pos:])
assert found, error
pos += len(found[0])
start, stop, data = found.groups()
if start:
tree.append(dict(
tag=start,
children=[],
))
stack.append(tree)
tree = tree[-1]['children']
elif stop:
tree = stack.pop()
assert tree[-1]['tag'] == stop, error
if not tree[-1]['children']:
tree[-1].pop('children')
elif data:
stack[-1][-1]['data'] = data
print(json.dumps(tree, indent=4))
```
In the previous example we can save some time by avoiding slicing the string again and again, and instead asking the `re` module to search starting from a different position.
That requires some changes. First, `re.search` doesn't support searching from a custom position, so we have to compile the regular expression manually. Second, `^` means the real start of the string, not the position where the search started, so we have to manually check that the match happened at the expected position.
```
import re
import json
text = '<b>foo</b><a>barbar</a>' * 10
def print_tree(tree):
print(json.dumps(tree, indent=4))
def xml_to_tree_slow(text):
    regex = '^(?:<([a-z]+)>|</([a-z]+)>|([a-z]+))'
stack = []
tree = []
pos = 0
while len(text) > pos:
error = f'Error at {text[pos:]}'
found = re.search(regex, text[pos:])
assert found, error
pos += len(found[0])
start, stop, data = found.groups()
if start:
tree.append(dict(
tag=start,
children=[],
))
stack.append(tree)
tree = tree[-1]['children']
elif stop:
tree = stack.pop()
assert tree[-1]['tag'] == stop, error
if not tree[-1]['children']:
tree[-1].pop('children')
elif data:
stack[-1][-1]['data'] = data
return tree
_regex = re.compile('(?:<([a-z]+)>|</([a-z]+)>|([a-z]+))')
def _error_message(text, pos):
return text[pos:]
def xml_to_tree_fast(text):
stack = []
tree = []
pos = 0
while len(text) > pos:
        found = _regex.search(text, pos=pos)
        assert found, _error_message(text, pos)
        begin, end = found.span(0)
        assert begin == pos, _error_message(text, pos)
pos += len(found[0])
start, stop, data = found.groups()
if start:
tree.append(dict(
tag=start,
children=[],
))
stack.append(tree)
tree = tree[-1]['children']
elif stop:
tree = stack.pop()
assert tree[-1]['tag'] == stop, _error_message(text, pos)
if not tree[-1]['children']:
tree[-1].pop('children')
elif data:
stack[-1][-1]['data'] = data
return tree
print_tree(xml_to_tree_fast(text))
```
Result:
```
In [1]: from example import *
In [2]: %timeit xml_to_tree_slow(text)
356 µs ± 16.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [3]: %timeit xml_to_tree_fast(text)
294 µs ± 6.15 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
Round function
--------------
*Today's post is written by [orsinium](https://habr.com/ru/users/orsinium/), the author of @itgram\_channel.*
The `round` function rounds a number to a given precision in decimal digits.
```
>>> round(1.2)
1
>>> round(1.8)
2
>>> round(1.228, 1)
1.2
```
You can also pass a negative precision:
```
>>> round(413.77, -1)
410.0
>>> round(413.77, -2)
400.0
```
`round` returns a value of the same type as the input number:
```
>>> from decimal import Decimal
>>> from fractions import Fraction
>>> type(round(2, 1))
<class 'int'>
>>> type(round(2.0, 1))
<class 'float'>
>>> type(round(Decimal(2), 1))
<class 'decimal.Decimal'>
>>> type(round(Fraction(2), 1))
<class 'fractions.Fraction'>
```
For your own classes you can define round processing with the `__round__` method:
```
>>> class Number(int):
... def __round__(self, p=-1000):
... return p
...
>>> round(Number(2))
-1000
>>> round(Number(2), -2)
-2
```
Values are rounded to the closest multiple of `10 ** (-precision)`. For example, for `precision=1` value will be rounded to multiple of 0.1: `round(0.63, 1)` returns `0.6`. If two multiples are equally close, rounding is done toward the even choice:
```
>>> round(0.5)
0
>>> round(1.5)
2
```
Sometimes rounding of floats can be a little bit surprising:
```
>>> round(2.85, 1)
2.9
```
This is because most decimal fractions can't be represented exactly as a float (https://docs.python.org/3.7/tutorial/floatingpoint.html):
```
>>> format(2.85, '.64f')
'2.8500000000000000888178419700125232338905334472656250000000000000'
```
If you want to round half up you can use `decimal.Decimal`:
```
>>> from decimal import Decimal, ROUND_HALF_UP
>>> Decimal(1.5).quantize(0, ROUND_HALF_UP)
Decimal('2')
>>> Decimal(2.85).quantize(Decimal('1.0'), ROUND_HALF_UP)
Decimal('2.9')
>>> Decimal(2.84).quantize(Decimal('1.0'), ROUND_HALF_UP)
Decimal('2.8')
``` | https://habr.com/ru/post/444228/ | null | null | 1,535 | 62.24 |
14th July, 2017
Pendo left a reply on Find Unused Methods/Routes/Views • 1 year ago
I'm after the same thing. A long running project is about to go live (well.. it'll still take a while) and it needs a major cleaning since a lot has been refactored and I'm pretty sure some methods, routes and views are left unused.
6th July, 2017
Pendo left a reply on Npm Run / Gulp Not Working • 1 year ago
Did you do `npm install` first before running `npm run watch`?
21st April, 2017
Pendo left a reply on Mails Sent Using Mail::queue Won't Apear In Mailgun Logs • 1 year ago
Nope, sorry. Didn't look into it, just kept using SMTP access instead of the API to send e-mails.
9th February, 2017
Pendo left a reply on Add New Payment Method In Spark • 1 year ago
Stripe is now supported in the Netherlands, however I've found out Spark is built for just credit card support so it would need some editing to get it running. Stripe offers recurring payments via bank accounts now (as does Mollie by the way). Not sure when, but I'm going to see what the best way is to get a payment method for the Dutch market connected.
1st December, 2016
Pendo left a reply on Should I Use An Admin Template Such As JoshAdmin? • 1 year ago
Pendo left a reply on Laravel + WYSIWYG Security • 1 year ago
@semperfoo: and what if you decide to give users other permissions for showing data on your website? For instance, you'd like to allow links which you didn't before.
As far as I'm concerned, store data as you get it (perhaps basic filtering for javascript etc. is smart) and then clean the output when displaying. But opinions are different on that subject :)
30th November, 2016
Pendo left a reply on Createing Cron Running Log • 1 year ago
@dynamiccarrots: sorry it took so long. Didn't have any time to look into it. I created the package and tested it locally to validate it works. You can find the package below:
What it does is pretty simple: it replaces the Schedule interface that is used when running schedule:run. My interface automatically binds the before() and after() methods (as shown in my 3rd post). Obviously, there are much easier ways (only using a model and the before/after hooks), but this is an all-in-one automated solution.
I've added a check that only logs the execution times when in local or staging environment (see the modification of app/Console/Kernel.php on Github). If you want to log the production, just modify this piece of code.
There's an example included to display the execution time of the jobs that have been executed. I guess this is all you need...
It was at least worth looking into this for me to learn more about the framework, so don't feel bad if you don't use the code!
17th November, 2016
Pendo left a reply on Createing Cron Running Log • 1 year ago
So, I've been busy all day trying to figure out how to create a package for this. Not because it's so super useful, just because I get triggered by things I think I can't do.
So far I've come up with a Schedule class to extend Laravel's and a custom Event class that extends Laravel's as well. The idea is pretty simple: the Schedule class (I called it LogSchedule) dispatches a LogEvent and this LogEvent class adds the before() and after() closures by default.
These closures then call a singleton from within the IoC container and save the start and end date when firing a command via the scheduler. The hardest thing so far was figuring out how to replace the Schedule/Event classes because I was getting a lot of errors.
For now it works, I just have to fine-tune some things a bit. I'm not sure if this is production ready since there are far easier ways (like the one above) to do stuff like this. But hey, I might get some tips and might learn some new tricks while trying to get it to work this way.
I'm going to try and finish this setup somewhere next week. I'll get back here with the solution.
Pendo left a reply on Determining Available Time Slots(for Scheduling) • 1 year ago
@Eddie212: there is more to it than just that conditional:
```
if ($from->between($eventStart, $eventEnd) || $to->between($eventStart, $eventEnd) ||
    ($eventStart->between($from, $to) && $eventEnd->between($from, $to))) {
    return false;
}
```
That's what I have been using. Your conditional will fail on some points. Besides that, I subtracted 1 second off of the $to-time so the appointment of 13:30 till 14:00 doesn't overlap the 14:00 - 14:30 while checking.
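That one-second adjustment can be sketched language-neutrally (Python here, with hypothetical names): treating slots as half-open intervals achieves the same effect as subtracting a second from an inclusive end time.

```python
from datetime import datetime, timedelta

def overlaps(start_a, end_a, start_b, end_b):
    # Half-open intervals [start, end): two ranges overlap only when
    # each one starts strictly before the other one ends.
    return start_a < end_b and start_b < end_a

slot_start = datetime(2017, 1, 1, 13, 30)
slot_end = datetime(2017, 1, 1, 14, 0)
event_start = datetime(2017, 1, 1, 14, 0)
event_end = datetime(2017, 1, 1, 14, 30)

# A 13:30-14:00 slot does not collide with a 14:00-14:30 event.
assert not overlaps(slot_start, slot_end, event_start, event_end)

# Extend the slot by one minute and the two ranges do collide.
assert overlaps(slot_start, slot_end + timedelta(minutes=1), event_start, event_end)
```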
Pendo left a reply on Createing Cron Running Log • 1 year ago
Maybe this is what you're looking for:
```
$schedule->command('purge:users')
    ->daily()
    ->before(function () {
        CronLogger::start('purge_old_users');
    })
    ->after(function () {
        CronLogger::end('purge_old_users');
    });
```
Pendo left a reply on Createing Cron Running Log • 1 year ago
You mean like some kind of observer that watches for commands being ran and logging start/end by itself? If so.. that goes beyond what I know about Laravel so far.
Pendo left a reply on DB::query Not Working Laravel 5.3 • 1 year ago
DB::query("SELECT * FROM users")->get();
You're not getting anything yet.
Pendo left a reply on Createing Cron Running Log • 1 year ago
If I was to build a functionality like this I'd create some kind of CronLogger class that you can use within the commands that you are running. Each cron would get its own key and all I would do is:

```
CronLogger::start('purge_old_users');
// Your cron working
CronLogger::end('purge_old_users');
```
When reading the data from your database if the start timestamp is greater than the end timestamp you can assume it's running (or even failed if it stays that way).
Just approach this as simply as possible.
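The idea can be sketched like this (Python used for illustration; a real CronLogger would write to a database table instead of a dict):

```python
from datetime import datetime

# In-memory stand-in for the log table; each cron key maps to its
# start and end timestamps.
_runs = {}

def start(key):
    _runs[key] = {'started_at': datetime.now(), 'ended_at': None}

def end(key):
    _runs[key]['ended_at'] = datetime.now()

def is_running(key):
    # A run with a start but no end is either still running or failed.
    run = _runs.get(key)
    return run is not None and run['ended_at'] is None

start('purge_old_users')
assert is_running('purge_old_users')
end('purge_old_users')
assert not is_running('purge_old_users')
```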
Pendo left a reply on Should I Use An Admin Template Such As JoshAdmin? • 1 year ago
Honestly I think stripping down takes far less time than designing an admin theme, converting it to HTML, adding scripts for what you use, etc.
In my past experiences all the admin themes I've used so far saved me so much time compared to building one myself (which I've done 1 time and never ever again after that first time).
On the other hand, if you want a fully unique design for your web app: just go for it. It'll take time but you'll be more happy with the result in the end. It also depends on the amount of time you have to create the theme I guess.
What template you use doesn't really matter, most of them are good and it's all based on personal taste of the design. So as far as I'm concerned: admin templates save you a ton of time and if you're no designer you'd better be spending all that time on building your project instead.
17th October, 2016
Pendo left a reply on Artisan Fails In Staging/productions - Works Local • 1 year ago
As usual.. few hours of bugfixing won't lead to the solution. But after posting the question you find the solution.
There was one little difference between local & staging/production: one vendor's files weren't published locally and that single config file contained a reference to the url() helper method.
Pendo started a new conversation Artisan Fails In Staging/productions - Works Local • 1 year ago
Hi all,
been looking at this issue for some time now and none of the suggested things I came across on the forum or Google lead me to solving this issue for me. Below is the error I get when trying to run ANY artisan command:
What I came across so far was that having helper functions such as url() and asset() in the config files caused this issue. I double checked but there aren't any of these present in any of the configuration files. Besides, I guess the error should occur locally as well if this was the case.
I tried to clear the composer cache, dump-autoload, clear the cache folders of laravel (permissions are correct rwxrwxr-x). I also tried setting the staging environment variable to "local", but the issue persists. PHP versions are 5.6.13 local and 5.6.26 on the server I'm working on. Then there was a Middleware that forced a redirect.. thought that might be an issue, but disabling the middleware didn't solve anything.
That's what I've tried so far to solve the issue, without any luck.
15th August, 2016
Pendo left a reply on Speed Up Ajax Request • 2 years ago
Just pushed the changes to the staging server, looks like it's all just in the development stack. On the live server the response went down from 700ms to ~250ms. Thanks for thinking along!
Pendo left a reply on Speed Up Ajax Request • 2 years ago
@ohffs: let me check what is happening, will come back at that in a few minutes. @toniperic: it's just a search in a varchar(155) field actually.
I may be able to replace the 'search' function with a datasource for the autocomplete to filter by itself. Thanks so far.
Pendo started a new conversation Speed Up Ajax Request • 2 years ago
Hi all,
I'm developing an autocomplete function on a set of 60 categories in one of the projects I'm doing. I noticed the ajax requests are pretty slow (about 700ms per request) when I hadn't added any caching. But after adding caching, the responses still are between 200 and 700ms.
The cache driver is file-based, the code I use is below:
```
// Set or Get cached value (12 hours)
if (Cache::has($cachekey)) {
    $matches = Cache::get($cachekey);
} else {
    $matches = CompanyCategory::where('name', 'LIKE', '%'.$keyword.'%')
        ->where('parent_id', '!=', 0)
        ->with('parent')
        ->get();
    Cache::put($cachekey, $matches, (60 * 12));
}

return response()->json($result);
```
(1) User types in a value in an input field
(2) AJAX request fires to a route
(3) A controller method (above) is executed and returns a json result
(4) Results are displayed as autocomplete to the input
Is there any way to speed up this process? I can't believe that these simple requests, especially when cached, need to take 700ms.
5th August, 2016
Pendo left a reply on When Would You Update The Laravel Framework In A Project? • 2 years ago
Right, I guess I'm on the same page as most of you. For my own CMS I would maintain and update the source better and quicker because it's going to be used on multiple projects. Custom projects like the ones I mentioned just need to work. Too bad I'm only starting to get into TDD (or at least into writing tests), so for the past few projects it will be a lot harder to test and check all code after an update.
Laravel 5.1.* will be good enough for the near future anyway.. based on the opinions/experiences you guys have I don't think I should think about it too much. Thanks all :)
4th August, 2016
Pendo left a reply on When Would You Update The Laravel Framework In A Project? • 2 years ago
My biggest issue is what @zachleigh said: future maintenance and being able to spread the work for updating the project over multiple months/years instead of having to rewrite a large piece of your software 3 years further down the road.
The projects I started in 5.1.x all make use of an external permission/roles package, while 5.2 comes with its own. I'm kind of curious to see how much work it takes to update to L5.2 or even L5.3 in a few days/weeks when it's released. Laravel Shift seemed like a good starting place, it automatically updates the framework code and creates a separate Git branch. I think I'm going to test the effort it takes to update the code in a few weeks after the storm settles (the storm being clients that are pushing ;)) just to get some experience in it.
I even have had procedural PHP projects from 5/6 years ago that I wish I had updated at least a little bit to the most recent changes of the CMS basis that I'm using in projects I started later on. Old code is a bitch.. definitely if you get used to the newer code.
3rd August, 2016
Pendo left a reply on When Would You Update The Laravel Framework In A Project? • 2 years ago
So, when using 5.1 that would be an update to whatever the latest 5.1.* version is? That's what I was thinking about as well: why bother to upgrade to a newer version if the functionality added by the update isn't used because the application is running correctly.
However, there must be people that update from 5.1 to 5.2 to 5.3, kind of curious to their thoughts and why one would do this.
Just came across Laravel Shift - this automates the update process of the Laravel core if I understand correctly and you can focus on updating the custom code (controllers, models, etc) yourself. Right?
Pendo started a new conversation When Would You Update The Laravel Framework In A Project? • 2 years ago
Been thinking about this for a while, I've got a few project running on Laravel 5.1, some development projects on 5.2 and probably my next on is going to be on 5.3. But when would one decide to update the framework version to the latest? The 5.1 projects had been started while 5.2 was already released, but chosen due to being a LTS release.
I can imagine you won't update it to each latest version (at least,.. I can't see myself doing that or my clients paying for that every time), but there comes a time when an update is inevitable (or even when it's too late already). So,.. what guidelines do you guys follow when it comes to updating the core framework of your projects and maybe more important: how do you handle the update and testing?
2nd August, 2016
Pendo started a new conversation Let Users Manage The Website Menu(s) - Best Practice? • 2 years ago
Hi all,
last couple of months I've been figuring out a lot about Laravel and I can't even describe how much I regret not starting sooner! Just after a few projects and a bunch of books and blogs I've come to the conclusion that my CMS needs a major re-do using Laravel. I've created a few topics in the past with questions / topics that I needed some information on, mainly because I work by myself and sometimes you just hit a brick wall when trying to figure stuff out. This is one of those cases.
Where I think I have figured out most parts of the CMS (the setup, the functions, the modules, etc.), there is still one part that I'm not feeling completely sure about: the (admin) module that allows the website owner/manager to create menus for different places on the website. Of course there are some obvious things, such as: we must be able to create different kinds of menus with (unlimited?) multiple layers for subnavigation. But here's the thing:
How would you make this manageable? The CMS is going to be modular; each module has its own configuration file/manifest, so I would be able to store menu specific data in one of these files. My plan is to use named routes with optional parameters, so I won't be storing the actual links (to prevent 404's). So far so good, but here is what I can't seem to get my head wrapped around:
I don't want to display just the "general" links, like "Blog Entrance" and "Blog Archive", but I'd also like to give the user the ability to link directly to a page "Blog Article". For a blog we'd be using an ID field and a Title field, but this might be different if we are creating a link to a profile page (ID, Firstname, Lastname). How would I get my "MenuManager" to understand how to build a correct link / display the correct data.
This is what I'm after so far: create an interface "MenuBuilder" that has a few methods that each module must implement; the most important ones would be "getLinks", which gets all the links, and "getItems", which would return an array of all rows for the resource. Then the module configuration/manifest would state that the module is linkable and what class should be used. For example: ProfileMenuBuilder. Then, for each active module the MenuBuilder would be used to call the method that returns all available links.
Is this anywhere near best practice (or perhaps good practice) or is this perhaps the worst possible idea to get this all to work? Thanks a bunch for thinking along.
3rd July, 2016
Pendo left a reply on Laravel Route In <a> Tag • 2 years ago
@tylernathanreed: way too much effort for a developer of his type.
2nd July, 2016
Pendo left a reply on Laravel Route In <a> Tag • 2 years ago
Not trying to be a d*ck or anything, but I get the idea you're trying to get help the easy way without spending any time on finding a solution yourself. That error has no relation to the initial question and besides that it explains what the problem is right in the error message. Check your Request file or your validation rules in the method.
Use the documentation and you'll be a wise man in no time:
Pendo left a reply on Moving From Windows To Mac • 2 years ago
I think none of us can really decide for you what to do. I switched from Windows to Mac about 3 years ago, coming from +- 15 years of windows usage. It was strange at first but if you're used to working with computers you'll get the hang of it soon enough.
Basic tasks repeat themselves every day (installation of tools, usage of windows, basic shortcuts and mouse actions). I think it took me about a week or two to get used to the basic things of a mac.
Learning the programs took me from a few minutes to a few hours each, but as with everything: if you use it on a regular basis you'll get to know the ins and outs easily. As long as you can accept that you've got to change the way you work and are eager to learn I see no issues. Google is your friend, there hasn't been a single problem I ran in to that didn't have a solution posted on the net further than 3 mouse clicks away ;)
Good luck!
Pendo left a reply on Redirect->intended() Not Working After Login • 2 years ago
Pendo left a reply on Determining Available Time Slots(for Scheduling) • 2 years ago
@tjphippen: sure, if I can be of any help let me know! I'm just taking a break from setting op a helper class that handles all checks for my application. I took a lot of code from your example and added some extra things to it to fit my needs, for example:
(1) A day can have multiple timeIntervals (eg. 8.00-12.00 & 13.00-18.00), so I'd have to loop the function for each interval
(2) I have a function that returns availability per day (1 free spot? day available, 0 free spots? unavailable) instead of a full list of slots
(3) A day can be closed (which would return false on that given day)
Besides that my application is set up a little bit different of course, but I'd be happy to share what I made of it.
13th June, 2016
Pendo left a reply on Validator - Ignoring Field If Value Is 0 For Exists Rule. • 2 years ago
@pmall, thanks! Why didn't I think of that, much better to have the language in the view indeed.
But in my case, what you're using prepends this value (I tried multiple things but it always kept assigning a 0-value).
<option value="0">Please select a category</option>
And the
exists validation rule then tries to find a record with ID=0 (which fails). I'm using Laravel 5.1.37 (LTS). Do you have similar behaviour?
Pendo left a reply on Validator - Ignoring Field If Value Is 0 For Exists Rule. • 2 years ago
I came across the same issue myself yesterday and my solution wasn't perfect for this. I was using the following to prepend my select list with a default value:
```
$category_list = CompanyCategory::withParent(0)->lists('name', 'id');
$category_list->prepend('Please select a category');
// this results in <option value="0">Please select a category</option>
```
This resulted in `0` being sent as the value instead of `""` or `null`. Therefore, I added a new validation rule, might come in handy for others:
```
/**
 * Validate ExistsInDatabase or 0/null
 */
Validator::extend('exists_or_null', function ($attribute, $value, $parameters) {
    if ($value == 0 || is_null($value)) {
        return true;
    } else {
        $validator = Validator::make([$attribute => $value], [
            $attribute => 'exists:' . implode(",", $parameters)
        ]);

        return !$validator->fails();
    }
});
```
You would put this in the boot() method of the AppServiceProvider. I'm including a file containing all custom rules like so:

```
public function boot()
{
    require_once app_path() . '/Http/validator.php';
}
```
After adding this you can use the validation rule like this:
'parent_id' => 'sometimes|exists_or_null:company_categories,id'
What it simply does is check for the value to be either `0` or `null`; if so, it passes the validation. Otherwise it does an `exists` validation using the same parameters.
29th May, 2016
Pendo left a reply on Mails Sent Using Mail::queue Won't Apear In Mailgun Logs • 2 years ago
Nope, haven't sorted this yet.. used SMTP Login for the time being which works for me and the client. Don't have that much time to dig in deeper.
Pendo left a reply on Determining Available Time Slots(for Scheduling) • 2 years ago
@tjphippen: I'm currently creating a plan for a similar project, been thinking about this for the past few weeks (I've had the time to think things thru) and came up with the following:
(1) We have a user that manages one or multiple calendars and that manages its own services (let's say "Wash hands, 10 minutes").
(2) Each calendar has an availability set; these are one or multiple timespans per given date. If the user is available each Monday to Friday between 13:00 and 18:00 for a whole year, that results in 52 * 5 = 260 records for availability. This is too much, so I'm going to have each calendar have its default settings for Monday to Friday (multiple slots per day) and then another table containing the availability that overwrites the default settings (other times, closed days, etc). This keeps the table with the default times as small as possible (7 days x 2 timespans (if a lunch break is present in the day, for example) per calendar). Having 30 days that aren't according to the default plan reduces the amount of availability records from 260 to about 50.
(3) Then there's the appointment table containing all appointments (like your events).
At first: thanks for pointing me to DatePeriod, that'll take one headache away from defining the available slots in the calendar. Huge help, thanks!
Second: my biggest issue before, and still, is how to determine if a date is fully booked. I want to reduce the amount of queries needed as much as possible, but my first guess would be to do a calculation for each day in the calendar, much like finding a free slot on a day. But that would be a huge process to calculate 31 days ahead.
What I'm leaning to is to have the script take the service that takes the least amount of time and perform an availability check on that day. Then, if there's no spot, an extra record should be generated in a table that locks out the date. Downside to this is that the day would stay available even if a user selects the service with the highest amount of time needed.
Can't really get my mind wrapped around the fastest (performance-wise) way to achieve this.
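The default-plus-overrides lookup from point (2) can be sketched with illustrative data shapes (Python here; a real implementation would query the two tables instead):

```python
from datetime import date

# Default time spans per weekday (0 = Monday); a missing weekday means closed.
defaults = {
    0: [('13:00', '18:00')],
    1: [('09:00', '12:00'), ('13:00', '18:00')],
}

# Per-date exceptions that overwrite the defaults; None means closed that day.
overrides = {
    date(2016, 5, 30): None,
    date(2016, 5, 31): [('10:00', '14:00')],
}

def spans_for(day):
    # An override, when present, wins over the weekday default.
    if day in overrides:
        return overrides[day] or []
    return defaults.get(day.weekday(), [])

assert spans_for(date(2016, 5, 30)) == []                    # closed by override
assert spans_for(date(2016, 5, 31)) == [('10:00', '14:00')]  # overridden times
assert spans_for(date(2016, 6, 6)) == [('13:00', '18:00')]   # default Monday
```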
18th May, 2016
Pendo left a reply on Laravel Installer Use 5.1.* Instead Of Latest Version • 2 years ago
Yeah, I'm aware of the composer function, but since the Laravel Installer grabs a few packages when creating a new project I was hoping to have the ability to select a version in the installer as well. But I guess the installer doesn't support any other version but the latest.
Pendo started a new conversation Laravel Installer Use 5.1.* Instead Of Latest Version • 2 years ago
Hi all,
can't seem to find anything but, is there an option to use the LTS version of Laravel when using the Laravel Installer?
Thanks.
12th May, 2016
Pendo started a new conversation Mails Sent Using Mail::queue Won't Apear In Mailgun Logs • 2 years ago
Hi all,
I've got some pretty strange (at least,.. I think so) issues with my Laravel setup and Mailgun. I think the issues are related, but correct me if I'm wrong about that.
The configuration
I set up my .env file to specify the driver as mailgun and set the API key in the config/services.php file, I also set up all variables in the config/mail.php file:
```
// config/mail.php
return [
    'driver' => env('MAIL_DRIVER'),
    'host' => env('MAIL_HOST'),
    'port' => env('MAIL_PORT', 587),
    'from' => ['address' => env('MAIL_FROM'), 'name' => env('MAIL_NAME')],
    'encryption' => env('MAIL_ENCRYPTION', 'tls'),
    'username' => env('MAIL_USERNAME'),
    'password' => env('MAIL_PASSWORD'),
    'sendmail' => '/usr/sbin/sendmail -bs',
    'pretend' => env('MAIL_PRETEND', false),
];

// config/services.php
'mailgun' => [
    'domain' => env('MAILGUN_DOMAIN'),
    'secret' => env('MAILGUN_SECRET'),
],

// .env
MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailgun.org
MAIL_PORT=2525
[email protected]
MAIL_PASSWORD=********
MAIL_ENCRYPTION=tls
MAILGUN_DOMAIN=mg.my-project.nl
MAILGUN_SECRET=key-00000000000
[email protected]
MAIL_NAME=Rumbold
[email protected]
```
The From/To Address
For an unknown reason, all mails are being sent to and from [email protected]; the [email protected] address is never used. I triple checked the Mailgun configuration and searched all files in my project, but there is absolutely no place where [email protected] is used. Because of this issue we made an alias [email protected] -> [email protected] - but that's the only occurrence of that e-mail address in the full project.
Mailgun Logs empty
When using Mail::send the e-mail shows up in the Mailgun logs; when using Mail::queue however, the e-mail is delivered (I've added myself as BCC to check) but there is no reference to the e-mail in the Mailgun logs.
API vs SMTP
When using the API all e-mails, even while specifying [email protected] as sender, are delivered from the default [email protected] account. When using the SMTP settings (MAIL_DRIVER=smtp) the sender is correct: [email protected]
I'm confused.. more than that. I can't seem to figure out two things:
(1) Where the hell does the [email protected] address come from
(2) Why don't mails sent via Mail::queue show up in my Mailgun logs
Any help is greatly appreciated, I am willing to pay for help if that's needed. The project is live already and this is a big pain in the youknowwhat.
Thanks in advance.
4th May, 2016
Pendo left a reply on ModelIdentifier Returned Instead Of The Model In Listener • 2 years ago
Thanks! I'll give that a try just to see what it does.
30th April, 2016
Pendo left a reply on What's The Right Way To Save HTML In Database? • 2 years ago
Thanks, at least that's an answer after 4 months, haha!
28th April, 2016
Pendo left a reply on ModelIdentifier Returned Instead Of The Model In Listener • 2 years ago
Alright, so I found out it worked when implementing ShouldQueue and it also works when removing the SerializesModels trait. Can someone explain to me what the difference is? Because the other Listeners I have work without any problems, the only difference is this piece of code:
```
$groupname = '';
if (intval($event->user->group_choice) > 0) {
    $group = Group::find($event->user->group_choice);
    if (!is_null($group)) {
        $groupname = $group->name;
    }
}
```
So my guess is that the Mail::queue and Mail::send functions know how to handle the model identifier and the piece of code above does not. But when am I supposed to use the SerializesModels trait and when not?
Pendo started a new conversation ModelIdentifier Returned Instead Of The Model In Listener • 2 years ago
Hey all,
just ran into an issue, I've got this Event:
```
<?php

namespace App\Events;

use App\User;
use App\Events\Event;
use Illuminate\Queue\SerializesModels;

class UserRegistered extends Event
{
    use SerializesModels;

    public $user;

    /**
     * UserRegistered constructor.
     * @param User $user
     */
    public function __construct(User $user)
    {
        $this->user = $user;
    }

    /**
     * Get the channels the event should be broadcast on.
     *
     * @return array
     */
    public function broadcastOn()
    {
        return [];
    }
}
```
and this Listener:
```
<?php

namespace App\Listeners;

use Mail;
use App\Group;
use Illuminate\Mail\Mailer;
use App\Events\UserRegistered;

class EmailCopyToAdministrator
{
    public $mail;

    /**
     * EmailCopyToAdministrator constructor.
     */
    public function __construct(Mailer $mailer)
    {
        $this->mail = $mailer;
    }

    /**
     * Handle the event.
     *
     * @param UserRegistered $event
     * @return void
     */
    public function handle(UserRegistered $event)
    {
        $groupname = '';
        if (intval($event->user->group_choice) > 0) {
            $group = Group::find($event->user->group_choice);
            if (!is_null($group)) {
                $groupname = $group->name;
            }
        }

        $this->mail->queue('emails.auth.copy', ['member' => $event->user, 'groupname' => $groupname], function ($message) use ($event) {
            $message->from(env('MAIL_FROM'), env('MAIL_NAME'));
            $message->to(env('MAIL_FROM'), env('MAIL_NAME'));
            $message->subject('Nieuwe registratie Rumbold!');
        });
    }
}
```
If I die-and-dump the $user variable from the Event constructor I get the model with accessible properties like $user->full_name, $user->email etc. However, when firing the event the Listener gives me this response:
ErrorException in EmailCopyToAdministrator.php line 31: Undefined property: Illuminate\Contracts\Database\ModelIdentifier::$group_choice
I can't seem to figure out where the problem lies. I noticed that it works when implementing ShouldQueue, but since I'm using $this->mail->queue I don't think I need that at all..
Any ideas?
16th March, 2016
Pendo left a reply on When To Use Middleware. • 2 years ago
Middleware is executed before a request, the manual says it all:
If you were to add auth middleware to a user profile route, an unauthenticated user would not see anything at all. Preventing a guest from seeing parts of a view can be done using Blade directives @can for example.
8th March, 2016
Pendo left a reply on Best Practice: Crud Pages On Both Front And Back-end • 2 years ago
Doh, I feel a bit stupid right now.. I can even use a shared base controller using the Gate facade. Thanks.
Pendo started a new conversation Best Practice: Crud Pages On Both Front And Back-end • 2 years ago
Hi all,
as I'm getting deeper into a project, there are a few things that I'm seeking the best practice for. Most of these things I'll figure out myself, but I'd like to know how you guys tackle this particular problem.
I'm working on an Agenda module for a website; in the back-end the administrator can view, edit, create or delete all records in the database. On the front-end however, users can create items and edit their own. How would I structure the code so that both controllers can use functions that are no different at all? Right now I have 2 controllers:
App\AgendaController.php App\Admin\AgendaController.php
As far as I can see right now, the create and edit methods are exactly the same, also the methods to upload or remove an image that is connected to an item won't change at all. The only thing different will be the views they're calling.
I was thinking of one extending the other, but the fact that views aren't shared makes that not work. Should I accept the fact that I'm having duplicate code in this case? Or is there anything I haven't thought of?
28th February, 2016
Pendo left a reply on Put *everything* In Controllers? • 2 years ago
I can imagine you using example #1, but #2 and #3 are really bad options I think..
For example 2 I'd say something like:
return view('bars')->with([ 'locations' => $queryResult ]);
And then have the bars.blade.tpl extend a master template containining the header and footer. And the same goes for #3, why not just return home.blade.php which extends a master with header/footer. In the end it's all up to you, but it definitly ain't clean code and it doesn't make that much sense. The routes file is about routing (duh..) and in my opinion it should never or at least not very often contain anything of your applications functionallity.
27th February, 2016
Pendo left a reply on Input Old And Array • 2 years ago
That's a typo in the sample, I did it in my native language while testing. It should be field[]
Pendo left a reply on Input Old And Array • 2 years ago
Did a test, and the dot notation works.
{{ var_dump(old('field.0')) }} <form method="post" action="fields"> {!! csrf_field() !!} <input type="text" name="veldnaam[]" value="{{ old('field.0') }}" /> <input type="text" name="veldnaam[]" value="{{ old('field.1') }}" /> <input type="text" name="veldnaam[]" value="{{ old('field.2') }}" /> <input type="submit" name="submit" value="test" /> </form>
field.0 returns anything you entered in the first input field.
Pendo left a reply on Input Old And Array • 2 years ago
Doesn't the dot notation work? Like
{{ old('field.0') }}
if not, that would be a good addition to Laravel I guess.
26th February, 2016
Pendo left a reply on Run Laravel And Forum On Same Server • 2 years ago
Symlinks are just shortcuts on the server. You could add the forum software outside your Laravel project, create a forum directory inside laravel/public and symlink that to the folder where the software sits. That separates both packages completely.
Pendo left a reply on Run Laravel And Forum On Same Server • 2 years ago
Symlinks an option?
25th February, 2016
Pendo left a reply on Inject Class Based On Route Parameters • 2 years ago
I think you can do this in the RouteServiceProvider:
$router->bind('post_type', function($value) { // return anything, $value is what {post_type} holds as value });
You could simply return a concrete "return App\PostType" or you can add the switch in the route service provider. Or any other code that eventually returns the right class.
Want to change your profile photo? We pull from gravatar.com. | https://laracasts.com/@Pendo | CC-MAIN-2018-39 | refinedweb | 5,957 | 69.11 |
NAME
request_key - Request a key from the kernel's key management facility
SYNOPSIS
#include <keyutils.h> key_serial_t request_key(const char *type, const char *description, const char *callout_info, key_serial_t keyring);
DESCRIPTION
request_key() asks the kernel to find a key of the given type that matches the specified description and, if successful, to attach it to the nominated keyring and to return its serial number. request_key() first recursively searches all the keyrings attached to the calling process in the order thread-specific keyring, process- specific keyring and then session keyring for a matching key. If request_key() is called from a program invoked by request_key() on behalf of some other process to generate a key, then the keyrings of that other process will be searched next, using that other process's UID, GID, groups and security context to control access. The keys in each keyring searched are checked for a match before any child keyrings are recursed into. Only keys that are searchable for the caller may be found, and only searchable keyrings may be searched. If the key is not found then, if callout_info is set, this function will attempt to look further afield. In such a case, the callout_info is passed to a userspace service such as /sbin/request-key to generate the key. If that is unsuccessful also, then an error will be returned, and a temporary negative key will be installed in the nominated keyring. This will expire after a few seconds, but will cause subsequent calls to request_key() to fail until it does. The..), add_key(2), keyctl(2), request-key(8)
COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/maverick/man2/request_key.2.html | CC-MAIN-2015-32 | refinedweb | 292 | 58.11 |
Bo Xie wrote: > Hi, > > I've checked the following code in ffmpeg.c > ------------- > #ifndef CONFIG_WIN32 > if ( !using_stdin && verbose >= 0) { > fprintf(stderr, "Press [q] to stop encoding\n"); > url_set_interrupt_cb(decode_interrupt_cb); > } > #endif > ------------- > I mean how to let "Press [q] to stop encoding" works for MinGW? > > Thank you very much! > > Best Regards, > Xie Bo > First you remove the #if. I think it shaould compile fine, but since this is Mingw, not Cygwin, the system API is the WIN32API, so you dont have tcsetattr and things like that. So, you will have to fill in the stubs for term_init, read_key etc in ffmpeg.c using goody oldy kbhit and getch. When that works, post a patch! HaND, -- Michel Bardiaux Peaktime Belgium S.A. Bd. du Souverain, 191 B-1160 Bruxelles Tel : +32 2 790.29.41 | http://ffmpeg.org/pipermail/ffmpeg-devel/2005-April/000373.html | CC-MAIN-2019-18 | refinedweb | 132 | 85.59 |
import org.jruby.Ruby;
..
org.jruby.embed.internal.LocalContext
A single
can be configured to act as a facade for multiple
ScriptingContainer
runtimes
(or really multiple Ruby VMs, since each
org.jruby.Ruby
instance is a Ruby "VM"), and this enum controls
that behaviour. (this behaviour is bit like that of
org.jruby.Ruby
— it changes its behaviour
silently depending on the calling thread, an act of multiplexing.)
java.lang.ThreadLocal
When you think of this multiplexing behaviour, there are two sets of states that need separate attention.
One is
instance, which represents the whole VM, classes, global variables, etc. Then
there's attributes and so-called
variables, which are really a special scope induced by the
scripting container for better JSR-223 interop.
In this documentation, we refer to the former as "the runtime" and the latter as "the variables",
but the variables shouldn't be confused with the global variables in Ruby's semantics, which belongs
to the runtime.
org.jruby.Ruby
public enum
LocalContextScope {LocalContextScope {
All the
s that are created with this scope will share a single
runtime and a single set of variables. Therefore one container can
ScriptingContainer
set a value and another container will see the same value.
will not do any multiplexing at all.will not do any multiplexing at all.
ScriptingContainer
will get one runtime and one set of variables, and regardless of the calling
thread it'll use this same pair.
ScriptingContainer
If you have multiple threads calling single
instance,
then you may need to take caution as a variable set by one thread will be visible to another thread.
ScriptingContainer
If you aren't using the variables of
, this is normally what you want.
ScriptingContainer
Known as the "apartment thread model", this mode makes
lazily
creates a runtime and a variable map separately for each calling thread. Therefore, despite
the fact that multiple threads call on the same object, they are completely isolated from each other.
(the flip side of the coin is that no ruby objects can be shared between threads as they belong
to different "ruby VM".)
ScriptingContainer
In this mode, there'll be a single runtime dedicated for each
,
but a separate map of variables are created for each calling thread through
ScriptingContainer
.
In a situation where you have multiple threads calling one
java.lang.ThreadLocal
, this means
ruby code will see multiple threads calling them, and therefore they need to be thread safe.
But because variables are thread local, if your program does something like (1) set a few variables,
(2) evaluate a script that refers to those variables, then you won't have to worry about values from
multiple threads mixing with each other.
ScriptingContainer | http://grepcode.com/file/repo1.maven.org$maven2@org.kill-bill.billing$killbill-platform-osgi-bundles-jruby@0.0.3@org$jruby$embed$LocalContextScope.java | CC-MAIN-2016-44 | refinedweb | 450 | 51.89 |
I am currently working on a project for my C++ course and it requires that i find the first 500 prime numbers using an array. How would I go about doing this?
Printable View
I am currently working on a project for my C++ course and it requires that i find the first 500 prime numbers using an array. How would I go about doing this?
Please use more descriptive subject titles.
And nobody here is going to do your homework for you. You will get help if you try it on your own first and then come to us with a specific problem, but you clearly haven't done that yet. Either try writing it on your own, or search the board for some tips.:
----------
with that aside, here's your answer:
- find out how to figure out a prime number
- figure out how to use arrays
- figure out how to find a prime number using arrays.
I apologize if by my post it seemed as if I had not attempted the problem. So far I have been working on it for about 3 days now. I understand that to find a prime number I need to divide each individual test value by all of the previous numbers in the array, but for some reason I can't come up with a logical code that will accomplish that. i think that if someone could just push me in the right direction that would be a big help.
Post the code that you have so far. If you haven't even started, start out with a simple loop that will do what you said above.
As of now this is what I have written in:
I'm not quite sure what to put in the parameters for the for and while loops though...I'm not quite sure what to put in the parameters for the for and while loops though...Code:
#include <iostream>
using namespace std;
int main()
{
int prime[500];
int test = 3;
int i = 1;
cout << "\nThese are the first 500 prime numbers:\n";
prime[0] = 2;
for ()
{
while()
{
if ((test%prime[i]) = 0)
test = test + 2;
else
prime[i + 1] = test;
}
}
Well you do want to have 2 (maybe more- depending on how you do each one) loops. The first is just going to loop through the code to find prime numbers until you have 500. The second loop is going to test each prime number by dividing it by each number below it.
I haven't looked at it that thoroughly, but it looks like you've done a pretty decent job on the logic inside you loops. Keep the purpose of each loop in mind, and that should help you decide how you should set them up.
thanks for your help and I'll be sure that my future posts are a little more thought out before I put them up. I'll just keep plugging away at it.
Sounds good. And now that we've seen that you have actually put work into this (we get a lot of requests from people expecting us to do everything), I will remind of you what I suggested earlier: search the boards. This is a very common assignment, and you might get some ideas from seeing how other people have handled it.
Just wanted to post to let you know that i did figure that problem out. I just kinda tinkered with it (got a TON of errors) but ended up with this as my result:
Thanks for your help in pushing me in the right direction, I'll be sure to come back around for questions and to possibly help out anyone else, if I can.Thanks for your help in pushing me in the right direction, I'll be sure to come back around for questions and to possibly help out anyone else, if I can.Code:
for (j = 0; j <499; j++)
{
for (i = 0; i <= j; i++)
{
if ((test%prime[i]) == 0)
{
test = test + 2;
}
}
prime[i] = test;
test = test + 2;
}
Congratulations - I'm glad you got it working. | http://cboard.cprogramming.com/cplusplus-programming/63885-help-please-printable-thread.html | CC-MAIN-2015-35 | refinedweb | 687 | 82.99 |
[ooo-build] Crash when browsing document templates when Assistive Technologies (accessibility) used (WARNING **: Invalidate all children called)
Bug Description
Binary package hint: openoffice.org-core
In any openoffice app, I click on "new" then "templates and documents", then "templates", and openoffice crashes, backtrace follows.
#0 0xb798c46b in SvtFileView:
from /usr/lib/
#1 0xb77e03f4 in non-virtual thunk to SfxStyleSheet:
from /usr/lib/
#2 0xb780d872 in non-virtual thunk to SfxStyleSheet:
from /usr/lib/
#3 0xb77e4af1 in non-virtual thunk to SfxStyleSheet:
from /usr/lib/
#4 0xb79a1976 in SvHeaderTabList
from /usr/lib/
#5 0xb780c260 in non-virtual thunk to SfxStyleSheet:
from /usr/lib/
#6 0xb780c686 in non-virtual thunk to SfxStyleSheet:
from /usr/lib/
#7 0xb780c6a8 in non-virtual thunk to SfxStyleSheet:
from /usr/lib/
#8 0xb7c3a10d in VclEventListene
from /usr/lib/
#9 0xb7e065f3 in Window:
from /usr/lib/
#10 0xb79d1e9b in SvTreeListBox:
from /usr/lib/
#11 0xb79a29f1 in non-virtual thunk to SvHeaderTabList
#12 0xb79d5b94 in SvTreeListBox:
from /usr/lib/
#13 0xb79a6c2d in non-virtual thunk to SvHeaderTabList
#14 0xb79a6d4b in non-virtual thunk to SvHeaderTabList
#15 0xb7dcb585 in SelectionEngine
from /usr/lib/
#16 0xb79a80ab in non-virtual thunk to SvHeaderTabList
#17 0xb79d2e1a in SvTreeListBox:
from /usr/lib/
#18 0xb7e19717 in Window::~Window ()
from /usr/lib/
#19 0xb7e1ac1d in Window::~Window ()
from /usr/lib/
#20 0xb5aba119 in GtkSalFrame:
from /usr/lib/
#21 0xb586db00 in _gtk_marshal_
from /usr/lib/
#22 0xb5d3779b in g_closure_invoke () from /usr/lib/
#23 0xb5d47b93 in g_signal_
from /usr/lib/
#24 0xb5d48e7f in g_signal_
#25 0xb5d49279 in g_signal_emit () from /usr/lib/
#26 0xb59815f8 in gtk_widget_
from /usr/lib/
#27 0xb5866ef3 in gtk_propagate_event () from /usr/lib/
#28 0xb58680f7 in gtk_main_do_event () from /usr/lib/
#29 0xb5ebb7ea in _gdk_events_init () from /usr/lib/
#30 0xb5cc3802 in g_main_
#31 0xb5cc67df in g_main_
#32 0xb5cc6d45 in g_main_
#33 0xb5a94291 in GtkXLib::Yield ()
from /usr/lib/
#34 0xb56a8037 in X11SalInstance:
from /usr/lib/
#35 0xb7c30f08 in Application::Yield ()
from /usr/lib/
#36 0xb7da027d in Dialog::Execute ()
from /usr/lib/
#37 0xaeeed3fb in SfxApplication:
from /usr/lib/
#38 0xaeee43d8 in SfxApplication:
from /usr/lib/
#39 0xaf0b97ca in SfxDispatcher:
from /usr/lib/
#40 0xaf0b9f68 in SfxDispatcher:
from /usr/lib/
#41 0xaf0b9fd8 in SfxDispatcher:
from /usr/lib/
#42 0xaf0e9915 in non-virtual thunk to SvxSearchItem:
from /usr/lib/
#43 0x08824a68 in ?? ()
#44 0x08d51730 in ?? ()
#45 0xbfa759a8 in ?? ()
#46 0x0808f17e in non-virtual thunk to desktop:
#47 0x08d584d8 in ?? ()
#48 0x08867168 in ?? ()
#49 0xbfa759c8 in ?? ()
#50 0xaf0e98b9 in non-virtual thunk to SvxSearchItem:
from /usr/lib/
#51 0x08867168 in ?? ()
#52 0x08d51730 in ?? ()
#53 0x08d527f4 in ?? ()
#54 0xb7ef25f0 in ?? () from /usr/lib/
#55 0x08093ba0 in (anonymous namespace)
#56 0x08d49f20 in ?? ()
#57 0xbfa75bb8 in ?? ()
#58 0xb7e1b096 in Window::~Window ()
Fixed in feisty.
Reopening, exactly the same odd behaviour in intrepid beta.
Don't know if the above backtrace is still valid, will try to produce a new one.
This also happens in hardy, must be some dependency missing. I attach the backtrace in hardy.
I found the problem: I had assistive technologies enabled (not that I truly need it, just wanted to see how orca works). This came to my mind after I remembered a similar problem being closed in suse, so it's several years this problem has been there. You may want to prioritise it for the sake of your users that need assistive technologies.
Is there a tag for such problems?
When you enable Gnome Assistive Technologies and try to go to "File->
I was able to replicate it.
bt:
#0 0x00007faddf090138 in ?? () from /home/rodo/
#1 0x00007fadc5232239 in accessibility:
#2 0x00007fadc525d123 in accessibility:
#3 0x00007fadc5238035 in accessibility:
from /home/rodo/
#4 0x00007fadc526cb66 in accessibility:
#5 0x00007faddf0a3c17 in SvHeaderTabList
#6 0x00007fadc525b6b2 in accessibility:
#7 0x00007fadc525bbeb in accessibility:
#8 0x00007fadddb83b6a in VclEventListene
#9 0x00007fadddd4a52f in Window:
#10 0x00007faddf0ac47f in ?? () from /home/rodo/
#11 0x00007faddf0d4be6 in SvTreeListBox:
#12 0x00007faddf0979f7 in ?? () from /home/rodo/
#13 0x00007fadddd63383 in ?? () from /home/rodo/
#14 0x00007fadddd6503d in ?? () from /home/rodo/
#15 0x00007fadd5b201a3 in ?? () from /home/rodo/
#16 0x00007fadd5b20624 in ?? () from /home/rodo/
#17 0x00007fadd5661998 in ?? () from /usr/lib64/
#18 0x00007fadd898520d in g_closure_invoke () from /usr/lib64/
#19 0x00007fadd899908c in ?? () from /usr/lib64/
#20 0x00007fadd899a392 in g_signal_
#21 0x00007fadd899aa53 in g_signal_emit () from /usr/lib64/
#22 0x00007fadd5776a8e in ?? () from /usr/lib64/
#23 0x00007fadd565a5ed in gtk_propagate_event () from /usr/lib64/
#24 0x00007fadd565b55b in gtk_main_do_event () from /usr/lib64/
#25 0x00007fadd97022ac in ?? () from /usr/lib64/
#26 0x00007fadd84eb93a in g_main_
#27 0x00007fadd84ef040 in ?? () from /usr/lib64/
#28 0x00007fadd84ef1dc in g_main_
#29 0x00007fadd5af38e4 in ?? () from /home/rodo/
#30 0x00007fadddb7c61e in Applica...
Thanks a million Vincenzo, I had the same problem (it's fun playing with orca... :P ) but would never have suspected orca to crash my openoffice installation.
Same problem in Ubuntu Intrepid (8.10):
Ubuntu OpenOffice.org 2.4.1
openoffice.org-core 1:2.4.1-
The problem in a nutshell:
OpenOffice Writer
File > New > Templates and Documents > Templates > My Templates > Crash.
(OpenOffice Recovery tool says: "Click 'Next' to open the Error Report Tool". But there is no 'Next'.)
Quick fix:
System > Preferences > Assistive Technologies
Un-check 'Enable Assistive Technologies'
Then Log-Out and log back in (or reboot).
Assistive Technologies is off, but OO has crashed ever on Templates since the installation of 8.04. Took out the Ubuntu version and reinstalled it, same problem. Tried Oxygen Office, same problem. These are the installed Templates, not additional/
Any progress on this bug?
I am having the same problem - Ubuntu 8.10 and have turned off the assistive technologies.
This seems a serious shortcoming quite beyond me to do anything about other than comment that it should be fixed to preserve OO/Ubuntu credibility.
Not a very helpful remark, I know, but I can't think of anything else to say.
Ciao,
Nigel
But can someone please report upstream?
Vincenzo,
It was reported to go-oo (ooo-build) upstream 7 months ago. They haven't been able to track down what is wrong because the issue goes away while debugging.
Chris
Thank you Chris. Can the upstream bug be linked to here or is it a closed BTS perhaps?
Vincenzo,
It already is... just look at the top on the page:
https:/
This is an ooo-build issue which is why it is linked to the novell bugzilla instead of openoffice.org issue tracker.
Il giorno ven, 15/05/2009 alle 20.55 +0000, Chris Cheney ha scritto:
>
> It already is... just look at the top on the page:
>
> https:/
I just didn't know that go-oo was related to novell, sorry for noise. I
thought it was the corresponding bug in Suse.
I am having the same problem Ubuntu jaunty 64bit
openoffice.org, Architecture: amd64, Version: 1:3.0.1-9ubuntu3
The 'openoffice.
apt-cache depends openoffice.org-gtk talk
...
Kollidiert: libgtk2.0-0
Kollidiert: <oooqs-kde>
Kollidiert: <ooqstart-gnome>
...
In the system is the libgtk2.0-0 version 2.16.1-0ubuntu2
The pc of my father has got the same problem.
- 32bit
- Ubuntu 9.04
- 1:3.0.1-9ubuntu3
I found that starting a guest-account the templates works fine.
Removing the .openoffice.org and .openoffice.org2 directory (I think the last is an old version and not necessary to remove). It doesn't solve the problem.
hello,
Excuse me for the last message. As mentioned in the title of this bug report when the "Assistive Technologies" is enabled this problem occurs. I disabled the "Assistive Technologies" and now the templates of Openoffice.org works fine.
Martin
Vincenzo Ciancia, this issue is unreproducible in LibreOffice. Does this work for you? http://
500 http://
100 /var/lib/
1:
500 http://
Vincenzo Ciancia, Please execute the following command, as it will automatically gather debugging information, in a terminal:
apport-collect 69247
When reporting bugs in the future please use apport by using 'ubuntu-bug' and the name of the package affected. You can learn more about this functionality at https:/
hi, I had this problem, turned off the accessibility stuff as suggested, (though I usually need this so that's not very convenient)
and it changed to
** (soffice:3221): WARNING **: Invalidate all children called
I have much crashing and freezing and auto save is a bit screwy too.
anyone have any clues as to whats up? Crash seems to centre on moving stuff or adding movements in my non-tech opinion. am in 10.10 with two screens and not enough possessing power
.
Vincenzo Ciancia, your crash report for LibreOffice is missing. Please follow these instructions to have apport report a new bug about your crash that can be dealt with by the automatic retracer.
If you are running the Ubuntu Stable Release you might need to enable apport in /etc/default/apport and restart.
Now open your file manager, navigate to your /var/crash directory and open the crash report you wish to submit.
If this fails you will have to open a terminal and file your report with 'ubuntu-bug /var/crash/
I'm closing this bug report since the process outlined above will automatically open a new bug report which can then dealt with more efficiently. Thanks in advance for your cooperation and understanding.
I am unable to reproduce the crash with LO-3.5. There were some fixes in this area => closing.
The report was incomplete: after clicking on "templates" one has to additionally click on one of the template folders which are presented. This bug is grave, is it possible that nobody uses templates in openoffice on ubuntu? | https://bugs.launchpad.net/ubuntu/+source/openoffice.org/+bug/69247 | CC-MAIN-2015-27 | refinedweb | 1,564 | 57.06 |
Not sure how to approach this one.
User supplies an argument, ie, program.exe '2001-08-12'
I need to add a single day to that argument - this will represent a date range for another part of the program. I am aware that you can add or subtract from the current day but how does one add or subtract from a user supplied date?
import datetime ... date=time.strptime(argv[1], "%y-%m-%d"); newdate=date + datetime.timedelta(days=1)
Arnauds Code is valid,Just see how to use it :) :-
>>> import datetime >>> x=datetime.datetime.strptime('2001-08-12','%Y-%m-%d') >>> newdate=x + datetime.timedelta(days=1) >>> newdate datetime.datetime(2001, 8, 13, 0, 0) >>>
Okay, here's what I've got:
import sys from datetime import datetime user_input = sys.argv[1] # Get their date string year_month_day = user_input.split('-') # Split it into [year, month, day] year = int(year_month_day[0]) month = int(year_month_day[1]) day = int(year_month_day[2]) date_plus_a_day = datetime(year, month, day+1)
I understand this is a little long, but I wanted to make sure each step was clear. I'll leave shortening it up to you if you want it shorter. | http://m.dlxedu.com/m/askdetail/3/d4b83b417ad3a17d1fe90a88551109d9.html | CC-MAIN-2019-30 | refinedweb | 194 | 67.15 |
On Fri, 5 Oct 2007, jumpjoe at fastwebnet.it wrote: >> I think that you can search the registry for entry with m_class_object >> that matches your PyObject's type or search the lvalue_chain(s) for a >> converter that returns !=0. But there is no api for that so far. > > This is another interesting approach, but after a few tries, I > discovered the registry is really hard to get by, as it is a static > variable defined in a "hidden" function in an anonymous namespace in the > registry.cpp file. Any suggestions on how to extract it? As an "extern" > of some kind? Without modifying the source code, there is no way. The only way to search the registry externally is via the boost::python::converter::query function, but then you need to provide a type_info, and to do this exhaustively is not really simpler than using an "inverse" registry like I mentionned. -- Francois Duranleau | https://mail.python.org/pipermail/cplusplus-sig/2007-October/012613.html | CC-MAIN-2016-30 | refinedweb | 153 | 61.36 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
how create a new selection field with option to create another contained in this field
hi to all I want to create a new field type "selection". there is the same example in the stock module product is the category selection list,, how to do?? Thanks
Hello Kizayko,
Syntax for creating selection field is
selection(values, string, ...)
values: list of values (key-label tuples) or function returning such a list
For creating dynamic values in the selection. You can add a function like :
fields.selection(_get_selection,'myselection')
then you can define this function as:
def _get_selection(self, cr, uid, context=None): """ Your logic to get the values """ return[(key1,value1),(key2,value2)]
Hope this helps !
Thanks, Naresh
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
You need to give some more details on what you want to accomplish. Please put in the module and field you want to edit along with the version of openERP you are using. | https://www.odoo.com/forum/help-1/question/how-create-a-new-selection-field-with-option-to-create-another-contained-in-this-field-9082 | CC-MAIN-2017-43 | refinedweb | 205 | 54.12 |
jGuru Forums
Posted By:
Aarun_Jones
Posted On:
Saturday, November 5, 2005 05:57 AM
Hi All,
I've created a text.xsd and a test_sample.xml. I've used XML spy to make sure that they are valid.
now I want to parse XML with validation in Java.
I've done factroy.setvalidating(true); and
factroy.setNamespaceAware(true); But, when I parse the XML, I get the error as follows:
The prefix "myReqnamespace" for element "myReqnamespace:myReq" is not bound.
Also, in case of SAX how do I specify the schema file name?
I appreciate your help.
Thanks,
Aarun
Re: validating XML with a schema using SAX
Posted By:
Aarun_Jones
Posted On:
Sunday, November 6, 2005 12:23 AM
I am using xerces SAX parser and I've set the required properties(schemalocation and noNameSpaceSchemaLocation) and feature (validation/dynamic) set.
the error FatalError :1:193: The prefix "n" for element "n:myReq" is not bound.
I believe it is not finding my xsd file. How do I specify that in the properties and where in the directory structure should I keep the actual myReq.xsd file?
I really appreciate your help!thanks,Aarun | http://www.jguru.com/forums/view.jsp?EID=1270315 | CC-MAIN-2014-52 | refinedweb | 193 | 58.89 |
Autoencoders are deep learning models for transforming data from a high-dimensional space to a lower-dimensional one. They work by encoding the data, whatever its size, into a 1-D vector. This vector can then be decoded to reconstruct the original data (in this case, an image). The more accurate the autoencoder, the closer the generated data is to the original.
In this tutorial we'll explore the autoencoder architecture and see how we can apply this model to compress images from the MNIST dataset using TensorFlow and Keras. In particular, we'll consider:
- Discriminative vs. Generative Modeling
- How Autoencoders Work
- Building an Autoencoder in Keras
- Building the Encoder
- Building the Decoder
- Training
- Making Predictions
- Complete Code
- Conclusion
Discriminative vs. Generative Modeling
The most common type of machine learning model is the discriminative model. If you're a machine learning enthusiast, it's likely that the models you've built or used have been mainly discriminative. These models recognize the input data and then take appropriate action. For a classification task, a discriminative model learns how to differentiate between the various classes. Based on what it has learned about the properties of each class, it classifies a new input sample under the appropriate label. Let's apply this understanding to the next image, which represents a warning sign.
If a machine/deep learning model is to recognize the following image, it may understand that it consists of three main elements: a triangle, a line, and a dot. When another input image has features which resemble these elements, then it should also be recognized as a warning sign.
If the algorithm is able to identify the properties of an image, could it generate a new image similar to it? In other words, could it draw a new image that has a triangle, a line, and a dot? Unfortunately, discriminative models are not clever enough to draw new images even if they know the structure of these images. Let's take another example to make things clearer.
Assume there is someone who can recognize things well. Given an image, he or she can easily identify its salient properties and then classify the image. Does it follow that such a person must also be able to draw that image? No. Some people cannot draw. Discriminative models are like those people: they can recognize images, but cannot draw them on their own.
In contrast with discriminative models, there is another group called generative models which can create new images. For a given input image, the output of a discriminative model is a class label; the output of a generative model is an image of the same size and similar appearance as the input image.
One of the simplest generative models is the autoencoder (AE for short), which is the focus of this tutorial.
How Autoencoders Work
Autoencoders are deep neural network models that take in data, propagate it through a number of layers to condense and understand its structure, and finally generate that data again. In this tutorial we'll consider how this works for image data in particular. To accomplish this task, an autoencoder uses two different networks. The first is called the encoder, and the other is the decoder; the decoder is just a reflection of the layers inside the encoder. Let's clarify how this works.
The job of the encoder is to accept the original data (e.g. an image), which could have two or more dimensions, and generate a single 1-D vector that represents the entire image. The number of elements in the 1-D vector varies based on the task being solved; it could have one or more elements. The fewer elements in the vector, the harder it is to reproduce the original image accurately.
By representing the input image in a vector of relatively few elements, we actually compress the image. For example, the size of each image in the MNIST dataset (which we'll use in this tutorial) is 28x28. That is, each image has 784 elements. If each image is compressed so that it is represented using just two elements, then we save 782 elements, and thus (782/784)*100 = 99.745% of the data.
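The arithmetic above is easy to check in a couple of lines of plain Python (no Keras needed):

```python
# Verify the compression savings quoted above: a 28x28 MNIST image
# holds 784 values; encoding it as a 2-element vector saves 782 of them.
original_size = 28 * 28   # pixels per MNIST image
latent_size = 2           # elements in the encoded 1-D vector

saved = original_size - latent_size
percent_saved = saved / original_size * 100

print(saved)                    # 782
print(round(percent_saved, 3))  # 99.745
```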
The next figure shows how an encoder generates the 1-D vector from an input image. The layers included are of your choosing, so you can use dense, convolutional, dropout, etc.
The 1-D vector generated by the encoder from its last layer is then fed to the decoder. The job of the decoder is to reconstruct the original image with the highest possible quality. The decoder is just a reflection of the encoder. According to the encoder architecture in the previous figure, the architecture of the decoder is given in the next figure.
The loss is calculated by comparing the original and reconstructed images, i.e. by calculating the difference between the pixels in the 2 images. Note that the output of the decoder must be of the same size as the original image. Why? Because if the size of the images is different, there is no way to calculate the loss.
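The pixel-wise comparison described above can be sketched in plain Python (a conceptual illustration only; during training Keras computes the loss for you):

```python
def pixel_mse(original, reconstructed):
    # Mean of the squared per-pixel differences (what
    # numpy.mean(numpy.square(a - b)) computes). Both sequences must have
    # the same length, which is why the decoder output must match the
    # original image size.
    diffs = [(o - r) ** 2 for o, r in zip(original, reconstructed)]
    return sum(diffs) / len(diffs)

a = [0.0, 0.5, 1.0]     # "original" pixels
b = [0.0, 0.5, 0.5]     # "reconstructed" pixels
print(pixel_mse(a, b))  # 0.25 / 3, roughly 0.0833
```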
After discussing how the autoencoder works, let's build our first autoencoder using Keras.
Building an Autoencoder in Keras
Keras is a powerful tool for building machine and deep learning models because it's simple and abstracted, so in little code you can achieve great results. Keras has three ways for building a model:
- Sequential API
- Functional API
- Model Subclassing
The three ways differ in the level of customization allowed.
The sequential API allows you to build sequential models, but it is less customizable compared to the other two types. The output of each layer in the model is only connected to a single layer.
Although this is the type of model we want to create in this tutorial, we'll use the functional API. The functional API is simple, very similar to the sequential API, and also supports additional features such as the ability to connect the output of a single layer to multiple layers.
The last option for building a Keras model is model subclassing, which is fully-customizable but also very complex. You can read more about these three methods in this tutorial.
Now we'll focus on using the functional API for building the autoencoder. You might think that we are going to build a single Keras model for representing the autoencoder, but we will actually build three models: one for the encoder, another for the decoder, and yet another for the complete autoencoder. Why do we build a model for both the encoder and the decoder? We do this in case you want to explore each model separately. For instance, we can use the model of the encoder to visualize the 1-D vector representing each input image, and this might help you to know whether it's a good representation of the image or not. With the decoder we'll be able to test whether good representations are being created from the 1-D vectors, which is useful for debugging. Finally, by building a model for the entire autoencoder we can easily use it end-to-end, feeding it the original image and receiving the output image directly.
Let's start by building the encoder model.
Building the Encoder
The following code builds a model for the encoder using the functional API. At first, the layers of the model are created using the
tensorflow.keras.layers API because we are using
TensorFlow as the backend library. The first layer is an
Input layer which accepts the original image. This layer accepts an argument named
shape representing the size of the input, which depends on the dataset being used. We're going to use the MNIST dataset where the size of each image is
28x28. Rather than setting the shape to
(28, 28), it's just set to
(784). Why? Because we're going to use only dense layers in the network and thus the input must be in the form of a vector, not a matrix. The tensor representing the input layer is returned to the variable
x.
The input layer is then propagated through a number of layers:
- Dense layer with 300 neurons
- LeakyReLU layer
- Dense layer with 2 neurons
- LeakyReLU layer
The last
Dense layer in the network has just two neurons. When fed to the
LeakyReLU layer, the final output of the encoder will be a 1-D vector with just two elements. In other words, all images in the MNIST dataset will be encoded as vectors of two elements.
import tensorflow.keras.layers
import tensorflow.keras.models

x = tensorflow.keras.layers.Input(shape=(784), name="encoder_input")

encoder_dense_layer1 = tensorflow.keras.layers.Dense(units=300, name="encoder_dense_1")(x)
encoder_activ_layer1 = tensorflow.keras.layers.LeakyReLU(name="encoder_leakyrelu_1")(encoder_dense_layer1)

encoder_dense_layer2 = tensorflow.keras.layers.Dense(units=2, name="encoder_dense_2")(encoder_activ_layer1)
encoder_output = tensorflow.keras.layers.LeakyReLU(name="encoder_output")(encoder_dense_layer2)
After building and connecting all of the layers, the next step is to build the model using the
tensorflow.keras.models API by specifying the input and output tensors according to the next line:
encoder = tensorflow.keras.models.Model(x, encoder_output, name="encoder_model")
To print a summary of the encoder architecture we'll use
encoder.summary(). The output is below. This network is not large and you can increase the number of neurons in the dense layer named
encoder_dense_1 but I just used 300 neurons to avoid taking much time training the network.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
encoder_input (InputLayer)   [(None, 784)]             0
_________________________________________________________________
encoder_dense_1 (Dense)      (None, 300)               235500
_________________________________________________________________
encoder_leakyrelu_1 (LeakyRe (None, 300)               0
_________________________________________________________________
encoder_dense_2 (Dense)      (None, 2)                 602
_________________________________________________________________
encoder_output (LeakyReLU)   (None, 2)                 0
=================================================================
Total params: 236,102
Trainable params: 236,102
Non-trainable params: 0
_________________________________________________________________
After building the encoder, next is to work on the decoder.
Building the Decoder
Similar to building the encoder, the decoder will be built using the following code. Because the input layer of the decoder accepts the output returned from the last layer in the encoder, we have to make sure these 2 layers match in size. The last layer in the encoder returns a vector of 2 elements, and thus the input of the decoder must have 2 neurons. You can easily note that the layers of the decoder are just a reflection of those in the encoder.

decoder_input = tensorflow.keras.layers.Input(shape=(2), name="decoder_input")

decoder_dense_layer1 = tensorflow.keras.layers.Dense(units=300, name="decoder_dense_1")(decoder_input)
decoder_activ_layer1 = tensorflow.keras.layers.LeakyReLU(name="decoder_leakyrelu_1")(decoder_dense_layer1)

decoder_dense_layer2 = tensorflow.keras.layers.Dense(units=784, name="decoder_dense_2")(decoder_activ_layer1)
decoder_output = tensorflow.keras.layers.LeakyReLU(name="decoder_output")(decoder_dense_layer2)
After connecting the layers, next is to build the decoder model according to the next line.
decoder = tensorflow.keras.models.Model(decoder_input, decoder_output, name="decoder_model")
Here is the output of decoder.summary(). It is very important to make sure the size of the final output returned from the decoder matches the size of the original input.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
decoder_input (InputLayer)   [(None, 2)]               0
_________________________________________________________________
decoder_dense_1 (Dense)      (None, 300)               900
_________________________________________________________________
decoder_leakyrelu_1 (LeakyRe (None, 300)               0
_________________________________________________________________
decoder_dense_2 (Dense)      (None, 784)               235984
_________________________________________________________________
decoder_output (LeakyReLU)   (None, 784)               0
=================================================================
Total params: 236,884
Trainable params: 236,884
Non-trainable params: 0
_________________________________________________________________
After building the 2 blocks of the autoencoder (encoder and decoder), next is to build the complete autoencoder.
Building the Autoencoder
The code that builds the autoencoder is listed below. The tensor named ae_input represents the input layer that accepts a vector of length 784. This tensor is fed to the encoder model as an input. The output from the encoder is saved in ae_encoder_output, which is then fed to the decoder. Finally, the output of the autoencoder is saved in ae_decoder_output. A model is created for the autoencoder which accepts the input ae_input and the output ae_decoder_output.

ae_input = tensorflow.keras.layers.Input(shape=(784), name="AE_input")
ae_encoder_output = encoder(ae_input)
ae_decoder_output = decoder(ae_encoder_output)

ae = tensorflow.keras.models.Model(ae_input, ae_decoder_output, name="AE")
The summary of the autoencoder is listed below. Here you can find that the shape of the input and output from the autoencoder are identical which is something necessary for calculating the loss.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
AE_input (InputLayer)        [(None, 784)]             0
_________________________________________________________________
encoder_model (Model)        (None, 2)                 236102
_________________________________________________________________
decoder_model (Model)        (None, 784)               236884
=================================================================
Total params: 472,986
Trainable params: 472,986
Non-trainable params: 0
_________________________________________________________________
The next step in the model building process is to compile the model using the compile() method according to the next code. The mean squared error (MSE) loss function is used, and the Adam optimizer is used with the learning rate set to 0.0005.
import tensorflow.keras.optimizers

ae.compile(loss="mse", optimizer=tensorflow.keras.optimizers.Adam(lr=0.0005))
The model is now ready for accepting the training data and thus the next step is to prepare the data for being fed to the model.
Just remember that there are 3 models which are:
- encoder
- decoder
- ae (for the autoencoder)
Keras has an API named tensorflow.keras.datasets from which a number of datasets can be loaded. We are going to use the MNIST dataset, which is loaded according to the next code. The dataset is loaded as NumPy arrays representing the training data, test data, train labels, and test labels. Note that we are not interested in using the class labels at all while training the model; they are just used to display the results.
The
x_train_orig and the
x_test_orig NumPy arrays hold the MNIST image data where the size of each image is
28x28. Because our model accepts the images as vectors of length
784, then these arrays are reshaped using the
numpy.reshape() function.
import tensorflow.keras.datasets
import numpy

(x_train_orig, y_train), (x_test_orig, y_test) = tensorflow.keras.datasets.mnist.load_data()

x_train = numpy.reshape(x_train_orig, newshape=(x_train_orig.shape[0], numpy.prod(x_train_orig.shape[1:])))
x_test = numpy.reshape(x_test_orig, newshape=(x_test_orig.shape[0], numpy.prod(x_test_orig.shape[1:])))
At this moment, we can train the autoencoder using the
fit method as follows:
ae.fit(x_train, x_train, epochs=20, batch_size=256, shuffle=True, validation_data=(x_test, x_test))
Note that the training data inputs and outputs are both set to
x_train because the predicted output is identical to the original input. The same works for the validation data. You can change the number of epochs and batch size to other values.
After the autoencoder is trained, next is to make predictions.
Making Predictions
The
predict() method is used in the next code to return the outputs of both the encoder and decoder models. The
encoded_images NumPy array holds the 1D vectors representing all training images. The decoder model accepts this array to reconstruct the original images.
encoded_images = encoder.predict(x_train)
decoded_images = decoder.predict(encoded_images)
Note that the output of the decoder is a 1D vector of length
784. To display the reconstructed images, the decoder output is reshaped to
28x28 as follows:
decoded_images_orig = numpy.reshape(decoded_images, newshape=(decoded_images.shape[0], 28, 28))
The next code uses Matplotlib to display the original and reconstructed images of 5 random samples.

import matplotlib.pyplot

num_images_to_show = 5
for im_ind in range(num_images_to_show):
    rand_ind = numpy.random.randint(low=0, high=x_train.shape[0])
    matplotlib.pyplot.subplot(num_images_to_show, 2, im_ind * 2 + 1)
    matplotlib.pyplot.imshow(x_train_orig[rand_ind, :, :], cmap="gray")
    matplotlib.pyplot.subplot(num_images_to_show, 2, im_ind * 2 + 2)
    matplotlib.pyplot.imshow(decoded_images_orig[rand_ind, :, :], cmap="gray")
The next figure shows 5 original images and their reconstruction. You can see that the autoencoder is able to at least reconstruct an image close to the original one but the quality is low.
One of the reasons for the low quality is using a low number of neurons (300) within the dense layer. Another reason is using just 2 elements for representing all images. The quality might be increased by using more elements but this increases the size of the compressed data.
Another reason is not using convolutional layers at all. Dense layers are good for capturing the global properties from the images and the convolutional layers are good for the local properties. The result could be enhanced by adding some convolutional layers.
To have a better understanding of the output of the encoder model, let's display all the 1D vectors it returns according to the next code.
matplotlib.pyplot.figure()
matplotlib.pyplot.scatter(encoded_images[:, 0], encoded_images[:, 1], c=y_train)
matplotlib.pyplot.colorbar()
The plot generated by this code is shown below. Generally, you can see that the model is able to cluster the different images in different regions but there is overlap between the different clusters.
Complete Code
The complete code discussed in this tutorial is listed below.
import tensorflow.keras.layers
import tensorflow.keras.models
import tensorflow.keras.optimizers
import tensorflow.keras.datasets
import numpy
import matplotlib.pyplot

# Encoder
x = tensorflow.keras.layers.Input(shape=(784), name="encoder_input")

encoder_dense_layer1 = tensorflow.keras.layers.Dense(units=300, name="encoder_dense_1")(x)
encoder_activ_layer1 = tensorflow.keras.layers.LeakyReLU(name="encoder_leakyrelu_1")(encoder_dense_layer1)

encoder_dense_layer2 = tensorflow.keras.layers.Dense(units=2, name="encoder_dense_2")(encoder_activ_layer1)
encoder_output = tensorflow.keras.layers.LeakyReLU(name="encoder_output")(encoder_dense_layer2)

encoder = tensorflow.keras.models.Model(x, encoder_output, name="encoder_model")
encoder.summary()

# Decoder
decoder_input = tensorflow.keras.layers.Input(shape=(2), name="decoder_input")

decoder_dense_layer1 = tensorflow.keras.layers.Dense(units=300, name="decoder_dense_1")(decoder_input)
decoder_activ_layer1 = tensorflow.keras.layers.LeakyReLU(name="decoder_leakyrelu_1")(decoder_dense_layer1)

decoder_dense_layer2 = tensorflow.keras.layers.Dense(units=784, name="decoder_dense_2")(decoder_activ_layer1)
decoder_output = tensorflow.keras.layers.LeakyReLU(name="decoder_output")(decoder_dense_layer2)

decoder = tensorflow.keras.models.Model(decoder_input, decoder_output, name="decoder_model")
decoder.summary()

# Autoencoder
ae_input = tensorflow.keras.layers.Input(shape=(784), name="AE_input")
ae_encoder_output = encoder(ae_input)
ae_decoder_output = decoder(ae_encoder_output)

ae = tensorflow.keras.models.Model(ae_input, ae_decoder_output, name="AE")
ae.summary()

# RMSE
def rmse(y_true, y_predict):
    return tensorflow.keras.backend.mean(tensorflow.keras.backend.square(y_true - y_predict))

# AE Compilation
ae.compile(loss="mse", optimizer=tensorflow.keras.optimizers.Adam(lr=0.0005))

# Preparing MNIST Dataset
(x_train_orig, y_train), (x_test_orig, y_test) = tensorflow.keras.datasets.mnist.load_data()

x_train = numpy.reshape(x_train_orig, newshape=(x_train_orig.shape[0], numpy.prod(x_train_orig.shape[1:])))
x_test = numpy.reshape(x_test_orig, newshape=(x_test_orig.shape[0], numpy.prod(x_test_orig.shape[1:])))

# Training AE
ae.fit(x_train, x_train, epochs=20, batch_size=256, shuffle=True, validation_data=(x_test, x_test))

encoded_images = encoder.predict(x_train)
decoded_images = decoder.predict(encoded_images)
decoded_images_orig = numpy.reshape(decoded_images, newshape=(decoded_images.shape[0], 28, 28))

# Displaying original and reconstructed samples
num_images_to_show = 5
for im_ind in range(num_images_to_show):
    rand_ind = numpy.random.randint(low=0, high=x_train.shape[0])
    matplotlib.pyplot.subplot(num_images_to_show, 2, im_ind * 2 + 1)
    matplotlib.pyplot.imshow(x_train_orig[rand_ind, :, :], cmap="gray")
    matplotlib.pyplot.subplot(num_images_to_show, 2, im_ind * 2 + 2)
    matplotlib.pyplot.imshow(decoded_images_orig[rand_ind, :, :], cmap="gray")

matplotlib.pyplot.figure()
matplotlib.pyplot.scatter(encoded_images[:, 0], encoded_images[:, 1], c=y_train)
matplotlib.pyplot.colorbar()
Conclusion
This tutorial introduced the deep learning generative model known as the autoencoder. This model consists of two building blocks: the encoder and the decoder. The former encodes the input data as 1-D vectors, which are then decoded to reconstruct the original data. We applied an autoencoder built with Keras to compress images from the MNIST dataset into just 2 elements.
django-uturn 0.2.0
Overriding redirects in Django, to return where you came from
Provides the HTTP redirect flexibility of Django’s login view to the rest of your views.
Here’s what happens when you –as an anonymous user– try to access a view requiring you to log in:
- Django redirects you to /login?next=/page-you-wanted-to-see
- You log on
- Django’s login view notices the next parameter and redirects you to /page-you-wanted-to-see rather than /.
With Uturn, you’ll be able to use the same feature by simply changing some template code and adding middleware or decorators to your views.
Installation
django-uturn is available on Pypi:
pip install django-uturn
Uturn is currently tested against Django versions 1.2, 1.3 and 1.4.
Typical use cases
From master to detail and back again
You’ve got a list of… let’s say fish. All kinds of fish. To enable users to find fish by species, you’ve added a filter. Enter bass and your list is trimmed to only contain the Australian Bass, Black Sea Bass, Giant Sea Bass, Bumble Bass…
Wait a minute! Bumble Bass isn’t a species you’ve ever heard of - it’s probably the European Bass. So you hit the edit link of the Bumble Bass, change the name and save the form. Your view redirects you to the list. The unfiltered list. Aaargh!
If you’d just used the Uturn redirect tools, you would have been redirected to the filtered list. Much better (in most cases).
Multiple origins
This is basically a more general application of the previous use case. Suppose you have a form to create a new ticket that you can reach from both the project page and the ticket list page. When the user adds a new ticket, you want to make sure she’s redirected to the project page when she came from the project page and the ticket list page when she reached the form from the ticket list page.
Enter Uturn.
How to use Uturn
Redirecting in views
A typical form processing view function probably looks a bit like this:
from django.shortcuts import redirect, render

from forms import TicketForm

def add_ticket(request):
    if request.method == 'POST':
        form = TicketForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect('ticket-list')
    else:
        form = TicketForm()
    context = {'form': form}
    return render(request, 'tickets/ticket_list.html', context)
This view always redirects to the ticket list page. Add Uturn redirects:
from django.shortcuts import redirect, render

from uturn.decorators import uturn

from forms import TicketForm

@uturn
def add_ticket(request):
    if request.method == 'POST':
        form = TicketForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect('ticket-list')
    else:
        form = TicketForm()
    context = {'form': form}
    return render(request, 'tickets/ticket_list.html', context)
We simply add the uturn decorator to the view which will check the request for a valid next parameter and - if present - use that value as the target url for the redirect instead of the one you specified.
If you want to apply Uturn’s redirect logic to all requests, add the uturn.middleware.UturnMiddleware class to your middleware instead.
Passing the next page along
How do you add that next parameter to the URL in your project page? Here’s what you’d normally use:
<a href="{% url ticket-add %}">Add ticket</a>
This would render, depending on your url conf of course, a bit like this:
<a href="/tickets/add/">Add ticket</a>
Here’s what you’d use with Uturn:
{% load uturn %}

<a href="{% uturn ticket-add %}">Add ticket</a>
The uturn template tag will first determine the actual URL you want to link to, exactly like the default url template tag would. But the uturn tag will also add the current request path as the value for the next parameter:
<a href="/tickets/add/?next=%2Fprojects%2F">Add ticket</a>
Clicking this link on the project page and adding a ticket will get you redirected to the /projects/ URL if you add the correct field to your form.
Passing through forms
The easy way to add the parameter to your forms is by adding the uturn_param template tag inside your form tags. If you’re using Django’s builtin CSRF protection, you’ll already have something like this:
<form action="." method="post"> {{ form.as_p }} {% csrf_token %} <input type="submit" value="Save"> </form>
Change that to this:
<form action="." method="post"> {{ form.as_p }} {% csrf_token %} {% uturn_param %} <input type="submit" value="Save"> </form>
Note: if you’re using Django 1.2, you will have to pass the request:
<form action="." method="post"> {{ form.as_p }} {% csrf_token %} {% uturn_param request %} <input type="submit" value="Save"> </form>
Don’t worry if you don’t want to use next as the parameter. You can specify a custom parameter name with the UTURN_REDIRECT_PARAM setting. And if you want to redirect to other domains, you can specify those domains with the UTURN_ALLOWED_HOSTS setting. Otherwise requests to redirect to other domains will be ignored.
Overriding URLs in templates
There’s just one more thing we need to change: the cancel link on your form:
<form action="." method="post"> {{ form.as_p }} {% csrf_token %}{% uturn_param %} <input type="submit" value="Save"> or <a href="{% url ticket-list %}">cancel</a> </form>
That link should point to the project page when applicable. Use the defaulturl tag to accomplish this:
{% load uturn %}

<form action="." method="post">
  {{ form.as_p }}
  {% csrf_token %}{% uturn_param %}
  <input type="submit" value="Save">
  or <a href="{% defaulturl ticket-list %}">cancel</a>
</form>
The defaulturl tag will default to standard url tag behavior and use the next value when available. Here’s what your form would look like from the ticket list page (with or without the next parameter):
<form action="." method="post"> ... <input type="submit" value="Save"> or <a href="/tickets/">cancel</a> </form>
And here’s what that same form would look like when you reached it from the project page:
<form action="." method="post"> ... <input type="submit" value="Save"> or <a href="/projects/">cancel</a> </form>
Thanks to django-cms for the backported implementation of RequestFactory.
- Downloads (All Versions):
- 40 downloads in the last day
- 122 downloads in the last week
- 574 downloads in the last month
- Author: Kevin Wetzels
- License: BSD licence, see LICENCE
- Categories
- Package Index Owner: roam
- DOAP record: django-uturn-0.2.0.xml | https://pypi.python.org/pypi/django-uturn/0.2.0 | CC-MAIN-2015-18 | refinedweb | 1,053 | 62.88 |
The bcrypt npm package is one of the most used packages to work with passwords in JavaScript.
This is security 101, but it’s worth mentioning for new developers: you never store a password in plain text in the database or in any other place. You just don’t.
What you do instead is, you generate a hash from the password, and you store that.
In this way:
import bcrypt from 'bcrypt'
// or
// const bcrypt = require('bcrypt')

const password = 'oe3im3io2r3o2'
const rounds = 10

bcrypt.hash(password, rounds, (err, hash) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(hash)
})
You pass a number as the second argument: the bigger it is, the more secure the hash, but also the longer it takes to generate.
The library README tells us that on a 2GHz core we can generate:
If you run
bcrypt.hash() multiple times, the result will keep changing. This is key because there is no way to reconstruct the original password from a hash.
Given the same password and a hash it’s possible to find out if the hash was built from that password, using the
bcrypt.compare() function:
bcrypt.compare(password, hash, (err, res) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(res) // true or false
})
If so, the password matches the hash and for example we can let a user log in successfully.
You can use the
bcrypt library with its promise-based API too, instead of callbacks:
const hashPassword = async () => {
  const hash = await bcrypt.hash(password, rounds)
  console.log(hash)
  console.log(await bcrypt.compare(password, hash))
}

hashPassword()
The Preferences API is now included in the core Java release, as of Version 1.4. It provides a simple, fully cross-platform mechanism for storing small amounts of data, and uses a simple hierarchical "name/value" structure for organizing data. It is intended to be used for configuration and preference data.
Preference data is stored differently on each platform. In fact, it is entirely up to each Java implementation how it will store the actual data; that is, what backing store it will use. In general, the backing store is not intended to be secure. For example, it may be implemented on top of the Windows Registry, or some other storage facility that does not provide a way to hide sensitive data.
This article considers the technique of automatically encrypting data before storing it in the preferences database. This permits applications to use the Preferences API even for sensitive data, such as passwords and personal information.
What You'll Learn
This article is not intended to provide a tutorial on encryption. It is assumed that you already understand how encryption in Java works, or that you are willing to learn about it elsewhere.
This article focuses on integrating encryption techniques with the Preferences API. We won't focus on the many encryption algorithm options; we'll use a simple DES key to perform encryption and decryption, with the understanding that you may well want to replace this approach with another Java-based encryption method.
The most important aspect of this technique is making the encryption transparent. We want the encryption to happen behind-the-scenes, with as little intervention as possible. As you'll see, we'll be creating an
EncryptedPreferences object, which acts just like a regular Preferences object except that it transparently takes care of encryption for us.
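As a sketch of the round trip such a wrapper might perform around put() and get(), consider the following (the class and method names are illustrative, not the article's actual EncryptedPreferences implementation; DES matches the article, though modern code should prefer a stronger algorithm, and Base64 is needed because Preferences values are strings):

```java
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class ValueCrypto {
    // Encrypt a preference value before storing it; the ciphertext is
    // Base64-encoded so it fits in a Preferences string value.
    static String encrypt(SecretKey key, String value) throws Exception {
        Cipher cipher = Cipher.getInstance("DES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return Base64.getEncoder().encodeToString(cipher.doFinal(value.getBytes("UTF-8")));
    }

    // Decrypt a stored value after retrieving it.
    static String decrypt(SecretKey key, String stored) throws Exception {
        Cipher cipher = Cipher.getInstance("DES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        return new String(cipher.doFinal(Base64.getDecoder().decode(stored)), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("DES").generateKey();
        String stored = encrypt(key, "my secret password");
        System.out.println(decrypt(key, stored)); // my secret password
    }
}
```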
If you haven't ever used the Preferences API, don't worry. You'll pick up what you need to know along the way.
A Simple Test Program
Before we get into the details of how it all works, let's take a look at a simple test program. This program (pkg.Test) stores a couple of values in the preferences database.
Preferences root = Preferences.userNodeForPackage( Test.class );
root.put( "not", "encrypted" );

Preferences subnode = root.node( "subnode" );
subnode.put( "also not", "encrypted" );

root.exportSubtree( System.out );
You can find the full source to pkg.Test in Listing One.
The first two lines acquire a Preferences object for this program, "Test.class." Or rather, for the package it's contained in, "pkg." Remember, each package gets its own private area within the preferences database. The
userNodeForPackage() method gets the Preferences object for our private area. This is the root node of the area in which we will store data.
Listing One: A simple test program. It stores a value in the preferences database in the root node for its package ("pkg"), and another value in a subnode of the root node.
// $Id$

package pkg;

import java.util.prefs.*;

public class Test
{
    static public void main( String args[] ) throws Exception
    {
        Preferences root = Preferences.userNodeForPackage( Test.class );
        root.put( "not", "encrypted" );

        Preferences subnode = root.node( "subnode" );
        subnode.put( "also not", "encrypted" );

        root.exportSubtree( System.out );
    }
}
The next line stores a value or rather, a key/value pair. The key is "not," and the value is "encrypted." Later on, you can ask for the value corresponding to the key "not," and you'll get back the value "encrypted."
The next two lines create a subnode of our main node. Into this subnode, we put another key/value pair. The key is "also not," and the value is "encrypted."
Finally, we take a look at what we've done by exporting the entire database that is, the entire database for our program. While the backing store might store data in any format, the exported data always uses the same format, which you can see in Listing Two.
Listing Two: The preference data for our sample program pkg.Test.
If the data is being stored in the Registry, you can see it by using regedit. In my system, the preferences data is stored in \HKEY_CURRENT_USER\Software\ JavaSoft\Prefs\pkg, as you can see in Figure 1.
Trying It with Encryption
Figure 1: The results of running pkg.Test
Using encrypted preferences is easy. Here's the encrypted version, pkg.encrypted.EncryptedTest, which does the same thing as pkg.Test, except that it uses encryption:
Preferences root = EncryptedPreferences.userNodeForPackage( EncryptedTest.class, secretKey );
root.put( "transparent", "encryption" );

Preferences subnode = root.node( "subnode" );
subnode.put( "also", "encrypted" );

root.exportSubtree( System.out );
You can find the full source to pkg.encrypted.EncryptedTest in Listing Three.
Listing Three: A simple test program, this time using encryption. It does more or less the same thing as the program in Listing One, except that this variant uses an Encrypted Preferences object, which transparently encrypts the data before storing it, and decrypts it before retrieving it.
// $Id$

package pkg.encrypted;

import java.security.*;
import java.util.prefs.*;

import javax.crypto.*;
import javax.crypto.spec.*;

import ep.*;
import pkg.Util;

public class EncryptedTest
{
    static private final String algorithm = "DES";

    static public void main( String args[] ) throws Exception
    {
        byte rawKey[] = Util.readFile( "key" );
        DESKeySpec dks = new DESKeySpec( rawKey );
        SecretKeyFactory keyFactory = SecretKeyFactory.getInstance( algorithm );
        SecretKey secretKey = keyFactory.generateSecret( dks );

        Preferences root = EncryptedPreferences.userNodeForPackage( EncryptedTest.class, secretKey );
        root.put( "transparent", "encryption" );

        Preferences subnode = root.node( "subnode" );
        subnode.put( "also", "encrypted" );

        root.exportSubtree( System.out );
    }
}
The most important thing to see here is that instead of using the
Preferences.userNodeForPackage() method, we're using the
EncryptedPreferences.userNodeForPackage() method. And this method returns an EncryptedPreferences, rather than a regular Preferences object. | http://www.drdobbs.com/security/encrypted-preferences-in-java/184416587?cid=SBX_ddj_related_mostpopular_default_cpp&itc=SBX_ddj_related_mostpopular_default_cpp | CC-MAIN-2014-42 | refinedweb | 940 | 51.55 |
- Creating a fork
- Repository mirroring
- Merging upstream
- Removing a fork relationship
- Create a fork with the fork project form
Forking a project creates a personal copy of the repository, letting you propose changes to a project repository you don’t have access to.
Creating a fork
To fork an existing project in GitLab:
On the project’s home page, in the top right, click Fork.
Select the project to fork to:
The project path must be unique in the namespace.
(Recommended method) Below Select a namespace to fork the project, identify the project you want to fork to, and click Select. Only namespaces you have Developer and higher permissions for are shown.
(Experimental method) If your GitLab administrator has enabled the experimental fork project form, read Create a fork with the fork project form. Only namespaces you have Developer and higher permissions for are shown.
GitLab creates your fork, and redirects you to the project page for your new fork. The permissions you have in the namespace are your permissions in the fork.

To keep your fork up to date, use git pull to update your local repository with the upstream project, then push the changes back to your fork to update it.
Read more about How to keep your fork up to date with its origin.
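The usual way to do this is to add the original project as a second remote (the URL and branch name below are placeholders; substitute your own):

```shell
# Add the source project as an "upstream" remote (your fork stays "origin")
git remote add upstream https://gitlab.com/original-owner/project.git

# Pull the upstream changes into your local default branch
git checkout main
git pull upstream main

# Push the updated branch back to your fork
git push origin main
```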
Merging upstream
When you are ready to send your code back to the upstream project, create a merge request. For Source branch, choose your forked project’s branch. For Target branch, choose the original project’s branch.
Then you can add labels, a milestone, and assign the merge request to someone who can review your changes. Then click Submit merge request to conclude the process. When successfully merged, your changes are added to the repository and branch you’re merging into.
Removing a fork relationship
You can unlink your fork from its upstream project in the advanced settings.
Create a fork with the fork project form
- Introduced in GitLab 13.11.
- It’s deployed behind a feature flag, disabled by default.
- It’s disabled on GitLab.com.
- It’s not recommended for production use.
- To use it in GitLab self-managed instances, ask a GitLab administrator to enable it.
This experimental version of the fork project form is available only if your GitLab administrator has enabled it:
To use it, follow the instructions at Creating a fork and provide:
- The project name.
- The project URL.
- The project slug.
- (Optional) The project description.
- The visibility level for your fork.
Enable or disable the fork project form
The new fork project form is under development and not ready for production use. It is deployed behind a feature flag that is disabled by default. GitLab administrators with access to the GitLab Rails console can enable it.
To enable it:
Feature.enable(:fork_project_form)
To disable it:
Feature.disable(:fork_project_form) | https://docs.gitlab.com/14.0/ee/user/project/repository/forking_workflow.html | CC-MAIN-2021-39 | refinedweb | 449 | 66.54 |
On Sun, 29 Sep 2002, Jens Axboe wrote:

Hi,

> On Sun, Sep 29 2002, Alan Cox wrote:
> > On Sun, 2002-09-29 at 10:12, Jens Axboe wrote:
> > > 2.5 is definitely desktop stable, so please test it if you can. Until
> > > recently there was a personal show stopper for me, the tasklist
> > > deadlock. Now 2.5 is happily running on my desktop as well.
> >
> > Its very hard to make that assessment when the audio layer still doesnt
> > work, most scsi drivers havent been ported, most other drivers are full
> > of 2.4 fixed problems and so on.
>
> I can only talk for myself, 2.5 works fine here on my boxes. Dunno what
> you mean about audio layer, emu10k works for me.
>
> SCSI drivers can be a real problem. Not the porting of them, most of
[snip]

simply replying to one of you all ...

Most important problem I currently see is that one of two kernels
do not boot on my MP machine I use as a workstation.

Apart from that and after early 2.5.3x probs were sorted out
I already had 2.5-bk-kernels running and did the following on that
MP machine:

- compiled linux-2.5-bks
- compiled X (runs with multi head)
- listend to music (emu10k)
- watched TV (bttv)
- burned CDs (SCSI)
- ran amanda: dumped multiple input streams from network to IDE disks
  before writing to SCSI tape
- ran vmware (after patchwork to compile ;-)
- started looking at sym53c416 cli() removal and had the scanner doing
  his work (started to debug some pnp things there too, results to be posted)
- changed to devfs
- printing and serial are fine too
- the new input stuff now behaves properly too

often did multiple things in parallel (watching tv while compiling
a new kernel, ...)

had really few crashes (~4-6 since 2.5.34)

had some compilation probs with modules and MP but they got either
fixed too fast or patches went into bk within 1-2 days :-)

Going to check JFS (and XFS) in the near future...

So I think I am either one almost happy person with a lotta luck or
you all (did) do a very excellent job!!! ...
but please get thoseMP (boot) probs sorted out ;-)Before you start asking what probs: this time it's around ACPI init.--- snipp ---PCI: PCI BIOS revision 2.10 entry at 0xfdb91, last bus=1PCI: Using configuration type 1ACPI: Subsystem revision 20020918 tbxface-0099 [03] Acpi_load_tables : ACPI Tables successfully loadedParsing Methods:......................................................................................................Table [DSDT] - 309 Objects with 22 Devices 102 Methods 19 RegionsACPI Namespace successfully loaded at root c03a741c--- dead end where no keyboard or serial console sysreqs are answered ---so it must be around ... and I assume it's mp_config_ioapic_for_sci()but still have to trace ...--- drivers/acpi/bus.c:606 --- /* * Get a separate copy of the FADT for use by other drivers. */ status = acpi_get_table(ACPI_TABLE_FADT, 1, &buffer); if (ACPI_FAILURE(status)) { printk(KERN_ERR PREFIX "Unable to get the FADT\n"); goto error1; }#ifdef CONFIG_X86 /* Ensure the SCI is set to level-triggered, active-low */ if (acpi_ioapic) mp_config_ioapic_for_sci(acpi_fadt.sci_int); else eisa_set_level_irq(acpi_fadt.sci_int);#endif status = acpi_enable_subsystem(ACPI_FULL_INITIALIZATION); if (ACPI_FAILURE(status)) { printk(KERN_ERR PREFIX "Unable to start the ACPI Interpreter\n"); goto error1; }--- end ----- GreetingsBjoern A. Zeeb bzeeb at Zabbadoz dot NeT56 69 73 69 74 unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2002/9/29/222 | CC-MAIN-2018-09 | refinedweb | 566 | 62.68 |
JWT Access Auth Identity Policy for Morepath
Project description
more.jwtauth: JWT Authentication integration for Morepath
Overview
This is a Morepath authentication extension for the JSON Web Token (JWT) Authentication.
For more information about JWT, see:
- JSON Web Token draft - the official JWT draft
- Auth with JSON Web Tokens - an interesting blog post by José Padilla
To access resources using JWT Access Authentication, the client must have obtained a JWT to make signed requests to the server. The token can be opaque to the client, although, unless it is encrypted, the client can read the claims made in it.
JWT validates the authenticity of the claimset using the signature.
This plugin uses the PyJWT library from José Padilla for verifying JWTs.
Introduction
- The general workflow of JWT Access Authentication:
- After the client has sent the login form we check if the user exists and if the password is valid.
- In this case more.jwtauth generates a JWT token including all information in a claim set and send it back to the client inside the HTTP authentication header.
- The client stores it in some local storage and send it back in the authentication header on every request.
- more.jwtauth validates the authenticity of the claim set using the signature included in the token.
- The logout should be handled by the client by removing the token and making some cleanup depending on the implementation.
You can include all necessary information about the identity in the token so JWT Access Authentication can be used by a stateless service e.g. with external password validation.
Requirements
- Python (2.7, 3.3, 3.4, 3.5)
- morepath (>= 0.16.1)
- PyJWT (1.4.2)
- optional: cryptography (1.5.2)
Note
If you want to use another algorithm than HMAC (HS*), you need to install cryptography. On some systems this can be a little tricky. Please follow the installation instructions in the cryptography documentation and be sure to install all dependencies as referenced.
Installation
You can use pip for installing more.jwtauth:
- pip install -U more.jwtauth[crypto] - for installing with cryptography
- pip install -U more.jwtauth - installing without cryptography
Usage
For a basic setup just set the necessary settings including a key or key file and pass them to JWTIdentityPolicy:
import morepath
from more.jwtauth import JWTIdentityPolicy


class App(morepath.App):
    pass


@App.setting_section(section="jwtauth")
def get_jwtauth_settings():
    return {
        # Set a key or key file.
        'master_secret': 'secret',

        # Adjust the settings which you need.
        'leeway': 10
    }


@App.identity_policy()
def get_identity_policy(settings):
    # Get the jwtauth settings as a dictionary.
    jwtauth_settings = settings.jwtauth.__dict__.copy()

    # Pass the settings dictionary to the identity policy.
    return JWTIdentityPolicy(**jwtauth_settings)


@App.verify_identity()
def verify_identity(identity):
    # As we use a token based authentication
    # we can trust the claimed identity.
    return True
The login can be done in the standard Morepath way. You can add extra information about the identity, which will be stored in the JWT token and can be accessed through the morepath.Identity object:
class Login(object):
    pass


@App.path(model=Login, path='login')
def get_login():
    return Login()


@App.view(model=Login, request_method='POST')
def login(self, request):
    username = request.POST['username']
    password = request.POST['password']

    # Here you get some extra user information.
    email = request.POST['email']
    role = request.POST['role']

    # Do the password validation.
    if not user_has_password(username, password):
        raise HTTPProxyAuthenticationRequired('Invalid username/password')

    @request.after
    def remember(response):
        # We pass the extra info to the identity object.
        identity = morepath.Identity(username, email=email, role=role)
        request.app.remember_identity(response, request, identity)

    return "You're logged in."  # or something more fancy
Don’t use reserved claim names such as “iss”, “aud”, “exp”, “nbf”, “iat”, “jti”, “refresh_until”, “nonce” or the userid_claim (default: “sub”, see settings). They will be silently ignored.
- Advanced:
For testing or if we want to use some methods of the JWTIdentityPolicy class directly we can pass the settings as arguments to the class:
identity_policy = JWTIdentityPolicy(
    master_secret='secret',
    leeway=10
)
Refreshing the token
There are some risks related with using long-term tokens:
- If you use a stateless solution the token contains user data which could not be up-to-date anymore.
- If a token get compromised there’s no way to destroy sessions server-side.
A solution is to use short-term tokens and refresh them, either just before they expire or even afterwards, as long as the refresh_until claim has not expired.
To help you with this more.jwtauth has a refresh API, which uses 4 settings:
- allow_refresh: Enables the token refresh API when True. Default is False.
- refresh_delta: The time delta in which the token can be refreshed, considering the leeway. Default is 7 days. When None you can always refresh the token.
- refresh_nonce_handler: Either a dotted path to a callback function or the callback function itself, which receives the current request and the userid as arguments and returns a nonce which will be validated before refreshing. When None no nonce will be created or validated for refreshing.
- verify_expiration_on_refresh: If False, expiration_delta for the JWT token will not be checked during refresh. Otherwise you can refresh the token only if it’s not yet expired. Default is False.
When refreshing is enabled by setting refresh_delta the token can get 2 additional claims:
- refresh_until: Timestamp until which the token can be refreshed.
- nonce: The nonce which was generated by refresh_nonce_handler.
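As an illustration, the claim set of such a refreshable token might look like the sketch below. All values here are made up for the example and are not produced by more.jwtauth itself:

```python
import time

# Hypothetical claim set for a refreshable token; all values are made up.
now = int(time.time())
claims = {
    "sub": "alice",                        # userid_claim: the userid
    "exp": now + 30 * 60,                  # from expiration_delta
    "refresh_until": now + 7 * 24 * 3600,  # from refresh_delta
    "nonce": "1f2e3d4c",                   # from refresh_nonce_handler
}

# The token can be refreshed while "refresh_until" lies in the future
# (and, if a nonce handler is set, while the nonce still matches).
can_refresh = now < claims["refresh_until"]
```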
So when you want to refresh your token, either because it has expired or just before it does, you should adjust your jwtauth settings:
@App.setting_section(section="jwtauth")
def get_jwtauth_settings():
    return {
        # Set a key or key file.
        'master_secret': 'secret',

        'allow_refresh': True,
        'refresh_delta': 300,
        'refresh_nonce_handler': 'yourapp.handler.refresh_nonce_handler'
    }
Alternatively you can set the refresh_nonce_handler by decorating a closure which returns the handler function:
from .app import App
from .model import User


@App.setting(section="jwtauth", name="refresh_nonce_handler")
def get_handler():
    def refresh_nonce_handler(request, userid):
        # This returns a nonce from the user entity,
        # which can just be an UUID you created before.
        return User.get(username=userid).nonce
    return refresh_nonce_handler
After you can send a request to the refresh end-point for refreshing the token:
from morepath import Identity
from more.jwtauth import (
    verify_refresh_request,
    InvalidTokenError,
    ExpiredSignatureError
)

from .app import App
from .model import User


class Refresh(object):
    pass


@App.path(model=Refresh, path='refresh')
def get_refresh():
    return Refresh()


@App.view(model=Refresh)
def refresh(self, request):
    try:
        # Verifies if we're allowed to refresh the token.
        # In this case returns the userid.
        # If not raises exceptions based on InvalidTokenError.
        # If expired this is a ExpiredSignatureError.
        username = verify_refresh_request(request)
    except ExpiredSignatureError:
        @request.after
        def expired_nonce_or_token(response):
            response.status_code = 403
        return "Your session has expired."
    except InvalidTokenError:
        @request.after
        def invalid_token(response):
            response.status_code = 403
        return "Could not refresh your token."
    else:
        # Get user info from the database to update the claims.
        user = User.get(username=username)

        @request.after
        def remember(response):
            # Create the identity with the userid and updated user info.
            identity = Identity(
                username,
                email=user.email,
                role=user.role
            )
            # Create the updated token and set it in the response header.
            request.app.remember_identity(response, request, identity)

        return "Token successfully refreshed."
So now on every token refresh the user data gets updated.
When using the refresh_nonce_handler, you can just change the nonce if the token gets compromised, e.g. by storing a new UUID in the user entity, and the existing tokens will not be refreshed anymore.
Exceptions
When refreshing the token fails, an exception is raised. All exceptions are subclasses of more.jwtauth.InvalidTokenError, so you can catch them with except InvalidTokenError. For each exception a description of the failure is added. The following exceptions could be raised:
- InvalidTokenError: A plain InvalidTokenError is used when the refreshing API is disabled, the JWT token could not be found or the refresh nonce is invalid.
- ExpiredSignatureError: when the refresh_until claim has expired or when the JWT token has expired in case verify_expiration_on_refresh is enabled.
- MissingRequiredClaimError: When the refresh_until claim is missing if a refresh_delta was provided or when the nonce claim is missing if refresh_nonce_handler is in use.
- DecodeError: When the JWT token could not be decoded.
Settings
There are some settings that you can override. Here are all the defaults:
@App.setting_section(section="jwtauth")
def get_jwtauth_settings():
    return {
        'master_secret': None,
        'private_key': None,
        'private_key_file': None,
        'public_key': None,
        'public_key_file': None,
        'algorithm': "HS256",
        'expiration_delta': datetime.timedelta(minutes=30),
        'leeway': 0,
        'allow_refresh': False,
        'refresh_delta': timedelta(days=7),
        'refresh_nonce_handler': None,
        'verify_expiration_on_refresh': False,
        'issuer': None,
        'auth_header_prefix': "JWT",
        'userid_claim': "sub"
    }
The following settings are available:
- master_secret
- A secret known only by the server, used for the default HMAC (HS*) algorithm. Default is None.
- private_key
- An Elliptic Curve or an RSA private_key used for the EC (EC*) or RSA (PS*/RS*) algorithms. Default is None.
- private_key_file
- A file holding an Elliptic Curve or an RSA encoded (PEM/DER) private_key. Default is None.
- public_key
- An Elliptic Curve or an RSA public_key used for the EC (EC*) or RSA (PS*/RS*) algorithms. Default is None.
- public_key_file
- A file holding an Elliptic Curve or an RSA encoded (PEM/DER) public_key. Default is None.
- algorithm
- The algorithm used to sign the key. Default is HS256.
- expiration_delta
- Time delta from now until the token will expire. Set to None to disable. This can either be a datetime.timedelta or the number of seconds. Default is 30 minutes.
- leeway
- The leeway allows you to validate an expiration time which is in the past, but not very far. Use either a datetime.timedelta or a number of seconds. Default is 0.
- allow_refresh
- Setting to True enables the refreshing API. Default is False
- refresh_delta
- A time delta in which the token can be refreshed considering the leeway. This can either be a datetime.timedelta or the number of seconds. Default is 7 days. When None you can always refresh the token.
- refresh_nonce_handler
- Dotted path to callback function, which receives the userid as argument and returns a nonce which will be validated before refreshing. When None no nonce will be created or validated for refreshing. Default is None.
- verify_expiration_on_refresh
- If False, expiration_delta for the JWT token will not be checked during refresh. Otherwise you can refresh the token only if it’s not yet expired. Default is False.
- issuer
- This is a string that will be checked against the iss claim of the token. You can use this e.g. if you have several related apps with exclusive user audience. Default is None (do not check iss on JWT).
- auth_header_prefix
- You can modify the Authorization header value prefix that is required to be sent together with the token. The default value is JWT. Another common value used for tokens is Bearer.
- userid_claim
- The claim, which contains the user id. The default claim is ‘sub’.
The library takes either a master_secret or a private_key/public_key pair. In the latter case the algorithm must be an EC*, PS* or RS* version.
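For illustration, a key-pair configuration might be sketched as below. The file paths are placeholders for this example, not defaults of more.jwtauth; point them at your own PEM files:

```python
# Sketch: settings for an RSA key pair instead of a shared master_secret.
# The file paths are placeholders for illustration only.
jwtauth_settings = {
    'algorithm': "RS256",                            # an RS* algorithm
    'private_key_file': "/path/to/rsa_private.pem",  # signs the tokens
    'public_key_file': "/path/to/rsa_public.pem",    # verifies the tokens
    'master_secret': None,                           # unused with RS*/ES*/PS*
}
```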
Algorithms
The JWT spec supports several algorithms for cryptographic signing. This library currently supports:
- HS256
- HMAC using SHA-256 hash algorithm (default)
- HS384
- HMAC using SHA-384 hash algorithm
- HS512
- HMAC using SHA-512 hash algorithm
- ES256 [1]
- ECDSA signature algorithm using SHA-256 hash algorithm
- ES384 [1]
- ECDSA signature algorithm using SHA-384 hash algorithm
- ES512 [1]
- ECDSA signature algorithm using SHA-512 hash algorithm
- PS256 [1]
- RSASSA-PSS signature using SHA-256 and MGF1 padding with SHA-256
- PS384 [1]
- RSASSA-PSS signature using SHA-384 and MGF1 padding with SHA-384
- PS512 [1]
- RSASSA-PSS signature using SHA-512 and MGF1 padding with SHA-512
- RS256 [1]
- RSASSA-PKCS1-v1_5 signature algorithm using SHA-256 hash algorithm
- RS384 [1]
- RSASSA-PKCS1-v1_5 signature algorithm using SHA-384 hash algorithm
- RS512 [1]
- RSASSA-PKCS1-v1_5 signature algorithm using SHA-512 hash algorithm

[1] These algorithms require the cryptography package (see the note in the Requirements section).
Developing more.jwtauth
Install more.jwtauth for development
Clone more.jwtauth from github:
$ git clone git@github.com:morepath/more.jwtauth.git
If this doesn’t work and you get an error ‘Permission denied (publickey)’, you need to upload your ssh public key to github.
Then go to the more.jwtauth directory:
$ cd more.jwtauth
Make sure you have virtualenv installed.
Create a new virtualenv for Python 3 inside the more.jwtauth directory:
$ virtualenv -p python3 env/py3
Activate the virtualenv:
$ source env/py3/bin/activate
Make sure you have recent setuptools and pip installed:
$ pip install -U setuptools pip
Install the various dependencies and development tools from develop_requirements.txt:
$ pip install -Ur develop_requirements.txt
For upgrading the requirements just run the command again.
If you want to test more.jwtauth with Python 2.7 as well you can create a second virtualenv for it:
$ virtualenv -p python2.7 env/py27
You can then activate it:
$ source env/py27/bin/activate
Then upgrade setuptools and pip and install the develop requirements as described above.
Note
The following commands work only if you have the virtualenv activated.
Running the tests
You can run the tests using py.test:
$ py.test
To generate test coverage information as HTML do:
$ py.test --cov --cov-report html
You can then point your web browser to the htmlcov/index.html file in the project directory and click on modules to see detailed coverage information.
Various checking tools
flake8 is a tool that can do various checks for common Python mistakes using pyflakes, check for PEP8 style compliance and can do cyclomatic complexity checking. To do pyflakes and pep8 checking do:
$ flake8 more.jwtauth
To also show cyclomatic complexity, use this command:
$ flake8 --max-complexity=10 more.jwtauth
You can list the available tox test environments with:
$ tox -l
You can run all tox tests with:
$ tox
You can also specify a test environment to run e.g.:
$ tox -e py35
$ tox -e pep8
$ tox -e coverage
CHANGES
0.11 (2018-01-18)
- Remove support for Python 3.3 and add support for Python 3.6.
- Upgrade PyJWT to version 1.5.3 and cryptography to version 2.1.4.
0.9 (2017-03-02)
New: Add an API to refresh the JWT token (see issue #6).
This implement adding 4 new settings:
- allow_refresh: Enables the token refresh API when True.
- refresh_delta: The time delta in which the token can be refreshed considering the leeway.
- refresh_nonce_handler: Dotted path to callback function, which receives the userid as argument and returns a nonce which will be validated before refreshing.
- verify_expiration_on_refresh: If False, expiration_delta for the JWT token will not be checked during refresh. Otherwise you can refresh the token only if it’s not yet expired.
It also adds 2 claims to the token when refreshing is enabled:
- refresh_until: Timestamp until which the token can be refreshed.
- nonce: The nonce which was returned by refresh_nonce_handler.
For details see README.rst.
Removed: The verify_expiration setting has been removed as it was mainly for custom handling of token refreshing, which is now obsolete.
Pass algorithm explicit to jwt.decode() to avoid some vulnerabilities. For details see the blog post by Tim McLean about some “Critical vulnerabilities in JSON Web Token libraries”.
Allow expiration_delta and leeway as number of seconds in addition to datetime.timedelta.
Some code cleanup and refactoring.
0.8 (2016-10-21)
- We now use virtualenv and pip instead of buildout to set up the development environment. A development section has been added to the README accordingly.
- Review and optimize the tox configuration.
- Upgrade to PyJWT 1.4.2 and Cryptography 1.5.2.
0.7 (2016-07-20)
- Upgrade to Morepath 0.15.
- Upgrade to PyJWT 1.4.1 and Cryptography 1.4.
- Add testenv for Python 3.5 and make it the default test environment.
- Change author to “Morepath developers”.
- Clean up classifiers.
0.6 (2016-05-19)
Make Cryptography optional.
Breaking Change: For using other algorithms than HMAC you now need to install the crypto dependencies explicitly. Read the note in the Requirements section and the new Installation section of README.rst.
Add an Installation section to the README.
Refactor the cryptography test suite.
0.5 (2016-04-25)
- Adding some tests.
- Increase coverage to 100%.
- Add travis-ci and tox integration.
- Some clean-up.
- Upgrade to Morepath 0.14.
- Some improvements to the setup and release workflow.
0.4 (2016-04-13)
- Upgrade to Morepath 0.13.2 and update the tests.
- Upgrade PyJWT to 1.3.0 and cryptography to 1.3.1.
- Make it a PyPI package and release it. Fixes Issue #1.
0.3 (2016-04-13)
- Upgrade PyJWT to 1.4.0 and cryptography to 0.9.1.
- Python 3.2 is no longer a supported platform. This version of Python is rarely used. Users affected by this should upgrade to 3.3+.
- Some cleanup.
0.2 (2015-06-29)
- Integrate the set_jwt_auth_header function into the identity policy as remember method.
- Add support for PS256, PS384, and PS512 algorithms.
- Pass settings directly as arguments to the JWTIdentityPolicy class with the possibility to override them with Morepath settings using the method introduced in Morepath 0.11.
- Remove JwtApp as now we use JWTIdentityPolicy directly without inherit from JwtApp.
- Add a Introduction and Usage section to README.
- Integrate all functions as methods in the JWTIdentityPolicy Class.
- Refactor the test suite.
0.1 (2015-04-15)
- Initial public release.
The primary goal of a web browser is to display the information identified by a URL. To do so, a browser first uses the URL to connect to a server somewhere on the Internet, and then requests information from that server. The web page is the data in the server’s response.
To display a web page, the browser first needs to get a copy of it. To do so, it asks the OS to put it in touch with a server somewhere on the internet; the URL for the web page tells it the server’s host name. The OS then talks to a DNS server which converts a host name like example.org into an IP address like 93.184.216.34. (On some systems, you can run dig +short example.org to do this conversion yourself. Also, today there are two versions of IP, IPv4 and IPv6; IPv6 addresses are a lot longer and are usually in hex, but otherwise the differences don't matter here.) Then the OS decides which hardware is best for communicating with that IP address (say, wireless or wired) using what is called a routing table, and then uses device drivers to send signals over a wire or over the air. (I'm skipping steps here: on wires you first have to wrap communications in ethernet frames, on wireless you have to do even more. I'm trying to be brief.) Those signals are picked up and transmitted by a series of routers (or a switch, or an access point; there are a lot of possibilities, but eventually there is a router) which each send your message in the direction they think will take it toward that IP address. (They may also record where the message came from so they can forward the reply back, especially in the case of NATs.) Eventually this reaches the server, and the connection is created. Anyway, the point of this is that the browser tells the OS, “Hey, put me in touch with example.org”, and it does.
On many systems, you can set up this kind of connection manually using the telnet program, like this (the “80” is the port, discussed below):
telnet example.org 80
You might need to install
telnet; it is often disabled by default. On Windows, go to Programs and Features / Turn Windows features on or off in the Control panel. On macOS, you can use the
nc -v command as a replacement:
nc -v example.org 80
The output from
nc is a little different from
telnet but it does basically the same thing. You can install
telnet on most Linux systems; plus, the
nc command is usually available from a package called
netcat.
You'll get output that looks like this:
Trying 93.184.216.34...
Connected to example.org.
Escape character is '^]'.
This means that the OS converted
example.org to the IP address of
93.184.216.34 and was able to connect to it. (The line about escape characters is just instructions on using obscure telnet features.) You can now type in text and press enter to talk to example.org.
Once it’s connected, the browser requests information from the server by name. The name is the part of a URL that comes after the host name, like
/index.html, called the path. The request looks like this:
GET /index.html HTTP/1.0
Host: example.org
Here, the word GET means that the browser would like to receive information (it could say POST if it intended to send information, plus there are some other obscure options). Then comes the path, and finally there is the word HTTP/1.0, which tells the host that the browser speaks version 1.0 of HTTP. (Why not 1.1? You can use 1.1, but then you need another header, Connection, to handle a feature called "keep-alive"; using 1.0 avoids this complexity.) There are several versions of HTTP (0.9, 1.0, 1.1, and 2.0). The HTTP 1.1 standard adds a variety of useful features, like keep-alive, but in the interest of simplicity our browser won't use them. We're also not implementing HTTP 2.0; HTTP 2.0 is much more complex than the 1.X series, and is intended for large and complex web applications, which our browser can’t run anyway.
After the first line, each line contains a header, which has a name (like
Host) and a value (like
example.org). Different headers mean different things; the
Host header, for example, tells the host who you think it is. (This is useful when the same IP address corresponds to multiple host names, for example example.com and example.org.) There are lots of other headers one could send, but let's stick to just Host for now. (Many websites, including example.org, basically require the Host header to function properly, since hosting multiple domains on a single computer is very common.)
Finally, after the headers comes a single blank line; that tells the host that you are done with headers.
Enter all this into
telnet, remembering to leave add a blank line after the line that begins with
Host. You should get a response.
The HTTP/1.0 standard is also known as RFC 1945. The HTTP/1.1 standard is RFC 2616, so if you're interested in
Connection and keep-alive, look there.
The server’s response starts with this line:
HTTP/1.0 200 OK
That tells you that the host confirms that it, too, speaks
HTTP/1.0, and that it found your request to be "OK" (which has a numeric code of 200). You may be familiar with
404 Not Found; that’s another numeric code and response, as are
403 Forbidden or
500 Server Error. There are lots of these codes, and they have a pretty neat organization scheme: the 100s are informational messages, the 200s mean success, the 300s are redirects, the 400s mean the request was invalid, and the 500s mean the server erred. (The status text like OK can actually be anything; it is just there for humans, not for machines.)
Note the genius of having two sets of error codes (400s and 500s), which tells you who is at fault, the server or the browser. (More precisely, who the server thinks is at fault.) You can find a full list of the different codes on Wikipedia, and new ones do get added here and there.
After the
200 OK line, the server sends its own headers. When I did this, I got these headers (but yours will differ):
Cache-Control: max-age=604800
Content-Type: text/html; charset=UTF-8
Date: Mon, 25 Feb 2019 16:49:28 GMT
Etag: "1541025663+ident"
Expires: Mon, 04 Mar 2019 16:49:28 GMT
Last-Modified: Fri, 09 Aug 2013 23:54:35 GMT
Server: ECS (sec/96EC)
Vary: Accept-Encoding
X-Cache: HIT
Content-Length: 1270
Connection: close
There is a lot here, about the information you are requesting (
Content-Type,
Content-Length, and
Last-Modified), about the server (
Server,
X-Cache), about how long the browser should cache this information (
Cache-Control,
Expires,
Etag), about all sorts of other stuff. Let's move on for now.
After the headers there is a blank line followed by a bunch of HTML code. This is called the body of the server’s response, and your browser knows that it is HTML because of the
Content-Type header, which says that it is
text/html. It’s this HTML code that contains the content of the web page itself.
Let’s now switch gears from manual connections to Python.
So far we've communicated with another computer using
telnet. But it turns out that
telnet is quite a simple program, and we can do the same programmatically. It’ll require extracting host name and path from the URL, creating a socket, sending a request, and receiving a response.
A URL like has several parts:
http, explains how to get the information
example.org, explains where to get it
/index.html, explains what information to get
There are also optional parts to the URL. Sometimes there is a port that comes after the host, and there can be something tacked onto the end, a fragment like
#section or a query like
?s=term. We’ll come back to ports later in this chapter, and some other URL components appear in exercises.
In Python, there's a library called urllib.parse that splits a URL into these pieces, but let's write our own. (There's nothing wrong with using libraries, but implementing our own is good for learning; plus, it makes this book easier to follow in a language besides Python.) We'll start with the scheme: our browser only supports http, so we just need to check that the URL starts with http:// and then strip that off:
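A minimal sketch of that step, starting from an example URL:

```python
url = "http://example.org/index.html"  # an example URL

# Our browser only supports http; strip the scheme off.
assert url.startswith("http://")
url = url[len("http://"):]
# url is now "example.org/index.html"
```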
Now we must separate the host from the path. The host comes before the first
/, while the path is that slash and everything after it:
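A sketch of that split, assuming the scheme has already been stripped:

```python
url = "example.org/index.html"  # scheme already stripped off

# The host is everything before the first slash; the path is that
# slash and everything after it.
host, path = url.split("/", 1)
path = "/" + path
```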
The
split(s, n) function splits a string at the first
n copies of
s. The path is supposed to include the separating slash, so I make sure to add it back after splitting on it.
With the host and path identified, the next step is to connect to the host. The operating system provides a feature called “sockets” for this. When you want to talk to other computers (either to tell them something, or to wait for them to tell you something), you create a socket, and then that socket can be used to send information back and forth. Sockets come in a few different kinds, because there are multiple ways to talk to other computers:
- An address family, with names that begin with AF. We want AF_INET, but for example AF_BLUETOOTH is another.
- A type, with names that begin with SOCK. We want SOCK_STREAM, which means each computer can send arbitrary amounts of data over, but there's also SOCK_DGRAM, in which case they send each other packets of some fixed size. (The DGRAM stands for "datagram"; think of it like a postcard.)
- A protocol; we want IPPROTO_TCP. (Nowadays some browsers support protocols that don't use TCP, like Google Chrome's QUIC protocol.)
By picking all of these options, we can create a socket like so. (While this code uses the Python socket library, your favorite language likely contains a very similar library; this API is basically standardized. In Python, the flags we pass are defaults, so you can actually call socket.socket(); I'm keeping the flags here in case you're following along in another language.)
import socket

s = socket.socket(
    family=socket.AF_INET,
    type=socket.SOCK_STREAM,
    proto=socket.IPPROTO_TCP,
)
Once you have a socket, you need to tell it to connect to the other computer. For that, you need the host and a port. The port depends on the type of server you’re connecting to, and for now should always be 80.
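A sketch of the connect call; to keep the example runnable without network access, it connects to a throwaway local listener rather than a real web server:

```python
import socket

# A throwaway local listener stands in for a real web server here.
server = socket.socket(family=socket.AF_INET, type=socket.SOCK_STREAM,
                       proto=socket.IPPROTO_TCP)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

s = socket.socket(family=socket.AF_INET, type=socket.SOCK_STREAM,
                  proto=socket.IPPROTO_TCP)
# Against a real web server this would be s.connect(("example.org", 80)).
s.connect((host, port))
```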
Note that there are two parentheses in the
connect call:
connect takes a single argument, and that argument is a pair of a host and a port. This is because different address families have different numbers of arguments.
The syntax of URLs is defined in RFC 3987, which is pretty readable. Try to implement the full URL standard, including encodings for reserved characters.
You can find out more about the "sockets" API on Wikipedia. Python more or less implements that API directly.
Now that we have a connection, we make a request to the other server. To do so, we send it some data using the
send method:
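A sketch of the request being sent; a connected local socket pair stands in for the server connection so the example runs offline:

```python
import socket

# A connected local socket pair stands in for the server connection.
s, server_side = socket.socketpair()

request = (b"GET /index.html HTTP/1.0\r\n" +
           b"Host: example.org\r\n" +
           b"\r\n")          # the blank line ends the request
sent = s.send(request)       # returns the number of bytes sent
received = server_side.recv(1024)
```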
There are a few things to be careful of here. First, it’s important to have the letter “b” before the string. Next, it's very important to use
\r\n instead of
\n for newlines. And finally, it’s essential that you put two newlines
\r\n at the end, so that you send that blank line at the end of the request. If you forget that, the other computer will keep waiting on you to send that newline, and you'll keep waiting on its response. Computers are dumb.
Time for a Python quirk. When you send data, it's important to remember that you are sending raw bits and bytes; they could form text or an image or video. That's why here I have a letter
b in front of the string of data: that tells Python that I mean the bits and bytes that represent the text I typed in, not the text itself, which you can tell because it has type
bytes not
str:
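A quick check of the two types (nothing here is specific to sockets):

```python
# The b prefix selects the bytes type; without it you get str.
assert type(b"some data") == bytes
assert type("some data") == str
```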
If you forget that letter
b, you will get some error about
str versus
bytes. You can turn a
str into
bytes by calling its
encode("utf8") method, and go the other way with
decode("utf8"). (Well, to be more precise, you need to call encode and then tell it the character encoding that your string should use. This is a complicated topic. I'm using utf8 here, which is a common character encoding and will work on many pages, but in the real world you would need to be more careful.)
You'll notice that the
send call returns a number, in this case
44. That tells you how many bytes of data you sent to the other computer; if, say, your network connection failed midway through sending the data, you might want to know how much you sent before the connection failed.
To read the response, you'd generally use the
read function on sockets, which gives whatever bits of the response have already arrived. Then you write a loop that collects bits of the response as they arrive. However, in Python you can use the
makefile helper function, which hides the loop:If you're in another language, you might only have
socket.read available. You'll need to write the loop, checking the socket status, yourself.
Here
makefile returns a file-like object containing every byte we receive from the server. I am instructing Python to turn those bytes into a string using the
utf8 encoding, or method of associating bytes to letters.It would be more correct to use
utf8 to decode just the headers and then use the
charset declaration in the
Content-Type header to determine what encoding to use for the body. That's what real browsers do; browsers even guess the encoding if there isn't a
charset declaration, and when they guess wrong you see those ugly � or some strange áççêñ£ß. I am skipping all that complexity and by again hardcoding
utf8. I’m also informing Python of HTTP’s weird line endings.
Let's now split the response into pieces. The first line is the status line:
statusline = response.readline() version, status, explanation = statusline.split(" ", 2) assert status == "200", "{}: {}".format(status, explanation)
Note that I do not check that the server's version of HTTP is the same as mine; this might sound like a good idea, but there are a lot of misconfigured servers out there that respond in HTTP 1.1 even when you talk to them in HTTP 1.0. (Luckily the protocols are similar enough as to not cause confusion.)
After the status line come the headers:
headers = {} while True: line = response.readline() if line == "\r\n": break header, value = line.split(":", 1) headers[header.lower()] = value.strip()
For the headers, I split each line at the first colon and fill in a map of header names to header values. Headers are case-insensitive, so I normalize them to lower case. Also, white-space is insignificant in HTTP header values, so I strip off extra whitespace at the beginning and end.
Finally, the body is everything else the server sent us:
It’s that body that we’re going to display. Before we do that, let’s gather up all of the connection, request, and response code into a
request function:
Now let’s display the text in the body.
Many common (and uncommon) HTTP headers are described on Wikipedia.
The
Accept-Encoding header allows a web browser to advertise that it supports receiving compressed documents. Try implementing support for one of the common compression formats (like
deflate or
gzip)!
The HTML code in the body defines the content you see in your browser window when you go to. I'll be talking much, much more about HTML in future chapters, but for now let me keep it very simple.
In HTML, there are tags and text. Each tag starts with a
< and ends with a
>; generally speaking, tags tell you what kind of thing some content is, while text is the actual content.That said, some tags, like
img, are content, not information about it. Most tags come in pairs of a start and an end tag; for example, the title of the page is enclosed a pair of tags:
<title> and
</title>. Each tag, inside the angle brackets, has a tag name (like
title here), and then optionally a space followed by attributes, and its pair has a
/ followed by the tag name (and no attributes). Some tags do not have pairs, because they don't surround text, they just carry information. For example, on, there is the tag:
<meta charset="utf-8" />
This tag explains that the character set with which to interpret the page body is
utf-8. Sometimes, tags that don't contain information end in a slash, but not always; it’s a matter of preference.
The most important HTML tag is called
<body> (with its pair,
</body>). Between these tags is the content of the page; outside of these tags is various information about the page, like the aforementioned title, information about how the page should look (
<style> and
</style>), and metadata (the aforementioned
<meta> tag).
So, to create our very very simple web browser, let's take the page HTML and print all the text, but not the tags, in it:If this example causes Python to produce a
SyntaxError pointing to the
end on the last line, it is likely because you are running Python 2 instead of Python 3. These chapters assume Python 3.
in_angle = False for c in body: if c == "<": in_angle = True elif c == ">": in_angle = False elif not in_angle: print(c, end="")
This code is pretty complex. It goes through the request body character by character, and it has two states:
in_angle, when it is currently between a pair of angle brackets, and
not in_angle. When the current character is an angle bracket, changes between those states; when it is not, and it is not inside a tag, it prints the current character.The
end argument tells Python not to print a newline after the character, which it otherwise would.
Put this code into a new function,
show:
We can now string together
request and
show:
This code uses the
sys library to read the first argument (
sys.argv[1]) from the command line to use as the URL. Try running the code you’ve written, passing the URL:
python3 browser.py
You should see some short text welcoming you to the official example web page. You can also try using it on this chapter!
So far, our browser supports the
http scheme. That’s pretty good: it’s the most common scheme on the web today. But more and more, websites are migrating to the
https scheme. I’d like this toy browser to support
https because many websites today require it.
The difference between
http and
https is that
https is more secure—but let’s be a little more specific. The
https scheme, or more formally HTTP over TLS, is identical to the normal
http scheme, except that all communication between the browser and the host is encrypted. There are quite a few details to how this works: which encryption algorithms are used, how a common encryption key is agreed to, and of course how to make sure that the browser is connecting to the correct host.
Luckily, the Python
ssl library implements all of these details for us, so making an encrypted connection is almost as easy as making a regular connection. That ease of use comes with accepting some default settings which could be inappropriate for some situations, but for teaching purposes they are fine.
Making an encrypted connection with
ssl is pretty easy. Suppose you’ve already created a socket,
s, and connected it to
example.org. To encrypt the connection, you use
ssl.create_default_context to create a context
ctx and use that context to wrap the socket
s. That produces a new socket,
s:
When you wrap
s, you pass a
server_hostname argument, and it should match the argument you passed to
s.connect. Note that I save the new socket back into the
s variable. That’s because you don’t want to send over the original socket; it would be unencrypted and also confusing.
Let’s try to take this code and add it to
request. First, we need to detect which scheme is being used:
scheme, url = url.split("://", 1) assert scheme in ["http", "https"], \ "Unknown scheme {}".format(scheme)
Encrypted HTTP connections usually use port 443 instead of port 80:
While we’re at it, let’s add support for custom ports, which are specified in a URL by putting a colon after the host name, like in:
Custom ports are handy for debugging.
Next, we’ll wrap the socket with the
ssl library:
if scheme == "https": ctx = ssl.create_default_context() s = ctx.wrap_socket(s, server_hostname=host)
These two steps should be all you need to connect to HTTPS sites.
TLS is pretty complicated; you can read the details in RFC 8446. Implementing your own is not recommended: writing security-sensitive code is a pretty different and more difficult skill than just writing code, and without a lot of very careful work a custom TLS implementation will be very insecure.
This chapter went from an empty file to a rudimentary web browser that can:
sockets
Hostheader
Yes, this is still more of a command-line tool than a web browser, but it already has some of the core capabilities of a browser.
Along with
Host, send the
User-Agent header in the
request function. Its value can be whatever you want—it identifies your browser to the host.
Error codes in the 300 range refer to redirects. Change the browser so that, for 300-range statuses, the browser repeats the request with the URL in the
Location header. Note that the
Location header might not include the host and scheme. If it starts with
/, prepend the scheme and host. You can test this with with the URL, which should redirect back to this page.
Add support for Data URLs, which embed the whole resource into the URL. You’ll need to undo the
base64 encoding; use the Python
base64 library’s
b64decode function for this.
Only show the text of an HTML document between
<body> and
</body>. This will avoid printing the title and style information. You will need to add additional variables
in_body and
tag to that loop, to track whether or not you are between
body tags and to keep around the tag name when inside a tag.
Support multiple file formats in
show: use the
Content-Type header to determine the content type, and if it isn't
text/html, just show the whole document instead of stripping out tags and only showing text in the
<body>. | https://browser.engineering/http.html | CC-MAIN-2019-51 | refinedweb | 3,899 | 71.95 |
PCRE - Perl-compatible regular expressions
#include <pcre.h>
pcre_jit_stack
*pcre_jit_stack_alloc(int startsize,
int maxsize);
pcre16_jit_stack
*pcre16_jit_stack_alloc(int startsize,
int maxsize);
pcre32_jit_stack
*pcre32_jit_stack_alloc(int startsize,
int maxsize);
This function is used to create a stack for use by the code compiled by the JIT optimization of pcre[16|32]_study(). The arguments are a starting size for the stack, and a maximum size to which it is allowed to grow. The result can be passed to the JIT run-time code by pcre[16|32]_assign_jit_stack(), or that function can set up a callback for obtaining a stack. A maximum stack size of 512K to 1M should be more than enough for any pattern. For more details, see the pcrejit page.
There is a complete description of the PCRE native API in the pcreapi page and a description of the POSIX API in the pcreposix page. | https://www.zanteres.com/manpages/pcre_jit_stack_alloc.3.html | CC-MAIN-2022-33 | refinedweb | 145 | 69.11 |
On Wed, Jan 03, 2001 at 07:37:22PM -0500, Guido van Rossum wrote: > > In other words: use it! :) > > Mind doing a few platform tests on the (new version of the) patch? Well, only a bit :) It's annoying that BSDI doesn't come with autoconf, but I managed to use all my early-morning wit (it's 6:30AM <wink>) to work around it. I've tested it on BSDI 4.1 and FreeBSD 4.2-RELEASE. > I already know that it works on Red Hat Linux 6.2 (my box) and Solaris > 2.6 (Andrew's box). I would be delighted to know that it works on at > least one other platform that has getc_unlocked() and one platform > that doesn't have it! Sorry, I have to disappoint you. FreeBSD does have getc_unlocked, they just didn't document it. Hurrah for autoconf ;P Anyway, it worked like a charm on BSDI: (Python 2.0) total 1794310 chars and 37660 lines count_chars_lines 0.310 0.300 readlines_sizehint 0.150 0.150 using_fileinput 2.013 2.017 while_readline 1.006 1.000 (CVS Python + getc_unlocked) daemon2:~/python/python/dist/src > ./python test.py termcapx10 total 1794310 chars and 37660 lines count_chars_lines 0.354 0.350 readlines_sizehint 0.182 0.183 using_fileinput 1.594 1.583 while_readline 0.363 0.367 But something weird is going on on FreeBSD: (Standard CVS Python) > ./python ~thomas/test.py ~thomas/termcapx10 total 1794310 chars and 37660 lines count_chars_lines 0.265 0.266 readlines_sizehint 0.148 0.148 using_fileinput 0.943 0.938 while_readline 0.214 0.219 (CVS+getc_unlocked) > ./python-getc-unlocked ~thomas/test.py ~thomas/termcapx10 total 1794310 chars and 37660 lines count_chars_lines 0.266 0.266 readlines_sizehint 0.151 0.141 using_fileinput 1.066 1.078 while_readline 0.283 0.281 This was sufficiently unexpected that I looked a bit further. The FreeBSD Python was compiled without editing Modules/Setup, so it was statically linked, no readline etc, but *with* threads (which are on by default, and functional on both FreeBSD and BSDI 4.1.) 
Here's the timings after I enabled just '*shared*': (CVS + *shared*) > ./python ~thomas/test.py ~thomas/termcapx10 total 1794310 chars and 37660 lines count_chars_lines 0.276 0.273 readlines_sizehint 0.150 0.156 using_fileinput 0.902 0.898 while_readline 0.206 0.203 (This was not a fluke, I repeated it several times, getting hardly any variation.) Enabling readline and cursesmodule had no additional effect. Adding *shared* to the getc_unlocked tree saw roughly the same improvement, but was still slower than without getc_unlocked. (CVS + *shared* + getc_unlocked) > ./python ~thomas/test.py ~thomas/termcapx10 total 1794310 chars and 37660 lines count_chars_lines 0.272 0.273 readlines_sizehint 0.149 0.148 using_fileinput 1.031 1.031 while_readline 0.267 0.266 Increasing the size of the testfile didn't change anything, other than the absolute numbers. I browsed stdio.h, where both getc() and getc_unlocked() are defined as macros. getc_unlocked is defined as: #define __sgetc(p) (--(p)->_r < 0 ? __srget(p) : (int)(*(p)->_p++)) #define getc_unlocked(fp) __sgetc(fp) and getc either as #define getc(fp) getc_unlocked(fp) (without threads) or static __inline int \ __getc_locked(FILE *_fp) \ { \ extern int __isthreaded; \ int _ret; \ if (__isthreaded) \ _FLOCKFILE(_fp); \ _ret = getc_unlocked(_fp); \ if (__isthreaded) \ funlockfile(_fp); \ return (_ret); \ } #define getc(fp) __getc_locked(fp) _FLOCKFILE(x) is defined as flockfile(x), so that isn't the difference. The speed difference has to be in the quick-and-easy test for whether the locking is even necessary. 
Starting a thread on 'time.sleep(900)' in test.py shows these numbers: (standard CVS python) > ./python-shared-std ~/test.py ~/termcapx10 total 1794310 chars and 37660 lines count_chars_lines 0.433 0.445 readlines_sizehint 0.204 0.188 using_fileinput 1.595 1.594 while_readline 0.456 0.453 (getc_unlocked) > ./python-getc-unlocked-shared ~/test.py ~/termcapx10 total 1794310 chars and 37660 lines count_chars_lines 0.441 0.453 readlines_sizehint 0.206 0.195 using_fileinput 1.677 1.688 while_readline 0.509 0.508 So... using getc_unlocked manually for performance reasons isn't a cardinal sin on FreeBSD only if you are really using threads :-) Lets-outsmart-the-OS-scheduler-next!-ly y'rs -- Thomas Wouters <thomas@xs4all.net> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! | https://mail.python.org/pipermail/python-dev/2001-January/011324.html | CC-MAIN-2019-22 | refinedweb | 715 | 71.21 |
A Datasource is any URL that provides data to a feature at runtime. In order to be used as a datasource, a URL must:
be publicly accessible using javascript in the browser (have appropriate cross-origin headers)
respond with a JSON payload
respond to a GET request
Datasources are shared across all features in your project, and added using the Project panel under the Datasources tab. This reduces redundancy as multiple features that need to use the same data can share it, and allows events in one feature (say, a successful resource creation) to modify data used in another feature. Give it a name, provide a URL, any query or header parameters, and hit Fetch. You have now added a datasource to your Project.
The response is not stored by Mason, but used during the build process to determine the structure of the expected response and configure mapping rules for your data and UI. The response structure must be consistent.
Datasources are created in your project, but fetched by your features when they mount. This is because you may not have all the dynamic data relevant to the datasource until a specific feature mounts. In order to tell a feature to fetch a datasource when it mounts, check the box next to the datasource name in the Configure section of the builder under the Fetch Datasources header. Ensure the feature that fetches the datasource has the appropriate url parameters and callbacks, if required.
If you are using tokens or unique identifiers in your datasource, you may mark them as private using the key button in the Builder. Any header or query parameters marked as private will not be supplied to your Datasource at runtime, and must be provided by you using a callback (see below). All parameters not marked as private will be supplied to your features at runtime, and will be visible by anyone with access to your application.
You may inject dynamic header or query parameters, like authorization tokens, at runtime by using the
willFetchData callback. Your function will receive the datasource to be fetched as an argument, which you may modify and return. See below for an example.
import React from 'react';import { Canvas } from '@mason-api/react-sdk';class MyFeed extends React.Component {render() {const { search, token, user } = this.props;return <Canvasid="YOUR_COMPONENT_ID"willFetchData={(datasource, featureId) => {if (datasource.id === 'YOUR_DATASOURCE_ID') {return {...datasource,headers: { 'Authorization': token },queries: { 'search': search },};}return datasource;}}/>;}}
Your function will receive two arguments:
datasource, an object with the following structure
{url: '',headers: {'Content-Type': 'application/json'},queries: {foo: 'bar'},name: 'DATASOURCE_NAME',id: 'DATASOURCE_ID'
and
featureId, the 12-byte unique identifier of your feature (which you can find in the Export instructions in the Builder).
You may modify any part of the datasource including the URL. However, URL modifications are most easily accomplished using the
urlParams property. You must return the datasource, if you have no modifications return the datasource unmodified.
As an alternative to providing callbacks using props, particularly if you are not using React, you may use the
Mason.callback function to register your
willSendData callback. Here is an example:
import Mason from '@mason-api/react-sdk';Mason.callback('willSendData', (datasource, featureId) => {if (datasource.id === 'YOUR_DATASOURCE_ID') {return {...datasource,headers: { 'Authorization': token },queries: { 'search': search },};}return datasource;}, 'YOUR_FEATURE_ID');
The third argument to the
callback function is an optional feature id. Even though datasources are shared across all features in a project, fetch events are triggered by feature's mounting (more on this below). If you want Mason to invoke a callback only when a specific feature is fetching a datasource, you may provide its id as the third argument.
In some cases you may want to use a form submission response to update a datasource and trigger a UI update. To accomplish this, use the Success event menu in the Form tab of the Configure section of the Builder. You may merge or replace a datasource with the response from your form submission. You may also trigger a refetch of the entire datasource.
Replace simply overwrites the entire datasource. When merging, the behavior is as follows:
if the Datasource is an object, the response will be shallowly merged
if the Datasource is an array, and the response is not an array, the response will be pushed onto the end of the array
if the Datasource is an array, and the response is an array, the response's entries will be pushed onto the end of the array | https://docs.trymason.com/development/fetching-data | CC-MAIN-2020-50 | refinedweb | 744 | 52.9 |
source: Strip is a password generation utility made freely available by Zetetic Enterprises. Strip is a PalmOS based application designed to generate and store important passwords. A problem with Strip makes it possible for a user that has attained an encrypted password generated with Strip to easily guess the password. The pseudo-random number generation is done through the SysRandom() syscall of PalmOS, which offers simplistic number generation. Additionally, the PNRG is seeded with number that may be small depending on the operation time of the Palm device. Finally, the maximum size of the seed is 16 bits. Therefore, it is possible for a user to easily guess passwords generated with Strip, which have a maximum of 2^16 possibilities. /* * Crack passwords generate by strip ("Secure Tool for Recalling * Important Passwords") for the Palm; see * <> for details. * * Copyright (c) 2001 by Thomas Roessler * <roessler@does-not-exist.org>. * * Use, distribute and modify freely. * */ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <crypt.h> /* The PalmOS SysRandom() RNG. */ static unsigned int multiplier = 22695477; static unsigned int _seed = 0; short palm_rand (unsigned int new_seed) { if (new_seed) _seed = new_seed; _seed = (_seed * multiplier) + 1; return (short) ((_seed >> 16) & 0x7fff); } /* * Strip's password generation algorithm for the alphanumeric case - * you can easily change this to cover the other cases as well. 
*/ static char *alphas = "abcdefhijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"; static char *numerics = "0123456789"; char *possible_password (unsigned int seed, int size) { static char pwbuff[1024]; char z[1024]; int i, r; int valids; if (size > sizeof (pwbuff)) exit (1); sprintf (z, "%s%s",numerics, alphas); valids = strlen (z); r = palm_rand (seed); for (i = 0; i < size; i++) { r = palm_rand (0); pwbuff[i] = z[r % valids]; } pwbuff[i] = '\0'; return pwbuff; } /* check all possible passwords */ int main (int argc, char *argv[]) { int i; char *pw; for (i = 0; i <= 0xffff; i++) { pw = possible_password ((short) i, 8); if (!argv[1] || !strcmp (argv[1], crypt (pw, argv[1]))) printf ("%s\n", pw); } return 0; }
Related ExploitsTrying to match CVEs (1): CVE-2001-0597
Trying to match OSVDBs (1): 7677
Other Possible E-DB Search Terms: Strip Password Generator 0.3/0.4/0.5, Strip Password Generator 0.3, Strip Password Generator | https://www.exploit-db.com/exploits/20746/ | CC-MAIN-2017-04 | refinedweb | 361 | 53.81 |
Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
We have internal email lists for questions about programming languages. Here's one that came across recently that I thought illustrated a good point about language design.
An interview candidate gave the following awful implementation of the factorial function. (Recall that factorial is notated "n!", and is defined as the product of all the integers from 1 to n. 0! is defined as 1. So 4! = 4 x 3 x 2 x 1 = 24.)
If you note that n! = n x ((n-1)!) then a recursive solution comes to mind. When asked to implement the factorial function in C, an interview candidate came up with this:
int F(int x){ return (x > 1) ? (x * F(--x)) : x;}
Now, leaving aside for the moment the fact that this badly-named function does no bounds checking on the inputs, potentially consumes the entire stack, returns a wrong or nonsensical answer for inputs less than one, and is a recursive solution to a problem that can easily be solved with a simple lookup table, it has another big problem -- it doesn't even return the correct answer for any input greater than one either! F(4) will return 6, not 24, in Microsoft C.
And yet the seemingly equivalent C#, JScript and VBScript programs return 24.
The question was "What the heck is up with that?"
This looks perfectly straightforward. If x is 4, then we evaluate the consequence of the conditional operator...4 * F(3) = 4 * 3 * F(2) = 4 * 3 * 2 * F(1) = 4 * 3 * 2 * 1 = 24
right? What is broken with C?
Page 52 of K&R has the answer.
evaluation."
The Microsoft C compiler actually evaluates the function call first, which causes x to be decremented before the multiplication. So this is actually the same as 3 * F(3) = 3 * 2 * F(1) = 3 * 2 * 1 = 6.
Why does the specification call out that the compiler can choose any order for subexpression evaluation? Because then the compiler can choose to optimize the order so that it consumes a small number of stack or register slots for the results.
Of course, C was written back in the dark ages -- the 1970's -- when indeed most languages were pretty ill-specified and full of terrible "gotchas" like this. JScript, VBScript, C#, and most modern languages do guarantee that functions will be evaluated in left-to-right order. In all these languages,
x = f() + g() * h();
will evaluate f, then g, then h.
As K&R notes "The moral is that writing code that depends on order of evaluations is a bad programming practice in any language." Amen to that! | http://blogs.msdn.com/b/ericlippert/archive/2005/04/28/bad-recursion-revisited.aspx?PageIndex=2 | CC-MAIN-2015-35 | refinedweb | 458 | 65.52 |
#include <masterclient.h>
Abstract class base for all MasterClients. This is expected to fetch a list of IP addresses which will be turned into Servers.
Clears the server list.
Extracts engine name from pluginInfo() if available.
Method that is supposed to produce the contents of server list request packet that is sent to the master server.
Implemented in CustomServers, and MasterManager.
Serves as an informative role for MasterManager. If the master client is disabled, master manager will omit it during the refresh..
Emit this signal each time a new batch of servers is received.
This signal should be called by the plugin after the response packet delivered to readMasterResponse() is processed. Master servers that send their response in multiple packets should be handled nicely by.
Implemented in CustomServers, and MasterManager.
Reads master response only if address and port are of this server.
Reimplemented by MasterManager.
Reimplemented in MasterManager.
Called to read and analyze the response from the MasterServer.
Implemented in CustomServers, and MasterManager.
Requests an updated server list from the master.
This function is virtual since MasterManager overrides it.
Reimplemented in CustomServers, and MasterManager.
Times the refreshing process out.
This calls timeoutRefreshEx() and then emits listUpdated() signal.
Indicates that the server has timeouted recently.
This is reset to false by refresh() and set to true by timeoutRefresh(). If you reimplement refresh() please remember to set this to false.
Generic Doomseeker's socket used for network communication.
If this is not NULL plugins may use this socket to send UDP packets. In fact the default implementation of MasterClient::refresh() method will use this socket in order to send the challenge data. In this case any responses that arrive on this socket will automatically be redirected to appropriate master client instance by the Doomseeker's Refreshing thread.
If engine requires a customized way to obtain the server list it must be implemented within the plugin itself. | http://doomseeker.drdteam.org/docs/doomseeker_0.8/classMasterClient.php | CC-MAIN-2019-43 | refinedweb | 314 | 52.56 |
The diagram given below describes various components of the .NET Framework [3]. .NET languages fall into the following categories:

- The component developed in this type of language can be used by any other language.
- The language in this category can use classes produced in any other language. In simple words, this means that the language can instantiate classes developed in another language. This is similar to how COM components can be instantiated by your ASP code.
- Languages in this category can not just use the classes, as in the CONSUMER category, but can also extend classes using inheritance.
CLR is the .NET equivalent of the Java Virtual Machine (JVM). It is the runtime that converts MSIL code into the host machine's language code, which is then executed appropriately. [7] gives a detailed description of the CLR.
Let's look at the following Visual C++ code extract:
CView myView;
myView.MessageBox("Hello World", ".Net Article", MB_OK);
::MessageBeep(MB_ICONHAND);
The code above first creates an object of CView, a built-in MFC class. In the second line the code calls the CView method MessageBox() to show a dialog box containing an OK button and a "Hello World" message. The caption of the dialog box is ".Net Article". In the third line the program makes a direct call to a Windows API; the scope resolution symbol "::" before a method indicates that it is a direct API call. MessageBeep() uses a system-defined wave file, "MB_ICONHAND", to play the appropriate sound. The above three lines use two different types of functions: one that MFC provides and the other that the operating system provides. Remember that MFC is nothing but a group of wrapper classes that encapsulate the APIs. You can totally bypass MFC and develop an application solely using APIs. How about writing the same code in Visual Basic? The MFC equivalent in Visual Basic is VBRun. Although you may be able to use the MessageBeep() API in VB, the class CView does not exist in VBRun; you would have to learn VBRun in order to use an equivalent of CView. The same holds for other programming languages, and to further complicate the situation, various vendors have their own names and hierarchies for their wrapper classes. What this means is that MFC is Microsoft's set of wrapper classes for C++; Borland has its own wrapper classes. The same goes for Java: Microsoft provides a powerful wrapper-class package named WFC to be used with Visual J++, while other vendors have their own wrapper classes.
In addition to this, if your application uses a COM component then your code would look radically different in different languages. This is because different languages have different implementations of COM, and have different data types. Given below is an extract of COM component code (taken from the SMTP.Server application). The code below returns a string. But in the world of COM, there is no string data type; the equivalent is "BSTR". The MFC implementation of a string is the class CString (very similar to the String class in Java). CString provides a function AllocSysString() that does the necessary conversion to BSTR.
BSTR CServer::GetCcTo()
{
    return m_strCcTo.AllocSysString();
}
Now, if the COM component were to be developed in VB, the above code would need to go through serious changes. Based on the above discussion, we come to the following conclusion:

Every Windows application language has its own implementations of, and interfaces to, the following:

- COM components
- Operating-system-specific APIs (e.g. Win32 API, Win16 API, Windows CE APIs)
- Wrapper classes (e.g. MFC, VBRun, WFC)

The above-mentioned differences create unnecessary work for a programmer and hamper interoperability between languages.
Many Visual C++ programmers are reluctant to learn Visual Basic
despite the fact that VB is much easier than VC. VC applications are
faster, and this may be why programmers prefer VC, but in the case of
a simple COM component the increased productivity in VB more than makes up for the
slight penalty in speed. Personally I prefer to stick to VC, mainly because
switching would mean learning VBRun, VB-specific data types, and VB's
specific COM implementation. It would be great if VB and VC had common
data types, and if MFC were also present in VB. This would reduce my learning
curve to almost none, and would encourage thousands of programmers like me to
embrace VB.
There is another problem with the existing COM implementation. While
a COM component can be used from many languages irrespective of how it was
developed, these components cannot be extended or inherited from. I have always
fancied the idea of being able to inherit from or extend a COM component.
Unfortunately it is not possible (at least up till now) unless you have
access to the source code of the component. So the need of the hour is:
A Common wrapper class implementation
A Common Data Type system
Ability to inherit/extend from COM Components.
And the solution to all this is the .NET Class Framework. Those of
you who are familiar with MFC/VBRun/WFC can look at this framework as a group of
wrapper classes that are shared across VC, VB, and any other .NET compliant
language (a language that follows the Common Language Specification
"CLS" set forth by Microsoft). So now we all have to learn
only one class framework, and can use it whether we are working in VB, in VC, or
in any other CLS compliant language. An important piece of terminology related to
the .NET Framework is the Namespace. Since you will frequently come across this
term in any .NET article, it is worth formally defining: a namespace is a
logical grouping of related interfaces, structures and classes. Java programmers
are familiar with the package concept; the namespace is very similar to the
package. A namespace may contain further namespaces, resulting
in a tree-like hierarchical structure. The .NET Class Framework is nothing but a
hierarchical structure of namespaces. In the .NET Class Framework, "System"
is the root namespace. A few of the namespaces within System are
System.Security, System.Data, System.Console, System.WinForms etc.
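To make the hierarchy concrete, here is a small sketch in C#; the namespace and class names below are invented purely for illustration and are not part of the .NET Class Framework:

```csharp
using System;                  // pull in the root namespace

namespace Acme                 // outer logical grouping
{
    namespace Mail             // nested namespace: full name is Acme.Mail
    {
        public class SmtpServer
        {
            public void Send() { Console.WriteLine("sending..."); }
        }
    }
}

// Elsewhere, the class is reached through its full hierarchical name:
//     new Acme.Mail.SmtpServer().Send();
```

Just as with Java packages, the dotted name mirrors the tree of nested namespaces.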
If you want to program a .NET application you will have to learn
the .NET Class Framework, just as a Java programmer learns the basic package
hierarchy (e.g. java.util, java.lang, javax.servlet etc.).
Web Services are an extension of ActiveX. Those of you who have
used both ASP and JSP know the apparent shortcomings of ASP. JSP has been
enriched with the concepts of Beans and Tags; the ASP equivalents for Beans and Tags
were ActiveX Controls and ActiveX automation servers. Let me take a minute to
explain this point a bit further. Note that Web Services is NOT a Microsoft proprietary
standard. It is a W3 Consortium standard, and has been developed by Microsoft,
IBM and many other big names of the industry.
Functions are of two types: the ASP built-in functions, and
programmer defined/implemented functions. In order to use the built-in
functions you just need to pass the appropriate parameters and make a simple
call; the functions are implemented by ASP itself. The
string manipulation and number conversion functions are examples of
built-in functions.
Programmer defined functions are functions that are
defined and implemented by the programmer. A programmer can either write these
functions in the same asp file or write them in another file. If the
function code resides in the same asp file then the programmer can directly call
that function. If the function resides in another file, say "func.asp",
then the programmer needs to include that file by writing a statement like
<!-- #include file="func.asp" -->, after which the function can be
used. Programmers can also make ActiveX automation servers, and
call the various functions of these ActiveX servers. But one limitation is very
obvious: no matter which type of function you use, the function MUST
physically reside on the same machine. For
example, your ActiveX automation server must be implemented either as a .dll
or as an .exe, and must also be registered in the Windows Registry, before
asp code can call its functions. (You may download
SMTP.Server - an ActiveX component developed by the author - to get a better
idea of how to use an ActiveX component from your ASP/VC/VB code.) In a world
where the Internet has become not only a necessity but also a way of life, it
is obvious that this limitation is a strong one. Microsoft's answer to this
problem is "Web Services". The idea goes something like this:
The Web Service provider develops a useful function(s) and
publishes/advertises it. The Web Service provider uses the Web Service
Description Language (WSDL) standard to describe the interface of
the function(s). This is much like the Type Library (TLB) and Object
Description Language (ODL) files that need to be generated with ActiveX
automation servers.
The programmer/client who needs the function does a lookup
using a process called Web Service Discovery or SOAP Discovery
(also called DISCO, for Web Service DISCOvery).
The actual communication between the client program and the
web service takes place through a protocol called the Simple Object Access
Protocol (SOAP) - SOAP is an XML based lightweight protocol used
for communication in a decentralized, distributed environment.
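To make the wire format concrete, a SOAP request carrying a call to a hypothetical GetGreeting function might look like the following. Everything here - the endpoint, namespace URI, and element names - is invented for illustration:

```xml
POST /Greeter.asmx HTTP/1.1
Host: www.example.com
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://example.com/GetGreeting"

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetGreeting xmlns="http://example.com/">
      <userName>Stranger</userName>
    </GetGreeting>
  </soap:Body>
</soap:Envelope>
```

Note that the entire exchange is ordinary HTTP carrying XML - no binary protocol is involved, which is exactly what lets SOAP pass through firewalls that block DCOM or IIOP traffic.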
As is evident from the above discussion, at the heart of all
this communication is XML: both SOAP and WSDL leverage XML.
We have all either used or at least heard of network
communication technologies like RPC (Remote Procedure Call), RMI (Remote Method
Invocation), CORBA, DCOM, and IIOP. All these technologies have the same purpose -
to enable calling a function/object on a remote machine. So
how is a Web Service (or SOAP) any different from these existing technologies?
The main difference is that SOAP uses the HTTP/HTTPS protocol, unlike all the other
technologies, which use specialized protocols for distributed communication.
With this simplified approach, Microsoft has tried to bring sanity and
unification to the world of distributed programming. Distributed applications
are heavily dependent on JNDI lookups, RMI, CORBA, IIOP, serializability and
other intricacies. With Web Services and the .NET development tools, Microsoft has
provided us with a much simpler way of developing distributed applications. So
what is the catch? The obvious catch is that this is an ASP.NET specific
technology (at least for now); but with time SOAP, WSDL and DISCO will most
certainly gain wider acceptance.
According to Microsoft's tests, an application developed in ASP.NET
using ADO.NET and Web Services is many times more efficient than an
equivalent application developed in Java with JSP, Servlets and EJBs. [1]
Note that .NET has no direct equivalent of EJBs, so considering
Web Services as an equivalent to EJB would be incorrect. However, some of the
functionality of an EJB can be provided by Web Services.
With .NET, Microsoft has followed one guiding principle - make
it as simple as possible. And Web Services are no exception to this ideology.
See the example below and judge for yourself how easy it is to develop a
Web Service. Then compare this with how "easy?" it was to develop an
ActiveX automation server, or how "easy?" it is to develop an EJB.
Open any text editor, type in the following Visual Basic code,
and save the file under the ".asmx" extension.
Imports System
Imports System.Web.Services
Imports Microsoft.VisualBasic

Public Class HelloWorld : Inherits WebService

    <WebMethod()> Public Function GreetTheUser(strUserName As String) As String
        Select Case strUserName
            Case "Kashif Manzoor"
                Return "Hello Author"
            Case "Amna Kashif"
                Return "Hello Ms. Amna Kashif"
            Case "Ahmed Luqman"
                Return "Hello little Ahmed"
            Case Else
                Return "Hello Stranger"
        End Select
    End Function

End Class
The first three lines import the needed classes. Imports is similar
to the import used in Java or the #include in C/C++. The rest of the code is
self explanatory. Notice that the class extends/inherits from the built-in WebService
class; this is mandatory. Also notice that the function is marked
with the <WebMethod()> attribute; this indicates that the function
can be invoked from the web, across the Internet. You may add other private
functions to your class, but those functions will not be accessible to the outside
world.
So that's it! You have successfully made your first Web Service. Although the service simply takes in a name and returns a greeting,
it is enough to give you a flavor of Web Services. This Web Service can now be
accessed from your ASP.NET code. This article does not intend to explain either
ASP.NET or Web Services in detail; the interested reader should consult the ASP.NET
manual or visit the MSDN site for more details.
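Since the article stresses how close C# is to Java, it is worth seeing the same service sketched in C#. This is a hypothetical translation by the editor, not code from the original article:

```csharp
using System;
using System.Web.Services;

public class HelloWorld : WebService   // inheriting from WebService is mandatory here too
{
    [WebMethod]   // C# uses [..] attribute syntax where VB uses <..>
    public string GreetTheUser(string strUserName)
    {
        switch (strUserName)
        {
            case "Kashif Manzoor": return "Hello Author";
            case "Amna Kashif":    return "Hello Ms. Amna Kashif";
            case "Ahmed Luqman":   return "Hello little Ahmed";
            default:               return "Hello Stranger";
        }
    }
}
```

Apart from surface syntax (attributes, switch vs Select Case), the two versions are line-for-line equivalent - which is precisely the point of a shared class framework.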
Deploy your ".asmx" file on a Web Service aware
application server like IIS, and open up a Web Service aware browser like IE.
Type in the appropriate URL for the file. If the file is in the default web
application folder then the URL would be "".
What do you think would happen? You will see a well formatted web page giving
you the details of the GreetTheUser() method. At the bottom of the page you
will be given an edit box where you can enter the "strUserName", with
a button beside the edit box. Once you press it, you will receive the
greeting string as an XML document. This is a new and wonderful
feature.
Let's not be unfair to Sun's technologies here. Making an EJB
(at least a stateless or stateful session EJB) is no more difficult than the above
example. What makes EJBs tricky is the deployment, the JNDI lookups, the stubs and
the application servers that support EJBs. With Microsoft's "click and
ready to go" approach and the easy to use utilities that come with Visual
Studio.NET, deploying any .NET application is extremely easy.
In conclusion, Web Services are an evolutionary idea as opposed to
a revolutionary one: just another distributed development tool - which happens
to be extremely simple to use. The incorporation of Web Services in ASP.NET has
taken ASP to a new level of respectability. Web Services have already started
gaining popularity and are also being incorporated into the Java platform. Visit
Sun's web site to get the latest on Web Services support in the Java platform.
Just as Win Forms provide a unified way of developing GUIs for
desktop applications, Web Forms provide a similar tool for web applications.
Web Forms have been introduced in .NET as a part of ASP.NET. Web Forms are a
forms engine that provides a browser-based user interface.
To appreciate Web Forms, consider how the GUI is rendered in
current web applications. The GUI is rendered using HTML tags (e.g. <input
type=text name=editbox1 maxlength=10 size=10> will draw an edit
box on the web page). Web Forms can have the intelligence to use HTML,
DHTML, WML etc. to draw the controls on the web page based on the browser's
capabilities. Web Forms can also incorporate the logic behind these controls.
It's like hooking up code to a GUI control. Just as in a VB application,
you can associate code with a button on the web page, and this code will run on
the server when the button is pressed. This is in contrast to scripts, which
run on the client when a button is pressed. This approach is different from the
Java approach. In Java a programmer can simulate this functionality through JavaScript
and Servlets, but with Web Forms this is done transparently. A Java programmer
may consider it as if each HTML control has its own dedicated "Servlet"
running in the background. Every time the control receives an event of interest
(e.g. button pressed, selection changed etc.) this specific "Servlet"
is called. This results in much cleaner code and an excellent separation
between the presentation and business logic layers.
Web Forms consist of two parts: a template, which contains
HTML-based layout information for all the GUI elements, and a component, which
contains all the logic to be hooked to the controls or GUI elements. This
provides a neat separation between the presentation layer and the application logic layer.
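A minimal sketch of the two parts follows. The file names, control names and page class are invented for illustration; they are not from the original article:

```aspx
<%-- template: Greet.aspx - layout only, no logic --%>
<%@ Page Inherits="GreetPage" %>
<form runat="server">
    <asp:TextBox id="txtName" runat="server" />
    <asp:Button  id="btnGreet" Text="Greet" OnClick="BtnGreet_Click" runat="server" />
    <asp:Label   id="lblResult" runat="server" />
</form>
```

```csharp
// component (code-behind): this handler runs on the SERVER when the button is pressed
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class GreetPage : Page
{
    protected TextBox txtName;    // bound to the controls declared in the template
    protected Label   lblResult;

    protected void BtnGreet_Click(object sender, EventArgs e)
    {
        lblResult.Text = "Hello " + txtName.Text;
    }
}
```

The template never contains logic and the component never contains layout, which is the separation the article describes.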
The GUI is rendered on the client side, while the code that
has been hooked to the GUI elements runs on the server side (very much like
a button being pressed on a JSP page and a Servlet being called in response - but
with Web Forms this has been made extremely easy). The incorporation of Web Forms in
ASP.NET is an attempt to take ASP to a new level, where it can seriously
challenge JSP.
Another good feature of Web Forms is that they can be built with
enough intelligence to support a vast variety of browsers. The same ASP
page will render itself using DHTML if the browser is IE 5.5; if the
browser is Netscape, the page will be rendered using HTML tags; and if the
page is being accessed through a WAP device, the same page will render itself
using WML tags.
One obvious disadvantage of ASP compared to Java was that ASP
code was a maintenance nightmare. While a Java programmer can
use Java Beans, Tags and Servlets to achieve presentation and business
layer separation, no such mechanism was available to an ASP programmer. With
ASP.NET, Microsoft has provided such presentation-business layer separation by
introducing the concept of Web Forms.
For those of you (like me) who turned to Java for web
development mainly due to the spaghetti code of ASP, ASP.NET is worth exploring,
since it introduces some exciting new ways to write clean code. (Personally I
find Web Forms an exciting new concept - one that does not have a direct equivalent
in the Java platform.)
Windows Forms (also called Win Forms) are used to create GUIs for
Windows desktop applications. The idea of Win Forms has been borrowed from the Windows
Foundation Classes (WFC) which were used for Visual J++. Win Forms provide an
integrated and unified way of developing GUIs, with a rich variety of
Windows controls and user interface support.
Previously, numerous classes and functions were used by programmers to
handle the GUI: MFC in VC++, the direct API in C++ and the VB Forms engine in VB are just a
few examples of the different ways of handling GUIs.
Simply put, Win Forms are just another group of wrapper classes
that deal specifically with the GUI. The Win Form classes encapsulate the Windows
graphical APIs. Programmers no longer need to use the Windows graphical
APIs directly; and since Win Forms have been made a part of the .NET
Class Framework, all the programming languages use the same Win Form classes. This
rids programmers of the need to learn different GUI
classes/tools. Win Forms are part of the namespace
System.WinForms.
With Win Forms we can make a single user interface and use it in
VC++, VB and C#. Using Visual Studio.NET, simply design the GUI by dragging the
controls onto a form (something all VC++ and VB programmers are well familiar
with). Now you can use the same form in VB, VC++ or C#. This is
all made possible because Visual Studio.NET uses the System.WinForms
namespace to draw the GUI, and any language with the appropriate CLS
compliance can use this form directly.
Sun intended to present the JVM as a single-language virtual
machine, meaning that only a Java program can be converted to byte code
(a .class file) and then presented to the JVM, which interprets the program and runs
it on the host machine. Although in concept any language can be compiled to
Java byte code and then fed to the JVM, Sun did not encourage such approaches.
Despite Sun's lack of initiative in this regard, many researchers and companies
have developed languages following this approach. Sun's vision of Java as
"one language fits all" has both its advocates and its
critics. [5]
With the CLR, Microsoft has adopted a much more liberal
policy. Microsoft has itself evolved/developed/modified many of its
programming languages to be compliant with the .NET CLR.
Although
Visual C++ (VC++) has undergone changes to incorporate .NET, VC++ also
maintains its status as a platform dependent programming language. Many new MFC
classes have been added; a programmer can choose between using MFC and compiling
the program into a platform specific executable file, or using the .NET Framework
classes and compiling into a platform independent MSIL file. A programmer can also
specify (via directives) whenever he uses "unsafe" code (code that
bypasses the CLR - e.g. the use of pointers).
ASP is another
technology that has been improved markedly. Most programmers know that ASP did not
measure up to JSP; Microsoft has tried to turn the tables by introducing
ASP.NET. ASP.NET makes extensive use of Web Services. Web Services are an open
standard, and JSP can use Web Services too (Sun's official web site gives details on
Web Services and how they are being incorporated into the Java platform). There are
many other features that have been introduced in ASP.NET to make it an ideal
distributed programming tool and to measure up against JSP. ASP code within the
<% %> tag is compiled into a .NET Framework class (similar to JSP code being
compiled into a servlet). This approach is different from how the <% %>
tag was handled in classic ASP.
Out
of all the .NET languages, Visual Basic.NET (VB.NET) is the one language that has
probably undergone the most changes. VB.NET may now be considered a complete
object-oriented language (as opposed to its previous "half object-based and
half object-oriented" status).
Microsoft has also
developed a brand new programming language, C# (C Sharp). This language makes
full use of .NET. It is a pure object-oriented language. A Java programmer will
find most aspects of this language identical to Java. If you are a
newcomer to Microsoft technologies, this language is the easiest way to get on the
.NET bandwagon. While VC++ and VB enthusiasts will stick to VC.NET and VB.NET,
they would probably increase their productivity by switching to C#. C# was
developed to make full use of all the intricacies of .NET, and the learning curve of
C# for a Java programmer is minimal. Microsoft has also come up with the
Microsoft Java Language Conversion Assistant - a tool that
automatically converts existing Java-language source code into C#, for developers
who want to move their existing applications to the Microsoft .NET Framework.
Microsoft
has also developed J# (J Sharp). C# may be similar to Java, but it is not
entirely identical; it is for this reason that Microsoft has developed J# - the
syntax of J# is identical to Visual J++. Microsoft's growing legal battle with
Sun over Visual J++ forced Microsoft to discontinue Visual J++, so J# is
Microsoft's indirect continuation of Visual J++. It has been reported that
porting a medium sized Visual J++ project entirely to J# takes only a few days
of effort.
Microsoft encourages third party vendors to make
use of Visual Studio.NET (launched on Feb 13, 2002). Third party vendors can
write compilers for different languages that compile the language to MSIL
(Microsoft Intermediate Language). These vendors need not develop their own
development environment; they can easily use Visual Studio.NET as an IDE for
their .NET compliant language. A vendor has already produced a COBOL.NET that
integrates with Visual Studio.NET and compiles into MSIL. [3] Theoretically it
would then be possible to come up with a Java compiler that compiles into MSIL
instead of Java byte code, and uses the CLR instead of the JVM. However, Microsoft has
not pursued this due to possible legal action by Sun.
Although the beta of Visual Studio.NET had been around for over
two years, it was officially launched on Feb 13, 2002. The future of .NET is
very promising. With .NET, Microsoft has diverged from its age-old philosophy
of "proprietorship". Microsoft has always been coming up with good
tools - which unfortunately have used proprietary technologies. One reason for the
unpopularity of DNA, COM and DCOM was that they were all based on proprietary
Microsoft binary formats. Microsoft has learned from its mistakes; .NET has a
foundation of ASCII based XML. Microsoft submitted C# and the CLI for standardization
to ECMA, which on December 13, 2001 ratified the C# and Common
Language Infrastructure (CLI) specifications as international standards.
The ECMA standards will be known as ECMA-334 (C#) and ECMA-335 (the CLI); there
is also a technical report on the CLI which will be known as ECMA TR84. In
addition, ECMA approved the fast-track motion of these specifications to ISO.
This is a huge step toward the Microsoft .NET Framework being widely accepted by
the industry.
As of now, the CLR is only available on the Windows
platform. .NET can only challenge Java when the CLR becomes available for other
platforms. Corel is working on a "Port Project" that aims to port the
.NET Framework to Linux. Another company, by the name of Ximian, is also working
on a similar project named "Mono". With third party projects like
these, we should soon have .NET versions for various non-Windows platforms.
In
the future we will probably see J2EE and .NET chasing each other, with no single
technology ever being able to replace the other. Historically the Microsoft platform
has been considered inappropriate for enterprise solutions, whereas it is
considered a perfect tool for standalone applications. The Java platform, on the other
hand, has always been considered suitable for enterprise applications, and has
been considered slow and at times inefficient for standalone applications. With
the healthy competition between Java and .NET we will probably see much better
application platforms. As programmers, we stand to lose nothing. Whether .NET
gains more acceptance than J2EE or vice versa, the programming aspects remain
the same. Whether you program in C#, Java or J#, the syntax (essentially)
remains the same - and with the similarity between the .NET and Java framework
classes, it would take an average programmer only a month or so to switch from
one to the other.
It is the author's opinion that .NET should
be treated as a valuable addition to a programmer's toolbox. In .NET we have
another tool at our disposal; how we use it, and when we use it, is subject to
our discretion.
The author would love to hear
your comments/suggestions/criticisms regarding this article. Product and technology names
used in this article are registered trademarks of their respective
developers.
iPad slowdown - AnthonyAdb, Apr 4, 2012 8:15 PM
I'm working on an image-gallery touch-scrolling iPad app, and it needs to be able to scroll through a very large number of full-screen JPGs (1024x768, and possibly double that for the iPad 3).
The scrolling works fine with TweenLite; my problem is that when I load and unload MovieClips containing the images, the screen freezes for ~1/2 second, cutting off the scrolling animation.
I'm using cacheAsBitmap, addChild and removeChildAt for every load and unload, so I understand they all take some CPU for memory allocation etc.
So my questions are:
How can I load and unload images but still keep a smooth scrolling animation?
Is there any way to load in the background?
Any help would be appreciated
Thx
1. Re: iPad slowdown - sinious, Apr 6, 2012 7:21 AM (in response to AnthonyAdb)
You could use Stage3D for mobile in AIR 3.2 with Starling and improve your performance by a huge amount. It means learning a lot, but it will be worth it. Plus, just moving images around in a gallery is a trivial task for Starling.
Otherwise, prior to Stage3D in AIR 3.2, to really make it as smooth as possible you're going to have to opt for a blitting technique. In a nutshell, you have a single BitmapData that you draw the images into. Sounds complex, but the performance is very good. However, this technique relies 100% on the CPU, which is nowhere near as fast as using Stage3D and Starling with the GPU. So you can have a Vector of sprites containing the images in your gallery, and as the user moves their finger you simply keep running the copyPixels() method to draw the images into the BitmapData.
Here's a blitting engine and a quick demo that you can try for this but blitting itself isn't too complex.
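A minimal sketch of the copyPixels() idea (hypothetical names and landscape 1024x768 geometry; this is not code from the linked engine, and it assumes scrollX stays within 0..(pages.length-1)*1024):

```actionscript
// One on-stage Bitmap acts as the canvas; pages are pre-decoded BitmapDatas in memory.
var canvas:BitmapData = new BitmapData(1024, 768, false, 0x000000);
addChild(new Bitmap(canvas));

var pages:Vector.<BitmapData>;   // the gallery images, loaded up front
var scrollX:int = 0;             // updated as the finger moves

function redraw():void
{
    var first:int  = int(scrollX / 1024);   // left-most visible page
    var offset:int = scrollX % 1024;        // how far that page has scrolled off

    canvas.lock();
    // copy the visible slice of the current page...
    canvas.copyPixels(pages[first],
        new Rectangle(offset, 0, 1024 - offset, 768), new Point(0, 0));
    // ...and fill the remainder from the next page, if any
    if (offset > 0 && first + 1 < pages.length)
        canvas.copyPixels(pages[first + 1],
            new Rectangle(0, 0, offset, 768), new Point(1024 - offset, 0));
    canvas.unlock();
}
```

Because only one Bitmap ever sits on the display list, the display-list overhead stays constant no matter how many pages the gallery holds.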
The bottom line is that the display list works great on computers, but devices are absolutely not computers - they're a mere fraction of the speed of a modern computer. So while the gallery will work silky smooth on a computer, it can perform intolerably slowly on devices. You need to use techniques that maximize performance, rather than trying to use the standard display list or (far worse) the timeline for animation.
BTW, cacheAsBitmap can hurt you, because devices have very little memory. Not only do they have low memory, but the amount your app is allowed to use is very small. I probably wouldn't cacheAsBitmap those images. On a desktop computer I absolutely would, but not on a device.
2. Re: iPad slowdown - AnthonyAdb, Apr 9, 2012 9:17 AM (in response to sinious)
Thanks sinious,
Stage3D sounds really nice and I'll probably have a longer look at it after this small project, but for now blitting sounds easier ^__^
I've had a look at BlitMask (btw, I was using TweenLite for the sliding) and here is what I have so far:
package
{
    import flash.display.MovieClip;
    import flash.events.MouseEvent;
    import flash.events.Event;
    import com.greensock.*;
    import com.greensock.easing.*;
    import flash.display.Bitmap;
    import flash.display.BitmapData;

    public class Gallery extends MovieClip
    {
        var offsetX:Number;
        var blitMask:BlitMask;
        var slideAnchor:MovieClip;
        var slideAnchorTargetX:Number = 0;
        var imageArray:Array = new Array(Catalogue_D_00, Catalogue_D_01, Catalogue_D_02);
        var imageCurrentIndex:int = 0;
        var imageOffsetX:Number = 0;
        var imageWidth:Number = 768;
        var mousedownX:Number;

        public function Gallery()
        {
            loadImage();
        }

        function loadImage():void
        {
            if (slideAnchor == null)
            {
                // creating the slide anchor to attach all images to it
                slideAnchor = new MovieClip();
                this.addChildAt(slideAnchor, 0);
            }
            for (var i:int = 0; i < imageArray.length; i++)
            {
                var mc:MovieClip = new imageArray[i];
                slideAnchor.addChild(mc);
                mc.x = imageOffsetX;
                imageOffsetX += imageWidth;
                mc.addEventListener(MouseEvent.MOUSE_DOWN, mouseDownHandler);
            }
            blitMask = new BlitMask(slideAnchor, 0, 0, 768, 1024, true);
            blitMask.disableBitmapMode();
        }

        function mouseDownHandler(e:MouseEvent):void
        {
            TweenLite.killTweensOf(slideAnchor);
            mousedownX = this.mouseX;
            offsetX = slideAnchor.x - this.mouseX;
            e.currentTarget.addEventListener(MouseEvent.MOUSE_MOVE, mouseDragHandler);
            e.currentTarget.addEventListener(MouseEvent.MOUSE_UP, mouseUpHandler);
        }

        function mouseDragHandler(e:MouseEvent):void
        {
            slideAnchor.x = this.mouseX + offsetX;
        }

        function mouseUpHandler(e:MouseEvent):void
        {
            e.currentTarget.removeEventListener(MouseEvent.MOUSE_MOVE, mouseDragHandler);
            e.currentTarget.removeEventListener(MouseEvent.MOUSE_UP, mouseUpHandler);

            // images slide to the RIGHT
            if (this.mouseX > mousedownX && slideAnchorTargetX + imageWidth <= 0)
            {
                slideAnchorTargetX += imageWidth;
                imageCurrentIndex--;
            }
            // images slide to the LEFT
            else if (this.mouseX < mousedownX && slideAnchorTargetX - imageWidth > -imageWidth * imageArray.length)
            {
                slideAnchorTargetX -= imageWidth;
                imageCurrentIndex++;
            }
            TweenLite.to(slideAnchor, 0.5, {x:slideAnchorTargetX, onUpdate:blitMask.update, onComplete:blitMask.disableBitmapMode});
        }
    } // class Gallery
} // package
I'm only loading three 1024x768 JPGs at the same time for this test, but my iPad 1 is still very slow to run this, and the motion is not smooth at all.
Did I miss something with the BlitMask?
3. Re: iPad slowdown - sinious, Apr 9, 2012 11:36 AM (in response to AnthonyAdb)
I'd make two recommendations. First, let me say I've never used Greensock's BlitMask; I do my own blitting, just copyPixels()-ing from images in memory onto my BitmapData. It's a pretty straightforward technique.
For one, I find MOUSE_MOVE is a very sporadic and loud event. It fires off TONS of times and the firing is very erratic. What I do is start with MOUSE_DOWN. Once that handler fires I remember where my mouse is (stage position) and I just keep track of a single integer. I subtract from it if the user moves their finger left, or add to it if they go right. I then use the integer to determine what pixels I need to copy and from what images (which are loaded in memory in a Vector). To keep the app feeling a bit more responsive I use Event.ENTER_FRAME and sample the position rather than MOUSE_MOVE. I find it's a lot smoother.
Second, often a person's finger can go off the object it started on. You should consider listening for MOUSE_UP on the stage itself, so you guarantee your success at capturing that event. Otherwise you might get some odd behavior if the MOUSE_MOVE event is allowed to continuously fire like crazy.
You'll have to experiment with disableBitmapMode(); I'm not sure which will run better.
Just remember to do as little as humanly possible in whatever event handler moves your tween. If you have to crunch a lot of math you'll kill your frame rate.
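Sketched in code, the sampling approach sinious describes might look like this (names are illustrative; redraw() stands in for whatever actually repaints the gallery):

```actionscript
// Hypothetical sketch: sample the finger position once per frame instead of
// reacting to every MOUSE_MOVE, and catch MOUSE_UP on the stage.
var lastX:Number;
var scrollOffset:int = 0;   // the single integer that tracks the scroll

function onDown(e:MouseEvent):void
{
    lastX = stage.mouseX;
    addEventListener(Event.ENTER_FRAME, onFrame);
    // listening on the stage survives the finger sliding off the object
    stage.addEventListener(MouseEvent.MOUSE_UP, onUp);
}

function onFrame(e:Event):void
{
    scrollOffset += stage.mouseX - lastX;   // how far the finger moved this frame
    lastX = stage.mouseX;
    redraw();                               // repaint the gallery here
}

function onUp(e:MouseEvent):void
{
    removeEventListener(Event.ENTER_FRAME, onFrame);
    stage.removeEventListener(MouseEvent.MOUSE_UP, onUp);
}
```

The work per frame is bounded (one subtraction and one repaint), whereas MOUSE_MOVE can fire many times between frames and waste those intermediate updates.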
4. Re: iPad slowdown - fljot, Apr 11, 2012 7:02 AM (in response to AnthonyAdb)
Two key things could help you in this situation:
1. For image Loaders, set LoaderContext#imageDecodingPolicy to ImageDecodingPolicy.ON_LOAD
2. Reduce potential GC pauses by keeping and reusing objects where possible
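The first point applies only if the images are loaded at runtime rather than compiled into the library. A sketch of what it looks like (the URL is a placeholder):

```actionscript
import flash.display.Loader;
import flash.net.URLRequest;
import flash.system.LoaderContext;
import flash.system.ImageDecodingPolicy;

var context:LoaderContext = new LoaderContext();
// Decode the JPEG at load time instead of lazily on first display,
// so adding the image to the display list doesn't stall the UI.
context.imageDecodingPolicy = ImageDecodingPolicy.ON_LOAD;

var loader:Loader = new Loader();
loader.load(new URLRequest("images/page01.jpg"), context);
```

Without this, the default policy defers JPEG decoding until the image is first rendered, which is exactly when a freeze during scrolling hurts most.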
5. Re: iPad slowdown - AnthonyAdb, Apr 11, 2012 8:27 AM (in response to sinious)
I've totally taken out all the events for testing and made the MovieClip ping-pong tween from side to side; the result is still the same, a very choppy animation.
I've tested it with and without bitmap mode - same result.
I've taken a look at some blitting tutorials; this seems to be used mainly for displaying a small part of a larger image.
In my case I have well over 50 JPGs that need to be placed side by side and have their own buttons, so I'm not really sure how to apply this method.
Would you have any specific tutorial or doc I could look at?
I'm not using any Loaders; my images are in my library. I've placed each of them in a separate MovieClip saved in the library and set up with "Export for ActionScript" + "Export in frame 1".
For this test I'm only adding 3 images placed in MovieClips; I did that to have buttons placed over the images.
I import them onto the stage by calling their class:
var mc:MovieClip = new imageArray[i];
slideAnchor.addChild(mc);
Should I be loading those images with a Loader instead? Will that improve the performance?
6. Re: iPad slowdown - fljot, Apr 11, 2012 9:40 AM (in response to AnthonyAdb)
"Should I be loading those images with a Loader instead? Will that improve the performance?"
No, I thought you were loading them at runtime. Strange in general, though. I wrote an app several months ago with functionality quite similar to MinimalFolio (scrolling done via scrollRect) - it was running smoothly with AIR 3 on the first iPad (and with more than 3 images). So you're doing something wrong =) Check your app descriptor (gpu/cpu) and your packaging target mode.
7. Re: Ipad slowdownAnthonyAdb Apr 11, 2012 5:58 PM (in response to fljot)
8. Re: Ipad slowdownfljot Apr 12, 2012 12:18 AM (in response to AnthonyAdb)1 person found this helpful
Of course that deployment type is the quickest to publish but the slowest to run.
9. Re: Ipad slowdownsinious Apr 12, 2012 8:36 AM (in response to fljot)
Oh geez, you're using CS5? You don't know what you're missing in CS5.5..
I would honestly say Flash Builder is the best path forward if you REALLY want performance but being you're probably unwilling to learn a whole new paradigm then download the trial version of CS5.5 and output your project using AIR for iOS. The output proceedure is just about the same as CS5 so there won't be any learning curve. However. The performance will be MUCH better.
Also if you take Flash CS5.5 and overlay AIR3.2 (latest version) you will also see a noticable performance bump on that.
Here is AIR 3.2 SDK:
Here are instructions to overlay it on Flash CS5.5:
I know it's a lot of work just to test something but I never made anything in Flash CS5 that didn't immediately make me upgrade. You'll be amazed at the performance difference using the same code.
Note: I'm only suggesting using the trials. Be warned that the CS6 suite is probably pretty close to being released. It'd be worth waiting for CS5 rather than buying CS5.5 right now. It's pegged for Juneish.
10. Re: Ipad slowdownAnthonyAdb Apr 12, 2012 5:37 PM (in response to sinious)
I just found this :
I'll check with my local reseller
so CS5.5 + ADT package command should give me the smooth scroll I'm looking for ?
I'll have a look at those command lines today and see how it goes
thx
11. Re: Ipad slowdownsinious Apr 13, 2012 6:32 AM (in response to AnthonyAdb)
Perfect! Grab the CS5.5 trial and test your product but be sure to overlay AIR3.2 before you do it. Follow the instructions I linked you to. CS5 is only capable of producing iOS apps and it does not embed the AIR runtime. Now you'll be running the latest version of AIR runtime which itself should give you a noticable boost. It also opens up the latest AIR framework and AIR for Android and Blackberry as well which CS5 didn't have when I used it.
Downloading a trial costs you nothing. If it works, buy it. At least you know CS6 will be free for you. Good link on that offer, I wondered where that cutoff was.
Note: If you have Master Suite you obviously get Flash Pro, Flash Catalyst and Flash Builder all together. If you REALLY want the best performance with the best debugger (with profiling) with serious ease of use (eventually) in Spark/Flex, Flash Builder is what you want.
12. Re: Ipad slowdownAnthonyAdb May 9, 2012 9:14 AM (in response to sinious)
I finally got my CS5.5 last week and tested the same AS3 code using AIR3.2 and adt, the result is pretty decent on IPad 1 and 3, MUCH better than with CS5 which was very laggy, but not ultra smooth :-P
Would it achievable to get the same scrolling smoothness as the original Ipad photo app or is that impossible ?
here is the adt command I used :
adt.bat -package -target ipa-ad-hoc -storetype pkcs12 -keystore C:\OpenSSL-Win32\bin\ios_development.p12 -provisioning-profile C:\OpenSSL-Win32\bin\development.mobileprovision Catalogue.ipa CatalogueIpad-app.xml CatalogueIpad.swf
I should be receiving CS6 soon since it's already out (?) so I will be running the same test code once I get the package.
13. Re: Ipad slowdownAnthonyAdb May 14, 2012 7:25 AM (in response to AnthonyAdb)
I finally got it to work ...
I got rid of the blitMask and TweenLite and simply used an event listener with "enterFrame". I'm really not sure what was going on before but it's now scrolling very smoothly at 60fps
Thanks for your help and patience guys
14. Re: Ipad slowdownsinious May 14, 2012 7:37 AM (in response to AnthonyAdb)
Glad you got it to work! CS5 was simply the issue. You can use adt and AIR3.2 with CS5 but it's such a pain in the neck it's worth the money over wasted time to simply upgrade. And at the same time if you upgraded you get CS6 free. Even though CS6 is acting extremely poorly (as you can see people complaining all over) so I myself would avoid using it, even if you have it, until they fix some rampant issues.
Aside that, manual blitting is how I'd handle your situation with a Vector continuously being loaded/unloaded with the next/prev photo and simply slide them in using ENTER_FRAME blitting. It seems like you're doing that now. And this is the easy way of doing things so glad you got that working! On iPad1 I find it to be 'tolerable' but on iPad2 and up it's very smooth.
If you really want it silky smooth, you should consult using Stage3D. Blitting uses the CPU whereas Stage3D is all GPU. Use the Starling2D framework which you should find extremely simple and you can achieve near iOS native smoothness pretty easily.
Download (has a github link there as well):
Basics tutorials on the wiki:
15. Re: Ipad slowdownAnthonyAdb May 14, 2012 9:19 AM (in response to sinious)
Do you mean adt is only useful with CS5 ? no need to use it with CS5.5 + ? I have to say I didn't see much of a difference using it with CS5.5, not sure whats the difference with the built in flash publishing to AIR 3.2
I'm actually not using any blitting atm, my previous blitmask test didnt go so well , so I'll see how it goes with my 50+ images with dynamic loading/unloading and plan from there.
I was about to install CS6 to have a try but now I'll just wait for some fix I guess ... I hope they can patch things soon
This small catalogue I'm working on is very very simple so I'm hopping I can complete it using my limited knowledge of AS3
I'm very tempted to try Stage3d / Starling2d I'll probably check out the video tutorial a bit later ... back to my dynamic loader test 1st !
thx
16. Re: Ipad slowdownsinious May 14, 2012 9:30 AM (in response to AnthonyAdb)
CS5.5 uses ADT to export for you so no, you don't need to use adt. Just use the publish menu. I 'read' that CS5 users can use adt to compile for them so I mentioned that. Only if you use native extensions will you need to use adt to compile with CS5.5. In CS6 I read they support native extensions but with all the issues I see all over the forums I'm avoid that like a pothole.
Starling2D is a great framework and for your simple needs should be ideal. It will give your performance a considerable bump. Just note that they try to make it similar to using the display list, but Stage3D and GPU programming is a whole different world. You will need to learn how to use it. Only limited flash knowledge transfers.
Do note Stage3D is below the display list in flash. Any content you add to flash's normal non-3d display list will appear over it. So if you have a background of any type you will need to put that background in Stage3D instead or you will cover it up completely and see nothing.
The tutorials are good. Take a look | https://forums.adobe.com/message/4405744 | CC-MAIN-2017-26 | refinedweb | 2,732 | 73.37 |
Inexperienced programmers often think that Java’s automatic garbage collection completely frees them from worrying about memory management. This is a common misperception: while the garbage collector does its best, it’s entirely possible for even the best programmer to fall prey to crippling memory leaks. Let me explain.
A memory leak occurs when object references that are no longer needed are unnecessarily maintained. These leaks are bad. For one, they put unnecessary pressure on your machine as your programs consume more and more resources. To make things worse, detecting these leaks can be difficult: static analysis often struggles to precisely identify these redundant references, and existing leak detection tools track and report fine-grained information about individual objects, producing results that are hard to interpret and lack precision.
In other words, leaks are either too hard to identify, or identified in terms that are too specific to be useful.
There actually four categories of memory issues with similar and overlapping symptoms, but varied causes and solutions:
- Performance: usually associated with excessive object creation and deletion, long delays in garbage collection, excessive operating system page swapping, and more.
- Resource constraints: occurs when there’s either to little memory available or your memory is too fragmented to allocate a large object—this can be native or, more commonly, Java heap-related.
- Java heap leaks: the classic memory leak, in which Java objects are continuously created without being released. This is usually caused by latent object references.
- Native memory leaks: associated with any continuously growing memory utilization that is outside the Java heap, such as allocations made by JNI code, drivers or even JVM allocations.
In this post, I’ll focus on Java heaps leaks and outline an approach to detect such leaks based on Java VisualVM reports and utilizing a visual interface for analyzing Java technology-based applications while they’re running.
But before you can prevent and hunt down memory leaks, you should understand how and why they occur. (Note: If you have a good handle on the intricacies of memory leaks, you can skip ahead.)
Memory Leaks: A Primer
For starters, think of memory leakage as a disease and Java’s
OutOfMemoryError (OOM, for brevity) as a symptom. But as with any disease, not all OOMs necessarily imply memory leaks: an OOM can occur due to the generation of a large number of local variables or other such events. On the other hand, not all memory leaks necessarily manifest themselves as OOMs, especially in the case of desktop applications or client applications (which aren’t run for very long without restarts).
Why are these leaks so bad? Among other things, leaking blocks of memory during program execution often degrades system performance over time, as allocated but unused blocks of memory will have to be swapped out once the system runs out of free physical memory. Eventually, a program may even exhaust its available virtual address space, leading to the OOM.
Deciphering the
OutOfMemoryError
As mentioned above, the OOM is a common indication of a memory leak. Essentially, the error is thrown when there’s insufficient space to allocate a new object. Try as it might, the garbage collector can’t find the necessary space, and the heap can’t be expanded any further. Thus, an error emerges, along with a stack trace.
The first step in diagnosing your OOM is to determine what the error actually means. This sounds obvious, but the answer isn’t always so clear. For example: Is the OOM appearing because the Java heap is full, or because the native heap is full? To help you answer this question, lets analyze a few of the the possible error messages:
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: PermGen space
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
java.lang.OutOfMemoryError: request <size> bytes for <reason>. Out of swap space?
java.lang.OutOfMemoryError: <reason> <stack trace> (Native method)
“Java heap space”
This error message doesn’t necessarily imply a memory leak. In fact, the problem can be as simple as a configuration issue.
For example, I was responsible for analyzing an application which was consistently producing this type of
OutOfMemoryError. After some investigation, I figured out that the culprit was an array instantiation that was demanding too much memory; in this case, it wasn’t the application’s fault, but rather, the application server was relying on the default heap size, which was too small. I solved the problem by adjusting the JVM’s memory parameters.
In other cases, and for long-lived applications in particular, the message might be an indication that we’re unintentionally holding references to objects, preventing the garbage collector from cleaning them up. This is the Java language equivalent of a memory leak. (Note: APIs called by an application could also be unintentionally holding object references.)
Another potential source of these “Java heap space” OOMs arises with the use of finalizers. If a class has a finalize method, then objects of that type do not have their space reclaimed at garbage collection time. Instead, after garbage collection, the objects are queued for finalization, which occurs later. In the Sun implementation, finalizers are executed by a daemon thread. If the finalizer thread cannot keep up with the finalization queue, then the Java heap could fill up and an OOM could be thrown.
“PermGen space”
This error message indicates that the permanent generation is full. The permanent generation is the area of the heap that stores class and method objects. If an application loads a large number of classes, then the size of the permanent generation might need to be increased using the
-XX:MaxPermSize option.
Interned
java.lang.String objects are also stored in the permanent generation. The
java.lang.String class maintains a pool of strings. When the intern method is invoked, the method checks the pool to see if an equivalent string is present. If so, it’s returned by the intern method; if not, the string is added to the pool. In more precise terms, the
java.lang.String.intern method returns a string’s canonical representation; the result is a reference to the same class instance that would be returned if that string appeared as a literal. If an application interns a large number of strings, you might need to increase the size of the permanent generation.
Note: you can use the
jmap -permgen command to print statistics related to the permanent generation, including information about internalized String instances.
“Requested array size exceeds VM limit”
This error indicates that the application (or APIs used by that application) attempted to allocate an array that is larger than the heap size. For example, if an application attempts to allocate an array of 512MB but the maximum heap size is 256MB, then an OOM will be thrown with this error message. In most cases, the problem is either a configuration issue or a bug that results when an application attempts to allocate a massive array.
“Request <size> bytes for <reason>. Out of swap space?”
This message appears to be an OOM. However, the HotSpot VM throws this apparent exception when an allocation from the native heap failed and the native heap might be close to exhaustion. Included in the message are the size (in bytes) of the request that failed and the reason for the memory request. In most cases, the <reason> is the name of the source module that’s reporting an allocation failure.
If this type of OOM is thrown, you might need to use troubleshooting utilities on your operating system to diagnose the issue further. In some cases, the problem might not even be related to the application. For example, you might see this error if:
- The operating system is configured with insufficient swap space.
- Another process on the system is consuming all available memory resources.
It’s also is possible that the application failed due to a native leak (for example, if some bit of application or library code is continuously allocating memory but fails to releasing it to the operating system).
<reason> <stack trace> (Native method)
If you see this error message and the top frame of your stack trace is a native method, then that native method has encountered an allocation failure. The difference between this message and the previous is that the allocation failure was detected in a JNI or native method rather than in Java VM code.
If this type of OOM is thrown, you might need to use utilities on the operating system to further diagnose the issue.
Application Crash Without OOM
Occasionally, an application might crash soon after an allocation failure from the native heap. This occurs if you’re running native code that doesn.
In some cases, the information from the fatal error log or the crash dump will be sufficient. If the cause of a crash is determined to be a lack of error-handling in some memory allocations, then you must hunt down the reason for said allocation failure. As with any other native heap issue, the system might be configured with insufficient swap space, another process might be consuming all available memory resources, etc.
Diagnosing Leaks
In most cases, diagnosing memory leaks requires very detailed knowledge of the application in question. Warning: the process can be lengthy and iterative.
Our strategy for hunting down memory leaks will be relatively straightforward:
- Identify symptoms
- Enable verbose garbage collection
- Enable profiling
- Analyze the trace
1. Identify Symptoms
As discussed, in many cases, the Java process will eventually throw an OOM runtime exception, a clear indicator that your memory resources have been exhausted. In this case, you need to distinguish between a normal memory exhaustion and a leak. Analyzing the OOM’s message and try to find the culprit based on the discussions provided above.
Oftentimes, if a Java application requests more storage than the runtime heap offers, it can be due to poor design. For instance, if an application creates multiple copies of an image or loads a file into an array, it will run out of storage when the image or file is very large. This is a normal resource exhaustion. The application is working as designed (although this design is clearly boneheaded).
But if an application steadily increases its memory utilization while processing the same kind of data, you might have a memory leak.
2. Enable Verbose Garbage Collection
One of the quickest ways to assert that you indeed have a memory leak is to enable verbose garbage collection. Memory constraint problems can usually be identified by examining patterns in the
verbosegc output.
Specifically, the
-verbosegc argument allows you to generates a trace each time the garbage collection (GC) process is begun. That is, as memory is being garbage-collected, summary reports are printed to standard error, giving you a sense of how your memory is being managed.
Here’s some typical output generated with the
–verbosegc option:
Each block (or stanza) in this GC trace file is numbered in increasing order. To make sense of this trace, you should look at successive Allocation Failure stanzas and look for freed memory (bytes and percentage) decreasing over time while total memory (here, 19725304) is increasing. These are typical signs of memory depletion.
3. Enable Profiling
Different JVMs offer different ways to generate trace files to reflect heap activity, which typically include detailed information about the type and size of objects. This is called profiling the heap.
4. Analyze the Trace
This post focuses on the trace generated by Java VisualVM. Traces can come in different formats, as they can be generated by different tools, but the idea behind them is always the same: find a block of objects in the heap that should not be there, and determine if these objects accumulate instead of releasing. Of particular interest are transient objects that are known to be allocated each time a certain event is triggered in the Java application. The presence of many object instances that ought to exist only in small quantities generally indicates an application bug.
Finally, solving memory leaks requires you to review your code thoroughly. Learning about the type of object leaking can be very helpful and considerably speedup debugging.
How Does Garbage Collection Work in the JVM?
Before we start our analysis of an application with a memory leak issue, let’s first look at how garbage collection works in the JVM.
The JVM uses a form of garbage collector called a tracing collector, which essentially operates by pausing the world around it, marking all root objects (objects referenced directly by running threads), and following their references, marking each object it sees along the way.
Java implements something called a generational garbage collector based upon the generational hypothesis assumption, which states that the majority of objects that are created are quickly discarded, and objects that are not quickly collected are likely to be around for a while.
Based on this assumption, Java partitions objects into multiple generations. Here’s a visual interpretation:
- Young Generation - This is where objects start out. It has two sub-generations:
- Eden Space - Objects start out here. Most objects are created and destroyed in the Eden Space. Here, the GC does Minors GCs, which are optimized garbage collections. When a Minor GC is performed, any references to objects that are still needed are migrated to one of the survivors spaces (S0 or S1).
- Survivor Space (S0 and S1) - Objects that survive Eden end up here. There are two of these, and only one is in use at any given time (unless we have a serious memory leak). One is designated as empty, and the other as live, alternating with every GC cycle.
- Tenured Generation - Also known as the old generation (old space in Fig. 2), this space holds older objects with longer lifetimes (moved over from the survivor spaces, if they live for long enough). When this space is filled up, the GC does a Full GC, which costs more in terms of performance. If this space grows without bound, the JVM will throw an
OutOfMemoryError - Java heap space.
- Permanent Generation - A third generation closely related to the tenured generation, the permanent generation is special because it holds data required by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example, objects describing classes and methods are stored in the permanent generation.
Java is smart enough to apply different garbage collection methods to each generation. The young generation is handled using a tracing, copying collector called the Parallel New Collector. This collector stops the world, but because the young generation is generally small, the pause is short.
For more information about the JVM generations and how them work in more detail visit the Memory Management in the Java HotSpot™ Virtual Machine documentation.
Detecting a Memory Leak
To find and eliminate a memory leak, you need the proper tools. It’s time to detect and remove such a leak using the Java VisualVM.
Remotely Profiling the Heap with Java VisualVM
VisualVM is a tool that provides a visual interface for viewing detailed information about Java technology-based applications while they are running.
With VisualVM, you can view data related to local applications and those running on remote hosts. You can also capture data about JVM software instances and save the data to your local system.
In order to benefit from all of Java VisualVM’s features, you should run the Java Platform, Standard Edition (Java SE) version 6 or above.
Enabling Remote Connection for the JVM
In a production environment, it’s often difficult to access the actual machine on which our code will be running. Luckily, we can profile our Java application remotely.
First, we need to grant ourselves JVM access on the target machine. To do so, create a file called jstatd.all.policy with the following content:
grant codebase "file:${java.home}/../lib/tools.jar" { permission java.security.AllPermission; };
Once the file has been created, we need to enable remote connections to the target VM using the jstatd - Virtual Machine jstat Daemon tool, as follows:
jstatd -p <PORT_NUMBER> -J-Djava.security.policy=<PATH_TO_POLICY_FILE>
For example:
jstatd -p 1234 -J-Djava.security.policy=D:\jstatd.all.policy
With the jstatd started in the target VM, we are able to connect to the target machine and remotely profile the application with memory leak issues.
Connecting to a Remote Host
In the client machine, open a prompt and type
jvisualvm to open the VisualVM tool.
Next, we must add a remote host in VisualVM. As the target JVM is enabled to allow remote connections from another machine with J2SE 6 or greater, we start the Java VisualVM tool and connect to the remote host. If the connection with the remote host was successful, we will see the Java applications that are running in the target JVM, as seen here:
To profile and analyze the application, we just double-click its name in the side panel.
Now that we’re all setup, let’s investigate an application with a memory leak issue, which we’ll call MemLeak.
MemLeak
Of course, there are a number of ways to create memory leaks in Java. For simplicity we will define a class to be a key in a HashMap, but we will not define the equals() and hashcode() methods.
A HashMap is a hash table implementation for the Map interface, and as such it defines the basic concepts of key and value: each value is related to a unique key, so if the key for a given key-value pair is already present in the HashMap, its current value is replaced.
It’s mandatory that our key class provides a correct implementation of the
equals() and
hashcode() methods. Without them, there is no guarantee that a good key will be generated.
By not defining the
equals() and
hashcode() methods, we add the same key to the HashMap over and over and, instead of replacing the key as it should, the HashMap grows continuously, failing to identify these identical keys and throwing an
OutOfMemoryError.
Here’s the MemLeak class:
package com.post.memory.leak; import java.util.Map; public class MemLeak { public final String key; public MemLeak(String key) { this.key =key; } public static void main(String args[]) { try { Map map = System.getProperties(); for(;;) { map.put(new MemLeak("key"), "value"); } } catch(Exception e) { e.printStackTrace(); } } }
Note: the memory leak is not due to the infinite loop on line 14: the infinite loop can lead to a resource exhaustion, but not a memory leak. If we had properly implemented
equals() and
hashcode() methods, the code would run fine even with the infinite loop as we would only have one element inside the HashMap.
(For those interested, here are some alternative means of (intentionally) generating leaks.)
Using Java VisualVM
With Java VisualVM, we can monitor the Java Heap and identify if its behavior is indicative of a memory leak.
Here’s a graphical representation of MemLeak’s Java Heap just after initialization (recall our discussion of the various generations):
After just 30 seconds, the Old Generation is almost full, indicating that, even with a Full GC, the Old Generation is ever-growing, a clear sign of a memory leak.
One means of detecting the cause of this leak is shown in the following image (click to zoom), generated using Java VisualVM with a heapdump. Here, we see that 50% of Hashtable$Entry objects are in the heap, while the second line points us to the MemLeak class. Thus, the memory leak is caused by a hash table used within the MemLeak class.
Finally, observe the Java Heap just after our
OutOfMemoryError in which the Young and Old generations are completely full.
Conclusion
Memory leaks are among the most difficult Java application problems to resolve, as the symptoms are varied and difficult to reproduce. Here, we’ve outlined a step-to-step approach to discovering memory leaks and identifying their sources. But above all, read your error messages closely and pay attention to your stack traces—not all leaks are as simple as they appear.
Appendix
Along with Java VisualVM, there are several other tools that can perform memory leak detection. Many leak detectors operate at the library level by intercepting calls to memory management routines. For example: HPROF, is a simple command line tool bundled with the Java 2 Platform Standard Edition (J2SE) for heap and CPU profiling. The output of HPROF can be analyzed directly or used as an input for others tools like JHAT. When we work with Java 2 Enterprise Edition (J2EE) applications, there are a number of heapdump solutions that are friendlier to analyze, such as IBM Heapdumps for Websphere application servers. | http://www.toptal.com/java/hunting-memory-leaks-in-java | CC-MAIN-2014-35 | refinedweb | 3,458 | 52.49 |
Frequently Asked Questions¶
How does Scrapy compare to BeautifulSoup or lxml?¶
BeautifulSoup and lxml are libraries for parsing HTML and XML. Scrapy is an application framework for writing web spiders that crawl web sites and extract data from them.
Scrapy provides a built-in mechanism for extracting data (called selectors) but you can easily use BeautifulSoup (or lxml) instead, if you feel more comfortable working with them. After all, they’re just parsing libraries which can be imported and used from any Python code.
In other words, comparing BeautifulSoup (or lxml) to Scrapy is like comparing jinja2 to Django.
Can I use Scrapy with BeautifulSoup?¶
Yes, you can.
As mentioned above, BeautifulSoup can be used for parsing HTML responses in Scrapy callbacks. You just have to feed the response’s body into a BeautifulSoup object and extract whatever data you need from it.
Here’s an example spider using BeautifulSoup API, with lxml as the HTML parser:
from bs4 import BeautifulSoup
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = (
        '',
    )

    def parse(self, response):
        # use lxml to get decent HTML parsing speed
        soup = BeautifulSoup(response.text, 'lxml')
        yield {
            "url": response.url,
            "title": soup.h1.string,
        }
Note
BeautifulSoup supports several HTML/XML parsers.
See BeautifulSoup’s official documentation on which ones are available.
What Python versions does Scrapy support?¶
Scrapy is supported under Python 2.7 and Python 3.4+ under CPython (default Python implementation) and PyPy (starting with PyPy 5.9). Python 2.6 support was dropped starting at Scrapy 0.20. Python 3 support was added in Scrapy 1.1. PyPy support was added in Scrapy 1.4, PyPy3 support was added in Scrapy 1.5.
Note
For Python 3 support on Windows, it is recommended to use Anaconda/Miniconda as outlined in the installation guide.
Did Scrapy “steal” X from Django?¶
Probably, but we don’t like that word. We think Django is a great open source project and an example to follow, so we’ve used it as an inspiration for Scrapy.
We believe that, if something is already done well, there’s no need to reinvent it. This concept, besides being one of the foundations for open source and free software, not only applies to software but also to documentation, procedures, policies, etc. So, instead of going through each problem ourselves, we choose to copy ideas from those projects that have already solved them properly, and focus on the real problems we need to solve.
We’d be proud if Scrapy serves as an inspiration for other projects. Feel free to steal from us!
Does Scrapy work with HTTP proxies?¶
Yes. Support for HTTP proxies is provided (since Scrapy 0.8) through the HTTP Proxy downloader middleware. See HttpProxyMiddleware.
How can I scrape an item with attributes in different pages?¶
See Passing additional data to callback functions.
Does Scrapy crawl in breadth-first or depth-first order?¶
By default, Scrapy uses a LIFO queue for storing pending requests, which basically means that it crawls in DFO order. This order is more convenient in most cases. If you do want to crawl in true BFO order, you can do it by setting the following settings:
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'
I get “Filtered offsite request” messages. How can I fix them?¶
Those messages (logged with DEBUG level) don’t necessarily mean there is a problem, so you may not need to fix them.
Those messages are thrown by the Offsite Spider Middleware, which is a spider middleware (enabled by default) whose purpose is to filter out requests to domains outside the ones covered by the spider.
For more info see: OffsiteMiddleware.
What is the recommended way to deploy a Scrapy crawler in production?¶
See Deploying Spiders.
What does the response status code 999 mean?¶
999 is a custom response status code used by Yahoo sites to throttle requests.
Try slowing down the crawling speed by using a download delay of 2 (or higher) in your spider:
class MySpider(CrawlSpider):
    name = 'myspider'
    download_delay = 2
    # [ ... rest of the spider code ... ]
Or by setting a global download delay in your project with the DOWNLOAD_DELAY setting.
Can I call pdb.set_trace() from my spiders to debug them?¶
Yes, but you can also use the Scrapy shell which allows you to quickly analyze (and even modify) the response being processed by your spider, which is, quite often, more useful than plain old pdb.set_trace().
For more info see Invoking the shell from spiders to inspect responses.
Simplest way to dump all my scraped items into a JSON/CSV/XML file?¶
To dump into a JSON file:
scrapy crawl myspider -o items.json
To dump into a CSV file:
scrapy crawl myspider -o items.csv
To dump into a XML file:
scrapy crawl myspider -o items.xml
For more information see Feed exports
Parsing big feeds with XPath selectors can be problematic since they need to build the DOM of the entire feed in memory, and this can be quite slow and consume a lot of memory.
In order to avoid parsing the entire feed at once in memory, you can use the functions xmliter and csviter from the scrapy.utils.iterators module. In fact, this is what the feed spiders (see Spiders) use under the cover.
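xmliter and csviter are Scrapy-specific, but the streaming idea they implement can be sketched with the standard library for comparison: parse incrementally and discard each node once handled, so the whole DOM never lives in memory at once.

```python
import io
import xml.etree.ElementTree as ET

def iter_nodes(xml_bytes, nodename):
    """Yield the text of each <nodename> element, freeing it afterwards."""
    for event, elem in ET.iterparse(io.BytesIO(xml_bytes), events=("end",)):
        if elem.tag == nodename:
            yield elem.text
            elem.clear()  # drop children so memory stays bounded

feed = b"<feed><item>a</item><item>b</item></feed>"
```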
How can I instruct a spider to stop itself?¶
Raise the CloseSpider exception from a callback. For more info see: CloseSpider.
How can I prevent my Scrapy bot from getting banned?¶
See Avoiding getting banned.
Should I use spider arguments or settings to configure my spider?¶
Both spider arguments and settings can be used to configure your spider. There is no strict rule that mandates using one or the other, but settings are better suited for parameters that, once set, don't change much, while spider arguments are meant to change more often, even on each spider run, and sometimes are required for the spider to run at all (for example, to set the start url of a spider).
To illustrate with an example, assuming you have a spider that needs to log into a site to scrape data, and you only want to scrape data from a certain section of the site (which varies each time). In that case, the credentials to log in would be settings, while the url of the section to scrape would be a spider argument.
I’m scraping an XML document and my XPath selector doesn’t return any items¶
You may need to remove namespaces. See Removing namespaces. | http://docs.scrapy.org/en/master/faq.html | CC-MAIN-2019-13 | refinedweb | 1,065 | 66.33 |
XML Documents and Data
The XML classes in the System.Xml namespace provide a comprehensive and integrated set of classes, allowing you to work with XML documents and data. The XML classes support parsing and writing XML, editing XML data in memory, data validation, and XSLT transformation.
In This Section
- What's New in System.Xml
Introduces features that are new in this release of the .NET Framework.
- Migrating from Version 1.1 of the XML Classes
Discusses migration issues.
- Architectural Overview of XML in the .NET Framework
Provides an overview of the XML architecture in the .NET Framework.
- Security and Your System.Xml Applications
Discusses security issues when working with XML technologies.
- Process XML Data In-Memory
Discusses the two models for processing XML data. The XmlDocument class, and its associated classes, is based on the W3C Document Object Model. The XPathDocument class is based on the XPath data model.
- XML Data Types
Describes how the XmlConvert class encodes and decodes names in XML data.
- Namespaces in an XML Document
Describes how the XmlNamespaceManager class is created and used whenever namespaces are needed, while holding the prefix and the namespace it represents.
- Type Support in the System.Xml Classes
Describes the various type support features. | http://msdn.microsoft.com/en-us/library/2bcctyt8(v=vs.80).aspx | CC-MAIN-2014-42 | refinedweb | 208 | 51.65 |
We're going to control this game using the accelerometer that comes as standard on all iPads, but it has a problem: it doesn't come as standard on any Macs, which means we either resign ourselves to testing only on devices or we put in a little hack. This course isn't called Giving Up with Swift, so we're going to add a hack – in the simulator you'll be able to use touch, and on devices you'll have to use tilting.
To get started, add this property so we can reference the player throughout the game:
var player: SKSpriteNode!
We're going to add a dedicated createPlayer() method that loads the sprite, gives it circle physics, and adds it to the scene, but it's going to do three other things that are important.
First, it's going to set the physics body's allowsRotation property to be false. We haven't changed that so far, but it does what you might expect – when false, the body no longer rotates. This is useful here because the ball looks like a marble: it's shiny, and those reflections wouldn't rotate in real life.
Second, we're going to give the ball a linearDamping value of 0.5, which applies a lot of friction to its movement. The game will still be hard, but this does help a little by slowing the ball down naturally.
Finally, we'll be combining three values together to get the ball's contactTestBitMask: the star, the vortex and the finish.
Here's the code for createPlayer():
func createPlayer() {
    player = SKSpriteNode(imageNamed: "player")
    player.position = CGPoint(x: 96, y: 672)
    player.zPosition = 1
    player.physicsBody = SKPhysicsBody(circleOfRadius: player.size.width / 2)
    player.physicsBody?.allowsRotation = false
    player.physicsBody?.linearDamping = 0.5

    player.physicsBody?.categoryBitMask = CollisionTypes.player.rawValue
    player.physicsBody?.contactTestBitMask = CollisionTypes.star.rawValue | CollisionTypes.vortex.rawValue | CollisionTypes.finish.rawValue
    player.physicsBody?.collisionBitMask = CollisionTypes.wall.rawValue
    addChild(player)
}
You can go ahead and add a call to createPlayer() directly after the call to loadLevel() inside didMove(to:). Note: you must create the player after the level, otherwise it will appear below vortexes and other level objects.
If you try running the game now, you'll see the ball drop straight down until it hits a wall, then it bounces briefly and stops. This game has players looking down on their iPad, so by default there ought to be no movement – it's only if the player tilts their iPad down that the ball should move downwards.
The ball is moving because the scene's physics world has a default gravity roughly equivalent to Earth's. We don't want that, so in didMove(to:) add this somewhere:
physicsWorld.gravity = .zero
Playing the game now hasn't really solved much: sure, the ball isn't moving now, but… the ball isn't moving now! This would make for a pretty terrible game on the App Store.
Before we get onto how to work with the accelerometer, we're going to put together a hack that lets you simulate the experience of moving the ball using touch. What we're going to do is catch touchesBegan(), touchesMoved(), and touchesEnded(), and use them to set or unset a new property called lastTouchPosition. Then in the update() method we'll subtract that touch position from the player's position, and use it to set the world's gravity.
It's a hack. And if you're happy to test on a device, you don't really need it. But if you're stuck with the iOS Simulator or are just curious, let's put in the hack. First, declare the new property:
var lastTouchPosition: CGPoint?
Now use touchesBegan() and touchesMoved() to set the value of that property using the same three lines of code, like this:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let location = touch.location(in: self)
    lastTouchPosition = location
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let location = touch.location(in: self)
    lastTouchPosition = location
}
When touchesEnded() is called, we need to set the property to be nil – it is optional, after all:
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    lastTouchPosition = nil
}
Easy, I know, but it gets (only a little!) trickier in the update() method. This needs to unwrap our optional property, calculate the difference between the current touch and the player's position, then use that to change the gravity value of the physics world. Here it is:
override func update(_ currentTime: TimeInterval) {
    if let currentTouch = lastTouchPosition {
        let diff = CGPoint(x: currentTouch.x - player.position.x, y: currentTouch.y - player.position.y)
        physicsWorld.gravity = CGVector(dx: diff.x / 100, dy: diff.y / 100)
    }
}
This is clearly not a permanent solution, but it's good enough that you can run the app now and test it out.
Now for the new bit: working with the accelerometer. This is easy to do, which is remarkable when you think how much is happening behind the scenes.
All motion detection is done with an Apple framework called Core Motion, and most of the work is done by a class called CMMotionManager. Using it here won't require any special user permissions, so all we need to do is create an instance of the class and ask it to start collecting information. We can then read from that information whenever and wherever we need to, and in this project the best place is update().
Add import CoreMotion just above the import SpriteKit line at the top of your game scene, then add this property:
var motionManager: CMMotionManager!
Now it's just a matter of creating the object and asking it to start collecting accelerometer data. This is done using the startAccelerometerUpdates() method, which instructs Core Motion to start collecting accelerometer information we can read later. Put this into didMove(to:):
motionManager = CMMotionManager()
motionManager.startAccelerometerUpdates()
The last thing to do is to poll the motion manager inside our update() method, checking to see what the current tilt data is. But there's a complication: we already have a hack in there that lets us test in the simulator, so we want one set of code for the simulator and one set of code for devices.
Swift solves this problem by adding special compiler instructions. If the instruction evaluates to true it will compile one set of code, otherwise it will compile the other. This is particularly helpful once you realize that any code wrapped in compiler instructions that evaluate to false never get seen – it's like they never existed. So, this is a great way to include debug information or activity in the simulator that never sees the light on devices.
The compiler directives we care about are: #if targetEnvironment(simulator), #else and #endif. As you can see, this is mostly the same as a standard Swift if/else block, although here you don't need braces because everything until the #else or #endif will execute.
The code to read from the accelerometer and apply its tilt data to the world gravity looks like this:
if let accelerometerData = motionManager.accelerometerData {
    physicsWorld.gravity = CGVector(dx: accelerometerData.acceleration.y * -50, dy: accelerometerData.acceleration.x * 50)
}
The first line safely unwraps the optional accelerometer data, because there might not be any available. The second line changes the gravity of our game world so that it reflects the accelerometer data. You're welcome to adjust the speed multipliers as you please; I found a value of 50 worked well.
Note that I passed accelerometer Y to CGVector's X and accelerometer X to CGVector's Y. This is not a typo! Remember, your device is rotated to landscape right now, which means you also need to flip your coordinates around.
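If the swap seems confusing, here is the same mapping written out as plain arithmetic (Python used purely as pseudocode; the multiplier of 50 matches the Swift code above):

```python
def gravity_for_landscape(accel_x, accel_y, scale=50):
    """Device is held in landscape, so screen-x gravity comes from
    accelerometer y (negated) and screen-y gravity from accelerometer x."""
    return (accel_y * -scale, accel_x * scale)
```

So tilting along the portrait y axis (accel_y = 1) becomes horizontal gravity in landscape: gravity_for_landscape(0, 1) == (-50, 0).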
We need to put that code inside the current update() method, wrapped inside the new compiler directives. Here's how the method should look now:
override func update(_ currentTime: TimeInterval) {
    #if targetEnvironment(simulator)
    if let currentTouch = lastTouchPosition {
        let diff = CGPoint(x: currentTouch.x - player.position.x, y: currentTouch.y - player.position.y)
        physicsWorld.gravity = CGVector(dx: diff.x / 100, dy: diff.y / 100)
    }
    #else
    if let accelerometerData = motionManager.accelerometerData {
        physicsWorld.gravity = CGVector(dx: accelerometerData.acceleration.y * -50, dy: accelerometerData.acceleration.x * 50)
    }
    #endif
}
If you can test on a device, please do. It took only a few lines of code, but the game is now adapting beautifully to device tilting!
LEARN SWIFTUI FOR FREE I have a massive, free SwiftUI video collection on YouTube teaching you how to build complete apps with SwiftUI – check it out! | https://www.hackingwithswift.com/read/26/3/tilt-to-move-cmmotionmanager | CC-MAIN-2019-47 | refinedweb | 1,446 | 55.64 |
Receiving downlink packets (e.g join-accept) does not work
Hi all, I am trying to sniff downlink packets (e.g. join-accept) by using the FiPy device, but I am not able to receive any data.
As it is detailed here: I am using the "869.525 - SF9BW125 (RX2 downlink only)" frequency, SF and BW. (In order to do the tests I have another FiPy device, which connects to the TTN network (OTAA) and sends some UP packets repeatedly) - I have also tried to send some downlink packets from the TTN console.
Same tests were done with other UP frequencies such as 868100000 and it worked well. Do you know what I am doing wrong?
Thanks!
Code in use:
from network import LoRa
import binascii
import socket
import time

_frequency = 869525000
_tx_power = 14
_bandwidth = LoRa.BW_125KHZ
_sf = 9
_preamble = 8
_coding_rate = LoRa.CODING_4_5
_power_mode = LoRa.ALWAYS_ON

lora = LoRa(mode=LoRa.LORA, frequency=_frequency, tx_power=_tx_power,
            bandwidth=_bandwidth, sf=_sf, preamble=_preamble,
            coding_rate=_coding_rate, power_mode=_power_mode)
print("[*] fq:%d sf:%d bw:%d" % (_frequency, _sf, _bandwidth))

s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setblocking(False)

while True:
    response = s.recv(1024)
    if len(response) > 0:
        print("[*] Packet received: %s" % binascii.hexlify(response))
        response = ""
Sorry I screwed up, corrected NMU diff is attached. Again please don't upload/commit until I confirm that this solves the problem.
The NMU diff is attached to this mail (note: please do not upload/commit this change yet, I want to see if it actually solves the problem first).

-28 00:09:42.000000000 +0000
@@ -16,6 +16,11 @@
 export LDFLAGS = -Wl,--no-relax
 endif

+ifeq ($(DEB_HOST_ARCH), armhf)
+ export CFLAGS = -marm
+endif
+
+
 # warning: if the --with autoreconf is removed then
 # the patch debian/patches/debian-no-linked-scripts
 # must be adapted to also patch the Makefile.in! | https://lists.debian.org/debian-arm/2012/06/msg00069.html | CC-MAIN-2017-26 | refinedweb | 126 | 58.42 |
dip - Dynamic instrumentation like DTrace, using aspects
#
This is the documentation for the dip module. If you are looking for the documentation on the dip program, use perldoc dip or man 1 dip.
At the end of your program run, during END time, all aggregators - see below - will dump their results. Also any other hashes you have written to in your dip scripts will be dumped if they are declared as our variables.
For example, if you simply wanted to know which kinds of objects have been instantiated at least once, you could use:
our %c;
before { $c{total}++ } call qr/::new$/
and then %c will be dumped.
dip provides aggregating functions that help in understanding a set of data. You can keep counts of occurrences, or quantize data, much like with DTrace.
The
quantize aggregating function generates a power-of-two distribution - see its documentation.
Remembers the dip script given on the command-line so we can run it in instrument(). Complains if there was no dip script. The --delay option is passed in this way as well.
Evaluates the dip script we remembered in import() using _eval_code().
Convenience function that takes a filename and evaluates the contents of the file using _eval_code(). This is what dip -s uses. For example:
dip -s myscript.dip myapp.pl
is more or less turned into:
dip -e 'run q!$file!' myapp.pl
Returns what Carp's cluck() would return, again with Aspect:: and dip namespaces omitted.
Returns what Carp's longmess() would return, again with Aspect:: and dip namespaces omitted.
Convenience method to dump a variable like Data::Dumper does.
Example: Show all requests a Dancer web application handles:
before { dump_var ARGS(1) } call 'Dancer::Handler::handle_request'
Convenience function to right-trim a string...
The gettimeofday() function from Time::HiRes is available to dip scripts.
The tv_interval() function from Time::HiRes is available to dip scripts.
Color constants from Term::ANSIColor are available to dip scripts. For example:
before { say RED, ARGS(1), RESET } call qr/DBI::.*::prepare/
prints each DBI query in red text as it is prepared.
Is called for advice given on the command line and dip scripts evaluated by run().
The following code is prepended to the code:
use strict;
use warnings;
use 5.10.0;
so that dip scripts are properly checked and say() is available.
This is a helper function used by the dip program to pass.
dip scripts are just Perl code and as such can use any helper module. For example, you might use the following code at the beginning of your dip scripts:
use strict;
use warnings;
This software is copyright (c) 2011 by Marcel Gruenauer.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~marcel/dip-1.17/lib/dip.pm | CC-MAIN-2015-48 | refinedweb | 463 | 66.03 |
#include <opencv2/core/types_c.h>.
Alignment of image rows (4 or 8). OpenCV ignores it and uses widthStep instead.
Ignored by OpenCV
Ditto.
Ignored by OpenCV.
ditto
Ignored by OpenCV
0 - interleaved color channels, 1 - separate color channels. cvCreateImage can only create interleaved images
Pixel depth in bits: IPL_DEPTH_8U, IPL_DEPTH_8S, IPL_DEPTH_16S, IPL_DEPTH_32S, IPL_DEPTH_32F and IPL_DEPTH_64F are supported.
Image height in pixels.
version (=0)
Pointer to aligned image data.
Pointer to very origin of image data (not necessarily aligned) - needed for correct deallocation
" "
Image data size in bytes (==image->height*image->widthStep in case of interleaved data)
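The row stride that imageSize depends on accounts for alignment padding. A quick sketch of the relationship (a row alignment of 4 is assumed here for illustration):

```python
def width_step(width, channels, bytes_per_channel=1, align=4):
    """Row size in bytes, rounded up to the alignment boundary,
    as in IplImage.widthStep."""
    row = width * channels * bytes_per_channel
    return (row + align - 1) // align * align
```

For example, a 5-pixel-wide, 3-channel, 8-bit row holds 15 bytes of pixel data but occupies a widthStep of 16.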
Must be NULL.
Most of OpenCV functions support 1,2,3 or 4 channels
sizeof(IplImage)
0 - top-left origin, 1 - bottom-left origin (Windows bitmaps style).
Image ROI. If NULL, the whole image is selected.
" "
Image width in pixels.
Size of aligned image row in bytes. | https://docs.opencv.org/3.4.7/d6/d5b/structIplImage.html | CC-MAIN-2022-27 | refinedweb | 143 | 61.12 |
$ cnpm install sha3
A pure JavaScript implementation of the Keccak family of cryptographic hashing algorithms, most notably including Keccak and SHA3.
:bulb: Legacy Note: In previous versions of this library, the
SHA3Hashobject provided a Keccak hash, not what we currently know as a SHA-3 hash. For backwards-compatibility, this object is still exported. However, users are encouraged to switch to using the
SHA3or
Keccakobjects instead, which provide the SHA-3 and Keccak hashing algorithms, respectively.
Via npm:
$ npm install sha3
Via yarn:
$ yarn add sha3
You can use this library from Node.js, from web browsers, and/or using ES6 imports.
// Standard FIPS 202 SHA-3 implementation const { SHA3 } = require('sha3'); // The Keccak hash function is also available const { Keccak } = require('sha3');
// Standard FIPS 202 SHA-3 implementation import { SHA3 } from 'sha3'; // The Keccak hash function is also available import { Keccak } from 'sha3';
FIPS-compatible interfaces for the following algorithms:
SHA3: The SHA3 algorithm.
Keccak: The Keccak algorithm.
SHAKE: The SHAKE XOF algorithm.
:bulb: Legacy Note: Savvy inspectors may notice that SHA3Hash is also provided. Prior to v2.0.0, this library only implemented an early version of the SHA3 algorithm. Since then, SHA3 has diverged from Keccak and is using a different padding scheme, but for compatibility, this alias is sticking around for a bit longer.
import { SHA3 } from 'sha3'; const hash = new SHA3(512); hash.update('foo'); hash.digest('hex');
import { Keccak } from 'sha3'; const hash = new Keccak(256); hash.update('foo'); hash.digest('hex');
import { SHAKE } from 'sha3'; const hash = new SHAKE(128); hash.update('foo'); hash.digest({ buffer: Buffer.alloc(2048), format: 'hex' });
All hash implementations provided by this library conform to the following API specification.
#constructor([size=512])
The constructor for each hash (e.g:
Keccak,
SHA3), expects the following parameters:
size(Number): Optional. The size of the hash to create, in bits. If provided, this must be one of
224,
256,
384, or
512. Defaults to
512.
// Construct a new Keccak hash of size 256 const hash = new Keccak(256);
#update(data, [encoding='utf8'])
Updates the hash content with the given data. Returns the hash object itself.
data (Buffer|string): Required. The data to read into the hash.
encoding (string): Optional. The encoding of the given data, if of type string. Defaults to 'utf8'.
:bulb: See Buffers and Character Encodings for a list of allowed encodings.
const hash = new Keccak(256); hash.update('hello'); hash.update('we can also chain these').update('together');
#digest([encoding='binary'])
Digests the hash and returns the result. After calling this function, the hash may continue to receive input.
encoding (string): Optional. The encoding to use for the returned digest. Defaults to 'binary'.
If an encoding is provided and is a value other than 'binary', then this function returns a string. Otherwise, it returns a Buffer.
:bulb: See Buffers and Character Encodings for a list of allowed encodings.
const hash = new Keccak(256); hash.update('hello'); hash.digest('hex'); // => hash of 'hello' as a hex-encoded string
#digest([options={}])
Digests the hash and returns the result. After calling this function, the hash may continue to receive input.
Options include:
buffer (Buffer): Optional. A pre-allocated buffer to fill with output bytes. This is how XOF algorithms like SHAKE can be used to obtain an arbitrary number of hash bytes.
format (string): Optional. The encoding to use for the returned digest. Defaults to 'binary'. If buffer is also provided, this value will be passed directly into Buffer#toString() on the given buffer.
padding (byte): Optional. Override the padding used to pad the input bytes to the algorithm's block size. Typically this should be omitted, but may be required if building additional cryptographic algorithms on top of this library.
If a format is provided and is a value other than 'binary', then this function returns a string. Otherwise, it returns a Buffer.
const hash = new Keccak(256); hash.update('hello'); hash.digest({ buffer: Buffer.alloc(32), format: 'hex' }); // => hash of 'hello' as a hex-encoded string
#reset()
Resets a hash to its initial state.
const hash = new Keccak(256); hash.update('hello'); hash.digest(); // => hash of 'hello' hash.reset(); hash.update('world'); hash.digest(); // => hash of 'world'
Run
yarn test for the full test suite.
Cryptographic hashes provide integrity, but do not provide authenticity or confidentiality. Hash functions are one part of the cryptographic ecosystem, alongside other primitives like ciphers and MACs. If considering this library for the purpose of protecting passwords, you may actually be looking for a key derivation function, which can provide much better security guarantees for this use case.
The following resources were invaluable to this implementation and deserve special thanks for work well done:
Keccak pseudocode: The Keccak team's excellent pseudo-code and technical descriptions.
mjosaarinen/tiny_sha3: Markku-Juhani O. Saarinen's compact, legible, and hackable implementation.
Phusion: For the initial release and maintenance of this project, and gracious hand-off to Twuni for continued development and maintenance. | https://developer.aliyun.com/mirror/npm/package/sha3 | CC-MAIN-2020-29 | refinedweb | 822 | 51.34 |
[
]
Rex Wang closed GERONIMO-5025.
------------------------------
Resolution: Fixed
has been resolved. close it.
> New module/app/global jndi contexts in javaee 6 spec
> ----------------------------------------------------
>
> Key: GERONIMO-5025
> URL:
> Project: Geronimo
> Issue Type: Sub-task
> Security Level: public(Regular issues)
> Components: deployment, naming
> Affects Versions: 3.0
> Reporter: David Jencks
> Assignee: David Jencks
> Fix For: 3.0
>
>
> Javaee platform spec describes some new jndi java: contexts that are more shared between
> components.
> java:comp (existing)
> java:module
> java:app
> java:global
> My first idea for implementing this:
> 1. in RootContext, have the thread local represent java: rather than java:comp. So
> all the namespaces will be in the Context object.
> 2. Construct this Context by federating objects for each scope. We'll have to maintain
> a global context somewhere. The others can presumably be constructed during deployment and
> set up in the existing gbeans for the app components.
> 3. Modify the naming builders to put stuff into the right namespace.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/geronimo-dev/201108.mbox/%3C2060677955.12059.1314237929063.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2014-23 | refinedweb | 170 | 51.75 |
hello - in the detailed mesh control, make sure ‘Maximum aspect ratio’ is zero and control the mesh size with ‘Maximum edge length’ and ‘Maximum distance edge to surface’, and Jagged Seams set.
-Pascal
Dear @pascal
thank you so much.
could you please say:
best
Hello - use
PolygonCount for the number of polys - for the size, use
Area and then divide by count…
Here is a quick python:
import Rhino
import rhinoscriptsyntax as rs

def test():
    id = rs.GetObject("Select a mesh", filter=32, preselect=True)
    if not id: return
    mesh = rs.coercemesh(id)
    count = mesh.Faces.Count
    x = Rhino.Geometry.AreaMassProperties
    area = x.Compute(mesh).Area
    print "Average polygon area in model units is ", round(area/count, 3)

test()
-Pascal
hello dear @pascal
excuse me, i have another question.
i dont need to generate mesh in the bottom of the shape.
how can i remove that mesh is generated in the bottom of the shape?
best
Hello - ExtractSrf before meshing, or use ExtractMeshPart or Explode after.
dear @pascal
like as shown in below picture, i don’t know why there isn’t uniform mesh!
mesh of part a and mesh of part b are not equal size.
thanks for everything
Hello - yeah, I guess that is due to the object being an exact revolve with weighted points. Try RebuildUV in U with 12 points, then mesh the result.
-Pascal
You are welcome, I’m glad it worked out.
-Pascal
Dear @pascal
i have a question but i’m not sure, ask here or in the grasshopper.
now, i gonna set one brep in grasshopper but grasshopper dont select the shape.
dear @pascal
do you think, mesh in rhino has effect on grasshopper/ladybug result??
Hello - the mesh is not a brep - you’ll need a mesh component in GH. ‘brep’ = surface or polysurface, in Rhino.
-Pascal
hello
dear @pascal
how did you find 12 points in U ?
I have another 3D shape, that attached, i don’t know how many points and in U or V for RebuildUV!!!
best
ahad
yazdDome.3dm (10.6 MB)
Hi ahad - I would create a clean new curve for the revolve - as in the attached file - and make the revolve using the ‘Deformable’ option at 12 points - to avoid using RebuildUV at all.
yazdDome_Smooth.3dm (10.4 MB)
-Pascal
dear @pascal
thank you for reply.
could you please explain step by step, what do you do?
because i have 3 other shapes for generating mesh. and unfortunately i don’t know how do you create clean new curve!!
best
Hi ahad - it will be best to familiarize yourself with some of Rhino’s basic tools - curve drawing is key to a lot of operations… In this case, I duplicated the edges of your revolve since there was no profile curve in the file (DupEdge) then Joined the result and ran Rebuild, to 12 points (in this case, quite a lot of points because the profile has some changes in direction that I tried to accommodate but 6 points would be enough to get a more idealized and cleaner shape - I don’t know enough about the goal here to make that decision)
-Pascal
The PDF linked in this message is a good introduction to NURBS curves so you better understand how they work: | https://discourse.mcneel.com/t/mesh-problem/94695 | CC-MAIN-2022-33 | refinedweb | 549 | 70.73 |
HI @all,
I create a new project and choose "Create Conda Env" as interpreter with Python Version = 3.4.
Then I open settings and go to Project Interpreter and press "Install" (green + in the top right corner).
Any package that I install, e.g. flask or pandas, CAN be installed and I receive the message "Packages installed successfully...". However, these packages will NOT be displayed in the Settings. There it says "Nothing to show".
If I use older environments, the packages will be displayed correctly. Also checking with conda or anaconda, the packages will be displayed. Besides I can use the packages in PyCharm, they are just not shown!
The "Interpreter Paths" ar set correctly (at least from my prospective) including DLLs, Lib and Lib\site-packages, however I had to set them manually.
The whole thing is problematic because there is no autocompletion; instead I get "Unresolved Reference" - remember, I can load everything, also interactively, in the python console.
A related error I get using flask in interactive mode:
from flask import Flask
app = Flask(__name__)
Traceback (most recent call last):
File "<input>", line 2, in <module>
File "C:\...\lib\site-packages\flask\app.py", line 346, in __init__
root_path=root_path)
File "C:\...\lib\site-packages\flask\helpers.py", line 807, in __init__
root_path = get_root_path(self.import_name)
File "C:\..\lib\site-packages\flask\helpers.py", line 685, in get_root_path
'provided.' % import_name)
RuntimeError: No root path can be found for the provided module "builtins". This can happen because the module came from an import hook that does not provide file name information or because it's a namespace package. In this case the root path needs to be explicitly provided.
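For context: that RuntimeError is Flask refusing to derive a filesystem root for the module name it was given ('builtins' here, because of how the console imports things). A rough, simplified sketch of the failing lookup (not Flask's actual code) is:

```python
import importlib.util
import os

def get_root_path(import_name):
    """Resolve a module name to the directory holding its source file."""
    spec = importlib.util.find_spec(import_name)
    if spec is None or spec.origin in (None, "built-in"):
        raise RuntimeError("No root path can be found for %r" % import_name)
    return os.path.dirname(os.path.abspath(spec.origin))
```

find_spec('builtins') reports its origin as 'built-in' - there is no file on disk - so the lookup fails, which is why the error message suggests providing the root path explicitly.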
Please help. Thank you!
Could you please specify your Pycharm version? Are you able to reproduce the problem with the latest Pycharm 2016.2?
Hi Anna,
this problem still shows up for my old projects using PyCharm 2016.2. However, if I start a new project with PyCharm 2016.2 using the same way as described above, it'll show the expected outcome.
I guess there is no immediate help for the old Project?
Bye
Christof
For specification I'd like to add, that the "nothing to show" problem occurs when cloning a git repository.
I'm also seeing this issue with PyCharm 2016.2 and with a new project that I started just yesterday using 2016.2 version. One thing I did change today though was my default python runtime from Homebrew Python to Miniconda as I'm moving over to the simpler and more powerful Conda for managing packages (including Python versions) + envs.
Note, this is also a cloned Git repo as well..
I'm seeing something similar in 2016.2... created a conda environment, go to add packages via the PyCharm interface... nothing, and I mean *nothing* is available, findable... nothing. I can add packages manually from the command line using conda, but inside PyCharm... it *still* doesn't work.
I'm also using 2016.2 and suddenly seeing errors about package requirements not being met and "nothing to show" under the project interpreter that I've been using all along (~/anaconda/bin/python, v 2.7.11). Using "conda list" shows all the expected installed packages, but the preferences window under PyCharm shows nothing.
I tried invalidating caches & restarting, but still "nothing to show".
The problem persists on PyCharm 2017.3.3 (Professional Edition).
Could you please share idea.log ("Help | Show Log in...") after restarting IDE and reproducing the issue? You could use any online service or our FTP:
Hi Yaroslav,
I uploaded the file on
Sorry for the delay. What is the file name. Is it just idea.log?
Yes, just idea.log.
Do you use Conda? From the log system interpreter is used and there are no records about installing packages.
It's possible that I'm confusing something. So, I will just write the steps to reproduce the problem:
1) In PyCharm: File -> New Project -> [Pure Python, New environment using Conda] -> Create
2) When new project is created: File -> Settings -> Project: <project_name> -> Project Interpreter -> click on "+" - will open new window called "Available Packages".
And this is it. It says "Nothing to show" and I can't install anything from there.
Also, whenever I click on "+", following traceback appears in idea.log:
"""
2018-03-12 14:21:58,083 [19982544] INFO - packaging.PyPackageManagerImpl - Running packaging tool: C:\Users\Georgy\Miniconda3\envs\hello\python.exe C:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\packaging_tool.py list
2018-03-12 14:21:58,621 [19983082] WARN - ackaging.PyCondaPackageService - Failed to get list of conda packages
2018-03-12 14:21:58,622 [19983083] WARN - ackaging.PyCondaPackageService - C:/Users/Georgy/Miniconda3/python.exe C:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\conda_packaging_tool.py listall
2018-03-12 14:21:58,622 [19983083] WARN - ackaging.PyCondaPackageService - Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\conda_packaging_tool.py", line 14, in do_list_available_packages
from conda.cli.main_search import common
ImportError: cannot import name 'common'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\conda_packaging_tool.py", line 18, in do_list_available_packages
from conda.cli.main_search import get_index
ImportError: cannot import name 'get_index'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\conda_packaging_tool.py", line 51, in main
do_list_available_packages()
File "C:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\conda_packaging_tool.py", line 21, in do_list_available_packages
from conda.api import get_index
ImportError: cannot import name 'get_index'
"""
I hope this helps.
Thanks for detailed information. Seems your log files was overriden on FTP, so I checked wrong one. Seems you face issue described here:. Please check the issue with 2018.1 EAP: | https://intellij-support.jetbrains.com/hc/en-us/community/posts/207205809-Project-Interpreter-Nothing-to-show-Conda | CC-MAIN-2020-45 | refinedweb | 976 | 59.9 |
30 March 2010 01:57 [Source: ICIS news]
By Tahir Ikram
SAN ANTONIO, Texas (ICIS news)--Asia will lead the demand growth of ethylene oxide (EO) product chain including mono ethylene glocol (MEG) in coming years with global consumption rising about 6% annually, a Shell executive said on Monday.
“Shell’s view is that growth will be consistent with previous years,” Thomas Chhoa said on the sidelines of NPRA International Petrochemical Conference (IPC) in ?xml:namespace>
Global demand growth was estimated at 6% while
America and Europe were expected to see demand growing at a rate of 2%, while South Amercia and the Middle East in excess of 10% - though the base was very small, he added.
Chhoa said the EO product stream had seen an “unprecedented” growth with a record level of capacity addition - about 2m-3m tonnes - in recent years, while in the next two to three years another “one million to one and a half million tonnes” would be added.
The new capacity would make the market extremely competitive and adjustments may take place so supply and demand would balance, Chhoa added.
“There will likely to be an un-equal sharing of pain,” he said.
“From our perspective we believe we will be strong competitor in the market,” he added.
Chhoa said the recent launch of Shell cracker in
He said the new project in
Hosted by the National Petrochemical & Refiners Association (NPRA) the IPC continues through Tuesday. | http://www.icis.com/Articles/2010/03/30/9346863/npra-10-asia-to-be-region-of-growth-for-eo-product-chain.html | CC-MAIN-2015-22 | refinedweb | 241 | 50.09 |
[PATCH 1/1] drm/i915: Fix BUG in i915_gem.c when switch to console
From:
Xi Ruoyao
Date:
Wed Mar 11 2015 - 01:47:14 EST
Next message:
Takashi Iwai: "Re: [PATCH 24/45] hdspm.h: include stdint.h in userspace"
Previous message:
Stephen Rothwell: "linux-next: Tree for Mar 11"
Messages sorted by:
[ date ]
[ thread ]
[ subject ]
[ author ]
In intel_crtc_page_flip, intel_display.c, the code changed the framebuffer
assigned to plane crtc->primary by
crtc->primary->fb = fb;
However, it forgot to change crtc->primary->state->fb. However, when we
switch to console, some kernel code will read crtc->primary->state->fb
to get the framebuffer assigned to crtc->primaty. Then a framebuffer
object can be unpinned twice and a kernel BUG will be produced in i915_gem.c.
So, update crtc->primary->state->fb in intel_display.c using
drm_atomic_set_fb_for_plane to fix the BUG.
Signed-off-by: Xi Ruoyao <xry111@xxxxxxxxxxx>
---
drivers/gpu/drm/i915/intel_display.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index e730789..97083fd 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -37,6 +37,7 @@
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "i915_trace.h"
+#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_crtc_helper.h>
@@ -9816,6 +9817,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
drm_gem_object_reference(&obj->base);
crtc->primary->fb = fb;
+ drm_atomic_set_fb_for_plane(crtc->primary->state, fb);
work->pending_flip_obj = obj;
--
1.9.1
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at
Please read the FAQ at
Next message:
Takashi Iwai: "Re: [PATCH 24/45] hdspm.h: include stdint.h in userspace"
Previous message:
Stephen Rothwell: "linux-next: Tree for Mar 11"
Messages sorted by:
[ date ]
[ thread ]
[ subject ]
[ author ] | http://lkml.iu.edu/hypermail/linux/kernel/1503.1/02387.html | CC-MAIN-2020-34 | refinedweb | 314 | 53.58 |
Artifact 2f89485eaaa5fbdd0bb9736533e65e1e446fb331:
- File tclreadline.n.in — part of check-in [9e3c1d7364] at 1999-09-22 00:09:07 on branch trunk — Wed Sep 22 02:07:23 CEST 1999 (user: johannes@zellner.org) [ancestry] [annotate] [blame]
.TH tclreadline n "@TCLREADLINE_VERSION@.@TCLREADLINE_PATCHLEVEL@" "Johannes Zellner" .\" (C) 1999 by Johannes Zellner .\" FILE: "/home/joze/src/tclreadline/tclreadline.n.in" .\" LAST MODIFICATION: "Tue Sep 21 21:18:31>, .\" # completion script should return an array of strings which is a list of completions for "text". If there are no completions, it should return an empty string "". The first entry in the returned list is the substitution for "text". The remaining entries are the possible completions. If the custom completion script returns an empty string and builtin completion is enabled (see . The custom completer could return a string like "$bl $black $blue", which will complete "$b" to "$bl" (the longest match) and offer a list of two further matches "$black" and "$blue". For further reference, see the proc tclreadline::ScriptCompleter in the file tclreadlineSetup.tcl. .TP] set a script which will be called, if readline returns the eof character (this is typically the case if CTRL-D is entered at the very beginning of the line). The default for this script is "puts {}; exit". Setting this to an empty value disables any action on eof. P 5 \fB::tclreadline::readline bell\fP Ring the terminal bell, obeying the setting of bell-style -- audible or visible. returns the current setting. .TP 5 \fB::tclreadline::Loop\fP [\fIhistoryfile\fP] enter the tclreadline main loop. This command is typically called from the startup resource file (something .tclshrc, depending on the interpreter you use, see the file `sample.tclshrc'). The main loop sets up some completion characteristics as variable -- try something like "puts $b<TAB>" -- and command completion -- try "puts [in<TAB>". If the optional argument ). : .CS package require tclreadline namespace eval tclreadline { proc prompt1 {} { return "[clock format [clock seconds]]> " } } ::tclreadline::Loop .CE" holds the version string "@TCLREADLINE_VERSION@". 
.TP 5 \fBtclreadline::patchLevel\fP holds the patch level string "@TCLREADLINE_VERSION@.@TCLREADLINE_PATCHLEVEL@". .TP 5 \fBtclreadline::library\fP holds the library string "@TCLREADLINE_LIBRARY@". > >, Matthew Clarke <Matthew_Clarke@mindlink.bc.ca> .SH "DEBIAN PACKAGE" David Engel <dlengel@home.com>, <david@debian.org> .SH "DISCLAIMER". . | http://chiselapp.com/user/rkeene/repository/tclreadline/artifact/2f89485eaaa5fbdd | CC-MAIN-2018-17 | refinedweb | 369 | 60.61 |
FS_MOUNT.9,v 1.7.2.1 2001/12/17 11:30:18 ru Exp $ .\" $DragonFly: src/share/man/man9/VFS_MOUNT.9,v 1.3 2004/06/01 11:36:53 hmp Exp $ .\" .Dd July 24, 1996 .Os .Dt VFS_MOUNT 9 .Sh NAME .Nm VFS_MOUNT .Nd mount a filesystem .Sh SYNOPSIS .In sys/param.h .In sys/mount.h .In sys/vnode.h .Ft int .Fn VFS_MOUNT "struct mount *mp" "char *path" "caddr_t data" "struct nameidata *ndp" "struct proc *p" .Sh DESCRIPTION Mount a filesystem into the system's namespace. .Pp Its arguments are: .Bl -tag -width data .It Ar mp Structure representing the filesystem. .It Ar path Pathname where the filesystem is being mounted. .It Ar data Filesystem specific data. This should be read into the kernel using .Xr copyin 9 . .It Ar ndp Contains the result of a .Xr namei 9 call on the pathname of the mountpoint. .It Ar p Process which is mounting the filesystem. .El .Pp This is called both to mount new filesystems and to change the attributes of an existing filesystem. If the .Dv MNT_UPDATE flag is set in .Fa mp->mnt_flag then the filesystem should update its internal state from the value of .Fa mp->mnt_flag . This can be used, for instance, to convert a read-only filesystem to read-write. It is also used by .Xr mountd 8 to update the NFS export information for the filesystem. .Pp If the .Dv MNT_UPDATE flag is not specified, then this is a newly mounted filesystem. The filesystem code should allocate and initialize any private data needed to represent the filesystem (it can use the .Fa mp->mnt_data field to store this information). .Sh SEE ALSO .Xr VFS 9 , .Xr vnode 9 .Sh AUTHORS This man page was written by .An Doug Rabson . | http://www.dragonflybsd.org/cvsweb/src/share/man/man9/VFS_MOUNT.9?f=h;rev=1.3 | CC-MAIN-2014-52 | refinedweb | 299 | 79.97 |
For 8/9 Student Becomes the Teacher, do you know how I would calculate the get_average part? That would be step 3.
8/9 Student Becomes the Teacher
What information does the function get?
Given that information, how would you do that manually?
That's exactly how your function would do it.
If you're unsure of what the function is supposed to be doing, then you'll have to start with that. Can't start writing it before you know what it's supposed to do.
Are you asking what that is or does that mean you know what it's supposed to do?
If the former, I'd tell you to read the instructions, if the later, I'd tell you to start writing
My problem is that I'm not sure how to write code that calculates average or whatever step 3 is asking me to do.
Well start by establishing what it's supposed to do. You can't write anything without knowing what it's supposed to do.
So write down step by step in English what's supposed to happen.
def get_class_average(whatever input info is): # do this # do that # present result
Then you can start thinking about what the matching code is, you'll know some of it and other parts you'll be able to look up | https://discuss.codecademy.com/t/8-9-student-becomes-the-teacher/40823 | CC-MAIN-2018-34 | refinedweb | 225 | 80.72 |
Input and Output Functions in Jina¶
This chapter explains the input and output functions of Jina’s Flow API.
Input Function¶
TL;DR¶
By default, everything is sent in a buffer
Use a crafter to handle the input
Shortcuts such as
index_lines,
index_ndarrayand
index_filesare available to input predefined formats.
In the Flow API, we highlight that you can use
.index(),
.search() and
.train() to feed index data and search queries to a Flow:
with f: f.index(input_fn)
with f: f.search(input_fn, top_k=50, on_done=print)
input_fn is
Iterator[bytes], each of which corresponds to a bytes representation of a Document.
A simple
input_fn can be defined as follows:
def input_fn(): for _ in range(10): yield b'look! i am a Document!' # `s` is a "Document"! # or ... input_fn = (b'look! i am a Document!' for _ in range(10))
Shortcuts¶
Usage of
index_ndarray()¶
import numpy as np from jina.flow import Flow input_data = np.random.random((3,8)) f = Flow().add(uses='_logforward') with f: f.index_ndarray(input_data)
Add a dummy Pod with config
_logforwardto the Flow.
_logforwardis a built-in YAML, which just forwards input data to the results and prints it to the log. It is located in
jina/resources/executors._forward.yml. You can also use your own YAML to organize
pods.
Use the Flow to index an
ndarrayby calling the
index_ndarray()API.
Calling the
index_ndarray() API generates requests with the following message:
request { request_id: 1 index { docs { id: 1 weight: 1.0 length: 100 blob { buffer: "\004@\316\362/D\333?\244>\235\305\027\311\336?\267\210\251\311^\260\345?\366\n(\014\022m\356?\374\262\017\030\036\357\351?-c\300\337\217V\345?\241G\241\352\233\024\356?\340\346lUf\353\350?" shape: 8 dtype: "float64" } } docs { id: 2 weight: 1.0 length: 100 blob { buffer: "\312Wm\337\250\217\354?t\212\326\020\261\r\320?\254\262\300u<O\323?\340\210\222$\321\216\314?\310.q,+\347\311?&\316\361\310\252R\331?\214\016\201a\231\262\330?\342\231\262\221\343%\324?" shape: 8 dtype: "float64" } } docs { id: 3 weight: 1.0 length: 100 blob { buffer: "kT\250\372K%\345?\237\017+u\300\227\353?\3668\256\340\251\227\350?\327\006$\032$\002\341?\274\300\3573\371\262\343?\346\371\265dV\330\342?\370\210\360\002P3\340?\022i-\016\374\320\331?" shape: 8 dtype: "float64" } } } }
The structure of this message is defined in the format of protobuf. Check more details of the data structure at
jina.proto. Messages are passed between the Pods in the Flow.
request contains input data and related metadata. The input is a 3*8 matrix that is sent to the Flow, which matches 3
request.index.docs, and the
request.index.docs.blog.shape is 8. The vector of the matrix is stored in
request.index.docs.blob, and the
request.index.docs.blob.dtype indicates the type of the vector.
search_ndarray() is the API for searching
np.ndarray. The data structure will be replaced from
request.index to
request.search, and the other nodes stay the same.
import numpy as np from jina.flow import Flow input_data = np.random.random((3,8)) f = Flow().add(uses='_logforward') with f: f.search_ndarray(input_data)
Usage of
index_files()¶
from jina.flow import Flow f = Flow().add(uses='_logforward') with f: f.index_files(f'../pokedex-with-bit/pods/*.yml')
API
index_files() reads input data from
../pokedex-with-bit/pods/*.yml. In this directory, there are 5 YAML files. Therefore, you can see them in the protobuf request as well:
5
docsunder
request.index
Each file’s path in a
request.index.doc.uri
request { request_id: 1 index { docs { id: 1 weight: 1.0 length: 100 uri: "../pokedex-with-bit/pods/encode-baseline.yml" } docs { id: 2 weight: 1.0 length: 100 uri: "../pokedex-with-bit/pods/chunk.yml" } docs { id: 3 weight: 1.0 length: 100 uri: "../pokedex-with-bit/pods/doc.yml" } docs { id: 4 weight: 1.0 length: 100 uri: "../pokedex-with-bit/pods/encode.yml" } docs { id: 5 weight: 1.0 length: 100 uri: "../pokedex-with-bit/pods/craft.yml" } } }
search_files() is the API for searching
files.
from jina.flow import Flow f = Flow().add(uses='_logforward') with f: f.search_files(f'../pokedex-with-bit/pods/chunk.yml')
Usage of
index_lines()¶
from jina.flow import Flow input_str = ['aaa','bbb'] f = Flow().add(uses='_logforward') with f: f.index_lines(lines=input_str)
index_lines() reads input data from
input_str. As you can see above, there are 2 elements in
input_str, so in the protobuf you can see:
2
docsunder
request.index.docs
Each individual string in
request.index.docs.text.
request { request_id: 1 index { docs { id: 1 weight: 1.0 length: 100 mime_type: "text/plain" text: "aaa" } docs { id: 2 weight: 1.0 length: 100 mime_type: "text/plain" text: "bbb" } } }
search_lines() is the API for searching
text.
from jina.flow import Flow text = input('please type a sentence: ') f = Flow().add(uses='_logforward') with f: f.search_lines(lines=[text, ])
Why Bytes/Buffer?¶
You may wonder why we use bytes instead of some Python native objects as the input. There are two reasons:
As a universal search framework, Jina accepts documents in different formats, from text to image to video. Raw bytes is the only consistent data representation over those modalities.
Clients can be written in languages other than Python. Raw bytes is the only data type that can be recognized across languages.
But Then How Can Jina Recognize Those Bytes?¶
The answer relies on the Flow’s
crafter, and the “type recognition” is implemented as a “deserialization” step. The
crafter is often the Flow’s first component, and translates the raw bytes into a Python native object.
For example, let’s say our input function reads gif videos in binary:
def input_fn(): for g in all_gif_files: with open(g, 'rb') as fp: yield fp.read()
The corresponding
crafter takes whatever is stored in the
buffer and tries to make sense out of it:
import io from PIL import Image from jina.executors.crafters import BaseCrafter class GifCrafter(BaseCrafter): def craft(self, buffer): im = Image.open(io.BytesIO(buffer)) # manipulate the image here # ...
In this example,
PIL.Image.open takes either the filename or file object as argument. We convert
buffer to a file object here using
io.BytesIO.
Alternatively, if your input function is only sending the file name, like:
def input_fn(): for g in all_gif_files: yield g.encode() # convert str to binary string b'str'
Then the corresponding
crafter should change accordingly.
from PIL import Image from jina.executors.crafters import BaseCrafter class GifCrafter(BaseCrafter): def craft(self, buffer): im = Image.open(buffer.decode()) # manipulate the image here # ...
buffer now stores the file path, so we convert it back to a normal string with
.decode() and read from the file path.
You can also combine two types of data, like:
def input_fn(): for g in all_gif_files: with open(g, 'rb') as fp: yield g.encode() + b'JINA_DELIM' + fp.read()
The
crafter then can be implemented as:
from jina.executors.crafters import BaseCrafter import io from PIL import Image class GifCrafter(BaseCrafter): def craft(self, buffer, *args, **kwargs): file_name, img_raw = buffer.split(b'JINA_DELIM') im = Image.open(io.BytesIO(img_raw)) # manipulate the image and file_name here # ...
As you can see from the examples above, we can use
buffer to transfer strings and gif videos.
.index(),
.search() and
.train() also accept
batch_size which controls the number of Documents per request. However, this does not change the
crafter’s implementation, as the
crafter always works at the Document level.
Further reading:
Output Function¶
TL;DR¶
Everything works asynchronously
Use
callback=to specify the output function
Jina’s output function is basically asynchronous callback. For the sake of efficiency, Jina is designed to be highly asynchronous on data transmission. You just keep sending requests to Jina without any blocking. When a request is finished, the callback function is invoked.
For example, the following will print the request after a
IndexRequest is finished:
with f: f.index(input_fn, on_done=print)
This is quite useful when debugging.
In the “Hello, World!” example, we use a callback function to append the top-k results to an HTML page:
def print_html(resp): for d in resp.search.docs: vi = 'o">+ d.meta_info.decode() result_html.append(f'<tr><td><img src="{vi}"/></td><td>') for kk in d.matches: kmi = 'o">+ kk.match_doc.meta_info.decode() result_html.append(f'<img src="{kmi}" style="opacity:{kk.score.value}"/>') # k['score']['explained'] = json.loads(kk.score.explained) result_html.append('</td></tr>\n')
f.search(input_fn, on_done=print_html, top_k=args.top_k, batch_size=args.query_batch_size) | https://docs.jina.ai/v1.0.0/chapters/io/index.html | CC-MAIN-2021-10 | refinedweb | 1,453 | 62.24 |
You want to know WHERE your .exe is
Why? Maybe, you are building a website and have a subdir ./static below the directory in which your .exe resides, from where you server all .css and .js that is static. Or, you put the logs right below the programs dir within ./log
See also HowToDetermineIfRunningFromExe for a shorter recipe
Problem
You cannot rely on __file__, because __file__ is not there in the py2exed main-script. You could try to rely on ".", the "Current Directory", but that only works if your application was started there. That may happen, but it is not guaranteed.
Solution
import os import jpath if hasattr(sys,"frozen") and sys.frozen in ("windows_exe", "console_exe"): p=jpath.path(os.path.abspath(sys.executable)).dirname()
now p contains the directory where your exe resides, no matter from where your exe has been called (maybe it is in the path)
"jpath" is the famous path module from "Jason Orendorff"
Alternate Solution
The solution above will fail with a UnicodeDecodeError when using a Japanese (or Chinese/Korean, probably) version of Windows (XP + others?), and the path contains double-byte characters. This is because the sys.executable is in the file system encoding ('mbcs' on WinXP Japanese), and python tries to decode it to Unicode using the default Python encoding ('ascii'). The solution is to explicitly convert the path to Unicode using the default file system encoding. Additionally, checking only for a value of "windows_exe" will fail for a console application. I decided to live dangerously, and just test for "frozen" :-).
import os import sys def we_are_frozen(): """Returns whether we are frozen via py2exe. This will affect how we find out where we are located.""" return hasattr(sys, "frozen") def module_path(): """ This will get us the program's directory, even if we are frozen using py2exe""" if we_are_frozen(): return os.path.dirname(unicode(sys.executable, sys.getfilesystemencoding( ))) return os.path.dirname(unicode(__file__, sys.getfilesystemencoding( ))) | http://www.py2exe.org/index.cgi/WhereAmI | CC-MAIN-2013-20 | refinedweb | 322 | 58.99 |
>> It can't be older then 2.5.4 becouse before it just wasn't there.> > > Old is a few days-a week on my time scale but the post i am referring to > is going at least a month or so back. And of course it can be older, you > seem to be forgetting that Andre's IDE patches have been around for a > _long_ time now for 2.4.x...But is a short time ago that IDE problems appeared in 2.4.18...Please just have a look at the actual code in ide-taskfile.cand ask yourself whatever it is *really* stable and usable.Using an editor with syntax highlighting for codeparts commented out by #if 0 .. #endif will reveal most ofthe trouble instantly.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2002/3/5/95 | CC-MAIN-2014-15 | refinedweb | 156 | 75.81 |
The lock keyword locks a specified code block so two threads can’t process the same code block at the same time. When one threads exits the locked block, another thread can enter the locked code block. The Monitor class offers the same functionality, but you specify the start and end of the locked code block with Monitor.Enter and Monitor.Exit. For both techniques you need a variable to lock on. A common pattern is to lock on this for instance data in a class or typeof(type) for static data.
using System; using System.Threading; public class LockObject { private static int counter = 0; public static void MonitorIncement() { Monitor.Enter(typeof(LockObject)); counter++; Monitor.Exit(typeof(LockObject)); } public static void LockIncement() { lock (typeof(LockObject)) { counter++; } } }
The problem with this is, this of typeof(type) could also be the lock object in an entirely different synchronization block outside the class in a unrelated code block. The result would be that two completely different synchronization blocks that synchronizes two different sets of data can block each other. The same thing can happen if you use a string a lock variable, because all the strings refer to the same instance. These problems can be solved with a private read-only field to lock on!
public class LockObject { private static int counter = 0; private readonly static object syn = new object(); public static void MonitorIncement() { Monitor.Enter(syn); counter++; Monitor.Exit(syn); } public static void LockIncement() { lock (syn) { counter++; } } }
The lock object is private so it can’t be used by code blocks outside the class as lock object! The read-only attribute prevents the variable from changes.
While you may be doing this as a simple example, others may quickly oversee that for incrementing,it’s easiest to use the Interlocked class:
Interlocked.Increment(ref counter); | http://www.mbaldinger.com/multithreading-which-lock-object-should-i-use/ | CC-MAIN-2014-15 | refinedweb | 303 | 55.74 |
Show Table of Contents
Example 23.3. The
Example 23.4. Adding a Port to a
23.3. Adding a Port to a Service
Overview
The endpoint information for a service is defined in a
wsdl:portelement, and the
Serviceobject creates a proxy instance for each of the endpoints defined in a WSDL contract, if one is specified. If you do not specify a WSDL contract when you create your
Serviceobject, the
Serviceobject has no information about the endpoints that implement your service, and therefore cannot create any proxy instances. In this case, you must provide the
Serviceobject with the information needed to represent a
wsdl:portelement using the
addPort()method.
The addPort() method
The
Serviceclass defines an
addPort()method, shown in Example 23.3, “The
addPort()Method”, that is used in cases where there is no WSDL contract available to the consumer implementation. The
addPort()method allows you to give a
Serviceobject the information, which is typically stored in a
wsdl:portelement, necessary to create a proxy for a service implementation.
Example 23.3. The
addPort() Method
void addPort(QName portName,
String bindingId,
String endpointAddress)
throws WebServiceException;
The value of the
portNameis a QName. The value of its namespace part is the target namespace of the service. The service's target namespace is specified in the targetNamespace property of the
@WebServiceannotation. The value of the QName's local part is the value of
wsdl:portelement's
nameattribute.parameter is a string that uniquely identifies the type of binding used by the endpoint. For a SOAP binding you use the standard SOAP namespace:. If the endpoint is not using a SOAP binding, the value of the
bindingIdparameter is determined by the binding developer.
The value of the
endpointAddressparameter is the address where the endpoint is published. For a SOAP/HTTP endpoint, the address is an HTTP address. Transports other than HTTP use different address schemes.
Example
Example 23.4, “Adding a Port to a
ServiceObject” shows code for adding a port to the
Serviceobject created in Example 23.2, “Creating a
ServiceObject”.
Example 23.4. Adding a Port to a
Service Object
package com.fusesource.demo; import javax.xml.namespace.QName; import javax.xml.ws.Service; public class Client { public static void main(String args[]) { ... 1 QName portName = new QName("", "stockQuoteReporterPort"); 2 s.addPort(portName, 3 "", 4 ""); ... } }
The code in Example 23.4, “Adding a Port to a
ServiceObject” does the following: | https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/apache_cxf_development_guide/jaxwsconsumerdevjavafirstport | CC-MAIN-2021-31 | refinedweb | 402 | 58.99 |
![if !(IE 9)]> <![endif]>
The analyzer has detected an expression of the 'this == 0' pattern. This expression may work well in some cases but it is extremely dangerous due to certain reasons. Here is a simple example:
class CWindow { HWND handle; public: HWND GetSafeHandle() const { return this == 0 ? 0 : handle; } };
Calling the CWindow::GetSafeHandle() method for the null pointer 'this' will generally lead to undefined behavior, according to the C++ standard. But since this class' fields are not being accessed while executing the method, it may run well. On the other hand, two negative scenarios are possible when executing this code. First, since the this pointer can never be null, according to the C++ standard, the compiler may optimize the method call by reducing it to the following line:
return handle;
Second, suppose we've got the following code fragment:
class CWindow { .... // CWindow from the previous example }; class MyWindowAdditions { unsigned long long x; // 8 bytes }; class CMyWindow: public MyWindowAdditions, public CWindow { .... }; .... void foo() { CMyWindow * nullWindow = NULL; nullWindow->GetSafeHandle(); }
This code will cause reading from the memory at the address 0x00000008. You can make sure it's true by adding the following line:
std::cout << nullWindow->handle << std::endl;
What you will get on the screen is the address 0x00000008, for the source pointer NULL (0x00000000) has been shifted in such a way as to point to the beginning of the CWindow class' subobject. For this purpose, it needs to be shifted by sizeof(MyWindowAdditions) bytes.
What's most interesting, the "this == 0" check turns absolutely meaningless now. The 'this' pointer is always equal to the 0x00000008 value at least.
On the other hand, the error won't reveal itself if you swap the base classes in CMyWindow's declaration:
class CMyWindow: public CWindow, public MyWindowAdditions{ .... };
All this may cause very vague errors.
Unfortunately, fixing the code is far from trivial. Theoretically, a correct way out in such cases is to change the class method to static. This will require editing a lot of other places where this method call is used.
class CWindow { HWND handle; public: static HWND GetSafeHandle(CWindow * window) { return window == 0 ? 0 : window->handle; } };
Another way is to use the Null Object pattern which will also require plenty of work.
class CWindow { HWND handle; public: HWND GetSafeHandle() const { return handle; } }; class CNullWindow : public CWindow { public: HWND GetSafeHandle() const { return nullptr; } }; .... void foo(void) { CNullWindow nullWindow; CWindow * windowPtr = &nullWindow; // Output: 0 std::cout << windowPtr->GetSafeHandle() << std::endl; }
It should be noted that this defect is extremely dangerous because one is usually too short of time to care about solving it, for it all seems to "work well as it is", while refactoring is too expensive. But code working stably for years may suddenly fail after a slightest change of circumstances: building for a different operating system, changing to a different compiler version (including update), and so on. The following example is quite illustrative: the GCC compiler, starting with version 4.9.0, has learned to throw away the check for null of the pointer dereferenced a bit earlier in the code (see the V595 diagnostic):
int wtf( int* to, int* from, size_t count ) { memmove( to, from, count ); if( from != 0 ) // <= condition is always true after optimization return *from; return 0; }
There are quite a lot of real-life examples of problem code turned broken because of undefined behavior. Here are a few of them to underline the importance of the problem.
Example No. 1. A vulnerability in the Linux kernel:
struct sock *sk = tun->sk;  // initialize sk with tun->sk
....
if (!tun)  // <= always false
  return POLLERR;  // if tun is NULL return error
Example No. 2. Incorrect behavior of srandomdev():
struct timeval tv;
unsigned long junk;  // <= not initialized on purpose
gettimeofday(&tv, NULL);
// LLVM: analogue of srandom() of uninitialized variable,
// i.e. tv.tv_sec, tv.tv_usec and getpid() are not taken into account.
srandom((getpid() << 16) ^ tv.tv_sec ^ tv.tv_usec ^ junk);
Example No. 3. An artificial example that demonstrates very clearly both compilers' aggressive optimization policy concerning undefined behavior and new ways to "shoot yourself in the foot":
#include <stdio.h>
#include <stdlib.h>

int main()
{
  int *p = (int*)malloc(sizeof(int));
  int *q = (int*)realloc(p, sizeof(int));
  *p = 1;
  *q = 2;
  if (p == q)
    printf("%d %d ", *p, *q);  // <= Clang r160635: Output: 1 2
}
As far as we know, none of the compilers ignores the this == 0 check as of the implementation date of this diagnostic, but it's just a matter of time, because the C++ standard clearly reads (§9.3.1/1): "If a nonstatic member function of a class X is called for an object that is not of type X, or of a type derived from X, the behavior is undefined." In other words, the result of calling any nonstatic member function with this == 0 is undefined, and sooner or later compilers will start substituting false for the (this == 0) check during compilation.
mmap2 - map files or devices into memory
Current Version:
Linux Kernel - 3.80
Synopsis
#include <sys/mman.h>

void *mmap2(void *addr, size_t length, int prot,
            int flags, int fd, off_t pgoffset);
Description
The mmap2() system call operates in exactly the same way as mmap(2), except that the final argument specifies the offset into the file (or other object) in units of 4096 bytes, instead of bytes as done by mmap(2). This enables applications that use a 32-bit off_t to map large files (up to 2^44 bytes).
Versions
mmap2() is available since Linux 2.3.31.
Conforming To
This system call is Linux-specific.
Notes
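Although mmap2() itself is not directly exposed to portable code, its page-unit offset convention can be illustrated with Python's mmap module (shown here purely as an analogy): Python likewise requires the byte offset to be a multiple of mmap.ALLOCATIONGRANULARITY, which is the page size on Linux.

```python
import mmap
import os
import tempfile

# One "page" as defined by the platform's mapping granularity.
page = mmap.ALLOCATIONGRANULARITY

# Write two pages of data to a temporary file.
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * page + b"B" * page)

# Map only the second page. The offset is given in bytes, but it
# must be a whole number of pages -- much as mmap2() takes its
# offset argument in whole 4096-byte units.
with mmap.mmap(fd, page, offset=page, access=mmap.ACCESS_READ) as m:
    assert m[:4] == b"BBBB"

os.close(fd)
os.remove(path)
```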
See Also
getpagesize(2), mmap(2), mremap(2), msync(2), shm_open(3)
Colophon
This page is part of release 3.80 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
License & Copyright
Copyright (C) 2002, 31 Jan 2002, Michael Kerrisk
Added description of mmap2. Modified 2004-11-25, mtk -- removed stray #endif in prototype.
{-# LANGUAGE GADTs, BangPatterns, RecordWildCards,
GeneralizedNewtypeDeriving, NondecreasingIndentation, TupleSections #-}
module CmmBuildInfoTables
( CAFSet, CAFEnv, cafAnal
, doSRTs, ModuleSRTInfo, emptySRT
) where
import GhcPrelude hiding (succ)
import Id
import BlockId
import Hoopl.Block
import Hoopl.Graph
import Hoopl.Label
import Hoopl.Collections
import Hoopl.Dataflow
import Module
import GHC.Platform
import Digraph
import CLabel
import Cmm
import CmmUtils
import DynFlags
import Maybes
import Outputable
import SMRep
import UniqSupply
import CostCentre
import GHC.StgToCmm.Heap
import Control.Monad
import Data.Map (Map)
import qualified Data.Map as Map
import Data.Set (Set)
import qualified Data.Set as Set
import Data.Tuple
import Control.Monad.Trans.State
import Control.Monad.Trans.Class
{- Note [SRTs]
SRTs are the mechanism by which the garbage collector can determine
the live CAFs in the program.
Representation
^^^^^^^^^^^^^^
+------+
| info |
| | +-----+---+---+---+
| -------->|SRT_2| | | | | 0 |
|------| +-----+-|-+-|-+---+
| | | |
| code | | |
| | v v
An SRT is simply an object in the program's data segment. It has the
same representation as a static constructor. There are 16
pre-compiled SRT info tables: stg_SRT_1_info, .. stg_SRT_16_info,
representing SRT objects with 1-16 pointers, respectively.
The entries of an SRT object point to static closures, which are either
- FUN_STATIC, THUNK_STATIC or CONSTR
- Another SRT (actually just a CONSTR)
The final field of the SRT is the static link field, used by the
garbage collector to chain together static closures that it visits and
to determine whether a static closure has been visited or not. (see
Note [STATIC_LINK fields])
By traversing the transitive closure of an SRT, the GC will reach all
of the CAFs that are reachable from the code associated with this SRT.
If we need to create an SRT with more than 16 entries, we build a
chain of SRT objects with all but the last having 16 entries.
+-----+---+- -+---+---+
|SRT16| | | | | | 0 |
+-----+-|-+- -+-|-+---+
| |
v v
+----+---+---+---+
|SRT2| | | | | 0 |
+----+-|-+-|-+---+
| |
| |
v v
Referring to an SRT from the info table
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following things have SRTs:
- Static functions (FUN)
- Static thunks (THUNK), ie. CAFs
- Continuations (RET_SMALL, etc.)
In each case, the info table points to the SRT.
- info->srt is zero if there's no SRT, otherwise:
- info->srt == 1 and info->f.srt_offset points to the SRT
e.g. for a FUN with an SRT:
StgFunInfoTable +------+
info->f.srt_offset | ------------> offset to SRT object
StgStdInfoTable +------+
info->layout.ptrs | ... |
info->layout.nptrs | ... |
info->srt | 1 |
info->type | ... |
|------|
On x86_64, we optimise the info table representation further. The
offset to the SRT can be stored in 32 bits (all code lives within a
2GB region in x86_64's small memory model), so we can save a word in
the info table by storing the srt_offset in the srt field, which is
half a word.
On x86_64 with TABLES_NEXT_TO_CODE (except on MachO, due to #15169):
- info->srt is zero if there's no SRT, otherwise:
- info->srt is an offset from the info pointer to the SRT object
StgStdInfoTable +------+
info->layout.ptrs | |
info->layout.nptrs | |
info->srt | ------------> offset to SRT object
|------|
EXAMPLE
^^^^^^^
f = \x. ... g ...
where
g = \y. ... h ... c1 ...
h = \z. ... c2 ...
c1 & c2 are CAFs
g and h are local functions, but they have no static closures. When
we generate code for f, we start with a CmmGroup of four CmmDecls:
[ f_closure, f_entry, g_entry, h_entry ]
we process each CmmDecl separately in cpsTop, giving us a list of
CmmDecls. e.g. for f_entry, we might end up with
[ f_entry, f1_ret, f2_proc ]
where f1_ret is a return point, and f2_proc is a proc-point. We have
a CAFSet for each of these CmmDecls, let's suppose they are
[ f_entry{g_info}, f1_ret{g_info}, f2_proc{} ]
[ g_entry{h_info, c1_closure} ]
[ h_entry{c2_closure} ]
Next, we make an SRT for each of these functions:
f_srt : [g_info]
g_srt : [h_info, c1_closure]
h_srt : [c2_closure]
Now, for g_info and h_info, we want to refer to the SRTs for g and h
respectively, which we'll label g_srt and h_srt:
f_srt : [g_srt]
g_srt : [h_srt, c1_closure]
h_srt : [c2_closure]
Now, when an SRT has a single entry, we don't actually generate an SRT
closure for it, instead we just replace references to it with its
single element. So, since h_srt == c2_closure, we have
f_srt : [g_srt]
g_srt : [c2_closure, c1_closure]
h_srt : [c2_closure]
and the only SRT closure we generate is
g_srt = SRT_2 [c2_closure, c1_closure]
Optimisations
^^^^^^^^^^^^^
To reduce the code size overhead and the cost of traversing SRTs in
the GC, we want to simplify SRTs where possible. We therefore apply
the following optimisations. Each has a [keyword]; search for the
keyword in the code below to see where the optimisation is
implemented.
1. [Inline] we never create an SRT with a single entry, instead we
point to the single entry directly from the info table.
i.e. instead of
+------+
| info |
| | +-----+---+---+
| -------->|SRT_1| | | 0 |
|------| +-----+-|-+---+
| | |
| code | |
| | v
C
we can point directly to the closure:
+------+
| info |
| |
| -------->C
|------|
| |
| code |
| |
Furthermore, the SRT for any code that refers to this info table
can point directly to C.
The exception to this is when we're doing dynamic linking. In that
case, if the closure is not locally defined then we can't point to
it directly from the info table, because this is the text section
which cannot contain runtime relocations. In this case we skip this
optimisation and generate the singleton SRT, because SRTs are in the
data section and *can* have relocatable references.
2. [FUN] A static function closure can also be an SRT, we simply put
the SRT entries as fields in the static closure. This makes a lot
of sense: the static references are just like the free variables of
the FUN closure.
i.e. instead of
f_closure:
+-----+---+
| | | 0 |
+- |--+---+
| +------+
| | info | f_srt:
| | | +-----+---+---+---+
| | -------->|SRT_2| | | | + 0 |
`----------->|------| +-----+-|-+-|-+---+
| | | |
| code | | |
| | v v
We can generate:
f_closure:
+-----+---+---+---+
| | | | | | | 0 |
+- |--+-|-+-|-+---+
| | | +------+
| v v | info |
| | |
| | 0 |
`----------->|------|
| |
| code |
| |
(note: we can't do this for THUNKs, because the thunk gets
overwritten when it is entered, so we wouldn't be able to share
this SRT with other info tables that want to refer to it (see
[Common] below). FUNs are immutable so don't have this problem.)
3. [Common] Identical SRTs can be commoned up.
4. [Filter] If an SRT A refers to an SRT B and a closure C, and B also
refers to C (perhaps transitively), then we can omit the reference
to C from A.
Note that there are many other optimisations that we could do, but
aren't implemented. In general, we could omit any reference from an
SRT if everything reachable from it is also reachable from the other
fields in the SRT. Our [Filter] optimisation is a special case of
this.
Another opportunity we don't exploit is this:
A = {X,Y,Z}
B = {Y,Z}
C = {X,B}
Here we could use C = {A} and therefore [Inline] C = A.
-}
-- ---------------------------------------------------------------------
{- Note [Invalid optimisation: shortcutting]
You might think that if we have something like
A's SRT = {B}
B's SRT = {X}
that we could replace the reference to B in A's SRT with X.
A's SRT = {X}
B's SRT = {X}
and thereby perhaps save a little work at runtime, because we don't
have to visit B.
But this is NOT valid.
Consider these cases:
0. B can't be a constructor, because constructors don't have SRTs
1. B is a CAF. This is the easy one. Obviously we want A's SRT to
point to B, so that it keeps B alive.
2. B is a function. This is the tricky one. The reason we can't
shortcut in this case is that we aren't allowed to resurrect static
objects.
== How does this cause a problem? ==
The particular case that cropped up when we tried this was #15544.
- A is a thunk
- B is a static function
- X is a CAF
- suppose we GC when A is alive, and B is not otherwise reachable.
- B is "collected", meaning that it doesn't make it onto the static
objects list during this GC, but nothing bad happens yet.
- Next, suppose we enter A, and then call B. (remember that A refers to B)
At the entry point to B, we GC. This puts B on the stack, as part of the
RET_FUN stack frame that gets pushed when we GC at a function entry point.
- This GC will now reach B
- But because B was previously "collected", it breaks the assumption
that static objects are never resurrected. See Note [STATIC_LINK
fields] in rts/sm/Storage.h for why this is bad.
- In practice, the GC thinks that B has already been visited, and so
doesn't visit X, and catastrophe ensues.
== Isn't this caused by the RET_FUN business? ==
Maybe, but could you prove that RET_FUN is the only way that
resurrection can occur?
So, no shortcutting.
-}
-- ---------------------------------------------------------------------
-- Label types
-- Labels that come from cafAnal can be:
-- - _closure labels for static functions or CAFs
-- - _info labels for dynamic functions, thunks, or continuations
-- - _entry labels for functions or thunks
--
-- Meanwhile the labels on top-level blocks are _entry labels.
--
-- To put everything in the same namespace we convert all labels to
-- closure labels using toClosureLbl. Note that some of these
-- labels will not actually exist; that's ok because we're going to
-- map them to SRTEntry later, which ranges over labels that do exist.
--
newtype CAFLabel = CAFLabel CLabel
deriving (Eq,Ord,Outputable)
type CAFSet = Set CAFLabel
type CAFEnv = LabelMap CAFSet
mkCAFLabel :: CLabel -> CAFLabel
mkCAFLabel lbl = CAFLabel (toClosureLbl lbl)
-- This is a label that we can put in an SRT. It *must* be a closure label,
-- pointing to either a FUN_STATIC, THUNK_STATIC, or CONSTR.
newtype SRTEntry = SRTEntry CLabel
deriving (Eq, Ord, Outputable)
-- ---------------------------------------------------------------------
-- CAF analysis
-- |
-- For each code block:
-- - collect the references reachable from this code block to FUN,
-- THUNK or RET labels for which hasCAF == True
--
-- This gives us a `CAFEnv`: a mapping from code block to sets of labels
--
cafAnal
:: LabelSet -- The blocks representing continuations, ie. those
-- that will get RET info tables. These labels will
-- get their own SRTs, so we don't aggregate CAFs from
-- references to these labels, we just use the label.
-> CLabel -- The top label of the proc
-> CmmGraph
-> CAFEnv
cafAnal contLbls topLbl cmmGraph =
analyzeCmmBwd cafLattice
(cafTransfers contLbls (g_entry cmmGraph) topLbl) cmmGraph mapEmpty
cafLattice :: DataflowLattice CAFSet
cafLattice = DataflowLattice Set.empty add
where
add (OldFact old) (NewFact new) =
let !new' = old `Set.union` new
in changedIf (Set.size new' > Set.size old) new'
cafTransfers :: LabelSet -> Label -> CLabel -> TransferFun CAFSet
cafTransfers contLbls entry topLbl
(BlockCC eNode middle xNode) fBase =
let joined = cafsInNode xNode $! live'
!result = foldNodesBwdOO cafsInNode middle joined
facts = mapMaybe successorFact (successors xNode)
live' = joinFacts cafLattice facts
successorFact s
-- If this is a loop back to the entry, we can refer to the
-- entry label.
| s == entry = Just (add topLbl Set.empty)
-- If this is a continuation, we want to refer to the
-- SRT for the continuation's info table
| s `setMember` contLbls
= Just (Set.singleton (mkCAFLabel (infoTblLbl s)))
-- Otherwise, takes the CAF references from the destination
| otherwise
= lookupFact s fBase
cafsInNode :: CmmNode e x -> CAFSet -> CAFSet
cafsInNode node set = foldExpDeep addCaf node set
addCaf expr !set =
case expr of
CmmLit (CmmLabel c) -> add c set
CmmLit (CmmLabelOff c _) -> add c set
CmmLit (CmmLabelDiffOff c1 c2 _ _) -> add c1 $! add c2 set
_ -> set
add l s | hasCAF l = Set.insert (mkCAFLabel l) s
| otherwise = s
in mapSingleton (entryLabel eNode) result
-- -----------------------------------------------------------------------------
-- ModuleSRTInfo
data ModuleSRTInfo = ModuleSRTInfo
{ thisModule :: Module
-- ^ Current module being compiled. Required for calling labelDynamic.
, dedupSRTs :: Map (Set SRTEntry) SRTEntry
-- ^ previous SRTs we've emitted, so we can de-duplicate.
-- Used to implement the [Common] optimisation.
, flatSRTs :: Map SRTEntry (Set SRTEntry)
-- ^ The reverse mapping, so that we can remove redundant
-- entries. e.g. if we have an SRT [a,b,c], and we know that b
-- points to [c,d], we can omit c and emit [a,b].
-- Used to implement the [Filter] optimisation.
}
instance Outputable ModuleSRTInfo where
ppr ModuleSRTInfo{..} =
text "ModuleSRTInfo:" <+> ppr dedupSRTs <+> ppr flatSRTs
emptySRT :: Module -> ModuleSRTInfo
emptySRT mod =
ModuleSRTInfo
{ thisModule = mod
, dedupSRTs = Map.empty
, flatSRTs = Map.empty }
-- -----------------------------------------------------------------------------
-- Constructing SRTs
{- Implementation notes
- In each CmmDecl there is a mapping info_tbls from Label -> CmmInfoTable
- The entry in info_tbls corresponding to g_entry is the closure info
table, the rest are continuations.
- Each entry in info_tbls possibly needs an SRT. We need to make a
label for each of these.
- We get the CAFSet for each entry from the CAFEnv
-}
-- | Return a (Label,CLabel) pair for each labelled block of a CmmDecl,
-- where the label is
-- - the info label for a continuation or dynamic closure
-- - the closure label for a top-level function (not a CAF)
getLabelledBlocks :: CmmDecl -> [(Label, CAFLabel)]
getLabelledBlocks (CmmData _ _) = []
getLabelledBlocks (CmmProc top_info _ _ _) =
[ (blockId, mkCAFLabel (cit_lbl info))
| (blockId, info) <- mapToList (info_tbls top_info)
, let rep = cit_rep info
, not (isStaticRep rep) || not (isThunkRep rep)
]
-- | Put the labelled blocks that we will be annotating with SRTs into
-- dependency order. This is so that we can process them one at a
-- time, resolving references to earlier blocks to point to their
-- SRTs. CAFs themselves are not included here; see getCAFs below.
depAnalSRTs
:: CAFEnv
-> [CmmDecl]
-> [SCC (Label, CAFLabel, Set CAFLabel)]
depAnalSRTs cafEnv decls =
srtTrace "depAnalSRTs" (ppr graph) graph
where
labelledBlocks = concatMap getLabelledBlocks decls
labelToBlock = Map.fromList (map swap labelledBlocks)
graph = stronglyConnCompFromEdgedVerticesOrd
[ let cafs' = Set.delete lbl cafs in
DigraphNode (l,lbl,cafs') l
(mapMaybe (flip Map.lookup labelToBlock) (Set.toList cafs'))
| (l, lbl) <- labelledBlocks
, Just cafs <- [mapLookup l cafEnv] ]
-- | Get (Label, CAFLabel, Set CAFLabel) for each block that represents a CAF.
-- These are treated differently from other labelled blocks:
-- - we never shortcut a reference to a CAF to the contents of its
-- SRT, since the point of SRTs is to keep CAFs alive.
-- - CAFs therefore don't take part in the dependency analysis in depAnalSRTs.
-- instead we generate their SRTs after everything else.
getCAFs :: CAFEnv -> [CmmDecl] -> [(Label, CAFLabel, Set CAFLabel)]
getCAFs cafEnv decls =
[ (g_entry g, mkCAFLabel topLbl, cafs)
| CmmProc top_info topLbl _ g <- decls
, Just info <- [mapLookup (g_entry g) (info_tbls top_info)]
, let rep = cit_rep info
, isStaticRep rep && isThunkRep rep
, Just cafs <- [mapLookup (g_entry g) cafEnv]
]
-- | Get the list of blocks that correspond to the entry points for
-- FUN_STATIC closures. These are the blocks for which if we have an
-- SRT we can merge it with the static closure. [FUN]
getStaticFuns :: [CmmDecl] -> [(BlockId, CLabel)]
getStaticFuns decls =
[ (g_entry g, lbl)
| CmmProc top_info _ _ g <- decls
, Just info <- [mapLookup (g_entry g) (info_tbls top_info)]
, Just (id, _) <- [cit_clo info]
, let rep = cit_rep info
, isStaticRep rep && isFunRep rep
, let lbl = mkLocalClosureLabel (idName id) (idCafInfo id)
]
-- | Maps labels from 'cafAnal' to the final CLabel that will appear
-- in the SRT.
-- - closures with singleton SRTs resolve to their single entry
-- - closures with larger SRTs map to the label for that SRT
-- - CAFs must not map to anything!
-- - if a label maps to Nothing, we found that this label's SRT
-- is empty, so we don't need to refer to it from other SRTs.
type SRTMap = Map CAFLabel (Maybe SRTEntry)
-- | resolve a CAFLabel to its SRTEntry using the SRTMap
resolveCAF :: SRTMap -> CAFLabel -> Maybe SRTEntry
resolveCAF srtMap lbl@(CAFLabel l) =
Map.findWithDefault (Just (SRTEntry (toClosureLbl l))) lbl srtMap
-- | Attach SRTs to all info tables in the CmmDecls, and add SRT
-- declarations to the ModuleSRTInfo.
--
doSRTs
:: DynFlags
-> ModuleSRTInfo
-> [(CAFEnv, [CmmDecl])]
-> IO (ModuleSRTInfo, [CmmDecl])
doSRTs dflags moduleSRTInfo tops = do
us <- mkSplitUniqSupply 'u'
-- Ignore the original grouping of decls, and combine all the
-- CAFEnvs into a single CAFEnv.
let (cafEnvs, declss) = unzip tops
cafEnv = mapUnions cafEnvs
decls = concat declss
staticFuns = mapFromList (getStaticFuns decls)
-- Put the decls in dependency order. Why? So that we can implement
-- [Inline] and [Filter]. If we need to refer to an SRT that has
-- a single entry, we use the entry itself, which means that we
-- don't need to generate the singleton SRT in the first place. But
-- to do this we need to process blocks before things that depend on
-- them.
let
sccs = depAnalSRTs cafEnv decls
cafsWithSRTs = getCAFs cafEnv decls
-- On each strongly-connected group of decls, construct the SRT
-- closures and the SRT fields for info tables.
let result ::
[ ( [CmmDecl] -- generated SRTs
, [(Label, CLabel)] -- SRT fields for info tables
, [(Label, [SRTEntry])] -- SRTs to attach to static functions
) ]
((result, _srtMap), moduleSRTInfo') =
initUs_ us $
flip runStateT moduleSRTInfo $
flip runStateT Map.empty $ do
nonCAFs <- mapM (doSCC dflags staticFuns) sccs
cAFs <- forM cafsWithSRTs $ \(l, cafLbl, cafs) ->
oneSRT dflags staticFuns [l] [cafLbl] True{-is a CAF-} cafs
return (nonCAFs ++ cAFs)
(declss, pairs, funSRTs) = unzip3 result
-- Next, update the info tables with the SRTs
let
srtFieldMap = mapFromList (concat pairs)
funSRTMap = mapFromList (concat funSRTs)
decls' = concatMap (updInfoSRTs dflags srtFieldMap funSRTMap) decls
return (moduleSRTInfo', concat declss ++ decls')
-- | Build the SRT for a strongly-connected component of blocks
doSCC
:: DynFlags
-> LabelMap CLabel -- which blocks are static function entry points
-> SCC (Label, CAFLabel, Set CAFLabel)
-> StateT SRTMap
(StateT ModuleSRTInfo UniqSM)
( [CmmDecl] -- generated SRTs
, [(Label, CLabel)] -- SRT fields for info tables
, [(Label, [SRTEntry])] -- SRTs to attach to static functions
)
doSCC dflags staticFuns (AcyclicSCC (l, cafLbl, cafs)) =
oneSRT dflags staticFuns [l] [cafLbl] False cafs
doSCC dflags staticFuns (CyclicSCC nodes) = do
-- build a single SRT for the whole cycle, see Note [recursive SRTs]
let (blockids, lbls, cafsets) = unzip3 nodes
cafs = Set.unions cafsets
oneSRT dflags staticFuns blockids lbls False cafs
{- Note [recursive SRTs]
If the dependency analyser has found us a recursive group of
declarations, then we build a single SRT for the whole group, on the
grounds that everything in the group is reachable from everything
else, so we lose nothing by having a single SRT.
However, there are a couple of wrinkles to be aware of.
* The Set CAFLabel for this SRT will contain labels in the group
itself. The SRTMap will therefore not contain entries for these labels
yet, so we can't turn them into SRTEntries using resolveCAF. BUT we
can just remove recursive references from the Set CAFLabel before
generating the SRT - the SRT will still contain all the CAFLabels that
we need to refer to from this group's SRT.
* That is, EXCEPT for static function closures. For the same reason
described in Note [Invalid optimisation: shortcutting], we cannot omit
references to static function closures.
- But, since we will merge the SRT with one of the static function
closures (see [FUN]), we can omit references to *that* static
function closure from the SRT.
-}
-- | Build an SRT for a set of blocks
oneSRT
:: DynFlags
-> LabelMap CLabel -- which blocks are static function entry points
-> [Label] -- blocks in this set
-> [CAFLabel] -- labels for those blocks
-> Bool -- True <=> this SRT is for a CAF
-> Set CAFLabel -- SRT for this set
-> StateT SRTMap
(StateT ModuleSRTInfo UniqSM)
( [CmmDecl] -- SRT objects we built
, [(Label, CLabel)] -- SRT fields for these blocks' itbls
, [(Label, [SRTEntry])] -- SRTs to attach to static functions
)
oneSRT dflags staticFuns blockids lbls isCAF cafs = do
srtMap <- get
topSRT <- lift get
let
-- Can we merge this SRT with a FUN_STATIC closure?
(maybeFunClosure, otherFunLabels) =
case [ (l,b) | b <- blockids, Just l <- [mapLookup b staticFuns] ] of
[] -> (Nothing, [])
((l,b):xs) -> (Just (l,b), map (mkCAFLabel . fst) xs)
-- Remove recursive references from the SRT, except for (all but
-- one of the) static functions. See Note [recursive SRTs].
nonRec = cafs `Set.difference`
(Set.fromList lbls `Set.difference` Set.fromList otherFunLabels)
-- First resolve all the CAFLabels to SRTEntries
-- Implements the [Inline] optimisation.
resolved = mapMaybe (resolveCAF srtMap) (Set.toList nonRec)
-- The set of all SRTEntries in SRTs that we refer to from here.
allBelow =
Set.unions [ lbls | caf <- resolved
, Just lbls <- [Map.lookup caf (flatSRTs topSRT)] ]
-- Remove SRTEntries that are also in an SRT that we refer to.
-- Implements the [Filter] optimisation.
filtered = Set.difference (Set.fromList resolved) allBelow
srtTrace "oneSRT:"
(ppr cafs <+> ppr resolved <+> ppr allBelow <+> ppr filtered) $ return ()
let
isStaticFun = isJust maybeFunClosure
-- For a label without a closure (e.g. a continuation), we must
-- update the SRTMap for the label to point to a closure. It's
-- important that we don't do this for static functions or CAFs,
-- see Note [Invalid optimisation: shortcutting].
updateSRTMap srtEntry =
when (not isCAF && (not isStaticFun || isNothing srtEntry)) $ do
let newSRTMap = Map.fromList [(cafLbl, srtEntry) | cafLbl <- lbls]
put (Map.union newSRTMap srtMap)
this_mod = thisModule topSRT
case Set.toList filtered of
[] -> do
srtTrace "oneSRT: empty" (ppr lbls) $ return ()
updateSRTMap Nothing
return ([], [], [])
-- [Inline] - when we have only one entry there is no need to
-- build an SRT object at all, instead we put the singleton SRT
-- entry in the info table.
[one@(SRTEntry lbl)]
| uSE_INLINE_SRT_FIELD dflags
-- Info tables refer to SRTs by offset (as noted in the section
-- "Referring to an SRT from the info table" of Note [SRTs]). However,
-- when dynamic linking is used we cannot guarantee that the offset
-- between the SRT and the info table will fit in the offset field.
-- Consequently we build a singleton SRT in this case.
&& not (labelDynamic dflags this_mod lbl)
-- MachO relocations can't express offsets between compilation units at
-- all, so we are always forced to build a singleton SRT in this case.
&& (not (osMachOTarget $ platformOS $ targetPlatform dflags)
|| isLocalCLabel this_mod lbl) -> do
-- If we have a static function closure, then it becomes the
-- SRT object, and everything else points to it. (the only way
-- we could have multiple labels here is if this is a
-- recursive group, see Note [recursive SRTs])
case maybeFunClosure of
Just (staticFunLbl,staticFunBlock) -> return ([], withLabels, [])
where
withLabels =
[ (b, if b == staticFunBlock then lbl else staticFunLbl)
| b <- blockids ]
Nothing -> do
updateSRTMap (Just one)
return ([], map (,lbl) blockids, [])
cafList ->
-- Check whether an SRT with the same entries has been emitted already.
-- Implements the [Common] optimisation.
case Map.lookup filtered (dedupSRTs topSRT) of
Just srtEntry@(SRTEntry srtLbl) -> do
srtTrace "oneSRT [Common]" (ppr lbls <+> ppr srtLbl) $ return ()
updateSRTMap (Just srtEntry)
return ([], map (,srtLbl) blockids, [])
Nothing -> do
-- No duplicates: we have to build a new SRT object
srtTrace "oneSRT: new" (ppr lbls <+> ppr filtered) $ return ()
(decls, funSRTs, srtEntry) <-
case maybeFunClosure of
Just (fun,block) ->
return ( [], [(block, cafList)], SRTEntry fun )
Nothing -> do
(decls, entry) <- lift . lift $ buildSRTChain dflags cafList
return (decls, [], entry)
updateSRTMap (Just srtEntry)
let allBelowThis = Set.union allBelow filtered
oldFlatSRTs = flatSRTs topSRT
newFlatSRTs = Map.insert srtEntry allBelowThis oldFlatSRTs
newDedupSRTs = Map.insert filtered srtEntry (dedupSRTs topSRT)
lift (put (topSRT { dedupSRTs = newDedupSRTs
, flatSRTs = newFlatSRTs }))
let SRTEntry lbl = srtEntry
return (decls, map (,lbl) blockids, funSRTs)
-- | build a static SRT object (or a chain of objects) from a list of
-- SRTEntries.
buildSRTChain
:: DynFlags
-> [SRTEntry]
-> UniqSM
( [CmmDecl] -- The SRT object(s)
, SRTEntry -- label to use in the info table
)
buildSRTChain _ [] = panic "buildSRT: empty"
buildSRTChain dflags cafSet =
case splitAt mAX_SRT_SIZE cafSet of
(these, []) -> do
(decl,lbl) <- buildSRT dflags these
return ([decl], lbl)
(these,those) -> do
(rest, rest_lbl) <- buildSRTChain dflags (head these : those)
(decl,lbl) <- buildSRT dflags (rest_lbl : tail these)
return (decl:rest, lbl)
where
mAX_SRT_SIZE = 16
buildSRT :: DynFlags -> [SRTEntry] -> UniqSM (CmmDecl, SRTEntry)
buildSRT dflags refs = do
id <- getUniqueM
let
lbl = mkSRTLabel id
srt_n_info = mkSRTInfoLabel (length refs)
fields =
mkStaticClosure dflags srt_n_info dontCareCCS
[ CmmLabel lbl | SRTEntry lbl <- refs ]
[] -- no padding
[mkIntCLit dflags 0] -- link field
[] -- no saved info
return (mkDataLits (Section Data lbl) lbl fields, SRTEntry lbl)
-- | Update info tables with references to their SRTs. Also generate
-- static closures, splicing in SRT fields as necessary.
updInfoSRTs
:: DynFlags
-> LabelMap CLabel -- SRT labels for each block
-> LabelMap [SRTEntry] -- SRTs to merge into FUN_STATIC closures
-> CmmDecl
-> [CmmDecl]
updInfoSRTs dflags srt_env funSRTEnv (CmmProc top_info top_l live g)
| Just (_,closure) <- maybeStaticClosure = [ proc, closure ]
| otherwise = [ proc ]
where
proc = CmmProc top_info { info_tbls = newTopInfo } top_l live g
newTopInfo = mapMapWithKey updInfoTbl (info_tbls top_info)
updInfoTbl l info_tbl
| l == g_entry g, Just (inf, _) <- maybeStaticClosure = inf
| otherwise = info_tbl { cit_srt = mapLookup l srt_env }
-- Generate static closures [FUN]. Note that this also generates
-- static closures for thunks (CAFs), because it's easier to treat
-- them uniformly in the code generator.
maybeStaticClosure :: Maybe (CmmInfoTable, CmmDecl)
maybeStaticClosure
| Just info_tbl@CmmInfoTable{..} <-
mapLookup (g_entry g) (info_tbls top_info)
, Just (id, ccs) <- cit_clo
, isStaticRep cit_rep =
let
(newInfo, srtEntries) = case mapLookup (g_entry g) funSRTEnv of
Nothing ->
-- if we don't add SRT entries to this closure, then we
-- want to set the srt field in its info table as usual
(info_tbl { cit_srt = mapLookup (g_entry g) srt_env }, [])
Just srtEntries -> srtTrace "maybeStaticFun" (ppr res)
(info_tbl { cit_rep = new_rep }, res)
where res = [ CmmLabel lbl | SRTEntry lbl <- srtEntries ]
fields = mkStaticClosureFields dflags info_tbl ccs (idCafInfo id)
srtEntries
new_rep = case cit_rep of
HeapRep sta ptrs nptrs ty ->
HeapRep sta (ptrs + length srtEntries) nptrs ty
_other -> panic "maybeStaticFun"
lbl = mkLocalClosureLabel (idName id) (idCafInfo id)
in
Just (newInfo, mkDataLits (Section Data lbl) lbl fields)
| otherwise = Nothing
updInfoSRTs _ _ _ t = [t]
srtTrace :: String -> SDoc -> b -> b
-- srtTrace = pprTrace
srtTrace _ _ b = b
Yes, you would write a small python module that runs on jevois and simply:
- get the next image as a numpy array in your process() function
- in there, just loop over the pixels and issue a bunch of jevois.sendSerial() commands to send the values over the serial port
Have a look at this minimal module to get started:
import libjevois as jevois
import cv2
import numpy as np

class Hello:
    def process(self, inframe, outframe):
        img = inframe.getCvBGR()
        outframe.sendCv(img)
The JeVois documentation has further examples of jevois.sendSerial().
Note that at this point you would still be streaming video. Once this works in JeVois Inventor, you would convert your code to headless mode by implementing a processNoUSB() function with essentially the same code as in process(), except without the sendCv() line at the end. See the JeVois documentation for more info about headless mode.
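As a sketch of that per-pixel loop (the "PIX x y value" message format below is made up for illustration; use whatever your receiving end expects), the serialization logic can be factored into a plain function that a module's process() or processNoUSB() would then drain with jevois.sendSerial():

```python
def pixel_messages(img):
    """Turn a 2D grayscale image (nested lists or a numpy array)
    into one 'PIX x y value' text message per pixel."""
    msgs = []
    for y, row in enumerate(img):
        for x, value in enumerate(row):
            msgs.append("PIX {} {} {}".format(x, y, int(value)))
    return msgs

# Example with a tiny 2x2 image:
msgs = pixel_messages([[0, 128], [255, 64]])
print(msgs[0])   # PIX 0 0 0
print(msgs[-1])  # PIX 1 1 64
```

Inside the module you would then loop `for m in pixel_messages(img): jevois.sendSerial(m)`. Keep in mind that sending every pixel of a full frame over serial is slow; in practice you would subsample or send only a region of interest.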
Tsholofelo Goabaone Kenathetswe, 2,469 Points
Please help: I am frustrated by the challenge to create a function and am failing over and over. Kindly assist!
Create a function named square...
def split_check(square):
    return square * 2
3 Answers
Maxwell Newberry, Front End Web Development Techdegree Student, 7,675 Points
In your return statement you are simply multiplying the argument by 2, but the challenge is asking you to square the number: multiply the passed argument by itself, not by 2.
def square(number):
    return number * number
Don't be frustrated! It takes time and you're definitely on the right track. :)
David Garcia, Python Web Development Techdegree Student (3,384 Points)
Tip: read the questions very carefully; they tell you how to do the problem, but you must come up with the logic...
1: First you need to create a defined function called 'square' that takes an argument called 'number'. Example: def example(argument):
2: Then create the value of said argument inside the defined function and tell it what it means. Example: def example(argument): argument = argument * argument
3: Then return said value like so:
def example(argument):
    argument = argument * argument
    return argument
If you don't understand this, here's the answer. I'd advise you to look at the answer, study from it, and remember what it does. Also remember there are different ways of doing problems.
def square(number):
    number = number * number
    return number
Owen Bell (8,052 Points)
To add to this, you can define the index that you should raise your base to (i.e. how many times you should be multiplying the parameter by itself) with a double-asterisk:
Which is only slightly more streamlined in this particular case, but is much more useful for running higher-order calculations or functions where the index should be an argument to be provided or where it varies over time. | https://teamtreehouse.com/community/please-am-frustrated-by-the-challenge-to-create-function-am-failing-over-and-over-kindly-assist | CC-MAIN-2020-05 | refinedweb | 315 | 55.07 |
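The double-asterisk snippet Owen refers to was lost from the page; it presumably looked something like this sketch (my reconstruction):

```python
def square(number):
    # '**' is Python's exponentiation operator: base ** exponent
    return number ** 2

def power(base, exponent):
    # The payoff: the exponent can be an argument itself, so the same
    # function covers squares, cubes, and any higher order.
    return base ** exponent
```

square(3) and power(3, 2) both give 9.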
A number of display devices like LEDs, 7-segments, character and graphic displays can be attached to microcontrollers to create an interface between the user and an electronic system for displaying data or controlling the system. Sometimes you may need to add colorful images or graphics to your project, that’s where the TFT color displays come in handy.
ST7735 TFT display description.
TFT LCD is a variant of a liquid-crystal display (LCD) that uses thin-film-transistor (TFT) technology to improve image qualities such as addressability and contrast. In this tutorial we are going to show how to interface a 1.44″ TFT color display based on the ST7735 driver. It has 128×128 color pixels and can display full 16-bit color.
The display uses 4-wire SPI to communicate and has its own pixel-addressable frame buffer, it can be used with every kind of microcontroller.
Connecting TFT display to Arduino.
This is the type of display I am using, but they come with various pin configurations. However, all the displays will have the major pins stated below, which should be connected to the Arduino board as follows:
SCK to Digital pin 13
SDA to Digital pin 11
DC to Digital pin 9
Reset to Digital pin 8
CS to Digital pin 10
GND to Arduino GND
VCC to Arduino 3.3V
This TFT display uses 3.3V but comes with an on-board voltage regulator, therefore the VCC can be connected to the Arduino 5V. However, for best practice it's better to use the 3.3V.
Most code Libraries for this TFT ST7735 display with Arduino are programmed with the SDA and SCL pins connected to Arduino pins 11 and 13 respectively. Make sure you don’t change that order otherwise the display may not work.
Code for running the ST7735 TFT Display.
There are a number of libraries that have been developed to run the TFT ST7735 color display using Arduino but I found the Adafruit-ST7735-Library the best to use. Make sure you have this library installed in your IDE.
Basic commands.
Most TFT libraries have been programmed to support the following commands:
tft.fillScreen(t); This function is for changing the color of the entire screen to t.
tft.setCursor(x,y); For setting the cursor position using x and y coordinates of the screen.
tft.setTextColor(t); For setting the color of text.
tft.setTextColor(t,b); Setting the color of text and its background.
tft.setTextSize(s); For setting the size of text. This should be from 1 to 5.
tft.setRotation(r); Rotating the screen. Can take values of 0 for 0 degrees, 1 for 90 degrees, 2 for 180 degrees and 3 for 270 degrees.
tft.print(s); For displaying a string.
tft.println(s); Displaying a string and moving the cursor to the next line.
tft.drawFastVLine(x,y,h,t); This function draws a vertical line that starts at the x, y location; its length is h pixels and its color is t.
tft.drawFastHLine(x,y,w,t); This function draws a horizontal line that starts at the x, y location; its length is w pixels and its color is t.
tft.drawLine(xi,yi,xj,yj,t); This function draws a line that starts at xi, yi, ends at xj, yj, and has color t.
tft.drawRect(x,y,w,h,t); This function draws a rectangle at the x, y location with w width, h height and t color.
tft.fillRect(x,y,w,h,t); This function draws a filled rectangle at the x, y location. w is the width, h is the height and t is the color of the rectangle.
tft.drawCircle(x,y,r,t); This function draws a circle at the x, y location with r radius and t color.
tft.fillCircle(x,y,r,t); This function draws a filled circle at the x, y location with r radius and t color.
tft.drawTriangle(x1,y1,x2,y2,x3,y3,t); This function draws a triangle with the three corners (x1,y1), (x2,y2), (x3,y3) and t color.
tft.fillTriangle(x1,y1,x2,y2,x3,y3,t); This function draws a filled triangle with the three corners (x1,y1), (x2,y2), (x3,y3) and t color.
tft.drawRoundRect(x,y,w,h,r,t); This function draws a rectangle with round corners of r radius at the x, y location with w width, h height and t color.
tft.fillRoundRect(x,y,w,h,r,t); This function draws a filled rectangle with round corners of r radius at the x, y location with w width, h height and t color.
There are many other functions and commands which you can use to program the TFT color display, but the above are the most common. You will meet many more with practice.
Before we can write our personal code we need to first test the display using the ready-made code examples from the installed library. This is done by going to File > Examples > Adafruit ST7735 and ST7789 Library. Then you can select any of the examples and upload it to the setup to see if the display works fine. In the diagram below I have shown how to access the graphics test code.
The example below shows the basic use of the above commands and functions for displaying simple colored shapes, lines and words on the TFT display.
#include <SPI.h>
#include <Adafruit_GFX.h>    // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library

#define TFT_CS  10
#define TFT_RST 7  // Or set to -1 and connect to Arduino RESET pin
#define TFT_DC  9

Adafruit_ST7735 tft = Adafruit_ST7735(TFT_CS, TFT_DC, TFT_RST);

void setup(void) {
  tft.initR(INITR_144GREENTAB); // Init ST7735R chip, green tab
  //tft.fillScreen(ST77XX_BLACK);
  tft.setRotation(0); // set display orientation
}

void loop() {
  tft.fillScreen(ST77XX_CYAN);
  print_text(25, 30, "HELLO", 3, ST77XX_ORANGE);
  print_text(20, 70, "WORLD!", 3, ST77XX_BLUE);
  delay(5000);

  tft.fillScreen(ST77XX_BLACK);
  tft.fillRoundRect(25, 10, 78, 60, 8, ST77XX_WHITE);
  tft.fillTriangle(42, 20, 42, 60, 90, 40, ST77XX_RED);
  delay(5000);

  tft.fillScreen(ST77XX_CYAN);
  tft.drawRect(5, 5, 120, 120, ST77XX_RED);
  tft.drawFastHLine(5, 60, 120, ST77XX_RED);
  tft.drawFastVLine(60, 5, 120, ST77XX_RED);
  delay(5000);
}

void print_text(byte x_pos, byte y_pos, char *text, byte text_size, uint16_t color) {
  tft.setCursor(x_pos, y_pos);
  tft.setTextSize(text_size);
  tft.setTextColor(color);
  tft.setTextWrap(true);
  tft.print(text);
}
Please also check out these other projects to better understand how the ST7735 TFT display is used with Arduino; | https://mytectutor.com/using-the-1-44-tft-st7735-color-display-with-arduino/ | CC-MAIN-2022-27 | refinedweb | 1,068 | 56.76 |
Thread: update_attributes does not work
update_attributes does not work
I have problems using the update_attributes function in my controller.
Sometimes works and sometimes doesn't. I can always make it work if I
put a breakpoint just before the update_attributes function. What does
that mean?. Why is it doing this?.
Please help. If I don't get this working I'll quit Rails for good, it's
caused me a lot of trouble, I'm starting not to like it.
Thanks.
Here is my code:
Code:
def update
  @persona = Persona.find(params[:id])
  @persona.fecha_modificacion = Time.now
  @persona.modificacion_user_id = 1
  if @persona.fecha_creacion? == nil
    @persona.fecha_creacion = @persona.fecha_modificacion
    @persona.creacion_user_id = @persona.modificacion_user_id
    @persona.eliminado = 0
  end
  #breakpoint
  if @persona.update_attributes(params[:persona])
    flash[:notice] = 'Persona was successfully updated.'
    redirect_to :action => 'ver', :id => @persona
  else
    render :action => 'list'
  end
end
Your post is a bit vague: what does "sometimes it works, sometimes it doesn't" mean? Do validations fail sometimes? Does it claim to have worked and doesn't? Check your development.log file and see what happened in detail with SQL queries, and maybe post the entire request here.
Ohai!
where do you show validation errors?
It sounds like sometimes your persona object does not validate because of values coming from form and it just does NOT save.
I don't know where you show your persona form, but if it fails you should render that action (actually I would make one function for showing and updating the object) and in that page just use
<%= error_messages_for("persona") %>
Please do abandon Rails: less competition.
Your code looks good but one thing is strange: why do you redirect to the list action if update_attributes fails?
You need to give more code/information before we can find the error.
Also, why do you use spanish identifiers? If you'd used updated_at instead of fecha_modificacion you could have dropped this line:
Code:
@persona.fecha_modificacion = Time.now
Code:
if @persona.fecha_creacion? == nil
  @persona.fecha_creacion = @persona.fecha_modificacion
  @persona.creacion_user_id = @persona.modificacion_user_id
  @persona.eliminado = 0
end
Go comes in part from Rob Pike and Ken Thompson, both influential in early UNIX. Both Rob Pike and Ken Thompson also were influential in working on Plan 9, a followup to UNIX.
UNIX's ideal is that "everything is a file". In Go terminology, this is a declaration that everything should be accessible via a uniform interface, which the OS specially privileges. One of Plan 9's core reasons for existing is that UNIX didn't take this anywhere near as far as it could be taken, and it goes much further in making everything accessible as a file in a directory structure.
I'm skeptical of both of these approaches. Everything isn't a "file".
There's numerous "files" that require ioctls to correctly manipulate, which are arbitrary extensions outside of the file interface. On the flip side, there are all kinds of "files" that can't be seeked, such as sockets, or files that can't be closed, like UDP streams. Pretty much every element of the file interface is one that doesn't apply to some "file", somewhere.
The Procrustean approach to software engineering tends to have the same results as Procrustes himself did, gravely or even fatally wounding the code in question.
Supervisor trees are one of the core ingredients in Erlang's reliability and let it crash philosophy. A well-structured Erlang program is broken into multiple independent pieces that communicate via messages, and when a piece crashes, the supervisor of that piece automatically restarts it.
This may not sound very impressive if you've never used it. But I have witnessed systems that I have written experience dozens of crashes per minute, yet function correctly for 99% of the users.
(This is, of course, immediately followed by improving my logging so I do know when it happens in the future. Being crash-resistant is good, but one should not "spend" this valuable resource frivolously!)
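For readers who haven't seen the idea, the restart policy can be sketched in a few lines of Python (a toy stand-in of my own, not Erlang — real supervisors also track restart intensity over a time window and supervise whole trees of children):

```python
import time

def supervise(child, max_restarts=3, delay=0.0):
    # Run 'child'; each time it raises, restart it, up to max_restarts
    # times.  Past the limit we re-raise, which in a real tree would
    # escalate the failure to this process's own supervisor.
    restarts = 0
    while True:
        try:
            return child()
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise
            time.sleep(delay)  # back off a little before restarting

# A child that crashes twice and then succeeds:
attempts = {"count": 0}
def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("crash #%d" % attempts["count"])
    return "ok"

result = supervise(flaky)
```

Here flaky() crashes twice, is restarted twice, and the caller only ever sees the final "ok".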
I've been porting a system out of Erlang into Go for various other reasons, and I've missed having supervisor trees around. I decided to create them in Go. But this is one of those cases where we do not need a transliteration of the Erlang code into Go. For one thing, that's simply impossible as the two are mutually incompatible in some fundamental ways. We want an idiomatic translation of the functionality, which retains as much as possible of the original while perhaps introducing whatever new local capabilities into it make sense.
To correctly do that, step one is to deeply examine not only the what of Erlang supervision trees, but the why, and then figure out how to translate.
One of the things I've been really enjoying about Go is how easy testing is. The pervasive use of interfaces and composition-instead-of-inheritance synergize nicely for testing. But as I've expressed this online on reddit and Hacker News a couple of times, I've found that this does not seem to be a universally-shared opinion. Some have even commented on how hard it is to test in Go.
Since we are all obviously using the same language, the difference must lie in coding behavior. I've internalized a lot of testing methodology over the years, and I find some of the things work even better in Go that most other imperative languages. Let me share one of my core tricks today, which I will call the Environment Object pattern, and why Go makes it incrementally easier to use than other similar (imperative) environments.
There are a number of errors made in putative Monad tutorials in languages other than Haskell. Any implementation of monadic computations should be able to implement the equivalent of the following in Haskell:
minimal :: Bool -> [(Int, String)]
minimal b = do
    x <- if b then [1, 2] else [3, 4]
    if x `mod` 2 == 0
        then do
            y <- ["a", "b"]
            return (x, y)
        else do
            y <- ["y", "z"]
            return (x, y)
This should yield the local equivalent of:
Prelude> minimal True
[(1,"y"),(1,"z"),(2,"a"),(2,"b")]
Prelude> minimal False
[(3,"y"),(3,"z"),(4,"a"),(4,"b")]
At the risk of being offensive, you, ahhh... really ought to understand why that's the result too, without too much effort... or you really shouldn't be writing a Monad tutorial. Ahem.
In particular:
- Many putative monadic computation solutions only work with a "container" that contains zero or one elements, and therefore do not work on lists. >>= is allowed to call its second argument (a -> m b) an arbitrary number of times. It may be once, it may be dozens, it may be none. If you can't do that, you don't have a monadic computation.
- A monadic computation has the ability to examine the intermediate results of the computation, and make decisions, as shown by the if statement. If you can't do that, you don't have a monadic computation.
- In statically-typed languages, the type of the inner value is not determined by the incoming argument. It's a -> m b, not a -> m a, which is quite different. Note how x and y are of different types.
- The monadic computation builds up a namespace as it goes along; note we determine x, then somewhat later use it in the return, regardless of which branch we go down, and in both cases, we do not use it right away. Many putative implementations end up with a pipeline, where each stage can use the previous stage's values, but can not refer back to values before that.
- Monads are not "about effects". The monadic computation I show above is in fact perfectly pure, in every sense of the term. And yes, in practice monad notation is used this way in real Haskell all the time, it isn't just an incidental side-effect.
A common misconception is that you can implement this in Javascript or similar languages using "method chaining". I do not believe this is possible; for monadic computations to work in Javascript at all, you must be nesting functions within calls to bind within functions within calls to bind... basically, it's impossibly inconvenient to use monadic computations in Javascript, and a number of other languages. A mere implementation of method chaining is not "monadic", and libraries that use method chaining are not "monadic" (unless they really do implement the rest of what it takes to be a monad, but I've so far never seen one).
If you can translate the above code correctly, and obtain the correct result, I don't guarantee that you have a proper monadic computation, but if you've got a bind or a join function with the right type signatures, and you can do the above, you're probably at least on the right track. This is the approximately minimal example that a putative implementation of a monadic computation ought to be able to do. | http://www.jerf.org/iri/categories/Programming/ | CC-MAIN-2015-32 | refinedweb | 1,160 | 59.13 |
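To make the bar concrete outside Haskell, here is my own translation of the same minimal example into Python, where bind for the list monad is just "apply f to every element and concatenate the resulting lists":

```python
def bind(xs, f):
    # (>>=) for the list monad: f may be called zero, one, or many
    # times, and each call returns a *list* of results to splice in.
    return [y for x in xs for y in f(x)]

def unit(x):
    # 'return' for the list monad: wrap a value in a one-element list.
    return [x]

def minimal(b):
    def step(x):
        # We can examine the intermediate x and branch on it, and x
        # stays in scope for the inner bind -- two of the requirements
        # the bullet points above call out.
        if x % 2 == 0:
            return bind(["a", "b"], lambda y: unit((x, y)))
        else:
            return bind(["y", "z"], lambda y: unit((x, y)))
    return bind([1, 2] if b else [3, 4], step)
```

Note how clumsy this is compared to the do-notation version: every step is an explicit nested function, which is exactly the inconvenience described above.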
- 04 Apr, 2019 1 commit
Now that we permit `key in somenode` remove the no longer needed function to check if a node contains a key.

Signed-off-by: Daniel Silverstone <daniel.silverstone@codethink.co.uk>
- 29 Mar, 2019 1 commit
- 27 Mar, 2019 1 commit
- 25 Mar, 2019 2 commits
This variable indicates whether file contents are required to be in the local cache for an artifact to be considered cached. This will allow partial artifacts for remote execution and certain commands such as `bst show`.
- 21 Mar, 2019 1 commit
This encapsulates the logic of how we handle ids and the table in the plugin class itself, making it easier to refactor afterwards
- 16 Mar, 2019 1 commit
We anticipate other cases than build failures where buildtree caching will be required. E.g., incremental workspace build with remote execution. Or running tests in a buildtree in parallel with the build of reverse dependencies. This renames the option value 'failure' to the more generic 'auto' to cover these other cases as well.
- 14 Mar, 2019 1 commit
This involves introducing new Consistency states `STAGED` and `BOTH` that represent when the source is just in the local CAS and in both the local CAS and unstaged in the source directory. Sources are staged for each element into the local CAS during the fetch stage. If the sources are in the local consistency state `STAGED` when wanting to open a workspace, the original sources are fetched. Relevant tests this affects have been changed. Part of #440
- 28 Feb, 2019 2 commits
Extract directories are no longer used or created. This deletes extract directories from older versions of BuildStream on startup.
- 19 Feb, 2019 3 commits
This sits in Context allowing artifact cache to check the cas quota while not being used for CASServer. A lot of code that checks cache quota has been touched. Part of #870
Will check and move old artifact directory if it exists, and create symlink linking old directory to new. Part of #870
Makes artifactdir and builddir obsolete. Fixes #870
- 15 Feb, 2019 1 commit
Remove the need for the 'really-workspace-close-project-inaccessible' config option, as well as the option itself. As agreed on the mailing list [1], all the 'are you sure?' prompts on workspace reset and close were removed. While that discussion was going on, this new prompt and option was added. At the 2019 BuildStream Gathering, it was verbally agreed between myself and Tristan VB that we would also remove this instance. It was also agreed that we should have a notice to let the user know what they'd done, this was already in place if interactive. Moved it to be unconditional so that there's no difference in non-interactive behaviour. Made it output to stderr, as it's diagnostic meant for the user. Made it the last thing echo'd so it's next to the prompt - it's very relevant to what they type next. Added a test to make sure the text makes it to stderr in the appropriate case, and not in an inappropriate one. This is the last instance of any prompt configuration, so BuildStream can also forget all of that machinery. [1]
- 13 Feb, 2019 1 commit
- Tom Pollard authored
_context.py: Add cache_buildtrees global user context, the default of which is set to 'always' via the addition of cache-buildtrees to the userconfig.yaml cache group. 'failure' & 'never' can be given as valid options.

app.py & cli.py: Add --cache-buildtrees as a bst main option, which when passed with a valid option can override the default or user defined context for cache_buildtrees.

tests/completions/completions.py: Update for the added flag.
- 12 Feb, 2019 1 commit
While get_strict() doesn't look expensive per-se, it is called so many times that it is valuable to cache the result once computed. Since I don't think it can change once it is computable, cache it immediately that becomes possible and we save 20s in my test case.

Signed-off-by: Daniel Silverstone <daniel.silverstone@codethink.co.uk>
- 29 Jan, 2019 1 commit
This is a breaking change, as it affects behaviour that people might be relying on. An entry has been added to NEWS.

As proposed on the mailing list, this change removes the unconditional prompts on:
o: bst workspace reset
o: bst workspace close --remove-dir

If interactive, these commands would always interrupt you with a prompt like this:

    This will remove all your changes, are you sure?

This seems like it may just save someone's work some time. It may also condition folks to hit 'y' quickly without thinking. This change also makes the non-interactive behaviour consistent with the interactive behaviour in the default case. There is also the case of the prompt configured by 'really-workspace-close-project-inaccessible', which may be tackled in later work. This change also removes the new config options to suppress those prompts, and their associated news entry. The relevant bit of the mailing list conversation is here: The issue to make interactive and non-interactive behaviour consistent is here: #744
- 24 Jan, 2019 2 commits
- Tristan Van Berkom authored
A frontend facing API for obtaining usage statistics. I would have put this on Stream instead, but the Context seems to be the de facto place for looking up the artifact cache in general so let's put it here.
_frontend/cli.py: Use new methods. Based on patches by Phillip Smyth.
- 16 Jan, 2019 3 commits
- 09 Jan, 2019 1 commit
- Valentin David authored
Fixes #631.
- 20 Dec, 2018 1 commit
In the event that the project could not be found, stop BuildStream from asking if the user would like to create a new project. Exit with error instead, and give a hint to the user in case they're new. As proposed on the mailing list here:

The new interaction looks like this:

    $ bst show nonsuch.bst
    No project found. You can create a new project like so:

      bst init

    Error loading project: None of ['project.conf', '.bstproject.yaml'] found in '/src/temp/blah' or any of its parent directories

Fixes #826
- 11 Dec, 2018 4 commits
Known issues:
* `bst shell` works, but `bst shell COMMANDS...` doesn't, because click has no way of separating optional args from variable-length args.
* `bst checkout` and `bst source-checkout`'s usage strings mark LOCATION as an optional argument. Because click gets confused if there's an optional argument before a mandatory argument, I had to mark LOCATION as optional internally.
* `bst workspace open` makes no sense with element being optional, so I skipped it.
* `bst workspace close` will probably need to be revisited when multiple projects can own one workspace.
* `bst workspace reset` will happily delete the directory you're currently in, requiring you to `cd $PWD` to see the contents of your directory. I could exclude the top-level directory of the workspace being deleted, but it is entirely valid to run workspace commands from deeper in the workspace.

This is a part of #222
cli: Interactively warn if the user is trying to close the workspace they're using to load the project

This involves changes in:
* _stream.py:
  * Add the helper Stream.workspace_is_required()
* userconfig.yaml:
  * Add a default value for prompt.really-workspace-close-project-inaccessible
* _context.py:
  * Load the prompt 'really-workspace-close-project-inaccessible' from user config.
* cli.py:
  * If buildstream is invoked interactively, prompt the user to confirm that they want to close the workspace they're using to load this project.

This is a part of #222
Changes to _context.py:
* Context has been extended to contain a WorkspaceProjectCache, as there are times when we want to use it before a Workspaces can be initialised (looking up a WorkspaceProject to find the directory that the project is in)

Changes to _stream.py:
* Removed staging the elements from workspace_open() and workspace_reset()

Changes in _workspaces.py:
* A new WorkspaceProject contains all the information needed to refer back to a project from its workspace (currently this is the project path and the element used to create this workspace)
* This is stored within a new WorkspaceProjectCache object, which keeps WorkspaceProjects around so they don't need to be loaded from disk repeatedly.
* Workspaces has been extended to contain the WorkspaceProjectCache, and will use it when opening and closing workspaces.
* Workspaces.create_workspace has been extended to handle the staging of the element into the workspace, in addition to creating the equivalent WorkspaceProject file.

This is a part of #222
- 27 Nov, 2018 1 commit
- Jim MacArthur authored
Since the artifact cache and remote execution share the same local CAS store, they should share the same CASCache object. Moving this into context allows us to do this.
- 21 Nov, 2018 1 commit
- Will Salmon authored
This is to update the workspace CLI as agreed on the mailing list. This patch also introduces the default workspace directory.
- 20 Nov, 2018 4 commits
Provide options in project.conf to disable the 'Are you sure ...' prompts when making destructive changes:
- Add prompt.really-workspace-close-remove-dir
- Add prompt.really-workspace-reset-hard
Add a NEWS item for these.
Provide an option in buildstream.conf to disable the 'Would you like to ...' prompt when we cannot resolve a project. Some users prefer not to be interrupted by such prompts, so pave the way to creating options to disable all those that might get in the way. Follow the example of git-config's 'advice.*' options, and create a namespace for these UI options grouped by behaviour, rather than an over-reaching 'ui.*' namespace. In later work perhaps we'll also add 'advice.*' options. Add a NEWS item for this.
Use a new helper function to simplify working with nodes that can only accept certain strings. This will be used when adding the prompt.* config options. In later work we can see if this function would be useful elsewhere, and could be added to '_yaml.py'.
Enable this option of 'terminate', which is mentioned in userconfig.yaml and handled in _frontend/app.py:_handle_failure(). It appears to have been left out of the valid_actions as an oversight. Originally introduced in 2622d5da
- 19 Nov, 2018 1 commit
The default values are in userconfig.yaml, together with the documentation. The default values should not be duplicated in _context.py.
- 17 Nov, 2018 1 commit
- Tom Pollard authored
_context.py: Add pull_buildtrees global user context, the default of which is set to False via the addition of pull-buildtrees to the userconfig.yaml cache group.

_frontend/app.py & cli.py: Add --pull-buildtrees as a bst main option, which when passed will override the default or user defined context for pull_buildtrees.

tests/completions/completions.py: Update for the added flag.
- 05 Nov, 2018 1 commit
- 25 Oct, 2018 1 commit
This pointless bare `return` was causing modern pylint to raise an error.

Signed-off-by: Daniel Silverstone <daniel.silverstone@codethink.co.uk>
- 18 Oct, 2018 1 commit
- Javier Jardón authored
Since Python 3.3, the collections abstract base classes have been moved to the collections.abc module. For backwards compatibility, they continue to be visible in this module through Python 3.7. Subsequently, they will be removed entirely. See
- 27 Sep, 2018 1 commit
The artifact cache is no longer platform-specific. | https://gitlab.com/BuildStream/buildstream/commits/95565a6b899e5af9c668339b6fb319e6f25c52f2/buildstream/_context.py | CC-MAIN-2019-43 | refinedweb | 1,891 | 63.9 |
On Fri, 24 Jan 2003 10:41 am, Luke Opperman wrote:
> Hello all -
>
> I remember this being discussed a time or two before, but I couldn't find a
> way to work around without modifying roundup's cgi/client.py, so I'll put
> this out here.
>
> I'm using apache and roundup.cgi to serve a brand new 0.5.4 installation,
> only changes I've made are a few template/style things.
>
> My mod_rewrite configuration for the site is:
>
> RewriteRule ^/issue-tracker(.*) /cgi-bin/roundup.cgi/trackername$1 [L,PT]
>
> And in config.py TRACKER_WEB address is
> ""
>
> This worked fine, but logins wouldn't stick. The cookie (from cgi.client)
> was being given a path like /cgi-bin/roundup.cgi/trackername
> My solution was to modify the cookie path in cgi.client to read straight
> from TRACKER_WEB (path =
> urlparse.urlparse(self.instance.config.TRACKER_WEB)[2]).
This looks pretty much like the proposed fix in the sf.net tracker for this
issue, which I will try to get into 0.5.5 before I release it. Have a look
at:
> P.S. My actual modification once I realized this fixed my problem, since
> the cookie path is used a few times in cgi.client, was to create an
> instance variable self.cookie_path, and set it just after self.base at line
> 93. "self.cookie_path = urlparse.urlparse(self.base)[2]".
Sounds like a plan.
Richard
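The urlparse trick in the patch is easy to check by hand; element 2 of the parse result is the path component (the tracker URL below is a made-up stand-in, since the real one was stripped from the archive). In the Python 2 of the era this was urlparse.urlparse(); in modern Python the same call lives in urllib.parse:

```python
from urllib.parse import urlparse

TRACKER_WEB = "http://example.com/cgi-bin/roundup.cgi/trackername/"

# Index 2 of the 6-tuple is the path -- the value the patch stores as
# self.cookie_path so the login cookie is scoped to the tracker.
cookie_path = urlparse(TRACKER_WEB)[2]

# The named attribute is equivalent and easier to read:
assert cookie_path == urlparse(TRACKER_WEB).path
```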
Please excuse my ignorance but what do I need to do to upgrade from
0.5.x to 0.5.5? I'm running the standalone server on a unix box.
TIA,
DAn
On Sat, 25 Jan 2003 12:55 am, Dan Grassi wrote:
> Please excuse my ignorance but what do I need to do to upgrade from
> 0.5.x to 0.5.5? I'm running the standalone server on a unix box.
"python setup.py install" just as the initial installation.
Richard
I often find myself omitting trailing slashes when I type my tracker
addresses into my browser -- rather than.
Under IE6/Win32 this sort-of-but-not-quite works: roundup-server returns a
rendered page for the tracker but IE then mis-forms the URL to style.css and
of any links followed from the page:
BADGER - - [24/Jan/2003 10:47:58] "GET /MLI HTTP/1.1" 200 -
BADGER - - [24/Jan/2003 10:47:58] code 404, message /_file/style.css
BADGER - - [24/Jan/2003 10:47:58] "GET /_file/style.css HTTP/1.1" 404 -
BADGER - - [24/Jan/2003 10:48:18] code 404, message /issue320
BADGER - - [24/Jan/2003 10:48:18] "GET /issue320 HTTP/1.1" 404 -
I had an "is Roundup broken" query from a user recently with the same
problem.
It'd be nice if roundup-server could issue a "moved permanently" 301
Redirect if it catches a missing trailing slash. A quick poke around in
roundup-server.py suggests that:
    # figure out what the rest of the path is
    if len(l_path) > 2:
        rest = '/'.join(l_path[2:])
    else:
        rest = '/'
in RoundupRequestHandler.inner_run_cgi is the culprit: the else here traps
the case where tracker_name is found with no trailing slash and quietly
tweaks the trailing slash back on. This is OK for getting the tracker's home
page to render, but leaves the browser thinking all is well when it's not.
A quick hack suggests that:
#.
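The hack itself didn't survive in the archive; a guess at the shape of the check (my names — the real change would live on the split path inside inner_run_cgi, and should probably only fire when the query string is empty, so requests carrying parameters are left alone):

```python
def redirect_location(path, query=""):
    # Return the Location for a '301 Moved Permanently' when a request
    # path the caller has already matched to a tracker name lacks its
    # trailing slash, or None when the request is fine as-is.
    if not path.endswith("/") and not query:
        return path + "/"
    return None
```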
Thoughts?
--
James Kew, Mediabright
"It's a 70's animated children's show chill-out. It's an 'Are You Being
Served' brass-out. It is, quite frankly, complete bliss."
-- Amazon review of Lemon Jelly's "Lost Horizons"
On Fri, 24 Jan 2003 10:40 pm, James Kew wrote:
> I often find myself omitting trailing slashes when I type my tracker
> addresses into my browser -- rather than
>.
>
> [snip]
>
> #.
Actually I guess if we ONLY try to do this when the query part is empty, it
should be useful and safe(ish).
Grrr people and their url rewriting - life's so much simpler without it ;)
Richard
Hi
I'd like to include a 'latest activity' box across my tracker, and I
thought I'd use the renderWith method from cgi.templating. I used the
'home' template as an example, which has:
<span tal:replace="structure python:db.issue.renderWith('index',
sort=('-', 'activity'),group=('+', 'importance'),filter=['status'],
columns=['activity','title','assignedto', 'status','topic'],
filterspec={'status':['-1','1','2','3','4','5','6','7']})" />
So I put this in my 'page' template:
<td rowspan="2" valign="top" class="sidebar snapshot">
<h2 class="block">Latest activity</h2>
<p tal:content="structure python:db.issue.renderWith('brief_index',
sort=('-', 'activity'), pagesize=16, startwith=0,)" />
</td>
and created a new template, called 'issue.brief_index':
<tal:block tal:define="batch request/batch"
tal:
<table class="list">
<tal:block tal:
<tr>
<td>
<a tal:attributes="href string:issue${i/id}"
tal:title</a>
<br />
<span tal:,
<span tal:,
<span tal:
</td>
</tr>
</tal:block>
</table>
</tal:block>?
Any hints much appreciated,
Cheers,
Felix.
On Fri, 24 Jan 2003 10:11 pm, Felix Ulrich-Oltean wrote:
> I'd like to include a 'latest activity' box across my tracker, and I
> thought I'd use the renderWith method from cgi.templating. I used the
> 'home' template as an example, which has:
>
> [snip using renderWith]
>
>
'''
Richard
On Fri, 2003-01-24 at 11:26, Richard Jones wrote:
>.
I'm guessing you mean filter() is half-implemented. What about the
renderWith method? Although it's 'icky', it seems like a fairly good
idea - is it basically broken or am I using it wrong?
Is there a way of getting my 'latest activity mini-list' to work with
the current code?
>
> '''
Yes, that would make sense and it seems like a straightforward patch,
maybe something like the following? I'd submit it to the sf tracker,
but it's probably not quite right.
def filter(self, request=None, filterspec=None, sort=None, group=None):
    ''' Return a list of items from this class.

        The items are filtered, sorted and grouped either by the
        filterspec/sort/group in the request, or by explicit filterspec,
        sort and group arguments '''
    if request is not None:
        # look in the request, but give the explicit arguments priority
        filterspec = filterspec or request.filterspec
        sort = sort or request.sort
        group = group or request.group
    if self.classname == 'user':
        klass = HTMLUser
    else:
        klass = HTMLItem
    l = [klass(self._client, self.classname, x)
         for x in self._klass.filter(None, filterspec, sort, group)]
    return l
Felix.
=================================================
SC-Track Roundup 0.5.5 - an issue tracking system
=================================================
This is a bugfix release for version 0.5.x - if you're upgrading from before
0.5, you *must* read doc/upgrading.txt!
Unfortunately, the Zope frontend for Roundup is currently broken, with no
fix in the foreseeable future.
Roundup requires python 2.1.3 or later for correct operation. Users of the
sqlite backend are encouraged to upgrade sqlite to version 2.7.3.
We've had a good crack at bugs (thanks to all who contributed!):
- fixed rdbms searching by ID (sf bug 666615)
- fixed metakit searching by ID
- detect corrupted index and raise semi-useful exception (sf bug 666767)
- open server logfile unbuffered
- revert StringHTMLProperty to not hyperlink text by default
- several fixes to CGI form handling
- fix unlink journalling bug in metakit backend
- fixed hyperlinking ambiguity (sf bug 669777)
- fixed cookie path to use TRACKER_WEB (sf bug 667020) (thanks Nathaniel Smith
for helping chase it down and Luke Opperman for confirming fix)
Source and documentation is available at the website:
Release Info (via download page): Roundup requires only a Python 2.1+ installation. It
doesn't even need to be "installed" to be operational, though a
distutils-based install script is provided.
It comes with two issue tracker templates (a classic bug/feature tracker and
a minimal skeleton) and six database back-ends (anydbm, bsddb, bsddb3, sqlite,
metakit and gadfly).
enstaller 4.6.5
Install and managing tool for egg-based packages
The Enstaller (version 4) project is a package management and installation tool for egg-based Python distributions.
Enstaller consists of the sub-packages enstaller (package management tool) and egginst (package (un)installation tool). We find the clean separation into these two tasks, each of which has a well-defined scope, extremely useful.
enstaller:
enstaller is a management tool for egginst-based installs. The CLI, called enpkg, calls out to egginst to do the actual installation. egginst installs packages flat into site-packages (no .egg directories), which gives a shorter python path and faster import times (which seems to make the biggest difference for namespace packages). egginst knows about the eggs the people from Enthought use. It can install shared libraries, change binary headers, etc.—things which would require special post-install scripts if easy_install installed them.
The egg format:
The Enstaller egg format deals with two aspects: the actual install (egginst), and distribution management (enpkg). As far as egginst is concerned, the format is an extension of the setuptools egg format, i.e. all archive files, except the ones starting with ‘EGG-INFO/’ are installed into the site-packages directory. In fact, since egginst is a brain dead low-level tool, it will even install an egg without an ‘EGG-INFO’ directory. But more importantly, egginst installs ordinary setuptools eggs just fine. Within the ‘EGG-INFO/’ namespace are special archives that egginst is looking for to install files, as well as symbolic links into locations other than site-packages, and post-install (and pre-uninstall) scripts it can run.
As far as enpkg is concerned, eggs should contain a metadata file with the archive name ‘EGG-INFO/spec/depend’. The index file (index-depend.bz2) is essentially a compressed concatenation of the ‘EGG-INFO/spec/depend’ files for all eggs in a directory/repository.
Egg file name format:
Eggs follow this naming convention:
<name>-<version>-<build>.egg
- <name>
- The package name, which may contain the following characters: Letters (both lower or uppercase), digits, underscore ‘_’ and a dot ‘.’
- <build>
- The build number, which makes it possible to distinguish eggs with the same name and version but different dependencies. The platform and architecture dependencies of a distribution are reflected in this build number.
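As a sketch of how unambiguous that convention is (the name may contain dots and underscores but no dash, which is what makes the split well-defined), the three fields can be recovered with a simple regular expression (illustrative helper, not part of enstaller):

```python
import re

# Illustrative parser for the <name>-<version>-<build>.egg convention above.
EGG_RE = re.compile(
    r'^(?P<name>[A-Za-z0-9_.]+)-(?P<version>[^-]+)-(?P<build>\d+)\.egg$')

def parse_egg(filename):
    """Split an egg file name into its name/version/build fields, or None."""
    m = EGG_RE.match(filename)
    return m.groupdict() if m else None
```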
- Downloads (All Versions):
- 96 downloads in the last day
- 547 downloads in the last week
- 2604 downloads in the last month
- Author: enthought
- DOAP record: enstaller-4.6.5.xml
Related
How To Create a Server to Send Push Notifications with GCM to Android Devices Using Python
Introduction
Push notifications let your Android application notify a user of an event, even when the user is not using your app. The goal of this tutorial is to send a simple push notification to your app. We’ll use Ubuntu 14.04 and Python 2.7 on the server, and Google Cloud Messaging as the push notification service.
We’ll use the term server to refer to the instance spun up with DigitalOcean. We’ll use GCM to refer to Google’s server, the one that is between the Android device and your server.
Prerequisites
You’ll need these things before you start the tutorial:
- An Android application; see developer.android.com
- A Ubuntu 14.04 Droplet
- Your Droplet’s IP address
About Push Notifications
Google-provided GCM Connection Servers take messages from a third-party application server, such as your Droplet, and send these messages to a GCM-enabled Android application (the client app) running on a device. Currently, Google provides connection servers for HTTP and XMPP.
In other words, you need your own server to communicate with Google’s server in order to send the notifications. Your server sends a message to a GCM (Google Cloud Messaging) Connection Server, then the connection server queues and stores the message, and then sends it to the Android device when the device is online.
Step One — Create a Google API Project
We need to create a Google API project to enable GCM for our app.
Visit the Google Developers Console.
If you’ve never created a developer account there, you may need to fill out a few details.
Click Create Project.
Enter a project name, then click Create.
Wait a few seconds for the new project to be created. Then, view your Project ID and Project Number on the upper left of the project page.
Make a note of the Project Number. You’ll use it in your Android app client.
Step Two - Enable GCM for Your Project
Make sure your project is still selected in the Google Developers Console.
In the sidebar on the left, select APIs & auth.
Choose APIs.
In the displayed list of APIs, turn the Google Cloud Messaging for Android toggle to ON. Accept the terms of service.
Google Cloud Messaging for Android should now be in the list of enabled APIs for this project.
In the sidebar on the left, select APIs & auth.
Choose Credentials.
Under Public API access, click Create new Key.
Choose Server key.
Enter your server’s IP address.
Click Create.
Copy the API KEY. You’ll need to enter this on your server later.
Step Three — Link Android App
To test the notifications, we need to link our Android app to the Google API project that we made.
If you are new to Android app development, you may want to follow the official guide for Implementing GCM Client.
You can get the official source code from the gcm page.
Note that the sources are not updated, so you’ll have to modify the Gradle file:
gcm-client/GcmClient/build.gradle
Old line:
compile "com.google.android.gms:play-services:4.0.+"
Updated line:
compile "com.google.android.gms:play-services:5.0.89+"
In the main activity, locate this line:
String SENDER_ID = "YOUR_PROJECT_NUMBER_HERE";
Replace this with the Project Number from your Google API project.
Each time a device registers to GCM it receives a registration ID. We will need this registration ID in order to test the server. To get it easily, just modify these lines in the main file:
if (regid.isEmpty()) {
    registerInBackground();
} else {
    Log.e("==========================", "==========================");
    Log.e("regid", regid);
    Log.e("==========================", "==========================");
}
After you run the app, look in the logcat and copy your regid so you have it for later. It will look like this:
=======================================
10-04 17:21:07.102 7550-7550/com.pushnotificationsapp.app E/==========================﹕ JY0KNqpL4EUXTWOm0RxccxpMk
10-04 17:21:07.102 7550-7550/com.pushnotificationsapp.app E/==========================﹕ =======================================
Step Four — Deploy a Droplet
Deploy a fresh Ubuntu 14.04 server. We need this to be our third-party application server.
Google’s GCM Connection Servers take messages from a third-party application server (our Droplet) and send them to applications on Android devices. While Google provides Connection Servers for HTTP and CCS (XMPP), we’re focusing on HTTP for this tutorial. The HTTP server is downstream only: cloud-to-device. This means you can only send messages from the server to the devices.
Roles of our server:
- Communicates with your client
- Fires off properly formatted requests to the GCM server
- Handles requests and resends them as needed, using exponential back-off
- Stores the API key and client registration IDs. Don’t worry now about managing them; it’s very simple, and GCM helps you by giving error messages in case a registration ID is invalid.
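The "resends as needed, using exponential back-off" role can be sketched as follows (illustrative only; `send_once` stands in for whatever callable performs the actual GCM request):

```python
import random
import time

def send_with_backoff(send_once, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a send with exponential back-off plus random jitter.

    send_once is a zero-argument callable returning True on success.
    This is an illustrative sketch, not part of python-gcm.
    """
    delay = base_delay
    for attempt in range(max_retries):
        if send_once():
            return True
        sleep(delay + random.uniform(0, 1))  # jitter spreads out retry storms
        delay *= 2                           # double the wait after each failure
    return False
```

The `sleep` parameter exists only so the schedule can be tested without real waiting.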
Step Five - Set Up Python GCM Simple Server
Log in to your server with a sudo user.
Update your package lists:
sudo apt-get update
Install the Python packages:
sudo apt-get install python-pip python-dev build-essential
Install
python-gcm. Find out more about python-gcm here.
sudo pip install python-gcm
Create a new Python file somewhere on the server. Let’s say:
sudo nano ~/test_push.py
Add the following information to the file. Replace the variables marked in red. The explanation is below.
from gcm import *

gcm = GCM("AIzaSyDejSxmynqJzzBdyrCS-IqMhp0BxiGWL1M")
data = {'the_message': 'You have x new friends', 'param2': 'value2'}
reg_id = 'xxxqpL4EUXTWOm0RXE5CrpMk'

gcm.plaintext_request(registration_id=reg_id, data=data)
Explanation:
from gcm import *: this imports the Python client for Google Cloud Messaging for Android
gcm: add your API KEY from the Google API project; make sure your server’s IP address is in the allowed IPs
reg_id: add your regid from your Android application
Step Six — Send a Push Notification
Run this command to send a test notification to your app:
sudo python ~/test_push.py
Wait about 10 seconds. You should get a notification on your Android device.
Troubleshooting
If the notification does not appear on your device after about 10 seconds, follow these steps:
- Is your smartphone/tablet connected to the internet?
- Do you have the correct project key?
- Do you have the correct regid from the app?
- Is your server’s IP address added for the Google API server key?
- Is the server connected to the internet?
If you’re still not getting the notification, it’s probably the app. Check the logcat for some errors.
Where to Go from Here
Once you’ve done this simple test, you’ll probably want to send the notifications to all your users. Remember that you have to send them in sets of 1000. Also, if GCM responds with “invalid ID,” you must remove that registration ID from your database.
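Batching into sets of 1000 can be handled by a small helper (illustrative sketch, not part of python-gcm):

```python
def batches(reg_ids, size=1000):
    """Yield successive lists of at most `size` registration IDs."""
    for i in range(0, len(reg_ids), size):
        yield reg_ids[i:i + size]
```

Each yielded batch can then be passed to a single multicast request.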
You can adapt the examples in this tutorial to work with your own Android application.
This.
This code is freely available under the General Public License for use. If you've any comments or questions I'd be eager to take any constructive input.
I am well aware that there is a debate raging between using prime factor hash tables and hash tables based on factors that are powers of 2. I regard both as good solutions for the definition of hash table widths and recognise that each has its own advantages and disadvantages over the other (some of which are discussed briefly here).
This implementation is made up of two files. You might want to take the configuration section from the header file and place it in a separate file. It is included in the header file merely for compactness.
#include "chainedhashing.h"

int main() {
    HASHITEM *removed_item;

    // Create table
    HASHTABLE *hash_table = create_hash_table(10);

    // Add items to the table
    insert_item(hash_table, create_hash_item("de", "Germany"));
    insert_item(hash_table, create_hash_item("uk", "United Kingdom"));
    insert_item(hash_table, create_hash_item("us", "United States of America"));
    print_hash_table(hash_table, 0);

    // Get a specific value by its key
    printf("'%s' holds value '%s'\n", "uk", (char *)get_item_data(hash_table, "uk"));

    // Remove an item from the table
    removed_item = remove_item(hash_table, "de");
    free(removed_item);
    print_hash_table(hash_table, 0);

    // Finally destroy the table and free up its resources
    destroy_hash_table(hash_table);
    return 0;
}
12 October 2012 14:35 [Source: ICIS news]
HOUSTON (ICIS)--Foster Wheeler has won another contract for work on LANXESS’ 140,000 tonne/year neodymium polybutadiene (Nd-PBR) rubber plant project in Singapore.
Foster Wheeler said that under the terms of the contract it will be in charge of the Nd-PBR plant's engineering, procurement and construction management (EPCm).
The contract award follows Foster Wheeler’s successful completion of the project’s front-end engineering design earlier this year, it added.
ICIS reported previously that LANXESS' €200m ($260m) Nd-PBR plant is expected to start up in the first half of 2015. Construction began last month. Nd-PBR is used in tyre manufacturing.
LANXESS is building the Nd-PBR facility – “expected to be the largest of its kind in the world” – alongside its 100,000 tonne/year synthetic butyl rubber plant, which is due to start up in 2013, Foster said. Foster was also the EPCm contractor for the synthetic butyl rubber plant.
I have a Java project where I can view (debug) the Key of every record saved in the namespace.
This is my Key value when printed to String:
{Key@11379} "namespace:null:u17c3304e-9c9b-4905-9889-a9cf2b26dc9c:948b5799fd2f6d5e827e1eb3c5d1a14dfd31bccf"
How can I write a AQL query to identify this record in AQL? I need this to debug the logic. This is similar to AQL select with PK - #10 by kyle-banta-yoshida , however that thread deals with inserting the record via AQL with send_key option.
Can you please suggest how to construct the query to view the record, from the Key value?
Just wanting some of your opinions on session IDs.
I use code like this:
sub new_pid() {    # generates a new pid..
    return (time + ($$ ^ time));
}
I then carry this 'unique' number around with me (in form posts etc..) to determin if its the same session.
On some rare instances this can calculate two identical numbers so I was wondering if there was anything more robust for generating unique session IDs
Thanks,
___
/\__\ "What is the world coming to?"
\/__/
jdtoronto
Here is an excerpt from its POD:.
I wanted to also point out a pretty good discussion on this subject (one of many that I found with the search button): Secure Session ID values. One thing I learned in that thread is that there are "session hijackers" out there looking to figure out the algorithm that creates a session ID so that they can hijack a session in progress and hopefully get things like credit card information. For that reason, it's a good idea to not use an algorithm that produces a session ID by following a predictable pattern. This is probably why MD5 hashing is such a popular component of secure session ID's.
Dave
A company of my acquaintance had a gallery system. People could upload pictures which, once approved, would appear in the gallery. The pictures that had not yet been approved were stored under the session name in a temporary (but publically-readable) directory. The session key was based on the current time and a simple incrementing counter.
An enterprising porn operator noticed the system, and very quickly worked out how to access the public URLs of uploaded files. In a short period of time the company was inadvertantly hosting more than 3GB of, um, interesting pictures.
This case is a combination of a few minor things not to do, but the total effect was potentially very damaging.
-- bowling trophy thieves, die!
I agree that using something like CGI::Session is the best way to go, but if you want to roll your own or understand why this problem is harder than it looks, here's a little explanation. There are two goals to session IDs: uniqueness and difficulty to guess.
You're pretty close on the uniqueness one; time and PID ($$) are the traditional way to get a unique token, since if your process is running at a particular time there can't be any other process with the same PID running. But I'm not sure why you're adding and XORing them together; something like join(".",time,$$) would work better, and be simpler. This technique breaks if the same PID is re-used in the same second---fairly unlikely to happen on a normal system with sequential PIDs, but it's more likely on a system with random PIDs, and can sometimes be forced to happen by an attacker, for example by making 65535 requests to your Web server in the same second. This technique will also fail if you're using persistent processes to handle Web requests, for example mod_perl.
If you're storing sessions on the filesystem, the inode number of the session file is guaranteed to be unique. You can get that with stat. If sessions are in a database, the database can probably give you some kind of unique token (like an autoincrementing field). These techniques should always work, unlike PID+time, which at best will almost always work.
As far as making session IDs hard to guess, your best bet is to use a truly random number, decide how hard you want to make your session IDs to guess, and then append that many random bits onto the end. Math::TrulyRandom along with the Entropy Gathering Daemon (or /dev/urandom if your OS supports it) are a pretty good way to get truly random numbers; using rand gives pseudorandom numbers that aren't suitable for protecting anything.
--
Ilya Martynov, ilya@iponweb.net
CTO IPonWEB (UK) Ltd
Quality Perl Programming and Unix Support
UK managed @ offshore prices -
Personal website -
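Combining the two goals discussed above, a unique token plus truly random bits, run through a digest so the structure is not guessable, might look something like this (illustrative sketch only; the availability of /dev/urandom is an assumption):

```perl
use Digest::MD5 qw(md5_hex);

# Unique part (time + PID) plus a random part, hashed so the generation
# pattern is not recoverable by an attacker. Illustrative only.
sub new_session_id {
    my $random;
    if (open my $fh, '<', '/dev/urandom') {   # assumes a urandom device exists
        read $fh, $random, 16;
        close $fh;
    }
    else {
        $random = rand();                     # weak fallback -- see caveats above
    }
    return md5_hex(time() . $$ . $random);
}
```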
I generally use Data::UUID to generate unique IDs in my applications. Sometimes I use them as session IDs.
Cheers,
-Dan
I, for one, would be much happier to see a non decodable result, using say an MD5 hash.
Before storing the session id in MySQL, I confirm that it is unique. To date this has never returned a duplicate id.
I assume this means you check if that id is still in your MySQL table. What do you do when a session times out? Keep it? I would have thought it would make for more efficient operation if the session were deleted when it was either closed or timed out.
It is not foolproof (i.e. your attacker/spoofer could be coming from the same IP address as the spoofed session, or could even be spoofing the IP address), but it does add an extra layer of difficulty for the potential attacker, especially the attacker trying to randomly guess session IDs.
--JAS
This assumes that your clients are not accessing through a proxy. A proxy could introduce one of two "problems" to this...
Situation 1 could happen with a large provider (AOL, for example). Situation 2 could happen with a farm of load balanced proxies / NAT / firewalls.
Point being that this could generate some false positives (from a hack detection view), but if you can live with that, then yes, jsegal's suggestion does have merit :)
--MidLifeX
github-linguist
Warning: This package uses regular expressions to approximate the lines of code in a project. The results are not 100% precise because the regular expressions can make mistakes in edge cases, for example when comment tokens appear inside multiline strings.
Prerequisites
- Node.js 6+
Install
npm install github-linguist
or
yarn add github-linguist
Usage
You can use
loc in your terminal, or as an npm package in your projects.
Command line mode
Supports counting lines of code of a file or directory.
1. Lines of code in a single file
# loc file <path> loc file src/index.ts
2. Lines of code in a directory
# loc <pattern> loc dir **/*.ts
Third-party mode
import { LocFile, LocDir } from 'github-linguist';

// for a file.
const file = new LocFile(filePath);
const { info } = file.getInfo();

// for a directory.
const dir = new LocDir({
  cwd,     // root directory, or leave blank to use process.cwd()
  include, // string or string[] containing path patterns to include (default include all)
  exclude, // string or string[] containing path patterns to exclude (default exclude none)
});
const { info } = dir.loadInfo();
License
MIT License.
Introduction to C++ graphics
Graphics in C++ means creating a graphical model: drawing different shapes and adding colors to them. It can be done from the C++ console by importing the graphics.h library into the GCC compiler. We can draw circles, lines, ellipses, and other geometric shapes. Object-oriented programming is a primary technique used here. C++ does not have built-in drawing functions, so instead we use a graphics API.
Syntax
The formal syntax is given as:
#include <graphics.h>

int gd = DETECT, gm;
initgraph(&gd, &gm, "");
Few Graphics attributes are:
setcolor(color), setbkcolor(color), setlinestyle(style, pattern,thickness).
How do graphics work in C++?
Graphics here are a two-dimensional concept; to implement them in C++ we need a window, or canvas, on which to show the output, plus a few functions. Since we need a good framework to develop with, in this article I have used the Dev-C++ IDE; to work with graphics it needs an additional package, the WinBGIm graphics library.
To work with Dev-C++, we need to download graphics.h and libbgi.a. The next step is to go to Project > Project Options, select the Parameters tab, and paste the following into the Linker box: -lbgi -lgdi32 -lcomdlg32 -luuid -loleaut32 -lole32.
Much GUI programming in C++ stalls because the language ships with no default graphics library.
To work with graphics, we need a few essentials before entering the code.
1. Co-ordinates: This specifies how points are placed in a window; the origin of the screen is assumed to be (0,0), at the upper-left corner. This coordinate system determines how and where a draw operation takes place. The graphics screen has 640 x 480 pixels.
2. Basics of Color: The primary color elements are red, green, and blue; combinations of these determine the color of each pixel on the screen. To set a color, we can use setcolor(number); the number is a color code. For example, 14 is the code for yellow. Shading and coloring add extra effects to the image.
Few functions make the code more attractive, which works well in graphics mode.
- BLINK: It helps to blink a character on the screen window.
- GOTOXY: It helps to move a cursor to any position on the screen.
- Delay: It suspends execution for a given number of milliseconds; for example, to pause between frames of an animation.
- Position functions like getmaxx(), getx() and gety().
Ok, let’s go with the working steps in graphics code.
- The first step is to include a header file GRAPHICS.H with a graphic function, and graphic.lib has built-in library functions.
- Next is to include a function initgraph () which starts the graphic mode and comes with two variables gd, a graphic driver and gm, graphic mode. followed by this, we can use a directory path.
- Closegraph () – This function shifts the screen back to text mode. To end the program, this function is used; it flushes the memory used before for the graphics.
- clear() – It returns the cursor position to (0,0).
- circle () – Creates a circle with a given radius.
- line () – Creates a line with starting and ending points.
For example, to draw a simple line or a circle, the following parameters are added.
- lineto(x,y): it moves from the current position to the user-defined position.
- circle (x, y, radius): To draw a whole circle, we need a center radius.
- rectangle (x1, y1, x2, y2): where x1, y1 is the upper left side and the lower right is x2, y2.
Examples of C++ graphics
Here I have given a sample program on how to work on the graphics mode and development process in devC++.
Example #1
To draw a triangle in C++ using graphics
Code:
#include <graphics.h>
#include <iostream>

int main() {
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");
    line(140, 140, 350, 100);
    line(140, 140, 200, 200);
    line(350, 100, 200, 200);
    getch();
    closegraph();
    return 0;
}
Explanation
The above simple code draws three lines, each between a pair of points (x1, y1) and (x2, y2), forming a triangle on the screen. gd and gm are the graphics driver and graphics mode passed to the function initgraph. The generated graphics window of the above code is shown as:
Output:
Example #2
Creating a Home Page with Rectangle shapes and text
Code:
#include <iostream.h>
#include <conio.h>
#include <graphics.h>
#include <math.h>

void main() {
    clrscr();
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");
    setbkcolor(14);
    setcolor(6);
    settextstyle(2, 0, 4);
    outtextxy(180, 130, "G");
    setcolor(5);
    settextstyle(2, 0, 4);
    outtextxy(120, 120, "O");
    setcolor(6);
    settextstyle(2, 0, 4);
    outtextxy(300, 120, "O");
    setcolor(5);
    settextstyle(2, 0, 4);
    outtextxy(250, 130, "G");
    setcolor(2);
    settextstyle(2, 0, 4);
    outtextxy(360, 160, "L");
    setcolor(3);
    settextstyle(2, 0, 4);
    outtextxy(310, 130, "E");
    setcolor(9);
    settextstyle(2, 0, 4);
    setcolor(8);
    settextstyle(2, 0, 4);
    outtextxy(110, 250, "surf");
    settextstyle(2, 0, 4);
    outtextxy(350, 320, "Go AHEAD");
    setcolor(6);
    rectangle(130, 210, 450, 210);
    rectangle(90, 310, 170, 340);
    rectangle(360, 320, 510, 320);
    getch();
}
Explanation
The above code draws a rectangle shape along with text in a different color.
Output:
Example #3
Code:
#include <stdio.h>
#include <conio.h>
#include <graphics.h>
#include <dos.h>

void flood(int, int, int, int);

void main() {
    int gd, gm = DETECT;
    clrscr();
    detectgraph(&gd, &gm);
    initgraph(&gd, &gm, "C:\\TurboC3\\BGI");
    rectangle(60, 60, 90, 90);
    flood(50, 50, 8, 0);
    getch();
}

void flood(int a, int b, int fcol, int col) {
    if (getpixel(a, b) == col) {
        delay(15);
        putpixel(a, b, fcol);
        flood(a + 1, b, fcol, col);
        flood(a - 1, b, fcol, col);
        flood(a, b + 1, fcol, col);
        flood(a, b - 1, fcol, col);
    }
}
Explanation
The above code flood-fills the region around the seed point with the given fill color, recursing in all four directions.
Output:
Example #4
Code:
#include <conio.h>
#include <graphics.h>
#include <iostream>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
using namespace std;

// Draw an ellipse of semi-axes a1, b1 centred at (e1, e2), rotated by alp degrees
void ellipsedr(int e1, int e2, int a1, int b1, float alp, int color)
{
    float tt = 3.14 / 180;
    alp = 360 - alp;
    setcolor(color);
    for (int teta = 0; teta < 360; teta += 1) {
        int x1 = a1 * cos(tt * teta) * cos(tt * alp) + b1 * sin(tt * teta) * sin(tt * alp);
        int y1 = b1 * sin(tt * teta) * cos(tt * alp) - a1 * cos(tt * teta) * sin(tt * alp);
        putpixel(e1 + x1, e2 - y1, color);
    }
}

// Place the ellipse on the rim of the circle of radius rr centred at (e1, e2)
void view(int e1, int e2, int rr, int a1, int b1, int alp, float pp, int color)
{
    setcolor(color);
    float tt = 3.14 / 180;
    float ta, tb, d;
    float angle = pp * alp;
    ta = cos(tt * fmod(angle, 360));
    tb = sin(tt * fmod(angle, 360));
    ta *= ta;
    tb *= tb;
    ta = ta / (a1 * a1);
    tb = tb / (b1 * b1);
    d = sqrt(ta + tb);
    d = 1 / d;
    int gox = e1 + (rr + d) * cos(tt * alp);
    int goy = e2 - (rr + d) * sin(tt * alp);
    int goang = angle + alp;
    ellipsedr(gox, goy, a1, b1, goang, color);
}

// Animate the ellipse travelling around the circle
void elipsecirc(int e1, int e2, int rr, int a1, int b1)
{
    float teta = 0;
    double hei, pi1;
    hei = (a1 * a1) + (b1 * b1);
    hei /= 2;
    pi1 = sqrt(hei);
    pi1 /= rr;
    pi1 = 1 / pi1;
    for (;; teta -= 1) {
        view(e1, e2, rr, a1, b1, teta, pi1, WHITE);
        circle(e1, e2, rr);
        delay(25);
        view(e1, e2, rr, a1, b1, teta, pi1, BLACK);
    }
}

int main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");
    int mx = getmaxx();
    int my = getmaxy();
    elipsecirc(mx / 2, my / 2, 90, 30, 26);
    closegraph();
    return 0;
}
Explanation
The above code animates an ellipse travelling around the rim of a circle, drawing it at each step and then erasing it by redrawing in black.
Output:
Conclusion
In this article, we have described how graphics work in C++ programming. We have presented simple and general functions used in graphics to do programming. We have also discussed the design and example process to understand the concept.
Recommended Articles
This is a guide to C++ graphics. Here we discuss how graphics work in C++ programming, with examples along with the code and outputs.
28 October 2013 22:00 [Source: ICIS news]
HOUSTON (ICIS)--Here is Monday’s end-of-day markets summary:
CRUDE: Dec WTI: $98.68/bbl, up 83 cents; Dec Brent: $109.61/bbl, up $2.68
NYMEX WTI crude futures finished up, tracking a strong rally on ICE Brent in response to a drop in Libyan crude exports due to protests. The market also responded to released data showing a jump in US industrial output, which helped prop up the dollar.
RBOB: Nov: $2.6309/gal, up by 4.38 cents/gal
US November reformulated blendstock for oxygen blending (RBOB) gasoline futures tracked higher crude oil futures. A decline in Libyan oil exports revived international crude oil supply concerns. Last week’s rally in the energy complex was also seen as a buying opportunity.
NATURAL GAS: Nov: $3.569/MMBtu: down 13.8 cents
The November contract on the NYMEX natural gas futures market closed its penultimate day as the front month, falling 4% after revised weekend weather forecasts predicted warmer-than-average temperatures for much of the country through early November.
ETHANE: lower at 25.50 cents/gal
Ethane spot prices weakened, tracking a dip in natural gas futures.
AROMATICS: mixed xylenes higher at $3.70-4.00/gal, toluene tighter at $3.40-3.47/gal
US mixed xylenes (MX) spot prices were discussed at $3.70-4.00/gal FOB (free on board) on Monday, sources said. The range was up from $3.65-3.73/gal FOB on Friday. Meanwhile, US toluene spot prices were discussed at $3.40-3.47/gal FOB, sources said. The range was tighter compared with $3.37-3.47/gal FOB the previous session, as bids firmed.
OLEFINS: ethylene done higher at 47.5 cents/lb, PGP wider at 62-65 cents/lb
US October ethylene traded at 47.50 cents/lb on Monday, up from the previous reported trade at 46.25 cents/lb on 22 October. US October polymer-grade propylene (PGP) bid/offer levels widened to 62.0-65.0 cents/lb from 62.5-64.0 cents/lb at the close | http://www.icis.com/Articles/2013/10/28/9719630/EVENING-SNAPSHOT---Americas-Markets-Summary.html | CC-MAIN-2015-14 | refinedweb | 355 | 71 |
NAME
perlglossary - Perl Glossary
VERSION
version 5.021011
DESCRIPTION.
- address operator.
- algorithm
A well-defined sequence of steps, explained clearly enough that even a computer could do them.
- alias
A nickname for something, which behaves in all ways as though you’d used the original name instead of the nickname. Temporary aliases are implicitly created in the loop variable for foreach loops, in the $_ variable for map or grep operators, in $a and $b during sort’s comparison function, and in each element of @_ for the actual arguments of a subroutine call.
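A quick illustration of foreach aliasing (this snippet is not part of the glossary itself): because the loop variable aliases each element rather than copying it, modifying it modifies the array.

```perl
my @nums = (1, 2, 3);
for my $n (@nums) {    # $n aliases each element in turn
    $n *= 10;          # so this modifies @nums itself
}
print "@nums\n";       # prints "10 20 30"
```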
- alphabetic
The sort of characters we put into words. In Unicode, this is all letters including all ideographs and certain diacritics, letter numbers like Roman numerals, and various combining marks.
- alternatives.
- anonymous
Used to describe a referent that is not directly accessible through a named variable. Such a referent must be indirectly accessible through at least one hard reference. When the last hard reference goes away, the anonymous referent is destroyed without pity.
- application
A bigger, fancier sort of program with a fancier name so people don’t realize they are using a program.
-Vis.
- Artistic License
The open source license that Larry Wall created for Perl, maximizing Perl’s usefulness, availability, and modifiability.
- associative array
See hash. Please. The term associative array is the old Perl 4 term for a hash. Some languages call it a dictionary.
- associ Camel chapter 13, “Overloading”.
- autoincrement
To add one to something automatically, hence the name of the
++ operator. To instead subtract one from something automatically is known as an “autodecrement”.
- autoload
To load on demand. (Also called “lazy” loading.) Specifically, to call an
AUTOLOAD subroutine on behalf of an undefined subroutine.
- autosplit
To split a string automatically, as the –a switch does when running under –p or –n in order to emulate awk. (See also the
AutoSplit module, which has nothing to do with the –a switch but a lot to do with autoloading.)
-.
- AV
Short for “array value”, which refers to one of Perl’s internal data types that holds an array. The
AV type is a subclass of SV.
- awk
Descriptive editing term—short for “awkward”. Also coincidentally refers to a venerable text-processing language from which Perl derived some of its high-level ideas.
B
- backreference.
- backtracking the section “The Little Engine That /Couldn(n’t)” in Camel chapter 5, “Pattern Matching”.
- the
blessfunction in Camel chapter 27, “Functions”.
- block acts like a
BLOCK, such as within an
eval.
-”.
- bucket
A location in a hash table containing (potentially) multiple entries whose keys “hash” to the same hash value according to its hash function. (As internal policy, you don’t have to worry about it unless you’re into internals, or policy.)
- buffer
- built-in
A function that is predefined in the language. Even when hidden by overriding, you can always get at a built- in function by qualifying its name with the
CORE:: pseudopackage.
- bundle
A group of related modules on CPAN. (Also sometimes refers to a group of command-line switches grouped into one switch cluster.)
- byte
A piece of data worth eight bits in most places.
- bytecode.
C
- C.
- cache
A data repository. Instead of computing expensive answers several times, compute it once and save the result.
- callback
A handler that you register with some other part of your program in the hope that the other part of your program will trigger your handler when some event of interest transpires.
-.
- canonical
Reduced to a standard form to facilitate comparison.
- capture variables
The variables—such as
$1and
$2, and
%+and
%–—that hold the text remembered in a pattern match. See Camel chapter 5, “Pattern Matching”.
- capturing
The use of parentheses around a subpattern in a regular expression to store the matched substring as a backreference. (Captured strings are also returned as a list in list context.) See Camel chapter 5, “Pattern Matching”.
-.
- casefolding
Comparing or matching a string case-insensitively. In Perl, it is implemented with the
/i pattern modifier, the fc function, and the \F double-quote translation escape.
- casemapping
The process of converting a string to one of the four Unicode casemaps; in Perl, it is implemented with the
fc,
lc,
ucfirst, and
uc functions.
- character
The smallest individual element of a string. Computers store characters as integers, but Perl lets you operate on them as text. The integer used to represent a particular character is called that character’s codepoint.
- character class
A square-bracketed list of characters used in a regular expression to indicate that any character of the set may occur at a given point. Loosely, any predefined set of characters so used.
- character property
A predefined character class matchable by the
\por
\P metasymbol. Unicode defines hundreds of standard properties for every possible codepoint, and Perl defines a few of its own. Also see instance method.
- client
In networking, a process that initiates contact with a server process in order to exchange data and perhaps receive a service.
- closure.
- cluster
A parenthesized subpattern used to group parts of a regular expression into a single atom.
- CODE
The word returned by the
ref function when you apply it to a reference to a subroutine. See also CV.
- code generator
A system that writes code for you in a low-level language, such as code to implement the backend of a compiler. See program generator.
-”.
- co-maintainer
A person with permissions to index a namespace in PAUSE..
- command.
- command buffering.
- command-line arguments
The values you supply along with a program name when you tell a shell to execute a command. These values are passed to a Perl program through
@ARGV.
- command name
The name of the program currently executing, as typed on the command line. In C, the command name is passed to the program as the first command-line argument. In Perl, it comes in separately as
$0.
- comment
A remark that doesn’t affect the meaning of the program. In Perl, a comment is introduced by a
#character and continues to the end of the line.
- compilation unit
The file (or string, in the case of
eval) that is currently being compiled.
- compile
The process of turning source code into a machine-usable form. See compile phase.
- compile phase
Any time before Perl starts running your main program. See also run phase. Compile phase is mostly spent in compile time, but may also be spent in runtime when
BEGIN blocks, use or no declarations, or constant subexpressions are being evaluated. The startup and import code of any use declaration is also run during compile phase.
- compiler Camel chapter 16, “Compiling”.
- compile time
The time when Perl is trying to make sense of your code, as opposed to when it thinks it knows what your code means and is merely trying to do what it thinks your code says to do, which is runtime.
- composer the section “Creating References” in Camel chapter 8, “References”.
-, or subroutine that composes, initializes, blesses, and returns an object. Sometimes we use the term loosely to mean a composer.
- context).
- continuation.
- core dump
The corpse of a process, in the form of a file left in the working directory of the process, usually as a result of certain kinds of fatal errors.
- CPAN
The Comprehensive Perl Archive Network. (See the Camel Preface and Camel chapter 19, “CPAN” for details.)
- C preprocessor
The typical C compiler’s first pass, which processes lines beginning with
#for conditional compilation and macro definition, and does various manipulations of the program text based on the current definitions. Also known as cpp(1).
- cracker
Someone who breaks security on computer systems. A cracker may be a true hacker or only a script kiddie.
- currently selected output channel
The last filehandle that was designated with
select(FILEHANDLE);
STDOUT, if no filehandle has been selected.
- current package
The package in which the current statement is compiled. Scan backward in the text of your program through the current lexical scope or any enclosing lexical scopes until you find a package declaration. That’s your current package name.
- current working directory
See working directory.
- CV
In academia, a curriculum vitæ, a fancy kind of résumé. In Perl, an internal “code value” typedef holding a subroutine. The
CV type is a subclass of SV.
D
- dangling statement
A bare, single statement, without any braces, hanging off an
ifor
whileconditional. C allows them. Perl doesn’t.
- datagram
A packet of data, such as a UDP message, that (from the viewpoint of the programs involved) can be sent independently over the network. (In fact, all packets are sent independently at the IP level, but stream protocols such as TCP hide the packet boundaries from the program.)
- DBM
Stands for “Database Management” routines, a set of routines that emulate an associative array using disk files. Perl lets you tie your hash variables to various DBM implementations.
-.
- declarator
Something that tells your program what sort of variable you’d like. Perl doesn’t require you to declare variables, but you can use
my,
our, or
state to denote that you want something other than the default.
- decrement
To subtract a value from a variable, as in “decrement
$x” (meaning to remove 1 from its value) or “decrement
$x by 3”.
- default
A value chosen for you if you don’t supply a value of your own.
- defined the
definedentry in Camel chapter 27, “Functions”.
- delimiter
A character or string that sets bounds to an arbitrarily sized textual object, not to be confused with a separator or terminator. “To delimit” really just means “to surround” or “to enclose” (like these parentheses are doing).
-.
- directive
A pod directive. See Camel chapter 23, “Plain Old Documentation”.
- directory
A special file that contains other files. Some operating systems call these “folders”, “drawers”, “catalogues”, or “catalogs”.
- directory handle
A name that represents a particular instance of opening a directory to read it, until you close it. See the
opendirfunction.
- discipline
Some people need this and some people avoid it. For Perl, it’s an old way to say I/O layer.
- dispatch.
- distribution
A standard, bundled release of a system of software. The default usage implies source code is included. If that is not the case, it will be called a “binary-only” distribution.
- dual-lived
Some modules live both in the Standard Library and on CPAN. These modules might be developed on two tracks as people modify either version. The trend currently is to untangle these situations.
- dweomer
An enchantment, illusion, phantasm, or jugglery. Said when Perl’s magical dwimmer effects don’t do what you expect, but rather seem to be the product of arcane dweomercraft, sorcery, or wonder working. [From Middle English.]
- dwimmeroperator. (Compare lexical scoping.) Used more loosely to mean how a subroutine that is in the middle of calling another subroutine “contains” that subroutine at runtime..
- encapsulation
The veil of abstraction separating the interface from the implementation (whether enforced or not), which mandates that all access to an object’s state be through methods alone.
- endian
See little-endian and big-endian.
- en passant
When you change a value as it is being copied. [From French “in passing”, as in the exotic pawn-capturing maneuver in chess.]
-operator.
-built
See status.
- exploit
Used as a noun in this case, this refers to a known way to compromise a program to get it to do something the author didn’t intend. Your task is to write unexploitable programs.
-
- false).
- fatal error
An uncaught exception, which causes termination of the process after printing a message on your standard error stream. Errors that happen inside an
eval are not fatal. Instead, the eval terminates after placing the exception message in the $@ ($EVAL_ERROR) variable. You can try to provoke a fatal error with the die operator (known as throwing or raising an exception), but this may be caught by a dynamically enclosing eval. If not caught, the die becomes a fatal error.
- feeping creaturism
A spoonerism of “creeping featurism”, noting the biological urge to add just one more feature to a program.
- field
A single piece of numeric or string data that is part of a longer string, record, or line. Variable-width fields are usually split up by separators (so use
splitto extract the fields), while fixed-width fields are usually at fixed positions (so use
unpack). Instance variables are also known as “fields”.
-glob
A “wildcard” match on filenames. See the
globfunction.
- filehandle.
- filename
One name for a file. This name is listed in a directory. You can use it in an
opento tell the operating system exactly which file you want to open, and associate the file with a filehandle, which will carry the subsequent identity of that file in your program, until you close it.
-.
- file test operator
A built-in unary operator that you use to determine whether something is true about a file, such as
–o $filenameto test whether you’re the owner of the file.
- filter
A program designed to take a stream of input and transform it into a stream of output.
- first-come
The first PAUSE author to upload a namespace automatically becomes the primary maintainer for that namespace. The “first come” permissions distinguish a primary maintainer who was assigned that role from one who received it automatically.
- flag
We tend to avoid this term because it means so many things. It may mean a command-line switch that takes no argument itself (such as Perl’s
–nand
–pflags) or, less frequently, a single-bit indicator (such as the
O_CREATand
O_EXCLflags used in
sysopen). Sometimes informally used to refer to certain regex modifiers.
- floating point.
- flush
The act of emptying a buffer, often before it’s full.
- FMTEYEWTK
Far More Than Everything You Ever Wanted To Know. An exhaustive treatise on one narrow topic, something of a super-FAQ. See Tom for far more.
- foldcase.
- fork
To create a child process identical to the parent process at its moment of conception, at least until it gets ideas of its own. A thread with protected memory.
- formal arguments
The generic names by which a subroutine knows its arguments. In many languages, formal arguments are always given individual names;list. See also actual.
G
- garbage collection.)
- Declarations” in Camel chapter 4, “Statements and Declarations”.
-.
- grapheme.
- greedy
A subpattern whose quantifier wants to match as many things as possible.
- grep
Originally from the old Unix editor command for “Globally search for a Regular Expression and Print it”, now used in the general sense of any kind of search, especially text searches. Perl has a built-in
grepfunction that searches a list for elements matching any given criterion, whereas the grep(1) program searches for lines matching a regular expression in one or more files.
- group
A set of users of which you are a member. In some operating systems (like Unix), you can give certain file access permissions to other members of your group.
- GV
An internal “glob value” typedef, holding a typeglob. The
GV type is a subclass of SV.
H
-.
- handler
A subroutine or method that Perl callsin Camel chapter 27, “Functions”. (Header files have been superseded by the module mechanism.)
- 15 are customarily represented by the letters
athrough
f. Hexadecimal constants in Perl start with
0x. See also the
hexfunction in Camel chapter 27, “Functions”.
- home directory.)
- host
The computer on which a program or other data resides.
- hubris
Excessive pride, the sort of thing for which Zeus zaps you. Also the quality that makes you write (and maintain) programs that other people won’t want to say bad things about. Hence, the third great virtue of a programmer. See also laziness and impatience.
- HV
Short for a “hash value” typedef, which holds Perl’s internal representation of a hash. The
HV type is a subclass of SV.
I
-.)
-.
- implementation
How a piece of code actually goes about doing its job. Users of the code should not count on implementation details staying the same unless they are part of the published interface.
- import
To gain access to symbols that are exported from another module. See
usein Camel chapter 27, “Functions”.
- increment
To increase the value of something by 1 (or by some other number, if so specified).
- indexingfunction merely locates the position (index) of one string in another.
- indirect filehandle
An expression that evaluates to something that can be used as a filehandle: a string (filehandle name), a typeglob, a typeglob reference, or a low-level IO object.
- indirection
If something in a program isn’t the value you’re looking for but indicates where the value is, that’s indirection. This can be done with either symbolic references or hard.
- indirect object
In English grammar, a short noun phrase between a verb and its direct object indicating the beneficiary or recipient of the action. In Perl,
print STDOUT "$foo\n"; can be understood as “verb indirect-object object”, where STDOUT is the recipient of the print action, and "$foo\n" is the object being printed. Similarly, when invoking a method, you might place the invocant in the dative slot between the method and its arguments:
$gollum = new Pathetic::Creature "Sméagol"; give $gollum "Fisssssh!"; give $gollum "Precious!";
- indirect object slot
The syntactic position falling between a method call and its arguments when using the indirect object invocation syntax. (The slot is distinguished by the absence of a comma between it and the next argument.)
STDERRis in the indirect object slot here:
print STDERR "Awake! Awake! Fear, Fire, Foes! Awake!\n";
- data
See instance variable.
- instance method.
- instance variable
An attribute of an object; data stored with the particular object rather than with the class as a whole.
- integer
A number with no fractional (decimal) part. A counting number, like 1, 2, 3, and so on, but including 0 and the negatives.
- interface
The services a piece of code promises to provide forever, in contrast to its implementation, which it should feel free to change whenever it likes.
- interpolation.
- interpreter runtime system then interprets.
- to do what you think it’s supposed to do. We usually “call” subroutines but “invoke” methods, since it sounds cooler.
- I/O
Input from, or output to, a file or device.
- IO
An internal I/O object. Can also mean indirect object.
-throughET signatures.
K
- key
The string index to a hash, used to look up the value associated with that key.
- keyword
See reserved words.
L
- label
A name you give to a statement.
- leftmost longest.
- left shift
A bit shift that multiplies the number by some power of 2.
- lexeme
Fancy term for a token.
- lexer
Fancy term for a tokener.
- lexical analysis
Fancy term for tokenizing.
- lexical scoping.
- lexical variable
A variable subject to lexical scoping, declared by
my. Often just called a “lexical”. (The
our declaration declares a lexically scoped name for a global variable, which is not itself a lexical variable.)
- library
Generally, a collection of procedures. In ancient days, referred to a collection of subroutines in a .pl file. In modern times, refers more often to the entire collection of Perl modules on your system.
- LIFO
Last In, First Out. See also FIFO. A LIFO is usually called a stack.
- line
In Unix, a sequence of zero or more nonnewline characters terminated with a newline character. On non-Unix machines, this is emulated by the C library even if the underlying operating system has different ideas.
- linebreak
A grapheme consisting of either a carriage return followed by a line feed or any character with the Unicode Vertical Space character property.
- line buffering
Used by a standard I/O output stream that flushes its buffer after every newline. Many standard I/O libraries automatically set up line buffering on output that is going to the terminal.
- line number.
- link
Used as a noun, a name in a directory that represents.
- LIST
A syntactic construct representing a comma-separated list of expressions, evaluated to produce a list value. Each expression in a
LISTof arguments tell those arguments that they should produce a list value. See also context.
- list operator
An operator that does something with a list of values, such as
joinor
grep. Usually used for named built-in operators (such asoperator.
- ASCII Camel chapter 12, “Objects”.
-, “CPAN”.
- minimalism
The belief that “small is beautiful”. Paradoxically, if you say something in a small language, it turns out big, and if you say it in a big language, it turns out small. Go figure.
- mode
In the context of the stat(2) syscall, refers to the field holding the permission bits and the type of the file.
- modifier
See statement modifier, regular expression, and lvalue, not necessarily in that order.
- module
A file that defines a package of (almost) the same name, which can either export symbols or function as an object class. (A module’s main .pm file may also load in other files in support of the module.) See the
usebuilt-in.
- modulus
An integer divisor when you’re interested in the remainder instead of the quotient.
- mojibake”.
- monger
Short for one member of Perl mongers, a purveyor of Perl.
- mortal
A temporary value scheduled to die when the current statement finishes.
- mro
See method resolution order.
- multidimensional array
An array with multiple subscripts for finding a single element. Perl implements these using references—see Camel chapter 9, “Data Structures”.
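A short illustrative sketch (not from the glossary text): a two-dimensional array in Perl is an array whose elements are references to anonymous arrays, and adjacent subscripts chain with an implicit arrow.

```perl
my @matrix = (
    [1, 2, 3],   # row 0
    [4, 5, 6],   # row 1
);
print $matrix[1][2], "\n";   # row 1, column 2: prints 6
```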
- multiple inheritance
The features you got from your mother and father, mixed together unpredictably. (See also inheritance and single inheritance.) In computer languages (including Perl), it is.
- NaN
Not a number. The value Perl uses for certain invalid or inexpressible floating-point operations.
- Perl strings. For Windows machines writing text files, and for certain physical devices like terminals, the single newline gets automatically translated by your C library into a line feed and a carriage return, but normally, no translation is done.
- NFS
Network File System, which allows you to mount a remote filesystem as if it were local.
- normalization
Converting a text string into an alternate but equivalent canonical (or compatible) representation that can then be compared for equivalence. Unicode recognizes four different normalization forms: NFD, NFC, NFKD, and NFKC.
- null character
A character with the numeric.
- octal
A number in base 8. Only the digits 0 through 7 are allowed. Octal constants in Perl start with 0, as in 013. See also the
octfunction.
- offset
How many things you have to skip over when moving from the beginning of a string or array to a specific position within it. Thus, the minimum offset is zero, not one, because you don’t skip anything to get to the first item.
- one-liner
An entire computer program crammed into one line of text.
- open source software
Programs for which the source code is freely available and freely redistributable, with no commercial strings attached. For a more detailed definition, see.
- operand”.
- options
See either switches or regular expression modifiers.
- ordinal
An abstract character’s integer value. Same thing as codepoint.
- overloading
Giving additional meanings to a symbol or construct. Actually, all languages do overloading to one extent or another, since people are good at figuring out things from context.
- overriding the section “Overriding Built-in Functions” in Camel chapter 11, “Modules”), and to describe how you can define a replacement method in a derived class to hide a base class’s method of the same name (see Camel chapter 12, “Objects”).
-.
- PAUSE
The Perl Authors Upload SErver (), the gateway for modules on their way to CPAN.
- Perl mongers
A Perl user group, taking the form of its name from the New York Perl mongers, the first Perl user group. Find one near you at.
- permission bits
Bits that the owner of a file sets or unsets to allow or disallow access to other people. These flag bits are part of the mode word returned by the
statbuilt-in when you ask about a file. On Unix systems, you can check the ls(1) manpage for more information.
- Pern
What you get when you do
Perl++ twice. Doing it only once will curl your hair. You have to increment it eight times to shampoo your hair. Lather, rinse, iterate.
- pipe
A direct.
- pod
The markup used to embed documentation into your Perl code. Pod stands for “Plain old documentation”. See Camel chapter 23, “Plain Old Documentation”.
- pod command
A sequence, such as
=head1, that denotes the start of a pod section.
- pointer).
- polymorphism
The notion that you can tell an object to do something generic, and the object will interpret the command in different ways depending on its type. [< Greek πολυ- + μορϕή, many forms.]
- port.
- portable, such as a mobile home or London Bridge.
- porter
Someone who “carries” software from one platform to another. Porting programs written in platform-dependent languages such as C can be difficult work, but porting programs like Perl is very much worth the agony.
- possessive
Said of quantifiers and groups in patterns that refuse to give up anything once they’ve gotten their mitts on it. Catchier and easier to say than the even more formal nonbacktrackable.
- POSIX
The Portable Operating System Interface specification.
- postfix
An operator that follows its operand, as in
$x++.
- pp
An internal shorthand for a “push-pop” code; that is, C code implementing Perl’s stack machine.
- pragma
A standard module whose practical hints and suggestions are received (and possibly ignored) at compile time. Pragmas are named in all lowercase.
- precedence
The rules of conduct that, in the absence of other guidance, determine what should happen first. For example, in the absence of parentheses, you always do multiplication before addition.
- prefix
An operator that precedes its operand, as in
++$x.
- preprocessing
What some helper process did to transform the incoming data into a form more suitable for the current process. Often done with an incoming pipe. See also C preprocessor.
- primary maintainer
The author that PAUSE allows to assign co-maintainer permissions to a namespace. A primary maintainer can give up this distinction by assigning it to another PAUSE author. See Camel chapter 19, “CPAN”.
- procedure
A subroutine.
- process
An instance of a running program. Under multitasking systems like Unix, two or more separate processes could be running the same program independently at the same time—in fact, the
forkfunction is designed to bring about this happy state of affairs. Under other operating systems, processes are sometimes called “threads”, “tasks”, or “jobs”, often with slight nuances in meaning.
- program
See script.
- program generator
A system that algorithmically writes code for you in a high-level language. See also code generator.
- progressive matching
Pattern matching matching>that picks up where it left off before.
- property
See either instance variable or character property.
- protocol
In networking, an agreed-upon way of sending messages back and forth so that neither correspondent will get too confused.
- prototype operator X
that looks something like a literal, such as the output-grabbing operator, <literal moreinfo="none"`>and
waitpidfunction calls.
- record.
- recursion
The art of defining something (at least partly) in terms of itself, which is a naughty no-no in dictionaries but often works out okay in computer programs if you.
- Camel chapter 5, “Pattern Matching”.
- regular expression modifier
An option on a pattern or substitution, such as
/i to render the pattern case-insensitive.
- runtime but may also be spent in compile time when
require,
do
FILE, or
eval
STRINGoperators are executed, or when a substitution uses the
/eemodifier.
- runtime
The time when Perl is actually doing what your code says to do, as opposed to the earlier period of time when it was trying to figure out whether what you said made any sense whatsoever, which is compile time.
- runtime pattern
A pattern that contains one or more variables to be interpolated before parsing the pattern as a regular expression,
- sandbox
A walled off area that’s not supposed to affect beyond its walls. You let kids play in the sandbox instead of running in the road. See Camel chapter 20, “Security”.
- scalar
A simple, singular value; a number, string, or reference.
- scalar context.
-
From how far away you can see a variable, looking through one. Perl has two visibility mechanisms. It does dynamic scoping of
localvariables, meaning that the rest of the block, and any subroutines that are called by the rest of the block, can see the variables that are local to the block. Perl does lexical scoping of
myvariables, meaning that the rest of the block can see the variable, but other subroutines called by the block cannot see the variable.
- scratchpad
The area in which a particular invocation of a particular file or subroutine keeps some of its temporary values, including any lexically scoped variables.
- scriptfunction who need service to get in touch with it.
- service
Something you do for someone else to make them happy, like giving them the time of day (or of their life). On some machines, well-known services are listed by the
getserventfunction.
- setgid
Same asor
- sigil
A glyph used in magic. Or, for Perl, the symbol in front of a variable name, such as
$,
@, and
%.
- signal
A bolt out of the blue; that is, an event triggered by the operating system, probably when you’re least expecting it.
- signal handlerbuilt-in. See the
%SIGhash in Camel chapter 25, “Special Names” and the section “Signals” in Camel chapter 15, “Interprocess Communication”.
- single inheritance output> filehandle
STDERR. You can use this stream explicitly, but the
dieand
warnbuilt-ins write to your standard error stream automatically (unless trapped or otherwise intercepted).
- standard input
The default input stream for your program, which if possible shouldn’t care where its data is coming from. Represented within a Perl program by the filehandle
STDIN.
- standard I/O
A standard C library for doing buffered input and output to the operating system. (The “standard” of standard I/O is at.
- Standard Library
Everything that comes with the official perl distribution. Some vendor versions of perl change their distributions, leaving out some parts or including extras. See also dual-lived.
- standard output
The default output stream for your program, which if possible shouldn’t care where its data is going. Represented within a Perl program by the filehandle
STDOUT.
-.
If you’re a C or C++ programmer, you might be looking for Perl’s
statekeyword.
- static method
No such thing. See class method.
- static scoping
No such thing. See lexical scoping.
- static variable
No such thing. Just use a lexical variable in a scope larger than your subroutine, or declare it with
stateinstead of with
my.
- stat structure
A special internal spot in which Perl keeps the information about the last file on which you requested information.
- statusin Camel chapter 27, “Functions”.
-
given. See “The
givenstatement” in Camel chapter 4, “Statements and Declarations”.
- σύνταξις, “with-arrangement”. How things (particularly symbols) are put together with each other.
- syntax tree
An internal representation of your program wherein lower-level constructs dangle off the higher-level constructs enclosing them.
- syscallfunction, which actually involves many syscalls. To avoid any confusion, we nearly always say “syscall” for something you could call indirectly via Perl’s
syscallfunction, and never for something you would call with Perl’s
systemfunction.
T
- setuid (or setgid) program, or if you use the
–Tswitch.
- taint mode
Running under the
–Tswitch, marking all external data as suspect and refusing to use it with system commands. See Camel chapter 20, “Security”.
-
A character or string that marks the end of another string. The
$/variable contains the string that terminates a
readlineoperation, which
chompdeletes from the end. Not to be confused with delimiters or separators. The period at the end of this sentence is a terminator.
- ternary
An operator taking three operands. Sometimes pronounced trinary.
- text
A string or file containing primarily printable characters.
- thread one another.
- tie
The bond between a magical variable and its implementation class. See the
tiefunction in Camel chapter 27, “Functions” and Camel chapter 14, “Tied Variables”.
- titlecase.
- topic
The thing you’re working on. Structures like
while(<>),
for,
foreach, and
givenset the topic for you by assigning to
$_, the default (topic) variable.
- transliteratefunction.
- type
See data type and class.
- type casting
Converting data from one type to another. C permits this. Perl does not need it. Nor want it.
- typedef
A type definition in the C and C++ languages.
- typed lexical
A lexical variable lexical>that is declared with a class type:
my Pony $bill.
- Camel chapter 2, “Bits and Pieces”.
-function.
-.
- Unix.
- uppercase
In Unicode, not just characters with the General Category of Uppercase Letter, but any character with the Uppercase property, including some Letter Numbers and Symbols. Not to be confused with titlecase.
V
- value
An actual piece of data, in contrast to all the variables, references, keys, indices,stream to the effect that something might be wrong but isn’t worth blowing up over. See
warnin Camel chapter 27, “Functions” and the
warningspragma in Camel chapter 28, “Pragmantic Modules”.
- watch expression
An expression which, when its value changes, causes a breakpoint in the Perl debugger.
- weak reference
A reference that doesn’t get counted normally. When all the normal references to data disappear, the data disappears. These are useful for circular references that would never disappear otherwise.
- whitespace .
- word.
- working directory
Your current directory, from which relative pathnames are interpreted by the operating system. The operating system knows your current directory because you told it with a
chdir, or because you started out in the place where your parent process was when you were born.
- wrapper
A program or subroutine that runs some other program or subroutine for you, modifying some of its input or output to better suit your purposes.
- WYSIWYG
What You See Is What You Get. Usually used when something that appears on the screen matches how it will eventually look, like Perl’s
formatdeclarations. Also used to mean the opposite of magic because everything works exactly as it appears, as in the three- argument form of
open.
X
- XS
An extraordinarily exported, expeditiously excellent, expressly eXternal Subroutine, executed in existing C or C++ or in an exciting extension language called (exasperatingly) XS.
-
A process that has died (exited) but whose parent has not yet received proper notification of its demise by virtue of having called
waitor
waitpid. If you
fork, you must clean up after your child processes when they exit; otherwise, the process table will fill up and your system administrator will Not Be Happy with you.
AUTHOR AND COPYRIGHT. | http://docs.activestate.com/activeperl/5.22/perl/lib/perlglossary.html | CC-MAIN-2019-09 | refinedweb | 5,658 | 57.87 |
Functors, applicatives, arrows, monads and many such abstractions form part of the Scalaz repertoire. It’s no wonder that using Scalaz needs a new way of thinking on your part than is with standard Scala. You need to think more like as if you’re modeling in Haskell rather than in any object-oriented language.
Typeclasses are the cornerstone of Scalaz distribution. Instead of thinking polymorphically in inheritance hierarchies, think in terms of designing APIs for the open world using typeclasses. Scalaz implements the Haskell hierarchy of typeclasses - Functors, Pointed, Applicative, Monad and the associated operations that come with them.
How is this different from the normal way of thinking ? Let’s consider an example from the current Scala point of view.
We say that with Scala we can design monadic abstractions.
flatMapis the bind which helps us glue abstractions just like you would do with
>>=of Haskell. But does the Scala standard library really have a monad abstraction ? No! If it had a monad then we would have been able to abstract over it as a separate type and implement APIs like sequence in Haskell ..
sequence :: Monad m => [m a] -> m [a]
We don’t have this in the Scala standard library. Consider another example of a typeclass,
Applicative, which is defined in Haskell as
class Functor f => Applicative f where
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
…
Here
purelifts
ainto the effectful environment of the functor, while
(<*>)takes a function from within the functor and applies it over the values of the functor. Scalaz implements
Applicativein Scala, so that you can write the following:
scala> import scalaz._
import scalaz._
scala> import Scalaz._
import Scalaz._
scala> List(10, 20, 30) <*> (List(1, 2, 3) map ((_: Int) * (_: Int)).curried)
res14: List[Int] = List(10, 20, 30, 20, 40, 60, 30, 60, 90)
Here we have a pure function that multiplies 2 Ints. We curry the function and partially apply to the members of the
List(1, 2, 3). Note
Listis an instance of
ApplicativeFunctor. Then we get a List of partial applications. Finally
<*>takes that
Listand applies to every member of
List(10, 20, 30)as a cartesian product. Of course the Haskell variant is much less verbose ..
(*) <$> [1, 2, 3] <*> [10, 20, 30]
and this is due to better type inference and curry by default strategy of function application.
You can get a more succinct variant in Scalaz using the
|@|combinator ..
scala> List(10, 20, 30) |@| List(1, 2, 3) apply (_ * _)
res17: List[Int] = List(10, 20, 30, 20, 40, 60, 30, 60, 90)
You can have many instances of Applicatives so long you implement the contract that the above definition mandates. Typeclasses give you the option to define abstractions for the open world. Like
List, there are many other applicatives implemented in Scalaz like options, tuples, function applications etc. The beauty of this implementation is that you can abstract over them in a uniform way through the power of the Scala type system. Just like
List, you can apply
<*>over options as well ..
scala> some(10) <*> (some(20) map ((_: Int) * (_: Int)).curried)
res18: Option[Int] = Some(200)
And since all Applicatives can be abstracted over without looking at the exact concrete type, here’s one that mixes an option with a function application through
<*>..
scala> some(9) <*> some((_: Int) + 3)
res19: Option[Int] = Some(12)
The Haskell equivalent of this one is ..
Just (+3) <*> Just 9
Scalaz uses two features of Scala to the fullest extent - higher kinds and implicits. The entire design of Scalaz is quite unlike the usual Scala based design that you would encounter elsewhere. Sometimes you will find these implementations quite opaque and verbose. But most of the verbosity comes from the way we encode typeclasses in Scala using implicits. Consider the following definition of map, which is available as a pimp defined in the trait MA ..
sealed trait MA[M[_], A] extends PimpedType[M[A]] {
import Scalaz._
//..
def map[B](f: A => B)(implicit t: Functor[M]): M[B] = //..
//..
}
maptakes a pure function
(f: A => B)and can be applied on any type constructor
Mso long it gets an instance of a
Functor[M]in its implicit context. Using the trait we pimp the specific type constructor with the
mapfunction.
Here are some examples of using applicatives and functors in Scalaz. For fun I had translated a few examples from Learn You a Haskell for Great Good. I also mention the corresponding Haskell version for each of them ..
// pure (+3) <*> Just 10 << from lyah
10.pure[Option] <*> some((_: Int) + 3) should equal(Some(13))
// pure (+) <*> Just 3 <*> Just 5 << lyah
// Note how pure lifts the function into Option applicative
// scala> p2c.pure[Option]
// res6: Option[(Int) => (Int) => Int] = Some(<function1>)
// scala> p2c
// res7: (Int) => (Int) => Int = <function1>
val p2c = ((_: Int) * (_: Int)).curried
some(5) <*> (some(3) <*> p2c.pure[Option]) should equal(Some(15))
// none if any one is none
some(9) <*> none should equal(none)
// (++) <$> Just "johntra" <*> Just "volta" << lyah
some("volta") <*> (some("johntra") map (((_: String) ++ (_: String)).curried))
should equal(Some("johntravolta"))
// more succinct
some("johntra") |@| (some("volta") apply (_ ++ _) should equal(Some("johntravolta"))
Scalaz is mind bending. It makes you think differently. In this post I have only scratched the surface and talked about a couple of typeclasses. But the only way to learn Scalaz is to run through the codebase. It's dense but if you like functional programming you will get lots of aha! moments going through it.
In the next post I will discuss how I translated part of a domain model of a financial trade system which I wrote in Haskell (Part 1, Part 2 and Part 3) into Scalaz. It has been a fun exercise for me and shows how you can write your Scala code in an applicative style.
2 comments:
"[...] and this is due to better type inference and curry by default strategy of function application."
And because Scala has - regrettably - no syntactic sugar support for Lists)
Remove the two "List" and it really looks smaller.
Stephan
scalaz is really cool.
Check out embeddedmonads mixing scalaz monads and scala's continuations for implicit monadic code.
The cool thing about monads is, one can even capture computations and analyze/interpret them later. For examples using embeddedmonads and scalaz for good see scala probability DSL | https://debasishg.blogspot.com/2010/11/exploring-scalaz.html | CC-MAIN-2017-39 | refinedweb | 1,071 | 73.07 |
Angular v7 is here but 7.1.0 is already breathing down its neck: RC phase kicks off
Angular v7 was recently released but the party is now officially over, now that Angular v7.1.0 is almost upon us – the release candidate phase has begun.
The release candidate phase for Angular v7.1.0 has officially begun. rc.0 arrives with three bug fixes and one feature in tow.
Feature
Update November 8, 2018
Ok, so Angular v7.1.0 is as real as it can be now that we’ve already reached the third beta.
This one is actually quite hefty: it brings six bug fixes and even more features!
Update November 1, 2018
The second beta for Angular v7.1.0 is rather modest; it brings five bug fixes and that’s about it. Still, you should know we’re not over the big seven just yet.
Yesterday, we published an interview with Manfred Steyer about the newly-released Angular v7, his thoughts on Ivy Renderer, how to select frameworks for your projects and more so if you want to read his first impressions and predictions, make sure to read it.
Update October 25, 2018
It’s been almost bug fixes and one feature.
Feature
- router: add prioritizedGuardValue operator optimization and allowing UrlTree return from guard (#26478) (fdfedce)
Update October 19.201
Yes, it is true — Angular v7 is here and the wait is finally over!
And we should be extra enthusiastic about this one since it’s a major release that implements changes, new features, and improvements throughout the entire platform, including the core framework, Angular Material, and the CLI.
Let’s take a quick look at some of the highlights.
CLI prompts
The CLI will now prompt users when running common commands like
ng new or
ng add @angular/material to help you discover built-in features like routing or SCSS support.
Application performance
After the Angular team discovered that many developers were including the
reflect-metadata polyfill in production, which is only needed in development, decided that in order to fix this, part of the update to v7 will automatically remove it from your
polyfills.ts file, and then include it as a build step when building your application in JIT mode, removing this polyfill from production builds by default.
Angular material & the CDK
Material Design has received a big update in 2018. Angular Material users updating to v7 should expect minor visual differences reflecting the updates to the Material Design specification.
Improved accessibility of selects
You can now improve the accessibility of your application by using a native
selectelement inside of a
mat-form-field. The native select has some performance, accessibility, and usability advantages, but we’re keeping
mat-select which gives full control of the presentation of options.
Angular Elements
Angular Elements now supports content projection using web standards for custom elements.
Partner Launches
Angular partners with several community projects that have launched recently. Namely:
-
Documentation updates
The documentation on angular.io now includes reference material for the Angular CLI.
Dependency Updates
The 7.0.0 release features updated dependencies on major 3rd party projects:
- TypeScript 3.1
- RxJS 6.3
- Node 10 — support for Node 10 added, and support for 8 continues.
Wait, still no Ivy?
According to the official blog post, Ivy is still under active development and is not part of the v7 release. “We are beginning to validate the backwards compatibility with existing applications and will announce an opt-in preview of Ivy as soon as it is ready in the coming months.”
For the full list of highlights and insights, head over to Stephen Fluin’s blog post and check out the GitHub repo for the extensive changelog.
Update October 11, 2018
There’s not a lot to say about rc.1 except that there aren’t any bug fixes or features. The only thing mentioned in the second release candidate is that “this version includes Ivy features and internal refactorings. There are no user-facing changes.”
We’re getting closer to the finish line!
Update October 1, 2018
The beta season is officially over. Now that the release candidate phase has begun, we’re one step closer to the grand revealing. Angular v7 should be released this month.
The first RC brings one feature.
Features
Update September 27, 2018
The beta tap is wide open! No.7 is here and there are still some features the Angular team is working on.
Case in point: this beta brings two bug fixes and two features.
Features
- compiler-cli: add support to extend
angularCompilerOptions(#22717) (d7e5bbf), closes #22684
- platform-server: update domino to v2.1.0 (#25564) (3fb0da2)
Update September 20, 2018:
Features
- bazel: add additional parameters to
ts_api_guardian_testdef (#25694) (2a21ca0)
- ivy: allow combined context discovery for components, directives and elements (#25754) (62be8c2)
- ivy: patch animations into metadata (#25828) (d2dfd48)
- ivy: resolve references to vars in .d.ts files (#25775) (96d6b79)
- ivy: support animation @triggers in templates (#25849) (e363388)
- ivy: support bootstrap in ngModuleDef (#25775) (13ccdfd)
Update September 6, 2018
The sixth beta arrives with nine bug fixes and two features in tow.
Let’s have a look!
Features:
- elements: enable Shadow DOM v1 and slots (#24861) (c9844a2)
- router: warn if navigation triggered outside Angular zone (#24959) (010e35d), closes #15770 #15946 #24728
Update September 6, 2018
The fifth beta contains four bug fixes and that’s about it.
The Release Candidate phase should be right around the corner but since there’s no release schedule, who knows when the beta tap will be turned off?
Update August 23, 2018
Another week, another beta. This time, beta.3 arrives with one feature in tow, namely:
Update August 16, 2018
Well, the headline pretty much says it all. The third beta arrives with two bug fixes in tow and … that’s about it.
Still, progress is progress and this means we’re one step closer to Angular v7. We miss the release schedule with all the betas and RCs but perhaps it’s for the best since Angular v6 came later than expected.
Update August 9. 2018
The countdown to the Angular v7 release has begun. The second beta arrives with four bug fixes and one feature in tow.
Sure, it’s just one feature but one now, one in beta.0, and before we know it, we’ll be able to put the pieces together and catch a glimpse of how Angular v7 looks like.
Feature:
Update August 3, 2018
Angular v7 will be here in September/October so there’s not a lot of time left. We’re one step closer to the general availability now that the first beta has landed.
There are just four bugfixes and one feature but what’s important is that we’re already seeing bits and pieces of the next version.
Feature:
Update July 26, 2018
Angular v6.1.0 is finally here and, as we can see from the long list of bugfixes and features, the team has been hard at work.
This important milestone arrives with almost 70 bugfixes and 20 interesting features, including TypeScript 2.9 support.
Here is the complete list of features:
- bazel: Initial commit of protractor_web_test_suite (#24787) (71e0df0)
- bazel: protractor_web_test_suite for release (#24787) (161ff5c)
- common: introduce KeyValuePipe (#24319) (2b49bf7)
- compiler: support
// ...and
// TODOin mock compiler expectations (#23441) (c6b206e)
- compiler-cli: update
tsickleto
0.29.x(#24233) (f69ac67)
-)
- core: add support for ShadowDOM v1 (#24718) (3553977)
- core: add support for using async/await with Jasmine (#24637) (71100e6)
- core: add support for ShadowDOM v1 (#24718) (3553977)
- core: add support for using async/await with Jasmine (#24637) (71100e6) ()), closes #24616
- platform-browser: add HammerJS lazy-loader symbols to public API (#23943) (26fbf1d)
- platform-browser: allow lazy-loading HammerJS (#23906) (313bdce)
- platform-server: use EventManagerPlugin on the server (#24132) (d6595eb)
- router: add urlUpdateStrategy allow updating the browser URL at the beginning of navigation (#24820) ([328971f]
- router: add navigation execution context info to activation hooks (#24204) (20c463e), closes #24202
- router: implement scrolling restoration service (#20030) (49c5234), closes #13636 #10929 #7791 #6595
- service-worker: add support for
?in SW config globbing (#24105) (250527c)
- typescript 2.9 support (#24652) (e3064d5)
And one more thing; there’s also a breaking change in 6.1.0, namely:
- bazel: Use of @angular/bazel rules now requires calling ng_setup_workspace() in your WORKSPACE file.
Update July 20, 2018 are some features to cheer you up:
- bazel: Initial commit of protractor_web_test_suite (#24787) (71e0df0)
- bazel: protractor_web_test_suite for release (#24787) (161ff5c)
- core: add support for ShadowDOM v1 (#24718) (3553977)
- core: add support for using async/await with Jasmine (#24637) (71100e6)
- router: add urlUpdateStrategy allow updating the browser URL at the beginning of navigation (#24820) (328971f), closes #24616
- service-worker: add support for
?in SW config globbing (#24105) (250527c)
As you can see, the fourth release candidate brings six features, as well as 15 bug fixes. It’s onwards and upwards from here!
Update July 13, 2018
The release candidate period for v6.1.0 has begun and there are already a lot of things happening.
rc.0 includes 13 bug fixes and four features.
Features
-!
Update July 9, 2018
Beta.3 is hardly new but up until now, there was just the announcement visible.
Now we can see that the fourth beta brings two bug fixes; we’re eager to see Angular v6.1.0.
Update July 2, 2018
More bug fixes! This time, the 6.0.7 release fixes a few things here and there.
Bug Fixes
- animations: set animations styles properly on platform-server (#24624)
- common: use correct ICU plural for locale mk (#24659)
Update June 22, 2018
Beta season continues for Angular v6.1.0 with a few minor fixes! This week’s bug fixes come with some improvements for the compilers and core for both 6.1.0 and 6.0.6.
Bug fixes
- compiler: support
.in import statements. (#20634) (d8f7b29), closes #20363
- core: Injector correctly honors the @Self flag (#24520) (ccbda9d)
Update June 14, 2018
We’re off to a good start! The second beta is here and it doesn’t come empty-handed. There are nine bugfixes and nine features.
Features
- common: introduce KeyValuePipe (#24319) (2b49bf7)
-)
- ivy: a generic visitor which allows prefixing nodes for ngtsc (#24230) (ca79e11)
- ivy: add support of ApplicationRef.bootstrapModuleFactory (#23811) (e3759f7)
- ivy: namespaced attributes added to output instructions (#24386) (82c5313)
- ivy: now supports SVG and MathML elements (#24377) (8c1ac28)
- router: implement scrolling restoration service (#20030) (49c5234), closes #13636 #10929 #7791 #6595.
PS: You can track their progress at ivy.angular.io.
Update June 7, 2018
Now that Angular v6 is here, it’s time to look toward the future, which happens to be all about Angular v7. What will this version bring? We don’t know yet but we’re pretty excited to see the bits and pieces and then put everything together this Fall.
That being said, it’s time to move on — to 6.1.0 to be more exact. The first beta arrived in early June with nearly 30 bugfixes and six feature in tow.
Features
- compiler: support
// ...and
// TODOin mock compiler expectations (#23441) (c6b206e)
- compiler-cli: update
tsickleto
0.29.x(#24233) (f69ac67)
- platform-browser: add HammerJS lazy-loader symbols to public API (#23943) (26fbf1d)
- platform-browser: allow lazy-loading HammerJS (#23906) (313bdce)
- platform-server: use EventManagerPlugin on the server (#24132) (d6595eb)
- router: add navigation execution context info to activation hooks (#24204) (20c463e), closes #24202
Angular v7 should be released in September/October 2018. Read more about the release schedule here.
Let’s revisit Angular v6
Angular v6 is the first release that unifies the Framework, Material and CLI. If you want to read more about the highlights and the new CLI-powered update workflow for your projects, check out the v6 release announcement. will keep your applications working. It’s more tree-shakable now, ensuring that only the pieces of RxJS that you use are included in your production bundles.
Note: If you use
ng update, your application should keep working, but you can learn more about the 5.5 to 6.0 migration..
3 Comments on "Angular v7 is here but 7.1.0 is already breathing down its neck: RC phase kicks off"
Wow, this is the most boring release
And what feature would you suggest?
Well @Fred i found this release very interesting. In my opinion, Jax team wrote the very detailed article and mentioned almost all the updated features. Great Work! | https://jaxenter.com/road-to-angular-v7-release-is-here-145326.html | CC-MAIN-2019-47 | refinedweb | 2,070 | 55.34 |
Can we create a table in the target DB by exploiting union-based SQL injection? I have managed to dump all the tables from the databases there
I was trying SQL injection on a demo lab and managed to dump all the tables and databases successfully, like a charm. But I wonder whether it is possible to create a new table using union-based SQL injection. Assume the privileges are lenient. I just want to know whether such an approach exists. I am focusing this question entirely on union-based SQL injection.
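To make the distinction concrete, here is a self-contained sqlite3 sketch (an illustration only, not the demo lab's actual DBMS; behaviour varies across databases and drivers). A UNION can only extend the result set of the one SELECT it lives in, whereas CREATE TABLE is a separate statement that needs stacked-query support:

```python
import sqlite3

# Toy target: an in-memory database standing in for the lab's DB
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Union-based injection: reading extra data works, because the UNION
# stays inside the single SELECT statement being executed
evil = "0 UNION SELECT 1, sqlite_version()"
rows = conn.execute("SELECT id, name FROM users WHERE id = " + evil).fetchall()
print(rows)  # the DB version leaks through the normal result rows

# DDL through the same injection point needs a second, stacked statement;
# a single-statement API such as sqlite3's execute() refuses to run it
blocked = False
try:
    conn.execute("SELECT id, name FROM users WHERE id = 0; CREATE TABLE pwned(x)")
except (sqlite3.Warning, sqlite3.ProgrammingError):
    blocked = True
print("stacked CREATE TABLE blocked:", blocked)
```

So union-based injection by itself is a read primitive; creating tables generally requires stacked queries (if the driver and DBMS allow them) or some separate write primitive, not the UNION mechanism itself.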
See also questions close to this topic
- How can I update multiple tables with one query in PHP?
if($_SERVER['REQUEST_METHOD'] == 'POST'){
    $content = file_get_contents('php://input');
    $user = json_decode($content, true);
    $id_a = $user['id_a'];
    $id_b = $user['id_b'];
    $aName = $user['aName'];
    $bName = $user['bName'];
    $sql = "
        BEGIN TRANSACTION;
        UPDATE `tb1` SET `aName` = '$aName' WHERE `id_a` = '$id_a';
        UPDATE `tb2` SET `bName` = '$nName' WHERE `id_b` = '$id_b';
        COMMIT
    ";
    $result = $conn->query($sql);
    if($result){
        echo json_encode(['status'=>'success','message'=>'Edited successfully']);
    } else{
        echo json_encode(['status'=>'error','message'=>'An error occurred editing the information.']);
    }
} else{
    echo json_encode(['status'=>'error','message'=>'REQUEST_METHOD Error']);
}
$conn->close();
I need to update data in multiple tables, but when I use the above code it responds with "Edited successfully", yet nothing changes in the database.
When I update a single table, it works.
- How to get updated data from the database into a bootgrid table as soon as I submit the HTML form?
This is my index.php file, in which I wrote code to get the data from the customersinfo.php file (in that file I'm fetching the data from the database) through AJAX.
I want the updated database contents after submitting the form, not the existing data.
- How to get the name of input field with PHP
I'm trying to create a function which will check whether a field is empty or not. If the input is empty, it will push an error for that input into the errors array,
like so:

$errors = ["name" => "Name is required"];
How can I do this?
function checkRequiredFields($fields = [], $errors = []) {
    foreach ($fields as $field) {
        if (is_blank($field)) {
            $errors[$field] = "$field is required";
        }
    }
}
By the way, here's how my is_blank function looks:
function is_blank($value) {
    return !isset($value) || trim($value) === '';
}
- Google Chrome weird cursor blink on pages, never seen 'em before
As I'm rather paranoid about online breaches, I recently noticed that Google Chrome page elements show a blinking cursor upon click, and that's spooky. Have the HTML page elements become editable? It's not happening with Mozilla, and the extensions enabled on Chrome are:
- Apollo Client Developer Tools,
- Authenticator
- Vue.JS Tools
On Firefox, it allows selection with no cursor blink, which is the default behaviour.
- Implementing EAP-TLS on a microcontroller
I want to use the EAP-TLS protocol to authenticate my embedded device (controlled through a Wi-Fi chip) to a wireless network secured with WPA2-Enterprise. Can anyone explain how to achieve this or recommend some good resources?
- Resource injection issue when creating new URL(resource)
In my project, a resource injection issue is reported by a Fortify static scan at the line creating a new URL(resource). How do I fix this?
String username = "james";
String resource = "" + username;
URL url = new URL(resource); // here it is giving a resource injection issue in the Fortify scan
System.setProperty("https.proxySet", "true");
System.setProperty("https.proxyHost", "11.09.11.111");
System.setProperty("https.proxyPort", "90");
HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
- Validating data before sending it to the database: looking for the correct way to do it
As I have read in many articles, data validation must be done on the server side, not on the client side.
I am wondering what actions I should take to ensure maximum security.
For now I'm doing only one action:
- Using prepared statements with bind_param for every SQL query.
Are there any more actions I should take? I would like to know about them.
- Writing injection-vulnerable code with sqlite3
I am writing a Python file that runs a query which is vulnerable to SQL injection.
Here, the table name, the column name the constraint applies to, and the constraint itself are given as command-line arguments when executing the Python file.
Here is the Python file:
import sqlite3
import sys

con = sqlite3.connect("univ1.db")
cur = con.cursor()

table = sys.argv[1]
column = sys.argv[2]
constraint = sys.argv[3]

cur.execute(
    """SELECT * FROM {} WHERE {} = '%s'""".format(table, column) % constraint)

rows = cur.fetchall()
for row in rows:
    print(','.join([str(val) for val in row]))
This code is supposed to be vulnerable to SQL injection, hence executing the following command is expected to drop the specified table from the database, along with printing the details of the classroom whose building is blah.
python3 query.py classroom building "blah'; DROP TABLE INSTRUCTOR; --'"
But since cursor.execute can execute only one statement at a time, the program terminates with a warning.
How can I allow executing multiple statements? Also note that the fetchall function should still return the relevant data.
Why am I asking this?
It is part of an assignment where I am supposed to write both an injection-disabled and an injection-vulnerable query file.
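One direction that fits sqlite3 specifically (an assumption about the intended answer, not the only possible one): Cursor/Connection executescript() runs every statement in the string it is given, so the stacked DROP TABLE smuggled in through the constraint actually executes. A minimal sketch against a throwaway in-memory database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE instructor (id INTEGER, name TEXT)")
con.execute("CREATE TABLE classroom (building TEXT, room TEXT)")

# The attacker-controlled command-line argument from the question
constraint = "blah'; DROP TABLE instructor; --"

# executescript() executes all statements in the string, so the stacked
# DROP TABLE sneaked in through the constraint is actually run
con.executescript("SELECT * FROM classroom WHERE building = '%s'" % constraint)

# Inspect the schema: the instructor table is gone
tables = [r[0] for r in
          con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
```

One caveat: executescript() discards result rows, so the legitimate SELECT's data would still have to be fetched through a separate execute() call to satisfy the fetchall requirement.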
- How to make sql string safe in vb.net without using parameters
I am working on a project written in vb.net that has a SQL injection problem. I almost fixed all the injections by using parameters, but there is one case where the SQL string is built, then encrypted and passed to viewState; later it is retrieved and decrypted from viewState to be run. Obviously I can't use parameters in this case, and refactoring so that the SQL is not passed in viewState is not possible. The only options are to avoid injection as much as possible when building the SQL statement, or to use parameters, get the generated SQL statement, and pass that to viewState.
The parameter values are a mix of strings and numbers.
For example:
sSQL = sSQL & " WHERE Name LIKE '" & lstBrowse.SelectedItem.Value & "%'"
sSQL = sSQL & " WHERE State ='" & lstBrowse.SelectedItem.Value & "'"
sSQL = sSQL & " WHERE OrganizationID=" & lstBrowse.SelectedItem.Value
How can I do that without using parameters? What should I consider and avoid?
- What filetype is automatically executed in the directory?
Windows Server Security:
What happens if you allow all filetypes in an upload to my Windows Server? Is there a filetype that is automatically executed when uploaded to a server? (A friend told me there is.)
I also saw this, but what kinds of vulnerabilities exist?
- What files do I need to modify on Linux to gain root access?
Assume that I have no ability to use sudo, but I have a shell script exploit that allows me to change the ownership of a file to the current user, by exploiting a C program with root permissions, specifically this line of code, in which we can modify the "file" parameter:
execlp("chown", user, file)
How would I exploit the ability to gain ownership of any file in the system to ultimately gain sudo access over the system? What files would I modify?
I've tried modifying the sudoers file itself, but it gives the following errors:
sudo: no valid sudoers sources found, quitting
sudo: /etc/sudoers is owned by uid 1000, should be 0
Note that I can't change the file's owner back to root, as I cannot chown the file back to root.
I am operating on a dummy VM right now and this is just an exercise, not doing anything illegal.
- Buffer overflow: p.interactive() does not give me a shell, despite exploit working without pwntools
I am trying to exploit the following program:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char buf[256];
    printf("Buffer is at %p.\n", buf);
    printf("Type in your name: ");
    fgets(buf, 1000, stdin);
    printf("Hello %s", buf);
    return 0;
}
It has been compiled using gcc -o bof bof.c -fno-stack-protector -z execstack. I am able to exploit the vulnerability if I disable ASLR. My exploit just has shellcode that executes /bin/sh, some useless NOPs, and finally the location of my shellcode on the stack.
$ python -c "import sys; sys.stdout.buffer.write(… + b'\x90' * 186 + b'\x50\xdd\xff\xff\xff\x7f')" | ./bof
Buffer is at 0x7fffffffdd50.
$ echo hello world
hello world
$ exit
sh: 2: Cannot set tty process group (No such process)
Yet, when I try doing the exact same thing within pwntools, I get the following:
$ python bof.py
[+] Starting local process './bof': pid 10967
Received: b'Buffer is at 0x7fffffffdd40.\n'
Using address: b'@\xdd\xff\xff\xff\x7f\x00\x00'
Using payload:
b"H1\xc0H1\xff\xb0\x03\x0f\x05PH\xbf/dev/ttyWT_P^f\xbe\x02'\xb0\x02\x0f\x05H1\xc0\xb0;H1\xdbS\xbbn/shH\xc1\xe3\x10f\xbbbiH\xc1\xe3\x10\xb7/SH\x89\xe7H\x83\xc7\x01H1\xf6H1\xd2\x0f\x05dd\xff\xff\xff\x7f\x00\x00"
[*] Switching to interactive mode
$
$
$
[*] Got EOF while sending in interactive
This is the code inside of bof.py:
from pwn import *

# Start the process
context.update(arch="i386", os="linux")
p = process("./bof")
received = str(p.recvline())
print("Received: " + received)

# Get the address of the buffer
buffer_addr_str = received.split()[3:][0][:-4]
buffer_addr = p64(int(buffer_addr_str, 16))
print("Using address: " + str(buffer_addr))

# Generate the payload
payload = b'…'
nops = b'\x90' * (264 - len(payload))
print("Using payload:")
print(payload + nops + buffer_addr)
print()

# Trigger the buffer overflow
p.send(payload + nops + buffer_addr)
p.interactive()
This is the shellcode that I'm using:
section .text
global _start

_start:
    ; Syscall to close stdin
    xor rax, rax
    xor rdi, rdi        ; Zero represents stdin
    mov al, 3           ; close(0)
    syscall

    ; open("/dev/tty", O_RDWR | ...)
    push rax            ; Push a NULL byte onto the stack
    mov rdi, 0x7974742f7665642f ; Move "/dev/tty" (written backwards) into rdi.
    push rdi            ; Push the string "/dev/tty" onto the stack.
    push rsp            ; Push a pointer to the string onto the stack.
    pop rdi             ; rdi now has a pointer to the string "/dev/tty"
                        ; This is equivalent to doing "mov rdi, rsp"
    push rax            ; Push a NULL byte onto the stack
    pop rsi             ; Make rsi NULL
                        ; This is equivalent to doing "mov rsi, 0"
    mov si, 0x2702      ; Flag for O_RDWR
    mov al, 0x2         ; Syscall for sys_open
    syscall

    ; Syscall for execve
    xor rax, rax
    mov al, 59
    ; Push a NULL byte onto the stack
    xor rbx, rbx
    push rbx
    ; Push /bin/sh onto the stack and get a pointer to it in rdi
    mov rbx, 0x68732f6e ; Move "n/sh" into rbx (written backwards).
    shl rbx, 16         ; Make 2 extra bytes of room in rbx
    mov bx, 0x6962      ; Move "bi" into rbx. Rbx is now equal to "bin/sh" written backwards.
    shl rbx, 16         ; Make 2 extra bytes of room in rbx
    mov bh, 0x2f        ; Move "/" into rbx. Rbx is now equal to "/bin/sh" written backwards.
    push rbx            ; Move the string "/bin/sh" onto the stack
    mov rdi, rsp        ; Get a pointer to the string "/bin/sh" in rdi
    add rdi, 1          ; Add one to rdi (because there is a NULL byte at the beginning)
    ; Make these values NULL
    xor rsi, rsi
    xor rdx, rdx
    ; Do the syscall
    syscall
I don't understand why calling p.interactive() doesn't spawn a shell. I am sending the same kind of payload that I would be sending if this was being done outside of pwntools. Why am I not getting a shell?
- Why I'm getting invalid attribute for applyToWebDAV
For the security of my website, I've been recommended to add the below to web.config
<system.webServer>
  <security>
    <requestFiltering allowDoubleEscaping="true">
      <verbs applyToWebDAV="false">
        <add verb="PUT" allowed="false" />
        <add verb="TRACE" allowed="false" />
        <add verb="DELETE" allowed="false" />
      </verbs>
    </requestFiltering>
  </security>
</system.webServer>
But when I build the app, I get the below warning:
The 'applyToWebDAV' attribute is not allowed
I'm wondering if I'm doing it wrong and if it will be in effect at all or it will be ignored.
- How do I find malware in my Wordpress Directory?
What are the most common hack files you would find in your WordPress directory? So far I have found huh.php, 365.php, le.php, back.zip, login.zip and a folder /pp with a bunch of fake php files. What else should I be looking for?
- How does HttpOnly cookie protect against XSS/Injection attack if they are passed automatically with every request?
From what I understand, HttpOnly cookies cannot be read by client-side JavaScript, but they are passed by the browser with any subsequent requests.
If an attacker is able to inject JavaScript into a web page and makes a request to the endpoint, it would still go through because all cookies are passed along, correct?
What's the point of HttpOnly cookies?
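For reference, HttpOnly is simply a flag the server adds to the Set-Cookie header; a minimal Python sketch of how such a header is built (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header carrying the HttpOnly flag.
cookie = SimpleCookie()
cookie["session"] = "abc123"          # hypothetical session token
cookie["session"]["httponly"] = True  # document.cookie will NOT expose this value
header = cookie["session"].OutputString()
print(header)  # session=abc123; HttpOnly
```

The flag only blocks reading the value from script, so the token itself can't be exfiltrated via XSS; as the question notes, the browser still attaches the cookie to requests, which is why HttpOnly alone does not prevent request-riding attacks.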
SVN::Fs - Subversion filesystem functions
SVN::Fs wraps the functions in svn_fs.h.
The actual namespace for filesystem objects is _p_svn_fs_t.

TODO - doesn't work, segfaults if $s is null, doesn't do anything if it's an empty string

See also SVN::Fs::contents_changed.

Clean up the transaction $txn_id, removing it completely from the filesystem $fs.

Return a string containing the unparsed form of the node or node revision id $id, which must be a _p_svn_fs_id_t object.
TODO - why isn't this a method of that object?
TODO - what can we do with the _p_svn_version_t value returned?
Return a new _p_svn_fs_access_t object representing $username. $username is presumed to have been authenticated by the caller.

Creates a new transaction in the repository, and returns a _p_svn_fs_txn_t object representing it. The new transaction's base revision will be $rev, which should be a number.

Generate a unique lock-token using $fs.
TODO - translate this to apply to Perl: This can be used in to populate lock->token before calling svn_fs_attach_lock().
The filesystem's current access context, as a _p_svn_fs_access_t object. Returns undef if no access context has been set with the set_access() method.

The UUID associated with $fs.

A reference to an array of all currently active transactions in the filesystem. Each one is a string containing the transaction's ID, suitable for passing to $fs->open_txn().

Get a transaction in the repository by name. Returns a _p_svn_fs_txn_t object.

The value of revision property $propname in revision $rev.

A hashref containing the names and values of all revision properties from revision $rev.

Associate an access context with an open filesystem. This method can be run multiple times on the same open filesystem, in order to change the filesystem access context for different filesystem operations. $access should be a _p_svn_fs_access_t object, or undef to disassociate the current access context from the filesystem.

Associate $uuid with $fs.
Return the number of the youngest revision in the filesystem. The oldest revision in any filesystem is numbered zero.
Kind of node at $path. A number which matches one of these constants: $SVN::Node::none, $SVN::Node::file, $SVN::Node::dir, $SVN::Node::unknown.

The filesystem to which $root belongs, as a _p_svn_fs_t object.

True if there is a node at $path which is a directory.

True if there is a node at $path which is a file.
True if the root comes from a revision (i.e., the contents have already been committed).
True if the root comes from a transaction.
TODO - _p_svn_fs_history_t
A reference to a hash indicating what changes are made in the root. The keys are the paths of the files changed, starting with / to indicate the top-level directory of the repository. The values are _p_svn_fs_path_change_t objects which contain information about what kind of changes are made.

Revision number of the revision the root comes from. For transaction roots, returns $SVN::Core::INVALID_REVNUM.
In list context, a list of two items: the path to the node whose history this is, and the revision number in which it exists. In scalar context returns only the revision number.
Abort the transaction. Any changes made in $txn are discarded, and the filesystem is left unchanged.

Note: This function first sets the state of $txn to 'dead', and then attempts to purge it and any related data from the filesystem. If some part of the cleanup process fails, $txn and some portion of its data may remain in the database after this function returns. Use $fs->purge_txn() to retry the transaction cleanup.
The transaction's base revision number.
Add, change, or remove a property from the transaction. If $value is undef then the property $name is removed, if it exists. Otherwise the property $name is set to the new value.

Full name of the revision, in the same format as can be passed to $fs->open_txn().

The value of the transaction's $name property.

A reference to a hash containing all the transaction's properties, keyed by name.

The root directory of the transaction, as a _p_svn_fs_root_t object.
my $access = SVN::Fs::create_access($username);
my $access = $fs->get_access;
$fs->set_access($access);
my $username = $access->get_username;
$access->add_lock_token($token);
Push a lock-token into the access context. The context remembers all tokens it receives, and makes them available to fs functions.
The username represented by the access context.
An object representing a directory entry. Values of this type are returned as the values in the hash returned by $root->dir_entries(). They are like svn_dirent_t objects, but have less information.
TODO
Node kind. A number which matches one of these constants: $SVN::Node::none, $SVN::Node::file, $SVN::Node::dir, $SVN::Node::unknown.
The filename of the directory entry.
The type of change made. A number which matches one of the following:
Content at path modified.
Path added in transaction.
Path removed in transaction.
Path removed and re-added in transaction.
Ignore all previous change items for path (internal-use only).
Node revision id of changed path. A _p_svn_fs_id_t object.
True if the properties were modified.
True if the text (content) was modified.
cc [ flag... ] file... -lcpc [ library... ]

The cpc_bind_event() function associates a performance counter context with the calling LWP. The context allows the system to virtualize the hardware counters to that specific LWP, and the counters are enabled.
Two flags are defined that can be passed into the routine to allow the behavior of the interface to be modified, as described below.
Counter values can be sampled at any time by calling cpc_take_sample(), and dereferencing the fields of the ce_pic[] array returned. The ce_hrt field contains the timestamp at which the kernel last sampled the counters.
To immediately remove the performance counter context on an LWP, the cpc_rele() interface should be used. Otherwise, the context will be destroyed after the LWP or process exits.
The caller should take steps to ensure that the counters are sampled often enough to avoid the 32-bit counters wrapping. The events most prone to wrap are those that count processor clock cycles. If such an event is of interest, sampling should occur frequently so that less than 4 billion clock cycles can occur between samples. Practically speaking, this is only likely to be a problem for otherwise idle systems, or when processes are bound to processors, since normal context switching behavior will otherwise hide this problem.
Upon successful completion, cpc_bind_event() and cpc_take_sample() return 0. Otherwise, these functions return -1, and set errno to indicate the error.
The cpc_bind_event() and cpc_take_sample() functions will fail if:
EACCES
EAGAIN
EINVAL
ENOTSUP
Prior to calling cpc_bind_event(), applications should call cpc_access(3CPC) to determine if the counters are accessible on the system.
Example 1 Use hardware performance counters to measure events in a process.
The example below shows how a standalone program can be instrumented with the libcpc routines to use hardware performance counters to measure events in a process. The program performs 20 iterations of a computation, measuring the counter values for each iteration. By default, the example makes the counters measure external cache references and external cache hits; these options are only appropriate for UltraSPARC processors. By setting the PERFEVENTS environment variable to other strings (a list of which can be gleaned from the -h flag of the cpustat or cputrack utilities), other events can be counted. The error() routine below is assumed to be a user-provided routine analogous to the

        ", iter,
            after.ce_pic[0] - before.ce_pic[0],
            after.ce_pic[1] - before.ce_pic[1]);
    }
    if (iter != 20)
        error("can't sample '%s': %s", setting, strerror(errno));
    free(setting);
    return (0);
}
Example 2 Write a signal handler to catch overflow signals.
This example builds on Example 1, but demonstrates how to write the signal handler to catch overflow signals. The counters are preset so that counter zero is 1000 counts short of overflowing, while counter one is set to zero. After 1000 counts on counter zero, the signal handler will be invoked.
First the signal handler:
#define PRESET0 (UINT64_MAX - UINT64_C(999))
#define PRESET1 0

void
emt_handler(int sig, siginfo_t *sip, void *arg)
{
    ucontext_t *uap = arg;
    cpc_event_t sample;

    if (sig != SIGEMT || sip->si_code != EMT_CPCOVF) {
        psignal(sig, "example");
        psiginfo(sip, "example");
        return;
    }
    (void) printf("lwp%d - si_addr %p ucontext: %%pc %p %%sp %p\n",
        _lwp_self(), (void *)sip->si_addr,
        (void *)uap->uc_mcontext.gregs[PC],
        (void *)uap->uc_mcontext.gregs[USP]);
    if (cpc_take_sample(&sample) == -1)
        error("can't sample: %s", strerror(errno));
    (void) printf("0x%" PRIx64 " 0x%" PRIx64 "\n",
        sample.ce_pic[0], sample.ce_pic[1]);

See cpc_strtoevent(3CPC) for the syntax.
The most obvious use for this facility is to ensure that the full 64-bit counter values are maintained without repeated sampling. However, current hardware does not record which counter overflowed. A more subtle use for this facility is to preset the counter to a value a little less than the maximum value, then use the resulting interrupt to catch the counter overflow associated with that event. The overflow can then be used as an indication of the frequency of the occurrence of that event.
Note that the interrupt generated by the processor may not be particularly precise. That is, the particular-INT32_MAX.
The appropriate preset value will often need to be determined experimentally. Typically, it will depend on the event being measured, as well as the desire to minimize the impact of the act of measurement on the event being measured; less frequent interrupts and samples lead to less perturbation of the system.
If the processor cannot detect counter overflow, this call will fail (ENOTSUP). Specifying a null event unbinds the context from the underlying LWP and disables signal delivery. Currently, only user events can be measured using this technique. See Example 2, above.
Previously I demonstrated how to use Oauth in an Ionic Framework 1 Android and iOS mobile application, but with Ionic 2 becoming all the rage, I figured my old guide needed a refresher.
Modern applications are always making use of APIs and data from third party services. The problem is, these remote services require a special kind of authentication to happen in order to work with the data they manage. The most common form of authentication for web services is Oauth.
In my Ionic Framework 1 tutorial I demonstrated Google Oauth, but this time we’re going to see how to use Facebook Oauth in an Ionic 2 application.
As you probably know already, Ionic 2 uses Angular. To best understand what we’re doing, it is best to make a fresh Ionic 2 application. From the Command Prompt (Windows) or Terminal (Mac and Linux), execute the following commands:
ionic start ExampleProject blank --v2
cd ExampleProject
ionic platform add ios
ionic platform add android
It is important to note that you must be using the Ionic CLI that supports Ionic 2. You must also be using a Mac if you wish to add and build for the iOS platform.
We’re going to be using a mobile web browser for the authentication flow. This means we’ll need to have the Apache Cordova InAppBrowser plugin installed. From your Terminal or Command Prompt with the project as your current directory, execute the following:
ionic plugin add cordova-plugin-inappbrowser
Before we start looking at code, we need to create a Facebook application. This can be done for free from the Facebook Developer Dashboard. During this creation process you want to make sure the Valid Oauth redirect URIs is set to. This setting can be found in the Facebook Dashboard’s Settings -> Advanced tab.
Take note of the App ID of your Facebook application. This is also known as a Client ID and it will be used within the Ionic 2 application.
With that out of the way, let’s code!
Open your project’s app/pages/home/home.html file and change it to look like the following:
<ion-header>
    <ion-navbar>
        <ion-title>Oauth Project</ion-title>
    </ion-navbar>
</ion-header>

<ion-content padding>
    <button (click)="login()">Facebook Login</button>
</ion-content>
The above snippet is our UI. It contains only one button which will start the authentication flow. This is done through a login() function that we're going to create now.
Open your project’s app/pages/home/home.ts file and change it to look like the following:
import { Component } from '@angular/core';
import { NavController, Platform } from 'ionic-angular';

declare var window: any;

@Component({
    templateUrl: 'build/pages/home/home.html'
})
export class HomePage {

    public constructor(public navCtrl: NavController, private platform: Platform) { }

    public login() { }

    public facebookLogin(): Promise<any> { }

}
There are two functions here. Since the authentication flow is an asynchronous process I figured it would be easiest to split it from our button call. In other words, login() will call the facebookLogin() function.
However, before going there, notice the line near the top that reads:
declare var window: any;
We are using an Apache Cordova plugin that has no TypeScript type definitions. If we don't declare this important object as any, we'll get errors during the build phase. Now we can look at the functions.
Starting with the login() function, add the following:
public login() {
    this.platform.ready().then(() => {
        this.facebookLogin().then(success => {
            alert(success.access_token);
        }, (error) => {
            alert(error);
        });
    });
}
In the above function we first make sure the app is ready. This is a requirement before trying to use Apache Cordova plugins. We know that facebookLogin() is asynchronous so we wait for a success or error event to happen and show a message appropriately.
This is where the complicated stuff comes in. Take a look at the following facebookLogin() function, then we'll break it down:
public facebookLogin(): Promise<any> {
    return new Promise(function(resolve, reject) {
        var browserRef = window.cordova.InAppBrowser.open("" + "CLIENT_ID_HERE" + "&redirect_uri=", "_blank", "location=no,clearsessioncache=yes,clearcache=yes");
        browserRef.addEventListener("loadstart", (event) => {
            if ((event.url).indexOf("") === 0) {
                browserRef.removeEventListener("exit", (event) => {});
                browserRef.close();
                var responseParameters = ((event.url).split("#")[1]).split("&");
                var parsedResponse = {};
                for (var i = 0; i < responseParameters.length; i++) {
                    parsedResponse[responseParameters[i].split("=")[0]] = responseParameters[i].split("=")[1];
                }
                if (parsedResponse["access_token"] !== undefined && parsedResponse["access_token"] !== null) {
                    resolve(parsedResponse);
                } else {
                    reject("Problem authenticating with Facebook");
                }
            }
        });
        browserRef.addEventListener("exit", function(event) {
            reject("The Facebook sign in flow was canceled");
        });
    });
}
First off, notice that we're wrapping the whole thing in a Promise. When we're finished and we know login was a success we'll call the resolve, and if there is an error or we bail out for any reason we'll call the reject.
Now notice the following line:
var browserRef = window.cordova.InAppBrowser.open("" + "CLIENT_ID_HERE" + "&redirect_uri=", "_blank", "location=no,clearsessioncache=yes,clearcache=yes");
We’re launching the InAppBrowser plugin and keeping reference of it so that way we can inspect it whenever we want. The URL launched is the required endpoint per the Facebook documentation. If you’re using a different provider, you’d change it as necessary.
Since we have the browser reference we can look at the current page every time it loads. This is done through the loadstart event like so:
browserRef.addEventListener("loadstart", (event) => {});
If the current page starts with the URL we set as our redirect URI in the Facebook Dashboard, it means we've finished signing in and our access token is attached. We just need to parse the token out now.
The access token exists in the URL so the remainder of our code is just parsing it out and returning it successfully as an object to the user.
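The parsing step itself is independent of Ionic; the same fragment-splitting logic the TypeScript loop performs can be sketched in a few lines of Python (the redirect URL below is a made-up example):

```python
from urllib.parse import urlparse, parse_qs

def parse_fragment(redirect_url: str) -> dict:
    """Extract the key/value pairs appended after the '#' in the redirect URL."""
    fragment = urlparse(redirect_url).fragment
    # parse_qs returns lists; flatten to single values like the TypeScript loop does
    return {k: v[0] for k, v in parse_qs(fragment).items()}

url = "https://example.com/callback#access_token=abc123&expires_in=5183979"
print(parse_fragment(url))  # {'access_token': 'abc123', 'expires_in': '5183979'}
```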
Above are some screenshots of what the Facebook Oauth flow might look like in the app we just made.
We just saw how to initiate a Facebook Oauth authentication flow in an Ionic 2 Android and iOS mobile application. It is a step up from our previous Ionic Framework 1 example. We saw how to use the Apache Cordova InAppBrowser plugin and monitor its events to process Facebook login which is an implicit grant flow. In all honesty, you can just use the ng2-cordova-oauth plugin I made to accomplish this stuff, but I’ll have another article on that subject. | https://www.thepolyglotdeveloper.com/2016/01/using-an-oauth-2-0-service-within-an-ionic-2-mobile-app/ | CC-MAIN-2018-51 | refinedweb | 1,053 | 55.84 |
Configuring remote_write with Helm and kube-prometheus-stack
In this guide you’ll learn how to configure Prometheus’s
remote_write feature to ship cluster metrics to Grafana Cloud.
This guide assumes you have installed kube-prometheus-stack in your Kubernetes cluster using the Helm package manager. To learn how to install Helm on your local machine, please see Install Helm from the Helm documentation. To learn how to install kube-prometheus-stack, please see Install Chart from the kube-prometheus-stack GitHub repo.
The kube-prometheus-stack Helm chart installs the kube-prometheus stack. Prometheus Operator is a sub-component of the kube-prometheus stack.
If you did not use Helm to install kube-prometheus, please see Configuring remote_write with Prometheus Operator.

If you deployed your monitoring stack in a namespace other than default, adjust the namespace accordingly in the commands that follow. In the next step you will modify Prometheus's configuration using a Helm values file.
Step 2 — Create a Helm values file with Prometheus remote_write configuration
In this step we’ll create a Helm values file to define parameters for Prometheus’s remote_write configuration. A Helm values file allows you to set configuration variables that are passed in to Helm’s object templates. To see the default values file for kube-prometheus-stack, consult values.yaml from the kube-prometheus-stack GitHub repository.
We’ll first create a values.yaml file defining Prometheus’s remote_write configuration, and then apply the new configuration to kube-prometheus-stack.
Open a file named new_values.yaml in your favorite editor. Paste in the following values:
prometheus:
  prometheusSpec:
    remoteWrite:
    - url: "<Your Metrics instance remote_write endpoint>"
      basicAuth:
        username:
          name: kubepromsecret
          key: username
        password:
          name: kubepromsecret
          key: password
Here we set the remote_write URL and basic_auth username and password using the Secret created in the previous step.
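The Secret-creation step is not reproduced above; for reference, a Secret matching the name and keys the values file expects (kubepromsecret, with username and password keys) might look like the following — the credential values are placeholders for your Grafana Cloud metrics username and API key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kubepromsecret
  namespace: default
stringData:
  username: "<your Grafana Cloud metrics username>"
  password: "<your Grafana Cloud API key>"
```

Apply it with kubectl apply -f before rolling out the Helm upgrade below.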
When you’re done editing the file, save and close it.
Roll out the changes using helm upgrade -f:
helm upgrade -f new_values.yaml [your_release_name] prometheus-community/kube-prometheus-stack
Replace [your_release_name] with the name of the release you used to install kube-prometheus-stack. You can get a list of installed releases using helm list.
After running helm upgrade, you should see the following output:
Release "your_release_name" has been upgraded. Happy Helming! NAME: your_release_name LAST DEPLOYED: Mon Dec 7 17:29:03 2020 NAMESPACE: default STATUS: deployed REVISION: 2 NOTES: kube-prometheus-stack has been installed. Check its status by running: kubectl --namespace default get pods -l "release=your_release_name" Visit for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
At this point, you've successfully configured your Prometheus instances to remote_write scraped metrics to Grafana Cloud. You can verify that your changes have propagated to your running Prometheus instances using port-forward:
kubectl --namespace default port-forward svc/<your_release_name>-kube-prometheus-sta-prometheus 9090
Replace namespace with the appropriate namespace, and <your_release_name> with the Helm release.
Suppose you are creating an account on Geekbook. You want a cool username, so you enter it and get the message "Username is already taken". You add your birth date to the username, still no luck. Now you also add your university roll number, and still get "Username is already taken". It's really frustrating, isn't it?
But have you ever thought about how quickly Geekbook checks the availability of a username by searching through the millions of usernames registered with it? There are many ways to do this job –
- Linear search : Bad idea!
- Binary Search : Store all usernames alphabetically and compare the entered username with the middle one in the list. If it matches, the username is taken. Otherwise, figure out whether the entered username comes before or after the middle one; if it comes after, neglect all the usernames before the middle one (inclusive) and repeat this process on the remaining half until you get a match or the search ends with no match. This technique is better and promising, but it still requires multiple steps.
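As a rough sketch, the binary-search approach described above looks like this in Python, using the standard-library bisect module on a sorted list of usernames (the usernames are made up):

```python
import bisect

def username_taken(sorted_usernames: list, username: str) -> bool:
    """Binary search: O(log n) comparisons on a sorted list."""
    i = bisect.bisect_left(sorted_usernames, username)
    return i < len(sorted_usernames) and sorted_usernames[i] == username

usernames = sorted(["geek_01", "nerd42", "coder_kate"])
print(username_taken(usernames, "nerd42"))     # True
print(username_taken(usernames, "free_name"))  # False
```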
But, There must be something better!!
Bloom Filter is a data structure that can do this job.
For understanding bloom filters, you must know what hashing is. A hash function takes an input and outputs a fixed-length identifier that is used to identify the input.
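For example, whatever the size of the input, a hash function such as SHA-256 always emits a digest of the same fixed length (SHA-256 is just a convenient built-in for illustration; as discussed later, bloom filters usually prefer faster non-cryptographic hashes):

```python
import hashlib

for text in ["geeks", "a much longer input string than the first one"]:
    digest = hashlib.sha256(text.encode()).hexdigest()
    print(len(digest))  # 64 hex characters, regardless of input size
```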
What is Bloom Filter?
A Bloom filter is a space-efficient probabilistic data structure that is used to test whether an element is a member of a set. For example, checking the availability of a username is a set membership problem, where the set is the list of all registered usernames. The price we pay for efficiency is that it is probabilistic in nature: there might be some false positive results. A false positive means it might tell you that a given username is already taken when actually it is not.
Interesting Properties of Bloom Filters
- Unlike a standard hash table, a Bloom filter of a fixed size can represent a set with an arbitrarily large number of elements.
- Adding an element never fails. However, the false positive rate increases steadily as elements are added until all bits in the filter are set to 1, at which point all queries yield a positive result.
- Bloom filters never generate a false negative result, i.e., they will never tell you that a username doesn't exist when it actually exists.
- Deleting elements from the filter is not possible, because if we delete a single element by clearing the bits at the indices generated by the k hash functions, it might cause deletion of a few other elements as well. Example – if we delete "geeks" (in the given example below) by clearing the bits at 1, 4 and 7, we might end up deleting "nerd" also, because the bit at index 4 becomes 0 and the bloom filter then claims that "nerd" is not present.
Working of Bloom Filter
An empty bloom filter is a bit array of m bits, all set to zero, like this –
We need k number of hash functions to calculate the hashes for a given input. When we want to add an item in the filter, the bits at k indices h1(x), h2(x), … hk(x) are set, where indices are calculated using hash functions.
Example – Suppose we want to enter "geeks" in the filter, using 3 hash functions and a bit array of length 10, all set to 0 initially. First we'll calculate the hashes as follows:
h1(“geeks”) % 10 = 1 h2(“geeks”) % 10 = 4 h3(“geeks”) % 10 = 7
Note: These outputs are random for explanation only.
Now we will set the bits at indices 1, 4 and 7 to 1
Again we want to enter “nerd”, similarly we’ll calculate hashes
h1(“nerd”) % 10 = 3 h2(“nerd”) % 10 = 5 h3(“nerd”) % 10 = 4
Set the bits at indices 3, 5 and 4 to 1
Now if we want to check “geeks” is present in filter or not. We’ll do the same process but this time in reverse order. We calculate respective hashes using h1, h2 and h3 and check if all these indices are set to 1 in the bit array. If all the bits are set then we can say that “geeks” is probably present. If any of the bit at these indices are 0 then “geeks” is definitely not present.
False Positive in Bloom Filters
The question is why we said “probably present”, why this uncertainty. Let’s understand this with an example. Suppose we want to check whether “cat” is present or not. We’ll calculate hashes using h1, h2 and h3
h1(“cat”) % 10 = 1 h2(“cat”) % 10 = 3 h3(“cat”) % 10 = 7
If we check the bit array, the bits at these indices are set to 1, but we know that "cat" was never added to the filter. The bits at indices 1 and 7 were set when we added "geeks", and bit 3 was set when we added "nerd".
So, because the bits at the calculated indices are already set by some other items, the bloom filter erroneously claims that "cat" is present, generating a false positive result. Depending on the application, this could be a huge downside or relatively okay.
We can control the probability of getting a false positive by controlling the size of the Bloom filter. More space means fewer false positives. If we want to decrease the probability of a false positive result, we have to use more hash functions and a larger bit array. This adds latency to the addition of items and to membership checks.
Probability of False positivity: If m is the size of the bit array, k is the number of hash functions and n is the number of expected elements to be inserted, then the probability of a false positive P can be calculated as:

P = (1 - [1 - 1/m]^(kn))^k ≈ (1 - e^(-kn/m))^k
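Plugging in the numbers used in the test program below (n = 20 items, which the sizing formulas turn into a 124-bit array and 4 hash functions), the approximation gives back roughly the 5% target:

```python
import math

def false_positive_prob(m: int, k: int, n: int) -> float:
    """(1 - e^(-kn/m))^k — probability that all k bits of an absent item are already set."""
    return (1 - math.exp(-k * n / m)) ** k

p = false_positive_prob(m=124, k=4, n=20)
print(round(p, 3))  # ~0.051, close to the 0.05 the filter was sized for
```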
Space Efficiency
If we want to store a large list of items in a set for the purpose of set membership, we can store it in a hashmap, a trie, or a simple array or linked list. All these methods require storing the item itself, which is not very memory efficient. For example, if we want to store "geeks" in a hashmap we have to store the actual string "geeks" as a key value pair {some_key : "geeks"}.
Bloom filters do not store the data item at all. As we have seen, they use a bit array which allows hash collisions. Without hash collisions, it would not be compact.
Choice of Hash Function
The hash functions used in bloom filters should be independent and uniformly distributed. They should also be as fast as possible. Fast, simple, non-cryptographic hashes which are independent enough include murmur, the FNV series of hash functions, and Jenkins hashes.
Generating hashes is the major operation in bloom filters. Cryptographic hash functions provide stability and guarantees but are expensive to calculate. With an increase in the number of hash functions k, the bloom filter becomes slower. Although non-cryptographic hash functions do not provide the same guarantees, they offer a major performance improvement.
Basic implementation of Bloom Filter class in Python3. Save it as bloomfilter.py
# Python 3 program to build Bloom Filter
# Install the mmh3 and bitarray 3rd party modules first
# pip install mmh3
# pip install bitarray
import math
import mmh3
from bitarray import bitarray


class BloomFilter(object):

    '''
    Class for Bloom filter, using murmur3 hash function
    '''

    def __init__(self, items_count, fp_prob):
        '''
        items_count : int
            Number of items expected to be stored in bloom filter
        fp_prob : float
            False Positive probability in decimal
        '''
        # False positive probability in decimal
        self.fp_prob = fp_prob

        # Size of bit array to use
        self.size = self.get_size(items_count, fp_prob)

        # number of hash functions to use
        self.hash_count = self.get_hash_count(self.size, items_count)

        # Bit array of given size
        self.bit_array = bitarray(self.size)

        # initialize all bits as 0
        self.bit_array.setall(0)

    def add(self, item):
        '''
        Add an item in the filter
        '''
        digests = []
        for i in range(self.hash_count):
            # create digest for given item.
            # i works as the seed to the mmh3.hash() function;
            # with a different seed, the digest created is different
            digest = mmh3.hash(item, i) % self.size
            digests.append(digest)

            # set the bit True in bit_array
            self.bit_array[digest] = True

    def check(self, item):
        '''
        Check for existence of an item in filter
        '''
        for i in range(self.hash_count):
            digest = mmh3.hash(item, i) % self.size
            if self.bit_array[digest] == False:
                # if any of the bits is False then it's not present
                # in the filter; else there is a probability that it exists
                return False
        return True

    @classmethod
    def get_size(self, n, p):
        '''
        Return the size of bit array(m) to be used, using the
        following formula:
        m = -(n * lg(p)) / (lg(2)^2)
        n : int
            number of items expected to be stored in filter
        p : float
            False Positive probability in decimal
        '''
        m = -(n * math.log(p)) / (math.log(2) ** 2)
        return int(m)

    @classmethod
    def get_hash_count(self, m, n):
        '''
        Return the number of hash functions(k) to be used, using the
        following formula:
        k = (m/n) * lg(2)

        m : int
            size of bit array
        n : int
            number of items expected to be stored in filter
        '''
        k = (m / n) * math.log(2)
        return int(k)
Let's test the Bloom filter. Save the code above as bloomfilter.py and the following test as bloom_test.py:
```python
from random import shuffle

from bloomfilter import BloomFilter

n = 20      # number of items to add
p = 0.05    # false positive probability

bloomf = BloomFilter(n, p)
print("Size of bit array:{}".format(bloomf.size))
print("False positive Probability:{}".format(bloomf.fp_prob))
print("Number of hash functions:{}".format(bloomf.hash_count))

# Words to be added
word_present = ['abound', 'abounds', 'abundance', 'abundant', 'accessable',
                'bloom', 'blossom', 'bolster', 'bonny', 'bonus', 'bonuses',
                'coherent', 'cohesive', 'colorful', 'comely', 'comfort',
                'gems', 'generosity', 'generous', 'generously', 'genial']

# Words not added
word_absent = ['bluff', 'cheater', 'hate', 'war', 'humanity',
               'racism', 'hurt', 'nuke', 'gloomy', 'facebook',
               'geeksforgeeks', 'twitter']

for item in word_present:
    bloomf.add(item)

shuffle(word_present)
shuffle(word_absent)

test_words = word_present[:10] + word_absent
shuffle(test_words)

for word in test_words:
    if bloomf.check(word):
        if word in word_absent:
            print("'{}' is a false positive!".format(word))
        else:
            print("'{}' is probably present!".format(word))
    else:
        print("'{}' is definitely not present!".format(word))
```
Output
```
Size of bit array:124
False positive Probability:0.05
Number of hash functions:4
'war' is definitely not present!
'gloomy' is definitely not present!
'humanity' is definitely not present!
'abundant' is probably present!
'bloom' is probably present!
'coherent' is probably present!
'cohesive' is probably present!
'bluff' is definitely not present!
'bolster' is probably present!
'hate' is definitely not present!
'racism' is definitely not present!
'bonus' is probably present!
'abounds' is probably present!
'genial' is probably present!
'geeksforgeeks' is definitely not present!
'nuke' is definitely not present!
'hurt' is definitely not present!
'twitter' is a false positive!
'cheater' is definitely not present!
'generosity' is probably present!
'facebook' is definitely not present!
'abundance' is probably present!
```
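The first three output lines can be checked by hand against the sizing formulas from the class above. A small, dependency-free sketch (the function names here are our own, for illustration):

```python
import math

def bloom_size(n, p):
    # m = -(n * ln(p)) / (ln(2)^2)
    return int(-(n * math.log(p)) / (math.log(2) ** 2))

def bloom_hash_count(m, n):
    # k = (m / n) * ln(2)
    return int((m / n) * math.log(2))

m = bloom_size(20, 0.05)      # 124, matching "Size of bit array" above
k = bloom_hash_count(m, 20)   # 4, matching "Number of hash functions" above
print(m, k)                   # prints: 124 4
```

For n = 20 items at p = 0.05, ln(0.05) ≈ -2.996, so m ≈ 59.91 / 0.4805 ≈ 124.7, truncated to 124; then k = 6.2 × ln(2) ≈ 4.3, truncated to 4.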
Applications of Bloom filters
- Medium uses Bloom filters to recommend posts to users by filtering out posts a user has already seen.
- Quora implemented a shared Bloom filter in the feed backend to filter out stories that people have seen before.
- The Google Chrome web browser used to use a Bloom filter to identify malicious URLs.
- Google BigTable, Apache HBase, Apache Cassandra, and PostgreSQL use Bloom filters to reduce disk lookups for non-existent rows or columns.
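All of these applications rely only on the two operations shown earlier: add and check. If the mmh3 and bitarray dependencies are unavailable, the same idea can be sketched with the standard library alone. This variant (the SimpleBloom name and the double-hashing scheme over SHA-256 are our own illustration, not from the implementation above) derives k indexes from a single digest:

```python
import hashlib

class SimpleBloom:
    """Minimal Bloom filter sketch using double hashing over SHA-256,
    a stdlib-only stand-in for the mmh3-based version."""

    def __init__(self, size, hash_count):
        self.size = size
        self.hash_count = hash_count
        self.bits = [False] * size

    def _indexes(self, item):
        # Double hashing: index_i = (h1 + i * h2) mod m,
        # with h1 and h2 taken from one SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # force odd step
        return [(h1 + i * h2) % self.size for i in range(self.hash_count)]

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def check(self, item):
        # All k bits set -> probably present; any bit unset -> definitely absent
        return all(self.bits[idx] for idx in self._indexes(item))

# A Bloom filter never produces false negatives:
bf = SimpleBloom(124, 4)
for w in ['bloom', 'blossom', 'bolster']:
    bf.add(w)
print(all(bf.check(w) for w in ['bloom', 'blossom', 'bolster']))  # prints: True
```

Double hashing trades a little independence between the k hash functions for a single digest computation per item, which is a common practical compromise.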