getrusage(2) — Linux man page (excerpt)

#include <sys/time.h>
#include <sys/resource.h>

int getrusage(int who, struct rusage *usage);

... process exceeded its time slice.

On success, zero is returned. On error, -1 is returned, and errno is set appropriately.

EFAULT  usage points outside the accessible address space.
EINVAL  who is invalid.

If the disposition of SIGCHLD is set to SIG_IGN, then the resource usages of child processes are automatically included in the value returned by RUSAGE_CHILDREN, although POSIX.1-2001 explicitly prohibits this. (This nonconformance is rectified.)

See also: clock_gettime(2), getrlimit(2), times(2), wait(2), wait4(2), clock(3)
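The same counters are exposed in Python's standard library, whose resource module wraps getrusage(2); a quick way to inspect the struct rusage fields the man page describes (Unix-only):

```python
import resource

# RUSAGE_SELF reports usage for the calling process;
# RUSAGE_CHILDREN would report waited-for children instead.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(usage.ru_utime)   # user CPU time consumed, in seconds
print(usage.ru_stime)   # system CPU time consumed, in seconds
print(usage.ru_maxrss)  # peak resident set size (kilobytes on Linux)
```

An EINVAL who value raises ValueError in Python rather than returning -1.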
http://www.linuxguruz.com/man-pages/vtimes/
The UWP Community Toolkit v2.0

Today, the UWP Community Toolkit graduates to version 2.0 and sets the stage for future releases. There have been seven releases since the UWP Community Toolkit was first introduced exactly one year ago, and version 2.0 is the first major and largest update to date. The developer community has worked enthusiastically to build something that is used by thousands of developers every month. Today, there are over 100 contributors, and developers have downloaded the packages over 250,000 times. This would not be possible without the strength of the community – Thank You!

For developers and designers alike

Beginning with the v2.0 release, the UWP Community Toolkit is making efforts to align with the latest Windows 10 Fall Creators Update to enable developers to take advantage of the new APIs and the new Fluent Design System. The Fluent Design System defines several foundational elements that will make new designs perform beautifully across devices, inputs and dimensions. To prepare for the general availability of the Fall Creators Update later this year, the community has committed to updating all UWP Community Toolkit controls to adopt Fluent Design. Over the coming months, new and existing controls will be updated to support light, depth, material, motion and scale. The sample app will also be updated to take full advantage of the new foundational elements to demonstrate what is possible.

Updating the Sample App

The UWP Community Toolkit Sample App showcases toolkit features for developers by providing tools to get started using the toolkit in their apps, and it continues to get better. In the largest update since the initial release, developers can now edit XAML directly in the sample app and instantaneously view the results side by side. This is a very powerful addition that allows developers to get started with development immediately by simply downloading the app from the store. But that's not all.
Taking inspiration from the Fall Creators Update, the sample app has been updated to use an improved and redesigned navigation model. The navigation has moved to the top and it's now much easier to get to any sample. In addition, a new landing page has been added to make it easier to find what is new and keep track of favorite samples.

Beyond UWP

The UWP Community Toolkit has received feedback about the importance of supporting cross-platform development to enable developers to share more of their code across platforms. Version 2.0 introduces two new packages, Microsoft.Toolkit and Microsoft.Toolkit.Services, with the commitment to support more cross-platform APIs in future releases. These packages are built with .NET Standard and support any platform with .NET Standard 1.4 and above. The Bing Service is the first API to go cross-platform, and there is currently work underway to move more services to the new packages.

What else is new?

As with every release, the community has worked together to share their ideas, build new controls and helper libraries, and improve the UWP Community Toolkit for everyone. This release is no different. There are several large additions and updates to highlight here, but make sure to visit our release notes for all additions and improvements.

This is just the start

We learned a lot in the past year, and the community has worked together to make toolkit APIs as easy and flexible as possible. A few APIs and packages have been restructured to make them more convenient for developers and to allow more flexibility for future additions and updates. For example, the Microsoft.Toolkit.UWP.Connectivity package was added to unify all connectivity APIs such as Bluetooth and networking. Likewise, all extensions and helpers are now unified under a single namespace and are consistent across APIs. Windows Store. If you would like to contribute, please join us on GitHub! To join the conversation on Twitter, use the #uwptoolkit hashtag.
Updated August 30, 2017 10:01 am

Join the conversation

These are beautiful!!!! Great work guys, but please fully embrace React Native. It's where developers are now, and the reality is devs are targeting iOS and Android, and React Native makes it a no-brainer to support other smaller user bases like the UWP community. I know there is currently a React Native implementation, but it would be great if MS put the full weight of its resources behind it. Perhaps a partnership with Facebook like you did with Angular 2 and Google. Love Fluent Design, now allow us JavaScript devs to fully use it.
https://blogs.windows.com/buildingapps/2017/08/30/uwp-community-toolkit-v2-0/
This is the declaration of the Collision class. More...

#include <Collision.h>

This is the declaration of the Collision class. It contains all Particles produced in the generation of a collision between two particles in an Event. The particles are divided into Steps corresponding to the particles present after a given step in the event generation. The collision also carries information about the SubProcesses in the collision.

Particle Step SubProcesses

Definition at line 34 of file Collision.h.

tEventPtr() tcEventBasePtr()

The standard constructor takes a pair of incoming particles as argument. Optionally, it can be given a pointer to the Event to which this Collision belongs, and a pointer to the EventHandler which produced this collision. Definition at line 59 of file Collision.h. References addParticle(), addStep(), incoming(), newStep(), and ~Collision().

Clone this Collision. This also makes clones of all steps, sub-processes and particles in this Collision. Referenced by all().

Return a pointer to the Event to which this Collision belongs. May be the null pointer. Definition at line 96 of file Collision.h. References select(), and theEvent.

Extract all final state particles in this Collision. Definition at line 125 of file Collision.h. References selectFinalState().

Return the set of remnants in this collision. Remnants are defined as the daughters of the incoming particles which are not incoming particles to any SubProcess or children thereof which are present in the final state. Referenced by incoming(), and isRemnant().

Return a pointer to the EventHandler which produced this Collision. Definition at line 90 of file Collision.h. References theHandler.

Standard Init function. Referenced by m2().

Return true if the given particle is a remnant of the colliding particles. Calls the getRemnants method, so to check several particles it is better to call getRemnants directly and check whether the particles are members of the resulting set by hand.
Definition at line 193 of file Collision.h. References getRemnants(), and ThePEG::member().

Create a new step in this collision, which is a copy of the last step (if any), and return a pointer to it. Referenced by Collision().

Return a pointer to the primary SubProcess in this Collision. Definition at line 135 of file Collision.h. References subProcesses().

Rebind to cloned objects. When a Collision is cloned, a shallow copy is done first, then all Particles etc. are cloned, and finally this method is used to see that the pointers in the cloned Collision point to the cloned Particles etc.

Remove (recursively) the given Particle from the Collision. If this was the last daughter of the mother Particle, the latter is added to the list of final state particles.

Extract particles from this Collision which satisfy the requirements given by an object of the SelectorBase class. Referenced by event(), and selectFinalState().

Definition at line 117 of file Collision.h. References select(). Referenced by getFinalState().

Most of the Event classes are friends with each other. Definition at line 43 of file Collision.h.

Output to a standard ostream. Definition at line 45 of file Collision.h.

A vector of all sub-processes in this Collision. The front element points to the primary sub-process. Definition at line 332 of file Collision.h. Referenced by subProcesses().
https://thepeg.hepforge.org/doxygen/classThePEG_1_1Collision.html
Andrew Koenig is a former AT&T researcher and programmer. He is author of C Traps and Pitfalls and, with Barbara, coauthor of Ruminations on C++ and Accelerated C++. Barbara Moo is an independent consultant with 20 years of experience in the software field.

C++ offers two kinds of polymorphism: runtime polymorphism, which is based on virtual functions and is the foundation of object-oriented programming, and compile-time polymorphism, which is based on templates and is the foundation of generic programming. When we wish to select from a set of classes at runtime, C++ requires that those classes be related by inheritance. When we wish to select from a set of types at compile time, the relationship between those types is more subtle. The types need be related only indirectly, and only by their behavior.

The C++ community does not have a generally accepted term for this kind of behavior-based relationship between types. Accordingly, people first learning about C++ generic programming are tempted to think that inheritance is involved somehow, just as it is for object-oriented programming. For example, on several occasions we have seen questions such as "Why isn't a bidirectional iterator derived from a forward iterator?" A student who asks that question has probably already formed a significant misconception about how templates deal with types. One way to avoid such misconceptions is to adopt a term for the kind of type relationships that we find in generic programs. Giving names to concepts often makes the concepts easier to understand and remember. The C++ community does not appear to have such a term, so we would like to borrow a term from the Python community and call such relationships "duck typing." The idea, of course, is that if it looks like a duck, walks like a duck, and quacks like a duck, then it's a duck.

Examples

Suppose you have two classes related by inheritance:

class Employee { /* ... */ };
class Manager: public Employee { /* ... */ };

and another class with a member that accepts one of these classes as an argument:

class Payroll_handler {
public:
    // ...
    void generate_paycheck(Employee&);
    // ...
};

You also have an object that represents a manager and another that represents a payroll handler:

Manager m;
Payroll_handler p;

Then you expect to be able to generate m's paycheck by executing p.generate_paycheck(m); even though the generate_paycheck function expects an Employee, rather than a Manager. Why? Because Manager has Employee as a base class, so a Manager "is-a" Employee. In other words, you know that a function that expects an Employee& will accept a Manager argument because of inheritance.

Now consider this example:

int x[100];
std::fill(x, x+100, 42);

The call to std::fill sets all the elements of x to 42. If you look at the definition of fill, you find that it expects its first two arguments to be forward iterators. You know that x and x+100 are forward iterators because... why? Unlike the case with virtual functions, you can tell that x and x+100 are forward iterators only by looking at their behavior in context. In particular, you need to know not only that x is a pointer, but also that it points to an element of an array. Unless a pointer points to an array element, you cannot meaningfully apply ++ to it, an operation that is required of every forward iterator. In other words, if x looks like a forward iterator, it is a forward iterator, regardless of the type that x actually has. Claiming that x is a forward iterator is a prime example of duck typing. As another example, when the description of a container says that the container's elements must be assignable and copy constructible, that description is using duck typing. It doesn't care what the types actually are; it cares only that they support particular operations. It is not always even necessary for a type to support specific operations in a specific way to be considered a particular kind of duck.
For example, for an object to be considered an output iterator, it is required to support the ++ and = operations only in a very restricted form. The ostream_iterator classes meet this requirement by making ++ do nothing at all! As another example, consider the accumulate function from the Standard Library. If you call accumulate(p, q, x), the accumulate function initializes a local variable to be a copy of x. Let's call that variable acc. After initializing acc, the accumulate function looks at each iterator it in the range [p, q) and effectively executes the statement: acc = acc + *it; This execution might take place in more than one way, depending on the types of acc and *it. For example, acc could be of a type that has an operator+ member. Alternatively, there could be an operator+ defined separately that accepts, as arguments, values of the types of acc and *it. The specification of accumulate doesn't care; all it requires is that acc and *it quack in the right dialect. Useful Ducks Python takes advantage of duck typing in contexts that C++ programmers may find surprising. For example, the normal Python way of printing the value of an expression on the standard output stream is: print "Hello, world!" By default, the destination is the standard output stream and the output is followed by a newline. If you want to print the same message on the standard error stream, you do so this way: import sys print >>sys.stderr, "Hello, world" So far, these examples don't look much different from their C++ counterparts: std::cout << "Hello, world!\n"; and std::cerr << "Hello, world!\n"; The difference is that in C++, std::cout and std::cerr are objects with << members that, in turn, accept string literals. In Python, what follows the >> is an object of any type that happens to have a method named write. Suppose that for some reason, you want to make it easy to write the same text on both the standard output and standard error files. 
Doing so in Python is easy: import sys class DualWriter: def write(self, x): sys.stdout.write(x) sys.stderr.write(x) Now you can create a DualWriter object: dual = DualWriter() and then whenever you execute print >>dual, x the value of x appears on both the standard output and standard error streams. Because you know that the >> mechanism assumes only the existence of the write method, you could define a tiny class that >> would accept because of duck typing. Suppose we wanted to do something similar in C++. It might appear at first to be impossible, because << is a member of the ostream library classes, and you cannot easily define such a class of your own. However, when you write an expression such as: dual << "Hello, world!\n" in C++, you don't actually require dual to be a member of the ostream hierarchy. It suffices for our purposes that dual support a << member that can handle the right types. What are the right types? Whatever it takes to make our class look like a duck. Here's a start: class DualWriter { public: DualWriter(std::ostream& s1, std::ostream& s2): s1(s1), s2(s2) { } template<class T> DualWriter& operator<<(const T& t) { s1 << t; s2 << t; return *this; } private: std::ostream& s1; std::ostream& s2; }; We have defined a tiny class named DualWriter that encapsulates references to two output streams. When you construct a DualWriter object, you say what those streams are. The only other work that a DualWriter object will do is to implement a << operator that takes a (const reference to an) object of any type and calls each ostream's << operator with that object. In effect, you're saying that as far as the << operation is concerned, a DualWriter is the same kind of duck as an ostream, whatever kind of duck that might be. Of course, you can extend this class to support other operations as needed. However, it is useful even in its current sketchy form: DualWriter dual(std::cout, std::cerr); dual << "Hello, world!\n"; will say Hello, world! 
on both the standard output and standard error streams. Discussion The distinction in C++ between duck typing and inheritance comes from C++'s static type system and is part of the price we pay for having C++ programs run as quickly as they do. Runtime duck typing is expensive, so C++ doesn't support it. When a C++ program executes obj.f(x), and obj is a reference to a base class with a virtual function f, C++'s inheritance requirements ensure that obj actually refers to an object that has a member function named f, and that function's return type has the same internal representation, regardless of which derived class f is actually called. In contrast, compile-time duck typing doesn't cost anything during runtime. Indeed, it is duck typing that makes it possible for the C++ library to define a single vector template that allows vector<T> for any suitable type T, rather than requiring T to be derived from a class such as vector_element. The standard containers require their element types to be "assignable" and "copy constructible," but those notions are just ways of describing particular kinds of ducks. It is these notions' lack of inheritance requirements that lets us use types such as vector<int>, even though int is not part of any inheritance hierarchy.
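The article's Python examples use Python 2's print statement. As a sketch of mine (not the authors' code), here is roughly the same DualWriter duck in modern Python 3, where print(..., file=obj) requires only that obj have a write method, with no inheritance from any file class:

```python
import io

class DualWriter:
    """Quacks like a file: print() only requires a write() method."""
    def __init__(self, s1, s2):
        self.s1, self.s2 = s1, s2

    def write(self, text):
        # Forward the text to both underlying streams.
        self.s1.write(text)
        self.s2.write(text)

out, err = io.StringIO(), io.StringIO()
dual = DualWriter(out, err)
print("Hello, world!", file=dual)  # writes to both streams
```

As in the C++ version, DualWriter is unrelated by inheritance to any stream type; it merely supports the one operation print needs.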
http://www.drdobbs.com/templates-and-duck-typing/184401971
24 July 2009 11:11 [Source: ICIS news] LONDON (ICIS news)--More potassium chloride (MOP) supply contracts have been settled in India this week, with 3.2m tonnes booked by Indian buyers under 2009-2010 contracts, market sources said late on Thursday. This week, Israel Chemicals Ltd (ICL) Fertilizers, German producer Kali and Salz (K+S), North American marketer Canpotex and Belarusian Potash Co (BPC) agreed to supply Indian Potash Ltd (IPL) at $460/tonne (€322/tonne) CFR (cost and freight), including 180 days' credit. The price, which was the same as that agreed last week by International Potash Co (IPC), represented a decline of $165/tonne from the previous contract. Shipment dates for all suppliers will be between July 2009 and March 2010. ICL agreed to supply 750,000 tonnes, BPC 650,000 tonnes and K+S 100,000 tonnes. Canpotex agreed to supply 850,000 tonnes to both private and public sector buyers. Last week, IPC agreed to supply IPL 850,000 tonnes. APC has yet to officially confirm an agreement, but sources said they expect it to do so imminently.
http://www.icis.com/Articles/2009/07/24/9234791/india-mop-contract-settlements-see-3.2m-tonnes-booked-so.html
Coloured terminal output for Python's logging module

Project description

NOTE: This is a parody of the great python-colouredlogs project by Peter Odding. Please use that package as I currently have no plans to maintain this one. All I've done is s/color/colour/g.

The colouredlogs package enables coloured terminal output for Python's logging module. The ColouredFormatter class inherits from logging.Formatter and uses ANSI escape sequences to render your logging messages in colour. It uses only standard colours so it should work on any UNIX terminal. It's currently tested on Python 2.6, 2.7, 3.4, 3.5, 3.6 and PyPy. On Windows colouredlogs automatically pulls in Colourama as a dependency and enables ANSI escape sequence translation using Colourama. Here is a screen shot of the demo that is printed when the command colouredlogs --demo is executed:

Note that the screenshot above includes custom logging levels defined by my verboselogs package: if you install both colouredlogs and verboselogs it will Just Work (verboselogs is of course not required to use colouredlogs).

Installation

The colouredlogs package is available on PyPI which means installation should be as simple as:

$ pip install colouredlogs

Here's an example of how easy it is to get started:

import colouredlogs, logging

# Create a logger object.
logger = logging.getLogger(__name__)

# By default the install() function installs a handler on the root logger,
# this means that log messages from your code and log messages from the
# libraries that you use will all show up on the terminal.
colouredlogs.install(level='DEBUG')

# If you don't want to see log messages from libraries, you can pass a
# specific logger object to the install() function. In this case only log
# messages originating from that logger will show up on the terminal.
colouredlogs.install(level='DEBUG', logger=logger)

# Some examples.
logger.debug("this is a debugging message")
logger.info("this is an informational message")
logger.warning("this is a warning message")
logger.error("this is an error message")
logger.critical("this is a critical message")

Format of log messages

By default, log messages are formatted along these lines:

2015-10-23 03:32:23 peter-macbook colouredlogs.demo[30462] DEBUG message with level 'debug'
2015-10-23 03:32:23 peter-macbook colouredlogs.demo[30462] VERBOSE message with level 'verbose'
2015-10-23 03:32:24 peter-macbook colouredlogs.demo[30462] INFO message with level 'info'
...

You can customize the log format and styling using environment variables as well as programmatically; please refer to the online documentation for details.

Enabling millisecond precision

If you're switching from logging.basicConfig() to colouredlogs.install() you may notice that timestamps no longer include milliseconds. This is because colouredlogs doesn't output milliseconds in timestamps unless you explicitly tell it to. There are three ways to do that:

The easy way is to pass the milliseconds argument to colouredlogs.install():

colouredlogs.install(milliseconds=True)

This became supported in release 7.1 (due to #16).

Alternatively you can change the log format to include 'msecs':

%(asctime)s,%(msecs)03d %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s

Here's what the call to colouredlogs.install() would then look like:

colouredlogs.install(fmt='%(asctime)s,%(msecs)03d %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s')

Customizing the log format also enables you to change the delimiter that separates seconds from milliseconds (the comma above). This became possible in release 3.0 which added support for user defined log formats.

If the use of %(msecs)d isn't flexible enough you can instead add %f to the date/time format; it will be replaced by the value of %(msecs)03d. Support for the %f directive was added in release 9.3 (due to #45).
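The same %(msecs)03d trick works with the standard library alone; a minimal sketch of mine (the logger name and stream are my own choices, and the colouredlogs-only %(hostname)s field is omitted because plain logging does not provide it):

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
# '%(msecs)03d' appends zero-padded milliseconds after the comma,
# mirroring the format string shown above.
handler.setFormatter(logging.Formatter(
    fmt="%(asctime)s,%(msecs)03d %(name)s[%(process)d] %(levelname)s %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S"))
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.debug("this is a debugging message")
print(stream.getvalue(), end="")
```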
Changing text styles and colours

The online documentation contains an example of customizing the text styles and colours.

Coloured output from cron

When colouredlogs is used in a cron job, the output that's e-mailed to you by cron won't contain any ANSI escape sequences because colouredlogs realizes that it's not attached to an interactive terminal. If you'd like to have colours e-mailed to you by cron there are two ways to make it happen:

Modifying your crontab

Here's an example of a minimal crontab:

MAILTO="your-email-address@here"
CONTENT_TYPE="text/html"
* * * * * root colouredlogs --to-html your-command

The colouredlogs program is installed when you install the colouredlogs Python package. When you execute colouredlogs --to-html your-command, the output of your-command is captured and converted to HTML by the colouredlogs program. Yes, this is a bit convoluted, but it works great :-)

Modifying your Python code

The ColouredCronMailer class provides a context manager that automatically enables HTML output when the $CONTENT_TYPE variable has been correctly set in the crontab. This requires my capturer package.

colouredlogs is available on PyPI and GitHub. The online documentation is available on Read The Docs and includes a changelog.
https://pypi.org/project/colouredlogs/
Does Qt Quick Controls 2.0 support iOS in Qt 5.7?

I created a project for OS X, Android and iOS using the Material style. OS X and Android work fine, but iOS does not. In the qml/qtquick/controls.2 directory there are some QML files for OS X and Android, but no QML for iOS. What should I do? Thanks for your help.

What exactly does not work? What do you mean by "no QML for iOS"? Where? The .qml files are built into resources in static builds. What is likely the issue here is that the Material style does not get deployed, because qmlimportscanner does not detect the dependency. Adding import QtQuick.Controls.Material 2.0 to your main.qml or anywhere else should help with that.

What exactly does not work? "Not work" means the app is running, but without the Material style.

What do you mean by "no QML for iOS"? Where? Qt/5.7/ios/qml/QtQuick/Controls.2

Adding import QtQuick.Controls.Material 2.0 to your main.qml or anywhere else should help with that. YEAH! But the editor tells me "QML Module not found" after adding "import QtQuick.Controls.Material 2.0", in the desktop Kit. In the iOS Kit,

import QtQuick.Window 2.0
import QtQuick.Controls 2.0
import QtQuick.Layouts 1.0
import QtQuick.Controls.Material 2.0

all show "QML Module does not contain information about components contained in plugins", but "import QtQuick 2.7" is fine. I'm confused by the editor. Anyway, the Material style is running on iOS. Thank you!!
https://forum.qt.io/topic/69730/did-qt-quick-controls-2-0-support-ios-in-qt5-7
1. What are the extractor types?
- Application Specific: BW Content (FI, HR, CO, SAP CRM, LO Cockpit); Customer-Generated Extractors (LIS, FI-SL, CO-PA)
- Cross Application (Generic Extractors): DB View, InfoSet, Function Module

2. What are the steps involved in LO Extraction?
LBW0: Connecting LIS InfoStructures to BW

5. What are Start routines, Transfer routines and Update routines?

6. What is the difference between a start routine and an update routine; when, how and why are they called?
The start routine can be used to access the InfoPackage, while update routines are used while updating the Data Targets.

7. What is the table that is used in start routines?
The table structure will always be the structure of an ODS or InfoCube. For example, if it is an ODS, the active table structure will be the table.

9. What are Return Tables?
When we want to return multiple records instead of a single value, we use the return table in the Update Routine. Example: if we have the total telephone expense for a Cost Center, using a return table we can get the expense per employee.

10. How do the start routine and return table synchronize with each other?
The return table is used to return the value following the execution of the start routine.

11. V1 & V2 don't.

12. What is compression?
It is a process used to delete the Request IDs, and this saves space.

13. What is Rollup?
This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have not performed a rollup, the new InfoCube data will not be available while reporting on the aggregate.

15. How many extra partitions are created and why?
Two partitions are created: for dates before the begin date and after the end date.

16. What are the options available in a transfer rule?
InfoObject, Constant, Routine, Formula

17. How would you optimize the dimensions?
We should define as many dimensions as possible, taking care that no single dimension crosses more than 20% of the fact table size.

18.

19. Can an InfoObject be an InfoProvider? How and why?
Yes, when we want to report on Characteristics or Master Data. We have to right-click on the InfoArea and select "Insert characteristic as data target". For example, we can make 0CUSTOMER an InfoProvider and report on it.

20. ...data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.

21. How do you transform Open Hub Data?
Using a BADI we can transform Open Hub Data according to the destination requirement.

22. What is ODS?
An Operational Data Store is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.

23. What are BW Statistics and what is their use?
They are a group of Business Content InfoCubes used to measure performance for query and load monitoring. They also show the usage of aggregates, OLAP and warehouse management.

24. What are the steps to extract data from R/3? What are the delta options available when you load from a flat file?
The 3 options for delta management with flat files:
- Full Upload
- New Status for Changed Records (ODS object only)
- Additive Delta (ODS object & InfoCube)

Q) Under which menu path is the Test Workbench to be found? ...Test Workbench.

Q) I want to delete a BEx query that is in the Production system through a request.

(Here we can debug update rules or transfer rules.) SM50 -> Program/Mode -> Program -> Debugging, and debug this work process.
Steps for loading data:
- Replicate DataSources
- Assign InfoSources
- Maintain Communication Structure and Transfer rules
- Create an InfoPackage
- Load Data

Q) How do you verify a load in the Monitor?
A) Under Details (Header/Status/Details), check Processing (data packet): Everything OK, and the context menu of Data Package 1 (1 record): Everything OK.

Q) How do you delete query elements? Is anyone aware about it?
A) Have you tried the RSZDELETE transaction?

Q) Errors while monitoring process chains?
A) In process chains you add many process types. For example, after loading data into an InfoCube, rolling up data into aggregates is a process type that you keep after the process type for loading data into the cube. Another one comes after you load data into the ODS.

Q) PSA cleansing?
A) You know how to edit the PSA. I don't think you can delete single records; you have to delete the entire PSA data for a request.

Q) Can we make a datasource support delta?
A) If this is a custom (user-defined) datasource, you can make it delta enabled. While creating the datasource in RSO2, after entering the datasource name and pressing Create, the next screen has a button at the top that says Generic Delta. Delta extraction is supported for all generic extractors, i.e. those based on tables/views, SAP Query and function modules. Generic delta services support delta extraction according to:
- Time stamp
- Calendar day
- Numeric pointer, such as document number & counter
Only one of these attributes can be set as the delta attribute. The delta queue (RSA7) allows you to monitor the current status of the delta attribute. If you want more details, there is a chapter on this towards the end of the Extraction book.

Q) Workbooks — should they be transported with the role?
A) As a general rule, you should transport roles with workbooks. Keep in mind that a workbook is an object unto itself and has no dependencies on other objects; thus, as a general rule, you do not receive an error message from the transport of 'just a workbook', even though it may not be visible afterwards. Here are a couple of scenarios:
1. If both the workbook and its role have been previously transported, then the role does not need to be part of the transport.
2. If the role exists in both dev and the target system but the workbook has never been transported, you have a choice of transporting the role (recommended) or just the workbook. If only the workbook is transported, it will exist in the target system (verified via table RSRWBINDEXT), but an additional step has to be taken after import: locate the WorkbookID via table RSRWBINDEXT in dev, verify the same exists in the target system, and manually add it to the role in the target system via transaction PFCG — always use Ctrl+C/Ctrl+V copy/paste for manual additions!
3. If the role does not exist in the target system, you should transport both the role and the workbook.

Q) How much time does it take to extract 1 million (10 lakh) records into an InfoCube?
A) This depends. If you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.

Q) What are the five ASAP methodology phases?
A) Project Preparation, Business Blueprint, Realization, Final Preparation, and Go-Live & Support.
1. Project Preparation: decision makers define clear project objectives and an efficient decision-making process. A Project Charter is issued and an implementation strategy is outlined. Project managers are involved in this phase, and discussions are held with the client about needs and requirements.
2. Business Blueprint: a detailed documentation of the company's requirements (i.e. which objects we need to develop or modify depending on the client's requirements).
3. Realization: the implementation of the project takes place here (development of objects etc.), and we are involved in the project from this phase onwards — analysis of objects, developing and modifying them, and transporting them to the testing system. Before transporting, an initial test known as unit testing (testing of objects) is done in the development system.
4. Final Preparation: the final preparation before going live, i.e. testing and conducting pre-go-live end-user training. End-user training is given at the client site, training users to work with the new environment, as they are new to the technology.
5. Go-Live & Support: the project has gone live and is in production; the project team supports the end users.

Q) What is the landscape of R/3 and of BW?
A) Landscape of BW: development system, testing/quality system and production system.
- Development system: all the implementation work is done here.
- Testing/quality system: quality checks and integration testing are done here.
- Production system: all the extraction takes place here.
(Landscape of R/3: not sure.)

Q) How do you measure the size of an InfoCube?
A) In number of records.

Q) What do the record modes mean?
A) (blank) = after image, x = before image, a = add, d = delete, n = new, r = reverse. Flat-file datasources do not support 0RECORDMODE in extraction.

Q) Difference between display attributes and navigational attributes?
A) A display attribute is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) in order to drill down.

Q) Difference between InfoCube and ODS?
A) An InfoCube is structured as a star schema (extended), where a fact table is surrounded by dimension tables that are linked with DIM ids; data-wise, it holds aggregated data and has no overwrite functionality. An ODS is a flat structure (a flat table) with no star schema concept, which holds granular (detailed-level) data and has overwrite functionality.

Q) Some data was uploaded twice into an InfoCube. How to correct it?
A) But how is it possible? If you loaded it manually twice, you can delete it by request ID.

Q) Can a number of datasources have one InfoSource?
A) Yes, of course. For example, for loading texts and hierarchies we use different datasources but the same InfoSource.

Q) Can you add a new field at the ODS level?
A) Sure you can.

Q) As we use SBIW LIS setup transactions for delta update in LIS, what is the procedure in the LO Cockpit?
A) There is no LIS in the LO Cockpit. We have datasources, and they can be maintained (append fields). Refer to the white paper on LO Cockpit extractions.

Q) Why do we delete the setup tables (LBWG) and fill them (OLI*BW)?
A) Initially we don't delete the setup tables, but when we change the extract structure we do: changing the extract structure means there are newly added fields that were not there before, so to get the required data (i.e. take only the data that is required and avoid redundancy) we delete and then refill the setup tables. To refresh the statistical data, the extraction setup reads the dataset that you want to process, such as customer orders from tables like VBAK and VBAP, and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one generates or modifies application data, at least until the setup tables can be filled.

Q) Significance of the ODS?
A) It holds granular data (detailed level).

Q) Currency conversions can be written in update rules. Why not in transfer rules?

Q) Brief the data flow in BW.
A) Data flows from the transactional system to the analytical system (BW). DataSources on the transactional system need to be replicated on the BW side and attached to an InfoSource and update rules respectively.

Q) What is the procedure to update data into data targets?
A) FULL and DELTA.
Q) If there are 2 datasources, how many transfer structures are there, in R/3 and in BW?
A) 2 in R/3 and 2 in BW.

Q) Routines?
A) Routines exist in the InfoObject (transfer routines), in the transfer rules, in the update rules (update routines) and as start routines.

Q) Brief some structures used in BEx.
A) You can create structures in rows and columns; typically these are key figure structures.

Q) What is data size?
A) The volume of data one data target holds (in number of records).

Q) Different types of InfoCubes?
A) Basic, and Virtual (remote, SAP remote and multi). A virtual cube is like a structure with no data of its own: you write a function module linked to a table, and whenever the table is updated the virtual cube fetches the data from the table and displays the report online. It is used, for example, where all information has to be updated online, as in a railway reservation system. FYI: search SDN (www.sap.com/sdn) for "Designing Virtual Cube" and you will get good material on designing the function module.

Q) InfoSet query?
A) Can be made of ODS objects and characteristic InfoObjects with master data.

Q) Where is the PSA data stored?
A) In PSA tables.

Q) What are the different variables used in BEx?
A) Variables for texts, formulas, hierarchies, hierarchy nodes & characteristic values. Variable processing types are: manual entry/default value, replacement path, SAP exit, customer exit, authorization.

Q) How many levels can you go in reporting?
A) You can drill down to any level by using navigational attributes and jump targets.

Q) What is the significance of KPIs?
A) KPIs indicate the performance of a company. These are key figures.

Q) After the data extraction, what is the image position?
A) After image.

Q) Reporting and restrictions?
A) Refer to the documentation.

Q) Is it necessary to initialize each time the delta update is used?
A) No.

Q) Difference between the 2.1 and 3.x versions?
A) Refer to the documentation.

Q) What are indexes?
A) Indexes are database indexes, which help in retrieving data fast. As a load-performance measure, indexes can be deleted before a load and rebuilt afterwards.

Q) Tools used for performance tuning?
A) ST22, the monitor (RSMO), etc. Also transactional load partitioning improvement: partitioning based on expected load volumes and data element sizes improves data loading into the PSA and cubes by InfoPackages.

Q) Process chains: if you have used them, how will you schedule data daily?
A) There should be some tool or job to run them daily (SM37 jobs).

Q) Authorizations?
A) Profile Generator.

Q) Web reporting?
A) What are you expecting?

Q) Can a characteristic InfoObject be an InfoProvider?
A) Of course.

Q) Procedures of reporting on MultiCubes?
A) Refer to help. A MultiCube works on a union condition.

Q) Explain transportation of objects.
A) Dev → QA and Dev → Production.

Q) How can I compare data in R/3 with data in a BW cube after the daily delta loads? Are there any standard procedures for checking them or matching the number of records?
A) You can go to R/3 transaction RSA3 and run the extractor; it will give you the number of records extracted. Then go to the BW monitor to check the number of records in the PSA and check whether it is the same (also in the monitor header tab). RSA3 is a simple extractor checker program that allows you to rule out extraction problems in R/3. It is simple to use, but it only really tells you whether the extractor works. Since the records that get updated into cubes/ODS structures are controlled by update rules, it will not tell you how many records should be expected in BW for a given load. From RSMO, for a given load, you can determine how many records were passed through the transfer rules from R/3, how many targets were updated, and how many records passed through the update rules; it also gives you error messages from the PSA. To see what is in the cube compared to what is in R/3, you will need to compare records on a 1:1 basis against records in R/3 transactions for the functional area in question; I would recommend enlisting the help of the end-user community, since they presumably know the data. You are not modifying anything, so what you do in RSA3 has no effect on data quality afterwards. (To use RSA3: go to the transaction, enter the extractor, e.g. 2LIS_02_HDR, click Execute and you will see the record count; you can also display the data.)

Q) Types of update rules?
A) (Checkbox), variable & routine; a routine may use a return table.

Q) Types of transfer rules?
A) Field-to-field mapping, constant, variable & routine.

Q) Transfer routine?
A) A routine that we write in the transfer rules.

Q) Update routine?
A) A routine that we write in the update rules.

Q) What is the difference between writing a routine in transfer rules and writing a routine in update rules?
A) See the discussion of routines later in this document.

Q) X & Y tables?
A) X-table = a table to link material SIDs with SIDs for time-independent navigation attributes. Y-table = a table to link material SIDs with SIDs for time-dependent navigation attributes.

Q) Routine with return table?
A) Update rules generally only have one return value. However, by choosing the checkbox Return Table, you can create a routine in the tab strip "key figure calculation" whose key figure routine no longer has a return value but a return table. You can then generate as many key figure values as you like from one data record.

Q) Start routines?
A) Start routines can be written in both update rules and transfer rules. Suppose you want to restrict (delete) some records based on conditions before they get loaded into the data targets: you can specify this in the start routine of the update rules. Example: DELETE DATA_PACKAGE — that means it will delete records based on the condition.
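The DELETE DATA_PACKAGE start routine mentioned above can be sketched in ABAP as below. This is a BW 3.x-style sketch only: the field /BIC/ZOPENQTY is a made-up example, and the actual structure of DATA_PACKAGE depends on your InfoSource.

```abap
* Start routine in the update rules (BW 3.x) - illustrative sketch.
* DATA_PACKAGE is the internal table of records arriving with the request.
* /BIC/ZOPENQTY is a hypothetical key figure; substitute your own field.

* Drop all records whose open quantity is zero before they reach the target:
DELETE DATA_PACKAGE WHERE /BIC/ZOPENQTY = 0.

* To cancel the whole data package on a fatal condition, set ABORT <> 0:
* ABORT = 4.
```

Records removed this way never reach the data target, which is the "restrict before loading" behavior described in the answer above.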
There are four types of SID tables:
- X: time-independent navigational attributes SID table
- Y: time-dependent navigational attributes SID table
- H: hierarchy SID table
- I: hierarchy structure SID table

Q) Filters & restricted key figures (real-time example)?
A) Restricted key figures you can have for an SD cube: billed quantity, billing value, and number of billing documents as RKFs.

Q) How do you know which table (in SAP BW) contains the technical name, description and creation data of a particular report?
A) There is no single such table in BW; if you want such details while opening a particular query, press the Properties button and you will see all the details you wanted. You will find information about technical names and descriptions of queries in the following tables: the directory of all reports (table RSRREPDIR) and the directory of the reporting component elements (table RSZELTDIR). For workbooks and their connections to queries, check the where-used list for reports in workbooks (table RSRWORKBOOK) and the titles of Excel workbooks in the InfoCatalog (table RSRWBINDEXT).

Q) Line-item dimension (real-time example)?
A) Invoice number or document number is a real-time example of a line-item dimension.

Q) What does the number in the 'Total' column in transaction RSA7 mean?
A) It counts the LUWs in the delta queue. A LUW only disappears from the RSA7 display when it has been transferred to the BW system and a new delta request has been received from the BW system. The detail screen of transaction RSA7 displays the records contained in the LUWs; the records belonging to the previous delta request and the records that do not meet the selection conditions of the preceding delta init requests are filtered out, so that only the records ready for the next delta request are displayed on the detail screen. In particular, a possibly existing customer exit is not taken into account.

Q) What is a LUW in the delta queue?
A) A LUW, from the point of view of the delta queue, can be an individual document, a group of documents from a collective run, or a whole data packet of an application extractor.

Q) Why does transaction RSA7 still display LUWs on the overview screen after successful delta loading?
A) Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded to the BW system. Only then may the LUWs of the previous delta be confirmed (and also deleted); in the meantime, the LUWs must be kept for a possible repetition of the delta request. In particular, the number on the overview screen does not change when the first delta is loaded into the BW system. (This error is corrected with BW 2.0B Support Package 11.)

Q) Why does the number in the 'Total' column on the overview screen of transaction RSA7 differ from the number of data records displayed when you call the detail view?
A) The number on the overview screen corresponds to the total of LUWs (see the first question) that were written to the qRFC queue and have not yet been confirmed. The detail view, by contrast, shows only the records that are ready for the next delta request.

Q) Why does it take so long to display the data in the delta queue (for example, approximately 2 hours)?
A) With PlugIn 2001.1 the display was changed: the user now has the option of defining the amount of data to be displayed, restricting it, selectively choosing the number of a data record, distinguishing between the 'actual' delta data and the data intended for repetition, and so on.

Q) Why does it take so long to delete from the delta queue (for example, half a day)?
A) Import PlugIn 2000.2 patch 3. With this patch, performance during deletion is considerably improved.

Q) Why is the delta queue not updated when you start the V3 update in the Logistics Cockpit area?
A) It is most likely that a delta initialization has not yet run or that the delta initialization was not successful. A successful delta initialization (the corresponding request must have QM status 'green' in the BW system) is a prerequisite for the application data being written to the delta queue.

Q) What is the purpose of the function 'Delete data and meta data in a queue' in RSA7? What exactly is deleted?
A) You should act with extreme caution when you use the deletion function in the delta queue. It is comparable to deleting an InitDelta in the BW system and should preferably be executed there. You not only delete all data of this DataSource for the affected BW system, but you also lose all information concerning the delta initialization; you can then only request new deltas after another delta initialization. When you delete the data, the LUWs kept in the qRFC queue for the corresponding target system are confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no more references to the LUWs. The deletion function is intended, for example, for the case where the BW system from which the delta initialization was originally executed no longer exists or can no longer be accessed.

Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading procedure from the delta queue?
A) The impact is limited. As of PlugIn 2000.2 patch 3, the entries in table ROIDOCPRMS are as effective for the delta queue as for a full update. Please note, however, that LUWs are not split during data loading, for consistency reasons. This means that when very large LUWs are written to the delta queue, the actual package size may differ considerably from the MAXSIZE and MAXLINES parameters. If performance problems are related to the loading process from the delta queue, refer to the application-specific notes (for example, in the CO-PA area, in the Logistics Cockpit area, and so on).

Q) Why are selections not taken into account when the delta queue is filled?
A) Filtering according to selections takes place when the system reads from the delta queue. This is necessary for reasons of performance.

Q) What is the relationship between RSA7 and the qRFC monitor (transaction SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor; it is made up of the prefix 'BW, the client and the short name of the DataSource. For DataSources whose names are 19 characters long or shorter, the short name corresponds to the name of the DataSource. For DataSources whose names are longer than 19 characters (for delta-capable DataSources this is only possible as of PlugIn 2001.1), the short name is assigned in table ROOSSHORTN. In the qRFC monitor you cannot distinguish between repeatable and new LUWs; moreover, the data of a LUW is displayed in an unstructured manner there.

Q) Why are there data in the delta queue although the V3 update was not started?
A) Data was posted in the background. In that case, the records are updated directly in the delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS). See Note 417189.

Q) Why does the button 'Repeatable' on the RSA7 data details screen show not only data loaded into BW during the last delta but also newly added, i.e. 'pure', delta records?
A) It was programmed so that a request in repeat mode fetches both the actually repeatable (old) data and new data from the source system. There is no duplicate transfer of records to the BW system.

Q) I loaded several delta inits with various selections. For which one is the delta loaded?
A) For delta, all selections made via delta inits are summed up. This means that a delta for the 'total' of all delta initializations is loaded.

Q) I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?
A) Before you copy a source client or source system, make sure that your deltas have been fetched from the delta queue into BW and that no delta is pending. After the client copy, an inconsistency might occur between the BW delta tables and the OLTP delta tables, as described in Note 405943. After the client copy, table ROOSPRMSC will probably be empty in the OLTP, since this table is client-independent; it will contain the entries with the old logical system name, which are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case, since the delta depends on both the BW system and the source system. Even if no 'MESSAGE_TYPE_X' dump occurs in BW when editing or creating an InfoPackage, you should expect that the delta has to be initialized after the copy.

Q) How many selections for delta inits are possible in the system?
A) With simple selections (intervals without complicated join conditions, or single values), you can make up to about 100 delta inits; it should not be more. With complicated selection conditions, it should be only up to 10-20 delta inits. Reason: with many selection conditions joined in a complicated way, too many 'where' lines are generated in the generated ABAP source code, which may exceed the memory limit.

Q) Is it allowed in transaction SMQ1 to use the functions for manual control of processes?
A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing BW Support, or only if this is explicitly requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.

Q) Despite the delta request being started after completion of the collective run (V3 update), it does not contain all documents. Only another delta request loads the missing documents into BW. What is the cause of this "splitting"?
A) The collective run submits the open V2 documents for processing to the task handler, which processes them in one or several parallel update processes in an asynchronous way. For this reason, plan a sufficiently large "safety time window" between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution, where this problem does not occur, is described in Note 505700.

Q) Despite my deleting the delta init, LUWs are still written into the delta queue.
A) In general, delta initializations and deletions of delta inits should always be carried out at a time when no posting takes place. Otherwise, buffer problems may occur: if a user started an internal mode at a time when the delta initialization was still active, he or she posts data into the queue even though the initialization has been deleted in the meantime. This is the case in your system.

Q) In SMQ1 (qRFC monitor) I have status 'NOSEND'. In table TRFCQOUT, some entries have the status 'READY', others 'RECORDED'; ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the 'Status' field are correct and which are alarming? Are the statuses BW-specific or generally valid in qRFC?
A) Tables TRFCQOUT and ARFCSSTATE: the status READ means that the record was read once, either in a delta request or in a repetition of the delta request; however, this does not mean that the record has successfully reached BW yet. The status READY in TRFCQOUT and RECORDED in ARFCSSTATE mean that the record has been written into the delta queue and will be loaded into BW with the next delta request or a repetition of a delta. In any case, only the statuses READ, READY and RECORDED in both tables are considered valid. The status EXECUTED in TRFCQOUT can occur temporarily: it is set before starting a delta extraction for all records with status READ present at that time. The records with status EXECUTED are usually deleted from the queue in packages within a delta request, directly after the status is set and before a new delta is extracted. If you see such records, it means that either a process confirming and deleting records already loaded into BW is successfully running at the moment, or, if the records remain in the table with status EXECUTED for a longer period of time, that there are likely problems with deleting the records which have already been successfully loaded into BW. In this state, no more deltas are loaded into BW. Every other status is an indicator of an error or an inconsistency. NOSEND in SMQ1 means nothing (see Note 378903). The value 'U' in field 'NOSEND' of table TRFCQOUT is discomforting.

Q) The extract structure was changed when the delta queue was empty. Afterwards, new delta records were written to the delta queue. When loading the delta into the PSA, it shows that some fields were moved; the same result occurs when the contents of the delta queue are listed via the detail display. Why are the data displayed differently, and what can be done?
A) Make sure that the change of the extract structure is also reflected in the database and that all servers are synchronized. We recommend resetting the buffers using transaction $SYNC. If the extract structure change is not communicated synchronously to the server where delta records are being created, the records are written with the old structure until the new structure has been generated. This may have disastrous consequences for the delta; in that case, the delta needs to be re-initialized.

Q) How and where can I control whether a repeat delta is requested?
A) Via the status of the last delta in the BW request monitor. If the request is RED, the next load will be of type 'Repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. For the contents of the repeat, see Question 14. Delta requests set to red despite the data already being updated lead to duplicate records in a subsequent repeat, if they have not been deleted from the data targets concerned beforehand.

Q) Are there particular recommendations regarding the data volume the delta queue may grow to without facing the danger of a read failure due to memory problems?
A) There is no strict limit (except for the restricted number range of the 24-digit QCOUNT counter in the LUW management table, or the restrictions regarding the volume and number of records in a database table). When estimating "smooth" limits, both the number of LUWs and the average data volume per LUW are important. As a rule, we recommend bundling data (usually documents) already when writing to the delta queue, to keep the number of LUWs small (partly this can be set in the applications, e.g. in the Logistics Cockpit). The data volume of a single LUW should not be considerably larger than 10% of the memory available to the work process for data extraction (in a 32-bit architecture with a memory volume of about 1 GByte per work process, 100 MBytes per LUW should not be exceeded). If this limit is observed, correct reading is guaranteed in most cases. If the number of LUWs cannot be reduced by bundling application transactions, you should at least make sure that the data are fetched from all connected BW systems as quickly as possible. A program-internal limit ensures that never more than 1 million LUWs are read and fetched from the database per delta request; if this limit is reached within a request, the delta queue must be emptied by several successive delta requests. We recommend, however, not trying to reach that limit, but triggering the fetching of data from the connected BW systems already when the number of LUWs reaches a 5-digit value. To avoid memory problems, the frequency should not be higher than one delta request per hour.

Q) As of PI 2003.1, the Logistics Cockpit offers various types of update methods. Which update method is recommended in logistics? According to which criteria should the decision be made? How can I choose an update method in logistics?
A) See the recommendation in Note 505700.
Q) How many days can we keep the data in the PSA, if loads are scheduled daily, weekly and monthly?
A) We can set the retention time. Usually we load the transactional data nightly, and we load data from the PSA if it is already in the PSA.

Q) Can we filter the fields at the transfer structure?

Q) Can we load data directly into an InfoObject without extraction? Is it possible?
A) Yes. We can copy from another InfoObject if it is the same.

Q) I would like to display on the report the date the data was uploaded. The data is usually loaded nightly, and you want to display the date on which data was loaded into the data target from which the report is being executed. Is there any easy way to include this information on the report for users, so that they know the validity of the report?
A) If I understand your requirement correctly, configure your workbook to display the text elements in the report. This displays the relevance-of-data field, which is the date on which the data load took place.

Q) How can you get the data from the client if you are working on an offshore project? Through which network?
A) VPN (Virtual Private Network). A VPN is a network through which we can connect to the client systems sitting offshore, via RAS (Remote Access Server).

Q) Can anybody tell me how to add a navigational attribute to the rows of a BEx report?
A) Expand the dimension in the left-side panel (that is, the InfoCube panel), select the navigational attribute, and drag and drop it into the rows panel.

Q) There is one ODS and 4 InfoCubes. We send data to all cubes at the same time; if one cube gets a lock error, how can you rectify it?
A) Go to transaction SM66 and see which process is locked, note its PID, then go to transaction SM12 and unlock it. Such lock errors occur when loads are scheduled concurrently.

Q) How can you analyze the project at first?
A) Prepare the project plan and environment; define project management standards and procedures; define implementation standards and procedures; testing & go-live + support.

Q) Are there transaction codes like SMPT or STMT?
A) In current systems (BW 3.0B and R/3 4.6B) these transaction codes don't exist!

Q) What is a transactional cube?
A) Transactional InfoCubes differ from standard InfoCubes in that they have an improved write-access performance level. Standard InfoCubes are technically optimized for read-only access and for a comparatively small number of simultaneous accesses. The transactional InfoCube, by contrast, was developed to meet the demands of SAP Strategic Enterprise Management (SEM): data is written to the InfoCube (possibly by several users at the same time) and re-read as soon as possible. Standard basic cubes are not suitable for this.

Q) I am not able to access a hierarchy node directly using variables for reports. When I use transaction RSZV, I get a message that it doesn't exist in BW 3.0. Can anyone tell me how to get the same functionality in BEx?
A) Transaction RSZV is used only in the earlier versions; from 3.0B onwards the functionality is embedded in BEx and is available in the Query Designer itself: right-click the InfoObject you want to use as a variable and proceed by selecting the variable type and processing type.

Q) I am very new to BW and would like to clarify a doubt regarding the delta extractor. If I am correct, by using delta extractors the data that has already been scheduled will not be uploaded again. Say I have uploaded all the sales orders created until yesterday into the cube, and now I make changes to one of the open records that was already uploaded. What happens when I schedule the load again — will the same record be uploaded again with the changes, or will the changes be applied to the previous record?

Q) Is there any way to delete cube contents within update rules from an ODS data source? The reason would be to delete (or zero out) a record in an "Open Order" cube if the open order quantity is 0. I've tried using 0RECORDMODE, but that doesn't work. Also, would it be easier to write a program that runs after the load and deletes the records with a zero open quantity?
A) You can write ABAP code in a start routine of the update rules: create a start routine, loop over all the records, and delete the records that match the condition. Strictly speaking, this is not "deleting cube contents with update rules"; it is only possible to prevent some content from being updated into the InfoCube using the start routine. Regarding "if the open order quantity was 0": you also have to think about before and after images in the case of a delta upload. Otherwise you may delete the change record, keep the old one, and end up with wrong information after the change.
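The "loop at all the records and delete the record that meets the condition" start routine described earlier could be sketched as below. This is a hedged BW 3.x sketch: /BIC/ZOPENQTY stands in for the real open-quantity field, which depends on your cube's InfoSource.

```abap
* Start routine (update rules, BW 3.x) - illustrative sketch.
* Removes records with zero open quantity before they are updated
* into the "Open Order" cube. Remember the delta before/after image
* caveat: in a delta load, dropping only the change record can leave
* stale data in the cube.
LOOP AT DATA_PACKAGE.
  IF DATA_PACKAGE-/BIC/ZOPENQTY = 0.
    DELETE DATA_PACKAGE.
  ENDIF.
ENDLOOP.
```

Inside the LOOP, a plain DELETE on the internal table removes the current row, which is the classic ABAP idiom for conditional filtering of a data package.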
Q) Why is it every time I switch from Info Provider to InfoObject or from one item to another while in modeling I always get this message " Reading Data " or "constructing workbench" in it runs for minutes. data is stored in different versions ((new) delta. compare with the Transactional ODS Object. where as a transactional ODS object contains the data in a single version.directly behind the different characteristics. In the transfer structure you just click on the yellow triangle behind a characteristic and choose "routine". Usually we only use start routine when it does not concern one single characteristic (for example when you have to read the same table for 4 characteristics). A transactional ODS object differs from a standard ODS object in the way it prepares data. I hope this helps. For implementation. (change log) modified). In the update rules you can choose "start routine" or click on the triangle with the green square behind an individual characteristic. This data can be evaluated using a BEx query.. for example) on the document (atomic) level. Q) What is ODS? A) An ODS object acts as a storage location for consolidated and cleaned-up transaction data (transaction data or master data. In a standard ODS object. data is stored in precisely the same form in which it was written to the transactional ODS object by the . active. Standard ODS Object Transactional ODS object: The data is immediately available here for reporting. Therefore. such as SAP Strategic Enterprise Management (SEM) for example. It retrieves its data from external systems via fill. The advantage to the way it is structured is that data is easy to access. Dimension can contain up to 248 freely available characteristics. unit & data packet) dimensions. as well as other external applications. data is written to the ODS object (possibly by several users at the same time) and reread as soon as possible. (13 user defined & 3 system pre-defined . 
you can use a transaction ODS object as a data target for an analysis process. They are made available for reporting immediately after being loaded. The transactional ODS object simply consists of a table for active data. Transactional ODS objects allow data to be available quickly. Instead. It offers no replacement for the standard ODS object. The data from this kind of ODS object is accessed transactionally. that is. Q) What does InfoCube contains? A) Each InfoCube has one FactTable & a maximum of 16 (13+3 system defined.or delete. The loading process is not supported by the BW system.application. time.APIs. The transactional ODS object is also required by diverse applications. Each Fact Table can contain a maximum of 233 key figures. Q) What does FACT Table contain? A FactTable consists of KeyFigures. an additional function displays those that can be used for special applications. In BW. Q) How many dimensions are in a CUBE? A) 16 dimensions. # so that BW will accept it. instead they are stored in Masterdata tables divided into attributes. surrounded by dimensional tables and the dimension tables contains of master data. text & hierarchy. Differences between STAR Schema & Extended Schema? A) In STAR SCHEMA. This is transaction data which loads fine in the PSA but not the data target. A) Check transaction SPRO ---Then click the "Goggles"Button => Business . A FACT Table in center. These Masterdata & dimensional tables are linked with each other with SID keys. hierarchies) Q) What does ATTRIBUTE Table contain? Master attribute data Q) What does TEXT Table contain? Master text data. Masterdata tables are independent of Infocube & reusability in other InfoCubes. long text. medium text & language key if it is language dependent Q) What does Hierarchy table contain? Master hierarchy data Q) What is the advantage of extended STAR Schema? Q). unit & data packet]) Q) What does SID Table contain? SID keys linked with dimension table & master data tables (attributes. 
In Extended Schema the dimension tables does not contain master data.[time. Q) As to where in BW do you go to add a character like a \. short text. texts. Manage →. Q) When are SID's generated? A) When Master data loaded into Master Tables (Attr. Text. Selective deletion & change log entry deletion. change log entries. Q) How would we delete the data in ODS? A) By request IDs. Q) How would we delete the data in change log table of ODS? A) Context menu of ODS →. Data packet « Q) Partitioning possible for ODS? A) No. (when created)? Q) When are Dimension ID's created? A) When Transaction data is loaded into InfoCube. I hope you can use my "Guide" (my BW is in german. so i don't know all the english descriptions).Information Warehouse => Global Settings => 2nd point in the list. Q) What are the extra fields does PSA contain? A) (4) Record id. It's possible only for Cube. Q) Does data packets exits even if you don't enter the master data. . Environment →. Hierarchies). Q) Transitive Attributes? A) Navigational attributes having nav attr«these nav attrs are called transitive attrs Q) Navigational attribute? A) Are used for drill down reporting (RRI). Q) Display attributes? A) You can show DISPLAY attributes in a report. Currency attributes. Transitive attributes. which are used only for displaying. Q) Have you ever tried to load data from 2 InfoPackages into one cube? A) Yes. Display attributes. Q) How does u recognize an attribute whether it is a display attribute or not? A) In Edit characteristics of char. Time dependent attributes. on general tab checked as attribute only. Compounding attributes.Q) Why partitioning? A) For performance tuning. Q) Different types of Attributes? A) Navigational attribute. Q) Compounding attribute? . * Assign InfoSources.A) Q) Time dependent attributes? A) Q) Currency attributes? A) Q) Authorization relevant object. Infoprovider (context menu) →. Insert characteristic Data as DataTarget. (R/3) * Replicate DataSource in BW. . 
Q) How do we load the data if a FlatFile consists of both Master and Transaction data? A) Using Flexible update method while creating InfoSource. Why authorization needed? A) Q) How do we convert Master data InfoObject to a Data target? A) InfoArea →. (R/3) * Maintain DataSources. Q) Steps in LIS are Extraction? A) Q) Steps in LO are Extraction? A) * Maintain extract structures. Queued Delta: . .* Maintain communication structures/transfer rules. * Activating Updates. * Set-up periodic V3 update.The extraction data from the application is collected in extraction queue instead of as update data and can be transferred to the BW delta queue by an update collection run.With every document posted in R/3. * Maintaining DataSources. Each document posting with delta extraction becomes exactly one LUW in the corresponding Delta queue. Q) What does LO Cockpit contain? A) * Maintaining Extract structure. (R/3) * InfoPackage for the Delta initialization. * Maintain InfoCubes & Update rules. (R/3) * Delete setup tables/setup extraction. Direct Delta: . (R/3) * InfoPackage for Delta uploads. Queued Delta. Q) Steps in FlatFile Extraction? A) Q) Different types in LO's? A) Direct Delta. as in the V3 update. * Activate extract structures. Unserialized V3 Update. Serialized V3 update. the extraction data is transferred directly into the BW delta queue. assign users to these roles. Q) OLI*BW --.Creating Updating rules for LO's. Change or Delete the InfoCube. Q) RSD5 -. Q) RSA7 ---.Creating user-defined Information Structure for LIS (It is InfoSource in SAP BW).* Controlling Updates. Q) MC24 ---.Fill Set-up tables.Data packet characteristics maint.Changeability of the BW namespace.TCode for LIS. Q) LBWE ---. Q) RSO2 --. Q) LBW0 --. Q) LBWG --.TCode for Logistics extractors.Role maintenance. Q) SE03 -.Delta Queue (allows you to monitor the current status of the delta attribute) Q) RSA3 ---.Maintain DataSources.Delete set-up tables in LO's.For Delete. 
Q) RSA6 --.Maintaining Generic DataSources. Q) RSDCUBEM --.Extract checker. Q) MC21 ----. . Q) PFCG ---. Q) RSCUSTV6 -.BEx Analyzer Q) RSBBS . Q) RSMONMESS -.Q) RSDBC .IMG (To make configurations in BW).Partitioning of PSA. Q) RSBOH1 -. Q) RSRT -. Q) RSKC -."Messages for the monitor" table. .Report to Report interface (RRI).Checking ShortDump.Monitoring of Dataloads.Scheduling Background jobs.Character permit checker. Q) ROOSOURCE .Maintaining Aggregates. Q) RSDDV .DB Connect Q) RSMO --.Query monitor. Q) SPRO -. Q) ST22 .Open Hub Service: Create InfoSpoke. Q) RSRV .Table to find out delta update methods. Q) SM37 .Analysis and Repair of BW Objects Q) RRMX . Create Indexes for BCube after loading data →.Program Compare Q) SE11 . Roll-Up data into the . here we specify when should the process chain start by giving date and time or if you want to start immediately Some of theses processes trigger an event of their own that in-turn triggers other processes. Process chains nothing but grouping processes.Finding for modes of records (i. Load data from the source system to PSA →. Load data from PSA to DataTarget ODS →.Implementation guide Q) Statistical Update? A) Q) What are Process Chains? A) TCode is RSPC.Start chain →.e.Project Management enhancing Q) SPAU . Delete BCube indexes →.ABAP Dictionary Q) SE09 .Q) RODELTAM .Transport Organizer (workbench organizer) Q) SE10 . before image& after image) Q) SMOD . Load data from ODS to BCube →. Process variant (start variant) is the place the process chain knows where to start.Transport Organizer (Extended View) Q) SBIW . Ex: .Definition Q) CMOD . is a sequence of processes scheduled in the background & waiting to be triggered by a specific event. Create database statistics →. There should be min and max one start variant in each process chain. Q) InfoPackage groups? A) Q) Explain the status of records in Active & change log tables in ODS when modified in source system? 
A) Q) Why it takes more time while loading the transaction data even to load the transaction without master data (we . Repair Request flag (check). the extra tab in Transaction data is DATA TARGETS. Restart chain from beginning. Q) Difference between MasterData & Transaction InfoPackage? A) 5 tabs in Masterdata & 6 tabs in Transaction data. Reporting agent & Other BW services. Q) For Full update possible while loading data from R/3? A) InfoPackage →. Load Process & subsequent processing. Data Target Administration. Q) Types of Updates? A) Full Update.aggregate→. Q) What are Process Types & Process variant? A) Process types are General services. Init Delta Update & Delta Update. This is only possible when we use MM & SD modules. Scheduler →. Process variant (start variant) is the place the process type knows when & where to start. even if no master data exits for the data)? A) Because while loading the data it has to create SID keys for transaction data. the field will appear in InfoPackage Data selection tab. A) When it comes to transporting for R/3 and BW. Second..and nullifying the value.The only purpose is when we check this column. Then you have to do this transport in BW. Always Update data. SELECT fields & CANCELLATION fields? A) Selection fields-. you will transport all the BW Objects from 1st BW system to 2nd BW system.. Third.check the checkbox. Cancellation . First you will transport all the datasources from 1st R/3 system to 2nd R/3 System. you have to replicate the datasources into the 2nd BW system«and then transport BW objects. Hide fields -.These fields are not transferred to BW transfer structure. Q) For what we use HIDE fields.It will reverse the posted documents of keyfigures of customer defined by multiplying it with 1. u should always transport all the R/3 Objects first«««once you transport all the R/3 objects to the 2nd system. testing and then production . 
You have to send your extractors first to the corresponding R/3 Q Box and replicate that to BW. Development. you will replicate the datasources from 2nd R/3 system into 2nd BW system. I think this is reverse posting Q) Transporting. Q) What is the PSA.Q) Functionality of InitDelta & Delta Updates? A) Q) What is Change-run ID? A) Q) Currency conversions? A) Q) Difference between Calculated KeyFigure & Formula? A) Q) When does a transfer structure contain more fields than the communication structure of an InfoSource? A) If we use a routine to enhance a field in the communication from several fields in the transfer structure. A) For cleansing purpose. the communication structure may contain more fields. at this stage we can directly load from PSA not going to extract from R/3. . technical name of PSA. A) The total no of InfoObjects in the communication structure& Extract structure may be different. since InfoObjects can be copied to the communication structure from all the extract structures. Uses? A) When we want to delete the data in InfoProvider & again want to re-load the data. Replacement path. Q) Difference between Filters & Conditioning? A) Q) What is NODIM? A) For example it converts 5lts + 5kgs = 10. Q) What is the use of Filters? A) It Restricts Data.Q) Variables in Reporting? A) Characteristics values. Authorizations. Customer Exit Q) Why we use this RSRP0001 Enhancement? A) For enhancing the Customer Exit in reporting. Q) Variable processing types in Reporting? A) Manual. less than or equal etc. Hierarchies. by adding relevant colors u can get pink. Q) Why SAPLRSAP is used? . Q) What is the use of Conditioning? A) To retrieve data based on particular conditions like less than.. greater than. Hierarchy nodes& Formula elements. Text. SAP Exit. Q) What for Exception's? How can I get PINK color? A) To differentiate values with colors. When you create an InfoSet. Navigating in a BW to an InfoSet Query. InfoSets are based on logical databases. 
Q) Can Favorites accessed by other users? A) No. Choose InfoObjects or ODS objects as data sources. table join. InfoSets determine the tables or fields in these tables that can be referenced by a report. Q) What is InfoSet? A) An InfoSet is a special view of a dataset. SAP Query includes a component for maintaining InfoSets. _The InfoSet Query functions allow you to report using flat data tables (master data reporting). and is used by SAP Query as a source data. they need authorization. using one or more ODS objects or InfoObjects. These can be connected using joins. Q) What are workbooks & uses? A) Q) Where are workbooks saved? A) Workbooks are saved in favorites. that is Connected as a data mart. You can also drill-through to BEx queries and InfoSet Queries from a second BW system. table. . In most cases.A) We use these function modules for enhancing in r/3. and sequential file. such as logical database. a DataSource in an application system is selected. Asynchronous update (V2 update) Document update and the statistics update take place separately in different tasks. the V3 collective statistics update must be scheduled as a job.__You define the data sources in an InfoSet. A full production environment. However. Collective update (V3 update) Again. An InfoSet can contain data from one or more tables that are connected to one another by key fields. in contrast to the V2 update. Scheduling intervals should be based on the amount of activity on a particular OLTP system. __The data sources specified in the InfoSet form the basis of the InfoSet Query. Q) LO's? A) Synchronous update (V1 update) Statistics update is carried out at the same time as the document update in the same task. Successfully scheduling the update will ensure that all the necessary information Structures are properly updated when new or existing documents are processed. document update is separate from the statistics update. 
with hundreds of transactions per hour may have to be updated every 15 to 30 minutes. For example. . a development system with a relatively low or no volume of new documents may only need to run the V3 update on a weekly basis. At this screen. flag the radio button 'All' and hit enter. your information structures will be current and overall performance will be improved. It is possible to verify that all V3 updates are successfully completed via transaction SM13. This transaction will take you to the ³UPDATE RECORDS: MAIN MENU´ screen. enter asterisk as your user (for all users). Any outstanding V3 updates will be listed. While a non-executed V3 update will not hinder your OLTP system. by administering the V3 update jobs properly. .SAP standard background job scheduling functionality may be used in order to schedule the V3 updates successfully. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
Minimum Cost to connect the graph by choosing any vertices that have a cost of at least 0

Introduction

The problem states that we are given a graph, say G, with N vertices and M edges. We are also given a cost[] for each vertex. The task is to connect this undirected graph at minimum cost, where the cost of connecting two vertices i and j is cost[i] + cost[j]. We can only join two vertices if the cost of each vertex is at least 0. If it is not possible to connect the graph, print -1; otherwise, print the minimum cost.

Sample Examples

Example 1:
Input: N = 6. Let the given graph be:
Cost[] = {2, 1, 5, 3, 2, 9}
Output: 7
Explanation: We connect Node 2 and Node 4, cost = 1 + 3 = 4, and we connect Node 2 and Node 5, cost = 1 + 2 = 3. Now we have connected the graph, and the total cost = 4 + 3 = 7.

Example 2:
Input: N = 6. Let the given graph be:
Cost[] = {2, 1, 5, -3, -2, -9}
Output: -1
Explanation: It is impossible to connect the graph, as the costs for Node 4, Node 5, and Node 6 are less than 0.

Solution Approach (Disjoint Set Union)

This problem will be solved using the Disjoint Set Union data structure. The idea is to store the minimum non-negative value of each connected component of the graph in a set. If any component's minimum value is less than 0, print -1. Otherwise, find the minimum among all these component minimums, say minStore, then sum each of the other component minimums with minStore and print the sum. This is the minimum cost to connect the graph.

Algorithm

- Make parent, rank, and minVal arrays to store each node's parent, compare ranks, and track the minimum element of each connected component, respectively.
- Initialize the parent array as parent[i] = i, and minVal[i] = cost[i-1], in the range [1, N].
- Find the union of the endpoints of each edge of the graph by calling the function UnionTwoNodes(x, y) for every edge.
- Store the root of each node from the parent array into an unordered_set, say s, so as to get the representative of each connected component.
- Iterate over the unordered_set s, find the minimum value, store it in minStore, and keep a check on whether any component minimum is negative.
- If the graph is already connected, print 0; if there is a negative value, print -1.
- Otherwise, accumulate the sum of each component minimum plus minStore into the ans variable.
- Finally, print the ans variable.

Implementation in C++

// C++ function to find the minimum cost to connect the graph
#include <bits/stdc++.h>
using namespace std;

// function to find the parent of the node in a graph
int findParent(int x, int *parent)
{
    if (parent[x] == x)
        return x;
    return findParent(parent[x], parent);
}

// function to find the union of the two nodes
void UnionTwoNodes(int *parent, int *rank, int *minVal, int x, int y)
{
    // Finding parent of Node x
    x = findParent(x, parent);
    // Finding parent of Node y
    y = findParent(y, parent);

    // If rank of both the nodes is the same, simply increase the rank of the first node
    if (rank[x] == rank[y])
        rank[x]++;

    if (rank[x] > rank[y]) {
        parent[y] = x;
        // updating the minVal array to store the minimum value greater than 0.
        if (minVal[x] < 0 && minVal[y] < 0) {
            minVal[x] = max(minVal[x], minVal[y]);
        }
        else if (minVal[x] < 0 && minVal[y] >= 0) {
            minVal[x] = minVal[y];
        }
        else if (minVal[x] >= 0 && minVal[y] >= 0) {
            minVal[x] = min(minVal[x], minVal[y]);
        }
    }
    else {
        parent[x] = y;
        // updating the minVal array to store the minimum value greater than 0.
        if (minVal[x] < 0 && minVal[y] < 0) {
            minVal[y] = max(minVal[x], minVal[y]);
        }
        else if (minVal[x] >= 0 && minVal[y] < 0) {
            minVal[y] = minVal[x];
        }
        else if (minVal[x] >= 0 && minVal[y] >= 0) {
            // y is the new root in this branch, so the merged minimum belongs in minVal[y]
            minVal[y] = min(minVal[x], minVal[y]);
        }
    }
}

// function to find the minimum cost to connect the graph
int findMinCost(vector<pair<int, int>> &graph, int *cost, int n, int m)
{
    // declaring parent array to store the parent for every node,
    // initially every node is its own parent
    int *parent = new int[n + 1];
    // rank array to store the rank of every node
    int *rank = new int[n + 1];
    // stores the minimum value of each set
    int *minVal = new int[n + 1];

    for (int i = 1; i <= n; i++) {
        // initially every node is its own parent
        parent[i] = i;
        // initially the rank of every node is 0.
        rank[i] = 0;
        minVal[i] = cost[i - 1];
    }

    for (auto it = graph.begin(); it != graph.end(); it++) {
        // grouping the nodes that are connected, by making their parent node the same
        UnionTwoNodes(parent, rank, minVal, it->first, it->second);
    }

    // set to store the root of each connected component
    // (use findParent, since parent[i] may not be the root after later merges)
    unordered_set<int> s;
    for (int i = 1; i <= n; i++) {
        s.insert(findParent(i, parent));
    }

    // variable to store the min value for the set with its index.
    pair<int, int> minStore = {0, INT_MAX};

    // flag variable that keeps the check that no component minimum is less than 0.
    // if less than 0, then true, otherwise false.
    bool flag = false;
    for (auto it = s.begin(); it != s.end(); it++) {
        // if minVal is less than 0,
        if (minVal[*it] < 0) {
            // mark flag as true
            flag = true;
        }
        if (minStore.second > minVal[*it]) {
            minStore.first = *it;
            minStore.second = minVal[*it];
        }
    }

    // it will store the final answer, the minimum cost to add the edges
    int ans = 0;
    if (flag == false) {
        for (auto it = s.begin(); it != s.end(); it++) {
            if (*it != minStore.first) {
                ans += (minVal[*it] + minStore.second);
            }
        }
    }
    else if (flag && s.size() == 1) {
        ans = 0;
    }
    else {
        ans = -1;
    }
    return ans;
}

int main()
{
    int n = 6;
    // initial given graph
    vector<pair<int, int>> graph = {{1, 2}, {1, 3}, {5, 6}};
    // cost of the vertex to be connected
    int cost[] = {2, 5, 1, 3, 2, 9};
    cout << findMinCost(graph, cost, n, graph.size()) << endl;
}

Output: 7

Complexity Analysis

Time Complexity: O(M log N), where M is the number of edges and N is the number of nodes, since each find runs in O(log N) with union by rank.
Space Complexity: O(N). Since we have created only linear arrays of size N, the total space complexity to find the minimum cost to connect the graph is O(N).

Frequently asked questions

Q1. What is the difference between a directed and an undirected graph?
Ans. A directed graph contains ordered pairs of vertices, while an undirected graph contains unordered pairs of vertices. In a directed graph, edges indicate a direction between vertices; in an undirected graph, they do not.

Q2. What is the maximum number of edges in an undirected graph of N nodes?
Ans. Each node can have an edge with every other n-1 node in an undirected graph. Therefore the total number of edges possible is n*(n-1)/2.

Q3. Which data structures are used in the BFS and DFS of a graph?
Ans. In BFS a queue data structure is used, while in DFS a stack is used.

Key takeaways

In this article, we have discussed the approach to connect the graph at minimum cost by selecting vertices that have a cost of at least 0. We hope you understand the problem and solution properly.
Now you can do more similar questions. If you are a beginner, interested in coding, and want to learn DSA, you can look at our guided path for DSA, which is free!

Thank you for reading. Until then, Keep Learning and Keep Coding.
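To see the cost formula from the approach in isolation (every remaining component is joined through its cheapest vertex to the globally cheapest one, and each edge costs cost[i] + cost[j]), here is a small JavaScript sketch. The helper name minConnectCost and the idea of feeding it precomputed per-component minimums are ours, not part of the article's C++ code:

```javascript
// Hypothetical helper: componentMins holds, for each connected component,
// its cheapest vertex cost (negative if the component has no vertex >= 0,
// mirroring the minVal[] bookkeeping in the article's DSU code).
function minConnectCost(componentMins) {
    if (componentMins.length <= 1) return 0;          // already connected
    if (componentMins.some(m => m < 0)) return -1;    // some component has no usable vertex
    const global = Math.min(...componentMins);
    let total = 0;
    let skippedCheapest = false;                      // the cheapest component pays nothing by itself
    for (const m of componentMins) {
        if (!skippedCheapest && m === global) { skippedCheapest = true; continue; }
        total += m + global;                          // edge cost = cost of both endpoints
    }
    return total;
}

// Example 1 from the article: components {1,2,3}, {4}, {5,6} with minimums 1, 3, 2
console.log(minConnectCost([1, 3, 2])); // 7
console.log(minConnectCost([2, -3]));   // -1
```

This reproduces the article's answers without the union-find bookkeeping, which can help when checking the DSU implementation against small hand-worked cases.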
QML WebEngineView does not return http page errors - Jimmybobby

Hi. I am new to QML but it seems straightforward to me. I am trying to create a simple single-window browser that displays our webpage and runs on an ARM-based Linux system. I have gotten it to build both on our Yocto build system and on Ubuntu Linux, with versions 5.5.1 and 5.6.2 on Ubuntu. I need to be able to detect the error page so I can then load a custom HTML page.

I just used the example code to start with and added a couple of keys to change the page and see the responses; the code is shown below. One action changes to an invalid page which returns the HTTP 404 error page, and it reports nothing in onLoadingChanged: no error code and no domain. So then I noticed settings.errorPageEnabled, which I then set to false, hoping it would now give me an error code, etc. Yet it does not: it always displays the error page and does not report any error codes. It just says the page loaded correctly when it clearly hit a 404 HTTP error (yes, it loaded the error page correctly). I have been digging around in the source code, and the errorPageEnabled setting seems to be some sort of test attribute. But it seems like a standard thing to want to know whether the web page you just set actually loaded correctly. So how do I make the API function work, or is it broken?

import QtQuick 2.0
import QtQuick.Window 2.1
import QtWebEngine 1.2
import QtQuick.Controls 1.0

Window {
    id: mainWindow
    width: 800
    height: 600
    visible: true

    WebEngineView {
        id: mywebEngineView
        anchors.fill: parent
        settings.errorPageEnabled: false
        url: ""

        onLoadingChanged: {
            print("onloadingchanged called")
            if (loadRequest.status === WebEngineLoadRequest.LoadFailedStatus) {
                print(loadRequest.errorCode)
                print(loadRequest.errorString)
                print("url " + loadRequest.url + " Domain " + loadRequest.errorDomain)
                if (loadRequest.errorDomain == WebEngineView.HttpErrorDomain) {
                    print("http error")
                }
                print("load status failed")
            }
        }
    }

    Action {
        id: refreshTrademe
        shortcut: "Ctrl+L"
        onTriggered: {
            mywebEngineView.settings.errorPageEnabled = false;
            mywebEngineView.url = ""
        }
    }

    Action {
        id: errorPageTest
        shortcut: "Ctrl+M"
        onTriggered: {
            print("ctrl+M")
            mywebEngineView.url = ""
        }
    }
}

- Jimmybobby

The solution for me was to use an XMLHttpRequest, which does return the HTTP status codes, to request my page. Using that information it can then decide which page the WebEngineView will display: either a real page from the server or a custom error page. It would be nice if the loadRequest errorCode etc. actually worked, as during my limited testing it would always return nothing useful: no errorDomain, no errorCode and no errorString.
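The status-check pattern described in the answer above can be sketched like this. The helper name pickUrl and the fallback URL are invented for illustration; the XMLHttpRequest object itself is the standard one that QML exposes to its JavaScript:

```javascript
// Decide which page the WebEngineView should show, given the HTTP status
// returned by a preflight XMLHttpRequest (status 0 typically means a network error).
function pickUrl(status, targetUrl, errorUrl) {
    return (status >= 200 && status < 400) ? targetUrl : errorUrl;
}

// Inside QML this would be wired up roughly as follows (untested sketch):
//   var xhr = new XMLHttpRequest();
//   xhr.open("HEAD", target);
//   xhr.onreadystatechange = function () {
//       if (xhr.readyState === XMLHttpRequest.DONE)
//           mywebEngineView.url = pickUrl(xhr.status, target, "qrc:/error.html");
//   };
//   xhr.send();

console.log(pickUrl(200, "http://example.com/", "qrc:/error.html")); // http://example.com/
console.log(pickUrl(404, "http://example.com/", "qrc:/error.html")); // qrc:/error.html
```

A HEAD request keeps the preflight cheap, though a server that rejects HEAD would need a GET instead; that trade-off is a judgment call, not something the original thread settles.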
Chatlog 2011-06-01
From RDF Working Group Wiki, revision as of 17:12, 1 June 2011. See the original RRSAgent log or the nicely formatted version.
14:21:19 <RRSAgent> RRSAgent has joined #rdf-wg
14:21:19 <RRSAgent> logging to
14:21:21 <trackbot> RRSAgent, make logs world
14:21:21 <Zakim> Zakim has joined #rdf-wg
14:21:23 <trackbot> Zakim, this will be 73394
14:21:23 <Zakim> ok, trackbot; I see SW_RDFWG()11:00AM scheduled to start in 39 minutes
14:21:24 <trackbot> Meeting: RDF Working Group Teleconference
14:21:24 <trackbot> Date: 01 June 2011
14:27:12 <Scott> Scott has joined #rdf-wg
14:30:55 <cygri> cygri has joined #rdf-wg
14:51:14 <Scott_Bauer> Scott_Bauer has joined #rdf-wg
14:52:16 <Guus> Guus has joined #rdf-wg
14:53:25 <Zakim> SW_RDFWG()11:00AM has now started
14:53:32 <Zakim> +hsbauer
14:53:42 <Zakim> +Guus
14:53:57 <Scott_Bauer> zakim, hsbauer is me
14:53:57 <Zakim> +Scott_Bauer; got it
14:55:07 <mbrunati> mbrunati has joined #rdf-wg
14:55:40 <Zakim> +davidwood
14:56:12 <davidwood1> zakim, who is here?
14:56:12 <Zakim> On the phone I see Scott_Bauer, Guus, davidwood 14:56:59 <ivan> zakim, dial ivan-voip 14:56:59 <Zakim> ok, ivan; the call is being made 14:57:01 <Zakim> +Ivan 14:57:28 <SteveH_> SteveH_ has joined #rdf-wg 14:57:30 <ericP> Zakim, please dial ericP-office 14:57:30 <Zakim> ok, ericP; the call is being made 14:57:32 <Zakim> +EricP 14:57:45 <Zakim> +??P25 14:58:06 <AZ> AZ has joined #rdf-wg 14:58:21 <AndyS> AndyS has joined #rdf-wg 14:58:40 <Olivier> Olivier has joined #rdf-wg 14:58:40 <Zakim> +??P22 14:58:45 <pchampin> pchampin has joined #rdf-wg 14:58:47 <Zakim> +??P26 14:58:53 <mbrunati> zakim, ??P22 is me 14:58:53 <Zakim> +mbrunati; got it 14:58:56 <SteveH_> Zakim, ??P26 is me 14:58:56 <Zakim> +SteveH_; got it 14:59:12 <cmatheus> cmatheus has joined #rdf-wg 14:59:19 <Zakim> +AlexHall 14:59:24 <AlexHall> AlexHall has joined #rdf-wg 14:59:47 <pchampin> zakim, who is here? 14:59:47 <Zakim> On the phone I see Scott_Bauer, Guus, davidwood, Ivan, EricP, ??P25 (muted), mbrunati, SteveH_, AlexHall 15:00:05 <Zakim> +FabGandon 15:00:15 <pchampin> zakim, ??P25 is me 15:00:15 <Zakim> +pchampin; got it 15:00:22 <Zakim> +??P30 15:00:42 <Zakim> +??P3 15:00:42 <AndyS> zakim, ??P30 is me 15:00:44 <Zakim> +AndyS; got it 15:00:50 <Zakim> +pfps 15:01:03 <cmatheus> zakim, ??P30 is me 15:01:03 <Zakim> I already had ??P30 as AndyS, cmatheus 15:01:07 <Zakim> +wcandillon 15:01:19 <pfps> pfps has joined #rdf-wg 15:01:20 <AZ> zakim, wcandillon is me 15:01:20 <Zakim> +AZ; got it 15:01:22 <Zakim> +LeeF 15:01:35 <pfps> zakim, who is on the phone? 15:01:35 <Zakim> On the phone I see Scott_Bauer, Guus, davidwood, Ivan, EricP, pchampin (muted), mbrunati, SteveH_, AlexHall, FabGandon, AndyS, ??P3, pfps, AZ, LeeF 15:01:42 <Zakim> +mhausenblas 15:01:42 <AZ> Yes 15:01:46 <davidwood1> zakim, ??P30 is really me. Really! Please let me have it. 
15:01:49 <Zakim> I don't understand you, davidwood1
15:01:56 <davidwood1> Zakim, I know :)
15:01:57 <Zakim> I'm glad that smiley is there, davidwood1
15:02:01 <cmatheus> zakim, ??P3 is me
15:02:05 <Zakim> +cmatheus; got it
15:02:48 <davidwood1> Chair: David Wood
15:02:51 <cygri> zakim, who is on the phone?
15:02:52 <davidwood1> Zakim, who is here?
15:02:53 <cygri> zakim, mhausenblas is temporarily me
15:02:54 <pchampin> zakim, mute me
15:02:56 <Zakim> +cygri; got it
15:02:56 <Zakim> pchampin should now be muted
15:03:03 <Zakim> +??P36
15:03:10 <davidwood1> Scribe: Alex Hall
15:03:21 <NickH> Zakim, ??P36 is BBC
15:03:21 <Zakim> +BBC; got it
15:03:23 <davidwood1> Scribenick: AlexHall
15:03:30 <Zakim> +JeremyCarroll
15:03:49 <Zakim> +Souri
15:04:22 <AlexHall> regrets: axel, pat, mischat, souri
15:04:25 <AlexHall> topic: Admin
<AlexHall> subtopic: Last week's minutes
15:04:53 <zwu2> zwu2 has joined #rdf-wg
15:04:55 <AlexHall> davidwood: there were several resolutions from last meeting, please review the minutes.
15:05:02 <zwu2> zakim, code?
15:05:02 <Zakim> the conference code is 73394 (tel:+1.617.761.6200 tel:+33.4.26.46.79.03 tel:+44.203.318.0479), zwu2
15:05:13 <AlexHall> RESOLVED: minutes from last meeting accepted
15:05:22 <pfps> pfps has joined #rdf-wg
15:05:29 <ivan> zakim, who is noisy?
15:05:30 <pfps> minutes look OK to me
15:05:32 <Zakim> +zwu2
15:05:38 <zwu2> sorry I am late
15:05:39 <Zakim> ivan, listening for 10 seconds I heard sound from the following: AZ (16%), Guus (5%), davidwood (59%), Ivan (25%)
15:05:50 <ivan> zakim, mute me
15:05:50 <Zakim> Ivan should now be muted
15:06:19 <NickH> Zakim, BBC also has NickH
15:06:19 <Zakim> +NickH; got it
15:06:24 <NickH> Zakim, BBC also has yvesr
15:06:24 <Zakim> +yvesr; got it
15:06:30 <AlexHall> cygri: still working on writing up named graph proposals for action-25
15:06:46 <AlexHall> ...
happy to keep action open or accept help from others
15:07:01 <JeremyCarroll> JeremyCarroll has joined #rdf-wg
15:07:08 <pchampin> ACTION-25?
15:07:08 <trackbot> ACTION-25 -- Richard Cyganiak to write up the different options re ISSUE-15 -- due 2011-04-13 -- OPEN
15:07:08 <trackbot>
15:07:39 <Zakim> +sandro
15:07:55 <AlexHall> cygri: action-51 text is implemented in local copy and waiting for hg repository
15:08:11 <Zakim> +OpenLink_Software
15:08:18 <MacTed> Zakim, OpenLink_Software is temporarily me
15:08:18 <Zakim> +MacTed; got it
15:08:20 <MacTed> Zakim, mute me
15:08:20 <Zakim> MacTed should now be muted
15:08:45 <cygri> trackbot, close ACTION-51
15:08:45 <trackbot> ACTION-51 Implement ISSUE-40 resolution in RDF Concepts Editor's draft; see and replies for text closed
15:08:55 <AlexHall> guus: still trying to figure out what purpose of action-51 was
15:09:04 <AlexHall> s/action-51/action-47
15:09:26 <AlexHall> topic: Language tags
15:09:26 <davidwood1> ISSUE-64, RFC 3066 or RFC 5646 for language tags?
15:09:26 <davidwood1> Richard's proposal to resolve:
15:09:26 <trackbot> ISSUE-64 RFC 3066 or RFC 5646 for language tags? notes added
15:09:44 <AlexHall> davidwood: Richard has proposal to resolve language tag issue
15:09:52 <AZ> AZ has joined #rdf-wg
15:10:05 <JeremyCarroll> q+ to express surprise at the current text
15:10:14 <AlexHall> cygri: spec currently refers to obsoleted rfc 3066 for language tags
15:10:31 <AlexHall> ... proposal is to use latest rfc 5646
15:10:38 <davidwood1> Pat's reformulation/explanation:
15:10:56 <davidwood1> Lee F also expressed support:
15:11:10 <AlexHall> ... the new RFC has two notions of validity: well-formedness (grammar only) and validity (lang tag actually exists)
15:11:52 <AlexHall> ... add a note that the previous RFC allowed lang tags that are no longer allowed under the latest version
15:12:01 <ericP> +1 ref'ing 5646, +1 to holding at well-formedness, +1 to explanatory note
15:12:06 <yvesr> +1
15:12:07 <AlexHall> ... adopt the loosest notion of well-formedness
15:12:36 <AlexHall> davidwood: do you agree with pat's latest note on the mailing list?
15:12:49 <AlexHall> cygri: seems to be about a different issue
15:13:13 <AlexHall> davidwood: apologies, it was a different issue
15:14:23 <AlexHall> ???: when i read this note, it prompted me to drill down into original text around lang tags in the spec
15:14:57 <AZ> s/???/JeremyCarroll
15:15:03 <AlexHall> ... at some point there was a phrase to reference RFC 3066 or its successors
15:15:20 <AlexHall> ... not sure what that phrase was dropped, would like to find out why
15:15:39 <FabGandon> FabGandon has joined #rdf-wg
15:15:43 <AZ> s/what/why/
15:16:07 <ericP> refs to unicode serve as a precedent for "or it's successors", but there are contracts which allow forward-thinking parsers to know what could be valid in the next decade or so
15:16:25 <AlexHall> ... richard's point about validity vs. well-formedness was well taken and i support the proposal.
15:16:51 <davidwood1>
15:17:03 <pfps> OK by me
15:17:25 <pfps> ... not that I care .... Issue 12, on the other hand ...
15:17:29 <ericP> +1
15:17:31 <cygri> PROPOSAL: Resolve ISSUE-64 by updating RDF concepts as per Richard's proposal:
15:17:31 <SteveH> +1
15:17:34 <AndyS> OK if syntax restriction - not depending on registry state
15:17:35 <pfps> +0
15:17:36 <JeremyCarroll> +1
15:17:36 <davidwood1> +1
15:17:36 <mbrunati> +1
15:17:38 <ivan> +1
15:17:41 <zwu2> +1
15:17:43 <sandro> +1
15:17:55 <pchampin> +1
15:17:59 <cmatheus> +1
15:18:03 <AZ> +1
15:18:04 <yvesr> +1
15:18:07 <Souri> Souri has joined #rdf-wg
15:18:39 <Zakim> -Scott_Bauer
15:19:07 <Zakim> +Scott_Bauer
15:19:12 <AlexHall> RESOLVED: Resolve ISSUE-64 by updating RDF concepts per Richard's proposal
15:19:23 <ivan> q+
15:19:26 <ivan> ack ivan
15:19:31 <cygri> ACTION: cygri to implement ISSUE-64 resolution
15:19:31 <trackbot> Created ACTION-54 - Implement ISSUE-64 resolution [on Richard Cyganiak - due 2011-06-08].
15:19:37 <davidwood1> Proposed text on replacing URIref with IRI
15:19:37 <davidwood1>
15:19:37 <davidwood1> Related email:
15:19:47 <davidwood1> ack JeremyCarroll
15:19:47 <Zakim> JeremyCarroll, you wanted to express surprise at the current text
15:19:47 <JeremyCarroll> ack
15:19:56 <Zakim> +[Sophia]
15:20:04 <ivan>
15:20:14 <FabGandon> zakim, Sophia is me
15:20:14 <Zakim> +FabGandon; got it
15:20:30 <AlexHall> ivan: before we move on to other major issues, I have the hg repository set up and link is posted in IRC
15:21:10 <davidwood1> q?
15:21:39 <AlexHall> topic: Replacing URIref with IRI
15:21:47 <AlexHall> davidwood: This is Richard's text
15:22:24 <AlexHall> cygri: link is posted in minutes. issue is that we need to replace references to URI Reference in Concepts with references to IRI
15:22:48 <AlexHall> ... fortunately this simplifies things because IRI defines things which were previously defined in RDF
15:23:00 <AlexHall> ... main issue is what to do with the left-over notes in Concepts
15:23:29 <AlexHall> ... there are characters which were allowed in URIrefs which are no longer allowed in IRIs
15:23:37 <davidwood1> q+ to discuss IPv6 in ihost:
15:23:47 <AlexHall> ... add note to indicate that these are no longer allowed except in %-encoded form
15:24:21 <AlexHall> ... also a note to discourage %-encoded characters in old text, not sure this is a good idea
15:24:34 <AlexHall> q+ to discuss percent-encoding
15:24:51 <davidwood1> ack davidwood
15:24:51 <Zakim> davidwood, you wanted to discuss IPv6 in ihost:
15:24:59 <AlexHall> ... would like feedback from others who have looked into it
15:25:29 <AndyS> There is %-enc text in RFC => (summary) use % only as necessary and not wildly.
15:25:36 <JeremyCarroll> q+ to suggest editors' draft should be updated with new text and public review sought
15:25:47 <ericP> q?
15:25:52 <AndyS> IPv6 are legal using []
15:25:56 <AlexHall> davidwood: occurred to me since i'm dealing with IPv6 issues... IRI grammar seems to allow host names and IPv4 addresses but not IPv6
15:26:12 <AlexHall> ... anybody know why this is?
15:26:23 <AZ> AZ has joined #rdf-wg
15:26:26 <davidwood1> IP-literal = "[" ( IPv6address / IPvFuture ) "]"
15:26:31 <SteveH> right
15:26:34 <AlexHall> ???: IRI allows IPv6 addresses in square brackets
15:26:36 <SteveH> I've actually used them :)
15:26:46 <davidwood1> ack AlexHall
15:26:46 <Zakim> AlexHall, you wanted to discuss percent-encoding
15:26:50 <AndyS> RFC2732 adds them
15:27:04 <AZ> s/???/SteveH/
15:27:38 <AndyS> RFC 3986 page 19
15:28:16 <AndyS> section 3.2.2. Host
15:28:22 <JeremyCarroll> q+ to say last note is too long!
15:30:23 <pchampin> is there any reference that we could refer to regarding this notion of "canonical IRI"?
15:30:27 <davidwood1> ack JeremyCarroll
15:30:27 <Zakim> JeremyCarroll, you wanted to suggest editors' draft should be updated with new text and public review sought and to say last note is too long!
15:30:50 <cygri> q+
15:30:56 <pfps> +1 to Jeremy - it is better to defer than to copy
15:31:34 <cygri> q-
15:31:36 <AlexHall> AlexHall: Intent of the original text with %-encoding seemed to be to avoid interoperability issues, so I agree with the new proposal in this regard.
15:31:48 <pfps> -1 to David - informative lists tend to become too normative
15:32:07 <cygri> q+ to ask jeremy how much is too long
15:32:14 <AlexHall> JeremyCarroll: Would like to simply defer to IRI section 5 for normalization
15:32:31 <AlexHall> ... giving a long list here runs the risk of people thinking this is exhaustive or normative
15:33:06 <AlexHall> davidwood: having the list there is nice as a summary so people don't have to hunt down the list themselves
15:33:26 <AlexHall> cygri: the intent here is that they are informative, not normative, and this will be explicitly noted in the document.
15:34:09 <ericP> q+ to propose adding "While RDF does not require normalization or IRIs, using only normalized IRI forms will improve the chances that non-RDF tools will consume and produce the same IRIs and that other parties will reproduce the exact spelling of these IRIs."
15:34:18 <davidwood1> q?
15:34:22 <cygri> q-
15:34:36 <AlexHall> JeremyCarroll: Historically this section has been note-heavy, would prefer to see this stuff moved into a new section 3.7
15:34:53 <Zakim> -AZ
15:34:54 <AZ> AZ has joined #rdf-wg
15:35:05 <pfps> moving to an informative section would help a lot!
15:35:25 <davidwood1> ack ericP
15:35:25 <Zakim> ericP, you wanted to propose adding "While RDF does not require normalization or IRIs, using only normalized IRI forms will improve the chances that non-RDF tools will consume and
15:35:29 <Zakim> ... produce the same IRIs and that other parties will reproduce the exact spelling of these IRIs."
15:35:38 <Zakim> +AZ
15:35:58 <AlexHall> ericP: seems the root issue is that producing non-normalized IRIs decreases the chance that other tools will produce the same form
15:36:12 <cygri> q+
15:36:15 <AndyS> And other RDF apps.
15:36:20 <JeremyCarroll> with that text we are well on the way to section 3.7
15:36:23 <davidwood1> ack cygri
15:36:36 <AlexHall> ... propose to add some text (quoted in IRC) to explain the motivations for this note.
15:37:08 <AlexHall> cygri: prefer to avoid motivations and give just a concise summary
15:37:34 <AlexHall> davidwood: would like to cater to people who don't want to read through all the specs to get a good understanding
15:38:02 <AlexHall> ... most concerns at this point seem to be editorial in nature
15:38:29 <AlexHall> cygri: as soon as the working draft goes live this content will be added and i encourage further comments
15:38:48 <AlexHall> ... don't think we need a resolution now but want interested people to keep an eye on it.
15:39:03 <AlexHall> Topic: Revisit RDF Postponed Issues
15:39:19 <AlexHall> davidwood: This always seems to get pushed down to the bottom of the agenda
15:39:36 <AlexHall> ... let's take a few minutes to knock some of these down now
<AlexHall> subtopic: ISSUE-55 Revisit "Request for a richer vocabulary for languages"
15:39:39 <davidwood1>
15:40:01 <pfps> Issue-55 - not only no, but xxxx NO! the request is incorrect, anyway
15:40:22 <pfps> s/xxxx/hell/
15:40:24 <LeeF> seconded
15:40:34 <SteveH> yeah, lets not do that :)
15:40:39 <AlexHall> PROPOSED: To close ISSUE-55 as this is not considered the duty of this group
15:40:39 <zwu2> +1 close it
15:40:41 <ivan> agreed with closing
15:40:41 <cygri> +1
15:40:42 <pfps> +1
15:40:43 <mbrunati> +1
15:40:44 <SteveH> +1
15:40:53 <yvesr> +1
15:40:57 <AZ> +1
15:40:58 <pchampin> +1
15:40:59 <cmatheus> +1
15:41:02 <LeeF> ISSUE-55?
15:41:02 <trackbot> ISSUE-55 -- Revisit "Request for a richer vocabulary for languages" -- raised
15:41:02 <trackbot>
15:41:03 <NickH> +1
15:41:59 <JeremyCarroll>
15:42:19 <AlexHall> JeremyCarroll: we should close saying lang-matches from SPARQL addresses this issue
15:43:05 <ivan> issue-56?
15:43:06 <trackbot> ISSUE-56 -- Revisit "A request for a semantics free predicate for comments" -- raised
15:43:06 <trackbot>
15:43:06 <davidwood1>
15:43:07 <AlexHall> RESOLVED: To close ISSUE-55 as this is not considered the duty of this group
<AlexHall> subtopic: ISSUE-56 Revisit "A request for a semantics free predicate for comments"
15:43:20 <davidwood1> ISSUE-56 Revisit "A request for a semantics free predicate for comments"
15:43:20 <trackbot> ISSUE-56 Revisit "A request for a semantics free predicate for comments" notes added
15:43:21 <pfps> +q
15:43:25 <Zakim> -Scott_Bauer
15:43:51 <Zakim> +Scott_Bauer
15:43:52 <davidwood1> ack pfps
15:43:53 <SteveH> rdfs:comment?
15:44:02 <JeremyCarroll> no - not rdfs:comment
15:44:17 <JeremyCarroll> rdf:universal
15:44:19 <SteveH> ok, then <!-- --> / #
15:44:27 <AlexHall> pfps: this is from Ian, there was annoyance in the OWL wg that rdfs:comment has semantics
15:44:44 <AlexHall> ... that ship has sailed, rdfs:comment is there and has semantics
15:45:05 <AlexHall> ... no third party is allowed to add new predicates to the RDF namespace
15:45:13 <AlexHall> ... we should close it
15:45:32 <cygri> +1 to closing
15:45:33 <pfps> +1
15:45:35 <ericP> +0
15:45:36 <AndyS> +1
15:45:37 <AZ> +1
15:45:37 <zwu2> +1
15:45:37 <mbrunati> +1
15:45:39 <JeremyCarroll> +0
15:45:39 <Guus> +1 to closing
15:45:40 <yvesr> +1
15:45:42 <SteveH> +0
15:45:44 <cmatheus> +1
15:45:45 <AlexHall> PROPOSED: Close ISSUE-56, we have no intention of addressing this.
15:46:15 <pchampin> +0
15:46:21 <AlexHall> RESOLVED: Close ISSUE-56, we have no intention of addressing this.
<AlexHall> subtopic: ISSUE-57 Revisit "A request to define subset of RDFS with a more conventional layered architecture"
15:47:23 <davidwood1>
15:47:29 <pfps> q+ for issue-57
15:47:41 <davidwood1> ISSUE-57 Revisit "A request to define subset of RDFS with a more conventional layered architecture"
15:47:41 <trackbot> ISSUE-57 Revisit "A request to define subset of RDFS with a more conventional layered architecture" notes added
15:48:06 <AlexHall> is anybody speaking right now?
15:49:28 <pfps> q+
15:49:40 <JeremyCarroll> +1 to peter
15:49:50 <JeremyCarroll> close it, reject
15:49:54 <AlexHall> davidwood: instead of continuing, we should open it and revisit when we finish the specs
15:50:20 <AndyS> What would a more layered architecture look like? It's not just change of exposition.
15:50:33 <pchampin> q+ to ask about RDFS-DL ?
15:50:49 <davidwood1> ack pfps
15:50:49 <Zakim> pfps, you wanted to discuss issue-57 and to
15:50:51 <AlexHall> pfps: we won't be addressing this in this WG
15:51:15 <AlexHall> ... there was a request for some defined fragment of RDFS that fits nicely into OWL
15:52:13 <AlexHall> davidwood: sounds like yet another proposal for yet another subset of logical formalism
15:52:13 <pchampin> q-
15:52:14 <JeremyCarroll> q+
15:52:29 <Guus> propose to close by doing nothing, no strong expressed need
15:52:31 <davidwood1> ack JeremyCarroll
15:53:23 <Guus> I don't think we need to spend telecon time on this, we are all in violent agreement :-)
15:54:00 <FabGandon> +1
15:54:03 <ericP> +0
15:54:04 <JeremyCarroll> +1
15:54:05 <AlexHall> PROPOSED: to close ISSUE-57 by stating that it's not in our charter and we have no intention of doing it.
15:54:05 <mbrunati> +1
15:54:07 <SteveH> +1
15:54:08 <pfps> +1 to crush the can
15:54:09 <cygri> +1
15:54:10 <zwu2> +0
15:54:10 <Souri> +1
15:54:14 <AZ> +1
15:54:21 <ivan> 1
15:54:23 <yvesr> +0
15:54:24 <AndyS> +1
15:54:28 <AlexHall> RESOLVED: to close ISSUE-57 by stating that it's not in our charter and we have no intention of doing it.
15:54:33 <cmatheus> +1
15:54:56 <davidwood1> ISSUE-12 Reconcile various forms of string literals
15:54:56 <trackbot> ISSUE-12 Reconcile various forms of string literals (time permitting) notes added
15:54:56 <davidwood1>
15:55:16 <AlexHall> Topic: ISSUE-12 (string literals)
15:55:17 <davidwood1> ISSUE-12 Reconcile various forms of string literals (time permitting)
15:55:17 <trackbot> ISSUE-12 Reconcile various forms of string literals (time permitting) notes added
15:55:47 <AlexHall> davidwood: I note that this item is marked as "time-permitting" in the charter
15:55:59 <cygri> q+
15:56:05 <pfps> +1 to keeping Pat simple :-)
15:56:17 <SteveH> "simple" != about 2k of text
15:56:25 <AlexHall> ... who would like to speak for pat and his request to keep it simple?
15:56:27 <davidwood1> ack cygri
15:56:28 <ivan> is pat's mail
15:56:48 <davidwood1> Ivan, that URI gives me a 404
15:56:52 <cygri>
15:56:58 <LeeF> Pat's email just re-expresses Richard's proposal, as far as i can tell.
15:57:02 <cygri>
15:57:03 <AlexHall> cygri: would not like to speak to what pat said, seems to just point out that the last proposal wasn't that complicated
15:57:21 <ivan> sorry
15:57:29 <ivan> is pat's mail
15:57:31 <LeeF> +1 to
15:57:36 <AndyS> Remaining issue is class vs datatype because DATATYPE("foo"@en) =? rdf:TaggedThing
15:57:37 <AlexHall> ... would like to talk about another proposal that addresses only strings without language tag
15:57:50 <AlexHall> ... seems to be agreement on this aspect of it
15:58:28 <SteveH> q+
15:58:58 <AlexHall> ... the proposal is to unify un-tagged string literals is to abolish the untagged plain literal in the abstract syntax and consider "foo" to be syntactic sugar for "foo"^^xsd:string in the concrete syntax
15:59:29 <AlexHall> s/is to abolish/by abolishing/
15:59:31 <AndyS> To Pat's email - if only class, then DATATYPE() => error still?? Seems unhelpful.
15:59:54 <AlexHall> pfps: like this proposal better than previous ones
16:00:46 <Zakim> -FabGandon
16:01:02 <AlexHall> ... have some complaint about the use of rdf:LangTaggedString, not sure it is needed and will require changes to RDF semantics, OWL, and SPARQL
16:01:22 <AZ> AZ has joined #rdf-wg
16:01:42 <FabGandon> FabGandon has left #rdf-wg
16:01:56 <JeremyCarroll> rdf:LangTaggedString = rdfs:Literal - union of all typed literals
16:02:01 <AlexHall> ... thinks it's OK to handle rdf:LTS in OWL but need to verify
16:02:17 <davidwood1> ack SteveH
16:02:32 <AlexHall> ... would like to send a note to the OWL WG
16:02:54 <Souri> Question - Will "abc" still be a valid RDF literal? For example, would it be ok for me present the following triple for insertion: <John> rdfs:label "John" , OR am I obliged to present: <John> rdfs:label "John"^^xsd:string ? Also, can SPARQL query return "John" as a value for a variable?
16:03:02 <LeeF> q+
16:03:11 <AlexHall> SteveH: my concern is that we previously resolved to do just the opposite, to turn xsd:string into plain literals
16:03:26 <LeeF> q-
16:03:43 <LeeF> (was going to ask where this is visible, but then Steve answered it)
16:03:47 <Souri> I agree with Steve's concern
16:03:48 <AlexHall> ... this seems to match what users expect, since most string data in the wild is not typed as xsd:string
16:03:59 <AlexHall> ... there is also concern about how this plays with SPARQL
16:04:38 <AndyS> +1 - SPARQL results must return non-DT string for xsd:string for this else massive surprises (= lots of support costs).
16:05:00 <davidwood1> q?
16:05:15 <Souri> s/agree with/share/
16:05:18 <AlexHall> ... SPARQL results will return lots of unexpected xsd:string datatypes
16:05:33 <cygri> q+
16:05:45 <ivan> q+
16:05:51 <davidwood1> ack cygri
16:06:03 <AlexHall> ... seems odd to go to all this trouble to remove plain literals from the abstract syntax and turn around and strip out xsd:string types on the way out of the system.
16:06:05 <AndyS> q+ to say we can have both : syntax vs semantics
16:06:12 <pchampin> making a difference btw 2 kinds of strings is even more perverse
16:07:01 <AlexHall> cygri: the only syntax that really needs changing is N-Triples
16:07:15 <JeremyCarroll> q+ to suggest predictability also helpful for XML
16:07:40 <AlexHall> ... syntactic sugar in most concrete syntaxes is bad because it reduces predictability
16:08:23 <AlexHall> ... forbidding one datatype in the abstract syntax is even more perverse than forbidding plain literals in one of the concrete syntaxes
16:09:02 <sandro> SteveH, I liked deprecating xs:string until it looked like we could get rid of Plain Literals entirely (via using language-tags-as-datatypes).
16:09:25 <davidwood1> Sandro, right. Me, too.
16:09:31 <ivan> q-
16:09:32 <davidwood1> ack ivan
16:09:35 <Souri> We could have both "abc" and "abc"^^xsd:string as equivalent (identical when compared), but treat the simple literal form "abc" to be the canonical one.
16:09:35 <AlexHall> davidwood: none of the proposals seems to play nicely with all the various levels (semantics, concepts, RDF document set, implementations)
16:09:37 <sandro> I hope people wont be expected to emit the long form.
16:09:40 <davidwood1> ack AndyS
16:09:40 <Zakim> AndyS, you wanted to say we can have both : syntax vs semantics
16:09:50 <SteveH> sandro, I like lang tag -> datatype
16:10:16 <AlexHall> AndyS: agree with Steve's concerns re. xsd:string in concrete syntaxes, think this could be abolished if we're careful
16:10:17 <AlexHall> ...
16:10:22 <JeremyCarroll> +1 to andy / split surface syntax from abstract syntax
16:10:29 <ericP> +1 to short-forms only in the serializations
16:10:39 <davidwood1> ack JeremyCarroll
16:10:39 <Zakim> JeremyCarroll, you wanted to suggest predictability also helpful for XML
16:10:43 <yvesr> +1 to AndyS
16:10:51 <AlexHall> ... but we can split the abstract and surface syntaxes and use different approaches in each
16:11:00 <AZ> AZ has joined #rdf-wg
16:11:18 <AlexHall> Jeremy: If we're making N-Triples, Turtle more predictable then we should also make RDF/XML more predictable.
16:11:57 <AlexHall> davidwood: If we ingest RDF literals, turn them into xsd:string internally, and emit them back as plain literals, is this consistent with what you said:
16:12:13 <AlexHall> AndyS: yes it is, and I can't think of a format where you wouldn't want to do that.
16:12:30 <pfps> NO!
16:12:53 <AlexHall> davidwood: are we re-defining xsd:string?
16:12:57 <AlexHall> everybody: NO!
16:13:39 <Souri> q+
16:13:41 <AlexHall> ???: we're retroactively declaring that all plain literals without language tags are actually xsd:strings
16:13:47 <sandro> At the RDF APIs will be much simpler, Andy.
16:13:55 <sandro> At LEAST, the APIs....
16:14:48 <AndyS> sandro - not so simple?? - are serializers inside or outside such API?
16:14:48 <AlexHall> davidwood: Volunteers to start a wiki page to collect all the places that are affected by Richard & Pat's ISSUE-12 proposal?
16:15:22 <AndyS> Request for a consolidated text for R+P proposal.
16:15:26 <cygri>
16:16:53 <AndyS> Please rename - if we keep untagged-in-surface syntax, it's a bad name.
16:17:08 <AlexHall> cygri: it's been my understanding that this conversation has been only about string literals without language tags
16:17:34 <AZ> AZ has joined #rdf-wg
16:17:39 <AlexHall> davidwood: can you combine this with the language-tag proposal?
16:18:06 <AlexHall> cygri: point was to keep them separate because there is still disagreement about language tags
16:18:45 <davidwood1> q?
16:18:51 <AlexHall> davidwood: request for somebody to add to the wiki page for this proposal a section that collects all documents which will need to change as a result of it.
16:19:39 <davidwood1> ack Souri
16:19:59 <AndyS> SPARQL query is doable / SPARQL XML results is not being opened this time => trickier
16:20:20 <AlexHall> souri: looking at this proposal, intent seems to be that these two forms are declared equivalent
16:20:45 <AlexHall> ... we should define a canonical form for the surface syntax so we know how to output the value in query results
16:20:59 <AlexHall> davidwood: we are over time
16:21:11 <zwu2> bye
16:21:12 <LeeF> regrets next week for semtech
16:21:12 <AlexHall> ... think we've made progress
16:21:13 <yvesr> bye
16:21:13 <pchampin> bye
16:21:13 <AndyS> regrets for next week - semtech
16:21:15 <AlexHall> ... adjourned.
16:21:16 <Zakim> -LeeF
16:21:17 <Zakim> -davidwood
16:21:17 <Zakim> -sandro
16:21:18 <JeremyCarroll> bye
16:21:19 <Zakim> -MacTed
16:21:20 <Zakim> -Ivan
16:21:20 <Zakim> -zwu2
16:21:21 <Zakim> -SteveH_
16:21:21 <Zakim> -EricP
16:21:23 <Zakim> -pchampin
16:21:23 <mbrunati> bye
16:21:25 <Zakim> -cmatheus
16:21:27 <Zakim> -JeremyCarroll
16:21:29 <Zakim> -mbrunati
16:21:31 <Zakim> -pfps
16:21:33 <Zakim> -Scott_Bauer
16:21:35 <Zakim> -AlexHall
16:21:37 <Zakim> -BBC
16:21:39 <Zakim> -AndyS
16:21:41 <Zakim> -Souri
16:21:43 <Zakim> -cygri
16:21:46 <Zakim> -AZ
16:21:47 <Zakim> -Guus
16:21:49 <Zakim> SW_RDFWG()11:00AM has ended
16:21:51 <Zakim> Attendees were Guus, Scott_Bauer, davidwood, Ivan, EricP, mbrunati, SteveH_, AlexHall, FabGandon, pchampin, AndyS, pfps, AZ, LeeF, cmatheus, cygri, JeremyCarroll, Souri, zwu2,
16:21:54 <Zakim> ... NickH, yvesr, sandro, MacTed
16:22:06 <AndyS> AndyS has left #rdf-wg
16:24:28 <SteveH> SteveH has joined #rdf-wg
Here is a simple code snippet for demonstration. Somebody told me that the double-check lock is incorrect: since the flag variable is non-volatile, the compiler is free to reorder the accesses or optimize them away. But I have really seen such code snippets used in many projects. Could somebody shed some light on this matter? I googled and talked about it with my friends, but I still can't find the answer.

#include <iostream>
#include <mutex>
#include <fstream>

// header
namespace DemoLogger
{
    extern bool is_log_file_ready;
    extern std::mutex log_mutex;
    extern std::ofstream log_stream;

    void InitFd()
    {
        if (!is_log_file_ready)
        {
            std::lock_guard<std::mutex> guard(log_mutex);
            if (!is_log_file_ready)
            {
                log_stream.open("sdk.log", std::ofstream::out | std::ofstream::trunc);
                is_log_file_ready = true;
            }
        }
    }
}

// cpp
namespace DemoLogger
{
    bool is_log_file_ready{false};
    std::mutex log_mutex;
    std::ofstream log_stream;
}

Source: Windows Questions C++
Update (11/7/09): fixed Execute() method per Richard’s suggestion to wrap IDataRecord instead of Reader.

Recently, I started playing around with C# dynamic, and blogged how it could be used to call static class members late bound. Today, I was talking to Phil Haack, who I think had talked to ScottGu, and he mentioned that it would be cool to use dynamic to simplify data access when you work directly with SQL query. So I thought I’d play around with that, and it didn’t take much code to make it work nicely.

So the scenario is that you’re not using any fancy O/R mapper like LINQ to SQL or Entity Framework, but you’re directly using ADO.NET to execute raw SQL commands. It’s not something that I would personally do, but there are a lot of folks who prefer this over the higher level data access layers. So let’s look at an example of what we’re trying to improve. Let’s borrow an MSDN sample about SqlCommand:

string commandText = "SELECT OrderID, CustomerID FROM dbo.Orders;";
using (var connection = new SqlConnection(Settings.Default.NorthwindConnectionString))
{
    using (var command = new SqlCommand(commandText, connection))
    {
        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine(String.Format("{0}, {1}", reader[0], reader[1]));
            }
        }
    }
}

And now let’s assume that we’re only ever interested in making one select query at a time, which lets us abstract out some of the details about the SQL Connection. By writing some nice little helpers that make use of dynamic, we’re able to write something much simpler:

string commandText = "SELECT OrderID, CustomerID FROM dbo.Orders;";
foreach (var row in SimpleQuery.Execute(Settings.Default.NorthwindConnectionString, commandText))
{
    Console.WriteLine(String.Format("{0}, {1}", row.OrderID, row.CustomerID));
}

A few things to note:

- We pretty much just make one method call, and directly get back objects that we can work with.
Contrast this with having to deal with SqlConnection, SqlCommand and SqlDataReader.
- We use a standard enumeration pattern, while SqlDataReader makes you call reader.Read() on every iteration, which looks ugly.
- And the big one: we get to access the properties directly on the row object, thanks to dynamic! e.g. we can write row.OrderID instead of reader[0] (or reader["OrderID"])

So how does it all work? First, let’s take a look at the SimpleQuery.Execute helper method:

public static IEnumerable<dynamic> Execute(string connString, string commandText)
{
    using (var connection = new SqlConnection(connString))
    {
        using (var command = new SqlCommand(commandText, connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                foreach (IDataRecord record in reader)
                {
                    yield return new DataRecordDynamicWrapper(record);
                }
            }
        }
    }
}

So it’s basically the same as the MSDN code, except that it wraps the reader that it returns in a DataRecordDynamicWrapper, which is what makes the dynamic magic work. Also, note that the method returns IEnumerable<dynamic>, which is why we’re able to just use ‘var row’ in the test code (which I think looks nicer than ‘dynamic row’). So now all that’s left to look at is DataRecordDynamicWrapper, which is incredibly simple:

public class DataRecordDynamicWrapper : DynamicObject
{
    private IDataRecord _dataRecord;

    public DataRecordDynamicWrapper(IDataRecord dataRecord)
    {
        _dataRecord = dataRecord;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = _dataRecord[binder.Name];
        return result != null;
    }
}

All it does is index into the data record to get the value for a given property name. I think what I did with static methods in my last post was probably a bit of an abuse of dynamic, because we were dealing with statically typed objects, and there are alternatives that would have avoided the need for dynamic.
But here, it’s I think a more legitimate use, because we’re dealing with data record objects that are intrinsically untyped. While dynamic of course doesn’t give us strong typing, it at least makes it more pleasant to deal with.

One last thing worth noting is that to make this real, we should add support for SQL parameters, which makes it easier to write SQL code that is not vulnerable to SQL-injection attacks. That could easily be done by passing additional params to SimpleQuery.Execute. This sample is more of a proof of concept and an excuse to mess around with dynamic 🙂

Zipped sample is attached to this post. DataReaderWithDynamic.zip

Reminds me of VB3 with DAO. Late bound recordset column access.

I think this isn’t a good example of using "dynamic". Everyone who sees this code gets the false impression of using an ORM. Why make simple ADO.NET work more complex?

I think it is valid to use ‘dynamic’ to access a dictionary with the property syntax. It’s just syntactic sugar, but still nice.

Hi David, what about the performance of the code? Will it be fast or slow? I am a newbie to 4.0. Thanks, Jalpesh

I feel like we are going backwards simply due to boredom with the features that take us forwards

what are you using to format code in this article? Is there a special Visual Studio add-in or theme? Thanks
@Manu: you will not get Intellisense when you use C# dynamic, since the set of valid properties is dynamic and not known until runtime. A yield return within a connection.open? lol… ToList is just one of the subtle bugs in this code. How about: bool fail = false; dynamic prev = null; foreach (var row in SimpleQuery.Execute("…", "SELECT DISTINCT UserID FROM Users")) { if (null != prev && prev.UserID == row.UserID) { fail = true; break; } prev = row; } if (fail) { Console.WriteLine("FAIL:"); Console.WriteLine(prev.UserID); } If the query returns two or more records, the result will be the message "FAIL:", followed by an exception. The fix is quite simple: using (SqlDataReader reader = command.ExecuteReader()) { return reader.Cast<IDataRecord>() .Select(r => new DataRecordDynamicWrapper(r)); } The SqlDataReader.GetEnumerator method returns a new DbEnumerator, which copies the meta-data and values for each record to a new DataRecordInteral class. Sorry, that won’t work either. It should be: using (SqlDataReader reader = command.ExecuteReader()) { foreach (IDataRecord record in reader) { yield return new DataRecordDynamicWrapper(record); } } @Richard: thanks for catching this. I’ll fix the post soon!
It creates all the files into a folder, but once I start the .exe file it closes right away. I don't know what I should do about this problem. I have really seen no error. The source I compiled:

import random

def main():
    ranNum = (random.randint(1, 100))
    howGue = 0
    while True:
        choice = int(input ("Guess what the number is (1/100): "))
        if (choice > 100):
            print ("That's not a valid number, guess between 1 and 100")
        elif (choice < ranNum):
            howGue += 1
            print ("Too low, try again")
        elif (choice > ranNum):
            howGue += 1
            print ("Too high, try again")
        elif (choice == ranNum):
            howGue += 1
            print ("You got the right number")
            print ("The Number was : ", ranNum)
            print ("You Guessed ", howGue, " Times")
            break
        elif (choice > 101):
            print ("That's not a valid number, guess between 1 and 100")
        else:
            print ("Not a valid number")

print ("Welcome to PyGuess")
main()
again = input ("Would you like to run the program over again Y/N?")
if again == "Y":
    main()
elif again == "y":
    main()
else:
    stop = input ("Okay, Take care and come again!")

Thanks in advance
Welcome to my Prototype Design Pattern Tutorial. The Prototype design pattern is used for creating new objects (instances) by cloning (copying) other objects. It allows for the adding of any subclass instance of a known super class at run time. It is used when there are numerous potential classes that you want to only use if needed at runtime. The major benefit of using the Prototype pattern is that it reduces the need for creating potentially unneeded subclasses. All of the code follows the video to help you learn. If you like videos like this, please tell Google. Sharing is super great

ANIMAL.JAVA

// By making this class cloneable you are telling Java
// that it is ok to copy instances of this class.
// These instance copies give different results when
// System.identityHashCode(sheep) is called

public interface Animal extends Cloneable {

    public Animal makeCopy();

}

SHEEP.JAVA

public class Sheep implements Animal {

    public Sheep(){
        System.out.println("Sheep is Made");
    }

    public Animal makeCopy() {

        System.out.println("Sheep is Being Made");

        Sheep sheepObject = null;

        try {
            // Calls the Animal super classes clone()
            // Then casts the results to Sheep
            sheepObject = (Sheep) super.clone();
        }
        // If Animal didn't extend Cloneable this error
        // is thrown
        catch (CloneNotSupportedException e) {
            System.out.println("The Sheep was Turned to Mush");
            e.printStackTrace();
        }

        return sheepObject;
    }

    public String toString(){
        return "Dolly is my Hero, Baaaaa";
    }

}

CLONEFACTORY.JAVA

public class CloneFactory {

    // Receives any Animal, or Animal subclass and
    // makes a copy of it and stores it in its own
    // location in memory.
    // CloneFactory has no idea what these objects are
    // except that they are subclasses of Animal

    public Animal getClone(Animal animalSample) {

        // Because of Polymorphism the Sheep's makeCopy()
        // is called here instead of Animal's
        return animalSample.makeCopy();

    }

}

TESTCLONING.JAVA

public class TestCloning {

    public static void main(String[] args){

        // Handles routing makeCopy method calls to the
        // right subclasses of Animal
        CloneFactory animalMaker = new CloneFactory();

        // Creates a new Sheep instance
        Sheep sally = new Sheep();

        // Creates a clone of Sally and stores it in its own
        // memory location
        Sheep clonedSheep = (Sheep) animalMaker.getClone(sally);

        // These are exact copies of each other
        System.out.println(sally);
        System.out.println(clonedSheep);

        System.out.println("Sally HashCode: " + System.identityHashCode(sally));
        System.out.println("Clone HashCode: " + System.identityHashCode(clonedSheep));

    }

}

I just came across your design pattern tutorials. You have explained the concepts in a simple manner and it's amazing!!! Looking forward to viewing and learning all your videos…. Great JOB.. Thank you very much 🙂

I did my best to make them easy to understand. I'll dive into using them and recognizing when they can help in my next tutorial on refactoring. Thank you for taking the time to tell me you liked them

I really appreciate the way you have explained the Java design pattern. It clears all my doubts. Thank you again for your awesome description..:)

You are very welcome and thank you for taking the time to tell me the tutorial helped 🙂

Very good, clean and clear tutorial. Best part is the availability of the code to be practiced by the learner. If possible, if you include a class diagram then it would be perfect.

Thank you very much 🙂 I'll see what I can do about the UML diagrams.

Really helpful!! thank you..(:

You're welcome 🙂

Hi, Derek! Your tutorials are awesome! Keep up the good work.
I've a suggestion… In the classes Dog and Sheep you can return the Dog and Sheep types (respectively) from the makeCopy method, because they're subtypes of Animal (covariant return types):

// Dog.java file
@Override
public Dog makeCopy() {
    (…)
}

// Sheep.java file
@Override
public Sheep makeCopy() {
    (…)
}

That way you avoid casting to these types later, but you won't be able to use CloneFactory without the cast. It would look like this:

Sheep clonedSheep = sally.makeCopy();

[]'s Helton

Hi Derek, I never thought in my life that design patterns could be learned in such an easy way. I really thank you from the bottom of my heart. The way you explain the material is exceptional. I have become your great friend… One small request: please upload some core Java related material such as collections, Java memory management, multithreading and synchronization. Thanks again and God bless you…

Thank you 🙂 I'm very happy that they have helped. I plan on going back and covering all of the topics you have mentioned and much more. I'm sorry it is taking so long. May God bless you and your family as well.

I would like to ask another question: what about making our objects' constructors private to prevent direct instantiation?

That is very often a good idea and I cover that topic in this tutorial series.

Not only are the explanations great, but also the quality of the videos. Congrats and thanks for such hard and great work!

Thank you very much 🙂 I try to do my best.

I appreciate the videos as they seem to be gentle introductions to design patterns, but a little bit of explanation on how they improve quality would be very nice.
For instance, with this video, why go through the trouble of creating a factory, when one could very easily have done this:

Sheep clonedSheep = (Sheep) sally.makeCopy();

I realize that there is some underlying reason as to why the factory design is better, and it may be that the benefits of applying the design patterns do not manifest in small programs, but a gentle reminder of why they ultimately are better would complement these tutorial videos very nicely. Otherwise, these tutorials would just be "hey, so this is this and that is that…", and not "hey, but this is why this is so and that is such!"

Thank you ever so much, mate! I consider you one of those few rare individuals that really try to make the world a better place. I wish I could shake your hand and buy you a beer 🙂

Thank you for the nice compliments 🙂 I try to do my best.

Hi Derek, I have watched many videos on design patterns, but the way you have explained them is very nice, and your examples are so practical and real that they are easy to remember because they relate to real-world problems. Please keep up the good work. I would suggest you pick up more complex concepts and keep explaining them in your way…

Thank you for the nice compliment 🙂 I have an advanced algorithm tutorial that I hope to bring out soon.
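One common answer to the factory question above, sketched here in Python rather than the tutorial's Java, and using a hypothetical PrototypeRegistry class of my own (not from the tutorial), is that cloning through a factory pays off once prototypes are registered and looked up by name at runtime:

```python
import copy

class PrototypeRegistry:
    """Maps names to prototype objects. Clients ask for clones by
    name and never touch the concrete classes directly."""

    def __init__(self):
        self._prototypes = {}

    def register(self, name, prototype):
        self._prototypes[name] = prototype

    def get_clone(self, name):
        # The registry never constructs anything itself; it only
        # deep-copies whatever prototype was registered under `name`.
        return copy.deepcopy(self._prototypes[name])

registry = PrototypeRegistry()
registry.register("sheep", {"species": "sheep", "sound": "Baaaaa"})

s1 = registry.get_clone("sheep")
s2 = registry.get_clone("sheep")
```

With a direct sally.makeCopy() call the client must already hold a Sheep; with a registry, new prototype kinds can be added at runtime without the calling code changing.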
http://www.newthinktank.com/2012/09/prototype-design-pattern-tutorial/
A few months ago I wrote an article about developing RESTful web services using Python and Flask. In that article, I explained and demonstrated the mechanics of implementing a web service that exposes a REST API for setting and completing goals. There was a concept of a user, but you couldn't log in, and everybody could see the goals of all users. This is fine for a demo application, but in the real world you want to provide privacy to your users, as well as know which user is requesting a particular resource so you can provide a customized experience.

In order to do that you need to authenticate the user, which means you need to verify that the active user is indeed who he or she claims to be, and map him or her to the user object in your application that is authorized to access some subset of all resources. There are many ways to do that, such as HTTP basic authentication, JWT access tokens, hard-coded secrets, passing user credentials in every request, etc. Each has its own pros and cons. I'll jump right ahead to the modern alternative of OAuth2-based authentication.

Using OAuth2 allows you to authenticate users without managing their credentials. Users who log in to your application can securely use web services by relying on trusted identity providers such as Google, Facebook, Twitter and more. What's so great about it? As a developer you don't have to deal with the messy area of registration, storing emails and passwords, and providing password reset workflows. As a user you don't need to create yet another set of credentials you'll forget or write insecurely on a post-it note (or even worse, use your bank account password to log in to every web site).

The problem with OAuth2 is that it is extremely complicated, partly due to issues with standardization. It is inherently complicated, and on top of that many details were left for implementers to decide. The end result is that clients need to perform different interactions with different identity providers.
Luckily, libraries are available to ease the burden, but it is still not completely trivial and it takes some work to get it right. In the remainder of this article, I'll show you how to use GitHub as an identity provider and allow users that have a GitHub account to get an access token that will allow them to authenticate themselves to the over-achiever application. Here we go.

I chose the flask-oauthlib extension, built on top of the excellent oauthlib and requests-oauthlib libraries. In order to use GitHub as an identity provider for your application you need to register a developer application with GitHub. You'll need a couple of items from it later. The full source code is available online.

First add flask-oauthlib to your requirements.txt file:

Flask==0.10.1
Flask-RESTful==0.3.4
Flask-SQLAlchemy==2.1
Flask-OAuthlib==0.9.2
mock==1.3.0

To set everything up you need to wrap your app with the OAuth object, then create a "github" object by calling the remote_app() method and passing some arguments. You need to change the consumer_key and consumer_secret and replace them with your own values. You may use mine if you wish (e.g. for playing with it), but you'll see that when you go through the authentication process with GitHub it will tell you that Gigi's Over Achiever app wants access to your awesome stuff. This is not the experience you want your users to have. Finally, you need to set the _tokengetter attribute. There is a decorator that is supposedly more user-friendly, but it doesn't work with the way I initialize the Flask app. Its job is to tell OAuthLib where to find the token when a request comes in.

from flask.ext.oauthlib.client import OAuth

. . .

oauth = OAuth(app)
github = oauth.remote_app(
    'github',
    consumer_key='507e57ab372adeb8051b',
    consumer_secret='08a7dbaa06ac16daab00fac53724ee742c8081c5',
    request_token_params={'scope': 'user:email'},
    base_url='',
    request_token_url=None,
    access_token_method='POST',
    access_token_url='',
    authorize_url=''
)

# set the token getter for the auth client
github._tokengetter = lambda: session.get('github_token')

Once this part is done you will have an initialized github object you can pass around that can authenticate users for you. Before we can do that, let's add a few required endpoints to the mix.

The /login endpoint returns GitHub user info that includes an access_token. You'll pass this access token as the "Access-Token" header to all authenticated endpoints.

@app.route('/login')
def login():
    return app.github.authorize(callback=url_for('authorized', _external=True))

The /login/authorized endpoint is called by the GitHub auth infrastructure according to the OAuth2 workflow. The Flask-OAuthLib extension takes care of adding the token to the session.

@app.route('/login/authorized')
def authorized():
    resp = app.github.authorized_response()
    if resp is None:
        abort(401, message='Access denied!')
    user = app.github.get('user')
    user.data['access_token'] = session['github_token'][0]
    return jsonify(user.data)

The /logout endpoint just pops the token from the session.

@app.route('/logout')
def logout():
    session.pop('github_token', None)
    return 'OK'

Note that any GitHub user can log in to Over-Achiever. A user will be created automatically on the first login attempt. The code now uses a _get_user() method to get information about the current user from GitHub using the access token. If the access token is missing from the headers it aborts immediately with a 401 error. If it exists, it calls the GitHub object's get() method to fetch the user and extracts the email and name from the result. If a user with this email doesn't exist, it is created on the spot.
def _get_user():
    """Get the user object or create it based on the token in the session

    If there is no access token abort with 401 message
    """
    if 'Access-Token' not in request.headers:
        abort(401, message='Access Denied!')
    token = request.headers['Access-Token']
    user_data = github.get('user', token=dict(access_token=token)).data
    name = user_data['name']
    email = user_data['email']
    q = _get_query()
    user = q(m.User).filter_by(email=email).scalar()
    if not user:
        user = m.User(email=email, name=name)
        s = _get_session()
        s.add(user)
    return user

The _get_user() helper function makes it super simple for every other endpoint to give each user access to their own goals only. For example, the /v1.0/goals endpoint supports GET, POST and PUT methods. All methods call _get_user() and then filter any access to the goals table (either read or write) by the user object:

class Goal(Resource):
    def get(self):
        """Get all goals organized by user and in hierarchy

        If user doesn't exist create it (with no goals)
        """
        user = _get_user()
        q = _get_query()
        result = {user.name: _get_goal_tree(q, user, None, {})}
        return result

    def post(self):
        user = _get_user()
        parser = RequestParser()
        parser.add_argument('name', type=str, required=True)
        parser.add_argument('parent_name', type=str)
        parser.add_argument('description', type=str, required=False)
        args = parser.parse_args()
        # Get a SQLAlchemy query object
        q = _get_query()
        # Create a new goal
        # Find parent goal by name
        parent = q(m.Goal).filter_by(name=args.parent_name).scalar()
        goal = m.Goal(user=user,
                      parent=parent,
                      name=args.name,
                      description=args.description)
        s = _get_session()
        s.add(goal)
        s.commit()

    def put(self):
        """Update end time"""
        user = _get_user()
        parser = RequestParser()
        parser.add_argument('name', type=str, required=True)
        args = parser.parse_args()
        # Get a SQLAlchemy query object
        q = _get_query()
        goal = q(m.Goal).filter_by(user=user, name=args.name).one()
        goal.end = datetime.now()

In conclusion, OAuth2-based authentication takes some work to get right, but it's worth it.
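On the client side, each call to a protected endpoint then carries the token in the Access-Token header, which is where _get_user() looks for it. A minimal sketch (the base URL and token value here are placeholders of mine, not from the article):

```python
def build_request(path, token, base_url="http://localhost:5000"):
    # _get_user() aborts with 401 unless the Access-Token header
    # is present, so every authenticated call must carry it.
    return {
        "url": base_url + path,
        "headers": {"Access-Token": token},
    }

req = build_request("/v1.0/goals", "example-github-token")
```

With the requests library, for example, this would be sent as requests.get(req["url"], headers=req["headers"]).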
Give it a try.
https://www.devx.com/webdev/authenticate-restful-apis-with-an-oauth-provider.html
Search the Community

Showing results for tags 'assembly'. Found 7 results.

Years ago I tried to put some functionality together to do some of this here. I started off in the right direction but it ended up getting out of control. Any new thing I learned along the way (as I was creating it), I kept trying to add in, and it all became a mess. One of my primary goals with that was to make sure the code could always be pre-compiled and still run. That part did work and I was able to create a couple of good projects with it, but there are still a lot of parts I wouldn't consider correct now and certainly not manageable. Here is a redo of what I was going for there, only this time I'm not going to be generating any of the assembly code. That's all going to be done using the built-in macro engine already within fasm.dll and the macros written by Tomasz Grysztar (creator of fasm), so this time I don't have to worry about any of the code that gets generated. I'm not going to touch the source at all. In fact there is not even going to be _fasmadd or global variables tracking anything. None of that is needed with the added basic and extended headers that you can read more about in the fasm documentation. You can use almost all of what's in the documentation section for basic/extended headers, but ignore the parts about imports, exports, resources and text encoding; they don't really apply here. Here are examples I came up with that cover a lot of core functionality for writing assembly code in a manner you already know: if/while statements with multiple conditions, multiple functions, local variables, global variables, structures, COM interfaces, strings as parameters, nesting function calls. These are all things you don't even have to think about when you're doing it in AutoIt, and I'm hoping this helps bring some of that same comfort to fasm.
These three simple callback functions will be used throughout the examples:

Global $gConsoleWriteCB = DllCallbackRegister('_ConsoleWriteCB', 'dword', 'str;dword'), $gpConsoleWriteCB = DllCallbackGetPtr($gConsoleWriteCB)
Global $gDisplayStructCB = DllCallbackRegister('_DisplayStructCB', 'dword', 'ptr;str'), $gpDisplayStructCB = DllCallbackGetPtr($gDisplayStructCB)
Global $gSleepCB = DllCallbackRegister('_SleepCB', 'dword', 'dword'), $gpSleepCB = DllCallbackGetPtr($gSleepCB)

Func _ConsoleWriteCB($sMsg, $iVal)
    ConsoleWrite($sMsg & $iVal & @CRLF)
EndFunc   ;==>_ConsoleWriteCB

Func _DisplayStructCB($pStruct, $sStr)
    _WinAPI_DisplayStruct(DllStructCreate($sStr, $pStruct), $sStr, 'def=' & $sStr)
EndFunc   ;==>_DisplayStructCB

Func _SleepCB($iSleep)
    Sleep($iSleep)
EndFunc   ;==>_SleepCB

proc/endp - like Func and EndFunc with some extra options. The "uses" statement will preserve the registers specified. stdcall is the default call type if not specified. DWORD is the default parameter size if not specified. The ret value is also handled for you; you don't have to worry about adjusting a number every time you throw on an extra parameter. In fact you don't ever have to specify/touch ebp/esp at all with these macros. See Basic headers -> Procedures for the full description.

force - just a macro I added for creating an anonymous label for the first/primary function to ensure the code gets generated. The problem we are getting around is this: in our example, _main is never actually called anywhere within the fasm code, and the fasm engine detects that and thinks the code is doing nothing. Because of that it wants to skip generating that code and all code called by it, leaving you with nothing. This is actually a great feature, but we obviously want to make an exception for our main/initial/primary function that starts it all off, so that's all this does.
Func _Ex_Proc() $g_sFasm = '' _('force _main') _('proc _main uses ebx, parm1, parm2') ; _('proc _main stdcall uses ebx, parm1:DWORD, parm2:DWORD'); full statement _(' mov ebx, [parm1]') _(' add ebx, [parm2]') _(' mov eax, ebx') _(' ret') _('endp') Local $tBinary = _FasmAssemble($g_sFasm) If @error Then Exit (ConsoleWrite($tBinary & @CRLF)) Local $iAdd = DllCallAddress('dword', DllStructGetPtr($tBinary), 'dword', 5, 'dword', 5) ConsoleWrite('Parm1+Parm2=' & $iAdd[0] & @CRLF) EndFunc ;==>_Ex_Proc Here Im showing you calling _ConsoleWriteCB autoit function we set up as a callback. Its how you would call any function in autoit from fasm. Strings - Notice Im creating and passing "edx = " string to the function on the fly. So helpful! invoke - same as a stdcall with brackets []. Use this for when calling autoit functions Func _Ex_Callback() $g_sFasm = '' _('force _main') _('proc _main, pConsoleWriteCB, parm1, parm2') _(' mov edx, [parm1]') _(' add edx, [parm2]') _(' invoke pConsoleWriteCB, "edx = ", edx') ; ;~ _(' stdcall [pConsoleWriteCB], "edx = ", edx') ; same as invoke _(' ret') _('endp') Local $tBinary = _FasmAssemble($g_sFasm) If @error Then Exit (ConsoleWrite($tBinary & @CRLF)) DllCallAddress('ptr', DllStructGetPtr($tBinary), 'ptr', $gpConsoleWriteCB, 'dword', 5, 'dword', 5) EndFunc ;==>_Ex_Callback Showing .while/.endw, .if/.elseif/.else/.endif usage. .repeat .until are also macros you can use. See Extended Headers -> Structuring the source. Ignore .code, .data, .end - Those are gonna be more for a full exe. invokepcd/invokepd - these are macros I added that are the same as invoke, just preserve (push/pop) ECX or both ECX and EDX during the call. Below is also a good example of what can happen when you don't preserve registers that are caller saved (us calling the function) vs callie saved (us creating the function). 
EAX, ECX and EDX are all caller-saved, so when we call another function like the AutoIt callback _ConsoleWriteCB, those registers could have very different values than what was in them before the call. Keep the same thought in mind for the registers EBX, ESI and EDI when you are creating assembly functions (callee-saved). If your function uses those registers, you need to preserve and restore them before your code returns back to AutoIt, or else you could cause a similar effect in AutoIt. "Trashing" registers is a term I've seen used a lot when referring to these kinds of mistakes.

Func _Ex_IfElseWhile()
    $g_sFasm = ''
    _('force _main')
    _('proc _main uses ebx, pConsoleWriteCB')
    _('  xor edx, edx') ; edx=0
    _('  mov eax, 99')
    _('  mov ebx, 10')
    _('  xor ecx, ecx') ; ecx=0
    _('  .while ecx = 0')
    _('    .if eax<=100 & ( ecx | edx )') ; not true on first loop
    _('      inc ebx')
    _('      invokepcd pConsoleWriteCB, "Something True - ebx=", ebx')
    _('      ret')
    _('    .elseif eax < 99') ; Just showing you the elseif statement
    _('      inc ebx')
    _('    .else')
    ;~ _('      invokepcd pConsoleWriteCB, "Nothing True - ebx=", ebx') ; comment this and uncomment the line below
    _('      invoke pConsoleWriteCB, "Nothing True - ebx=", ebx')
    _('      inc edx') ; this will make next loop true
    _('    .endif')
    _('  .endw')
    _('  ret')
    _('endp')

    Local $tBinary = _FasmAssemble($g_sFasm)
    If @error Then Exit (ConsoleWrite($tBinary & @CRLF))

    DllCallAddress('dword', DllStructGetPtr($tBinary), 'ptr', $gpConsoleWriteCB)
EndFunc   ;==>_Ex_IfElseWhile

Sub Functions: You already understand this. Not really "sub"; it's just another function you call. And those functions call other functions, and so on.
fix : syntax sugar - Look how easy it was to replace invoke statement with our actual autoit function name ptr : more sugar - same thing as using brackets [parm1] Nesting : In subfunc1 we pass the results of two function calls to the same function we are calling Func _Ex_SubProc() $g_sFasm = '' ;replace all '_ConsoleWriteCB' statments with 'invoke pConsoleWriteCB' before* assembly _('_ConsoleWriteCB fix invoke pConsoleWriteCB') _('force _main') _('proc _main uses ebx, pConsoleWriteCB, parm1, parm2') _(' mov ebx, [parm1]') _(' add ebx, [parm2]') _(' _ConsoleWriteCB, "ebx start = ", ebx') _(' stdcall _subfunc1, [pConsoleWriteCB], [parm1], [parm2]') _(' _ConsoleWriteCB, "ebx end = ", ebx') _(' ret') _('endp') ; _('proc _subfunc1 uses ebx, pConsoleWriteCB, parm1, parm2') _(' mov ebx, [parm1]') _(' _ConsoleWriteCB, " subfunc1 ebx start = ", ebx') _(' stdcall _SubfuncAdd, <stdcall _SubfuncAdd, [parm1], [parm2]>, <stdcall _SubfuncAdd, ptr parm1, ptr parm2>') ; Nesting functions _(' _ConsoleWriteCB, " _SubfuncAdd nested <5+5><5+5> = ", eax') _(' _ConsoleWriteCB, " subfunc1 ebx end = ", ebx') _(' ret') _('endp') ; _('proc _SubfuncAdd uses ebx, parm1, parm2') _(' mov ebx, [parm1]') _(' add ebx, [parm2]') _(' mov eax, ebx') _(' ret') _('endp') Local $tBinary = _FasmAssemble($g_sFasm) If @error Then Exit (ConsoleWrite($tBinary & @CRLF)) DllCallAddress('dword', DllStructGetPtr($tBinary), 'ptr', $gpConsoleWriteCB, 'dword', 5, 'dword', 5) EndFunc ;==>_Ex_SubProc This demonstrates the struct macro. See basic headers -> Structures for more info _FasmAu3StructDef will create an equivalent formated structure definition. All elements already have a sizeof.#name created internally. So in this example sizeof.AUTSTRUCT.x would equal 8. sizeof.AUTSTRUCT.z would equal 16 (2*8). I have added an additional one sot.#name (sizeoftype) for any array that gets created. Below is the source of what gets generate from 'dword x;dword y;short z[8]'. 
Also dont get confused that in fasm data definitions, d is for data as in db (data byte) or dw (data word). Not double like it is in autoit's dword (double word). See intro -> assembly syntax -> data definitions struct AUTSTRUCT x dd ? y dd ? z dw 8 dup ? ends define sot.AUTSTRUCT.z 2 Func _Ex_AutDllStruct() $g_sFasm = '' Local Const $sTag = 'dword x;dword y;short z[8]' _(_FasmAu3StructDef('AUTSTRUCT', $sTag)) _('force _main') _('proc _main uses ebx, pDisplayStructCB, pAutStruct') _(' mov ebx, [pAutStruct]') ; place address of autoit structure in ebx _(' mov [ebx+AUTSTRUCT.x], 1234') _(' mov [ebx+AUTSTRUCT.y], 4321') _(' xor edx, edx') _(' mov ecx, 5') ; setup ecx for loop instruction _(' Next_Z_Index:') ; set elements 1-6 (0-5 here in fasm) _(' mov [ebx+AUTSTRUCT.z+(sot.AUTSTRUCT.z*ecx)], cx') ; cx _(' loop Next_Z_Index') _(' invoke pDisplayStructCB, [pAutStruct], "' & $sTag & '"') _(' mov [ebx+AUTSTRUCT.z+(sot.AUTSTRUCT.z*6)], 666') _(' mov [ebx+AUTSTRUCT.z+(sot.AUTSTRUCT.z*7)], 777') _(' ret') _('endp') Local $tBinary = _FasmAssemble($g_sFasm) If @error Then Exit (ConsoleWrite($tBinary & @CRLF)) Local $tAutStruct = DllStructCreate($sTag) DllCallAddress('ptr', DllStructGetPtr($tBinary), 'ptr', $gpDisplayStructCB, 'struct*', $tAutStruct) _WinAPI_DisplayStruct($tAutStruct, $sTag) EndFunc ;==>_Ex_AutDllStruct Here shows the locals/endl macros for creating local variables. See basic headers -> procedures. We create a local string and the same dll structure as above. Notice that you can initialize all the values of the structure on creation. There is a catch to this though that I will show you in next example. addr macro - This will preform the LEA instruction in EDX and then push the address on to the stack. This is awesome, just remember its using EDX to perform that and does not preserve it. You'll pretty much want to use that for any local variables you are passing around. Edit: I shouldn't say things like that so causally. 
Use the addr macro as much as you want but remember that it is adding a couple of extra instuctions each time you use it so if your calling invoke within a loop and ultimate performance is one of your goals, you should probably perform the LEA instructions before the loop and save the pointer to a separate variable that your would then use in the loop. Func _Ex_LocalVarsStruct() $g_sFasm = '' Local Const $sTag = 'dword x;dword y;short z[8]' _(_FasmAu3StructDef('POINT', $sTag)) _('force _main') _('proc _main, pDisplayStructCB') _(' locals') _(' sTAG db "' & $sTag & '", 0') ; define local string. the ', 0' at the end is to terminate the string. _(' tPoint POINT 1,2,<0,1,2,3,4,5,6,7>') ; initalize values in struct _(' endl') _(' invoke pDisplayStructCB, addr tPoint, addr sTAG') _(' mov [tPoint+POINT.x], 4321') _(' mov [tPoint+POINT.z+sot.POINT.z*2], 678') _(' invoke pDisplayStructCB, addr tPoint, addr sTAG') _(' ret') _('endp') Local $tBinary = _FasmAssemble($g_sFasm) If @error Then Exit (ConsoleWrite($tBinary & @CRLF)) Local $ret = DllCallAddress('ptr', DllStructGetPtr($tBinary), 'ptr', $gpDisplayStructCB) EndFunc ;==>_Ex_LocalVarsStruct Back to the catch. Alignment is the problem here but only with the initializes. I'm handling all the alignment ok so you don't have to worry about that for creating structures that need alignment, only if you are using the one liner initialize in locals. The problem comes from extra padding being defined to handle the alignment, but fasm doesn't really know its just padding so without adding extra comma's to the initiator statement, your data ends up in the padding or simply fails. The _FasmFixInit will throw in the extra commas needed to skip the padding. 
Func _Ex_LocalVarStructEx() $g_sFasm = '' $sTag = 'byte x;short y;char sNote[13];long odd[5];word w;dword p;char ext[3];word finish' _(_FasmAu3StructDef('POINT', $sTag)) _('force _main') _('proc _main, pDisplayStructCB') _(' locals') _(' tPoint POINT ' & _FasmFixInit('1,222,<"AutoItFASM",0>,<41,43,43,44,45>,6,7,"au3",12345', $sTag)) _(' endl') _(' invoke pDisplayStructCB, addr tPoint, "' & $sTag & '"') _(' ret') _('endp') Local $tBinary = _FasmAssemble($g_sFasm) If @error Then Exit (ConsoleWrite($tBinary & @CRLF)) DllCallAddress('dword', DllStructGetPtr($tBinary), 'ptr', $gpDisplayStructCB) EndFunc ;==>_Ex_LocalVarStructEx I love this one and it is really not even that hard to explain. We got multiple functions and want to be able to call them individually. Here I simply use the primary function to tell me where all the functions are. I load all the offsets (byte distance from start of code) of each each function in to a dllstruct, then once its passed back to autoit, adjust all the offsets by where they are actually located in memory (pointer to dll). From there you can call each individual function as shown previously. full code is in the zip. String functions came from link below. I ended up modifying strcmp to get a value I understand. CRC32 func is all mine. 
Made it so easy being able to call _strlen and then use while statements like I normally would Func _Ex_SSE4_Library() $g_sFasm = '' _('force _main') _('proc _main stdcall, pAdd') _(' mov eax, [pAdd]') _(' mov dword[eax], _crc32') _(' mov dword[eax+4], _strlen') _(' mov dword[eax+8], _strcmp') _(' mov dword[eax+12], _strstr') _(' ret') _('endp') _('proc _crc32 uses ebx ecx esi, pStr') ; _('endp') _('proc _strlen uses ecx edx, pStr') ; _('endp') _('proc _strcmp uses ebx ecx edx, pStr1, pStr2') ; ecx = string1, edx = string2' ; _('endp') _('proc _strstr uses ecx edx edi esi, sStrToSearch, sStrToFind') ; _('endp') Local $tBinary = _FasmAssemble($g_sFasm) If @error Then Exit (ConsoleWrite($tBinary & @CRLF)) Local $pBinary = DllStructGetPtr($tBinary) Local $sFunction_Offsets = 'dword crc32;dword strlen;dword strcmp;dword strstr' $tSSE42 = DllStructCreate($sFunction_Offsets) $ret = DllCallAddress('ptr', $pBinary, 'struct*', $tSSE42) _WinAPI_DisplayStruct($tSSE42, $sFunction_Offsets, 'Function Offsets') ;Correct all addresses $tSSE42.crc32 += $pBinary $tSSE42.strlen += $pBinary $tSSE42.strcmp += $pBinary $tSSE42.strstr += $pBinary $sTestStr = 'This is a test string!' ConsoleWrite('$sTestStr = ' & $sTestStr & @CRLF) $iCRC = DllCallAddress('int', $tSSE42.crc32, 'str', $sTestStr) ConsoleWrite('CRC32 = ' & Hex($iCRC[0]) & @CRLF) $aLen = DllCallAddress('int', $tSSE42.strlen, 'str', $sTestStr) ConsoleWrite('string len = ' & $aLen[0] & ' :1:' & @CRLF) $aFind = DllCallAddress('int', $tSSE42.strcmp, 'str', $sTestStr, 'str', 'This iXs a test') ConsoleWrite('+strcmp = ' & $aFind[0] & @CRLF) $aStr = DllCallAddress('int', $tSSE42.strstr, 'str', 'This is a test string!', 'str', 'test') ConsoleWrite('Strstr = ' & $aStr[0] & @CRLF) EndFunc ;==>_Ex_SSE4_Library I'm extremely happy I got a com interface example working. I AM. That being said.. 
I'm pretty fucking annoyed I cant find the original pointer when using using built in ObjCreateInterface I've tired more than just whats commented out. It anyone has any input (I know someone here does!) that would be great. Using the __ptr__ from _autoitobject works below. Example will delete the tab a couple times. Edit: Got that part figured out. Thanks again trancexx! Func _Ex_ComObjInterface() $g_sFasm = '' ;~ _AutoItObject_StartUp() ;~ Local Const $sTagITaskbarList = "QueryInterface long(ptr;ptr;ptr);AddRef ulong();Release ulong(); HrInit hresult(); AddTab hresult(hwnd); DeleteTab hresult(hwnd); ActivateTab hresult(hwnd); SetActiveAlt hresult(hwnd);" ;~ Local $oList = _AutoItObject_ObjCreate($sCLSID_TaskbarList, $sIID_ITaskbarList, $sTagITaskbarList) Local Const $sCLSID_TaskbarList = "{56FDF344-FD6D-11D0-958A-006097C9A090}", $oList = ObjCreateInterface($sCLSID_TaskbarList, $sIID_ITaskbarList, $sTagITaskbarList) _('interface ITaskBarList,QueryInterface,AddRef,Release,HrInit,AddTab,DeleteTab,ActivateTab,SetActiveAlt') ; _('force _main') _('proc _main uses ebx, pSleepCB, oList, pGUIHwnd') _(' comcall [oList],ITaskBarList,HrInit') _(' xor ebx, ebx') _(' .repeat') _(' invoke pSleepCB, 500') ; wait _(' comcall [oList],ITaskBarList,DeleteTab,[pGUIHwnd]') ; delete _(' invoke pSleepCB, 500') ; wait _(' comcall [oList],ITaskBarList,AddTab,[pGUIHwnd]') ; add back _(' comcall [oList],ITaskBarList,ActivateTab,[pGUIHwnd]') ; actvate _(' inc ebx') _(' .until ebx=4') _(' ret') _('endp') Local $tBinary = _FasmAssemble($g_sFasm) If @error Then Exit (ConsoleWrite($tBinary & @CRLF)) Local $GUI = GUICreate("_Ex_ComObjInterface ------ DeleteTab") GUISetState() ;~ DllCallAddress('ptr', DllStructGetPtr($tBinary), 'ptr', $gpSleepCB, 'ptr', $oList.__ptr__, 'dword', Number($GUI)) DllCallAddress('ptr', DllStructGetPtr($tBinary), 'ptr', $gpSleepCB, 'ptr', $oList(), 'dword', Number($GUI)) EndFunc ;==>_Ex_ComObjInterface Lastly here is an example of how to use a global variable. 
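One note on the _crc32 routine in the SSE4 string-library example above: the SSE4.2 CRC32 instruction computes CRC-32C (the Castagnoli polynomial), not the CRC-32 used by zlib, so its results will differ from common CRC32 tools. Here is a bitwise Python sketch of CRC-32C for comparison; it assumes the hand-written routine follows the usual reflected form with an initial value and final XOR of 0xFFFFFFFF:

```python
def crc32c(data: bytes) -> int:
    # Reflected CRC-32C, the checksum the SSE4.2 CRC32 instruction
    # computes. Polynomial 0x1EDC6F41, reflected form 0x82F63B78.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; XOR in the reflected polynomial
            # whenever the low bit was set.
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF
```

The standard check value for CRC-32C is crc32c(b"123456789") == 0xE3069283, which is a quick way to verify that an implementation is the Castagnoli variant and not zlib's.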
Without using the org statement, this value is just an offset like the functions in the library example. In order for your code to know that location, it needs to know where the real starting address is so we have to pass that to our functions. Once you have it, if you write your code proper and preserve registers correctly, you can just leave in EBX. From what I understand, if all functions are following stdcall rules, that register shouldn't change in less you change it. Something cool and important to remember is these variables will hold whatever values left in them till you wipe the memory (dll structure) holding your code. keep that in mind if you made your dll structure with a static keyword. If thats the case treat them like static variables Func _Ex_GlobalVars() $g_sFasm = '' _('_ConsoleWriteCB fix invoke pConsoleWriteCB') ; _('force _main') _('proc _main uses ebx, pMem, pConsoleWriteCB, parm1') _(' mov ebx, [pMem]') ; This is where are code starts in memory. _(' mov [ebx + g_Var1], 111') _(' add [ebx + g_Var1], 222') _(' _ConsoleWriteCB, "g_Var1 = ", [ebx + g_Var1]') _(' stdcall subfunc1, [pMem], [pConsoleWriteCB], [parm1]') _(' mov eax, g_Var1') _(' ret') _('endp') ; _('proc subfunc1 uses ebx, pMem, pConsoleWriteCB, parm1') _(' mov ebx, [pMem]') _(' mov [ebx + g_Var1], 333') _(' _ConsoleWriteCB, "g_Var1 from subfunc1= ", [ebx + g_Var1]') _(' stdcall subfunc2, [pConsoleWriteCB], [parm1]') ; no memory ptr passed. 
ebx should be callie saved _(' _ConsoleWriteCB, "g_Var1 from subfunc1= ", [ebx + g_Var1]') _(' stdcall subfunc2, [pConsoleWriteCB], [parm1]') _(' ret') _('endp') ; _('proc subfunc2, pConsoleWriteCB, parm1') _(' add [ebx + g_Var1], 321') _(' _ConsoleWriteCB, "g_Var1 from subfunc2= ", [ebx + g_Var1]') _(' ret') _('endp') ; _('g_Var1 dd ?') ; <--------- Global Var Local $tBinary = _FasmAssemble($g_sFasm) If @error Then Exit (ConsoleWrite($tBinary & @CRLF)) Local $iOffset = DllCallAddress('dword', DllStructGetPtr($tBinary), 'struct*', $tBinary, 'ptr', $gpConsoleWriteCB, 'dword', 55)[0] ConsoleWrite('$iOffset = ' & $iOffset & @CRLF) Local $tGVar = DllStructCreate('dword g_Var1', DllStructGetPtr($tBinary) + $iOffset) ConsoleWrite('Directly access g_Var1 -> ' & $tGVar.g_Var1 & @CRLF) ; direct access EndFunc ;==>_Ex_GlobalVars FasmEx.zip. - Special thanks to Ward for his udf and Trancexx for her assembly examples as they have played a huge role in my learning of asm. UDF Requires >Beta version 3.3.9.19 or higher. Also Requires >Wards Fasm UDF. Direct Download Link FASMEx.zip FasmEx 9-29-2013.zip This is dead. See new version here : - Reading assembly Code RaiNote posted a topic in AutoIt General Help and SupportHello, My question is would it be possible to read an assembly Code of an executable? Like _AssemblyRead($Process,$PointerAdress,$Value) or _AssemblyRead($Executable,$PointerAdress,$Value) ==> _Assembly("MyTool.exe",0xB61016,"CALL 00B64974") If not would it be possible to read the HEX Data? _HEXRead("MyTool.exe", 00C1E000, 00 00 00 00|00 00 F0 7F|00 00 00 00|00 00 00 00, ð) $Process,$Adress,$HEXData,$HEXINASCII Answers to the Forum Rules: ---> I am asking this to create an Anti-Cheat/Anti-Crack for my program. So that if it get the wrong assembly Code it will Close the Program. ---> Only reading no writing. I hope someone can help mewing my results somehow. Thanks! 
Func _B64Decode($sSource)
    Local Static $Opcode, $tMem, $tRevIndex, $fStartup = True
    If $fStartup Then
        If @AutoItX64 Then
            $Opcode = '0xC800000053574D89C74C89C74889D64889CB4C89C89948C7C10400000048F7F148C7C10300000048F7E14989C242807C0EFF3D750E49FFCA42807C0EFE3D750349FFCA4C89C89948C7C10800000048F7F14889C148FFC1488B064989CD48C7C108000000D7C0C0024188C349C1E30648C1E808E2EF49C1E308490FCB4C891F4883C7064883C6084C89E9E2CB4C89D05F5BC9C3'
        Else
            $Opcode = '0xC8080000FF75108B7D108B5D088B750C8B4D148B062060FCA891783C70383C604E2C2807EFF3D75084F807EFE3D75014FC6070089F85B29D8)
        Local $aRevIndex[128]
        Local $aTable = StringToASCIIArray('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/')
        For $i = 0 To UBound($aTable) - 1
            $aRevIndex[$aTable[$i]] = $i
        Next
        $tRevIndex = DllStructCreate('byte[' & 128 & ']')
        DllStructSetData($tRevIndex, 1, StringToBinary(StringFromASCIIArray($aRevIndex)))
        $fStartup = False
    EndIf
    Local $iLen = StringLen($sSource)
    Local $tOutput = DllStructCreate('byte[' & $iLen + 8 & ']')
    DllCall("kernel32.dll", "bool", "VirtualProtect", "struct*", $tOutput, "dword_ptr", DllStructGetSize($tOutput), "dword", 0x00000004, "dword*", 0)
    Local $tSource = DllStructCreate('char[' & $iLen + 8 & ']')
    DllStructSetData($tSource, 1, $sSource)
    Local $aRet = DllCallAddress('uint', DllStructGetPtr($tMem), 'struct*', $tRevIndex, 'struct*', $tSource, 'struct*', $tOutput, 'uint', (@AutoItX64 ? $iLen : $iLen / 4))
    Return BinaryMid(DllStructGetData($tOutput, 1), 1, $aRet[0])
EndFunc   ;==>_B64Decode

Func _B64Encode($sSource)
    Local Static $Opcode, $tMem, $fStartup = True
    If $fStartup Then
        If @AutoItX64 Then
            $Opcode = '0xC810000053574889CE4889D74C89C34C89C89948C7C10600000048F7F14889C14883FA00740348FFC1488B06480FC848C1E80EC0E802D788470748C1E806C0E802D788470648C1E806C0E802D788470548C1E806C0E802D788470448C1E806C0E802D788470348C1E806C0E802D788470248C1E806C0E802D788470148C1E806C0E802D788074883C6064883C708E2994883FA00743B49C7C5060000004929D54883FA03770349FFC54C29EF4883FA03741F4883FA01740E4883FA047408C6073D48FFC7EB0BC6073DC647013D4883C702C607005F5BC9C3'
        Else
            $Opcode = '0xC80800008B451499B903000000F7F189C1528B5D108B75088B7D0C83FA007401418B160FCAC1EA0888D0243FD7884703C1EA0688D0243FD7884702C1EA0688D0243FD7884701C1EA0688D0243FD7880783C60383C704E2C95A83FA00740DC647FF3D83FA027404C647FE3DC60700)
        $fStartup = False
    EndIf
    $sSource = Binary($sSource)
    Local $iLen = BinaryLen($sSource)
    $tSource = DllStructCreate('byte[' & $iLen & ']')
    DllStructSetData($tSource, 1, $sSource)
    Local $tOutput = DllStructCreate('char[' & Ceiling($iLen * (4 / 3) + 3) & ']')
    DllCall("kernel32.dll", "bool", "VirtualProtect", "struct*", $tOutput, "dword_ptr", DllStructGetSize($tOutput), "dword", 0x00000004, "dword*", 0)
    Local $sTable = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
    DllCallAddress('none', DllStructGetPtr($tMem), 'struct*', $tSource, 'struct*', $tOutput, 'str', $sTable, 'uint', $iLen)
    Return DllStructGetData($tOutput, 1)
EndFunc   ;==>_B64Encode

Results: x86
>_B64Encode      avg = 121.71071578269
_Base64Encode_MS avg = 133.64460931775
>_B64Decode      avg = 106.147524856932
_Base64Decode_MS avg = 149.362345205542

Results: x64
>_B64Encode      avg = 123.473349548198
_Base64Encode_MS avg = 122.300780993821
>_B64Decode      avg = 113.430527477353
_Base64Decode_MS avg = 170.667366205978

b64.zip
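The machine code above implements standard Base64 (the RFC 4648 alphabet with '=' padding). As a reference point - Python is shown here purely for illustration, since the UDF itself is AutoIt plus machine code - the same round trip via the standard library looks like:

```python
import base64

def b64_encode(data: bytes) -> str:
    # Encode raw bytes to the same Base64 text the assembly routine produces.
    return base64.b64encode(data).decode('ascii')

def b64_decode(text: str) -> bytes:
    # Decode Base64 text back to the original bytes.
    return base64.b64decode(text)

encoded = b64_encode(b'hello world')
print(encoded)              # aGVsbG8gd29ybGQ=
print(b64_decode(encoded))  # b'hello world'
```

Comparing output against a known-good implementation like this is also a quick way to sanity-check the opcodes after any change.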
https://www.autoitscript.com/forum/tags/assembly/
So I have two arrays that have x, y, z coordinates. I'm just trying to apply the 3D distance formula. The problem is that I can't find a post that deals with arrays holding multiple values in each column and spits out an array.

print MW_FirstsubPos1
[[ 51618.7265625   106197.7578125   69647.6484375 ]
 [ 33864.1953125    11757.29882812  11849.90332031]
 [ 12750.09863281   58954.91015625  38067.0859375 ]
 ...,
 [ 99002.6640625    96021.0546875   18798.44726562]
 [ 27180.83984375   74350.421875    78075.78125   ]
 [ 19297.88476562   82161.140625     1204.53503418]]

print MW_SecondsubPos1
[[ 51850.9140625   106004.0078125   69536.5234375 ]
 [ 33989.9375       11847.11425781  12255.80859375]
 [ 12526.203125     58372.3046875   37641.34765625]
 ...,
 [ 98823.2734375    95837.1796875   18758.7734375 ]
 [ 27047.19140625   74242.859375    78166.703125  ]
 [ 19353.97851562   82375.8515625    1147.07556152]]

import numpy as np

xs1, ys1, zs1 = zip(*MW_FirstsubPos1)
xs11, ys11, zs11 = zip(*MW_SecondsubPos1)

squared_dist1 = (xs11 - xs1)**2 + (ys11 - ys1)**2 + (zs11 - zs1)**2
dist1 = np.sqrt(squared_dist1)
print dist1

TypeError: unsupported operand type(s) for -: 'tuple' and 'tuple'

See if this works, assuming that aaa and bbb are normal Python lists of lists holding the x, y and z coordinates (or that you can convert to such, using tolist or something like that, perhaps). result will hold the 1-D array you are looking for.

result = []
for a, b in zip(aaa, bbb):
    dist = 0
    for i in range(3):
        dist += (a[i] - b[i])**2
    result.append(dist**0.5)
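The TypeError in the question arises because zip(*array) yields plain tuples, which don't support element-wise subtraction. Since the inputs are already NumPy arrays, a vectorized alternative to the looped answer is also possible (the sample coordinates below are truncated versions of the question's data, for illustration only):

```python
import numpy as np

a = np.array([[51618.7, 106197.8, 69647.6],
              [33864.2, 11757.3, 11849.9]])
b = np.array([[51850.9, 106004.0, 69536.5],
              [33989.9, 11847.1, 12255.8]])

# Row-wise Euclidean distance: subtract, square, sum over x/y/z, then sqrt.
dist = np.sqrt(((a - b) ** 2).sum(axis=1))
print(dist)  # one distance per row of the input arrays
```

This avoids the Python-level loops entirely and returns a 1-D array with one distance per point pair.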
https://codedump.io/share/L7eBDeLhJMoZ/1/creating-an-array-for-distance-between-two-3-d-arrays
android / platform / dalvik / 3740ab6c5b5544e26acb257bc092415b48bdef63 / vm / mterp / README.txt

Dalvik "mterp" README

NOTE: Find rebuilding instructions at the bottom of this file.

==== Overview ====

This is the source code for the Dalvik interpreter. The core of the original version was implemented as a single C function, but to improve performance we rewrote it in assembly. To make this and future assembly ports easier and less error-prone, we used a modular approach that allows development of platform-specific code one opcode at a time.

The original all-in-one-function C version still exists as the "portable" interpreter, and is generated using the same sources and tools that generate the platform-specific versions.

Every configuration has a "config-*" file that controls how the sources are generated. The sources are written into the "out" directory, where they are picked up by the Android build system.

The best way to become familiar with the interpreter is to look at the generated files in the "out" directory, such as out/InterpC-portstd.c, rather than trying to look at the various component pieces in (say) armv5te.

==== Platform-specific source generation ====

The architecture-specific config files determine what goes into two generated output files (InterpC-<arch>.c, InterpAsm-<arch>.S). The goal is to make it easy to swap C and assembly sources during initial development and testing, and to provide a way to use architecture-specific versions of some operations (e.g. making use of PLD instructions on ARMv6 or avoiding CLZ on ARMv4T).

Depending on architecture, instruction-to-instruction transitions may be done as either computed goto or jump table. In the computed goto variant, each instruction handler is allocated a fixed-size area (e.g. 64 bytes). "Overflow" code is tacked on to the end.
In the jump table variant, all of the instruction handlers are contiguous and may be of any size. The interpreter style is selected via the "handler-style" command (see below).

When a C implementation for an instruction is desired, the assembly version packs all local state into the Thread structure and passes that to the C function. Updates to the state are pulled out of "Thread" on return.

The "arch" value should indicate an architecture family with common programming characteristics, so "armv5te" would work for all ARMv5TE CPUs, but might not be backward- or forward-compatible. (We *might* want to specify the ABI model as well, e.g. "armv5te-eabi", but currently that adds verbosity without value.)

==== Config file format ====

The config files are parsed from top to bottom. Each line in the file may be blank, hold a comment (line starts with '#'), or be a command.

The commands are:

  handler-style <computed-goto|jump-table|all-c>

    Specify which style of interpreter to generate. In computed-goto, each handler is allocated a fixed region, allowing transitions to be done via table-start-address + (opcode * handler-size). With jump-table style, handlers may be of any length, and the generated table is an array of pointers to the handlers. The "all-c" style is for the portable interpreter (which is implemented completely in C). [Note: all-c is distinct from an "allstubs" configuration. In both configurations, all handlers are the C versions, but the allstubs configuration uses the assembly outer loop and assembly stubs to transition to the handlers.]

    This command is required, and must be the first command in the config file.

  handler-size <bytes>

    Specify the size of the fixed region, in bytes. On most platforms this will need to be a power of 2. For jump-table and all-c implementations, this command is ignored.

  import <filename>

    The specified file is included immediately, in its entirety. No substitutions are performed.
".cpp" and ".h" files are copied to the C output, ".S" files are copied to the asm output. asm-stub <filename> The named file will be included whenever an assembly "stub" is needed to transfer control to a handler written in C. Text substitution is performed on the opcode name. This command is not applicable to to "all-c" configurations. asm-alt-stub <filename> When present, this command will cause the generation of an alternate set of entry points (for computed-goto interpreters) or an alternate jump table (for jump-table interpreters). op-start <directory> Indicates the start of the opcode list. Must precede any "op" commands. The specified directory is the default location to pull instruction files from. op <opcode> <directory> Can only appear after "op-start" and before "op-end". Overrides the default source file location of the specified opcode. The opcode definition will come from the specified file, e.g. "op OP_NOP armv5te" will load from "armv5te/OP_NOP.S". A substitution dictionary will be applied (see below). alt <opcode> <directory> Can only appear after "op-start" and before "op-end". Similar to the "op" command above, but denotes a source file to override the entry in the alternate handler table. The opcode definition will come from the specified file, e.g. "alt OP_NOP armv5te" will load from "armv5te/ALT_OP_NOP.S". A substitution dictionary will be applied (see below). op-end Indicates the end of the opcode list. All kNumPackedOpcodes opcodes are emitted when this is seen, followed by any code that didn't fit inside the fixed-size instruction handler space. The order of "op" and "alt" directives are not significant; the generation tool will extract ordering info from the VM sources. Typically the form in which most opcodes currently exist is used in the "op-start" directive. For a new port you would start with "c", and add architecture-specific "op" entries as you write instructions. 
When complete it will default to the target architecture, and you insert "c" ops to stub out platform-specific code.

For the <directory> specified in the "op" command, the "c" directory is special in two ways: (1) the sources are assumed to be C code, and will be inserted into the generated C file; (2) when a C implementation is emitted, a "glue stub" is emitted in the assembly source file. (The generator script always emits kNumPackedOpcodes assembly instructions, unless "asm-stub" was left blank, in which case it only emits some labels.)

==== Instruction file format ====

The assembly instruction files are simply fragments of assembly sources. The starting label will be provided by the generation tool, as will declarations for the segment type and alignment. The expected target assembler is GNU "as", but others will work (may require fiddling with some of the pseudo-ops emitted by the generation tool).

The C files do a bunch of fancy things with macros in an attempt to share code with the portable interpreter. (This is expected to be reduced in the future.)

A substitution dictionary is applied to all opcode fragments as they are appended to the output. Substitutions can look like "$value" or "${value}". The dictionary always includes:

  $opcode - opcode name, e.g. "OP_NOP"
  $opnum - opcode number, e.g. 0 for OP_NOP
  $handler_size_bytes - max size of an instruction handler, in bytes
  $handler_size_bits - max size of an instruction handler, log 2

Both C and assembly sources will be passed through the C pre-processor, so you can take advantage of C-style comments and preprocessor directives like "#define".

Some generator operations are available.

  %include "filename" [subst-dict]

    Includes the file, which should look like "armv5te/OP_NOP.S". You can specify values for the substitution dictionary, using standard Python syntax.
    For example, this:

      %include "armv5te/unop.S" {"result":"r1"}

    would insert "armv5te/unop.S" at the current file position, replacing occurrences of "$result" with "r1".

  %default <subst-dict>

    Specify default substitution dictionary values, using standard Python syntax. Useful if you want to have a "base" version and variants.

  %break

    Identifies the split between the main portion of the instruction handler (which must fit in "handler-size" bytes) and the "sister" code, which is appended to the end of the instruction handler block. In jump table implementations, %break is ignored.

  %verify "message"

    Leave a note to yourself about what needs to be tested. (This may turn into something more interesting someday; for now, it just gets stripped out before the output is generated.)

The generation tool does *not* print a warning if your instructions exceed "handler-size", but the VM will abort on startup if it detects an oversized handler. On architectures with fixed-width instructions this is easy to work with; on others you will need to count bytes.

==== Using C constants from assembly sources ====

The file "common/asm-constants.h" has some definitions for constant values, structure sizes, and struct member offsets. The format is fairly restricted, as simple macros are used to massage it for use with both C (where it is verified) and assembly (where the definitions are used).

If a constant in the file becomes out of sync, the VM will log an error message and abort during startup.

==== Development tips ====

If you need to debug the initial piece of an opcode handler, and your debug code expands it beyond the handler size limit, you can insert a generic header at the top:

    b       ${opcode}_start
%break
${opcode}_start:

If you already have a %break, it's okay to leave it in place -- the second %break is ignored.

==== Rebuilding ====

If you change any of the source file fragments, you need to rebuild the combined source files in the "out" directory.
Make sure the files in "out" are editable, then:

  $ cd mterp
  $ ./rebuild.sh

As of this writing, this requires Python 2.5. You may see inscrutable error messages or just general failure if you have a different version of Python installed.

The ultimate goal is to have the build system generate the necessary output files without requiring this separate step, but we're not yet ready to require Python in the build.

==== Interpreter Control ====

The central mechanism for interpreter control is the InterpBreak structure that is found in each thread's Thread struct (see vm/Thread.h). There is one mandatory field, and two optional fields:

  subMode                      - required, describes debug/profile/special operation
  breakFlags & curHandlerTable - optional, used to lower subMode polling costs

The subMode field is a bitmask which records all currently active special modes of operation. For example, when Traceview profiling is active, kSubModeMethodTrace is set. This bit informs the interpreter that it must notify the profiling subsystem on each method entry and return. There are similar bits for an active debugging session, instruction count profiling, pending thread suspension request, etc.

To support special subMode operation, the simplest mechanism for the interpreter is to poll the subMode field before interpreting each Dalvik bytecode and take any required action. In fact, this is precisely what the portable interpreter does. The "FINISH" macro expands to include a test of subMode and a subsequent call to "dvmCheckBefore()".

Per-instruction polling, however, is expensive, and subMode operation is relatively rare. For normal operation we'd like to avoid having to perform any checks unless a special subMode is actually in effect. This is where curHandlerTable and breakFlags come into play.

The mterp fast interpreter achieves much of its performance advantage over the portable interpreter through its efficient mechanism of transitioning from one Dalvik bytecode to the next.
Mterp for ARM targets uses a computed-goto mechanism, in which the handler entrypoints are located at the base of the handler table + (opcode * 64). Mterp for x86 targets instead uses a jump table of handler entry points indexed by the Dalvik opcode.

To support efficient handling of special subModes, mterp supports two sets of handler entries (for ARM) or two jump tables (for x86). One handler set is optimized for speed and performs no inter-instruction checks (mainHandlerTable in the Thread structure), while the other includes a test of the subMode field (altHandlerTable).

In normal operation (i.e. subMode == 0), the dedicated register rIBASE (r8 for ARM, edx for x86) holds mainHandlerTable. If we need to switch to a subMode that requires inter-instruction checking, rIBASE is changed to altHandlerTable. Note that this change is not immediate. What is actually changed is the value of curHandlerTable - which is part of the interpBreak structure. Rather than explicitly check for changes, each thread will blindly refresh rIBASE at backward branches, exception throws and returns.

The breakFlags field tells the interpreter control mechanism whether curHandlerTable should hold the real or alternate handler base. If non-zero, we use the altHandlerBase. The bits within breakFlags tell dvmCheckBefore which set of subModes need to be checked.

See dvmCheckBefore() for subMode handling, and dvmEnableSubMode(), dvmDisableSubMode() for switching on and off.
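As a worked illustration of the config-file commands described earlier, a minimal config might look like the following. The file names and the single stubbed-out opcode are invented for the example, not taken from a real "config-*" file:

```
# hypothetical config for an ARMv5TE port
handler-style computed-goto
handler-size 64

import c/header.cpp
asm-stub armv5te/stub.S
asm-alt-stub armv5te/alt_stub.S

op-start armv5te
    op OP_NOP c
op-end
```

Here most opcodes come from the default "armv5te" directory, OP_NOP is stubbed out to its C implementation (so a glue stub is emitted in the assembly output), and an alternate set of entry points is generated for subMode polling.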
https://android.googlesource.com/platform/dalvik/+/3740ab6c5b5544e26acb257bc092415b48bdef63/vm/mterp/README.txt
Forms

Forms handle state, validation, and error handling so that the internal components don't have to worry about them. The Form component works by building up its fields, where the field name, initial value, label, and rendered children are defined. These fields can then be used within the Form.

A Form will generally look like this:

<Form {...{
  fields: {
    fieldName: {/*...*/}
  }
}}>
  {({fields}) => {
    return (
      <div>
        {fields.fieldName}
      </div>
    );
  }}
</Form>

Basic Forms

We can set an initialValue to pre-fill each field. When the form is reset before submitting, all fields will revert to their initialValue, if provided. In the Email Form below, the email field has been pre-filled to my@me.com. We use the Form's canSubmit function to control whether or not the "Subscribe" button is disabled. We attach the Form's reset function to the "Reset" button to allow it to reset the form's state.

Optional Fields

When you set optional: true on a field, it can affect both the appearance of the field and the behavior of the form. The text "(Optional)" is added to the field label, and the field is no longer considered required. To change the text that is added to the label, set the optionalText property within the field object. In the example below, we have set optional: true and optionalText: '(Opt)' for the Last Name field.

Inline Forms & Label Position

When you set the inline property to true, the label gets positioned next to the field instead of above it. By default, the label will appear to the left of the field, but you can set labelPosition: 'after' to place the label on the right.

Tooltips

The tooltip prop makes an icon with a tooltip appear next to the label. tooltipSize can be set to sm, md or lg in order to control its size. Its placement can be controlled via the tooltipPlacement prop with the following options: left, right, bottom, top.
Accessing the Form state

The Form's children have access to its state to determine how it wants to render. For example, one field can determine whether to hide or show another field. Or use one field to determine the contents of another field.

Client-side Validation

In this next example, we do two kinds of client-side validation. First, we define a validator prop on the first field to make sure that the password's length is 8 or more characters. validator functions take in the current value of the field that they are validating and return either an error message (if there is a validation error) or a falsy value (if there is no error). Next, to construct the "Save Password" button, we look at the current form state and render the button as disabled when state.current.password1 and state.current.password2 do not match.

The field is validated as the user enters a value. When the value is invalid, the canSubmit function will return false. The value will only be shown to be invalid after that field loses focus. As soon as the value becomes valid again, canSubmit will return true, and the value will be shown to be valid again.

Due to the above behavior, we recommend against using a validator on the final field of a Form. The experience can be jarring when a user wants to click the Submit button, but is unable due to a validation error that will only be shown after the field loses focus.

Field ids & label fors

All fields require an id. If you do not provide one, a unique id will be generated for you. It is used to set the for attribute on the corresponding <label> tag, so that the label is semantically connected to a specific input.
<Form {...{
  fields: {
    host: {label: 'Host'},
    path: {label: 'Path', children: <Input id="the-path"/>}
  }
}}>
  {({fields}) => (
    <div>
      {fields.host}
      {fields.path}
    </div>
  )}
</Form>

Composite fields

When the state of a field is best represented by a collection (e.g. an array or an object), use a composite field. An initial value must be provided in order for the Reset button to work properly.

onChange

An onChange callback needs to be provided for each input element. This callback should use the Form's onChange function to update its value. In the example below, both inputs have their own onChange callback. Use stopPropagation within onChange to stop the Form from overriding its composite value.

Form submission

Define an onSubmit prop on a Form to do something with the state values on submission. The onSubmit method is passed {state: {initial, current}}. The canSubmit function is available to help determine whether a form is ready for submission. It returns true if all required fields are filled and if all fields are different from their initial value. By default, a button within the Form that has type="submit" will trigger submission. This behavior can also be attached to another field that takes in the onSubmit, as shown below.

Form error handling

Define an onSubmitError handler to map error messages to a specific field. Return an object keyed by the field's name to determine where the error is shown. The error is attached to the first name field.

Using the FormUnit

To lay out a single form field without using a whole Form component, you can use the FormUnit component. The FormUnit component can decorate a field with a label, a tooltip, an "optional" indicator, and help text. Note that state management and other Form features are not handled by the FormUnit.
Examples

When inline is true, the label will be placed on the same line as the field.

When hasError is true, the field border and help text become red to indicate an error.

The postLabel can contain any node and will be positioned in the top-right corner of a non-inline form unit.

Props

Form props

FormUnit props

Imports

Import React components (including CSS):

import {Form, FormUnit} from 'pivotal-ui/react/forms';

Import CSS only:

import 'pivotal-ui/css/forms';
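The validator contract described under "Client-side Validation" - take the field's current value, return an error message when invalid and a falsy value when valid - can be sketched as a plain function. The function name and message below are illustrative, not part of the Pivotal UI API:

```javascript
// Illustrative validator: returns an error string for invalid input,
// and null (falsy) when the value passes.
const passwordValidator = (value) =>
  value && value.length >= 8 ? null : 'Password must be at least 8 characters';

// It would be attached to a field definition, roughly:
//   fields: {password1: {label: 'Password', validator: passwordValidator}}
console.log(passwordValidator('short'));      // 'Password must be at least 8 characters'
console.log(passwordValidator('longenough')); // null
```

Keeping validators as standalone functions like this also makes them easy to unit-test outside the component tree.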
https://styleguide.pivotal.io/components/forms/
It's recommended practice to change the mouse cursor to an hourglass wait cursor to inform the user that the program is working on something. This is easy enough to do when the programmer knows in advance that something is going to take a while. However, there are often circumstances where you can't predict how long something is going to take, so it's hard to know whether to display the hourglass or not. Ideally, you'd like the cursor to change to an hourglass automatically, as soon as your task had run for more than some time limit, e.g. 1/10 of a second.

I needed to do this for a project of mine, so I searched through my books and the internet, but couldn't find any applicable code. The closest thing I could find was an article explaining how to do it in Java. It didn't seem like it should be too tough to write a Windows/C++ version, but it ended up taking me longer than I expected to get it working properly, so I thought I'd share the results.

The basic idea I had was to create a secondary thread that would act as a "timer". The message loop would keep resetting the timer, so it normally wouldn't run out. However, if a task took longer, then the timer would run out and the secondary thread would change the cursor to an hourglass. It didn't take long to set this up, but mysteriously, it didn't work. A little debugging reassured me that my code was working properly, but that the SetCursor call in the secondary thread wasn't working. I guessed that SetCursor didn't work from secondary threads, but a search of the MSDN documentation and the internet didn't find anything about this. Finally, I posted a question to comp.os.ms-windows.programmer.win32 and went home.

The first few responses I got didn't really help, but the third response turned out to be the key. The message referred me to an article by Petter Hesselberg in the Nov. 2001 Windows Developers Journal - "The Right Time and Place for the Wait Cursor".
I couldn't find the article, but the source code was online, and the key turned out to be using AttachThreadInput. Once I added this to my code, things started to work.

I actually thought I was finished until I discovered that when I pulled down a menu (or brought up a context menu) the cursor changed to an hourglass. The problem was that while the menu was displayed, my message loop wasn't running and so the timer wasn't getting reset. After a certain amount of head scratching I came up with the idea of using a GetMessage hook to reset the timer, since I figured the menu must still be calling GetMessage (or PeekMessage). Sure enough this solved the menu problem. (And probably some related issues like modal dialogs.)

Again I thought I was finished, but I found one last glitch. Just before a tool-tip appeared, the cursor flashed to an hourglass and back. I guess tool-tips don't call GetMessage or PeekMessage while they wait. I fixed this by simply making my timer longer than the tool-tip timer.

My last task was to extract the code, which I'd mixed in with my message loop, into something more reusable. I ended up packaging it into a C++ class. To use the class you simply have to create an instance of it inside your message loop. Something like:

#include "awcursor.h"
...
while (GetMessage(&msg, NULL, 0, 0)) {
    AutoWaitCursor awc;
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}

The AutoWaitCursor constructor "starts" the timer, and the destructor (called automatically at the end of the loop) restores the cursor if necessary. The constructor also looks after creating the thread and the hook the first time around.

If you look at the code, you may notice that the only concession I've made to multi-threading is declaring several of the variables volatile. I don't make any attempt to synchronize the threads or prevent them from accessing variables at the same time.
This was a deliberate choice, because I wanted to make the code as fast as possible in order to not add overhead to the message loop. How do I justify this? First, the variables are plain integers, and therefore reading and writing them is atomic anyway. Second, if in some rare case a synchronization problem occurs, the worst that can happen is that the cursor might be wrong momentarily. This is a small price to pay to keep the code simple and fast. In practice I haven't seen any problems.

For simplicity, I've included the definition/initialization of the static class members in the header file. This isn't the best setup since it means you can't include this header in more than one source file. Ideally, you'd put the definitions in a separate .cpp file. However, you normally only have one message loop in your program anyway, so this setup seems acceptable.

I hope you find the code and the explanation useful. Good luck!

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here.

while (GetMessage(&msg, NULL, 0, 0)) {
    CWaitCursor wait;
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}

myView.h:

private:
    HCURSOR m_hCursor;
    bool m_bMyCursorShape;

myView.cpp:

OnInitialUpdate()
{
    ...
    SetClassLong(GetSafeHwnd(), GCL_HCURSOR, (LONG) NULL);
    ...
}

OnSetCursor(CWnd *pWnd, UINT nHitTest, UINT message)
{
    if (m_bMyCursorShape)
    {
        m_hCursor = LoadCursor(NULL, IDC_WAIT);
        SetCursor(m_hCursor);
    }
    else
    {
        m_hCursor = LoadCursor(NULL, IDC_ARROW);
        SetCursor(m_hCursor);
    }
    return CView::OnSetCursor(pWnd, nHitTest, message);
}
http://www.codeproject.com/Articles/1699/Auto-Wait-Cursor?msg=260041
Updated 19 Sep 2019: new sketch for Nano (with keyboard) to correct CV1 write of TCS decoders. Please use sketch 'Command_Station_with_keypad_v2.ino'.

Updated 1 June 2019: use of Adafruit libraries that now work with the ILI9341 / 2.2 inch 240x320 Serial SPI TFT LCD Display. In the Arduino sketch, replace:

#include "Adafruit_GFX_AS.h"      // Core graphics library
#include "Adafruit_ILI9341_AS.h"  // Hardware-specific library

with:

#include "Adafruit_GFX.h"      // Core graphics library
#include "Adafruit_ILI9341.h"  // Hardware-specific library

Download these libraries on GitHub here and here.

Minor changes to lines 45 to 47 within the sketch - new version attached in step 'Arduino Nano With TFT LCD Display' (Command_Station_Display_jun_2019.ino)

------------------------------------------------------------------

Update 11 May 2019: amended sketch to display the opening screen after a 4 second delay. Download updated file 'Command_Station_with_keypad_11_may_2019.ino' in step 2.

Update 2 May 2019: modified sketch to fix a bug in E-stop where moving the speed pot started loco motion. Download latest sketch 'Command_Station_with_keypad_may_2019.ino' in step 2.

Update 8 April 2019: modified sketch to eliminate a timing issue when sending repeat data packets. Please download latest sketch 'Command_Station_with_keypad_apr_2019.ino' in step 2.

Update 27 Jan 2019: now includes the option of momentary F2/F3 functions for horn sounds. If not required, simply comment out the new lines of code in void loop():

delay(100);
if (f2a[locoAdr] == 1){ // momentary action on F2
  fun2 = 0;
}
if (f3a[locoAdr] == 1){ // momentary action on F3
  fun3 = 0;
}

Update 16 Aug 2018: now includes a CV1 write function to programme loco address numbers. This is a software modification only, with no change to hardware. See the updated Arduino Nano sketch. To access this CV1 write facility, hold down button 'A' on the membrane keypad while turning on power. Ensure that only the loco to be addressed is on the track. Choose a loco number 1-19 from the keypad, then press 'A' again to programme the new value for CV1. See step 3 for more details.

The TFT display used here requires a voltage divider from 5v to 3v on the input pins; 1k0 and 1k5 ohm values work fine. Check your TFT requirements - if it can take 5v inputs, the 1k5 resistors are not required.

PORT 'case' and 'return' rather than 'if'

'DCC Command Station PCB' (now available from August 2018)
Choose loco number 1-19 from the keypad, then press 'A' again to programme the new value for CV1. See step 3 for more details.

The TFT display used here requires a voltage divider from 5v to 3v on the input pins; 1k0 and 1k5 ohm values work fine. Check your TFT requirements - if it can take 5v inputs, the 1k5 resistors are not required.

PORT 'case' and 'return' rather than 'if'

'DCC Command Station PCB' (now available from August 2018)

Step 1: Components

All available on eBay:

1 off PCB eBay link
4x4 Matrix Array 16 Key Membrane Switch Keypad £1. eBay
2.2 inch 240x320 Serial SPI TFT LCD Display Module £5 eBay
UNIVERSAL 12/14V 5A 60W POWER SUPPLY AC ADAPTER £7 eBay
Nano V3.0 For Arduino with CH340G 5V 16M compatible ATmega328P 2 x £3.50 = £7 eBay
Motor Driver Module LMD18200T for Arduino £5. eBay
*Voltage Regulators: 3v3 (UA78M33C) £2.53 eBay
*Voltage Regulators: 9v (7809) in TO220 packages £1 eBay
3v3 zener diode £1 for 10 eBay
1N4004 rectifier diode £1 for 10 eBay
1N5817 Schottky diode £1 for 50 eBay
Connectors, wire, vero board, resistors, potentiometer: approx £3
PCB (available on e-bay from Aug 2018) £5.50
Box / Enclosure

*Note: Voltage regulators will run at 23 degrees C above ambient and may require a heatsink, depending on air circulation within the enclosure. If Vsupply is 14v, a temp rise of 36 degrees C above ambient on the 9v regulator will probably need a heat sink.
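As a quick check on the divider arithmetic above (a sketch of the standard formula only; the function name is mine, not from the project):

```c
/* Unloaded resistive divider: Vout = Vin * R2 / (R1 + R2).
 * With R1 = 1k0 on top and R2 = 1k5 to ground, 5 V logic lands
 * at 3.0 V, within range for a 3v3 TFT input. */
double divider_out(double vin, double r1, double r2) {
    return vin * r2 / (r1 + r2);
}
```

So a 5 V Nano output through the 1k0/1k5 pair gives 3.0 V at the display pin.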
Step 2: Arduino Nano With Keypad

Serial communications between the Nanos use analog pins A3/A4:

#include "SoftwareSerial.h" // download this library
SoftwareSerial dcc(A3,A4); // RX TX

The 4x4 membrane keypad is configured in the sketch as follows:

#include "Keypad.h" // download this library
byte colPins[COLS] = { ...,11}; //connect to the column pinouts of the keypad
Keypad keypad = Keypad( makeKeymap(keys), rowPins, colPins, ROWS, COLS );
char key = keypad.getKey();

void SetupTimer2()

This void starts the clock, which is amended by the DCC binary code to provide a DCC signal.

Loco/speed commands are 4 bytes each:

byte 1 = loco number
byte 2 = speed steps
byte 3 = speed value / direction / front light on/off
byte 4 = checksum (error)

Function commands are 3 bytes:

byte 1 = loco number
byte 2 = function (F1 to F8) on/off
byte 3 = checksum (error)

A current sense resistor (0.1 ohm, 2 watt) monitors the current through the h-bridge. The LMD18200T module used has a max current of 3 Amps; I have set the working limit to 2.6 Amps in the sketch:

void current(){
  sensorValue = analogRead(AN_CURRENT);
  // Convert the analog reading (which goes from 0 - 1023) to a voltage (0 - 5V) = 4.9mv per division
  // 0.1 ohm resistor on current sense gives 200mv at 2 Amps or, 100mv per Amp
  // 100/4.9 = 20 divisions per Amp or 20 divisions per 1000mA or 1 division = 50mA
  A = 50 * sensorValue ; // mA
}

Within void setup():

Amax = 2600; // max current

Within void read_loco():

if (A >= Amax){ // current limit
  Atrig = 1;
  changed_l = true;
  return changed_l;
}

The system can control 1 - 19 engines, direction, lights, 8 functions, emergency stop and auto current limit. The max current of 2.6 Amps makes it suitable for all scales including G-scale (garden trains). The mains power supply and electronics are suitable for indoor use only, unless you can make it all weatherproof. I have the command station in the summer house with rail connecting wires running out through the wall to the track.
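The current-sense arithmetic in void current() above works out to roughly 50 mA per ADC division; as a standalone check (the helper name is mine, not from the sketch):

```c
/* 10-bit ADC with a 5 V reference reads ~4.9 mV per division.
 * Across the 0.1 ohm shunt that is ~49 mA per division, which the
 * sketch rounds to 50 mA: A = 50 * sensorValue. */
long shunt_milliamps(long sensor_value) {
    return 50L * sensor_value;
}
```

At the 2.6 A working limit the trip point therefore corresponds to a raw ADC reading of 52.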
Step 3: CV1 Loco Address

Holding the 'A' key down while turning on power will provide access to the CV1 write feature. This is achieved in the Arduino Nano sketch through a sequence of reset, page-preset and write packets as specified in NMRA S-9.2.3...

//CV1 coding for Address only mode
void cv1_prog(){
  4 x (preamble (16 x '1') plus reset_packet) // 3 or more required
  6 x (preamble (16 x '1') plus page_preset_packet) // 5 or more required
  7 x (preamble (16 x '1') plus reset_packet) // 6 or more required
  6 x (long preamble (24 x '1') plus cv1_write_packet()) // 5 or more required
  11 x (long preamble (24 x '1') plus cv1_write_packet()) // 10 or more required

void reset_packet()
Creates a 'preamble' of 2 bytes of (8 x '1') followed by 3 bytes of '0'

void page_preset_packet()
Creates a 'preamble' of 2 bytes of (8 x '1') followed by 3 bytes 01111101 00000001 01111100

void cv1_write_packet()
Creates a 'long preamble' of 3 bytes of (8 x '1') followed by 3 bytes 0111C000 0DDDDDDD EEEEEEEE
C=1 write address used here (C=0 verify address)
Value of DDDDDDD is written into CV1

The decoder must be powered off after each CV1 write and switched on again prior to another CV1 write. This is done by switching the pins on the h-bridge:

Turn ON : PWM = High : Brake = Low
Turn OFF : PWM = Low : Brake = High

To sense the smaller currents, the analog internal reference (1.1v) is used during the CV1 write; 1.1v gives 1.08 mv per division, so current Acv = 10.8 x sensorValue (in mA) using the 0.1 ohm sense resistor. If Acv > 15mA then a valid CV1 write is achieved.
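The EEEEEEEE error byte above, like the 'checksum (error)' byte in step 2's packet layouts, is defined by NMRA S-9.2 as the exclusive-OR of all the packet bytes that precede it — a minimal sketch:

```c
#include <stddef.h>

/* NMRA S-9.2 error-detection byte: XOR of the address and
 * instruction bytes that precede it in the DCC packet. */
unsigned char dcc_checksum(const unsigned char *bytes, size_t n) {
    unsigned char x = 0;
    for (size_t i = 0; i < n; i++)
        x ^= bytes[i];
    return x;
}
```

For a 3-byte function command, for example, byte 3 is the XOR of bytes 1 and 2; the decoder recomputes it and discards any packet where it does not match.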
Step 4: Arduino Nano With TFT LCD Display

The Nano used to control the TFT display is configured as follows:

// wac 1 June 2019 used updated Adafruit libraries
#include "SoftwareSerial.h"
SoftwareSerial dcc(A3,A4); // RX TX
#include "Adafruit_GFX.h" // Core graphics library
#include "Adafruit_ILI9341.h" // Hardware-specific library
#include "SPI.h"
// hardware SPI pins for Nano
#define _sck 13
#define _mosi 11
#define _cs 10
#define _dc_rs 9
#define _reset 7
Adafruit_ILI9341 tft = Adafruit_ILI9341(_cs, _dc_rs, _reset);

A valid command received from the Nano with keypad must start with 'A', end with '\n' and have 18 ',' characters. Each valid command is decoded and the result displayed.

4 People Made This Project! dave3054 made it! ve3vxy made it! tao97 made it! flyingalan made it!

38 Discussions

Question 4 months ago
Hi Bill, I cannot write CV1 to TCS decoder; after pressing "A", it displays the current, 21mA.

Answer 24 days ago
Hi Tao, I have solved the problem with TCS decoders on CV1 write. Please download the latest Arduino sketch 'Command_Station_with_keypad_7_sep_2019.ino' Sorry it took me so long !!

Reply 21 days ago
Hi Bill, Thanks for your help! The new version solved the problem, I can run more than one loco in the track. Thks! Tao

Reply 24 days ago
Thanks! I'll try it on Monday.

Answer 4 months ago
Hi Tao, How are you connecting the TCS decoder to the DCC Command Station? It must have the loco motor on its output to change CV1. The decoder must be in the loco, or it should be ok with a 150 ohm resistor between the motor wires. Hope this helps.

Reply 4 months ago
Hi Bill, I can control the engine at the default address; how to connect the 150 ohm resistor? Parallel or series to the motor?

Reply 4 months ago
Hi Tao, If the decoder is installed in the loco, you will not need the resistor. What value are you trying to give CV1? Ensure the track and wheels are clean. Not sure what else to suggest. I am away on holiday till mid next week.
I will have another look at this problem then.

Reply 3 months ago
Hi Tao, Do you still have issues with the writing of CV1 on a TCS decoder? I could look at increasing the time delay between write packets to try and resolve this.

Reply 3 months ago
I can't solve the problem yet.

Reply 3 months ago
Hi Tao, please try the attached file 'Command_Station_with_keypad_11_july_2019.ino' to check if CV1 can be changed on your TCS decoder. Thanks.

Question 2 months ago
Hi Bill, dave3054 again. I noticed that you uploaded the two older libraries for another question a while ago. I have used them both in the Jan 2019 sketch and the whole thing works perfectly! Points, functions and locos working a treat. Even better now that I have turned my screen round, as it was upside down :)

Answer 2 months ago
Hi Dave, sorry I did not reply sooner. However, you seem to have found a way round the problem! I was going to suggest the issue with the Arduino loading may have been due to the latest Arduino IDE update, which includes a faster loader for Nanos. The older or clone Nanos require us to choose the older loader from the file menu. Your success using the older files makes that unlikely, though. There may have been a problem with the downloaded libraries for the newer version? Thanks for your support and enjoy using your completed project.

Reply 8 weeks ago
Thanks for your reply Bill. The controller is flipping awesome. Works a treat. All the best, Dave

Question 2 months ago on Step 4
Hi Bill, I have your DCC command station board and have fitted all the components. I have downloaded the latest sketches. The "with keypad may 2019" sketch compiles and uploads fine. The "display June 2019" will not compile, error "exit1 error compiling for Arduino Nano". I assume I need both sketches, one for each Nano - or has my stupidity reached new levels? Any advice please. Kind regards, Dave

Question 4 months ago
Bill, by the way, I cannot find the library Adafruit_ILI9341_AS.
Answer 4 months ago
Thanks again, Bill. Have a nice day. Juan.

Answer 4 months ago
Hi Juan, I have attached a zip file of the library used in the sketch. If the attachment does not come through I can send it directly by email. My email is billc 'at' john-lewis.com

Reply 4 months ago
Thank you very much, Bill. Another question: can I manage in some way the DCC Station without the display? Regards, Juan.

Reply 4 months ago
Hi Juan, I have attached the Adafruit GFX AS library for you. You could use the DCC system without the display if you want to try it out using the keypad only.

4 months ago
Hello Bill, I have finally started the construction of the DCC Command Station. It's almost finished, but I have a doubt: instead of your TFT screen I have this shield, whose photos I send you. Can I use my screen? How should the connections be? Thank you very much for your help. Juan.
https://www.instructables.com/id/Model-Railroad-DCC-Command-Station/
del_curterm, restartterm, set_curterm, setupterm - interfaces to the terminfo database

#include <term.h>

int del_curterm(TERMINAL *oterm);
int restartterm(char *term, int fildes, int *errret);
TERMINAL *set_curterm(TERMINAL *nterm);
int setupterm(char *term, int fildes, int *errret);

These functions retrieve information from the terminfo database. To gain access to the terminfo database, setupterm() must be called first. It is automatically called by initscr() and newterm().

The setupterm() function initialises the other functions to use the terminfo record for a specified terminal (which depends on whether use_env() was called). It sets the cur_term external variable to a TERMINAL structure that contains the record from the terminfo database for the specified terminal. The terminal type is the character string term; if term is a null pointer, the environment variable TERM is used. If TERM is not set or if its value is an empty string, then "unknown" is used as the terminal type. The application must set fildes to a file descriptor, open for output, to the terminal device, before calling setupterm(). If errret is not null, the integer it points to is set to one of the following values to report the function outcome:

- -1 - The terminfo database was not found (function fails).
- 0 - The entry for the terminal was not found in terminfo (function fails).
- 1 - Success.

If setupterm() detects an error and errret is a null pointer, setupterm() writes a diagnostic message and exits. A simple call to setupterm() that uses all the defaults and sends the output to stdout is:

setupterm((char *)0, fileno(stdout), (int *)0);

The set_curterm() function sets the variable cur_term to nterm, and makes all of the terminfo boolean, numeric, and string variables use the values from nterm.

The del_curterm() function frees the space pointed to by oterm and makes it available for further use. If oterm is the same as cur_term, references to any of the terminfo boolean, numeric, and string variables thereafter may refer to invalid memory locations until setupterm() is called again.

The restartterm() function assumes a previous call to setupterm() (perhaps from initscr() or newterm()). It lets the application specify a different terminal type in term and updates the information returned by baudrate() based on fildes, but does not destroy other information created by initscr(), newterm() or setupterm().

Upon successful completion, set_curterm() returns the previous value of cur_term. Otherwise, it returns a null pointer. Upon successful completion, the other functions return OK. Otherwise, they return ERR.

No errors are defined.

An application would call setupterm() if it required access to the terminfo database but did not otherwise need to use Curses.

baudrate(), erasechar(), has_ic(), longname(), putc(), termattrs(), termname(), tgetent(), tigetflag(), use_env(), <term.h>.
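A caller that passes a non-null errret can report failures itself rather than letting setupterm() print a diagnostic and exit; a minimal sketch of that pattern (the helper and its message strings are mine, the codes are from the list above):

```c
/* Map the errret value filled in by setupterm() to a message.
 * -1: no terminfo database; 0: no entry for the terminal; 1: success. */
const char *setupterm_outcome(int errret) {
    switch (errret) {
    case -1: return "terminfo database not found";
    case  0: return "entry for terminal not found in terminfo";
    case  1: return "success";
    default: return "unexpected errret value";
    }
}

/* Typical use:
 *   int err;
 *   if (setupterm((char *)0, fileno(stdout), &err) == ERR)
 *       fprintf(stderr, "setupterm: %s\n", setupterm_outcome(err));
 */
```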
http://www.opengroup.org/onlinepubs/007908799/xcurses/del_curterm.html
Re: Creating individual .pub files - From: Christiaan <Christiaan@xxxxxxxxxxxxxxxxxxxxxxxxx> - Date: Fri, 18 Apr 2008 11:35:03 -0700 Sorry for taking so long to respond but your suggestions does make a lot of sense and the links will be easier to manage your way. I see what you are saying about the amount of pages per section so I am now faced with a decision. My site is having a slow start but yes, if that changes, the sections will grow very rapidly and I will have to restructure the site. I think that while my site is currently published, rather do the big work now. Thanks for your and Mike's help, I will shout if I need anything else. "DavidF" wrote: .. Your current home page: Current link to Limpopo page: Link after creating new Pub file, Limpopo.pub, saving home page as limpopo.htm instead of index.htm or limpopopage.htm when you Publish to the web, and uploading to a new folder on your host called "limpopo": I would suggest that you consider getting away from using uppercase, unless you can be very consistent. I also don't know why you couldn't use limpopo.htm instead of limpopopage.htm??? I just don't see the need to use the "page" part. Unless you have a good reason for that, I think the shorter version is better. Plus one of the reasons to use "limpopo.htm" vs "index.htm" is that in theory it will help with search engine optimzation SEO. It seems to me that the search engines might like to see the name of the province, and not provincepage. The main thing is it would be shorter... And keep in mind what Mike said, it can be a good thing to use "index" for the default home page for many reasons. Then if I understand you correctly, the individual cities and all the links from those cities will be part of the limpopo Pub file. 
So if you follow the link to Ellisras that link right now is: Now as a subpage of the limpopo Pub file the link will be:

Don't forget that when you Publish to the Web, and if you choose something other than index as the default file name, then the index_files subfolder (that contains your other pages and all the graphics) is changed to reflect the file name. So when you save the home page as limpopo.htm you will get a limpopo_files folder containing the other pages and the graphics.

So now I also assume that the accommodation page for Ellisras is part of the limpopo.pub file, so that link is now: and would become: Etc, etc...

So you would be working with 12 Publisher files and 11 subfolders on your host. Your home page will be uploaded to exactly where it is now. It will be a single page, but will have both an index.htm file and an index_files folder. One subfolder will be called "aboutus", another "advertisewithus". The other 9 subfolders on your host will be named "limpopo"...after the province name. All these subfolders will be located at the same level in the directory as the index_files folder is for the home page.

Now is that what you are proposing and does the way the links are written make sense to you? Your organization seems logical and will make it easier to manage. If you think the province Pub files will end up being more than 30 or so pages, I would suggest that you consider an alternative. Keep all the main sections with the home page in one pub file of 12 pages, and produce each city's content with a separate pub file and their own subfolder on your host. Yeah, a lot more initial work, but as you add content to the cities and even more cities, it will be easier to manage...just thinking out loud. Isn't the city content the most dynamic part of your site?

DavidF

"Christiaan" <Christiaan@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:1EDAE578-B4FE-46EF-85A3-46C2B36B4828@xxxxxxxxxxxxxxxx

Hell it's nice to have so much help, thanks guys!!!
I think what I am going to do now: I am going to make my homepage a single .pub file. Each of the 9 provinces consists of a provincial "homepage" and about 30 pages of content that relates to each province. For instance: the Limpopo "homepage" will be linked from my site homepage and the content of Limpopo will be linked from the Limpopo "homepage", but the Limpopo "homepage" will still be called Limpopo.htm. I think that I should give the about us and the advertise with us pages their own .pub files because they are going to be static. So in effect the site should look something like this:

index.htm (Site homepage)
Limpopo.htm
Mpumalanga.htm
Gauteng.htm
Northwest.htm
Freestate.htm
Kwazulunatal.htm
Northerncape.htm
Easterncape.htm
Westerncape.htm
advertisewithus.htm

I will build the nav bar myself with some text or small images. I am just not sure how the link will look that links index.htm (site homepage) to the provincial "homepages". Am I in the right direction?

"DavidF" wrote:

Now I understand. I actually addressed the issue of relative vs. absolute links in the looooong thread and discussion we had before (which I doubt you read...way too long and wordy). I hope he realizes that the primary challenge he faces is the navigation system. It needs to be such that adding a page does not require going back and changing 200 plus pages. Also he does use custom file names instead of index.htm already, so that shouldn't be a big change. I am sure you are busy, but if you or anyone else have any ideas of how best to organize his site, it is at. About the only argument I can think of for breaking it up to one page per Pub file is that if he eventually decided to switch to a server side database, having the individual pages could make the conversion easier. A bit off topic, but do you know any way to import a textual navbar, and specifically one that looks like the typical bottom navbar, into a page?
I currently import a javascript menu to each page that only requires changing the javascript in one folder on my host, and the change is reflected in each page. I can tell Christian how to do that, but I haven't figured out how to import a textual navbar at the bottom of each page. And if anybody else has an idea, I would appreciate it. DavidF "Mike Koewler" <wordwiz@xxxxxxxx> wrote in message news:82eac$48050953$42a1fb54$21771@xxxxxxxxxxx David, Unless Christian is anal retentive like me, he has used relative links, so copying and pasting text or object with links will mean they are broke. He will also have to be very careful renaming pages so he doesn't end up with a couple dozen index.htm pages. I faced this problem at the end of 2006. For week after week, I kept adding pages of news, links to other groups, etc. Suddenly my file was out of control with over 300 pages. So I copied just the main pages to a new file and used absolute links instead of relative ones. I then deleted all of those pages and uploaded the file to /2006. Then I created a /2007 folder and file and kept the copied pages as my main file. It gets updated to my root directory, which has links to the 2006 and now 2007 archives. Even now, once a file gets to about 60 pages, I start a new one with only the skeleton pages and rename it something like 2008April.wpp. I can do incremental updates without deleting files so that isn't a problem. I do something similar with a site I do for my church. Since I add at least 12 pages a month, it was becoming way too large also. Now stuff goes into a 2008 folder or a special events folder. Yeah, I have more native files, but they stay relatively small in size. Also, and I don't know if Pub allows this or not, I can insert an "offsite" page and have it automatically included in the Nav Bar Mike DavidF wrote: Mike, Thanks for jumping in here. Would you please elaborate on why you say it would be a disaster? 
Also, can you suggest a "best" way for Christiaan to break his site up and organize it. Single pages/single pub files seems overkill to me too, but I am curious about why you said it. Thanks.

DavidF

"Mike Koewler" <wordwiz@xxxxxxxx> wrote in message news:17733$4804126b$42a1fb54$5706@xxxxxxxxxxx

Christiaan wrote: I am going to make each page an individual .pub file.

You are looking at a disaster waiting to happen if you do this.

Mike
http://www.tech-archive.net/Archive/Publisher/microsoft.public.publisher.webdesign/2008-04/msg00247.html
Let's Build Web Components! Part 6: Gluon

Benny Powers · Oct 28 '18 · Updated on Jan 03, 2019

Today we'll implement <gluon-lazy-image> using @ruphin's Gluon library. Like LitElement, Gluon components use lit-html to define their templates, but the Gluon base class is much "closer to the metal": it prefers to remain lightweight, leaving fancy features like observed or typed properties up to the user. If you didn't catch last week's article on lit-html and LitElement, take a look now before we dive in.

- <gluon-lazy-image>
- Element Template
- Properties and Attributes
- Rendering and Lifecycle
- Other Niceties
- Complete Component

<gluon-lazy-image>

Our refactor of <gluon-lazy-image> will be, as you might have expected, a mashup of the vanilla <lazy-image> component with <lit-lazy-image> from last week. Let's start by importing our dependencies and defining our class.

import { GluonElement, html } from '/node_modules/@gluon/gluon/gluon.js';

class GluonLazyImage extends GluonElement {/*..*/}

customElements.define(GluonLazyImage.is, GluonLazyImage);

One small convenience to notice right off the bat is that Gluon prepares a static is getter for us that returns the camel-cased class name. It's a small kindness, but will make refactoring easier if we ever decided to change our element's name. Of course, if we wanted to override the element name, we could just override the static getter.
Element Template

Next up, we'll define the template in an instance getter:

class GluonLazyImage extends GluonElement {
  get template() {
    return html`<!-- template copied from LitLazyImage -->`;
  }
}

Properties and Attributes

For the properties, we'll implement observedAttributes and property setters ourselves, just like we did with vanilla <lazy-image>:

static get observedAttributes() {
  return ['alt', 'src'];
}

/**
 * Implement the vanilla `attributeChangedCallback`
 * to observe and sync attributes.
 */
attributeChangedCallback(name, oldVal, newVal) {
  switch (name) {
    case 'alt': return this.alt = newVal;
    case 'src': return this.src = newVal;
  }
}

Rather than declaring types statically, note how we coerce the value in the setter; this is how you do typed properties with Gluon.

/**
 * Whether the element is on screen.
 * @type {Boolean}
 */
get intersecting() {
  return !!this.__intersecting;
}

Just like in vanilla <lazy-image>, we'll use guarded property setters to reflect to attributes.

/**
 * Image alt-text.
 * @type {String}
 */
get alt() {
  return this.getAttribute('alt');
}

set alt(value) {
  if (this.alt != value) this.setAttribute('alt', value);
  this.render();
}

Rendering and Lifecycle

Gluon elements have a render() method which you call to update the element's DOM. There's no automatic rendering, so you should call render() in your property setters.

set intersecting(value) {
  this.__intersecting = !!value;
  this.render();
}

set src(value) {
  if (this.src != value) this.setAttribute('src', value);
  this.render();
}

render() batches and defers DOM updates when called without arguments, so it's very cheap. render() returns a promise. You can also force a synchronous render with render({ sync: true }). The notion of component lifecycle is similarly simplified. Rather than introduce new callbacks like LitElement does, if you want to manage your element's DOM etc, you just wait on the render() promise.
const lazyImage = document.querySelector('gluon-lazy-image');

(async () => {
  // Force and wait for a render.
  await lazyImage.render();
  // Do whatever you need to do with your element's updated DOM.
  console.log(lazyImage.$.image.readyState);
})();

Other Niceties

Gluon will pack your element's $ property with references to id'd elements in the shadow root at first render. So in our case we could get lazyImage.$.image or lazyImage.$.placeholder if we needed references to the inner image or placeholder elements.

Also, like LitElement, you can override the createRenderRoot class method to control how your component renders. Return this to render your component's DOM to the Light DOM instead of in a shadow root:

class LightElement extends GluonElement {
  get template() {
    return html`Lightness: <meter min="0" max="1" value="1"></meter>`;
  }

  createRenderRoot() {
    return this;
  }
}

Complete Component

import { GluonElement, html } from '';

const isIntersecting = ({isIntersecting}) => isIntersecting;

class GluonLazyImage extends GluonElement {
  static get observedAttributes() {
    return ['alt', 'src'];
  }

  /**
   * Implement the vanilla `attributeChangedCallback`
   * to observe and sync attributes.
   */
  attributeChangedCallback(name, oldVal, newVal) {
    switch (name) {
      case 'alt': return this.alt = newVal;
      case 'src': return this.src = newVal;
    }
  }

  /**
   * Whether the element is on screen.
   * Note how we coerce the value,
   * this is how you do typed properties with Gluon.
   * @type {Boolean}
   */
  get intersecting() {
    return !!this.__intersecting;
  }

  set intersecting(value) {
    this.__intersecting = !!value;
    this.render();
  }

  /**
   * Image alt-text.
   * @type {String}
   */
  get alt() {
    return this.getAttribute('alt');
  }

  set alt(value) {
    if (this.alt != value) this.setAttribute('alt', value);
    this.render();
  }

  /**
   * Image URI.
   * @type {String}
   */
  get src() {
    return this.getAttribute('src');
  }

  set src(value) {
    if (this.src != value) this.setAttribute('src', value);
    this.render();
  }

  /**
   * Whether the image has loaded.
   * @type {Boolean}
   */
  get loaded() {
    return this.hasAttribute('loaded');
  }

  set loaded(value) {
    value ? this.setAttribute('loaded', '') : this.removeAttribute('loaded');
    this.render();
  }

  constructor() {
    super();
    this.observerCallback = this.observerCallback.bind(this);
    this.intersecting = false;
    this.loading = false;
  }

  connectedCallback() {
    super.connectedCallback();
    this.setAttribute('role', 'presentation');
    this.dispatchEvent(new CustomEvent('loaded-changed', {
      bubbles: true,
      composed: true,
      detail: { value: true },
    }));
  }
}

customElements.define(GluonLazyImage.is, GluonLazyImage);
I'm excited to dive in to hybrids. I'm planning on taking longer than just one week to prepare my writeup, since I'm traveling now. Expect my questions soon
https://dev.to/bennypowers/lets-build-web-components-part-6-gluon-27ll
Flex and Soap Mismatch
joshua_shizny, Oct 27, 2009 3:20 PM

I have a problem with soap and flex 3. I have created a webservice through the import webservice menu in Flex Builder. If I use the service as is I get a security error because the crossdomain policy on the remote server doesn't comply. So, instead I am using a php proxy to relay the webservice through my server and out to the webservice, back to the server, back to Flex. When I try to do this I get a SOAP mismatch error coming from the below code.

else if (envNS.uri != SOAPConstants.SOAP_ENVELOPE_URI)
{
    throw new Error("SOAP Response Version Mismatch");
}

I went back in and checked the value of envNS.uri and SOAPConstants.SOAP_ENVELOPE_URI in both the previously described situations (php proxy and straight security riddled call). In the security riddled call the two variables match. In the proxy call I get back differing values of envNS.uri and SOAPConstants.SOAP_ENVELOPE_URI. Can somebody tell me why the variables are not matching when put through the php proxy. The php is simple, just curl, so I've pasted it below.

///////START PHP SNIPPET
$url = $_GET['url'];
$headers = $_GET['headers'];
$mimeType = $_GET['mimeType'];

//Start the Curl session
$session = curl_init();

// Don't return HTTP headers. Do return the contents of the call
curl_setopt($session, CURLOPT_URL, $url);
curl_setopt($session, CURLOPT_HEADER, ($headers == "true") ? true : false);
curl_setopt($session, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
curl_setopt($session, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($session, CURLOPT_RETURNTRANSFER, 1);

// Make the call
$response = curl_exec($session);

if ($mimeType != "") {
    // The web service returns XML. Set the Content-Type appropriately
    header("Content-Type: ".$mimeType);
}

echo $response;
curl_close($session);
//END PHP SNIPPET

Any help would be great.

Thanks,
Josh

1. Re: Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 5:23 PM (in response to joshua_shizny)

Some more info...
I'm running everything over https. When I create the wsdl code from the flex builder wizard, I have to select an alternative port to connect with the soap1.1 version (on the same screen where you specify the services you want to connect to). Is it possible that when I run the php proxy and curl I somehow lose the correct port to connect to 1.1 and get a 1.2 response back? If so, anybody know how I could correct that?

2. Re: Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 6:41 PM (in response to joshua_shizny)

I just don't get the overall picture here with this wsdl stuff with flex, so I've got some more questions and more info. Firstly, I don't have a services-config.xml file... Do I need one? If so how, where do I create it. I've seen a bunch of partial info on the subject but nothing really thorough. All my stuff is going from https to https; the site is hosted on https and the service is located on https. Does that matter? I have control over my server and can put cross-domain files on it. I have a proxy written with curl trying to relay the call to the service. Do I need to do anything special with that because I am going across https? I guess the port deal selector on the flex wsdl creation wizard is really a namespace and not a more traditional communication port number.. Is that correct?

3. Re: Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 6:42 PM (in response to joshua_shizny)

Also, here is my crossdomain file on the root of my server:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "">
<cross-domain-policy>
<allow-http-request-headers-from
<allow-access-from
</cross-domain-policy>

Does this look good considering https?

4. Re: Flex and Soap Mismatch - joshua_shizny, Oct 27, 2009 6:47 PM (in response to joshua_shizny)

This is what I get if I don't pass a rootUrl in to the AbstractWebService constructor (that's how I use the php proxy)....
FaultEvent fault=[RPC Fault faultString="Security error accessing url" faultCode="Channel.Security.Error" faultDetail="Destination: DefaultHTTPS"] messageId=null type="fault" bubbles=true cancelable=true eventPhase=2]

Which, by the way, when I'm working in debug mode on my computer works just fine until I upload to the server.

5. Re: Flex and Soap Mismatch (joshua_shizny, Oct 27, 2009 7:23 PM, in response to joshua_shizny)

When instantiating the webservice (implements webservice) with this line (no proxy/destination url, but rootUrl):

this.myService = new ModpayWeb(null, " %2Fmodpay.asmx%3Fwsdl");

I get a SOAP envelope of which does not match my static const

public static const SOAP_ENVELOPE_URI:String = "";

and gives me a SOAP mismatch error. But when I go directly to the source and don't use the proxy, I get back an envelope of which does match my predefined const, but I get a security error channel .... DefaultHttps ??

6. Re: Flex and Soap Mismatch (joshua_shizny, Oct 28, 2009 7:37 AM, in response to joshua_shizny)

Anyone, any ideas? I'm really stuck here. I had somebody in the Flex user group tell me that I should just run my wsdl through PHP and make a WebORB / AMFPHP connection from Flex to the server, but that is going to be a ton of work and it seems that using Flex's built-in wsdl stuff would be the way to go.

7. Re: Flex and Soap Mismatch (joshua_shizny, Oct 30, 2009 3:02 PM, in response to joshua_shizny)

Well, after fighting for days trying to get Flex to call a PHP proxy to relay wsdl info because of security issues (no cross domain on the remote server), the company was nice enough to put me on their crossdomain file. So I'm thinking good, but I was wrong. I still get a security violation....

FaultEvent fault=[RPC Fault faultString="Security error accessing url" faultCode="Channel.Security.Error" faultDetail="Destination: DefaultHTTPS"] messageId=null type="fault" bubbles=true cancelable=true eventPhase=2

What is up with that? I thought that the cross-domain file would handle that.
Here is the cross domain file on the root of the wsdl service:

<?xml version="1.0" encoding="UTF-8"?>
<!-- the purpose of this document is to allow FLASH web applications to access the APIs. -->
<!DOCTYPE cross-domain-policy SYSTEM "">
<cross-domain-policy>
<allow-access-from
</cross-domain-policy>

I am at and am accessing a wsdl at https:// as well. Anybody know why this is still happening with crossdomain support?

Thanks,

8. Re: Flex and Soap Mismatch (joshua_shizny, Oct 31, 2009 10:58 AM, in response to joshua_shizny)

Well, now it doesn't matter. I got it running through a PHP proxy. Here is what I ended up with. Hope this saves somebody else 3 days.

<?
$soapRequest = file_get_contents("php://input");
$soapAction = $_SERVER['HTTP_SOAPACTION'];
$url = '';
$header[] = "POST /ws/theservice.asmx HTTP/1.1";
$header[] = "Host:";
$header[] = "Content-Type: text/xml; charset=utf-8";
$header[] = "Content-length: ".strlen($soapRequest);
$header[] = "SOAPAction: ".$soapAction;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,$url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST,'POST');
curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
curl_setopt($ch, CURLOPT_POST, TRUE);
curl_setopt($ch, CURLOPT_POSTFIELDS, $soapRequest);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
$result = curl_exec($ch);
echo $result;
curl_close($ch);
?>

9. Re: Flex and Soap Mismatch (joshua_shizny, Oct 31, 2009 11:02 AM, in response to joshua_shizny)

Answered

10. Re: Flex and Soap Mismatch (Krish.praveen, Mar 23, 2010 4:11 AM, in response to joshua_shizny)

Hi, I too am facing this issue with web services.
Error: SOAP Response Version Mismatch
at mx.rpc.soap::SOAPDecoder/decodeEnvelope()[C:\autobuild\3.3.0\frameworks\projects\rpc\src\mx\rpc\soap\SOAPDecoder.as:266]
at mx.rpc.soap::SOAPDecoder/decodeResponse()[C:\autobuild\3.3.0\frameworks\projects\rpc\src\mx\rpc\soap\SOAPDecoder.as:236]

Previously, my web services were deployed on a 32-bit server; now they have moved to a 64-bit server. As you found the answer for this, can you help out to fix the same? If there are any further details you are looking for, let me know.
Thanks, Krishna
https://forums.adobe.com/thread/513832
01 August 2013 16:42 [Source: ICIS news]

HOUSTON (ICIS)--Growth in the overall

In its monthly purchasing managers index (PMI), the Institute for Supply Management (ISM) said that the index gained 4.5 percentage points in July, raising the reading to 55.4% from June's measure of 50.9%.

The PMI is a composite of supplier responses to the ISM's monthly survey of 10 different business performance measures in 18 major manufacturing sectors. A PMI reading above 50% indicates that manufacturing is generally expanding.

July's jump was driven by substantial growth in key subsidiary indexes, including an increase of 11.6 percentage points in production to 65.0%, 6.4 points in new orders to 58.3% and 5.7 points in employment to 54.4%.

Of the 18 manufacturing sectors surveyed by the ISM, 13 reported growth, including chemicals production. Plastics and rubber production was one of the five that reported contraction.

An unidentified chemicals industry executive responded to the survey request for comment by saying: "Business remains flat. Looking for some seasonal bump as we come to the beginning of our 'busy'
http://www.icis.com/Articles/2013/08/01/9693347/us-manufacturing-survey-shows-second-straight-month-of.html
In Apache 2.2, one could set an SSI variable based on backreferences from a regex match in the previous <!--#if -->:

<!--#if expr='$REQUEST_URI = /(.*)/' -->
<!--#set var="foo" value="$1" -->
Found <!--#echo var="foo" -->
<!--#endif -->

However, in Apache 2.4, the equivalent code doesn't work:

<!--#if expr='v("REQUEST_URI") =~ /(.*)/' -->
<!--#set var="foo" value="$1" -->
Found <!--#echo var="foo" -->
<!--#endif -->

It sets the variable to the empty string and yields this error:

[Thu Jan 15 19:23:20.763133 2015] [include:warn] [pid 6768:tid 140695587436288] [client 127.0.0.1:59575] AH01330: regex capture $1 is out of range (last regex was: '(null)') in /var/www/html/test.shtml

I can still use the Apache 2.2 code if I set SSILegacyExprParser on, but obviously there should be a non-deprecated way to do this.

I can reproduce this bug. <!--#set var="foo" value="$0" --> does work, it contains the whole string, but $1 fails exactly as described earlier.

This bug is still present in 2.4.16, where I have encountered it today.

Example: Show 198. from 198.19.81.98
<!--#if expr="v('REMOTE_ADDR') =~ /(\d+\.)\d+/ && $1 =~ /(\d\.)/" -->
<!--#set var="foo" value="$0" -->
Found <!--#echo var="foo" -->
<!--#endif -->
You *must* use backreferences in the same expression!

(In reply to Helge from comment #3)
> Example: Show 198. from 198.19.81.98
> <!--#if expr="v('REMOTE_ADDR') =~ /(\d+\.)\d+/ && $1 =~ /(\d\.)/" -->
> <!--#set var="foo" value="$0" -->
> Found <!--#echo var="foo" -->
> <!--#endif -->
> You *must* use backreferences in the same expression!

That's correct: in the same expression you can use $1 and backreference matches. BUT $0 always references the whole string, not the last matched string. In Helge's example,
foo == "198.19.81.98"
and not, as expected, "198."

If you try a nested if, you can backreference:
<!--#if expr="v('REMOTE_ADDR') =~ /(\d+\.)\d+/ && $1 =~ /(\d\.)/" -->
<!--# if expr='$1 == "198."' -->
Found 198.
<!--#endif -->
<!--#endif -->
it matches.
This means backreferences are available in nested ap_expr, but not for any "<!--#set var" or "<!--#echo" operation.

(In reply to Ingmar Eveslage from comment #4)
> (In reply to Helge from comment #3)
> That's correct, in the same expression you can use $1 and backreference
> matches. BUT $0 references always the whole string and not the last matched
> string. in Helges example
> foo == "198.19.81.98"
> and not as expected "198."

I have tested my example many times on Apache 2.4.12 and it works as expected: foo returns "198.". $1 from the expression /^(\d\.)/ is $0 for 'set var=foo'.

I boiled it down a little. At first: there has to be an indirection with "<!--#set var"; echoing "$0" directly doesn't work. My example shows a work-around:

<!--#set var="test_var" value="1_2_3_4" -->
<!--#if expr='v("test_var") =~ /(1_)(.*)/ && $1 =~ /(.*)/' --><!--#endif -->
<!--#set var="first" value="$0" -->
<!--#echo encoding='none' var='first' -->

OUTPUT: 1_

Changing the second regex in the if statement to $2:

<!--#set var="test_var" value="1_2_3_4" -->
<!--#if expr='v("test_var") =~ /(1_)(.*)/ && $2 =~ /(.*)/' --><!--#endif -->
<!--#set var="first" value="$0" -->
<!--#echo encoding='none' var='first' -->

OUTPUT: 2_3_4

So the simple "$n =~ /(.*)/" acts like an exporter for matched parts. BUT BE AWARE: don't simplify it; "$n =~ /.*/" doesn't work.

I think the bug report stands. Something doesn't add up. And why does my regex example work correctly on my server? (Last test on Apache/2.4.12: Fri 2016-02-26 10:50 GMT)

-----------------------
SSILegacyExprParser Off
-----------------------

You CAN use for echoing
#1: #set var="FooBar" value="$0" + #echo var="FooBar"
-OR-
#2: #echo var="0" (var="$0" doesn't work!)

CORRECTED (full) EXAMPLE: Show first ^(\d+\.) from IPv4 address
<!--#if expr="v('REMOTE_ADDR') =~ /^(\d+\.)/ && $1 =~ /^(\d+\.)/"-->
<!--#set var="FooBar" value="$0" -->
FooBar #1: <!--#echo var="FooBar" --><br>
FooBar #2: <!--#echo var="0" -->
<!--#endif -->
It works!
Thanks for the explanation of the echo part. <!--#echo var="0" --> works. Your example works for me, too. And it does the same as my example: match a group and match it again in the same expression, so it gets exported as $0. I think we can agree on that.

But I think it's still a workaround. $1...$n should be exported directly, as they are exported using the legacy parser. Don't you think?

And if the developers do not agree on that, then at least the fact that:

<!--#if expr='v("test_var") =~ /(1_)(.*)/ && $1 =~ /(.*)/' --><!--#endif -->
<!--#echo var="0" -->

works and

<!--#if expr='v("test_var") =~ /(1_)(.*)/ && $1 =~ /.*/' --><!--#endif -->
<!--#echo var="0" -->

doesn't, is still a bug. Right?

(In reply to Ingmar Eveslage from comment #8)
> Thanks for the explanation of the echo part. <!--#echo var="0" --> works.

I tell you a secret: I'm personally using 'SSILegacyExprParser On'. For me, until now, it always works. ;-)
Greetings from Helge

Good to know. But what are the plans for "SSILegacyExprParser"? Will it be removed in future versions?

(In reply to Ingmar Eveslage from comment #10)
> good to know. but what are the plans for "SSILegacyExprParser". will it be
> removed in future versions?

I think it could be removed in Apache/2.5 (?)

The code shows that $1 is available in the #if, but not #set, whereas $0 is available in the #set.

<!--#set var="a" value="abc" -->
<!--#if expr='v("a") =~ /a(b)c/' -->
<!--#if expr='$1 == "b"' -->
Got a match.
<!--#set var="match" value="a$1" -->
<!--#echo var="match" -->
<!--#set var="match" value="a$0" -->
<!--#echo var="match" -->
<!--#endif -->
<!--#endif -->

===============

Got a match. a aabc

I finally got round to migrating my SSI expressions to the "new" ap_expr syntax, and hit this bug. And it is clearly a bug. Because $0 is sometimes (though not always) exported from the "if" to a subsequent "set", you can hack it with extra matchers. Congratulations to the folks who discovered that, as it's a viable workaround!
But it's clearly a hack, and doesn't work if you want to capture more than one substring.

The documentation for ap_expr suggests that modules can, if they want to, allow the backref variables to survive between expressions. It's *partially* happening with SSI (but only with $0, and only if there are capturing parentheses, which in themselves shouldn't affect whether $0 is set), so please can it be fixed to work properly?

I've taken a look at util_expr_eval.c and mod_include.c, and my guess is it's somewhere in the code in parse_ap_expr that decides whether to (re)allocate a backref_t struct within the persistent include_ctx_t. Hopefully somebody more familiar with this area of code will spot it!

I'm marking this as regression because as initially reported it breaks sites that were working on prior versions.

With the current state of this bug, this is the rigmarole I have to go through simply to impersonate the current directory index header, before I get to customise it with new content:

<!-- Strip the Query-string -->
<!--#if expr='v("REQUEST_URI") =~ /^([^?]*)/ && $1 =~ /(.*)/' -->
<!--#set var="request" value="$0" -->
<!--#else -->
<!--#set var="request" value="${REQUEST_URI}" -->
<!--#endif -->

<!-- strip the final / unless it is the first / -->
<!--#if expr='v("request") =~ /(\x2F.*)\x2F/ && $1 =~ /(.*)/' -->
<!--#set var="request" value="$0" -->
<!--#endif -->

<h1>Index of <!--#echo encoding="entity" var="request" --></h1>

Thanks to Helge & Ingmar for showing this work-around.
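The $0-versus-$1 capture semantics at the heart of this bug can be illustrated outside Apache. The sketch below uses Python's re module purely as a stand-in for ap_expr's PCRE-style matching (it is not Apache code): group 0 is always the whole match of the last regex, and group 1 is the first parenthesised capture, which is why re-matching an already-captured fragment against /(.*)/ makes it reappear as $0.

```python
import re

value = "1_2_3_4"

# Like: <!--#if expr='v("test_var") =~ /(1_)(.*)/ ...' -->
m = re.search(r"(1_)(.*)", value)
assert m.group(0) == "1_2_3_4"  # $0: the whole match
assert m.group(1) == "1_"       # $1: first capture group
assert m.group(2) == "2_3_4"    # $2: second capture group

# Re-matching $2 against /(.*)/ makes it the *whole* match of the
# last regex evaluated, which is why the SSI work-around can then
# export it via <!--#set var="first" value="$0" -->.
m2 = re.search(r"(.*)", m.group(2))
assert m2.group(0) == "2_3_4"
```

This mirrors the "$n =~ /(.*)/ acts like an exporter" observation above: the extra match exists only to promote the captured fragment into the whole-match slot.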
https://bz.apache.org/bugzilla/show_bug.cgi?format=multiple&id=57448
psi4

psi4 is an open source quantum chemistry code out of the Sherrill Group at Georgia Tech.

- class ase.calculators.psi4.Psi4(restart=None, ignore_bad_restart=False, label='psi4-calc', atoms=None, command=None, **kwargs)[source]

An ase calculator for the open source quantum chemistry code psi4.

method is the generic input for whatever method you wish to use; thus any quantum chemistry method implemented in psi4 can be input (i.e. ccsd(t)). Also note that you can always use the built-in psi4 module through: calc.psi4.

Setup

First we need to install psi4. There are instructions available on their website for compiling the best possible version of psi4. However, the easiest way to obtain psi4 is the binary package from conda:

conda install psi4 -c psi4; conda update psi4 -c psi4

The ase calculator operates using the psi4 python API, meaning that if psi4 is installed correctly you won't need to do anything else to get psi4 working. It is, however, recommended that you set up a psi4 scratch directory by setting the PSI_SCRATCH environment variable:

export PSI_SCRATCH=/path/to/existing/writable/local-not-network/directory/for/scratch/files

This directory is where temporary electronic structure files will be written. It is important that this directory be located on the same machine as the calculation is being done to avoid slow read/write operations. This is set to /tmp by default. However, be aware that the /tmp directory might not be large enough.

Examples

You can import psi4 and run it like any other calculator in ase:

from ase.calculators.psi4 import Psi4
from ase.build import molecule
import numpy as np

atoms = molecule('H2O')

calc = Psi4(atoms=atoms,
            method='b3lyp',
            memory='500MB',  # this is the default, be aware!
            basis='6-311g_d_p_')

atoms.set_calculator(calc)
print(atoms.get_potential_energy())
print(atoms.get_forces())

However, once you have instantiated the psi4 ase calculator with an atoms object, you can interact with the psi4 python API as well. The psi4 API is just an attribute of the psi4 ase calculator:

calc.psi4.frequency('scf/cc-pvdz', molecule=calc.molecule, return_wfn=True, dertype=1)

This is not required though, as psi4 will act like any other ase calculator. It should be noted that the method argument supports non-DFT methods (such as coupled cluster ccsd(t)) as well. There is a great variety of quantum methods and basis sets to choose from.

Parallelization

Psi4 runs on a single thread by default. However, you may increase the number of threads by passing in the num_threads argument, which can take either "max" or integer values.
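Putting the scratch directory, memory setting and num_threads together, here is one way to assemble the calculator arguments before handing them to Psi4. The specific values (the scratch path, 2GB, 4 threads) are illustrative assumptions, not defaults:

```python
import os

# Point psi4 at a local scratch directory (it falls back to /tmp otherwise).
os.environ.setdefault("PSI_SCRATCH", "/tmp/psi4-scratch")

# Keyword arguments for the Psi4 calculator; any psi4 method/basis
# string is accepted. num_threads takes an int or the string "max".
calc_kwargs = dict(
    method="b3lyp",
    basis="6-311g_d_p_",
    memory="2GB",    # raise this from the 500MB default for larger systems
    num_threads=4,   # or "max" to use every available core
)

# With psi4 and ase installed, this would become:
#   from ase.calculators.psi4 import Psi4
#   calc = Psi4(atoms=atoms, **calc_kwargs)
```

Collecting the keywords in a dict like this is optional; it simply keeps the run settings in one place when you sweep over methods or basis sets.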
https://wiki.fysik.dtu.dk/ase/ase/calculators/psi4.html
06 February 2012 05:01 [Source: ICIS news]

NEW DELHI (ICIS)--Indian Oil Corp (IOC) plans to increase polymer exports to

IOC, which regularly offers polyethylene (PE) and polypropylene (PP) to neighbouring

"Exporting via road is more cost-effective. Movement by rail is limited by the availability of wagons; we will now be able to send larger volumes," he added.

Panipat is about 400km from

The company hopes to increase export volumes to 8,000-8,500 tonnes/month from 5,000-6,000 tonnes/month at present.

"The truck was unloaded at the border and product transferred to another truck with Pakistani registration," he explained. The consignment was delivered to

But

And the surplus will increase further after the commissioning of HMEL's 440,000 tonnes/year plant in March or April 2012.

Besides polymers, IOC is also eyeing exports of other petrochemicals to

"The next product that we would like to move by road is purified terephthalic acid (PTA).

The 8th PlastIndia exhibition is a six-day event which will end on 6 February.

Additional reporting by Ong Sheau Ling
http://www.icis.com/Articles/2012/02/06/9529518/IOC-to-increase-PE-PP-exports-to-Pakistan-on-road-trade.html
Delta bot - by mjcbruin

@ Thu, 2012-05-17 07:37
Hello. Or could you please tell me how you made the mouse interface to control the robot? I will be grateful.

@ Tue, 2012-05-15 07:32 - Help please
Hello. Could you please give me an MSN ID or Yahoo Messenger ID to talk with you? I really need help with my robot. I will explain on chat what it is about. Hope you understand me. Thanks. Have a great day.

@ Fri, 2011-07-15 05:44 - ferrofluid?
Have you ever seen ferrofluid? I was thinking: if you made some and put it on a stand over your delta bot, with a magnet on top of the bot, and used your phone to control the ferrofluid, that would be really cool.

@ Tue, 2010-09-28 00:26 - My delta robot controlled via iPhone
Hi guys! At the end I have realized my own delta robot. I control it via the iPhone accelerometer. I have also implemented inverse kinematics. As soon as possible I will publish some implementation details. Here is my parallel robot on youtube:
Best, Filippo

@ Mon, 2010-08-23 01:13 - Control via iPhone
Hi guys! I would like to manage this delta robot via iPhone. I found a way to control Arduino via iPhone. You can see an example in this link:
Could someone help me to modify the Processing code to communicate with TouchOSC? I think I could use an XY pad? Is it possible? Please help me.
Thanks a lot, Filippo

@ Mon, 2010-08-23 02:07 - draft
I have written a draft. Could someone check it please? Unfortunately I haven't an iPhone for testing. Anyway, someone could check my new Processing code. Basically I have changed just some rows.
Here is my code:

import oscP5.*; // Load OSC P5 library
import netP5.*; // Load net P5 library
import processing.serial.*;

OscP5 oscP5; // Set oscP5 as OSC connection
Serial myPort; // The serial port

int servo1 = 0;
int servo2 = 0;
int servo3 = 0;
int serialBegin = 255;
float X, Y; // added the missing semicolon here

void setup() {
size(600,600);
myPort = new Serial(this, Serial.list()[1], 115200);
frameRate(100);
noCursor();
}

void oscEvent(OscMessage theOscMessage) {
// This runs whenever there is a new OSC message
String addr = theOscMessage.addrPattern(); // Creates a string out of the OSC message
// if(addr.indexOf("/1/toggle") != -1){ // Filters out any toggle buttons
X = theOscMessage.get(0).floatValue();
Y = theOscMessage.get(1).floatValue();
// } // keep this brace commented out as long as the if above is commented out
}

void draw() {
background(255);
triangle(width/2, height, 0, 200, width, 200);
servo1 = 100-int(dist(width/2,0,X,Y)/6);
servo2 = 100-int(dist(0,height,X,Y)/6);
servo3 = 100-int(dist(width,height,X,Y)/6);
strokeWeight(3);
line(300,200,X,Y);
line(150,400,X,Y);
line(450,400,X,Y);
println("X "+X);
println("Y "+Y);
if (servo1 < 0){
servo1 = 0;
}
if (servo2 < 0){
servo2 = 0;
}
if (servo3 < 0){
servo3 = 0;
}
if (mousePressed && (mouseButton == LEFT)) {
servo1 -= 20;
servo2 -= 20;
servo3 -= 20;
}
if (mousePressed && (mouseButton == RIGHT)) {
servo1 += 40;
servo2 += 40;
servo3 += 40;
}
//println("servo1 "+servo1);
//println("servo2 "+servo2);
//println("servo3 "+servo3);
//Serial.write
myPort.write(255);
//delay(10);
myPort.write(servo1+30);
//delay(10);
myPort.write(254);
//delay(10);
myPort.write(servo2+30);
//delay(10);
myPort.write(253);
//delay(10);
myPort.write(servo3+30);
//delay(10);
}

@ Thu, 2010-08-19 17:14 - Dimensions and some more details
Fantastic job! Could you please tell us some more details? For example, the push rod lengths and base layout. Where can I buy push rods like this? Thanks a lot to all of you.

@ Fri, 2010-07-23 12:02 - Enhancing your project
Thanks for posting your bot, it inspired me to copy and enhance...
See my blog post at

The enhancements are:
- control using a Wii Nunchuck
- proper XYZ positioning, full inverse kinematics
- standalone, all calculations done on Arduino

Thanks again, have fun!

@ Fri, 2010-07-23 13:07 - ....
I have no words to express my impression, I just want to say I want to make one.

@ Wed, 2010-06-16 20:03 - servos?
Can you tell me which servos you used in this delta robot?
http://letsmakerobots.com/node/10577
In this tutorial we will be mocking up and finishing a pan and throw class that will allow us to add this effect to any element we want. To accomplish this, we will create an image viewer - but not your average viewer. Here we'll have zooming, throwing, panning... Almost sounds like a ninja app, huh?

Step 1: Introduction

The Pan and Throw Class will allow you to add the pan and throw functionality to any ActionScript object you want. Although this tutorial is specifically for Flex, the class itself can be used anywhere ActionScript is. I had seen this effect on several websites, then in Photoshop CS4, and decided that I wanted this on my projects too. There are many applications for this effect; the one that we are going to be using for this tutorial is an image viewer that lets you zoom in and out of an image and change the friction that the throw effect uses. However, this tutorial isn't really about the image viewer, it is about making a pan and throw class. So let's get started with it. Open up your favorite Flex editor and get a project going; for information on doing this in Flex Builder see the Adobe LiveDocs. Once your project is created, open the MXML file. We need to add some code to this before we create our class.

Step 2: Our MXML

Since this isn't the major portion of the tutorial I am not going to spend much time here. If you have any questions about this section that aren't covered, you can ask in the comments below. Firstly, here are the MXML objects to put in the application:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns: >

You'll notice the four functions called in the tags: init(), changeDecay(), smoothImage() and zoom(). We need to write up those functions.
This is the code between the <mx:script> tags: import mx.states.SetStyle; import mx.effects.Move; import mx.containers.HBox; import mx.containers.Box; private var imageWidth:Number = 0; private var imageHeight:Number = 0; private var mover:Move = new Move(); // this will be called when the application loads private function init():void { // This event will add the ability to hide and show our controls with a click. control.addEventListener(MouseEvent.CLICK, controlClick); mover.target = control; } // this function will zoom in and out of our image according to the value of our zoom slider. private function zoom():void { inside.width = (imageWidth*hSlider.value)/100; inside.height = (imageHeight*hSlider.value)/100; } // this gets called when our image changes size. private function smoothImage(ev:Event):void{ //set image smoothing so image looks better when transformed. var bmp:Bitmap = ev.target.content as Bitmap; bmp.smoothing = true; imageWidth=inside.width; imageHeight=inside.height; } // we won't be using this one yet private function changeDecay():void { // this will change the decay (friction) value of our class, when we get there. } private function controlClick(e:MouseEvent):void { mover.play(); //this function hides/shows the controls on click if(control.y != -5){ mover.stop(); mover.yTo = -5; mover.play(); } else if(e.target == control){ mover.stop(); mover.yTo = (control.height - 10) * -1; mover.play(); } } Once you have your MXML you need to create a folder called "classes" in the same folder as your MXML file. (If using Flash, the folder needs to be in the same dir as your FLA file.) This is our classes package and is where the PanAndThrow.as file will go. In Flex Builder, create a new class, put it in the classes package, and call it PanAndThrow; this will create your class - default style. Step 3: The Makings of a Class Here's our basic PanAndThrow class. Save it as PanAndThrow.as in your new "classes" folder. 
//namespace declaration package classes // class declaration public class PanAndThrow { /* this is called the constructor, this method/function will get called when you create * an instance of your object, or instantiate your object. * for this class we don't do anything because we are going to do everything * in the Init function */ public function PanAndThrow() { } } Which variables and functions do we need in our PanAndThrow class? To get that, you can ask yourself "what does my class need to do, what does it need to know, and what does it need to be able to do it?" So let's create some pseudo-code. Quick Note When I first developed this class I did put everything in the constructor, but that lead to a problem when I created the start and stop methods because of scope. I couldn't instantiate this class on a global scope with all the information required. I therefore made an init() function, so the instance could be started and stopped from outside the class. Step 4: Our Pseudo-Code "Pseudo-code" just means fake code, that we can use to help ourselves think about what real code we'll need. package classes public class PanAndThrow { /*These will be the variables that we make. So what do we need to know? * anObjectToThrow; * anObjectToThrowItIn; * ObjectLocation; * PreviousObjectLocation; * Decay; // for the physics * these are the obvious ones, but this list will get a lot bigger * as we see exactly what we need in our functions */ public function PanAndThrow() { } /* So what is our class going to do? * init(); //it needs to start * stop(); // we want to be able to stop it somehow. * start(); // if we stop we need to be able to start it again. * pan(); * throw(); */ } Now that we have some pseudo-code we can start building the class. Let's start with the init() function. This will also bring us into one of the principles of Object Oriented Programming called encapsulation, which deals with the access of pieces of the code. 
This code should go in the PanAndThrow class we've just started. (Not sure where? Check out the Document Class Quick Tip.)

// thanks to OOP, a lower level class and an upper level class (one that extends
// the lower level class) can be used. Like here, almost any object you will use extends the
// Sprite class. So I just have to ask for a Sprite object and you can give a Box or a Button.
private var targetObject:Sprite = new Sprite();
private var eventObject:Sprite = new Sprite();

private var originalDecay:Number = .9;
private var buttonDown:Boolean = false;
private var moveY:Boolean = true;
private var moveX:Boolean = true;
private var TargetClick:Boolean = true;

// We'll use this to check how long your mouse has been down on an object without moving.
private var t:Timer;
private var timerInterval:int = 100;

public function init(ObjectToMove:Sprite, ObjectToEventise:Sprite, DecayAmount:Number = .9, isMoveY:Boolean = true, isMoveX:Boolean = true, OnlyMoveOnTargetClick:Boolean = true):void
{
    targetObject = ObjectToMove;
    eventObject = ObjectToEventise;
    originalDecay = DecayAmount;
    moveX = isMoveX;
    moveY = isMoveY;
    TargetClick = OnlyMoveOnTargetClick;
    t = new Timer(timerInterval);
    start();
}

Just a couple of things I want to point out. In the function for init I have set a few of the arguments to be equal to a value. That means I'm giving them a default value, thus making them optional. When setting default values for arguments of a function, they have to be the last parameters - you can't have a required variable after an optional one. The reason I added default variables is to make the call shorter if we use the default settings. I can call PanAndThrow(mover, eventer); and be done, instead of PanAndThrow(mover, eventer, decayer, yVal, ... ) and so on.

Have you ever wondered what the "private" or "public" in front of functions and variables means? That is the exposure of the object.
A "public" object can be accessed by any other class; a "private" object can only be seen by the other members of this class; a "protected" object is hidden from everything except classes which are in the same package. We want to be able to change the decay from our MXML so we need a public hook to get to our private variable; this is where getter and setter functions come in: private var originalDecay:Number = .9; public function get decay():Number { return originalDecay; } That's a "getter" function. It means that, to outside classes, it looks like the PanAndThrow class has a public variable called "decay". When they try to access it, we will return to them the value of our (private) originalDecay variable. Setter functions are almost the same, but allows the outside classes to change the value of our "fake" public variable: public function set decay(value:Number):void { originalDecay = value; } These are useful because you can put logic into a setter to constrain what comes into your private var. For instance, if you put a Number in the MXML tag for a box you will get a set height; if you put a % (making the number a string) you will get a percentage height. That is built into the code for the box's height setter. Now that we have our getter and setter you can access the decay variable like this from outside the class: var pt:PanAndThrow = new PanAndThrow(); pt.init(target, parent); pt.decay = .7; Step 5: Start, Listen, Stop We have our class, some local variables and an init() function. Let's do something now. At the end of the init() function we called "start();" so let's make the start function. 
Mostly it's just a bunch of listeners: public function start():void{ // With the mouse down, we are looking to start our pan action, but we need to be able // to check our OnlyMoveOnTargetClick which we assigned to our global field TargetClick targetObject.addEventListener(MouseEvent.MOUSE_DOWN, handleOverTarget); eventObject.addEventListener(MouseEvent.MOUSE_DOWN, handleOverTarget); // When we call our pan, it uses a mouse move listener, which means it gets called every time the // mouse moves, so we need to see how to limit when the target object moves. eventObject.addEventListener(MouseEvent.MOUSE_MOVE, moveIt); // this is to throw the object after a pan, this is a little tricky because the throwIt() function calls another listener. targetObject.addEventListener(MouseEvent.MOUSE_UP, throwIt); eventObject.addEventListener(MouseEvent.MOUSE_UP, throwIt); //the throwItOut method makes our object act as though we let go of the mouse button, but it gets fired when // the mouse leaves the parent object targetObject.addEventListener(MouseEvent.MOUSE_OUT, throwItOut); eventObject.addEventListener(MouseEvent.MOUSE_OUT, throwItOut); // this is the timer listener, this will check to see if you have been holding the mouse down for a little bit, I will // explain the need for this when we get to the timerOut() function t.addEventListener(TimerEvent.TIMER, timerOut); t.start(); } The stop() function is almost the same, but we are removing the listeners. 
public function stop():void{ targetObject.removeEventListener(MouseEvent.MOUSE_DOWN, handleOverTarget); eventObject.removeEventListener(MouseEvent.MOUSE_DOWN, handleOverTarget); eventObject.removeEventListener(MouseEvent.MOUSE_MOVE, moveIt); targetObject.removeEventListener(MouseEvent.MOUSE_UP, throwIt); eventObject.removeEventListener(MouseEvent.MOUSE_UP, throwIt); targetObject.removeEventListener(MouseEvent.MOUSE_OUT, throwItOut); eventObject.removeEventListener(MouseEvent.MOUSE_OUT, throwItOut); t.removeEventListener(TimerEvent.TIMER, timerOut); t.stop(); } Now we can listen to what is going on, let's go through each of these listener functions. Step 6: MouseEvent.MOUSE_DOWN We are going to be looking at the handleOverTarget event handler. private function handleOverTarget(e:MouseEvent):void { buttonDown = true; arMousePrevX = MousePrevX = MouseCurrX = eventObject.mouseX; arMousePrevY = MousePrevY = MouseCurrY = eventObject.mouseY; if(e.currentTarget == targetObject || !TargetClick) { overTarget = true; } else if(e.target.toString().search(targetObject.toString()) < 0) { overTarget = false; } } This function will be called when there is a MOUSE_DOWN event on either the event object or the target object. It is very important to note that if I put a listener on a parent object, the handler will even be called when the event occurs on a child. In this case my target object is a child of the event object. When I click on the target object this method will be called twice: first for the mouse down on the child then second for the mouse down on the parent. That is really important for this because we are going to be deciding whether the mouse down will be able to move our target object so we really need to be able to know whether that mouse down was on the child or not. The first statement is pretty straightforward: set our class variable buttonDown to true. 
The next two are pretty easy as well, except that I have introduced a couple of new variables that will need to be added to our class variable list: MousePrevX, MousePrevY, arMousePrevX, arMousePrevY, MouseCurrX and MouseCurrY. These will be used a lot in the drag and pan functions, so I will wait until then to talk about them.

The if statement checks whether the object clicked is the target object. Remember, TargetClick was set to the argument we passed to init(), OnlyMoveOnTargetClick; if this is false we want to treat every child object as the target object when clicked. That's why we have the "|| !TargetClick" check. That's the easy part.

The next part is a little trickier. e.currentTarget returns the object whose listener triggered the event, while e.target returns the object that was the actual target. So I could just say this, right?

if(e.target == targetObject || !TargetClick)
{
overTarget = true;
}
else
{
overTarget = false;
}

That is simple enough, but it's wrong. What if my target object has children? Then my e.currentTarget may be the targetObject, but e.target is targetObject's child and will not match. We want the object to move even if we are mousing down on a child, so here comes String.search to the rescue. If our currentTarget is not our targetObject, we use an "else if" to see if we can find our target object in the target. e.target.toString() will produce something like "application2.eventobject3.targetobject2.targetchild4" for a child of our target object, whereas targetObject.toString() will produce something like "application2.eventobject3.targetobject2". So all I need to do to find out whether the clicked object is a child of our targetObject is this:

e.target.toString().search(targetObject.toString())

If there is a match it returns the first index of the match; if there is not, it returns -1. We can simply check whether the result is greater than -1 and, voilà, we know whether the object being clicked on is a child of our targetObject.
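The same containment test is easy to demonstrate outside of AS3: any language with a substring search can check whether one dotted display-path lies inside another. A tiny illustrative Python sketch (the dotted names are just examples mirroring the strings above):

```python
def is_descendant(target_path, ancestor_path):
    """Mirror of e.target.toString().search(targetObject.toString()):
    True when ancestor_path occurs in target_path (index > -1)."""
    return target_path.find(ancestor_path) > -1
```

So is_descendant("application2.eventobject3.targetobject2.targetchild4", "application2.eventobject3.targetobject2") comes back True, while an unrelated path does not.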
(We could check the children or parent(s) of the object via the getChildAt() function and parent property, but this is a neat alternative.) Step 7: TimerOut and Pan The timer function is pretty easy too, especially since we have done this before. Well, almost done this before. When we have dragged around our little targetObject a bit and decide we don't want to let it go, we just love it too much, and abruptly stop the mouse, what would happen if you let go of the mouse button at that point? well, what do you think would happen? I am not going to answer that for you, I am just going to help you with the code to keep it from happening. In the final code, comment these three lines out. This should look really familiar, we just used this in the button down handler, except for one variable, MouseDragged. We are going to use that when we call our other function: private function timerOut(e:TimerEvent):void { MouseDragged = false; arMousePrevX = MousePrevX = MouseCurrX = eventObject.mouseX; arMousePrevY = MousePrevY = MouseCurrY = eventObject.mouseY; } So, if you are asking why we need this timer event, you probably didn't try and take it out to see what was happening. So do that. This next function is one of our main functions; it is the pan function. There is a lot involved so let's dive into our pseudo-code: private function moveIt(e:MouseEvent):void { /*ok, so what do we need this function to do? *it needs to pan our target object. *so lets see if we are over our target object */ //if(we are over our target object) //{ // what tools are we going to need to pan? // well, maybe we should check to see if the button is down //if(button is down) //{ // we might need to set the button down variable. 
buttonDown = true; // and if we are in this function at this point our button is down and // the mouse has moved -- that's a drag: so MouseDragged = true; // if we are moving the object according to the mouse move then we should // probably know where our mouse is : MouseCurrX,Y = current MouseX,Y; // this is an introduction to our artificial mouse prev, which will be explained // in the next function. The ar stands for 'artificial' or 'after release', // whichever you prefer. That needs to be set to our actual previous mouse pos. // arMousePrevX = MousePrevX; // arMousePrevY = MousePrevY; // then we need to actually move the targetObject, // but remember our variables, moveX and moveY, so: // if moveX move x; // if moveY move y; // we need to reset our Decay (friction) back to the original state: //Decay = originalDecay; // that should finish the if //} //what else? //{ // we set our buttonDown to true before, so lets set it to false here. //buttonDown = false; // if this isn't a target click, we should set our overTarget to false so: //if(!TargetClick) //overTarget = false; // that's it. //} // there are a few things that we want to happen regardless of the conditions. // first, we need to set our mousePrevX,Y variable -- BEFORE the mouse is // moved again! //MousePrevX = eventObject.mouseX; //MousePrevY = eventObject.mouseY; // Here are two more variables to keep track of: xOpposideEdge and yOppositeEdge // we are testing to see what the size of our target object is in relation // to our event object; if one is bigger we need to change the behavior of the bounce. // if(targetObject.width > eventObject.width){xOppositeEdge = true;} // else{xOppositeEdge = false;} // if(targetObject.height > eventObject.height){yOppositeEdge = true;} // else{yOppositeEdge = false;} // and finally we need to stop and restart our timer. 
//t.stop(); //t.start(); //} I admit this is a little more psuedo-y than the last; that is for two reasons: one, you don't know what is coming, and two, I am just really excited to get to the code: private function moveIt(e:MouseEvent):void { // in our pseudo-code this was two conditions but we can combine then to one, // we test to see if our event was a button down, and if we are over our target, // if we are then let's move the target object. if(e.buttonDown && overTarget) { buttonDown = true; MouseDragged = true; MouseCurrX = eventObject.mouseX; MouseCurrY = eventObject.mouseY; // here is the artificial / after release one. again, well get to that. arMousePrevX = MousePrevX; arMousePrevY = MousePrevY; /* this is the important one, in our pseudo it was "move the target object", * so we need to translate that. To help us we'll create a local variable * Topper for the top, and Sider for the side. * so let's look at Topper (the same will apply to Sider). * eventObject.mouseY looks at where our mouse is inside of the eventObject. * We take our MousePrev away from that, and that will give us how much the object * should travel, so the Y might travel 2 pixels, or -2 pixels depending on * direction, so we take that change and add it to the target's current * position, but that isn't happening yet, this is just a var. */ var Topper:int = (eventObject.mouseY - MousePrevY) + targetObject.y; var Sider:int = (eventObject.mouseX - MousePrevX) + targetObject.x; // here is where it happens, if moveY (remember from the pseudo-code) then we // can set the position of the target. 
if(moveY){targetObject.y = Topper;} if(moveX){targetObject.x = Sider;} // so really we are just using Topper and Sider to temporarily store where the // target object should move to Decay = originalDecay; } else { buttonDown = false; if(!TargetClick) overTarget = false; } MousePrevX = eventObject.mouseX; MousePrevY = eventObject.mouseY; if(targetObject.width > eventObject.width){xOppositeEdge = true;} else{xOppositeEdge = false;} if(targetObject.height > eventObject.height){yOppositeEdge = true;} else{yOppositeEdge = false;} t.stop(); t.start(); } And now we are panning. Step 8: Throw, It, Out, Repeater! This is the second big function and with this we will have our class built! Ready to pan and throw any object you see fit! There are two functions that we need to address first: throwIt(), which we set as a handler to the MOUSE_UP event, and throwItOut(), which we set as a handler to the MOUSE_OUT event. private function throwIt(e:MouseEvent):void { buttonDown = false; if(MouseDragged){ eventObject.addEventListener(Event.ENTER_FRAME, theRepeater); } } private function throwItOut(e:MouseEvent):void { buttonDown = false; if(e.relatedObject == null || e.relatedObject == eventObject.parent){ eventObject.addEventListener(Event.ENTER_FRAME, theRepeater); } } These two functions are almost the same (after all, they are doing the same thing just at different times). In them we set the buttonDown to false, because this is a mouse up event, and check to see if the mouse was dragged, using either MouseDragged (which we set in the last function) or by checking "e.relatedObject"; the object that the mouse just moved out of. If it was dragged we add another listener. The ENTER_FRAME event is a really cool one. This is the basis of our animation; every time we enter a new frame the throw() function will be run. That is what allows us to simulate a mouse drag after release (remember the arMousePrevX,Y variable? That is what it is for). 
And that is all the throw is really doing: simulating a mouse drag, without a mouse of course. So we have pretty much already got the function we need, except that we need to replace the calls to the current mouse position with our artificial mouse position. But I'm getting a little ahead of myself.

These two event functions, throwIt and throwItOut, do the same thing, but the if statement in the second one is worth mentioning. I struggled a while trying to get this functionality, until I looked at the event a little closer. The problem was getting the target object to act as though I had let go of the button when the cursor left the event object. Go ahead, try to do this without e.relatedObject. I almost had it a few times, but couldn't get it right. What e.relatedObject does is tell you which object the cursor is over after the event fires. That is why it is so cool. When our cursor leaves the movie altogether it returns null; otherwise it returns the object you are on. So we can check whether e.relatedObject is null or is a parent of the eventObject, and that produces exactly the behavior we are looking for.

In the above functions we set up calls to theRepeater(). This will be the throw function; remember, it will be called every time we enter a new frame. Let's step through it line by line:

private function theRepeater(e:Event):void
{
// the timer must be stopped; try removing this and see what happens.
t.stop();
// here is a local variable that will hold the current (fake) cursor position.
// well, it is only "fake" after the first time around.
var oldxer:Number = MouseCurrX;
var oldyer:Number = MouseCurrY;
// now, just like we did before, we need to find the difference between our current
// and previous position. So how is this different from before? Why?
var xDiff:Number = MouseCurrX - arMousePrevX; var yDiff:Number = MouseCurrY - arMousePrevY; // if the button is down, we aren't going to move any more, the button will stop the action in this case. if(!buttonDown) { // take the difference and times it by the decay. this will give us the new // difference, which will be slightly smaller than the last one, which is how // we get the friction effect with this. // e.g. if Decay is 0.5 then the distance moved will halve every frame. xDiff = xDiff * Decay; yDiff = yDiff * Decay; // next is one of the confusing parts for me, this doesn't move the object at // all, it just tests to see if our targetObject has reached the edge. if it has, // we need to bounce it back. (this could be changed to some other action if you // want, you could even remove it, what happens if you do? try it! // in the pan function we set this variable, OppositeEdge, this is where we will // use that 'if the targetObject is bigger than the Event Object' that we set in // the init() function. I am only going to walk through the x here because the y is // almost the same (what is different? why? think about it!) if(xOppositeEdge) { /* so first, "the width of the eventObject, - the width of the targetObject - 50", * here, the width of the targetObject is greater than that of the eventObject * this will allow the opposite edge of the target object to be 50 px in from * the opposite edge. If you go to the example movie and shrink the image to * 10% and throw it around, then increase the size to 200% and try and notice * what edge is doing what, then you will see the difference between the bounces. * That is the best way to understand this part. */ if(targetObject.x < (eventObject.width - targetObject.width - 50)) { xDiff = -1 * xDiff; targetObject.x = eventObject.width - targetObject.width - 50; } // this does the same thing for the other edge. 
if(targetObject.x > 50) { xDiff = -1 * xDiff; targetObject.x = 50; } } // this is if the target object is smaller than the eventObject. else { /* so again we are testing the edges of the targetObject against the * event object. This time we are dealing with the same edge (well, * 5px outside the edge). So this will bounce like it is hitting a wall. */ if(targetObject.x < -5) { xDiff = -1 * xDiff; targetObject.x = -5; } if(targetObject.x > (eventObject.width - (targetObject.width - 5))) { xDiff = -1 * xDiff; targetObject.x = eventObject.width - (targetObject.width - 5); } } if(yOppositeEdge) { if(targetObject.y < (eventObject.height - targetObject.height - 50)) { yDiff = -1 * yDiff; targetObject.y = eventObject.height - targetObject.height - 50; } if(targetObject.y > 50) { yDiff = -1 * yDiff; targetObject.y = 50; } } else { if(targetObject.y < -5) { yDiff = -1 * yDiff; targetObject.y = -5; } if(targetObject.y > (eventObject.height - (targetObject.height - 5))) { yDiff = -1 * yDiff; targetObject.y = eventObject.height - (targetObject.height - 5); } } // well, if you have questions about that part, just post a comment about it and I will answer them. // here are the sider and Topper vars (just like the ones from the pan function). var sider:int = xDiff + targetObject.x; var Topper:int = yDiff + targetObject.y; // we need to set this ready for the next go around. MouseCurrX = MouseCurrX + xDiff; MouseCurrY = MouseCurrY + yDiff; // and then the if moveX,Y (again like the pan function) if(moveY){targetObject.y = Topper;} if(moveX){targetObject.x = sider; } // and now set our artificial mouse prev arMousePrevX = oldxer; arMousePrevY = oldyer; // and if we are not in frictionless mode (OriginalDecay = 1) // we are going to subtract a small amount from our decay, to // gives it a little more natural easing. if(originalDecay < 1) { Decay = Decay - .004; } // so the moving is done. } // if the button is down we need to remove the listener. 
else
{
eventObject.removeEventListener(Event.ENTER_FRAME, theRepeater);
}
// now we need to check if the effect is over, which is when our x and y diffs are less than 1px.
if((Math.abs(xDiff) < 1 && Math.abs(yDiff) < 1))
{
eventObject.removeEventListener(Event.ENTER_FRAME, theRepeater);
}
}

And with that, our class is finished.

Step 9: The Completed Class Code

You can grab the completed code from the Source zip, linked at the top of the tutorial. It's in the PanAndThrow.as class.

Step 10: Do Something With It

To do something with this we need to go back to the MXML and add a few lines of code: add our declaration in the global variable section, fill in the decay method and call our pan and throw init() function. With all that added, here is the complete MXML file:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" creationComplete="init()">
<mx:Script>
<![CDATA[
import mx.states.SetStyle;
import mx.effects.Move;
// be sure to import our new class!
import classes.PanAndThrow;
import mx.containers.HBox;
import mx.containers.Box;
// here we initialise our pan and throw class
public var pt:PanAndThrow = new PanAndThrow();
private var imageWidth:Number = 0;
private var imageHeight:Number = 0;
private var mover:Move = new Move();
private function init():void
{
control.addEventListener(MouseEvent.CLICK, controlClick);
mover.target = control;
// and here is the init call.
pt.init(inside, outside, sldDecay.value, true, true, false);
}
private function zoom():void
{
inside.width = (imageWidth*hSlider.value)/100;
inside.height = (imageHeight*hSlider.value)/100;
}
private function smoothImage(ev:Event):void
{
var bmp:Bitmap = ev.target.content as Bitmap;
bmp.smoothing = true;
imageWidth = inside.width;
imageHeight = inside.height;
}
private function changeDecay():void
{
// now we can access our public setter from here.
pt.decay = sldDecay.value;
}
private function controlClick(e:MouseEvent):void
{
mover.play();
if(control.y != -5)
{
mover.stop();
mover.yTo = -5;
mover.play();
}
else if(e.target == control)
{
mover.stop();
mover.yTo = (control.height - 10) * -1;
mover.play();
}
}
]]>
</mx:Script>
<!-- the layout markup (the outside container, the inside image, the control bar and the sliders) goes here -->
</mx:Application>

Conclusion

Now you have a working pan and throw class! I hope you are as excited as I am. There is a lot here, and I hope I was able to cover everything without making this tutorial too long. I hope you liked it; thanks for reading! Please post in the comments if you have any questions.
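The decay-and-bounce arithmetic that drives theRepeater() is language-agnostic, so it can be handy to see it in isolation. Here is a minimal Python sketch of the same loop (pos, diff, lo, hi and decay are illustrative names, not fields of the class, and it assumes 0 < decay < 1):

```python
def throw(pos, diff, decay, lo=0.0, hi=400.0):
    """Advance pos one frame at a time until the per-frame delta
    drops below 1 px, reflecting the delta at the [lo, hi] edges.
    Assumes 0 < decay < 1 so the loop terminates."""
    while abs(diff) >= 1:
        diff *= decay          # friction: the delta shrinks every frame
        pos += diff
        if pos < lo:           # hit an edge: bounce back
            pos, diff = lo, -diff
        elif pos > hi:
            pos, diff = hi, -diff
    return pos
```

Calling throw(200.0, 40.0, 0.9) walks the position toward a rest point inside [0, 400], just as the class walks targetObject toward a stop once xDiff and yDiff fall under a pixel.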
The Features

Typically you would buy a little module which makes it easier to connect the sensor up to an Arduino; this is the module that was purchased. It can look a little confusing, as you can see it's marked with 2 different names on the silkscreen, and more often than not there is no check in the box labelled I160. Just something to look out for.

bmi160 module

Connection

Being an I2C device, it's easy to connect to an Arduino. Watch out, though: you need to power this off 3.3 V. Some modules may have a step-down regulator, but this cannot be guaranteed.

You can also connect the device in SPI mode.

Code

This example uses the BMI160Gen library, which makes life easier. You can connect the sensor in I2C or SPI mode; you need to change the wiring and initialize the device in the code accordingly.

#include <BMI160Gen.h>

const int select_pin = 10;
const int i2c_addr = 0x69;

void setup() {
  Serial.begin(9600); // initialize Serial communication
  while (!Serial);    // wait for the serial port to open

  // initialize device
  //BMI160.begin(BMI160GenClass::SPI_MODE, select_pin);
  BMI160.begin(BMI160GenClass::I2C_MODE, i2c_addr);
}

void loop() {
  int gx, gy, gz; // raw gyro values

  // read raw gyro measurements from device
  BMI160.readGyro(gx, gy, gz);

  // display tab-separated gyro x/y/z values
  Serial.print("g:\t");
  Serial.print(gx);
  Serial.print("\t");
  Serial.print(gy);
  Serial.print("\t");
  Serial.print(gz);
  Serial.println();

  delay(500);
}

Output

Open the serial monitor and you should see readings like the following; move the sensor about to see different values.

g: 90 86 9
g: 69 69 40
g: 35 97 -9
g: -7370 3961 -1786
g: -31829 -2652 32767
g: -3221 25109 32767
g: 26020 31878 -26125
g: -20332 -21698 -15712
g: -7297 3463 -1723
g: -1137 1521 420
g: -203 305 96
g: 144 -102 54
g: 77 116 35

Links

CJMCU-160I BMI160 latest inertial measurement sensor attitude module 6DOF
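The readGyro() numbers above are raw signed 16-bit counts, not degrees per second; to convert them you scale by the configured full-scale range. A hedged Python sketch of that conversion (the 250 °/s default is an assumption for illustration; the BMI160 supports 125/250/500/1000/2000 °/s full-scale ranges, so check the range your sketch actually configures):

```python
def raw_gyro_to_dps(raw, range_dps=250):
    """Convert a signed 16-bit gyro count to degrees per second.

    range_dps is the configured full-scale range; 250 here is only an
    illustrative default (the BMI160 offers 125/250/500/1000/2000 dps)."""
    return (raw * range_dps) / 32768.0
```

For example, a raw reading of 16384 at a 250 °/s range works out to 125 °/s.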
There is one final feature of Haskell's type system that sets it apart from other programming languages. The kind of polymorphism that we have talked about so far is commonly called parametric polymorphism. There is another kind called ad hoc polymorphism, better known as overloading. Here are some examples of ad hoc polymorphism:

- The literals 1, 2, etc. are often used to represent both fixed and arbitrary precision integers.
- Numeric operators such as + are often defined to work on many different kinds of numbers.
- The equality operator (== in Haskell) usually works on numbers and many other (but not all) types.

Note that these overloaded behaviors are different for each type, whereas in parametric polymorphism the type truly does not matter. In Haskell, type classes provide a structured way to control ad hoc polymorphism, or overloading.

Let us start with a simple, but important, example: equality. There are many types for which we would like equality defined, but some for which we would not. For example, comparing two functions for equality is generally considered computationally intractable, whereas we often want to compare two lists for equality. To highlight the issue, consider this definition of the function elem which tests for membership in a list:

x `elem`  []            = False
x `elem` (y:ys)         = x==y || (x `elem` ys)

For this to work, == must be defined on the element type, but we do not want == defined on every type. Type classes conveniently solve this: they allow us to declare which types are instances of which class, and to provide definitions of the overloaded operations associated with a class. For example, let's define a type class containing an equality operator:

class Eq a where 
  (==)                  :: a -> a -> Bool

Here Eq is the name of the class being defined, and == is the single operation in the class. This declaration may be read "a type a is an instance of the class Eq if there is an (overloaded) operation ==, of the appropriate type, defined on it." (Note that == is only defined on pairs of objects of the same type.)

The constraint that a type a must be an instance of the class Eq is written Eq a. Thus Eq a is not a type expression, but rather it expresses a constraint on a type, and is called a context. Contexts are placed at the front of type expressions. For example, the effect of the above class declaration is to assign the following type to ==:

(==)                    :: (Eq a) => a -> a -> Bool

This should be read, "For every type a that is an instance of the class Eq, == has type a->a->Bool". This is the type that would be used for == in the elem example, and indeed the constraint imposed by the context propagates to the principal type for elem:

elem                    :: (Eq a) => a -> [a] -> Bool

This is read, "For every type a that is an instance of the class Eq, elem has type a->[a]->Bool". This is just what we want---it expresses the fact that elem is not defined on all types, just those for which we know how to compare elements for equality.

So far so good. But how do we specify which types are instances of the class Eq, and the actual behavior of == on each of those types? This is done with an instance declaration. For example:

instance Eq Integer where 
  x == y                =  x `integerEq` y

The definition of == is called a method. The function integerEq happens to be the primitive function that compares integers for equality, but in general any valid expression is allowed on the right-hand side, just as in any other function definition. The overall declaration is essentially saying: "The type Integer is an instance of the class Eq, and here is the definition of the method corresponding to the operation ==." Given this declaration, we can now compare fixed precision integers for equality using ==. Recursive types such as Tree can be handled in the same way:

instance (Eq a) => Eq (Tree a) where 
  Leaf a         == Leaf b          =  a == b
  (Branch l1 r1) == (Branch l2 r2)  =  (l1==l2) && (r1==r2)
  _              == _               =  False

Note the context Eq a in the first line---this is necessary because the elements in the leaves (of type a) are compared for equality in the second line. The additional constraint is essentially saying that we can compare trees of a's for equality as long as we know how to compare a's for equality. If the context were omitted from the instance declaration, a static type error would result.

The Haskell Report, especially the Prelude, contains a wealth of useful examples of type classes.
Indeed, a class Eq is defined that is slightly larger than the one defined earlier:

class Eq a where 
  (==), (/=)            :: a -> a -> Bool
  x /= y                =  not (x == y)

This is an example of a class with two operations, one for equality, the other for inequality. It also demonstrates the use of a default method, in this case for the inequality operation /=. If a method for a particular operation is omitted in an instance declaration, then the default one defined in the class declaration, if it exists, is used instead. For example, an instance of Eq that defines only == still gets just the right definition of inequality: the logical negation of equality.

Haskell also supports a notion of class extension. For example, we may wish to define a class Ord which inherits all of the operations in Eq, but in addition has a set of comparison operations and minimum and maximum functions:

class (Eq a) => Ord a where
  (<), (<=), (>=), (>)  :: a -> a -> Bool
  max, min              :: a -> a -> a

Note the context in the class declaration. We say that Eq is a superclass of Ord (conversely, Ord is a subclass of Eq), and any type which is an instance of Ord must also be an instance of Eq. (In the next Section we give a fuller definition of Ord taken from the Prelude.)

One benefit of such class inclusions is shorter contexts: a type expression for a function that uses operations from both the Eq and Ord classes can use the context (Ord a), rather than (Eq a, Ord a), since Ord "implies" Eq. More importantly, methods for subclass operations can assume the existence of methods for superclass operations. For example, the Ord declaration in the Standard Prelude contains this default method for (<):

x < y                   =  x <= y && x /= y

As an example of the use of Ord, the principal typing of quicksort defined in Section 2.4.1 is:

quicksort               :: (Ord a) => [a] -> [a]

In other words, quicksort only operates on lists of values of ordered types. This typing for quicksort arises because of the use of the comparison operators < and >= in its definition.

Haskell also permits multiple inheritance, since classes may have more than one superclass. For example, the declaration

class (Eq a, Show a) => C a where ...

creates a class C which inherits operations from both Eq and Show.
Class methods are treated as top level declarations in Haskell. They share the same namespace as ordinary variables; a name cannot be used to denote both a class method and a variable, or methods in different classes.

Contexts are also allowed in data declarations; see §4.2.1.

Class methods may have additional class constraints on any type variable except the one defining the current class. For example, in this class:

class C a where
  m                     :: Show b => a -> b

the method m requires that type b is in class Show. However, the method m could not place any additional class constraints on type a. These would instead have to be part of the context in the class declaration.

So far, we have been using "first-order" types. For example, the type constructor Tree has so far always been paired with an argument, as in Tree Integer (a tree containing Integer values) or Tree a (representing the family of trees containing a values). But Tree by itself is a type constructor, and as such takes a type as an argument and returns a type as a result. There are no values in Haskell that have this type, but such "higher-order" types can be used in class declarations. To begin, consider the following Functor class (taken from the Prelude):

class Functor f where
  fmap                  :: (a -> b) -> f a -> f b

The fmap function generalizes the map function used previously. Note that the type variable f is applied to other types in f a and f b. Thus we would expect it to be bound to a type such as Tree which can be applied to an argument. An instance of Functor for type Tree would be:

instance Functor Tree where
  fmap f (Leaf x)       = Leaf   (f x)
  fmap f (Branch t1 t2) = Branch (fmap f t1) (fmap f t2)

This instance declaration declares that Tree, rather than Tree a, is an instance of Functor. This capability is quite useful, and here demonstrates the ability to describe generic "container" types, allowing functions such as fmap to work uniformly over arbitrary trees, lists, and other data types.

[Type applications are written in the same manner as function applications. The type T a b is parsed as (T a) b. Types such as tuples which use special syntax can be written in an alternative style which allows currying.
For functions, (->) is a type constructor; the types f -> g and (->) f g are the same. Similarly, the types [a] and [] a are the same. For tuples, the type constructors (as well as the data constructors) are (,), (,,), and so on.]

As we know, the type system detects typing errors in expressions. But what about errors due to malformed type expressions? The expression (+) 1 2 3 results in a type error since (+) takes only two arguments. Similarly, the type Tree Int Int should produce some sort of an error since the Tree type takes only a single argument. So, how does Haskell detect malformed type expressions? The answer is a second type system which ensures the correctness of types! Each type has an associated kind which ensures that the type is used correctly.

Type expressions are classified into different kinds which take one of two possible forms:

- The symbol * represents the kind of type associated with concrete data objects. That is, if the value v has type t, the kind of v must be *.
- If k1 and k2 are kinds, then k1->k2 is the kind of types that take a type of kind k1 and return a type of kind k2.

The type constructor Tree has the kind *->*; the type Tree Int has the kind *. Members of the Functor class must all have the kind *->*.

Kinds do not appear directly in Haskell programs. The compiler infers kinds before doing type checking without any need for `kind declarations'. Kinds stay in the background of a Haskell program except when an erroneous type signature leads to a kind error. Kinds are simple enough that compilers should be able to provide descriptive error messages when kind conflicts occur.

Before going on to further examples of the use of type classes, it is helpful to point out two other views of Haskell's type classes. The first is by analogy with object-oriented programming (OOP). In the following general statement about OOP, simply substituting type class for class, and type for object, yields a valid summary of Haskell's type class mechanism: "Classes capture common sets of operations. A particular object may be an instance of a class, and will have a method corresponding to each operation. Classes may be arranged hierarchically, forming notions of superclasses and subclasses, and permitting inheritance of operations/methods. A default method may also be associated with an operation."

In contrast to OOP, it should be clear that types are not objects, and in particular there is no notion of an object's or type's internal mutable state. An advantage over some OOP languages is that methods in Haskell are completely type-safe: any attempt to apply a method to a value whose type is not in the required class will be detected at compile time instead of at runtime. In other words, methods are not "looked up" at runtime but are simply passed as higher-order functions.
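That last point, that methods are "simply passed as higher-order functions", is exactly how type classes are typically implemented: each context becomes an extra dictionary-of-methods argument. A rough, hypothetical Python analogy of this dictionary passing (purely illustrative; none of these names come from Haskell or its runtime):

```python
# An Eq "dictionary" is just a record of methods; each instance
# declaration corresponds to building one of these for a type.
eq_int = {"==": lambda x, y: x == y}          # like: instance Eq Integer

def eq_list(eq_a):
    """Like: instance (Eq a) => Eq [a] -- the list dictionary is
    built from the element type's dictionary."""
    return {"==": lambda xs, ys: len(xs) == len(ys)
            and all(eq_a["=="](x, y) for x, y in zip(xs, ys))}

def elem(eq_a, x, ys):
    # Corresponds to: elem :: (Eq a) => a -> [a] -> Bool
    # The (Eq a) context shows up as the explicit eq_a argument.
    return any(eq_a["=="](x, y) for y in ys)
```

An overloaded call such as 3 `elem` [1,2,3] then amounts to elem(eq_int, 3, [1, 2, 3]), with the compiler, rather than the programmer, choosing and threading the dictionary.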
We have shown how parametric polymorphism is useful in defining families of types by universally quantifying over all types. Sometimes, however, that universal quantification is too broad---we wish to quantify over some smaller set of types, such as those types whose elements can be compared for equality. Type classes can be seen as providing a structured way to do just this. Indeed, we can think of parametric polymorphism as a kind of overloading too! It's just that the overloading occurs implicitly over all types instead of a constrained set of types (i.e. a type class). The classes used by Haskell are similar to those used in other object-oriented languages such as C++ and Java. However, there are some significant differences:
Created attachment 367545 [details]
console-dump of crash

Description of problem:
Kernel freezes with KVM VMs running [usually during startup]

Version-Release number of selected component (if applicable):
kernel-2.6.30.9-90.fc11.x86_64
qemu-0.10.6-9.fc11.x86_64
virt-manager-0.7.0-7.fc11.x86_64

How reproducible:
Happens pretty frequently

Steps to Reproduce:
1. Install F11 with latest patches
2. Install a bunch of VMs [kvm]: windows, ubuntu, freebsd, opensolaris
3. Set the VMs to start at reboot

Actual results:
Kernel panic? With blinking numlock/caps lock at various times.
- sometimes when the VMs are booting
- sometimes when a VM is restarted

Expected results:
no crash

Additional info:
- Ran memtest86 overnight and no errors here.
- I suspect it happens with multiple VMs running - esp with OpenSolaris in the mix.
- Some of these VMs are carried over from F10 - and during this migration the OpenSolaris VM never worked. Recently [perhaps a couple of months back] I reinstalled the OpenSolaris VM. Also, until then, the Windows VM was the primary active VM. But since then I was attempting to run all the VMs [windows, freebsd, ubuntu, opensolaris] simultaneously - and have seen constant crashes.
- So the crashes have been constant during the past few kernel updates - and qemu, virt-manager updates.

A picture of one of the kernel dumps is attached.

Created attachment 367547 [details]
lspci; cat /proc/meminfo; cat /proc/cpuinfo; dmesg

We really need to see the beginning of that oops report.

I'm not sure how to get the complete stack trace. All I can do is take pics of the console output. I have the following update since my previous report: I've been running a single [Windows] VM since then - and it was stable. So the issue is with multiple VMs - usually it's triggered during the boot sequence of one of them.

I've upgraded to F12 now - and the crashes persist. I have the oops from 2 different crashes.
Kernel: 2.6.31.6-145.fc12.x86_64 - First one when all 4 VMs are booted together at startup [of host F12] - Second one - with 3 VMs booted together at startup [without OpenSolaris]. Here the initial boot went fine. But on rebooting one of the VMs [ubuntu] a panic was triggered. For this panic - I have the stack trace from the begining. I also get the following output on a ssh terminal connection [to the F12 host] from a different machine. >>>>>>>>>>>>>>>> [root@maverick ~]# Message from syslogd@maverick at Dec 2 14:20:01 ... kernel:general protection fault: 0000 [#1] SMP Message from syslogd@maverick at Dec 2 14:20:01 ... kernel:last sysfs file: /sys/kernel/mm/ksm/run Message from syslogd@maverick at Dec 2 14:20:01 ... kernel:Stack: Message from syslogd@maverick at Dec 2 14:20:01 ... kernel:Call Trace: Message from syslogd@maverick at Dec 2 14:20:01 ... kernel: <IRQ> Message from syslogd@maverick at Dec 2 14:20:01 ... kernel: <EOI> Message from syslogd@maverick at Dec 2 14:20:01 ... kernel:Code: 38 0f b6 d2 48 01 d0 74 30 48 8b 58 28 eb 13 48 89 df e8 03 f9 ff ff 48 89 df e8 cf f8 ff ff 4c 89 e3 48 85 db 74 12 48 8d 7b 78 <4c> 8b 23 e8 fa ee cb ff 85 c0 74 e8 eb d6 5b 41 5c c9 c3 55 48 Message from syslogd@maverick at Dec 2 14:20:01 ... kernel:Kernel panic - not syncing: Fatal exception in interrupt asterix:/home/balay> Created attachment 375591 [details] crash with 4 VMs started together Created attachment 375592 [details] crash with 3 VMs started together, and then one of the VMs was rebooted Ok - I've disabled ipv6 on this machine [because the stack trace has references to it] - and now the VMs are lot more stable. I've tried a few things - theF12 host hasn't crashed yet. I'll see if this stays stable [with all the 4VMs running concurrently]. BTW: should have mentioned: I use bridge networking for the VMs [and it is also listed in the stack trace]. So perhaps the combination of bridge networking with ipv6 is the trigger for the crash.. 
Am seeing something fairly similar, reported separately. It seems I can confirm the ipv6 part of the anecdote. This is on a friend's AMD box, which doesn't have a VT-d knob in the BIOS. I can't test myself, but the issue is quite reproducible there.

Thanks to a lead from the Fedora Forums I found this bug report, which mirrors recent problems I have seen on both Fedora 11 and Fedora 12 64-bit KVM systems that are using bridge networks. If Autostart is enabled for at least one VM, the systems are hard locking at reboot. However, if ipv6 is disabled, the host boots normally and the VMs autostart as normal. I have disabled ipv6 by editing /etc/modprobe.d/blacklist and adding the line:

install ipv6 /bin/true

If I remove all autostart options and re-enable ipv6, the KVM host starts fine, and the VMs can be manually started without any problems. Hence, it appears there is a conflict (possibly just for systems using bridge networks) when ipv6 is enabled and VMs are configured to autostart.

Just an update: [after disabling ipv6] the machine has now been stable for the past 2 weeks [even with some reboots of the guest OSes]

[root@maverick ~]# uname -srv
Linux 2.6.31.6-145.fc12.x86_64 #1 SMP Sat Nov 21 15:57:45 EST 2009
[root@maverick ~]# uptime
12:09:12 up 13 days, 19:12, 1 user, load average: 0.31, 0.29, 0.21
Network config: HP ProCurve managed switch, with two ports configured in LACP (802.3ad) mode, no VLAN.

Fedora server config: eth0+eth1 -> bond0 -> br0

/etc/sysconfig/network-scripts/ifcfg-br0:
...
IPV6INIT=no
IPV6_AUTOCONF=no
DHCPV6=no
...

ncftool> dumpxml br0
<?xml version="1.0"?>
<interface type="bridge" name="br0">
  <start mode="onboot"/>
  <protocol family="ipv4">
    <ip address="10.16.182.254" prefix="24"/>
    <route gateway="10.16.182.1"/>
  </protocol>
  <bridge stp="on">
    <interface type="bond" name="bond0">
      <bond mode="802.3ad">
        <miimon freq="100" updelay="100" carrier="ioctl"/>
        <interface type="ethernet" name="eth0">
          <mac address="00:23:7D:FB:FE:35"/>
        </interface>
        <interface type="ethernet" name="eth1">
          <mac address="00:23:7D:A8:EE:CC"/>
        </interface>
      </bond>
    </interface>
  </bridge>
</interface>

ncftool> dumpxml --live br0
<?xml version="1.0"?>
<interface name="br0" type="bridge">
  <bridge>
    <interface name="bond0" type="bond">
      <bond>
        <interface name="eth0" type="ethernet">
          <mac address="00:23:7d:fb:fe:35"/>
        </interface>
        <interface name="eth1" type="ethernet">
          <mac address="00:23:7d:fb:fe:35"/>
        </interface>
      </bond>
    </interface>
    <interface name="vnet0" type="ethernet">
      <mac address="e2:30:33:b3:84:78"/>
    </interface>
  </bridge>
  <protocol family="ipv4">
    <ip address="10.16.182.254" prefix="24"/>
  </protocol>
  <protocol family="ipv6">
    <ip address="fe80::223:7dff:fefb:fe35" prefix="64"/>
  </protocol>
</interface>

I disable IPV6 on the F12 box by doing the following:
- edit /etc/sysconfig/network and add the line NETWORKING_IPV6=no
- create a file /etc/modprobe.d/disable-ipv6.conf with the line install ipv6 /bin/true

(In reply to comment #11)
> For safety, turned off any explicit TCP/IPv6 settings on Fedora too, although
> even with IPV6INIT=no, Fedora still assigned auto IP6 address.

I spent the whole weekend learning the netfilter code and SLUB debugging to find this problem:

There should be a patch soon.

*** Bug 545851 has been marked as a duplicate of this bug. ***
Confirmed that the following hack prevents the issue (real fix is being worked on):

void nf_conntrack_destroy(struct nf_conntrack *nfct)
{
        void (*destroy)(struct nf_conntrack *);

        if ((struct nf_conn *)nfct == &nf_conntrack_untracked) {
                printk("JCM: nf_conntrack_destroy: trying to destroy nf_conntrack_untracked! CONTINUING...\n");
                //panic("JCM: nf_conntrack_destroy: trying to destroy nf_conntrack_untracked!\n");
                return; /* refuse to free nf_conntrack_untracked */
        }

        rcu_read_lock();
        destroy = rcu_dereference(nf_ct_destroy);
        BUG_ON(destroy == NULL);
        destroy(nfct);
        rcu_read_unlock();
}
EXPORT_SYMBOL(nf_conntrack_destroy);

The issue is that with multiple namespaces, we wind up decreasing the use count on the untracked static ct to zero and trying to free it, which is bad. Patrick should have a fix tomorrow using per-namespace untracked ct's.

Jon.

This hack is harmless, but in an ideal world we wouldn't try freeing the untracked ct in the first place.

*** Bug 521362 has been marked as a duplicate of this bug. ***

I see Kyle is already making a test kernel with this.

Yeah, builds are in progress on all the targets I think.

*** Bug 520108 has been marked as a duplicate of this bug. ***

This has been fixed and confirmed.

Is this bug fixed only in rawhide?

No, it's been committed to F-11 and F-12 too.

*** Bug 681917 has been marked as a duplicate of this bug. ***
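To see why the guard in the hack above is needed, here is a purely conceptual Python sketch — not kernel code, all names illustrative. Several namespaces share one statically allocated "untracked" entry, so any destroy path must refuse to free it regardless of how its use count was driven:

```python
# Conceptual model only: a destroy routine that refuses to free a shared,
# statically allocated sentinel, mirroring the posted workaround.
UNTRACKED = {"name": "nf_conntrack_untracked"}  # stands in for the static ct

def destroy(ct, freed):
    """Free a conntrack entry unless it is the shared static sentinel."""
    if ct is UNTRACKED:
        return  # refuse to free the untracked entry
    freed.append(ct)  # stands in for the real free path

freed = []
dynamic_ct = {"name": "some-tracked-connection"}
destroy(dynamic_ct, freed)  # a normal entry is released
destroy(UNTRACKED, freed)   # the sentinel is skipped, no matter who calls
```

The real fix mentioned in the thread goes further (per-namespace untracked entries), so the refcount never reaches the sentinel-free path at all.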
https://bugzilla.redhat.com/show_bug.cgi?id=533087
Creating object key names

The object key (or key name) uniquely identifies the object in an Amazon S3 bucket. Object metadata is a set of name-value pairs. For more information about object metadata, see Working with object metadata.

When you create an object, you specify the key name, which uniquely identifies the object in the bucket. For example, on the Amazon S3 console, when you select a bucket, a list of objects in your bucket appears; these names are the object keys.

The Amazon S3 data model is a flat structure: You create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders. However, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders. For more information about how to edit metadata from the Amazon S3 console, see Editing object metadata in the Amazon S3 console.

The s3-dg.pdf key does not have a prefix, so its object appears directly at the root level of the bucket. If you open the Development/ folder, you see the Projects.xlsx object in it.

Amazon S3 supports buckets and objects, and there is no hierarchy. However, by using prefixes and delimiters in an object key name, the Amazon S3 console and the AWS SDKs can infer hierarchy and introduce the concept of folders. The Amazon S3 console implements folder object creation by creating a zero-byte object with the folder prefix and delimiter value as the key. These folder objects don't appear in the console. Otherwise they behave like any other objects and can be viewed and manipulated through the REST API, AWS CLI, and AWS SDKs.

Object key naming guidelines

You can use any UTF-8 character in an object key name. However, using certain characters in key names can cause problems with some applications and protocols.

Objects with key names ending with period(s) "." downloaded using the Amazon S3 console will have the period(s) "." removed from the key name of the downloaded object. To download an object with the key name ending in period(s) "."
retained in the downloaded object, you must use the AWS Command Line Interface (AWS CLI), AWS SDKs, or REST API.

Characters that might require special handling

The following characters in a key name might require additional code handling and likely need to be URL encoded or referenced as HEX. Some of these are non-printable characters that your browser might not handle, which also requires special handling:

Ampersand ("&")
Dollar ("$")
ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
'At' symbol ("@")
Equals ("=")
Semicolon (";")
Colon (":")
Plus ("+")
Space – Significant sequences of spaces might be lost in some uses (especially multiple spaces)

XML related object key constraints

As specified by the XML standard on end-of-line handling, all XML text is normalized such that single carriage returns (ASCII code 13) and carriage returns immediately followed by a line feed (ASCII code 10) are replaced by a single line feed character. To be parsed correctly in XML requests, these and other special characters must be replaced with their XML entity codes:

' as &apos;
” as &quot;
& as &amp;
< as &lt;
> as &gt;
\r as &#13; or &#x0D;
\n as &#10; or &#x0A;

The following example illustrates the use of an XML entity code as a substitution for a carriage return. This DeleteObjects request deletes an object with the key parameter: /some/prefix/objectwith\rcarriagereturn (where the \r is the carriage return).

<Delete xmlns="">
  <Object>
    <Key>/some/prefix/objectwith&#13;carriagereturn</Key>
  </Object>
</Delete>
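Stepping back to the prefix and delimiter discussion above: the "folder" inference amounts to grouping a flat key list by a delimiter. The following Python sketch mimics the spirit of a ListObjects-style call with a "/" delimiter — it is not an SDK call, and the Development/design.pdf key is made up for illustration; the other keys come from the example above:

```python
def list_with_delimiter(keys, prefix="", delimiter="/"):
    """Split flat keys into this level's objects and rolled-up 'folders'."""
    objects, folders = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        cut = rest.find(delimiter)
        if cut == -1:
            objects.append(key)                   # lives at this "level"
        else:
            folders.add(prefix + rest[:cut + 1])  # rolled up as a common prefix
    return objects, sorted(folders)

keys = ["Development/Projects.xlsx", "Development/design.pdf", "s3-dg.pdf"]
objects, folders = list_with_delimiter(keys)
# objects -> ["s3-dg.pdf"]; folders -> ["Development/"]
```

Listing again with prefix="Development/" would surface the two keys inside that "folder" — even though, as the text stresses, the underlying key space is completely flat.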
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html
Some of you may remember a philosophical question I raised in a post some time ago. The overall consensus of the feedback we received was to prioritise delivering code samples (given a choice between code samples and wizard-like code-generation tools). Which we are doing, both via the ADN site and our developer blogs… but that doesn’t mean some investment in code-generation tools isn’t also appropriate, from time-to-time. :-)

Cyrille Fauvel, who manages the Media & Entertainment arm of DevTech but also occasionally works on technical activities that relate to our other products, has put together a new (currently draft) version of the AutoCAD .NET Wizard. This tool integrates with the Visual Studio 2008 IDE (as well as the 2008 editions of Visual C# Express and Visual Basic Express), adding C#/VB.NET project templates for AutoCAD development. (This version of the tool only works with the 2008 versions – we’re currently evaluating whether (and how) to support the 2005 versions of Visual Studio [Express] or simply to recommend using our previously posted templates.)

When you select one of these “Autodesk” project templates you will get a skeleton project that defines the application initialization protocol plus a few placeholder commands while setting up the ability to define localized commands.

Here are a few images stepping you through the installation of this Wizard (nothing very surprising, which is why I’ve kept the thumbnails so small :-):

[Note: the current installer requests an install location, which is unnecessary. The installer only uses this location for temporary files, removing them afterwards. This step will be removed from a future build of the tool.]
Once installed (and Visual Studio 2008 has been re-started), you will get some new options when you start a new project… … that include the “AutoCAD 2010 plug-in” project template:

We’ve numbered these templates “2010”, as they provide support for API features delivered with AutoCAD 2010, but if you don’t select these features the created project should work with older versions of AutoCAD. At least that’s the theory. :-)

When you first select one of these project types, you will be asked to select the location of your ObjectARX SDK:

You should specify this along with the assemblies you would like to reference for this particular project:

On completion of this dialog your skeleton project will be created.

[Something else to note: the tool currently creates projects which allow addition of Windows Presentation Foundation content – to do this we had to add a little piece of XML to the project file that specifies a target .NET Framework version of 3.0 or above as well as a GUID to identify the project as WPF-compatible. This tells Visual Studio to list WPF item-types when you add new items to the project, but I expect this to become another configuration option in a future version as some developers will want to target framework versions where WPF is unavailable.]

Let’s take a look at the skeleton code created for a C# project, reformatted to fit the width of the blog…

First the myPlugin code:

[assembly: ExtensionApplication(
  typeof(Autodesk.AutoCAD.AutoCAD_2010_plug_in1.MyPlugin)
)]

namespace Autodesk.AutoCAD.AutoCAD_2010_plug_in1
{
  // This class is instantiated by AutoCAD once and kept alive for
  // the duration of the session. If you don't do any one time
  // initialization then you should remove this class.
  public class MyPlugin : IExtensionApplication
  {
    void IExtensionApplication.Initialize()
    {
      // Add one time initialization here
      // One common scenario is to setup a callback function here
      // that unmanaged code can call.
      // To do this:
      // 1. Export a function from unmanaged code that takes a
      //    function pointer and stores the passed in value in a
      //    global variable.
      // 2. Call this exported function in this function passing
      //    the delegate.
      // 3. When unmanaged code needs the services of this managed
      //    module you simply call acrxLoadApp() and by the time
      //    acrxLoadApp returns the global function pointer is
      //    initialized to point to the C# delegate.
      // For more info see:
      //
      //
      //
      // as well as some of the existing AutoCAD managed apps.

      // Initialize your plug-in application here
    }

    void IExtensionApplication.Terminate()
    {
      // Do plug-in application clean up here
    }
  }
}

And now the myCommands code:

[assembly: CommandClass(
  typeof(Autodesk.AutoCAD.AutoCAD_2010_plug_in1.MyCommands)
)]

namespace Autodesk.AutoCAD.AutoCAD_2010_plug_in1
{
  // This class is instantiated by AutoCAD for each document when
  // a command is called by the user the first time in the context
  // of a given document. In other words, non static data in this
  // class is implicitly per-document!
  public class MyCommands
  {
    // The CommandMethod attribute can be applied to any public
    // member function of any public class.
    // The function should take no arguments and return nothing.
    // If the method is an instance member then the enclosing class is
    // instantiated for each document. If the member is a static
    // member then the enclosing class is NOT instantiated.
    //
    // NOTE: CommandMethod has overloads where you can provide helpid
    // and context menu.

    // Modal Command with localized name
    [CommandMethod(
      "MyGroup", "MyCommand", "MyCommandLocal", CommandFlags.Modal
    )]
    public void MyCommand() // This method can have any name
    {
      // Put your command code here
    }

    // Modal Command with pickfirst selection
    [CommandMethod(
      "MyGroup", "MyPickFirst", "MyPickFirstLocal",
      CommandFlags.Modal | CommandFlags.UsePickSet
    )]
    public void MyPickFirst() // This method can have any name
    {
      PromptSelectionResult result =
        Application.DocumentManager.MdiActiveDocument.Editor.SelectImplied();
      if (result.Status == PromptStatus.OK)
      {
        // There are selected entities
        // Put your command using pickfirst set code here
      }
      else
      {
        // There are no selected entities
        // Put your command code here
      }
    }

    // Application Session Command with localized name
    [CommandMethod(
      "MyGroup", "MySessionCmd", "MySessionCmdLocal",
      CommandFlags.Modal | CommandFlags.Session
    )]
    public void MySessionCmd() // This method can have any name
    {
      // Put your command code here
    }

    // LispFunction is similar to CommandMethod but it creates a
    // lisp callable function. Many return types are supported not
    // just string or integer.
    [LispFunction("MyLispFunction", "MyLispFunctionLocal")]
    public int MyLispFunction(ResultBuffer args) // This method can have any name
    {
      // Put your command code here

      // Return a value to the AutoCAD Lisp Interpreter
      return 1;
    }
  }
}

The myCommands.resx file needed to define the local command names (for the MySessionCmd command) has also been added to the project automatically to help with command localization.

All well and good… but it may be that these project templates don’t quite fit your internal needs, whether due to the copyright notices in the source and assembly properties [something I do expect to be removed from a future build – it doesn’t make sense for Autodesk copyright notices to be placed in your project skeleton] or because you have a standard approach you’ve adopted for command registration (or whatever). The good news is that it’s pretty easy for you to modify these baseline templates for your own needs:

- Locate the template you wish to update, e.g. “C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\ProjectTemplates\CSharp\Autodesk\AutoCAD 2010 plug-in.zip” (this path will vary if you’re running on a 64-bit or localized OS or if you’re using an Express version of Visual Studio, but the technique should still work).
- Unzip the files somewhere.
- Modify the source code and project setup to meet your needs.
- Modify the template file (MyTemplate.vstemplate – an XML file providing Visual Studio with additional information not stored in the project), as needed.
- ZIP the files back up, choosing a new name for the .ZIP archive.
- Post the file within the ProjectTemplates folder structure.
- Run Visual Studio with the /InstallVSTemplates command parameter (the easiest way is to open a “Visual Studio 2008 Command Prompt” and use it to execute “devenv /InstallVSTemplates”).

As mentioned a few times, I do expect us to provide a new version of this tool with some minor wrinkles ironed out, but it’s pretty much ready-to-go (and certainly ready for people to try out and respond with their feedback :-). Please let us know how you get on, whether by posting a comment or by emailing our wizard developers.
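As an aside, the unzip–edit–rezip part of the steps above is easy to script. Here is a small Python sketch; the file name, search string, and replacement below are purely illustrative, not the wizard's actual contents:

```python
import shutil
import zipfile
from pathlib import Path

def retarget_template(src_zip, dest_zip, member, old, new, workdir="template_tmp"):
    """Unzip a project template, replace text in one member, re-zip under a new name."""
    work = Path(workdir)
    if work.exists():
        shutil.rmtree(work)
    with zipfile.ZipFile(src_zip) as zf:            # step: unzip the files somewhere
        zf.extractall(work)
    target = work / member                          # step: modify to meet your needs
    target.write_text(target.read_text().replace(old, new))
    with zipfile.ZipFile(dest_zip, "w") as zf:      # step: ZIP back up, new name
        for path in work.rglob("*"):
            if path.is_file():
                zf.write(path, path.relative_to(work))
```

After producing the new .zip, it still needs to be placed within the ProjectTemplates folder structure and registered by running “devenv /InstallVSTemplates”, as described in the steps above.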
http://through-the-interface.typepad.com/through_the_interface/2009/06/a-new-project-wizard-for-autocad-net-development.html
Ovidiu Predescu wrote:
>
> On Mon, 05 Nov 2001 23:36:16 +0100, Stefano Mazzocchi <stefano@apache.org> wrote:
>
> > Ovidiu Predescu wrote:
> > >
> > > Hi,
> > >
> > > This is very good stuff, Mark! I've looked at it briefly and I like it.
> >
> > Now, don't you love it when big companies people find a way to
> > cooperate on an open source list? :)
>
> :)
>
> Actually Mark and I have been talking about this for quite some
> time. Still your point is still valid, as many people tend to think of
> big companies as a place where anybody knows what everybody else is
> doing in the company. The situation is very different, in fact, just
> like in real life, you usually don't have much visibility on what
> others are doing unless you communicate with people.

Oh, yes, I know. In fact IBM alphaworks was done exactly for that and I think it was a good move (alphaworks, despite the fact you can download stuff and even the source sometimes, it's not about opensource but it's all about internal visibility)

Anyway, I'm very happy to have this CC/PP discussion here!
> The
> 'ua-capabilities' XSLT parameter would contain a DOM tree which is the
> equivalent of the following XML document:
>
> <browser xmlns:prf="...">
>   <user-agent>Nokia</user-agent>
>   <mime-type>text/vnd.wap.wml</mime-type>
>   <formatter-type>text/wml</formatter-type>
>   <has-accesskey/>
>   <has-wtai-add-phonebook/>
>   <binds-call-to-send>vnd.up.send</binds-call-to-send>
>   <prf:WmlDeckSize>1400</prf:WmlDeckSize>
> </browser>
>
> This XML document becomes the value of the 'ua-capabilities' parameter
> in an XSLT stylesheet. You can then write a stylesheet like this:
>
> <xsl:param name="ua-capabilities"/>
>
> <xsl:template match="...">
>   <xsl:if test="...">
>     <!-- Generate markup to automatically bind the "send" button on
>          the cell phone to call the phone number -->
>     ...
>   </xsl:if>
> </xsl:template>
>
> I've implemented this sometime last year for C1, and Dims ported it to
> C2.

Got it. Makes perfect sense.

> > > The next step from here would be to come up with a set of
> > > stylesheets for XHTML to automatically generate the right markup
> > > for the requesting device.
> > That's where selectors kick in.
>
> I'm not sure the selectors solve the complete problem. They are really
> good when dealing with different types of markup languages, but for
> small variations in the functionality of the same category of devices,
> something at the XSLT layer works better IMO.
>
> Imagine all the WML browsers out there, each with its own variation in
> the implemented WML features. Having multiple XHTML->WML stylesheets,
> one per device, is very difficult to maintain. If you have only one
> set of XHTML->WML stylesheets which contain all the possible
> variations, it's probably easier to deal with.

Very good point.
Well, if you want to do what you just proposed to integrate Mark's work, I'll be +1 on
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200111.mbox/%3C3BE9032C.24EA33FA@apache.org%3E
Hey folks - my name is Chriztian and I'm here to tell you about how I fumble my way through developing Umbraco sites on my trusty old MacBook Pro. (I had switched to the Mac long before I ever knew about Umbraco, so I never really pondered switching to Windows, even though I had absolutely no way of running the sites locally for debugging etc.)

I've always enjoyed reading about how others do something I also do, whether it's three- vs. four-finger scales on the guitar or structuring their computer's desktop (bump up the icons to an insane size - can't fit so many then, right?) so I hope some of you get an "aha - didn't think of that..." moment while reading this article and get inspired to try something different.

I'm a frontend person

I was never really interested in all "the server stuff" (setting up permissions, the database, the database user accounts etc.) - the parts of development I really love are the parts that deal with HTML & CSS; with JavaScript for interactivity & behaviors.

Over the years I’ve found a process/rhythm for my frontend development that’s been influenced by many different approaches, and currently I’ve dubbed my process “CPL” (pronounced “couple”) - the letters stand for Components, Panels & Layouts. See, that's actually three, not a couple - figures...

Components are the smaller chunks, like a button, a header or a navigation list. Panels are very often just a wrapper for a number of components - for example, a typical Footer Panel could contain Header, TextContent and NavigationList components. Layouts are full pages where panels and components flow by next to each other.

Every component has its own folder inside the src/components/ folder, e.g. src/components/linkbutton/, wherein we find a .kit (HTML) file and a .less (CSS) file. Maybe a .js file but probably also a .cshtml file, which we’ll get back to later.

The thing that really makes all of this work for me is an app called CodeKit.
CodeKit

This gem of an app handles so many things that are usually handled by your package/bundle manager of choice, e.g.:

- Build the CSS from all the .less files
- Build the JS from all the .js files
- Build all of the prototype HTML files from the .kit files
- Serve the site
- Reload any browsers when a file changes (animating the CSS changes so you can easily see what changed)

I'm pretty sure this app alone has already saved me years of npm install-woes. (It can actually handle your npm stuff as well, but I don't use that).

The .kit file

Kit is basically HTML with includes and variables. That’s it, no looping or control statements of any kind. But it enables doing something like this to test the robustness of a component:

First, here’s the _linkbutton.kit file:

<a href="#somewhere" class="linkbutton <!-- $buttonClass? -->"><!-- $buttonLabel? --></a>

Yes. You saw it too, didn't you? An HTML comment inside an attribute??? I know, that did take about a year or so for me to accept, but now I'm fine with it, as I know it's because it makes it insanely fast for CodeKit to process (no tokenization or complex language parsing needed).

Then what I also have is a long "components" file where I include all of these, to see how stuff breaks:

<!-- $buttonLabel: Click me! -->
<!-- $buttonClass: nil -->
<!-- @include “components/linkbutton/linkbutton” -->

...

<!-- $buttonLabel: We’ve updated our Terms & Conditions -->
<!-- $buttonClass: alt -->
<!-- @include “components/linkbutton/linkbutton” -->

Now I’m able to see just how well that longer button breaks (or if it breaks at all). And CodeKit keeps that page updated whenever a component changes.

A couple of LinkButtons on the "componentized" page in the *regular* sizeclass.

A couple of LinkButtons on the "componentized" page in the *compact* sizeclass

You can read more about the Kit format on CodeKit's webpage.
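Because Kit has no looping or control flow, the whole variable mechanism can be approximated in a few lines. This toy Python sketch is not CodeKit's implementation — it only illustrates why comment-based variables are so cheap to process (includes and the special nil value are ignored here):

```python
import re

SET_RE = re.compile(r"<!--\s*\$(\w+):\s*(.*?)\s*-->")  # <!-- $name: value -->
USE_RE = re.compile(r"<!--\s*\$(\w+)\??\s*-->")        # <!-- $name --> / <!-- $name? -->

def render_kit(src):
    variables = {}
    def record(match):
        variables[match.group(1)] = match.group(2)
        return ""                                      # assignments emit nothing
    without_sets = SET_RE.sub(record, src)
    # substitute each use with its recorded value (empty if never set)
    return USE_RE.sub(lambda m: variables.get(m.group(1), ""), without_sets)

page = ('<!-- $buttonLabel: Click me! -->\n'
        '<a href="#somewhere"><!-- $buttonLabel? --></a>')
# render_kit(page) -> '\n<a href="#somewhere">Click me!</a>'
```

No tokenizer, no parser — a couple of regex passes — which matches the article's point about why the comment-in-attribute trick makes processing so fast.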
The .less file

I stumbled upon Less first, but I'm sure Sass has the same features that I use Less for, specifically:

- Imports (i.e., the kind that results in a single CSS file - not CSS imports that are separate HTTP requests)
- Variables (e.g. for brand colors, animation settings etc.) - these are compile-time variables, mind you; not runtime variables (which I can use CSS Custom Properties for, but that’s a whole ‘nother discussion)
- Mixins - for abstracting styles into something that can be used in more places and have a meaningful name.
- Nesting - I don’t nest to insane levels, but it’s nice to be able to “scope” a component and A) Have it be easy to read, and B) Not have to add complicated classes to every single element inside out of fear that something will bleed through to “the other side”.

In our sample here, the linkbutton.less file exhibits points 2, 3 & 4:

.linkbutton {
  .linkbuttonStyles();

  &.alt {
    background-color: @whiteish;
  }
}

Note: I usually vote against naming color variables after the actual color (because of the likelihood of near-launch changes ending up in a variable called @red actually having the color of dark green) but I make an exception for "white" and "black", specifically. (I name them @whiteish and @blackish just in case the designer decides to tweak them a tiny bit.)

The .js file

I was actually using CoffeeScript for quite many years, but now that ES6 has adopted so many of its features, I'm back to JavaScript again. (Why yes, CodeKit handled compiling CoffeeScript on the fly for me, and now it's just doing "Babel transpilation" instead.)

Now, the LinkButton uses CSS transitions for its hover + focus interaction so it doesn't have a .js file.
But let's look at one for a Slider Panel instead:

import Swiper from '../../../vendor/swiper/swiper.esm.browser.bundle'

export default function initialize(selector, options) {
  const swiper = new Swiper(selector, options)
}

In the main.js file I pick up the exported initializer and call it when called upon (in this case when the page has loaded):

import initializeSliderPanel from '../panels/slider-panel/slider-panel'

document.addEventListener('DOMContentLoaded', (event) => {
  initializeSliderPanel('.swiper-container', {
    pagination: {
      el: '.swiper-pagination',
      clickable: true
    },
    navigation: {
      nextEl: '.swiper-button-next',
      prevEl: '.swiper-button-prev'
    }
  })
})

This way I've been able to keep the component-specific files together – but I was always missing one specific file from that equation...

The .cshtml file

This is Razor — wöohoö :)

I'll save my personal quibbles for a future installment in the continuing saga, but suffice to say it's been a bumpy ride for me, from time to time.

The Razor file shares its name with the DocumentTypeAlias of the block it's rendering, so let's have a look at that PageIntro component from the screenshots above — it'll have a razor file called PageIntroBlock.cshtml:

@inherits Umbraco.Web.Mvc.UmbracoViewPage<PageIntroBlock>
@{
  var block = Model;
  var header = block.Header;
  var teaser = block.Intro;
}
<div class="pageheader">
  <h1>@(header)</h1>
  <p class="teaser">
    @(teaser)
  </p>
</div>

But I am getting ahead of myself — let's talk about them "blocks"...

Building Blocks

In Umbraco 7 we used this great package called Embedded Content Blocks for general page building. In Umbraco 8 we're using Nested Content (with the Nesting Contently addon) but the general setup is the same:

The Document Types tree has a folder called Blocks where the block types are created. They're all created without a template, their aliases all have the suffix Block as we saw with the PageIntro above.
In Umbraco 8 they also need to have "Is an Element Type" checked on the Permissions tab for them to be used with Nested Content.

In the Partial Views tree there's a similar folder, named Blocks where every block type has a corresponding partial that knows how to render a block of said type.

The blocks are added to a Nested Content datatype so that they're available to build pages with.

When rendering the blocks section, we pick up all the blocks and send them into this partial (Blocks/RenderBlocks.cshtml):

@inherits Umbraco.Web.Mvc.UmbracoViewPage<IEnumerable<IPublishedElement>>
@{
  var blocks = Model;
}
@if (blocks.Any())
{
  foreach (var block in blocks.Where(b => b.IsVisible()))
  {
    @Html.Partial("Blocks/" + block.ContentType.Alias, block)
  }
}

That, in turn, makes sure to find the appropriate partial for any given block, and because of some internal magic with types, the engine is able to hand the blocks to the partials as the typed models they are, so we're able to "just" use Models Builder syntax (block.PropertyName) instead of the not-so-readable generic block.Value<string>("propertyName") syntax. This is as close as I will ever get to the magic of <xsl:apply-templates /> I guess :)

We're now able to rather easily add a new block (with new functionality) to the already existing set, without worrying about breaking existing pages. Sometimes a block can just be a dummy placeholder (e.g. an EmployeeListBlock) or even a spacer.

Build & Deploy

While developing the frontend components, CodeKit continually compiles any changed files (or files that include them). It also runs a webserver to preview the files in, so that's where I'm able to see the various components and how they behave in mainly the regular and compact sizeclasses (by changing the size of the browser window). (If the term sizeclasses confuse you, I have a repo that you can check out).
One neat trick I use on the components page, is that in a component's kit file, I add the contenteditable attribute where applicable, so I can tweak a header, a link label or even a paragraph right on the page when I'm viewing it (instead of doing it with the Devtools):

<div class="pageheader">
  <h1 contenteditable><!-- $header? --></h1>
  <p class="teaser" contenteditable>
    The teaser doesn’t need to be short - it can even be a couple of sentences or more [...]
  </p>
</div>

When I have a version that's ready to fly, I have a deploy script (shell script) that copies the relevant files from the build folder into the website folder, which is usually a {project}.Web folder next to the {project}.Frontend folder.

For the frontend setup there's also a repo to checkout, which includes the tiny component framework I use for listing the components on a single page with a clickable Table of Contents - it's called componentize and has its own repo as well.

I'd like to thank you for reading this - and by all means, feel free to bug me about any of this – I'm @greystate on Twitter.

End Credits

Nesting Contently is created by Nathan Woulfe, Embedded Content Blocks is by Rasmus John Pedersen

Also in Issue No 61: Building a Webshop in Vendr by Gareth Wright & Paul Marden
https://skrift.io/issues/frontend-fumbraco/
Record response_time when Mouse is being moved not clicked

Hi everyone,

I'm creating an experiment in which geometrical forms will be presented on different pictures with different location. The subject task will be to press a key on the keyboard when a square is presented and to click on the target when a circle is presented.

For the mouse condition, I want to record Time_response when the target is being clicked but also when the mouse is being moved (when the motor program is being initiated) so the response-time won't be dependant of the distance between mouse and target.

So, I set the Mouse pos:

souris = Mouse()
souris.set_pos(pos=(0,0))

I want to get the time response when the pos becomes different from (0,0). I hope I'm clear.. Thank u for responses !

Best Regards

Hi, if you use the mousetrap_response item to collect the mouse response, you can use it to reset the mouse to the specified position. It also automatically computes the initiation_time variable, which is the time until any movement was initiated (which is I think the time you are interested in).

Best, Pascal

Hi Pascal, thank u for your quick answer. I don't have mousetrap and I decided to install it. I followed the instructions and I wrote in the debug window:

import pip
pip.main(['install', ''])

But I can't install it because of my old version of pip I guess:

You are using pip version 9.0.1, however version 19.0.3 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.

But when I tap the command:

python -m pip install --upgrade pip

I have this message:

NameError: name 'python' is not defined

What should I do ? Thank u

How can I upgrade my pip from OpenSesame ?
If not, could you post the complete message you get after running the commands? import pip pip.main(['install', '']) Collecting Downloading Exception: Traceback (most recent call last): File "C:\Program Files (x86)\OpenSesame\lib\site-packages\pip\basecommand.py", line 215, in main status = self.run(options, args) File "C:\Program Files (x86)\OpenSesame\lib\site-packages\pip\commands\install.py", line 335, in run wb.build(autobuilding=True) File "C:\Program Files (x86)\OpenSesame\lib\site-packages\pip\wheel.py", line 749, in build self.requirement_set.prepare_files(self.finder) File "C:\Program Files (x86)\OpenSesame\lib\site-packages\pip\req\req_set.py", line 380, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "C:\Program Files (x86)\OpenSesame\lib\site-packages\pip\req\req_set.py", line 634, in _prepare_file abstract_dist.prep_for_dist() File "C:\Program Files (x86)\OpenSesame\lib\site-packages\pip\req\req_set.py", line 129, in prep_for_dist self.req_to_install.run_egg_info() File "C:\Program Files (x86)\OpenSesame\lib\site-packages\pip\req\req_install.py", line 439, in run_egg_info command_desc='python setup.py egg_info') File "C:\Program Files (x86)\OpenSesame\lib\site-packages\pip\utils\__init__.py", line 676, in call_subprocess line = console_to_str(proc.stdout.readline()) File "C:\Program Files (x86)\OpenSesame\lib\site-packages\pip\compat\__init__.py", line 73, in console_to_str return s.decode(sys.__stdout__.encoding) AttributeError: 'NoneType' object has no attribute 'encoding' You are using pip version 9.0.1, however version 19.0.3 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. Hmm this is weird. I just checked the installation procedure using OpenSesame 3.2.7 under Windows 7 and Windows 10 and it worked for me. 
I run OpenSesame as an administrator and then paste the following two lines in the debug window The resulting message is: Which version of OpenSesame and which operating system are you using? Indeed it's weird.. I'm using Opensesame 3.2.7 (python 3.6.4) as well and my Os is windows 10 64bits. I already tried to reinstall OpenSesame but it still doesn't work. Update ! I installed the 32 bits version and.. It works ! Don't know why Thank u for your help, I'll come back to you if I have questions about the use of mousetrack Best regards !
Getting Started with ASP.NET MVC 1.0
The Essential ASP.NET MVC Cheat Sheet
By Simone Chiaretta and Keyvan Nayyeri
Refcard 69 of 202

Introduction

ASP.NET MVC is a new framework for building Web applications developed by Microsoft. The traditional WebForm abstraction, designed in 2000 to bring a "desktop-like" development experience to the Web, was found to sometimes get in the way and to prevent a proper separation of concerns, which made applications difficult to test. A new, alternative framework was therefore built to address the changing requirements of developers, with testability, extensibility and freedom in mind.

This Refcard will first explain how to set up your environment to work with ASP.NET MVC and how to create an ASP.NET MVC Web application. Then it will go deeper into detail, explaining the various components of the framework and showing the structure of the main API. Finally, it will show a sample of the standard operations that developers can do with ASP.NET MVC.

Prerequisites

ASP.NET MVC is a new framework, but it is based on the core ASP.NET API: in order to understand and use it, you have to know the basic concepts of ASP.NET. Furthermore, since it doesn't abstract away the "Web" as the traditional WebForm paradigm does, you have to know HTML, CSS and JavaScript in order to take full advantage of the framework.

Installation

To develop a Web site with ASP.NET MVC, all you need is Visual Studio 2008 and the .NET Framework 3.5 SP1. If you are a hobbyist developer you can use Visual Web Developer 2008 Express Edition, which can be downloaded for free at the URL:. You also need to install the ASP.NET MVC library, which can be downloaded from the official ASP.NET Web site at.
You can also download everything you need, the IDE, the library, and also a free version of SQL Server (Express Edition), through the Web Platform Installer, available at:.

The MVC pattern

As you probably have already guessed from the name, the framework implements the Model View Controller (MVC) pattern. The UI layer of an application is made up of 3 components: the Model, the View, and the Controller.

The flow of an operation is depicted in the diagram:

- The request hits the Controller.
- The Controller delegates the execution of the "main" operation to the Model.
- The Model sends the results back to the Controller.
- The Controller formats the data and sends them to the View.
- The View takes the data, renders the HTML page, and sends it to the browser that requested it.

Build your first application

Starting to develop an ASP.NET MVC application is easy. From Visual Studio just use the "File > New Project" menu command and select the ASP.NET MVC Project template (as shown in the following figure). Type in the name of the project and press the "OK" button. It will ask you whether you want to create a test project (I suggest choosing Yes), then it will automatically create a stub ASP.NET MVC Web site with the correct folder structure that you can later customize for your needs. As you can see, the components of the application are well separated in different folders.

The Fundamentals of ASP.NET MVC

One of the main design principles of ASP.NET MVC is "convention over configuration", which allows components to fit nicely together based on their naming conventions and location inside the project structure. The following diagram shows how all the pieces of an ASP.NET MVC application fit together based on their naming conventions:

Routing

The routing engine is not part of the ASP.NET MVC framework, but is a general component introduced with .NET 3.5 SP1. It is the component that is first hit by a request coming from the browser.
Its purpose is to route all incoming requests to the correct handler and to extrapolate from the URL a set of data that will be used by the handler (which, in the case of an ASP.NET MVC Web application, is always the MvcHandler) to respond to the request. To accomplish its task, the routing engine must be configured with rules that tell it how to parse the URL and how to get data out of it. This configuration is specified inside the RegisterRoutes method of the Global.asax file, which is in the root of the ASP.NET MVC Web application.

public static void RegisterRoutes(RouteCollection routes)
{
    routes.MapRoute(
        "Default",                                           // Route Name
        "{controller}/{action}/{id}",                        // Route Format
        new { controller = "Home", action = "Index", id = "" } // Defaults
    );
}

The snippet above shows the default mapping rule for each ASP.NET MVC application: every URL is mapped to this route, and the first 3 parts are used to create the data dictionary sent to the handler. The last parameter contains the default values that must be used if some of the URL tokens cannot be populated. This is required because, based on the default convention, the data dictionary sent to the MvcHandler must always contain the controller and the action keys. Examples of other possible route rules:

Model

ASP.NET MVC, unlike other MVC-based frameworks like Ruby on Rails (RoR), doesn't enforce a convention for the Model. So in this framework the Model is just the name of the folder where you are supposed to place all the classes and objects used to interact with the Business Logic and the Data Access Layer. It can be whatever you prefer it to be: proxies for Web services, ADO.NET Entity Framework, NHibernate, or anything that returns the data you have to render through the views.

Controller

The controller is the first component of the MVC pattern that comes into action.
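To make the convention concrete, the "{controller}/{action}/{id}" rule with its defaults can be mimicked in a few lines of plain Python. This is only an illustration of the matching idea, not ASP.NET code; the function name and implementation are invented for the example.

```python
def match_route(url,
                pattern="{controller}/{action}/{id}",
                defaults=None):
    """Bind each URL segment to the corresponding token of the pattern,
    falling back to the route's default values for missing segments."""
    if defaults is None:
        defaults = {"controller": "Home", "action": "Index", "id": ""}
    tokens = [t.strip("{}") for t in pattern.split("/")]
    segments = [s for s in url.strip("/").split("/") if s]
    data = dict(defaults)
    for token, segment in zip(tokens, segments):
        data[token] = segment
    return data

# "/Page/Show/42" -> controller=Page, action=Show, id=42
# "/"             -> all defaults: controller=Home, action=Index, id=""
```

This is why the defaults matter: a request for "/" still yields a complete data dictionary containing the controller and action keys that the MvcHandler requires.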
A controller is simply a class that inherits from the Controller base class, whose name is the name of the controller followed by the "Controller" suffix, and which is located in the Controllers folder of the application folder structure. Using that naming convention, the framework automatically calls the specified controller based on the parameter extrapolated from the URL.

namespace MyMvcApp.Controllers
{
    public class PageController : Controller
    {
        // Controller contents.
    }
}

The real work, however, is not done by the class itself, but by the methods that live inside it. These are called Action Methods.

Action Method

An action method is nothing but a public method inside a Controller class. It usually returns a result of type ActionResult and accepts an arbitrary number of parameters that contain the data retrieved from the HTTP request. Here is what an action method looks like:

public ActionResult Show(int id)
{
    // Do stuff
    ViewData["myKey"] = myValue;
    return View();
}

The ViewData is a hash-table that is used to store the variables that need to be rendered by the view: this object is automatically passed to the view through the ActionResult object that is returned by the action. Alternatively, you can create your own view model and supply it to the view:

public ActionResult Show(int id)
{
    // Do stuff
    return View(myValue);
}

This second approach is better because it allows you to work with strongly-typed classes instead of hash-tables indexed with string values. This brings compile-time error checking and Intellisense. Once you have populated the ViewData or your own custom view model with the data needed, you have to instruct the framework on how to send the response back to the client. This is done with the return value of the action, which is an object that is a subclass of ActionResult. There are various types of ActionResult, each with its specific way to return it from the action.
Model Binder

Using the ActionResults and the ViewData object (or your custom view model), you can pass data from the action to the view. But how can you pass data from the view (or from the URL) to the action? This is done through the ModelBinder. It is a component that retrieves values from the request (URL parameters, query string parameters, and form fields) and converts them to action method parameters. As with everything in ASP.NET MVC, it is driven by conventions: if the action takes an input parameter named Title, the default Model Binder will look for a variable named Title in the URL parameters, in the query string, and among the values supplied as form fields. But the Model Binder works not only with simple values (strings and numbers), but also with composite types, like your own objects (for example the ubiquitous User object). In this scenario, when the Model Binder sees that an object is composed of other sub-objects, it looks for variables whose names match the names of the properties of the custom type. Here it's worth taking a look at a diagram to make things clear:

View

The next and last component is the view. When using the default ViewEngine (which is the WebFormViewEngine), a view is just an aspx file without code-behind and with a different base class. Views that are going to render data passed only through the ViewData dictionary have to start with the following Page directive:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>

If the view is also going to render the data that has been passed via the custom view model, the Page directive is a bit different, and it also specifies the type of the view model:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<PageViewModel>" %>

You might have noticed that, as with all normal aspx files, you can include a view inside a master page.
But unlike traditional Web Forms, you cannot use user controls to write your HTML markup: you have to write everything manually. However, this is not entirely true: the framework comes with a set of helper methods to assist with the process of writing HTML markup. You'll see more in the next section. Another thing you have to handle by yourself is the state of the application: there is no ViewState and no Postback.

HTML helper

You probably don't want to go back to writing HTML manually, and neither does Microsoft want you to do it. Not only to help you write HTML markup, but also to help you easily bind the data passed from the controller to the view, the ASP.NET MVC Framework comes with a set of helper methods collectively called HtmlHelpers. They are all methods attached to the Html property of the ViewPage. For example, if you want to write the HTML markup for a textbox you just need to write:

<%= Html.TextBox("propertyName") %>

This renders an HTML input text tag and uses the value of the specified property as the value of the textbox. When looking for the value to write in the textbox, the helper takes into account both of the possibilities for sending data to a view: it first looks inside the ViewData hash-table for a key with the name specified, and then looks inside the custom view model for a property with the given name. This way you don't have to bother assigning values to input fields, and this can be a big productivity boost, especially if you have big views with many fields.
Let's see the HtmlHelpers that you can use in your views:

As an alternative to writing the Html.BeginForm and Html.CloseForm methods, you can write an HTML form by including all its elements inside a using block:

<% using(Html.BeginForm("Save")) { %>
    <!-- all form elements here -->
<% } %>

To give you a better idea of what a view that includes an editing form looks like, here is a sample of a complete view for editing an address book element:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<EditContactViewModel>" %>
<% using(Html.BeginForm("Save")) { %>
    Name: <%= Html.TextBox("Name") %> <br/>
    Surname: <%= Html.TextBox("Surname") %> <br/>
    Email: <%= Html.TextBox("Email") %> <br/>
    Note: <%= Html.TextArea("Notes", 80, 7, null) %> <br/>
    Private <%= Html.CheckBox("IsPrivate") %> <br/>
    <input type="submit" value="Save">
<% } %>

T4 Templates

But there is more: bundled with Visual Studio there is a template engine (named T4, as in Text Template Transformation Toolkit) that helps automatically generate the HTML of your views based on the view model that you want to pass to the view. The "Add View" dialog allows you to choose with which template, and based on which class, you want the views to be generated.

What these templates do is mainly iterate over all the properties of the view model class and generate the same code you would probably have written yourself, using the HtmlHelper methods for the input fields and the validation messages. For example, if you have a view model class with two properties, Title and Description, and you choose the Edit template, the resulting view will be:

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<IssueTracking.Models.Issue>" %>
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    Edit
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2>Edit</h2>
    <%= Html.ValidationSummary("Edit was unsuccessful. Please correct the errors and try again.") %>
    <% using (Html.BeginForm()) {%>
        <fieldset>
            <legend>Fields</legend>
            <p>
                <label for="Title">Title:</label>
                <%= Html.TextBox("Title", Model.Title) %>
                <%= Html.ValidationMessage("Title", "*") %>
            </p>
            <p>
                <label for="Description">Description:</label>
                <%= Html.TextArea("Description", Model.Description, 7, 50, null) %>
                <%= Html.ValidationMessage("Description", "*") %>
            </p>
            <p>
                <input type="submit" value="Save" />
            </p>
        </fieldset>
    <% } %>
    <div>
        <%= Html.ActionLink("Back to List", "Index") %>
    </div>
</asp:Content>

Ajax

The last part of ASP.NET MVC that is important to understand is AJAX. But it's also one of the easiest aspects of the framework. First, you have to include the script references at the top of the page where you want to enable AJAX (or in a master page if you want to enable it for the whole site):

<script src="/Scripts/MicrosoftAjax.js" type="text/javascript"></script>
<script src="/Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>

Then you can use the only 2 methods available in the AjaxHelper: ActionLink and BeginForm. They do the exact same thing as their HtmlHelper counterparts, just asynchronously and without reloading the page. To make the AJAX features possible, a new parameter is added to configure how the request and the result should be handled.
It's called AjaxOptions, and it is a class with the following properties:

For example, here is a short snippet of code that shows how to update a list of items using the AJAX flavor of the BeginForm method:

<ul id="types">
<% foreach (var item in Model) { %>
    <li><%= item.Name %></li>
<% } %>
</ul>
<% using(Ajax.BeginForm("Add", "IssueTypes", new AjaxOptions()
   {
       InsertionMode = InsertionMode.InsertAfter,
       UpdateTargetId = "types",
       OnSuccess = "myJsFunc"
   })) { %>
    Type Name: <%= Html.TextBox("Name") %>
    <input type="submit" value="Add type" />
<% } %>

The AJAX call will be sent to the Add action inside the IssueTypes controller. Once the request is successful, the result sent by the controller will be added after all the list items that are inside the types element, and then myJsFunc will be executed. But what the ASP.NET MVC library does is just enable these two methods: if you want more complex interactions you have to use either the AJAX in ASP.NET library or jQuery, which ships as part of the ASP.NET MVC library. If you want to use the AJAX in ASP.NET library, you don't have to do anything, because you already referenced it in order to use the BeginForm method, but if you want to use jQuery, you have to reference it as well:

<script src="/Scripts/jquery-1.3.2.js" type="text/javascript"></script>

One benefit of having the jQuery library as part of the ASP.NET MVC project template is that you gain full Intellisense support. But there is an extra step to enable it: you have to reference the jQuery script both with the absolute URL (as above), needed by the application, and with a relative URL, which is needed by the Intellisense resolution engine.
So, in the end, if you want to use jQuery and enable Intellisense on it, you have to add the following snippet:

<script src="/Scripts/jquery-1.3.2.js" type="text/javascript"></script>
<% if(false) { %>
<script src="../../Scripts/jquery-1.3.2.js" type="text/javascript"></script>
<% } %>
How to Write Pr2 Props

Description: This is a step-by-step guide to writing one of the basic Android Apps that runs on the Pr2. This tutorial will demonstrate how to make the Pr2 Props App. You should read the previous tutorial linked above, but if it didn't all make sense then that's okay. This tutorial does assume that the Android development environment described at the beginning of that article has been set up.

Tutorial Level: BEGINNER

WARNING: This documentation refers to an outdated version of rosjava and is probably incorrect. Use at your own risk.

Android Client

Creating the Package

If you roscd to the existing android_pr2_props package you'll see a lot of different files. Some of those files will get generated for us. We can follow some of the same instructions from the previous tutorial. Go to your ROS_DIR (where you installed your ROS android toolchain). Create a directory called android_pr2_props2 and navigate to it.

mkdir android_pr2_props2
cd android_pr2_props2

We can use the android_create tool to make some of the autogenerated files. The parameters are fairly complex. For a full description see the previous tutorial. Here is an example of what we'll put for our Props App.

rosrun appmanandroid android_create --create Pr2Props2 ros.android.pr2props2 Pr2Props2 icon "Pr2 Props2" pr2_props2 pr2_props2_app/pr2_props2

After running the script, you should see a bunch of new files. To create our icon file, we can just borrow the ROS icon for now. Or you can use your own.

cp `rospack find android_gingerbread`/res/drawable-hdpi/icon.png res/drawable/icon.png

Then add the application to your tool chain install:

rosinstall ROS_DIR .
source ROS_DIR/setup.bash

If this produces any errors, just read the error message and it should indicate how to fix it. Remember to tailor the solution to your version of ROS. Next build the application.

rosmake --threads=1

Filling in the Activity Class

Now we can actually start to make the application do something.
Props is only a single Android activity. We can find the source for the activity if we roscd to the package and go to src/ros/android/pr2props/Pr2Props2.java.

roscd android_pr2_props2
cd src/ros/android/pr2props
vi Pr2Props2.java

Once you've opened up the activity file in the editor of your choice, you can see that it's mostly empty. Let's start to fill that in. If we take a look at the onCreate() method, we'll notice that most initialization has been done for us. We can add a line to set the robot's height to 0.0 when it starts up. We'll make it a global variable and declare it at the beginning of our activity.

public class Pr2Props2 extends RosAppActivity {

  private double spineHeight;

  /** Called when the activity is first created. */
  @Override
  public void onCreate(Bundle savedInstanceState) {
    setDefaultAppName("pr2_props2_app/pr2_props2");
    setDashboardResource(R.id.top_bar);
    setMainWindowResource(R.layout.main);
    spineHeight = 0.0;
    super.onCreate(savedInstanceState);
  }

Next we can look at onNodeCreate(), which has everything in it that happens when your node is created. Here you want to create your publisher to publish messages to the spine of the robot and move the torso up/down. If you are familiar with the Props App you probably know that there's more to it than just moving the torso of the robot. The other parts are controlled by services, so they'll get taken care of later. This also creates an interesting behavior: if for some reason the .launch file doesn't get launched on the robot, the torso can still move up and down, because it's just relying on messages published to a topic.
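To picture what gets published to the spine: each cycle sends a trajectory_msgs/JointTrajectory with a single point for the torso joint. Below is a rough Python stand-in for that structure, using plain dicts instead of real ROS message types and a recording publisher instead of a real topic; it is purely illustrative, not rosjava or rospy code.

```python
class StubPublisher:
    """Records messages instead of sending them to a real ROS topic."""
    def __init__(self):
        self.published = []

    def publish(self, message):
        self.published.append(message)

def make_spine_message(height):
    """Dict stand-in for the JointTrajectory the activity publishes:
    one point that moves torso_lift_joint to `height` over 0.25 s."""
    return {
        "joint_names": ["torso_lift_joint"],
        "points": [{
            "positions": [height],
            "velocities": [0.1],
            "time_from_start": 0.25,
        }],
    }

# the activity republishes the current target height every 200 ms
pub = StubPublisher()
for _ in range(3):
    pub.publish(make_spine_message(0.31))
```

Because the target height is resent continuously, changing a single variable (spineHeight) is enough to move the torso, which is exactly what the raise/lower buttons will do later.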
Your onNodeCreate() might look something like this now:

@Override
protected void onNodeCreate(Node node) {
  super.onNodeCreate(node);
  spinePub = node.newPublisher("torso_controller/command", "trajectory_msgs/JointTrajectory");
  spineThread = new Thread(new Runnable() {
    @Override
    public void run() {
      JointTrajectory spineMessage = new JointTrajectory();
      spineMessage.points = new ArrayList<JointTrajectoryPoint>();
      spineMessage.joint_names = new ArrayList<String>();
      spineMessage.joint_names.add("torso_lift_joint");
      JointTrajectoryPoint p = new JointTrajectoryPoint();
      p.positions = new double[] { 0.0 };
      p.velocities = new double[] { 0.1 };
      p.time_from_start = new Duration(0.25);
      spineMessage.points.add(p);
      try {
        while (true) {
          spineMessage.points.get(0).positions[0] = spineHeight;
          spinePub.publish(spineMessage);
          Thread.sleep(200L);
        }
      } catch (InterruptedException e) {
      }
    }
  });
  spineThread.start();
}

Basically this just creates a publisher that publishes messages of the type trajectory_msgs/JointTrajectory to the torso_controller/command topic. As you can see, the actual construction and publishing of the messages is done in a separate thread. Make sure to declare any undeclared global variables. Your global variables list should look like this:

onNodeDestroy() is what happens when the node gets shut down. You want to shut down your spineThread and your publisher, spinePub:

@Override
protected void onNodeDestroy(Node node) {
  super.onNodeDestroy(node);
  final Thread thread = spineThread;
  if (thread != null) {
    spineThread.interrupt();
  }
  spineThread = null;
  final Publisher pub = spinePub;
  if (pub != null) {
    pub.shutdown();
  }
  spinePub = null;
}

At the bottom you'll see code for handling the options menu. This can be left as is, or you can change it as necessary. Now it's time to deal with the services. How will we make the robot give high fives?
What we do is make a method called runService which does all the work of actually calling the services. It takes a string for the service name and then creates a service client to send messages to the service node on the robot side. In this case it actually just sends empty messages and gets empty responses. Our runService() method might look something like this:

private void runService(String service) {
  Log.i("Pr2Props2", "Run: " + service);
  try {
    ServiceClient<Empty.Request, Empty.Response> appServiceClient =
        getNode().newServiceClient(service, "std_srvs/Empty");
    Empty.Request appRequest = new Empty.Request();
    appServiceClient.call(appRequest, new ServiceResponseListener<Empty.Response>() {
      @Override public void onSuccess(Empty.Response message) {
      }

      @Override public void onFailure(RemoteException e) {
        // TODO: should error
        Log.e("Pr2Props2", e.toString());
      }
    });
  } catch (Exception e) {
    // TODO: should error
    Log.e("Pr2Props2", e.toString());
  }
}

So that was how the client is going to make the requests, but where do the requests get made? In the Props App, the user hits a button to trigger each of the actions: high five left, props right, raising the torso, etc. We'll briefly see what the buttons look like in the layout XML later, but right now we can just make a bunch of callbacks that use runService() when buttons are clicked.
For example:

public void highFiveLeft(View view) {
  runService("/pr2_props/high_five_left");
}
public void highFiveRight(View view) {
  runService("/pr2_props/high_five_right");
}
public void highFiveDouble(View view) {
  runService("/pr2_props/high_five_double");
}
public void lowFiveLeft(View view) {
  runService("/pr2_props/low_five_left");
}
public void lowFiveRight(View view) {
  runService("/pr2_props/low_five_right");
}
public void poundLeft(View view) {
  runService("/pr2_props/pound_left");
}
public void poundRight(View view) {
  runService("/pr2_props/pound_right");
}
public void poundDouble(View view) {
  runService("/pr2_props/pound_double");
}
public void hug(View view) {
  runService("/pr2_props/hug");
}
public void raiseSpine(View view) {
  spineHeight = 0.31;
}
public void lowerSpine(View view) {
  spineHeight = 0.0;
}

The last two methods there aren't actually service calls. Those are just setting the spine height. The spine publisher thread will pick up on that and publish the corresponding messages. Finally, you should make sure you're importing all the right things. If you try to build it, you'll probably find out what's missing.
Just in case though, these are the import statements from the original Props App, and we can just steal those:

import org.ros.exception.RemoteException;
import ros.android.activity.AppManager;
import ros.android.activity.RosAppActivity;
import android.os.Bundle;
import org.ros.node.Node;
import android.view.Window;
import android.view.WindowManager;
import android.util.Log;
import org.ros.node.service.ServiceClient;
import org.ros.node.topic.Publisher;
import org.ros.service.app_manager.StartApp;
import org.ros.node.service.ServiceResponseListener;
import android.widget.Toast;
import android.view.Menu;
import android.view.View;
import android.view.MenuInflater;
import android.view.MenuItem;
import android.widget.LinearLayout;
import org.ros.service.std_srvs.Empty;
import org.ros.message.trajectory_msgs.JointTrajectory;
import org.ros.message.trajectory_msgs.JointTrajectoryPoint;
import java.util.ArrayList;
import org.ros.message.Duration;

Congratulations! That's pretty much all the Java code you'll have to write for this app! If you want, you can skip down to the part where we write the corresponding robot-side code and give it a quick read before finishing up the Android side. It might make more sense if done that way.

Layout XML

If you navigate to the root of your package and then to res/layout, you will find the layout XML in main.xml. If you're familiar with Android layouts then this should be easy and you can definitely skip this part of the tutorial. There are a lot of different ways to accomplish the same thing with layouts, so the exact implementation can be somewhat arbitrary. Let's consider what we want to accomplish. We want to have a button for each action that we defined in our activity. It's a lot of buttons. We should probably group the similar buttons together under headings (all the high fives together, all the props together, changing torso height, etc).
We should make the view scroll in case the app will be run on a device where not all the content fits on the screen at once. To do that we can use a LinearLayout as the main layout. Then, we should have another LinearLayout inside that one, which will be the top_bar and have the dashboard components that you might have seen in the ROS Android apps. This shows basic status information about the robot (battery, run-stop status). The top_bar layout does not contain any other layouts. Our main layout should contain a ScrollView. The ScrollView's child can be another LinearLayout containing the buttons grouped under TextViews for headings. You can implement this yourself. The only trick with the buttons is that you have to make sure to define the names of the onClick methods for your buttons in the XML, since we didn't declare them in the activity. If you have any trouble you can take a look at the actual Props implementation at:

roscd android_pr2_props
vi res/layout/main.xml

If you write a couple of button descriptions and decide that it's too much typing for the moment, you can actually just copy the main.xml from the original Props App. We don't need to change it.

cp `rospack find android_pr2_props`/res/layout/main.xml res/layout

Update Manifest

We need to make some minor changes to our manifest.xml as well. It can be found in the root of our package. Right now we depend only on the appmanandroid library. With the latest version of rosjava we should also explicitly depend on std_msgs and trajectory_msgs. Just add these two lines after the other package dependency:

<depend package="std_msgs"/>
<depend package="trajectory_msgs"/>

And that's it. Since we changed the manifest.xml we have to rosmake it again. If you make changes that do not change the manifest.xml you can use ant instead.

rosmake --threads=1

If you want to iteratively make changes to the app but you're not changing the manifest.xml, then the best way to do it is to use ant. The following commands will build your project and clean it.
ant
ant clean

If you've made changes then it's important to clean the project before installing the app on a device, because depending on which files have changed, ant might not recreate the class files appropriately. To install it on your device you can use:

ant debug install

If this fails then it may be because your computer does not recognize your Android device. That is outside the scope of this tutorial, but you are encouraged to Google vigorously.

Robot-side Application

Writing the Python Script

The robot-side of the application must have a stack. In this case we're not going to release a whole new stack for our application. Since we're just testing, we'll log (ssh) into the robot as the user 'applications' and put our stack under the ROS install directory. If this is a PR2 then it may be a directory called 'ros' in the applications user home directory:

cd ros
mkdir pr2_props2_app

The main part of the app here is a Python script. Inside pr2_props2_app we can make a directory called 'scripts' and inside make a file called prop_runner with the editor of your choice.

cd pr2_props2_app
mkdir scripts
cd scripts
vi prop_runner

The start of our Python script will look like this:

We're importing and using roslib only for bootstrapping reasons. It is appropriate to use rospy for most of your ROS Python needs. We then import rospy and os. We import os because we're actually going to use os.system to rosrun scripts to do the positioning for us. All we're going to do is queue a bunch of requests to run scripts and then send them to the system to run. We import everything from std_srvs.srv to allow us to respond to service requests. Also, if you don't usually write anything in Python, remember Python is whitespace delimited!

Let's start by making our QueueItem class. This is just an object to hold on to the command we're going to have the system execute and let us know when it's done. Let's also create an item and put it into a queue.
Next, as a way for things to get added to our queue, we'll make run_command:

run_command will create QueueItems out of the commands you want to run and add them to the queue. Now we have to actually figure out the commands that we want run. If you navigate to /opt/ros/electric/stacks/pr2_props_stack/pr2_props/src you'll see a couple of .cpp files that basically just run a few different actions. That's what we want to run when the user presses a button in the app. We can use the rosrun command for that. Let's make methods for each of the buttons in the app that can be pushed (except for the buttons to raise and lower the torso; those buttons have no robot-side code because they publish messages straight to topics that the spine subscribes to). Each method should queue a command to rosrun the appropriate script out of the ones we saw earlier:

def high_five_double(msg):
    run_command("rosrun pr2_props high_five double")
    return EmptyResponse()

def high_five_left(msg):
    run_command("rosrun pr2_props high_five left")
    return EmptyResponse()

def high_five_right(msg):
    run_command("rosrun pr2_props high_five right")
    return EmptyResponse()

def low_five_left(msg):
    run_command("rosrun pr2_props low_five left")
    return EmptyResponse()

def low_five_right(msg):
    run_command("rosrun pr2_props low_five right")
    return EmptyResponse()

def pound_double(msg):
    run_command("rosrun pr2_props pound double")
    return EmptyResponse()

def pound_left(msg):
    run_command("rosrun pr2_props pound left")
    return EmptyResponse()

def pound_right(msg):
    run_command("rosrun pr2_props pound right")
    return EmptyResponse()

def hug(msg):
    run_command("rosrun pr2_props hug")
    return EmptyResponse()

You'll notice that they take in a msg. These methods are the handlers that rospy.Service() will make callbacks to in the main function. rospy.Service() receives messages from the service client we created on the Android side.
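The QueueItem class, run_command(), and the queue-processing loop are elided from this copy of the tutorial. Based on the description above, the queuing mechanism might be sketched like this; the implementation details and the process_queue helper are guesses, and a recording function stands in for the os.system calls the real script makes:

```python
class QueueItem:
    """Holds one shell command and whether it has finished running."""
    def __init__(self, command):
        self.command = command
        self.done = False

queue = []

def run_command(command):
    """Wrap a command in a QueueItem and append it to the queue."""
    item = QueueItem(command)
    queue.append(item)
    return item

def process_queue(execute):
    """Run queued commands oldest-first, marking each one done."""
    while queue:
        item = queue.pop(0)
        execute(item.command)  # the real script would use os.system here
        item.done = True

# illustration: queue two props, then "run" them with a recording executor
ran = []
run_command("rosrun pr2_props hug")
run_command("rosrun pr2_props high_five left")
process_queue(ran.append)
```

In the real node, the processing loop would also keep running while the node is not shut down (checking rospy.is_shutdown()) and sleep briefly when the queue is empty, which is what makes rapid button presses stack up into back-to-back actions.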
Now it's time to use this stuff in the main function. We should create a node, which we can call 'pr2_props2_app', and then make the callbacks to our handlers that we just wrote.

if __name__ == "__main__":
    rospy.init_node("pr2_props2_app")
    s1 = rospy.Service('pr2_props/high_five_double', Empty, high_five_double)
    s2 = rospy.Service('pr2_props/high_five_left', Empty, high_five_left)
    s3 = rospy.Service('pr2_props/high_five_right', Empty, high_five_right)
    s4 = rospy.Service('pr2_props/low_five_right', Empty, low_five_right)
    s5 = rospy.Service('pr2_props/low_five_left', Empty, low_five_left)
    s6 = rospy.Service('pr2_props/pound_double', Empty, pound_double)
    s7 = rospy.Service('pr2_props/pound_left', Empty, pound_left)
    s8 = rospy.Service('pr2_props/pound_right', Empty, pound_right)
    s9 = rospy.Service('pr2_props/hug', Empty, hug)

You'll notice that on the Android side we sent empty messages and here we're returning empty responses. This is because it's not necessary to get any extra content from the message. The fact that it was sent/executed is sufficient. The last thing we need is to actually read from the queue we created and have the system run those 'rosrun pr2_props ...' commands. We can make a loop that goes through the queue while the node is not shut down, executes the goals, and removes them from the queue. This will create the behavior that when you press buttons more quickly than the actions can execute, your requests get queued. So if you were to press 'hug' ten times then the robot would sit there for about 3 minutes and give all ten hugs in sequence. That's really it for the Python code, even though we cheated and used those .cpp scripts. For the next few sections we'll be basically following the steps from this tutorial: ApplicationsPlatform/CreatingAnApp

Launch File

Next we need to write a launch file for the application. We will place it in a directory called 'launch' and call it 'pr2_props2_app.launch'.
roscd pr2_props2_app
mkdir launch
vi pr2_props2_app.launch

The launch file will help launch the correct nodes when the application is started. The ROS Application Chooser (which you will want to download from the Android Market if you are going to be running any ROS Android applications on an Android device) will use these launch files when you start apps from inside the App Chooser. If you do not start your ROS app from inside the App Chooser then you will have to roslaunch the launch file yourself from your computer. For information on the format of the launch files see roslaunch/XML. Your launch file should look something like this:

<launch>
  <include file="$(find pr2_props)/launch/pr2_props.launch" />
  <node pkg="pr2_props2_app" type="prop_runner" name="pr2_props2_app" />
  <node pkg="pr2_position_scripts" type="head_up.py" name="head_up" />
</launch>

What we're doing is including another .launch file. It's actually in pr2_props_stack. You can roscd to pr2_props_stack and you'll see the package that we're searching for, pr2_props. Inside is the launch file we want to include. Then we also create a node for what's running in our Python script, and also for the position scripts that make the robot face forward when we start up the app.

Interface File

The interface file is a file that is essentially blank for now. In the future it will be more important. It should look like this, be named 'pr2_props2_app.interface', and be in the root of the package:

published_topics: {}
subscribed_topics: {}

Icon

For now we're actually just going to steal the icon from the original Props app. You should definitely get your own eventually when you make real apps, but that's up to you. Go to the root of the package and type:

cp `rospack find pr2_props_app`/pr2props.jpg pr2props2.jpg

App File

The .app file is what the app manager uses to find out about your application.
Ours will look like this:

display: Props2
description: Run PR2 Props
platform: pr2
launch: pr2_props2_app/pr2_props2_app.launch
interface: pr2_props2_app/pr2_props2_app.interface
icon: pr2_props2_app/pr2props2.jpg
clients:
  - type: android
    manager:
      api-level: 9
      intent-action: ros.android.pr2props2.Pr2Props2
    app:
      gravityMode: 0
      camera_topic: /wide_stereo/left/image_color/compressed_throttle

Most of that is pretty straightforward. One thing to be aware of is that the path names are all ROS path names, so they just have the package_name/file_name no matter how many directories down in the package the file is.

Installing App

We have to add your package/unary stack to the .rosinstall file. Make sure you're in the ROS install directory. Then add the following line to the .rosinstall file:

- other: {local-name: pr2_props2_app}

After you save and close:

rosinstall .

Now add:

echo "Sourcing /u/applications/ros/setup.bash"
. /u/applications/ros/setup.bash

to the .bashrc in the home directory of the applications user. Now we have to make a .installed file. Go to the local_apps directory (should be located under the home directory). We will name the file pr2_props2_app.installed and it will contain the following:

apps:
  - app: pr2_props2_app/pr2_props2_app
    display: Pr2 Props2 App

It's actually pointing to the .app file. This is hard to tell since we named everything 'pr2_props2_app'. But we don't include the .app extension because it gets automatically added.

Loose Ends: Makefile, stack.xml, manifest.xml, etc.

There are a few more things to take care of before we can actually run the app. Because we just created this stack now, it's missing some important things that it should have. We need a Makefile and CMakeLists.txt. If you want some background information on making those files you can look here: rospy_tutorials/Tutorials/Makefile. We could have used roscreate-pkg to create our package at the start, and that would have generated a template of these files for us.
However, since they're each only a few lines long, we can make them ourselves this time. First let's roscd to our package. Our Makefile only has to be one line:

include $(shell rospack find mk)/cmake_stack.mk

And our CMakeLists.txt looks like this inside:

cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
rosbuild_make_distribution(0.1.0)

Alternatively, you can also just copy the same files from the original Props:

cp `rospack find pr2_props_app`/Makefile Makefile
cp `rospack find pr2_props_app`/CMakeLists.txt CMakeLists.txt

We also need to make a manifest.xml. It's pretty standard in terms of dependencies and should look like this:

<package>
  <description brief="PR2 Props2 App">
    Application files for running PR2 props
  </description>
  <author>You</author>
  <license>BSD</license>
  <url></url>
  <review status="na" notes="" />
  <depend package="roslib" />
  <depend package="rospy" />
  <depend package="pr2_props" />
  <depend package="pr2_position_scripts" />
  <depend package="std_srvs" />
  <platform os="ubuntu" version="9.04"/>
  <platform os="ubuntu" version="9.10"/>
  <platform os="ubuntu" version="10.04"/>
</package>

We also need a stack description in the form of the stack.xml:

<stack>
  <description brief="pr2_props2_app">pr2_props_app</description>
  <author>Maintained by Applications Manager</author>
  <license>BSD</license>
  <review status="unreviewed" notes=""/>
  <url></url>
  <depend stack="pr2_apps" /> <!-- pr2_position_scripts -->
  <depend stack="pr2_props_stack" /> <!-- pr2_props -->
  <depend stack="ros" /> <!-- roslib -->
  <depend stack="ros_comm" /> <!-- std_srvs, rospy -->
</stack>

Now we're done. Almost. We need to put a ROS_NOBUILD file in the root of the package/unary stack so that rosmake skips it. This file does not have any real content. We can just copy it from the original Props stack.

cp `rospack find pr2_props_app`/ROS_NOBUILD ROS_NOBUILD

Done!
Deactivate and restart your robot (from the ROS Application Chooser you can push the "Deactivate" button). Once you reconnect, you should see your application listed in the Application Chooser. If you see no applications listed, this means that your application's formatting is invalid and it has caused errors. If you do not see your application listed at all, this means that you have skipped a step or failed to restart the app manager. If there is an error, deactivate your robot and find the latest log in the ~/.ros directory of the applications user. The *app_manager* files should tell you a bit about what happened. If you see your application, click it to start it. You should see the application highlight and see your ROS nodes running, just as if you launched the roslaunch file manually. You should run your applications through the Application Chooser because it will roslaunch the appropriate nodes for you. If you do not go through the App Chooser and instead just try to run the application by itself, you will have to manually roslaunch the .launch file for your application, probably from your computer.
https://wiki.ros.org/ApplicationsPlatform/Clients/Android/Tutorials/HowToWritePr2Props
Whether you're providing e-mail for just system daemons, a single server, a domain, or for many virtual domains, netqmail can easily be setup to handle your needs. This guide will help you setup netqmail for all of these scenarios with a focus on remote access and encrypted communications the whole way through. Specifically, the packages this guide will help you with are netqmail, courier-imap, vpopmail, and horde/imp. These core packages will also bring in daemontools, ucspi-tcp, mysql, apache, and mod_php. netqmail provides the core mta functions, courier-imap provides remote retrieval services, vpopmail provides virtual domain management, and horde/imp provides webmail access. Before emerging anything, you will need the following USE variables enabled. If you've already emerged any of these packages, you may have to re-emerge them. The last step of course is to commit yourself to the netqmail system. There are many other packages with which you could build your e-mail system. Now is the time to research and decide that netqmail is for you. We have another # emerge mail-mta/netqmail Emerging netqmail will also emerge ucspi-tcp and daemontools. You can read up on First we have a few post-install configuration steps. (Customize to fit your personal information)# nano /var/qmail/control/servercert.cnf # emerge --config netqmail The design of netqmail has been completely around the focus of security. To this end, e-mail is never sent to the user 'root'. So now you have to select a user on your machine to receive mail that would normally be destined for 'root'. From now on in this guide, I will refer to that user as I have it in my setup, 'vapier'. # cd /var/qmail/alias # echo vapier > .qmail-root # echo vapier > .qmail-postmaster # echo vapier > .qmail-mailer-daemon Now we want to get the netqmail delivery service up and running. 
# rc-update add svscan default # /etc/init.d/svscan start # cd /service # ln -s /var/qmail/supervise/qmail-send qmail-send We want to make sure netqmail is working correctly, so here's a quick test. # ssh vapier@localhost # maildirmake .maildir # qmail-inject root << EOF test root e-mail! EOF # qmail-inject postmaster << EOF test postmaster e-mail! EOF # qmail-inject vapier << EOF test vapier e-mail! EOF # mutt (You should now have 3 e-mails in your inbox) And that's all! Now you have a mail system that will handle mail for your local machine and the system daemons/users who utilize it. # hostname --fqdn wh0rd.org # cat me wh0rd.org # cat defaultdomain wh0rd.org # cat plusdomain wh0rd.org # cat locals wh0rd.org # cat rcpthosts wh0rd.org # hostname --fqdn mail.wh0rd.org # cat me mail.wh0rd.org # cat defaultdomain wh0rd.org # cat plusdomain wh0rd.org # cat locals mail.wh0rd.org # cat rcpthosts mail.wh0rd.org # emerge vpopmail vpopmail takes a little bit more effort to setup than the previous packages. Since vpopmail runs off of mysql, we'll have to make sure that it's up and running first. Then we can setup the vpopmail database and move on. Before you do this step, you should make sure you've already emerged and setup mysql properly. Note that the password I will use for the vpopmail database is 'vpoppw', you however should pick a different one. 
# rc-update add mysql default If you just emerged mysql for the first time, make sure you run the ebuild <mysql.ebuild> config command and follow the directions before starting the mysql server.# /etc/init.d/mysql start # nano /etc/vpopmail.conf (Change the password from 'secret' to 'vpoppw')# mysql -p << EOF create database vpopmail; use mysql; grant select, insert, update, delete, create, drop on vpopmail.* to vpopmail@localhost identified by 'vpoppw'; flush privileges; EOF (The following steps may or may not be needed, but we run them just to be sure)# chown root:vpopmail /etc/vpopmail.conf # chmod 640 /etc/vpopmail.conf # chown root:vpopmail /var/vpopmail/bin/vchkpw # chmod 4711 /var/vpopmail/bin/vchkpw At this point in time, vpopmail is ready to roll. In this guide, we will be providing virtual hosting for the domain 'wh0rd.org'. This means we need to tell vpopmail about this domain we want it to host for us. We'll also quickly add an user account for 'vapier' while we're here. (You only have to do this if the vadddomain step below results in "command not found")# source /etc/profile (While debugging vpopmail, you may want to consult the logs)# mysql -u vpopmail -p mysql> select * from vpopmail.vlog; # vadddomain wh0rd.org postpass (Now quickly verify the domain is setup properly)# printf "postmaster@wh0rd.org\0postpass\0blah\0" | vchkpw `which id` 3<&0 uid=89(vpopmail) gid=89(vpopmail) groups=0(root) (If you don't see something similar to above, then permissions somewhere are incorrect)# vadduser vapier@wh0rd.org vappw Every domain that vpopmail creates comes with a 'postmaster' account. Here we told vpopmail that the password for the postmaster account is 'postpass'. Before vpopmail can be truly useful, we'll need to be able to receive mail via courier and send mail via netqmail and SMTP. # emerge net-mail/courier-imap Now for the common post-install configuration steps. 
These steps are only needed if you wish to run SSL encrypted communications (which you should !). Otherwise you can skip to the last two steps in the two following code listings, removing the '-ssl' from the init script name each time. # nano /etc/courier/authlib/authdaemonrc (Set the authmodulelist variable to only contain "authvchkpw")# cd /etc/courier-imap # nano pop3d.cnf (Edit the [ req_dn ] section)# mkpop3dcert # rc-update add courier-pop3d-ssl default # /etc/init.d/courier-pop3d-ssl start # cd /etc/courier-imap # nano imapd.cnf (Edit the [ req_dn ] section)# mkimapdcert # rc-update add courier-imapd-ssl default # /etc/init.d/courier-imapd-ssl start Your mail client should now be able to login to the host running courier and retrieve mail for the virtual host. In my case, I am now able to login with the username 'vapier@wh0rd.org' and password 'vappw'. Let's get SMTP up and running while making sure we don't create another spam hole for people to abuse. # cd /var/qmail/control/ # nano conf-smtpd (Uncomment the SMTP-AUTH variables and set QMAIL_SMTP_CHECKPASSWORD to /var/vpopmail/bin/vchkpw)# nano servercert.cnf (Edit the [ req_dn ] section)# mkservercert # cd /service # ln -s /var/qmail/supervise/qmail-smtpd qmail-smtpd # /etc/init.d/svscan restart Assuming you haven't tweaked the netqmail control files at all, netqmail will now accept mail for the wh0rd.org virtual domain and for users of the local machine. Furthermore, netqmail will relay mail for anyone who sends via 127.0.0.1 and for anyone who is able to authenticate with vpopmail. When you setup your mail client to send mail, make sure you select options like 'Server requires authentication'. In my case, I set the user as 'vapier@wh0rd.org' and my password as 'vappw'. The last detail is to make sure you tell your mail client to use SSL/TLS for SMTP communication. netqmail will not let you authenticate if the session is not encrypted. 
Although there are plenty of webmail clients out there (and you're free to use any of them), I prefer the On to the good stuff! We need to emerge IMP now. # emerge horde-imp Now we setup IMP real quick. # cd /var/www/localhost/htdocs/horde/imp/config/ # nano servers.php (Edit the $servers['imap'] array:)$servers['imap'] = array( 'name' => 'wh0rd.org', 'server' => 'localhost', 'protocol' => 'imap/ssl/novalidate-cert', 'port' => 993, 'folders' => '', 'namespace' => 'INBOX.', 'maildomain' => 'wh0rd.org', 'smtphost' => 'localhost', 'realm' => '', 'preferred' => '' ); Finally, we bring up apache so we can start using webmail. # nano /etc/conf.d/apache2 (Uncomment APACHE2_OPTS="-D SSL -D PHP5")# rc-update add apache2 default # /etc/init.d/apache2 start To test out the new IMP setup, launch a web browser and visit At this point, Horde and IMP are all setup. You should, however, go back through the config directories and tweak each to your heart's content. The first package I would suggest you look into is If you run into problems with netqmail queues and have a hard time debugging the situation, you may want to look into I would highly recommend looking into the many other Horde applications. The netqmail utilizes ucspi-tcp to handle the incoming connections for netqmail. If you wish to customize these filtering rules, then see the configuration files in If you wish to do content filtering on your mail server (spam and virus), then you'll need to use a different queuing program than the default one. One good program for doing so is # cd /etc/tcprules.d/ # nano tcp.qmail-smtp (Add QMAILQUEUE="/var/qmail/bin/qmail-scanner-queue" to the catchall allow rule)# tcprules tcp.qmail-smtp.cdb tcp.qmail-smtp.tmp < tcp.qmail-smtp See the following sections for setting up spam and virus filtering. 
You may want to set a few custom options by editing One of the best Open Source spam filters out there is # nano /etc/mail/spamassassin/local.cf (At the bare minimum, add these options:)required_hits 6 skip_rbl_checks 1 # rc-update add spamd default # /etc/init.d/spamd start # nano /var/qmail/bin/qmail-scanner-queue.pl (Make sure the $spamc_binary variable is set to '/usr/bin/spamc'.) (If it is set to '', then see the note below.) At this point, incoming mail should be sent through qmail-scanner which will run it through SpamAssassin for you. Like SpamAssassin, # nano /etc/conf.d/clamd (Set START_CLAMD=yes)# nano /etc/clamav.conf (Setup stuff the way you want it)# rc-update add clamd default # /etc/init.d/clamd start # nano /var/qmail/bin/qmail-scanner-queue.pl (Make sure the $clamscan_binary variable is set to '/usr/bin/clamscan'.) (If it is set to '', then see the note below.)# nano /var/qmail/control/conf-common (If ClamAV reports memory problems try rasing the softlimit) At this point, incoming mail should be sent through qmail-scanner which will run it through Clam AntiVirus for you. I have no final notes other than if you experience any troubles with the guide, please contact
https://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo/xml/htdocs/doc/en/qmail-howto.xml?revision=1.42
About .NET 4.0 Series - I'll be covering various aspects of .NET 4.0 and related technologies in these posts.

C# 4.0 introduced the dynamic keyword, to support dynamic typing. If you assign an object to a dynamic type variable (like dynamic myvar = new MyObj()), all method calls, property invocations and operator invocations on myvar will be delayed till run time, and the compiler won't perform any type checks for myvar at compile time. So, if you do something like myvar.SomethingInvalid(); it is valid at compile time, but invalid at runtime if the object you assigned to myvar doesn't have a SomethingInvalid() method. The System.Dynamic namespace has various classes for supporting dynamic programming, mainly the DynamicObject class, from which you can derive your own classes to do run-time dispatching yourself. A couple of points to note:

- A dynamic call will be slower the first time, and your calls will be JITted and cached if possible for subsequent calls. As a first step, the DLR checks the cache to see if the given action has already been bound with respect to the arguments. If not, the DLR checks to see if the receiver is an IDynamicObject, and if so, asks the receiver to bind the action. If the receiver is not an IDO, then the DLR calls into the language binder (i.e., the C# runtime binder) and caches the result.
- C#'s underlying type system has not changed in 4.0. As long as you are not using the dynamic keyword, you are still statically typed (i.e., the types are known to the compiler at compile time).
- Error handling when you use dynamic features is a bit difficult and can't be very specific, as you don't know much about the foreign objects to which you dispatch the calls.

Recently, I published this article in CodeProject. This article demonstrates a couple of interesting techniques, including:
- Creating a dynamic wrapper around the file system so that we can access Files and Directories as properties/members of a dynamic object
- A way to attach custom Methods and operators to our dynamic wrapper class and dispatch them to a plug-in subsystem.

Please keep your comments clean.
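To make the run-time dispatch described above concrete, here is a minimal, self-contained sketch (mine, not code from the article) of a DynamicObject subclass whose member lookups are resolved at run time by TryGetMember:

```csharp
using System;
using System.Dynamic;

// Any member access on a dynamic instance of Bag is routed to TryGetMember
// at run time instead of being checked by the compiler.
class Bag : DynamicObject
{
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = "resolved: " + binder.Name;  // resolve every member name
        return true;                          // report the binding as successful
    }
}

class Demo
{
    static void Main()
    {
        dynamic bag = new Bag();
        // Compiles fine; the name "Anything" is only looked up at run time.
        Console.WriteLine(bag.Anything);
    }
}
```

Had Bag not derived from DynamicObject (or had TryGetMember returned false), the same call would still compile but fail at run time with a runtime binder exception — the behavior the post describes for myvar.SomethingInvalid().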
http://www.amazedsaint.com/2009/09/fun-with-dynamic-objects-and-mef-in-c.html
The .NET security system is a marvelously intricate invention. You can customize the permissions available to an individual assembly or a group of assemblies (such as all code from a particular publisher) on an amazingly granular level. But many developers are a bit hazy on how all of the pieces fit together to generate these permissions. In this article, I'll walk you through the process of calculating permissions by hand. Armed with this knowledge, you can more effectively configure .NET to secure your assemblies. To get started in .NET security, you need to understand three basic concepts: You can work with all of these concepts in code, or through the Microsoft .NET Framework Configuration tool (which you can launch from Start > Programs > Administrative Tools). I'll focus on the graphical tool in this article, since my primary goal is understanding the process rather than customizing it in code. Figure 1 shows this tool in action, expanded to show some of the security-related nodes in its MMC treeview. Permissions are the individual abilities that the Common Language Runtime (CLR) can grant or deny to .NET code. A permission can be as broad as "unlimited power to use any printer" or as narrow as "power to read from a particular Registry key." The .NET Framework includes its own extensive set of permissions to control access to system resources. You'll find these in the System.Security.Permissions namespace. You can also create your own permissions in code if your program manages custom resources that should be included in the .NET security system. System.Security.Permissions Permission Sets are, of course, sets of permissions. A permission set contains one or more permissions that are granted or denied as a unit. In fact, you can't grant or deny individual permissions; to grant a single permission, you need to create a permission set containing only that permission, and then grant the permission set. 
The .NET Framework includes seven built-in permission sets: You can also create your own custom permission sets. In the .NET Framework Configuration Tool, right-click a Permission Sets node and select New. Assign a name and description to the new permission set and click Next. This will open the dialog box shown in Figure 2, which lets you pick the permissions that will be a part of your new permission set. Select the permissions that you want to include, using the Properties button to customize the individual permissions, if you like. Then click Finish to create the permission set. Code groups are sets of assemblies that share a security context. You define a code group by specifying the membership condition for the group. Every assembly in a code group receives the same permissions from that group; however, because assemblies can be members of multiple code groups, two assemblies in the same group might end up with different permissions in the end. The .NET Framework supports seven different membership conditions for code groups: The .NET Framework includes some built-in code groups. Of course, you can also create your own code groups. Right-click on a Code Groups node in the .NET Framework Configuration Tool and select New to create a new code group. Assign a name and description to the code group and click Next. Specify the membership permission for the code group and click Next. Select the permission set for the code group and click Next, then Finish to create the code group. Code groups are arranged in a hierarchy. If code is in a parent group in the hierarchy, it might also be in one or more of the child groups in the hierarchy. If code isn't in a parent group, then it won't be in any of the child groups, even if it matches their membership condition. Each code group has properties, which you can see by right-clicking and choosing Properties. Figure 3 shows the Properties dialog box for a particular code group. 
Take note of the two checkboxes at the bottom of the General tab. The "This policy level will only have the permissions from the permission set associated with this code group" checkbox sets the Exclusive property for the code group. The "Policy levels below this level will not be evaluated" checkbox sets the LevelFinal property for the code group. Now that you know the basics, you can follow the permissions process to determine the actual permissions applied to any given piece of code. To begin the process, think about permissions at the Enterprise level only. The Common Language Runtime starts by examining the evidence a particular piece of code presents to determine its membership in code groups at that level. Evidence is just an overall term for the various factors (publisher, strong name, hash, and so on) that can go into code group membership. As it's determining membership, the CLR walks down the hierarchy, checking the child code groups of each code group where the code being evaluated is a member. In general, the CLR will examine all of the code groups in the hierarchy to determine membership. However, the CLR stops checking for group membership if code is found to be a member of an Exclusive code group. Whether it's part of an Exclusive code group or not, code will be determined to be a member of zero or more code groups at the end of this first step. Next, the CLR retrieves the permission set for each code group that contains the code. If the code is a member of an Exclusive code group, only the permission set of that code group is taken into account. If the code is a member of more than one code group and none of them is an Exclusive code group, all of the permission sets of those code groups are taken into account. The permission set for the code is the union of the permission sets of all relevant code groups. 
That is, if code is a member of two code groups, and one code group grants Isolated Storage File permissions, but the other does not, the code will have Isolated Storage File permission from this step. This is a "least-restrictive" combination of permissions. That accounts for the permissions at one level (the Enterprise level). But there are actually four levels of permissions: Enterprise, Machine, User, and Application Domain. Only the first three levels can be managed within the .NET Framework Configuration Tool, but if you need specific security checking within an application domain (roughly speaking, an application domain is a session in which code runs), you can do this in code. An application domain can reduce the permissions granted to code within that application domain, but it cannot expand them. The Common Language Runtime determines which of the four levels are relevant by starting at the top (the Enterprise level) and working down all the way to the Application Domain level. But remember the LevelFinal property; if code is a member of a code group with this property set, then the CLR stops there. For example, if code is a member of a code group on the Machine level, and that group has the LevelFinal property, only the Enterprise and Machine levels are considered in assigning security. The CLR computes the permissions for each level separately and then assigns the code the intersection of the permissions of all relevant levels. That is, if code is granted Isolated Storage File permission on the Enterprise and Machine levels but is not granted Isolated Storage File permission on the User level, the code will not have Isolated Storage File permission. Across levels, this is a "most-restrictive" combination of permissions. After looking at all of the relevant levels, the CLR knows what permissions should be granted to the code in question, considered in isolation. But code does not run in isolation; it runs as part of an application. 
The final step of evaluating code access permissions is to perform a stack walk. In a stack walk, the CLR examines all code in the calling chain from the original application to the code being evaluated. The final permission set for the code is the intersection of the permission sets of all code in the calling chain. That is, if code is granted Isolated Storage File permission but the code that called it was not, the code will not be granted Isolated Storage File permission. Now that you know how to compute permissions by hand, I'll leave you with a way to check your work. In the .NET Framework Configuration Tool, right-click the Runtime Security Policy node and select Evaluate Assembly. Use the Browse button to locate the assembly in which you're interested. You can choose to view permissions granted to this assembly or to view code groups that grant permissions to this assembly. You can also choose whether to evaluate all levels or a particular level. Click Next and the results will come back, as shown in Figure 4. If you're having trouble figuring out why a particular assembly doesn't have the permissions you expect, this list of memberships can be a great troubleshooting tool. The .NET security system is set up to be as unobtrusive as possible, while still providing protection from rogue code. In particular, all code on your own computer is granted Full Trust permission by the My_Computer_Zone built-in code group at the Machine level. When you're ready to run code from elsewhere, or to restrict what a particular assembly can do, you'll need to dig further. Follow the process I've outlined, and you too can customize .NET security to do whatever you wish. Mike Gunderloy is the lead developer for Larkware and author of numerous books and articles on programming topics..
http://www.onlamp.com/pub/a/dotnet/2003/02/18/permissions.html?page=last&x-maxdepth=0
NAME Start a timer. SYNOPSIS #include <zircon/syscalls.h> zx_status_t zx_timer_set(zx_handle_t handle, zx_time_t deadline, zx_duration_t slack); DESCRIPTION zx_timer_set() starts a one-shot timer that will fire when deadline passes. If a previous call to zx_timer_set() was pending, the previous timer is canceled and ZX_TIMER_SIGNALED is de-asserted as needed. The deadline parameter specifies a deadline with respect to ZX_CLOCK_MONOTONIC. To wait for a relative interval, use zx_deadline_after() returned value in deadline. To fire the timer immediately pass a deadline less than or equal to 0. When the timer fires it asserts ZX_TIMER_SIGNALED. To de-assert this signal call zx_timer_cancel() or zx_timer_set() again. The slack parameter specifies a range from deadline - slack to deadline + slack during which the timer is allowed to fire. The system uses this parameter as a hint to coalesce nearby timers. The precise coalescing behavior is controlled by the options parameter specified when the timer was created. ZX_TIMER_SLACK_EARLY allows only firing in the deadline - slack interval and ZX_TIMER_SLACK_LATE allows only firing in the deadline + slack interval. The default option value of 0 is ZX_TIMER_SLACK_CENTER and allows both early and late firing with an effective interval of deadline - slack to deadline + slack RIGHTS handle must be of type ZX_OBJ_TYPE_TIMER and have ZX_RIGHT_WRITE. RETURN VALUE zx_timer_set() returns ZX_OK on success. In the event of failure, a negative error value is returned. ERRORS ZX_ERR_BAD_HANDLE handle is not a valid handle. ZX_ERR_ACCESS_DENIED handle lacks the right ZX_RIGHT_WRITE. ZX_ERR_OUT_OF_RANGE slack is negative.
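EXAMPLE A sketch of typical usage (not part of the original page): it assumes a handle obtained from zx_timer_create() and the ZX_MSEC()/zx_object_wait_one() helpers from the same syscall surface, and is illustrative rather than compiled here.

```c
#include <zircon/syscalls.h>
#include <zircon/time.h>

// Sleep-like wait: fire roughly 10 ms from now, allowing 1 ms of slack
// in either direction so the kernel may coalesce nearby timers.
zx_status_t wait_ten_ms(void) {
    zx_handle_t timer;
    zx_status_t status = zx_timer_create(ZX_TIMER_SLACK_CENTER,
                                         ZX_CLOCK_MONOTONIC, &timer);
    if (status != ZX_OK)
        return status;
    zx_timer_set(timer, zx_deadline_after(ZX_MSEC(10)), ZX_MSEC(1));
    zx_object_wait_one(timer, ZX_TIMER_SIGNALED, ZX_TIME_INFINITE, NULL);
    zx_handle_close(timer);
    return ZX_OK;
}
```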
https://fuchsia.dev/fuchsia-src/reference/syscalls/timer_set
Envato Tuts+ Code - Authentication With Laravel 4 (2013-11-11)

<p>Authentication is required for virtually any type of web application. In this tutorial, I'd like to show you how you can go about creating a small authentication application using <a href="">Laravel 4</a>!</p> <hr> <h2>Installation</h2> <h3>Download</h3> <p>Let's use composer to create a new Laravel application. I'll first change directories into my <code>Sites</code> folder as that's where I prefer to store all of my apps:</p> <pre class="brush: bash noskimlinks noskimwords">cd Sites</pre> <p>Then run the following command to download and install Laravel (I named my app <code>laravel-auth</code>) and all of its dependencies:</p> <pre class="brush: bash noskimlinks noskimwords">composer create-project laravel/laravel laravel-auth</pre> <h3>Add In Twitter Bootstrap</h3> <p>Now to keep our app from suffering a horrible and ugly fate of being styled by yours truly, we'll include the Twitter bootstrap within our <code>composer.json</code> file:</p> <pre class="brush: noskimlinks noskimwords">{ "name": "laravel/laravel", "description": "The Laravel Framework.", "keywords": ["framework", "laravel"], "require": { "laravel/framework": "4.0.*", "twitter/bootstrap": "*" }, // The rest of your composer.json file below ....</pre> <p>...
and then we can install it:</p> <pre class="brush: bash noskimlinks noskimwords">composer update</pre> <p>Now if you open up your app into your text editor, I'm using Sublime, and if you look in the <code>vendor</code> folder you'll see we have the Twitter Bootstrap here.</p> <figure> <img width="600" alt="laravel-auth-twitter-bootstrap-installed"><br> </figure> <p>Now by default our Twitter Bootstrap is composed of <code>.less</code> files and before we can compile them into <code>.css</code> files, we need to install all of the bootstrap dependencies. This will also allow us to use the <code>Makefile</code> that is included with the Twitter bootstrap for working with the framework (such as compiling files and running tests).</p> <div> <p>Note: You will need <a href="">npm</a> in order to install these dependencies.</p> </div> <p>In your terminal, let's change directories into <code>vendor/twitter/bootstrap</code> and run <code>npm install</code>:</p> <pre class="brush: bash noskimlinks noskimwords">cd ~/Sites/laravel-auth/vendor/twitter/bootstrap npm install</pre> <p>With everything ready to go, we can now use the <code>Makefile</code> to compile the <code>.less</code> files into CSS. Let's run the following command:</p> <pre class="brush: bash noskimlinks noskimwords">make bootstrap-css</pre> <p>You should now notice that we have two new folders inside our <code>vendor/twitter/bootstrap</code> directory named <code>bootstrap/css</code> which contain our bootstrap CSS files.</p> <figure> <img width="600" alt="laravel-auth-css-compiled"><br> </figure> <p>Now we can use the bootstrap CSS files later on, in our layout, to style our app.</p> <p>But, we have a problem! We need these CSS files to be publicly accessible, currently they are located in our <code>vendor</code> folder. But this is an easy fix!
We can use artisan to <code>publish</code> (move) them to our <code>public/packages</code> folder, that way we can link in the required CSS files into our main layout template, which we'll create later on.</p> <p>First, we'll change back into the root of our Laravel application and then run artisan to move the files:</p> <pre class="brush: bash noskimlinks noskimwords">cd ~/Sites/laravel-auth php artisan asset:publish --</pre> <h3>Set Permissions</h3> <p>Next we need to ensure our web server has the appropriate permissions to write to our application's <code>app/storage</code> directory. From within your app, run the following command:</p> <pre class="brush: bash noskimlinks noskimwords">chmod -R 755 app/storage</pre> <h3>Connect To Our Database</h3> <p>Next, we need a database that our authentication app can use to store our users in. So fire up whichever database you are more comfortable using, personally, I prefer MySQL along with PHPMyAdmin. I've created a new, empty database named: <code>laravel-auth</code>.</p> <figure> <img width="600" alt="laravel-auth-database-creation"><br> </figure> <p>Now let's connect this database to our application. Under <code>app/config</code> open up <code>database.php</code>. Enter in your appropriate database credentials, mine are as follows:</p> <pre class="brush: php noskimlinks noskimwords">// ...</pre> <h3>Create the Users Table</h3> <p>With our database created, it won't be very useful unless we have a table to store our users in. Let's use artisan to create a new migration file named: <code>create-users-table</code>:</p> <pre class="brush: bash noskimlinks noskimwords">php artisan migrate:make create-users-table</pre> <p>Let's now edit our newly created migration file to create our <code>users</code> table using the <a href="">Schema Builder</a>.
We'll start with the <code>up()</code> method:</p> <pre class="brush: php noskimlinks noskimwords">public function up() { Schema::create('users', function($table) { $table->increments('id'); $table->string('firstname', 20); $table->string('lastname', 20); $table->string('email', 100)->unique(); $table->string('password', 64); $table->timestamps(); }); }</pre> <p>This will create a table named <code>users</code> and it will have an <code>id</code> field as the primary key, <code>firstname</code> and <code>lastname</code> fields, an <code>email</code> field which requires the email to be unique, and finally a field for the <code>password</code> (64 characters long, enough to hold the hashed password) as well as a few <code>timestamps</code>.</p> <p>Now we need to fill in the <code>down()</code> method in case we need to revert our migration, to drop the <code>users</code> table:</p> <pre class="brush: php noskimlinks noskimwords">public function down() { Schema::drop('users'); }</pre> <p>And now we can run the migration to create our <code>users</code> table:</p> <pre class="brush: bash noskimlinks noskimwords">php artisan migrate</pre> <h3>Start Server & Test It Out</h3> <p>Alright, our authentication application is coming along nicely.
We've done quite a bit of preparation, let's start up our server and preview our app in the browser:</p> <pre class="brush: bash noskimlinks noskimwords">php artisan serve</pre> <p>Great, the server starts up and we can see our home page:</p> <figure> <img width="600" alt="laravel-auth-home-page"><br> </figure> <hr> <h2>Making the App Look Pretty</h2> <p>Before we go any further, it's time to create a main layout file, which will use the Twitter Bootstrap to give our authentication application a little style!</p> <h3>Creating a Main Layout File</h3> <p>Under <code>app/views/</code> create a new folder named <code>layouts</code> and inside it, create a new file named <code>main.blade.php</code> and let's place in the following basic HTML structure:</p> <pre class="brush: html noskimlinks noskimwords"><!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Authentication App With Laravel 4</title> </head> <body> </body> </html></pre> <h3>Linking In the CSS Files</h3> <p>Next, we need to link in our bootstrap CSS file as well as our own <code>main</code> CSS file, in our <code>head</code> tag, right below our <code>title</code>:</p> <pre class="brush: html noskimlinks noskimwords"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Authentication App With Laravel 4</title> {{ HTML::style('packages/bootstrap/css/bootstrap.min.css') }} {{ HTML::style('css/main.css')}} </head></pre> <p>Now we just need to create this <code>main.css</code> file where we can add our own customized styling for our app.
Under the <code>public</code> directory create a new folder named <code>css</code> and within it create a new file named <code>main.css</code>.</p> <figure> <img width="600" alt="laravel-auth-add-main-css-file"><br> </figure> <h3>Finishing the Main Layout</h3> <p>Inside of our <code>body</code> tag, let's create a small navigation menu with a few links for registering and logging in to our application:</p> <pre class="brush: html noskimlinks noskimwords"><body> <div class="navbar navbar-fixed-top"> <div class="navbar-inner"> <div class="container"> <ul class="nav"> <li>{{ HTML::link('users/register', 'Register') }}</li> <li>{{ HTML::link('users/login', 'Login') }}</li> </ul> </div> </div> </div> </body></pre> <p>Notice the use of several Bootstrap classes in order to style the navbar appropriately. Here we're just using a couple of DIVs to wrap an unordered list of navigation links, pretty simple.</p> <p>Next, let's wrap the main content area of our pages within a <code>div</code> with a class of <code>.container</code> and display any available flash messages right after our navbar:</p> <pre class="brush: html noskimlinks noskimwords"><div class="container"> @if(Session::has('message')) <p class="alert">{{ Session::get('message') }}</p> @endif </div></pre> <p>To display the flash message, I've first used a Blade <code>if</code> statement to check if we have a flash message to display. Our flash message will be available in the Session under <code>message</code>. So we can use the <code>Session::has()</code> method to check for that message. If that evaluates to true, we create a paragraph with the Twitter bootstrap class of <code>alert</code> and we call the <code>Session::get()</code> method to display the message's value.</p> <p>Now lastly, at least for our layout file, let's echo out a <code>$content</code> variable, right after our flash message. This will allow us to tell our controller to use this layout file, and our views will be displayed in place of this <code>$content</code> variable, right here in the layout:</p> <pre class="brush: html noskimlinks noskimwords"><div class="container"> @if(Session::has('message')) <p class="alert">{{ Session::get('message') }}</p> @endif {{ $content }} </div></pre> <hr> <h2>Custom Styling</h2> <p>Now that we have our layout complete, we just need to add a few small custom CSS rules to our <code>main.css</code> file to customize our layout a little bit more.
Go ahead and add in the following bit of CSS, it's pretty self explanatory:</p> <pre class="brush: css noskimlinks noskimwords">body { padding-top: 40px; } .form-signup, .form-signin { width: 400px; margin: 0 auto; }</pre> <p>I added just a small amount of padding to the top of the <code>body</code> tag in order to prevent our navbar from overlapping our main content. Then I target the Bootstrap's <code>.form-signup</code> and <code>.form-signin</code> classes, which we'll be applying to our register and login forms in order to set their width and center them on the page.</p> <hr> <h2>Creating the Register Page</h2> <p>It's now time to start building the first part of our authentication application and that is our Register page.</p> <h3>The Users Controller</h3> <p>We'll start by creating a new <code>UsersController</code> within our <code>app/controllers</code> folder and in it, we define our <code>UsersController</code> class:</p> <pre class="brush: php noskimlinks noskimwords"><?php class UsersController extends BaseController { } ?></pre> <p>Next, let's tell this controller to use our <code>main.blade.php</code> layout. At the top of our controller set the <code>$layout</code> property:</p> <pre class="brush: php noskimlinks noskimwords"><?php class UsersController extends BaseController { protected $layout = 'layouts.main'; } ?></pre> <p>Now let's create a <code>getRegister</code> action to display our register page:</p> <pre class="brush: php noskimlinks noskimwords">public function getRegister() { $this->layout->content = View::make('users.register'); }</pre> <p>Here we just set the <code>content</code> layout property (this is the <code>$content</code> variable we echo'd out in our layout file) to display a <code>users.register</code> view file.</p> <h3>The Users Controller Routes</h3> <p>With our controller created, next we need to setup the routes for all of the actions we might create within our controller.
Inside of our <code>app/routes.php</code> file let's first remove the default <code>/</code> route and then add in the following code to create our <code>UsersController</code> routes:</p> <pre class="brush: php noskimlinks noskimwords">Route::controller('users', 'UsersController');</pre> <p>Now anytime that we create a new action, it will be available using a URI in the following format: <code>/users/actionName</code>. For example, we have a <code>getRegister</code> action, we can access this using the following URI: <code>/users/register</code>.</p> <div> <p>Note that we don't include the "get" part of the action name in the URI, "get" is just the HTTP verb that the action responds to.</p> </div> <h3>Creating the Register View</h3> <p>Inside of <code>app/views</code> create a new folder named <code>users</code>. This will hold all of our <code>UsersController</code>'s view files. Inside the <code>users</code> folder create a new file named <code>register.blade.php</code> and place the following code inside of it:</p> <pre class="brush: php noskimlinks noskimwords">{{ Form::open(array('url'=>'users/create', 'class'=>'form-signup')) }} <h2 class="form-signup-heading">Please Register</h2> <ul> @foreach($errors->all() as $error) <li>{{ $error }}</li> @endforeach </ul> {{ Form::text('firstname', null, array('class'=>'input-block-level', 'placeholder'=>'First Name')) }} {{ Form::text('lastname', null, array('class'=>'input-block-level', 'placeholder'=>'Last Name')) }} {{ Form::text('email', null, array('class'=>'input-block-level', 'placeholder'=>'Email Address')) }} {{ Form::password('password', array('class'=>'input-block-level', 'placeholder'=>'Password')) }} {{ Form::password('password_confirmation', array('class'=>'input-block-level', 'placeholder'=>'Confirm Password')) }} {{ Form::submit('Register', array('class'=>'btn btn-large btn-primary btn-block')) }} {{ Form::close() }}</pre> <p>Here we use the <code>Form</code> class to create our register form. First we call the <code>open()</code> method, passing in an array of options. We tell the form to submit to a URI of <code>users/create</code> by setting the <code>url</code> key. This URI will be used to process the registration of the user. We'll handle this next. After setting the <code>url</code> we then give the form a class of <code>form-signup</code>.</p> <p>After opening the form, we just have an <code>h2</code> heading with the <code>.form-signup-heading</code> class.</p> <p>Next, we use a <code>@foreach</code> loop, looping over all of the form validation error messages and displaying each <code>$error</code> in the unordered list.</p> <p>After the form validation error messages, we then create several form input fields, each with a class of <code>input-block-level</code> and a placeholder value.
We have inputs for the firstname, lastname, email, password, and password confirmation fields. The second argument to the <code>text()</code> method is set to <code>null</code>, since we're using a <code>placeholder</code>, we don't need to set the input fields value attribute, so I just set it to <code>null</code> in this case.</p> <p>After the input fields, we then create our submit button and apply several different classes to it so the Twitter bootstrap handles the styling for us.</p> <p>Lastly, we just close the form using the <code>close()</code> method.</p> <p>Make sure to start up your server, switch to your favorite browser, and if we browse to <code>http://localhost:8000/users/register</code> you should see your register page:</p> <figure> <img width="600" alt="laravel-auth-register-page"><br> </figure> <hr> <h2>Processing the Register Form Submission</h2> <p>Now if you tried filling out the register form's fields and hitting the <strong>Register</strong> button you would have been greeted with a <code>NotFoundHttpException</code>, and this is because we have no route that matches the <code>users/create</code> URI, because we do not have an action to process the form submission. So that's our next step!</p> <h3>Creating a <code>postCreate</code> Action</h3> <p>Inside of your <code>UsersController</code> let's create another action named <code>postCreate</code>:</p> <pre class="brush: php noskimlinks noskimwords">public function postCreate() { }</pre> <p>Now this action needs to handle processing the form submission by validating the data and either displaying validation error messages or it should create the new user, hashing the user's password, and saving the user into the database.</p> <h3>Form Validation</h3> <p>We'll store our validation rules in our User model. Conveniently, Laravel comes with a <code>User.php</code> model already created for you.</p> <div> <p>Make sure you don't delete this User model or remove any of the preexisting code, as it contains new code that is required for Laravel 4's authentication to work correctly.
Your User model must implement <code>UserInterface</code> and <code>RemindableInterface</code> as well as implement the <code>getAuthIdentifier()</code> and <code>getAuthPassword()</code> methods.</p> </div> <p>Under <code>app/models</code> open up that <code>User.php</code> file and at the top, add in the following code:</p> <pre class="brush: php noskimlinks noskimwords">public static $rules = array( 'firstname'=>'required|alpha|min:2', 'lastname'=>'required|alpha|min:2', 'email'=>'required|email|unique:users', 'password'=>'required|alpha_num|between:6,12|confirmed', 'password_confirmation'=>'required|alpha_num|between:6,12' );</pre> <p>Here I'm validating the <code>firstname</code> and <code>lastname</code> fields to ensure they are present, only contain alpha characters, and that they are at least two characters in length. Next, I validate the <code>email</code> field to ensure that it's present, that it is a valid email address, and that it is unique to the users table, as we don't want to have duplicate email addresses for our users. Lastly, I validate the <code>password</code> and <code>password_confirmation</code> fields. I ensure they are both present, contain only alpha-numeric characters and that they are between six and twelve characters in length. Additionally, notice the <code>confirmed</code> validation rule, this makes sure that the <code>password</code> field is exactly the same as the matching <code>password_confirmation</code> field, to ensure users have entered in the correct password.</p> <p>Now that we have our validation rules, we can use these in our <code>UsersController</code> to validate the form submission. 
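Conceptually, the validator splits each pipe-delimited rule string and applies the named checks in turn. Here is a toy model in Python covering just a few of the rules used above — an illustration of the idea, not Laravel's implementation:

```python
# Toy model of pipe-delimited validation rules in the style of
# 'required|alpha|min:2'. Only a few rule types are implemented.
def validate(data, rules):
    errors = []
    for field, rule_string in rules.items():
        value = data.get(field, "")
        for rule in rule_string.split("|"):
            name, _, arg = rule.partition(":")  # e.g. 'min:2' -> ('min', '2')
            if name == "required" and not value:
                errors.append(f"{field} is required")
            elif name == "alpha" and value and not value.isalpha():
                errors.append(f"{field} must be alphabetic")
            elif name == "min" and len(value) < int(arg):
                errors.append(f"{field} must be at least {arg} characters")
    return errors

rules = {"firstname": "required|alpha|min:2", "lastname": "required|alpha|min:2"}
print(validate({"firstname": "Jo", "lastname": "X1"}, rules))
# → ['lastname must be alphabetic']
```

Laravel's real validator works the same way at a high level: rule strings are parsed, each named rule maps to a check, and the failures are collected into the error bag that our register view loops over.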
In your <code>UsersController</code>'s <code>postCreate</code> action, let's start by checking if the data passes validation, add in the following code:</p> <pre class="brush: php noskimlinks noskimwords">public function postCreate() { $validator = Validator::make(Input::all(), User::$rules); if ($validator->passes()) { // validation has passed, save user in DB } else { // validation has failed, display error messages } }</pre> <p>We start by creating a validator object named <code>$validator</code> by calling the <code>Validator::make()</code> method. This accepts two arguments, the submitted form input that should be validated and the validation rules that the data should be validated against. We can grab the submitted form data by calling the <code>Input::all()</code> method and we pass that in as the first argument. We can get our validation rules that we created in our <code>User</code> model by accessing the static <code>User::$rules</code> property and passing that in as the second argument.</p> <p>Once we've created our validator object, we call its <code>passes()</code> method. This will return either <code>true</code> or <code>false</code> and we use this within an <code>if</code> statement to check whether our data has passed validation.</p> <p>Within our <code>if</code> statement, if the validation has passed, we create and save the new user, then redirect them to the login page:</p> <pre class="brush: php noskimlinks noskimwords">$user = new User; $user->firstname = Input::get('firstname'); $user->lastname = Input::get('lastname'); $user->email = Input::get('email'); $user->password = Hash::make(Input::get('password')); $user->save(); return Redirect::to('users/login')->with('message', 'Thanks for registering!');</pre> <p>As long as the data that the user submits has passed validation, we create a new instance of our User model: <code>new User;</code> storing it into a <code>$user</code> variable. We can then use the <code>$user</code> object and set each of the user's properties using the submitted form data. We can grab the submitted data individually using the <code>Input::get('fieldName')</code> method. Where <code>fieldName</code> is the name of the form field we want. Also notice that we use the <code>Hash::make()</code> method to hash the submitted password for us before saving it.
Lastly, we save the user into the database by calling the <code>$user</code> object's <code>save()</code> method.</p> <p>After creating the new user, we then redirect the user to the login page (we'll create the login page in a few moments) using the <code>Redirect::to()</code> method. This just takes in the URI of where you'd like to redirect to. We also chain on the <code>with()</code> method call in order to give the user a flash message letting them know that their registration was successful.</p> <p>Now if the validation does not pass, we need to redisplay the register page, along with some validation error messages, with the old input, so the user can correct their mistakes. Within your <code>else</code> statement, add in the following code:</p> <pre class="brush: php noskimlinks noskimwords">return Redirect::to('users/register') ->with('message', 'The following errors occurred:') ->withErrors($validator) ->withInput();</pre> <p>Here we just redirect the user back to the register page with a flash message letting them know some errors have occurred. We make sure to display the validation error messages by calling the <code>withErrors($validator)</code> method and passing in our <code>$validator</code> object to it. Finally, we call the <code>withInput()</code> method so the form remembers what the user originally typed in and that will make it nice and easy for the user to correct the errors.</p> <h3>Adding In the CSRF Before Filter</h3> <p>Now we need to make sure to protect our POST actions from CSRF attacks by setting the CSRF before filter within our <code>UsersController</code>'s constructor method. At the top of your <code>UsersController</code> add in the following code:</p> <pre class="brush: php noskimlinks noskimwords">public function __construct() { $this->beforeFilter('csrf', array('on'=>'post')); }</pre> <p>Within our constructor, we call the <code>beforeFilter()</code> method and pass in the string <code>csrf</code>, as the first argument.
As the second argument, we pass in an array with the <code>on</code> key set to <code>post</code>, so that the filter only applies to POST requests. This way, all of our POST actions are protected by the <code>csrf</code> filter.</p> <hr> <h2>Creating the Login Page</h2> <p>Now that users can register, let's give them a way to log in!</p> <p>Still inside of your <code>UsersController</code>, create a new action named <code>getLogin</code> and place in the following code:</p> <pre class="brush: php noskimlinks noskimwords">public function getLogin() { $this->layout->content = View::make('users.login'); }</pre> <p>This will display a <code>users.login</code> view file. We now need to create that view file. Under <code>app/views/users</code> create a new file named <code>login.blade.php</code> and add in the following code:</p> <pre class="brush: php noskimlinks noskimwords">{{ Form::open(array('url'=>'users/signin', 'class'=>'form-signin')) }} <h2 class="form-signin-heading">Please Login</h2> {{ Form::text('email', null, array('class'=>'input-block-level', 'placeholder'=>'Email Address')) }} {{ Form::password('password', array('class'=>'input-block-level', 'placeholder'=>'Password')) }} {{ Form::submit('Login', array('class'=>'btn btn-large btn-primary btn-block')) }} {{ Form::close() }}</pre> <p>This code is very similar to the code we used in our <code>register</code> view, so I'll simplify the explanation this time to only what is different. For this form, we have it submit to a <code>users/signin</code> URI and we changed the form's class to <code>.form-signin</code>. The <code>h2</code> has been changed to say "Please Login" and its class was also changed to <code>.form-signin-heading</code>. Next, we have two form fields so the user can enter in their email and password, and then finally our submit button which just says "Login".</p> <h3>Let's Register a New User!</h3> <p>Time to try out our register form. Browse to <code>http://localhost:8000/users/register</code>. Try entering in some invalid user data to test out the form validation error messages. Here's what my page looks like with an invalid user:</p> <figure> <img width="600" alt="laravel-auth-displaying-errors"><br> </figure> <p>Now try registering with valid user data. This time we get redirected to our login page along with our success message, excellent!</p> <figure> <img width="600" alt="laravel-auth-successful-registration"><br> </figure> <hr> <h2>Logging In</h2> <p>So we've successfully registered a new user and we have a login page, but we still can't login. We now need to create the <code>postSignin</code> action for our <code>users/signin</code> URI, that our login form submits to.
Let's go back into our <code>UsersController</code> and create a new action named <code>postSignin</code>:</p> <pre class="brush: php noskimlinks noskimwords">public function postSignin() { }</pre> <p>Now let's log the user in, using the submitted data from the login form. Add the following code into your <code>postSignin()</code> action:</p> <pre class="brush: php noskimlinks noskimwords">if (Auth::attempt(array('email'=>Input::get('email'), 'password'=>Input::get('password')))) { return Redirect::to('users/dashboard')->with('message', 'You are now logged in!'); } else { return Redirect::to('users/login') ->with('message', 'Your username/password combination was incorrect') ->withInput(); }</pre> <p>Here we attempt to log the user in, using the <code>Auth::attempt()</code> method. We simply pass in an array containing the user's email and password that they submitted from the login form. This method will return either <code>true</code> or <code>false</code>, depending on whether the user's credentials validate. So we can use this <code>attempt()</code> method within an <code>if</code> statement. If the user was logged in, we just redirect them to a <code>dashboard</code> view page and give them a success message. Otherwise, the user's credentials did not validate and in that case we redirect them back to the login page, with an error message, and display the old input so the user can try again.</p> <h3>Creating the Dashboard</h3> <p>When a user successfully logs in, we redirect them to a dashboard page, so we need to create that page and make sure only logged in users can view it.</p> <p>While still inside of your <code>UsersController</code> let's create a new action named <code>getDashboard</code>:</p> <pre class="brush: php noskimlinks noskimwords">public function getDashboard() { }</pre> <p>And inside of this action we'll just display a <code>users.dashboard</code> view file:</p> <pre class="brush: php noskimlinks noskimwords">public function getDashboard() { $this->layout->content = View::make('users.dashboard'); }</pre> <p>Next, we need to protect it from unauthorized users by using the <code>auth</code> before filter.
In our <code>UsersController</code>'s constructor, add in the following code:</p> <pre class="brush: php noskimlinks noskimwords">public function __construct() { $this->beforeFilter('csrf', array('on'=>'post')); $this->beforeFilter('auth', array('only'=>array('getDashboard'))); }</pre> <p>This will use the <code>auth</code> filter, which checks if the current user is logged in. If the user is not logged in, they get redirected to the login page, essentially denying the user access. Notice that I'm also passing in an array as a second argument, by setting the <code>only</code> key, I can tell this before filter to only apply it to the provided actions. In this case, I'm saying to protect only the <code>getDashboard</code> action.</p> <h3>Customizing Filters</h3> <p>By default the <code>auth</code> filter will redirect users to a <code>/login</code> URI, this does not work for our application though. We need to modify this filter so that it redirects to a <code>users/login</code> URI instead, otherwise we'll get an error. Open up <code>app/filters.php</code> and in the <strong>Authentication Filters</strong> section, change the <strong>auth</strong> filter to redirect to <code>users/login</code>, like this:</p> <pre class="brush: php noskimlinks noskimwords">/* |-------------------------------------------------------------------------- | Authentication Filters |-------------------------------------------------------------------------- | | The following filters are used to verify that the user of the current | session is logged into this application. The "basic" filter easily | integrates HTTP Basic authentication for quick, simple checking. | */ Route::filter('auth', function() { if (Auth::guest()) return Redirect::guest('users/login'); });</pre> <h3>Creating the Dashboard View</h3> <p>Before we can log users into our application we need to create that <code>dashboard</code> view file.
Under <code>app/views/users</code> create a new file named <code>dashboard.blade.php</code> and insert the following snippet of code:</p> <pre class="brush: html noskimlinks noskimwords"><h1>Dashboard</h1> <p>Welcome to your Dashboard. You rock!</p></pre> <p>Here I'm displaying a very simple paragraph to let the user know they are now in their Dashboard.</p> <h3>Let's Login!</h3> <p>We should now be able to login. Browse to <code>http://localhost:8000/users/login</code>, enter in your user's credentials, and give it a try.</p> <figure> <img width="600" alt="laravel-auth-logged-in"><br> </figure> <p>Success!</p> <hr> <h2>Displaying the Appropriate Navigation Links</h2> <p>Right now our navbar always shows the register and login links, even once a user has logged in. Let's fix that by opening up our <code>main.blade.php</code> file again. Here's what our navbar code looks like at the moment:</p> <pre class="brush: php noskimlinks noskimwords"><div class="navbar navbar-fixed-top"> <div class="navbar-inner"> <div class="container"> <ul class="nav"> <li>{{ HTML::link('users/register', 'Register') }}</li> <li>{{ HTML::link('users/login', 'Login') }}</li> </ul> </div> </div> </div></pre> <p>Let's modify this slightly, replacing our original navbar code, with the following:</p> <pre class="brush: php noskimlinks noskimwords"><div class="navbar navbar-fixed-top"> <div class="navbar-inner"> <div class="container"> <ul class="nav"> @if(!Auth::check()) <li>{{ HTML::link('users/register', 'Register') }}</li> <li>{{ HTML::link('users/login', 'Login') }}</li> @else <li>{{ HTML::link('users/logout', 'Logout') }}</li> @endif </ul> </div> </div> </div></pre> <p>All I've done is wrapped our <code>li</code> tags for our navbar in an <code>if</code> statement to check if the user is <em>not</em> logged in, using the <code>!Auth::check()</code> method. This method returns <code>true</code> if the user is logged in, otherwise, <code>false</code>. So if the user is not logged in, we display the register and login links, otherwise, the user is logged in and we display a logout link, instead.</p> <figure> <img width="600" alt="laravel-auth-logout-link"><br> </figure> <hr> <h2>Logging Out</h2> <p>Now that our navbar displays the appropriate links, based on the user's logged in status, let's wrap up this application by creating the <code>getLogout</code> action, to actually log the user out.
Within your <code>UsersController</code> create a new action named <code>getLogout</code>:</p> <pre class="brush: php noskimlinks noskimwords">public function getLogout() { }</pre> <p>Now add in the following snippet of code to log the user out:</p> <pre class="brush: php noskimlinks noskimwords">public function getLogout() { Auth::logout(); return Redirect::to('users/login')->with('message', 'You are now logged out!'); }</pre> <p>Here we call the <code>Auth::logout()</code> method, which handles logging the user out for us. Afterwards, we redirect the user back to the login page and give them a flash message letting them know that they have been logged out.</p> <figure> <img width="600" alt="laravel-auth-logged-out"><br> </figure> <hr> <h2>Conclusion</h2> <p>You can find the <a href="">complete source code</a> for the small demo app that we built throughout this tutorial on Github. Thanks for reading.</p>

2013-11-11 - Andrew Perkins

Why 2013 is the Year of PHP

<p>2012 was an excellent year for the PHP community, thanks to many badly needed features being added to version 5.4, as well as the countless projects, advancing PHP to the next level.</p> <p>In this article, I'd like to review a handful of the issues that people had with PHP in the past, and provide a glimpse at why 2013 just may be the year of PHP!</p> <hr> <h2>Why the Hostility?</h2> <p>This may come as a surprise to you, but many people have negative feelings toward PHP developers, and the language as a whole. You likely know exactly what I mean, if you've considered learning Ruby in the past couple of years, due to some sense of peer pressure.</p> <blockquote> <p>However, before you make any changes, you have to ask yourself: "Why does PHP have such a stigma?"</p> </blockquote> <p>Well, like many of life's important questions, there is no clear-cut answer.
After doing a bit of searching online for arguments against PHP, you'll find that roughly eighty percent of them are rooted in ignorance, in one form or another.</p> <blockquote> <p>Roughly eighty percent of the arguments against PHP are rooted in ignorance.</p> </blockquote> <h4>The Beginners</h4> <p>There are the beginners, who don't really know how PHP works. This results in questions, like "<em>Why can't you listen for button events with PHP?</em>," and similar questions about AJAX.</p> <h4>One Language to Rule Them All</h4> <p>Next, you have the folks who don't know about any language or framework other than the one that they currently use. These are the types of people who make arguments, such as "<em>Rails is much easier than PHP</em>," and things like that.</p> <h4>Fighting PHP 4</h4> <p>The third form of misconception comes from the people who haven't kept up with PHP's advances over the years. Instead, they're still fighting the language, as it existed years and years ago. This results in statements, like: "<em>PHP isn't object oriented</em>" or "<em>PHP sucks because it doesn't support namespacing.</em>" You get the idea.</p> <h4>Scaling</h4> <div> <strong>What is the PHP-FIG?</strong> The PHP Framework Interop Group is made up of representatives from many of the major PHP projects, who collaborate on shared standards, such as the PSR-0 autoloading standard.</div> <p>It's an unfortunate truth that some arguments, which permeate through the web, are either completely false or outdated.</p> <hr> <h2>PHP Isn't Perfect</h2> <blockquote> <p>There's truth in every criticism, however.</p> </blockquote> <p>There's truth in every criticism, however. PHP isn't perfect. When it comes to its implementation of core features and functions, PHP is inconsistent. These arguments are entirely valid.</p> <p>This begs the question, "<em>Why not just deprecate the bad parts?</em>"</p> <p>Imagine, for example, if arrays were objects with methods that you could call using the <code>-></code> syntax. So, instead of <code>array_push($arr, "Value");</code>, you would write something, like <code>$arr->push("Value");</code>.</p> <p>Don't worry; things like this have been happening slowly. Just look at the new PHP 5.5 features.
The old function-oriented MySQL add-on has been deprecated, in favor of the newer object-oriented approach.</p> <hr> <h2>The Present</h2> <p>Now with the past covered, let's move up to the present. There are a handful of really cool projects and movements, some of which borrow ideas from other languages, in order to propel PHP to the next level.</p> <p>Let's consider the following:</p> <ul> <li><a href="">Composer</a></li> <li><a href="">Laravel</a></li> <li><a href="">Test Driven Development</a></li> <li><a href="">PHP 5.4 / 5.5</a></li> </ul> <hr> <h3>Composer</h3> <div><img alt=""></div> <blockquote> <p>The PHP community can now stop reinventing the wheel over and over again, thanks to Composer.</p> </blockquote> <p.</p> <h4>PEAR?</h4> <p.</p> <blockquote> <p>If you so desire, you can pick and choose your components.</p> </blockquote> <p.</p> <p>Additionally, Composer is a light application, written in PHP itself, and comes with an autoloader feature. This works off the PSR-0 standard (mentioned above), which will automatically load your dependencies as you need them, so your application remains as clean as possible.</p> <p>All of these features are a definite improvement; however, without community adoption, they mean nothing. I'm happy to inform you that it's been very well accepted. Big projects, such as Symfony and Laravel, have already uploaded their components to the Composer library, <a href="">Packagist</a>. Having the framework split up into components means that you can easily build your own custom framework to match your liking. In other words, no more bloated frameworks. If you so desire, you can pick and choose your components.</p> <p?</p> <hr> <h2>Laravel</h2> <div><img alt=""></div> <blockquote> <p>Even if you do have issues with some of PHP's inconsistencies, Laravel abstracts nearly all of it.</p> </blockquote> <p>Now this wouldn't be an article about PHP's future without discussing Laravel in a bit more detail.
We're often asked why Nettuts+ seems to be pushing Laravel as much as it has been. This is the wrong question. Instead, ask "<em>Why not?</em>"</p> <p>Even if you do have issues with some of PHP's inconsistencies, Laravel abstracts nearly all of it, providing you with the feel and elegance of a language like Ruby, but with the ease of PHP.</p> <p>Laravel comes with <a href="">Eloquent</a> <code>find</code> and <code>delete</code>.</p> <p.</p> <blockquote> <p>If you'd like to learn more about Laravel, I recommend the Tuts+ Premium course, <a href="">Laravel Essentials</a>.</p> </blockquote> <p!</p> <hr> <h2>PHP 5.4 / 5.5</h2> <div><img alt=""></div> <p>+: <a href="">5.4 article</a>, <a href="">5.5 article</a>.</p> <p>But, for a quick recap of my favorites:</p> <h4>Traits</h4> <ul> <li>Traits add the ability to create class "partials," which allows you to create consistent objects without re-writing everything over and over.</li> </ul> <h4>Generators</h4> <ul> <li>Generators let you do some cool things with lists of data, as well as allow you to benefit from all the features that come with lazy evaluation.</li> </ul> <h4>CLI Web Server</h4> <ul> <li>Another great addition is the built-in web server, which allows you to test your applications with different versions of PHP, without the need for something like Apache.</li> </ul> <h4>Dereferencing</h4> <ul> <li>Dereferencing is not a major addition, but it's nice to be able to reference child elements without the use of functions.
This includes things like accessing individual characters of a constant by using only square bracket notation.</li> </ul> <h4>The New Password Hashing API</h4> <ul> <li>With the new API, you are given the ability to both hash strings, as well as verify and strengthen passwords - all without any knowledge of bcrypt or any other hashing algorithm.</li> </ul> <p>These represent just a few of the new improvements, and there is a whole list of things that are currently being discussed for the next version, scheduled to be released later this year.</p> <hr> <h2>Test Driven Development</h2> <div><img alt=""></div> <p.</p> <h3>Why is it Important?</h3> <blockquote> <p>Think about your project before diving in, like a cowboy.</p> </blockquote> <p.</p> <h3>How Does TDD Help?</h3> <p.</p> <p>Yes, setting up these tests requires an extra step, but so does thinking before you speak. Does anyone doubt the benefits of that? Of course not. The same is true for tests: think about your project before diving in, like a cowboy. </p> <h5>Additional Learning</h5> <ul> <li> <a href="">Test-Driven PHP</a> (Premium) </li> <li> <a href="">TDD in PHP</a> </li> </ul> <hr> <h2>Conclusion</h2> <p>It's an exciting time to be a PHP developer. Many of the inherent problems have been or are being fixed. As for the other issues, well, those are easily remedied with a good framework and testing.</p> <p>So what do you think? Are you getting on board? Disagree with me? If so, let's continue the discussion below!</p> 2013-01-15T19:08:36.000Z Gabriel Manricks Easy Package Management With Composer <p>Let's face it: PHP has had a rocky history with package management, and as a result, it is pretty rare to find a developer who actively uses systems like PEAR.
Instead, most developers have chosen their favorite framework, which has code specifically written for it to handle various things, like DB interaction, ORM's, OAuth, Amazon S3 integration, etc.</p> <p>The downside here, though, is that switching frameworks (or returning to not using a framework at all) can be a nightmare, as it involves relearning everything to use brand new tools - and that is no easy task. Well, <a href="">Composer</a> can fix that!</p> <p><!--more--></p> <hr> <h2>Introduction</h2> <blockquote class="pullquote"><p> "The glue between all projects." </p></blockquote> <p><a href="">Composer</a> sets out to solve this situation by positioning itself as "the glue between all projects" - meaning that packages can be written, developed and shared in a format that other developers can plug into other applications with ease.</p> <p>This article sets out to show you how to install and work with Composer packages. By the end of this article, you will be able to plug and play with chunks of code in any framework, whether you work with <a href="">CodeIgniter</a>, <a href="">FuelPHP</a>, <a href="laravel.com">Laravel</a>, <a href="">Symfony2</a>, <a href="">Lithium</a>, <a href="">Yii</a>, <a href="">Zend</a>... or anything else. </p> <hr> <h2> <span>Step 1 -</span> Installing Composer</h2> <p>Composer has two main logical parts: there is a repository that stores packages, and then there's the command-line application, which helps you find, download, update and share code.</p> <p>Installing the application on anything Unix flavoured is easy:</p> <pre class="brush: bash noskimlinks noskimwords">$ cd /path/to/my/project $ curl -s | php</pre> <p>It's as easy as that! You'll now have a <code>composer.phar</code> file listed in your project, which contains all of the logic for the command line utility. 
</p> <p>You can confirm that it has been installed by running:</p> <pre class="brush: bash noskimlinks noskimwords">$ php composer.phar</pre> <p>This command will reveal all available commands. </p> <p>A personal preference of mine is to run an extra command:</p> <pre class="brush: bash noskimlinks noskimwords">$ sudo mv composer.phar /usr/bin/composer</pre> <p>This <em>moves</em> the file into your bin, which allows you to access all commands with the much shorter example:</p> <pre class="brush: bash noskimlinks noskimwords">$ composer about</pre> <p>If you're running Windows, you can just download this file, and run it through the PHP interpreter - wherever that may be installed.</p> <hr> <h2> <span>Step 2 -</span> Understanding <code>composer.json</code> </h2> <p>If you are a Ruby developer, you'll likely be familiar with the <code>Gemfile</code>. Or, Node developers will know about <code>package.json</code>. Similarly, Composer uses a <code>composer.json</code> file to specify settings and package requirements for your application.</p> <p>In its most basic form, the composer file will look like this:</p> <pre class="brush: js noskimlinks noskimwords">{ "require": { "kriswallsmith/assetic": "*" } }</pre> <p>This will require the "Assetic" package, created by "kriswallsmith", and will require any version. 
To specify a specific version, you could instead use:</p> <pre class="brush: js noskimlinks noskimwords">"kriswallsmith/assetic": "1.0.3"</pre> <p>You can even combine the two approaches, like so:</p> <pre class="brush: js noskimlinks noskimwords">"kriswallsmith/assetic": "1.0.*"</pre> <p>This will allow any minor update to automatically be included, but will not upgrade to 1.1.0, as that might have some interface changes a developer will need to watch out for.</p> <hr> <h2> <span>Step 3</span> - Installation Requirements</h2> <p>Now that you have one or more packages listed within your <code>composer.json</code>, you can run:</p> <pre class="brush: bash noskimlinks noskimwords">$ php composer.phar install</pre> <p>...Or, if you've used my trick to shorten it on Unix machines (see above):</p> <pre class="brush: bash noskimlinks noskimwords">$ composer install</pre> <p>You'll now notice files being downloaded and placed into a new <code>vendors/</code> folder within the root of your application. This logic can be changed, using the following configuration option:</p> <pre class="brush: js noskimlinks noskimwords">{ "require": { "kriswallsmith/assetic": "1.0.*" }, "config" : { "vendor-dir" : "packages" } }</pre> <hr> <h2> <span>Step 4 - </span> Autoloading</h2> <blockquote class="pullquote"><p> Autoloading in PHP has been a bit of a mess for some time.</p></blockquote> <p>Autoloading in PHP has been a bit of a mess for some time, as every developer has his or her own ways of handling things. Some packages, like <a href="">Smarty</a>, use their own autoloading, some developers place multiple classes into one file or have lower-case file names - it's all very random.</p> <p>PSR-0 is a standard, created by the PHP Standards Group, to calm this mess down; Composer will work with it by default.
Composer bundles with a PSR-0 autoloader, which you can include in your project with only a single line:</p> <pre class="brush: php noskimlinks noskimwords">include_once './vendor/autoload.php';</pre> <p><em>Obviously, if you changed the vendor directory, you'll need to update that.</em></p> <p>You can now use the code in your applications:</p> <pre class="brush: php noskimlinks noskimwords"><?php();</pre> <p>This is an example of Assetic in use. Yes, there is a lot of namespace code in there, but this is done to avoid conflicts between packages. The naming convention for PSR-0 is essentially:</p> <pre class="brush: bash noskimlinks noskimwords">\<Vendor Name>\(<Namespace>\)*<Class Name></pre> <p>Another example might be the Buzz HTTP package, which looks like so:</p> <pre class="brush: php noskimlinks noskimwords">$browser = new Buzz\Browser; $response = $browser->get(''); echo $browser->getLastRequest()."\n"; echo $response;</pre> <p>That might look like a glorified <code>file_get_contents()</code>, but it handles all sorts of smart logic in the background for working with HTTP Response/Request - and you can see the namespace syntax is a little less intense.</p> <hr> <h2> <span>Step 5 -</span> Real World</h2> <blockquote class="pullquote"><p> If you want to be really clever, you can automate the whole process. </p></blockquote> <p.</p> <p>That version then sits with your code as a static file, which, at some point, you may or may not remember to upgrade - <strong>IF</strong> you notice that Facebook have released an updated version. The new version of the file goes over the top and you push those new changes, too.</p> <p>You <em>can</em> use Composer to avoid needing to pay attention to the versions, and just run an update, and commit all the changes. But why have loads of code in your repository that you don't need to have in there? 
</p> <p>The neatest solution is to add <code>vendors/</code> to your "Ignore" list (e.g., .gitignore) and keep your code out of there entirely. When you deploy code to your hosts, you can just run <code>composer install</code> or <code>composer update</code>.</p> <p>If you want to be really clever, you can automate the whole process, so if you have <a href="">hosting in the cloud</a>, you can set up hooks to run <code>composer install</code> as soon as your new code is pushed!</p> <hr> <h2>Summary</h2> <p>You'll start to see a lot more of Composer going forward, as various PHP frameworks have begun providing various levels of integration; <a href="">FuelPHP</a> will be built as Composer packages, <a href="">CodeIgniter</a> will support autoloading, and <a href="">Symfony2</a> is already using it extensively.</p> <p>Composer is a great way to add dependencies to your projects without needing to install PECL extensions or copy and paste a bunch of files. That way of doing things is extremely outdated, and requires too much of a developer's time.</p> 2012-06-25T14:21:26.000Z Philip Sturgeon
https://code.tutsplus.com/categories/composer.atom
Visual Studio 2008's release has come and gone, and I can confidently say that I am fully overwhelmed. As of this writing, I've had almost no time to dive into new language features like LINQ, let alone such major paradigm shifts as WPF. I'm approaching this article as a platform to learn WPF myself, but I hope you find it instructional as well. I decided to use the mathematical concept of an epicycloid as the subject for my first attempt at WPF. Most people will remember epicycloids from their youth as Spirographs. I had an endless fascination drawing Spirograph artwork as a kid, so it seems fitting to resurrect the memory for a graphics-related article. For those not already familiar with what Windows Presentation Foundation is, WPF is Microsoft's next generation platform for building Windows user interfaces. The beauty of WPF is that, for the first time, the presentation and layout that defines a Windows UI is decoupled from the code. The UI is rendered using XAML, WPF's native XML-based markup, while code remains in *.cs files. Most importantly, WPF should make most UI tasks exponentially easier. We'll explore that claim throughout this article. The separation of XAML and code is essentially the same as the separation between HTML markup and an ASP.NET code-behind file. This is a far cry from the visually limiting, drag and drop interface left over even from the VB 5 days. The clear benefit is that, with tools like Expressions, an application's UI can be designed by a graphics team while a development team remains focused on development. This is especially important for someone like me considering some of the terrible UIs I've designed in my career. Despite my lack of design talent, WPF should provide a hefty UI advantage. Let's take a crack at putting together a decent interface. The first stop along the path is adding a nice background.
In years past, developers had to subscribe to or override a form's paint event, do some not-so-fun clipping calculations, and then manually paint their image to get what WPF offers in just a couple lines of XAML. Here's the XAML for the tiled background:

<Grid>
    <Grid.Background>
        <ImageBrush Viewport="0,0,0.3,0.3" Viewbox="0,0,1,1" Stretch="None" TileMode="FlipXY" ImageSource="background.jpg" />
    </Grid.Background>
</Grid>

Isn't it nice how the XAML almost speaks for itself? The main (parent) element on my form is a grid. I've added an ImageBrush to the grid's background. The image is set to tile by flipping every other tile horizontally (have fun coding that without WPF). The Viewbox property simply represents the area of the original image to be displayed. The Viewport property is the target area where the Viewbox will be displayed. In other words, the XAML above tells the WPF Framework which portion of the image should be tiled. Notice that as you change the Viewport values in the XAML, the UI representation within Visual Studio 2008 dynamically updates! Ok, the background was easy in just a couple lines of markup. What we need now is a general window layout. I'm aiming for two columns with controls on the left and the Spirograph's drawing surface on the right; something like this: To pull this off, I'm using a combination of Grid, StackPanel, and DockPanel objects. The dock panel is the primary container. That's simple enough: I want to dock the controls to the left and have the drawing surface fill the rest of the window. One hangup I found is that there's no "Fill" property in WPF, which I'd come to rely on up through the 2.0 Framework. I did find, however, that the DockPanel control has a handy property called LastChildFill which does exactly what I need.
Here's the abbreviated XAML:

<DockPanel LastChildFill="True" Margin="20,20,5,5" Name="dockPanel1">
    <StackPanel Name="stackPanel1" Width="200" DockPanel. ... </StackPanel>
    <DockPanel LastChildFill="True" Margin="20,0,15,15" Name="stackPanel2" DockPanel. ... </DockPanel>
</DockPanel>

Once again, terribly simple. We now have a left-docked panel and a filled right area. I thought it would look nice for the controls to sit on top of a semi-transparent rectangle. Once again, without using WPF this would be a time-consuming task. And once again, WPF makes short work of designing the UI. Take a look at the following XAML that renders a semi-transparent rectangle with rounded corners:

<Rectangle Width="200" Opacity="0.5" Stroke="Silver" Fill="White" RadiusY="10" RadiusX="10" Height="260" />

That's it. One line. It's still sinking in how easily this is coming together. Note that in the source code, you'll see this Rectangle object inside a Canvas element. The quick explanation is that the Canvas element allows for controls to overlap one another. Next I thought I'd try customizing a standard button. I wanted a button with a PNG image on the left edge. This is one task that isn't difficult in previous versions of .NET and Visual Studio, so let's put the XAML to the test and see if it's either easier or more customizable in WPF. Here's the XAML for an image button.

<Button Name="drawButton" Margin="10,10,0,10" Width="80" Height="23" HorizontalAlignment="Right">
    <StackPanel Width="Auto" Height="Auto" HorizontalAlignment="Left" Orientation="Horizontal">
        <Image Height="16" Width="16" Stretch="Fill" Source="draw.png" />
        <TextBlock Margin="10,0,50,0" Text="Draw" FontSize="12" VerticalAlignment="Center" />
    </StackPanel>
</Button>

Did you notice that the button is acting as a host for child controls? The implications are endless!
Unlike a standard WinForms button, this WPF button is as feature-rich as any other drawn element in the WPF Framework. Ever wanted a button with an integrated drop-down list? Me neither, but it's possible. Hopefully by now you're seeing how much power is in so little markup. Now for the fun part. This is where we get to build a custom control for drawing the Spirograph. As it turns out, the Canvas element has all the drawing methods and context needed to draw or animate a Spirograph. Let's take a look at how a Canvas object can be extended for custom drawing.

public class GraphContext : Canvas
{
    protected override void OnRender(DrawingContext drawingContext) {…}
    protected override void OnRenderSizeChanged(SizeChangedInfo sizeInfo) {…}
}

Nothing fancy here except that overriding the OnRender method gives us access to the DrawingContext object. You can liken this to the Graphics object in previous versions of .NET. We'll use the DrawingContext to add custom shapes to the canvas for custom rendering and animation. I won't belabor how to extend a control; that's just standard OOP practice. Instead, let's look at how we can get this custom control onto our window. Once again, it's just a little bit of XAML. This time, however, we need to add a little definition about the control using standard XML namespacing.

<Window xmlns:CodeProjectExample="clr-namespace:WpfApplication1" x: ...
<CodeProjectExample:GraphContext Background="Black" x: ... </CodeProjectExample:GraphContext>

This xmlns declaration tells WPF to include the WpfApplication1 namespace using a CodeProjectExample prefix. This is a bit like the section of an ASP.NET page if you're not as familiar with XML. The XAML to reference the custom element couldn't be easier. It is a simple combination of the prefix we defined and the class name of the element. Any additional properties can be defined inline. In this case, I'm setting the drawing area to black.
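Before moving on to the drawing code, it is worth noting what the sample never shows: how the Spirograph's points are computed in the first place. As a hedged aside (this is the standard textbook parametrization of an epicycloid, not math taken from this article's source code), a point on a circle of radius r rolling around the outside of a fixed circle of radius R traces:

```latex
x(\theta) = (R + r)\cos\theta - r\cos\!\left(\frac{R + r}{r}\,\theta\right),
\qquad
y(\theta) = (R + r)\sin\theta - r\sin\!\left(\frac{R + r}{r}\,\theta\right)
```

Sampling theta at small increments and connecting consecutive (x, y) samples yields exactly the kind of line-segment list that the drawing code below consumes.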
Drawing shapes in WPF isn't necessarily easier than in previous versions of .NET, but there are some nice amenities in the WPF Framework, namely Shape UI elements. Since an epicycloid (like any mathematical graph) is essentially a series of connected X and Y points, we can draw a collection of lines to visually represent the graph. Here's how the line segments are generated:

private void DrawStaticGraph(DrawingContext drawingContext)
{
    // PathGeometry is a nice alternative to drawingContext.DrawLine(...) as it
    // allows the points to be rendered as an image that can be further manipulated
    PathGeometry geometry = new PathGeometry();

    // Add all points to the geometry
    foreach (Points pointXY in _points)
    {
        PathFigure figure = new PathFigure();
        figure.StartPoint = pointXY.FromPoint;
        figure.Segments.Add(new LineSegment(pointXY.ToPoint, true));
        geometry.Figures.Add(figure);
    }

    // Add the first point to close the gap from the graph's end point
    // to graph's start point
    PathFigure lastFigure = new PathFigure();
    lastFigure.StartPoint = _points[_points.Count - 1].FromPoint;
    lastFigure.Segments.Add(new LineSegment(_firstPoint, true));
    geometry.Figures.Add(lastFigure);

    // Create a new drawing and drawing group in order to apply
    // a custom drawing effect
    GeometryDrawing drawing = new GeometryDrawing(this.Pen.Brush, this.Pen, geometry);
    DrawingGroup drawingGroup = new DrawingGroup();
    drawingGroup.Children.Add(drawing);
    ...
}

Let me take a minute to explain that there are a couple ways of drawing lines (specifically the DrawLine method on the DrawingContext object), but there's a purpose to this code. We'll get to that point shortly. For now, let's take note of a couple important points. Notice that we have a PathGeometry object to which we're adding a collection of PathFigure objects. Finally, the geometry object is added to a DrawingGroup. Read between the lines and you'll see that WPF is capable of rendering any number of shapes or figures in one fell swoop.
I find that impressive. Like I said, there's a point to the verbose code above. I didn't just want to draw an epicycloid, I wanted to draw it and make it look really smooth. That's where WPF REALLY starts to shine. I don't even want to begin to imagine how to add a blur effect to individual line segments in .NET 2.0. It would take plenty of custom code. The theme of this article is how simply WPF can achieve stunning visual effects. Here are the three lines of code needed to soften the entire Spirograph.

BlurBitmapEffect blurEffect = new BlurBitmapEffect();
blurEffect.Radius = Softness;
drawingGroup.BitmapEffect = blurEffect;

I wish I had more code to show for this snippet, but that's it. Splendid, if you ask me! There are dozens of effects in the WPF Framework, all of which are customizable both in XAML and in code. In most cases, it only takes a few lines of code like this to apply an effect to part or all of an image. All that's left to render the graph is to call the Add() method to add the drawing to our custom canvas. That's it! WPF has a bunch of built-in features to support animation. Unfortunately, I haven't found any yet that can paint a series of lines in succession. Until I find a way to do that, the animation in this sample is facilitated by a standard timer. I will say, however, that the ability to add and remove rendering elements from the canvas at run time makes animating the Spirograph extremely simple, and there are fewer overall pieces that have to be buffered and redrawn. I didn't find the animation solution I was looking for, but I'm still impressed! I really love how simple a custom UI can be with WPF. It's a very powerful framework, and we've just scratched the surface.
http://www.codeproject.com/Articles/22003/WPF-and-NET-3-5-Drawing-Customized-Controls-and-Cu?PageFlow=FixedWidth
14 October 2008 08:43 [Source: ICIS news] By Prema Viswanathan SINGAPORE (ICIS news)--Saudi Ethylene & Polyethylene Co (SEPC) expects to begin commercial production at its new 1m tonne/year cracker and 400,000 tonne/year high density polyethylene (HDPE) plant in Al-Jubail, Saudi Arabia, in early November, a source close to the project said on Tuesday. A 400,000 tonne/year low density PE (LDPE) plant at the same complex would also start up in early December, the source added. Besides the local Gulf Cooperation Council (GCC) market, the company planned to export the PE output from the new facility to the rest of the Middle East, Africa and south The complex would have spare ethylene capacity of 200,000 tonnes/year for export even after both PE plants went on stream, a second source had said last month. The cracker would also produce 284,800 tonnes/year of propylene, which would be sold under a long-term sales agreement to Saudi Polyolefins Co (SPC) to feed its new 250,000 tonne/year polypropylene (PP) plant in Al-Jubail, the company said. SPC's new PP plant is expected to start up in January 2009 alongside its existing 450,000 tonne/year PP plant at Al-Jubail. SEPC is a joint venture of Sahara Olefins Co with 24.4% equity, National Industrialisation Co (Tasnee) with 50.6% and LyondellBasell holding the remaining 25%. SPC is a joint venture of Tasnee and LyondellBasell.
http://www.icis.com/Articles/2008/10/14/9163492/sepc-plans-al-jubail-cracker-hdpe-production.html
Fast Fourier Transform (fft)

The fft module in liquid implements fast discrete Fourier transforms, including forward and reverse DFTs as well as real even/odd transforms.

Complex Transforms

Given a vector of complex time-domain samples \(\vec{x} = \left[x(0),x(1),\ldots,x(N-1)\right]^T\), the \(N\)-point forward discrete Fourier transform is computed as:$$ X(k) = \sum_{i=0}^{N-1}{x(i) e^{-j 2 \pi k i/N}} $$ Similarly, the inverse (reverse) discrete Fourier transform is:$$ x(n) = \sum_{i=0}^{N-1}{X(i) e^{ j 2 \pi n i/N}} $$ Internally, liquid uses several algorithms for computing FFTs, including the standard decimation-in-time (DIT) for power-of-two transforms {cite:Ziemer:1998(Section 10-4)}, the Cooley-Tukey mixed-radix method for composite transforms {cite:CooleyTukey:1965}, Rader's algorithm for prime-length transforms {cite:Rader:1968}, and the DFT given by [eqn-fft-dft] for very small values of \(N\). The DFT requires \(\ord\bigl(N^2\bigr)\) operations and can be slow for even moderate sizes of \(N\), which is why it is typically reserved for small transforms. liquid's strategy for computing FFTs is to recursively break the transform into manageable pieces and perform the best method for each step. For example, a transform of length \(N=128=2^7\) can be easily computed using the standard DIT FFT algorithm, which is computationally fast. The Cooley-Tukey algorithm permits any factorable transform of size \(N=PQ\) to be computed with \(P\) transforms of size \(Q\) and \(Q\) transforms of size \(P\). For example, a transform of length \(N=126\) can be computed using the Cooley-Tukey algorithm with radices \(P=9\) and \(Q=14\). Furthermore, each of these transforms can be further split using the Cooley-Tukey algorithm (e.g. \(9=3\cdot3\) and \(14=2\cdot7\)). The smallest resulting transforms can finally be computed using the DFT algorithm without much penalty.
For large transforms of prime length, liquid uses Rader's algorithm {cite:Rader:1968}, which permits any transform of prime length \(N\) to be computed using an FFT and an IFFT each of length \(N-1\). For example, Rader's algorithm can compute a 127-point transform using the 126-point Cooley-Tukey transform (and its inverse) described above. (Footnote: Rader actually gives an alternate algorithm by which any transform of prime length \(N\) can be computed with an FFT and an IFFT of any length greater than \(2N-4\). For example, the 127-point FFT could also be computed using computationally efficient 256-point DIT transforms. liquid includes both algorithms and chooses the most appropriate one for the task.) Through recursion, a transform of any size can be decomposed into either computationally efficient DIT FFTs, or combinations of small DFTs. Consequently, liquid can compute any transform in \(\ord\bigl(n\log(n)\bigr)\) operations. Even still, liquid will use the fftw3 library {cite:fftw:web} for internal methods if it is available. The presence of fftw3.h and libfftw3 is detected by the configure script at build time. If found, liquid will link against fftw for better performance (it is, however, the fastest FFT in the west, you know). If fftw is unavailable, however, liquid will use its own, slower FFT methods for internal processing. This eliminates libfftw as an external dependency, but takes advantage of it when available. An example of the interface for computing complex discrete Fourier transforms is listed below. Notice the stark similarity to libfftw3's interface.
#include <liquid/liquid.h>

int main() {
    // options
    unsigned int n=16;  // input data size
    int flags=0;        // FFT flags (typically ignored)

    // allocated memory arrays
    float complex * x = (float complex*) malloc(n * sizeof(float complex));
    float complex * y = (float complex*) malloc(n * sizeof(float complex));

    // create FFT plan
    fftplan q = fft_create_plan(n, x, y, LIQUID_FFT_FORWARD, flags);

    // ... initialize input ...

    // execute FFT (repeat as necessary)
    fft_execute(q);

    // destroy FFT plan and free memory arrays
    fft_destroy_plan(q);
    free(x);
    free(y);
}

Real even/odd DFTs

liquid also implements real even/odd discrete Fourier transforms; however, these are not guaranteed to be efficient. A list of the transforms and their descriptions is given below.

FFT_REDFT00 (DCT-I):$$ X(k) = \frac{1}{2}\Bigl( x(0) + (-1)^k x(N-1) \Bigr) + \sum_{n=1}^{N-2}{x(n) \cos\left(\frac{\pi}{N-1}nk\right) } $$

FFT_REDFT10 (DCT-II):$$ X(k) = \sum_{n=0}^{N-1}{ x(n) \cos\left[ \frac{\pi}{N}\left(n + 0.5\right)k \right] } $$

FFT_REDFT01 (DCT-III):$$ X(k) = \frac{x(0)}{2} + \sum_{n=1}^{N-1}{ x(n) \cos\left[ \frac{\pi}{N}n\left(k + 0.5\right) \right] } $$

FFT_REDFT11 (DCT-IV):$$ X(k) = \sum_{n=0}^{N-1}{ x(n) \cos\left[ \frac{\pi}{N} \left(n+0.5\right) \left(k+0.5\right) \right] } $$

FFT_RODFT00 (DST-I):$$ X(k) = \sum_{n=0}^{N-1}{ x(n) \sin\left[ \frac{\pi}{N+1}(n+1)(k+1) \right] } $$

FFT_RODFT10 (DST-II):$$ X(k) = \sum_{n=0}^{N-1}{ x(n) \sin\left[ \frac{\pi}{N}(n+0.5)(k+1) \right] } $$

FFT_RODFT01 (DST-III):$$ X(k) = \frac{(-1)^k}{2}x(N-1) + \sum_{n=0}^{N-2}{ x(n) \sin\left[ \frac{\pi}{N}(n+1)(k+0.5) \right] } $$

FFT_RODFT11 (DST-IV):$$ X(k) = \sum_{n=0}^{N-1}{ x(n) \sin\left[ \frac{\pi}{N}(n+0.5)(k+0.5) \right] } $$

An example of the interface for computing a discrete cosine transform of type-III (FFT_REDFT01) is listed below.
#include <liquid/liquid.h>

int main() {
    // options
    unsigned int n=16;              // input data size
    int type = LIQUID_FFT_REDFT01;  // DCT-III
    int flags=0;                    // FFT flags (typically ignored)

    // allocated memory arrays
    float * x = (float*) malloc(n * sizeof(float));
    float * y = (float*) malloc(n * sizeof(float));

    // create FFT plan
    fftplan q = fft_create_plan_r2r_1d(n, x, y, type, flags);

    // ... initialize input ...

    // execute FFT (repeat as necessary)
    fft_execute(q);

    // destroy FFT plan and free memory arrays
    fft_destroy_plan(q);
    free(x);
    free(y);
}
http://liquidsdr.org/doc/fft/
Form 2350 (2007)
Application for Extension of Time To File U.S. Income Tax Return
For U.S. Citizens and Resident Aliens Abroad Who Expect To Qualify for Special Tax Treatment
Department of the Treasury, Internal Revenue Service. OMB No. 1545-0074.
See instructions on page 3. Please print or type.

Your first name and initial ____    Taxpayer's social security number ____

3   Will you need additional time to allocate moving expenses?  Yes / No
4a  Date you first arrived in the foreign country ____
4b  Date qualifying period begins ____ ; ends ____
4c  Your foreign home address ____
4d  Date you expect to return to the United States ____
Note. This is not an extension of time to pay tax. Full payment is required to avoid interest and late payment charges.
5   Enter the amount of income tax paid with this form ____

Signature and Verification
Under penalties of perjury, I declare that I have examined this form, including accompanying schedules and statements, and to the best of my knowledge and belief, it is true, correct, and complete; and, if prepared by someone other than the taxpayer, that I am authorized to prepare this form.
Signature of taxpayer, Date. Signature of spouse, Date. Signature of preparer other than taxpayer, Date.

To Be Completed by the IRS (Do not detach)
We have approved your application.
We have not approved your application. However, we have granted a 45-day grace period to ____. This grace period ____
Director, Date

Return Label (Please print or type)

If you wish to make a payment, you can pay by electronic funds withdrawal (see page 4) or send your check or money order to the address shown under Where To File below.

File a Paper Form 2350
If you wish to file on paper instead of electronically, fill in the Form 2350 and mail it to the address shown under Where To File below.

Caution: If you do not pay the amount due by the regular due date (April 15, 2008, for a calendar year return), you will owe interest and may be charged penalties. For details, see Filing Your Tax Return, which begins on this page.
"Out of the country" means that on the regular due date of your return, either (a) you live outside the United States and Puerto Rico and your main place of work is outside the United States and Puerto Rico, or (b) you are in military or naval service on duty outside the United States and Puerto Rico. You do not have to file a form to get the 2-month extension because you were out of the country, but you will have to attach a statement to your tax return explaining how you qualified.

Where To File
File Form 2350 either by mailing it to the Department of the Treasury, Internal Revenue Service, Austin, TX 73301-0215, or by giving it to a local IRS representative or other IRS employee.

Period of Extension
If you are given an extension, it will generally be to a date 30 days after the date on which you expect to meet either the bona fide residence test or the physical presence test. But if you must allocate moving expenses (see Pub. 54), you may be given an extension to 90 days after the end of the year following the year you moved to the foreign country.

Gift or generation-skipping transfer (GST) tax return (Form 709). An extension of time to file your 2007 calendar year income tax return also extends the time to file Form 709 for 2007. However, it does not extend the time to pay any gift or GST tax you may owe for 2007. To make a payment of gift or GST tax, see Form 8892. If you do not pay the amount due by the regular due date for Form 709, you will owe interest and may also be charged penalties. If the donor died during 2007, see the instructions for Forms 709 and 8892.

Note. If you file your return after the regular due date, you cannot have the IRS figure your tax.

Filing Your Tax Return
You may file Form 1040 any time before the extension expires.

Late payment penalty. The penalty is usually ½ of 1% of any tax (other than estimated tax) not paid by April 15, 2008 (for a calendar year return), or June 16, 2008, if you have 2 extra months to file your return because you were "out of the country." It is charged for each month or part of a month the tax is unpaid.
The maximum penalty is 25%. You might not owe this penalty if you have a good reason for not paying on time. Attach a statement to your return, not to the Form 2350, explaining the reason.

You can get Pub. 54 by writing to: National Distribution Center, P.O. Box 8903, Bloomington, IL 61702-8903. You can also download Pub. 54 (and other forms and publications) from the IRS website.

When To File
File Form 2350 on or before the due date of your Form 1040. For a 2007 calendar year return, this is April 15, 2008. However, if you have 2 extra months to file your return because you were "out of the country" (defined later), file Form 2350 on or before June 16, 2008. You should file Form 2350 early enough so that if it is not approved, you can still file your return on time.

Late filing penalty. A penalty is usually charged if your return is filed after the due date (including extensions). It is usually 5% of the tax not paid by the regular due date for each month or part of a month your return is late. Generally, the maximum penalty is 25%. If your return is more than 60 days late, the minimum penalty is $100 or the balance of tax due on your return, whichever is smaller. You might not owe the penalty if you have a good reason for filing late. Attach a statement to your return, not Form 2350, explaining the reason.

How to claim credit for payment made with this form. When you file Form 1040, enter any income tax payment (line 5) sent with Form 2350 on Form 1040, line 69.

Line 1. If you plan to qualify for the bona fide residence test, enter the date that is one year and 30 days (90 days if allocating moving expenses) from the 1st day of your next full tax year (from January 1, 2008, for a calendar year return). If you plan to qualify under the physical presence test, enter the date that is twelve months and 30 days (90 days if allocating moving expenses) from your first full (24 hour) day in the foreign country.

Line 4a.
Enter the day, month, and year of your arrival in the foreign country.

Line 4b. The beginning date of the qualifying period is the first full (24 hour) day in the foreign country, usually the day after arrival. The ending date is the date you will qualify for special tax treatment by meeting the physical presence or bona fide residence test.

Line 4c. Enter the physical address where you are currently living in the foreign country.

Line 4d. Date you expect to return to the United States. If you have no planned date, leave this line blank.

Bona fide residence test. To meet this test, you must be a U.S. citizen who is a bona fide resident of a foreign country (or countries) for an uninterrupted period that includes an entire tax year. A U.S. resident alien who is a citizen or national of a country with which the United States has an income tax treaty also may meet this test.

Physical presence test. To meet this test, you must be a U.S. citizen or resident alien who is physically present in a foreign country (or countries) for at least 330 full days during any 12-month period.

Signature and Verification
This form must be signed. If you plan to file a joint return, both of you should sign. If there is a good reason why one of you cannot, the other spouse may sign for both. Attach a statement explaining why the other spouse cannot sign.

Others who can sign for you. Anyone with a power of attorney can sign. But the following can sign for you without a power of attorney:
● Attorneys, CPAs, and enrolled agents.
● A person in a close personal or business relationship to you who is signing because you cannot. There must be a good reason why you cannot sign, such as illness or absence. Attach an explanation.

Notice to Applicant and Return Label
You must complete the Return Label to receive the Notice to Applicant. We will use it to tell you if your application is approved. Do not attach the notice to your return—keep it for your records. If the post office does not deliver mail to your street address, enter your P.O. box number instead.
How To Make a Payment With Your Extension

Paying by Electronic Funds Withdrawal
You can e-file Form 2350 and make a payment by authorizing an electronic funds withdrawal from your checking or savings account. If you provide incomplete or false information, you may be liable for interest and penalties. You also authorize the financial institutions involved in the processing of the electronic payment of taxes to receive confidential information necessary to answer inquiries and resolve issues related to the payment.

Paying by Check or Money Order
● Write "2007 Form 2350" on your check or money order.
● Do not staple or attach your payment to the form.
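The two penalty rules described in these instructions (½ of 1% per month for late payment, and 5% per month for late filing, each capped at 25%, with a minimum for returns more than 60 days late) reduce to simple arithmetic. The Python sketch below is only an illustration of that arithmetic, not tax software: the function names are mine, months are taken as whole "month or part of a month" counts, and interest and the good-reason waiver are ignored.

```python
def late_payment_penalty(unpaid_tax, months_late):
    # 1/2 of 1% of the unpaid tax per month (or part of a month), capped at 25%.
    return unpaid_tax * min(0.005 * months_late, 0.25)

def late_filing_penalty(unpaid_tax, months_late, days_late):
    # 5% of the tax not paid by the due date per month (or part of a month),
    # capped at 25%. If the return is more than 60 days late, the minimum
    # penalty is the smaller of $100 or the balance of tax due.
    penalty = unpaid_tax * min(0.05 * months_late, 0.25)
    if days_late > 60:
        penalty = max(penalty, min(100.0, unpaid_tax))
    return penalty

print(late_payment_penalty(1000.0, 3))      # → 15.0
print(late_filing_penalty(1000.0, 2, 45))   # → 100.0
print(late_filing_penalty(30.0, 7, 200))    # → 30.0
```

The last call shows the "whichever is smaller" minimum in action: 25% of $30 is only $7.50, but a return more than 60 days late owes at least min($100, $30) = $30.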
https://www.scribd.com/document/534984/US-Internal-Revenue-Service-f2350-accessible
Featured Blog Images In Gatsby.js

Updated on: April 08, 2018

With Gatsby.js, it's pretty easy to get a static site up and running with one of their starter templates. The gatsby-starter-blog demonstrates how a Gatsby static site can function with blog posts written in markdown files. There is a list of blog posts on the homepage, but it would be nice to see a featured image with each post. Let's dive into the gatsby-starter-blog and associate a featured or cover image with each markdown post.

Gatsby Featured Image Demo (view source)

Start by installing the gatsby-starter-blog official starter:

```shell
gatsby new gatsby-blog
cd gatsby-blog
gatsby develop
```

Open up your browser and head to http://localhost:8000 to see the starter Gatsby site. You can see three blog posts on the homepage, but no images are associated with them. Let's fix that!

Exploring the file structure and plugins

First, let's explore how this site is working right now. The file structure of this site looks like this:

Gatsby Starter Blog File Structure

We can see that all the blog posts live in the /src/pages directory, in their own separate folders. There is an image in the hello-world post, but that is not a featured image that we can query on the homepage. The index.js that's in /src/pages is the React component for the homepage.
If we open up gatsby-config.js, we can see the plugins used for processing the /src/pages directory:

```js
plugins: [
  {
    resolve: `gatsby-source-filesystem`,
    options: {
      path: `${__dirname}/src/pages`,
      name: 'pages',
    },
  },
  {
    resolve: `gatsby-transformer-remark`,
    options: {
      plugins: [
        {
          resolve: `gatsby-remark-images`,
          options: {
            maxWidth: 590,
          },
        },
        {
          resolve: `gatsby-remark-responsive-iframe`,
          options: {
            wrapperStyle: `margin-bottom: 1.0725rem`,
          },
        },
        'gatsby-remark-prismjs',
        'gatsby-remark-copy-linked-files',
        'gatsby-remark-smartypants',
      ],
    },
  },
  `gatsby-transformer-sharp`,
  `gatsby-plugin-sharp`,
],
```

The first plugin is gatsby-source-filesystem, and the path /src/pages is added there as an option. This tells Gatsby to look in that directory and make the files there available to query in GraphQL. Since our markdown files are in there, the second plugin, gatsby-transformer-remark, will parse the markdown files into usable data nodes for GraphQL. The last two plugins, gatsby-transformer-sharp and gatsby-plugin-sharp, are what we'll need later to process and query our featured images.

Adding a featured image to a blog post

Open up the markdown file for the latest blog post, titled "New Beginnings", at /src/pages/hi-folks/index.md. At the top, you'll see some info for the post, known as "frontmatter":

```markdown
---
title: New Beginnings
date: "2015-05-28T22:40:32.169Z"
---
```

Here we see the title of the post and the date published. We can add a path to our featured image here like so:

```markdown
---
title: New Beginnings
date: "2015-05-28T22:40:32.169Z"
featuredImage: "./featured-image.jpg"
---
```

Head over to your favorite free stock photo site and download an image, saving it as featured-image.jpg in our post directory /src/pages/hi-folks. Since we're here, might as well add images to the other two posts.

*Update* - You'll actually need a featuredImage for all of your posts right now, or else you'll get the error "Cannot read property 'childImageSharp' of null" when trying to render your page.
Go ahead and download two more images, and save them in the other two blog posts at /src/pages/hello-world and /src/pages/my-second-post. Make sure to save all the images with the name featured-image.jpg. Open up their respective index.md files and add featuredImage: "./featured-image.jpg" to the frontmatter. Double check your image names and make sure they match what's in your frontmatter! You can always save the images under a different name, just make sure your frontmatter reflects this.

Displaying the featured image in the homepage query

In order to display our new featured image, we're going to install a Gatsby React component called gatsby-image. From the terminal, let's stop our gatsby develop process and then install the gatsby-image component:

```shell
yarn add gatsby-image
```

Check out my previous post about gatsby-image to learn more about it. Open up the homepage component at /src/pages/index.js. If we go to the bottom of this React component, we'll see the GraphQL query:

```js
export const pageQuery = graphql`
  query IndexQuery {
    site {
      siteMetadata {
        title
      }
    }
    allMarkdownRemark(sort: { fields: [frontmatter___date], order: DESC }) {
      edges {
        node {
          excerpt
          fields {
            slug
          }
          frontmatter {
            date(formatString: "DD MMMM, YYYY")
            title
          }
        }
      }
    }
  }
`
```

Under the allMarkdownRemark field, you'll see the frontmatter data we looked at earlier. We will add the featuredImage field here just below the title, so your query now looks like this:

```js
export const pageQuery = graphql`
  query IndexQuery {
    site {
      siteMetadata {
        title
      }
    }
    allMarkdownRemark(sort: { fields: [frontmatter___date], order: DESC }) {
      edges {
        node {
          excerpt
          fields {
            slug
          }
          frontmatter {
            date(formatString: "DD MMMM, YYYY")
            title
            featuredImage {
              childImageSharp {
                sizes(maxWidth: 630) {
                  ...GatsbyImageSharpSizes
                }
              }
            }
          }
        }
      }
    }
  }
`
```

The childImageSharp part is using the two Gatsby sharp plugins we mentioned earlier to process the images.
...GatsbyImageSharpSizes is from the gatsby-image component we installed earlier; you can learn more about the different image types available from the official gatsby-image page. Since our featured image will be fluid, we are using the sizes option with a maxWidth of 630px to match our content's container.

Let's start up gatsby develop again and add our image. Up at the top of /src/pages/index.js, we'll import the gatsby-image component right below the Bio component:

```js
import Bio from '../components/Bio'
import Img from 'gatsby-image'
import { rhythm } from '../utils/typography'
```

In our posts.map() function, we'll add the <Img /> component right below the <h3> and pass it the props for sizes. Your return() statement should now look like this:

```js
return (
  <div>
    <Helmet title={siteTitle} />
    <Bio />
    {posts.map(({ node }) => {
      const title = get(node, 'frontmatter.title') || node.fields.slug
      return (
        <div key={node.fields.slug}>
          <h3
            style={{
              marginBottom: rhythm(1 / 4),
            }}
          >
            <Link style={{ boxShadow: 'none' }} to={node.fields.slug}>
              {title}
            </Link>
          </h3>
          <Img sizes={node.frontmatter.featuredImage.childImageSharp.sizes} />
          <small>{node.frontmatter.date}</small>
          <p dangerouslySetInnerHTML={{ __html: node.excerpt }} />
        </div>
      )
    })}
  </div>
)
```

You should now see the featured images show up in the query of posts! However, if we click into a blog post, the featured image doesn't show up. No worries: the component for a blog post is located at /src/templates/blog-post.js. If you open up that file, it will look pretty similar to the index.js component. You can do the same thing here: add the featuredImage field to the GraphQL query at the bottom, import the gatsby-image component, and add the <Img /> tag on the page. Now your blog has sweet featured images that are optimized and will lazy load in!
https://codebushi.com/gatsby-featured-images/
This article is intended as a brief introduction to some of the more basic elements comprising .NET development. I will create and describe five simple applications for .NET, using nothing more than the .NET Framework SDK and a text editor. I will be playing off of the ever popular "hello, world" theme. The language that I will be using in the article is C#.

I have written a number of articles on .NET programming recently, and a common piece of feedback that I have received is that .NET is overly complicated to develop in, and that it is much easier to code with language X and Notepad. On the face of it, this might seem like a valid criticism given Visual Studio's size and complexity, but it is not the language or even the framework that is to blame. Microsoft's C# and Sun's Java are remarkably similar in their semantics and high-level design, at least in what the typical developer works with, as opposed to what concerns language purists, who can tell you the difference between the CLR and the JVM. The part of the .NET development process that is most difficult and most different is the IDE, Visual Studio. Powerful? Definitely, but the learning curve is steep, and even simple things drag along a considerable amount of unnecessary cruft when the IDE is used. I decided to strike back against complex, visually breathtaking IDE-based development using only "primitive" tools: a command-line and a text editor.

The article is divided into five "hello, world" tutorials covering command-line development using .NET. A basic understanding of .NET, the command-line, and working in Windows is assumed; there are plenty of good resources available to help you get up to speed on these topics.

OK, I will admit that when I first sat down to write this section, I was intending to simply state the requirements and leave the getting and installation to the reader as an exercise. However, when I sat down at my wife's computer and tested out the process, it quickly became apparent that more was required.
There are basically four required pieces to doing .NET development using the command-line: IIS, the .NET Framework and SDK, a text editor, and a web browser.

IIS

To install IIS on your PC, use Add/Remove Windows Components in the Control Panel. After you install a major component like IIS, it is a really good idea to run Windows Update.

The .NET Framework and SDK

If you have Visual Studio 2003 installed, you will not need to install the Framework separately. Instead of a regular command-line, you can use the Visual Studio .NET 2003 Command Prompt, which is located in the Visual Studio .NET Tools start folder. Alternatively, you can open a regular command-line and run the vsvars32.bat batch file to set up the environment for you.

If you need to install the Framework, you will need to check whether the Redistributable (runtime) is installed on your machine. If it is not installed, download and install the Redistributable: Microsoft .NET Framework 1.1 Redistributable (dotnetfx.exe, 23698 KB, published 3/30/2004, version 1.1). Once the Redistributable is installed, download and install the .NET Framework SDK Version 1.1 (setup.exe, 108757 KB, published 4/9/2003, version 1.1). If either of the links above does not work, both programs can be found at the Microsoft Download Center.

Add the SDK directory to the path.

It is possible that .NET has lost its mind since the Redistributable was installed. If you try to access a .aspx file from a web browser and do not see any data coming back, you may need to remind Windows about .NET. The fix is to re-register .NET by running the command:

    "%windir%\Microsoft.NET\Framework\v1.1.4322\aspnet_regiis.exe" -i

To use the SDK, it is simplest to open a Command Prompt and run the SDK environment setup batch file:

    sdkvars.bat

A text editor

Well, OK, it is not a requirement, but it will sure make things easier. If you do not have a decent text editor, Notepad will work OK.

You should now be set up to develop with the command-line. It is time to begin the tutorials.

HelloConsole

This tutorial is the easiest of the five.
When you have finished it, you will have the pleasure of seeing the phrase "hello, world" printed on the command-line. OK, let's get started.

What are the goals? Borrowing directly from Kernighan and Ritchie, these will be our goals for all of the tutorials: write the program text, compile it successfully, load it, run it, and find out where the output went.

Here is the code for the application:

```csharp
// HelloConsole.cs
using System;

namespace mynamespace {
    public class HelloWorld {
        public static void Main(string [] args) {
            Console.WriteLine("hello, world");
        }
    }
}
```

The using directive brings the System namespace into scope so the Console class can be referenced without qualification; the HelloWorld class exposes a static Main entry point that calls WriteLine.

Save the file to your hard drive as HelloConsole.cs and open a command prompt. Compile the application using the C# compiler:

    csc HelloConsole.cs

This should produce output similar to the following, without error:

    Microsoft (R) Visual C# .NET Compiler version 7.10.3052.4
    for Microsoft (R) .NET Framework version 1.1.4322

Compiling the application will generate an executable named HelloConsole.exe. Loading and running the program are accomplished in a single step. Load and run the application from the command-line by typing:

    HelloConsole.exe

Console output is incredibly easy to find. Generally, it appears on the line immediately following the command itself. Running the program should produce the following output:

    hello, world

Congratulations, you have completed HelloConsole.

HelloFile

HelloFile is going to be a near duplicate of HelloConsole, with the addition of the ability to write to a file. When you have finished this tutorial, you will have a file named Hello.txt in the same directory as the program that has the phrase "hello, world" as its contents.

Here is the code for the application:

```csharp
// HelloFile.cs
using System;
using System.IO;

namespace mynamespace {
    public class HelloWorld {
        public static void Main(string [] args) {
            FileInfo fi = new FileInfo("Hello.txt");
            StreamWriter sw = fi.CreateText();
            sw.WriteLine("hello, world");
            sw.Close();
        }
    }
}
```

The System.IO namespace provides the FileInfo and StreamWriter classes. FileInfo.CreateText() creates the file and returns a StreamWriter, whose WriteLine() writes the text.

Save the file to your hard drive as HelloFile.cs and open a command prompt.
Compile the application:

    csc HelloFile.cs

Compiling the application will generate an executable named HelloFile.exe. Load and run it:

    HelloFile.exe

There will not be any discernible output on the console; remember that we are directing the output to the file Hello.txt.

Where did the output go? We told the program to send its output to the file Hello.txt. The file should be located in the same directory that HelloFile.exe was run from. The easiest way to view the file's contents is to use the type command:

    type Hello.txt

You should see the following output:

    hello, world

That is all there is to HelloFile.

HelloBrowser

With this tutorial, we begin to move into the 21st century. At the end of the tutorial, you will have a browsable web application, running on a web server, that when called will return HTML with the phrase "hello, world" as the contents.

HelloBrowser is different from the console applications in that the program will run in the context of the local web server, and its output will be sent to a browser as HTML. I will be using code-behind. Code-behind is new to .NET programming and, for our purposes, refers to the fact that algorithms and program logic reside in a separate file from the HTML that comprises the graphical user elements. I will not belabor it, but code-behind is the only way to go; mixing HTML and code is a nightmare waiting to prey on the weak minded. What this means is that there will be two files, one for HTML and one for C# code.

Here is the HTML file, called HelloBrowser.aspx:

```aspx
<%@ Page language="c#" Codebehind="HelloBrowser.aspx.cs" Inherits="mynamespace.HelloWorld" %>
<HTML>
  <BODY>
    <FORM id="HelloBrowser" method="post" runat="server">
      <DIV id="divHelloWorld" runat="server"></DIV>
    </FORM>
  </BODY>
</HTML>
```

The Page directive's language attribute selects C#, Codebehind names the code file, and Inherits names the page class. The FORM and DIV elements are marked runat="server" so the code-behind can reach them.

Save the file to disk as HelloBrowser.aspx.

Here is the C# file, called HelloBrowser.aspx.cs.
```csharp
using System;
using System.Web.UI.HtmlControls;

namespace mynamespace {
    public class HelloWorld : System.Web.UI.Page {
        protected HtmlGenericControl divHelloWorld;

        private void Page_Load(object sender, System.EventArgs e) {
            divHelloWorld.InnerText = "hello, world";
        }

        override protected void OnInit(EventArgs e) {
            this.Load += new System.EventHandler(this.Page_Load);
        }
    }
}
```

The page class derives from System.Web.UI.Page. The DIV from the .aspx file appears here as an HtmlGenericControl field named divHelloWorld; Page_Load() sets its InnerText, and OnInit() wires Page_Load() to the page's Load event with an EventHandler.

Save the file to disk as HelloBrowser.aspx.cs.

This is where things get interesting. The .aspx file will be compiled at run time by the web server. The .aspx.cs file will need to be compiled into a .dll. First, though, let's prepare for the files. This is a web application, after all; it needs a little more in the way of deployment than a simple console application.

Create a directory to contain the .aspx file:

    md HelloBrowser

Copy HelloBrowser.aspx to the HelloBrowser directory:

    copy HelloBrowser.aspx HelloBrowser

Create a directory to contain the .aspx.cs assembly .dll file:

    md HelloBrowser\bin

Compile the .aspx.cs file. Open a command prompt and type:

    csc /t:library /out:HelloBrowser\bin\HelloBrowser.dll HelloBrowser.aspx.cs

This should create a file HelloBrowser.dll in the HelloBrowser\bin directory.

Create a virtual directory in the web server. The remaining steps will be managed by the web server when the page is requested. To test, type the address of HelloBrowser.aspx in the new virtual directory into your browser's address bar. The output will be in the browser's display area.

I have chosen to use the Mozilla Browser 1.6 as the browser for this article. Mozilla is a much more advanced browser than Microsoft's Internet Explorer, and it can be a developer's best friend. However, there is one big caveat: some sites on the web are not Mozilla friendly, and you will definitely need to keep IE around. It is also a good idea to use IE to develop UI elements for web pages; it is the most popular browser. Feel free to use IE for the tutorials; I tested with IE and Mozilla.
Here is the output:

    hello, world

That is it for HelloBrowser.

HelloWebService

This is the most difficult of the tutorials, but when you are finished, you will be pleased to find that you have created an XML web service that exposes a HelloWorld method to any client that has access to the web server. The method will return a SOAP document containing XML with the phrase "hello, world" as the content to any client requesting it.

The web service will be written using code-behind and will operate in the context of the web server. There will be two files, one for the WebService directive and one for C# code.

Here is the WebService directive, called HelloWebService.asmx:

```aspx
<%@ WebService Language="c#" Codebehind="HelloWebService.asmx.cs" Class="mynamespace.HelloWebService" %>
```

Save the file to disk as HelloWebService.asmx.

Here is the C# file, HelloWebService.asmx.cs, that defines the web service logic:

```csharp
using System;
using System.Web.Services;

namespace mynamespace
{
    [WebService(Namespace="http://tempuri.org/")]
    public class HelloWebService : WebService
    {
        [WebMethod]
        public string SayHelloWorld()
        {
            return "hello, world";
        }
    }
}
```

The System.Web.Services namespace supplies the WebService base class and attributes; the tempuri namespace is the default placeholder, and the [WebMethod] attribute exposes SayHelloWorld to callers.

Save the file to disk as HelloWebService.asmx.cs.

The .asmx file will be compiled at run time by the web server. The .asmx.cs file will need to be compiled into a .dll. First, though, let's prepare for the files. This is a web service; it needs more in the way of deployment.

Create a directory to contain the .asmx file:

    md HelloWebService

Copy the HelloWebService.asmx file to the HelloWebService directory:

    copy HelloWebService.asmx HelloWebService

Create a directory to contain the .asmx.cs file:

    md HelloWebService\bin

Compile the .asmx.cs file:

    csc /t:library /out:HelloWebService\bin\HelloWebService.dll HelloWebService.asmx.cs

This should create a file HelloWebService.dll in the HelloWebService\bin directory. The remaining steps will be managed by the web server when the page is requested.
To test it out, type the address of HelloWebService.asmx in its virtual directory into your browser's address bar. In order to locate the output of a web service, you will need to perform a few steps. The browser page shows the web methods that are available to use. You can choose one of two paths here: one is the web method SayHelloWorld, the other is the web service's service description.

First, look at the service description. Next, go back and select the SayHelloWorld web method. This brings you to a page that shows some sample SOAP code, for those of you who want to delve deeper. There is also a button that says Invoke; press the button. This simulates a client of the web service requesting the SayHelloWorld web method via web services.

There is one additional page that we might be interested in, and that is the .disco file, or Discovery document, of the web service. To display the .disco file, browse to the service's address with "?disco" appended.

That is it for HelloWebService.

What good would a web service be without a client application? None, so here is that application. It is a simple web application that calls the web service created in the previous tutorial and displays the phrase "hello, world", returned by the web service, in the browser window.

This is another code-behind web application, but with a twist: we need to create a proxy. Web services use a pretty sophisticated communication mechanism. Thankfully, the .NET Framework provides the ability to auto-generate the proxy code and lets us concentrate on the productive stuff.
First, some prep work. Create a directory to contain the web service client web application:

    md HelloWebServiceClient

We need a directory for the code-behind DLL:

    md HelloWebServiceClient\bin

To generate the proxy code, open a command prompt and, from the same directory as the .aspx.cs file, run the wsdl tool against the web service's WSDL address:

    wsdl /out:HelloWebService_proxy.cs http://localhost/HelloWebService/HelloWebService.asmx?WSDL

This should result in some semblance of the following output:

    Microsoft (R) Web Services Description Language Utility
    [Microsoft (R) .NET Framework, Version 1.1.4322.573]
    Writing file 'HelloWebService_proxy.cs'.

Here is the code for the .aspx web application file:

```aspx
<%@ Page language="c#" Codebehind="HelloWebServiceClient.aspx.cs" Inherits="mynamespace.HelloWorld" %>
<HTML>
  <BODY>
    <FORM id="HelloWebServiceClient" method="post" runat="server">
      <DIV id="divHelloWorld" runat="server"></DIV>
    </FORM>
  </BODY>
</HTML>
```

Save the file to disk as HelloWebServiceClient.aspx.

Here is the C# code-behind source:

```csharp
using System;
using System.Web.UI.HtmlControls;

namespace mynamespace {
    public class HelloWorld : System.Web.UI.Page {
        protected HtmlGenericControl divHelloWorld;

        private void Page_Load(object sender, System.EventArgs e) {
            HelloWebService hello = new HelloWebService();
            divHelloWorld.InnerText = hello.SayHelloWorld();
        }

        override protected void OnInit(EventArgs e) {
            this.Load += new System.EventHandler(this.Page_Load);
        }
    }
}
```

This file is remarkably similar to the web application's code-behind; the only real difference is that Page_Load() now creates the proxy object and displays the result of SayHelloWorld().

Save the file to disk as HelloWebServiceClient.aspx.cs.

Compile the proxy:

    csc /t:library /out:HelloWebServiceClient\bin\HelloWebService_proxy.dll HelloWebService_proxy.cs

Compile the code-behind:

    csc /t:library /r:HelloWebServiceClient\bin\HelloWebService_proxy.dll /out:HelloWebServiceClient\bin\HelloWebServiceClient.dll HelloWebServiceClient.aspx.cs

Both commands should finish without errors.
Version 1.0 - This main(argc, argv) char *argv[]; { printf("hello, world\n"); } General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/6908/hello-world-A-primitive-view-of-the-state-of-the-a?msg=3551161
```java
/**
 * $RCSfile$
 * $Revision: 2002 $
 * $Date: 2003-08-02 14:33:50 -0300 (Sat, 02 Aug 2003) $
 *
 * Copyright (C) 2002-2003 Jive Software. All rights reserved.
 * ====================================================================
 * The Jive Software License (based on Apache Software License, Version 1.1)
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * 3. The end-user documentation included with the redistribution,
 *    if any, must include the following acknowledgment:
 *    "This product includes software developed by
 *    Jive Software ()."
 *    Alternately, this acknowledgment may appear in the software itself,
 *    if and wherever such third-party acknowledgments normally appear.
 *
 * 4. The names "Smack" and "Jive Software" must not be used to
 *    endorse or promote products derived from this software without
 *    prior written permission. For written permission, please
 *    contact webmaster@jivesoftware.com.
 *
 * 5. Products derived from this software may not be called "Smack",
 *    nor may "Smack" appear in their name, without prior written
 *    permission of Jive Software.
 *
 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED
 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL JIVE SOFTWARE ...
 */

package org.jivesoftware.smack.packet;

import java.util.*;

/**
 * A mock implementation of the Packet abstract class. Implements toXML() by returning null.
 */
public class MockPacket extends Packet {

    /**
     * Returns null always.
     * @return null
     */
    public String toXML() {
        return null;
    }
}
```
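The pattern shown here, a trivial subclass that stubs out the one abstract method so the rest of the class hierarchy can be exercised in tests, is language-agnostic. A minimal Python sketch of the same idea (names other than toXML are my own; Packet here is a stand-in for the Smack base class, not its real API):

```python
from abc import ABC, abstractmethod

class Packet(ABC):
    # Stand-in for an abstract packet base class with one abstract serializer.
    @abstractmethod
    def to_xml(self):
        ...

class MockPacket(Packet):
    # Mock implementation: satisfies the abstract contract by returning None.
    def to_xml(self):
        return None

p = MockPacket()
print(p.to_xml())   # → None
```

The base class cannot be instantiated directly (Python raises TypeError), which is exactly why a mock subclass is needed when the abstract type appears in a test fixture.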
patch 20171223 [ncurses.git] / NEWS

-------------------------------------------------------------------------------
-- Copyright (c) 1998-2016 ...
-- $Id: NEWS,v 1.3030 2017/12/23 21:44:36 $

20171223
  + modify ncurses-examples to quiet const-warnings when building with PDCurses.
  + modify toe to not exit if unable to read a terminal description, e.g., if there is a permission problem.
  + minor fix for progs/toe.c, using _nc_free_termtype2.
  + assign 0 to pointer in _nc_tgetent_leak() after freeing it. Also avoid reusing pointer from previous successful call to tgetent if the latest call is unsuccessful (patch by Michael Schroeder, OpenSuSE #1070450).
  + minor fix for test/tracemunch, initialize $awaiting variable.

20171216
  + repair template in test/package/ncurses-examples.spec (cf: 20171111).
  + improve tic's warning about the number of parameters tparm might use for u1-u9 by making a special case for u6.
  + improve curs_attr.3x discussion of color pairs.

20171209
  + modify misc/ncurses-config.in to make output with --includedir consistent with --cflags, i.e., when --disable-overwrite option was configured the output should show the subdirectory where headers are.
  + modify MKlib_gen.sh to suppress macros when calling an "implemented" function in link_test.c
  + updated ftp-url used in test-packages, etc.
  + modify order of -pie/-shared options in configure script in case LDFLAGS uses "-pie", working around a defect or limitation in the GNU linker (prompted by patch by Yogesh Prasad, forwarded by Jay Shah).
  + add entry in man_db.renames for user_caps.5

20171125
  + modify MKlib_gen.sh to avoid tracing result from getstr/getnstr before initialized.
  + add "-a" aspect-ratio option to picsmap.
  + add configure check for default path of rgb.txt, used in picsmap.
  + modify _nc_write_entry() to truncate too-long filename (report by Hosein Askari, Debian #882620).
  + build-fix for ncurses-examples with NetBSD curses:
    + it lacks the use_env() function.
    + it lacks libpanel; a recent change used the wrong ifdef symbol.
  + add a macro for is_linetouched() and adjust the function's return value to make it possible for most applications to check for an error-return (report by Midolikawa H).
  + additional manpage cleanup.
  + update config.guess, config.sub from

20171118
  + add a note to curs_addch.3x on portability.
  + add a note to curs_pad.3x on the origin and portability of pads.
  + improve manpage description of getattrs (report by Midolikawa H).
  + improve manpage macros (prompted by discussion in Debian #880551).
  + reviewed test-programs using KEY_RESIZE, made fixes to test/worm.c
  + add a "-d" option to picsmap for default-colors.
  + modify old terminology entry and a few other terminal emulators to account for xon -TD
  + correct sgr string for tmux, which used screen's "standout" code rather than the standard code (patch by Roman Kagan)
  + correct sgr/sgr0 strings in a few other cases reported by tic, making those correspond to the non-sgr settings where they differ, but otherwise use ECMA-48 consistently: jaixterm, aixterm, att5420_2, att4424, att500, decansi, d410-7b, dm80, hpterm, emu-220, hp2, iTerm2.app, mterm-ansi, ncrvt100an, st-0.7, vi603, vwmterm -TD
  + build-fix for diagnostics warning in lib_mouse.c for pre-5.0 versions of gcc which did not recognize the diagnostic "push" pragma (patch by Vassili Courzakis).

20171111
  + add "op" to xterm+256setaf -TD
  + reviewed terminology 1.0.0 -TD
  + reviewed st 0.7 -TD
  + suppress debug-package for ncurses-examples rpm build.

20171104
  + check for interrupt in color-pair initialization of dots_curses.c, dots_xcurses.c
  + add z/Z zoom feature to test/ncurses.c C/c screens.
  + add '<' and '>' commands to test/ncurses.c S/s screens, to better test off-by-ones in the overlap/copywin functions.

20171028
  + improve man/curs_inwstr.3x, correct end-logic for lib_inwstr.c (report by Midolikawa H).
  + fix typo in a few places for "improvements" (patch by Sven Joachim).
  + clear the other half of a double-width character on which a line drawing character is drawn.
  + make test/ncurses.c "s" test easier to understand which subtests are available; add a "S" wide-character overlap test-screen.
  + modify test/ncurses.c C/c tests to allow for extended color pairs.
  + add endwin() call in error-returns from test/ncurses.c omitted in recent redesign of its menu (cf: 20170923).
  + improve install of hashed-db by removing the ".db" file as done for directory-tree terminal databases.
  + repair a few overlooked items in include/ncurses_defs from recent port/refactoring of test-programs (cf: 20170909).
  + add test/padview.c, to compare pads with direct updates in view.c

20171021
  + modify test/view.c to expand tabs using the ncurses library rather than in the test-program.
  + remove very old SIGWINCH example in test/view.c, just use KEY_RESIZE.
  + add -T, -e, -f -m options to "dots" test-programs.
  + fix a few typos in usage-messages for test-programs.

20171014
  + minor cleanup to test/view.c:
    + eliminate "-n" option by simply reading the whole file.
    + implement page up/down commands.
  + add check in tput for init/reset operands to ensure those use a terminal.
  + improve manual pages which discuss chtype, cchar_t types and the attribute values which can be stored in those types.
  + correct array-index when parsing "-T" command-line option in tabs program.
  + modify demo_new_pair.c to pass extended pairs to setcchar().
  + add test/dots_xcurses.c to illustrate a different approach used for extended colors which can be contrasted with dots_curses.c.
  + add a check in tic to note when a description uses non-mandatory delays without xon_xoff. This is not an error, but some descriptions for a terminal emulator may use the combination incorrectly.

20171007
  + modify "-T" option of clear and tput to call use_tioctl() to obtain the operating system's notion of the screensize if possible.
  + review/repair some exit-codes for tput, making usage-message exit with 2 rather than 1, and a failure to open terminal 4+errno.
  + amend check in tput, tabs and clear to allow those to use the database-only features in cron if a -T option gives a suitable terminal name (report by Lauri Tirkkonen).
  + correct an ifdef in test/ncurses.c for systems with soft-keys but not slk_color().
  + regenerate man-html documentation.

20170930
  + fix a symbol conflict that made ncurses.c C/c menu not work with Solaris xpg4 curses.
  + add refresh() call to dots_mvcur.c, needed to use mvcur() with Solaris xpg4 curses after calling newterm().
  + minor fixes for configure script from work on ncurses-examples and tin.
  + improve animation in test/xmas.c by adding a time-delay in blinkit().
  + modify several test programs to reflect that ncurses honors existing signal handlers in initscr(), while other implementations do not.
  + modify bs.c to make it easier to quit.
  + change ncurses-examples to use attr_t vs chtype to follow X/Open documentation more closely since Solaris xpg4-curses uses different values for WA_xxx vs A_xxx that rely on attr_t being an unsigned short. Tru64 aka OSF1, HPUX, AIX did as ncurses does, equating the two sets.

20170923
  + modify menu for test/ncurses.c to fit on 24-line screen.
  + build-fix for configure --with-caps=uwin
  + add options to test_arrays.c, for selecting termcap vs terminfo, etc.

20170916
  + minor fix to test/filter.c to avoid clearing the command in one case.
  + modify filter() to discard clr_eos if back_color_erase is set.

20170909
  + improve wide-character implementation of myADDNSTR() in frm_driver.c, which was inconsistent with the normal implementation.
  + save/restore cursor position in Undo_Justification(), matching behavior of Buffer_To_Window() (report by Leon Winter).
  + modify test/knight to provide the "slow" solution for small screens using "R", noting that Warnsdorf's method is easily done with "a".
  + modify several test-programs which call use_default_colors() to consistently do this only if "-d" option is given.
  + additional changes to test with non-standard variants of curses:
    + modify a loop limit in firework.c to work around absence of limit checks in some libraries.
    + fill the last row of a window with "?" in firstlast if waddch does not return ERR on the lower-right corner.
  + add checks in test/configure for some functions not in 4.3BSD curses.
  + fix a regression in test/configure (cf: 20170826).

20170902
  + amend change for endwin-state for better consistency with the older logic (report/patch by Jeb Rosen, cf: 20170722).
  + modify check in fmt_entry() to handle a cancelled reset string (Debian #873746). Make similar fixes in other parts of dump_entry.c and tput.c

20170827
  + fix a bug in repeat_char logic (cf: 20170729, report by Chris Clayton).

20170826
  + fixes for "iterm2" (report by Leonardo Brondani Schenkel) -TD
  + corrected a warning from tic about keys which are the same, to skip over missing/cancelled values.
  + add check in tic for unnecessary use of "2" to denote a shifted special key.
  + improve checks in trim_sgr0, comp_parse.c and parse_entry.c, for cancelled string capabilities.
  + add check in _nc_parse_entry() for invalid entry name, setting the name to "invalid" to avoid problems storing entries.
  + add/improve checks in tic's parser to address invalid input
    + add a check in comp_scan.c to handle the special case where a nontext file ending with a NUL rather than newline is given to tic as input (Redhat #1484274).
    + allow for cancelled capabilities in _nc_save_str (Redhat #1484276).
    + add validity checks for "use=" target in _nc_parse_entry (Redhat #1484284).
    + check for invalid strings in postprocess_termcap (Redhat #1484285)
    + reset secondary pointers on EOF in next_char() (Redhat #1484287).
    + guard _nc_safe_strcpy() and _nc_safe_strcat() against calls using cancelled strings (Redhat #1484291).
  + correct typo in curs_memleaks.3x (Sven Joachim).
  + improve test/configure checks for some curses variants not based on X/Open Curses.
  + add options for test/configure to disable checks for form, menu and panel libraries.

20170819
  + update "iterm" entry -TD
  + add "iterm2" entry (report by Leonardo Brondani Schenkel) -TD
  + regenerate llib-* files.
  + regenerate HTML manpages.
  + improve picsmap test-program:
    + reduce memory used for tsearch
    + add report in log file showing cumulative color coverage.
  + add -x option to clear/tput to make the E3 extension optional (cf: 20130622).
  + add options -T and -V to clear command for compatibility with tput.
  + add usage message to clear command (Debian #371855).
  + improve usage messages for tset and tput.
  + minor fixes to "RGB" extension and reset_color_pairs().

20170812
  + improve description of -R option in infocmp manual page (report by Stephane Chazelas).
  + add reset_color_pairs() function.
  + add user_caps.5 manual page to document the terminfo extensions used by ncurses.
  + improve build scripts, using SIGQUIT vs SIGTRAP; add other configure script fixes from work on xterm, lynx and tack.
  + modify install-rule for ncurses-examples to put the data files in /usr/share/ncurses-examples
  + improve tracemunch, by changing address-parameters of add_wch(), color_content() and pair_content() to dummy parameters.
  + minor optimization to _nc_change_pair, to return quickly when the current screen is marked for clearing.
  + in-progress changes to improve performance of test/picsmap.c for loading image files.
  + modify allocation for SCREEN's color-pair table to start small, grow on demand up to the existing limit.
  + add "RGB" extension capability for direct-color support, use this to improve color_content().
  + improve picsmap test-program:
    + if no palette file is needed, attempt to load one based on $TERM, checking first in the current directory, then by adding ".dat" suffix, and finally in the data-directory, e.g., /usr/share/ncurses-examples
    + add "-l" option for logging
    + add "-d" option for debugging
    + add "-s" option for stepping automatically through list of images, with time delay.
    + use tsearch to improve time for loading color table for images.
  + update config.guess, config.sub from

20170729
  + update interix entry using tack and SFU on Windows 7 Ultimate -TD
  + use ^? for kdch1 in interix (reported by Jonathan de Boyne Pollard)
  + add "rep" to xterm-new, available since 1997/01/26 -TD
  + move SGR 24 and 27 from vte-2014 to vte-2012 (request by Alain Williams) -TD
  + add a check in newline_forces_scroll() in case a program moves the cursor outside scrolling margins (report by Robert King).
  + improve _nc_tparm_analyze, using that to extend the checks made by tic for reporting inconsistencies between the expected number of parameters for a capability and the actual.
  + amend handling of repeat_char capability in EmitRange (adapted from report/patch by Dick Wesseling):
    + translate the character to the alternate character set when the alternate character set is enabled.
    + do not use repeat_char for characters past 255.
  + document "_nc_free_tinfo" in manual page, because it could be used in tack for memory-leak checking.
  + add "--without-tack" configure option to refine "--with-progs" configure option. Normally tack is no longer built in-tree, but a few packagers combine it during the build. If term_entry.h is installed, there is no advantage to in-tree builds.
  + adjust configure-script to define HAVE_CURSES_DATA_BOOLNAMES symbol needed for tack 1.08 when built in-tree. Rather than relying upon internal "_nc_" functions, tack now uses the boolean, number and string capability name-arrays provided by ncurses and SVr4 Unix curses. It still uses term_entry.h for the definitions of the extended capability arrays.
  + add an overlooked null-pointer check in mvcur changes from 20170722

20170722
  + improve test-packages for ncurses-examples and AdaCurses for lintian
  + modify logic for endwin-state to be able to detect the case where the screen was never initialized, using that to trigger a flush of ncurses' buffer for mvcur, e.g., in test/dots_mvcur.c for the term-driver configuration.
  + add dependency upon ncurses_cfg.h to a few other internal header files to allow each to be compiled separately.
  + add dependency upon ncurses_cfg.h to tic's header-files; any program using tic-library will have to supply this file. Legacy tack versions supply this file; ongoing tack development has dropped the dependency upon tic-library and new releases will not be affected.

20170715
  + modify command-line parameters for "convert" used in picsmap to work with ImageMagick 6.8 and newer.
  + fix build-problem with tack and ABI-5 (Debian #868328).
  + repair termcap-format from tic/infocmp broken in 20170701 fixes (Debian #868266).
  + reformat terminfo.src with 20170513 updates.
  + improve test-packages to address lintian warnings.

20170708
  + add a note to tic manual page about -W versus -f options.
  + correct a limit-check in fixes from 20170701 (report by Sven Joachim).

20170701
  + modify update_getenv() in db_iterator.c to ensure that environment variables which are not initially set will be checked later if an application happens to set them (patch by Guillaume Maudoux).
  + remove initialization-check for calling napms() in the term-driver configuration; none is needed.
  + add help-screen to test/test_getstr.c and test/test_get_wstr.c
  + improve compatibility between different configurations of new_prescr, fixing a case with threaded code and term-driver where c++/demo did not work (cf: 20160213).
  + the fixes for Redhat #1464685 obscured a problem subsequently reported in Redhat #1464687; the given test-case was no longer reproducible. Testing without the fixes for the earlier reports showed a problem with buffer overflow in dump_entry.c, which is addressed by reducing the use of a fixed-size buffer.
  + add/improve checks in tic's parser to address invalid input (Redhat #1464684, #1464685, #1464686, #1464691).
    + alloc_entry.c, add a check for a null-pointer.
    + parse_entry.c, add several checks for valid pointers as well as one check to ensure that a single character on a line is not treated as the 2-character termcap short-name.
  + fix a memory leak in delscreen() (report by Bai Junq).
  + improve tracemunch, showing thread identifiers as names.
  + fix a use-after-free in NCursesMenu::~NCursesMenu()
  + further amend incorrect calls for memory-leaks from 20170617 changes (report by Allen Hewes).

20170624
  + modify c++/etip.h.in to accommodate deprecation of throw() and throws() in c++17 (prompted by patch by Romain Geissler).
  + remove some incorrect calls for memory-leaks from 20170617 changes (report by Allen Hewes).
  + add test-programs for termattrs and term_attrs.
  + modify _nc_outc_wrapper to use the standard output if the screen was not initialized, rather than returning an error.
  + improve checks for low-level terminfo functions when the terminal has not been initialized (Redhat #1345963).
  + modify make_hash to allow building with address-sanitizer, assuming that --disable-leaks is configured.
  + amend changes for number_format() in 20170506 to avoid undefined behavior when shifting (patch by Emanuele Giaquinta).

20170617
  + fill in some places where TERMTYPE2 vs TERMTYPE was not used (report by Allen Hewes).
  + use ExitTerminfo() internally in error-exits for ncurses' setupterm to help with leak checking.
  + use ExitProgram() in error-exit from initscr() to help with leak checking.
  + review test-programs, adding checks for cases where the terminal cannot be initialized.

20170610
  + add option "-xp" to picsmap.c, to use init_extended_pair().
  + make simple performance fixes for picsmap.c
  + improve aspect ratio of images read from "convert" in picsmap.c

20170603
  + add option to picsmap to use color-palette files, e.g., for mapping to xterm-256color.
  + move the data in SCREEN used for the alloc_pair() function to the end, to restore compatibility between ncurses/ncursesw libtinfo (report/patch by Miroslav Lichvar).
  + add build-time utility "report_offsets" to help show when the various configurations of tinfo library are compatible or not.

20170527
  + improved test/picsmap.c:
    + lookup named colors for xpm files in rgb.txt
    + accept blanks in color-keys for xpm files.
    + if neither xbm/xpm work, try "convert", which may be available.

20170520
  + modify test/picsmap.c to read xpm files.
  + modify package/debian/* to create documentation packages, so the related files can be checked with lintian.
  + fix some typos in manpages (report/patch by Sven Joachim).

20170513
  + add test/picsmap.c to fill in some testing issues not met by dots. The initial version reads X bitmap (".xbm") files.
  + repair logic which forces a repaint where a color-pair's content is changed (cf: 20170311).
  + improve tracemunch, showing screenXX pointers as names.

20170506
  + modify tic/infocmp display of numeric values to use hexadecimal when they are "close" to a power of two, making the result more readable.
  + improve discussion of portability in curs_mouse.3x
  + change line-length for generated html/manpages to 78 columns from 65.
  + improve discussion of line-drawing characters in curs_add_wch.3x (prompted by discussion with Lorinczy Zsigmond).
  + cleanup formatting of hackguide.html and ncurses-intro.html
  + add examples for WACS_D_PLUS and WACS_T_PLUS to test/ncurses.c

20170429
  + corrected a case where $with_gpm was set to "maybe" after CF_WITH_GPM, overlooked in 20160528 fixes (report by Alexandre Bury).
  + improve a couple of test-program's help-messages.
  + corrected loop in rain.c from 20170415 changes.
  + modify winnstr and winchnstr to return error if the output pointer is null, as well as adding a null pointer check of the window pointer for better compatibility with other implementations.
  + improve discussion of NetBSD curses in scr_dump.5
  + modify LIMIT_TYPED macro in new_pair.h to avoid changing sign of the value to be limited (reports by Darby Payne, Rob Boudreau).
  + update config.guess, config.sub from

20170422
  + build-fix for termcap-configuration (report by Chi-Hsuan Yen).
  + improve terminfo manual page discussion of control- and graphics-characters.
  + remove tic warning about "^?" in string capabilities, which was marked as an extension (cf: 20000610, 20110820); however all Unix implementations support this and X/Open Curses does not address it. On the other hand, termcap never did support this feature.
  + correct missing comma-separator between string capabilities in icl6402 and m2-nam -TD
  + restore rmir/smir in ansi+idc to better match original ansiterm+idc, add alias ansiterm (report by Robert King).
  + amend an old check for ambiguous use of "ma" in terminfo versus a termcap use, if the capability is cancelled to treat it as number.
  + correct a case in _nc_captoinfo() which read "%%" and emitted "%".
  + modify sscanf calls in _nc_infotocap() for patterns "%{number}%+%c" and "%'char'%+%c" to check that the final character is really 'c', avoiding a case in icl6404 which cannot be converted to termcap.
  + in _nc_infotocap(), add a check to ensure that terminfo "^?" is not written to termcap, because the BSDs did not implement that.
  + in _nc_tic_expand() and _nc_infotocap(), improve string-length check when deciding whether to use "^X" or "\xxx" format for control characters, to make the output of tic/infocmp more predictable.
  + limit termcap "%d" width to 2 digits on input, and use "%2" in preference to "%02" on output.
  + correct terminfo/termcap conversion of "%02" and "%03" into "%2" and "%3"; the result repeated the last character.
  + add man/scr_dump.5 to document screen-dump format.
20170415
  + modify several test programs to use new popup_msgs, adapted from help-screen used in test/edit_field.c
  + drop two symbols obsoleted in 2004: _nc_check_termtype, and _nc_resolve_uses
  + fix some old copyright dates (cf: 20031025).
  + build-fixes for test/savescreen.c to work with AIX and HPUX.
  + minor fix to configure script, adding a backslash/continuation.
  + extend TERMINAL structure for ABI 6 to store numbers internally as integers rather than short, by adding new data for this purpose.
  + more fixes for minor memory-leaks in test-programs.

20170408
  + change logic in wins_nwstr() to avoid addressing data past the output of mbstowcs().
  + correct a call to setcchar() in Data_Entry_w() from 20131207 changes.
  + fix minor memory-leaks in test-programs.
  + further improve ifdef in term_entry.h for internal definitions not used by tack.

20170401
  + minor fixes for vt100+4bsd, e.g., delay in sgr for consistency -TD
  + add smso for env230, to match sgr -TD
  + remove p7/protect from sgr in fbterm -TD
  + drop setf/setb from fbterm; setaf/setab are enough -TD
  + make xterm-pcolor sgr consistent with other capabilities -TD
  + add rmxx/smxx ECMA-48 strikeout extension to tmux and xterm-basic (discussion with Nicholas Marriott)
  + add test-programs sp_tinfo and extended_color
  + modify no-leaks code for lib_cur_term.c to account for the tgetent() cache.
  + modify setupterm() to save original tty-modes so that erasechar() works as expected. Also modify _nc_setupscreen() to avoid redundant calls to get original tty-modes.
  + modify set_curterm() to update ttytype[] data used by longname().
  + modify wattr_set() and wattr_get() to return ERR if win-parameter is null, as documented.
  + improve cast used for null-pointer checks in header macros, to reduce compiler warnings.
  + modify several functions, using the reserved "opts" parameter to pass color- and pair-values larger than 16-bits:
    + getcchar(), setcchar(), slk_attr_set(), vid_puts(), wattr_get(), wattr_set(), wchgat(), wcolor_set().
    + Other functions call these with the corresponding altered behavior, including chgat(), mvchgat(), mvwchgat(), slk_color_on(), slk_color_off(), vid_attr().
  + add new functions for manipulating color- and pair-values larger than 16-bits. These are extended_color_content(), extended_pair_content(), extended_slk_color(), init_extended_color(), init_extended_pair(), and the corresponding sp-funcs.

20170325
  + fix a memory leak in the window-list when creating multiple screens (reports by Andres Martinelli, Debian #783486).
  + reviewed calls from link_test.c, added a few more null-pointer checks.
  + add a null-pointer check in ungetmouse, in case mousemask was not called (report by "Kau").
  + updated curs_sp_funcs.3x for new functions.

20170318
  + change TERMINAL structure in term.h to make it opaque. Some applications misuse its members, e.g., directly modifying it rather than using def_prog_mode().
  + modify utility headers such as tic.h to make it clearer which are externals that are used by tack.
  + improve curs_slk.3x in particular its discussion of portability.
  + fix cut/paste in legacy_encoding.3x
  + add prototype for find_pair() to new_pair.3x (report by Branden Robinson).
  + fix a couple of broken links in generated man-html documentation.
  + regenerate man-html documentation.
577 578 20170311 579 + modify vt100 rs2 string to reset vt52 mode and scrolling regions 580 (report/analysis by Robert King) -TD 581 + add vt100+4bsd building block, use that for older terminals rather 582 than "vt100" which is now mostly used as a building block for 583 terminal emulators -TD 584 + correct a few spelling errors in terminfo.src comments -TD 585 + add fbterm -TD 586 + fix a typo in ncurses.c test_attr legend (patch by Petr Vanek). 587 + changed internal colorpair_t to a struct, eliminating an internal 588 8-bit limit on colors 589 + add ncurses/new_pair.h 590 + add ncurses/base/new_pair.c with alloc_pair(), find_pair() and 591 free_pair() functions 592 + add test/demo_new_pair.c 593 594 20170304 595 + improve terminfo manual description of terminfo syntax. 596 + clarify the use of wint_t vs wchar_t in curs_get_wstr.3x 597 + improve description of endwin() in manual. 598 + modify setcchar() and getcchar() to treat negative color-pair as an 599 error. 600 + fix a typo in include/hashed_db.h (Andre Sa). 601 602 20170225 603 + fixes for CF_CC_ENV_FLAGS (report by Ross Burton). 604 605 20170218 606 + fix several formatting issues with manual pages. 607 + correct read of terminfo entry in which all strings are absent or 608 explicitly cancelled. Before this fix, the result was that all were 609 treated as only absent. 610 + modify infocmp to suppress mixture of absent/cancelled capabilities 611 that would only show as "NULL, NULL", unless the -q option is used, 612 e.g., to show "-, @" or "@, -". 613 614 20170212 615 + build-fixes for PGI compilers (report by Adam J. Stewart) 616 + accept whitespace in sed expression for generating expanded.c 617 + modify configure check that g++ compiler warnings are not used. 618 + add configure check for -fPIC option needed for shared libraries. 619 + let configure --disable-ext-funcs override the default for the 620 --enable-sp-funcs option. 
621 + mark some structs in form/menu/panel libraries as potentially opaque 622 without modifying API/ABI. 623 + add configure option --enable-opaque-curses for ncurses library and 624 similar options for the other libraries. 625 626 20170204 627 + trim newlines, tabs and escaped newlines from terminfo "paths" passed 628 to db-iterator. 629 + ignore zero-length files in db-iterator; these are useful for 630 instance to suppress "$HOME/.terminfo" when not wanted. 631 + amended "b64:" encoder to work with the terminfo reader. 632 + modify terminfo reader to accept "b64:" format using RFC-3548 in 633 as well as RFC-4648 url/filename-safe format. 634 + modify terminfo reader to accept "hex:" format as generated by 635 "infocmp -0qQ1" (cf: 20150905). 636 + adjust authors comment to reflect drop below 1% for SV. 637 638 20170128 639 + minor comment-fixes to help automate links to bug-urls -TD 640 + add dvtm, dvtm-256color -TD 641 + add settings corresponding to xterm-keys option to tmux entry to 642 reflect upcoming change to make that option "on" by default 643 (patch by Nicholas Marriott). 644 + uncancel Ms in tmux entry (Harry Gindi, Nicholas Marriott). 645 + add dumb-emacs-ansi -TD 646 647 20170121 648 + improve discussion of early history of tput program. 649 + incorporate A_COLOR mask into COLOR_PAIR(), in case user application 650 provides an out-of-range pair number (report by Elijah Stone). 651 + clarify description in tput manual page regarding support for 652 termcap names (prompted by FreeBSD #214709). 653 + remove a restriction in tput's support for termcap names which 654 omitted capabilities normally not shown in termcap translations 655 (cf: 990123). 656 + modify configure script for clang as used on FreeBSD, to work around 657 clang's differences in exit codes vs gcc. 658 659 20170114 660 + improve discussion of early history of tset/reset programs. 
661 + clarify in manual pages that the optional verbose option level is 662 available only when ncurses is configured for tracing. 663 + amend change from 20161231 to avoid writing traces to the standard 664 error after initializing the trace feature using the environment 665 variable. 666 667 20170107 668 + amend changes for tput to reset tty modes to "sane" if the program 669 is run as "reset", like tset. Likewise, ensure that tset sends 670 either reset- or init-strings. 671 + improve manual page descriptions of tput init/reset and tset/reset, 672 to make it easier to see how they are similar and different. 673 + move a static result from key_name() to _nc_globals 674 + modify _nc_get_screensize to allow for use_env() and use_tioctl() 675 state to be per-screen when sp-funcs are configured, better matching 676 the behavior when using the term-driver configuration. 677 + improve cross-references in manual pages for often used functions 678 + move SCREEN field for use_tioctl() data before the ncursesw fields, 679 and limit that to the sp-funcs configuration to improve termlib 680 compatibility (cf: 20120714). 681 + correct order of initialization for traces in use_env() and 682 use_tioctl() versus first trace calls. 683 684 20161231 685 + fix errata for ncurses-howto (report by Damien Ruscoe). 686 + fix a few places in configure/build scripts where DESTDIR and rpath 687 were combined (report by Thomas Klausner). 688 + merge current st description (report by Harry Gindi) -TD 689 + modify flash capability for linux and wyse entries to put the delay 690 between the reverse/normal escapes rather than after -TD 691 + modify program tabs to pass the actual tty file descriptor to 692 setupterm rather than the standard output, making padding work 693 consistently. 694 + explain in clear's manual page that it writes to stdout. 695 + add special case for verbose debugging traces of command-line 696 utilities which write to stderr (cf: 20161126). 
  + remove a trace with literal escapes from skip_DECSCNM(), added in 20161203.
  + update config.guess, config.sub from

20161224
  + correct parameters for copywin call in _nc_Synchronize_Attributes() (patch by Leon Winter).
  + improve color-handling section in terminfo manual page (prompted by patch by Mihail Konev).
  + modify programs clear, tput and tset to pass the actual tty file descriptor to setupterm rather than the standard output, making padding work.

20161217
  + add tput-colorcube demo script.
  + add -r and -s options to tput-initc demo, to match usage in xterm.
  + flush the standard output in _nc_flush for the case where SP is zero, e.g., when called via putp. This fixes a scenario where "tput flash" did not work after changes in 20130112.

20161210
  + add configure script option --disable-wattr-macros for use in cases where one wants to use the same headers for ncurses5/ncurses6 development, by suppressing the wattr* macros which differ due to the introduction of extended colors (prompted by comments in Debian #230990, Redhat #1270534).
  + add test/tput-initc to demonstrate tput used to initialize palette from a data file.
  + modify test/xterm*.dat to use the newer color4/color12 values.

20161203
  + improve discussion of field validation in form_driver.3x manual page.
  + update curs_trace.3x manual page.

20161126
  + modify linux-16color to not mask dim, standout or reverse with the ncv capability -TD
  + add 0.1sec mandatory delay to flash capabilities using the VT100 reverse-video control -TD
  + omit selection of ISO-8859-1 for G0 in enacs capability from linux2.6 entry, to avoid conflict with the user-defined mapping. The reset feature will use ISO-8859-1 in any case (Mikulas Patocka).
  + improve check in tic for delays by also warning about beep/flash when a delay is not embedded, or if those use the VT100 reverse video escape without using a delay.
  + minor fix for syntax-check of delays from 20161119 changes.
  + modify trace() to avoid overwriting existing file (report by Maor Shwartz).

20161119
  + add check in tic for some syntax errors of delays, as well as use of proportional delays for non-line capabilities.
  + document history of the clear program and the E3 extension, prompted by various discussions including

20161112
  + improve -W option in tic/infocmp:
    + correct order of size-adjustments in wrapped lines
    + if -f option splits line, do not further split it with -W
    + begin a new line when adding "use=" after a wrapped line

20161105
  + fix typo in man/terminfo.tail (Alain Williams).
  + correct program-name in adacurses6-config.1 manual page.

20161029
  + add new function "unfocus_current_field" (Leon Winter)

20161022
  + modify tset -w (and tput reset) to update the program's copy of the screensize if it was already set in the system, to improve tabstop setting which relies upon knowing the actual screensize.
  + add functionality of tset -w to tput; like the "-c" feature, this is not optional in tput.
  + add "clear" as a possible link/alias to tput.
  + improve tput's check for being called as "init" or "reset" to allow for transformed names.
  + split-out the "clear" function from progs/clear.c, share with tput to get the same behavior, e.g., the E3 extension.

20161015
  + amend internal use of tputs to consistently use the number of lines affected, e.g., for insert/delete character operations. While merging terminfo source early in 1995, several descriptions used the "*" proportional delay for these operations, prompting a change in doupdate.
  + regenerate llib-* files.
  + regenerate HTML manpages.
  + fix several formatting issues with manual pages.

20161008
  + adjust size in infocmp/tic to work with strlcpy.
  + fix configure script to record when strlcat is found on OpenBSD.
  + build-fix for "recent" OpenBSD vs baudrate.

20161001
  + add -W option to tic/infocmp to force long strings to wrap. This is in addition to the -w option which attempts to fit capabilities into a given line-length.
  + add linux-m1 minitel entries (patch by Alexandre Montaron).
  + correct rs2 string for vt100-nam -TD

20160924
  + modify _nc_tic_expand to escape comma if it immediately follows a percent sign, to work with minitel change.
  + updated minitel and viewdata descriptions (Alexandre Montaron).

20160917
  + build-fix for gnat6, which unhelpfully attempts to compile C files.
  + fix typo in 20160910 changes (Debian #837892, patch by Sven Joachim).

20160910
  + trim dead code ifdef'd with HIDE_EINTR since 970830 (discussion with Leon Winter).
  + trim some obsolete/incorrect wording about EINTR from wgetch manual page (patch by Leon Winter).
  + really correct 20100515 change (patch by Rich Coe).
  + add "--enable-string-hacks" option to test/configure
  + completed string-hacks for "sprintf", etc., including test-programs.
  + make "--enable-string-hacks" work with Debian by checking for the "bsd" library and its associated "<bsd/string.h>" header.

20160903
  + correct 20100515 change for weak signals versus sigprocmask (report by Rich Coe).
  + modify misc/Makefile.in to work around OpenBSD "make" which, unlike all other versions of "make", does not recognize continuation lines of comments.
  + amend the last change to CF_C_ENV_FLAGS to move only the preprocessor, optimization and warning flags to CPPFLAGS and CFLAGS, leaving the residue in CC. That happens to work for gcc's various "model" options, but may require tuning for other compilers (report by Sven Joachim).

20160827
  + add "v" menu entry to test/ncurses.c to show baudrate and other values.
  + add "newer" baudrate symbols from Linux and FreeBSD to progs/tset.c, lib_baudrate.c
  + modify CF_XOPEN_SOURCE macro:
    + add "uclinux" to case for "linux" (patch by Yann E. Morin)
    + modify _GNU_SOURCE for cygwin headers, tested with cygwin 2.3, 2.5 (patch by Corinna Vinschen, from changes to tin).
  + improve CF_CC_ENV_FLAGS macro to allow for compiler wrappers such as "ccache" (report by Enrico Scholz).
  + update config.guess, config.sub from

20160820
  + update tput manual page to reflect changes to manipulate terminal modes by sharing functions with tset.
  + add the terminal-mode parts of "reset" (aka tset) to the "tput reset" command, making the two almost the same except for window-size.
  + adapt logic used in dialog "--keep-tite" option for test/filter.c as "-a" option. When set, test/filter attempts to suppress the alternate screen.
  + correct a typo in interix entry -TD

20160813
  + add a dependency upon generated-sources in Ada95/src/Makefile.in to handle a case of "configure && make install".
  + trim trailing blanks from include/Caps*, to work around a problem in sed (Debian #818067).

20160806
  + improve CF_GNU_SOURCE configure macro to optionally define _DEFAULT_SOURCE, to work around a nuisance in recent glibc releases.
  + move the terminfo-specific parts of tput's "reset" function into the shared reset_cmd.c, making the two forms of reset use the same strings.
  + split-out the terminal initialization functions from tset as progs/reset_cmd.c, as part of changes to merge the reset-feature with tput.
20160730
  + change tset's initialization to allow it to get settings from the standard input as well as /dev/tty, to be more effective when output or error are redirected.
  + improve discussion of history and portability for tset/reset/tput manual pages.

20160723
  + improve error message from tset/reset when both stderr/stdout are redirected to a file or pipe.
  + improve organization of curs_attr.3x, curs_color.3x

20160709
  + work around Debian's antique/unmaintained version of mawk when building link_test.
  + improve test/list_keys.c, showing ncurses's convention of modifiers for special keys, based on xterm.

20160702
  + improve test/list_keys.c, using $TERM if no parameters are given.

20160625
  + build-fixes for ncurses "test_progs" rule.
  + amend change to CF_CC_ENV_FLAGS in 20160521 to make multilib build work (report by Sven Joachim).

20160618
  + build-fixes for ncurses-examples with NetBSD curses.
  + improve test/list_keys.c, fixing column-widths and sorting the list to make it more readable.

20160611
  + revise fix for Debian #805618 (report by Vlado Potisk, cf: 20151128).
  + modify test/ncurses.c a/A screens to make exiting on an escape character depend on the start of keypad and timeout modes, to allow better testing of function-keys.
  + modify rs1 for xterm-16color, xterm-88color and xterm-256color to reset palette using "oc" string as in linux -TD
  + use ANSI reply for u8 in xterm-new, to reflect vt220-style responses that could be returned -TD
  + added a few capabilities fixed in recent vte -TD

20160604
  + correct logic for -f option in test/demo_terminfo.c
  + add test/list_keys.c

20160528
  + further workaround for PIE/PIC breakage which causes gpm to not link.
  + fix most cppcheck warnings, mostly style, in ncurses library.
20160521
  + improved manual page description of tset/reset versus window-size.
  + fixes to work with a slightly broken compiler configuration which cannot compile "Hello World!" without adding compiler options (report by Ola x Nilsson):
    + pass appropriate compiler options to the CF_PROG_CC_C_O macro.
    + when separating compiler and options in CF_CC_ENV_FLAGS, ensure that all options are split-off into CFLAGS or CPPFLAGS
    + restore some -I options removed in 20140726 because they appeared to be redundant. In fact, they are needed for a compiler that cannot combine -c and -o options.

20160514
  + regenerate HTML manpages.
  + improve manual pages for wgetch and wget_wch to point out that they might return values without names in curses.h (Debian #822426).
  + make linux3.0 entry the default linux entry (Debian #823658) -TD
  + modify linux2.6 entry to improve line-drawing so that the linux3.0 entry can be used in non-UTF-8 mode -TD
  + document return value of use_extended_names (report by Mike Gran).

20160507
  + amend change to _nc_do_color to restore the early return for the special case used in _nc_screen_wrap (report by Dick Streefland, cf: 20151017).
  + modify test/ncurses.c:
    + check return-value of putwin
    + correct ifdef which made the 'g' test's legend not reflect changes to keypad- and scroll-modes.
    + correct return-value of extended putwin (report by Mike Gran).

20160423
  + modify test/ncurses.c 'd' edit-color menu to optionally read xterm color palette directly from terminal, as well as handling KEY_RESIZE and screen-repainting with control/L and control/R.
  + add 'oc' capability to xterm+256color, allowing palette reset for xterm -TD

20160416
  + add workaround in configure script for inept transition to PIE vs PIC builds documented in

  + add "reset" to list of programs whose names might change in manpages due to program-transformation configure options.
  + drop long-obsolete "-n" option from tset.

20160409
  + modify test/blue.c to use Unicode values for card-glyphs when available, as well as improving the check for CP437 and CP850.

20160402
  + regenerate HTML manpages.
  + improve manual pages for utilities with respect to POSIX versus X/Open Curses.

20160326
  + regenerate HTML manpages.
  + improve test/demo_menus.c, allowing mouse-click on the menu-headers to switch the active menu. This requires a new extension option O_MOUSE_MENU to tell the menu driver to put mouse events which do not apply to the active menu back into the queue so that the application can handle the event.

20160319
  + improve description of tgoto parameters (report by Steffen Nurpmeso).
  + amend workaround for Solaris line-drawing to restore a special case that maps Unicode line-drawing characters into the acsc string for non-Unicode locales (Debian #816888).

20160312
  + modified test/filter.c to illustrate an alternative to getnstr, that polls for input while updating a clock on the right margin as well as responding to window size-changes.

20160305
  + omit a redefinition of "inline" when traces are enabled, since this does not work with gcc 5.3.x MinGW cross-compiling (cf: 20150912).

20160220
  + modify test/configure script to check for pthread dependency of ncursest or ncursestw library when building ncurses examples, e.g., in case weak symbols are used.
  + modify configure macro for shared-library rules to use -Wl,-rpath rather than -rpath to work around a bug in scons (FreeBSD #178732, cf: 20061021).
  + double-width multibyte characters were not counted properly in winsnstr and wins_nwstr (report/example by Eric Pruitt).
  + update config.guess, config.sub from

20160213
  + amend fix for _nc_ripoffline from 20091031 to make test/ditto.c work in threaded configuration.
  + move _nc_tracebits, _tracedump and _tracemouse to curses.priv.h, since they are not part of the suggested ABI6.

20160206
  + define WIN32_LEAN_AND_MEAN for MinGW port, making builds faster.
  + modify test/ditto.c to allow $XTERM_PROG environment variable to override "xterm" as the name of the program to run in the threaded configuration.

20160130
  + improve formatting of man/curs_refresh.3x and man/tset.1 manpages
  + regenerate HTML manpages using newer man2html to eliminate some unwanted blank lines.

20160123
  + ifdef'd header-file definition of mouse_trafo() with NCURSES_NOMACROS (report by Corey Minyard).
  + fix some strict compiler-warnings in traces.

20160116
  + tidy up comments about hardcoded 256color palette (report by Leonardo Brondani Schenkel) -TD
  + add putty-noapp entry, and amend putty entry to use application mode for better consistency with xterm (report by Leonardo Brondani Schenkel) -TD
  + modify _nc_viscbuf2() and _tracecchar_t2() to trace wide-characters as a whole rather than their multibyte equivalents.
  + minor fix in wadd_wchnstr() to ensure that each cell has nonzero width.
  + move PUTC_INIT calls next to wcrtomb calls, to avoid carry-over of error status when processing Unicode values which are not mapped.
20160102
  + modify ncurses c/C color test-screens to take advantage of wide screens, reducing the number of lines used for 88- and 256-colors.
  + minor refinement to check versus ncv to ignore two parameters of SGR 38 and 48 when those come from color-capabilities.

20151226
  + add check in tic for use of bold, etc., video attributes in the color capabilities, accounting whether the feature is listed in ncv.
  + add check in tic for conflict between ritm, rmso, rmul versus sgr0.

20151219
  + add a paragraph to curs_getch.3x discussing key naming (discussion with James Crippen).
  + amend workaround for Solaris vs line-drawing to take the configure check into account.
  + add a configure check for wcwidth() versus the ncurses line-drawing characters, to use in special-casing systems such as Solaris.

20151212
  + improve CF_XOPEN_CURSES macro used in test/configure, to define as needed NCURSES_WIDECHAR for platforms where _XOPEN_SOURCE_EXTENDED does not work. Also modified the test program to ensure that if building with ncurses, the cchar_t type is checked, since that normally is since 20111030 ifdef'd depending on this test.
  + improve 20121222 workaround for broken acs, letting Solaris "work" in spite of its misconfigured wcwidth which marks all of the line drawing characters as double-width.

20151205
  + update form_cursor.3x, form_post.3x, menu_attributes.3x to list function names in NAME section (patch by Jason McIntyre).
  + minor fixes to manpage NAME/SYNOPSIS sections to consistently use the rule that either all functions which are prototyped in SYNOPSIS are listed in the NAME section, or the manual-page name is the sole item listed in the NAME section. The latter is used to reduce clutter, e.g., for the top-level library manual pages as well as for certain feature-pages such as SP-funcs and threading (prompted by patches by Jason McIntyre).

20151128
  + add option to preserve leading whitespace in form fields (patch by Leon Winter).
  + add missing assignment in lib_getch.c to make notimeout() work (Debian #805618).
  + add 't' toggle for notimeout() function in test/ncurses.c a/A screens
  + add viewdata terminal description (Alexandre Montaron).
  + fix a case in tic/infocmp for formatting capabilities where a backslash at the end of a string was mishandled.
  + fix some typos in curs_inopts.3x (Benno Schulenberg).

20151121
  + fix some inconsistencies in the pccon* entries -TD
  + add bold to pccon+sgr+acs and pccon-base (Tati Chevron).
  + add keys f12-f124 to pccon+keys (Tati Chevron).
  + add test/test_sgr.c program to exercise all combinations of sgr.

20151107
  + modify tset's assignment to TERM in its output to reflect the name by which the terminal description is found, rather than the primary name. That was an unnecessary part from the initial conversion of tset from termcap to terminfo. The termcap program in 4.3BSD did this to avoid using the short 2-character name (report by Rich Burridge).
  + minor fix to configure script to ensure that rules for resulting.map are only generated when needed (cf: 20151101).
  + modify configure script to handle the case where tic-library is renamed, but the --with-debug option is used by itself without normal or shared libraries (prompted by comment in Debian #803482).
20151101
  + amend change for pkg-config which allows build of pc-files when no valid pkg-config library directory was configured, to suppress the actual install if it is not overridden to a valid directory at install time (cf: 20150822).
  + modify editing script which generates resulting.map to work with the clang configuration on recent FreeBSD, which gives an error on an empty "local" section.
  + fix a spurious "(Part)" message in test/ncurses.c b/B tests due to incorrect attribute-masking.

20151024
  + modify MKexpanded.c to update the expansion of a temporary filename to "expanded.c", for use in trace statements.
  + modify layout of b/B tests in test/ncurses.c to allow for additional annotation on the right margin; some terminals with partial support did not display well.
  + fix typo in curs_attr.3x (patch by Sven Joachim).
  + fix typo in INSTALL (patch by Tomas Cech).
  + improve configure check for setting WILDCARD_SYMS variable; on ppc64 the variable is in the Data section rather than Text (patch by Michel Normand, Novell #946048).
  + using configure option "--without-fallbacks" incorrectly caused FALLBACK_LIST to be set to "no" (patch by Tomas Cech).
  + updated minitel entries to fix kel problem with emacs, and add minitel1b-nb (Alexandre Montaron).
  + reviewed/updated nsterm entry for Terminal.app in OSX -TD
  + replace some dead URLs in comments with equivalents from the Internet Archive -TD
  + update config.guess, config.sub from

20151017
  + modify ncurses/Makefile.in to sort keys.list in POSIX locale (Debian #801864, patch by Esa Peuha).
  + remove an early-return from _nc_do_color, which can interfere with data needed by bkgd when ncurses is configured with extended colors (patch by Denis Tikhomirov).
  > fixes for OS/2 (patches by KO Myung-Hun)
    + use button instead of kbuf[0] in EMX-specific part of lib_mouse.c
    + support building with libtool on OS/2
    + use stdc++ on OS/2 kLIBC
    + clear cf_XOPEN_SOURCE on OS/2

20151010
  + add configure check for openpty to test/configure script, for ditto.
  + minor fixes to test/view.c in investigating Debian #790847.
  + update autoconf patch to 2.52.20150926, incorporates a fix for Cdk.
  + add workaround for breakage of POSIX makefiles by recent binutils change.
  + improve check for working poll() by using posix_openpt() as a fallback in case there is no valid terminal on the standard input (prompted by discussion on bug-ncurses mailing list, Debian #676461).

20150926
  + change makefile rule for removing resulting.map to distclean rather than clean.
  + add /lib/terminfo to terminfo-dirs in ".deb" test-package.
  + add note on portability of resizeterm and wresize to manual pages.

20150919
  + clarify in resizeterm.3x how KEY_RESIZE is pushed onto the input stream.
  + clarify in curs_getch.3x that the keypad mode affects ability to read KEY_MOUSE codes, but does not affect KEY_RESIZE.
  + add overlooked build-fix needed with Cygwin for separate Ada95 configure script, cf: 20150606 (report by Nicolas Boulenguez)

20150912
  + fixes for configure/build using clang on OSX (prompted by report by William Gallafent):
    + do not redefine "inline" in ncurses_cfg.h; this was originally to solve a problem with gcc/g++, but is aggravated by clang's misuse of symbols to pretend it is gcc.
    + add braces to configure script to prevent unwanted add of "-lstdc++" to the CXXLIBS symbol.
    + improve/update test-program used for checking existence of stdc++ library.
    + if $CXXLIBS is set, the linkage test uses that in addition to $LIBS

20150905
  + add note in curs_addch.3x about line-drawing when it depends upon UTF-8.
  + add tic -q option for consistency with infocmp, use it to suppress all comments from the "tic -I" output.
  + modify infocmp -q option to suppress the "Reconstructed from" header.
  + add infocmp/tic -Q option, which allows one to dump the compiled form of the terminal entry, in hexadecimal or base64.

20150822
  + sort options in usage message for infocmp, to make it simpler to see unused letters.
  + update usage message for tic, adding "-0" option.
  + documented differences in ESCDELAY versus AIX's implementation.
  + fix some compiler warnings from ports.
  + modify --with-pkg-config-libdir option to make it possible to install ".pc" files even if pkg-config is not found (adapted from patch by Joshua Root).

20150815
  + disallow "no" as a possible value for "--with-shlib-version" option, overlooked in cleanup-changes for 20000708 (report by Tommy Alex).
  + update release notes in INSTALL.
  + regenerate llib-* files to help with review for release notes.

20150810
  + workaround for Debian #65617, which was fixed in mawk's upstream releases in 2009 (report by Sven Joachim). See

20150808 6.0 release for upload to

20150808
  + build-fix for Ada95 on older platforms without stdint.h
  + build-fix for Solaris, whose /bin/sh and /usr/bin/sed are non-POSIX.
  + update release announcement, summarizing more than 800 changes across more than 200 snapshots.
  + minor fixes to manpages, etc., to simplify linking from announcement page.

20150725
  + updated llib-* files.
  + build-fixes for ncurses library "test_progs" rule.
  + use alternate workaround for gcc 5.x feature (adapted from patch by Mikhail Peselnik).
  + add status line to tmux via xterm+sl (patch by Nicholas Marriott).
  + fixes for st 0.5 from testing with tack -TD
  + review/improve several manual pages to break up wall-of-text: curs_add_wch.3x, curs_attr.3x, curs_bkgd.3x, curs_bkgrnd.3x, curs_getcchar.3x, curs_getch.3x, curs_kernel.3x, curs_mouse.3x, curs_outopts.3x, curs_overlay.3x, curs_pad.3x, curs_termattrs.3x, curs_trace.3x, and curs_window.3x

20150719
  + correct an old logic error for %A and %O in tparm (report by "zreed").
  + improve documentation for signal handlers by adding a section in the curs_initscr.3x page.
  + modify logic in make_keys.c to not assume anything about the size of strnames and strfnames variables, since those may be functions in the thread- or broken-linker configurations (problem found by Coverity).
  + modify test/configure script to check for pthreads configuration, e.g., ncursestw library.

20150711
  + modify scripts to build/use test-packages for the pthreads configuration of ncurses6.
  + add references to ttytype and termcap symbols in demo_terminfo.c and demo_termcap.c to ensure that when building ncursest.map, etc., the corresponding names such as _nc_ttytype are added to the list of versioned symbols (report by Werner Fink)
  + fix regression from 20150704 (report/patch by Werner Fink).

20150704
  + fix a few problems reported by Coverity.
  + fix comparison against "/usr/include" in misc/gen-pkgconfig.in (report by Daiki Ueno, Debian #790548, cf: 20141213).

20150627
  + modify configure script to remove deprecated ABI 5 symbols when building ABI 6.
  + add symbols _nc_Default_Field, _nc_Default_Form, _nc_has_mouse to map-files, but marked as deprecated so that they can easily be suppressed from ABI 6 builds (Debian #788610).
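Deprecated symbols such as those named above are typically isolated in their own node of the generated linker version-script so an ABI 6 build can drop the node wholesale. This is a hypothetical excerpt in the style of ncurses' generated resulting.map; the version-node name is invented, only the three symbol names come from the entry above:

```
NCURSES_TINFO_5.9.DEPRECATED {
	global:
		_nc_Default_Field;
		_nc_Default_Form;
		_nc_has_mouse;
};
```

Grouping them this way keeps the deprecation decision in one place instead of scattering per-symbol conditionals through the script that builds the map.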
  + comment-out "screen.xterm" entry, and inherit screen.xterm-256color from xterm-new (report by Richard Birkett) -TD
  + modify read_entry.c to set the error-return to -1 if no terminal databases were found, as documented for setupterm.
  + add test_setupterm.c to demonstrate normal/error returns from the setupterm and restartterm functions.
  + amend cleanup change from 20110813 which removed redundant definition of ret_error, etc., from tinfo_driver.c, to account for the fact that it should return a bool rather than int (report/analysis by Johannes Schindelin).

20150613
  + fix overflow warning for OSX with lib_baudrate.c (cf: 20010630).
  + modify script used to generate map/sym files to mark 5.9.20150530 as the last "5.9" version, and regenerated the files. That makes the files not use ".current" for the post-5.9 symbols. This also corrects the label for _nc_sigprocmask used when weak symbols are configured for the ncursest/ncursestw libraries (prompted by discussion with Sven Joachim).
  + fix typo in NEWS (report by Sven Joachim).

20150606 pre-release
  + make ABI 6 the default by updates to dist.mk and VERSION, with the intention that the existing ABI 5 should build as before using the "--with-abi-version=5" option.
  + regenerate ada- and man-html documentation.
  + minor fixes to color- and util-manpages.
  + fix a regression in Ada95/gen/Makefile.in, to handle the special case of Cygwin, which uses the broken-linker feature.
  + amend fix for CF_NCURSES_CONFIG used in test/configure to assume that ncurses package scripts work when present for cross-compiling, as the lesser of two evils (cf: 20150530).
  + add check in configure script to disallow conflicting options "--with-termlib" and "--enable-term-driver".
  + move defaults for "--disable-lp64" and "--with-versioned-syms" into CF_ABI_DEFAULTS macro.
20150530
  + change private type for Event_Mask in Ada95 binding to work when mmask_t is set to 32-bits.
  + remove spurious "%;" from st entry (report by Daniel Pitts) -TD
  + add vte-2014, update vte to use that -TD
  + modify tic and infocmp to "move" a diagnostic for tparm strings that have a syntax error to tic's "-c" option (report by Daniel Pitts).
  + fix two problems with configure script macros (Debian #786436, cf: 20150425, cf: 20100529).

20150523
  + add 'P' menu item to test/ncurses.c, to show pad in color.
  + improve discussion in curs_color.3x about color rendering (prompted by comment on Stack Overflow forum):
  + remove screen-bce.mlterm, since mlterm does not do "bce" -TD
  + add several screen.XXX entries to support the respective variations for 256 colors -TD
  + add putty+fnkeys* building-block entries -TD
  + add smkx/rmkx to capabilities analyzed with infocmp "-i" option.

20150516
  + amend change to ".pc" files to only use the extra loader flags which may have rpath options (report by Sven Joachim, cf: 20150502).
  + change versioning for dpkg's in test-packages for Ada95 and ncurses-examples for consistency with Debian, to work with package updates.
  + regenerate html manpages.
  + clarify handling of carriage return in waddch manual page; it was discussed only in the portability section (prompted by comment on Stack Overflow forum):

20150509
  + add test-packages for cross-compiling ncurses-examples using the MinGW test-packages. These are only the Debian packages; RPM later.
  + cleanup format of debian/copyright files
  + add pc-files to the MinGW cross-compiling test-packages.
  + correct a couple of places in gen-pkgconfig.in to handle renaming of the tinfo library.
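The gen-pkgconfig.in fixes mentioned above shape the ".pc" files that pkg-config consumers see. As a point of reference, a generated file for the wide-character library with a renamed tinfo dependency might look roughly like this; the paths, version string, and library suffixes here are placeholders, not what any particular build emits:

```
prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: ncursesw
Description: ncurses library (wide-character)
Version: 6.0
Libs: -L${libdir} -lncursesw -ltinfow
Cflags: -I${includedir}
```

Listing -ltinfow explicitly in Libs matters when the linker is run with "--as-needed" or when no rpath is baked in, since the dependency can no longer be resolved implicitly through the ncurses shared object.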
20150502
  + modify the configure script to allow different default values for ABI 5 versus ABI 6.
  + add wgetch-events to test-packages.
  + add a note on how to build ncurses-examples to test/README.
  + fix a memory leak in delscreen (report by Daniel Kahn Gillmor, Debian #783486) -TD
  + remove unnecessary ';' from E3 capabilities -TD
  + add tmux entry, derived from screen (patch by Nicholas Marriott).
  + split-out recent change to nsterm-bce as nsterm-build326, and add nsterm-build342 to reflect changes with successive releases of OSX (discussion with Leonardo B Schenkel)
  + add xon, ich1, il1 to ibm3161 (patch by Stephen Powell, Debian #783806)
  + add sample "magic" file, to document ext-putwin.
  + modify gen-pkgconfig.in to add explicit -ltinfo, etc., to the generated ".pc" file when ld option "--as-needed" is used, or when ncurses and tinfo are installed without using rpath (prompted by discussion with Sylvain Bertrand).
  + modify test-package for ncurses6 to omit rpath feature when installed in /usr.
  + add OSX's "*.dSYM" to clean-rules in makefiles.
  + make extra-suffix work for OSX configuration, e.g., for shared libraries.
  + modify Ada95/configure script to work with pkg-config
  + move test-package for ncurses6 to /usr, since filename-conflicts have been eliminated.
  + corrected build rules for Ada95/gen/generate; it does not depend on the ncurses library aside from headers.
  + reviewed man pages, fixed a few other spelling errors.
  + fix a typo in curs_util.3x (Sven Joachim).
  + use extra-suffix in some overlooked shared library dependencies found by 20150425 changes for test-packages.
  + update config.guess, config.sub from

20150425
  + expanded description of tgetstr's area pointer in manual page (report by Todd M Lewis).
	+ in-progress changes to modify test-packages to use ncursesw6 rather
	  than ncursesw, with updated configure scripts.
	+ modify CF_NCURSES_CONFIG in Ada95- and test-configure scripts to
	  check for ".pc" files via pkg-config, but add a linkage check since
	  frequently pkg-config configurations are broken.
	+ modify misc/gen-pkgconfig.in to include EXTRA_LDFLAGS, e.g., for the
	  rpath option.
	+ add 'dim' capability to screen entry (report by Leonardo B Schenkel)
	+ add several key definitions to nsterm-bce to match preconfigured
	  keys, e.g., with OSX 10.9 and 10.10 (report by Leonardo B Schenkel)
	+ fix repeated "extra-suffix" in ncurses-config.in (cf: 20150418).
	+ improve term_variables manual page, adding section on the terminfo
	  long-name symbols which are defined in the term.h header.
	+ fix bug in lib_tracebits.c introduced in const-fixes (cf: 20150404).

20150418
	+ avoid a blank line in output from tabs program by ending it with
	  a carriage return as done in FreeBSD (patch by James Clarke).
	+ build-fix for the "--enable-ext-putwin" feature when not using
	  wide characters (report by Werner Fink).
	+ modify autoconf macros to use scripting improvement from xterm.
	+ add -brtl option to compiler options on AIX 5-7, needed to link
	  with the shared libraries.
	+ add --with-extra-suffix option to help with installing nonconflicting
	  ncurses6 packages, e.g., avoiding header- and library-conflicts.
	  NOTE: as a side-effect, this renames
		adacurses-config to adacurses5-config and
		adacursesw-config to adacursesw5-config
	+ modify debian/rules test package to suffix programs with "6".
	+ clarify in curs_inopts.3x that window-specific settings do not
	  inherit into new windows.

20150404
	+ improve description of start_color() in the manual.
	+ modify several files in ncurses- and progs-directories to allow
	  const data used in internal tables to be put by the linker into the
	  readonly text segment.

20150329
	+ correct cut/paste error for "--enable-ext-putwin" that made it the
	  same as "--enable-ext-colors" (report by Roumen Petrov)

20150328
	+ add "-f" option to test/savescreen.c to help with testing/debugging
	  the extended putwin/getwin.
	+ add logic for writing/reading combining characters in the extended
	  putwin/getwin.
	+ add "--enable-ext-putwin" configure option to turn on the extended
	  putwin/getwin.

20150321
	+ in-progress changes to provide an extended version of putwin and
	  getwin which will be capable of reading screen-dumps between the
	  wide/normal ncurses configurations.  These are text files, except
	  for a magic code at the beginning:
		0	string		\210\210	Screen-dump (ncurses)

20150307
	+ document limitations of getwin in manual page (prompted by discussion
	  with John S Urban).
	+ extend test/savescreen.c to demonstrate that color pair values
	  and graphic characters can be restored using getwin.

20150228
	+ modify win_driver.c to eliminate the constructor, to make it more
	  usable in an application which may/may not need the console window
	  (report by Grady Martin).

20150221
	+ capture define's related to -D_XOPEN_SOURCE from the configure check
	  and add those to the *-config and *.pc files, to simplify use for
	  the wide-character libraries.
	+ modify ncurses.spec to accommodate Fedora21's location of pkg-config
	  directory.
	+ correct sense of "--disable-lib-suffixes" configure option (report
	  by Nicolas Boos, cf: 20140426).

20150214
	+ regenerate html manpages using improved man2html from work on xterm.
	+ regenerated ".map" and ".sym" files using improved script, accounting
	  for the "--enable-weak-symbols" configure option (report by Werner
	  Fink).

20150131
	+ regenerated ".map" and ".sym" files using improved script, showing
	  the combinations of configure options used at each stage.

20150124
	+ add configure check to determine if "local: _*;" can be used in the
	  ".map" files to selectively omit symbols beginning with "_".  On at
	  least recent FreeBSD, the wildcard applies to all "_" symbols.
	+ remove obsolete/conflicting rule for ncurses.map from
	  ncurses/Makefile.in (cf: 20130706).

20150117
	+ improve description in INSTALL of the --with-versioned-syms option.
	+ add combination of --with-hashed-db and --with-ticlib to
	  configurations for ".map" files (report by Werner Fink).

20150110
	+ add a step to generating ".map" files, to declare any remaining
	  symbols beginning with "_" as local, at the last version node.
	+ improve configure checks for pkg-config, addressing a variant found
	  with FreeBSD ports.
	+ modify win_driver.c to provide characters for special keys, like
	  ansi.sys, when keypad mode is off, rather than returning nothing at
	  all (discussion with Eli Zaretskii).
	+ add "broken_linker" and "hashed-db" configure options to combinations
	  use for generating the ".map" and ".sym" files.
	+ avoid using "ld" directly when creating shared library, to simplify
	  cross-compiles.  Also drop "-Bsharable" option from shared-library
	  rules for FreeBSD and DragonFly (FreeBSD #196592).
	+ fix a memory leak in form library Free_RegularExpression_Type()
	  (report by Pavel Balaev).

20150103
	+ modify _nc_flush() to retry if interrupted (patch by Stian Skjelstad).
	+ change map files to make _nc_freeall a global, since it may be used
	  via the Ada95 binding when checking for memory leaks.
	+ improve sed script used in 20141220 to account for wide-, threaded-
	  variations in ABI 6.

20141227
	+ regenerate ".map" files, using step overlooked in 20141213 to use
	  the same patch-dates across each file to match ncurses.map (report by
	  Sven Joachim).

20141221
	+ fix an incorrect variable assignment in 20141220 changes (report by
	  Sven Joachim).

20141220
	+ updated Ada95/configure with macro changes from 20141213
	+ tie configure options --with-abi-version and --with-versioned-syms
	  together, so that ABI 6 libraries have distinct symbol versions from
	  the ABI 5 libraries.
	+ replace obsolete/nonworking link to man2html with current one,
	  regenerate html-manpages.

20141213
	+ modify misc/gen-pkgconfig.in to add -I option for include-directory
	  when using both --prefix and --disable-overwrite (report by Misty
	  De Meo).
	+ add configure option --with-pc-suffix to allow minor renaming of
	  ".pc" files and the corresponding library.  Use this in the test
	  package for ncurses6.
	+ modify configure script so that if pkg-config is not installed, it
	  is still possible to install ".pc" files (report by Misty De Meo).
	+ updated ".sym" files, removing symbols which are marked as "local"
	  in the corresponding ".map" files.
	+ updated ".map" files to reflect move of comp_captab and comp_hash
	  from tic-library to tinfo-library in 20090711 (report by Sven
	  Joachim).

20141206
	+ updated ".map" files so that each symbol that may be shared across
	  the different library configurations has the same label.  Some
	  review is needed to ensure these are really compatible.
	+ modify MKlib_gen.sh to work around change in development version of
	  gcc introduced here:

	  (reports by Marcus Shawcroft, Maohui Lei).
	+ improved configure macro CF_SUBDIR_PATH, from lynx changes.
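Several of the entries above concern the ".map" files used for symbol versioning.  For readers unfamiliar with the format, a GNU ld version script looks roughly like the following invented, much-simplified example; the real generated files list hundreds of symbols per version node, and — as the 20150110 entry describes — the "local: _*;" wildcard that hides internal "_" symbols belongs at the last version node:

```
NCURSES_5.9 {
	global:
		initscr;
		newwin;
};

NCURSES_6.0 {
	global:
		wgetdelay;
	local:
		_*;	/* declare remaining "_" symbols local */
} NCURSES_5.9;
```

The trailing "NCURSES_5.9" on the second node records that NCURSES_6.0 inherits from it, which is how incompatible ABIs keep distinct symbol versions (cf. the 20141220 entry tying --with-abi-version to --with-versioned-syms).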

20141129
	+ improved ".map" files by generating them with a script that builds
	  ncurses with several related configurations and merges the results.
	  A further refinement is planned, to make the tic- and tinfo-library
	  symbols use the same versions across each of the four configurations
	  which are represented (reports by Sven Joachim, Werner Fink).

20141115
	+ improve description of limits for color values and color pairs in
	  curs_color.3x (prompted by patch by Tim van der Molen).
	+ add VERSION file, using first field in that to record the ABI version
	  used for configure --with-libtool --disable-libtool-version
	+ add configure options for applying the ".map" and ".sym" files to
	  the ncurses, form, menu and panel libraries.
	+ add ".map" and ".sym" files to show exported symbols, e.g., for
	  symbol-versioning.

20141101
	+ improve strict compiler-warnings by adding a cast in TRACE_RETURN
	  and making a new TRACE_RETURN1 macro for cases where the cast does
	  not apply.

20141025
	+ in-progress changes to integrate the win32 console driver with the
	  msys2 configuration.

20141018
	+ reviewed terminology 0.6.1, add function key definitions.  None of
	  the vt100-compatibility issues were improved -TD
	+ improve infocmp conversion of extended capabilities to termcap by
	  correcting the limit check against parametrized[], as well as filling
	  in a check if the string happens to have parameters, e.g., "xm"
	  in recent changes.
	+ add check for zero/negative dimensions for resizeterm and resize_term
	  (report by Mike Gran).

20141011
	+ add experimental support for xterm's 1005 mouse mode, to use in a
	  demonstration of its limitations.
	+ add experimental support for "%u" format to terminfo.
	+ modify test/ncurses.c to also show position reports in 'a' test.
	+ minor formatting fixes to _nc_trace_mmask_t, make this function
	  exported to help with debugging mouse changes.
	+ improve behavior of wheel-mice for xterm protocol, noting that there
	  are only button-presses for buttons "4" and "5", so there is no need
	  to wait to combine events into double-clicks (report/analysis by
	  Greg Field).
	+ provide examples xterm-1005 and xterm-1006 terminfo entries -TD
	+ implement decoder for xterm SGR 1006 mouse mode.

20140927
	+ implement curs_set in win_driver.c
	+ implement flash in win_driver.c
	+ fix an infinite loop in win_driver.c if the command-window loses
	  focus.
	+ improve the non-buffered mode, i.e., NCURSES_CONSOLE2, of
	  win_driver.c by temporarily changing the buffer-size to match the
	  window-size to eliminate the scrollback.  Also enforce a minimum
	  screen-size of 24x80 in the non-buffered mode.
	+ modify generated misc/Makefile to suppress install.data from the
	  dependencies if the --disable-db-install option is used, compensating
	  for the top-level makefile changes used to add ncurses*-config in the
	  20140920 changes (report by Steven Honeyman).

20140920
	+ add ncurses*-config to bin-directory of sample package-scripts.
	+ add check to ensure that getopt is available; this is a problem in
	  some older cross-compiler environments.
	+ expanded on the description of --disable-overwrite in INSTALL
	  (prompted by reports by Joakim Tjernlund, Thomas Klausner).
	  See Gentoo #522586 and NetBSD #49200 for examples.
	  which relates to the clarified guidelines.
	+ remove special logic from CF_INCLUDE_DIRS which adds the directory
	  for the --includedir from the build (report by Joakim Tjernlund).
	+ add case for Unixware to CF_XOPEN_SOURCE, from lynx changes.
	+ update config.sub from

20140913
	+ add a configure check to ignore some of the plethora of non-working
	  C++ cross-compilers.
	+ build-fixes for Ada95 with gnat 4.9

20140906
	+ build-fix and other improvements for port of ncurses-examples to
	  NetBSD.
	+ minor compiler-warning fixes.

20140831
	+ modify test/demo_termcap.c and test/demo_terminfo.c to make their
	  options more directly comparable, and add "-i" option to specify
	  a terminal description filename to parse for names to lookup.

20140823
	+ fix special case where double-width character overwrites a single-
	  width character in the first column (report by Egmont Koblinger,
	  cf: 20050813).

20140816
	+ fix colors in ncurses 'b' test which did not work after changing
	  it to put the test-strings in subwindows (cf: 20140705).
	+ merge redundant SEE-ALSO sections in form and menu manpages.

20140809
	+ modify declarations for user-data pointers in C++ binding to use
	  reinterpret_cast to facilitate converting typed pointers to void*
	  in user's application (patch by Adam Jiang).
	+ regenerated html manpages.
	+ add note regarding cause and effect for TERM in ncurses manpage,
	  having noted clueless verbiage in Terminal.app's "help" file
	  which reverses cause/effect.
	+ remove special fallback definition for NCURSES_ATTR_T, since macros
	  have resolved type-mismatches using casts (cf: 970412).
	+ fixes for win_driver.c:
	  + handle repainting on endwin/refresh combination.
	  + implement beep().
	  + minor cleanup.

20140802
	+ minor portability fixes for MinGW:
	  + ensure WINVER is defined in makefiles rather than using headers
	  + add check for gnatprep "-T" option
	  + work around bug introduced by gcc 4.8.1 in MinGW which breaks
	    "trace" feature:

	+ fix most compiler warnings for Cygwin ncurses-examples.
	+ restore "redundant" -I options in test/Makefile.in, since they are
	  typically needed when building the derived ncurses-examples package
	  (cf: 20140726).

20140726
	+ eliminate some redundant -I options used for building libraries, and
	  ensure that ${srcdir} is added to the include-options (prompted by
	  discussion with Paul Gilmartin).
	+ modify configure script to work with Minix3.2
	+ add form library extension O_DYNAMIC_JUSTIFY option which can be
	  used to override the different treatment of justification for static
	  versus dynamic fields (adapted from patch by Leon Winter).
	+ add a null pointer check in test/edit_field.c (report/analysis by
	  Leon Winter, cf: 20130608).

20140719
	+ make workarounds for compiling test-programs with NetBSD curses.
	+ improve configure macro CF_ADD_LIBS, to eliminate repeated -l/-L
	  options, from xterm changes.

20140712
	+ correct Charable() macro check for A_ALTCHARSET in wide-characters.
	+ build-fix for position-debug code in tty_update.c, to work with or
	  without sp-funcs.

20140705
	+ add w/W toggle to ncurses.c 'B' test, to demonstrate permutation of
	  video-attributes and colors with double-width character strings.

20140629
	+ correct check in win_driver.c for saving screen contents, e.g., when
	  NCURSES_CONSOLE2 is set (cf: 20140503).
	+ reorganize b/B menu items in ncurses.c, putting the test-strings into
	  subwindows.  This is needed for a planned change to use Unicode
	  fullwidth characters in the test-screens.
	+ correct update to form status for _NEWTOP, broken by fixes for
	  compiler warnings (patch by Leon Winter, cf: 20120616).

20140621
	+ change shared-library suffix for AIX 5 and 6 to ".so", avoiding
	  conflict with the static library (report by Ben Lentz).
	+ document RPATH_LIST in INSTALLATION file, as part of workarounds for
	  upgrading an ncurses library using the "--with-shared" option.
	+ modify test/ncurses.c c/C tests to cycle through subsets of the
	  total number of colors, to better illustrate 8/16/88/256-colors by
	  providing directly comparable screens.
	+ add test/dots_curses.c, for comparison with the low-level examples.

20140614
	+ fix dereference before null check found by Coverity in tic.c
	  (cf: 20140524).
	+ fix sign-extension bug in read_entry.c which prevented "toe" from
	  reading empty "screen+italics" entry.
	+ modify sgr for screen.xterm-new to support dim capability -TD
	+ add dim capability to nsterm+7 -TD
	+ cancel dim capability for iterm -TD
	+ add dim, invis capabilities to vte-2012 -TD
	+ add sitm/ritm to konsole-base and mlterm3 -TD

20140609
	> fix regression in screen terminfo entries (reports by Christian
	  Ebert, Gabriele Balducci) -TD
	+ revert the change to screen; see notes for why this did not work -TD
	+ cancel sitm/ritm for entries which extend "screen", to work around
	  screen's hardcoded behavior for SGR 3 -TD

20140607
	+ separate masking for sgr in vidputs from sitm/ritm, which do not
	  overlap with sgr functionality.
	+ remove unneeded -i option from adacurses-config; put -a in the -I
	  option for consistency (patch by Pascal Pignard).
	+ update xterm-new terminfo entry to xterm patch #305 -TD
	+ change format of test-scripts for Debian Ada95 and ncurses-examples
	  packages to quilted to work around Debian #700177 (cf: 20130907).
	+ build fix for form_driver_w.c as part of ncurses-examples package for
	  older ncurses than 20131207.
	+ add Hello World example to adacurses-config manpage.
	+ remove unused --enable-pc-files option from Ada95/configure.
	+ add --disable-gnat-projects option for testing.
	+ revert changes to Ada95 project-files configuration (cf: 20140524).
	+ corrected usage message in adacurses-config.

20140524
	+ fix typo in ncurses manpage for the NCURSES_NO_MAGIC_COOKIE
	  environment variable.
	+ improve discussion of input-echoing in curs_getch.3x
	+ clarify discussion in curs_addch.3x of wrapping.
	+ modify parametrized.h to make fln non-padded.
	+ correct several entries which had termcap-style padding used in
	  terminfo: adm21, aj510, alto-h19, att605-pc, x820 -TD
	+ correct syntax for padding in some entries: dg211, h19 -TD
	+ correct ti924-8 which had confused padding versus octal escapes -TD
	+ correct padding in sbi entry -TD
	+ fix an old bug in the termcap emulation; "%i" was ignored in tparm()
	  because the parameters to be incremented were already on the internal
	  stack (report by Corinna Vinschen).
	+ modify tic's "-c" option to take into account the "-C" option to
	  activate additional checks which compare the results from running
	  tparm() on the terminfo expressions versus the translated termcap
	  expressions.
	+ modify tic to allow it to read from FIFOs (report by Matthieu Fronton,
	  cf: 20120324).
	> patches by Nicolas Boulenguez:
	+ explicit dereferences to suppress some style warnings.
	+ when c_varargs_to_ada.c includes its header, use double quotes
	  instead of <>.
	+ samples/ncurses2-util.adb: removed unused with clause.  The warning
	  was removed by an obsolete pragma.
	+ replaced Unreferenced pragmas with Warnings (Off).  The latter,
	  available with older GNATs, needs no configure test.  This also
	  replaces 3 untested Unreferenced pragmas.
	+ simplified To_C usage in trace handling.  Using two parameters allows
	  some basic formatting, and avoids a warning about security with some
	  compiler flags.
	+ for generated Ada sources, replace many snippets with one pure
	  package.
	+ removed C_Chtype and its conversions.
	+ removed C_AttrType and its conversions.
	+ removed conversions between int, Item_Option_Set, Menu_Option_Set.
	+ removed int, Field_Option_Set, Item_Option_Set conversions.
	+ removed C_TraceType, Attribute_Option_Set conversions.
	+ replaced C.int with direct use of Eti_Error, now enumerated.  As it
	  was used in a case statement, values were tested by the Ada compiler
	  to be consecutive anyway.
	+ src/Makefile.in: remove duplicate stanza
	+ only consider using a project for shared libraries.
	+ style.  Silent gnat-4.9 warning about misplaced "then".
	+ generate shared library project to honor ADAFLAGS, LDFLAGS.

20140510
	+ cleanup recently introduced compiler warnings for MingW port.
	+ workaround for ${MAKEFLAGS} configure check versus GNU make 4.0,
	  which introduces more than one gratuitous incompatibility.

20140503
	+ add vt520ansi terminfo entry (patch by Mike Gran)
	+ further improve MinGW support for the scenario where there is an
	  ANSI-escapes handler such as ansicon running in the console window
	  (patch by Juergen Pfeifer).

20140426
	+ add --disable-lib-suffixes option (adapted from patch by Juergen
	  Pfeifer).
	+ merge some changes from Juergen Pfeifer's work with MSYS2, to
	  simplify later merging:
	  + use NC_ISATTY() macro for isatty() in library
	  + add _nc_mingw_isatty() and related functions to windows-driver
	  + rename terminal driver entrypoints to simplify grep's
	+ remove a check in the sp-funcs flavor of newterm() which allowed only
	  the first call to newterm() to succeed (report by Thomas Beierlein,
	  cf: 20090927).

20140419
	+ update config.guess, config.sub from

20140412
	+ modify configure script:
	  + drop the -no-gcc option from Intel compiler, from lynx changes.
	  + extend the --with-hashed-db configure option to simplify building
	    with different versions of Berkeley database using FreeBSD ports.
	+ improve initialization for MinGW port (Juergen Pfeifer):
	  + enforce Windows-style path-separator if cross-compiling,
	  + add a driver-name method to each of the drivers,
	  + allow the Windows driver name to match "unknown", ignoring case,
	  + lengthen the built-in name for the Windows console driver to
	    "#win32console", and
	  + move the comparison of driver-names allowing abbreviation, e.g.,
	    to "#win32con" into the Windows console driver.

20140329
	+ add check in tic for mismatch between ccc and initp/initc
	+ cancel ccc in putty-256color and konsole-256color for consistency
	  with the cancelled initc capability (patch by Sven Zuhlsdorf).
	+ add xterm+256setaf building block for various terminals which only
	  get the 256-color feature half-implemented -TD
	+ updated "st" entry (leaving the 0.1.1 version as "simpleterm") to
	  0.4.1 -TD

20140323
	+ fix typo in "mlterm" entry (report by Gabriele Balducci) -TD

20140322
	+ use types from <stdint.h> in sample build-scripts for chtype, etc.
	+ modify configure script and curses.h.in to allow the types specified
	  using --with-chtype and related options to be defined in <stdint.h>
	+ add terminology entry -TD
	+ add mlterm3 entry, use that as "mlterm" -TD
	+ inherit mlterm-256color from mlterm -TD

20140315
	+ modify _nc_New_TopRow_and_CurrentItem() to ensure that the menu's
	  top-row is adjusted as needed to ensure that the current item is
	  on the screen (patch by Johann Klammer).
	+ add wgetdelay() to retrieve _delay member of WINDOW if it happens to
	  be opaque, e.g., in the pthread configuration (prompted by patch by
	  Soren Brinkmann).
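The _nc_New_TopRow_and_CurrentItem() change above keeps the current menu item on the screen by adjusting the top row.  The clamping arithmetic involved is roughly the following — a simplified sketch assuming a single-column menu, not the real function, which also deals with hooks and multi-column layouts:

```python
def adjust_top_row(top_row, current_item, visible_rows):
    """Return a top row that keeps current_item inside the visible window."""
    if current_item < top_row:                    # item scrolled off above
        top_row = current_item
    elif current_item >= top_row + visible_rows:  # item scrolled off below
        top_row = current_item - visible_rows + 1
    return top_row
```

For instance, with 4 visible rows and top row 0, moving the current item to index 9 forces the top row down to 6 so the item lands on the last visible line.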

20140308
	+ modify ifdef in read_entry.c to handle the case where
	  NCURSES_USE_DATABASE is not defined (patch by Xin Li).
	+ add cast in form_driver_w() to fix ARM build (patch by Xin Li).
	+ add logic to win_driver.c to save/restore screen contents when not
	  allocating a console-buffer (cf: 20140215).

20140301
	+ clarify error-returns from newwin (report by Ruslan Nabioullin).

20140222
	+ fix some compiler warnings in win_driver.c
	+ updated notes for wsvt25 based on tack and vttest -TD
	+ add teken entry to show actual properties of FreeBSD's "xterm"
	  console -TD

20140215
	+ in-progress changes to win_driver.c to implement output without
	  allocating a console-buffer.  This uses a pre-existing environment
	  variable NCGDB used by Juergen Pfeifer for debugging (prompted by
	  discussion with Erwin Waterlander regarding Console2, which hangs
	  when reading in an allocated console-buffer).
	+ add -t option to gdc.c, and modify to accept "S" to step through the
	  scrolling-stages.
	+ regenerate NCURSES-Programming-HOWTO.html to fix some of the broken
	  html emitted by docbook.

20140209
	+ modify CF_XOPEN_SOURCE macro to omit followup check to determine if
	  _XOPEN_SOURCE can/should be defined.  g++ 4.7.2 built on Solaris 10
	  has some header breakage due to its own predefinition of this symbol
	  (report by Jean-Pierre Flori, Sage #15796).

20140201
	+ add/use symbol NCURSES_PAIRS_T like NCURSES_COLOR_T, to illustrate
	  which "short" types are for color pairs and which are color values.
	+ fix build for s390x, by correcting field bit offsets in generated
	  representation clauses when int=32 long=64 and endian=big, or at
	  least on s390x (patch by Nicolas Boulenguez).
	+ minor cleanup change to test/form_driver_w.c (patch by Gaute Hope).
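The NCURSES_PAIRS_T/NCURSES_COLOR_T distinction in the 20140201 entry matters because it is pair numbers, not color values, that get packed into attribute words.  A rough model of that packing — assuming the conventional layout with an 8-bit pair field starting at bit 8; the actual shift and mask depend on how the library was configured:

```python
# Hypothetical chtype layout, not the library's definitions:
ATTR_SHIFT = 8
A_COLOR = 0xFF << ATTR_SHIFT          # mask for the color-pair field

def color_pair(n):
    """Pair number -> attribute bits (cf. the COLOR_PAIR macro)."""
    return (n << ATTR_SHIFT) & A_COLOR

def pair_number(attrs):
    """Attribute bits -> pair number (cf. the PAIR_NUMBER macro)."""
    return (attrs & A_COLOR) >> ATTR_SHIFT
```

The round trip pair_number(color_pair(n)) == n holds only while n fits in the pair field, which is why widening the pair type is an ABI-visible decision.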

20140125
	+ remove unnecessary ifdef's in Ada95/gen/gen.c, which reportedly do
	  not work as is with gcc 4.8 due to fixes using chtype cast made for
	  new compiler warnings by gcc 4.8 in 20130824 (Debian #735753, patch
	  by Nicolas Boulenguez).

20140118
	+ apply includesubdir variable which was introduced in 20130805 to
	  gen-pkgconfig.in (Debian #735782).

20131221
	+ further improved man2html, used this to fix broken links in html
	  manpages.  See

20131214
	+ modify configure-script/ifdef's to allow OLD_TTY feature to be
	  suppressed if the type of ospeed is configured using the option
	  --with-ospeed to not be a short.  By default, it is a short for
	  termcap-compatibility (adapted from suggestion by Christian
	  Weisgerber).
	+ correct a typo in _nc_baudrate() (patch by Christian Weisgerber,
	  cf: 20061230).
	+ fix a few -Wlogical-op warnings.
	+ updated llib-l* files.

20131207
	+ add form_driver_w() entrypoint to wide-character forms library, as
	  well as test program form_driver_w (adapted from patch by Gaute
	  Hope).

20131123
	+ minor fix for CF_GCC_WARNINGS to special-case options which are not
	  recognized by clang.

20131116
	+ add special case to configure script to move _XOPEN_SOURCE_EXTENDED
	  definition from CPPFLAGS to CFLAGS if it happens to be needed for
	  Solaris, because g++ errors with that definition (report by
	  Jean-Pierre Flori, Sage #15268).
	+ correct logic in infocmp's -i option which was intended to ignore
	  strings which correspond to function-keys as candidates for piecing
	  together initialization- or reset-strings.  The problem dates to
	  1.9.7a, but was overlooked until changes in -Wlogical-op warnings for
	  gcc 4.8 (report by David Binderman).
	+ updated CF_GCC_WARNINGS to documented options for gcc 4.9.0, moving
	  checks for -Wextra and -Wdeclaration-after-statement into the macro,
	  and adding checks for -Wignored-qualifiers, -Wlogical-op and
	  -Wvarargs
	+ updated CF_CURSES_UNCTRL_H and CF_SHARED_OPTS macros from ongoing
	  work on cdk.
	+ update config.sub from

20131110
	+ minor cleanup of terminfo.tail

20131102
	+ use TS extension to describe xterm's title-escapes -TD
	+ modify terminator and nsterm-s to use xterm+sl-twm building block -TD
	+ update hurd.ti, add xenl to reflect 2011-03-06 change in

	  (Debian #727119).
	+ simplify pfkey expression in ansi.sys -TD

20131027
	+ correct/simplify ifdef's for cur_term versus broken-linker and
	  reentrant options (report by Jean-Pierre Flori, cf: 20090530).
	+ modify release/version combinations in test build-scripts to make
	  them more consistent with other packages.

20131019
	+ add nc_mingw.h to installed headers for MinGW port; needed for
	  compiling ncurses-examples.
	+ add rpm-script for testing cross-compile of ncurses-examples.

20131014
	+ fix new typo in CF_ADA_INCLUDE_DIRS macro (report by Roumen Petrov).

20131012
	+ fix a few compiler warnings in progs and test.
	+ minor fix to package/debian-mingw/rules, do not strip dll's.
	+ minor fixes to configure script for empty $prefix, e.g., when doing
	  cross-compiles to MinGW.
	+ add script for building test-packages of binaries cross-compiled to
	  MinGW using NSIS.

20131005
	+ minor fixes for ncurses-example package and makefile.
	+ add scripts for test-builds of cross-compiler packages for ncurses6
	  to MinGW.

20130928
	+ some build-fixes for ncurses-examples with NetBSD-6.0 curses, though
	  it lacks some common functions such as use_env() which is not yet
	  addressed.
	+ build-fix and some compiler warning fixes for ncurses-examples with
	  OpenBSD 5.3
	+ fix a possible null-pointer reference in a trace message from newterm.
	+ quiet a few warnings from NetBSD 6.0 namespace pollution by
	  nonstandard popcount() function in standard strings.h header.
	+ ignore g++ 4.2.1 warnings for "-Weffc++" in c++/cursesmain.cc
	+ fix a few overlooked places for --enable-string-hacks option.

20130921
	+ fix typo in curs_attr.3x (patch by Sven Joachim, cf: 20130831).
	+ build-fix for --with-shared option for DragonFly and FreeBSD (report
	  by Rong-En Fan, cf: 20130727).

20130907
	+ build-fixes for MSYS for two test-programs (patches by Ray Donnelly,
	  Alexey Pavlov).
	+ revert change to two of the dpkg format files, to work with dpkg
	  before/after Debian #700177.
	+ fix gcc -Wconversion warning in wattr_get() macro.
	+ add msys and msysdll to known host/configuration types (patch by
	  Alexey Pavlov).
	+ modify CF_RPATH_HACK configure macro to not rely upon "-u" option
	  of sort, improving portability.
	+ minor improvements for test-programs from reviewing Solaris port.
	+ update config.guess, config.sub from

20130831
	+ modify test/ncurses.c b/B tests to display lines only for the
	  attributes which a given terminal supports, to make room for an
	  italics test.
	+ completed ncv table in terminfo.tail; it did not list the wide
	  character codes listed in X/Open Curses issue 7.
	+ add A_ITALIC extension (prompted by discussion with Egmont Koblinger).

20130824
	+ fix some gcc 4.8 -Wconversion warnings.
	+ change format of dpkg test-scripts to quilted to work around bug
	  introduced by Debian #700177.
	+ discard cached keyname() values if meta() is changed after a value
	  was cached using (report by Kurban Mallachiev).
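The keyname() caching fix in the 20130824 entry follows a general pattern: cached strings must be invalidated when a mode that affects their rendering changes.  A hypothetical sketch of that pattern — not ncurses code, and with deliberately simplified naming — looks like this:

```python
class KeynameCache:
    """Cache name strings for key codes; meta mode changes how codes
    above 127 are rendered, so toggling it must flush the cache."""

    def __init__(self):
        self.meta_mode = False
        self._cache = {}

    def set_meta(self, enabled):
        if enabled != self.meta_mode:   # mode changed: cached strings are stale
            self._cache.clear()
            self.meta_mode = enabled

    def keyname(self, code):
        if code not in self._cache:
            if code >= 128 and not self.meta_mode:
                self._cache[code] = 'M-' + chr(code - 128)  # meta-prefix form
            else:
                self._cache[code] = chr(code)               # literal 8-bit form
        return self._cache[code]
```

The bug being fixed is what happens without set_meta()'s clear(): a name cached under one mode would keep being returned after the mode changed.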

20130816
	+ add checks in tic to warn about terminals which lack cursor
	  addressing, capabilities or having those, are marked as hard_copy or
	  generic_type.
	+ use --without-progs in mingw-ncurses rpm.
	+ split out _nc_init_termtype() from alloc_entry.c to use in MinGW
	  port when tic and other programs are not needed.

20130805
	+ minor fixes to the --disable-overwrite logic, to ensure that the
	  configured $(includedir) is not cancelled by the mingwxx-filesystem
	  rpm macros.
	+ add --disable-db-install configure option, to simplify building
	  cross-compile support packages.
	+ add mingw-ncurses.spec file, for testing cross-compiles.

20130727
	+ improve configure macros from ongoing work on cdk, dialog, xterm:
	  + CF_ADD_LIB_AFTER - fix a problem with -Wl options
	  + CF_RPATH_HACK - add missing result-message
	  + CF_SHARED_OPTS - modify to use $rel_builddir in cygwin and mingw
	    dll symbols (which can be overridden) rather than explicit "../".
	  + CF_SHARED_OPTS - modify NetBSD and DragonFly symbols to use ${CC}
	    rather than ${LD} to improve rpath support.
	  + CF_SHARED_OPTS - add a symbol to denote the temporary files that
	    are created by the macro, to simplify clean-rules.
	  + CF_X_ATHENA - trim extra libraries to work with -Wl,--as-needed
	+ fix a regression in hashed-database support for NetBSD, which uses
	  the key-size differently from other implementations (cf: 20121229).

20130720
	+ further improvements for setupterm manpage, clarifying the
	  initialization of cur_term.

20130713
	+ improve manpages for initscr and setupterm.
	+ minor compiler-warning fixes

20130706
	+ add fallback defs for <inttypes.h> and <stdint.h> (cf: 20120225).
	+ add check for size of wchar_t, use that to suppress a chunk of
	  wcwidth.h in MinGW port.
	+ quiet linker warnings for MinGW cross-compile with dll's using the
	  --enable-auto-import flag.
	+ add ncurses.map rule to ncurses/Makefile to help diagnose symbol
	  table issues.

20130622
	+ modify the clear program to take into account the E3 extended
	  capability to clear the terminal's scrollback buffer (patch by
	  Miroslav Lichvar, Redhat #815790).
	+ clarify in resizeterm manpage that LINES and COLS are updated.
	+ updated ansi example in terminfo.tail, correct misordered example
	  of sgr.
	+ fix other doclifter warnings for manpages
	+ remove unnecessary ".ta" in terminfo.tail, add missing ".fi"
	  (patch by Eric Raymond).

20130615
	+ minor changes to some configure macros to make them more reusable.
	+ fixes for tabs program (prompted by report by Nick Andrik).
	  + corrected logic in command-line parsing of -a and -c predefined
	    tab-lists options.
	  + allow "-0" and "-8" options to be combined with others, e.g., "-0d".
	  + make warning messages more consistent with the other utilities by
	    not printing the full pathname of the program.
	  + add -V option for consistency with other utilities.
	+ fix off-by-one in columns for tabs program when processing an option
	  such as "-5" (patch by Nick Andrik).

20130608
	+ add to test/demo_forms.c examples of using the menu-hooks as well
	  as showing how the menu item user-data can be used to pass a callback
	  function pointer.
	+ add test/dots_termcap.c
	+ remove setupterm call from test/demo_termcap.c
	+ build-fix if --disable-ext-funcs configure option is used.
	+ modified test/edit_field.c and test/demo_forms.c to move the lengths
	  into a user-data structure, keeping the original string for later
	  expansion to free-format input/out demo.
	+ modified test/demo_forms.c to load data from file.
	+ added note to clarify Terminal.app's non-emulation of the various
	  terminal types listed in the preferences dialog -TD
	+ fix regression in error-reporting in lib_setup.c (Debian #711134,
	  cf: 20121117).
	+ build-fix for a case where --enable-broken_linker and
	  --enable-reentrant options are combined (report by George R Goffe).

20130525
	+ modify mvcur() to distinguish between internal use by the ncurses
	  library, and external callers, preventing it from reading the content
	  of the screen which is only nonblank when curses calls have updated
	  it. This makes test/dots_mvcur.c avoid painting colored cells in
	  the left margin of the display.
	+ minor fix to test/dots_mvcur.c
	+ move configured symbols USE_DATABASE and USE_TERMCAP to term.h as
	  NCURSES_USE_DATABASE and NCURSES_USE_TERMCAP to allow consistent
	  use of these symbols in term_entry.h

20130518
	+ corrected ifdefs in test/testcurs.c to allow comparison of mouse
	  interface versus pdcurses (cf: 20130316).
	+ add pow() to configure-check for math library, needed since
	  20121208 for test/hanoi (Debian #708056).
	+ regenerated html manpages.
	+ update doctype used for html documentation.

20130511
	+ move nsterm-related entries out of "obsolete" section to more
	  plausible "ansi consoles" -TD
	+ additional cleanup of table-of-contents by reordering -TD
	+ revise fix for check for 8-bit value in _nc_insert_ch(); prior fix
	  prevented inserts when video attributes were attached to the data
	  (cf: 20121215) (Redhat #959534).

20130504
	+ fixes for issues found by Coverity:
	  + correct FNKEY() macro in progs/dump_entry.c, allowing kf11-kf63 to
	    display when infocmp's -R option is used for HP or AIX subsets.
	  + fix dead-code issue with test/movewindow.c
	  + improve limit-checking in _nc_read_termtype().

20130427
	+ fix clang 3.2 warning in progs/dump_entry.c
	+ drop AC_TYPE_SIGNAL check; ncurses relies on c89 and later.

20130413
	+ add MinGW to cases where ncurses installs by default into /usr
	  (prompted by discussion with Daniel Silva Ferreira).
	+ add -D option to infocmp's usage-message (patch by Miroslav Lichvar).
	+ add a missing 'int' type for main function in configure check for
	  type of bool variable, to work with clang 3.2 (report by Dmitri
	  Gribenko).
	+ improve configure check for static_cast, to work with clang 3.2
	  (report by Dmitri Gribenko).
	+ re-order rule for demo.o and macros defining header dependencies in
	  c++/Makefile.in to accommodate gmake (report by Dmitri Gribenko).

20130406
	+ improve parameter checking in copywin().
	+ modify configure script to work around OS X's "libtool" program, to
	  choose glibtool instead. At the same time, change the autoconf macro
	  to look for a "tool" rather than a "prog", to help with potential use
	  in cross-compiling.
	+ separate the rpath usage for c++ library from demo program
	  (Redhat #911540)
	+ update/correct header-dependencies in c++ makefile (report by Werner
	  Fink).
	+ add --with-cxx-shared to dpkg-script, as done for rpm-script.

20130324
	+ build-fix for libtool configuration (reports by Daniel Silva Ferreira
	  and Roumen Petrov).

20130323
	+ build-fix for OS X, to handle changes for --with-cxx-shared feature
	  (report by Christian Ebert).
	+ change initialization for vt220, similar entries for consistency
	  with cursor-key strings (NetBSD #47674) -TD
	+ further improvements to linux-16color (Benjamin Sittler)

20130316
	+ additional fix for tic.c, to allocate missing buffer space.
	+ eliminate configure-script warnings for gen-pkgconfig.in
	+ correct typo in sgr string for sun-color,
	  add bold for consistency with sgr,
	  change smso for consistency with sgr -TD
	+ correct typo in sgr string for terminator -TD
	+ add blink to the attributes masked by ncv in linux-16color (report
	  by Benjamin Sittler)
	+ improve warning message from post-load checking for missing "%?"
	  operator by tic/infocmp by showing the entry name and capability.
	+ minor formatting improvement to tic/infocmp -f option to ensure
	  line split after "%;".
	+ amend scripting for --with-cxx-shared option to handle the debug
	  library "libncurses++_g.a" (report by Sven Joachim).

20130309
	+ amend change to toe.c for reading from /dev/zero, to ensure that
	  there is a buffer for the temporary filename (cf: 20120324).
	+ regenerated html manpages.
	+ fix typo in terminfo.head (report by Sven Joachim, cf: 20130302).
	+ updated some autoconf macros:
	  + CF_ACVERSION_CHECK, from byacc 1.9 20130304
	  + CF_INTEL_COMPILER, CF_XOPEN_SOURCE from luit 2.0-20130217
	+ add configure option --with-cxx-shared to permit building
	  libncurses++ as a shared library when using g++, e.g., the same
	  limitations as libtool but better integrated with the usual build
	  configuration (Redhat #911540).
	+ modify MKkey_defs.sh to filter out build-path which was unnecessarily
	  shown in curses.h (Debian #689131).

20130302
	+ add section to terminfo manpage discussing user-defined capabilities.
	+ update manpage description of NCURSES_NO_SETBUF, explaining why it
	  is obsolete.
	+ add a check in waddch_nosync() to ensure that tab characters are
	  treated as control characters; some broken locales claim they are
	  printable.
	+ add some traces to the Windows console driver.
	+ initialize a temporary array in _nc_mbtowc, needed for some cases
	  of raw input in MinGW port.

20130218
	+ correct ifdef on change to lib_twait.c (report by Werner Fink).
	+ update config.guess, config.sub

20130216
	+ modify test/testcurs.c to work with mouse for ncurses as it does for
	  pdcurses.
	+ modify test/knight.c to work with mouse for pdcurses as it does for
	  ncurses.
	+ modify internal recursion in wgetch() which handles cooked mode to
	  check if the call to wgetnstr() returned an error. This can happen
	  when both nocbreak() and nodelay() are set, for instance (report by
	  Nils Christopher Brause) (cf: 960418).
	+ fixes for issues found by Coverity:
	  + add a check for valid position in ClearToEOS()
	  + fix in lib_twait.c when --enable-wgetch-events is used, pointer
	    use after free.
	  + improve a limit-check in make_hash.c
	  + fix a memory leak in hashed_db.c

20130209
	+ modify test/configure script to make it simpler to override names
	  of curses-related libraries, to help with linking with pdcurses in
	  MinGW environment.
	+ if the --with-terminfo-dirs configure option is not used, there is
	  no corresponding compiled-in value for that. Fill in "no default
	  value" for that part of the manpage substitution.

20130202
	+ correct initialization in knight.c which let it occasionally make
	  an incorrect move (cf: 20001028).
	+ improve documentation of the terminfo/termcap search path.

20130126
	+ further fixes to mvcur to pass callback function (cf: 20130112),
	  needed to make test/dots_mvcur work.
	+ reduce calls to SetConsoleActiveScreenBuffer in win_driver.c, to
	  help reduce flicker.
	+ modify configure script to omit "+b" from linker options for very
	  old HP-UX systems (report by Dennis Grevenstein)
	+ add HP-UX workaround for missing EILSEQ on old HP-UX systems (patch
	  by Dennis Grevenstein).
	+ restore memmove/strdup support for antique systems (request by
	  Dennis Grevenstein).
	+ change %l behavior in tparm to push the string length onto the stack
	  rather than saving the formatted length into the output buffer
	  (report by Roy Marples, cf: 980620).

20130119
	+ fixes for issues found by Coverity:
	  + fix memory leak in safe_sprintf.c
	  + add check for return-value in tty_update.c
	  + correct initialization for -s option in test/view.c
	  + add check for numeric overflow in lib_instr.c
	  + improve error-checking in copywin
	+ add advice in infocmp manpage for termcap users (Debian #698469).
	+ add "-y" option to test/demo_termcap and test/demo_terminfo to
	  demonstrate behavior with/without extended capabilities.
	+ updated termcap manpage to document legacy termcap behavior for
	  matching capability names.
	+ modify name-comparison for tgetstr, etc., to accommodate legacy
	  applications as well as to improve compatibility with BSD 4.2
	  termcap implementations (Debian #698299) (cf: 980725).

20130112
	+ correct prototype in manpage for vid_puts.
	+ drop ncurses/tty/tty_display.h, ncurses/tty/tty_input.h, since they
	  are unused in the current driver model.
	+ modify mvcur to use stdout except when called within the ncurses
	  library.
	+ modify vidattr and vid_attr to use stdout as documented in manpage.
	+ amend changes made to buffering in 20120825 so that the low-level
	  putp() call uses stdout rather than ncurses' internal buffering.
	  The putp_sp() call does the same, for consistency (Redhat #892674).

20130105
	+ add "-s" option to test/view.c to allow it to start in single-step
	  mode, reducing size of trace files when it is used for debugging
	  MinGW changes.
	+ revert part of 20121222 change to tinfo_driver.c
	+ add experimental logic in win_driver.c to improve optimization of
	  screen updates. This does not yet work with double-width characters,
	  so it is ifdef'd out for the moment (prompted by report by Erwin
	  Waterlander regarding screen flicker).

20121229
	+ fix coverity warnings regarding copying into fixed-size buffers.
	+ add throw-declarations in the c++ binding per Coverity warning.
	+ minor changes to new-items for consistent reference to bug-report
	  numbers.

20121222
	+ add *.dSYM directories to clean-rule in ncurses directory makefile,
	  for Mac OS builds.
	+ add a configure check for gcc option -no-cpp-precomp, which is not
	  available in all Mac OS X configurations (report by Andras Salamon,
	  cf: 20011208).
	+ improve 20021221 workaround for broken acs, handling a case where
	  that ACS_xxx character is not in the acsc string but there is a known
	  wide-character which can be used.

20121215
	+ fix several warnings from clang 3.1 --analyze, includes correcting
	  a null-pointer check in _nc_mvcur_resume.
	+ correct display of double-width characters with MinGW port (report
	  by Erwin Waterlander).
	+ replace MinGW's wcrtomb(), fixing a problem with _nc_viscbuf
	> fixes based on Coverity report:
	+ correct coloring in test/bs.c
	+ correct check for 8-bit value in _nc_insert_ch().
	+ remove dead code in progs/tset.c, test/linedata.h
	+ add null-pointer checks in lib_tracemse.c, panel.priv.h, and some
	  test-programs.

20121208
	+ modify test/knight.c to show the number of choices possible for
	  each position in automove option, e.g., to allow user to follow
	  Warnsdorff's rule to solve the puzzle.
	+ modify test/hanoi.c to show the minimum number of moves possible for
	  the given number of tiles (prompted by patch by Lucas Gioia).
	> fixes based on Coverity report:
	+ remove a few redundant checks.
	+ correct logic in test/bs.c, when randomly placing a specific type of
	  ship.
	+ check return value from remove/unlink in tic.
	+ check return value from sscanf in test/ncurses.c
	+ fix a null dereference in c++/cursesw.cc
	+ fix two instances of uninitialized variables when configuring for the
	  terminal driver.
	+ correct scope of variable used in SetSafeOutcWrapper macro.
	+ set umask when calling mkstemp in tic.
	+ initialize wbkgrndset() temporary variable when extended-colors are
	  used.

20121201
	+ also replace MinGW's wctomb(), fixing a problem with setcchar().
	+ modify test/view.c to load UTF-8 when built with MinGW by using
	  regular win32 API because the MinGW functions mblen() and mbtowc()
	  do not work.

20121124
	+ correct order of color initialization versus display in some of the
	  test-programs, e.g., test_addstr.c
	> fixes based on Coverity report:
	+ delete windows on exit from some of the test-programs.

20121117
	> fixes based on Coverity report:
	+ add missing braces around FreeAndNull in two places.
	+ various fixes in test/ncurses.c
	+ improve limit-checks in tinfo/make_hash.c, tinfo/read_entry.c
	+ correct malloc size in progs/infocmp.c
	+ guard against negative array indices in test/knight.c
	+ fix off-by-one limit check in test/color_name.h
	+ add null-pointer check in progs/tabs.c, test/bs.c, test/demo_forms.c,
	  test/inchs.c
	+ fix memory-leak in tinfo/lib_setup.c, progs/toe.c,
	  test/clip_printw.c, test/demo_menus.c
	+ delete unused windows in test/chgat.c, test/clip_printw.c,
	  test/insdelln.c, test/newdemo.c on error-return.

20121110
	+ modify configure macro CF_INCLUDE_DIRS to put $CPPFLAGS after the
	  local -I include options in case someone has set conflicting -I
	  options in $CPPFLAGS (prompted by patch for ncurses/Makefile.in by
	  Vassili Courzakis).
	+ modify the ncurses*-config scripts to eliminate relative paths from
	  the RPATH_LIST variable, e.g., "../lib" as used in installing shared
	  libraries or executables.

20121102
	+ realign these related pages:
	  curs_add_wchstr.3x
	  curs_addchstr.3x
	  curs_addstr.3x
	  curs_addwstr.3x
	  and fix a long-ago error in curs_addstr.3x which said that a -1
	  length parameter would only write as much as fit onto one line
	  (report by Reuben Thomas).
	+ remove obsolete fallback _nc_memmove() for memmove()/bcopy().
	+ remove obsolete fallback _nc_strdup() for strdup().
	+ cancel any debug-rpm in package/ncurses.spec
	+ reviewed vte-2012, reverted most of the change since it was incorrect
	  based on testing with tack -TD
	+ un-cancel the initc in vte-256color, since this was implemented
	  starting with version 0.20 in 2009 -TD

20121026
	+ improve malloc/realloc checking (prompted by discussion in Redhat
	  #866989).
	+ add ncurses test-program as "ncurses6" to the rpm- and dpkg-scripts.
	+ updated configure macros CF_GCC_VERSION and CF_WITH_PATHLIST. The
The 2461 first corrects pattern used for Mac OS X's customization of gcc. 2462 2463 20121017 2464 + fix change to _nc_scroll_optimize(), which incorrectly freed memory 2465 (Redhat #866989). 2466 2467 20121013 2468 + add vte-2012, gnome-2012, making these the defaults for vte/gnome 2469 (patch by Christian Persch). 2470 2471 20121006 2472 + improve CF_GCC_VERSION to work around Debian's customization of gcc 2473 --version message. 2474 + improve configure macros as done in byacc: 2475 + drop 2.13 compatibility; use 2.52.xxxx version only since EMX port 2476 has used that for a while. 2477 + add 3rd parameter to AC_DEFINE's to allow autoheader to run, i.e., 2478 for experimental use. 2479 + remove unused configure macros. 2480 + modify configure script and makefiles to quiet new autoconf warning 2481 for LIBS_TO_MAKE variable. 2482 + modify configure script to show $PATH_SEPARATOR variable. 2483 + update config.guess, config.sub 2484 2485 20120922 2486 + modify setupterm to set its copy of TERM to "unknown" if configured 2487 for the terminal driver and TERM was null or empty. 2488 + modify treatment of TERM variable for MinGW port to allow explicit 2489 use of the windows console driver by checking if $TERM is set to 2490 "#win32con" or an abbreviation of that. 2491 + undo recent change to fallback definition of vsscanf() to build with 2492 older Solaris compilers (cf: 20120728). 2493 2494 20120908 2495 + add test-screens to test/ncurses to show 256-characters at a time, 2496 to help with MinGW port. 2497 2498 20120903 2499 + simplify varargs logic in lib_printw.c; va_copy is no longer needed 2500 there. 2501 + modifications for MinGW port to make wide-character display usable. 2502 2503 20120902 2504 + regenerate configure script (report by Sven Joachim, cf: 20120901). 2505 2506 20120901 2507 + add a null-pointer check in _nc_flush (cf: 20120825). 2508 + fix a case in _nc_scroll_optimize() where the _oldnums_list array 2509 might not be allocated. 
	+ improve comparisons in configure.in for unset shell variables.

20120826
	+ increase size of ncurses' output-buffer, in case of very small
	  initial screen-sizes.
	+ fix evaluation of TERMINFO and TERMINFO_DIRS default values as needed
	  after changes to use --datarootdir (reports by Gabriele Balducci,
	  Roumen Petrov).

20120825
	+ change output buffering scheme, using buffer maintained by ncurses
	  rather than stdio, to avoid problems with SIGTSTP handling (report
	  by Brian Bloniarz).

20120811
	+ update autoconf patch to 2.52.20120811, adding --datarootdir
	  (prompted by discussion with Erwin Waterlander).
	+ improve description of --enable-reentrant option in README and the
	  INSTALL file.
	+ add nsterm-256color, make this the default nsterm -TD
	+ remove bw from nsterm-bce, per testing with tack -TD

20120804
	+ update test/configure, adding check for tinfo library.
	+ improve limit-checks for the getch fifo (report by Werner Fink).
	+ fix a remaining mismatch between $with_echo and the symbols updated
	  for CF_DISABLE_ECHO affecting parameters for mk-2nd.awk (report by
	  Sven Joachim, cf: 20120317).
	+ modify followup check for pkg-config's library directory in the
	  --enable-pc-files option to validate syntax (report by Sven Joachim,
	  cf: 20110716).

20120728
	+ correct path for ncurses_mingw.h in include/headers, in case build
	  is done outside source-tree (patch by Roumen Petrov).
	+ modify some older xterm entries to align with xterm source -TD
	+ separate "xterm-old" alias from "xterm-r6" -TD
	+ add E3 extended capability to xterm-basic and putty -TD
	+ parenthesize parameters of other macros in curses.h -TD
	+ parenthesize parameter of COLOR_PAIR and PAIR_NUMBER in curses.h
	  in case it happens to be a comma-expression, etc. (patch by Nick
	  Black).

20120721
	+ improved form_request_by_name() and menu_request_by_name().
	+ eliminate two fixed-size buffers in toe.c
	+ extend use_tioctl() to have expected behavior when use_env(FALSE) and
	  use_tioctl(TRUE) are called.
	+ modify ncurses test-program, adding -E and -T options to demonstrate
	  use_env() versus use_tioctl().

20120714
	+ add use_tioctl() function (adapted from patch by Werner Fink,
	  Novell #769788):

20120707
	+ add ncurses_mingw.h to installed headers (prompted by patch by
	  Juergen Pfeifer).
	+ clarify return-codes from wgetch() in response to SIGWINCH (prompted
	  by Novell #769788).
	+ modify resizeterm() to always push a KEY_RESIZE onto the fifo, even
	  if screensize is unchanged. Modify _nc_update_screensize() to push a
	  KEY_RESIZE if there was a SIGWINCH, even if it does not call
	  resizeterm(). These changes eliminate the case where a SIGWINCH is
	  received, but ERR returned from wgetch or wgetnstr because the screen
	  dimensions did not change (Novell #769788).

20120630
	+ add --enable-interop to sample package scripts (suggested by Juergen
	  Pfeifer).
	+ update CF_PATH_SYNTAX macro, from mawk changes.
	+ modify mk-0th.awk to allow for generating llib-ltic, etc., though
	  some work is needed on cproto to work with lib_gen.c to update
	  llib-lncurses.
	+ remove redundant getenv() call in database-iterator leftover from
	  cleanup in 20120622 changes (report by Sven Joachim).

20120622
	+ add -d, -e and -q options to test/demo_terminfo and test/demo_termcap
	+ fix caching of environment variables in database-iterator (patch by
	  Philippe Troin, Redhat #831366).

20120616
	+ add configure check to distinguish clang from gcc to eliminate
	  warnings about unused command-line parameters when compiler warnings
	  are enabled.
	+ improve behavior when updating terminfo entries which are hardlinked
	  by allowing for the possibility that an alias has been repurposed to
	  a new primary name.
	+ fix some strict compiler warnings based on package scripts.
	+ further fixes for configure check for working poll (Debian #676461).

20120608
	+ fix an uninitialized variable in -c/-n logic for infocmp changes
	  (cf: 20120526).
	+ corrected fix for building c++ binding with clang 3.0 (report/patch
	  by Richard Yao, Gentoo #417613, cf: 20110409)
	+ correct configure check for working poll, fixing the case where stdin
	  is redirected, e.g., in rpm/dpkg builds (Debian #676461).
	+ add rpm- and dpkg-scripts, to test those build-environments.
	  The resulting packages are used only for testing.

20120602
	+ add kdch1 aka "Remove" to vt220 and vt220-8 entries -TD
	+ add kdch1, etc., to qvt108 -TD
	+ add dl1/il1 to some entries based on dl/il values -TD
	+ add dl to simpleterm -TD
	+ add consistency-checks in tic for insert-line vs delete-line
	  controls, and insert/delete-char keys
	+ correct no-leaks logic in infocmp when doing comparisons, fixing
	  duplicate free of entries given via the command-line, and freeing
	  entries loaded from the last-but-one of files specified on the
	  command-line.
	+ add kdch1 to wsvt25 entry from NetBSD CVS (reported by David Lord,
	  analysis by Martin Husemann).
	+ add cnorm/civis to wsvt25 entry from NetBSD CVS (report/analysis by
	  Onno van der Linden).

20120526
	+ extend -c and -n options of infocmp to allow comparing more than two
	  entries.
	+ correct check in infocmp for number of terminal names when more than
	  two are given.
	+ correct typo in curs_threads.3x (report by Yanhui Shen on
	  freebsd-hackers mailing list).

20120512
	+ corrected 'op' for bterm (report by Samuel Thibault) -TD
	+ modify test/background.c to demonstrate a background character
	  holding a colored ACS_HLINE. The behavior differs from SVr4 due to
	  the thick- and double-line extension (cf: 20091003).
	+ modify handling of acs characters in PutAttrChar to avoid mapping an
	  unmapped character to a space with A_ALTCHARSET set.
	+ rewrite vt520 entry based on vt420 -TD

20120505
	+ remove p6 (bold) from opus3n1+ for consistency -TD
	+ remove acs stuff from env230 per clues in Ingres termcap -TD
	+ modify env230 sgr/sgr0 to match other capabilities -TD
	+ modify smacs/rmacs in bq300-8 to match sgr/sgr0 -TD
	+ make sgr for dku7202 agree with other caps -TD
	+ make sgr for ibmpc agree with other caps -TD
	+ make sgr for tek4107 agree with other caps -TD
	+ make sgr for ndr9500 agree with other caps -TD
	+ make sgr for sco-ansi agree with other caps -TD
	+ make sgr for d410 agree with other caps -TD
	+ make sgr for d210 agree with other caps -TD
	+ make sgr for d470c, d470c-7b agree with other caps -TD
	+ remove redundant AC_DEFINE for NDEBUG versus Makefile definition.
	+ fix a back-link in _nc_delink_entry(), which is needed if ncurses is
	  configured with --enable-termcap and --disable-getcap.

20120428
	+ fix some inconsistencies between vt320/vt420, e.g., cnorm/civis -TD
	+ add eslok flag to dec+sl -TD
	+ dec+sl applies to vt320 and up -TD
	+ drop wsl width from xterm+sl -TD
	+ reuse xterm+sl in putty and nsca-m -TD
	+ add ansi+tabs to vt520 -TD
	+ add ansi+enq to vt220-vt520 -TD
	+ fix a compiler warning in example in ncurses-intro.doc (Paul Waring).
	+ added paragraph in keyname manpage telling how extended capabilities
	  are interpreted as key definitions.
	+ modify tic's check of conflicting key definitions to include extended
	  capability strings in addition to the existing check on predefined
	  keys.

20120421
	+ improve cleanup of temporary files in tic using atexit().
	+ add msgr to vt420, similar DEC vtXXX entries -TD
	+ add several missing vt420 capabilities from vt220 -TD
	+ factor out ansi+pp from several entries -TD
	+ change xterm+sl and xterm+sl-twm to include only the status-line
	  capabilities and not "use=xterm", making them more generally useful
	  as building-blocks -TD
	+ add dec+sl building block, as example -TD

20120414
	+ add XT to some terminfo entries to improve usefulness for other
	  applications than screen, which would like to pretend that xterm's
	  title is a status-line. -TD
	+ change use-clauses in ansi-mtabs, hp2626, and hp2622 based on review
	  of ordering and overrides -TD
	+ add consistency check in tic for screen's "XT" capability.
	+ add section in terminfo.src summarizing the user-defined capabilities
	  used in that file -TD

20120407
	+ fix an inconsistency between tic/infocmp "-x" option; tic omits all
	  non-standard capabilities, while infocmp was ignoring only the user
	  definable capabilities.
	+ improve special case in tic parsing of description to allow it to be
	  followed by terminfo capabilities. Previously the description had to
	  be the last field on an input line to allow tic to distinguish
	  between termcap and terminfo format while still allowing commas to be
	  embedded in the description.
	+ correct variable name in gen_edit.sh which broke configurability of
	  the --with-xterm-kbs option.
	+ revert 2011-07-16 change to "linux" alias, return to "linux2.2" -TD
	+ further amend 20110910 change, providing for configure-script
	  override of the "linux" terminfo entry to install and changing the
	  default for that to "linux2.2" (Debian #665959).

20120331
	+ update Ada95/configure to use CF_DISABLE_ECHO (cf: 20120317).
	+ correct order of use-clauses in st-256color -TD
	+ modify configure script to look for gnatgcc if the Ada95 binding
	  is built, in preference to the default gcc/cc (suggested by
	  Nicolas Boulenguez).
	+ modify configure script to ensure that the same -On option used for
	  the C compiler in CFLAGS is used for ADAFLAGS rather than simply
	  using "-O3" (suggested by Nicolas Boulenguez)

20120324
	+ amend an old fix so that next_char() exits properly for empty files,
	  e.g., from reading /dev/null (cf: 20080804).
	+ modify tic so that it can read from the standard input, or from
	  a character device. Because tic uses seek's, this requires writing
	  the data to a temporary file first (prompted by remark by Sven
	  Joachim) (cf: 20000923).

20120317
	+ correct a check made in lib_napms.c, so that terminfo applications
	  can again use napms() (cf: 20110604).
	+ add a note in tic.h regarding required casts for ABSENT_BOOLEAN
	  (cf: 20040327).
	+ correct scripting for --disable-echo option in test/configure.
	+ amend check for missing c++ compiler to work when no error is
	  reported, and no variables set (cf: 20021206).
	+ add/use configure macro CF_DISABLE_ECHO.

20120310
	+ fix some strict compiler warnings for abi6 and 64-bits.
	+ use begin_va_copy/end_va_copy macros in lib_printw.c (cf: 20120303).
	+ improve a limit-check in infocmp.c (Werner Fink):

20120303
	+ minor tidying of terminfo.tail, clarify reason for limitation
	  regarding mapping of \0 to \200
	+ minor improvement to _nc_copy_termtype(), using memcpy to replace
	  loops.
	+ fix no-leaks checking in test/demo_termcap.c to account for multiple
	  calls to setupterm().
	+ modified the libgpm change to show previous load as a problem in the
	  debug-trace.
	> merge some patches from OpenSUSE rpm (Werner Fink):
	+ ncurses-5.7-printw.dif, fixes for varargs handling in lib_printw.c
	+ ncurses-5.7-gpm.dif, do not dlopen libgpm if already loaded by
	  runtime linker
	+ ncurses-5.6-fallback.dif, do not free arrays and strings from static
	  fallback entries

20120228
	+ fix breakage in tic/infocmp from 20120225 (report by Werner Fink).

20120225
	+ modify configure script to allow creating dll's for MinGW when
	  cross-compiling.
	+ add --enable-string-hacks option to control whether strlcat and
	  strlcpy may be used. The same issue applies to OpenBSD's warnings
	  about snprintf, noting that this function is weakly standardized.
	+ add configure checks for strlcat, strlcpy and snprintf, to help
	  reduce bogus warnings with OpenBSD builds.
	+ build-fix for OpenBSD 4.9 to supply consistent intptr_t declaration
	  (cf: 20111231)
	+ update config.guess, config.sub

20120218
	+ correct CF_ETIP_DEFINES configure macro, making it exit properly on
	  the first success (patch by Pierre Labastie).
	+ improve configure macro CF_MKSTEMP by moving existence-check for
	  mkstemp out of the AC_TRY_RUN, to help with cross-compiles.
	+ improve configure macro CF_FUNC_POLL from luit changes to detect
	  broken implementations, e.g., with Mac OS X.
	+ add configure option --with-tparm-arg
	+ build-fix for MinGW cross-compiling, so that make_hash does not
	  depend on TTY definition (cf: 20111008).

20120211
	+ make sgr for xterm-pcolor agree with other caps -TD
	+ make sgr for att5425 agree with other caps -TD
	+ make sgr for att630 agree with other caps -TD
	+ make sgr for linux entries agree with other caps -TD
	+ make sgr for tvi9065 agree with other caps -TD
	+ make sgr for ncr260vt200an agree with other caps -TD
	+ make sgr for ncr160vt100pp agree with other caps -TD
	+ make sgr for ncr260vt300an agree with other caps -TD
	+ make sgr for aaa-60-dec-rv, aaa+dec agree with other caps -TD
	+ make sgr for cygwin, cygwinDBG agree with other caps -TD
	+ add configure option --with-xterm-kbs to simplify configuration for
	  Linux versus most other systems.

20120204
	+ improved tic -D option, avoid making target directory and provide
	  better diagnostics.

20120128
	+ add mach-gnu (Debian #614316, patch by Samuel Thibault)
	+ add mach-gnu-color, tweaks to mach-gnu terminfo -TD
	+ make sgr for sun-color agree with smso -TD
	+ make sgr for prism9 agree with other caps -TD
	+ make sgr for icl6404 agree with other caps -TD
	+ make sgr for ofcons agree with other caps -TD
	+ make sgr for att5410v1, att4415, att620 agree with other caps -TD
	+ make sgr for aaa-unk, aaa-rv agree with other caps -TD
	+ make sgr for avt-ns agree with other caps -TD
	+ amend fix intended to separate fixups for acsc to allow "tic -cv" to
	  give verbose warnings (cf: 20110730).
	+ modify misc/gen-edit.sh to make the location of the tabset directory
	  consistent with misc/Makefile.in, i.e., using ${datadir}/tabset
	  (Debian #653435, patch by Sven Joachim).

20120121
	+ add --with-lib-prefix option to allow configuring for old/new flavors
	  of OS/2 EMX.
  + modify check for gnat version to allow for year, as used in FreeBSD
    port.
  + modify check_existence() in db_iterator.c to simply check if the
    path is a directory or file, according to the need.  Checking for
    directory size also gives no usable result with OS/2 (cf: 20120107).
  + support OS/2 kLIBC (patch by KO Myung-Hun).

20120114
  + several improvements to test/movewindow.c (prompted by discussion on
    Linux Mint forum):
    + modify movement commands to make them continuous
    + rewrote the test for mvderwin
    + rewrote the test for recursive mvwin
  + split-out reusable CF_WITH_NCURSES_ETC macro in test/configure.in
  + updated configure macro CF_XOPEN_SOURCE, build-fixes for Mac OS X
    and OpenBSD.
  + regenerated html manpages.

20120107
  + various improvements for MinGW (Juergen Pfeifer):
    + modify stat() calls to ignore the st_size member
    + drop mk-dlls.sh script.
    + change recommended regular expression library.
  + modify rain.c to allow for threaded configuration.
  + modify tset.c to allow for case when size-change logic is not used.

20111231
  + modify toe's report when -a and -s options are combined, to add
    a column showing which entries belong to a given database.
  + add -s option to toe, to sort its output.
  + modify progs/toe.c, simplifying use of db-iterator results to use
    caching improvements from 20111001 and 20111126.
  + correct generation of pc-files when ticlib or termlib options are
    given to rename the corresponding tic- or tinfo-libraries (report
    by Sven Joachim).

20111224
  + document a portability issue with tput, i.e., that scripts which work
    with ncurses may fail in other implementations that do no parameter
    analysis.
  + add putty-sco entry -TD

20111217
  + review/fix places in manpages where --program-prefix configure option
    was not being used.
  + add -D option to infocmp, to show the database locations that it
    could use.
  + fix build for the special case where term-driver, ticlib and termlib
    are all enabled.  The terminal driver depends on a few features in
    the base ncurses library, so tic's dependencies include both ncurses
    and termlib.
  + fix build work for term-driver when --enable-wgetch-events option is
    enabled.
  + use <stdint.h> types to fix some questionable casts to void*.

20111210
  + modify configure script to check if thread library provides
    pthread_mutexattr_settype(), e.g., not provided by Solaris 2.6
  + modify configure script to suppress check to define _XOPEN_SOURCE
    for IRIX64, since its header files have a conflict versus
    _SGI_SOURCE.
  + modify configure script to add ".pc" files for tic- and
    tinfo-libraries, which were omitted in recent change (cf: 20111126).
  + fix inconsistent checks on $PKG_CONFIG variable in configure script.

20111203
  + modify configure-check for etip.h dependencies, supplying a temporary
    copy of ncurses_dll.h since it is a generated file (prompted by
    Debian #646977).
  + modify CF_CPP_PARAM_INIT "main" function to work with current C++.

20111126
  + correct database iterator's check for duplicate entries
    (cf: 20111001).
  + modify database iterator to ignore $TERMCAP when it is not an
    absolute pathname.
  + add -D option to tic, to show the database locations that it could
    use.
  + improve description of database locations in tic manpage.
  + modify the configure script to generate a list of the ".pc" files to
    generate, rather than deriving the list from the libraries which have
    been built (patch by Mike Frysinger).
  + use AC_CHECK_TOOLS in preference to AC_PATH_PROGS when searching for
    ncurses*-config, e.g., in Ada95/configure and test/configure (adapted
    from patch by Mike Frysinger).
20111119
  + remove obsolete/conflicting fallback definition for _POSIX_SOURCE
    from curses.priv.h, fixing a regression with IRIX64 and Tru64
    (cf: 20110416)
  + modify _nc_tic_dir() to ensure that its return-value is nonnull,
    i.e., the database iterator was not initialized.  This case is needed
    when tic is translating to termcap, rather than loading the
    database (cf: 20111001).

20111112
  + add pccon entries for OpenBSD console (Alexei Malinin).
  + build-fix for OpenBSD 4.9 with gcc 4.2.1, setting _XOPEN_SOURCE to
    600 to work around inconsistent ifdef'ing of wcstof between C and
    C++ header files.
  + modify capconvert script to accept more than exact match on "xterm",
    e.g., the "xterm-*" variants, to exclude from the conversion (patch
    by Robert Millan).
  + add -lc_r as alternative for -lpthread, allows build of threaded code
    in older FreeBSD machines.
  + build-fix for MirBSD, which fails when either _XOPEN_SOURCE or
    _POSIX_SOURCE are defined.
  + fix a typo in misc/Makefile.in, used in uninstalling pc-files.

20111030
  + modify make_db_path() to allow creating "terminfo.db" in the same
    directory as an existing "terminfo" directory.  This fixes a case
    where switching between hashed/filesystem databases would cause the
    new hashed database to be installed in the next best location -
    root's home directory.
  + add variable cf_cv_prog_gnat_correct to those passed to
    config.status, fixing a problem with Ada95 builds (cf: 20111022).
  + change feature test from _XPG5 to _XOPEN_SOURCE in two places, to
    accommodate broken implementations for _XPG6.
  + eliminate usage of NULL symbol from etip.h, to reduce header
    interdependencies.
  + add configure check to decide when to add _XOPEN_SOURCE define to
    compiler options, i.e., for Solaris 10 and later (cf: 20100403).
    This is a workaround for gcc 4.6, which fails to build the c++
    binding if that symbol is defined by the application, due to
    incorrectly combining the corresponding feature test macros
    (report by Peter Kruse).

20111022
  + correct logic for discarding mouse events, retaining the partial
    events used to build up click, double-click, etc, until needed
    (cf: 20110917).
  + fix configure script to avoid creating unused Ada95 makefile when
    gnat does not work.
  + cleanup width-related gcc 3.4.3 warnings for 64-bit platform, for the
    internal functions of libncurses.  The external interface of course
    uses bool, which still produces these warnings.

20111015
  + improve description of --disable-tic-depends option to make it
    clear that it may be useful whether or not the --with-termlib
    option is also given (report by Sven Joachim).
  + amend termcap equivalent for set_pglen_inch to use the X/Open
    "YI" rather than the obsolete Solaris 2.5 "sL" (cf: 990109).
  + improve manpage for tgetent differences from termcap library.

20111008
  + moved static data from db_iterator.c to lib_data.c
  + modify db_iterator.c for memory-leak checking, fix one leak.
  + modify misc/gen-pkgconfig.in to use Requires.private for the parts
    of ncurses rather than Requires, as well as Libs.private for the
    other library dependencies (prompted by Debian #644728).

20111001
  + modify tic "-K" option to only set the strict-flag rather than force
    source-output.  That allows the same flag to control the parser for
    input and output of termcap source.
  + modify _nc_getent() to ignore backslash at the end of a comment line,
    making it consistent with ncurses' parser.
  + restore a special-case check for directory needed to make termcap
    text files load as if they were databases (cf: 20110924).
  + modify tic's resolution/collision checking to attempt to remove the
    conflicting alias from the second entry in the pair, which is
    normally following in the source file.  Also improved the warning
    message to make it simpler to see which alias is the problem.
  + improve performance of the database iterator by caching search-list.

20110925
  + add a missing "else" in changes to _nc_read_tic_entry().

20110924
  + modify _nc_read_tic_entry() so that hashed-database is checked before
    filesystem.
  + updated CF_CURSES_LIBS check in test/configure script.
  + modify configure script and makefiles to split TIC_ARGS and
    TINFO_ARGS into pieces corresponding to LDFLAGS and LIBS variables,
    to help separate searches for tic- and tinfo-libraries (patch by Nick
    Alcock aka "Nix").
  + build-fix for lib_mouse.c changes (cf: 20110917).

20110917
  + fix compiler warning for clang 2.9
  + improve merging of mouse events (integrated patch by Damien
    Guibouret).
  + correct mask-check used in lib_mouse for wheel mouse buttons 4/5
    (patch by Damien Guibouret).

20110910
  + modify misc/gen_edit.sh to select a "linux" entry which works with
    the current kernel rather than assuming it is always "linux3.0"
    (cf: 20110716).
  + revert a change to getmouse() which had the undesirable side-effect
    of suppressing button-release events (report by Damien Guibouret,
    cf: 20100102).
  + add xterm+kbs fragment from xterm #272 -TD
  + add configure option --with-pkg-config-libdir to provide control over
    the actual directory into which pc-files are installed, do not use
    the pkg-config environment variables (discussion with Frederic L W
    Meunier).
  + add link to mailing-list archive in announce.html.in, as done in
    FAQ (prompted by question by Andrius Bentkus).
  + improve manpage install by adjusting the "#include" examples to
    show the ncurses-subdirectory used when --disable-overwrite option
    is used.
  + install an alias for "curses" to the ncurses manpage, tied to the
    --with-curses-h configure option (suggested by Reuben Thomas).

20110903
  + propagate error-returns from wresize, i.e., the internal
    increase_size and decrease_size functions through resize_term (report
    by Tim van der Molen, cf: 20020713).
  + fix typo in tset manpage (patch by Sven Joachim).

20110820
  + add a check to ensure that termcap files which might have "^?" do
    not use the terminfo interpretation as "\177".
  + minor cleanup of X-terminal emulator section of terminfo.src -TD
  + add terminator entry -TD
  + add simpleterm entry -TD
  + improve wattr_get macros by ensuring that if the window pointer is
    null, then the attribute and color values returned will be zero
    (cf: 20110528).

20110813
  + add substitution for $RPATH_LIST to misc/ncurses-config.in
  + improve performance of tic with hashed-database by caching the
    database connection, using atexit() to cleanup.
  + modify treatment of 2-character aliases at the beginning of termcap
    entries so they are not counted in use-resolution, since these are
    guaranteed to be unique.  Also ignore these aliases when reporting
    the primary name of the entry (cf: 20040501)
  + double-check gn (generic) flag in terminal descriptions to
    accommodate old/buggy termcap databases which misused that feature.
  + minor fixes to _nc_tgetent(), ensure buffer is initialized even on
    error-return.

20110807
  + improve rpath fix from 20110730 by ensuring that the new $RPATH_LIST
    variable is defined in the makefiles which use it.
  + build-fix for DragonFlyBSD's pkgsrc in test/configure script.
  + build-fixes for NetBSD 5.1 with termcap support enabled.
  + corrected k9 in dg460-ansi, add other features based on manuals -TD
  + improve trimming of whitespace at the end of terminfo/termcap output
    from tic/infocmp.
  + when writing termcap source, ensure that colons in the description
    field are translated to a non-delimiter, i.e., "=".
  + add "-0" option to tic/infocmp, to make the termcap/terminfo source
    use a single line.
  + add a null-pointer check when handling the $CC variable.

20110730
  + modify configure script and makefiles in c++ and progs to allow the
    directory used for rpath option to be overridden, e.g., to work
    around updates to the variables used by tic during an install.
  + add -K option to tic/infocmp, to provide stricter BSD-compatibility
    for termcap output.
  + add _nc_strict_bsd variable in tic library which controls the
    "strict" BSD termcap compatibility from 20110723, plus these
    features:
    + allow escapes such as "\8" and "\9" when reading termcap
    + disallow "\a", "\e", "\l", "\s" and "\:" escapes when reading
      termcap files, passing through "a", "e", etc.
    + expand "\:" as "\072" on output.
  + modify _nc_get_token() to reset the token's string value in case
    there is a string-typed token lacking the "=" marker.
  + fix a few memory leaks in _nc_tgetent.
  + fix a few places where reading from a termcap file could refer to
    freed memory.
  + add an overflow check when converting terminfo/termcap numeric
    values, since terminfo stores those in a short, and they must be
    positive.
  + correct internal variables used for translating to termcap "%>"
    feature, and translating from termcap %B to terminfo, needed by
    tctest (cf: 19991211).
  + amend a minor fix to acsc when loading a termcap file to separate it
    from warnings needed for tic (cf: 20040710)
  + modify logic in _nc_read_entry() and _nc_read_tic_entry() to allow
    a termcap file to be handled via TERMINFO_DIRS.
  + modify _nc_infotocap() to include non-mandatory padding when
    translating to termcap.
  + modify _nc_read_termcap_entry(), passing a flag in the case where
    getcap is used, to reduce interactive warning messages.

20110723
  + add a check in start_color() to limit color-pairs to 256 when
    extended colors are not supported (patch by David Benjamin).
  + modify setcchar to omit no-longer-needed OR'ing of color pair in
    the SetAttr() macro (patch by David Benjamin).
  + add kich1 to sun terminfo entry (Yuri Pankov)
  + use bold rather than reverse for smso in sun-color terminfo entry
    (Yuri Pankov).
  + improve generation of termcap using tic/infocmp -C option, e.g.,
    to correspond with 4.2BSD (prompted by discussion with Yuri Pankov
    regarding Schilling's test program):
    + translate %02 and %03 to %2 and %3 respectively.
    + suppress string capabilities which use %s, not supported by tgoto
    + use \040 rather than \s
    + expand null characters as \200 rather than \0
  + modify configure script to support shared libraries for DragonFlyBSD.

20110716
  + replace an assert() in _nc_Free_Argument() with a regular null
    pointer check (report/analysis by Franjo Ivancic).
  + modify configure --enable-pc-files option to take into account the
    PKG_CONFIG_PATH variable (report by Frederic L W Meunier).
  + add/use xterm+tmux chunk from xterm #271 -TD
  + resync xterm-new entry from xterm #271 -TD
  + add E3 extended capability to linux-basic (Miroslav Lichvar)
  + add linux2.2, linux2.6, linux3.0 entries to give context for E3 -TD
  + add SI/SO change to linux2.6 entry (Debian #515609) -TD
  + fix inconsistent tabset path in pcmw (Todd C. Miller).
  + remove a backslash which continued comment, obscuring altos3
    definition with OpenBSD toolset (Nicholas Marriott).

20110702
  + add workaround from xterm #271 changes to ensure that compiler flags
    are not used in the $CC variable.
  + improve support for shared libraries, tested with AIX 5.3, 6.1 and
    7.1 with both gcc 4.2.4 and cc.
  + modify configure checks for AIX to include release 7.x
  + add loader flags/libraries to libtool options so that dynamic loading
    works properly, adapted from ncurses-5.7-ldflags-with-libtool.patch
    at gentoo prefix repository (patch by Michael Haubenwallner).

20110626
  + move include of nc_termios.h out of term_entry.h, since the latter
    is installed, e.g., for tack while the former is not (report by
    Sven Joachim).

20110625
  + improve cleanup() function in lib_tstp.c, using _exit() rather than
    exit() and checking for SIGTERM rather than SIGQUIT (prompted by
    comments forwarded by Nicholas Marriott).
  + reduce name pollution from term.h, moving fallback #define's for
    tcgetattr(), etc., to new private header nc_termios.h (report by
    Sergio NNX).
  + two minor fixes for tracing (patch by Vassili Courzakis).
  + improve trace initialization by starting it in use_env() and
    ripoffline().
  + review old email, add details for some changelog entries.

20110611
  + update minix entry to minix 3.2 (Thomas Cort).
  + fix a strict compiler warning in change to wattr_get (cf: 20110528).
20110604
  + fixes for MirBSD port:
    + set default prefix to /usr.
    + add support for shared libraries in configure script.
    + use S_ISREG and S_ISDIR consistently, with fallback definitions.
  + add a few more checks based on ncurses/link_test.
  + modify MKlib_gen.sh to handle sp-funcs renaming of NCURSES_OUTC type.

20110528
  + add case to CF_SHARED_OPTS for Interix (patch by Markus Duft).
  + used ncurses/link_test to check for behavior when the terminal has
    not been initialized and when an application passes null pointers
    to the library.  Added checks to cover this (prompted by Redhat
    #707344).
  + modify MKlib_gen.sh to make its main() function call each function
    with zero parameters, to help find inconsistent checking for null
    pointers, etc.

20110521
  + fix warnings from clang 2.7 "--analyze"

20110514
  + compiler-warning fixes in panel and progs.
  + modify CF_PKG_CONFIG macro, from changes to tin -TD
  + modify CF_CURSES_FUNCS configure macro, used in test directory
    configure script:
    + work around (non-optimizer) bug in gcc 4.2.1 which caused
      test-expression to be omitted from executable.
    + force the linker to see a link-time expression of a symbol, to
      help work around weak-symbol issues.

20110507
  + update discussion of MKfallback.sh script in INSTALL; normally the
    script is used automatically via the configured makefiles.  However
    there are still occasions when it might be used directly by packagers
    (report by Gunter Schaffler).
  + modify misc/ncurses-config.in to omit the "-L" option from the
    "--libs" output if the library directory is /usr/lib.
  + change order of tests for curses.h versus ncurses.h headers in the
    configure scripts for Ada95 and test-directories, to look for
    ncurses.h, from fixes to tin -TD
  + modify ncurses/tinfo/access.c to account for Tandem's root uid
    (report by Joachim Schmitz).

20110430
  + modify rules in Ada95/src/Makefile.in to ensure that the PIC option
    is not used when building a static library (report by Nicolas
    Boulenguez):
  + Ada95 build-fix for big-endian architectures such as sparc.  This
    undoes one of the fixes from 20110319, which added an "Unused" member
    to representation clauses, replacing that with pragmas to suppress
    warnings about unused bits (patch by Nicolas Boulenguez).

20110423
  + add check in test/configure for use_window, use_screen.
  + add configure-checks for getopt's variables, which may be declared
    as different types on some Unix systems.
  + add check in test/configure for some legacy curses types of the
    function pointer passed to tputs().
  + modify init_pair() to accept -1's for color value after
    assume_default_colors() has been called (Debian #337095).
  + modify test/background.c, adding command-line options to demonstrate
    assume_default_colors() and use_default_colors().

20110416
  + modify configure script/source-code to only define _POSIX_SOURCE if
    the checks for sigaction and/or termios fail, and if _POSIX_C_SOURCE
    and _XOPEN_SOURCE are undefined (report by Valentin Ochs).
  + update config.guess, config.sub

20110409
  + fixes to build c++ binding with clang 3.0 (patch by Alexander
    Kolesen).
  + add check for unctrl.h in test/configure, to work around breakage in
    some ncurses packages.
  + add "--disable-widec" option to test/configure script.
  + add "--with-curses-colr" and "--with-curses-5lib" options to the
    test/configure script to address testing with very old machines.
20110404  5.9 release for upload to

20110402
  + various build-fixes for the rpm/dpkg scripts.
  + add "--enable-rpath-link" option to Ada95/configure, to allow
    packages to suppress the rpath feature which is normally used for
    the in-tree build of sample programs.
  + corrected definition of libdir variable in Ada95/src/Makefile.in,
    needed for rpm script.
  + add "--with-shared" option to Ada95/configure script, to allow
    making the C-language parts of the binding use appropriate compiler
    options if building a shared library with gnat.

20110329
  > portability fixes for Ada95 binding:
  + add configure check to ensure that SIGINT works with gnat.  This is
    needed for the "rain" sample program.  If SIGINT does not work, omit
    that sample program.
  + correct typo in check of $PKG_CONFIG variable in Ada95/configure
  + add ncurses_compat.c, to supply functions used in the Ada95 binding
    which were added in 5.7 and later.
  + modify sed expression in CF_NCURSES_ADDON to eliminate a dependency
    upon GNU sed.

20110326
  + add special check in Ada95/configure script for ncurses6 reentrant
    code.
  + regen Ada html documentation.
  + build-fix for Ada shared libraries versus the varargs workaround.
  + add rpm and dpkg scripts for Ada95 and test directories, for test
    builds.
  + update test/configure macros CF_CURSES_LIBS, CF_XOPEN_SOURCE and
    CF_X_ATHENA_LIBS.
  + add configure check to determine if gnat's project feature supports
    libraries, i.e., collections of .ali files.
  + make all dereferences in Ada95 samples explicit.
  + fix typo in comment in lib_add_wch.c (patch by Petr Pavlu).
  + add configure check for, ifdef's for math.h which is in a separate
    package on Solaris and potentially not installed (report by Petr
    Pavlu).
  > fixes for Ada95 binding (Nicolas Boulenguez):
  + improve type-checking in Ada95 by eliminating a few warning-suppress
    pragmas.
  + suppress unreferenced warnings.
  + make all dereferences in binding explicit.

20110319
  + regen Ada html documentation.
  + change order of -I options from ncurses*-config script when the
    --disable-overwrite option was used, so that the subdirectory include
    is listed first.
  + modify the make-tar.sh scripts to add a MANIFEST and NEWS file.
  + modify configure script to provide value for HTML_DIR in
    Ada95/gen/Makefile.in, which depends on whether the Ada95 binding is
    distributed separately (report by Nicolas Boulenguez).
  + modify configure script to add "-g" and/or "-O3" to ADAFLAGS if the
    CFLAGS for the build has these options.
  + amend change from 20070324, to not add 1 to the result of getmaxx
    and getmaxy in the Ada binding (report by Nicolas Boulenguez for
    thread in comp.lang.ada).
  + build-fix Ada95/samples for gnat 4.5
  + spelling fixes for Ada95/samples/explain.txt
  > fixes for Ada95 binding (Nicolas Boulenguez):
  + add item in Trace_Attribute_Set corresponding to TRACE_ATTRS.
  + add workaround for binding to set_field_type(), which uses varargs.
    The original binding from 990220 relied on the prevalent
    implementation of varargs which did not support or need va_copy().
  + add dependency on gen/Makefile.in needed for *-panels.ads
  + add Library_Options to library.gpr
  + add Languages to library.gpr, for gprbuild

20110307
  + revert changes to limit-checks from 20110122 (Debian #616711).
  > minor type-cleanup of Ada95 binding (Nicolas Boulenguez):
  + corrected a minor sign error in a field of Low_Level_Field_Type, to
    conform to form.h.
  + replaced C_Int by Curses_Bool as return type for some callbacks, see
    fieldtype(3FORM).
  + modify samples/sample-explain.adb to provide explicit message when
    explain.txt is not found.

20110305
  + improve makefiles for Ada95 tree (patch by Nicolas Boulenguez).
  + fix an off-by-one error in _nc_slk_initialize() from 20100605 fixes
    for compiler warnings (report by Nicolas Boulenguez).
  + modify Ada95/gen/gen.c to declare unused bits in generated layouts,
    needed to compile when chtype is 64-bits using gnat 4.4.5

20110226  5.8 release for upload to

20110226
  + update release notes, for 5.8.
  + regenerated html manpages.
  + change open() in _nc_read_file_entry() to fopen() for consistency
    with write_file().
  + modify misc/run_tic.in to create parent directory, in case this is
    a new install of hashed database.
  + fix typo in Ada95/mk-1st.awk which causes error with original awk.

20110220
  + configure script rpath fixes from xterm #269.
  + workaround for cygwin's non-functional features.h, to force ncurses'
    configure script to define _XOPEN_SOURCE_EXTENDED when building
    wide-character configuration.
  + build-fix in run_tic.sh for OS/2 EMX install
  + add cons25-debian entry (patch by Brian M Carlson, Debian #607662).

20110212
  + regenerated html manpages.
  + use _tracef() in show_where() function of tic, to work correctly with
    special case of trace configuration.

20110205
  + add xterm-utf8 entry as a demo of the U8 feature -TD
  + add U8 feature to denote entries for terminal emulators which do not
    support VT100 SI/SO when processing UTF-8 encoding -TD
  + improve the NCURSES_NO_UTF8_ACS feature by adding a check for an
    extended terminfo capability U8 (prompted by mailing list
    discussion).

20110122
  + start documenting interface changes for upcoming 5.8 release.
  + correct limit-checks in derwin().
  + correct limit-checks in newwin(), to ensure that windows have nonzero
    size (report by Garrett Cooper).
  + fix a missing "weak" declaration for pthread_kill (patch by Nicholas
    Alcock).
  + improve documentation of KEY_ENTER in curs_getch.3x manpage (prompted
    by discussion with Kevin Martin).

20110115
  + modify Ada95/configure script to make the --with-curses-dir option
    work without requiring the --with-ncurses option.
  + modify test programs to allow them to be built with NetBSD curses.
  + document thick- and double-line symbols in curs_add_wch.3x manpage.
  + document WACS_xxx constants in curs_add_wch.3x manpage.
  + fix some warnings for clang 2.6 "--analyze"
  + modify Ada95 makefiles to make html-documentation with the project
    file configuration if that is used.
  + update config.guess, config.sub

20110108
  + regenerated html manpages.
  + minor fixes to enable lint when trace is not enabled, e.g., with
    clang --analyze.
  + fix typo in man/default_colors.3x (patch by Tim van der Molen).
  + update ncurses/llib-lncurses*

20110101
  + fix remaining strict compiler warnings in ncurses library ABI=5,
    except those dealing with function pointers, etc.

20101225
  + modify nc_tparm.h, adding guards against repeated inclusion, and
    allowing TPARM_ARG to be overridden.
  + fix some strict compiler warnings in ncurses library.

20101211
  + suppress ncv in screen entry, allowing underline (patch by Alejandro
    R Sedeno).
  + also suppress ncv in konsole-base -TD
http://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=NEWS;h=aca9d15815316e7f77e1b293b45d05ed6d983f79;hb=aed072e27e60c2abc5ac0ab8113aacf9b4908d50
std::ostringstream, but it's not the only string stream in IOStreams; it is complemented by std::istringstream for input, and std::stringstream for both input and output. The following example demonstrates a simple use of istringstream to extract data from a string:

    #include <sstream>
    #include <string>
    #include <cassert>

    int main()
    {
        std::istringstream stm;
        stm.str("1 3.14159265 Two strings");

        int i;
        double d;
        std::string s1, s2;

        stm >> i >> d >> s1 >> s2;

        assert(i==1);
        assert(d==3.14159265);
        assert(s1=="Two");
        assert(s2=="strings");
    }

As you can see in the example, the istringstream is assigned a string that is subsequently used to extract data from; an int, a double, and two strings. By default, istringstream skips whitespace; if you need to change that behavior you can use the manipulator std::noskipws. The remaining class, std::stringstream, does both input and output streaming. When using these streams, it makes sense to always use the stream with the capabilities you are looking for. This makes it easier to read and understand the code, so if all you need is input (string) streaming, use std::istringstream rather than always going with std::stringstream for convenience. (Remember, what may be convenient at the time of writing may not be convenient when the same code needs to be maintained.)

    #include <sstream>
    #include <string>
    #include <cassert>
    #include <iostream>

    int main()
    {
        std::istringstream stm;
        stm >> std::noskipws; // Don't skip whitespace
        stm.str(" 1.23");

        double d;
        stm >> d;
        if (!stm) {
            std::cout << "Error streaming to d!\n";
            // Manually fix the problem...assuming we know what went wrong!
            // In this example, we know that we must ignore whitespace,
            // so we simply clear the stream's state.
            stm.clear(std::ios::goodbit);
            // Ignore whitespace!
            stm >> std::skipws;
            stm >> d;
        }
        assert(d==1.23);
    }

In the example, the extraction of a double will fail on the first attempt, because there's no way a space can be converted to a double.
This leaves the stream in a bad state (std::ios::failbit will be set), which is why the test if (!stm) yields true. Once a stream has gone bad, you must explicitly set it in a good state to be able to use it again. In this example, we know what's gone wrong, and we decide to turn on the skipping of spaces again, which nicely resolves the problem. Then, and only then, can we successfully extract the double!

Checking the stream's state can be tedious and is easy to forget. An alternative is to tell the stream to use exceptions when entering a bad state. This is done through a member function called exceptions, which accepts an argument that denotes which bad states should cause an exception to be thrown. The recommended mask includes the flags badbit and failbit. Here's the above example in a version using exceptions:

    #include <sstream>
    #include <string>
    #include <cassert>
    #include <iostream>

    int main()
    {
        std::istringstream stm;
        stm >> std::noskipws; // Don't skip whitespace
        stm.str(" 1.23");

        double d;
        try {
            // Turn on exceptions
            stm.exceptions(std::ios::badbit | std::ios::failbit);
            stm >> d;
        }
        catch (std::ios_base::failure e) {
            std::cout << e.what() << '\n';
            // Manually fix it...assuming we know what went wrong!
            stm.clear(std::ios::goodbit);
            // Ignore whitespace!
            stm >> std::skipws;
            stm >> d;
        }
        assert(d==1.23);
    }

Whether to use exceptions or not when streams end up in a bad state largely depends on the problem at hand. Our advice is to consider how a stream in a bad state affects your code, and if failure indicates a truly exceptional situation, then the exception-throwing version is definitely better.
This led us to another important topic, namely enabling your own classes to work seamlessly with output streams by making them OutputStreamable. Finally, we looked at other string stream offerings from the C++ Standard Library. Conversions from various types to strings are ubiquitous in most any application, which means that all the measures we take to simplify such conversions bring excellent value. Input streaming, which was only briefly discussed here, is just as important as output streaming—that will be the topic of a future article. We hope that the tools covered in this article help you feel empowered to do what the title says—Stream Thy Strings!

Thank you for reading,
Bjorn Karlsson and Matthew Wilson

Bjorn and Matthew keep a reasonably up-to-date listing of their publications at.
http://www.artima.com/cppsource/streamstrings3.html
Non-static variable cannot be referenced from static context

Amil Mehmedagic, Greenhorn
Joined: Feb 27, 2004  Posts: 22
posted Apr 26, 2004 03:51:00

I am getting the error message "non-static variable clockwise cannot be referenced from a static context" when running the program code below. How do I counter this problem?

The program below asks a user to input a probability and a number of circles. The program then animates the input number of circles around a square of side 400 pixels. At each move there is a probability that each circle moves anticlockwise. If it does so, the variable count counts that circle. When all circles are moving anticlockwise, the animation needs to stop. Below is the code of the two classes. The error occurs in the main method of the Main class. This is where I check whether a circle moves anticlockwise to assign it to the count variable. Please help in solving the above problem!

public class Circle {
    private int x, y;
    public boolean clockwise = true;
    private int radius = 10;
    private int step = 5;
    private int side = 400;
    private double prob;

    // Constructors: the following method sets the position of the
    // required circles (x and y position respectively) and sets the
    // value for the instance variable prob for each circle object.
    public Circle(int i, double p) {
        x = (i * 2 * radius) + radius;
        y = radius;
        prob = p;
    }

    public void move() {
        if (clockwise) {
            if (Math.random() < prob) clockwise = false;
            if (y == radius && x < side) {
                x = x + step;
            } else if (x == side && y < side) {
                y = y + step;
            } else if (y == side && x <= side && x > radius) {
                x = x - step;
            } else if (x == radius && y <= side && y > radius) {
                y = y - step;
            }
        } else {
            if (x == radius && y < side) {
                y = y + step;
            } else if (y == side && x < side) {
                x = x + step;
            } else if (x == side && y > radius && y <= side) {
                y = y - step;
            } else if (y == radius && x <= side && x > radius) {
                x = x - step;
            }
        }
    }

    // These are used for drawing circles.
    public int getX() { return x; }
    public int getY() { return y; }
    public int getRadius() { return radius; }
    public boolean clockwise() { return clockwise; }
}

public class Main {
    public static void main(String[] args) {
        int i;
        int circles = DialogBox.requestInt("Number of circles?");
        double prob = DialogBox.requestDouble(
            "Probability of a clockwise circle changing direction");
        int count = 0;
        Circle[] circle = new Circle[circles];
        i = 0;
        while (i != circles) {
            circle[i] = new Circle(i, prob);
            circle[i].move();
            i = i + 1;
        }
        CirclesFigure.create();
        CirclesFigure.draw(circle);
        while (true) {
            for (int j = 0; j < circles; j++) {
                Delay.milliseconds(20);
                circle[j].move();
                CirclesFigure.draw(circle);
                if (Circle.clockwise = false)
                    count = count + 1;
                if (count == circles) {
                }
            }
        }
    }
}

[ edited to break long lines and remove the evil tab character -ds ]
[ April 27, 2004: Message edited by: Dirk Schreckmann ]

Mike Gershman, Ranch Hand
Joined: Mar 13, 2004  Posts: 1272
posted Apr 26, 2004 07:35:00

For one thing,

    if (Circle.clockwise = false)

should be

    if (circle.clockwise = false)

since clockwise is an instance variable. IMHO, your convention of naming a reference to an object the lower case of the object class name is pretty error-prone.
Mike Gershman
SCJP 1.4, SCWCD in process

Jeroen Wenting, Ranch Hand
Joined: Oct 12, 2000  Posts: 5093
posted Apr 26, 2004 07:44:00

More correctly it should be

    if (circle.clockwise == false)

else you have an assignment instead of a comparison. Which doesn't solve the problem that you're calling instance variables from static code, something that's simply impossible. Either make sure you have an instance to call (create one...) or make the variable static (if possible). Any book or tutorial should be able to help you out with this, it's so basic. Repeat: you cannot use instance variables from static code (after all, the static code doesn't belong to any instance, so how can it know about the value of an instance variable?).

42

Tobias Huebner, Greenhorn
Joined: Apr 26, 2004  Posts: 1
posted Apr 26, 2004 07:52:00

The problem is that you don't have an instance of the class Circle called "Circle"! But you want to call on "Circle". Remember, Java is case-sensitive.

    if (Circle.clockwise = false)

All you have is an array of the type Circle called "circle":

    Circle[] circle = new Circle[circles];

You should instantiate the class Circle and use the clockwise variable like this:

    Circle myCircle = new Circle();
    if (myCircle.clockwise == false) {}

But in your case I think you might want to use the array of Circle you already have:

    if (circle[j].clockwise == false) {}

since you are in a loop. I hope I was of some help to you. This was my first posting. So good luck :0)

[ April 26, 2004: Message edited by: Tobias Huebner ]
[ April 27, 2004: Message edited by: Tobias Huebner ]
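Pulling the advice in this thread together, here is a minimal, self-contained sketch of the fix (the class and method names below are ours, and the poster's DialogBox/CirclesFigure graphics helpers are deliberately omitted): clockwise is an instance field, so static code must reach it through an instance, here the array elements, not through the class name.

```java
// Minimal sketch of the counting logic discussed in the thread.
class Circle {
    public boolean clockwise = true;
}

public class CountDemo {
    // Count how many circles in the array are moving anticlockwise.
    static int countAnticlockwise(Circle[] circles) {
        int count = 0;
        for (Circle c : circles) {   // instance access: c.clockwise
            if (!c.clockwise) {      // idiomatic form of "== false"
                count = count + 1;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Circle[] circles = { new Circle(), new Circle(), new Circle() };
        circles[1].clockwise = false; // one circle turns anticlockwise
        System.out.println(countAnticlockwise(circles)); // prints 1
    }
}
```

Note the comparison uses `!c.clockwise` rather than `c.clockwise = false`; a single `=` would assign instead of compare, which is the second bug Jeroen points out.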
http://www.coderanch.com/t/396308/java/java/static-variable-referenced-static-context
So the program I have is supposed to read from a file and take that data (which is split into three parts divided by a space). These strings need to be read and stored in an appropriate field. Now I don't know how to do this. I've tried with an array, as you can see, which I think is right, but it only stores a single line. Someone told me that I should do that and then have separate array objects or something? But I don't really know how. Help would be massively appreciated.

import java.util.Scanner;
import java.io.*;

public class DataRead {
    private Scanner scan;
    private String[] arraySplit = null;

    public DataRead() {
        arraySplit = new String[100];
        try {
            FileReader inputFile = new FileReader(//read file);
            scan = new Scanner(inputFile);
        } catch (FileNotFoundException e) {
            System.out.println("FileNotFoundException - Could not find file");
        } catch (Exception e) {
            System.out.println("Unknown error");
        }

        while (scan.hasNext()) { // reads through file while there is more to read
            String line = scan.nextLine();
            arraySplit = line.split(" ");
        }
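The reason only a single line survives is that `arraySplit = line.split(" ")` overwrites the same field on every loop iteration. One common way to keep every line is to store one String[] per line, for example in a List<String[]>. Below is a small sketch of that idea (the class and method names are ours, and the file input is replaced by an in-memory list of lines so the example is self-contained; in the real program you would add each `scan.nextLine()` result to such a list):

```java
// Keep one String[] of fields per input line instead of overwriting
// a single array each iteration.
import java.util.ArrayList;
import java.util.List;

public class DataReadSketch {
    // Split each line into its space-separated parts and collect them all.
    static List<String[]> parseLines(List<String> lines) {
        List<String[]> records = new ArrayList<>();
        for (String line : lines) {
            records.add(line.split(" "));
        }
        return records;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("Alice 42 red", "Bob 7 blue");
        List<String[]> records = parseLines(lines);
        System.out.println(records.size());    // 2
        System.out.println(records.get(1)[2]); // blue
    }
}
```

Each entry of the list then holds the three fields of one line, so `records.get(i)[0]`, `[1]`, and `[2]` are the parts of line i.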
https://www.daniweb.com/programming/software-development/threads/243683/i-o-fileread-storage-help
In today’s article, we take a closer look at how we can build our own custom lazy loading image component with Vue.js. We use the fast and lightweight Lozad.js package for handling the lazy loading logic for us, and we’ll enhance it with the ability to display the dominant color of the image as a fallback color, which is shown while the original image is loading. Additionally, the lazy loading component handles maintaining the correct aspect ratio while a placeholder rectangle is shown.

The final result of our work

If you want to take a look at the result of our work, you can look at a demo hosted on Netlify and you can check out the code on GitHub.

Vue-Lazyload

Before we get started: there already is a perfectly fine solution for lazy loading images with Vue.js: Vue-Lazyload. The reason why I’m still writing this article is that I wanted a more lightweight solution. In the tests that I’ve done, Vue-Lazyload adds about 19 kB to the final bundle size (overall bundle size: 106 kB). The custom solution only adds about 5 kB (overall bundle size: 92 kB). If a few kilobytes are not a great concern for you and if you’re not interested in how to actually build a lazy loading component, you can stop reading and just use Vue-Lazyload, it’s a great plugin. For those of you who’re constantly searching for ways to shave off a few kilobytes of your application or who’re interested in how to build stuff themselves, read on.

Building a custom lazy loading component

Because we don’t want to implement the logic for detecting if an image is in the viewport, and therefore should be loaded, ourselves, we use Lozad.js to handle this for us. Lozad.js has a very small footprint of only 1.8 kB and it’s very fast because it uses the Intersection Observer API.
Keep in mind, though, that not every browser (most notably Safari) supports the Intersection Observer API yet, but Lozad.js degrades very gracefully by simply loading all images immediately if the browser does not support the Intersection Observer API. In my opinion, this is a good enough fallback, but you can also use a polyfill if you want.

npm install lozad --save

After installing the lozad package, we can start building our custom Vue.js lazy loading image component.

The lazy loading image component

There are multiple ways of how to solve this problem in Vue.js. One possible approach would be to use custom directives to handle lazy loading on regular <img> tags. However, I'm a huge fan of components because they're very flexible and they enable us to easily add further functionality in the future, if we want to.

<template>
  <img
    class="AppImage"
    :data-src="lazySrc"
    :data-srcset="lazySrcset"
    :style="style"
  >
</template>

<script>
import lozad from 'lozad';

export default {
  name: 'AppImage',
  props: {
    backgroundColor: {
      type: String,
      default: '#efefef',
    },
    height: {
      type: Number,
      default: null,
    },
    lazySrc: {
      type: String,
      default: null,
    },
    lazySrcset: {
      type: String,
      default: null,
    },
    width: {
      type: Number,
      default: null,
    },
  },
  data() {
    return {
      loading: true,
    };
  },
  computed: {
    aspectRatio() {
      // Calculate the aspect ratio of the image
      // if the width and the height are given.
      if (!this.width || !this.height) return null;

      return (this.height / this.width) * 100;
    },
    style() {
      // The background color is used as a
      // placeholder while loading the image.
      // You can use the dominant color of the
      // image to improve perceived performance.
      // See:
      const style = { backgroundColor: this.backgroundColor };

      if (this.width) style.width = `${this.width}px`;

      // If the image is still loading and an
      // aspect ratio could be calculated, we
      // apply the calculated aspect ratio by
      // using padding top.
      const applyAspectRatio = this.loading && this.aspectRatio;
      if (applyAspectRatio) {
        // Prevent flash of unstyled image
        // after the image is loaded.
        style.height = 0;
        // Scale the image container according
        // to the aspect ratio.
        style.paddingTop = `${this.aspectRatio}%`;
      }

      return style;
    },
  },
  mounted() {
    // As soon as the <img> element triggers
    // the `load` event, the loading state is
    // set to `false`, which removes the aspect
    // ratio we've applied earlier.
    const setLoadingState = () => {
      this.loading = false;
    };
    this.$el.addEventListener('load', setLoadingState);
    // We remove the event listener as soon as
    // the component is destroyed to prevent
    // potential memory leaks.
    this.$once('hook:destroyed', () => {
      this.$el.removeEventListener('load', setLoadingState);
    });

    // We initialize Lozad.js on the root
    // element of our component.
    const observer = lozad(this.$el);
    observer.observe();
  },
};
</script>

<style lang="scss">
// Responsive image styles.
.AppImage {
  max-width: 100%;
  max-height: 100%;
  width: auto;
  height: auto;
  vertical-align: middle;
}
</style>

Above you can see the code of the AppImage component. I've added comments to explain what's going on.

Using the component

There are multiple ways of how to use the component. If you're ok with the image popping up as soon as it's loaded, you can use the component almost the same way as a regular <img> tag. The only difference is that you have to prefix the src and srcset properties with the lazy- keyword, if you want to make use of lazy loading that is; otherwise you can use the regular src and srcset properties.

<app-image

The lazy-loaded image pops up as soon as it's loaded.

You can optimize this a little bit by adding the dimensions of the image. By providing a width and a height, the component can calculate the aspect ratio and reserve the space that the image will take up.
<app-image :

Maintain the aspect ratio and show a gray rectangle.

Dominant color

To further improve the perceived performance, you can extract the most dominant color of the image as the background color of the placeholder rectangle.

<app-image :

Show a rectangle in the dominant color of the image.

Low fi blurry image

Another route you can go is to use a low fi blurry version of the image as a placeholder while the high-resolution version is loading. Keep in mind, though, that by using this technique you have to load two images instead of one, so you should absolutely test whether this has a beneficial effect overall.
https://markus.oberlehner.net/blog/lazy-loading-responsive-images-with-vue/
CC-MAIN-2020-50
refinedweb
1,117
52.8
MacKenzie T. Stout23,972 Points Ruby Loops - Methods That Return a Value; this is a bummer I can't figure out what I'm missing about this challenge!?! def parse_answer(answer, kind="string") answer = gets.chomp answer = answer.to_i if kind=="number" return answer end 1 Answer William LiPro Student 26,807 Points Overall your solution is right, only problem here is that the challenge didn't ask for accepting user input, therefore, line 2 should be deleted. def parse_answer(answer, kind="string") answer = answer.to_i if kind=="number" return answer end Hope it helps. MacKenzie T. Stout23,972 Points MacKenzie T. Stout23,972 Points Thank you! I felt confident in the solution and couldn't figure out what I was doing wrong.
https://teamtreehouse.com/community/ruby-loops-methods-that-return-a-value-this-is-a-bummer
CC-MAIN-2020-05
refinedweb
123
67.96
DOM::Event #include <dom2_events.h> Detailed Description Introduced in DOM Level 2.. Definition at line 116 of file dom2_events.h. Member Enumeration Documentation An integer indicating which phase of event flow is being processed. AT_TARGET: The event is currently being evaluated at the target EventTarget. BUBBLING_PHASE: The current event phase is the bubbling phase. CAPTURING_PHASE: The current event phase is the capturing phase. Definition at line 139 of file dom2_events.h. Member Function Documentation Used to indicate whether or not an event is a bubbling event. If the event can bubble the value is true, else the value is false. Definition at line 133 of file dom2_events.cpp. Used to indicate whether or not an event can have its default action prevented. If the default action can be prevented the value is true, else the value is false. Definition at line 142 of file dom2_events.cpp. Used to indicate the EventTarget whose EventListeners are currently being processed. This is particularly useful during capturing and bubbling. Definition at line 112 of file dom2_events.cpp. Used to indicate which phase of event flow is currently being evaluated. Definition at line 124 of file dom2_events.cpp. Definition at line 187 of file dom2_events.cpp.. - Parameters -. - Parameters - Definition at line 178 of file dom2_events.cpp.. Definition at line 169 of file dom2_events.cpp. 160 of file dom2_events.cpp. Used to indicate the EventTarget to which the event was originally dispatched. Definition at line 100 of file dom2_events.cpp. 151 of file dom2_events.cpp. The name of the event (case-insensitive). The name must be an XML name. Definition at line 91 of file dom2_events.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2020 The KDE developers. Generated on Sun May 24 2020 22:52:24 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
https://api.kde.org/frameworks/khtml/html/classDOM_1_1Event.html
CC-MAIN-2020-24
refinedweb
317
62.14
Its a fun game check it out once The game is computer randomly selects the SECRET number with in the defined range of numbers , here 1 to 52 , and prompts the user to guess the number . In this program we ask the user to enter any number between 1 to 52 . So user puts up any random number with in the defined range. If the user number is same as the SECRET number then the game stops ,otherwise, If the user number is lesser than the SECRET number then , It will change the range of the number , that is now user has to select a number between user input number to 52 . If the user input is greater than the SECRET number then , the range of the numbers to select an input from for user will be 1 to user input . The game continues till the user guesses the correct number . Here we make sure that the game starts again once it is finished . You can also make it a automated program , that is computer will set the SECRET number and then computer only will try to find it . For that you just need to use random method again and compare it with the target or secret number . for example : Suppose computer selects 10 as the secret number so , user try to guess the secret number between 1 to 52 , let suppose he chooses 38 which is greater than the secret number than the computer prompts user to guess the number from the range 1 to 37 instead if user inputs here 5 in first attempt , then the range would become 6 to 52 . And in the same way the game continues to be played and the range to select the number keeps on decreasing . There is no limit set to define a select number of attempts to the user . Pseudo code : 1. Computer randomly selects one secret number ,prompts user to guess it correctly in minimum number of attempts with in defined range here 1 to 52 2. User inputs any number with in defined range 3. If input number is greater than the secret number than range reduces to 1 to INPUT NUMBER else range reduces to INPUT NUMBER to 52 4. 
Repeat steps from 2 again Demo : Code : import java.util.Scanner; /****************************************************** / ** Program: CardsGame.java** ** Author: **** Date: 20 April 2013 /******************************************************/ public class CardsGame { /** * @param args */ static int low=1,high=52; public static void main(String[] args) { // TODO Auto-generated method stub gameBegins(); } public static void gameBegins() { int target=0; double i =Math.random(); if(i != 0) target= (int) (i*53); else { gameBegins(); System.out.println("Please enter a valid number"); } System.out.println("-------------------------"); System.out.println("NEW GAME "); System.out.println("-------------------------"); System.out.println(""); System.out.println(""); System.out.println("pick a number between 1-52! "); while(true) { int input=userInput(); predictionResult(input, target); } } public static int userInput() { Scanner in = new Scanner(System.in); int s= in.nextInt(); System.out.println("user guesses "+s); return s; } public static void predictionResult(int input , int target) { if(input < target) { low=input; System.out.println(" Sorry,that is too low "); low=low+1; System.out.println("pick a number between "+low +"-"+high); } else if(input > target) { high=input; System.out.println(" Sorry,that number is too high "); high=high-1; System.out.println("pick a number between "+low +"-"+high); } else if(input==target) { System.out.println("That is correct !"); playAgain(); } } public static void playAgain() { low=1; high=52; gameBegins(); } }
http://javahungry.blogspot.com/2013/04/guess-number-game.html
CC-MAIN-2014-49
refinedweb
580
55.24
hypot - Euclidean distance function #include <math.h> double hypot(double x, double y); The hypot() function computes the length of the hypotenuse of a right-angled triangle: An application wishing to check for error situations should set errno to 0 before calling hypot(). If errno is non-zero on return, or the return value is HUGE_VAL or NaN, an error has occurred. Upon successful completion, hypot() returns the length of the hypotenuse of a right angled triangle with sides of length x and y. If the result would cause overflow, HUGE_VAL is returned and errno may be set to [ERANGE]. If x or y is NaN, NaN is returned. and errno may be set to [EDOM]. If the correct result would cause underflow, 0 is returned and errno may be set to [ERANGE]. The hypot() function may fail if: - [EDOM] - The value of x or y is NaN. - [ERANGE] - The result overflows or underflows. No other errors will occur. None. The hypot() function takes precautions against overflow during intermediate steps of the computation. If the calculated result would still overflow a double, then hypot() returns HUGE_VAL. None. isnan(), sqrt(), <math.h>. Derived from Issue 1 of the SVID.
http://www.opengroup.org/onlinepubs/007908799/xsh/hypot.html
crawl-002
refinedweb
198
66.84
DNS and BIND

Introduction

DNS is an acronym for Domain Name System. DNS organizes the Internet into a hierarchy of domains, providing a system to resolve easy-to-remember host and domain names to their IP addresses. An example of this is typing a name like google.com into a browser and having the Google web page come up. Another example is using the ping [hostname] command instead of the ping [IP address] command. These are both examples of forward lookups. DNS also provides reverse lookups, which is resolving hostnames when given an IP address. Reverse lookups are handy for web sites tracking users, for tools such as traceroute and ping, for checking the reverse DNS records of email addresses (which can be useful in fighting spam), and so on. The Domain Name System also solves name uniqueness problems on networks: a hostname only needs to be unique to the domain or organization, not the entire Internet.

The top of the DNS hierarchy is a "dot", which is the root domain. The root domain holds together all domains underneath it. Below the root domain are the familiar com's, edu's, net's, and so on. These are called global Top Level Domains (gTLD). Below gTLDs are subdomains, for example, google.com. When working with DNS you will hear about zones, which are basically a group of machines within a domain. Every period in a DNS name indicates a point where authority can be delegated, so you can think of a zone as part of the DNS namespace: with australia.test.com, australia is a zone in the test.com domain. There is debate on the correct definition of a zone.

DNS Queries

In most cases a DNS query is sent when you need the IP address of a hostname. The following example will use the host testhost and the domain testdomain.com. The process is as follows:

- if the DNS server you are using is using cache facilities, the cache is first checked for any information about testhost.testdomain.com. If an A record for testhost.testdomain.com is found, the process is complete.
- if no information about testhost.testdomain.com exists in cache, the cache is then checked for any information on testdomain.com. This process continues, taking away parts of the DNS namespace from left to right.
- when the query reaches the end of com, a query for testhost.testdomain.com is sent to a root level nameserver. The root level nameserver refers you to a nameserver in the .com domain, which will know more about the query for testhost.testdomain.com.
- the .com level nameserver refers you to a testdomain.com level nameserver. The testdomain.com level nameserver will contain the A record (IP address) for the testhost.testdomain.com system.

Types of DNS Queries

There are three types of queries you can send to a DNS server: recursive, iterative, and inverse:

- recursive: the DNS server will provide the full answer by following all referrals.
- iterative: non-recursive. The DNS server first checks its cache. If the answer is not found, a referral is sent to the resolver on your system. Most local resolvers are stub resolvers, which means they cannot follow referrals. Therefore you should have at least one nameserver in /etc/resolv.conf that can provide recursive queries.
- inverse: inverse queries map a resource record to a domain.

Types of DNS Servers

- Master: holds zone files for the domain it is authoritative for. DNS is not owned by one central organization; instead, authority is delegated so that everyone running a domain, or a zone, has control over their DNS.
- Slave: downloads zone information from Master DNS servers. Slave servers will reply with an authoritative answer as long as the information was not from their cache.
- Advertising: only serves information for the zone it is authoritative for. Does not provide recursive queries. An advertising server will not be able to resolve any queries outside the domain it is configured for.
- Cache-only: uses a root hints zone file to provide recursive queries.
A cache-only server does not hold authoritative information or serve a domain.
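The left-to-right walk through the namespace described in the DNS Queries section can be illustrated with a small, self-contained sketch (the function name is ours; this is a teaching aid, not a DNS API):

```python
# Illustrates the name-stripping walk a caching server performs:
# starting from the full name, each step drops the leftmost label,
# yielding the successively broader names that get checked.
def namespace_walk(hostname):
    """Yield hostname, then each parent domain, down to the TLD."""
    labels = hostname.rstrip(".").split(".")
    for i in range(len(labels)):
        yield ".".join(labels[i:])

print(list(namespace_walk("testhost.testdomain.com")))
# ['testhost.testdomain.com', 'testdomain.com', 'com']
```

Each yielded name corresponds to one cache check in the numbered steps above; only when the walk reaches the TLD with no cached answer does the query go out to a root level nameserver.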
https://fedoraproject.org/wiki/Archive:Docs/Drafts/AdministrationGuide/UserAccounts/DNSBIND/Introduction?rd=Docs/Drafts/AdministrationGuide/UserAccounts/DNSBIND/Introduction
As last mentioned, I'm seeing this on boot:

Jan 28 20:40:54 kanigix genunix: [ID 104096 kern.warning] WARNING: system call missing from bind file
Jan 28 20:40:54 kanigix genunix: [ID 684969 kern.warning] WARNING: Cannot mount /system/dfs

How do we debug this? Well, first we find the first message in the source: usr/src/uts/common/os/modconf.c. In mod_getsysent():

if ((sysnum = mod_getsysnum(mod_name)) == -1) {
        cmn_err(CE_WARN, "system call missing from bind file");
        return (NULL);
}

Okay, this is actually looking familiar - I think I hit it before. Now I did a diff from my code before and after the merge with the latest source - and I didn't see anything glaring. Bzzt, found it after some searching. If we look at the top of make_syscallname():

cmn_err(CE_WARN, "!Couldn't add system call \"%s %d\". "
    "It conflicts with \"%s %d\" in /etc/name_to_sysnum.",
    name, sysno, *cp, sysno);

So what does /etc/name_to_sysnum contain for entry 140?

adjtime         138
systeminfo      139
seteuid         141
forksys         142
fork1           143

So this is the reason I'm seeing the error on the machines. Now I have to figure out why BFU is messing up. (Note, I know I ran ACR afterwards on both of the boxes.) I say it is BFU, but let's check. Inside the source workspace:

adjtime         138
systeminfo      139
sharefs         140
seteuid         141

[tdh@warlock os]> pwd
/zoo/ws/onnv-gate/usr/src/uts/intel/os

And let's check the proto area:

[tdh@warlock os]> pwd
/zoo/ws/onnv-gate/proto/root_i386/etc

adjtime         138
systeminfo      139
seteuid         141
forksys         142

Ouch, BFU is innocent! What is the deal? I looked in the nightly logs and couldn't find anything. I looked in the previous nightly log, which was the fresh install:

/usr/bin/rm -f /zoo/ws/onnv-gate/proto/root_i386/etc/name_to_sysnum;
install -s -m 644 -f /zoo/ws/onnv-gate/proto/root_i386/etc ../../intel/os/name_to_sysnum

Okay, the error is that by not clobbering everything and starting fresh, /etc/name_to_sysnum did not get remade.
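In the same spirit as that diagnosis, here is a small, hypothetical sanity-check script (not part of the ON build tools) that scans a name_to_sysnum-style file, lines of "name number", and reports any syscall numbers that are skipped, like the missing 140 here:

```python
# Detect gaps in a name_to_sysnum-style listing: collect the numbers,
# then report anything missing between the smallest and largest.
def missing_sysnums(text):
    nums = sorted(int(line.split()[1])
                  for line in text.splitlines() if line.split())
    present = set(nums)
    return [n for n in range(nums[0], nums[-1] + 1) if n not in present]

sample = """adjtime 138
systeminfo 139
seteuid 141
forksys 142
fork1 143"""
print(missing_sysnums(sample))  # [140]
```

Running something like this against both the proto area and the installed /etc would have flagged the stale file immediately, without any boot-time warnings.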
I'll claim that this is a bug in the makefile - there should be a dependency which gets checked. My solution? Rebuild from scratch.

One thing I would like you to notice is that I debugged this issue without actually looking at a core file or stepping through a debugger. That would have taken longer. Instead, I mainly relied on grep, diff, find, OpenGrok, and cscope. With a different bug, or with code which I knew was not working, I might have delved in with kmdb. I'm glad I did not have to. A second thing to note: I can just add in the entry for 140 to /etc/name_to_sysnum to get the system running correctly. I actually did that on one of the two systems; the other is rebuilding the source from scratch.

Here are some neat interactions:

[tdh@mrx ~]> ls -la /etc/dfs/sharetab
lrwxrwxrwx 1 root root 25 Jan 28 01:17 /etc/dfs/sharetab -> ../../system/dfs/sharetab
[tdh@mrx ~]> ls -la /system/dfs/sharetab
-r--r--r-- 1 root root 0 Jan 28 22:01 /system/dfs/sharetab
[tdh@mrx ~]> share -F nfs / -o rw
Could not share: /: no permission
[tdh@mrx ~]> sudo !!
sudo share -F nfs / -o rw
[tdh@mrx ~]> ls -la /system/dfs/sharetab
-r--r--r-- 1 root root 13 Jan 28 22:01 /system/dfs/sharetab
[tdh@mrx ~]> cat /system/dfs/sharetab
/ -o nfs rw

Besides the fact that it is down in /system, there is no way for you to tell that this is not really a file, but an interface into memory. Also, this is just the prototype; I recently decided to not use a symlink and instead fix up GFS to understand a file without a parent directory.

Since I'm showcasing my In-Kernel Sharetab project at Connectathon 2007 on OpenSolaris, I thought I would get it into a mercurial download of the source code from OpenSolaris.org. It sounds easy, but I've got 66 files checked out for editing (existing and new) and the source control for OpenSolaris is different than that for Solaris. Okay, the OpenSolaris one is probably pulled over nightly from the Solaris gate, but they are different interfaces to the code.
The good news is that I have no desire to putback from the OpenSolaris codebase. I just need to identify how to get my changes pushed on top of the OpenSolaris codebase. Also, the last time I synced, my changes were up to the codebase of a month ago. We had a major flag day since then - and as far as I can tell, the OpenSolaris code reflects that change. So the first thing I have to do is get my code in sync with the current nightly, which I suspect is very close, for my purposes, to the OpenSolaris code. I reparent my workspace, tell it to bringover, and then resolve the conflicts in 22 files. Not bad; I only had to step in twice for the automatic conflict resolution.

A tool we use at Sun is 'wx', and it can create backups of active files in a workspace. So once I had a good merge, I told it to back up again (I took a backup before the merge as well, in case I had to roll it back). That process created a tar file with just the copies of the active fileset. I took that over to my OpenSolaris build machine and did a diff between all 66 files. It wasn't that bad; I had managed to sync up to a close enough copy. Once I saw there were no problems, I untarred into the workspace, made sure via diff that the files were what I said to use, and then started off a build. I also kicked off a build in the synced Solaris workspace. I wanted to make sure my merge hadn't broken anything.

The OpenSolaris build finished and it was a clean build. The Solaris one is still going. Does it mean OpenSolaris is easier to build, my home machine is faster, etc.? No; in the OpenSolaris case, I told it not to clobber the existing stuff (i.e., an incremental build) and in the Solaris case I told it to start from scratch. Why? Well, I'm not checking in the OpenSolaris code; I knew exactly what I was changing. I had no clue what all changed in the merge for the Solaris workspace. Now I've got to BFU a system with those bits and see if I get what I expect.
The big issue for that is that the OpenSolaris build machine I used is behind a VPN firewall. It is 10 feet from the test box, but I can't get at it directly unless I want to lose all of my other sessions (including the work build). Luckily, we don't need complete access to it; we just need the BFU archives:

[tdh@warlock onnv-gate]> ls -la archives/i386/nightly/
total 569421
drwxr-xr-x   2 tdh      staff         11 Jan 28 00:24 .
drwxr-xr-x   3 tdh      staff          3 Jan 28 00:23 ..
-rw-r--r--   1 tdh      staff      64348 Jan 28 00:24 conflict_resolution.gz
-rw-r--r--   1 tdh      staff   76112168 Jan 28 00:23 generic.kernel
-rw-r--r--   1 tdh      staff   24045508 Jan 28 00:23 generic.lib
-rw-r--r--   1 tdh      staff    2367696 Jan 28 00:23 generic.root
-rw-r--r--   1 tdh      staff    1280000 Jan 28 00:23 generic.sbin
-rw-r--r--   1 tdh      staff  178135616 Jan 28 00:24 generic.usr
-rw-r--r--   1 tdh      staff    2580480 Jan 28 00:23 i86pc.boot
-rw-r--r--   1 tdh      staff    4853760 Jan 28 00:23 i86pc.root
-rw-r--r--   1 tdh      staff    1187840 Jan 28 00:23 i86pc.usr
[tdh@warlock onnv-gate]> tar cf sht_bfu.tar archives
[tdh@warlock onnv-gate]> ls -la sht_bfu.tar
-rw-r--r--   1 tdh      staff  290636800 Jan 28 00:50 sht_bfu.tar
[tdh@warlock onnv-gate]> bzip2 sht_bfu.tar
Time spent in user mode   (CPU seconds) : 94.17s
Time spent in kernel mode (CPU seconds) : 0.71s
Total time                              : 1:36.39s
CPU utilisation (percentage)            : 98.4%
[tdh@warlock onnv-gate]> ls -la sht_bfu.tar.bz2
-rw-r--r--   1 tdh      staff   97782474 Jan 28 00:50 sht_bfu.tar.bz2

All I have to do is get this file over, unpack it, and run BFU. Okay, I've done that - no serial console, so I can't show you the steps. The good news is that it boots and I can get into it. I've failed the brickify test. The somewhat bad news is this:

[tdh@mrx ~]> dmesg | grep dfs
Jan 28 01:21:33 mrx genunix: [ID 684969 kern.warning] WARNING: Cannot mount /system/dfs
[tdh@mrx dfs]> cd /system/dfs
[tdh@mrx dfs]> ls -la
total 4
dr-xr-xr-x   2 root     root         512 Jan 27 23:57 .
drwxr-xr-x   5 root     root         512 Jan 23 04:37 ..
I actually have to remove the /system/dfs from my prototype to get the final product. But I still need to know how I horked this all up. It isn't the symlink which is messed up, it is loading the sharefs module which is broken. Was it the merge? Or was it the blind copy into the OpenSolaris workspace? Or did a bug creep in? The other evil thought is that I've tested this on 64 bit sparc and 64 bit amd machines, but not 32 bit x86es. It could be a bug in the code.

Ouch, I put it on my new desktop (64bit AMD) and it panicked the box. I think it was not related - the backtrace was in usb (page fault in usb_ac:usb_ac_setup_connections+450 to be exact). As near as I can figure, it crashed while loading my Logitech QuickCam - see usb_ac.c. I just added that the other day, easy enough to pull for a stable system. Note that I think it must be my QuickCam because it has a microphone and my SoundBlaster is PCI. And it is in some DEBUG code - which explains why I hadn't seen it before. I filed a new bug for it - Bug ID: 6518469 DEBUG build page faults when booting with an attached Logitech QuickCam.

Anyway, the system does put up a warning about mounting /system/dfs:

DEBUG enabled
WARNING: system call missing from bind file
WARNING: Cannot mount /system/dfs

I should have first booted this machine as a stock OpenSolaris install. Then I could have added my code. Anyway, it came up on the next boot. But I have no idea if it will crash at any time. I'll look to see if there is a match to the backtrace of the core.

As for my problem, I need to get the Solaris build finished and see how it works. I'll just work my way back through the steps (checking my backups for changes). And if that doesn't work, I'll show you the painful way of debugging a live kernel.

By the way, failure is a good experience. I've caught a potential problem with my demo 3 days before I would if I waited until the conference.
I've also caught a bug before the QA people did - which they would do once I let them play with the code. In all, this has been a good use of 5 hours.
http://blogs.sun.com/tdh/date/20070128
crawl-001
refinedweb
1,929
81.93
Hi guys. I try to subscribe to a topic and get one value from it, but I suppose that subscribe.simple doesn't want to work with me. I wanna do this in a really simple way. Can anybody help me if I maybe do something wrong in my script, or do you know any ideas how to change subscribe.simple into something working?

from paho.mqtt import subscribe
import json
from bge import logic
import stay
import go

owner = logic.getCurrentController().owner

def get_measurement():
    message = subscribe.simple('measurements', hostname="IP", port=PORT)
    data = json.loads(message.payload)
    measurement = int(data["value"])
    return measurement

def stay_or_go():
    Xspeed, Yspeed, Zspeed = owner.getLinearVelocity(True)
    if Zspeed == 0:
        if get_measurement() == 0:
            stay()
        if get_measurement() > 0:
            go()

if __name__ == "__main__":
    stay_or_go()
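One way to narrow this down is to check the parsing half separately from the broker connection. The sketch below feeds a fake message object (paho messages expose the raw bytes as .payload) through the same logic; the class and function names here are illustrative, not part of paho:

```python
import json

class FakeMessage:
    """Stands in for the paho message object, which exposes .payload as bytes."""
    def __init__(self, payload: bytes):
        self.payload = payload

def parse_measurement(message) -> int:
    # Same parsing as get_measurement(), minus the network call.
    data = json.loads(message.payload)
    return int(data["value"])

msg = FakeMessage(b'{"value": 3}')
print(parse_measurement(msg))  # 3
```

If this works but the real script hangs, the problem is the subscribe.simple call itself (hostname/port/topic, or it blocking inside BGE), not the JSON handling.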
https://blenderartists.org/t/mqtt-subscribe-simple-in-bge/1216638
INTRODUCTION ADVANCE FOR OFFICE

Datastream … funds, indices, …), with data going back as far as 50 years. To support this financial market data, a selected set of Worldscope company fundamental data and financial ratios for more than 30,000 companies is available. In addition, Datastream provides exchange rates and interest rates as well as some 400,000 economic data series sourced from central banks, national statistics offices, OECD, and IMF. Forecast data for many developed economies are also available.

Downloading data from this database is quite straightforward once the user is familiar with the query screens. The purpose of these notes is to show by example how most standard queries can be performed. We focus on the user interface integrated in Microsoft Excel, the Advance for Office tool. When the package is installed, you will see the item "Datastream-AFO" in the Excel menu.

In these notes we will first discuss how to set up a basic request to download data. Next, we introduce lists which allow downloading large amounts of similar data easily. In the last section we present a number of additional tools that can facilitate setting up requests.

2. Setting up a request

Data are downloaded by launching a "request". There are essentially three kinds of requests:
• a time series request: when you need data of one or more series over a certain time period;
• a static request: when you need data of one or more series on a specific date;
• …

Setting up a request involves four steps:
1. specify the series;
2. specify the type of data needed (the 'datatypes');
3. specify the date or time period over which information is needed;
4. set options.

We will discuss all four steps using a time series request in which we need information about Delhaize.

First step: specify the series

From the Datastream-AFO menu, pick "Time Series Request". The following interface is opened. The requested series are identified by filling in their Datastream Mnemonic into the "Series/List" input box.
If you do not know the exact mnemonic – this will certainly be the case at first – you can open the Navigator window by clicking the button. The following screen opens.

This screen allows you to search series using a broad range of criteria. The screenshot above only shows the most important criteria. You will discover more criteria by scrolling down. The first "super"criterion is the Data Category. Here "Equities" is chosen, but in addition Datastream distinguishes the following criteria: Be sure to use the category to which your requested series belongs. If uncertain, choose "Any Category", but this may slow down the Search. As we need info about Delhaize, we leave the category choice on Equities. If we fill in "Delh" in the Name input box and click on the "Search" button, the following results are returned.

First of all, it is apparent that some results are represented in boldface. These are so-called primary series. For instance, the Delhaize stock is quoted on several markets, but the main market is Euronext (Brussels). This one is in boldface. Focusing on this line, you see the DS Mnemonic (B:DEH) as well as the DS Code, which is an alternative way to specify the series. Scrolling to the right, more (straightforward) info is given, such as the base date (starting date) of the series and the currency. Clicking on the DS mnemonic returns this into the request screen. This concludes the first step.

Second step: specify the datatypes

Again, note that the Data Category "Equities" is selected. Indeed, every data category has its specific datatypes. There are also two types of datatypes: one for time series requests and one for static requests. Finally, datatypes are assembled into groups.

If you are not sure about the exact content of a datatype, simply click on the open book icon to the left of the datatype to get its definition. To finalize the input, click on "Use Selected". The datatypes are now entered in the input screen:
Above the most common datatypes – the so-called Key Datatypes – are selected, but you can easily switch to other groups. Let's select the dividend yield (DY), price (P), and the total return index (RI) by checking the check boxes to the right of the datatypes.

Third step: specify the period

1. The Start Date drop box: here you can enter the starting date of the requested period in two ways, a relative and an absolute way. The relative way is shown above: "-2Y" indicates that the last two years of data are to be downloaded. Likewise you can use the letters "D" for days, "W" for weeks, "M" for months, and "Q" for quarters. Another way to enter relative start dates is by using the "Start of Week/Month/Quarter/Year" entries. The advantage of the relative way is that you always have the most recent data when you update the request. When you need a fixed period, use the absolute way of entering starting (and end) date by simply typing the date in the following format: dd/mm/yyyy. You can also enter "Base Date", which means starting from when the data series becomes available.
2. The Freq drop box, where you indicate the frequency of the download: Daily, Weekly, Monthly, Quarterly, or Yearly.
3. The End Date drop box. If left blank, the period ends at the most recent date.

Datastream has returned a time series for each of the datatypes requested. Note that also all the essential information about the request is indicated in the Excel sheet. By having checked the "Embed" and "Visible" boxes, there is also a Refresh button. By clicking it, the request is updated. By right-clicking on it, you can cut, copy, or delete this request, but you can also open the Editor which brings you back to the input screen. You can then easily adapt the request if necessary.

Fourth step: specify options

Finally, you can set some options about how the output will be returned into Excel.
For non-experienced users, it is advised to check "Display Row Titles", "Display Column Titles", "Display Headings" and "Display Currency" to have a clear overview about which data were exactly downloaded. It is also useful to check the "Embed" and "Visible Button" boxes. Datastream will then embed the request into the Excel sheet, such that the user can easily update or edit the request. If the "Auto Refresh" box is checked, the request is automatically updated when the Excel workbook is opened. Needless to say, when the workbook contains many requests and/or long requests this may slow down the opening of the file dramatically. Try it out in order to download also information about Carrefour (F:CRFR).

3. Working with lists

When you need data for a large number of securities, you may consider working with lists. As the name indicates, lists are a collection of Datastream codes or Mnemonics. There are lists created and maintained by Datastream, but you can also create your own lists. We will first download a so-called constituents list, which contains the codes of the constituent securities of a stock market index. We will then use this list to create our own list and use it to download data.

Next, position the mouse pointer on the right-arrow button, which is located to the right of DJ Euro Stoxx 50. The following screen should appear. You should click on "Display Underlying Constituent List", which results in:

The first one is the present constituent list. Their Datastream mnemonic is always an L followed by the underlying index's mnemonic. You also see two other lists, where 1007 and 1099 are added to the mnemonic.

The first column reports the Datastream code, the remaining columns show the requested information.

Making a list

Making your own list is easily done using the "Tools" item under the Datastream-AFO menu. There you can choose the "Create List from Range" option. The following interface is shown:
These are the lists on October 2007 and October 1999. The lists for other dates are not available. Now click on the first mnemonic to enter it in the input screen. Then, the datatypes have to be specified. With the Datatype Picker we choose the following datatypes: NAME (the name of the company), ISIN (ISIN-code, a unique security identifier), P (stock price), and WTIDX (weight in the index). This is the result:

In the Code Range input box you enter the Excel range containing the Datastream codes to be assembled in a list. As an illustration, let's recreate the Euro Stoxx 50 list using the codes we obtained from the previous step. These codes are contained in the range A3:A52. We then enter a list description, which should be sufficiently clear such that we still understand it later, e.g. Eurostoxx 50 November 2007. Then we enter a List File Name, e.g. Leurostoxx50112007. This is the name of the file where the list is stored. Finally, check the "Store List Locally" box. This means that the list information is stored on your computer. Alternatively, you can choose "Upload List", in which case the list is stored on the Datastream mainframe. This implies that you can use the list from whichever computer you log on. In this case you may also provide a name in the List Name box, starting with L.

You can always open the List Picker by clicking on the button; you will then get an overview of all the lists you have created.

We can now use our own list in any request. Let's download the recent price history of the constituent stocks. Submitting the request returns the following data:

Note the "NA" entries in column F, which indicates that the prices for Arcelor are not available over this period. You can change this with the "Options…" item in the "Datastream-AFO" menu. For instance, you can change this into "=NA()", such that Excel recognizes this as not available.
In this illustration we have used the Datastream codes to build the list. We can also use the ISIN codes (or SEDOL codes). This is especially convenient as this information is often provided by a third party (e.g. an index provider).

4. More tools

The gold bullion price, for instance, is denoted in USD per troy ounce. Its Datastream mnemonic is GOLDBLN. To convert this into another currency, we enter the following expression in the "Series/List" input box: (GOLDBLN)~E0. So note (1) the brackets around the mnemonic, (2) the tilde, and (3) the currency code (here: E0 for euro[1]). The result is:

This only works when the datatype field is not filled in, i.e. for the default datatype. If you need other datatypes or several datatypes, the exchange rate conversion is done in the datatype input box. Let's download the price index (PI) and return index (RI) for Microsoft (@MSF) in euro. We simply enter the requested information as follows:

Then we click the currency converter button where we select the euro. Then we click the "Apply All" button to convert all selected datatypes – use "Apply Last/Selected" for converting selected datatypes. The input screen now shows:

If you want to enter these conversions manually, remember to add an "X", which is a dummy for the series code, to enter the datatype between brackets, and then the tilde (~) followed by the currency mnemonic.

[1] Alternatively, also "E" can be used. In the next example we will show how you can find the currency mnemonics.
To get an overview of these codes, simply click the “…” button, which opens the following screen for “Market”: As always, in the right-hand side panel you will find more information about the se- ries (Datastream code, source, start date, currency, …). You can sort the series on each of these fields to retrieve information more efficiently using the arrow signs next to the column headings. In addition, you can restrict the selection of seriesby using the filter on top of the screen. • To end this short discussion, notice that you can combine criteria using both “AND” and “OR” Boolean operators. This is set using the buttons at the right of the Navigator screen: The datatype UP is the Unadjusted Price, i.e. the closing price as it was histori- cally determined on the stock exchange. In contrast, P is the Adjusted Price and is the default datatype (if you don’t specify the datatype, Datastream will return P). The most recent price will also reflect the price obtained on the stock exchange – see for instance the data on 12 June 2007: there is no difference between P and UP. However, the stored historical price data are occasionally recomputed by Data- stream to take into account capital operations. As an example, compare P and UP on 8 June 2007. UP is exactly twice P. This is because Deutsche Börse underwent a 2 for 1 stock split: every shareholder received an additional share for each share she owned. This can be checked using the NOSH datatype, the number of shares in thousands. We notice an increase from 100 million to 200 million shares. Of course, such a stock split doesn’t change anything for the company, so the share price would drop by 50%. This is approximately what we see in the UP column: a drop It is often useful that the Navigator remembers your last search criteria. Of course, from € 166.75 to €84.40. 
In order to make historical data comparable, Datastream when you want to start with an entirely new request, it is recommendable to click adjusts the previous prices, in this case by dividing UP by 2. This adjustment is re- the “Reset All Criteria” option to be sure that no previous search criterion is still ac- flected in the Adjustment Factor (AF). It indicates by how much you have to mul- tive. tiply UP to obtain P. Note that each time there is a capital operation, both AF and P will be recalculated. The datatype AX is the Nonaccumulated Adjustment Fac- More help… tor. It indicates at which date the adjustment has been made, and by how much. It This note only provides the most basic skills needed to work with Datastream. Many is non available at dates without adjustment. more facilities are available, such as making charts, making reports, set up batch The stock return can now be computed in two equivalent ways (assuming no divi- requests (which allows you to download overnight, for instance), etc. Additional dends have been paid out): help and manuals are available on the Thomson Datastream Extranet,, available through the Web browser UPt +1 ⋅ AFt +1 P Rt = − 1 = t +1 − 1. item on the Datastream-AFO menu. In any case, in the end there is only one way UPt ⋅ AFt Pt An alternative way to compute returns is by using the Price Index (PI) or the Re- 6. References turn index (RI). Both are set equal at 100 at the equity’s base date, i.e. the first date for which price data are available in Datastream. The price index grows at the Datastream is increasingly used as a source for empirical research in finance, more capital growth rate, i.e. the return net of dividends. It is also called the capital ap- specifically for countries that are not covered by high-quality databases such as preciation index. In contrast, the return index, or total return index, grows at the CRSP. This raises the issue of the quality of Datastream data. 
total return (inclusive of dividends) rate. As no dividends have been paid out in June 2007, both series grow by the same rate.

Ince & Porter (2006) warn researchers about potential flaws in the Datastream coverage. They compare the US coverage to that of CRSP over the period 1975 and 2002 and find that certainly in the earlier years a large number of equities are missing in Datastream. For instance, in 1975 only 20% of common equity issues found in the CRSP database is also included in Datastream. The problem is more important for small caps (bottom 20% of stocks measured by market cap). In addition, Datastream suffers to some extent from survivorship bias. The authors suggest a number of screens that alleviate the problems somewhat. Some of these screens are easy to implement (e.g. multiplying the return index when rounding errors become important, or deleting returns higher than 300% when they are reversed next month), but others are much more labour-intensive. See the paper for more details.

Ince, O.S. & R.B. Porter, 2006, "Individual equity return data from Thomson Datastream: handle with care!," Journal of Financial Research 29(4), 463-479.

To see the effect of dividend payments, see the data on 14 May 2007. In the column DT (Dividend Type) you see the entry (YR). This datatype has an entry whenever dividends are paid out, in which case it indicates which kind of dividend is concerned. Here, YR stands for yearly dividend. Other possibilities are:

CMB   Combination
CPG   Capital Gains
CPL   Long Term Capital Gains
CPS   Short Term Capital Gains
CPU   Undefined Capital Gains
FIN   Final
HYR   Half yearly
INT   Interim
MTH   Monthly
QTR   Quarterly
RST   Restricted dividend, not payable to all shareholders
SPL   Extraordinary payment
UND   Undefined
YR    Yearly

The next column, XDDE, is the ex-dividend date of the dividend, i.e. the day on which the shares are traded without the right on this dividend payment. This is the day on which the share price will drop by the dividend.
The actual dividend paid is indicated by the UDD (Unadjusted Dividend) datatype. Here, €3.4 per share was paid out. The datatype DD adjusts the dividend payment, to make it comparable to the adjusted price (P). So, the same adjustment factor (AF) is used to convert UDD into DD. Note that DD will be recomputed any time the AF changes.

The total return can be computed using any of the following expressions:

R_t = \frac{(UP_{t+1} + UDD_{t+1}) \cdot AF_{t+1}}{UP_t \cdot AF_t} - 1 = \frac{P_{t+1} + DD_{t+1}}{P_t} - 1 = \frac{RI_{t+1}}{RI_t} - 1.

Note that on 14 May 2007 the growth rate of RI is higher than for PI as the former takes into account the dividend payment. It is important to realize that the dividend payment does not necessarily occur at the ex-dividend day. Indeed, often the dividends are paid out days or even weeks later. In that case, DD and UDD will be entered on the dividend payment date, whereas the dividends are included in the return on the ex-dividend date.
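As a numerical check of the dividend-free return formula (my own arithmetic, using the Deutsche Börse split figures quoted earlier and the adjustment factors implied by the 2-for-1 split: AF_t = 0.5 before the split and AF_{t+1} = 1 after it):

```latex
R_t = \frac{UP_{t+1}\cdot AF_{t+1}}{UP_t\cdot AF_t} - 1
    = \frac{84.40 \times 1}{166.75 \times 0.5} - 1
    = \frac{84.40}{83.375} - 1 \approx 1.23\%
```

i.e. the split itself contributes nothing to the return; only the small genuine price move across the split date remains.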
https://www.scribd.com/document/55010616/Basic-Instructions
import "golang.org/x/debug"

Package debug provides the portable interface to a program being debugged.

type Array struct {
	ElementTypeID uint64
	Address       uint64
	Length        uint64 // Number of elements in the array
	StrideBits    uint64 // Number of bits between array entries
}

Array is a Value representing an array.

Element returns a Var referring to the given element of the array.

Len returns the number of elements in the array.

type Channel struct {
	ElementTypeID uint64
	Address       uint64 // Location of the channel struct in memory.
	Buffer        uint64 // Location of the buffer; zero for nil channels.
	Length        uint64 // Number of elements stored in the channel buffer.
	Capacity      uint64 // Capacity of the buffer; zero for unbuffered channels.
	Stride        uint64 // Number of bytes between buffer entries.
	BufferStart   uint64 // Index in the buffer of the element at the head of the queue.
}

Channel is a Value representing a channel.

Element returns a Var referring to the given element of the channel's queue. If the channel is unbuffered, nil, or if the index is too large, returns a Var with Address == 0.

The File interface provides access to file-like resources in the program. It implements only ReaderAt and WriterAt, not Reader and Writer, because random access is a far more common pattern for things like symbol tables, and because the enormous address space of virtual memory makes routines like io.Copy dangerous.

type Frame struct {
	// PC is the hardware program counter.
	PC uint64
	// SP is the hardware stack pointer.
	SP uint64
	// File and Line are the source code location of the PC.
	File string
	Line uint64
	// Function is the name of this frame's function.
	Function string
	// FunctionStart is the starting PC of the function.
	FunctionStart uint64
	// Params contains the function's parameters.
	Params []Param
	// Vars contains the function's local variables.
	Vars []LocalVar
}

Func is a Value representing a func.
type Goroutine struct {
	ID           int64
	Status       GoroutineStatus
	StatusString string // A human-readable string explaining the status in more detail.
	Function     string // Name of the goroutine function.
	Caller       string // Name of the function that created this goroutine.
	StackFrames  []Frame
}

const (
	Running GoroutineStatus = iota
	Queued
	Blocked
)

func (g GoroutineStatus) String() string

Interface is a Value representing an interface.

LocalVar is a local variable of a function.

Map is a Value representing a map.

Param is a parameter of a function.

type Pointer struct {
	TypeID  uint64 // A type identifier, opaque to the user.
	Address uint64 // The address of the variable.
}

Pointer is a Value representing a pointer. Note that the TypeID field will be the type of the variable being pointed to, not the type of this pointer.

type Program interface {
	// Open opens a virtual file associated with the process.
	// Names are things like "text", "mem", "fd/2".
	// Mode is one of "r", "w", "rw".
	// Return values are open File and error.
	// When the target binary is re-run, open files are
	// automatically updated to refer to the corresponding
	// file in the new process.
	Open(name string, mode string) (File, error)

	// Run abandons the current running process, if any,
	// and execs a new instance of the target binary file
	// (which may have changed underfoot).
	// Breakpoints and open files are re-established.
	// The call hangs until the program stops executing,
	// at which point it returns the program status.
	// args contains the command-line arguments for the process.
	Run(args ...string) (Status, error)

	// Stop stops execution of the current process but
	// does not kill it.
	Stop() (Status, error)

	// Resume resumes execution of a stopped process.
	// The call hangs until the program stops executing,
	// at which point it returns the program status.
	Resume() (Status, error)

	// Kill kills the current process.
	Kill() (Status, error)

	// Breakpoint sets a breakpoint at the specified address.
	Breakpoint(address uint64) (PCs []uint64, err error)

	// BreakpointAtFunction sets a breakpoint at the start of the specified function.
	BreakpointAtFunction(name string) (PCs []uint64, err error)

	// BreakpointAtLine sets a breakpoint at the specified source line.
	BreakpointAtLine(file string, line uint64) (PCs []uint64, err error)

	// DeleteBreakpoints removes the breakpoints at the specified addresses.
	// Addresses where no breakpoint is set are ignored.
	DeleteBreakpoints(pcs []uint64) error

	// Eval evaluates the expression (typically an address) and returns
	// its string representation(s). Multivalued expressions such as
	// matches for regular expressions return multiple values.
	// TODO: change this to multiple functions with more specific names.
	// Syntax:
	//   re:regexp
	//       Returns a list of symbol names that match the expression
	//   addr:symbol
	//       Returns a one-element list holding the hexadecimal
	//       ("0x1234") value of the address of the symbol
	//   val:symbol
	//       Returns a one-element list holding the formatted
	//       value of the symbol
	//   0x1234, 01234, 467
	//       Returns a one-element list holding the name of the
	//       symbol ("main.foo") at that address (hex, octal, decimal).
	Eval(expr string) ([]string, error)

	// Evaluate evaluates an expression. Accepts a subset of Go expression syntax:
	// basic literals, identifiers, parenthesized expressions, and most operators.
	// Only the len function call is available.
	//
	// The expression can refer to local variables and function parameters of the
	// function where the program is stopped.
	//
	// On success, the type of the value returned will be one of:
	// int8, int16, int32, int64, uint8, uint16, uint32, uint64, float32, float64,
	// complex64, complex128, bool, Pointer, Array, Slice, String, Map, Struct,
	// Channel, Func, or Interface.
	Evaluate(e string) (Value, error)

	// Frames returns up to count stack frames from where the program
	// is currently stopped.
	Frames(count int) ([]Frame, error)

	// VarByName returns a Var referring to a global variable with the given name.
	// TODO: local variables
	VarByName(name string) (Var, error)

	// Value gets the value of a variable by reading the program's memory.
	Value(v Var) (Value, error)

	// MapElement returns Vars for the key and value of a map element specified by
	// a 0-based index.
	MapElement(m Map, index uint64) (Var, Var, error)

	// Goroutines gets the current goroutines.
	Goroutines() ([]*Goroutine, error)
}

Program is the interface to a (possibly remote) program being debugged. The process (if any) and text file associated with it may change during the session, but many resources are associated with the Program rather than process or text file so they persist across debugging runs.

Slice is a Value representing a slice.

type String struct {
	// Length contains the length of the remote string, in bytes.
	Length uint64
	// String contains the string itself; it may be truncated to fewer bytes than the value of the Length field.
	String string
}

String is a Value representing a string. TODO: a method to access more of a truncated string.

type Struct struct {
	Fields []StructField
}

Struct is a Value representing a struct.

StructField represents a field in a struct object.

A value read from a remote program.

type Var struct {
	TypeID  uint64 // A type identifier, opaque to the user.
	Address uint64 // The address of the variable.
}

A reference to a variable in a program. TODO: handle variables stored in registers.

Package debug imports 3 packages and is imported by 8 packages. Updated 2017-10-10.
https://godoc.org/golang.org/x/debug
Tracing

#include <trace.h>

Summary

Functions

ATrace_beginAsyncSection

void ATrace_beginAsyncSection(
    const char *sectionName,
    int32_t cookie
)

Writes a trace message to indicate that a given section of code has begun. Must be followed by a call to ATrace_endAsyncSection with the same methodName and cookie. Unlike ATrace_beginSection and ATrace_endSection, asynchronous events do not need to be nested. The name and cookie used to begin an event must be used to end it.

Available since API level 29.

ATrace_beginSection

void ATrace_beginSection(
    const char *sectionName
)

Writes a tracing message to indicate that the given section of code has begun. This call must be followed by a corresponding call to ATrace_endSection on the same thread.

Note: At this time the vertical bar character '|' and newline character '\n' are used internally by the tracing mechanism. If sectionName contains these characters they will be replaced with a space character in the trace.

Available since API level 23.

ATrace_endAsyncSection

void ATrace_endAsyncSection(
    const char *sectionName,
    int32_t cookie
)

Writes a trace message to indicate that the current method has ended. Must be called exactly once for each call to ATrace_beginAsyncSection using the same name and cookie.

Available since API level 29.

ATrace_endSection

void ATrace_endSection()

Writes a tracing message to indicate that a given section of code has ended. This call must be preceded by a corresponding call to ATrace_beginSection on the same thread. Calling this method will mark the end of the most recently begun section of code, so care must be taken to ensure that ATrace_beginSection/ATrace_endSection pairs are properly nested and called from the same thread.

Available since API level 23.

ATrace_isEnabled

bool ATrace_isEnabled()

Returns true if tracing is enabled. Use this to avoid expensive computation only necessary when tracing is enabled.

Available since API level 23.
ATrace_setCounter void ATrace_setCounter( const char *counterName, int64_t counterValue ) Writes trace message to indicate the value of a given counter. Available since API level 29.
https://developer.android.com/ndk/reference/group/tracing?hl=he
Vaclav_Sal wrote: I still do not understand what the sizeof(iRecordIndex) is telling me.

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <string.h>

int main(void)
{
    char str[256] = "32 65 73 7 73";
    int *pn = NULL, n = 0;
    char *_tsn = strtok(str, " ");
    while (_tsn != NULL)
    {
        int *tmp = new int[n + 1];
        if (pn != NULL)
            memcpy((void *)tmp, pn, sizeof(int) * n);
        tmp[n++] = atoi(_tsn);
        if (tmp != NULL)
            pn = tmp;
        _tsn = strtok(NULL, " ");
    }
    for (int i = 0; i < n; i++)
        printf("%d ", pn[i]);
    printf("\n");
    _getch();
    return 0;
}

432935 wrote: I ran the Activating a Window program
https://www.codeproject.com/Forums/1647/C-Cplusplus-MFC.aspx?df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True&select=5091319&fr=7451&fid=1647
11 April 2011

This article assumes a basic familiarity with Flex.

Flex is ready to take a big leap forward with its next release, the Flex 4.5 SDK. Flex 4.5 SDK delivers many exciting new components and capabilities, along with integrated support in Flash Builder 4.5 and Flash Catalyst CS 5.5. With the Adobe Flex 4.5 SDK release, we are introducing three major initiatives: This article aims to introduce you to the variety of new features available in the Flex 4.5 SDK release and provide you with additional resources and documentation such that you can start building applications leveraging the Flex 4.5 framework.

Flex development takes a big leap forward as the Flex 4.5 release introduces multiscreen application development directly within the core Flex framework. Using the Adobe Flex 4.5 SDK, you can leverage your existing Flex knowledge and skills to develop mobile applications by utilizing mobile-optimized components (which are based on Spark) and application-level mobile constructs which encapsulate common mobile design patterns. To learn more about the new mobile capabilities added to the Flex framework as well as new mobile workflows added to Flash Builder, please read NJ's article Mobile development using Flex 4.5 SDK and Flash Builder 4.5.

The Flex 4.5 release adds popular Spark components that did not make it into the Flex 4 release. The new Spark components added in Flex 4.5 aim to alleviate the most common scenarios which caused mixing of MX and Spark components. The list of new Spark components includes: Spark DataGrid, Form, Image, Module, Busy Indicator, SkinnablePopUpContainer, Date/Time, Number and Currency Formatters as well as Number and Currency Validators. Below we discuss these new components and their capabilities in more detail. One of the most exciting new Spark components is the Spark DataGrid (see Figure 1).
The Spark DataGrid has two main goals. The salient features of the Spark DataGrid include:

- a new skinning contract which allows the DataGrid's subparts, including cells and column headers, to be customized declaratively via the DataGrid skin file
- full support for a dynamically changing data provider, along with fixed and variable row sizes
- advanced communication between the DataGrid column headers and the table area, including column sorting, formatting of data values and column resizing
- full selection support

Like the MX DataGrid, single and multiple row selection is enabled (both contiguous and non-contiguous). Additionally, the Spark DataGrid allows users to select individual cells or multiple cells as part of the default selection behavior. The Spark DataGrid also allows for basic user navigation through the keyboard or mouse and is fully accessible. Some of the advanced features of the Spark DataGrid include full support for cell editing as well as performance optimizations such that the Spark DataGrid exceeds the MX DataGrid with respect to startup time, horizontal scrolling and vertical scrolling. To learn more details about the new DataGrid and all its facilities, please refer to the Spark DataGrid specification. The accessible nature of the Spark DataGrid is covered in a separate specification.
Spark Form ships with two layouts: the default layout lays form items out horizontally (see Figure 2), and the second is a stacked layout where form items are arranged vertically (see Figure 3). Both layouts support form columns being defined with pixel sizes as well as percentage sizes. Spark Form provides configuration options which can be used to align form item content along a common baseline, to mark form items as required and to list form items sequentially. All of these configurations are customizable via the Form's skinning contract. Thus the state of the Form when a required form item has been omitted, or when a form item is in an invalid state, can be customized declaratively by modifying the Form or FormItem's skin. To learn more about the Form container and its APIs, please refer to the Spark Form feature specification on the Flex open source site.

Flex 4 introduced the BitmapImage graphic element. This is a lightweight, non-skinnable UI element that can be used to display image content. In Flex 4.5 we improved the BitmapImage graphic element and added a new Spark Image component. BitmapImage has been improved to allow for the loading and presenting of remote images that are both untrusted and trusted (the normal Flash Player security restrictions apply when loading untrusted assets). Additionally, BitmapImage introduces new scaling support. The newly introduced 'scaleMode' property can be configured to stretch the image to fill the content area or to display in letterbox mode. Letterbox mode allows the image content to be displayed with the same aspect ratio as the original, unscaled image. BitmapImage also adds a new 'smoothingQuality' property which can be used to configure the image smoothing algorithm used when the 'smooth' property is enabled. By default, the image is scaled at the quality of the stage.
When 'smoothingQuality' is set to high, a multi-step scale algorithm is used, resulting in a much higher quality display than what one would obtain with the default stage quality. This 'smoothingQuality' option is useful for high quality thumbnail presentation. And finally, BitmapImage in Flex 4.5 introduces a content cache which can be configured to support the caching and queuing of remote image assets. This type of cache is convenient for presenting an image without flickering in scenarios where images are quickly being shown and hidden, like when image thumbnails are being scrolled in a List component. A content cache can be associated with a BitmapImage instance and configured to manage the cache size and to control the invalidation and storage behavior of the cache. The configuration options can also be modified to queue the loading of images in prioritized order. The addition of the content cache and queuing mechanism allows for much better perceived performance of image assets in applications built with Flex 4.5.

The new Spark Image skinnable component is built atop the improved BitmapImage element. As such, all the improvements of the BitmapImage element, like the scaling, smoothing, caching, and queuing mechanisms, are available on the Image component as well. The Spark Image's skinning contract allows for customization of the presentation of the image asset as it is being loaded, when it has finished loading, when it is invalid, or when the asset is not found and the image is in a broken state (see Figure 4). You can read more about the BitmapImage enhancements and the new Spark Image skinnable component by reading the feature specification here.

Flash Player 10.1 introduced a set of new globalization APIs that provide locale-specific formatting of dates, times, numbers, and currencies. Building on these APIs, the Flex 4.5 release adds a set of new formatters to the Spark namespace.
These formatters will format data based on the locale defined by the operating system, thus natively providing locale-specific behavior to application content. The functionality provided by the Flash Player 10.1 APIs is driven by the specification of a locale as defined by the operating system. The Flex 4.5 release provides three formatters that leverage the locale information to format correctly: a CurrencyFormatter, a NumberFormatter and a DateTimeFormatter (see Figure 5). You can read the Spark Formatters specification on the Flex 4.5 open source site to learn more about the properties, methods and events used by the newly introduced formatters. Additionally, new Sort and SortField classes have been added to provide locale-specific sorting behavior. Under the hood, the new Sort classes take advantage of Flash Player 10.1's locale-specific string comparison, number and currency parsing, and uppercase and lowercase string conversion to handle character and number sorting according to language rules as defined by the locale. To learn more about the sorting and collation capabilities newly introduced in Flex 4.5, refer to this specification. In addition to the new Spark formatters, Number and Currency validators have been added which utilize the Flash Player 10.1 globalization APIs. The new NumberValidator and CurrencyValidator classes now validate according to the locale defined by the operating system. This added behavior is beneficial for multiple reasons, including the validation of negative and positive number formats, acceptance of non-European digits, and the fact that operating system updates for new locales, or changes to locales, get automatically integrated into the application. For more information on the new Spark validators, reference this specification. Apart from the Spark components listed above, additional new components and capabilities were added in Flex 4.5.
This includes the ability to specify textual prompts in Spark TextInput, TextArea and ComboBox controls for use in both mobile and desktop applications. This mini-specification covers all the details around text prompts in Spark. Additionally, a new Busy Indicator component was added for use in mobile applications, where it can be used to give a visual indication that an application is in the middle of an operation like a network call or long-running calculation. To read more about the Busy Indicator, reference this specification. Two new skinnable components worth mentioning are the Spark Module and Spark SkinnablePopUpContainer controls. The Spark Module, often used with the new Spark ModuleLoader control, is a skinnable container for creating a module. Modules are often used when creating navigator views or for bringing in separate UI modules within a single application. To learn more about the Spark Module and ModuleLoader components, read the specification here. The Spark SkinnablePopUpContainer is a new skinnable control which can be used to customize the pop-up animation, tear-down and data presentation of a pop-up window, like an alert or dialog control, for use in mobile and non-mobile applications. States are used to govern when the pop-up has been opened or closed, and the skin can visually update based on the state change. For information on how to declare and skin a SkinnablePopUpContainer, check out the reference specification. The Flex 4 framework integrated the Open Source Media Framework (OSMF) as the base component for the Spark VideoPlayer component. Additionally, Flex 4 integrated the Text Layout Framework (TLF) as the base text library utilized in all of the Spark text components.

OSMF and Flex 4.5 SDK

The OSMF library in Flex 4.5 has been upgraded to support OSMF 1.0. The OSMF 1.0 media player is in use in the Flex 4.5 Spark VideoPlayer component.
The integration of OSMF 1.0 provides some critical bug fixes and adds support for HTTP streaming. This means that when developers and designers create video assets that support HTTP streaming, the Spark VideoPlayer component will be able to render those types of streams. Taking advantage of this is as simple as setting the VideoPlayer's 'source' property to a URL that supports HTTP streaming. You can learn more about OSMF 1.0's inclusion in Flex 4.5 by reading this specification.

TLF and Flex 4.5 SDK

The Flex 4.5 release includes support for the next version of the TLF library, TLF 2.0. TLF is the base text engine used by all Spark text components, including TextInput, TextArea, RichText, and RichEditableText. The next release of TLF focuses on improving the performance of text in Flex applications as well as adding some new features like floats and bulleted and numbered lists. It's important to note that TLF 2.0 incorporates key performance fixes, centered around Spark text controls displaying, scrolling, and interacting with large amounts of text. For more information about TLF 2.0 integration into Flex 4.5, please refer to this specification.

Continuing to improve the Flex compiler is a big part of every release. In the Flex 4.5 timeframe, we focused on three major improvements: The Flex 4.5 release shows improvements in all three areas. With recent optimizations, midsize and large projects will see up to a 20% reduction in overall memory consumption during a full compilation and up to a 20% reduction with full and incremental compilation builds.

RSL Improvements

RSLs (runtime shared libraries) package the Flex framework into libraries that are linked and loaded during application startup.
Flex 4 turned Flex framework RSLs on by default, meaning that the Flex compiler linked framework RSLs for use by Flex applications. The Flex 4.5 release adds some very exciting improvements to the RSL infrastructure in Flex. With the Flex 4.5 compiler, only RSLs that have true dependencies on the application code will be linked into your application. For example, this means that applications not using OSMF will not incur the cost of linking and loading the OSMF RSL. Additionally, pure-Spark or pure-MX projects are ensured to only link in the components and architectural pieces needed for that particular type of project. The enhancements to the Flex compiler and its linkage of RSLs are captured in this specification. Additionally, in Flex 4.5, modules and their RSL linkage logic have been improved. Now, modules will understand when their parent application or a sibling module has already loaded in RSLs that they depend on. In situations like this, the module will avoid re-linking and loading in the required RSL. There are compiler configuration options which the developer can use to force-link in certain modules if they do not want the compiler to introspect and figure out the dependencies itself. As such, Flex 4.5 introduces the ability for an application, with the help of its sub-applications and modules, to load only the set of RSLs that are needed instead of the main application preloading them all. To learn more about this feature, check out this specification. With the addition of new mobile framework functionality as well as enhancements to the core framework, Flex has matured to include all of the pieces necessary to develop expressive applications for the web, desktop or mobile devices. We are excited for customers, both new and existing, to try Flex 4.5 and give the new features a whirl!
https://www.adobe.com/devnet/flex/articles/introducing-flex45sdk.html
On 1/18/06, Miguel A. Figueroa-Villanueva <miguelf@...> wrote:
> If this is not an appropriate way to submit code for VXL, please let me
> know the preferred way to submit.

Miguel, maybe one of the VXL administrators can give you developer access so you can check code in using CVS.

> BTW, vidl2 is intended as a level-2 library, right? In other words, the
> restrictions stated in the VXL Book 3.3 don't apply, right?

Yes, vidl2 is intended to become a level-2 library. My interpretation of section 3.3 is that RTTI and exceptions are allowed, but not in level-1 core libraries. The rest of the items are not allowed in any level core library, and maybe not in contrib either. Someone correct me if I'm wrong about that. That said, maybe this list needs to be reevaluated. Are we still supporting all of those old compilers?

> I am using:
>
> - member templates (not supported by SGI CC 7.2.x).
> - partial template specializations (not supported by SGI CC 7.2.x).
> - non-type parameters in function templates (not supported by SunPro 5.0).

Your code is very nice and makes good use of generic programming. Unfortunately, I think this use of templates is a bit too extreme for some of the aging compilers that VXL supports. The good news is most of what you want can be done in VXL in an uglier, non-generic way. For example, vgui uses a similar sort of factory for tool-kits and uses the singleton class model for gui managers. You've already had some discussion about exceptions. The only thing I haven't seen in VXL is threads. It would be really nice to have a cross-platform multi-threading library in VXL, but that's a whole other project.

Another issue here is namespaces. VXL was designed to avoid namespaces because: "few compilers support namespaces well, and their implications in large-scale development are as yet poorly understood" (VXL Book 1.4.2). I have rarely seen namespaces used in VXL, but I haven't seen anything prohibiting them either.
Are namespaces allowed in VXL? --Matt
http://sourceforge.net/p/vxl/mailman/message/6722945/
I have to do this problem where I use a sentinel control loop to repeatedly get the lengths of the base and height of a triangle from a user, call the method, and then display the values of the three sides of the triangle. This all has to take place inside a sentinel control loop where the user has to enter a -1 to terminate the loop. My problem is that after I get the base and height and do the calculations, the loop will not terminate when -1 is entered only one time. Even though I have an or conditional in my while loop, the program will only terminate if both are -1. What do I have to do to get the loop to terminate as soon as a -1 is entered, but still be able to calculate the hyp using my method theHypotenuseIs?

Code (Java):

import java.util.*;

public class Hypotenuse
{
    public static double theHypotenuseIs(double base, double height)
    {
        double hyp;
        hyp = Math.sqrt((Math.pow(base, 2) + Math.pow(base, 2)));
        return hyp;
    }

    public static void main(String[] args)
    {
        final int FLAG = -1;
        double hypotenuse, base, height;
        Scanner input = new Scanner(System.in);
        while (base != FLAG || height != FLAG)
        {
            base = input.nextDouble();
            height = input.nextDouble();
            hypotenuse = theHypotenuseIs(base, height);
            System.out.println("The Base is:" + base);
            System.out.println("The Height is:" + height);
            System.out.println("The Hypotenuse is:" + hypotenuse);
        }
    }
}

--- Update ---

Never mind, I found a workaround. IDK if it's good style, but I added a conditional after each input that will break if the value is -1.
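One common shape for the workaround the poster describes is a read-then-check loop: test each value against the sentinel immediately after reading it and break before doing any work. The sketch below (not the original poster's code) also corrects the formula, which squared base twice instead of base squared plus height squared:

```java
import java.util.Scanner;

public class HypotenuseFixed {
    static final double FLAG = -1;

    // Pythagorean theorem: sqrt(base^2 + height^2).
    public static double theHypotenuseIs(double base, double height) {
        return Math.sqrt(Math.pow(base, 2) + Math.pow(height, 2));
    }

    // Check for the sentinel right after each read, so a single -1
    // terminates the loop immediately.
    public static void run(Scanner input) {
        while (input.hasNextDouble()) {
            double base = input.nextDouble();
            if (base == FLAG) break;
            if (!input.hasNextDouble()) break;
            double height = input.nextDouble();
            if (height == FLAG) break;
            System.out.println("The Base is: " + base);
            System.out.println("The Height is: " + height);
            System.out.println("The Hypotenuse is: " + theHypotenuseIs(base, height));
        }
    }

    public static void main(String[] args) {
        run(new Scanner(System.in));
    }
}
```

For example, run(new Scanner("3 4 -1")) prints one triangle and then stops as soon as the -1 is read.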
http://www.javaprogrammingforums.com/%20loops-control-statements/26056-need-help-sentinel-control-loop-printingthethread.html
Managing State in React With Unstated

As your application becomes more complex, the management of state can become tedious. A component's state is meant to be self-contained, which makes sharing state across multiple components a headache. Redux is usually the go-to library to manage state in React; however, depending on how complex your application is, you might not need Redux. Unstated is an alternative that provides you with the functionality to manage state across multiple components with a Container class and Provider and Subscribe components. Let's see Unstated in action by creating a simple counter and then look at a more advanced to-do application.

Using Unstated to Create a Counter

The code for the counter we're making is available on GitHub: You can add Unstated to your application with Yarn:

yarn add unstated

Container

The container extends Unstated's Container class. It is to be used only for state management. This is where the initial state will be initialized and the call to setState() will happen.

import { Container } from 'unstated'

class CounterContainer extends Container {
  state = {
    count: 0
  }

  increment = () => {
    this.setState({ count: this.state.count + 1 })
  }

  decrement = () => {
    this.setState({ count: this.state.count - 1 })
  }
}

export default CounterContainer

So far, we've defined the container (CounterContainer), set its starting state for count at the number zero and defined methods for adding and subtracting to the component's state in increments and decrements of one. You might be wondering why we haven't imported React at this point. There is no need to import it into the container since we will not be rendering JSX at all. Event emitters will be used in order to call setState() and cause the components to re-render. The components that will make use of this container will have to subscribe to it.

Subscribe

The Subscribe component is used to plug the state into the components that need it.
From here, we will be able to call the increment and decrement methods, which will update the state of the application and cause the subscribed component to re-render with the correct count. These methods will be triggered by a couple of buttons that contain event listeners to add or subtract to the count, respectively.

import React from 'react'
import { Subscribe } from 'unstated'
import CounterContainer from './containers/counter'

const Counter = () => {
  return (
    <Subscribe to={[CounterContainer]}>
      {counterContainer => (
        <div>
          <div>
            {/* The current count value */}
            Count: { counterContainer.state.count }
          </div>
          {/* This button will add to the count */}
          <button onClick={counterContainer.increment}>Increment</button>
          {/* This button will subtract from the count */}
          <button onClick={counterContainer.decrement}>Decrement</button>
        </div>
      )}
    </Subscribe>
  )
}

export default Counter

The Subscribe component is given the CounterContainer in the form of an array to its to prop. This means that the Subscribe component can subscribe to more than one container, and all of the containers are passed to the to prop of the Subscribe component in an array. The counterContainer is a function that receives an instance of each container the Subscribe component subscribes to. With that, we can now access the state and the methods made available in the container.

Provider

We'll make use of the Provider component to store the container instances and allow the children to subscribe to it.

import React, { Component } from 'react';
import { Provider } from 'unstated'
import Counter from './Counter'

class App extends Component {
  render() {
    return (
      <Provider>
        <Counter />
      </Provider>
    );
  }
}

export default App;

With this, the Counter component can make use of our counterContainer. Unstated allows you to make use of all the functionality that React's setState() provides.
For example, if we want to increment the previous state by one three times with one click, we can pass a function to setState() like this:

incrementBy3 = () => {
  this.setState((prevState) => ({ count: prevState.count + 1 }))
  this.setState((prevState) => ({ count: prevState.count + 1 }))
  this.setState((prevState) => ({ count: prevState.count + 1 }))
}

The idea is that setState() still works like it always does, but this time with the ability to keep the state contained in a Container class. It becomes easy to spread the state to only the components that need it.

Let's Make a To-Do Application!

This is a slightly more advanced use of Unstated. Two components will subscribe to the container, which will manage all of the state, and the methods for updating the state. Again, the code is available on GitHub: The container will look like this:

import { Container } from 'unstated'

class TodoContainer extends Container {
  state = {
    todos: [
      'Mess around with unstated',
      'Start dance class'
    ],
    todo: ''
  };

  handleDeleteTodo = (todo) => {
    this.setState({
      todos: this.state.todos.filter(c => c !== todo)
    })
  }

  handleInputChange = (event) => {
    const todo = event.target.value
    this.setState({ todo });
  };

  handleAddTodo = (event) => {
    event.preventDefault()
    this.setState(({todos}) => ({
      todos: todos.concat(this.state.todo)
    }))
    this.setState({ todo: '' });
  }
}

export default TodoContainer

The container has an initial todos state, which is an array with two items in it. To add to-do items, we have a todo state set to an empty string. We're going to need a CreateTodo component that will subscribe to the container. Each time a value is entered, the onChange event will trigger and then fire the handleInputChange() method we have in the container. Clicking the submit button will trigger handleAddTodo(). The handleDeleteTodo() method receives a to-do and filters out the to-do that matches the one passed to it.
import React from 'react'
import { Subscribe } from 'unstated'
import TodoContainer from './containers/todoContainer'

const CreateTodo = () => {
  return (
    <div>
      <Subscribe to={[TodoContainer]}>
        {todos =>
          <div>
            <form onSubmit={todos.handleAddTodo}>
              <input
                type="text"
                value={todos.state.todo}
                onChange={todos.handleInputChange}
              />
              <button>Submit</button>
            </form>
          </div>
        }
      </Subscribe>
    </div>
  );
}

export default CreateTodo

When a new to-do is added, the todos state made available in the container is updated. The list of todos is pulled from the container to the Todos component by subscribing the component to the container.

import React from 'react';
import { Subscribe } from 'unstated';
import TodoContainer from './containers/todoContainer'

const Todos = () => (
  <ul>
    <Subscribe to={[TodoContainer]}>
      {todos =>
        todos.state.todos.map(todo => (
          <li key={todo}>
            {todo}
            <button onClick={() => todos.handleDeleteTodo(todo)}>X</button>
          </li>
        ))
      }
    </Subscribe>
  </ul>
);

export default Todos

This component loops through the array of to-dos available in the container and renders them in a list. Finally, we need to wrap the components that subscribe to the container in a Provider, like we did in the case of the counter. We do this in our App.js file exactly like we did in the counter example:

import React, { Component } from 'react';
import { Provider } from 'unstated'
import CreateTodo from './CreateTodo'
import Todos from './Todos'

class App extends Component {
  render() {
    return (
      <Provider>
        <CreateTodo />
        <Todos />
      </Provider>
    );
  }
}

export default App;

Wrapping Up

There are different ways of managing state in React depending on the complexity of your application, and Unstated is a handy library that can make it easier. It's worth reiterating the point that Redux, while awesome, is not always the best tool for the job, even though we often grab for it in these types of cases. Hopefully you now feel like you have a new tool in your belt.
The post Managing State in React With Unstated appeared first on CSS-Tricks.
http://design-lance.com/managing-state-in-react-with-unstated/
I have a published Python scripted parameter that is supposed to return the current date and time. The parameter is later used in the dataset field in a FeatureWriter at the end of the workspace. Testing the workspace in FME 2017.1 is fine, but running the workspace in 2018.1 doesn't work - FME.exe stops immediately after starting the translation.

Here's the script used by the parameter:

from datetime import datetime
return datetime.now().strftime('%Y-%m-%d_%I%M%p')

I have the preferred Python interpreter set to Esri ArcGIS Desktop Python (2.7). I'm very new to Python, so apologies if I'm missing something obvious. Thanks!

What happens if you set the Python interpreter (in the Workspace navigator) to the regular Python 2.7 and not the one supplied by ArcGIS?
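The strftime expression itself can be sanity-checked outside FME. Note that the bare return in the parameter script is FME-specific (FME evaluates the script body specially); in plain Python you need to wrap it in a function, as in this sketch:

```python
from datetime import datetime


def timestamp_param():
    # Same expression as the scripted parameter's body:
    # e.g. 2018-07-15_0342PM
    return datetime.now().strftime('%Y-%m-%d_%I%M%p')


if __name__ == '__main__':
    print(timestamp_param())
```

If this runs cleanly, the format string is fine and the problem lies with the interpreter/environment FME is loading, not the expression.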
https://knowledge.safe.com/questions/92630/simple-python-scripted-parameter-not-working-after.html
Details

- Type: New Feature
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.0.0
- Component/s: Java, JavaScript, PHP, Website
- Labels: None
- Environment: Should be applicable to any platform Shindig will run on

Activity

Jonathan, one question for you. We make heavy use of Dojo, and it's my understanding that Caja and Dojo don't always play nicely together. I think that's one reason we are exploring the idea of having a "Caja-less" implementation. Do you have any thoughts/ideas on how we can confirm this? I believe the problem was with Dojo's use of eval. Thanks!

My understanding of Caja is that it is more of a JavaScript scrubbing to prevent malware. Some security concerns, such as phishing and cross-site scripting, are best addressed using Caja. Caja's virtual iframe concept is about rewriting your JavaScript to remove security risks. I am no expert on Caja, but I think its purpose is to sandbox gadgets, addressing client-side security concerns; further, if you have complex code, it can make it difficult to debug. The inlining feature, on the other hand, is all about trusted code: users don't want to take a hit loading duplicate resources in every iframe, and the same gadget that works in an iframe should work on a page without iframes. So the way we are thinking is to have a simple API call similar to shindig.container.renderGadget - there will be a shindig.container.renderGadgetInline that loads the gadget HTML into the page DOM using Ajax, instead of an iframe src reference. There are other issues that we need to resolve with this approach, and we are currently working on them.

@Mark Yes, Caja doesn't allow EVAL. However, that pertains to the "Cajoled" code itself. I'm not sure if/how it will affect the container code. What I'm thinking is that the Caja JavaScript runtime would run as part of the container. It would be up to the gadget developer to worry about including EVAL into their code.
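The inline-rendering flow proposed above can be sketched roughly as follows. This is only an illustration of the idea, not Shindig code: the function name echoes the proposed renderGadgetInline API, and the fetch step is abstracted into a callback so the difference from an iframe src reference is visible.

```javascript
// Hypothetical sketch of the proposed inline-rendering approach.
// fetchRenderedHtml stands in for whatever Ajax call retrieves the
// server-rendered gadget markup.
function renderGadgetInline(fetchRenderedHtml, gadgetUrl, targetElement) {
  // Instead of pointing an iframe's src at the gadget server,
  // fetch the rendered gadget markup...
  var html = fetchRenderedHtml(gadgetUrl);

  // ...and inject it directly into the page DOM. This only makes sense
  // for trusted gadgets: the gadget code now shares the container's
  // cookies and scripting context.
  targetElement.innerHTML = html;
  return targetElement;
}
```

With a stub fetcher, renderGadgetInline(fetch, url, node) fills node.innerHTML with the gadget markup, which is the behavior the sample container toggles between iframe and inline modes.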
Here's a discussion on debugging Shindig with Caja in an iframe: And another about using JavaScript libraries in Caja:

@Kris Your summary is correct, Caja addresses client-side security. In short, it compiles code and then needs the Caja JavaScript runtime. Inline + Caja would allow semi-trusted or even untrusted code without the downside of iframes that you mentioned. I'd propose a third API, something like shindig.container.renderCajaGadgetInline. It might make the most sense to implement Inline Gadgets support and see how we can make Caja work (instead of trying to develop both simultaneously). I'll reach out to the Caja folks and see if they can chime in about the Dojo/eval issue.

Here are some of the functions that are supported in the patch. The patch is created from a Shindig trunk snapshot taken on 08/24/2010:

- Inline gadget functionality
- API support to render a gadget inline (shindig.gadget.createInlineGadget(..))
- Sample working Horoscope gadget with sample HTML to render
- SampleContainer changes to switch between iframe and inline
- SocialHelloWorld and SocialActivitiesWorld working samples
- Dynamic height working sample
- Namespace fix specific to inline gadgets
- User preferences fix for inline gadgets
- A couple of other fixes related to inline

Here is the code review request URL:

Sorry I missed this CL earlier. Yes, Caja does not support "eval": the compiler is a server-side component and we don't know what the arguments to eval() are at compile time. To the extent that Dojo uses eval (and other means of creating code dynamically, for instance using the Function constructor or document.createElement('script' | 'iframe')), programs that use Dojo won't work cajoled. The input language is converging on code written in ES5-strict, and to the extent that libraries support ES5-strict, they will work with Caja.

I love the work being done to allow inlining. With the current implementation, what is the effect of inlining a gadget that requests cajoling?
For example, given that you expect this API to be used only in cases where the gadget is trusted, can you modify the API to make it explicit that the container author is making themselves vulnerable to gadgets they inline via this API? Gadgets included in this fashion have access to cookies, XSS errors in them affect secrets held in the container as well as in other gadgets on the page, etc. I suggest shindig.gadget.createUnprotectedInlineGadget.

I am thinking we should have Caja working for inline gadgets the same way it does for iframes. The choice is up to the users. If they are concerned about security they can use Caja in conjunction with inline. So as part of next steps, we are making sure all the gadget and social APIs work, then getting Caja working as well.

Here is the updated patch, based on the feedback from Paul Lindner. Please do review and let us know if it's good to go into the trunk. Once you apply the patch, you will notice samplecontainer.html has one more checkbox for toggling the inline option. You can test HelloSocialWorld.xml and HelloActivitiesWorld working inline.

Kris, I started digging through some of the changes in the shindig-container. One thing that I think would help out here is to refactor the inlining code into its own feature, limiting core container changes as much as possible. I started to do some of this yesterday, and although there is an issue with the metadata request, this should give you an idea of how you can start to structure the changes.

Kris, I have not had a chance to look into the server side code yet, but for the container changes, I think it makes sense to move inlining to its own feature in extras. I've attached a patch where I started to do some of that. This current patch has an error related to the get metadata request for inlined gadgets which I have not had a chance to track down yet, but it should give you a start on refactoring.
This is a great addition and I'm trying to verify what features work with it (looking to test RPC and pubsub2). One issue I have noticed is that the generated script element is escaped, resulting in a script src attribute such as ''. Shindig doesn't honor the escaped ampersands, so it doesn't seem possible to get an unminified version of the JavaScript that's inserted into the head element. One more question if I may: requiring pubsub-2 in a gadget rendered inline causes the gadget to fail rendering silently. I'm trying to verify exactly where the issue lies, but it seems that the patch should be able to handle OAA inline gadgets, as there are test cases for that type of gadget. I'm using sample8 as a test, adding a required feature of pubsub-2, but it fails without rendering the inline gadget.

Hi Scott, We had this working earlier; things are constantly changing on the trunk and it's been a challenge for us to keep up. We are trying to refactor a bit and make it available as a feature. I will update another patch with the pubsub2 functionality.

Hi Kris. That would be great - I'm trying the patch file against the latest trunk just to see if that works. The patch file has three locations where it replaces ampersands with '&amp;', which causes the errors in parsing the GET arguments by Shindig (it doesn't respect the debug=1 parameter). If you remove those changes, the URL is correctly rendered, at the cost of properly escaping the ampersands, of course. The file being patched starts at line 4605; it's [SOURCECODE]/java/gadgets/src/main/java/org/apache/shindig/gadgets/render/RenderingGadgetRewriter.java. Then you can pass in debug: true in your gadget object and get unminified JS files.

Hi there, awesome work with the inline feature. One thing I'm curious about is accessing _MODULE_ID_ from a feature. I notice that when placed inside the CDATA this value is replaced with an integer representing the module ID of the gadget, but the inline feature scripts are placed before gadgets.config.init(..)
and _MODULE_ID_ is thus undefined. Is there a way of obtaining the module ID of the gadget from the script of a feature when required on an inline gadget?

You should be able to get the Module ID using: new gadgets.Prefs().getModuleId();

@Mat Thanks for the quick response. That works just fine for iframe gadgets, but for inline gadgets it returns "undefined", at least on my build. Has anyone used new gadgets.Prefs().getModuleId(); in a feature included as part of an inline gadget successfully?

Hello again. A lead dev on our team noticed another issue with inline gadgets - link tags inside inline gadget CDATA blocks are not injected into the HEAD of the document. It just silently fails to write the link element. It seems to be something in the parsing of the URL for the proxied content, but we're not sure. We will try to look a bit more and see if we can offer a patch.

We originally posted the inlining work directly into the existing shindig-container/server side components... After reviewing some of these changes and learning more about the features, we've stepped back and refactored those changes as a feature on the common container. I've added this new patch, which is based on the new common container and its new patch: After applying the patch, you would see the inline gadget and the iframe gadget being rendered on the same page, though they are both helloworld gadgets.

The first problem inline gadget rendering needs to solve is namespacing conflicts. If a gadget declares a unique identifier for some element in the DOM, such as <div id="hello">, and this gadget is rendered inline multiple times on the same page, there is an element id conflict. Our former implementation (in the original patch to support inlining) was based on the iWidget context concept and required the gadget developer to rewrite their gadget with a scope, which generates a unique identifier for each element in an inline gadget, avoiding the namespacing conflict.
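The id collision just described - the same gadget inlined twice producing duplicate element ids in one shared page DOM - can be sketched with a simple server-side rewriter. This is only an illustration of the scoping idea discussed in this thread, not code from the patch; the class name and prefixing scheme are invented for the example.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch (not from the patch): prefix every id attribute in a
// gadget's markup with a per-instance token, so that two inline renderings
// of the same gadget don't collide in the shared page DOM.
public class IdScoper {

    private static final Pattern ID_ATTR = Pattern.compile("id=\"([^\"]+)\"");

    public static String scopeIds(String html, String instancePrefix) {
        Matcher m = ID_ATTR.matcher(html);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // quoteReplacement guards against '$' and '\' in the original id
            m.appendReplacement(out, Matcher.quoteReplacement(
                    "id=\"" + instancePrefix + "_" + m.group(1) + "\""));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String gadget = "<div id=\"hello\">Hello</div>";
        // Two instances of the same gadget end up with distinct ids:
        System.out.println(scopeIds(gadget, "g1")); // <div id="g1_hello">Hello</div>
        System.out.println(scopeIds(gadget, "g2")); // <div id="g2_hello">Hello</div>
    }
}
```

A real implementation would also rewrite CSS selectors and getElementById calls in the gadget's script, which is exactly the rewriting burden the comment above is weighing.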
Gadget developers might be reluctant to accept this, and it would also take effort to rewrite thousands of existing gadgets. So we did not enable this implementation in our new inline patch. But currently we haven't found a better way to solve it. Could someone please review and propose a better way to solve this namespacing problem?

The inline feature impacts some Shindig APIs. The shindig.auth feature does not work for inline rendering:
1. shindig.auth depends on gadgets.util.getUrlParameters, which relies on the page's URL. In the iframe rendering model, that is the gadget's xml->html servlet URL. In the inline rendering model, it is the container page's URL, so the authToken could NOT be retrieved correctly.
2. shindig.auth is an object of class shindig.Auth; it is a global object for one gadget instance in its iframe. In the inline rendering model, all the shindig.auth objects for the gadgets are in one DOM scope. As a result, all gadget instances end up using the same shindig.auth object.
3. For this kind of issue, we need to isolate gadget-instance-related variables and APIs:
3.1. Inject the authToken into the shindig.auth object correctly for each gadget instance.
3.2. Have each gadget use the shindig.auth API under its own gadget instance scope to retrieve/update its authToken.

Submitted a patch for code review here:

Hello, We are also interested in having an "inline way of rendering" gadgets. What is the status / progress on this issue? I see that a patch has been submitted for review, but it has been some time since this issue was last touched. Best regards, Hasan Ceylan

Hello, Are there any plans to get this implemented? We are using Shindig heavily in the enterprise and many developers have asked for an inline rendering option. Does the patch work or is it written against too old of a Shindig version? -Eric

Hi, Eric. We are still actively reworking this patch based on shindig 3.0.
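Stepping back to the shindig.auth isolation points above (3.1/3.2): they amount to keying the security token by gadget instance instead of holding a single global. A hypothetical sketch of that idea follows - the class and method names are invented, and the real shindig.auth object is JavaScript, not Java; this only illustrates the per-instance scoping.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration of points 3.1/3.2 above: one token per gadget
// instance (keyed by module id) instead of a single global auth object
// shared by every inline gadget in the page DOM.
public class InstanceScopedAuth {

    private final Map<String, String> tokensByModuleId = new ConcurrentHashMap<>();

    // 3.1: inject the authToken for a specific gadget instance
    public void injectToken(String moduleId, String authToken) {
        tokensByModuleId.put(moduleId, authToken);
    }

    // 3.2: each gadget retrieves/updates its token under its own instance scope
    public String getToken(String moduleId) {
        return tokensByModuleId.get(moduleId);
    }

    public static void main(String[] args) {
        InstanceScopedAuth auth = new InstanceScopedAuth();
        auth.injectToken("module-1", "token-A");
        auth.injectToken("module-2", "token-B");
        System.out.println(auth.getToken("module-1")); // token-A
        System.out.println(auth.getToken("module-2")); // token-B
    }
}
```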
Quite a lot has changed from the original 2.0.2-based patch and we hope to have a version of the patch that is in an acceptable state for community review soon. Our apologies for the lack of external activity and communication on this work, but we do appreciate the interest from you and the community. Stay tuned and hopefully we'll have something for you shortly. Thanks!

Hi Mike, This is great news. Please let me know if my team and I can start testing for you. We are willing to help out with the implementation as well if you need more resources. Can you comment on how you are going to help protect from JavaScript collisions and address security issues, i.e., Caja? The iframe-only option is becoming a real adoption road block for us, and having another option to render in a div would be HUGE! -Eric

This patch (initial-inline-port.patch) is the initial stab at porting the 2.0.2 patch to the 3.0 codebase. We're no longer using this version due to collisions with other features we experienced (like open-views), but it does contain all the modifications to rpc/osapi/prefs/etc. This should be used as a reference for the types of things that will need to be addressed in the mixin approach.

This file (inline.patch) is the mixin version of the inline patch. It's not complete and we currently see issues with rpc/osapi/prefs. It is much less invasive to the core of Shindig though, and the goal is to contain as much as possible within the inline namespace of the container.

Eric, btw I forgot to mention it earlier, we would love to have you help out with the implementation as resources are slim at the moment. The Caja model is definitely of interest to us for avoiding collisions and addressing security, and any thoughts you have on the matter are more than welcome.

Any news on this issue? Is this effort related to? Can we close this? I don't know if anyone is pushing this forward anymore. Personally, I would really like to see this work continue.
I think an inline option for gadgets is important for the future of any OpenSocial based application. We hope to take this on at some point this year, starting from the patch provided by Michael.

Matt Franklin, we no longer have a need for this at IBM. What are your use cases?

We are starting to see Rave & OpenSocial be used as a modular application development strategy for smaller, targeted apps where the need for low gadget load time outweighs the benefits of the iFrame.

Any update on this issue? For the same reasons mentioned by Matt Franklin, we are hoping that this feature will be added.

I give a +1 vote on this one - in my opinion this is one of the most important issues. It makes integrators invent various hacks/workarounds to get gadgets on a page to load faster while providing a seamless UX, and it is maybe the most common pain point that drives people away from OpenSocial & gadget development. I'm a strong believer and evangelist of the gadget development (pluggable) model, and I can tell you for sure that it's a hard road convincing people about the benefits of fully decoupled software development, as I always get questions regarding iframes and how difficult it is to make them behave well on a single page. At the moment we've reached a point where we can build real-time and responsive dashboard gadget apps, but it took time and effort and a custom layer for orchestrating components - through events mostly. Just a few hours ago I came across a presentation on advanced OpenSocial development, and the author stated boldly: Never, never, never use document.write() calls or bypass the CSS/JS injection mechanism of the container. Well, I don't agree - at all. You have to do whatever it takes to provide a great UX, so this type of statement just comes to show the weak points that need to be revisited. And as Matt Franklin stated, if OpenSocial wants a future, it needs to be up-to-date. Both models need to be supported.
We can look at this for 3.0; does anyone want to step up and drive this?

Ryan Baxter I'd be happy to give a helping hand for anything required [code/test/design/docs] as long as someone from the core team provides some guidelines. The patch submitted by Kris Vishwanathan is outdated and doesn't work with the current version of the code.

Chris Spiliotopoulos thanks for stepping up. I am sure all of the committers would be willing to help along the way.

Ryan Baxter no prob. I'll start by getting a git clone of the repository and applying Kris Vishwanathan's patch to see if I can make it work. Could you walk me through the commit process you currently follow? Send me an email if you wish.

See the Creating and submitting patch section here

Ryan Baxter Thanks - already done. So, any patches should be submitted only through Subversion? I intended to clone the GitHub master repo and trigger pull requests from there.

There is no way for us to merge the pull request in from GitHub. Apache doesn't allow us to do that. You must submit a patch through Subversion.

Ryan Baxter Ok, got it. Thanks. I'll be in touch.

This is great. I'd really like to see this working with Caja. It seems like it would be easier to integrate the Cajita Runtime with a global library (see link:). There's a nice demo of multiple Cajoled widgets running without IFrames: From my understanding, Caja might actually simplify creating global libraries through its "introduction service." However, I realize some implementers may not want to use Caja.
https://issues.apache.org/jira/browse/SHINDIG-1402?focusedCommentId=13116776&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Writing an OSGi Web Application

This article is based on Enterprise OSGi in Action, to be published by Manning. It shows how to write web applications capable of operating in an OSGi framework. Enterprise OSGi web bundles are known as WABs. (In contrast to WARs, which are Web ARchives, WABs are Web Application Bundles.)

Building a simple OSGi web application bundle

Let's give WABs a try. You can use your favorite OSGi or JEE development tools. All you really need for now is the ability to compile Java code and build jars.

WAB layouts

The simplest WAB contains three files. These are a Java servlet class, a jar manifest, and a web deployment descriptor. Figure 1 shows how they're laid out in the WAB. Like WARs, WABs are just a special kind of jar; unlike WARs, the Enterprise OSGi specification does not require a particular file extension. As a result, WAB files may have any extension but typically use .jar or .war.

Figure 1 The layout of the fancyfoods.web jar. All code lives in WEB-INF/classes. The web container looks in WEB-INF/web.xml to find out what servlets are provided by the bundle. Finally, the standard jar manifest, META-INF/MANIFEST.MF, includes important extra metadata for the OSGi container.

Web deployment descriptors

Let's start with the deployment descriptor. Listing 1 shows the web.xml file for the web application. It's a typical web.xml file whose syntax will be reassuringly familiar to everyone who has developed JEE web applications. The web application has one servlet, whose class is fancyfoods.web.SayHello.

Listing 1 The WEB-INF/web.xml file

<web-app>
  <servlet>
    <servlet-name>SayHello</servlet-name>                  #1
    <servlet-class>fancyfoods.web.SayHello</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>SayHello</servlet-name>                  #2
    <url-pattern>/SayHello</url-pattern>
  </servlet-mapping>
</web-app>

#1 A servlet with backing class SayHello
#2 The URL is SayHello

A simple servlet

The servlet class SayHello is also exactly the same as it would be in a WAR. Listing 2 shows the source.
There’s one method, which, unsurprisingly, issues a greeting to a user. Listing 2 The SayHello.java file package fancyfoods.web; import java.io.IOException; import java.io.PrintWriter; import javax.servlet.ServletException; import javax.servlet.http.*; public class SayHello extends HttpServlet { protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { PrintWriter writer = response.getWriter(); #1 writer.append("Hello valued customer!"); } } #1 Write to the response's PrintWriter So far, so familiar. It’s perhaps a bit anti-climactic that writing a WAB is so similar to writing a WAR in some respects, but this is one of the strengths of the enterprise OSGi programming model it’s like existing programming models, only better. The differences between WABs and WARs start to become obvious when we look at the manifest file. A WAB manifest The final file needed in your WAB is the bundle manifest. Every jar has a MANIFEST.MF file, but an OSGi bundle’s manifest has extra headers, such as the symbolic name of the bundle and the bundle’s version. Listing 3 The MANIFEST.MF file Manifest-Version: 1.0 Bundle-ManifestVersion: 2 Bundle-SymbolicName: fancyfoods.web Bundle-Version: 1.0.0 Bundle-ClassPath: WEB-INF/classes Web-ContextPath: /fancyfoods.web Import-Package: javax.servlet.http;version="2.5", javax.servlet;version="2.5" To be consistent with the layout of a WAR, the class files for fancyfoods.web have been packaged in the WEB-INF/classes folder. However, there’s actually no need for this. Classes can live anywhere in an OSGi bundle or even be spread across multiple locations. If the class files are not directly in the root directory, the classpath needs to be specified in the manifest: Bundle-ClassPath: WEB-INF/classes Packages used by the servlet that aren’t in the servlet’s own bundle must be explicitly imported in the manifest. Otherwise, they won’t be visible. 
The exception is that there is no need to import the core Java language classes, java.*, which are implicitly imported. Bundle-wiring rules are a bit different for the java.* packages, which must come from the core Java runtime for security and for compatibility with the virtual machine. Imagine if someone could replace the implementation of String or Integer! In the case of the web bundle, this means the javax.servlet and javax.servlet.http packages are imported. The servlet is expected to work with any version of javax.servlet from version 2.5 or higher, up to but not including version 3.0:

Import-Package: javax.servlet.http;version="2.5",
 javax.servlet;version="[2.5, 3.0)"

WARNING What about Servlet 3.0? The meaning of the version range "[2.5, 3.0)" isn't entirely straightforward. You're mostly right if you assume that the 2.5 part implies version 2.5 of the servlet specification. However, 3.0 definitely does not mean version 3.0 of the servlet specification! Remember, OSGi versions are semantic package versions, not marketing or specification versions. Servlet 3.0 is backward compatible with servlet 2.5, and so the packages for servlet 3.0 won't be versioned at 3.0. Version 3.0 of the servlet packages would be some change to the servlet specification so radical that the interfaces would no longer be backward compatible. The reason the bottom of the range starts at 2.5 and not 1.0 is that, when the WAB specification was written, the current version was 2.5, and so 2.5 seemed like a logical starting point. Unfortunately, some application servers have deviated from semantic versioning and use the package version 3.0 for the Servlet 3.0 specification, which doesn't help!

The manifest includes one final header which is specific to WABs and defines the web context root. This header is required for a bundle to be recognized as a WAB. Many enterprise OSGi implementations also allow the context root to be changed after deployment.
Web-ContextPath: /fancyfoods.web

Build these three files into a jar, and the web application is ready to try out!

Deploying and testing

Because OSGi is so dynamic, testing OSGi bundles is pretty easy. The same bundle can be loaded repeatedly without having to stop or start anything else. If you're as prone to typos as the authors, you'll find this extremely handy.

The load directory

The Apache Aries runtime you assembled earlier provides a simple mechanism for discovering and starting new applications. The target directory includes a folder called load. Any OSGi bundles copied into this directory will automatically be started. To try this out, start the Aries runtime with java -jar osgi-3.5.0.v20090520.jar -console. Type ss to see the list of installed bundles. Now copy the web bundle you've built into the load directory. You'll see a bunch of output scroll by in the OSGi console. Type ss again. You'll see there's one extra bundle listed, and it's the fancyfoods.web bundle you just copied into the load directory. The fancyfoods.web bundle should be in the ACTIVE state. All that remains now is to try it out. Point a browser at. You'll see more debug output scroll by in the OSGi console, and the browser should display a response like the one shown in figure 2.

A familiar feeling and important differences

Even with the extra headers in the manifest, the web bundle you've written looks a lot like a conventional JEE WAR. In fact, it's so similar that you could probably deploy it as a WAR in a JEE application. So what's different about it? A WAB is a bundle, and not just a normal jar, which means it has some new behaviors.

WAR-to-WAB conversion

The structure of WARs and WABs are similar enough that the enterprise OSGi specification supports automatic conversion of WARs to WABs at deploy time. A bundle symbolic name and package imports are automatically generated.
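At its core, the WAR-to-WAB conversion just mentioned means filling in the OSGi headers a plain WAR manifest lacks. A minimal sketch of that idea follows - the class and method names are invented for illustration, and real containers also scan the classes to generate Import-Package entries, a step this skips.

```java
import java.util.jar.Attributes;
import java.util.jar.Manifest;

// Illustrative sketch of WAR-to-WAB conversion: add the OSGi headers a
// plain WAR manifest lacks, without overwriting any that already exist.
public class WabConverter {

    public static Manifest toWabManifest(Manifest warManifest,
                                         String symbolicName,
                                         String contextPath) {
        Manifest wab = new Manifest(warManifest); // copy, don't mutate the input
        Attributes main = wab.getMainAttributes();
        putIfAbsent(main, "Manifest-Version", "1.0");
        putIfAbsent(main, "Bundle-ManifestVersion", "2");
        putIfAbsent(main, "Bundle-SymbolicName", symbolicName);
        putIfAbsent(main, "Bundle-Version", "1.0.0");
        putIfAbsent(main, "Web-ContextPath", contextPath);
        return wab;
    }

    private static void putIfAbsent(Attributes attrs, String name, String value) {
        if (attrs.getValue(name) == null) {
            attrs.putValue(name, value);
        }
    }

    public static void main(String[] args) {
        Manifest war = new Manifest(); // a WAR manifest with no OSGi headers
        Manifest wab = toWabManifest(war, "fancyfoods.web", "/fancyfoods.web");
        System.out.println(wab.getMainAttributes().getValue("Bundle-SymbolicName"));
        // fancyfoods.web
    }
}
```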
This can be convenient when doing an initial migration from JEE to enterprise OSGi, but in general it's better to write web applications as WABs. This ensures the bundle has a proper, well-known symbolic name, and it also allows package imports to be versioned. Versioning package imports is always a good idea. The WAB format also provides a convenient mechanism for setting the web context root.

Package privacy

What are the implications of being an OSGi bundle? The biggest implication is actually what can't be seen - nothing outside the fancyfoods.web bundle can see the SayHello class, because it's not exported. This cozy privacy is exactly what you want, because there's no reason for any code outside your bundle (except perhaps the web container, which can use the OSGi API) to be messing around directly with your servlet class. If you did want to make the SayHello class externally accessible for some reason, all that would be required is to add a package export of the fancyfoods.web package. However, you'd probably want to consider your design pretty carefully before doing this. Could the shared code be externalized to a utility bundle instead?

Figure 3 The class space of the fancyfoods.web bundle. It does not have any public packages.

To confirm which bundle exports the javax.servlet package, type packages javax.servlet in your OSGi console, or bundle 37 to see where all of the packages used by fancyfoods.web come from. Although package privacy is a good thing, not all Java code can cope with it. Some existing libraries, particularly ones which use reflection to load classes, may not work properly.

Explicit dependencies

Being a bundle has a second implication, which is that all the packages required by SayHello must be explicitly imported in the manifest. If you're just getting used to OSGi, this can seem like an unnecessary and annoying extra step. Let's step back and think about how Java handles package imports for classes.
If you were writing a Java class, you'd always import the packages you were using, rather than expecting the compiler to choose a class with the right name from a random package. Some class names are pretty unique, and you'd probably end up with the right one, but other class names are not at all unique. Imagine how horrible it would be if your class could end up running against any class called Constants, for example. Of course, you might also end up with the opposite problem - instead of a class name which was too common, you could be trying to use a class that doesn't exist at all. If the package you needed didn't exist at all, you'd expect an error at compile time. You certainly wouldn't want the compilation to limp along, claim success, and produce a class which half worked. Luckily, the Java compiler doesn't do this. If your declared dependencies aren't present, the compilation will fail quickly. At runtime, on the other hand, you're in a situation that is pretty similar to having undeclared dependencies. You have to wait until your class is invoked to discover its dependencies are missing. You won't ever end up running against a class from a totally different package to the one you expected, but you could end up running with a class of a different version, with totally different methods and behaviors. Explicitly declaring the dependency on the javax.servlet and javax.servlet.http packages ensures the fancyfoods.web bundle won't be run in a container that doesn't support servlets. Better yet, it won't even be run on a container that supports only an obsolete version of the servlet specification. To try this out, go to the OSGi console for the Aries runtime.
At the prompt, use the packages command to see which bundles import and export the javax.servlet package:

osgi> packages javax.servlet

The response should be something like the following: The output shows that the org.ops4j.pax.web.pax-web-jetty-bundle exports the javax.servlet package and four bundles import it, including fancyfoods.web. What would happen if the Aries assembly hadn't included Jetty? Quit the OSGi console and move the Jetty bundle (pax-web-jetty-bundle*jar) out of the target directory. (Don't lose it, though!) Restart the OSGi console and type ss to see the list of bundles. You'll see a number of bundles, including fancyfoods.web, are in the INSTALLED state instead of the ACTIVE state. This means the bundles couldn't be resolved or started because some dependencies were missing. Make a note of the bundle identifier to the left of fancyfoods.web in the bundle list. Try starting the bundle to see what happens:

Figure 5 If the Jetty bundle is removed from the runtime, the Fancy Foods web bundle cannot be started because no servlet implementation is present.

Because your assembly no longer has servlet support, fancyfoods.web won't start. It definitely wouldn't work properly if the code inside it were to run, so not starting is the right thing to do. Don't forget to put the Jetty bundle back into the target directory before you try to use the web application again. (The authors forgot to do this on several occasions, and were very confused each time.)

Fragments

OSGi bundles have a third advantage over normal WARs. OSGi is all about modularity, and so OSGi bundles can themselves have modular extensions, known as fragments. Fragments are extensions to bundles which attach to a host bundle and act in almost every way as if they were part of the host. They allow bundles to be customized depending on their environment. For example, translated resource files can be packaged up by themselves into a fragment and only shipped if needed.
Fragments can also be used to add platform-specific code to a generic host.

Spicing things up with fragments

How would a fragment work with your little application? The first version of the fancyfoods.web application is only intended to work in English, but if the business takes off, it will expand into other markets. The first step in internationalizing fancyfoods.web is to externalize the strings in SayHello.java. Write a properties file with the following content:

SayHello.hello=Hello valued customer!

The new doGet method looks like:

protected void doGet(HttpServletRequest request,
        HttpServletResponse response) throws ServletException, IOException {
    PrintWriter writer = response.getWriter();
    Locale locale = request.getLocale();                                      #1
    String bundleName = "fancyfoods.web.messages";
    ResourceBundle resources = ResourceBundle.getBundle(bundleName, locale);  #2
    String greeting = resources.getString("SayHello.hello");                  #3
    writer.append(greeting);
}

#1 Where are we?
#2 Get the right resource bundle
#3 Get the translated message

Bundles, bundles, or bundles? We've got a few different kinds of bundles floating around at the moment, but the resource bundles here are nothing to do with OSGi bundles - they're just ordinary Java resource bundles.

If you build the bundle and test the web page, it will work exactly as it did before. This is reassuring if you're English, but not ideal if you're browsing in French. To try it out, change your web browser's preferred language to French. (If you don't want to do that, you can hardcode the locale in the getBundle() call in SayHello.java.) Most pages you browse to - Google, for example - will show French text. However, if you reload the Fancy Foods web page, the greeting is disappointingly English. In order to get the Fancy Foods page to display in French, you need to provide some French translations, obviously.
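The locale fallback the servlet relies on can be seen in isolation with plain Java. This self-contained sketch uses ListResourceBundle subclasses in place of the messages.properties/messages_fr.properties pair - the class names are invented for the example, but the lookup rules are exactly those ResourceBundle.getBundle applies to the properties files.

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

// Base (English) bundle - stands in for messages.properties in the host bundle.
class Greetings extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {{"SayHello.hello", "Hello valued customer!"}};
    }
}

// French bundle - stands in for messages_fr.properties shipped in the fragment.
class Greetings_fr extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {{"SayHello.hello", "Bienvenue aux Aliments de Fantaisie!"}};
    }
}

public class I18nDemo {
    // Same lookup the servlet performs: pick the bundle matching the locale,
    // falling back to the base bundle when no translation exists.
    public static String greet(Locale locale) {
        ResourceBundle bundle = ResourceBundle.getBundle("Greetings", locale);
        return bundle.getString("SayHello.hello");
    }

    public static void main(String[] args) {
        System.out.println(greet(Locale.FRENCH));
        // Bienvenue aux Aliments de Fantaisie!
    }
}
```

Dropping the French fragment in or out of the runtime corresponds to the Greetings_fr class being visible, or not, to the same classloader as Greetings - which is why the fragment must attach to the host bundle rather than live in a bundle of its own.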
In order to be accessible to the SayHello class, the properties files need to be loaded by the same classloader, which (mostly) means they need to be in the same bundle. However, rebuilding jars is no fun, and you definitely don't want to be repackaging your existing code every time you have a new translation. What you want is to be able to easily drop in support for other languages in the future.

Resource loading between bundles. We've simplified our discussion of resource loading slightly. It is in fact possible to load resources from other bundles, but it's ugly. The package containing the resource must be exported by the providing bundle and imported by the consuming bundle. In order to avoid clashes with packages in the consuming bundle, the consuming bundle shouldn't export the package it's attempting to import. Having trouble following? You won't be the only one! We've seen this pattern used, but we definitely don't recommend it.

Luckily, this is the sort of job for which OSGi fragments are perfect. OSGi fragments are a bit like OSGi bundles. However, instead of having their own lifecycle and classloader, they attach to a host bundle. They share the host's classloader and behave in almost every way as if they're part of the parent. However, they can be installed and uninstalled independently of the host.

Figure 6 OSGi fragments attach to a parent bundle and share its classloader.

In this case, a translation fragment can be built and attached to fancyfoods.web. To provide the translations, you'll need a new fragment jar. All it needs inside it is a manifest and a properties file. The French language properties file, messages_fr.properties, might read:

SayHello.hello=Bienvenue aux Aliments de Fantaisie!

The MANIFEST.MF is similar to a bundle manifest, but it has an extra header that identifies the host of the fragment - fancyfoods.web in this case.
It is a good idea to also specify a minimum version of the host bundle, to ensure compatibility:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: French language resources
Bundle-SymbolicName: fancyfoods.web.nls.fr
Bundle-Version: 1.0.0
Fragment-Host: fancyfoods.web;bundle-version="[1.0.0,2.0.0)"
Bundle-ClassPath: .

Build the fragment into a jar, fancyfoods.web.nls.fr.jar. Once the fragment is built, you can drop it into the load directory of your running framework. Type ss and you'll see your new fragment included in the list of bundles. Fragments can't be started and stopped like bundles, so the fragment will be shown as INSTALLED. Refresh the web bundle with refresh [web bundle number] and the fragment will attach to the bundle and move to the RESOLVED state. Check the web page again, and the greeting should be shown in French. Delete the fancyfoods.web.nls.fr jar from the load directory, and try the web page again - back to English!

Although internationalization is the most popular use for fragments, it's not the only one. Anything can be put into a fragment, including Java classes. Including classes in fragments for pluggability is not the best approach, though; OSGi provides higher-level ways of achieving pluggability through services.

Summary

The web is one of the most fundamental parts of enterprise Java programming, providing a front-end for almost every enterprise application. In this article, we got you going with a simple web application.
http://www.javabeat.net/writing-an-osgi-web-application/
NAME
mount - mount a filesystem

SYNOPSIS
mount [-h|-V]
mount [-l] [-t fstype]
mount -a [-fFnrsvw] [-t fstype] [-O optlist]
mount [-fnrsvw] [-o options] device|mountpoint
mount [-fnrsvw] [-t fstype] [-o options] device mountpoint
mount --bind|--rbind|--move olddir newdir
mount --make-{shared|slave|private|unbindable|rshared|rslave|rprivate|runbindable} mountpoint

A filesystem is used to control how data is stored on the device, or provided in a virtual way by network or other services.

The same filesystem may be mounted on the same mountpoint multiple times. The mount command does not implement any policy to control this behavior. All behavior is controlled by the kernel and it is usually specific to the filesystem driver. The exception is --all, in which case already mounted filesystems are ignored (see --all below for more details).

Listing the mounts
The listing mode is maintained for backward compatibility only. For more robust and customizable output use findmnt(8).

Indicating the device and filesystem
Most devices are indicated by a filename (of a block special device), like /dev/sda1, but there are other possibilities. For example, in the case of an NFS mount, device may look like knuth.cwi.nl:/dir. The command lsblk --fs provides an overview of filesystems, LABELs and UUIDs on available block devices. The command blkid -p <device> provides details about a filesystem on the specified device. It is recommended to use LABEL=label or UUID=uuid tags rather than /dev/disk/by-{label,uuid,id} udev symlinks.

The proc filesystem is not associated with a special device, and when mounting it, an arbitrary keyword—for example, proc—can be used instead of a device specification. (The customary choice none is less fortunate: the error message `none already mounted' from mount can be confusing.)

Support for the regular classic /etc/mtab is disabled at compile time by default, because on current Linux systems it is better to make /etc/mtab a symlink to /proc/mounts instead. The regular mtab file maintained in userspace cannot reliably work with namespaces, containers and other advanced Linux features. If the regular mtab support is enabled, then it's possible to use the file as well as the symlink.

The command mount does not read /etc/fstab when both device (or a LABEL, UUID, ID, PARTUUID or PARTLABEL tag) and dir are specified.
For example, to mount device foo at /dir:

mount /dev/foo /dir

Non-superuser mounts
Normally, only the superuser can mount filesystems. However, when fstab contains the user option on a line, anybody can mount the corresponding filesystem. Thus, given a line

/dev/cdrom /cd iso9660 ro,user,noauto,unhide

any user can mount the iso9660 filesystem found on an inserted CDROM using the command mount /cd.

Since util-linux 2.35, mount does not exit when user permissions are inadequate according to libmount's internal security rules. Instead, it drops suid permissions and continues as a regular non-root user. This behavior supports use-cases where root permissions are not necessary (e.g., fuse filesystems, user namespaces, etc.). The group option is similar, but requires the user to be a member of the group of the special file.

Bind mounts
The bind mount operation does not create any second-class or special node in the kernel VFS. The "bind" is just another operation to attach a filesystem. No information is stored anywhere that the filesystem has been attached by a "bind" operation. The olddir and newdir are independent and the olddir may be unmounted. One can also remount a single file (on a single file). It's also possible to use a bind mount to create a mountpoint from a regular directory, for example:

mount --bind foo foo

The bind mount call attaches only (part of) a single filesystem, not possible submounts. The entire file hierarchy including submounts can be attached at a second place by using:

mount --rbind olddir newdir

Note that the filesystem mount options maintained by the kernel will remain the same as those on the original mount point. The userspace mount options (e.g., _netdev) will not be copied by mount and it's necessary to explicitly specify the options on the mount command line.

Since util-linux 2.27, mount(8) permits changing the nosuid, nodev, noexec, noatime, nodiratime and relatime VFS entry flags via a "remount,bind" operation. The other flags (for example filesystem-specific flags) are silently ignored. It's impossible to change mount options recursively (for example with -o rbind,ro).
Since util-linux 2.31, mount ignores the bind flag from /etc/fstab on a remount operation (if "-o remount" is specified on the command line). This is necessary to fully control mount options on remount from the command line. In previous versions the bind flag was always applied and it was impossible to re-define mount options without interaction with the bind semantics. This mount(8) behavior does not affect situations when "remount,bind" is specified in the /etc/fstab file.

The move operation
Move a mounted tree to another place (atomically); see also mount_namespaces(7).

Note that the Linux kernel does not allow changing multiple propagation flags with a single mount(2) system call, and the flags cannot be mixed with other mount options and operations.

Since util-linux 2.23 the mount command can be used to do more propagation (topology) changes by one mount(8) call and do it also together with other mount operations. This feature is EXPERIMENTAL. The propagation flags are applied by additional mount(2) system calls.

The mount command compares filesystem source, target (and fs root for bind mount or btrfs) to detect already mounted filesystems. The kernel table with already mounted filesystems is cached during mount --all. This means that all duplicated fstab entries will be mounted.

The option --all can also be used for the remount operation. In this case all filters (-t and -O) are applied to the table of already mounted filesystems.

Since version 2.35 it is possible to use the command line option -o to alter mount options from fstab (see also --options-mode).

Note that it is bad practice to use mount -a for fstab checking. The recommended solution is findmnt --verify.

- -B, --bind - Remount a subtree somewhere else (so that its contents are available in both places). See above, under Bind mounts.
- -c, --no-canonicalize - Don't canonicalize paths. The mount command canonicalizes all paths (from the command line or fstab) by default.
- -F, --fork - (Used in conjunction with -a.) Fork off a new incarnation of mount for each device, so that the mounts on different devices or different NFS servers proceed in parallel. A disadvantage is that the order of the mount operations is undefined.
- -N, --namespace ns - Perform the mount operation in the mount namespace specified by ns. ns is either a PID of a process running in that namespace or a special file representing that namespace. mount(8) switches to the mount namespace when it reads /etc/fstab, writes /etc/mtab (or writes to /run/mount) and calls the mount(2) system call; otherwise it runs in the original mount namespace. This means that the target namespace does not have to contain any libraries or other requirements necessary to execute the mount(2) call. See mount_namespaces(7) for more information.
- -O, --test-opts opts - Limit the set of filesystems to which the -a option applies. In this regard it is like the -t option except that -O is useless without -a. For example, the command mount -a -O no_netdev mounts all filesystems except those which have the option _netdev specified in the options field in the /etc/fstab file.
- -o, --options opts - Use the specified mount options. The opts argument is a comma-separated list.
- -t, --types fstype - Indicate the filesystem type. More than one type may be specified in a comma-separated list, for the -t option as well as in an /etc/fstab entry.

FILESYSTEM-INDEPENDENT MOUNT OPTIONS
Some of these options are only useful when they appear in the /etc/fstab file. The following options apply to any filesystem being mounted (but not every filesystem actually honors them; e.g., the sync option today has an effect only for ext2, ext3, ext4, fat, vfat, ufs and xfs):

- defaults - Use the default options: rw, suid, dev, exec, auto, nouser, and async. Note that the real set of all default mount options depends on the kernel and filesystem type. See the beginning of this section for more details.
- dev - Interpret character or block special devices on the filesystem.
- nodev - Do not interpret character or block special devices on the filesystem.
- nolazytime - Do not use the lazytime feature.
- suid - Honor set-user-ID and set-group-ID bits or file capabilities when executing programs from this filesystem.
- nosuid - Do not honor set-user-ID and set-group-ID bits or file capabilities when executing programs from this filesystem.
- remount - Attempt to remount an already-mounted filesystem. The remount operation together with the bind flag has special semantics; see above, the subsection Bind mounts.

The remount functionality follows the standard way the mount command works with options from fstab. This means that mount does not read fstab (or mtab) only when both device and dir are specified. If only the mountpoint is found in fstab, the fstab options are merged with those given on the command line; if no entry is found in fstab, then a remount with unspecified source is allowed.
mount allows the use of --all to remount all already mounted filesystems which match a specified filter (-O and -t). For example:

mount --all -o remount,ro -t vfat

remounts all already mounted vfat filesystems in read-only mode. Each of the filesystems is remounted by "mount -o remount,ro /dir" semantics. This means the mount command reads fstab or mtab and merges these options with the options from the command line.

Note that before util-linux v2.30 the x-* options were not maintained by libmount and were stored in user space (the functionality was the same as for X-* now), but due to the growing number of use-cases (in initrd, systemd etc.) the functionality has been extended to keep existing fstab configurations usable without a change.

FILESYSTEM-SPECIFIC MOUNT OPTIONS
This section lists options that are specific to particular filesystems. Where possible, you should first consult filesystem-specific manual pages for details. Some of those pages are listed in the following table. Note that some of the pages listed above might be available only after you install the respective userland tools.

The following options apply only to certain filesystems. We sort them by filesystem. All options follow the -o flag. What options are supported depends a bit on the running kernel. Further information may be available in filesystem-specific files in the kernel source subdirectory Documentation/filesystems.

The FAT option allow_utime=value controls the permission check of mtime/atime updates. Normally, utime(2) checks that the current process is the owner of the file, or that it has the CAP_FOWNER capability. But FAT filesystems don't have UID/GID on disk, so the normal check is too inflexible.

Normal iso9660 filenames appear in an 8.3 format (i.e., DOS-like restrictions on filename length).

An overlay filesystem combines two filesystems - an upper filesystem and a lower filesystem. When a name exists in both filesystems, the object in the upper filesystem is visible while the object in the lower filesystem is either hidden or, in the case of directories, merged with the upper object. The lower filesystem can be any filesystem supported by Linux and does not need to be writable. The lower filesystem can even be another overlayfs.
The upper filesystem will normally be writable, and if it is, it must support the creation of trusted.* extended attributes and must provide a valid d_type in readdir responses, so NFS is not suitable. A read-only overlay of two read-only filesystems may use any filesystem type. The options lowerdir and upperdir are combined into a merged directory by using:

mount -t overlay overlay -olowerdir=/lower,upperdir=/upper,workdir=/work /merged

ubifs
UBIFS is a flash filesystem which works on top of UBI volumes. Note that atime is not supported and is always turned off. The device name may be specified as a UBI volume rather than an ordinary block device.

udf options:
- uid=ignore - Ignored, use uid=<user> instead.
- gid=ignore - Ignored, use gid=<group> instead.
- volume= - Unimplemented and ignored.
- partition= - Unimplemented and ignored.
- fileset= - Unimplemented and ignored.
- rootdir= - Unimplemented and ignored.

DM-VERITY SUPPORT (experimental)
The device-mapper verity target provides read-only transparent integrity checking of block devices using the kernel crypto API. Supported since util-linux v2.35. Example commands:

mksquashfs /etc /tmp/etc.squashfs
dd if=/dev/zero of=/tmp/etc.hash bs=1M count=10
veritysetup format /tmp/etc.squashfs /tmp/etc.hash
openssl smime -sign -in <hash> -nocerts -inkey private.key \
  -signer private.crt -noattr -binary -outform der -out /tmp/etc.p7
mount -o verity.hashdevice=/tmp/etc.hash,verity.roothash=<hash>,\
verity.roothashsig=/tmp/etc.p7 /tmp/etc.squashfs /mnt

This creates a squashfs image from the /etc directory and a verity hash device, and mounts the verified filesystem image at /mnt. The kernel will verify that the root hash is signed by a key from the kernel keyring if roothashsig is used.

LOOP-DEVICE SUPPORT
mount re-uses the loop device rather than initializing a new device if the same backing file is already used for some loop device with the same offset and sizelimit. This is necessary to avoid filesystem corruption.

EXIT STATUS
mount has the following exit status values.

HELPERS
The syntax of external mount helpers is:

/sbin/mount.suffix spec dir [-sfnv] [-N namespace] [-o options] [-t type.subtype]

where the suffix is the filesystem type and the -sfnvoN options have the same meaning as for the normal mount command.
FILES

HISTORY
A mount command existed in Version 5 AT&T UNIX.

BUGS
It is possible for a corrupted filesystem to cause a crash.

Some Linux filesystems don't support -o sync and -o dirsync (the ext2, ext3, ext4, fat and vfat filesystems do support synchronous updates when mounted with the sync option).

For NFS mounts affected by attribute caching problems, use the noac mount option.

AUTHORS
Karel Zak <kzak@redhat.com>
https://man.archlinux.org/man/mount.8
public void buttons(){
    int c = WHEN_IN_FOCUSED_WINDOW;
    Action right = new AbstractAction() {
        public void actionPerformed(ActionEvent e) {
            player.setVX(2);
        }
    };
    ...

import java.awt.BorderLayout;
import java.awt.Image;
import javax.swing.BoxLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;

Thank you very much, now it is working. It was the paint method's fault; now I'm using the paintComponent method. That helps a lot.

Hi, why is it that my background or other images that I paint hide my button?

import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import...

Why is it that when I put the button in a while statement with other methods such as paint() and the fullscreen method with update...

public void gameLoop(){
    long startTime =...

Why is it that my button doesn't show on the screen? BTW, I am using fullscreen instead of a JFrame.

public class optionButton extends JPanel{
    public JButton shop;
    public JButton exit;
    public...

How can I click an image like a button? So instead of using a JButton I am using an Image.

How do I delete a picture on the screen at a certain time without pausing the program? I used try and catch with Thread.sleep inside a while loop, but it keeps pausing. Is there a way to...

I already made a spacejet image that goes back and forth using the KeyListener; it also shoots one image at a time. That's the only thing I need to fix for now. Can you find me a link to a good example...

Shooting ====>>> like Space Invaders, it shoots a laser with the same image and it moves. Sample code ==>>> sample code for using the image as a shooting material that creates a new image when the...

How do I use one image and use it many times as I like, like for shooting?
Can someone give me sample code for it?

I think you need a Scanner for each variable.

import java.util.Scanner;

public class Number {
    private int numberEntered;
    private int firstDigit;
    private int secondDigit;
    private int thirdDigit;
    private int fourthDigit;
    private int...

How do you loop this code so that if they entered the wrong digits the second time, it will go back to "enter number again" and start the "if" and "else" statements again?

public class Assign1 {
    ...

(blue indicates user-entered information, red indicates display from the program)

Enter a five digit number: 12345
The number is 1 2 3 4 5

Your class should be called Number. It should contain 6 fields of type int named correspondingly for numberEntered, firstDigit, secondDigit, thirdDigit, fourthDigit and fifthDigit. You should...

import java.lang.*;        // all import statements must be first
import java.util.Scanner;  // program uses class Scanner

/** Prompts user to enter 10 numbers, then displays largest and
 *  average of...
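The digit-splitting assignment quoted above can be done with plain arithmetic rather than one Scanner per variable. A small sketch (the DigitSplit class and method names are mine, not from the thread):

```java
public class DigitSplit {
    // Splits a five-digit number into its digits, most significant first.
    static int[] digits(int numberEntered) {
        int[] d = new int[5];
        for (int i = 4; i >= 0; i--) {
            d[i] = numberEntered % 10;  // peel off the last digit
            numberEntered /= 10;
        }
        return d;
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder("The number is");
        for (int d : digits(12345)) out.append(' ').append(d);
        System.out.println(out);  // The number is 1 2 3 4 5
    }
}
```

Validating the input (exactly five digits) can then be a simple loop around the prompt that repeats until 10000 <= numberEntered <= 99999.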
http://www.javaprogrammingforums.com/search.php?s=75582d91a6cbde40ab4c4d8ae6ba32e8&searchid=1665625
Thanks for the reply. The original spatial data is a polygon geometry file. I was under the impression that if I create a relate, the relate wouldn't remain if I extracted the data (e.g. the "Feature Class to Shapefile" tool would not capture the related data). Is this the case?

They aren't permanent; they have to be re-established. Joins can be made permanent, but that is just a shortcut to adding a new field and calculating values for it from the joined data. Nothing gets carried down to the shapefile level, so be careful of things like your attribute field names etcetera.

Hi David, I have just come across the same challenge, and feel like I figured it out, so posting my solution here. You can use Make Query Table for executing a one-to-many join.

How To: Create a one-to-many join in ArcMap

The site above is an instruction for using Make Query Table in ArcMap, but I found the same tool in ArcGIS Pro. Using Python is more helpful if you want to export the temporary layer created by Make Query Table as a permanent gdb feature class or shapefile. I'm pasting my work as an example.

Goal: Join a polygon feature class which has 59 city names and a table which has time-series population values (2006-2010) for those 59 city names.
The two inputs are the polygon feature class Kyuson_H12 and the table agg15male.

import arcpy
from arcpy import env

env.overwriteOutput = True
env.workspace = r"C:\Users\kenta\Documents\ArcGIS\Projects\MyProject2\Rdata.gdb"

# List the polygon feature class and the table to be joined
tableList = [
    r"C:\Users\kenta\Documents\ArcGIS\Projects\MyProject2\Rdata.gdb\Kyuson_H12",
    r"C:\Users\kenta\Documents\ArcGIS\Projects\MyProject2\Rdata.gdb\agg15male"]

# Define the query used for matching
whereClause = "Kyuson_H12.Kyuson = agg15male.Kyuson"

# Name of the temporary layer created by MakeQueryTable
lyrName = "Kyuson_layer"

# Name of the output feature class
outFeatureClass = "agg15male_poly"

arcpy.MakeQueryTable_management(tableList, lyrName, "USE_KEY_FIELDS", "", "", whereClause)

# The layer created by MakeQueryTable is temporary, so save it
# as a permanent feature class in the geodatabase
arcpy.CopyFeatures_management(lyrName, outFeatureClass)

****Points you need to be careful*** You might have finished this task a while ago, but hoping it will be helpful for anyone running into the same problems in the future.

Best, Kenta
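For intuition about what Make Query Table produces here, a tiny pure-Python sketch of a one-to-many join (illustrative only; no arcpy involved, and the field names simply mirror the example):

```python
# Each polygon row is repeated once per matching table row,
# which is the one-to-many result Make Query Table builds.
polygons = [{"Kyuson": "A"}, {"Kyuson": "B"}]
table = [
    {"Kyuson": "A", "year": 2006, "pop": 120},
    {"Kyuson": "A", "year": 2007, "pop": 118},
    {"Kyuson": "B", "year": 2006, "pop": 300},
]

def one_to_many_join(features, rows, key):
    """Return one output row per (feature, matching table row) pair."""
    return [{**f, **r} for f in features for r in rows if f[key] == r[key]]

joined = one_to_many_join(polygons, table, "Kyuson")
print(len(joined))  # 3 rows: polygon "A" twice, polygon "B" once
```

This is why the output feature class has more rows than the input polygons: geometry is duplicated for every matching table record.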
https://community.esri.com/t5/arcgis-pro-questions/arcgis-pro-join-one-to-many-join-by-attribute/td-p/589401
Hi everyone, I need to be able to read each character from a text file and store them in an array. Here is the text file:

*********************
Ace of Spades
10 of Spades
3 of Clubs
King of Spades
5 of Spades
*********************
Queen of Diamonds
King of Hearts
Queen of Hearts
Queen of Clubs
Queen of Spades
*********************
2 of Clubs
2 of Hearts
Ace of Clubs
2 of Diamonds
Ace of Diamonds
*********************
2 of Hearts
3 of Hearts
4 of Hearts
5 of Hearts
6 of Hearts

For instance, I need to be able to store the integers in an array, and then I need to store the strings like Hearts and Clubs in another string array, and I need to ignore the spaces and the '*' characters. So far I have not been able to do it. I seriously need help. I've been trying different things like getline(), indata.getline, getline(indata, str); none have been working. Here's what I did; I managed to make the program read the file:

#include <iostream>
#include <cstdlib>
#include <string>
#include <fstream>
using namespace std;

int main()
{
    string str;
    ifstream inData;
    string filename;

    cout << "Enter the name of the file containing the hands: " << flush;
    cin >> filename;
    inData.open(filename.c_str());

    return 0;
}

But now I don't know how to continue. I am really bad at programming... please, someone help me... thanks
https://cboard.cprogramming.com/cplusplus-programming/130233-reading-txt-file-plz-someone-help.html
Abstract base class for drawing 2d graphics. More...

#include <drawerbase.h>

Abstract base class for drawing 2d graphics. A drawer is an object that draws 2d graphics by calling the methods of a GraphicsViewAdapter. A call of redraw() redraws the complete graphics.

Definition at line 40 of file drawerbase.h.

Definition at line 56 of file drawerbase.h.

Constructor of a drawer.
Definition at line 37 of file drawerbase.cpp.

Definition at line 44 of file drawerbase.h.

Clear the whole view.
Reimplemented in FourierSpectrumGraphDrawer and TuningIndicatorDrawer.
Definition at line 51 of file drawerbase.h.

Abstract function: draw the content.
Implemented in TuningCurveGraphDrawer, FourierSpectrumGraphDrawer, and TuningIndicatorDrawer.

Function to completely redraw the scene. The function first clears the scene and then calls the abstract draw() method.
Definition at line 57 of file drawerbase.cpp.

Check whether the content has to be redrawn. If force is set to false the function returns true on redraw timeout. If force is set to true the function will always return true. If returning true, the function will automatically reset the timer.
Definition at line 81 of file drawerbase.cpp.

Pointer to the graphics view adapter.
Definition at line 53 of file drawerbase.h.

Update time.
Definition at line 58 of file drawerbase.h.

Time position when last drawn.
Definition at line 57 of file drawerbase.h.
http://doxygen.piano-tuner.org/class_drawer_base.html
Back to: Design Patterns in C# With Real-Time Examples

Facade Design Pattern in C# with Examples

In this article, I am going to discuss the Facade Design Pattern in C# with examples. Please read our previous article where we discussed the Adapter Design Pattern in C# with examples. The Facade Design Pattern falls under the category of Structural Design Patterns. As part of this article, we are going to discuss the following pointers.

- What is the Facade Design Pattern in C#?
- Understanding the Facade Design Pattern with one real-time example.
- Understanding the class diagram of the Facade Design Pattern.
- When to use the Facade Design Pattern?
- Implementing the Facade Design Pattern in C#.

What is the Facade Design Pattern in C#?

As per the GoF definition, the Facade Design Pattern states that you need to provide a unified interface to a set of interfaces in a subsystem. The Facade Design Pattern defines a higher-level interface that makes the subsystem easier to use. In simple words, we can say that the Facade Design Pattern is used to hide the complexities of a system and provide an interface through which the client can access the system. The facade (usually a wrapper) sits on top of a group of subsystems and allows them to communicate in a unified manner.

Understanding the Facade Design Pattern in C# with one Real-Time Example:

Let us understand the Facade Design Pattern with one real-time example. Please have a look at the following diagram. Here, we need to design an application to place an order. As shown in the above image, in order to place an order, first we need to create an object of the Product class and get the product details. If everything is fine, then we need to make the payment; to do this, we need to create an instance of the Payment class and call the MakePayment method. If the payment is successful, then we need to send the invoice to the customer. So, in order to place the order, we need to perform the above-mentioned steps.
The facade is actually an extra class that sits on top of the above classes. Please have a look at the following diagram. Here, the extra class Order is the facade class, which takes the responsibility of placing the order. This class internally creates instances of the respective classes and calls their methods.

Understanding the Class Diagram of the Facade Design Pattern in C#:

Let us understand the class diagram and the different components involved in the Facade Design Pattern in C#. In order to understand the Facade Design Pattern class diagram, please have a look at the following image. As shown in the above image, two kinds of classes are involved in the Facade Design Pattern. They are as follows:

- The Facade class knows which subsystem classes are responsible for a given request, and it delegates the client requests to the appropriate subsystem objects.
- The subsystem classes implement their respective functionalities assigned to them, and these subsystems do not have any knowledge of the facade.

Note: The Facade pattern is used unknowingly many times in our projects, even when we are not aware of it. This is one of the most useful design patterns. If you understand the Facade Design Pattern, then you will make your project architecture better.

Implementing the Facade Design Pattern in C#:

Let us implement the example that we discussed, step by step, using the Facade Design Pattern in C#.

Step 1: Creating subsystems

In our example, the subsystems are going to be the Product, Payment, and Invoice classes, and each class will have its own responsibility. So, let's create the above three classes and implement their responsibilities.

Product: Create a class file with the name Product.cs and then copy and paste the following code in it. This class has a method to get the product details.
using System;

namespace FacadeDesignPattern
{
    public class Product
    {
        public void GetProductDetails()
        {
            Console.WriteLine("Fetching the Product Details");
        }
    }
}

Payment: Create a class file with the name Payment.cs and then copy and paste the following code in it. This class has a method to make the payment.

using System;

namespace FacadeDesignPattern
{
    public class Payment
    {
        public void MakePayment()
        {
            Console.WriteLine("Payment Done Successfully");
        }
    }
}

Invoice: Create a class file with the name Invoice.cs and then copy and paste the following code in it. This class has a method to send the invoice.

using System;

namespace FacadeDesignPattern
{
    public class Invoice
    {
        public void Sendinvoice()
        {
            Console.WriteLine("Invoice Sent Successfully");
        }
    }
}

Note: Here, we have not implemented the methods in detail; we are just printing messages. This is because our idea is to understand the Facade Design Pattern implementation, not to focus on the real implementations of the methods.
Now the client will use this Facade class and simply call the PlaceOrder method to place an order. The PlaceOrder method takes all the responsibility to place an order. Step3: Client Please modify the Main method as shown below. Here, the client simply needs to create an object of Order class and call the PlaceOrder method which will place the order. using System; namespace FacadeDesignPattern { class Program { static void Main(string[] args) { Order order = new Order(); order.PlaceOrder(); Console.Read(); } } } Output: When to use Facade Design Pattern in Real-Time Applications? We need to use the Facade Design Pattern when - We want to provide a simple interface to a complex subsystem. Subsystems often get more complex as they evolve. - There are many dependencies between clients and the implementation classes In the next article, I am going to discuss the Decorator Design Pattern in C# with some examples. Here, in this article, I try to explain the Facade Design Pattern in C# with Examples. I hope you understood the need and use of Facade Design Pattern in C# with Examples.
https://dotnettutorials.net/lesson/facade-design-pattern/
Introduction: Motion Control Gimbal

Hello everyone, my name is Harji Nagi. I am currently a second-year student studying electronics and communication engineering at Pranveer Singh Institute Of Technology, Kanpur (UP). I have a keen interest in robotics, Arduino, artificial intelligence, and analog electronics.

The gimbal consists of 3 MG996R servo motors for the 3-axis control, and a base on which the MPU6050 sensor, the Arduino, and the battery will be placed. It provides yaw, pitch, and roll stabilization.

Step 1: Components List

The component list is:

1) Arduino Uno
2) 8V, 1.5A battery for powering the Arduino Uno
3) 7805 voltage regulator IC (or you can use a buck converter)
4) MPU6050
5) 3x MG995 servo motors
6) Jumper wires

Other equipment:

1) Soldering iron
2) Glue gun
3) Drill machine
4) Food can

Instead of using a breadboard, I have used a small custom perf board for the positive and negative bus connections.

Step 2: Assembling

Foamcore, foam board, or paper-faced foam board is a lightweight and easily cut material used for mounting servo motors and for making scale models. First, I made DIY L-shaped brackets from foam board to mount the servo motors.

Step 3:

Assembling the gimbal was quite easy. I started by installing the yaw servo, the MPU6050 sensor, and the ON-OFF switch. Using bolts and nuts I secured it to the base.

Step 4: Next, Using the Same Method, I Secured the Roll Servo. The Parts Are Specifically Designed to Easily Fit the MG995 Servos

Step 5: Next, Using the Same Method, I Secured the Roll Servo. The Parts Are Specifically Designed to Easily Fit the MG995 Servos

Step 6: Connections

In the circuit diagram you can use either a buck converter or the 7805 voltage regulator IC to convert 8V to 5V. The microcontroller given in the circuit diagram is an Arduino Nano; you can also use an Arduino Uno or an Arduino Mega.
The SCL and SDA pins of the MPU6050 are connected to the Arduino analog pins A5 and A4. (The SCL and SDA pins may vary, so check the datasheet for the SCL and SDA pins of other microcontrollers.)

Step 7: Connection With the 7805 Voltage Regulator IC

This circuit diagram is for the connection of the 7805 voltage regulator IC: connect the 8V battery at Vin and you will get an output voltage of 5V.

Step 8: Coding

You must include the following libraries:

1) #include <Wire.h> (click here to download the zip file)
2) #include <Servo.h> (click here to download the zip file)

After downloading the zip files, add them as zip libraries in the Arduino IDE.

The code:

/* DIY Gimbal - MPU6050 Arduino Tutorial
   Code based on the MPU6050_DMP6 example from the i2cdevlib library by Jeff Rowberg
*/

#include "I2Cdev.h"
#include "MPU6050_6Axis_MotionApps20.h"

// Arduino Wire library is required if the I2Cdev I2CDEV_ARDUINO_WIRE
// implementation is used in I2Cdev.h
#if I2CDEV_IMPLEMENTATION == I2CDEV_ARDUINO_WIRE
#include "Wire.h"
#endif

#include <Servo.h>

// Define the 3 servo motors
Servo servo0;
Servo servo1;
Servo servo2;

float correct;
int j = 0;

#define OUTPUT_READABLE_YAWPITCHROLL
#define INTERRUPT_PIN 2  // use pin 2 on Arduino Uno & most boards

// MPU control/status vars (as in the MPU6050_DMP6 example)
MPU6050 mpu;
bool dmpReady = false;   // set true if DMP init was successful
uint8_t mpuIntStatus;    // holds actual interrupt status byte from MPU
uint8_t devStatus;       // return status after each device operation
uint16_t packetSize;     // expected DMP packet size (default is 42 bytes)
uint16_t fifoCount;      // count of all bytes currently in FIFO
uint8_t fifoBuffer[64];  // FIFO storage buffer

// orientation/motion vars
Quaternion q;            // [w, x, y, z]
VectorFloat gravity;     // [x, y, z]
float ypr[3];            // [yaw, pitch, roll]

// ================================================================
// === INTERRUPT DETECTION ROUTINE ===
// ================================================================
volatile bool mpuInterrupt = false; // indicates whether MPU interrupt pin has gone high
void dmpDataReady() {
  mpuInterrupt = true;
}

// ================================================================
// === INITIAL SETUP ===
// ================================================================
void setup() {
  // initialize serial communication
  Serial.begin(38400);
  while (!Serial); // wait for Leonardo enumeration, others continue immediately

  // initialize device
  //Serial.println(F("Initializing I2C devices..."));
  mpu.initialize();
  pinMode(INTERRUPT_PIN, INPUT);
  devStatus = mpu.dmpInitialize();

  // supply your own gyro offsets here, scaled for min sensitivity
  mpu.setXGyroOffset(17);
  mpu.setYGyroOffset(-69);
  mpu.setZGyroOffset(27);
mpu.setZAccelOffset(1551); // 1688 factory default for my test chip // make sure it worked (returns 0 if so) if (devStatus == 0) { // turn on the DMP, now that it's ready // Serial.println(F("Enabling DMP...")); mpu.setDMPEnabled(true); attachInterrupt(digitalPinToInterrupt(INTERRUPT_PIN), dmpDataReady, RISING); mpuIntStatus = mpu.getIntStatus(); // set our DMP Ready flag so the main loop() function knows it's okay to use it /(")")); } // Define the pins to which the 3 servo motors are connected servo0.attach(10); servo1.attach(9); servo2.attach(8); } // ================================================================ // === MAIN PROGRAM LOOP === // ================================================================ void loop() { // if programming failed, don't try to do anything if (!dmpReady) return; // wait for MPU interrupt or extra packet(s) available while (!mpuInterrupt && fifoCount < packetSize) { if (mpuInterrupt && fifoCount < packetSize) { // try to get out of the infinite loop fifoCount = mpu.getFIFOCount(); } } // reset interrupt flag and get INT_STATUS byte mpuInterrupt = false; mpuIntStatus = mpu.getIntStatus(); // get current FIFO count fifoCount = mpu.getFIFOCount(); // check for overflow (this should never happen unless our code is too inefficient) if ((mpuIntStatus & _BV(MPU6050_INTERRUPT_FIFO_OFLOW_BIT)) || fifoCount >= 1024) { // reset so we can continue cleanly mpu.resetFIFO(); fifoCount = mpu.getFIFOCount(); Serial.println(F("FIFO overflow!")); // otherwise, check for DMP data ready interrupt (this should happen frequently) } else if (mpuIntStatus & _BV(MPU6050_INTERRUPT_DMP_INT_BIT)) { // wait for correct available data length, should be a VERY short wait while (fifoCount < packetSize) fifoCount = mpu.getFIFOCount(); // read a packet from FIFO mpu.getFIFOBytes(fifoBuffer, packetSize); // track FIFO count here in case there is > 1 packet available // (this lets us immediately read more without waiting for an interrupt) fifoCount -= packetSize; 
// j++; } // After 300 readings else { ypr[0] = ypr[0] - correct; // Set the Yaw to 0 deg - subtract the last random Yaw value from the currrent value to make the Yaw 0 degrees // Map the values of the MPU6050 sensor from -90 to 90 to values suatable for the servo control from 0 to 180 int servo0Value = map(ypr[0], -90, 90, 0, 180); int servo1Value = map(ypr[1], -90, 90, 0, 180); int servo2Value = map(ypr[2], -90, 90, 180, 0); // Control the servos according to the MPU6050 orientation servo0.write(servo0Value); servo1.write(servo1Value); servo2.write(servo2Value); } #endif } } Finally using the write function, we send these values to the servos as control signals. Of course, you can disable the Yaw servo if you want just stabilization for the X and Y axis, and use this platform as camera gimbal. Step 9: When All the Components Are Connected ,its Look Similar to This Picture Step 10: Now Insert All Base Stuff Inside the Food Can Step 11: When All the Wires and Components Are Placed Inside a Food Can Then Applied Glue Gun at the Base of Foam Board. Step 12: Conclusion Please note this far from good camera gimbal. The movements are not smooth because these servos are not meant for such a purpose. Real camera gimbals use a special type of BLDC motor for getting smooth movements. So, consider this project only for educational purpose. That would be all for this tutorial, I hope you enjoyed it and learned something new. Feel free to ask any question in the comments section below and don’t forget to checkmy collections of project Participated in the Make it Move Contest 2020 Be the First to Share Recommendations
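The map() calls above do a plain linear rescaling: a reading in the -90 to 90 degree range becomes a servo command in the 0 to 180 range, with the third axis reversed. A quick illustration of the same arithmetic in Python (the function name scale is mine, not part of the Arduino code):

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    # Linear interpolation with integer math, like Arduino's map()
    return out_lo + (value - in_lo) * (out_hi - out_lo) // (in_hi - in_lo)

print(scale(0, -90, 90, 0, 180))    # level sensor -> 90, servo centred
print(scale(-90, -90, 90, 0, 180))  # -> 0, one end of the servo's travel
print(scale(45, -90, 90, 180, 0))   # reversed output range -> 45 instead of 135
```

Swapping the output bounds, as the roll servo does with map(ypr[2], -90, 90, 180, 0), is what mirrors the motion for a servo mounted the opposite way around.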
https://www.instructables.com/Motion-Control-Gimbal/
On Mon, 2008-08-04 at 14:28 -0700, David Lutterkort wrote: > Agreed. > - is this only temporary until iptables/lokkit has facilities for > cleaner addition of persistent firewall rules ? There's no huge technical issue here AFAICS. We just need a hook for libvirt to persistently register its rules with iptables. The main objection seems to be the old "how do you prevent different sets of rules from conflicting" chestnut. I don't see that being a serious issue in practice - there are all sorts of other global namespaces that apps manage to share effectively. Feel free to take a look at this; I lose motivation for fixing this every time I go back and discuss it with the maintainer: The truly depressing aspect of all this is that any fix we come up with would be Fedora specific anyway - e.g. /etc/sysconfig/iptables.d Cheers, Mark.
http://www.redhat.com/archives/libvir-list/2008-August/msg00193.html
Only recently started using Python, and I like it! However, I am stuck with SQLAlchemy. I am trying to write a script that reads an MS SQL database, queries a table (all fields, only a filter on some fields), and writes the results to a local SQLite database. (The goal is to write a data adapter: perform some queries on the SQLite database before exporting the results to another database. Writing to a temporary table in the target database is also possible.)

I can make a connection and get query results (I can print them, so I know that part works). But how can I create a new table based on the structure of the query results from the source SQL Server?

This works:

```python
import sqlalchemy

esd = sqlalchemy.create_engine('mssql+pyodbc://username:password@servername/dbname')
for row in esd.execute('select * from ticket_actions where log_dt > \'2012-09-01\''):
    print(row.eFolderID)
```

```python
import pyodbc

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=servername;DATABASE=dbname;UID=username;PWD=password')
cursor = cnxn.cursor()
for row in cursor.execute('select * from ticket_actions where log_dt > \'2012-09-01\''):
    print(row.eFolderID)
```

See Creating and Dropping Database Tables: "Creating … individual tables can be done via the create() … method of Table." For reading the source structure, see Reflecting Database Objects: "A Table object can be instructed to load information about itself from the corresponding database schema object already existing within the database. […] The reflection system can also reflect views."
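To make the doc quotes above concrete, here is a small self-contained sketch of the reflect-then-create pattern. It uses two throwaway in-memory SQLite databases in place of the real SQL Server source and SQLite target, and the table and column names are invented for the demo; the reflection and Table.create() calls are the point:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, select

src_engine = create_engine("sqlite:///:memory:")  # stands in for the MSSQL source
dst_engine = create_engine("sqlite:///:memory:")  # the local SQLite target

# Build and fill a source table for the demo.
src_meta = MetaData()
tickets = Table(
    "ticket_actions", src_meta,
    Column("eFolderID", Integer, primary_key=True),
    Column("log_dt", String),
)
src_meta.create_all(src_engine)
with src_engine.begin() as conn:
    conn.execute(tickets.insert(), [
        {"eFolderID": 1, "log_dt": "2012-09-02"},
        {"eFolderID": 2, "log_dt": "2012-09-03"},
    ])

# Reflect the source table's structure from the database itself...
reflected = Table("ticket_actions", MetaData(), autoload_with=src_engine)
# ...create an identical table in the target database...
reflected.create(dst_engine)

# ...and copy the rows across.
with src_engine.connect() as conn:
    rows = [dict(r) for r in conn.execute(select(reflected)).mappings()]
with dst_engine.begin() as conn:
    conn.execute(reflected.insert(), rows)
    copied = conn.execute(select(reflected)).fetchall()

print(sorted(tuple(r) for r in copied))
```

Against the real source you would simply point the first engine at the mssql+pyodbc URL from the question; the reflected Table carries enough schema information for SQLAlchemy to emit a matching CREATE TABLE on the SQLite side.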
https://codedump.io/share/1ClZJ719S4vo/1/sqlalchemy-export-table-to-new-database
How does this compare with using SciPy's cKDTree?

Hi Michael, the class scipy.spatial.cKDTree implements another algorithm for the nearest-neighbor search, based on KD-trees. Of course, KD-trees have pros and cons. For example, the search cost using a KD-tree is logarithmic (so it's faster than the naive algorithm implemented here), but you have to build the tree, and if you need to delete or insert points in your dataset, you have to modify the tree. If you need more details look at this:

Thanks a lot, I made a small change so that the user can make several queries with one call of knn_search(...). Sorry, I don't know how to format code here:

```python
from numpy import random, argsort, sqrt, array, ones
from pylab import plot, show

# The function computes the euclidean distance between every point of D and x,
# then returns the indexes of the points for which the distance is smaller.
def knn_search(x, D, K):
    """ find K nearest neighbours of data among D """
    ndata = D.shape[0]
    # num of query points
    queries = x.shape
    K = K if K < ndata else ndata
    # euclidean distances from the other points
    diff = array(D * ones(queries, int)).T - x[:, :ndata].T
    sqd = sqrt(((diff.T) ** 2).sum(axis=2))
    # sorting
    idx = argsort(sqd)
    # return the indexes of K nearest neighbours
    return idx[:, :K]

# Now, we will test this function on a random bidimensional dataset:
data = random.rand(200, 2)  # random dataset
x = array([[[0.4, 0.4]], [[0.6, 0.8]], [[0.9, 0.2]], [[0.2, 0.9]]])  # query points

# Performing the search
neig_idx = knn_search(x, data, 10)

# Plotting the data and the input points
plot(data[:, 0], data[:, 1], 'ob', x.T[0, 0], x.T[1, 0], 'or')

# Highlighting the neighbours for each input
plot(data[neig_idx, 0], data[neig_idx, 1], 'o',
     markerfacecolor='None', markersize=15, markeredgewidth=1)
# plot(data[neig_idx[1], 0], data[neig_idx[1], 1], 'xk',
#      markerfacecolor='None', markersize=15, markeredgewidth=1)
show()
```

Awesome code - this really helped me out! Thanks for sharing!

GP, great tutorial.
Thanks as always for uploading these. One question, though: it's not clear to me (a beginner) what form "x" should take when it's passed into the knn_search function. You say that x is "a query point," but what does a query point look like? Is it a slice of ndata, a point within the features? Thank you for your thoughts!

Hello, if your data matrix is of dimension n by m, then x has to be a vector of dimension n.

Is the data matrix, then, some kind of similarity or distance measure if I'm doing kNN on documents?

Usually each row of the data matrix contains one of your samples, and the knn computes the distance between each sample you have and a query vector. At the end it reports to you the k samples closest to your query vector.

I have my training data in a CSV file. The data contains 35 points corresponding to 3D vectors in 3 columns x, y, and z, and a feature 'color' in the fourth column. Not being a great pythonista, how do I modify your code here to employ my data to test a random new vector? Forgot to mention, the color feature is numeric: 1, 0.5, 0.3, or 0. I want a new random vector to be predicted.

Hi, I would suggest you read the CSV file using Pandas. Since your dataset has 3 dimensions, you have to make a 3D plot (or ignore one of the variables). Matplotlib has a module named mplot3d that enables 3D visualization.

should be axis=1 instead of axis=0 for euclidean distance
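For readers who only need the single-query case, here is a minimal sketch of the same idea (note the convention differs from the post: here each row of D is a data point, so the sum runs over axis=1, echoing the last comment above):

```python
import numpy as np

def knn_search(x, D, K):
    """Return the indices of the K rows of D closest to the vector x."""
    dists = np.sqrt(((D - x) ** 2).sum(axis=1))  # one Euclidean distance per row
    return np.argsort(dists)[:K]

data = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.1], [5.0, 5.0]])
idx = knn_search(np.array([0.0, 0.0]), data, 2)
print(idx)  # -> [0 2]
```

Broadcasting takes care of subtracting x from every row, so no tiling with ones() is needed in this orientation.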
http://glowingpython.blogspot.it/2012/04/k-nearest-neighbor-search.html
Traveling Salesman Problems using Solver Foundation: C# code

Nate Brixius, April 26, 2010

This post has been updated. Click here to read it.

Hi Nathan, thanks for posting this code. I have been playing with the MSF and it is nice to see many pieces in action here. In an earlier posting, you mentioned that you could use either "rank" or "assign" to display the results, but you chose to use "assign" to demonstrate some more LINQ capabilities. However, I am not getting the expected results with using "assign" alone. Do we need to sort it somehow? Thanks! Tekin.

Hi Tekin, can you elaborate? In the last part of this sample I am doing a "select" from assign to print out the tour. Assign[i, j] == 1 means that the arc i -> j is in the tour.

Hi Nathan, the Select statement returns j's for active arcs. When you print them out with "->" I assumed that the order represented the route taken. That is not the case, however. You can confirm that by printing the full report and constructing the route from it. Thanks.

I am getting an error when putting this into Microsoft Visual Studio 2010. The line "using Microsoft.SolverFoundation.Services;" states that the namespace does not exist and asks if I am missing an assembly reference.

Thank you so much for your post… but there's one error: "Error 1: The type or namespace name 'SolverFoundation' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?)". I searched for it under References >> Add Reference >> .NET or COM but I didn't find this "SolverFoundation". What should I do? Please help me! Thank you again.

Hi Maria, you need to download and install the Express version of Solver Foundation from code.msdn.microsoft.com/solverfoundation. Make sure your project is .NET 4, NOT .NET 4 Client Profile. Otherwise you will not see Solver Foundation listed as a .NET reference.

Hi, great code, works fine for me!
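As the exchange above points out, listing the j's for which Assign[i, j] == 1 gives you the set of active arcs, not the visiting order; to print the tour you have to walk the arcs from city to city. A small sketch of that reconstruction in Python (the 0/1 matrix is invented for illustration):

```python
def tour_from_assignment(assign, start=0):
    """Follow arcs i -> j (where assign[i][j] == 1) until we return to start."""
    tour, city = [start], start
    while True:
        city = assign[city].index(1)  # the city we travel to next
        if city == start:
            break
        tour.append(city)
    return tour

assign = [
    [0, 0, 1, 0],  # city 0 -> city 2
    [1, 0, 0, 0],  # city 1 -> city 0
    [0, 0, 0, 1],  # city 2 -> city 3
    [0, 1, 0, 0],  # city 3 -> city 1
]
print(tour_from_assignment(assign))  # -> [0, 2, 3, 1]
```

Simply listing the active arcs for this matrix would print 0->2, 1->0, 2->3, 3->1 in row order, which is exactly the sorted-looking but misleading output the commenter observed.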
I am new to modelling, and I am asking myself if it is possible to add a constraint which defines that a city has to be "entered" (assigned) after a specific distance. thx, Nathalie
https://blogs.msdn.microsoft.com/natbr/2010/04/26/traveling-salesman-problems-using-solver-foundation-c-code/
After creating a game using Python 2.7, I tested the game and it worked flawlessly without any bugs. Deciding that I wanted to redistribute it to a few of my friends, I looked up some info on what I would have to do and eventually found py2exe. After carefully following all of the directions and creating my .exe, it failed to run. Running it through the command line I got the following errors: (Although I'm sure it's obvious, the name of my program is footBallHell.)

So, clueless as to what to do next, I searched Google and found this similar problem on the Python forums:

After reading that, I dug around through the Python folder, went to C:\Python27\Lib\site-packages\pygame, and found a file listed as "_view.pyd". I believed that this was the file in question that was missing, so I copied it into the dist folder hoping it would solve the problem. Unfortunately, it didn't. So, afterwards I went and added it into the directory with the rest of the relevant files, as well as the setup.py program I used to create the executable, so the program read as follows:

```python
from distutils.core import setup
import py2exe

setup(console=["footBallHell.py"],
      author="XXXX",
      author_email="XXXX",
      data_files=[('.', ["attack.ogg", "enemy.gif", "Field.gif",
                         "football.gif", "player.gif", "save.ogg",
                         "tackle.ogg", "freesansbold.ttf", "_view.pyd"])]
      )
```

After running the setup.py program, it did include the _view.pyd file in the dist folder, but the game still would not run. So, I believe I either have the wrong file or there is something else I need to do in order to get the program to run. What I am asking is if there is anyone out there who might know how to fix this issue, or, at worst, can inform me of another way I can create an executable for Windows out of my Python programs that I can distribute to my friends.
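One thing that might be worth trying (an untested assumption on my part, not a verified fix): instead of shipping _view.pyd as a data file, tell py2exe's module finder about the hidden module with the 'includes' option, so it is bundled like any other pygame module. A sketch of such a setup.py:

```python
# Hypothetical, untested variant of the setup.py above.
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

setup(
    console=["footBallHell.py"],
    # 'includes' forces modules that py2exe cannot detect into the bundle.
    options={"py2exe": {"includes": ["pygame._view"]}},
    data_files=[('.', ["attack.ogg", "enemy.gif", "Field.gif", "football.gif",
                       "player.gif", "save.ogg", "tackle.ogg", "freesansbold.ttf"])],
)
```

Since this only runs on Windows with py2exe installed, treat it purely as a configuration sketch.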
https://www.gamedev.net/topic/608340-problem-creating-an-exe-with-py2exe/
On this exercise, when we save the low_limit and high_limit as variables, do we only have to put them in as low and high? My question is: do we not have to write out the entire name of low_limit and high_limit? Also, can this be used to recall other variables as well, where you can leave out the second word?

Difference between low, high and low_limit, high_limit?

```python
def get_boundaries(target, margin):
    return low_limit = target - margin
    return high_limit = target + margin
```

I don't know why it is wrong.

```python
def get_boundaries(target, margin):
    low_limit = target - margin
    high_limit = target + margin
    return low_limit, high_limit
```

I had the same question, so I changed the names from low and high to x and y, and it worked anyway. So no, it's not about being able to shorten the name of the variable. I don't understand how it knows what I'm trying to store in low/x and high/y, though. Is it associating low with the first returned value and high with the second?

I'm perplexed by this as well. Up until this point we had to clearly define variables; at this stage somehow low and high are taking on new values without a logical connection (that I can tell, at least).

You could write out the whole name, i.e. low_limit, if you'd like, but I think just to keep it simple they used low and high, since we already put in the calculations for def get_boundaries. You could also write low_limit if you'd like… the calculation will still work!

I was also a little bit confused by that. Actually, I suppose that what made it more difficult for me to fully clarify the multiple-return-values context in the specific exercise with boundaries was the repeated use of the words "low" and "high" in the code. To keep it simpler and clearer, I tried to use letters. At least for me, this way was quite helpful. The output was of course the same.
For example:

```python
def get_boundaries(target, margin):
    L = target - margin
    H = target + margin
    return L, H

A, B = get_boundaries(100, 20)
# we added this print statement just to sanity-check our solution:
print("Low limit: " + str(A) + ", High limit: " + str(B))
```

In fact, by "A" I name the first call of the function, which corresponds to the first return value "L", i.e. the first calculation (subtraction). By "B" I name the second call of the function, which corresponds to the second return value "H", i.e. the second calculation (addition).

I want to chime in: when the function returns variable1, variable2, then calling the function like this:

```python
my_var1, my_var2 = function(input1, input2)
```

will ultimately give 2 variables, my_var1 and my_var2, that are respectively equal to what variable1 and variable2 were when the function returned. In this example, low is equal to low_limit from when we ran the function with the (100, 20) input (and the same for high). This is valuable because you could run the function again with a different input and set it to new variables:

```python
low2, high2 = get_boundaries(50, 15)
```

as an example.

I know I'm a little late to the party, but I was stuck on the same thing! After messing around with it a little bit, I figured it out. This is what my code looked like:

```python
def get_boundaries(target, margin):
    low_limit = target - margin
    high_limit = margin + target
    return low_limit, high_limit

low_limit, high_limit = get_boundaries(100, 20)
print(low_limit)
print(high_limit)

low = 80
high = 120
```

And it cleared! Seems like you just have to define low and high as the answers you got from the code.

If I understand this properly, there is no direct link between the strings 'low' and 'low_limit', and 'high' and 'high_limit'.

```python
def get_boundaries(target, margin):
    low_limit = target - margin
    high_limit = target + margin
    return low_limit, high_limit
```

Based on the above, the function get_boundaries() returns two variables, low_limit and high_limit, in that order.
If we called the function using the syntax get_boundaries(100, 20), then the function would first return low_limit as 80, then high_limit as 120. When the function get_boundaries() is preceded by low, high =, we're technically creating two new variables: low and high. These variables are defined by their position, i.e. low is equal to low_limit because low is written first and low_limit is the first variable returned by the function get_boundaries(); high is equal to high_limit, as high is in the second position and high_limit is the second variable returned by the statement. As a few people have suggested above, we could use any words in place of 'low' or 'high' to create these new variables ('a' and 'b', 'left' and 'right', etc.), as there isn't a direct link between the terms 'low' and 'low_limit'. It's not what was used that mattered here, rather how it was used. Hope that makes sense…

This is the way I was seeing it. If you were to create one variable and assign it the results, you would get the readout of both returns (80, 120). I broke the variables low and high up by containing them on their own lines and got a printout of (80, 120) (80, 120).

You need to indent the second two lines.

I think that after a return line you can't declare anything else, so the second return statement isn't returned in this case. Anyone correct me if I'm wrong.

This is what I did as well, and it's the best way, I think, to understand how it works. It's a bit confusing, as it lacks some better explanation in the lesson.

It doesn't matter what you call them; the new variables would be assigned to low_limit and high_limit in the order they are listed. For instance, if you write:

```python
high, low = get_boundaries(100, 20)
```

then high becomes low_limit and low becomes high_limit. New variable names are created outside the function so that whoever is reading the code would be able to keep track of what is being used where.
If you name the new variables low_limit and high_limit, you could easily get confused and not be sure if low_limit refers to the variable in the function or the one outside it.

Hello, the low_limit and high_limit variables inside the function are totally different from the two outside the function. So even if you type the same full form (low_limit instead of low and high_limit instead of high) outside, new variables are created with the same name. The variables defined inside a function work only inside (local scope). You can understand this by printing a variable inside a function:

```python
def get_boundaries(target, margin):
    low_limit = target - margin
    high_limit = margin + target
    return low_limit, high_limit

low, high = get_boundaries(100, 20)
print(low_limit)  # will get an error showing the variable is not defined
```

```python
def get_boundaries(target, margin):
    low_limit = target - margin
    high_limit = margin + target
    return low_limit, high_limit

low_limit, high_limit = get_boundaries(100, 20)
# these variables are different from the ones defined inside
# (totally new memory spaces with the same name)
print(low_limit)
```

Correct me if I am wrong.

What we are doing here is defining the function below:

```python
def get_boundaries(target, margin):
    low_limit = target - margin
    high_limit = target + margin
    return low_limit, high_limit

low, high = get_boundaries(100, 20)
```

print(low) will now return 80 and print(high) returns 120. This is because the variables low, high are defined to use the get_boundaries function, so they relate back to that function and use the inputs we have chosen, 100 & 20 in this case. low_limit is a variable inside the function. We just need to use the function outside of the actual function and define the results as low or high, as seen in the example above: low, high = get_boundaries(100, 20).

Just realized that you can define multiple variables with a comma; I thought I was cheating when I did it. This is making me fall in love with Python. I know I am way late.
As I have seen and learned from the sections that clarify parameters (and correct me if I am wrong), this shows that the elements low_limit & high_limit and low and high are the same because of the positional parameters given by the def get_boundaries(target, margin) function. For example, when you summon low, high = get_boundaries(100, 20), it is just using the positions of the parameters set (target, margin), which equal (100, 20), and this essentially means that low and high take the positions of the values returned by the function. It can get confusing, but I believe it all relates to how your code is positioned, either as first, second or third, in the function. If anything, I learned something new.

Why do we need to redefine something we've already defined? We've defined low_limit and high_limit with equations, and then when we tell the program get_boundaries(100, 20), this should tell it what each keyword in the equation means. Perhaps I take the ease with which computers do things for granted, but having to introduce low and high seems to be an unnecessary step, logically speaking.
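Pulling the thread together, here is a short runnable demonstration that the unpacking is purely positional and that the outer names are brand-new variables:

```python
def get_boundaries(target, margin):
    low_limit = target - margin
    high_limit = target + margin
    return low_limit, high_limit  # packed into the tuple (low_limit, high_limit)

a, b = get_boundaries(100, 20)  # a gets the 1st returned value, b the 2nd
print(a, b)                     # -> 80 120

pair = get_boundaries(50, 15)   # without unpacking you get the tuple itself
print(pair)                     # -> (35, 65)
```

The names a and b could just as well be low and high, or low_limit and high_limit; only their left-to-right position decides which returned value each one receives.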
https://discuss.codecademy.com/t/difference-between-low-high-and-low-limit-high-limit/426919
I got turned onto a fairly new tool by a friend and was interested in checking it out. It's called Microsoft StyleCop and done in the style of FxCop, but rather than analyzing assemblies it looks at code for formatting and style rules. The original use was within Microsoft for keeping code across all teams consistent. Imagine having to deal with hundreds of developers moving around an organization like Microsoft where there are dozens of major teams (Windows, Office, Visual Studio, Games, etc.) and millions of lines of code. It helps if the code is consistent in style so people moving between teams don't have to re-learn how the "style" of that new team works. Makes sense, and I can see where there's benefit, even in smaller organizations.

As an example, here's a small user interface utility class for long(ish) running operations. It's simple but works and is easy to use:

```csharp
using System;
using System.Windows.Forms;

namespace UserInterface.Common
{
    /// <summary>
    /// Utility class used to display a wait cursor
    /// while a long operation takes place and
    /// guarantee that it will be removed on exit.
    /// </summary>
    /// <example>
    /// using(new WaitCursor())
    /// {
    ///     // long running operation goes here...
    /// }
    /// </example>
    internal class WaitCursor : IDisposable
    {
        private readonly Cursor _cursor;

        public WaitCursor()
        {
            _cursor = Cursor.Current;
            Cursor.Current = Cursors.WaitCursor;
        }

        public void Dispose()
        {
            Cursor.Current = _cursor;
        }
    }
}
```

One could even argue here that the class documentation header is somewhat excessive, but this is meant to be a framework class that any application could use, and maybe it deserves the <example/> tag. Maybe it's my formatting style, but I like using the underscore prefix for class fields. This is for two reasons. First, I don't have to use "this." all over the place (so the compiler can tell between a parameter variable, a local variable, and a class variable).
Secondly, I can immediately recognize that "_cursor" is a class-wide variable. Sometimes we have a policy of only referencing variables via properties, so for example I could tell if this was a problem if I saw a method other than a getter/setter use this variable. The debate on underscore readability can be fought some other time, but for me it works.

After running StyleCop on this single file (I wasn't about to deal with all the violations in the entire solution) it created this list of violations:

Hmmm, that's a lot of problems for such a little file. Now grant you, when you run FxCop against any assembly (even Microsoft ones) you get a whack of "violations". They range from actual, real, critical errors that should be fixed, to annoyances like not enough members in a namespace. Any team using FxCop generally has to sift through all the violations and decide, as a team, what makes sense to enforce and what to ignore.

StyleCop has similar capabilities through its SourceAnalysisSettingsEditor program (buried in the Program Files directory where the tool is installed, or via right-click on the project you're performing analysis on). It allows rules to be ignored, but it's pretty simplistic.

I think one of the biggest issues with the tool is the fact that it goes all Chef Ramsey on your ass, even if it's code created by Microsoft in the first place. For example, create a new WinForms project and run source analysis on it. You'll get 20+ errors (even if you ignore the .Designer generated file). You can exclude designer files and generated files through the settings of the tool, but still it's extra work and more friction to use the tool this way. It might be debated that the boilerplate code Visual Studio generates for new files (which you can modify but again, more work) should conform to the StyleCop guidelines. After all, Microsoft produced both tools. However this would probably anger the universe, as the "new" boilerplate code would look different from the "old".
There are other niggly bits, like the tool insisting on documenting private variables, so pretty much every property, method, and variable (public, private, or otherwise) will have at least an extra 3 lines added to it should you enforce this rule. More work, more noise.

I'm somewhat torn on the formatting issues here. What it suggests doesn't completely jive with me, but that might be style. After all, the tool is designed to provide consistency of code formatting across multiple disparate sources. However, unless you're a company with *no* code and start with this tool, you'll probably be ignoring certain rules (or groups of rules) or doing a lot of work to try to bring your code to match the violations you'll stumble on. It's like writing unit tests after the fact. Unit tests are good, but writing them after the code is done (and even shipped) has a somewhat diminished cost-to-benefit ratio.

In getting this simple class up to snuff I had to fight the urge to hit Ctrl+Alt+F in ReSharper (ReSharper's default formatting totally blows the violations) and hold my nose on a few things (like scattering the code with "this." prefixes and seemingly redundant documentation headers). Documentation is a good thing, but my spidey-sense has taught me that comments mean something might be wrong with the code (not descriptive enough, should have been refactored into a well-named method, etc.). It only took a few minutes to shuffle things around, but I look at large codebases that you could point this tool at and think of weeks of code reformatting and what a cost that would be.
In any case, here's the final class with the changes to "conform" to StyleCop's way of life:

```csharp
//-----------------------------------------------------------------------
using System;
using System.Windows.Forms;

namespace UserInterface.Common
{
    /// <summary>
    /// Utility class used to display a wait cursor
    /// while a long operation takes place and
    /// guarantee that it will be removed on exit.
    /// </summary>
    internal class WaitCursor : IDisposable
    {
        /// <summary>
        /// Holds the cursor so it can be set on Dispose
        /// </summary>
        private readonly Cursor cursor;

        /// <summary>
        /// Default constructor
        /// </summary>
        public WaitCursor()
        {
            this.cursor = Cursor.Current;
            Cursor.Current = Cursors.WaitCursor;
        }

        /// <summary>
        /// Resets the cursor back to its previous state
        /// </summary>
        public void Dispose()
        {
            Cursor.Current = this.cursor;
        }
    }
}
```

I feel this is a lot of noise. Sure, it could be consistent if all files were like this, but readability is a funny thing. You want code to be readable, and to me this last version (after StyleCop) is less readable than the first. Documenting default constructors? Pretty useless in any system. What more can you say except "Create an instance of <T>". Documenting private variables? Another nitpick, but why should I? In this class you could probably rename it to be _previousCursorStyle or something more descriptive, and then what is documentation going to give me? Have I got anything extra from the tool as a result? I don't think so.

If it's all about consistency, something we've done is to share a ReSharper reformatting file which tells R# how to format code (when you press Ctrl+Alt+F or choose Reformat Code from the ReSharper menu). It has let us do things like not wrap interface implementations in regions (regions are evil) and decide how our code should be formatted, like curly braces, spacing, etc. However, it completely doesn't match StyleCop in style or form. You could probably tweak ReSharper to match StyleCop "to a certain extent", but I disagree with certain rules that are baked into the tool. For example, take "this." having to prefix a variable. To me a file full of "this" prefixes is just more noise. ReSharper agrees with me because it flags "this.PropertyName" as redundant. Maybe the debate whether it's a parameter or a field is a non-issue.
If a method is short, one can immediately identify the local variables and distinguish them from member fields and properties with a glance. If it is long, then there is probably a bigger issue than the code style: the method simply should be refactored. For whatever reason, Microsoft thinks "this." is important and more readable. Go figure.

Rules can be excluded, but it's a binary operation. Currently StyleCop doesn't have any facility for differentiating between "errors" and "warnings" or "suggestions". Maybe it should, but then with all the exclusions and errors-to-warnings you could configure, the value of the tool quickly diminishes. Enough errors being turned into warnings and you would have to argue the value of the tool at all, versus a ReSharper template.

In any case, feel free to try the tool out yourself. If you're starting with a brand new codebase and are willing to change your style to match a tool, then this might work for you. For me, even with public frameworks I'm working on, the tool seems to be more regiment and rules than being fit for purpose. I suppose if you buy into the "All your StyleCop are belong to us" mentality, it'll go far for you. For me it's just lots of extra noise that seems to provide little value, but YMMV.

Hi Bil, I'd be careful with the 4.2 drop as it affects some of the core VS features. I thought that maybe they would drop an updated version this week as a tonne of issues got resolved within the last week, but they haven't, as there are still a few open. You can see those left open here: code.msdn.microsoft.com/.../ProjectReleases.aspx. The common one that seems to affect most is that it takes down VS properties.

Quite sincerely, I disagree with everything you just said. First of all, you sound like exactly the sort of thing that prevents a single code style ("my style is way better than yours", etc.). Frankly, to a reasonable extent, every style is good; you just need to get used to it. If you don't want to use StyleCop then don't, but don't base your conclusions on the use of ReSharper. Since when can you justify your code style by how easy it is to work with a proprietary, non-free tool? Maybe it works for you, but you certainly cannot generalize it, as many people simply don't use R#.
If you don't want to use StyleCop then don't, but don't base your conclusions on the use of ReSharper. Since when can you justify your code style by how easy it is to work with a proprietary, non-free tool? Maybe it works for you, but you certainly cannot generalize it, as many people simply don't use R#.

Finally, StyleCop is just a tool; tools help us by improving or speeding up things we do, but they don't think for us. When I design code I let the tool point out possible problems or improvements, but it is my brain that is in charge of deciding whether they really are or not.

I prefer applying standard formatting automagically instead of validating it. ReSharper, e.g., does the magic for a whole solution - et voilà: all source files from all developers use the same style.

Wouldn't reformatting with ReSharper essentially show the entire file as being changed in source control? On a multi-person project it would be horrible to do that after things had been checked in... the churn would be huge. What are the other options besides StyleCop and ReSharper? Thanks

I disagree with you on not using "this" with internal properties of a class. I think it's a lot clearer that way, and it causes IntelliSense to get a hint that I want something so it displays a useful list.

I found another program that does this called CodeIt.Right, and it's a lot like FxCop but useful. It finds this kind of stuff and then offers to fix it for you, so instead of just validating it will validate and then correct (but only if you tell it to do so). It's a really easy way to enforce coding style as well as a bunch of other stuff. The only drawback is that it isn't free (it's not expensive though), but then neither is ReSharper, so whatever.

"Goes all Chef Ramsey on your ass" LOL. Oh, man, that was worth the price of admission. *wipes a tear from his eye*

I just created a plugin for ReSharper which highlights StyleCop violations in realtime.
My plan is to update this to have the ability to automatically fix some of the issues (either using StyleCop if they release this functionality, or by writing my own). If anyone is interested, I've put it on CodePlex.
http://weblogs.asp.net/bsimser/archive/2008/07/04/microsoft-stylecop-totalitarian-rules.aspx
Code:

import javax.swing.JOptionPane;
import javax.swing.JDialog;
import javax.swing.JFrame;

public class apr2 {

    public static void main(String[] args) {

        int numOfGrades = 0;    // number of grades
        int[] scores;           // Array of grades

        // Get number of grades to be entered
        String numOfGradesString = JOptionPane.showInputDialog(null,
                "Please enter the number of grades to be averaged:",
                "Easy Grader 1.0", JOptionPane.QUESTION_MESSAGE);

        // Convert string into integer
        numOfGrades = Integer.parseInt(numOfGradesString);

        // Create array scores
        scores = new int[numOfGrades];

        // gets grades and avgs grade
        for (int i = 0; i < scores.length; i++) {
            String scoreString = JOptionPane.showInputDialog(null,
                    "Please enter a score:", "Easy Grader 1.0",
                    JOptionPane.QUESTION_MESSAGE);

            // Convert string into integer
            scores[i] = Integer.parseInt(scoreString);

            int sum = 0;
            for (int x = 0; x < scores.length; i++)
                sum += scores[x] / numOfGrades;
        }

        // Display the result ----> problem area:
        JOptionPane.showMessageDialog(null, "Your final grade is: " sum + \n "Your letter grade earned is :" + \n "Total number of grades entered: " numOfGrades, JOptionPane.INFORMATION_MESSAGE "Easy Grader 1.0);

        System.exit();
    }
}

Error msg: ')' expected

This is a project for my Java class that the instructor came up with. He expects that if he can do it, we can do it. The only problem is he has a bachelor's degree in this stuff from a really good school, and this is my first Java class. I'm not sure if I am on the right track or not. I don't know how to fix that error, and I looked at almost every Swing tutorial there is and I just can't get it. I'm probably going to fail.
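For what it's worth, the ')' expected error points at the showMessageDialog call: the string pieces are placed next to each other without +, the \n escapes sit outside the quotes, and the final "Easy Grader 1.0 string is never closed. A sketch of one way to build that message (the helper name buildMessage is my own stand-in, not from the original post, and the letter grade is left out since the original never computes one):

```java
// Hypothetical repair sketch for the "problem area" above.
// Assembling the message in a helper makes the concatenation easy to check:
// every piece is joined with '+', and "\n" lives inside the quotes.
public class MessageFix {
    static String buildMessage(int sum, int numOfGrades) {
        return "Your final grade is: " + sum + "\n"
                + "Total number of grades entered: " + numOfGrades;
    }

    public static void main(String[] args) {
        // The dialog call would then read (argument order: message, title, type):
        // JOptionPane.showMessageDialog(null, buildMessage(sum, numOfGrades),
        //         "Easy Grader 1.0", JOptionPane.INFORMATION_MESSAGE);
        System.out.println(buildMessage(85, 3));
    }
}
```

Note also that sum is declared inside the outer loop, so it is out of scope by the time the dialog runs; the inner loop increments i instead of x, so it never terminates normally; and System.exit() needs an argument, e.g. System.exit(0);.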
http://www.dreamincode.net/forums/topic/166045-problem-if-joption-output-message/page__p__979562
Hi, I'm working through the You Can Do It! book by Francis Glassborow and am stuck on the chapter that introduces functions. Can anyone tell me why this won't build?

Code:

#include <iostream>
#include "playpen.h"
#include "drawing_functions.h"
using namespace fgw;
using namespace std;

int main(){
    playpen paper;
    paper.scale(3);
    draw_a_cross(paper,-4,0,9,0,-4,9,black);
    paper.display();
    cout << "press return to end";
    cin.get();
}

The drawing functions header I made contains:

Code:

#ifdef DRAWING_FUNCTIONS_H
#define DRAWING_FUNCTIONS_H
#include "playpen.h"

void draw_a_cross(fgw::playpen &,
                  int left_of_cross_piece_x,
                  int left_of_cross_piece_y,
                  int width_of_cross,
                  int bottom_of_cross_piece_x,
                  int bottom_of_cross_piece_y,
                  int height_of_cross,
                  fgw::hue);

#endif

The playpen header makes all the paper and fgw stuff work fine. The IDE I'm using is Quincy. Any help would be greatly appreciated. :)
http://cboard.cprogramming.com/cplusplus-programming/88892-beginners-problem-first-funtion-printable-thread.html
I'm working with VideoTextures in Blender for the first time, and I have a little problem. I know that my webcam is a Video4Linux2 device - is it possible to make it work with the current Blender version? I saw this post -> . ... hlight=v4l and I thought maybe there is already a solution for this (because the post is from 2008). This is the code that I use - is there something wrong?

Code: Select all

import GameLogic as G
import VideoTexture as VT

def init_world():
    cont = G.getCurrentController()
    obj = cont.owner
    matID = VT.materialID(obj, 'MAvideo')
    G.video = VT.Texture(obj, matID)
    S1 = G.expandPath("/dev/video0")
    video_source = VT.VideoFFmpeg(S1, 1, 25.0, 320, 240)
    video_source.repeat = -1
    video_source.scale = True
    #video_source.flip = False
    #video_source.deinterlace = True
    G.video.source = video_source
    G.video.source.play()
    #sound = cont.actuators["a_sound"]
    #cont.activate(sound)

def update():
    G.video.refresh(True)
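One low-risk sanity check (my addition, not from the original post): confirm the device node actually exists before handing it to VideoFFmpeg, so a missing /dev/video0 fails loudly instead of silently showing a black texture. This helper is plain standard-library Python and runs outside Blender:

```python
# Hypothetical helper: verify a V4L2 device node exists before use.
import os

def find_video_device(preferred="/dev/video0", dev_dir="/dev"):
    """Return preferred if it exists, else the first video* node found
    in dev_dir, else None."""
    if os.path.exists(preferred):
        return preferred
    if not os.path.isdir(dev_dir):
        return None
    candidates = sorted(
        os.path.join(dev_dir, name)
        for name in os.listdir(dev_dir)
        if name.startswith("video")
    )
    return candidates[0] if candidates else None
```

In init_world() you could then call find_video_device() and skip creating the VideoFFmpeg source (or print a warning) when it returns None.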
https://www.blender.org/forum/viewtopic.php?t=22571&view=previous
Welcome to another exciting tutorial! The code for this tutorial was written by Ben Humphrey, and is based on the GL framework from lesson 1. By now you should be a GL expert {grin}, and moving the code into your own base code should be a snap!

This tutorial will teach you how to create cool looking terrain from a height map. For those of you that have no idea what a height map is, I will attempt a crude explanation. A height map is simply... displacement from a surface. For those of you that are still scratching your heads asking yourselves "what the heck is this guy talking about!?!"... in English, our heightmap represents low and high points for our landscape. It's completely up to you to decide which shades represent low points and which shades represent high points. It's also important to note that height maps do not have to be images... you can create a height map from just about any type of data. For instance, you could use an audio stream to create a visual height map representation. If you're still confused... keep reading... it will all start to make sense as you go through the tutorial :)

    #include <windows.h>      // Header File For Windows
    #include <stdio.h>        // Header File For Standard Input/Output ( NEW )
    #include <gl\gl.h>        // Header File For The OpenGL32 Library
    #include <gl\glu.h>       // Header File For The GLu32 Library
    #include <gl\glaux.h>     // Header File For The Glaux Library

    #pragma comment(lib, "opengl32.lib")    // Link OpenGL32.lib
    #pragma comment(lib, "glu32.lib")       // Link Glu32.lib

We start off by defining a few important variables. MAP_SIZE is the dimension of our map. In this tutorial, the map is 1024x1024. The STEP_SIZE is the size of each quad we use to draw the landscape. By reducing the step size, the landscape becomes smoother. It's important to note that the smaller the step size, the more of a performance hit your program will take, especially when using large height maps. The HEIGHT_RATIO is used to scale the landscape on the y-axis.
A low HEIGHT_RATIO produces flatter mountains. A high HEIGHT_RATIO produces taller / more defined mountains. Further down in the code you will notice bRender. If bRender is set to true (which it is by default), we will draw solid polygons. If bRender is set to false, we will draw the landscape in wire frame.

    #define MAP_SIZE     1024    // Size Of Our .RAW Height Map ( NEW )
    #define STEP_SIZE    16      // Width And Height Of Each Quad ( NEW )
    #define HEIGHT_RATIO 1.5f    // Ratio That The Y Is Scaled According To The X And Z ( NEW )

    bool bRender = TRUE;         // Polygon Flag Set To TRUE By Default ( NEW )

Here we make an array (g_HeightMap[ ]) of bytes to hold our height map data. Since we are reading in a .RAW file that just stores values from 0 to 255, we can use the values as height values, with 255 being the highest point, and 0 being the lowest point. We also create a variable called scaleValue for scaling the entire scene. This gives the user the ability to zoom in and out.

    BYTE g_HeightMap[MAP_SIZE*MAP_SIZE];    // Holds The Height Map Data ( NEW )
    float scaleValue = 0.15f;               // Scale Value For The Terrain ( NEW )

    LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);    // Declaration For WndProc

The ReSizeGLScene() code is the same as lesson 1 except the farthest distance has been changed from 100.0f to 500.0f.

    GLvoid ReSizeGLScene(GLsizei width, GLsizei height)    // Resize And Initialize The GL Window
    {
        ... CUT ...
    }

The following code loads in the .RAW file. Not too complex! We open the file in Read/Binary mode. We then check to make sure the file was found and that it could be opened. If there was a problem opening the file for whatever reason, an error message will be displayed.

    // Loads The .RAW File And Stores It In pHeightMap
    void LoadRawFile(LPSTR strName, int nSize, BYTE *pHeightMap)
    {
        FILE *pFile = NULL;

        // Open The File In Read / Binary Mode.
        pFile = fopen( strName, "rb" );

        // Check To See If We Found The File And Could Open It
        if ( pFile == NULL )
        {
            // Display Error Message And Stop The Function
            MessageBox(NULL, "Can't Find The Height Map!", "Error", MB_OK);
            return;
        }

If we've gotten this far, then it's safe to assume there were no problems opening the file. With the file open, we can now read in the data. We do this with fread(). pHeightMap is the storage location for the data (a pointer to our g_HeightMap array). 1 is the size of each item to load (1 byte at a time), and nSize is the number of items to read (the image size in bytes - width of image * height of image). Finally, pFile is a pointer to our file structure! After reading in the data, we check to see if there were any errors. We store the result in result and then check it. If an error did occur, we pop up an error message. The last thing we do is close the file with fclose(pFile).

        // Here We Load The .RAW File Into Our pHeightMap Data Array
        // We Are Only Reading In 1 Byte At A Time, And The Count Is (Width * Height)
        fread( pHeightMap, 1, nSize, pFile );

        // After We Read The Data, It's A Good Idea To Check If Everything Read Fine
        int result = ferror( pFile );

        // Check If We Received An Error
        if (result)
        {
            MessageBox(NULL, "Failed To Get Data!", "Error", MB_OK);
        }

        // Close The File
        fclose(pFile);
    }

The init code is pretty basic. We set the background clear color to black, set up depth testing, polygon smoothing, etc. After doing all that, we load in our .RAW file. To do this, we pass the filename ("Data/Terrain.raw"), the dimensions of the .RAW file (MAP_SIZE * MAP_SIZE) and finally our height map array (g_HeightMap) to LoadRawFile(). This will jump to the .RAW loading code above. The .RAW file will be loaded, and the data will be stored in our height map array (g_HeightMap).

    // Here we read in the height map from the .raw file and put it in our
    // g_HeightMap array. We also pass in the size of the .raw file (1024).
    LoadRawFile("Data/Terrain.raw", MAP_SIZE * MAP_SIZE, g_HeightMap);    // ( NEW )

    return TRUE;    // Initialization Went OK
    }

This is used to index into our height map array. Whenever we are dealing with arrays, we want to make sure that we don't go outside of them. To make sure that doesn't happen we use %. % will prevent our x / y values from exceeding MAP_SIZE - 1. We check to make sure pHeightMap points to valid data; if not, we return 0. Otherwise, we return the value stored at x, y in our height map. By now, you should know that we have to multiply y by the width of the image (MAP_SIZE) to move through the data. More on this below!

    int Height(BYTE *pHeightMap, int X, int Y)    // This Returns The Height From A Height Map Index
    {
        int x = X % MAP_SIZE;    // Error Check Our x Value
        int y = Y % MAP_SIZE;    // Error Check Our y Value

        if(!pHeightMap) return 0;    // Make Sure Our Data Is Valid

We need to treat the single array like a 2D array. We can use the equation: index = (x + (y * arrayWidth)). This is assuming we are visualizing it like pHeightMap[x][y]; otherwise it's the opposite: (y + (x * arrayWidth)). Now that we have the correct index, we will return the height at that index (the data at x, y in our array).

        return pHeightMap[x + (y * MAP_SIZE)];    // Index Into Our Height Array And Return The Height
    }

Here we set the color for a vertex based on the height index. To make it darker, I start with -0.15f. We also get a ratio of the color from 0.0f to 1.0f by dividing the height by 256.0f. If there is no data this function returns without setting the color. If everything goes ok, we set the color to a shade of blue using glColor3f(0.0f, 0.0f, fColor). Try moving fColor to the red or green spots to change the color of the landscape.
    void SetVertexColor(BYTE *pHeightMap, int x, int y)    // Sets The Color Value For A Particular Index,
    {                                                      // Depending On The Height Index
        if(!pHeightMap) return;    // Make Sure Our Height Data Is Valid

        float fColor = -0.15f + (Height(pHeightMap, x, y ) / 256.0f);

        // Assign This Blue Shade To The Current Vertex
        glColor3f(0.0f, 0.0f, fColor );
    }

This is the code that actually draws our landscape. X and Y will be used to loop through the height map data. x, y and z will be used to render the quads making up the landscape. As always, we check to see if the height map (pHeightMap) contains data. If not, we return without doing anything.

    void RenderHeightMap(BYTE pHeightMap[])    // This Renders The Height Map As Quads
    {
        int X = 0, Y = 0;    // Create Some Variables To Walk The Array With
        int x, y, z;         // Create Some Variables For Readability

        if(!pHeightMap) return;    // Make Sure Our Height Data Is Valid

Since we can switch between lines and quads, we check our render state with the code below. If bRender = TRUE, then we want to render polygons, otherwise we render lines.

        if(bRender)                 // What We Want To Render
            glBegin( GL_QUADS );    // Render Polygons
        else
            glBegin( GL_LINES );    // Render Lines Instead (smooth)

Next we loop through the height map data in STEP_SIZE increments and build each quad from four vertices: bottom left, top left, top right and bottom right.

        for ( X = 0; X < (MAP_SIZE-STEP_SIZE); X += STEP_SIZE )
            for ( Y = 0; Y < (MAP_SIZE-STEP_SIZE); Y += STEP_SIZE )
            {
                // Get The (X, Y, Z) Value For The Bottom Left Vertex
                x = X;
                y = Height(pHeightMap, X, Y );
                z = Y;

                // Set The Color Value Of The Current Vertex
                SetVertexColor(pHeightMap, x, z);
                glVertex3i(x, y, z);    // Send This Vertex To OpenGL To Be Rendered

                // Get The (X, Y, Z) Value For The Top Left Vertex
                x = X;
                y = Height(pHeightMap, X, Y + STEP_SIZE );
                z = Y + STEP_SIZE;

                SetVertexColor(pHeightMap, x, z);
                glVertex3i(x, y, z);    // Send This Vertex To OpenGL To Be Rendered

                // Get The (X, Y, Z) Value For The Top Right Vertex
                x = X + STEP_SIZE;
                y = Height(pHeightMap, X + STEP_SIZE, Y + STEP_SIZE );
                z = Y + STEP_SIZE;

                SetVertexColor(pHeightMap, x, z);
                glVertex3i(x, y, z);    // Send This Vertex To OpenGL To Be Rendered

                // Get The (X, Y, Z) Value For The Bottom Right Vertex
                x = X + STEP_SIZE;
                y = Height(pHeightMap, X + STEP_SIZE, Y );
                z = Y;

                SetVertexColor(pHeightMap, x, z);
                glVertex3i(x, y, z);    // Send This Vertex To OpenGL To Be Rendered
            }
        glEnd();

After we are done, we set the color back to bright white with an alpha value of 1.0f.
If there were other objects on the screen, we wouldn't want them showing up BLUE :)

        glColor4f(1.0f, 1.0f, 1.0f, 1.0f);    // Reset The Color
    }

For those of you who haven't used gluLookAt(), what it does is position your camera position, your view, and your up vector. Here we set the camera in an obscure position to get a good outside view of the terrain. In order to avoid using such high numbers, we would divide the terrain's vertices by a scale constant, like we do in glScalef() below.

The values of gluLookAt() are as follows: the first three numbers represent where the camera is positioned. So the first three values move the camera 212 units on the x-axis, 60 units on the y-axis and 194 units on the z-axis from our center point. The next 3 values represent where we want the camera to look. In this tutorial, you will notice while running the demo that we are looking a little to the left. We also look down towards the landscape. 186 is to the left of 212, which gives us the look to the left, and 55 is lower than 60, which gives us the appearance that we are higher than the landscape, looking at it with a slight tilt (seeing a bit of the top of it). The value of 171 is how far away from the camera the object is. The last three values tell OpenGL which direction represents up. Our mountains travel upwards on the y-axis, so we set the value on the y-axis to 1. The other two values are set at 0.

gluLookAt() can be very intimidating when you first use it. After reading the rough explanation above you may still be confused. My best advice is to play around with the values. Change the camera position. If you were to change the y position of the camera to say 120, you would see more of the top of the landscape, because you would be looking all the way down to 55.

I'm not sure if this will help, but I'm going to break into one of my highly flamed real life "example" explanations :) Let's say you are 6 feet and a bit tall.
Let's also assume your eyes are at the 6 foot mark (your eyes represent the camera - 6 feet is 6 units on the y-axis). Now if you were standing in front of a wall that was only 2 feet tall (2 units on the y-axis), you would be looking DOWN at the wall and would be able to see the top of the wall. If the wall was 8 feet tall, you would be looking UP at the wall and you would NOT see the top of the wall. The view would change depending on whether you were looking up or down (whether you were higher than or lower than the object you are looking at). Hope that makes a bit of sense!

    int DrawGLScene(GLvoid)    // Here's Where We Do All The Drawing
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // Clear The Screen And The Depth Buffer
        glLoadIdentity();                                      // Reset The Matrix

        //        Position       View      Up Vector
        gluLookAt(212, 60, 194,  186, 55, 171,  0, 1, 0);    // This Determines The Camera's Position And View

This will scale down our terrain so it's a bit easier to view and not so big. We can change this scaleValue by using the UP and DOWN arrows on the keyboard. You will notice that we multiply the Y scaleValue by HEIGHT_RATIO as well. This is so the terrain appears higher and gives it more definition.

        glScalef(scaleValue, scaleValue * HEIGHT_RATIO, scaleValue);

If we pass the g_HeightMap data into our RenderHeightMap() function, it will render the terrain in quads. If you are going to make any use of this function, it might be a good idea to put in an (X, Y) parameter to draw it at, or just use OpenGL's matrix operations (glTranslatef(), glRotatef(), etc.) to position the land exactly where you want it.

        RenderHeightMap(g_HeightMap);    // Render The Height Map

        return TRUE;    // Keep Going
    }

The KillGLWindow() code is the same as lesson 1.

    GLvoid KillGLWindow(GLvoid)    // Properly Kill The Window
    {
    }

The CreateGLWindow() code is also the same as lesson 1.
    BOOL CreateGLWindow(char* title, int width, int height, int bits, bool fullscreenflag)
    {
    }

The only change in WndProc() is the addition of WM_LBUTTONDOWN. What it does is check to see if the left mouse button was pressed. If it was, the rendering state is toggled from polygon mode to line mode, or from line mode to polygon mode.

    LRESULT CALLBACK WndProc(HWND hWnd,        // Handle For This Window
                             UINT uMsg,        // Message For This Window
                             WPARAM wParam,    // Additional Message Information
                             LPARAM lParam)    // Additional Message Information
    {
        switch (uMsg)    // Check For Windows Messages
        {
            case WM_ACTIVATE:    // Watch For Window Activate Message
            {
                if (!HIWORD(wParam))    // Check Minimization State
                {
                    active=TRUE;    // Program Is Active
                }
                else
                {
                    active=FALSE;    // Program Is No Longer Active
                }
                return 0;    // Return To The Message Loop
            }

            case WM_SYSCOMMAND:    // Intercept System Commands
            {
                switch (wParam)    // Check System Calls
                {
                    case SC_SCREENSAVE:      // Screensaver Trying To Start?
                    case SC_MONITORPOWER:    // Monitor Trying To Enter Powersave?
                    return 0;                // Prevent From Happening
                }
                break;    // Exit
            }

            case WM_CLOSE:    // Did We Receive A Close Message?
            {
                PostQuitMessage(0);    // Send A Quit Message
                return 0;              // Jump Back
            }

            case WM_LBUTTONDOWN:    // Did We Receive A Left Mouse Click?
            {
                bRender = !bRender;    // Change Rendering State Between Fill/Wire Frame
                return 0;              // Jump Back
            }

            case WM_KEYDOWN:    // Is A Key Being Held Down?
            {
                keys[wParam] = TRUE;    // If So, Mark It As TRUE
                return 0;               // Jump Back
            }

            case WM_KEYUP:    // Has A Key Been Released?
            {
                keys[wParam] = FALSE;    // If So, Mark It As FALSE
                return 0;                // Jump Back
            }

            case WM_SIZE:    // Resize The OpenGL Window
            {
                ReSizeGLScene(LOWORD(lParam),HIWORD(lParam));    // LoWord=Width, HiWord=Height
                return 0;    // Jump Back
            }
        }

        // Pass All Unhandled Messages To DefWindowProc
        return DefWindowProc(hWnd,uMsg,wParam,lParam);
    }

No major changes in this section of code. The only notable change is the title of the window.
Everything else is the same up until we check for key presses.

    if (!CreateGLWindow("NeHe & Ben Humphrey's Height Map Tutorial", 640, 480, 16, fullscreen))
    {
        return 0;    // Quit If Window Was Not Created
    }

    if (active)    // Not Time To Quit, Update Screen
    {
        SwapBuffers(hDC);    // Swap Buffers (Double Buffering)
    }

The code below lets you increase and decrease the scaleValue. By pressing the up key, the scaleValue is increased, making the landscape larger. By pressing the down key, the scaleValue is decreased, making the landscape smaller.

    if (keys[VK_UP])      // Is The UP ARROW Being Pressed?
        scaleValue += 0.001f;    // Increase The Scale Value To Zoom In

    if (keys[VK_DOWN])    // Is The DOWN ARROW Being Pressed?
        scaleValue -= 0.001f;    // Decrease The Scale Value To Zoom Out

    // Shutdown
    KillGLWindow();         // Kill The Window
    return (msg.wParam);    // Exit The Program
    }

That's all there is to creating a beautiful height mapped landscape. I hope you appreciate Ben's work! As always, if you find mistakes in the tutorial or the code, please email me, and I will attempt to correct the problem / revise the tutorial.

Once you understand how the code works, play around a little. One thing you could try doing is adding a little ball that rolls across the surface. You already know the height of each section of the landscape, so adding the ball should be no problem. Other things to try: create the heightmap manually, make it a scrolling landscape, add colors to the landscape to represent snowy peaks / water / etc, add textures, use a plasma effect to create a constantly changing landscape. The possibilities are endless :)

Hope you enjoyed the tut! You can visit Ben's site at:.

Ben Humphrey (DigiB

* DOWNLOAD JoGL Code For This Lesson. ( Conversion by Abdul Bezrati )
* DOWNLOAD LCC Win32 Code For This Lesson. ( Conversion by Robert Wishlaw )
* DOWNLOAD Linux/GLX Code For This Lesson. ( Conversion by Patrick Schubert )
* DOWNLOAD Mac OS X/Cocoa Code For This Lesson.
( Conversion by Bryan Blackburn )
* DOWNLOAD Visual Studio .NET Code For This Lesson. ( Conversion by Grant James )

NeHe™ and NeHe Productions™ are trademarks of GameDev.net, LLC
OpenGL® is a registered trademark of Silicon Graphics Inc.
http://nehe.gamedev.net/tutorial/beautiful_landscapes_by_means_of_height_mapping/16006/
CC-MAIN-2015-40
refinedweb
3,013
72.46
Asked by: How to tell when a ReportViewer Control has finished processing the report?

Question

I understand there is supposed to be a RenderingComplete event for the ReportViewer Control - I do not see this in the list of available events. What I am trying to do is identify when the ReportViewer control has completed processing. Also, can this information be passed on to a jQuery or JavaScript event?

Basically, I want to show my own "report is processing" animation on the page until the report finishes rendering. This is because the report height is somewhat large and you cannot see the report processing animation that shows in the middle of the ReportViewer control unless you scroll down on the page. I want something visible at the top of the page so that my users don't think the report is hung up. I am using VS2008 with SSRS 2008. Any suggestions would be appreciated.

All replies

VS2010 exposes a client-side ReportViewer JS object. You can hook up to the ReportViewer.isLoading property. There's sample code there that shows you how.

Cephas Lin
This posting is provided "AS IS" with no warranties.

My C# code hooked it like this:

    mrvMicrosoftViewer.RenderingComplete += new RenderingCompleteEventHandler(RenderingCompleteEventHandler);

and a method like this:

    public void RenderingCompleteEventHandler(object sender, RenderingCompleteEventArgs e)
    {
        // do something here ...
    }

This is in C# in VS2008.

Hello BernieHunt,

Sorry - my reply about the Visual Studio version was actually to Cephas Lin - I guess you posted right before I did and it got out of order. My client prefers VB to C# (although I do not), so I am using VB as the language. I do have both of the references you mentioned: Microsoft.ReportViewer.Common and Microsoft.ReportViewer.WinForms, and I do not see the event as an option in the ReportViewer properties (not sure it should show up there, but that's where I'm looking).
I am incidentally doing this in an ASP.NET web application rather than in WinForms. I would have tried to convert your code, but this is an area I am unfamiliar with... do you have suggestions for me? I would try the DeveloperFusion site with its Convert C# to VB.NET page, but their site is currently not working. Thank you.

It looks like you might be out of luck with RenderingComplete. But then this says to use one, in relation to a webform. I can't find a reference to either actually being in the Reporting.WebForms namespace, so I don't think it exists in a WebForm.

Bernie

Hello Cephalin, no problem. I actually AM using WebForms, but am using VS2008 rather than VS2010, so I don't have access to the feature you mentioned. Unless I can recreate it? I just wanted a simple way to show on the web page that the ReportViewer control was still processing, but put it at the top of the web page where the user would be more likely to see it, instead of relying on the ReportViewer control's own processing animation, which isn't visible unless you scroll down the page. Thanks for trying.

Carolyn,

Maybe there is a way to do this through the user interface. What about making the report viewer smaller so they can see the processing animation while it's rendering? Put a button next to the report viewer that says "expand" or something like that. The button will resize the report viewer to a useful size for them to view the report. Just a crazy idea, but it may work for you.

Bernie

Bernie, that is an interesting thought. I don't think my users would like the extra step - they would probably just prefer the wait period. Way to go for being creative at least! You're right about one thing: I may need to think outside the box a bit on this one, since there is no built-in or easy option available! I appreciate your input.

Carolyn

OK, now you've encouraged me, so you get all the wacky ideas, hahaha.
How about this: when the user requests the report, you give them a page with enough animation to keep them occupied. Render the report on the server side and export it to PDF. When that's done, redirect the user to the PDF file. I'm not sure of all the details on this, but I think it can be done. I'm a desktop developer, so I may be missing something here.

Bernie
https://social.msdn.microsoft.com/Forums/en-US/dfefe062-4708-4c79-a2ea-3d6e98dad664/how-to-tell-when-a-reportviewer-control-has-finished-processing-the-report?forum=vsreportcontrols
CC-MAIN-2020-45
refinedweb
752
66.33
It has been noted by non-me people that this website is an unusual place. Specifically, we have an uncharacteristically genteel and polite community by internet standards. Very few communities have the sort of low-key and thoughtful disagreement we see here, even ones with more stringent rules, fewer people, and more moderator coverage. In fact, you'll notice there are basically no rules aside from the advice at the bottom: "Thanks for joining the discussion. Be nice, don't post angry, and enjoy yourself. This is supposed to be fun." That's pretty vague as far as rules go, and you wouldn't expect it to keep the trolls away. In fact, it doesn't.

The interesting thing about this is that I do very little in the way of moderation. Aside from requisite spam-handling, a vanishingly small percentage of all posts actually require my attention. I read them all - even comments on posts from years ago - and I step in when I think things are getting nasty. A good week will see anywhere from 400 to 1,000 comments, depending on how often I'm posting and how much anyone cares. I have to step in to warn people or delete posts perhaps once or twice a month. That means less than one in a thousand comments presents a problem. Compare this to YouTube, where the ratio of insight to insipid is rarely better than 1:1. (And sometimes a lot worse.)

So there's only one moderator and no rules. Yet we've got good spelling, coherent discussion, and a calm tone. And unlike most forums, posting is open, so there's less direct accountability. So why don't the comments here devolve into the usual YouTube-level sewer of hate as performance art? What makes this site so special?

It might be counter-intuitive, but the reason this place works so well is because there aren't any written rules. I've said in the past that I like to keep the line blurry in order to encourage people to stay away from it. In my estimation, the world looks like this:
In my estimation, the world looks like this: In any random cross-section of internet society, you’ve got a couple of people who won’t ever stoop to unpleasantness. If they don’t have something nice to say, they won’t say anything. If the conversation turns sour, they simply leave or go quiet. These conversational saints are great to read and fun to have around. I aspire to be a saint, but if I’m being honest then all too often I fall into the next group… Most of the population consists of basically decent people who are willing to respond in kind. If you sling mud at them, they sling it right back. If you cuss, they cuss back. They prefer to inhabit civil places, but if they can’t have civility then they’ll make sure they have justice. Just like meeting in person, most of us tend to adopt the tone and posture of the environment around us. If it’s hostile, we’re hostile. If it’s gentle, we’re not eager to be the first person to raise our voice. And then, in every sample, you’re likely to have a tiny minority of completely batshit crazy moron assholes. This last group is obviously the root of the problem. If you let them run rampant, the saints will leave, and the normal people will sink down to their level. People will get angry, reactions will intensify, people will begin to hate and resent each other, and the conversation will degenerate. This is inevitable. A lot of things can put someone into this last group. Maybe they’re performing for attention and don’t care how destructive they’re being. Maybe they have a bunch of pain in their lives and they’re trying to share it. Maybe they were raised in some messed-up abusive environment and aggressive hate is their normal. Maybe they just aren’t very good at communicating. It doesn’t matter. They’re broken, and as a moderator you don’t have the power to fix them. The problem is in their heart, and nothing you say or do can make them care about others. 
Most communities are built around the idea of getting these people to behave. This is a mistake. Broken people cannot be fixed by rules. If you make the rules loose, they will find weak spots and exploit them. If you make the rules tight and specific, they will rules-lawyer you to the brink of insanity. They will haggle over the specifics of the rules, and they will insist everyone be held to precisely the same standards. If you let someone else slide, the nut will condemn you as a hypocrite or accuse you of injustice. We’ve all seen a rule along the lines of, “Don’t post hateful or abusive comments.” The thing is, sane people know this. They understand it without being told. Nobody needs to post rules on the door to Olive Garden telling customers not to spit or punch. If someone breaks these rules then they’re sick, and we call the cops. The crazy people are the only ones who need these things explained to them, and even when you do explain it to them, they just see your rules as a problem to solve. The problem isn’t that they broke the rules regarding saying hateful things, the problem is that they wanted to say something hateful in the first place. Instead of making rules to compel crazies to behave – which can become a full-time enforcement project – I allow them to act out. And then I ban them. I want to know who the crazy people are, as fast as possible. The sooner they reveal their character, the sooner I can pull them out of the pool before they make a mess. This isn’t hard. Problem People are usually easy to spot. Now, in the context of an open system like this blog, “banning” doesn’t mean much. People can change personal details and come in as someone new. But so what? If someone assumes a new identity, they still have to pass the sanity test. They still have to behave like a human being. And if a banned person assumes a new identity and then behaves in a civilized manner? That’s not a flaw in enforcement. That’s mission accomplished.
This system lets me give slack in a way that a strict set of rules doesn’t. If Ann Commenter hangs around for several weeks being generally sane and polite, then I can cut her some slack if she screws up. Maybe she’s having a bad day. Maybe the topic drifted into something that’s deeply personal to her and set her off. Maybe she’s got some stress in her life. Maybe she misunderstood what someone else said. Maybe I’m misunderstanding what she’s saying. With all of this in mind: I have created forums for this site. They’ve been sort of spreading by word-of-mouth over the last couple of months. I’ve watched them long enough that I’m reasonably confident they’re not going to endanger the conversations here on the blog, and I’m reasonably sure they aren’t going to change the tone of the site. However, the blog takes priority. If the forum diminishes the blog in any way, I’ll nuke it and we can forget the whole thing. The whole system is still on probation until we’re sure it will bring value to the community. If you’ve wanted to be able to have conversations with sane people about games that I don’t cover here on the site, then this may be what you’re looking for. Be nice, don’t post angry, and enjoy yourself. This is supposed to be fun.
170 thoughts on “Philosophy of Moderation” > nothing you say or do can make them care about others. I’ll disagree. There’s a way of speaking called NVC and I think it might help in this. Although it might be harder to do via the net. Basically you try to figure out feelings and needs (that he/she was satisfying by his actions), communicate your emotions & needs and then propose a different way to meet his/her needs. Even if moderation is the moderator’s sole day job that’s asking a lot. Even if they could do it in person. Since no moderator does it as their sole job, I’d say NVC done by moderators wouldn’t work in practice. The amount of stress and required overworking would tip the cost-gain ratio over the table and rolling under the couch. This. Now, I will admit I don’t really know more about NVC than a quick check on the wiki provided me with, but even if this is something that, when done correctly and persistently, works more often than it doesn’t, it’s really not something suited to an internet forum, if not for any other reason than because of the amount of work it would require. Even assuming the necessary amount of time, will and skill on the side of the moderator, I still fail to see how this could effectively work when the only communication channel that the moderator might have is PMs and we’re dealing with an “lol, I nonviolently communicated your mom last night, fag!” kind of individual. Although I do imagine the OP meant more that in the general sense there may be ways of “teaching” such an individual proper human forms than specifically meaning “on and through the forums.” > meant more that in the general sense Yes, I mean it’s possible in general to mitigate conflict. I have no idea how the method would work text only. > done by moderators > that the moderator might have Why only moderators? It’s possible for everyone to use.
And even if you have an individual with whom you can’t exchange f&e, the method tells you how to understand such inflammatory messages and not be insulted. Because reasonable, internet savvy people, assuming they don’t fall for the bait, will try to avoid interaction with these individuals, or call mods, or move the conversation elsewhere, or drop it altogether, or anything else other than trying to engage in some kind of lengthy mental engineering with people who are likely to not even pay attention to what is being said, whereas moderators have to do something about them. Again, I’d hate to sound dismissive, especially since I know very little about the method, but I just find it hard to imagine it being effectively applied in the kind of situation that we’re discussing. To be perfectly honest I’m somewhat sceptical of the method altogether and the, in my opinion overly optimistic, assumptions it makes about human nature (at least the way I understand them) but maybe I’m just bitter and cynical. It also goes beyond the matter of insulting. I don’t need to be insulted by a man screaming obscenities in the middle of my conversation with others to find him disruptive; my understanding of his motivations does nothing to relieve me of the burden of his, for lack of a better word, input. It might work with some people, but there are always the people who will be a jerk until someone who is bigger, louder, or with more authority gets them to stop. The only problem is online you really only can ignore them or hope a mod bans them. Then again I could just be horribly jaded, working retail/customer service your entire life will do that to a person. Those ppl whom you call “jerks” do things you don’t like for a reason. The idea is to find the source and act on it.
I’ve been reading a book called “The Sociopath Next Door.” The gist is something it’s hard for most people to wrap their heads around because we’ve been raised in a world that indoctrinates us with the untrue idea that “all humans are inherently good.” No, all people are inherently terrible, wicked creatures, we naturally want to be good because we’re raised in a society where we’re expected to be good. The gist is that there’s a small segment of the population, about 4%, who have no conscience. And no, NOTHING you do can EVER convince them that they should, because they see it as a flaw. I’m fairly sure I know one such person. I may know more, but I haven’t managed to detect them yet. Like Shamus said. You can not fix these people. You can’t teach them to behave because they have nothing to gain from behaving. You can’t make them want to behave, because they have nothing to gain from WANTING to behave. You can only spot them as quickly as possible, and get rid of them before they do too much damage. Also, I have seen cases where “nonviolent communication” actually makes the subject MORE irritable. Sometimes that subject was me. I’ve been reading a book called “Non-Violent Communication”. The gist is something it’s hard for most to wrap their heads around because we’ve been raised in a world that indoctrinates us to classify, analyze and determine levels of goodness/wrongness. No, all people do what they do to meet their needs and only sometimes we’ve got conflicting strategies. I don’t think either of you (or rather, the sources either of you are relying on) have really solid bases for your theories of human nature. Frankly, the real story about how humans think and what we’re really like, is that we don’t know. 
Much of psychology as it relates to personality and motivation has little scientific basis, and even the evolutionary psychologists are often spinning plausible “Just So Stories”, which feminist evolutionary psychologists criticize for being largely culturally based and biased. There is a lot of reliable psychological data, but it tends to be about little isolated traits and urges. Partly that’s because those are easier to study; looking for the keys where the light is. But partly it may be that there is no “Way people are” in a sense–that at base we’re all just assortments of little wants, fears, rules of thumb, reflexes and so forth, sort of superficially pasted together with, and sometimes overridden by, forebrain identity and rationality stuff. Just a lust here, a bias for a bird in the hand over two in the bush there, a predisposition to fear small things that have no fur and move suddenly the other place, with a fair amount of emergent behaviour happening when you stack them all together and use a thinking attachment to figure out how to satisfy/get/avoid all that stuff while dealing with contradictions. The gist is that there's a small segment of the population, about 4%, who have no conscience. And no, NOTHING you do can EVER convince them that they should, because they see it as a flaw. Fun fact: trying to make people have a conscience is not the same as trying to understand them. People without consciences still act rationally (unless this person you’re talking about routinely sticks their hand in a fire or something). Figure out why they do what they do, and offer them an alternative that does it better. One problem with that is there may not be such an alternative, particularly if there is no punitive approach to stuff they do bad. In screwing up other people’s lives they may well be taking what for them is an optimum path.
In such a case, the only way to get them not to do it would be to make sure the path they chose becomes less optimum, by making sure they know their lives will be hell if they do it any more. I’m not one to advocate relying heavily on retributive justice, but then a good deal of crime is not committed by people with no conscience. I really doubt the figure is as high as 4%, and I’m not convinced that it’s always inherent and unalterable either. Studies have shown that higher income people, mainly very high, have a higher proportion of psychopathic traits than lower income people; this seems to be derived from a learned sense of entitlement rather than being inherent. So yeah, I don’t think there are that many people without conscience, and some got that way by circumstance and might be educated into having one again. But when you do find someone that really is like that, understanding them won’t necessarily help much. The only thing with a chance of working is heavy doses of rewards and penalties. The problem comes up when the objective of the person in question is to cause harm. (The example given in the book, purportedly a true story, was of a successful businessman who started at a very young age torturing and killing frogs at his parents’ vacation home. He thought it was hilarious.) Sometimes causing harm is incidental, but strictly the only way in which your alternative is “better” is in that it doesn’t hurt anybody and, objectively, has several additional downsides which make it less desirable. Cooperation, or at the very least, coexistence, is an excellent ideal, and it is good to strive for. It simply happens that sometimes, given circumstances, it’s impossible, and you have to have a plan B. This is actually a terrible strategy for meeting the needs of a forum moderator, or indeed any civil society. The common citizen should be expected to give people common courtesy and the benefit of the doubt.
He should not be expected to be able to provide professional quality psychiatric help to any random jerk who accosts him. By refusing said jerk an outlet in the form of being a jerk in public, we actually encourage him to seek out better strategies, like finding a qualified therapist. It is simply more efficient to try to turn one Crazy Jack into a Han Solo, than to try to turn seven Han Solos into Mister Rogers. but there are always the people who will be a jerk until someone who is bigger, louder, or with more authority gets them to stop. Most of those people view themselves as weak and act out to prove they can influence things. If you’re able to convince them that they have worth without needing to drain it from the people around them they’ll stop on their own. The real problem is doing that when 80% of the group is just going to try to get them to shut up by out-dicking them. Works both ways though. If they just get banned the moment they act out, then it becomes a less appealing tactic for trying to influence things. It may be possible to get them to act nicer if you spend a lot of time and effort and have some way of getting them to listen. But I don’t feel like I really have that responsibility towards random people on the internet who are hassling me. And if it isn’t done pretty carefully, the message continues to be “I acted out and people paid a lot of attention to me”. “If you’re able to convince people that they have worth…” Wrong. Terribly wrong. People have to demonstrate their worth before it can be recognized, otherwise you create an entitlement society. If you give needy people an inch, they will take a mile and run rampant demanding more and more entitlements. Chaos ensues. Accountability is what matters. It’s no use making people “feel good about themselves” in return for nothing. 
Achievers behave well because they feel good about themselves for providing for others and receiving adequate compensation in return (in the form of respect, admiration, money, attention, notoriety, etc.). Those who do not behave well lack the same self-satisfaction, and they simply cannot understand it. Needing things is why they act out in the first place. If you give in to their demands, you only enable them. If one simply panders to the depraved, all they’ll do is wreak havoc. The recourse is to hold them to higher standards (and encourage higher standards and provide the means to reach them). Failing that, society must hold them accountable for their behavior. True. I moderate a Facebook forum of over 1000 and that is basically how we handle things. I think in over a year we have had to boot one. That said it takes 12 of us to moderate it and some days it is super hard and stressful. I prefer Shamus’ method. I don’t see any particular reason to be inclusive/understanding/tolerant of undesirable behavior when there are an infinite number of other online spaces in which that individual can express himself. However I agree with it in the real world. It makes a lot more sense to work with people when they do not have instant access to every community in the world simultaneously. I agree with you somewhat, but I also agree with the old saying, “Some people are only alive because it’s illegal to kill them.” Even in the real world, some individuals are simply worthless, and nothing can be done to recondition them. The hard part is figuring out who can and who can not be reconditioned, and how much the effort to “recondition” someone is worth vs. simply excising them from living society. That sounds about right. I mean, can most people be “saved”? Yes, I’d imagine that that’s true for a huge bloc of the rabble-rousers out there. Can everyone be saved? Probably not, no.
Are there some edge cases where redemption could be achieved, but man, it is going to take an unholy boatload of patient person-hours to get there from here, so it might not be worth the effort? I could see that. Either way, I don’t think the onus is on random website moderators to play therapist with rogue elements. There are only so many hours in the day; just because you maintain a website does not obligate you to make nice with each and every soul who decides to interact with it. Having defended the NVC approach above, I want to mention I agree with this. I think nearly everyone can be helped, but it’s not a mod’s job to make every random jackass a better person, and trying to is more likely to make the mod a worse one. I also think there are cases where people can only be helped after excising them from the community. You’re absolutely right. Website moderators have better things to do than play Therapist. There are people who you can go to if you need a therapist. They’re called Therapists. I’d like to believe such a thing is true, but unfortunately, and without the intention of offending you, I believe that not only is it untrue, but that believing it is, is a little naive, and maybe kind of egocentric. I think believing you are capable of understanding every single person in the world is having a little too much self-confidence. Such a thing is simply not possible. Every person is different. And even when we can generalize enough to accommodate people in certain groups, there are still going to be a number of people we can’t hope to understand. With so many different things going on inside a person’s mind it’s just simply not possible. Yes, that method you describe can be (and most likely has been) used successfully in many cases, but it is certainly impossible for it to work all the time. Furthermore, even when it seems to work it might not be the case.
For instance, the person you thought you had convinced might come back under a different name and with the same behavior, or might humor you for a while just to make you believe it’s working and then go back to doing it. Or, far more likely, he might find you boring and leave but find someone else to bother. In that last case you wouldn’t be solving the problem, you’d be transferring it to someone else. The most important thing you need to understand is that people are far less inhibited on the internet. People say things here that they’d never say in real life, and only because they have the gift of anonymity. Everyone behaves differently on the web, one way or another. Yesterday it was announced that one of the founders of the recently-funded Oculus Rift virtual headset project was killed in a traffic accident by a car running from the police due to an altercation. Visiting small places like personal blogs or less publicized sites, like Cinemablend, you’d see that comments in the articles mentioning that person’s death were all compassionate, understanding and/or at least civil thoughts on tragedies. Visiting big sites like IGN, though, you’d see an immense number of commenters resorting to cracking jokes about the dead man, deciding to blame illegal immigrants (due to the driver of the car having a Spanish last name, even though no other information about him had been released) or insulting people for related or unrelated reasons. I made a comment there offering my condolences to the family and friends, and a sort of eulogy to the man (R.I.P.), and someone replied to me saying he still wanted his Oculus Rift and he better got it. I replied to him saying I knew his intention but I refused to sink to his level. He replied to me again saying the guy’s death was my fault because I was insensitive. Of course, I refused to reply and calmly flagged his comment as inappropriate.
Yet those kinds of comments were filled with people who replied to them saying things like “You should have been under that car” and such. My point is, this kind of behavior not only depends on being on the internet and not on real life, but it’s also influenced by the size of the group. The more people there are, the more easily the verbal violence escalates and the harder the situation becomes to control. There really is no simple answer. Believe me, I wish there was, but I can’t just make that come true by wishing it. It’s always bothered me to some extent how the overall internet ‘personality’ as it were trends towards this sort of thing. I know it’s not everywhere – I mean, here is the obvious example – but it always concerns me how something like 90% of articles I read have some sort of comment that makes me cringe. Unfortunately, it’s a very, very large problem with deep-seated roots in… something. For me, I just try my best not to add to it and make things a little better when I can. And staying away from YouTube comments for the most part. Just wanted to make a couple comments on this. People say things here that they'd never say in real life, and only because they have the gift of anonymity. Which just makes it easier to identify what the problem is. It’s hard to figure people out when they’re hiding themselves, but they don’t hide as much when they think they’re anonymous. In that last case you wouldn't be solving the problem, you'd be transferring it to someone else. Banning does the same thing. Worst case scenario you accomplish nothing. (Well, worst-case scenario you do it wrong and make them worse, but any comment can do that.) it's also influenced by the size of the group Very much so. It’s entirely possible to be able to help someone on an individual level but make no headway when their friends are around. Your only hope is to dilute the group with people who think like you do (good luck), or break the group up and talk to people individually (good luck again).
I replied to him saying I knew his intention but I refused to sink to his level. This never calms anything down. It’s the same as saying “I’m better than you”. I haven’t actually tried it, but I suggest starting a sentence, hitting a bunch of random keys, posting the message, blaming it on your cat and then launching into a rambling monologue about said cat. Fight fire with airheadedness. “Banning does the same thing. Worst case scenario you accomplish nothing. (Well, worst-case scenario you do it wrong and make them worse, but any comment can do that.)” Yes, but he seemed to be claiming his technique would actually solve the problem. I was merely pointing out that it wouldn’t. “This never calms anything down. It's the same as saying “I'm better than you”.” Maybe I misspoke there. I didn’t actually say “I refuse to sink to your level”. It’s what I did, not what I wrote to him. “I haven't actually tried it, but I suggest starting a sentence, hitting a bunch of random keys, posting the message, blaming it on your cat and then launching into a rambling monologue about said cat. Fight fire with airheadedness.” I’ve tried it (well, the general idea, not the cat thing) and it depends on the other guy. They might cease or might see it as a challenge. I have actually managed to calm one or two people down who were in the process of having a full-blown flame war. Those endeavors took at least an hour per post and quite a number of posts. And they worked because after a few posts back-and-forth it was mainly two people left talking (one of them being me). And I’m not even mentioning the times when it didn’t work. This is possible, and I encourage everyone to try it once in a while (it’s also good for your own communication skills, especially if you tend to go over the line sometimes yourself). A world where more people are able to do this is a better world. 
It is also completely unrealistic to hope that Shamus (or someone else in this community) will come to the rescue every time someone misbehaves in the comments here. No-one is always in a mood that will allow them to do this, not to mention having enough time available. I love NVC. I think it’s fantastic. It’s made a huge difference in my life. But I think its effectiveness is limited in the cold communicative vacuum of cyberspace. I’m sure you know how textual communication like this doesn’t carry tone. I just read your post with you having a kind, motherly tone and it fits. Then I re-read it imagining you rolling your eyes while condescendingly telling all these nubs how NVC solves all the problems, and it still kinda fits (although I don’t believe it’s true!). That’s all me projecting onto what you wrote, because words on the screen are so devoid of communicative power. At least I’m aware of it, but if I wasn’t, and because of my own whatever projected you being a condescending dbag, there is going to be a difficult obstacle for you to overcome. A potential troll might, as Shamus says, be having some stress in their life. But that’s hard to tell. And following NVC means having a long series of back and forth, probing and digging to get to the root of their unmet need. Something which is entirely necessary for the people involved, but completely off-topic for a mostly-anonymous gaming discussion forum. I’d love to fill this paragraph with good ideas in return… but I got nothing. Smarter people than me are going to have to work out how to make space for that kind of thing in a community like this. D’awww, but I like discussing politics & religion on the internet! Granted, it’s rarely a nice conversation to have, but it’s also an important one. Though, for a forum dedicated to some specific, fun-oriented hobbies, it’s perfectly reasonable to avoid that vitriol.
Oddly enough, the few times politics and religion popped up here, the conversations were mostly civil. Yet when Shamoose pulled in fanboys of certain games, it hit the fan, and it hit hard. We’ve talked about operating systems and game consoles. Keeping civil with regards to politics is nothing compared to that. What’s the saying? The lower the stakes, the more vicious the politics? You mean, the defecation hit the oscillation? Forsooth, the infernal containment measures were broken quite asunder. Shouldn’t it be ventilation rather than oscillation? Maybe there are some odd kind of fans that oscillate instead of rotate. Like, hand-held fans. Like fans of the GameBoy? Or maybe fans of hand-held fans. I would be a fan of watching fans of hand-held fans watch Watchmen. But who watches the Watchmen watchers’ watchers? The response chain above this one is an example of what makes the internet awesome. *Takes a screenshot* I’m a little disappointed too. Politics and religion are among the most important things possible to discuss. XKCD has forums where such things are possible. I think they are somewhat nice over there, too. This might be the most political thing I’ve posted here: Some of the reason I ban the two subjects is to protect myself. The discussions are very painful for me. I’m a Christian, but I don’t really fit in with the typical Christian groups and I’m often very frustrated by both how believers behave in the public arena and how they are portrayed. I suppose it might be a bit like the way reasonable, gentle animal activists feel about PETA’s outrageous behavior that makes the cause look like trollface.jpg. Or the way environmental activists feel about environmental terrorists. On the other side, I really can’t bear the horrible, ugly things people say about the Christian Right.
Some of the things people say about “fundies” are just disgusting falsehoods that spring from ignorance, some are exaggerations, and some are well-deserved points that could have been said more gently. So whenever politics and religion collide I go bonkers, wanting to argue with both sides. It angers me and makes me forget that there are lots of really wonderful, compassionate people in the world who aren’t participating in this exchange. It’s bad for my heart and there’s no way I’d be able to moderate justly. It angers me and […] there's no way I'd be able to moderate justly. And this is one of the wisest things I think I’ve seen you say. Knowing one’s own limits and hot buttons takes more self-inspection and honesty than is common. Coupling that with the self-control to step away from the fray is a rare combo. Communities tend to take on the personalities of their founders. And that is why this one works so well. So, it’s basically a “You wouldn’t like me when I’m angry” kind of thing? The title image works on so many levels! And, in the defense of “outrageous behavior” of all varieties, it’s very difficult to agree on where this line even lies. Avoiding the question entirely is certainly safe. However (and this has puzzled me for a while) if, as you say, “I want to know who the crazy people are, as fast as possible,” it seems that allowing both political and religious discussions would be an excellent way of drawing such elements into plain sight. Your choice of course, but it seems strange. If I’m reading this right, the central issue here is that, were the subject to become religion and politics, Shamus would quickly become one of the crazy people, and would thus be forced to ban himself, which would make it hard for him to moderate in the future. Hehe. You will undoubtedly quote me this proverb: ‘Moderator, ban thyself.’ Classic. So by your estimation, the world is mostly filled with people who shoot first?
BA-DA-PISSH In all seriousness, I usually get tired of most internet forums because of all the hate and insults. But here the tone is almost always polite, even when people have strong disagreements. I think that’s one of the most important reasons I keep reading this site. I don’t post very much myself, but I do enjoy reading a good discussion, as long as it’s polite and on topic. Actually, I read him the other way: most people will wait for a shot, and only answer in a quid-pro-quo fashion, keeping the tone as calm as possible, as violent as necessary. I think you missed the punch line. It was a Han shot first joke. OMG…yes, I did miss it…might be because I never valued SW (BLASPHEMY) enough to watch it even a 2nd time (WE FOUND A WITCH) and not even once in the remastered versions (nvm the prequels…). I just don’t really appreciate the entire franchise (MAY WE BURN HIM?). I often read about the topic but could really not be bothered to make the connection at the moment. ;) Also: a current conjunctivitis hinders my eyesight a bit more than I thought before…didn’t recognize Han… To be fair, Han did assess the situation and can be seen asking himself “Can I get out of this engagement quickly and without getting shot?” and only pulled out his blaster when it became clear the number of ways Greedo intended to let him leave did not include both the qualifiers “soon” and “alive” at the same time. I <3 this comment. That is a delightful summary of Han's thought process in that scene. This reminds me of the broken windows theory a lot, and I’m definitely inclined to believe it’s a huge factor in fostering a kind online community. I’m curious if/how other things play a role. For example, having no extrinsic reward system for comments. Similarly I’m curious to what extent your content self-selects your community. Even for a community that aggressively moderates, only having to intervene a couple times a month is unusually low.
Is there something about your format that keeps problematic elements away? Being very long form, without editing, and without clear boundaries I don’t think Spoiler Warning provides regularly paced or quick gratification. Perhaps this attracts a more considerate, long-term audience? Or maybe I’m just seeing connections that don’t exist, I dunno. I find this to be a fascinating study. Me too, actually. I have thought off and on over the years about what it would take to create an “ideal” forum space (for my own personal values of ideal, of course). I would probably err on the side of too many rules, which made this post particularly interesting to me. Shamus’ site is a great one to take inspiration from. [nods] I agree that it’s refreshing to see a forum space behave so well. I know that when I was maintaining my own reasonably well-trafficked forums several years back, I didn’t have any explicit rules; “I am the benevolent ghost in the machine, and will quietly nuke trolling elements accordingly” was the closest thing I had to a site “rule”. And it worked great! There were no language-lawyer trolls, because there was no language to lawyer against, and any rogue incendiary elements were quietly snuffed out. It’s nice to see the same general concepts bear fruit elsewhere, on an even greater public Internet stage. The science fiction author John Scalzi seems to go by a version of that theory. His commenting policy is relatively open and vague, and like Twenty Sided, the discourse is pretty polite and intelligent. (He probably gets a lot more vitriol but he also has a lot more explicitly political posts; still, even political disagreements are generally polite). In his author talk at Google some years ago, he says you’re responsible for your own site, and you have to ride herd on comments and ban trolls when necessary, because if your site becomes known as a sewer, only the jerks will comment there. That was a very interesting read, Shamus. 
Anyway, there’s a typo: “diect” instead of “direct”. Shamus, the reason your comments tend to be civilized is because you’re running a niche blog that requires a certain amount of gaming expertise to understand, not to mention a lot of reading. That’s a built-in IQ cutoff that filters out a lot of the mouth-breathers. Your own very “moderate” personality may also have something to do with it – like attracts like. I don’t think it really has anything to do with your rules policy. I don’t think the truly insane are going to be stopped by not understanding something, or that they’d feel above going “TL;DR” (but silently) and just going to the comments to pick a fight. Also, Campster’s “feedback” on YouTube (and my personal experience with people in person) implies that like does not attract like. Like I said, it’s a combination of all those factors. YouTube is a more visual medium that is much more “accessible”, for good or for ill. It’s also much easier to randomly stumble across a gaming-related YouTube video than it is to randomly stumble across a blog. I disagree. Violence, be it real life or internet, has nothing to do with intelligence. But it does have something to do with realizing that acting like a jerk on the Internet might just be a waste of your time. Stubbornness plays a much more important part in that. As well as upbringing. Well, even knowing it’s a waste of time, I also know it’s fun. I always laugh at “waste of time” comments, like if they weren’t starting fights on the internet they would be doing cancer research or filling in potholes in roads or something. Games are a waste of time. They’re also enormously popular, because they’re fun. If people have fun arguing, they’ll do it. I disagree that this is linked to IQ, a measure of how well someone can do a set of basic mental tasks. Just look at Sayre’s law, a comment on how bitter and pointless infighting is in academia between people who are certainly top percentile for IQ.
On the main topic, I am reminded of my school years. There was one rule: pupils will act like gentlemen at all times. Some regulations had been filtered in over time: what was considered dress code, how the declaration of summer changed that, etc., but the core of the system was this same limited policy that can be interpreted with the flexibility to run a benevolent dictatorship. I used to be a member of the Dwarf Fortress community. They are very smart, clever, and funny over there. They are also an INCREDIBLY hostile community, going so far as at least once almost rioting because someone gave DF an oblique compliment. I was both legitimately kicked out for poor behavior and decided I wanted to leave because I didn’t want to be around people like that. > your comments tend to be civilized is because […] not to mention a lot of reading. I disagree. When I first started following Shamus’s blog (very start of DMotR) Shamus would only write a paragraph or two, and 5, 10 comments was a lot. 50 was crazy. What attracted me to the blog way back then and kept me here? The comments! I tried to get my friends to follow the blog too. I remember gushing to my friends about how good the comments were in relation to other blogs. Point being, it’s always been good even when there wasn’t much here. But Shamus’ TV Tropes page clearly states that he’s “Known for having radical political opinions and strong religious beliefs, but not blogging about them ever.”! Maybe he just moderates himself for the mystique. IQ is not a measure of polite-discussion-ability. The highest intelligence will never prevent anyone from acting stupid. There was a nice study a few years ago showing that the highest concentration of political extremes (any extreme) in Germany was in universities and among people who hold university degrees. I’m not sure how those two concepts relate. Extreme political views are not a measure of lack of polite-discussion-ability. Or of stupidity.
Weeelll… okay, I wasn’t very precise here, by equating extreme political views to stupidity. Thing is: there are people holding opinions that you would usually never associate with well-educated, smart people. But they still do. More so than elsewhere. Which in turn means the usual association between a degree and being a reasonable person to have a polite chat with is false. Quite the contrary, it can be very hard to talk to a person who thinks he already knows everything (I should know, I’m one. And on a mission, too!) I have hardly any gaming expertise and I’m not a programmer. What keeps me coming back is basically Shamus’ prose style and wit. Thing is, aside from technical chops, that stuff is to a fair extent an expression of personality. Basically, people who hang around here are the kind of people who like Shamus. Apparently such people aren’t big on angry bickering. I get the feeling if you think about it the spell will be broken and whatever impossible balance you have achieved here will collapse. I consider myself almost entirely a member of the first category, yet bizarrely there is only one place on the internet where my social etiquette breaks down, because the environment is so brokenly aggressive I just subconsciously see no point in being polite. It’s not YouTube, where I find all the trolls can be easily ignored or played with by politely responding; in fact YouTube’s level of troll is so easily identified they are actually kinda cute. It’s not the Escapist forums, where the occasional flame war is unpleasant but avoidable provided you vow to only witness, never comment. Nope, the truly most horrible place on the internet is the Steam forums, where even a cursory glance on one’s ill-advised bi-yearly visits can be maddening. You re-play some age-old classic game or buy an ambitious new indie title and out of curiosity peek into the forums to find a sea of hate.
In fact, just to be completely impartial and fair, let’s just randomly drop into the forum of a recent indie game, give the forum a surprise inspection and see what we can come up with. Xenonauts has been on Steam for about 2 days; I wonder how that’s going… ‘It’s not finished’ is a thread complaining that a game on Early Access is not finished. Apparently this is a person both willing to drop £15 on a game without reading about it yet also wanting a very specific experience. Naturally he expresses his disdain by suggesting he was misled and that the devs are evilly manipulating him into mindlessly buying their games. ‘Why is this game so ugly?’ Because it’s an indie title based off X-COM, a game few played for the graphics. ‘Should I pay $19.99 for an XCOM knock off?’ A thread complaining that the game is both too expensive and a rip off. Making a wide-scale strategy game rivaling a triple-A release without a publisher is both easy and free, so this complaint is entirely valid. Basically, just imagine this stuff 80% of the time across every forum of every game. I must admit the Xenonauts forums are not as bad today as they were yesterday, and the examples I chose are very tame in comparison to the usual deal. At least some of these were written without all CAPS or endless swearing. Still, anybody who dips into the community knows Steam users are second only in adolescent fury to Xbox Live. I get the feeling if you think about it the spell will be broken and whatever impossible balance you have achieved here will collapse. Mad Baron Felblood respectfully disagrees. I too feared this outcome once, but this isn’t the first time Shamus has gone in depth on his moderation strategy. Occasionally having this little talk with his audience is part of what makes his method effective. Knowing the rules here is basically a matter of knowing Shamus a little, and knowing the kind of community he is trying to grow here.
Clear, but non-limiting, communication is the key to being a benevolent dictator. I think this may be the closest you’ve ever come to posting something political, Shamus. Dangerous! But a good read. All this matches my experience on WikiIndex and SpinDizzy MUCK. The rules lawyers think rules are a football for them to have fun with — essentially, one more way to troll. The crazies are, well, crazy. Broken, as you say. If they were able to understand what “be civil” means, they wouldn’t need an explanation. The rest just need a gentle reminder and a nice environment to be in, and they’ll behave. I’ve been reading the JREF forum for 11 years at this point, longer by far than any other. Their primary forum disruptor is the rules lawyer. These folks can keep it up for years, barely toeing the line and eating moderator time. This eventually resulted in a change to allow banning for “body of work.” Rules lawyers can also be some of your most prolific and otherwise interesting commentators, making banning them over being a general irritant a difficult call. Maybe it has something to do with the fact that 99% of the commenters here aren’t “TL;DR” types. Good point. The readership of the blog is already self-selecting: people who enjoy reading longer, thoughtful essays on stuff; probably less likely to have a knee-jerk reaction to something. While on this topic, I think also that Shamus himself has been promoting good behavior: not only by reminding people to be civil (especially when the topic is fertile for flamewars) but mostly because of the obvious care he takes in his writing to (1) avoid being misinterpreted, and (2) be fair-minded. I think these two traits really show all the time, and help put the Han Solos of the world in the right frame of mind when they finally click that “post comment” button. As someone who runs a site and eventually intends to build a community around it, this was an interesting read.
I think it’s worth noting that communities also tend to have cultures which grow from their initial members. You couldn’t, for example, apply your moderation style to 4chan’s /b/ and expect them all to turn into Martin Luther King Junior. When it comes to people, like attracts like. The “saintly” users are attracted to your site because there are other level-headed folks like them to get cozy with. One could probably go into a convoluted analogy about gardening at this point, with weeds and/or bad seeds, but that would be somewhat dehumanizing, I guess. To be fair, I think it’s likely that some of the more unpleasant elements out there might filter in as the audience grows, regardless of the contents… and a huge jump in readership, if any, might attract more attention from the whole spectrum of people. But then again, they’d probably need some compelling reason to stay, and here that would involve actually reading the articles. :) I have tended to describe the community here as polite nonconformists. Or at least reasonably-considerate nonconformists. My guess is some combination of the folks attracted to this site and Shamus’ moderation system gives the results we see. Let’s hope the forums do not upset this delicate balance. Much as I enjoy reading Shamus, our preferences in games only partially overlap; so it will be nice having some place to discuss other games with these folks. The only way, I think, that the forums could attract someone besides the blog readers is if one of the threads became incredibly popular to the internet at large, which tends to happen only with stories and certain LPs (which fall under stories, arguably). The only thing we have to fear is amazing content. Interesting thoughts, and along the lines of what I’ve been thinking too. This makes you smart, and me right. Incidentally, here’s a body hair-removal that actually works! httpcolonslashslashetc. Totally agree on banning as the only viable option.
I had my fill of moderating IRC channels back in the day. There I learned that there just is no reason to give maliciousness (malevolence? vitriol?) room to spread. Your fuzzy rules work like an anarchosyndicalist heaven, and I’m glad this oasis exists. I just find it amazing that you do this all by yourself. The number of regulars in here must be in four digits, and working through the steady stream of posts must look like a regular job to your family by now. Roughly ten years ago, I ran a forum with a high school friend of mine. It was a general-purpose forum which allowed discussion of religion and politics (anything, really). In hindsight, it is remarkable to me how little trouble we had with toxic posters. It was not the sort of forum where you know everyone, although the population was smaller than what you have here. The reason I’m sharing this is because I notice an interesting commonality: all of the non-toxic places on the internet that I am familiar with enforce standards of communication. Typos happen to everyone, but you had to at least approach the ability to complete a sentence, and you had to make clear you were trying. I don’t know offhand what Shamus does, but I cannot remember the last time I saw a comment on this site that looked like a stereotypical text message. In the forum I ran, we had a semi-official rule barring leetspeak (the equivalent of the time), which was enforced variously through post-nuking, post-editing, and finally banning if someone demonstrated an unwillingness to change their ways. Long story short, I have noticed a correlation between literacy and politeness in online settings, and I am wondering whether anyone else has encountered this (or the reverse). I will say I hate sites with strict rules. Things like “no swearing” only lead to people insulting each other in other ways, usually with some obnoxious passive-aggressive attitude.
It’s been observed that among myself and my 3 roommates (and even most of my friends), I am capable of some of the most graphic, offensive, and disturbing language in the group. I am also the one member of the group who almost never swears. There is also the fact that this is a private blog, so your moderation power is absolute and you don’t actually need the rules, because what you say goes. This is something that, say, the Escapist, BioWare or Steam forums can’t afford because they are “public”: they want to attract and keep as many people as they can, while you’re only really interested in attracting people who are interested in your content (which is somewhat niche on top of that). It’s a bit like the difference between losing a (possibly paying) customer in an MMO and kicking a disruptive individual from your private RP group. Shamus is willing to accept a lower level of traffic for his principles, but it’s not like he doesn’t need traffic; the blog is a fair portion of his livelihood, and a lot of the income stems from ads, which requires traffic. I’m sure if Shamus thought he could get a couple hundred thousand views a day while retaining the atmosphere of the blog, he’d do what was required to get that. > Shamus is willing to accept a lower level of traffic for his principles That’s implicitly stating that Shamus would gain traffic if he loosened his principles. I strongly disagree. For example, I’d leave, and I’m sure others would too. You’d have to provide some pretty convincing proof before I’d believe it. If that (potentially paying) customer is disruptive to the point where he’s insulting others, throws hate-speech around, etc., he will drive other (also potentially paying) customers away. So it’s in your best interest to stomp down hard, make it known that sort of behaviour will not be tolerated, and ensure that your environment is a tolerant one that welcomes everyone (except shitheads like Mister Disruptive).
Also: places like the Steam, BioWare and Escapist fora are also operated privately, and their moderation power is just as absolute as Shamus’ is here. When you post on a forum, you’re effectively a guest in the house of the people who run that forum. Doesn’t matter if the forum is run by one guy from his comfy chair or by a huge multi-national corporation. The principle is the same, and if the people who own and run the forum decide to throw you out of their metaphorical house because they don’t like what you said, they get to do that. I think the point was: Shamus does this pretty much on his own. If you’re, say, on the BioWare forum, the guy moderating it is not the lone boss of that forum. He also is not sure to be allowed to do whatever he pleases. That guy has superiors, telling him to maintain the good mood of the customers while moderating as strictly as required. What if he bans someone who acts like a huge jackass, and that guy complains, and someone in the upper echelon thinks the mod made a big mistake that might disturb the forum’s peace? There are probably guidelines he has to follow, and he is not always sure how to react to people who appear to disturb the peace of the forum. Still, they depend on being in charge of it, since it pays their bills. Shamus is like the trainman on his blog & forum, but Dirk Modbrick on random forum #17 sure isn’t, and I think the chances are good his moderating might suffer from that. That was largely my point: the stated rules aside, the guidelines for the mods in those “big” places are usually specific about being pretty lenient. I’m not even going to count the times where I’ve seen mods fight a battle with dozens of disruptive users by deleting individual posts, repeatedly handing out 24-hour mutes or three-day bans, or having to deal with obvious alts on a “per offence” basis.
And I do largely blame the “customer” philosophy: because these people are customers, we don’t axe them; we give them gentle slaps on the wrists and otherwise smile and take it as long as we can. F!RST!!11! …well, except I’m not first — I’m more like… sixteenth. But if I had been first, this would have been a clear reminder that you do in fact have one extra rule that isn’t really posted, either. :-) And one rule that I heartily agree with. I thought “first post” posts just were time-delayed to make the poster look like an idiot. Did that change? I think part of it is that this blog is so personal that people feel guilty about starting shit. It reminds me of the experiment where they left one of those “take a candy bar for $1” things at an office to see how often people just stole the candy bars. They found that doing something as simple as putting a picture of a pair of human eyes staring at the person on the sign massively reduced the theft. I think that people feel more like they’re being watched by a real human being here than, say, on the Escapist. Actually, I’d love to see a real experiment done. Whenever someone wants to post on a forum, have a picture of someone looking at the poster disapprovingly above the textbox 50% of the time, and see if there’s a difference in the percentage of comments that result in moderation. I agree with this; I feel most people would find it hard to start something in the comments section when there are heartfelt posts about Shamus playing Starcraft with his son, or about his struggles with finances. It’s the kind of personal touch most big sites seem to lack, and I think it plays on people’s empathy. They know if they talk crap they’re doing it in Shamus’s house. Also, discussion here is allowed and people are allowed to express their viewpoints.
I’ve seen countless personal blogs and Tumblrs ruined because the moderation forced an echo-chamber type of environment, where dissenting opinions were shut down by both commenters and the moderators. Seems so, yet I have run into all sorts of nastiness on similar-sized mom blogger blogs…Christian ones to boot. So, good theory, but definitely not true. Whoops, there goes my Faith in Humanity Chip™ again, coulda sworn I burned that out years ago. :P In all seriousness, I guess it just speaks for the time and effort put into making the comments here and at chocohammer, digitalMumbles, and ErrantSignal the intelligent and dignified places they are. We’re boring to pick on. Trolls want attention. The only attention they get here is a ban. Since they can’t make another account and argue with the moderation team about the validity of the ban, they can’t even use the ban for attention. I think this is kind of on the right track. I’d go so far as to say that, on the blog, nested replies only go so far before they become a giant line of un-connectedness, so to me, that feels like it works best. So instead of trolling for large groups, the best that people aim for is short bursts of punning through a thread. Because at some point, you’re not going to keep track of who’s saying what to whom – but where puns end up going…they don’t need context. I love the “HULK RESPECTFULLY DISAGREE” image in the header. I stumbled across the forums a couple of weeks ago. I think it’s a pretty good community already, and I’m looking forward to the influx of new members we are sure to get from this announcement. It’s really nice to be a part of the 20 sided community. Shamus’s efforts to keep the trolls away have left behind a lot of really cool people that are nice to talk to. I think most of us just got compared to Han Solo, too, so that’s pretty awesome. (I say most because some of us are Mr Rogers, I guess.)
I’m interested to see if there is a notable upswing in people (especially since the article is a long piece about something unobviously related until the last paragraph). I reckon there’s probably a big crossover between the type of people who write comments/post in forums and the people who will have seen the more obscure ways he mentioned it. There should be a small group of those type of people who just happened to miss the previous ways for various reasons (maybe they don’t listen to the Diecast or had to stop listening to that one episode), I guess you yourself would be one of them, but beyond that I wouldn’t be surprised if the activity increase is relatively low. I hope I’m a Han Solo! Am I the only one who finds it ironic that Shamus went through this spiel, but when you register for the forums, you get this? You agree not to post any abusive, obscene, vulgar, slanderous, hateful, threatening, sexually-orientated or any other material that may violate any laws be it of your country, the country where “Twenty Sided Forum” is hosted or International Law. Doing so may lead to you being immediately and permanently banned, with notification of your Internet Service Provider if deemed required by us. It’s basically just there so people won’t cry out that there isn’t a rule against it and thus they should be able to do it. Actually, it’s just part of phpBB’s policy (and many other boards have similar notices) :/. Shamus had no real choice in the matter, except for the Hobson’s one. But yes, from phpBB’s standpoint, that’s one of the reasons. Obviously, the other big reason is to deny any legal culpability from falling upon them. Pffft. I guess that’s some boilerplate left in the forum software? I didn’t know it did that. Oddly enough, I don’t remember seeing that after I registered.
Could be I skimmed over it, but I actually remember being very struck by how little there WAS in the way of “agree to these rules or else!” I think the blurb you posted might have been the only thing there, and that is a far cry from the pages upon pages of rules that most forums seem to have. The major influence I see in keeping these comments civil is Shamus leading by example. Even if he’s not a Mr. Rogers, he tries to be – and I would feel really bad about resorting to sniping and meanness with Shamus around, just because I can imagine the sigh of his disappointment in me. I do my best to keep this place as the kind of place I’d like to hang out. Which means that I do my best to not post cranky, to keep disagreements respectful and limited to addressing the other person’s statements, not the actual person, and to keep my sense of humour no bluer than about PG-13. This is one of my favourite communities on the web, because just about everyone here stays nice. Some of you other commenters have become friends to me – you’ll be able to recognise those people by the fact that I gently insult them (and expect them to gently insult me in turn; I’m a Brit, insults are how we express friendship) and those of you that aren’t friends yet are still acquaintances who I’d prefer to keep around. I agree. It’s a nice place to hang around. Shamus has achieved something I never thought possible in the Web 2.0 era… Sorry Shamus, but your “nice guys” – i.e. Mr. Rogerses – are well-known to be extremely hostile. Proof: QED. Also, we must consider his role in the SPOILERS Ultimate Showdown of Ultimate Destiny. Anyway. On the subject of moderation.. Eh, Shamus is mostly right. Overbearing rules and trigger-snapping moderators *can* hold for some time, but inevitably the forums/communities start to form cliques and favourites, which are allowed more and more digressions, especially wrt trolling and making insults through allusions. 
That happened in a supposedly very good community, on Egosoft’s (developer of the X-series space sims) forums, and.. Well, yeah. They also used many of the same principles as Shamus, especially the ‘ultimate power’ of mods. But inherently the true fault was that the community had gotten stale, overfamiliar, and liked to indulge in circleje****g – by the end it was like one of those horror tropes, with monsters wearing human skins and acting kind whilst blood drips from their fangs. Um. So, yeah – something to watch out for. I’m pretty grateful for Shamus’ leniency and flexibility with moderating. Especially since I misdirect frustration quite a lot and you have to be an awesome moderator to ignore the personal stuff. It helps that, with the way the comments and the site work, when you do something really stupid the blog post gets buried quickly, and so it’s easier to walk away from something that you should already have walked away from. I’ve said this before on multiple prior posts, but this is my favorite place on the Internet, with the best community. There’s genuine respect between the author(s) and the commenters. Very refreshing. My “home” on the internet (a forum where I spend the most amount of time… or at least feel like I do) is another place where we have basically no rules and… basically no problems. I’m an administrator, and in the past year, we’ve had exactly one problem person, and he’s more or less shaped up after we had a little chat with him. The reason we didn’t ban him (despite him causing multiple flame wars between people who had been friends for ages…) was because he could be constructive, he just usually didn’t think about how his posts could be interpreted before he made them… and he was the sort of guy where if you banned him, it would just make him want to come back more and more and be worse and worse. And now he’s a constructive member of society. Mission accomplished, I guess.
I think a big part of it is the same reason this place is so nice–random idiots just aren’t interested in game theory and programming and criticizing Bioware. Similarly, at the site I mentioned, random people aren’t usually interested in writing techniques and sporking terrible fantasy books. Because that audience isn’t attracted here, they never become a problem. I used to be on a forum which had more rules, by the way, and the person who caused us the most trouble almost never actually broke the rules–she was just very, very good at walking precisely one inch inside of the line. Sometimes I think people view rules as a challenge… how close can you get? How far can you push it? If you are going to build a forum with a full code of conduct, it is important to include something equivalent to a Reckless Driving charge. You know it when you see it, and you let people know that you’re not here to play the legal loophole game. I am interested in writing techniques and what “sporking” means with regard to terrible fantasy, and would like to know what site this is. Folks like you mentioned are why you should have wavy lines. “Do this three times, get a strike” is less effective than “This will get you a strike if you do it more than 1d8 times”. “Sporking” is sometimes also called MST3K-ing; it’s effectively taking a terrible piece of writing and going through it bit by bit to mock it. The most popular one on the site (and, in my opinion, the best one) is of Maradonia. You have almost assuredly never heard of Maradonia. You are very lucky. Twilight, Eragon, the usual crowd also make an appearance, though. Hehe, the caption makes me think of FILMCRITHULK. There is surely a dueling metaphor in here somewhere, except that our host gets the gun and the dishonorable lout gets a 5-second head start… My standing rule has always been “don’t argue on the Internet.” I am that “someone on the Internet is WRONG!” guy, so I avoid the problem by reading, but not commenting.
This is one of 3 places I routinely comment, in part because even when y’all are wrong you’re interesting, but also because the conversation is far more social than argumentative. There is disagreement, but not much “WRONG!” This place may attract the saints just for that reason. And those of us who like to hang with the saints come along for the ride. I’m not terribly sure of the details, but I am pretty positive that you are wrong. Somehow. The number of indecent comments you get actually ties pretty well to the average rate of sociopathy (1-3%). Actually, quite a bit better than that. Not that all your problem commenters are sociopaths… but it certainly goes to show that your policy is pretty well suited to protecting against inevitable crazies. You missed a decimal point; he gets 0.1% bad comments. Which means he’s doing a pretty good job, right? (Or only 1/10th of the normal number of crazy people reach the site.) You know, I wondered why the forums were so busy today. The secret has been revealed! Sanctuary is breached! Get thee to the highlands, for the flood comes for us all! Ahem. What I mean is, hello, new forum people. See you there. Also, Shamus, I suspect that the intelligence and maturity of your posts draws in an audience that is, similarly, intelligent and mature. (Or, at least, feels less inclined to post if they cannot match the intelligence and maturity of others here.) You definitely cultivate a place of civil discussion and insightful commentary. And of the occasional terrible, terrible pun, but that’s largely Rutskarn’s fault. >> “And of the occasional terrible, terrible pun, but that's largely Rutskarn's fault.” Shamus could be seen as an enabler.
“Maybe they just aren't very good at communicating.” Somehow, someway, this reminded me of DM of the Rings, where Gimli insults the riders of Rohan after a critical Diplomacy failure… and I have no idea why, but the thought of people talking normally and then suddenly someone critically failing their conversation roll makes me just chuckle. I think that the most significant reasons for this site being “clean” are: 1. Clever, thoughtful, long articles – these filter out stupid, hateful people and trolls-on-purpose. 2. The good example set by Shamus and the other big guys here. I’m going to make a bit of a request. I love this community, but I don’t know nearly enough about gaming to be a participant. However, I *do* know a lot about other things, like language and foolish mistakes and poetry and pouring snacks into mugs when I want to moderate my crap intake. These things aren’t verboten, but there’s no place to talk about them in the forum. So rather than a request, I’d make a proposal: an off-topic board. It would be a terrible proposal if I didn’t present any justification for it. I think, though I may be wrong, that there are certain things everyone is interested in and can talk about without becoming upset about them. Language would be my first example. I don’t want to have to find a linguistics forum to wallow in, attempting to decipher jargon, make useful contributions, or avoid the foolish, baseless discussions. But everyone loves language, especially the one they speak, and intelligent, well-mannered people are great to speak with in amateur terms. Perhaps this is a bit too much, but I’d like to know if there’s any other off-topic topic people want to discuss with this community in a forum space. Thanks!
People have been using the ‘Twenty-sided’ subarea of the forums as an off-topic area; at least, we’ve been talking about films and books and anime and webcomics, which don’t seem to be mentioned in the description. I can’t load the forums on my desktop; it’s a “server not found” error. It works fine on my iPad, though. That is such a bizarre thing to say. EDIT: It’s working now. I didn’t do anything, it just magically fixed itself. If you (Shamus) did it, thanks. If not, hail Sheogorath! Go, Shamus. Benevolent Dictator is the best form of government. ;) “performing for attention and don't care how destructive they're being” I'll admit I was close to this type of person in my younger years, but I have chilled to the middle level as I have gotten older. When I was around 17-18 I was just a dick, possibly due to the sites I discovered first. But this place is too nice for me to ever even get the inclination to start shit or be unnecessarily rude to someone. For better or worse, I pretty much just fly under the radar here… which would really surprise some people I know. I think it’s simply what Shamus said: his willingness to just ban people who are damaging the conversation and get on with his life. This is the key, and it honestly baffles me why so many people who run various sites with comments have problems with it. I’ve seen site owners agonize for days, weeks, months on end about the problem of trolls, begging readers not to respond to them, trying so hard to understand. The regular commenters are screaming for the owner to please just ban this jackass already, and it’s as if the owner can’t even see the word “ban.” And eventually another community goes down the tubes. It is hard to truly ban someone from something like this, though; even if you go for the IP, you have to deal with dynamic IPs and proxy servers. True, but you can take care of a substantial chunk of the drive-by jerks who aren’t going to expend effort if they don’t have to.
With people who are more hardcore, as Shamus said, you can just ban them again when they misbehave under their new name; no need to waste time trying to figure out if 321ecafkreJ is the same as that other guy Jerkface123, because what’s the point? It’s behavior that triggers the ban, not identity. Now, if someone was determined to dedicate their life to ruining the comments sections on this website, I’ll agree they could cause a lot of trouble, but that’s true anywhere. Yeah, I agree; I have seen whole moderation teams trying to track down people they have banned, second-guessing new members, possibly even banning innocent people, and even turning off registration. Shamus has pretty much cracked the formula for good moderatorship, at least for a site of this size. Hmm… This makes me think about people who turn off comments entirely on their blogs. I understand some people get a disproportionate amount of bile and threats. But I’ve always thought that closing the discussion is not a solution, but a retreat. I think your way makes the most sense, Shamus. Your unofficial/official guidelines are simple, short, and with an emphasis that we’re here to have “fun”. Refusing to suffer ankle-biters, but understanding that there are complicated and varied reasons for why people act out like that. Granted, it’s not the moderator’s responsibility to heal a wounded soul. But at the same time, I think a lot of people today are taking advantage of and abusing the responsibilities they DO have, in the name of upholding a less hostile environment. I see a lot of immaturity in people online lately. In the comments and in the articles. It’s easy to block the jerks, but more and more I’m seeing people abuse the tools at their disposal to silence people for harmless disagreements. (The new trend on YouTube seems to be marking as spam any comment one disagrees with, so it will be covered up and unseen.) I’m glad we have places like Twenty Sided.
The rest of the internet could benefit from studying up on Graham’s Hierarchy of Disagreement. That’s like the food pyramid, right? We don’t really need the top one and they become more and more nutritious as we go down? Well, yes. If you’re feeding trolls instead of people. I just wanted to say that I really appreciate the atmosphere of this blog. I spend a lot of time online reading blogs, forums, and watching you tube, but this is the only place I actually stop to read and post in the comments. P.S. It probably says nothing good about me that the first thing to pop into my head when i read this post was that it would be funny to do a really stupid trolling post as a joke. Sweet, I’ll be able to keep up with discussions more easily again! I have faith your forums will remain sane, as there’s one other place just as sane on the Internet. The GamersWithJobs forums. There’s a thread on the Tropes vs. Women in Video Games video that I’ve been a part of. While there are moments where things have gotten dicey, no one has been banned, and the thread has not been locked. Everyone is mostly civil, and any negative feelings have not carried over into other threads. I imagine your forum will be much the same, even with potentially controversial topics. It goes along with what I’ve learned when witnessing various internet communities evolve. A lot can be saved with good management and/or leadership. And it never hurts when the people in charge are somewhat passionate and dedicated. I wouldn’t call the angry outliers and trolls ‘crazy’ and unreachable, but they are hard to reach. The ones that are hardest to reach are the trolls. They don’t have a grievance, they are just trying to stir up trouble. I much prefer your method of moderation Shamus. I usually waver between the too nice to say anything and the neutral balance groups. I am not an angry person and by the time I have finished typing a post I have bled off any anger that may have inspired it. 
Then I read and revise the post until I am either fairly certain that it is not inflammatory and conveys what I want to say, or I delete it and go on to other things. screwed up email in previous post These “rules” (i.e. “I ban whomever I decide to”) sound pretty strict and rude, and I wanted to question whether this “tyrant” approach is actually a good idea… until I read (and remembered) the bit about simply posting under a different name. Shamus can afford to ban people because the consequence isn’t the same as being banned as a registered user. Having posts deleted or being banned is still a punishment, but a much lighter one than it would be in a different context, where your user ID might even be tied to stuff you paid money for. The way it goes here, a poster can be punished, but there’s always a way to come back and make it better. Short of offering free communication courses, I think that is the best way to deal with the problem. And, in the end, you can’t argue too much with the results. Yes, it is a “tyrant” system, but your wording seems to imply that there’s something intrinsically wrong with that. The fact that it apparently works so well should lead you to question that assumption. Despite what innumerable self-absorbed internet idiots would like to claim, the internet is not and has never been a democracy. Free speech does not exist here unless the owner of a given website chooses to allow it. I am entirely, 100% OK with this. If a website’s dictators aren’t so benevolent… well, I’ll just take my discussion elsewhere. The difference between dictatorship on the internet and dictatorship in the real world is that it’s not nearly so easy to pick up and move in the physical world! The difference between dictatorship on the internet and dictatorship in the real world is that it's not nearly so easy to pick up and move in the physical world! … which is a bit like voting, which makes it a little more democratic, doesn’t it?
There are some much more unpleasant places on the web which are moderated less rigidly but with more dire consequences (as in: you paid for something, and if you do something vaguely undefined, it will be lost to you) — these are a lot more like actual dictatorships, and they should not exist. I find it deeply worrying if people who supposedly grew up in a democracy do not understand the difference. Now, Shamus is not our elected president, but that’s what I meant: he can afford to have vague rules because the consequences of breaking them are more like negative feedback and less like proper punishment. Just imagine a pay-site moderated by a swarm of trigger-happy mods. If you attract the attention of one of them, your yearly subscription is lost. Spot the difference? This, to me, is much, much less democratic, and regardless of my ability to avoid such a scenario, no one should be in that situation. Yeah, I wasn’t really considering paid sites (games, etc.), just free ones. That would be a bit of a different arena because there’s more personal investment. So the stakes are lower because we haven’t paid money? I would argue that the stakes are actually pretty high here as well. There are quite a few people here who post under our own names, have links to our own websites and businesses, and expose our own closely held opinions. If we get banned or burned we haven’t lost money, but we have lost reputation. The system runs on Honor, and that’s at least as large an incentive as cash. Ten bucks is nothing compared to even one hour of sincere discussion. At the risk of going political, I’ll simply say that I think the system here would work much better than our current real-world legal system… as long as there was a good man at the top. It isn’t an issue of structures that “should not exist” as such, but of proper execution of justice.
I have nothing substantive to add to this discussion (though your analysis of internet culture in general, and the aspect of it that tends to manifest here in particular, was interesting and, I think, apt), but I would just like to say that image set depicting “what the world looks like” literally made me LOL. It is fortunate that, while I am presently at work, it is after hours, and I am alone in the office. Bravo. Also, good luck with the forums. Dunno whether I’ll visit them or not, but I hope things go well there. I like this a lot, and I’d like to coopt a bit of it for my “rules” section in an online roleplaying group. Would that be alright with you? Of course! Thank you, I’ll make sure to give credit and a linkback where appropriate. I have to agree, with a small caveat. Having an explicit anti-(harassment, racism, sexism, etc) policy is actually useful in at least certain communities (from personal experience, I would say: gamers, skeptics/atheists and some species of libertarian-left politics) for the very reason that these things are not considered to be wrong by a non-trivial percentage of the community. The point here is to confirm that you will have the back of members of the community that are traditionally victimized by such. So while your moderation policy might be sufficient to keep the “traditional demographic” of these communities (not to name them in particular, but I’ll give you one guess on who I’m talking about) to play well with each other, not having an explicit policy makes the community suspect to the people who are traditionally on the receiving end of the worst abuse, and they may simply decide not to risk coming over. The point is not to have some sort of iron-clad rule, which will be ruleslawered to oblivion anyway, but to show a “statement of principles” that will make it clear to people that they are in a safe(r) space. 
Am I recalling correctly that the default policy is also that new users automatically fall into moderation for their first couple of posts? If so, the combination of banhammer & auto-moderation on new accounts could be a significant factor in terms of why we don’t see more raving a-holes. That’s my theory, anyway: that Shamus has managed to stumble upon a combination of settings that creates a tougher environment for trolls to operate in, and his site doesn’t get quite enough traffic that managing it would completely overwhelm a single person yet. I LOVE that “No target shooting” picture. I don’t have anything helpful to add, I just felt like saying that now that you’ve explained your magic non-formula for making your comments section unnaturally pleasant, we have just one more reason to respect you as a person. :) Well, I could also remark that I appreciate your analysis of problematic people as “broken”. I don’t know much about outside people, but it’s probably true. It’s a sound way of interpreting the behavior we see in others, and we see it in ourselves all the time. When a member of our dissociative identity collective goes from Mr. Rogers to Mr. Hyde over a thoughtless remark and pounds out a vitriolic rant of their own, it’s not that they don’t understand the social norms. They’re usually reacting to some of the pain that created them, which no lay moderator would have the time or expertise to defuse. (I don’t even know if a therapist could pull it off.) We do our best to “pull them out of the pool before they make a mess”, preferably before anyone touches the submit button, but we can’t always police each other in time, so we completely understand why moderators sometimes have to take the pragmatic approach and pull us out for us. ^_^; –Chris&Co. Just came here from your latest ME post (14), and I would like to say this shamed me, because I recently made a comment that was definitely on the low end of constructive on the Noveria post.
Also, I don’t think you give yourself enough credit for the community here. Your posts are normally thoughtful and never angry, no matter the subject. Content like that, combined with relatively hands-off moderation, would strike me as strong troll repellent. So, basically, you have secret laws which you refuse to tell us about because you fear we might find ways around them. And once one of your secret laws has been violated, you immediately respond with the harshest punishment possible, with no warning. …that’s fine and well, but it is basically the enforcement approach of a totalitarian country. And I doubt the approach could be generalized successfully to other forums. The fact is that the nature of man is shit; they are only kept in check by law, and that’s why many forums tend to degenerate. Trying to pretend that it is some tiny minority of undesirables who must be cleansed from society to attain utopia is a theory that many people have tried to apply throughout history, always unsuccessfully. Shamus said: It’s like you didn’t even read the article. Or understand how this comment section works. Ah, a thinly-veiled Godwin’s law. I fear I must rebut with accusations of false equivalence, association fallacy and (whether you intended it or not) Reductio ad Hitlerum. And 3 years after you said it, as well. What am I doing here? Why am I bothering with this? I wanted to thank you for this article, which I find useful over and over again. I’m a community manager over at Stack Exchange, and it rather accurately sums up the situation on many of our sites. For years, the only restriction we had was “be nice”, which is about as vague a policy as one could ask for. A few years ago, we added a list of rules, but thankfully these were things the community had mostly decided upon in advance. The key point I take from this post is that adding rules paradoxically makes moderation harder rather than easier, due to a quirk in human nature.
Another paradox (and I’m writing shortly after the Spoiler Warning show left this site) is that a community moderated for tone does seem to be unwelcoming to outsiders. To take an offline analogy, I have a junky car that recently got hit while parked on the street. A few days later, I got a note from parking enforcement warning me that I’d get a ticket if I didn’t move it. A neighbor had reported it as abandoned. Some people don’t like to see junky cars parked on the street and they like having a rule that makes them go away. In an online community, there’s usually some self-selection going on. Around here, the commenters tend to like nitpicking games. (I don’t comment very often because I’m perpetually 3-5 years behind in my game-playing habits.) It would be a bit like living in a neighborhood where everyone drove the same type of car. Nobody would ask the city to tow away a car if they drove a similar car themselves. It’s only when someone new comes in that they notice a barely drivable, Mad-Max styled, more rust than steel jalopy that’s been parked there for ages. I’d also observe it’s nearly impossible to fairly moderate content on your own. Everyone has blind spots when it comes to polite, but hurtful, statements. In fact, it’s very likely that people who make hurtful statements aren’t aware they are doing it. (There’s a whole ‘nother question of whether society has evolved extra-thin skin lately.) But the brilliant thing about moderating for tone is you can quickly figure out what sort of person you are dealing with when someone confronts them. People who apologize or just shut up are the people you want around. Anyway. This is a long, rambly comment on an ancient post that nobody will likely read. If anyone (Shamus in particular) does read this, I want to thank this community for its example of a pleasant place to read the comments. (Mostly. ;-) In regards to Mr. Young’s commenting policy, which I appreciated, years ago: Don’t stop . 
You are the representation of Western, rational discourse. You go after the needless trash but allow the ‘offensive’ argumentation. Though I disagree with the positions offered, you defend them as a valid interpretation, and I give you credit for that even when I disagree. Sincerely, a fan since 2009. FYI, from years in the future: the link to the forums no longer works. I don’t know if the forums were nuked at some point, or if the URL changed, but it would be nice if an Edit tag were added providing a new link (“Edit: Old link dead. Go here instead”) or indicating whether the forums have been killed (“Edit: Forums dead [because>
cap-elb Capistrano plugin for deploying to Amazon EC2 instances behind Amazon ELBs (Elastic Load Balancers) Introduction This capistrano plugin lets you perform capistrano deployment tasks on either complete or highly segmented AWS instance sets within an AWS load balancer under your control. Installation cap-elb is a Capistrano plug-in packaged as a Ruby gem. You can install it from rubygems.org or build it from source via GitHub fork or download. RubyGems install: $ gem install cap-elb How to Use In order to use the cap-elb plugin, you must require it in your deploy.rb: require 'cap-elb' If you have already been doing capistrano deploys to your AWS instances, you probably already have your AWS credentials configured. Either add your credentials to your ~/.caprc : set :aws_access_key_id, 'YOUR_AWS_ACCESS_KEY_ID' set :aws_secret_access_key, 'YOUR_AWS_SECRET_ACCESS_KEY' or, directly in your deploy file: set :aws_access_key_id, 'YOUR_AWS_ACCESS_KEY_ID' set :aws_secret_access_key, 'YOUR_AWS_SECRET_ACCESS_KEY' If you wish, you can also set other AWS-specific parameters: set :aws_params, :region => 'us-east-1' Setting the region is required if you are not using the default region 'us-east-1'. If your ELB is not in us-east-1 and you do not specify the region parameter as above, you will get an error indicating that no ELB could be found with that name in that region. In order to define the instance sets associated with your load-balancer-based deploys, you must specify the load balancer name, the associated roles for that load balancer, and any optional params. For example, in deploy.rb, you would enter the load balancer name (e.g. 'lb_webserver'), the capistrano role associated with that load balancer (e.g. 'web'), and any optional params: loadbalancer :lb_webserver, :web loadbalancer :lb_appserver, :app loadbalancer :lb_dbserver, :db, :port => 22000 There are two special optional parameters you can add, :require and :exclude.
These allow you to include or exclude instances associated with your named load balancer in the deploy, depending on whether they meet or fail to meet your :require/:exclude specifications. The :require and :exclude parameters work on Amazon EC2 instance metadata. AWS instances have top-level metadata and user-defined tag data, and this data can be used by your loadbalancer rule to include or exclude certain instances from the instance set. Take the :require keyword; let's say we only want to deploy to AWS instances which are in the 'running' state. To do that: loadbalancer :lb_appserver, :app, :require => { :aws_state => "running" } The server set defined here for role :app is all instances in the loadbalancer 'lb_appserver' with aws_state set to 'running'. Perhaps you have added tags to your instances; if so, you might want to deploy to only the instances meeting a specific tag value: loadbalancer :lb_appserver, :app, :require => { :aws_state => "running", :tags => {'fleet_color' => "green", 'tier' => 'free'} } The server set defined here for role :app is all instances in the loadbalancer 'lb_appserver' with aws_state set to 'running' that also have the named two tags set, with exactly those values for each. There can be other tags on the instance, but the named tags in the rule must be present for the given instance to make it into the server set. Now consider the :exclude keyword; let's say we do not want to deploy to AWS instances which are 'micro' sized.
To do that: loadbalancer :lb_appserver, :app, :exclude => { :aws_instance_type => "t1.micro" } You can also exclude instances that have certain tags: loadbalancer :lb_appserver, :app, :exclude => { :aws_instance_type => "t1.micro", :tags => {'state' => 'dontdeploy' } } When your capistrano script is complete, you can deploy to all instances within the ELB that meet your criteria with: % cap deploy Here's an example of a task that does a quick listing of the hosts (if any) within the load balancer associated with the 'app' role that meet the criteria you laid out in the loadbalancer definition line; add this to your cap deploy file: # run with cap ec2:list namespace :ec2 do desc "list instances" task :list, :roles => :app do run "hostname" end end This will give you the list of hosts behind the load balancer that meet the criteria. % cap ec2:list Version Information Version 0.2.0 - ELBs behind VPCs didn't work on any 0.1.x release, due to there being no :dns_name for VPC-shielded hosts. We now fall back to IP addresses in 0.2.x and above. Documentation Additional Ruby class/method documentation is available for: - capistrano - Amazon AWS - Amazon AMI instance metadata - Amazon AMI tags Credits - capistrano: Jamis Buck - capistrano-ec2group: Logan Raarup - Logan's 2009 work with cap deploy using security group abstraction got me going on how to do an AWS-oriented cap plug-in, thank you! Issue Notes - aws-sdk - very slow to collect full metadata for all n instances behind an ELB; there is an amazon thread on the subject.
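The :require/:exclude matching described above can be pictured as a simple filter over each instance's metadata and tags. The following is only a conceptual sketch, not the plugin's actual source — the instances are modeled as plain hashes, and the helper names (`matches_rule?`, `select_instances`) are invented for illustration:

```ruby
# Conceptual sketch: filtering an ELB's instances by metadata and tags.
# In the real plugin this data would come from the AWS API via aws-sdk.

def matches_rule?(instance, rule)
  rule.all? do |key, want|
    if key == :tags
      # Every named tag must be present with exactly the given value;
      # extra tags on the instance are ignored.
      want.all? { |tag, value| instance[:tags][tag] == value }
    else
      instance[key] == want
    end
  end
end

def select_instances(instances, require_rule: nil, exclude_rule: nil)
  instances.select do |i|
    keep = require_rule.nil? || matches_rule?(i, require_rule)
    keep && !(exclude_rule && matches_rule?(i, exclude_rule))
  end
end

instances = [
  { id: "i-1", aws_state: "running", aws_instance_type: "m1.small",
    tags: { "fleet_color" => "green" } },
  { id: "i-2", aws_state: "stopped", aws_instance_type: "m1.small",
    tags: { "fleet_color" => "green" } },
  { id: "i-3", aws_state: "running", aws_instance_type: "t1.micro",
    tags: { "fleet_color" => "blue" } },
]

server_set = select_instances(instances,
  require_rule: { aws_state: "running" },
  exclude_rule: { aws_instance_type: "t1.micro" })
puts server_set.map { |i| i[:id] }.join(",")  # prints: i-1
```

Only i-1 survives here: i-2 fails the :require rule (it is stopped) and i-3 trips the :exclude rule (it is micro-sized), mirroring how a `loadbalancer` line combining both keywords narrows a role's server set.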
Angular Dart - Guide - Tutorial - Advanced Google builds many critical web apps using the Dart programming language, often with AngularDart. The next generation of Google AdWords is built in Dart. Google Fiber’s latest web app is built in it. So is Google’s internal CRM. Outside Google, amazing companies like Wrike, Blossom, Workiva, and DGLogik have been building their products in Dart. import 'dart:html'; main() async { var countdown = querySelector("#countdown"); for (int i = 100; i >= 0; i--) { countdown.text = "Time: $i"; await window.animationFrame; } } With a consistent language, well-crafted standard libraries, and cleaner DOM, Dart is a good choice even for programmers with limited or no JavaScript experience. Things work the way C, C#, ActionScript, and Java developers expect. Standard libraries provide classes that would otherwise need to be in external libraries or built from scratch. For example, dart:html is a sane, Dart-y wrapper around the DOM and window APIs, so you don’t need to worry about browser support. When you do need an external library, managing dependencies is easy, thanks to Dart’s pub package manager. Google and other companies have been using Dart for years. The SDK alone has hundreds of commits each month, and a new release every 6 weeks. Through all the activity, the API stays stable. 516 (average monthly commits to dart-lang GitHub repositories) You can do Dart web development with whatever web framework (or lack of one) you choose. We recommend AngularDart. Google uses this framework+language combo in production, and invests heavily in its development. Code completion, refactoring, step-by-step debugging, powerful static analysis, profiling, code coverage, a standard unit test library—all these are supported by the preferred IDE, WebStorm. Dart plugins also exist for other IDEs and editors, such as Atom, Sublime Text 3, Emacs, and Vim. The Dart language and tools help you find bugs early on, before they become big problems. 
An automatic code formatter helps you focus on how your code works, rather than how it looks.
Console Output Operations

Console operations are tasks performed on the command-line interface using executable commands. Console operations are used in software applications because they are easy to control and depend only on the input and output devices of the computer system. A console application is one that performs its operations at the command prompt.

All console applications consist of three streams, which are series of bytes. These streams are attached to the input and output devices of the computer system, and they handle the input and output operations. The three streams are:

Standard input stream
Standard output stream
Standard error stream

Output Methods

In C#, all console operations are handled by the Console class of the System namespace. A namespace is a collection of classes having similar functionalities. To write data on the console, you need the standard output stream. This stream is provided by the output methods of the Console class. There are two output methods that write to the standard output stream. They are:

Console.Write()
Console.WriteLine()

The following syntax is used for the Console.Write() method, which allows you to display information on the console window.

Console.Write("<data>" + variables);

Where,
Data: Specifies strings or escape sequence characters enclosed in double quotes.
Variables: Specify variable names whose values should be displayed on the console.

The following syntax is used for the Console.WriteLine() method, which allows you to display information on a new line in the console window.

Console.WriteLine("<data>" + variables);

The following code shows the difference between the Console.Write() method and the Console.WriteLine() method.
Console.WriteLine("C# is a powerful programming language");
Console.WriteLine("C# is a powerful");
Console.WriteLine("Programming language");
Console.Write("C# is a powerful ");
Console.WriteLine("Programming language");

Output:
C# is a powerful programming language
C# is a powerful
Programming language
C# is a powerful Programming language

Placeholders

The WriteLine() and Write() methods accept a list of parameters to format text before displaying the output. The first parameter is a string containing markers in braces to indicate the positions where the values of the variables will be substituted. Each marker indicates a zero-based index based on the number of variables in the list. For example, to indicate the first parameter position you write {0}, for the second you write {1}, and so on. The numbers in the curly brackets are called placeholders.

Example:

int number, result;
number = 5;
result = 100 * number;
Console.WriteLine("Result is {0} when 100 is multiplied by {1}", result, number);
result = 150 / number;
Console.WriteLine("Result is {0} when 150 is divided by {1}", result, number);

Output:
Result is 500 when 100 is multiplied by 5
Result is 30 when 150 is divided by 5

Here, {0} is replaced with the value in result and {1} is replaced with the value in number.

Console Input Operations

Input Methods

In C#, to read data you need the standard input stream. This stream is provided by the input methods of the Console class.
There are two input methods that enable the software to take in input from the standard input stream. These methods are:

Console.Read()
Console.ReadLine()

class StudentDetails
{
    static void Main(string[] args)
    {
        string name;
        Console.WriteLine("Enter your name:");
        name = Console.ReadLine();
        Console.WriteLine("You are {0}", name);
    }
}

In the above code, the ReadLine() method reads the name as a string. The given string is then displayed as output using a placeholder.

Convert Methods

The ReadLine() method can also be used to accept integer values from the user. The data is accepted as a string and then converted into the int data type. C# provides a Convert class in the System namespace to convert one base data type to another base data type.

The following snippet reads the name, age, and salary using the Console.ReadLine() method and converts the age and salary into int and double using the appropriate conversion methods of the Convert class.

string name;
int age;
double salary;
Console.Write("Enter your name: ");
name = Console.ReadLine();
Console.Write("Enter your age: ");
age = Convert.ToInt32(Console.ReadLine());
Console.Write("Enter the salary: ");
salary = Convert.ToDouble(Console.ReadLine());
Console.WriteLine("Name: {0}, Age: {1}, Salary: {2}", name, age, salary);

Output:
Enter your name: David Blake
Enter your age: 34
Enter the salary: 3450.50
Name: David Blake, Age: 34, Salary: 3450.5

More: The Convert.ToInt32() method converts a specified value to an equivalent 32-bit signed integer; the Convert.ToDecimal() method converts a specified value to an equivalent decimal number.

Define Numeric Format Specifiers

Format specifiers are special characters that are used to display the values of variables in a particular format. For example, you can display an octal value as decimal using format specifiers. In C#, you can convert numeric values into different formats. For example, you can display a big number in exponential form. To convert numeric values using numeric format specifiers, you enclose the specifier in curly braces, and these curly braces must be enclosed in double quotes.
This is done in the output methods of the Console class. The following is the syntax for a numeric format specifier:

Console.WriteLine("{0:<format specifier>}", <variable name>);

<format specifier>: the numeric format specifier.
<variable name>: the name of the integer variable.

Some Numeric Format Specifiers

Numeric format specifiers work only with numeric data. A numeric format specifier can be suffixed with digits, which specify the number of digits to appear after the decimal location. For example, if you use a specifier such as C3, three digits will appear after the decimal location of the given number.

C or c (Currency): The number is converted to a string that represents a currency amount.

D or d (Decimal): The number is converted to a string of decimal digits (0-9), prefixed by a minus sign if the number is negative. The precision specifier indicates the minimum number of digits desired in the resulting string. This format is supported for integral types only.

E or e (Scientific/Exponential): The number is converted to a string of the form "-d.ddd…E+ddd", where each 'd' indicates a digit (0-9).

The following snippet demonstrates the conversion of a numeric value using the C, D and E format specifiers.

int num = 456;
Console.WriteLine("{0:C}", num);
Console.WriteLine("{0:D}", num);
Console.WriteLine("{0:E}", num);

Output:

$456.00
456
4.560000E+002

F or f (Fixed-point): The number is converted to a string of the form "-ddd.ddd…", where each 'd' indicates a digit (0-9). If the number is negative, the string starts with a minus sign.

N or n (Number): The number is converted to a string of the form "-d,ddd,ddd.ddd…". If the number is negative, the string starts with a minus sign.

X or x (Hexadecimal): The number is converted to a string of hexadecimal digits. "X" produces "ABCDEF" and "x" produces "abcdef".
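The precision-digit suffix mentioned above can be combined with any of these specifiers. A small sketch, with the value 456 kept from the tutorial's examples:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        int num = 456;
        Console.WriteLine("{0:C3}", num);  // currency with 3 decimal places, e.g. $456.000
        Console.WriteLine("{0:D6}", num);  // padded to 6 digits: 000456
        Console.WriteLine("{0:E2}", num);  // 2 digits after the decimal point: 4.56E+002
    }
}
```

Note that the currency symbol and separators depend on the current culture, so the C output may differ from machine to machine.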
The following snippet demonstrates the conversion of a numeric value using the F, N and X format specifiers.

int num = 456;
Console.WriteLine("{0:F}", num);
Console.WriteLine("{0:N}", num);
Console.WriteLine("{0:X}", num);

Output:

456.00
456.00
1C8

Delegates

Here I will explain what delegates are in C#.NET with an example. Delegates in C# are type-safe objects that hold a reference to one or more methods. The delegate concept is similar to the function pointer concept in the C language. A delegate example that registers two arithmetic methods and invokes them through the delegate produces output such as:

Add Result : 30
Sub Result : 10
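The delegate code itself was lost when this page was extracted; the sketch below is a reconstruction that reproduces the "Add Result : 30 / Sub Result : 10" output shown above. The delegate name Calculation and the operand values 20 and 10 are assumptions inferred from that output, not from the original:

```csharp
using System;

// A delegate type describing any method that takes two ints and returns an int.
delegate int Calculation(int x, int y);

class DelegateDemo
{
    static int Add(int x, int y) { return x + y; }
    static int Sub(int x, int y) { return x - y; }

    static void Main()
    {
        Calculation calc = Add;          // the delegate holds a reference to Add
        Console.WriteLine("Add Result : {0}", calc(20, 10));

        calc = Sub;                      // the same variable can now refer to Sub
        Console.WriteLine("Sub Result : {0}", calc(20, 10));
    }
}
```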