Here is the second in our series of blog posts from the Cascades Field Agency. You can check out the first post here () – Erin

After looking through the QML code for a couple of applications, I've gotten the impression that Cascades application developers have some different habits than those writing QML with the QtQuick elements. While my sample size was pretty small, I think these several points may prove particularly useful to BlackBerry 10 application developers as they write their QML UIs. These tips are focused on enhancing the performance of your QML programs, both in terms of application speed and development/maintenance speed.

Avoid declaring variant properties

The variant type seems pretty useful in QML, and it's certainly versatile, but don't overuse it. Using variant when you could use a more specific type means that you get less compile-time help and that you pay a performance penalty (to get the real value into and out of a QVariant all the time). If the variable is always a number, use the int or real types. If it's always a string, call it a string.

A common usage of variant is to store controls, but you can use any type exposed to QML as the type for your property. For example, you can use 'property Container delayedPart: null' to hold an item you instantiated dynamically, so long as the root item of the component is a Container or a type based on Container. Just keep in mind that 0 is a number, not a reference, so use null instead of zero for the initial values of these types. Any type QML knows can be used, not just Container, so there's no need to use variant for your TextStyleDefinitions or Labels either. There is a bug in Qt 4 where you will still need to use the variant type when passing object references in a QML-defined signal handler, but aside from that bug most usages of the variant type can be replaced with a more specific and appropriate type.
Example QML: if you want to delay loading a component with code like

    subcontrol = myComponentDefinition.createObject();

replace

    property variant subcontrol: 0

with

    property Container subcontrol: null

assuming the root item in the component definition is a Container {}. Any object type exposed to QML can be used, not just Container; pick the right one for your component.

Do not use JavaScript files just for storing properties

Related to avoiding variant properties, I've seen a similar problem with storing properties in JavaScript files. Again this leads to inefficiency in having to wrap and unwrap script values, and again you lose the type information. There's another problem if you try to alter the properties, as most JavaScript files have a new instance loaded for each QML file that uses them. For better and safer code, store these properties in a QML object. This allows for typed properties and makes the instance management a lot clearer and harder to get wrong.

For example, if you previously had a "Constants.js" file like this:

    var a = 123;
    var b = 4.5;
    var c = "6";

which was used in a file like this:

    import "../common/Constants.js" as Constants

    Container {
        property int a: Constants.a
        property real b: Constants.b
        property string c: Constants.c
    }

you can replace it with a "Constants.qml" like this:

    import bb.cascades 1.0

    QtObject { // QtObject is the most basic non-visual type
        property int a: 123;
        property real b: 4.5;
        property string c: "6";
    }

And use it like this:

    import "../common"

    Container {
        attachedObjects: [
            Constants {
                id: constants
            }
        ]
        property int a: constants.a
        property real b: constants.b
        property string c: constants.c
    }

Avoid using .connect

Like the variant type, the connect method on a function is sometimes necessary but not intended to be used a lot. The problem here is not one of performance, but of readability. By connecting signals in this manner you mess up the declarative flow of the code.
Normally you should be using the signal handlers on the objects themselves instead of adding imperative code into an unrelated signal handler (like creationCompleted). For example, you could use

    FadeTransition {
        onEnded: otherControl.visible = false
    }

instead of

    onCreationCompleted: {
        fadeTransitionItem.ended.connect(fadeTransitionEnded);
    }

    function fadeTransitionEnded() {
        otherControl.visible = false
    }

This leads to less and clearer code. Save the connect method for when the object emitting the signal was created in C++ and so cannot be accessed from QML.

Use ComponentDefinition for repeated items

If you need to create many similar items, create them dynamically from a ComponentDefinition instead of copying and pasting the QML code for it. Just add a 'property int index' to the top item in the component, and then add some code like the following:

    onCreationCompleted: {
        for (var i = 0; i < n; i++) {
            var item = component.createObject();
            parentContainer.add(item);
            item.index = i;
        }
    }

Any slight changes needed between the instances can be based off the index property, and the maintainability is immensely better than if those components had been copied and pasted over and over again.

Use ControlDelegate (or ComponentDefinition) for delayed loading

A good tactic for improving your application start time is to delay loading of any parts of the UI which are not essential for immediate use. It's easy to do this using ControlDelegate; just wrap the elements you can wait for like so:

    ControlDelegate {
        id: controlDelegate
        delegateActive: false
        sourceComponent: ComponentDefinition {
            Container {
                id: thisIsYourControl
            }
        }
    }

And after the essential elements are loaded, set delegateActive to true. Once your control is loaded, it will function the same as before.

Let the UI flow through QML

Cascades is not purely QML APIs and it's quite easy to drop down to C++ if you want to. Often this is a good way to do expensive logic faster.
Normally, though, you should only be moving business logic to C++, not UI logic. For UI logic it's usually better to keep it all in QML, for two reasons. The first is that it's easier to develop and maintain, as the entire UI, in both appearance and flow, is defined in one place. You should architect your application so that the UI flow is controlled at a high level from QML. If you need a complex C++ controller class or initialization logic, expose that to QML and control it from QML (by calling a function or by just creating it) to perform the complex logic, instead of having it managed in C++ and emit a signal to QML when it's done. This places the high-level UI flow just in QML, making it a lot easier to follow and manage, despite complex logic still being performed in C++. A common method of doing this is to expose a single QObject to the root context, containing all the application-global methods and properties, which are then invoked, queried, or bound to from QML.

Example code, from C++:

    qmlDocument->setContextProperty("app", myControllerInstance);

In QML:

    onClicked: {
        showLoadingScreen();
        app.fetchData(app.defaultDataSource);
        app.dataReady.connect(hideLoadingScreen);
    }

Note that because the app object is exposed from C++, you have to use the connect function to hook up the signal (and connect takes a function reference, not a function call).

The second reason to keep your UI in QML is that you can actually lose some performance when passing messages back and forth from the QML file to your C++ class. If you just need to handle some touch events in a way that doesn't have a pre-made gesture, you can store a couple of state variables inside the QML file instead of creating a C++ controller class. It saves you both the development and run-time costs of hooking up the touch events from QML into the special object, and it keeps all the application touch handling code in one place.
Use your own URI

I've seen some code that exposes its C++ types like this:

    qmlRegisterType<MyType>("bb.cascades", 1, 0, "MyType");

Do not do this. Features are being added to the QML language to prevent you from injecting types into existing imports, meaning that this code will break in the future. But more importantly, this defeats the whole point of the module import system, which leads to a lot of cases where your code can break even sooner! It also makes the code easier to understand and debug if proper imports are used. This can be as simple as:

    qmlRegisterType<MyType>("App", 1, 0, "MyType");

And then 'import App 1.0' in your QML file.

CFA comment: Even though using your own namespace is correct, the fact is that you will currently break the Momentics preview if you use your own URI. We recommend that during development you use bb.cascades, and once you are done with the UI you change it to your own namespace.

So there are a few tips for writing faster QML with Cascades. Faster to write, faster to run, and faster to maintain. All worthy goals.
http://devblog.blackberry.com/2013/07/cfa-qml-perfomance-tips/?relatedposts_exclude=15407
How do I make upgrades for my player?

Game Engine, Players & Web Plug-in, Virtual Reality, support for other engines
Moderators: jesterKing, stiv

2 posts • Page 1 of 1

How do I make upgrades for my player?

How could I make upgrades that my player could buy from a store? Could someone show me to a tutorial? Or make one?

Re: How do I make upgrades for my player?

Jedijds wrote: How could I make upgrades that my player could buy from a store? Could someone show me to a tutorial? Or make one?

You can use Python httplib to communicate between the blenderplayer and the website (with PHP and some database). In that manner:

Code: Select all

# needed for sending data routines
import httplib

# values we want to send
username = "JOHN"
score = 1230

print "connecting...",
# connect to the server just as the browser does - on port 80.
con = httplib.HTTPConnection("yoursite.com", 80)
print "ok"

print "sending request...",
# send the GET request; note the path to the script doesn't start from
# the server name, the path is absolute.
# we send data after ? as variablename=variablevalue separated by &
con.request("GET", "/somescript.php?username=" + str(username) + "&score=" + str(score))
print "ok"

# get response from server - status codes here:
# 200 means OK
resp = con.getresponse()
print resp.status, resp.reason
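As a side note (not part of the original reply): httplib is Python 2 only; in Python 3 it became http.client, and urllib.parse can build the query string safely. A sketch, keeping yoursite.com and /somescript.php as the placeholder names from the reply:

```python
from urllib.parse import urlencode


def build_score_request(username, score):
    # urlencode escapes spaces and special characters, which the
    # hand-concatenated string in the Python 2 snippet does not.
    query = urlencode({"username": username, "score": score})
    return "/somescript.php?" + query


path = build_score_request("JOHN", 1230)
print(path)  # /somescript.php?username=JOHN&score=1230

# Sending it mirrors the httplib code (needs a real server to run):
# import http.client
# con = http.client.HTTPConnection("yoursite.com", 80)
# con.request("GET", path)
# resp = con.getresponse()
# print(resp.status, resp.reason)
```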
https://www.blender.org/forum/viewtopic.php?t=23065&view=next
Hello JUCE community. I am testing a dynamic link library. At first I exported the project to Xcode and added source.cpp:

    #include <stdio.h>
    #include <iostream>
    using namespace std;

    int hello() {
        std::cout << "Hello World!";
        return 0;
    }

libNewProject.dylib was built by Xcode, then I put this in /usr/local/lib/.

use_juce.cpp:

    #include <iostream>

    int hello();

    int main() {
        hello();
        std::cout << "Test Finish!\n";
        return 0;
    }

I try to compile this with the command below:

    $ g++ -o use_juce use_juce.cpp -L./ -lNewProject

It shows the error:

    Undefined symbols for architecture x86_64:
      "hello()", referenced from:
          _main in use_juce-60d598.o
    ld: symbol(s) not found for architecture x86_64

I also checked the function in the dylib:

    $ nm /usr/local/lib/libNewProject.dylib | grep hello
    0000000000002260 T __Z5hellov

I made the same simple dynamic link library without JUCE, and it works. However, with the project exported from JUCE, this error happens. Any help appreciated. Thank you.
https://forum.juce.com/t/how-to-develop-simple-dynamic-link-library-or-tutorial/44249
In the past months, I've rewritten the entity-component system of my engine pet project about three times. Finally, something that ticks all the boxes has emerged. Today, I'd like to present this architecture. So far it has worked wonders for me, though I wouldn't guarantee this to scale up to AAA-sized projects. I still have much testing to do.

The goal of this entity-component system is focused on gameplay programmers and their mental well-being. I want a system that is extremely fast to code with, extremely fast to prototype with, and that lets you create small games for events like the One Hour Game Jam. Yes, that's a thing. At minimum, the system should be as easy to use as Unreal's or Unity's entity-component system. I also want it to be data-oriented and cache-friendly. That is a problem.

Note: the system requires some C++17 features. It works on Visual Studio 2017 and Apple Clang.

User Requirements

In this blog post, the user is a gameplay programmer.

- The user writes a component as if it was 1 object (like other popular engines).
- You can store a reference/pointer to a single component in your class. It persists engine events.
- No more work has to be done than in Unreal or Unity to create a new component.

Entity-Component Requirements

- Contiguous component data (ie. the challenge).
- Provide get_component, add_component, kill_component methods on individual components.

Well, that's not such a big list. It turns out it's quite simple to achieve with a nifty little trick.

The Basic Idea

The challenging part of such an architecture is storing a component itself, and not a pointer to it. The engine needs to call predefined, potentially implemented member functions. I refer to these as events, because of how they ended up being implemented. Usually you'd have some simple virtual methods the user would implement as required, and everyone would be happy.
This is traditional polymorphism, which will trash your cache, and as such we will shun and ignore such heresies for our performant code path ;)

One of my failed experiments involved a main ComponentManager tuple and a ton of SFINAE and macros. The result actually worked, but was quite unreadable and hard to debug. I didn't find this to be such a great solution, but feel free to investigate such an architecture. It works.

What changed everything is the curiously recurring template pattern (CRTP) idiom. At that moment, something in my mind unlocked (and I evolved to super-saiyan mode, etc.). The problem fixed itself. If you know what CRTP is, then you've probably figured out where I'm going with this.

CRTP To The Rescue

So, I want to store an actual object (not a pointer), but I also want the user to create it as easily as he would've with a traditional polymorphic solution. Here is the magic.

    struct MyAwesomeComponent : public Component<MyAwesomeComponent> {
    };

We are inheriting a Component base class, but we are also providing our new class as a template parameter to it. That way, we can interact with the "true" T. As a bonus, we can transparently use SFINAE to check whether to call an engine event or not on the user class. Another plus is we can provide some helpful methods to the user component, since it inherits the base Component. It does get a little tricky to remember what is what when writing the base class, though.

Ultimately, this solves our problem (and many others), as we can store a static vector<T> in our base class. This guarantees the data is contiguous.

Potential Issues

- If you do not like CRTP, well. What can I say? ¯\_(ツ)_/¯
- If you are working with dynamic libraries or other systems where you cannot simply use static data members. There may be a way to hack this system to make it work, though you will lose some precious simplicity.
- Template "explosion" is a real issue.
For small to medium games it should be reasonable, but your compiling may slow down to a halt on big teams. I'd love to hear ideas on how to improve upon this.

The Entity

Before we dig into the Component class, let's write a simple Entity class. In this post, we will make a tiny example system with an init and an update event. We'll implement a Transform component and a MegaSonicAirplane component. This code is for demonstration purposes only, and is most definitely not production ready.

    template <class T>
    struct Component;

    struct Entity {
        Entity()
            : id(_id_count++) // For demo only. Id == position in component buffer.
        {
        }

        template <class T>
        Component<T> add_component() {
            static_assert(std::is_base_of<Component<T>, T>::value,
                    "Your component needs to inherit Component<>.");

            /* Don't allow duplicate components. */
            if (auto ret = get_component<T>())
                return ret;

            return Component<T>::add_component(*this);
        }

        template <class T>
        Component<T> get_component() {
            static_assert(std::is_base_of<Component<T>, T>::value,
                    "Components must inherit Component<>.");

            return Component<T>{ *this };
        }

        uint32_t id;
        static const Entity dummy;

    private:
        Entity(uint32_t id_)
            : id(id_)
        {
        }

        static uint32_t _id_count;
    };

    const Entity Entity::dummy{ std::numeric_limits<uint32_t>::max() };
    uint32_t Entity::_id_count = 0;

Our demo Entity is a simple 32-bit unsigned int. For simplicity's sake, we increment it every time we create a new entity. The Entity class provides an invalid dummy entity. This is required so our user can use Component "smart references" without having to initialize them.

The Entity and Component classes are tightly coupled. The Entity forwards Component messages to the appropriate Component Managers. It is the "glue" which makes the whole system work. Getting a new Component is really simple, as validation is done at a later time. We simply construct a new Component "smart ref" and return that.

SFINAE Ground Work

A little SFINAE has never hurt anyone... Or has it?
I promise this is cleaner than my last post on the subject!

    /* Beautiful SFINAE detector, <3 Walter Brown */
    namespace detail {
    template <template <typename> typename Op, typename T, typename = void>
    struct is_detected : std::false_type {};

    template <template <typename> typename Op, typename T>
    struct is_detected<Op, T, std::void_t<Op<T>>> : std::true_type {};
    } // namespace detail

    template <template <typename> typename Op, typename T>
    static constexpr bool is_detected_v = detail::is_detected<Op, T>::value;

    /* Engine provided member function "look ups". */
    namespace detail {
    template <class U>
    using has_init = decltype(std::declval<U>().init());

    template <class U>
    using has_update = decltype(std::declval<U>().update(std::declval<float>()));
    } // namespace detail

First, I use a simplified version of an upcoming proposal, is_detected. This is the most elegant way to use SFINAE, and it doesn't require macros! For more information, see the cppreference entry or Marshall Clow's talk on the subject.

Next, we define template aliases to look for our desired engine "events", namely the init() function and the update(float) function. This system is extremely flexible and future-proof; it makes adding new "events" quite simple.

The Component

At this point, we are ready for the Component class. The inner workings are explained below.
    template <class T>
    struct Component {
        Component(Entity e = Entity::dummy)
            : entity(e)
        {
        }

        static void* operator new(size_t) = delete;
        static void* operator new[](size_t) = delete;

        operator bool() const {
            return entity.id < _components.size();
        }

        T* operator->() const {
            assert(*this == true && "Component doesn't exist.");
            return &_components[entity.id];
        }

        template <class U>
        Component<U> add_component() {
            static_assert(std::is_base_of<Component<U>, U>::value,
                    "Components must inherit Component<>.");
            return entity.add_component<U>();
        }

        template <class U>
        Component<U> get_component() {
            static_assert(std::is_base_of<Component<U>, U>::value,
                    "Components must inherit Component<>.");
            return Component<U>{ entity };
        }

        static Component<T> add_component(Entity e) {
            // printf("Constructing %s Component. Entity : %u",
            //         typeid(T).name(), e.id);
            T t;
            t.entity = e;
            _components.emplace_back(std::move(t));

            if constexpr (is_detected_v<detail::has_init, T>) {
                _components.back().init();
            }
            return Component<T>{ e };
        }

        static void update_components(float dt) {
            if constexpr (is_detected_v<detail::has_update, T>) {
                for (size_t i = 0; i < _components.size(); ++i) {
                    _components[i].update(dt);
                }
            }
        }

    protected:
        Entity entity;

    private:
        static std::vector<T> _components;
    };

    template <class T>
    std::vector<T> Component<T>::_components = {};

The Component class acts as both a "smart reference" for the component itself and as a Component Manager. The user interfaces with the smart ref: when using operator->(), the object will search in our contiguous data vector and return a pointer to the appropriate data. A bool() operator is provided to streamline the gameplay programmer's code. Currently, it is up to the user to check whether the component reference is still valid, though I'm undecided whether I like this or not.

The get_component and add_component member functions have a few benefits. You can easily get a Component attached to the same Entity as yourself.
Or, just as easily, get a Component attached to another Component (aka what Unity does). The Component constructor requires an Entity; we provide a default dummy value. This was added so a user can easily add Components to his class definition. The new operators are deleted for good measure. init will be called on component creation if provided by the user.

The engine will interface with the Component Manager, which is the static portion of the Component. The Entity uses the static add_component, for example. Here the events will be called manually (update_components). SFINAE is used to choose whether or not to execute the event if a user Component provides it. No cache misses. No overhead.

Finally, we have the static vector, which is the "core" of our system. There isn't much to it. In a real-world use case, you'd want a lookup table of sorts to index into the vector. Every frame, Components should be sorted in 2 groups: enabled and disabled.

Example

Whew! That was a mouthful. Seeing the system in action will probably help understand what is going on. Here is the simplest Transform ever written, and a damn fast plane Component.

    struct Transform : public Component<Transform> {
        struct vec3 {
            float x = 0.f;
            float y = 0.f;
            float z = 0.f;
        };

        vec3 pos;
    };

    struct MegaSonicAirplane : public Component<MegaSonicAirplane> {
        void init() {
            _transform = add_component<Transform>();
        }

        void update(float dt) {
            _transform->pos.y += speed * dt;

            /* Another option for the user : */
            // auto t = get_component<Transform>();
            // t->pos.y += speed * dt;
        }

        void mega_render() {
            printf("MegaSonicAirplane %u : { %f, %f, %f }\n", entity.id,
                    _transform->pos.x, _transform->pos.y, _transform->pos.z);
        }

        Component<Transform> _transform;
        const float speed = 1000.f;
    };

This all seems quite sane and readable to me. All the previous requirements have been respected. Let's launch a few airplanes to celebrate! They're really just going upwards anyway, like fireworks.
5'000'000 should do it...

    const bool twin_peaks_is_perfection = true;

    int main(int, char**) {
        std::vector<Entity> es;
        es.reserve(5'000'000);

        for (int i = 0; i < 5'000'000; ++i) {
            es.push_back(Entity());
            es.back().add_component<MegaSonicAirplane>();
        }

        while (twin_peaks_is_perfection) {
            Component<MegaSonicAirplane>::update_components(dt);
            es[0].get_component<MegaSonicAirplane>()->mega_render();
        }
        return 0;
    }

And that's it for the core system. We have arrived (somewhat) safely at our destination. The weather is a cool breeze and a sunny day. Thank you for travelling on contiguous data airlines...

Where To Go From Here

Personally, I am working on multi-threading the whole system. Even though it doesn't make much sense for small games, I think it'll be an interesting experiment. There is also more work required for the scene graph, which has a tendency to break data-contiguous systems by its nature. Finally, a proxy data structure used to store the components as a Structure of Arrays is definitely on the horizon.

I want to extend a huge thanks to Alex, Francis and Houssem for the constant brainstorming and discussions about game engine architectures.

Full Code

Yes, with includes. Enjoy o/
https://philippegroarke.com/blog/2017/09/30/friendly-data-oriented-entity-component-managers/
The problem asks to check if the string has consecutive same letters and rewrite it as the number of the letter plus the letter, for example, AAAAA as 5A. But when I use the if statement to make the comparison, the output becomes some very long number instead of the desired result.

Here is a portion of the problem:

Run-length encoding is a simple compression scheme best used when a dataset consists primarily of numerous, long runs of repeated characters. For example, AAAAAAAAAA is a run of 10 A's. We could encode this run using a notation like *A10, where the * is a special flag character that indicates a run, A is the symbol in the run, and 10 is the length of the run.

Here is the code:

    import java.util.Scanner;

    public class RunLengthEncoding {
        public static void main(String[] args) {
            Scanner input = new Scanner(System.in);
            System.out.print("Enter input string: ");
            String s = input.nextLine();
            for (int a = 0; a < s.length(); a++) {
                if ((s.charAt(a) < 'A' || s.charAt(a) > 'Z')) {
                    System.out.print("Bad input");
                    System.exit(0);
                }
            }
            System.out.print("Enter flag character: ");
            char flag = input.nextLine().charAt(0);
            if (flag == '#' || flag == '$' || flag == '*' || flag == '&') {
                int count = 0;
                for (int i = 1; i < s.length(); i++) {
                    if(s.charAt(i)=s.charAt(i-1));
                        count++;
                    if (count == 1)
                        System.out.print(s.charAt(i));
                    if (count == 2)
                        System.out.print(s.charAt(i) + s.charAt(i));
                    if (count == 3)
                        System.out.print(s.charAt(i) + s.charAt(i) + s.charAt(i));
                    else
                        System.out.print(flag + s.charAt(i) + (count + 1));
                }
            } else
                System.out.print("Bad input");
        }
    }

Your problem is here:

    if(s.charAt(i)=s.charAt(i-1));

First of all, you must compare chars using == not =. Second, putting a ; right after the if statement will cause it to terminate immediately. In other words, the following line is no longer part of the if statement. Change to:

    if(s.charAt(i) == s.charAt(i-1))

(As an aside, expressions like s.charAt(i) + s.charAt(i) add the chars as integers rather than concatenating them, which is where the long numbers in your output come from.)

Edit

Regarding your comment, something like this should work, though I didn't test it.
Just replace your current large if block with the below:

    if (flag == '#' || flag == '$' || flag == '*' || flag == '&') {
        char last = ' ';
        char curr = ' ';
        int count = 1;
        for (int i = 0; i < s.length(); i++) {
            last = curr;
            curr = s.charAt(i);
            if (curr == last) {
                count++;
            } else {
                if (i > 0)  // skip the dummy "run" before the first character
                    System.out.print(("" + count) + last);
                count = 1;
            }
        }
        if (s.length() > 0)  // flush the final run
            System.out.print(("" + count) + curr);
    }
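For completeness, here is a run-length encoder in the assignment's *A10 notation (my own sketch, not the answerer's code): it scans each run in one pass, emits flag + symbol + count for runs of length 2 or more, and leaves single characters as-is.

```java
// Run-length encoder following the problem's *A10 notation:
// runs of length >= 2 become flag + symbol + count; single
// characters are emitted verbatim.
public class RunLength {
    static String encode(String s, char flag) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            char c = s.charAt(i);
            int run = 1;
            // Count how far the run of identical characters extends.
            while (i + run < s.length() && s.charAt(i + run) == c) {
                run++;
            }
            if (run > 1) {
                out.append(flag).append(c).append(run);
            } else {
                out.append(c);
            }
            i += run;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode("AAAAAAAAAAB", '*')); // *A10B
    }
}
```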
https://codedump.io/share/EQ7S9LoQVBrQ/1/how-to-check-if-the-char-in-a-string-is-the-same-as-the-previous-char
Defines types of IRC network response message. More...

    #include <ircresponsetype.h>

Defines types of IRC network response message. Types are compliant with the response types defined by RFC 1459.

Definition at line 16 of file ircresponsetype.h.

Represents types defined by RFC 1459. In order to learn what each type represents, please refer to the RFC 1459 document.

Definition at line 25 of file ircresponsetype.h.

Initializes an invalid IRCResponseType object.

Definition at line 8 of file ircresponsetype.cpp.

Initializes the object with the specified type.

Definition at line 14 of file ircresponsetype.cpp.

Initializes the object by attempting to convert the specified string to MsgType through typeFromRfcString().

Definition at line 20 of file ircresponsetype.cpp.

Creates IRCResponseType objects, treating the numeric value as the more important here. The MsgType returned by type() in the created object may still point to the Invalid value, but numericValue() will be set to whatever was specified as the parameter of this method.

Definition at line 34 of file ircresponsetype.cpp.

Checks if the numeric value is between 200 and 399 (inclusive). See: RFC 1459.

Definition at line 254 of file ircresponsetype.h.

Checks if the numeric value is equal to or above 400. See: RFC 1459.

Definition at line 264 of file ircresponsetype.h.

A response is valid if its type is different than Invalid.

Definition at line 272 of file ircresponsetype.h.

If the message type can be represented as a number, this will contain its value. Numeric type values are stored to easily distinguish a message family. For example, all errors start at 400 and above.

Definition at line 287 of file ircresponsetype.h.

If the type can be represented as an integer, this will convert it. Some IRC message types are represented by words like KILL or PING, but some are represented by numbers like 001, 311, 401, etc. This method will convert the MsgType value to a numeric value, if such a value can be found.
Internally, the type is converted to a string using the toRfcString() method and then that string is converted to an integer.

Definition at line 61 of file ircresponsetype.cpp.

String representation of the specified message type. This returns the RFC 1459 representation of the message type!

Definition at line 76 of file ircresponsetype.cpp.

String representation of the message type. This returns the RFC 1459 representation of the message type!

Definition at line 297 of file ircresponsetype.h.

Returns a MsgType based on typeRepresentation. It is either one of the known and implemented types, or Invalid if the string cannot be successfully converted.

Definition at line 218 of file ircresponsetype.cpp.
http://doomseeker.drdteam.org/docs/doomseeker_1.0/classIRCResponseType.php
Media-S is format Open Source Directory Open Source Directory Open Source Java Directory The Open Source Java... - to many different platforms. Open Source Java Directory The Open Source Java Directory is maintained by Steve Mallett, creator Open Source Intelligence Open Source Intelligence Open source Intelligence The Open Source..., a practice we term "Open Source Intelligence". In this article, we use three...; Open source intelligence Wikipedia Open Source Intelligence (OSINT Open Source Midi Sound API package (javax.sound.midi). Plumstone is an open source Java project...Open Source Midi Open Source Midi Proprietary software Being the best... share. As proponents of open source software, it should not be beneath us Open Source Metaverses Open Source Metaverses OpenSource Metaverse Project The OpenSource Metaverse Project provides an open source metaverse engine along the lines... of an emerging concept in massively multiplayer online game circles: software written in Java Code Coverage Open Source Java Collections API...Open Source software written in Java Open Source Software or OSS... Purpose ERP/CRM Written in Java Open Source Open Source Antivirus to the pervasiveness of email, a favorite delivery platform for malicious code. Open source...Open Source Antivirus Developing Open Source AntiVirus Engines... a significant contribution to the development of a viable, working open source MySql Open Source . Under the Open Source License, you must release the complete source code for the application that is built on MySQL. You do not need to release the source code...MySql Open Source MySQL Open Source License MySQL is free use for those XML Editor * multi-platform (Java 1.3+) * free open-source software  ... benefits: it?s Open Source, so you can see the code. As a set of Cocoa classes it?s...Open Source XML Editor Open source Extensible XML Editor Open Source content Management System for java content repositories (JCR). Open source Content... 
Source Content Management Apache Lenya is an Open Source Java/XML Content... of Java open source projects. One of the primary features of a CMS is its Get IP Address in Java Get IP Address in Java In this example we will describe How to get ip address in java. An IP address is a numeric unique recognition number which...) { e.printStackTrace(); } } } Output Download source code Open Source Community , open source or not; neither open source nor proprietary code should be considered...Open Source Community Open Source Research Community In the spirit of free and open source software (F/OSS), we are attempting to establish Best Open Source Software Best Open Source Software Best Open Source Open source software. Often (and sometimes incorrectly) called freeware, shareware, and "source code," open... the term open source, and even fewer are aware that this alternative software Open Source Templatess software. The main use for open source web design is inspiration. Clients could...; Java open Source J2EE Templates EJOSA (Enterprise Java Open Source... Software Engineering The Enterprise Java Open Source Open Source Portals Open Source Portal A new resource aimed at sharing source code, original open...Open Source Portals Open source portals Standards support..., there are quite a number of open source projects competing in this space E-mail Open Source E-mail Open Source E-Mail...; hMailServer -Open source email hMailServer is a free, open...; POPFile: Open Source E-Mail Solution POPFile is a program Open Source CD the largest number of software packages. Open source code... to contain source code from the open-source project LAME, an MP3 encoder and player...Open Source CD TheOpenCD TheOpenCD is a collection of high quality Open Source Shopping Cart open source shopping cart. You can download and modify the source code... can modify source code of shopping cart as per your needs. The open... 
Languages A good open source shopping cart software must support following Jobs & Triggers () method. The jobs must have a no-argument constructor that is the ramification... Jobs & Triggers  ... then it must implement the Job interface which override the execute() method. Here source code - Java Beginners source code Hi...i`m new in java..please help me to write... and units in the amounts arrays. (amounts[1]=prices[1]*units[1].output display using message dialog box. Hi Friend, Try the following code: import What is Open Source? has to include source code. It must also enable distribution in source code...What is Open Source? Introduction Open source is a concept referring to production Open-source software of the application. But if the software is Open-source then the source code is also...Open-source software Hi, What is Open-source software? Tell me the most popular open source software name? Thanks Hi, Open-source java jobs Bangalore jobs java jobs Bangalore jobs HOW TO FIND OUT HEAP MEMORY IN CASE OF JAVA PROGRAM Code Coverage Tools For Analyzing Unit Tests written in Java Open Source JavaScript Open Source JavaScript The JavaScript Source This script reads... of the open source framework was released on July 14 by a small web development startup...; An Open source Javascript library Ajax is the term Java Get IP Address getHostAddress() returns the IP address. Here is the code of Java Get IP... Java Get IP Address  ...(); } } } Output will be displayed as: Download Source Code Open Source CMS proprietary products, the source code for open-source CMSes is freely available... Management Apache Lenya is an Open Source Java/XML Content...Open Source CMS Open Source Content Java Jobs Java Jobs Hi, Is there sufficient Jobs for Java programmers in 2012? Which is the sites for applying for Java Jobs? Thanks Open Source Reports Open Source Reports ReportLab Open Source ReportLab, since its early beginnings, has been a strong supporter of open-source. 
In the past we have found the feedback and input from our open-source community an invaluable aid Java source code - Java Beginners Java source code Write a small record management application for a school. Tasks will be Add Record, Edit Record, Delete Record, List Records. Each... should be used. All data must be stored in one or two files. Listing records Java open source software Open source software for Java In this page we will tell list down the most used Open source in Java. Java is one of the programming language used... is the list of java open source software used for the development and deployment Open Source Excel , and clients are working together to develop documentation of and related to open source... VBA Models Combo Set XL-VBA4 1 The Excel VBA Models Open Source Code Combo..., and numerical methods in open source code. Programs include Distribution 12 Random J2EE clients J2EE clients What are types of J2EE clients? Following are the types of J2EE clients: Applets. Application clients. Java Web Start-enabled rich clients, powered by Java Web Start technology. Wireless clients, based Open Source Database Source Java Database One$DB is an Open Source version of Daffodil...-source database that comes with a newer code base and an open-source reporting... Version 8.0.4, the most recent version of the open-source code upon which Open Source Blog Open Source Blog About Roller Roller is the open source blog server...; The Open Source Law Blog This blog is designed to let you know about developments in the law and business of open source software. It also provides Open Source Outlook of the code base and hosting the additions on the same open source basis.  ... Open Source Outlook Open Source Outlook Sync Tool Calendaring..., vertically locked down world; we need an open source solution for extracting Open Source Accounting but play with code. 
With easy access to an open source application, the techie can...Open Source Accounting Turbocase open source Accounting software TurboCASH .7 is an open source accounting package that is free for everyone calendra.css source code - Java Beginners calendra.css source code hello i need the source code... and year are getting displayed Hi Friend, Try the following code...; background-color: #EEE; display: none; position: absolute; z-index: 1; top: 0px Open Source Installer , install or update shared components. Open Source Java Tool... of Java development tools and a few common open source tools that aren't just...Open Source Installer Open source installer tool NSIS (Null Open Source POS Open Source POS Open-source POS system This past weekend People's... out items on the world's first entirely free, open-source point-of-sale system... copying the source code and also prevents anyone from getting inside to mine data Open Source Databases Source Database Benchmark PolePosition is an open source Java framework...Open Source Databases The Open Source Database Benchmark Featuring... Source Databases: A brief look This month I take a brief look at Open Source IP Filter Example address and status report will display as below: Download Source Code...() {} } Here is the source code of CallIpFilter Servlet.../CallIpFilter from IP 193.168.10.146. The message will display as below proxy in minutes and requires no code changes. Open Source Proxy Checker...Open Source proxy Open-source HTTP proxies A HTTP proxy is a piece..., and modify its source code to meet requirements. Squid Web Open Source Game Engine of the engine source, you must make your source code available for others to use under...; Open Source Game Development A 3D game engine is a complex collection of code..., Frameworks Open Source APIs for Java Technology Games MDR
http://www.roseindia.net/tutorialhelp/comment/84283
CC-MAIN-2014-52
refinedweb
2,829
65.62
Representing Fields in a Markup Language Document (US 7,533,335 B1)

This application is a continuation-in-part application under 35 United States Code § 120 of U.S. patent application Ser. No. 10/187,060, filed on Jun. 28, 2002, which is incorporated herein by reference. An exemplary schema in accordance with the present invention is disclosed in a file entitled Appendix.txt on a CD-ROM attached to an application entitled “Mixed Content Flexibility,” Ser. No. 10/726,077, filed Dec. 2, 2003, which is hereby incorporated by reference in its entirety. A computer listing is included in a Compact Disc appendix (quantity of two, IBM-PC format, MS-Windows operating system) containing the file Appendix.txt, created on Dec. 26, 2006 and containing 12,288 bytes (Copy 1 and Copy 2), which is hereby incorporated by reference in its entirety.

A namespace is a unique identifier for a collection of names that are used in XML documents as element types and attribute names. The name of a namespace is commonly used to uniquely identify each class of XML document, so that an application can determine what types of elements and attributes a document contains. The XML standard is considered by many as the ASCII format of the future, due to its expected pervasiveness throughout the hi-tech industry in the coming years. Recently, some word-processors have begun producing documents that are somewhat XML compatible. For example, some documents may be parsed using an application that understands XML. However, much of the functionality available in word processor documents is not currently available for XML documents.
The present invention is generally directed towards a method for representing an application's native field structures, such as “Creation Date of the Document”, “Formula”, a specially formatted number, a reference to text in another part of the document, or others, in a markup language document. Fields are commonly used for document automation, so that the application itself includes certain information among the contents of the document, with possibly no extra user intervention required. The method of the invention provides a way to save this field definition information in a markup language (ML) document without data loss, while allowing the field structures to be parsed by ML-aware applications and to be read by ML programmers. In a word-processor's own schema, the markup language specifies how the text is to be formatted or laid out, whereas in a particular customer schema, the ML tends to specify the text's meaning according to that customer's wishes (e.g., customerName, address, etc.). The ML is typically supported by a word-processor and may adhere to the rules of other markup languages, such as XML, while creating further rules of its own. Generally, the present invention is directed at representing field structures in an ML document. The ML document may be read by applications that do not share the schema that created the document; such an application may still parse the field structures, regardless of whether or not the fields are understood. Word-processor 120 internally validates ML file 210. When validated, the ML elements are examined as to whether they conform to the ML schema 215. A schema states what tags and attributes are used to describe content in an ML document, where each tag is allowed, and which tags can appear within other tags, ensuring that documents are structured the same way. Accordingly, ML 210 is valid when structured as set forth in arbitrary ML schema 215.
ML validation engine 225 operates similarly to other available validation engines for ML documents. ML validation engine 225 evaluates ML that is in the format of the ML validation engine 225. For example, XML elements are forwarded to an XML validation engine. In one embodiment, a greater number of validation engines may be associated with word-processor 120 for validating a greater number of ML formats.

Representing Fields in a Markup Language Document

The present invention generally provides a method to represent an application's native field structures in a markup language (ML) such as XML. The field structures may be parsed by applications, other than the application that generated the ML file, that understand the markup. Fields are commonly used for document automation, so that the application itself includes certain information among the contents of the document, with possibly no extra user intervention required. Fields can be a very powerful feature, making the document authoring and editing process much more efficient. Fields are elements of the content of a document whose purpose is to automatically generate or modify the content, or its appearance, depending on various conditions and/or settings specified by the user. Fields may be very simple or very complex. A defining characteristic of a field is that it is updatable. For example, a “LastSavedBy” field may insert the name of the last person who saved the document at the location of the field. When a different person saves the document from the one who saved it last time, the name inserted by the field is automatically replaced with the name of the latest user. The field therefore generates and modifies the content of the document depending on the identity of the person saving the document. A “Ref” field (reference) is a more complex example. The field's result is text which is a “linked” copy of text from another place in the document, identified by a named bookmark.
As soon as the original text changes, the text inserted by the field changes as well. The “Ref” field may also affect the formatting of the copied text (e.g., by making the copied text uppercased). An even more complex example is a field which creates a table of contents for the document by: reproducing all the headings used in the document in a single location; organizing the headings according to their level to expose the hierarchy of the document; changing the formatting of the headings; automatically including the correct page number with each heading in the table of contents; and determining the numbering style to use for the table of contents. A table of contents that is the result of such a field is automatically updatable and self-organizing based on the contents of the document. Therefore, the maintenance of a table of contents is automated, so that the user is not required to create and maintain the table of contents manually. Certain fields may refer to one another. For example, a field whose result is the Index section of a document relies on the existence of fields throughout the document that mark index entries. Also, certain fields may be nested one inside of another and work together in a “recursive” manner to create the desired result. In order for an application to support the concept of fields, the application represents each field internally by a structure mirroring field properties. A field structure generally consists of the following two major parts:

1. field instructions
2. field result

“Field instructions” comprise the portion of a field containing pieces of information such as:

1. the name of the field;
2. zero or more arguments on which the field operates (e.g., file names, style names, bookmark names, numbers, literal text, and others); and
3. zero or more options specific to the field that further modify the behavior of the field (e.g., formatting options, numbering style settings, and others).
The “field result” comprises the portion of the field which contains the result of the operation performed by the field. The field result may simply be a number, but it may also be as arbitrarily rich and complex as a whole fully formatted document or OLE (Object Linking and Embedding) object. The result is the part that is updated by the field when the value of the arguments of the field changes. Since a field itself is an editable part of a document, it coexists with the surrounding content. The field may be separated from the surrounding content by field start and field end marks. Also, the instructions are separated from the result. In a first embodiment, the separators are visible to the user. In a second embodiment, the separators are not visible to the user. Correspondingly, in other embodiments, the instructions may or may not be visible to the user. Typically, a user is able to choose between a view where only field instructions are visible and one where only field results are displayed. Based on how the instructions portion of a field is structured, fields are divided into two major categories:

- simple fields: the instructions portion only contains instructions, and not richly formatted content or other embedded fields.
- complex fields: the instructions portion contains richly formatted content or other embedded fields.

The present invention provides a method for saving all the field information described above as ML without losing any data, by mapping the application's internal field structures described above to saved ML markup. The present invention represents the fields in ML depending on whether the field is a “complex” field or a “simple” one. In the example shown, the simple field is represented by fldSimple element 310 containing instructions 320 and result 330. Instructions 320 of the field are written out as the string value of the instr attribute.
Result 330 of the field is arbitrarily rich ML content written out as the child of fldSimple element 310. In the example given, the ML markup represents an “Author” field, whose function is to insert the name of the document author (John Doe) into the document, in upper case. Other field instructions and results may be used within a simple field, and a simple field may correspond to elements other than the fldSimple element without departing from the scope of the present invention. As shown, instructions 440 of a complex field may themselves contain arbitrarily rich content, including other fields. Accordingly, ML for a complex field includes the definition of two empty elements, such as fldChar 410 and instrText 420. Element fldChar 410 marks the beginning of the field, the boundary between the instructions and the result, or the end of the field, depending on the value of its fldCharType attribute 430 (e.g., “begin”, “separate”, “end”, etc.). Element instrText 420 contains the ML markup for the arbitrarily rich instructions of the field. In one embodiment, the elements appear in the following specific order for the field representation to be valid:

  <fldChar fldCharType="begin"/>
  <instrText> ... field instructions go here ... </instrText>
  <fldChar fldCharType="separate"/>
  ... field result goes here ...
  <fldChar fldCharType="end"/>

The actual contents of the field instructions may vary from application to application, depending on the types of fields the application supports. The attached appendix is a listing of an exemplary portion of a schema for generating the fields, in accordance with aspects of the present invention. At decision block 530, a determination is made whether each field used is a complex field. When the field being examined is a complex field, processing moves to block 540. However, if the field is not a complex field, the field is a simple field and processing moves to block 550.
In another embodiment, the fields may be categorized into categories other than complex fields and simple fields. At block 540, the properties of the complex field (when the field is a complex field) are mapped into elements, attributes, and values of the ML file. As an example, the fields may include “Creation Date of the Document”, “Formula”, a specially formatted number, a reference to text in another part of the document, or others that each have their own associated properties. Two elements used in mapping the properties of a complex field are the fldChar element and the instrText element (see above). At block 550, the properties of the simple field (when the field is a simple field) are mapped into elements, attributes, and values. An element used in mapping the properties of a simple field is the fldSimple element (see above). At decision block 560, a determination is made whether all the fields of the document have had their properties mapped to elements, attributes, and values. If not all of the fields have been processed, processing returns to block 530, where the category of the next field is determined. However, if all the fields have been processed, the process then moves to block 570. At block 570, the properties of the fields are stored in an ML document that may be read by applications that understand the ML. Once the properties are stored, processing moves to end block 580 and returns to processing other actions. In another embodiment, the properties of each field are mapped to elements, attributes, and values without a distinction being made between complex fields and simple fields.
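To illustrate, here is a small sketch of how an ML-aware application might parse the fldSimple representation described above, using Python's standard xml.etree module. The namespace URI and the surrounding sample markup are invented for illustration; they are not the actual schema from the patent's appendix.

```python
import xml.etree.ElementTree as ET

# Hypothetical markup modeled on the fldSimple structure described above.
# The "w" namespace URI and the r/t wrapper elements are assumptions.
SAMPLE = """\
<doc xmlns:w="http://example.com/wordml">
  <w:fldSimple w:instr=" AUTHOR ">
    <w:r><w:t>JOHN DOE</w:t></w:r>
  </w:fldSimple>
</doc>
"""

NS = {"w": "http://example.com/wordml"}

def extract_simple_fields(xml_text):
    """Return (instruction, result_text) pairs for each fldSimple element."""
    root = ET.fromstring(xml_text)
    fields = []
    for fld in root.findall(".//w:fldSimple", NS):
        # The instructions are the string value of the instr attribute.
        instr = fld.get("{http://example.com/wordml}instr", "").strip()
        # The result is the element's (arbitrarily rich) child content;
        # here we simply concatenate the text nodes beneath it.
        result = "".join(t.text or "" for t in fld.findall(".//w:t", NS))
        fields.append((instr, result))
    return fields

print(extract_simple_fields(SAMPLE))   # [('AUTHOR', 'JOHN DOE')]
```

Note that the parser needs no knowledge of what an “Author” field means: as the patent emphasizes, the field structure can be traversed by any application that understands the markup, whether or not it understands the field.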
Types of Machine Learning Algorithm You Should Know

First, a question: what is Machine Learning? Machine learning is the study of computer algorithms that can improve automatically through experience and by the use of data. It is a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so.

According to Arthur Samuel, “Machine Learning is a field of study that gives computers the ability to learn without being explicitly programmed.”

According to Tom Mitchell, “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”

Types of Machine Learning Algorithms

There are so many types of Machine Learning systems that it is useful to classify them in broad categories based on:

- Whether they are trained with human supervision. Machine Learning systems can be classified according to the amount and type of supervision they get during training. There are four major categories:
  - Supervised Learning
  - Unsupervised Learning
  - Semi-supervised Learning
  - Reinforcement Learning
- Whether they can learn incrementally on the fly:
  - Online Learning
  - Batch Learning
- Whether they work by simply comparing new data points to known data points, or instead detect patterns in the training data and build a predictive model, much like scientists do:
  - Instance-based Learning
  - Model-based Learning

Let's understand them one by one with a simple explanation.

Supervised Learning

Supervised learning is essentially function approximation. Suppose you are given a dataset where X contains the features and y the labels; in simple terms, x is the input and y is the output for those inputs. You train a model on this x and y, selecting the function that maps x → y with the least error.
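The function-approximation idea above can be sketched in a few lines of Python. The data is made up for illustration (the true relationship is y = 2x + 1), and the “model” is just a straight line y ≈ w*x + b fitted by closed-form least squares:

```python
# Made-up training data: features xs, labels ys (true relationship: y = 2x + 1).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares solution for a single feature:
# the slope w and intercept b that minimize squared error.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

def predict(x):
    """The learned function: maps a new input x to a predicted output."""
    return w * x + b

print(w, b)          # close to 2 and 1
print(predict(10))   # prediction for an unseen input
```

Once the function is learned, predictions for new inputs are just function calls, which is exactly the "predict more outputs" step described next.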
Now, with the help of that function, you can predict outputs for new inputs. Here we humans act as the teacher: we feed the computer training data containing the inputs, we show it the correct answers (outputs), and from the data the computer should be able to learn the patterns.

Regression

This model predicts a continuous value output. It is used for estimating the relationships between a dependent variable (y, or “labels”) and one or more independent variables (x, or “features”).

Classification

This model predicts a discrete value output: it gives a class as output. For example, in spam classification there are two outcomes, Spam or Non-spam, so for a given input the model outputs one of these classes.

Common Algorithms
- Support-vector machines (SVM)
- Linear regression
- Logistic regression
- Naive Bayes
- Linear discriminant analysis
- Decision trees
- K-nearest neighbors algorithm
- Neural networks (multilayer perceptron)

Unsupervised Learning

This model identifies patterns in data. It divides the data into categories and then, for new data, it checks where the new data lies and gives the output. Here the dataset has no labels (no y); we just have to find some pattern or structure in the data. There is no teacher at all; in fact, the computer might teach you new things after it learns patterns in the data. These algorithms are useful where the human expert doesn't know what to look for in the data.

Clustering and Association Rule Learning Algorithms

A clustering algorithm groups a set of objects in such a way that objects in the same group (called a cluster) are more similar to each other than to those in other groups. Association rule learning checks for the dependency of one data item on another data item and maps them accordingly so that the result can be more profitable: it tries to find interesting relations or associations among the variables of the dataset.
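A minimal k-means sketch of the clustering idea just described, on made-up one-dimensional data with two obvious groups (around 0 and around 10); the number of clusters k and all the values are invented for illustration:

```python
import random

random.seed(1)

# Made-up unlabeled data: two natural groups, near 0 and near 10.
data = [0.1, 0.4, 0.0, 0.3, 10.2, 9.9, 10.4, 10.0]
k = 2
centers = random.sample(data, k)   # start from two random points

for _ in range(10):
    # Assignment step: each point joins the cluster of its nearest center.
    clusters = [[] for _ in range(k)]
    for x in data:
        i = min(range(k), key=lambda i: (x - centers[i]) ** 2)
        clusters[i].append(x)
    # Update step: each center moves to the mean of its cluster.
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print(sorted(centers))   # one center near 0, one near 10
```

No labels were given anywhere: the algorithm discovered the two groups purely from the structure of the data, which is the defining property of unsupervised learning.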
Clustering: given many items, find cohesive subsets of items.

Association rule learning: given many baskets, find which items inside a basket predict another item in the basket.

Common Algorithms
- K-means clustering
- KNN (k-nearest neighbors)
- Hierarchical clustering
- Anomaly detection
- Neural networks
- Principal Component Analysis
- Independent Component Analysis
- Apriori algorithm
- Singular value decomposition

Semi-Supervised Learning

In supervised learning we use a labeled dataset, and in unsupervised learning we use an unlabeled dataset. Semi-supervised learning lies between these two: some algorithms can deal with partially labeled training data, usually a lot of unlabeled data and a bit of labeled data.

Here are some real-world applications of semi-supervised learning:
- Speech analysis
- Photo-hosting services (like Google Photos)
- Web content classification
- Text document classification

For example, Deep Belief Networks (DBNs) are based on unsupervised components called Restricted Boltzmann Machines (RBMs) stacked on top of one another. RBMs are trained sequentially in an unsupervised manner, and then the entire system is fine-tuned using supervised learning techniques.

Reinforcement Learning

This is a very cool method in Machine Learning and also a branch of AI. The learning system, called an agent in this context, can observe the environment, select and perform actions, and get rewards in return (or penalties, in the form of negative rewards). It must then learn by itself what the best strategy is, called a policy, to get the most reward. A policy defines what action the agent should choose in a given situation.

Look at this robot example to understand reinforcement learning. There is an environment in which a robot is the agent. There are two states, a Fire Side and a Water Side. If the robot goes to the Fire Side it is punished, and if it goes to the Water Side it is rewarded.
And once the robot has gone to the fire side, it learns the policy that the fire side is bad, so it won't go there again.

- Observe
- Select an action using the policy
- Perform the action
- Get a reward or penalty
- Update the policy (learning)
- Iterate until an optimal policy is found

Google's DeepMind project built AlphaGo with reinforcement learning to play the game of Go. It is the best example of the power of reinforcement learning: it beat the world champion of Go (Ke Jie). AlphaGo learned its winning policy by analyzing millions of games and then playing many games against itself; after that, it was just applying the policy it had learned.

Common Algorithms
- Q-learning
- Markov decision process
- Temporal difference

Online Learning
In online learning, you train the system incrementally by feeding it data instances sequentially, either individually or in small groups called mini-batches. Each learning step is fast and cheap, so the system can learn about new data on the fly as it arrives. Here, data becomes available in a sequential order and is used to update the best predictor for future data at each step. Online learning algorithms can also train systems on huge datasets that cannot fit in one machine's primary memory; this is called out-of-core learning. The algorithm loads part of the data, runs a training step on that data, and repeats the process until it has run on all the data.

Batch Learning
In batch learning, the system is incapable of learning incrementally: it must be trained using all the data. This typically takes a lot of time and computing resources, so the better option is to use an algorithm that can learn incrementally.

Instance-Based Learning
Instance-based learning (also called memory-based learning) is a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory. Because computation is postponed until a new instance is observed, these algorithms are sometimes referred to as lazy.
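The observe/act/reward/update loop above can be sketched with a tiny Q-learning-style value update for the fire/water robot. The reward values, learning rate, and number of steps below are made-up illustration values, not part of the original example.

```python
import random

# Reinforcement learning sketch: one state, two actions.
# Going to the fire side is punished (-1), the water side is rewarded (+1).
random.seed(0)                        # deterministic exploration
rewards = {'fire': -1.0, 'water': +1.0}
q = {'fire': 0.0, 'water': 0.0}       # learned value of each action
alpha = 0.5                           # learning rate

for step in range(100):
    action = random.choice(['fire', 'water'])   # observe/explore
    r = rewards[action]                         # get reward or penalty
    q[action] += alpha * (r - q[action])        # update the policy (learning)

policy = max(q, key=q.get)            # greedy policy after learning
print(policy)                         # the agent has learned to pick 'water'
```

After enough iterations the value estimates converge and the greedy policy avoids the fire side, exactly as described above.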
For example, instead of just flagging emails that are identical to known spam emails, your spam filter could be programmed to also flag emails that are very similar to known spam emails. This requires a measure of similarity between two emails; one such measure is to count the number of words they have in common. The system would flag an email as spam if it has many words in common with a known spam email. The system learns the examples by heart, then generalizes to fresh cases by comparing them to the learned examples using the similarity measure. For example, a new instance would be classified as a triangle if most of the most similar instances belong to that class.

Model-Based Learning
Another way to generalize from a set of examples is to build a model of these examples, then use that model to make predictions. This is called model-based learning.
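The spam example above translates almost directly into plain Python. The stored emails below are invented for illustration, and the similarity measure is exactly the one described: the number of words two emails have in common.

```python
# Instance-based (memory-based) learning sketch: store known examples,
# then label a new email like its most similar stored instance.
known = [
    ("win a free prize now", "spam"),
    ("cheap pills free offer", "spam"),
    ("meeting agenda for monday", "ham"),
]

def similarity(a, b):
    return len(set(a.split()) & set(b.split()))   # shared-word count

def classify(email):
    # compare the new instance against every stored instance (lazy learning)
    best = max(known, key=lambda pair: similarity(email, pair[0]))
    return best[1]

print(classify("claim your free prize"))   # spam
print(classify("monday meeting notes"))    # ham
```

No explicit model is built; all the work happens when a new instance arrives and is compared against memory.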
https://kishanmodasiya.medium.com/types-of-machine-learning-algorithm-you-should-know-73c0e5fa8451?source=user_profile---------3----------------------------
Andrey Chernyshev wrote: > Thanks Nathan! The Threads interface looks fine. Still, may be it > would be nice if two different methods are allocated for parkNanos and > parkUntil - passing the extra boolean parameter seems like an > overhead, though very little. I agree, just create another method rather than passing in a boolean flag. How are you going to avoid apps calling these public methods? We can do a security/calling stack check on each method send, but it may be preferable to make Threads a singleton and check in a getSingleton() call. > Another solution could be just to keep our own implementation of the > LockSupport in the luni-kernel (there is nothing to share for the > LockSupport with the original j.u.c, it contains almost no code). Is > there a reason why we can not do that? Probably best to keep our own primitive operations separate in the o.a.harmony package namespace. >> [2] >> >> >> dules/luni-kernel/src/main/java/org/apache/harmony/kernel/vm/Objects.java >> > > I guess the interesting question would be how do we rearrange the > already existing classes in Harmony, e.g. ObjectAccessor [3] and > ArrayAccessor [4] from the o.a.h.misc.accessors package of the > classlib, Do these need to be rearranged? Why can't we write the suncompat's Unsafe equivalents in terms of these accessors? > plus the o.a.util.concurrent.Atomics [5] from the DRLVM. Yep, these need to be moved into the kernel for all VMs to implement. We can define them in (a new) concurrent-kernel unless there is consensus that they would be more generally useful, i.e. misc-kernel or luni-kernel. > The proposed "Objects" seems like a combination of the above three. 
> For example, the following API set from the Objects: > > public static long objectFieldOffset(Field field) > public static void putLong(Object object, long fieldOffset, long > newValue) { > public static long getLong(Object object, long fieldOffset) > > is just equivalent to the one from the ObjectAccessor: > > public final native long getFieldID(Field f); > public final native void setLong(Object o, long fieldID, long value); > public final native long getLong(Object o, long fieldID); I agree. We should design the set the accessor/atomic methods that make sense, then express the suncompat version of Unsafe in terms of them. Andrey: did you check that everything in Objects is covered by existing accessor/atomics? > I guess j.u.concurrent won't use the direct read/write to objects, > except for volatile or atomic access? > Having two different interfaces for doing the same can be confusing - > it may not be clear, what is the relationship between "fieldID" from > the accessors package and "fieldOffset" from the Objects. Is there a big advantage to using longs rather than Field's directly? It looks like the Atomics may have been that way once, the javadoc still refers to '@parm field' though the signature is now 'long offset' <g>. > If we have a task to identify the minimum set of functionality which > is needed for j.u.concurrent, then it looks like the only object API > set we really may need to pick up is the one which is currently > contained in the o.a.util.concurrent.Atomics. I believe this is what Nathan did already in the Objects spec -- at least that was my understanding. > If the purpose is to propose some more generic interface for direct > object access, then why just don't move the existing XXXAccessor and > Atomics to the luni-kernel and go with their combination? Do accessors need to be in kernel? They are implemented solely in terms of JNI - right? +1 for Atomics moving into a kernel. Same comment as above for atomics etc. 
not being left as unguarded public types/methods to avoid surprises from mischievous apps. Regards, Tim > [3] > > > > [4] > > > > [5] > > > >> >> >> >> > > -- Tim Ellison (t.p.ellison@gmail.com) IBM Java technology centre, UK. --------------------------------------------------------------------- To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org For additional commands, e-mail: harmony-dev-help@incubator.apache.org
http://mail-archives.apache.org/mod_mbox/harmony-dev/200609.mbox/%3C45113DE6.7080205@gmail.com%3E
Dejan Custic (dcustic@ca.ibm.com), Information Developer, IBM Software Group, Rational. 20 Sep 2005; updated 10 Nov 2005. This article explains how to set up and work in a team environment with Rational Software Architect by importing an existing project and practicing model-driven, team-oriented development. The scenario in this article specifically involves two roles: configuration manager and developer. The user ID for the configuration manager is ucm_admin. The two developers have the user IDs dev1 and dev2. Before you begin: Installing and configuring software You must perform key software installation and configuration tasks before you set up your environment. Prerequisites The following software must be installed on client workstations: Setting up the ClearCase LT 6.0 environment The ClearCase LT 6.0 environment should be set up as follows: Using ClearCase 2003 You can also use ClearCase 2003 for this exercise. Some initial steps are different, but the ClearCase setup environment, including versioned object bases (VOBs), views, and so on, is the same. Setting up the user community ClearCase uses an integrated user identity based on the identity of the user that is logged in. In this scenario, a special account, called ucm_admin, performs the administrative operations in the source control system. You set this account to use a special group, called development, as its primary group. The users dev1 and dev2 also set the group development as their primary group. If you cannot arrange to set this group as the primary group for users in the domain, you can do one of the following things instead: If you use the default domain group called Domain Users, it requires less work and you do not need to use the environment variable; however, all users in the domain can read and potentially modify the ClearCase data. If you use a special group, you can hide information and restrict access to the ClearCase repositories (VOBs) to users in this group.
Configuring ClearCase groups and environment variables In this exercise, you configure your ClearCase group as development and set the environment variable on your workstation. If you use local accounts, create the local users and the group and add the users to the group. Otherwise, arrange for your network administrator to perform these tasks in the domain. To configure your ClearCase group locally: To set a user environment variable on Windows XP: CLEARCASE_PRIMARY_GROUP development Setting up the ClearCase environment To set up the ClearCase environment, the administrative user ucm_admin completes these high-level steps, which are described in detail in the following procedures. The administrator typically performs this setup once. Creating the initial project VOB and UCM project ClearCase stores file elements, directory elements, derived objects, and metadata in a repository called a VOB. Each UCM project must have a project VOB (PVOB). A PVOB is a special type of VOB that stores UCM objects, such as projects, activities, and change sets. A PVOB must exist before you can create a UCM project. As the administrative user, create a PVOB called projects and a UCM project called InitialProject_1 through the ClearCase Getting Started wizard. To create a PVOB and UCM project: InitialProject_1 Note: The Import Source Files option is not appropriate for IDE projects because the IDEs contain the logic that determines which file types should be placed under source control. You do not use this initial repository called sources. Planning UCM components As the number of files and directories in your system increases, you need a way to reduce the complexity of managing them. Components are the UCM mechanism for simplifying the organization of your files and directories. The elements that you group into a component typically implement a reusable piece of your system architecture. 
By organizing related files and directories into components, you can view your system as a small number of identifiable components, instead of as one large set of directories and files. Within a component, you organize directory and file elements into a directory tree. You can convert existing VOBs or directory trees within VOBs into components, or you can create a component from scratch. Note: The directory and file elements of a component reside physically in a VOB. The component object resides in a PVOB. Creating a VOB To create a VOB for your IDE project: test_vob Creating a new UCM project In this exercise, you do not use the UCM project that you originally created, called InitialProject_1. Instead, you create a new UCM project called InitialProject. After you complete this procedure, your new UCM project contains a foundation baseline and the UCM component that is associated with your project is modifiable. To create a UCM project: InitialProject The following figure illustrates how a new UCM project is displayed in the ClearCase Project Explorer. Creating ClearCase work areas With UCM, a work area is the user work environment that is implemented with two objects: a stream and a view. A stream defines the working configuration for the view, or views, associated with it. A UCM project has one integration stream, which is part of the shared work area, and multiple development streams, each of which is part of a developer's private work area. You typically work with a development stream and then deliver your work to the integration stream. The development stream tracks the activities that are assigned to you and enables you to work in isolation from the rest of the UCM project team. A view selects the appropriate versions of files and directories, as defined by a set of configuration rules, from all available versions in the VOB. ClearCase provides two types of views: snapshot and dynamic. With snapshot views, files are copied from the VOB to the local disk.
Dynamic views reference files directly in the VOB. Note: ClearCase LT uses snapshot views only. Create work areas to populate the initial project framework and file artifacts. To create a ClearCase work area: The following figure illustrates how a ClearCase work area is displayed in the ClearCase Project Explorer. Your work area is rooted under ucm_admin_InitialProject (for example C:\views\ucm_admin_InitialProject). In ClearCase, each VOB appears as a subdirectory under the view root. UCM components can exist either as an entire VOB, or as first-level subdirectories underneath a VOB. In this exercise, your component is located in a separate VOB. Sharing a modeling project You share a modeling project, so that other team members can also work on it. In this section, you log into Rational Software Architect as ucm_admin, import a modeling project and share it in ClearCase. Starting Rational Software Architect Start Rational Software Architect and create an initial workspace. To start Rational Software Architect: Note: Your snapshot view location and your workspace location should always be separate. Enabling the ClearCase SCM adapter and starting ClearCase Enable the ClearCase SCM adapter and start ClearCase. To enable the ClearCase SCM adapter and start ClearCase: Set the preference to automatically connect to ClearCase when Rational Software Architect starts. To automatically connect to ClearCase when Rational Software Architect starts: Importing an existing modeling project In this exercise, you import an existing modeling project called Piggy Bank. In accordance with the Rational Unified Process (RUP), the Piggy Bank sample UML model is divided into three models that each describes a different aspect of the system: the use-case model, analysis model, and design model. To import the Piggy Bank modeling project: The following figure illustrates how the Piggy Bank modeling project is displayed in the Model Explorer view. 
Sharing a project Share your project to allow other team members to access it. To share your project: Share project The following figure illustrates how a shared project is displayed in the Model Explorer view. Adding to the modeling project Make changes to your models and store them in ClearCase, so that other team members can view them. Open a diagram and update a use-case diagram with an action. To update a use-case diagram: Select Account [true] The following figure illustrates how a new action is displayed in the diagram editor. Saving your work and checking it in Save your work, and then check your changes into ClearCase. To save and check in your files: Delivering to the integration stream The ClearCase deliver operation makes the work in one stream available to another stream. Work is delivered in the form of activities or baselines. Differences between versions that are already part of the target stream of the delivery operation and versions that are being delivered are resolved through merging. Versions associated with an activity or baseline must be checked in to be delivered. Only activities that were modified after the last deliver operation from the development stream are considered for delivery. Deliver your files to the integration stream so that other users can work with the shared model. Until you deliver to the integration stream, users who join the UCM project see empty work areas. To deliver the activities to the integration stream: Note: Do not complete the delivery now. Leave the Delivering to View window open. You will complete the delivery later after you test files in the integration view. You have merged and checked out all of the files onto the integration stream and left these files checked out in the integration view. Viewing the ClearCase branch structure Each time that you revise and check in an element, ClearCase creates a new version of the element in the VOB. 
ClearCase can organize the different versions of an element in a VOB into a version tree. Like any tree, a version tree has branches. Each branch represents an independent line of development. Changes to one branch do not affect other branches until you merge. In UCM projects, the stream maintains a record of which branch or set of branches you use in a project; you typically do not work directly with branches. You can view the underlying ClearCase branch structure that is associated with the streams by looking at the version tree. To view the ClearCase branch structure: The following figure illustrates how a version tree is displayed. Testing the delivery in the integration view At this stage, you typically verify that the application works as expected by testing the delivery and confirming that all merges are resolved correctly and that all changes are delivered. However, because no one else is currently working on the project, you do not need to perform this verification now. Completing the delivery to the integration stream You should still have an incomplete delivery to your integration stream. To complete the delivery to the integration stream: Creating and recommending a baseline With UCM, at certain points in the development cycle as dictated by your development process, your integrator or project leader creates a new baseline based on the activities that you and your team members delivered. A baseline identifies one version of every element that is visible in a component. Typically, baselines go through a cycle of testing and defect fixing until they reach a satisfactory level of stability. When a baseline reaches this level, you designate it as a recommended baseline. When developers join the UCM project, they populate their work areas with the versions of directory and file elements from the UCM project’s recommended baseline. 
Alternatively, developers can join the UCM project at a feature-specific development stream level, in which case they populate their work areas with the development stream’s recommended baseline. This practice ensures that all members of the UCM project team start with the same set of files. In the integration stream, create a baseline and then recommend the baseline so that users can gain access to the latest UCM components. Creating a baseline In the integration stream, create a baseline for your UCM component. Note: You can also create a separate baseline for individual UCM components. To create a baseline: Recommending a baseline Recommend the baseline that users access when they rebase their development streams or join the project. To recommend a baseline: When you seed the list for the new baseline at the INITIAL promotion level, you see the new baseline that you just created. After you recommend a new baseline for the first time, you typically inform your team to join the UCM project and begin work. Rebasing your development stream The ClearCase rebase operation provides a way for you to update work areas with work that has been integrated, tested, and approved for general use. This work is represented by baselines. To work with the set of versions in the recommended baseline, you rebase your work area. To minimize the amount of merging necessary while you deliver activities, you rebase your work area with each new recommended baseline as it becomes available. After you rebase, you typically build and then test the source files in your development view to verify that your undelivered activities build successfully with the versions in the baseline. Update your work area with the latest UCM project changes by rebasing your development stream to the recommended baseline for the integration stream. 
To rebase the development stream: Developing models as part of a team Before you start this exercise, ensure that you performed the initial setup for each new user, as described in Before you begin: installing and configuring software. Setting up work areas for the developers This exercise refers to two users: dev1 and dev2. Set up each user’s work area by joining the UCM project and importing the shared Piggy Bank modeling project. To join the UCM project and import the Piggy Bank modeling project: As dev2, make a change to a use-case diagram by renaming an action. To rename an action: Display Selected Account The following figure illustrates how a renamed action is displayed in the diagram editor. Saving your work and checking it in While logged in as dev2, from Rational Software Architect, complete the delivery of your files to the integration stream. For more information, see Delivering to the integration stream. Because your delivery only reflects new model changes and no code changes, you do not need to test the projects in the integration view before you complete the delivery. As ucm_admin, create and recommend a baseline so that the changes that dev2 delivered are shared with the team. For more information, see Creating and recommending a baseline. Rebasing as dev1 As dev1, from Rational Software Architect, rebase your development stream to the recommended baseline for the integration stream to update your work area with the changes that dev2 delivered. To rebase to the recommended baseline: Note: You should always rebase your view with models closed. If you rebase your view when models are open, you are not prompted to reload and you can erase all changes from the previous version. Tips for working in ClearCase If you work in ClearCase outside of Rational Software Architect when a Rational Software Architect workspace is open, your changes are not automatically reflected in the workspace. 
If you do create this situation, resolve it as follows: These actions synchronize the file system state on disk with the in-memory state of the Model Explorer view and the source control status. Starting parallel development: Comparing and merging models In this exercise, you perform parallel development. The two users on your team make different changes to the same model element. In the next exercise, when the second user tries to check in and deliver files, the user must perform a merge to resolve the differences. The following steps describe the workflow in this exercise: A merge typically starts when you check in a model to a configuration management system and a newer version of the same model already exists in the repository. At the start of the merge, all non-conflicting differences and trivial conflicts are resolved automatically. You must then manually resolve the remaining conflicts by selecting a version of a model from which to accept changes. After you resolve the remaining conflicts, you can save the merged model and close the merge editor. Introducing conflicts to the model In this exercise, dev1 introduces a change, and then delivers the change to the integration stream. The dev2 user then makes a conflicting change, starts to deliver, and initiates a merge so that the conflicting change can be resolved. To make a change as dev1: Auditor To make a conflicting change as dev2: Manager The Merge window opens. You can view the differences and conflicts between contributor and ancestor files in the Left, Right, and Ancestor views. You can also view details about each difference and conflict in the Structural Differences view. The Merged result view displays the merged model. Resolving the conflict At this point, dev1 and dev2 have both made changes to the same file. The dev1 user has checked in and delivered changes. The dev2 user delivered a conflicting change, which started a merge. The dev2 user must resolve the conflict and complete the delivery. 
To resolve the conflict: The merge is now complete and the results are under ClearCase control. Conclusion This concludes the initial setup of a team development infrastructure. The next team development scenario will cover the use of Rational Software Architect and Concurrent Versions System (CVS). Resources About the author Dejan Custic is an information developer at IBM Rational in Kanata, Ontario, Canada.
http://www.ibm.com/developerworks/rational/library/05/0920_scenarios/index.html
Bonjour! There are lots of opinions about what this means for the virtualization software vendor landscape. But that's less important really than what it means for IT pros and developers. They're now getting much broader access to the benefits of a hypervisor platform, one that's interoperable with other hypervisors, and one that will have a common management interface. It's a reminder that virtualization is just another means to an end (obviously a great one, otherwise I wouldn't blog here). The value has become management, automation, processes, and the like. That's enough from me. Here's a video with Simon Crosby, CTO of Citrix, and Mike Neil, GM of virtualization, talking about Citrix Essentials for Hyper-V. Enjoy, Patrick [update: Two items. First, the real-world clock on our blog is out of sorts. I hit publish on this blog on Feb. 23, not Feb. 20. Just in case someone out there was wondering. I'll try to fix this issue. Second, Barry at Citrix blogged about Essentials for Hyper-V, too, and included a demo. Check it out.]
https://blogs.technet.microsoft.com/virtualization/2009/02/20/the-virtualization-essentials-from-citrix/
Jakarta EE Programming/Stateless Session Beans

Here is a short tutorial on using a stateless session EJB with Eclipse.

- In Eclipse, right-click on the Project Explorer view.
- Select New -> EJB Project. If EJB Project doesn't appear, select New -> Other, select EJB -> EJB Project and click on Next.
- On "Project name", type helloworld-ejb.
- Click on Finish.
- Right-click on the Project Explorer view.
- Select New -> Session bean (EJB 3.x). If Session bean (EJB 3.x) doesn't appear, select New -> Other, select EJB -> Session bean (EJB 3.x) and click on Next.
- On "Java package", type org.wikibooks.en.
- On "Class name", type MyFirstEJB. Leave the other options as they are.
- Click on Finish.
- In the same package, create a new interface MyFirstEJBRemote.
- Add the following method signature inside it:

public String sayHello();

- Add the following annotation above the signature of the interface:

@Remote

- Open the class MyFirstEJB.
- Remove the annotation @LocalBean.
- Add the following method inside it:

public String sayHello() {
    return "Hello World!";
}

- Right-click on the project.
- Select Export -> EJB JAR file. If you don't find the option EJB JAR file, click on Export... instead, select EJB -> EJB JAR file and click on Next >. The web project should be named helloworld.
- Choose a location for the destination.
- Click on Finish.
- Go to the folder where you created your JAR. You should see a JAR file named helloworld-ejb.jar. You can delete it.
- Right-click on the Project Explorer view.
- Select New -> Enterprise Application project.
- On "Project name", type helloworld-ear.
- Click on Next.
- Select the project helloworld-ejb.
- Click on Finish.

You should have a new project called helloworld-ear. Among other things, it should contain Deployment Descriptor: helloworld-ear/Modules/EJB helloworld-ejb.jar.

- Right-click Export -> EAR file, choose a destination and click on Finish.
- Create a copy of the EAR file and change the extension to .zip.
- Explore the content of the ZIP file. You should see the JAR file named helloworld-ejb.jar inside. You can delete the ZIP file.
- Copy/paste your EAR file into the deployment folder of your application server.
- Start your application server. Now your EJB is usable. Unfortunately, we don't know how to use it yet.
- Shut down your application server.
- Reuse the WAR project that you created on this page.
- Right-click on Java Resources/src.
- Select New -> Package.
- On name, type org.wikibooks.en.
- Right-click on the new package.
- Select New -> Servlet.
- On Class name, type EJBServlet.
- Type the following code in the class (note that the lookup returns a proxy implementing the remote interface, so the result is cast to MyFirstEJBRemote):

package org.wikibooks.en;

import java.io.IOException;
import java.io.PrintWriter;

import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EJBServlet extends HttpServlet {
    private static final long serialVersionUID = 5847939167723571084L;

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter out = new PrintWriter(response.getOutputStream());
        out.println("Calling the EJB...");
        try {
            InitialContext initialContext = new InitialContext();
            MyFirstEJBRemote myFirstEJB = (MyFirstEJBRemote) initialContext
                    .lookup("java:global/experience4/experience3/MyFirstEJB");
            out.println(myFirstEJB.sayHello());
        } catch (Exception e) {
            out.println(e);
        }
        out.flush();
        out.close();
    }
}

- If you are using an application server other than JBoss, you may have to change the lookup java:global/experience4/experience3/MyFirstEJB.
- Open the file web.xml in WebContent/WEB-INF.
- Before the first markup <servlet>, type the following code:

<servlet>
  <servlet-name>servlet</servlet-name>
  <servlet-class>org.wikibooks.en.EJBServlet</servlet-class>
</servlet>

- Before the first markup <servlet-mapping>, type the following code:

<servlet-mapping>
  <servlet-name>servlet</servlet-name>
  <url-pattern>/servlet</url-pattern>
</servlet-mapping>

- Right-click Export -> EAR file.
- For the destination, choose the deployment folder of the application server.
- Click on Finish.
- Start your application server.
- Go to the URL. You should see Calling the EJB..., which means that you managed to call the servlet. You should also see Hello World!. If you see a Java exception instead, the servlet failed to communicate with the EJB. You can verify that the text comes from the EJB by changing the text in the code and redeploying the EAR.
https://en.m.wikibooks.org/wiki/Jakarta_EE_Programming/Stateless_Session_Beans
In Python, I have the following example class:

class Foo:
    def __init__(self):
        self._attr = 0

    @property
    def attr(self):
        return self._attr

    @attr.setter
    def attr(self, value):
        self._attr = value

    @attr.deleter
    def attr(self):
        del self._attr

Typically, Python code strives to adhere to the Uniform Access Principle. Specifically, the accepted approach is:

- Expose instance variables directly, so callers write foo.x = 0, not foo.set_x(0).
- If you later need to control access, wrap the attribute with @property, which preserves the access semantics. That is, foo.x = 0 now invokes foo.set_x(0).

The main advantage to this approach is that the caller gets to do this:

foo.x += 1

even though the code might really be doing:

foo.set_x(foo.get_x() + 1)

The first statement is infinitely more readable. Yet, with properties, you can add (at the beginning, or later on) the access control you get with the second approach.

Note, too, that instance variables starting with a single underscore are conventionally private. That is, the underscore signals to other developers that you consider the value to be private, and they shouldn't mess with it directly; however, nothing in the language prevents them from messing with it directly. If you use a double leading underscore (e.g., __x), Python does a little obfuscation of the name. The variable is still accessible from outside the class, via its obfuscated name, however. It's not truly private; it's just kind of ... more opaque. And there are valid arguments against using the double underscore; for one thing, it can make debugging more difficult.
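To make the answer above concrete, here is a small runnable sketch (the class, attribute names, and validation rule are hypothetical) showing that callers keep plain attribute syntax even after access control is added, and that a double leading underscore only obfuscates the name rather than hiding it:

```python
class Account:
    def __init__(self):
        self._balance = 0        # single underscore: private by convention only
        self.__token = "s3cr3t"  # double underscore: mangled to _Account__token

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        # access control added later; calling code is unchanged
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

acct = Account()
acct.balance += 1              # still reads like plain attribute access
print(acct.balance)            # 1
print(acct._Account__token)    # s3cr3t -- mangled, but still reachable
```

Setting acct.balance = -5 now raises ValueError, yet no caller ever had to switch from attribute syntax to get_x()/set_x() calls.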
https://codedump.io/share/29AS7f47T6Xd/1/quotpublicquot-or-quotprivatequot-attribute-in-python--what-is-the-best-way
CC-MAIN-2017-22
refinedweb
247
61.63
NAME
     msgget -- get message queue

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <sys/types.h>
     #include <sys/ipc.h>
     #include <sys/msg.h>

     int msgget(key_t key, int msgflg);

DESCRIPTION
     If a new message queue is created by the msgget() function, the data
     structure associated with it is initialized as follows:

     +o   msg_perm.cuid and msg_perm.uid are set to the effective uid of the
         calling process.

     +o   msg_perm.gid and msg_perm.cgid are set to the effective gid of the
         calling process.

     +o   msg_perm.mode is set to the lower 9 bits of msgflg.

     +o   msg_cbytes, msg_qnum, msg_lspid, msg_lrpid, msg_rtime, and
         msg_stime are set to 0.

     +o   msg_qbytes is set to the system wide maximum value for the number
         of bytes in a queue (MSGMNB).

ERRORS
     The msgget() function will fail if:

     [ENOENT]  IPC_CREAT was not set in msgflg and no message queue
               associated with key was found.

SEE ALSO
     msgctl(2), msgrcv(2), msgsnd(2)

HISTORY
     Message queues appeared in the first release of AT&T System V UNIX.
http://manpages.ubuntu.com/manpages/oneiric/man2/msgget.2freebsd.html
CC-MAIN-2015-27
refinedweb
146
69.79
Created on 2018-02-12 12:06 by dilyan.palauzov, last changed 2018-03-17 18:07 by r.david.murray. This issue is now closed.

diff --git a/Lib/distutils/command/sdist.py b/Lib/distutils/command/sdist.py
--- a/Lib/distutils/command/sdist.py
+++ b/Lib/distutils/command/sdist.py
@@ -251,14 +251,11 @@ class sdist(Command):
         for fn in standards:
             if isinstance(fn, tuple):
                 alts = fn
-                got_it = False
                 for fn in alts:
                     if self._cs_path_exists(fn):
-                        got_it = True
                         self.filelist.append(fn)
                         break
-
-                if not got_it:
+                else:
                     self.warn("standard file not found: should have one of " + ', '.join(alts))
             else:
diff --git a/Lib/email/_header_value_parser.py b/Lib/email/_header_value_parser.py
--- a/Lib/email/_header_value_parser.py
+++ b/Lib/email/_header_value_parser.py
@@ -567,14 +567,7 @@ class DisplayName(Phrase):
     @property
     def value(self):
-        quote = False
-        if self.defects:
-            quote = True
-        else:
-            for x in self:
-                if x.token_type == 'quoted-string':
-                    quote = True
-        if quote:
+        if self.defects or any(x.token_type == 'quoted-string' for x in self):
             pre = post = ''
             if self[0].token_type=='cfws' or self[0][0].token_type=='cfws':
                 pre = ' '
diff --git a/Lib/idlelib/config.py b/Lib/idlelib/config.py
--- a/Lib/idlelib/config.py
+++ b/Lib/idlelib/config.py
@@ -402,7 +402,7 @@ class IdleConf:
         because setting 'name' to a builtin not defined in older IDLEs
         to display multiple error messages or quit. See.
-        When default = True, 'name2' takes precedence over 'name',
+        When default is True, 'name2' takes precedence over 'name',
         while older IDLEs will just use name. When default = False,
         'name2' may still be set, but it is ignored.
         """

We generally don't accept patches on bugs.python.org. Please open a pull request on github, see

Dilyan, please explain what you believe the problems to be and how the patch solves it. These seem to be 3 separate issues.

Do not change idlelib.config. config_main.def contains 'default = True' or 'default = False' and that is what the docstring references.
The variables got_it in distutils/command/sdist and quote in email/_header_value_parser can be skipped, making the code shorter and faster. The risk of introducing a bug is higher than the minimal benefit of making the changes. Thus we do not typically accept changes like this. We'll clean up such code when we touch it for other reasons.
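Both rewrites in the patch are standard idioms, and they can be sketched in isolation (the helper names here are invented for illustration, not the stdlib code): a boolean flag collapses into for/else, and a flag-plus-loop collapses into any().

```python
# Invented helpers illustrating the two idioms from the patch above.

def collect_standard_file(alts, exists, found, warn):
    # for/else: the else clause runs only if the loop was never broken,
    # replacing the got_it flag from the sdist.py hunk.
    for fn in alts:
        if exists(fn):
            found.append(fn)
            break
    else:
        warn("standard file not found: should have one of " + ", ".join(alts))

def needs_quoting(defects, token_types):
    # any() with a generator expression replaces the quote flag plus
    # loop from the _header_value_parser.py hunk.
    return bool(defects) or any(t == "quoted-string" for t in token_types)
```

The for/else form short-circuits exactly like the flag version: the warning fires only when no alternative was found.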
https://bugs.python.org/issue32829
CC-MAIN-2021-49
refinedweb
387
52.66
I have a homework assignment that I have no idea where to go or what to do next. Could you please help? Thanks! The assignment reads: your program is to read from a file students.dat which may contain a list of student names that could contain as many as eleven characters. In addition to the list of names, your program should read four sets of numbers which represent two exams worth 20%, a final exam worth 30%, and homework worth 30%. The student's name and grades are on the first line, with no title lines. Find the weighted average grade for all students. The average grade is to be rounded to the nearest integer, not truncated to an integer. Then, sort the list of students by grade, and assign letter grades to each student. The sorting can be a selection or a bubble sort, and is to be done in a function sub-program. Say also if the student passes or fails. Put the sorted list into a file called outstu.dat. Put a two line title above the output. The grade distribution is 85-100 A 70-84 B 55-69 C 40-54 D 0-39 F This is what I have so far:
Code:
/* Homework 12 */
#include FILENAME "students.dat"
#include <stdio.h>
#include <math.h>
#include <string.h>
struct record {
    char name[12];
    int ex1, ex2, final, hw, ave, grade passf1[5];
};
int main(void)
{
    struct record s[50];
    int nos, i=0;
    FILE *students;
    {
    while (fscanf(students, "%s %i %i %i %i," s[i].name, s[i].ex1, s[i].ex2, s[i].final, s[i].hw)==5)
        ++i;
    s[i].ave=s[i].ex1*0.2+s[i].ex2*0.2+s[i].final*0.3+s[i].hw*0.3;
    if(s[i].ave>=85) s[i].grade='A';
    if(s[i].ave>=70) s[i].grade='B';
    if(s[i].ave>=55) s[i].grade='C';
    if(s[i].ave>=40) s[i].grade='D';
    if(s[i].ave>=0) s[i].grade='F';
    if(s[i].ave>=40) strcpy(s[i].passf1, "pass");
    else strcpy(s[i].pass1, "fail");
    sort(s, nos);
    for(i=0, i<nos, ++i);
    fprintf(outf, "%-11s %5i %5i %5i %5i %5i %c %4sh," s[i].name, s[i].ex1, s[i].ex2, s[i].final, s[i].hw)
void sort(record s[], int n);
    int k,l m;
    record hold;
    for(k=0, k<=2, ++k);
    m=k;
    for(i=k+1, j<=n);
    if(s[j] < x[m], ave);
    m=j;
hold=x[m]; x[m]=x[k]; x[k]=hold;
https://cboard.cprogramming.com/c-programming/115958-help-beginning-programming-printable-thread.html
CC-MAIN-2018-05
refinedweb
437
74.39
IntelliJ IDEA 7 M1 introduces full-blown support for Spring and Hibernate through dedicated facets. Traditionally, Spring and Hibernate are integrated with a wide range of IntelliJ IDEA productivity-boosting features. With IntelliJ IDEA you can create Spring applications from scratch with just a few keystrokes. Here I outline some examples that demonstrate how IntelliJ IDEA can help you.
- Context files are created from templates
- ALT+INS inside of a context file lets you instantly add beans and patterns
- A very wide range of beans is supported. Each of them is created through a dedicated live template — all you have to do is type values for required properties. (More live templates to come in later builds)
- If required libraries are missing from your system, they can be automatically downloaded and configured
- You don’t need to dig through class files to manually generate properties, just select them from the list
- Quick-fixes are available for rapid error resolution
- The Hibernate diagram not only displays relationships, but lets you create classes and entities, automatically generating required code.
- As always, CTRL+SPACE code completion helps you with bean names, property values, settings and tons of other stuff. It can recognize your entire project structure and help even with beans you created by annotating Java code.
- IntelliJ IDEA refactorings are also Spring and Hibernate-aware, so you can modify and upgrade your projects at full speed.
To get hands-on and try Spring with IntelliJ IDEA, download the latest EAP build of IntelliJ IDEA 7 M1. Technorati tags: IntelliJ IDEA, IntelliJ, Spring, Hibernate
This feature is great, and really follows IDEA’s principles. I have only one question: would that API be customizable? With Spring 2.x namespace handlers that would be an uber-feature, like a quick IDE based on your DSL.
http://blog.jetbrains.com/idea/2007/06/spring-and-hibernate-coding-assistance/
CC-MAIN-2015-22
refinedweb
301
51.78
so I'm working on a class for a vector which I want to hash like a tuple, and to do so, a vector instance needs access to its properties. here's the constructor for the performative properties I'm using, and the vector class:

def newProp():
    newdict = {}
    getter = newdict.get
    setter = newdict.__setitem__
    def wrapper(vec,val):
        if val is None: return
        vtype = val.__class__
        if vtype is str:
            setter(vec, float(val) ); return
        if numtype(vtype):
            setter(vec,val)
    return property(getter, wrapper)

class vector(object):
    __slots__ = []
    X = newProp()
    Y = newProp()
    Z = newProp()
    W = newProp()

    def __init__(vec,*other): # other is for special initialization
        vec.X = vec.Y = vec.Z = vec.W = None

    def __hash__(vec):
        return hash( (vec.X, vec.Y, vec.Z, vec.W) )

now there's a lot to be questioned, particularly because I don't know everything about how Python works... but this is just the most performative approach I could think of for an automated vector class. the problem I'm getting is hash(vector()) leads to a recursion error since dict.get calls vec.__hash__ for each property. I'm looking for a workaround that's just as performative or better. thanks. :)
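One workaround (my sketch, not from the thread): key the closed-over dict by id(vec) rather than by the instance itself, so the property lookup never needs to hash the vector. Note that id()-keyed storage is never cleaned up when instances die, so this sketch leaks entries; a production version would need some lifecycle handling.

```python
# Sketch of a workaround: storage keyed by id(vec) avoids calling
# hash(vec) inside the property getter, which breaks the recursion.
def new_prop():
    storage = {}

    def getter(vec):
        # Plain dict lookup by id(vec): no call to hash(vec) here.
        return storage.get(id(vec))

    def setter(vec, val):
        if val is None:
            return
        if isinstance(val, str):
            val = float(val)
        storage[id(vec)] = val

    return property(getter, setter)

class Vector:
    __slots__ = ()
    x = new_prop()
    y = new_prop()

    def __init__(self, x=None, y=None):
        self.x = x
        self.y = y

    def __hash__(self):
        # Safe now: reading self.x never re-enters __hash__.
        return hash((self.x, self.y))
```

With this change hash(Vector(1, 2)) evaluates without recursion, because the getters do an id-based dict lookup instead of using the vector as the key.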
https://www.daniweb.com/programming/software-development/threads/509600/recursion-error-with-performative-property
CC-MAIN-2018-39
refinedweb
201
57.47
On Thu, Jan 13, 2005 at 10:59:02PM +0100, Sylvain Wallez wrote:
> Release early, release often, and whiteboard isn't even supposed to
> really work. So that's ok ;-)

I left some bugs so it would be allowed into the whiteboard ;) Specifically, the editor which combines the editing of the model, binding, and template in a single page does not quite work yet, and there is an issue with dynamically detecting changes in macro repositories which are included by other macro repositories (just the classic "when should we check for updates, and how deep down the tree should we check" problem.) There are some code design issues which will need to be cleaned up (e.g. TopDefinition, ugh!) After the fact I realized that macros should probably use the same namespace wherever they are used (model, binding, template,...) since they have the same syntax and semantics everywhere. There are more features to add, but basic functionality is working, as you can see in the separate Swan editors for xreports, sitemaps, models, bindings, and templates, so I figure it is ready for others to check for usefulness and start to refine into something which we may eventually want to merge into the main distribution.

If anybody would like to check this out, read this link: then change to the forms directory:

cd cocoon/src/blocks/forms

be sure to record your current branch for later reference:

svn info | grep URL

and use the "svn switch" command to switch to the whiteboard forms:

svn switch

or like this (note the https) if you are a committer:

svn switch

When you want to switch back to your old branch make sure you are in the forms directory:

cd cocoon/src/blocks/forms

then switch back to the branch you recorded earlier:

svn switch <branch-URL-you-recorded-earlier>

I am not sure, but you might have to do a "build clean" between building the two branches to get a successfully running build.
Note that the whiteboard branch of Cocoon Forms is NOT supported and the interfaces in it *can and will* be modified on a whim. This is to allow ideas to be experimented with and refined via actual shared code (in addition to using chat and email), without prematurely incurring the burden of support and deprecation cycles. Since Swan is a heavy user of the current set of experimental features, it could be viewed as a guinea pig for testing them. If you modify, add, or remove a cforms feature then please also edit the Swan samples to match your changes. This way you can judge the changes by how they work in practice, rather than just in the abstract. > Thanks for this! You're welcome :) --Tim Larson
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200501.mbox/%3C20050114044125.GB21395@localhost%3E
CC-MAIN-2017-09
refinedweb
458
56.32
Just a short notice to show how simple it is to send mail from Python using the SMTP module. Note: you must have an accessible SMTP server running somewhere. I have one on my domain, but word on the street says that if you ask nicely, your ISP can provide you with one. Anyway, here is the code:

from smtplib import SMTP
import datetime

debuglevel = 0

smtp = SMTP()
smtp.set_debuglevel(debuglevel)
smtp.connect('YOUR.MAIL.SERVER', 26)
smtp.login('USERNAME@DOMAIN', 'PASSWORD')

from_addr = "John Doe <john@doe.net>"
to_addr = "foo@bar.com"
subj = "hello"
date = datetime.datetime.now().strftime( "%d/%m/%Y %H:%M" )

message_text = "Hello\nThis is a mail from your server\n\nBye\n"

msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s" \
      % ( from_addr, to_addr, subj, date, message_text )

smtp.sendmail(from_addr, to_addr, msg)
smtp.quit()

Some notes:
- You'll have to insert your mail server and SMTP port. Note that the port can also be 25 (or any other, if you've configured the server appropriately)
- At least on my server, the username must be the full email address
- The message must contain all these fields to be accepted
- Set debuglevel to 1 to see lots of insightful debugging information from the module
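On newer Python versions the same message is usually built with the stdlib email package rather than by hand-formatting header strings; a sketch (server details omitted, addresses reused from the post above) might look like:

```python
# Sketch using email.message.EmailMessage to build the message; the
# resulting object could be handed to smtp.send_message(msg) in place
# of the manual sendmail() call above.
from email.message import EmailMessage

def build_message(from_addr, to_addr, subject, body):
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)   # handles encoding and the blank separator line
    return msg

msg = build_message("John Doe <john@doe.net>", "foo@bar.com",
                    "hello", "Hello\nThis is a mail from your server\n\nBye\n")
```

This avoids getting the header/body separator and escaping wrong, which the manual %-formatting approach leaves up to you.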
http://eli.thegreenplace.net/2008/09/10/sending-mail-from-python-with-smtp/
CC-MAIN-2015-06
refinedweb
208
65.22
I'm currently trying to convert a little project written in C++ into a more conventional style. I'm learning the very basics of C++ and I thought it would help me to understand. What the original author has done is write a lot of code in header files. He has used them almost exclusively. I thought it would be a good idea to create proper header and .cpp files. The header I'm looking at right now has no includes or anything. It goes straight into defining the one function that it consists of. When I try to put the prototype of the function in the header, and the actual function into its own .cpp file, I get a lot of problems with non-included variables (even namespace std). What was available to the old header when it had the whole function in it? How could the old header have this information when the cpp file I created doesn't?

Just to clarify, the older header had:

void func_name() { code, no sign of any includes }

Now:

header:

void func_name();

cpp file:

void func_name() { code, no sign of any includes, exact same as previous .h }

Thank you for your help.

Post some code that demonstrates the problem or I don't think anyone can figure out what the problem is.

the old header... unless it gets included somewhere... doesn't get compiled. So it can contain just about anything and not cause any problems.

It does get included. It's so frustrating working with this sloppy code. I mean I'm no good but even to me this looks like bad news. I thought there might be some logic to it but apparently not. I'll keep investigating other routes...

Remember that each cpp file must include its own header so that the functions can call each other.

I use these macros to keep loops clean and simple.
#define LoopForward(min,var,max) for(var=min;var<=max;var++)
#define LoopBackward(min,var,max) for(var=max;var>=min;var--)
#define LoopForwardStartAndLength(var,start,length) LoopForward(start,var,((start)+(length))-1)
#define LoopBackwardStartAndLength(var,start,length) LoopBackward(start,var,((start)+(length))-1)
#define LoopForwardLengthFromZero(var,length) LoopForward(0,var,(length)-1)
#define LoopBackwardLengthFromZero(var,length) LoopBackward(0,var,(length)-1)

I don't see how those macros make the code cleaner. In my opinion all they do is make things more obfuscated. Do you have macros like

Code:
#define DIVIDE(x,y) (x/y)
#define ASSIGN(x,y) (x=y)
#define COMPARE(x,y) (x==y)

as well?
http://forums.codeguru.com/showthread.php?505962-Code-Formatting-Help...&goto=nextnewest
CC-MAIN-2017-34
refinedweb
446
66.84
I have wanted to create a development blog for the longest time now, and I have tested a lot of methods, from creating everything from scratch to using a CMS, but I knew that I wanted the front end to be built with React and to look good, and none of the solutions that I tried were good enough for me until today. I found this library called Frontity which connects to WordPress's REST API and gets everything you need from there; it is really simple to use and requires little to no setup to start the blog.

The setup

Why reinvent the wheel and build a new CMS when we already have WordPress, which is amazing and open source? It is just as easy as running the command

$ npx frontity create <app-name>

After running this command you get to choose from 2 themes, mars and WordPress's 2020 theme. I chose to go with the mars theme because that is what I was looking for, but you can go with either, and there are even themes online you can choose, or just build your own. After you initiate the project you just have to point it to WordPress, so go into your project directory and edit the file frontity.settings.js. There you will have to edit 2 values:

const settings = {
  "name": "my-first-frontity-project",
  "state": {
    "frontity": {
      "url": "
      "title": "Abod's blog",
      "description": "A look into my brain 🧠"
    }
  },
  "packages": [
    {
      "name": "@frontity/mars-theme",
      "state": {
        "theme": {
          "menu": [
            [ "Home", "/" ],
            [ "Portfolio", " ]
          ],
          "featured": {
            "showOnList": true,
            "showOnPost": true
          }
        }
      }
    },
    {
      "name": "@frontity/wp-source",
      "state": {
        "source": {
          "url": "
        }
      }
    },
    "@frontity/tiny-router",
    "@frontity/html2react"
  ]
};

and change them to your own domain, or you can just leave them the same for now to test it out; these links are where Frontity is going to try to contact the WordPress REST API to get the information needed, such as posts, tags, and authors.
You can now run the website by typing

$ npx frontity dev

That is how simple it is to create your blog with WordPress as a headless CMS. For me, instead of hosting my own WordPress instance on my server I just use 000webhost, but you can use whatever you want. Then, so that people won't be able to get to the front end of my website, I just created a new folder in the public_html/wp_content/themes/ directory and created 2 files in there for WordPress to know it is a theme: style.css and index.php. I left style.css empty but populated index.php with a redirect script

<?php header( "Location: ); ?>

So now every time someone tries to get to my WordPress front end they are going to be redirected to the React app instead.

Addons

Prismjs

As a developer, I like to post some code snippets on my blog from time to time, and I think all developers can agree that syntax highlighting is a good thing to have for readability, so I wanted to use Prism.js with it. It was just as simple as installing Prism.js with npm or yarn

$ npm i prismjs or $ yarn add prismjs

and then in my <project>/packages/mars-theme/src/post.js I just added

import Prism from "prismjs";

And then added all the languages that I would want to use, for instance

import "prismjs/components/prism-typescript"

And the same thing for the plugins

import "prismjs/plugins/line-numbers/prism-line-numbers"

And now, in order for the Prism engine to run, we have to create a useEffect hook which is called in the Post function

useEffect(() => {
  Prism.highlightAll();
}, []);

This is not going to take effect with the normal WordPress code block so I use an addon

Done!

Cookie consent

With today's GDPR we have to tell the user that we are using cookies on this website, so how would we set it up?
I am using a React library called react-cookie-consent, and it is just as simple as installing it with

$ npm i react-cookie-consent or $ yarn add react-cookie-consent

importing it in our <project>/packages/mars-theme/src/index.js

import CookieConsent from "react-cookie-consent";

and then adding it at the bottom of our Theme function

<CookieConsent
  location="bottom"
  buttonText="Got it!"
  cookieName="myAwesomeCookieName2"
  style={{ background: "#2B373B"}}
  buttonStyle={{ color: "#fff", backgroundColor: "#1f38c5", fontSize: "24px" }}
  expires={150}
>
  This website uses cookies to enhance the user experience.{" "}
</CookieConsent>

And that is it; now you have a cookie consent screen on your website, that easy. Hope this was useful and thanks for reading!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/abodsakah/create-a-simple-react-blog-with-wordpress-48n0
CC-MAIN-2022-21
refinedweb
774
60.48
In most rules, the problematic code has been highlighted in red, with the key elements to concentrate on in bold. // this is a problem because it is in red There will usually be a description explaining the example followed by some better code in blue. // this is better because it is in blue Where code not directly relevant to the example has been removed, it has been replaced with a ':' character. The code equivalent of “blah blah blah”. In the below class definition, the entire lower portion of the class definition (where the methods are) has been removed from where the ':' character is. public class InputFieldRenderInfo { public String fieldName; public String style; public boolean readOnly = false; : <= Unimportant code removed here }
http://javagoodways.com/read_example_Reading_the_examples.html
CC-MAIN-2021-21
refinedweb
120
51.18
What is AutoComplete Supposed to Do?

I have an Ext.field.Select field that I've bound to a store. It is functioning fine as a drop-down, but I would like it to function as an Autocomplete. Checking the Autocomplete box does not seem to change the function at all. The documentation says that it sets the DOM autocomplete attribute to on, but I don't know what the implications of this are, considering that typing in the text area launches a select popup.

Hope this helps. It is used for prefilling previously entered values.
Nagwani
Sencha Designer Development Team

Thanks, I found that, but I can't figure out how it makes sense in the context of a Select control. When you type in the text box for the select, a popup provides the list to select from. If you refresh the list it does not return (autocomplete) to the previously selected item. I suspect that autocomplete is just inherited and isn't really useful for a select, but maybe there is a way to configure the select to make it useful. What I would expect AutoComplete to mean for a Select control is that an empty text box is provided and, as you type, a drop-down list gives you a filtered set of records from a store that match what you have typed in some way. I see lots of comments about providing real AutoComplete functionality but haven't found a working sample. Any pointers would be greatly appreciated.
https://www.sencha.com/forum/showthread.php?198220-What-is-AutoComplete-Supposed-to-Do
CC-MAIN-2015-35
refinedweb
255
63.39
Computer Science Archive: Questions from October 07, 2010 - FunnyPlanet8001 askedImplement an array-based program that manages a collection of... Show moreHow should I start doing this program? Implement an array-based program that manages a collection of DVDs. The data for each DVD will consist of a title, category, running time, year of release, and price. The user must be able to add new DVDs to the collection, remove a DVD, edit information stored for a DVD, list all DVDs in a specified category (or all DVDs if no category is specified), and find & display a DVD in the collection given its title. the user should also be able to sort the collection of DVDs by year, by title, or by category. The program must be able to read and save a collection to file. You MUST create a class to represent each DVD object. The collection of DVDs must be an array of objects of the class type. NOTE - NO user input stuff should be inside any classes. I will not grade favorably any work that has cin statements and prompt messages anywhere but inside the main application program (or functions thereof). Bonus Opportunity (30 points) - In addition to creating the class to represent the DVD object, create a class implement the collection of DVDs. The functionality required of the collection should be implemented within the class (or by friends of the class), and the application program should respond to user commands by invoking the appropriate methods of the class. • Show less0 answers - WordToTheMisses askedShould a company select proprietary, open source, or free software for its most important business i... More »1 answer - Anonymous asked1. Write a program to find the average density of earth 2. Write a program for the conversion of tem0 answers - Anonymous askedWrite a program that prompts the user to enter a single keystroke and stores the charac... 
Show moreINSTRUCTIONS Write a program that prompts the user to enter a single keystroke and stores the character pressed in a ‘char’ variable. The program then processes the keystroke and determines if the keystroke was a letter, numeral, or symbol. If the keystroke was a letter, display a message indicating whether the letter is a vowel or consonant. If the character is a numeral, information about whether the numeral is even or odd is displayed. If the character is a symbol, information is displayed indicating whether the symbol is found in the lower ASCII table (7 bit representations) or the upper ASCII table (8 bit representations). Your program should make use of #define codes to classify the keystroke. Your program should contain at least one instance of an IF, IF-ELSE, IF-ELSE-CHAIN, and SWITCH-CASE structure. Use the following shell to start your program: #include <stdio.h> #ifndef __CHARCODES__ #define __CHARCODES__ // type codes #define LETTER 1 #define NUMBER 2 #define SYMBOL 3 // sub-type codes #define VOWEL 4 #define CONSONANT 5 #define ODD 6 #define EVEN 7 #define UPPER_ASCII 8 #define LOWER_ASCII 9 #endif int main() { char keyStroke = 0; int typeCode = -1, subTypeCode = -1; // GET THE KEYSTROKE (remember to clean phantoms) // ANALYZE THE KEYSTROKE (IF, IF-ELSE, IF-ELSE-CHAIN here) // DISPLAY RESULTS BASED ON SET CODES (SWITCH-CASE) return 0; } • Show less1 answer - Anonymous askedi... 
Show moreCreate a Flowchart using this program: #include <iostream> #include <string.h> using namespace std; int main() { char oper; int var1 = 0, var2 = 0; while(true) {cout<<"enter variable1 operation variable 2 (ex 1 + 2) "; cin>>var1; cin>>oper; cin>>var2; cout<<var1<<oper<<var2<<" = "; switch(oper) {case '+':cout<<var1+var2; break; case '-':cout<<var1-var2; break; case '*':cout<<var1*var2; break; case '%':if(var2>var1) cout<<"Try again"; else cout<<var1%var2; break; case '/':if(var2==0) cout<<"Error-division by zero"; else cout<<(double)var1/var2; break; case 'q': case 'Q': return 0; default: cout<<"Invalid operator"; } cout<<endl; } } And using the question: Write a C++ program that mimics a calculator. The program should prompt the user and take as input a number, an operator such as + , - , / , * followed by a second number. It then should output the first number, the operator, the second number and the results. Check to make sure division by zero is not performed by your program. Some sample /input output are as follows: Input Output 1 + 5 1 + 5 = 6 1 - 9 1 – 9 = -8 10 / 3 3.33 1 / 0 Error – division by zero 2 * 100 2 * 100 = 200 2 % 20 Try again 2 Q 1 Exit program Your program must make use of if-else, while loop, and switch control structures. • Show less1 answer - Wxdude asked//Return the reversal of an... Show moreBook exercise 5.3: (Palindrome integer) Write the following two methods //Return the reversal of an integer, i.e. reverse(456) returns 654 public static int reverse(int number) //Return true if number is palindrome public static boolean isPalindrome(int number) Use the reverse method to implement isPalindrome. A number is a palindrome if its reversal is the same as itself. Write a main method that prompts to user to enter an integer and reports whetehr the integer is a palindrom. • Show less1 answer - Wxdude askedWrite a class that contains the following t... 
Show moreBook exercise 5.9: (Conversions between feet and meters) Write a class that contains the following two methods: /** Converts from feet to meters */ public static double footToMeter(double foot) /** Converts from meters to feet */ public static double meterToFoot(double meter) • Show less1 answer - Wxdude askedBook exercise 6.3: (Counting occurrence of numbers) Write a program that reads the integers between... Show moreBook exercise 6.3: (Counting occurrence of numbers) Write a program that reads the integers between 1 and 100 and counts the occurrences of each. Assume the input ends with 0. time. • Show less1 answer - Wxdude askedWrite a program that reads in ten numbers and display... Show moreBook exercise 6.5: (Printing distinct numbers) Write a program that reads in ten numbers and displays distinct numbers (i.e., if a number appears multiple times, it is displayed only once). Hint: Read a number and store it to an array if it new. If the number is already in the array, ignore it. After the input, the array contains the distinct numbers. Here is a sample run of the program (sample input is underlined for emphasis only): Enter ten numbers: 1 2 3 2 1 6 3 4 5 2 The distinct numbers are: 1 2 3 6 4 5 • Show less2 answers - Anonymous askedThe rate of coo... Show moreModel the cooling of a keg of beer at 25 0C which is placed in a coolroom held at 00c. The rate of cooling of the keg is given by the equation: d?beer = UA (?room-?beer) dt mCp where: d?beer = temperature of the beer and keg (0C); room = tem- perature of the coolroom ( 0C); t = time ( s); m = cobined mass of beer and the keg ( kg); Cp = combined heat capacity of beer and keg ( KJ Kg-1 K-1); U = overall heat transfer coefficient (KWm-2 k-1); A = surface area of keg (m2). The keg contains 50 L of beer. The beer has a density of 1008 kgm-3. The empty keg weighs 10 kg. The combined heat capacity of beer and keg is 3:56 KJ Kg-1 K-. The overall heat transfer coefficient is 0:1 KWm-2 k-1. 
How to Use Constants in C Programming

Even with C programming, computers and their electronic brethren enjoy doing repetitive tasks. In fact, anything you do on a computer that requires you to do something over and over demands that a faster, simpler solution be at hand. Often, it's your mission to simply find the right tool to accomplish that goal.

How to use the same value over and over

It may be too early in your C programming career to truly ponder a repetitive program. But that doesn't mean you can't code programs that use values over and over.

Exercise 1: Create a new project, ex0511, and type in the source code, as shown in It's a Magic Number. Save it, build it, run it.

IT'S A MAGIC NUMBER

#include <stdio.h>

int main()
{
    printf("The value is %d\n",3);
    printf("And %d is the value\n",3);
    printf("It's not %d\n",3+1);
    printf("And it's not %d\n",3-1);
    printf("No, the value is %d\n",3);
    return(0);
}

The code uses the value 3 on every line. Here's the output:

The value is 3
And 3 is the value
It's not 4
And it's not 2
No, the value is 3

Exercise 2: Edit the code to replace the value 3 with 5. Compile and run.

You might think that Exercise 2 is cruel and requires a lot of work, but such things happen frequently in programming. There has to be a better way.

Basics of constants in C programming

A constant is a shortcut: something used in your code to substitute for something else. A constant operates at the preprocessor level, before the code is compiled. It's created by using the #define directive, in this format:

#define SHORTCUT constant

SHORTCUT is a keyword, usually written in all caps. The preprocessor replaces it with the text specified as constant. The line doesn't end with a semicolon because it's a preprocessor directive, not a C language statement. But the constant you create can be used elsewhere in the code, especially in the statements.
The following line creates the constant OCTO, equal to the value 8:

#define OCTO 8

After defining the constant, you can use the shortcut OCTO anywhere in your code to represent the value 8, or whatever other constant you define; for example:

printf("Mr. Octopus has %d legs.",OCTO);

The preceding statement displays this text:

Mr. Octopus has 8 legs.

The OCTO shortcut is replaced by the constant 8 when the source code is compiled.

The #define directive is traditionally placed at the top of the source code, right after any #include directives.

You can define strings as well as values:

#define AUTHOR "Dan Gookin"

The string that's defined includes the double quotes. You can even define math calculations:

#define CELLS 24*80

The definitions can be used anywhere in the source code.

How to put constants to use in C programming

Anytime your code uses a single value over and over (something significant, like the number of rows in a table or the maximum number of items you can stick in a shopping cart), define the value as a constant. Use the #define directive.

Preparing for Constant Updates shows an update to the source code in Exercise 1. The VALUE constant is created, defined as being equal to 3. Then that constant is used in the text. The constant is traditionally written in all caps, and you can see in the source code how doing so makes it easy to find and identify as a constant.

PREPARING FOR CONSTANT UPDATES

#include <stdio.h>

#define VALUE 3

int main()
{
    printf("The value is %d\n",VALUE);
    printf("And %d is the value\n",VALUE);
    printf("It's not %d\n",VALUE+1);
    printf("And it's not %d\n",VALUE-1);
    printf("No, the value is %d\n",VALUE);
    return(0);
}

Exercise 3: Create a new project named ex0513 using the source code from Preparing for Constant Updates. If you like, you can use the source code from Exercise 1 as a starting point. Build and run. The output is the same as for the first version of the code.
But now, whenever some bigwig wants to change the value from 3 to 5, you need to make only one edit, not several.

Exercise 4: Modify the source code from The Computer Does the Math so that the two values 8 and 2 are represented by constants.

THE COMPUTER DOES THE MATH

#include <stdio.h>

int main()
{
    puts("Values 8 and 2:");
    printf("Addition is %d\n",8+2);
    printf("Subtraction is %d\n",8-2);
    printf("Multiplication is %d\n",8*2);
    printf("Division is %d\n",8/2);
    return(0);
}
2009-06-15 11:02:20 8 Comments

I am doing a https post and I'm getting an exception of ssl exception Not trusted server certificate. If I do normal http it is working perfectly fine. Do I have to accept the server certificate somehow?
@MikeL 2016-10-15 14:27:46

Sources that helped me get my self-signed certificate working on an AWS Apache server, connecting from an Android device with HttpsURLConnection:

SSL on aws instance - amazon tutorial on ssl
Android Security with HTTPS and SSL - creating your own trust manager on the client for accepting your certificate
Creating self signed certificate - easy script for creating your certificates

Then I did the following:

create_my_certs.sh:

bash create_my_certs.sh yourdomain.com

Place the certificates in their proper place on the server (you can find the configuration in /etc/httpd/conf.d/ssl.conf). All of these should be set:

SSLCertificateFile
SSLCertificateKeyFile
SSLCertificateChainFile
SSLCACertificateFile

Restart httpd using sudo service httpd restart and make sure httpd started:

Stopping httpd: [ OK ]
Starting httpd: [ OK ]

Copy my-private-root-ca.cert to your Android project's assets folder.

Create your trust manager:

SSLContext sslContext;
CertificateFactory cf = CertificateFactory.getInstance("X.509");
InputStream caInput = context.getAssets().open("my-private-root-ca.cert.pem");
Certificate ca;
try {
    ca = cf.generateCertificate(caInput);
} finally {
    caInput.close();
}

And make the connection using HttpsURLConnection:

HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
connection.setSSLSocketFactory(sslContext.getSocketFactory());

That's it, try your https connection.
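The trust-manager snippet above parses the certificate but stops short of wiring it into an SSLContext. A sketch of the missing middle, following the standard KeyStore/TrustManagerFactory route from the Android "Security with HTTPS and SSL" article that answer cites; the class name SelfSignedClient and the method shape are my own, and the caller supplies the certificate stream and URL:

```java
import java.io.InputStream;
import java.net.URL;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SelfSignedClient {
    // Opens an HTTPS connection that trusts exactly one self-signed root CA.
    // caInput: a stream over the CA certificate (PEM or DER), e.g. from assets.
    public static HttpsURLConnection open(InputStream caInput, String urlString)
            throws Exception {
        // Parse the self-signed root certificate.
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        Certificate ca;
        try {
            ca = cf.generateCertificate(caInput);
        } finally {
            caInput.close();
        }

        // Put it into an in-memory KeyStore that acts as our trust store.
        KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        keyStore.load(null, null);            // start with an empty store
        keyStore.setCertificateEntry("ca", ca);

        // Build a TrustManager that trusts only the CAs in that KeyStore.
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(keyStore);

        // Wire it into an SSLContext and hand its socket factory to the connection.
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, tmf.getTrustManagers(), null);

        HttpsURLConnection connection =
                (HttpsURLConnection) new URL(urlString).openConnection();
        connection.setSSLSocketFactory(context.getSocketFactory());
        return connection;
    }
}
```

Unlike the trust-everything answers further down, this keeps certificate checking intact: any server certificate not chaining to the bundled CA is still rejected.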
@Rohit Mandiwal 2014-06-03 11:08:11

Courtesy Maduranga.

When developing an application that uses https, your test server doesn't have a valid SSL certificate. Or sometimes the web site is using a self-signed certificate, or a free SSL certificate. So if you try to connect to the server using Apache HttpClient, you will get an exception telling you "peer not authenticated". Though it is not good practice to trust all certificates in production software, you may have to do so according to the situation. This solution resolves the exception caused by "peer not authenticated". But before we go to the solution, I must warn you that this is not a good idea for a production application. It would violate the purpose of using a security certificate. So unless you have a good reason, or you are sure that it will not cause any problem, don't use this solution.

Normally you create a HttpClient like this:

HttpClient httpclient = new DefaultHttpClient();

But you have to change the way you create the HttpClient. First you have to create a class extending org.apache.http.conn.ssl.SSLSocketFactory. Then create a method like this. Then you can create the HttpClient:

HttpClient httpclient = getNewHttpClient();

If you are trying to send a post request to a login page, the rest of the code would be like this. You get the html page into an InputStream. Then you can do whatever you want with the returned html page. But here you will face a problem. If you want to manage a session using cookies, you will not be able to do it with this method. If you want to get the cookies, you will have to do it via a browser. Only then will you receive cookies.

@bernhardrusch 2014-09-19 08:49:21 Really had a hard time to get this working - but with your solution it finally worked - thank you!
@Hardik 2016-01-27 13:48:07 Thanks, your solution worked!
+1

@SerCna 2014-08-21 05:36:28

Just use this method as your HTTPClient:

@Erick Guardado 2014-01-20 23:00:16 I made this class and found it in your code with this

@Juan Sánchez 2013-09-19 15:44:18

If you are using a StartSSL or Thawte certificate, it will fail for Froyo and older versions. You can use a newer version's CAcert repository instead of trusting every certificate.

@Speise 2012-10-05 08:41:27

None of these answers worked for me, so here is code which trusts any certificate.

@Speise 2012-10-05 08:45:00 This code works fine with a URL pattern like localhost:8443/webProject/YourService
@Michal 2013-11-07 12:58:25 The constructor SSLSocketFactory(null, null, null, null, null, X509HostnameVerifier) is undefined

@Syed Ghulam Akbar 2012-07-23 06:19:40

For some reason the solution mentioned for httpClient above didn't work for me. In the end I was able to make it work by correctly overriding the method when implementing the custom SSLSocketFactory class. This is how it worked perfectly for me. You can see the full custom class and implementation on the following thread:
If you want to just accept no matter what, then use this pseudo-code to get what you need with the Apache HTTP Client: CustomSSLSocketFactory: FullX509TrustManager is a class that implements javax.net.ssl.X509TrustManager, yet none of the methods actually perform any work, get a sample here. Good Luck! @Codevalley 2010-09-27 12:18:18 Why is there the FACTORY variable?? @Nate 2010-09-27 17:16:49 Clarified in post. You need FACTORY for the other override methods a SocketFactory needs to create a socket. @Codevalley 2010-09-28 07:42:01 return FACTORY.createSocket(); is having problem. The created socket is giving null Pointer exception on execute(). Also, I noticed, there are 2 SocketFactory classes. 1. org.apache.http.conn.ssl.SSLSocketFactory and 2. javax.net.ssl.SSLSocketFactory @Codevalley 2010-09-28 08:30:57 Sorry, I think after little more digging, I found the problem, I have to implement the connectSocket() function, which by default returns null. Any idea on this? @Cephron 2011-03-18 18:23:31 Hi Nate, this looks like something helpful for my situation. But, I'm having some issues with making the CustomSSLSocketFactory. What class is it supposed to extend? Or what interface is is it supposed to implement? I've been experimenting with: javax.net.SocketFactory, javax.net.ssl.SSLSocketFactory, org.apache.http.conn.scheme.SocketFactory, all of which force implementation of other methods... So, in general, what are the namespaces of the classes used in your code? Thank you. :) @Nate 2011-03-18 19:40:52 Cephron, CustomSSLSocketFactory implements LayeredSocketFactory and SocketFactory, in the org.apache.http.conn.scheme package. At the time of this original post, HttpClient in Android was 4.0.x (not 4.1). @Cephron 2011-03-18 21:19:21 Thank you for the clarification! But, in that case, I'm hitting the same wall as Codevalley. What do we put in the connectSocket() function? The FACTORY object doesn't seem to have anything to perform that function... 
@Omar Rehman 2011-06-08 12:57:29 Does not work for me, gives me same Not trusted server certificate error. @RajaReddy PolamReddy 2012-08-27 10:59:26 how can i get this class FullX509TrustManager @saxos 2010-10-22 14:55:26 You can also look at my blog article, very similar to crazybobs. This solution also doesn't compromise certificate checking and explains how to add the trusted certs in your own keystore. @bajohns 2010-09-19 06:20:55 While trying to answer this question I found a better tutorial. With it you don't have to compromise the certificate check. *I did not write this but thanks to Bob Lee for the work @Donn Felker 2011-06-13 21:09:22 This is the perfect answer. I made an update post that shows how to deal with non listed CA's as I was getting a PeerCertificate error. See it here: blog.donnfelker.com/2011/06/13/… @Ulrich Scheller 2009-06-16 08:34:44 This is what I am doing. It simply doesn't check the certificate anymore. and @Joe D'Andrea 2010-09-02 21:13:53 This looks good! I'm using WebView, however, and only need to connect to a https server for test purposes. (The client can't provision one with a matching FQDN, nor can they test on http.) Is there any way to tackle this when using WebView? Do I just drop this code in the Activity where the WebView is and "it just works?" (Suspecting not … ??) @Robert 2011-07-25 10:39:29 i'm trying to use that solution step by step and I'm getting such exception: 07-25 12:35:25.941: WARN/System.err(24383): java.lang.ClassCastException: org.apache.harmony.luni.internal.net. in place where I'm trying to open connection... any idea why?? @devsri 2012-02-22 17:24:12 Hey I did as suggested by you but I am getting the exception as javax.net.ssl.SSLException: Received fatal alert: bad_record_mac. I have also tried replacing TLSwith SSLbut it did not help. 
Please help me out. Thanks!

@cimnine 2012-03-01 07:44:17 While this is good during the development phase, you should be aware of the fact that this allows anyone to MITM your secure connection by faking a random SSL certificate, which makes your connection not secure anymore. Have a look at this question to see it done the right way, here to see how to receive a certificate, and finally this to learn how to add it to the keystore.

@Ulrich Scheller 2012-05-03 15:03:35 The above is just setup code for the normal request. You can see an example at github.com/uscheller/EasyHttpClient/blob/master/src/…

@Ulrich Scheller 2012-05-08 15:16:39 Since it is published under the Apache License, you are free to use it anywhere.

@cimnine 2013-04-27 12:18:35 @Unknown Here you go: gist.github.com/cimnine/5472913

@Sai Durga 2015-10-09 11:27:42 Thank you so much. After spending a lot of time searching, your solution works for me.

@user207421 2016-03-12 02:14:58 You should not use, recommend, or post this code. It is radically insecure. You should solve the actual problem.

@Radhey 2018-03-28 10:07:26 Perfectly working here (Y). Thanks champ!

@Matti Lyra 2009-06-15 11:54:22 I don't know about the Android specifics for SSL certificates, but it would make sense that Android won't accept a self-signed SSL certificate off the bat. I found this post from the Android forums which seems to be addressing the same issue:

@Sam97305421562 2009-06-15 11:57:49 What is the solution for the same to get an https connection?

@Sam97305421562 2009-06-15 12:02:36 Will the HTTP POST method work for the same?

@Matti Lyra 2009-06-15 12:15:33 Like I said, I don't know the Android specifics, but I'd say that you have to tell the platform what the SSL certificate of the server you are connecting to is. That is, if you are using a self-signed cert that's not recognized by the platform.
The SslCertificate class will probably be helpful: developer.android.com/reference/android/net/http/… I'll dig into this later today when I have more time.
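The trust-everything workaround discussed in this thread is Java/Android-specific, but the same idea, and the same security caveat, can be illustrated with Python's standard library. This is a hypothetical analogue for illustration, not code from the thread:

```python
import ssl

# Build a context that, like the no-op FullX509TrustManager above, accepts
# any certificate. Only ever use this against test servers: it allows an
# attacker to MITM the connection with a forged certificate.
def insecure_context():
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # skip hostname/FQDN matching
    ctx.verify_mode = ssl.CERT_NONE  # skip certificate chain validation
    return ctx

ctx = insecure_context()
print(ctx.verify_mode == ssl.CERT_NONE)  # True
```

Note that `check_hostname` must be disabled before `verify_mode` is set to `CERT_NONE`, or `ssl` raises a `ValueError`; the same "verify nothing" switch exists in most HTTP stacks, and in every one of them it should be confined to test code.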
https://tutel.me/c/programming/questions/995514/https+connection+android
This section of the forum is now closed; we are working on a new support model for WDL that we will share here shortly. For Cromwell-specific issues, see the Cromwell docs and post questions on Github.

Race condition for simple script causing a job to run forever?

I have a very simple Python script that parses an Illumina filename for a lane identifier and writes it to stdout.

import sys
import re

# take a command line argument that uses an Illumina machine output;
# read the lane name from this filename
filename = sys.argv[1]
result = re.search("L([0-9]{3})", filename)
print(result.group(0))

For example, this takes the filename "Undetermined_S0_L001_R1_001.fastq.gz" and outputs "L001". The WDL task looks like this:

task discover_lane_name_from_filename {
    String filename
    File python_lane_name

    command {
        python3 ${python_lane_name} $(unknown)
    }

    output {
        String lane = read_string(stdout())
    }
}

I am calling this as part of a scatter operation, so it runs more than once for different filenames. In my last workflow, this task ran 4 times for different inputs. 3/4 of these completed very quickly. 1/4 continued running for > 90 minutes. I checked the stdout file in the execution directory for the failing job, and it contains the correct output "L004", so the Python script is completing successfully, but the job (running on SLURM) never completes. My best guess is that this is a race condition; Cromwell is not expecting the job to complete so quickly, and is waiting for something to change before declaring the job complete. I understand that spawning new jobs to perform simple operations like this incurs lots of overhead. How should I alter my workflow so that it runs consistently?

Best Answer

So what we ended up doing was making the script-epilogue configurable on a per-backend basis. Please let me know if you have any questions about this. Thanks!

Answers

Hi @mmah, I'm not convinced the length of the jobs is what matters... What version of Cromwell are you running on?
Cromwell v25. The problem appears to be related to the state WaitingForReturnCodeFile. I see jobs enter this state, but not exit:

[INFO] [04/05/2017 13:14:44.123] [cromwell-system-akka.dispatchers.backend-dispatcher-99] [akka://cromwell-system/user/SingleWorkflowRunnerActor/WorkflowManagerActor/WorkflowActor-612a5dcc-f952-40b3-98be-c62194d3fd91/WorkflowExecutionActor-612a5dcc-f952-40b3-98be-c62194d3fd91/612a5dcc-f952-40b3-98be-c62194d3fd91-EngineJobExecutionActor-ancientDNA_screen.discover_lane_name_from_filename:2:1/612a5dcc-f952-40b3-98be-c62194d3fd91-BackendJobExecutionActor-612a5dcc:ancientDNA_screen.discover_lane_name_from_filename:2:1/DispatchedConfigAsyncJobExecutionActor] DispatchedConfigAsyncJobExecutionActor [UUID(612a5dcc)ancientDNA_screen.discover_lane_name_from_filename:2:1]: Status change from - to WaitingForReturnCodeFile

For jobs that succeed, the execution directory contains an rc file with content that looks like a return code: 0. For jobs that fail, there is an rc.tmp file. I don't know what the states are, or how a job transitions between states.

Hi @mmah, Cromwell waits in WaitingForReturnCodeFile until the rc file appears, but it looks like that never happens here for some reason. Could you please email me any files that look like they were created by Cromwell in this directory? Thanks!

Hi Matthew, in the failed shards I see something like the following in the execution/stderr file: It looks like something is going wrong where SLURM thinks the job continues to run and yet SLURM is unable to kill it. If SLURM thinks the job is still running then Cromwell will too, which explains the hanging. Unfortunately I don't know any more about SLURM, but for further debugging, the script.submit file in the execution directory is what Cromwell actually used to submit the job. Please let us know if there's anything more we can do to help. Thanks, Miguel

What process writes the rc file? Is this done in the parent process, or in the child process?
Where can I find the code that does this? execution/script does something like sync; sync is a likely candidate for nondeterministic behavior.

Yeah, our team discussed this today. I'm going to remove it from the 25 hotfix and 26+.

I understand that sync is slated for removal in v26. I will wait for the v26 release and recheck my workflow then.

So what we ended up doing was making the script-epilogue configurable on a per-backend basis. Please let me know if you have any questions about this. Thanks!
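For reference, the lane-parsing step from the original post can be written a little more defensively so that a filename without a lane token does not crash on `.group(0)`. The function name below is my own, not from the thread:

```python
import re

def lane_from_filename(filename):
    """Return the Illumina lane token (e.g. 'L001'), or None if absent."""
    # Same pattern as the original script: 'L' followed by three digits.
    match = re.search(r"L([0-9]{3})", filename)
    return match.group(0) if match else None

print(lane_from_filename("Undetermined_S0_L001_R1_001.fastq.gz"))  # L001
print(lane_from_filename("no_lane_token.fastq.gz"))                # None
```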
https://gatkforums.broadinstitute.org/wdl/discussion/comment/37963/
Important: Please read the Qt Code of Conduct

confusion in connect function

Hello, I am trying to create a socket client and server program, but I am not using Qt's predefined functions; I am using the Windows header files to develop this program. My problem is that there is a function named connect for TCP sockets that comes from the <Winsock2.h> header file, and similarly there is a connect function in Qt. How can I make the program use the Windows call instead of the Qt one?

- mrjj Lifetime Qt Champion last edited by

@ManiRon Hi. You can just do it. Make sure to link the right DLLs for the functions you use, or there will be undefined-symbol linker errors. For the connect function, the documentation shows it lives in ws2_32.dll, so you must link against ws2_32 via the .pro file.

- Christian Ehrlicher Lifetime Qt Champion last edited by

To use the connect from the global namespace you can use ::connect(...)
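The ::connect answer works because C++'s scope-resolution operator picks the global-namespace symbol over a member of the same name. A rough analogue in Python, illustrative only and not from the thread, is qualifying a shadowed name with its module:

```python
import socket

def connect(target):
    # Imagine a framework (like Qt's connect) exporting this name into
    # your scope; calling the bare name resolves to this definition.
    return "framework connect: " + target

print(connect("button"))  # framework connect: button

# Qualifying with the module reaches the low-level socket API instead,
# much as writing ::connect reaches the global namespace in C++.
sock = socket.socket()
print(callable(sock.connect))  # True
sock.close()
```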
https://forum.qt.io/topic/95538/confusion-in-connect-function
In today’s Programming Praxis exercise, our goal is to produce all combinations of two elements of a list, without duplicates. Let’s get started, shall we?

import Data.List

The basic idea is pretty simple: we start with the first element and make all combinations with the other ones. Since that element is now no longer needed, we can remove it and repeat the process for the rest of the list.

pairs :: [a] -> [(a, a)]
pairs xs = [(a,b) | (a:bs) <- tails xs, b <- bs]

Some tests to see if everything is working properly:

main :: IO ()
main = do print $ pairs [1..4]
          print $ length (pairs [1..4]) == 6
          print $ pairs [1..6]
          print $ length (pairs [1..6]) == 15
          print $ pairs [1..16]
          print $ length (pairs [1..16]) == 120

Tags: bonsai, code, combinations, Haskell, kata, pairs, praxis, programming
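For comparison (my addition, not part of the original post), the same tails-based idea translates directly to Python, where itertools.combinations computes exactly this:

```python
from itertools import combinations

def pairs(xs):
    # All unordered pairs, mirroring the Haskell tails-based comprehension:
    # pair each element with every element that comes after it.
    return [(a, b) for i, a in enumerate(xs) for b in xs[i + 1:]]

print(pairs([1, 2, 3, 4]))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(pairs([1, 2, 3, 4]) == list(combinations([1, 2, 3, 4], 2)))  # True
print(len(pairs(list(range(1, 17)))) == 120)  # True
```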
https://bonsaicode.wordpress.com/2013/05/03/programming-praxis-pairing-students/
Introduction to Loudmouth
Mikael Hallendal, Imendio AB

Introduction

In this article I will give a quick glance at what Jabber is and how you can use a library called Loudmouth to develop applications that make use of Jabber for communication.

Brief introduction to Jabber

Jabber is a protocol for presence and notification. It started out as a chat protocol (like ICQ, AIM, MSN) with the big difference that it was designed to bridge all of the other chat protocols together. That is, Jabber was designed to be able to speak to ICQ, AIM, MSN and other non-open protocols. This is done through server-side transports, and to the client developer it is transparent whether a message is sent to an ICQ contact or another Jabber contact. Jabber is XML based and one connection maps to an XML document. This is a very beautiful and extensible design and makes Jabber very powerful for things other than chatting.

Introduction to Loudmouth

Loudmouth is a small and lightweight library written in C. It is designed to be easy to use while not hiding the full power of the Jabber protocol. In order to use it, knowledge of the Jabber protocol is needed. By using Loudmouth the user doesn't have to know that the underlying protocol uses XML to send the messages. The user does however need to know that Loudmouth uses a tree-based design for messages (which corresponds to the XML nodes that are sent over the wire). We will later see that a Loudmouth message corresponds to the XML node that is sent between the server and the client.

Installing Loudmouth

Loudmouth can be downloaded from here. Loudmouth has two requirements, GLib 2.0 and GnuTLS (optional). Loudmouth uses GLib to avoid reimplementing common data structures and to get better platform independence. This means that Loudmouth works, or will work with small modifications, on any platform where GLib is available. It's been successfully tested on Linux, Mac OS X and Windows.
When secure communication through SSL is required, Loudmouth uses the GnuTLS library. It's most likely a good idea to make sure this is installed so that the Loudmouth library is built with this support.

Using Loudmouth

As mentioned earlier, a Loudmouth message (LmMessage) corresponds to an XML node. For example:

<message type="chat" to="foo@jabber.bar.com">
  <body>My text</body>
</message>

which is a Jabber message sent to foo@jabber.bar.com with the text "My text", corresponds to:

LmMessage *m = lm_message_new_with_subtype ("foo@jabber.bar.com",
                                            LM_MESSAGE_TYPE_MESSAGE,
                                            LM_MESSAGE_SUB_TYPE_CHAT);
lm_message_node_add_child (m->node, "body", "My text");

As can be seen, working with Loudmouth is pretty straightforward if you know the Jabber protocol. It is also very powerful since you can construct any elements you desire. As mentioned above, Jabber is very extensible and Loudmouth keeps it that way by letting the application developer construct any kind of elements and add arbitrary children to them.

Synchronous Code Example

All communication in Loudmouth is done through the LmConnection object. Each operation has an asynchronous mode and a synchronous mode, depending on what kind of application is being developed. If you are writing a GUI application you most definitely want to use the asynchronous functions; that way your GUI won't freeze up while sending/receiving messages. In some applications you might want things to happen synchronously, for example if you are writing small tools like a little program that just sends a message to a contact.
This will be used for this example:

#include <loudmouth.h>

int main (int argc, char **argv)
{
        LmConnection *conn;
        GError       *error = NULL;
        LmMessage    *m;

        if (argc < 6) {
                g_print ("Usage: test server username password recipient message");
                return -1;
        }

        conn = lm_connection_new (argv[1]);
        if (!lm_connection_open_and_block (conn, &error)) {
                g_print ("Couldn't open connection to '%s':\n%s\n",
                         argv[1], error->message);
                return -1;
        }

        if (!lm_connection_authenticate_and_block (conn, argv[2], argv[3],
                                                   "MyTestApp", &error)) {
                g_print ("Couldn't authenticate with '%s' '%s':\n%s\n",
                         argv[2], argv[3], error->message);
                return -1;
        }

        m = lm_message_new (argv[4], LM_MESSAGE_TYPE_MESSAGE);
        lm_message_node_add_child (m->node, "body", argv[5]);

        if (!lm_connection_send (conn, m, &error)) {
                g_print ("Error while sending message to '%s':\n%s\n",
                         argv[4], error->message);
        }

        lm_message_unref (m);
        lm_connection_close (conn, NULL);
        lm_connection_unref (conn);

        return 0;
}

Asynchronous Description

The above example showed how to use the synchronous APIs to send a message through Loudmouth. If you instead want to use the asynchronous APIs, you define callback functions that will be run when an operation is finished. Here is an example: lm-send-async.c

Conclusion

Using Loudmouth requires that you know the Jabber protocol in order to be able to construct the messages. By using Loudmouth you get a nice C API to read and construct the Jabber messages instead of having to work with raw XML. Furthermore, Loudmouth handles all the communication for you, letting you concentrate on your application instead. This article only showed the synchronous API; the asynchronous API is built on callbacks. That will be handled in another article.

References

- The Loudmouth project page
- More code examples
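To make the message-to-XML correspondence concrete, here is the same stanza from the article built as a tree with Python's standard library. This is purely illustrative; Loudmouth itself is a C API:

```python
import xml.etree.ElementTree as ET

# The <message> stanza from the article, built as a tree - the same
# parent/child shape Loudmouth's LmMessage/LmMessageNode API exposes in C.
message = ET.Element("message", {"type": "chat", "to": "foo@jabber.bar.com"})
body = ET.SubElement(message, "body")
body.text = "My text"

# Serializing the tree yields the wire format Jabber actually sends.
print(ET.tostring(message, encoding="unicode"))
```

This is the same two-step pattern as the C snippet above: create the root message node, then attach a child carrying the body text.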
http://developer.imendio.com/publications/introduction_loudmouth
Opened 15 years ago
Closed 15 years ago
Last modified 4 years ago

#1475 enhancement closed fixed (fixed)

Basic and Digest HTTP-Auth Implementation.

Description

Attachments (1)

Change History (25)

comment:1 Changed 15 years ago by

comment:2 Changed 15 years ago by

Branched to dreid/http-auth-1475

comment:3 Changed 15 years ago by

Since it's auth, cred seems more appropriate to me.

comment:4 Changed 15 years ago by

Currently it's checked in as twisted/web2/httpauth.py. There was some question as to its general usefulness and applicability to other protocols (such as SMTP and IMAP DIGEST-MD5) and SIP. I personally don't see any reason why there should be any problems, except that SIP seems a little more liberal with certain parts of the response, and SMTP and IMAP require that challenges and responses be base64 encoded. All of which seem to be able to be provided by subclasses of the provided API. The one big problem is the ICredentialFactory (both in name and in what it does); unfortunately it is a necessary object for doing HTTP, as it serves the purpose of remembering challenges across requests.

comment:5 Changed 15 years ago by

Merged forward and reorganized in http-auth-1475-2. There is still some question as to whether or not the appropriately reusable parts should be pulled out of web2, so I'm shelving that issue for a later time. Reassigning for review.

comment:6 Changed 15 years ago by

comment:7 Changed 15 years ago by

How do you set up a server which supports more than one kind of authentication (e.g. accepts either digest or basic)?

comment:8 Changed 15 years ago by

Currently you don't. At least not with the same HTTP auth wrapper. I wanted it to be as simple as possible and didn't deem multiple credential factories a worthwhile effort (at that time). Though now that I look at the implementation it should be reasonably simple.
comment:9 Changed 15 years ago by

comment:10 Changed 15 years ago by

comment:11 Changed 15 years ago by

- I notice there is some commented-out Auth code in http.py. Now that your code obsoletes the ideas behind it, it should be deleted.
- ICredentialFactory.getChallenge is documented as requiring a 'str' return value, but both implementations return (scheme, {...}).
- ICredentialFactory.decode should probably be documented as able to raise error.LoginFailed.
- I don't think it's worth pretending DigestCalcResponse and DigestCalcHA1 are classes (I assume that's what the intention behind their capitalization was).
- Shouldn't DigestCredentialFactory.outstanding have some occasional cleanup? Probably no need for a reactor timer, just a check in getChallenge or decode to see if it's been so long since an item was added.
- I am slightly unhappy about checking for iter on the credentialFactories object and wrapping it in a list if it doesn't have one. But that is not a big deal. In wrapper.py:

self.credentialFactories = dict([(factory.scheme, factory) \
                                 for factory in credentialFactories])

- Man, you don't need no backslash! That is so C. Does C even require that? I don't know.
- What would make this branch so totally hot and awesome is if it added an example to the documentation.

comment:12 Changed 15 years ago by

I'd really like to see this finished and approved.

comment:13 Changed 15 years ago by

I'm going to work on the docs, and then I'll pass it back to dreid.

comment:14 Changed 15 years ago by

comment:15 Changed 15 years ago by

oops

comment:16 Changed 15 years ago by

comment:17 Changed 15 years ago by

comment:18 Changed 15 years ago by

Typo in authentication.xhtml -- s/propogate/propagate/

comment:19 Changed 15 years ago by

Unauthorized.Resource.__init__ docstring describes parameter wwwAuthenticate, but __init__ takes "factories" as the only argument.

comment:20 Changed 15 years ago by

ICredentialFactory.getChallenge should probably take the request as the argument.
The request includes the peer address as well as any other information a hypothetical authentication method might need.

Changed 15 years ago by

Patch with some readability and stylistic edits

comment:21 Changed 15 years ago by

comment:22 Changed 15 years ago by

comment:23 Changed 10 years ago by

comment:24 Changed 4 years ago by

[mass edit] Removing review from closed tickets.
https://twistedmatrix.com/trac/ticket/1475
I would like to find everything that is in between 'GH'. I.e., for 'GH ab H b G bc GH', I would like to get ' ab H b G bc '. What I did is:

- Code: Select all

import re
A = r'GH ab H b G bc GH'
pattern_2a = re.compile(r'GH'r'(?P<ContOpt>[^GH]{0,})'r'GH')
ans = pattern_2a.findall(A)
print ans

But this is not working... I guess it is because ^GH means anything except G or H, while I would like anything except GH (as a block of 2 characters). And so I do:

If anyone could help me with that one, that would be very nice!!

Also I tried to use findall to find everything in between AA and BC when there are some B and/or C in between. E.g. str=r'AA aaaBaaaCaaa BC' => ans = ' aaaBaaaCaaa '. To try to do so, I used:

- Code: Select all

import re
str = r'AA aaaBaaaCaaa BC'
pattern_2a = re.compile(r'AA'r'(?P<ContOpt>[^BC]{0,})'r'BC')
ans = pattern_2a.findall(str)
print ans

But this is not working. Probably because ^BC means no B nor C, and I would like no BC...

Thanks! Stephane

Ps: (Of course) something like that is working (but it is not what I need...):

- Code: Select all

import re
A = r'GH ab b G bc H'
pattern_2a = re.compile(r'GH'r'(?P<ContOpt>[^H]{0,})'r'H')
ans = pattern_2a.findall(A)
print ans

Ps: (Of course) if I change the condition to 1 element then it works...:

- Code: Select all

import re
str = r'AA aaaBaaaaaa C'
pattern_2a = re.compile(r'AA'r'(?P<ContOpt>[^C]{0,})'r'C')
ans = pattern_2a.findall(str)
print ans

ans = [' aaaBaaaaaa ']
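A character class like [^GH] can only exclude single characters, not the two-character sequence. The usual fixes (my suggested answer, not from the thread) are a non-greedy quantifier or a tempered pattern that forbids the delimiter sequence inside the match:

```python
import re

text = 'GH ab H b G bc GH'
# Non-greedy: match as little as possible between the two delimiters.
print(re.findall(r'GH(.*?)GH', text))           # [' ab H b G bc ']
# Tempered token: each matched character must not start the sequence 'GH'.
print(re.findall(r'GH((?:(?!GH).)*)GH', text))  # [' ab H b G bc ']

# The same idea handles the AA ... BC case with stray Bs and Cs inside:
text2 = 'AA aaaBaaaCaaa BC'
print(re.findall(r'AA(.*?)BC', text2))          # [' aaaBaaaCaaa ']
```

The non-greedy form stops at the first following delimiter; the tempered form is useful when the quantified part must itself be anchored or repeated.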
http://www.python-forum.org/viewtopic.php?p=654
It would be beneficial to have the API implemented in issue 135670 public in the IDE cluster. It should be reasonably extensible as it is based on interfaces and factory methods. I will attach the API javadoc (if it is made public, the API classes will be refactored to org.netbeans.api). The module exports two main API groups - input processing (org.netbeans.modules.extexecution.api.input) and execution support (org.netbeans.modules.extexecution.api).

*** Issue 126570 has been marked as a duplicate of this issue. ***

Created attachment 64953 [details] current javadoc

I'd like to make this module (extexecution) public under development. It is placed in the extexecution directory.
- it should be moved to the ide cluster
- package refactorings intended:
  org.netbeans.modules.extexecution.api -> org.netbeans.api.extexecution
  org.netbeans.modules.extexecution.api.input -> org.netbeans.api.extexecution.input
  org.netbeans.modules.extexecution.api.print -> org.netbeans.api.extexecution.print
- contrib module languages.execution should be removed

Interested clients I know about (except current friends): serverplugins, erlang, ruby. Please review.

MK1: Are the Java Home env var related methods necessary in a generic execution API?

Re MK1: No, they are not. I'll remove it. Thanks for catching this.

[JG01] ExecutionDescriptor.getOut/ErrProcessorFactory Javadoc says "ExecutionService automatically uses the printing one." - vague, what is "the printing one"? Use @link etc. where appropriate.

[JG02] getOut/ErrConvertorFactory says "(that used by ExecutionService automatically." which is not even a sentence - typo?

[JG03] The Javadoc for ED and ED.Builder do not seem to make clear what the defaults for the various options are.

[JG04] ED.Builder could be folded into ED itself, which might be easier to understand. Example:

public class ED {
    public ED();
    public ED controllable(boolean);
    // ...
}

There would be no particular need for getter methods, since the descriptor is just passed to ExecutionService in the same package anyway, and I don't see any likely need to pass a descriptor around to other code before then. The class Javadoc for ED is also misleading; it says "To build the most common kind of descriptor use ExecutionDescriptor.Builder." when in fact the *only* way to make a descriptor is to use the builder.

[JG05] Suggest avoiding nested classes such as ED.InputProcessorFactory - harder to type and manage imports for.

[JG06] ExecutionService.run Javadoc is vague. I assume the Integer is a process exit status but this is not documented. And it says "This method can be invoked multiple times returning the different and unrelated Futures." which is unexplained - is the processCreator called again each time? What would be the point of calling this method multiple times, when you could just make a newService for each distinct process?

[JG07] What if the Callable<Process> cannot create a Process? (IOException is common e.g. if the executable path cannot be found.) Is it supposed to wrap IOException in RuntimeException? What happens to the Future<Integer> in such a case?

[JG08] ExternalProcessBuilder would seem to be more convenient if it actually implemented Callable<Process>, rather than forcing you to write new Callable<Process>() {public Process call() {return myEPB.create();}} which ties into JG07 since create() throws IOException. Perhaps a new interface ProcessCreator { Process create() throws IOException; } (to be implemented by EPB) would be a good replacement for Callable<Process>.

[JG09] EPB's constructor should clarify that the "command" is intended to be a single executable ("echo"), not a shell-parsed compound of command and arguments ("echo hello").

[JG10] "pwd" stands for "print working directory" and is inappropriate for a setter. Perhaps use "cwd", or better, "workingDirectory".

[JG11] What would pwd(null) mean?
Remove this possibility unless it does something useful.

[JG12] I am not sure why you would ever call pwdToPath, but it seems this could be removed without causing any great inconvenience.

[JG13] Given the semantics of addPath, "prependPath" might be a better name.

[JG14] "HTTP proxy settings are configured (http.proxyHost and http.proxyPort variables)." is vague. Do you mean you set some environment variables for the subprocess based on the abovenamed Java system properties in the caller?

[JG15] InputProcessor.reset() should explain what it is that might be reset. It is hard to guess from the Javadoc what this would be used for.

[JG16] InputReaderTask class Javadoc shows method executorService.submit(runnable) and executorService.shutdownNow() which do not appear to exist.

[JG17] InputReaders.forFileInputProvider is not very clear. Is the fileProvider going to be called several times as each file is read in turn? How do you signal that there are no more files to read?

[JG18] LineConvertors.filePattern refers to "extPattern" but I guess you mean "filePattern".

[JG19] If fileLocator is null, is some default impl used that e.g. looks for an absolute path? or file:/some/path?

[JG20] Do file lines begin at 0 or 1? Document it.

[JG21] Is there any facility for matching column numbers, as the Ant module does?

Wow, thanks Jesse.

Re JG01: Should be fixed in main.

Re JG02: I couldn't find "ExecutionService automatically" anywhere.

Re JG03: It is described in the Builder constructor's javadoc.

Re JG04: I'll consider it/work on it tomorrow. Fixed javadoc.

Re JG07 & JG08: Callable declares call() as "public V call() throws Exception". Future throws ExecutionException on get(). It's a good idea for EPB to implement Callable<Process>. I'll do it.

Re JG09: Fixed in main.

Re JG15: Fixed in main.

Re JG16: JDK's ExecutorService != ExecutionService. The input.* API does not depend on the execution one. Improved sample code.

Re JG18: Fixed in main.
I'll look at the rest tomorrow (need to change clients). Changeset c914201b3bf9.

Re MK01: Fixed in main.

Re JG05: I think it is the right usage of nested classes (such as Map.Entry) - these classes don't have any meaning outside of the ED.

Re JG07 & JG08: Is there any benefit of having ProcessCreator instead of Callable<Process>? EPB can look like this:

public final class ExternalProcessBuilder implements Callable<Process> {
    ...
    public Process call() throws IOException {
        // create process
    }
    ...
}

Re JG10: Fixed in main.

Re JG11: Fixed in main.

Re JG12: Fixed in main.

Re JG13: Fixed in main.

Re JG19: Improved Javadoc. The default impl will try to find the file simply by new File(filename).isFile().

Re JG21: No. I'll look at ant.

Changeset e53fbd324b1e and 00cd8c124f7a.

BTW overall looking pretty nice; one of the better documented APIs I have seen be reviewed.

JG02: HTML Javadoc says "getOutConvertorFactory public ExecutionDescriptor.LineConvertorFactory getOutConvertorFactory() Returns the factory for convertor to use with processor printing the standard output (that used by ExecutionService automatically. Returns: the factory for convertor to use with processor printing the standard output"

JG03 - OK, though I would have expected each property to specify its default; when I am looking to see what the default value is for a property, I am probably looking at the Javadoc of that property. BTW "properites" is a typo.

JG07/08 - Callable<Process> is fine; I did not realize it could throw Exception. But Javadoc should be explicit about what the consequence of that is. And yes it would be nice for EPB to implement it.

Re JG02: Fixed in main.

Re JG07/08: In order to solve this:
- EPBuilder refactored to EPCreator
- it implements Callable<Process>
- it is immutable
- improved javadoc

Attaching the patch.

Created attachment 65258 [details] EPB to EPC refactoring

Re JG07/08: Actually I'm not sure about the preferred name - Builder/Creator. Any opinion?

"Builder" was OK with me; "Creator" is OK too. No opinion. BTW if you use --git (or [diff] git=1) then patches with renames are a lot easier to read.

vk01: ExternalProcessBuilder workingDirectory(File workingDirectory) looks like a setter... doesn't have set in the name.

vk02: the comment for ExternalProcessCreator workingDirectory(File workingDirectory) makes it sound like null is an acceptable argument... but the code checks that it is not.

vk03: the API "pattern" you are using in ExternalProcessCreator is not the pattern that most other APIs in the platform seem to follow. Why are you using this alternate pattern?

vk04: Why call() instead of startProcess() or exec()? The name of the method should expose the fact that the Process is started.

vk05: it looks like you are missing unit tests for some of the methods that are non-trivial, like workingDirectory() and redirectErrorStream()... are these covered by qa-functional tests?

I am also confused by your comments... "fixed in main". You have already committed this without review? Oh well.

vk06: (correction to vk01) ExternalProcessCreator workingDirectory(File workingDirectory) looks like a setter... doesn't have set in the name.
>>I am also confused by your comments... fixed in main. you have already committed this without review? oh well. We are not reviewing particular change. The purpose of this review is to make extexecution public. It is already in trunk and it is friend and I'm still fixing it. This review is not about the patch in desc13 which was posted to make an agreement on JG07/08 solution. Re JG06: Fixed in main. Re JG17: Fixed in main. Re VK02: Fixed in main. Re VK05: In fact these methods are pretty simple. The test would be one-liner and would need to increase access level. I've added immutability test. Changeset a79163f0c76c. Re JG20: Fixed in main. Changeset 3c57c9eb7026. Remaining - JG21. *** Issue 57000 has been marked as a duplicate of this issue. *** [JG22] Possibility for consideration - a flag to run the process with a special wrapper which sets stdout to be unbuffered, since otherwise programs which print to stdout without flushing can see their output delayed for a long time or even up to program exit. See issue #147099 for an example. Alternately, find a way to create a pseudotty; this would also enable Console usage (see issue #68770). If you're interested in pseudo-tty's (pt's) take a look at > [JG22] Terminal Emulator We fight with this in Ruby (likely the same for all having-interactive-interpreters languages and for all alike cases). We hope this will be solved by the fully-fledged Terminal Emulator, Ivan pointed to, for post-6.5. Thus features like issue 133994 might be implemented easily. There will likely be top-level 'terminalemulator' component in IZ. All after 6.5 is code-frozen and works on next release starts, I guess. JG21: As there is no client that would require column matching (ruby as upcoming client does not use it either) I would preserve the current state as column matching can be added later in a compatible way. JG22: Terminal emulator should be right choice for interactive console apps (once finished). 
For now extexecution provides a reasonable level of support (used for the Grails console). It provides processors which do not wait for EOL. Although there are buffered streams on the way, this works reasonably. Afaik a similar approach should work for Ruby as well.

If there are no further objections, I will make it public by applying the changes described in desc4.

Before making it public I would like to apply the following patch. It contains these changes (based on Ruby's experience with the new API):
- semantics of LineConverter changed - a null return value is allowed, to signal that it does not handle the line. The chain converter was removed from the factory methods for LineConverter. Added proxy(LineConverter...) wrapping up multiple converters. This solution provides better flexibility.
- ConvertedLine is a final class
- ExecutionDescriptor.InputProcessorFactory - method signature changed from "InputProcessor newInputProcessor()" to "InputProcessor newInputProcessor(InputProcessor defaultProcessor)", providing access to the infrastructure-provided processor.

Created attachment 73122 [details] proposed change

Apologies for coming into this late, but I'm also interested, as we use our own internal code to execute start/stop and administer a MySQL server. An enhancement request: often you need superuser access to execute the MySQL process. It would be very useful to be able to indicate that an execution needs to be run as superuser, and have this API handle bringing up the appropriate OS-specific UI to get the su password, if supported (e.g. gksu on Linux/Solaris and AppleScript's "with administrator privileges" on Mac).

Hi David, I do not think this is the proper place for such an API. External execution is used just for executing a process in NetBeans and handling its output. I think it would involve a lot of system-specific code, so I wouldn't put it into the public API in the first place. However, the API you suggest can easily be built on top of the execution API. You can make it friend and design it properly.
Later, if we change our minds, we can easily integrate it as support. P.

If there are no objections I will integrate the suggested patch tomorrow.

Patch applied to main ae3ebc384d4d. Fixed friends in contrib 81fbadfe67b8.

Integrated into 'main-golden', will be available in build *200811061401* on (upload may still be in progress) Changeset: User: phejl@netbeans.org Log: #136929 Make Execution API public (applied improvements in api)

One last (fully compatible) change I would like to push before integration. It allows the client to select the preferred output window printing type (char - immediate, line - line by line). Created attachment 73819 [details] proposed change

I would like to push the mentioned change tomorrow. Any objections?

To make a small summary: the recent changes were valid use cases revealed during Ruby's migration (so nothing like making the API better without any reasonable justification ;) ). As the migration is finished I do not expect any further changes, and I would like to (finally) repackage and publish the APIs this week.

Pushed the last patch to main 7bb873de7bb6.

Integrated into 'main-golden', will be available in build *200811200201* on (upload may still be in progress) Changeset: User: phejl@netbeans.org Log: #136929 Make Execution API public (desc 35)

Refactored to API packages, moved to the ide cluster - main ba1644ac124f. Done.

Integrated into 'main-golden', will be available in build *200811220201* on (upload may still be in progress) Changeset: User: phejl@netbeans.org Log: #136929 Make Execution API public (refactored to api packages)

In updating to the trunk I updated some code which uses the extexecution API. I was really surprised by a couple of things.
I had code which did something like this:

descriptor.foo(true);
descriptor.bar(false);
descriptor.baz(5);

Since the "setter" methods return an ExecutionDescriptor, I figured this was just the chaining pattern - I could do

descriptor.foo(true).bar(false).baz(5);

In any case, neither the first alternative nor the second alternative worked! (The second alternative would have worked if I had assigned the result to descriptor.) That's because each "setter" method actually duplicates the entire object and passes a new object back! I think that's pretty strange. If you really want an immutable descriptor I think you should have a Builder object as an inner class; the Builder is mutable. You do something like

ExecutionDescriptor descriptor = new ExecutionDescriptor.Builder().foo(true).bar(false).baz(5).build();

In other words, all the setter methods are on the Builder class; the ExecutionDescriptor has only getters. The Builder class has a build() method which constructs the immutable object. Right now, there's (a) a potential for incorrect code where users don't realize they have to copy out the result and use it instead, e.g. you have to write code like this:

desc = new ExecutionDescriptor();
desc = desc.foo(true);
desc = desc.bar(false);
desc = desc.baz(5);

If you just call foo() and bar() the code won't work (I made that mistake). (b) This is very inefficient.

I looked again, and the case I had a problem with was actually ExternalProcessBuilder. It has the same usage pattern -- you have to use the return value. I see some internal BuilderData state which I think deals with sharing data among these copies, but it still has to be inefficient if you want to support isolation among the different partially constructed objects. In my opinion, it would be much cleaner to pull this into a separate Builder inner class.

The other thing I ran into, which is why I was poking in these classes in the first place, is that I need to support -multiple- post execution hooks.
In the end I worked around it by making my own new post hook delegate to the previous post hook, but it would be better if these were lists, such that I could simply add as many post hooks as I need.

a) This is just a matter of personal preference imo - see JG04. It is Jarda's "cumulative factory" pattern.

b) It's inefficient only when it is used inefficiently. The common case is that you always execute something with the same descriptor, perhaps changing one or two parameters. Because of the immutability you can use a single instance for any number of (even concurrent) executions.

private static final ExecutionDescriptor DESC = new ExecutionDescriptor().foo(true).bar(false).baz(5);

// no instance created ever
ExecutionService service = ExecutionService.newService(processBuilder, DESC, "command");

// or with a changed parameter - one instance
ExecutionService service = ExecutionService.newService(processBuilder, DESC.baz(10), "command");

Anyway, if this becomes a performance bottleneck we can introduce a builder compatibly any time later.

+1 on Tor's objection. The common pattern I've seen elsewhere (outside of the NetBeans codebase) is to always return the same instance to allow for chaining. IMHO it's not primarily about performance but API usability. From my use cases I don't see why I would want to keep an immutable instance around and change one or two parameters on it each time. It's always a "fire-and-forget" approach. Construct and run. Next time do the same again.

You don't have to keep an instance around. You can use such an approach in case you don't think the generational collector is fast enough. If we want to keep it immutable (and I hope so - there are too many threads in this API) we can't return the same instance - only a builder would help.
I don't fully understand your argument about the chaining - you can use it (and I would even recommend it):

ExecutionDescriptor desc = new ExecutionDescriptor()
    .foo(true)
    .bar(false)
    .baz(5);

I'm a bit frustrated, as it seems the API review process does not work - this issue was open for months without any objection on this topic (except Jesse's suggestion). Anyway, feel free to file a separate API issue with a builder patch.

The current API seems fine to me, though I would not strongly object to Tor's suggested syntax. (I would object to having _both_ styles at once.) ExecutionDescriptor is already in a sense a builder for ExecutionService. Perhaps adding @UseReturnValue (or whatever the right annotation is) to the existing ED methods would be helpful. JG04 was (a) a suggestion for making the API slimmer and simpler by omitting the separate builder class, (b) a request to remove the public getters which served no clear function, and (c) a request to correct the Javadoc. The difference in opinion is, I guess, over (a) only.

Sorry, I didn't review this API. But I've got to say I am not a believer in the "cumulative factory" pattern, at least not as applied here. I read the writeup () and I don't think this is a good fit. I think Builders tend to be stateful, and I haven't seen any uses around process execution, for example, where you have a template that you keep instantiating. Rather than keeping a half-built instance around that you keep templating from, I think it would be more natural to create a shared factory method (say RubyExecution.createRubyDescriptor()) which constructs the basic descriptor before custom handling (say adding test output recognizers for unit test execution) is added. My main concerns here are that (a) this is not a common way to do it, and (b) this can lead to unexpected errors where you think you're chaining calls when you were supposed to handle the result value.
Yes, we can use a findbugs annotation to help detect this, but that won't help anybody, and I'm not sure I understand the benefits of this approach. Immutable objects are good, but for an object builder (what else is an ExternalProcessBuilder?) I think statefulness is expected, and I really don't think these objects will be used where immutability is most helpful (sharing objects, concurrency, etc). Anyway, the review is over etc. so do what you want with my feedback. I made a mistake using the API so I thought others might have the same misunderstanding, and I wasn't sure how final you considered the API to be. (I think it would be really useful to add the second thing I asked for -- a way to register -multiple- post execution hooks - and for symmetry, perhaps multiple pre execution hooks too.)
https://netbeans.org/bugzilla/show_bug.cgi?id=136929
28 Oct 2021 01:56:41 UTC - Distribution: FFI-Platypus - Module version: 1.56

NAME

FFI::Platypus - Write Perl bindings to non-Perl libraries with FFI. No XS required.

VERSION

version 1.56

SYNOPSIS

use FFI::Platypus 1.00; # for all new code you should use api => 1
my $ffi = FFI::Platypus->new( api => 1 );

DESCRIPTION

Platypus is a library for creating interfaces to machine code libraries written in languages like C, C++, Go, Fortran, Rust, Pascal. Essentially anything that gets compiled into machine code. This implementation uses libffi to accomplish this task.

Raku

One of those "other" languages could be Raku.

You are strongly encouraged to use API level 1 for all new code. There are a number of improvements and design fixes that you get for free. You should even consider updating existing modules to use API level 1 where feasible. How do I do that, you might ask? Simply pass in the API level to the Platypus constructor.

my $ffi = FFI::Platypus->new( api => 1 );

The Platypus documentation has already been updated to assume API level 1.

CONSTRUCTORS

new

my $ffi = FFI::Platypus->new( api => 1 );

- api

[version 0.91] Sets the API level. Legal values are

0

Original API level. See FFI::Platypus::TypeParser::Version0 for details on the differences.

1

Enable the next generation type parser, which allows pass-by-value records and type decoration on basic types.
Using API level 1 prior to Platypus version 1.00 will trigger a (noisy) warning. All new code should be written with this set to 1! The Platypus documentation assumes this API level is set.

2

Enable the version 2 API, which is currently experimental. Using API level 2 prior to Platypus version 2.00 will trigger a (noisy) warning. API version 2 is identical to version 1, except:

- Pointer functions that return NULL will return undef instead of an empty list. This fixes a long-standing design bug in Platypus.
- Array references may be passed to pointer argument types. This replicates the behavior of array argument types with no size. So the types sint8* and sint8[] behave identically when an array reference is passed in. They differ in that, as before, you can pass a scalar reference into type sint8*.
- when ... as an alias for sint32 instead of int as you do with C). If the foreign language plugin supports it, this will also enable Platypus to find symbols using the demangled names (for example, if you specify CPP for C++ you can use method names like Foo::get_bar() with "attach" or "function").

api

[version 1.11]

my $level = $ffi->api;

Returns the API level of the Platypus instance.

# only FFI::Platypus::Type:: will be prepended to it.

types

my @types = $ffi->types;
my @types = FFI::Platypus->types;

Returns the list of types that FFI knows about. This will include the native libffi types (example: sint32, opaque).

function

[version 0.91]

my $function = $ffi->function( $name => \@fixed_argument_types => \@var_argument_types => $return_type);
my $function = $ffi->function( $name => \@fixed_argument_types => \@var_argument_types => $return_type, \&wrapper);
my $function = $ffi->function( $name => \@fixed_argument_types => \@var_argument_types);
my $function = $ffi->function( $name => \@fixed_argument_types => \@var_argument_types => \&wrapper);

Version 0.91 and later allows you to create functions for C variadic functions (such as printf, scanf, etc.) which can take a variable number of arguments. The first set of arguments is the fixed set; the second set are the variable arguments to bind with. The variable argument types must be specified in order to create a function object, so if you need to call a variadic function with a different set of arguments then you will need to create a new function object each time:

# int printf(const char *fmt, ...);
$ffi->function( printf => ['string'] => ['int'] => 'int' )
  ->call("print integer %d\n", 42);
$ffi->function( printf => ['string'] => ['string'] => 'int' )
  ->call("print string %s\n", 'platypus');

Some older versions of libffi and possibly some platforms may not support variadic functions. If you try to create one, then an exception will be thrown.

[version 1.26]

If the return type is omitted then void will be the assumed return type.

_function_name', ['int', 'string'] => 'string'); $ffi->attach(['my_c_function; });

[version 0.91]

$ffi->attach($name => \@fixed_argument_types => \@var_argument_types, $return_type);
$ffi->attach($name => \@fixed_argument_types => \@var_argument_types, $return_type, \&wrapper);

As of version 0.91 you can attach variadic functions, if supported by the platform / libffi that you are using. For details see the function documentation. If not supported by the implementation then an exception will be thrown.
closure

my $closure = $ffi->closure($coderef);
my $closure = FFI::Platypus-function);

attach_cast

$ffi->attach_cast("cast_name", $original_type, $converted_type, \&wrapper);
my $converted_value = cast_name($original_value);

This function attaches a cast as a permanent xsub. This will make it faster and may be useful if you are calling a particular cast a lot.

[version 1.26]

A wrapper may be added as the last argument to attach_cast and works just like the wrapper for the attach and function methods.

sizeof

my $size = $ffi->sizeof($type);
my $size = FFI::Platypus-.

kindof

[version 1.24]

my $kind = $ffi->kindof($type);

Returns the kind of a type. This is a string with a value of one of

countof

[version 1.24]

my $count = $ffi->countof($type);

For array types, returns the number of elements in the array (returns 0 for a variable length array). For the void type, returns 0. Returns 1 for all other types.

def

[version 1.24]

$ffi->def($package, $type, $value);
my $value = $ffi->def($package, $type);

This method allows you to store data for types. If the $package is not provided, then the caller's package will be used. $type must be a legal Platypus type for the FFI::Platypus instance.

unitof

[version 1.24]

my $unittype = $ffi->unitof($type);

For array and pointer types, returns the basic type without the array or pointer part. In other words, for sint16[] or sint16* it will return sint16.

bundle

[version 0.96 api = 1+]

$ffi->bundle($package, \@args);
$ffi->bundle(\@args);
$ffi->bundle($package);
$ffi->bundle;

This is an interface for bundling compiled code with your distribution, intended to eventually replace the package method documented above. See FFI::Platypus::Bundle for details on how this works.

package

[version 0.15 api = 0]

$ffi->package($package, $file); # usually __PACKAGE__ and __FILE__ can be used
$ffi->package; # autodetect

Note: This method is officially discouraged in favor of bundle, described above.
If you use FFI::Build (or the older deprecated

use FFI::Platypus 1.00;

my $ffi = FFI::Platypus->new( api => 1 );
$ffi->lib(undef);
$ffi->attach(puts => ['string'] => 'int');
$ffi->attach(atoi => ['string'] => 'int');

puts(atoi('56'));

Discussion: puts and atoi should be part of the standard C library on all platforms. puts prints a string to standard output, and atoi converts a string to an integer. Specifying undef as a library tells Platypus to search the current process for symbols, which includes the standard C library.

libnotify

use FFI::CheckLib;
use FFI::Platypus 1.00; #( api => 1 ); or Makefile.PL file : $ffi- 1.00;

use FFI::Platypus::Memory qw( malloc free memcpy );

my $ffi = FFI::Platypus->new( api => 1 );
my $buffer = malloc 12;

memcpy $buffer, $ffi->cast('string' => 'opaque', "hello there"), length "hello there\0";

print $ffi->cast('opaque' => 'string', $buffer), "\n";

free $buffer;

Discussion: malloc and free are standard memory allocation functions available from the standard C library. Interfaces to these and other memory-related functions are provided by the FFI::Platypus::Memory module.

structured data records

use FFI::Platypus 1.00;
use FFI::C;

my $ffi = FFI::Platypus->new(
  api => 1,
  lib => [undef],
);
FFI::C->ffi($ffi);

package Unix::TimeStruct {

  FFI::C->struct(tm => [
    tm_sec    => 'int',
    tm_min    => 'int',
    tm_hour   => 'int',
    tm_mday   => 'int',
    tm_mon    => 'int',
    tm_year   => 'int',
    tm_wday   => 'int',
    tm_yday   => 'int',
    tm_isdst  => 'int',
    tm_gmtoff => 'long',
    _tm_zone  => 'opaque',
  ]);

  # For now 'string' is unsupported by FFI::C, but we
  # can cast the time zone from an opaque pointer to
  # string.
  sub tm_zone {
    my $self = shift;
    $ffi->cast('opaque', 'string', $self->_tm_zone);
  }

  # attach the C localtime function
  $ffi->attach( localtime => ['time_t*'] => 'tm', sub {
    my($inner, $class, $time) = @_;
    $time = time unless defined $time;
    $inner->(\$time);
  });
}

# now we can actually use our Unix::TimeStruct class
my $time = Unix::TimeStruct-.
For C pointers to structs, unions and arrays of structs and unions, the easiest interface to use is via FFI::C. If you are working with structs that must be passed as values (not pointers), then you want to use the FFI::Platypus::Record class instead. We will discuss this class later.

The C localtime function takes a pointer to a C struct. We simply define the members of the struct using the FFI::C struct method. Because we used the ffi method to tell FFI::C to use our local instance of FFI::Platypus, it registers the tm type for us, and we can just start using it as a return type!

structured data records by-value

libuuid

use FFI::CheckLib;
use FFI::Platypus 1.00;
use FFI::Platypus::Memory qw( malloc free );

my $ffi = FFI::Platypus->new( api => 1 );
$ffi->sizeof('uuid_t');

uuid_generate($uuid);

my $string = "\0" x $ffi->sizeof('uuid_string');

are exactly 37 bytes.

puts and getpid

use FFI::Platypus 1.00;

my $ffi = FFI::Platypus->new( api => 1 );
$ffi->lib(undef);
$ffi->attach(puts => ['string'] => 'int');
$ffi->attach(getpid => [] => 'int');

puts(getpid());

Discussion: puts is part of the standard C library on all platforms. getpid is available on Unix-type platforms.

Math library

use FFI::Platypus 1.00;
use FFI::CheckLib;

my $ffi = FFI::Platypus->new( api => 1 );

as the library to find them.

Strings

use FFI::Platypus 1.00;

my $ffi = FFI::Platypus->new( api => 1 );

: ASCII and UTF-8 strings are not a native type to libffi, but they are handled seamlessly by Platypus. If you need to talk to an API that uses so-called "wide" strings (APIs which use const wchar_t* or wchar_t*), then you will want to use the wide string type plugin FFI::Platypus::Type::WideString. APIs which use other arbitrary encodings can be accessed by converting your Perl strings manually with the Encode module.
Attach function from pointer

use FFI::TinyCC;
use FFI::Platypus 1.00;

my $ffi = FFI::Platypus->new( api => 1 );

1.00;

use FFI::Platypus::Memory qw( malloc );
use FFI::Platypus::Buffer qw( scalar_to_buffer buffer_to_scalar );

my $endpoint = "ipc://zmq-ffi-$$";
my $ffi = FFI::Platypus->new( api => 1 );

to ask libzmq which version it is. zmq_version returns variable (and the others) has been updated and we can use it to verify that it supports the API that we require. Notice that we define three aliases for the opaque type: zmq_context, zmq_socket

1.00;

use FFI::CheckLib qw( find_lib_or_exit );

# This example uses FreeBSD's libarchive to list the contents of any
# archive format that it supports. We've also filled out a part of
# the ArchiveWrite class that could be used for writing archive formats
# supported by libarchive

my $ffi = FFI::Platypus->new( api => 1 );
$ffi->lib(find_lib_or_exit lib => 'archive');

$ffi->type('object(Archive)'      => 'archive_t');
$ffi->type('object(ArchiveRead)'  => 'archive_read_t');
$ffi->type('object(ArchiveWrite)' => 'archive_write_t');
$ffi->type('object(ArchiveEntry)' => 'archive_entry_t');

package Archive;

# base class is "abstract" having no constructor or destructor

$ffi->mangler(sub {
  my($name) = @_;
  "archive_$name";
});

$ffi->attach( error_string => ['archive_t'] => 'string' );

package ArchiveRead;

our @ISA = qw( Archive );

$ffi->mangler(sub {
  my($name) = @_;
  "archive_read_$name";
});

$ffi->attach( new => ['string'] => 'archive_read_t' );
$ffi->attach( [ free => 'DESTROY' ] => ['archive_t'] => 'void' );
$ffi->attach( support_filter_all => ['archive_t'] => 'int' );
$ffi->attach( support_format_all => ['archive_t'] => 'int' );
$ffi->attach( open_filename => ['archive_t','string','size_t'] => 'int' );
$ffi->attach( next_header2 => ['archive_t', 'archive_entry_t' ] => 'int' );
$ffi->attach( data_skip => ['archive_t'] => 'int' );
# ...
define additional read methods

package ArchiveWrite;

our @ISA = qw( Archive );

$ffi->mangler(sub {
  my($name) = @_;
  "archive_write_$name";
});

$ffi->attach( new => ['string'] => 'archive_write_t' );
$ffi->attach( [ free => 'DESTROY' ] => ['archive_write_t'] => 'void' );
# ... define additional write methods

package ArchiveEntry;

$ffi->mangler(sub {
  my($name) = @_;
  "archive_entry_$name";
});

$ffi->attach( new => ['string'] => 'archive_entry_t' );
$ffi->attach( [ free => 'DESTROY' ] => ['archive_entry_t'] => 'void' );
$ffi->attach( pathname => ['archive_entry_t'] =>

for FreeBSD, provided as a library and available on a number of platforms. One interesting thing about libarchive is that it provides a kind of object-oriented interface via opaque pointers. This example creates an abstract class Archive, and concrete classes ArchiveWrite, ArchiveRead and ArchiveEntry. The concrete classes can even be inherited from and extended just like any Perl classes because of the way the custom types are implemented. We use Platypus's object type for this implementation, which is a wrapper around an opaque (can also be an integer) type that is blessed into a particular class. Another advanced feature of this example is that we define a mangler to modify the symbol resolution for each class.
This means we can do this when we define a method for Archive:

$ffi->attach( support_filter_all => ['archive_t'] => 'int' );

Rather than this:

$ffi->attach( [ archive_read_support_filter_all => 'support_read_filter_all' ] => ['archive_t'] => 'int' );

unix open

use FFI::Platypus 1.00;

{
  package FD;

  use constant O_RDONLY => 0;
  use constant O_WRONLY => 1;
  use constant O_RDWR   => 2;

  use constant IN  => bless \do { my $in=0  }, __PACKAGE__;
  use constant OUT => bless \do { my $out=1 }, __PACKAGE__;
  use constant ERR => bless \do { my $err=2 }, __PACKAGE__;

  my $ffi = FFI::Platypus->new( api => 1, lib => [undef]);

  $ffi->type('object(FD,int)' => 'fd');

  $ffi->attach( [ 'open' => 'new' ] => [ 'string', 'int', 'mode_t' ] => 'fd' => sub {
    my($xsub, $class, $fn, @rest) = @_;
    my $fd = $xsub->($fn, @rest);
    die "error opening $fn $!" if $$fd == -1;
    $fd;
  });

  $ffi->attach( write => ['fd', 'string', 'size_t' ] => 'ssize_t' );
  $ffi->attach( read  => ['fd', 'string', 'size_t' ] => 'ssize_t' );
  $ffi->attach( close => ['fd'] => 'int' );
}

my $fd = FD->new("$0", FD::O_RDONLY);

my $buffer = "\0" x 10;

while(my $br = $fd->read($buffer, 10))
{
  FD::OUT->write($buffer, $br);
}

$fd->close;

Discussion: The Unix file system calls use an integer handle for each open file. We can use the same object type that we used for libarchive above, except we let Platypus know that the underlying type is int instead of opaque (the latter being the default for the object type). Mainly just for demonstration, since Perl has much better IO libraries, but now we have an OO interface to the Unix IO functions.

bzip2

use FFI::Platypus 1.00;
use FFI::CheckLib qw( find_lib_or_die );
use FFI::Platypus::Buffer qw( scalar_to_buffer buffer_to_scalar );
use FFI::Platypus::Memory qw( malloc free );

my $ffi = FFI::Platypus->new( api => 1 );
The Win32 API

use utf8;
use FFI::Platypus 1.00;

my $ffi = FFI::Platypus->new(
  api => 1,
  lib => [undef],
);

# see FFI::Platypus::Lang::Win32
$ffi->lang('Win32');

# Send a Unicode string to the Windows API MessageBoxW function.
use constant MB_OK                   => 0x00000000;
use constant MB_DEFAULT_DESKTOP_ONLY => 0x00020000;

$ffi->attach( [MessageBoxW => 'MessageBox'] => [ 'HWND', 'LPCWSTR', 'LPCWSTR', 'UINT'] => 'int' );

MessageBox(undef, "I ❤️ Platypus", "Confession", MB_OK|MB_DEFAULT_DESKTOP_ONLY);

Discussion: The API used by Microsoft Windows presents some unique challenges. On 32-bit systems a different ABI is used than what is used by the standard C library. It also provides a rats nest of type aliases. Finally, if you want to talk Unicode to any of the Windows API you will need to use UTF-16LE instead of UTF-8, which is native to Perl. (The Win32 API refers to these as LPWSTR and LPCWSTR types.) As much as possible the Win32 "language" plugin attempts to handle this transparently. For more details see FFI::Platypus::Lang::Win32.
bundle your own code

ffi/foo.c:

#include <ffi_platypus_bundle.h>
#include <string.h>

typedef struct {
  char *name;
  int value;
} foo_t;

foo_t* foo__new(const char *class_name, const char *name, int value) {
  (void)class_name;
  foo_t *self = malloc( sizeof( foo_t ) );
  self->name = strdup(name);
  self->value = value;
  return self;
}

const char * foo__name(foo_t *self) {
  return self->name;
}

int foo__value(foo_t *self) {
  return self->value;
}

void foo__DESTROY(foo_t *self) {
  free(self->name);
  free(self);
}

lib/Foo.pm:

package Foo;

use strict;
use warnings;
use FFI::Platypus 1.00;

{
  my $ffi = FFI::Platypus->new( api => 1 );

  $ffi->type('object(Foo)' => 'foo_t');
  $ffi->mangler(sub {
    my $name = shift;
    $name =~ s/^/foo__/;
    $name;
  });

  $ffi->bundle;

  $ffi->attach( new     => [ 'string', 'string', 'int' ] => 'foo_t' );
  $ffi->attach( name    => [ 'foo_t' ] => 'string' );
  $ffi->attach( value   => [ 'foo_t' ] => 'int' );
  $ffi->attach( DESTROY => [ 'foo_t' ] => 'void' );
}

1;

You can bundle your own C (or other compiled language) code with your Perl extension. Sometimes this is helpful for smoothing over the interface of a C library which is not very FFI friendly. Sometimes you may want to write some code in C for a tight loop. Either way, you can do this with the Platypus bundle interface. See FFI::Platypus::Bundle for more details. Also related is the bundle constant interface, which allows you to define Perl constants in C space. See FFI::Platypus::Constant for details.

How do I get constants defined as macros in C header files

This turns out to be a challenge for any language calling into C, which frequently uses #define macros. You can also use the new Platypus bundle interface to define Perl constants from C space. This is more reliable, but does require a compiler at install time. It is recommended mainly for writing bindings against libraries that have constants that can vary widely from platform to platform. See FFI::Platypus::Constant for details.

What about enums?
The C enum types are integers. The underlying type is up to the platform, so Platypus provides enum and senum types for unsigned and signed enums respectively. At least some compilers treat signed and unsigned enums as different types. The enum values are essentially the same as the macro constants described above from an FFI perspective. Thus the process of defining enum values is identical to the process of defining macro constants in Perl. For more details on enumerated types see "Enum types" in FFI::Platypus::Type. There is also a type plugin (FFI::Platypus::Type::Enum) that can be helpful in writing interfaces that use enums.

Memory leaks

There are a couple of places where memory is allocated but never deallocated, which may look like memory leaks to tools designed to find memory leaks, like valgrind. This memory is intended to be used for the lifetime of the Perl process, so normally this isn't a problem unless you are embedding a Perl interpreter which doesn't closely match the lifetime of your overall application. Specifically:

- type cache: some types are cached and not freed. These are needed as long as there are FFI functions that could be called.
- attached functions: attaching a function as an xsub will definitely allocate memory that won't be freed, because the xsub could be called at any time, including in END blocks.

The Platypus team plans on adding a hook to free some of this "leaked" memory for use cases where Perl and Platypus are embedded in a larger application where the lifetime of the Perl process is significantly smaller than the overall lifetime of the whole process.

I get seg faults on some platforms but not others with a library using pthreads.

On some platforms, Perl isn't linked with libpthreads if Perl threads are not enabled. On some platforms this doesn't seem to matter; libpthreads can even. older versions of Google's Go. This is a problem for C / XS code as well.
- Languages that do not compile to machine code

Like .NET-based languages and Java.

the Makefile.PL file necessary for building, testing (and even installing if necessary) without Dist::Zilla. Please keep in mind, though, that these files are generated, so if changes need to be made to those files they should be done through the project's dist.ini file. If you do use Dist::Zilla and already have the necessary plugins installed, then I encourage you to run dzil test before

that is normally automatically built by ./Build test. If you prefer to use prove or run tests directly, you can use the ./Build libtest command to build it. Example:

% perl Makefile.PL
% make
% make ffi-test
% prove -bv t
# or an individual test
% perl -Mblib t/ffi_platypus_memory.t

The build process also respects these environment variables:

- ...

This distribution uses Alien::FFI in fallback mode, meaning if the system doesn't provide pkg-config and libffi it will attempt to download libffi and build it from source. If you are including Platypus in a larger system (for example a Linux distribution) you only need to make sure to declare pkg-config or pkgconf and the development package for libffi as prereqs for this module.

SEE ALSO

- NativeCall

Promising interface to Platypus inspired by Raku.

- FFI::Platypus::Type

Type definitions for Platypus.

- FFI::Platypus::Record

Define structured data records (C "structs") for use with Platypus.

- FFI::C

Another interface for defining structured data records for use with Platypus. Its advantage over FFI::Platypus::Record is that it supports unions and nested data structures. Its disadvantage is that it doesn't support passing structs by-value.

- FFI::Platypus::API

The custom types API for Platypus.

- FFI::Platypus::Memory

Memory functions for FFI.

- FFI::CheckLib

Find dynamic libraries in a portable way.

- ::Go

Documentation and tools for using Platypus with

- FFI::Platypus::Lang::Win32

Documentation and tools for using Platypus with the Win32 API.
- Wasm and Wasm::Wasmtime

Modules for writing WebAssembly bindings in Perl. This allows you to call functions written in any language supported by WebAssembly. These modules are also implemented using Platypus.

- Older, simpler, less featureful FFI. It used to be implemented using FSF's ffcall. Because ffcall has been unsupported for some time, I reimplemented this module using FFI::Platypus.

- C::DynaLib

Another FFI for Perl that doesn't appear to have worked for a long time.

- C::Blocks

Embed a tiny C compiler into your Perl scripts.

- Alien::FFI

Provides libffi for Platypus during its configuration and build stages.

- not only helped me get started with FFI but significantly influenced the design of Platypus. Dan Book, who goes by Grinnz on IRC, for answering user questions about FFI.

2020 by Graham Ollis. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

Module Install Instructions

To install FFI::Platypus, copy and paste the appropriate command into your terminal.

cpanm FFI::Platypus

perl -MCPAN -e shell
install FFI::Platypus

For more information on module installation, please visit the detailed CPAN module installation guide.
https://metacpan.org/pod/FFI::Platypus
Chapter 16. The STL

Once C++ had a good template mechanism, people started implementing data structures using these templates. The most widespread collection of templates came from SGI and HP and was called the STL.

- Standard Template Library (STL): A set of data structures and algorithms using templates. Now part of the C++ standard.

For a complete reference to the STL, read C++ in a Nutshell.

Containers

A Container is an object that stores other objects (its elements), and that has methods for accessing its elements. All Containers provide methods to create iterators (see Iterators below). All containers provide the following methods:
- unsigned int size() returns the current size of the container
- bool empty() returns true if the container is empty

Sequences

In most containers that we have seen so far, the order of elements is important. The STL provides several sequence containers. Most of these containers support the same operations. Then why bother having two implementations? Because every container performs differently on different operations!

Example: We'll talk about the two template types "vector" and "list" (in the <vector> and <list> includes).

A vector is based on an array.
- Getting the nth element of an array is fast -> getting the nth element of a vector is fast
- Adding an element to the end is fast
- Adding an element in the middle requires moving all elements past this one back -> slow
- Adding an element to the beginning requires moving all elements back -> very slow

A list is based on a linked list.
- Getting the nth element of a linked list requires traversing all elements up to that point -> slow
- Adding an element anywhere in the list does not require any movement -> fast (once the element is found)

You will learn more about the implementation issues in the data structures class.
Both vector and list support:
- push_back(T x) add an element to the end of the list
- pop_back() removes the last element
- T back() returns the last element
- T front() returns the first element

Only list supports:
- push_front(T x) add an element to the beginning of the list
- pop_front() removes the first element

Only vector supports:
- [] or at() access element at the given index.

Here are some examples:

#include <iostream>
#include <list>    // For list
#include <vector>  // For vector

using namespace std;  // All STL containers are in the std namespace.

int main() {
    vector<int> a;   // Declare a vector that takes int
    a.push_back(3);  // add to end of vector
    a.push_back(24);
    a.push_back(42);
    // as long as we still have elements
    while (!a.empty()) {
        // print the last element
        cout << a.back() << " ";
        // and remove it!
        a.pop_back();
    }
    cout << endl;
    // this (above) printed 42 24 3

    list<int> b;  // declares a linked list that takes int
    // adds some elements to the end
    b.push_back(3);
    b.push_back(24);
    b.push_back(42);
    // as long as there are elements left
    while (!b.empty()) {
        // print the last element
        cout << b.back() << " ";
        // and remove it
        b.pop_back();
    }
    cout << endl;
    // this (above) printed 42 24 3

    list<int> c;  // declares a linked list that takes int
    // adds some elements to the end
    c.push_back(3);
    c.push_back(24);
    c.push_back(42);
    // as long as there are elements left
    while (!c.empty()) {
        // print the first element
        cout << c.front() << " ";
        // and remove it
        c.pop_front();
    }
    cout << endl;
    // this (above) printed 3 24 42

    return 0;
}

So now that we have seen the use of different containers, let's see what we have available:

- deque A deque (double-ended queue) is a sequence container that supports fast insertions and deletions at the beginning and end of the container. Inserting or deleting at any other position is slow, but indexing to any item is fast. The header is <deque>.
Pg 470

deque supports: [], at(), front(), back(), push_front(), push_back(), pop_front(), pop_back(), empty(), size(), ...

- list A list is a sequence container that supports rapid insertion or deletion at any position, but does not support random access. The header is <list>. Pg 559

- vector A vector is a sequence container that is like an array, except that it can grow in size as needed. Items can be rapidly added or removed only at the end. At other positions, inserting and deleting items is slower. The header is <vector>. Pg 722

Practice: Implement a short program (the whole program) that
- declares a deque of type float
- adds the elements 1.234, 12.34, and 123.4
- prints the first element
- removes the first element
- prints how many elements are in the deque

#include <iostream>
#include <deque>

using namespace std;

int main() {
    deque<float> f;
    f.push_back(1.234);
    f.push_back(12.34);
    f.push_back(123.4);
    cout << f.front() << " ";
    cout << f.at(0) << " ";
    cout << f[0] << endl;
    f.pop_front();
    cout << "There are " << f.size() << " elements left!";
    cout << endl;
    return 0;
}

Iterators

Since all the containers have different implementations, we need a standard way of going through all the items. These things are called "iterators". We are already used to iterators, we just didn't know it: for vector and arrays we used integers. We ran these from 0 to size()-1. Therefore this "int" was an iterator. Iterators use a lot of operator overloading to behave similarly to pointers.

Declaring: To declare an iterator, use the nested type "iterator" for your specific data type. Example:

vector<int> a;  // Your vector a
// ... a lot of a.push_back()
// The actual declaration:
vector<int>::iterator it;

There are two standard functions for iterators:
- begin() returns an iterator pointing to the first element
- end() returns an iterator pointing after the last element

Iterators can be advanced with the ++ operator (some can go backwards with --) and compared with the == or != operator.
To run over all elements, you can therefore use:

it = a.begin();
while (it != a.end()) {
    // ...
    it++;
}

Or quicker:

for (it = a.begin(); it != a.end(); it++) { ... }

To get the element an iterator points to, you act as if the iterator were a pointer and use the dereference operator *. Example:

cout << *it;

Practice: Assume you are given the following declaration:

list<char> l;
l.push_front('l');
l.push_back('a');
l.push_front('b');

Write a for loop that uses an iterator to iterate over l and print all the contents of the list.

list<char>::iterator li;
for (li = l.begin(); li != l.end(); li++)
    cout << *li;

Print just the second element (emulate at(1)):

list<char>::iterator li;
li = l.begin();
li++;  // can not use li = li+1 in this case
cout << *li;

All of these containers support insert() and erase(), but we had to introduce iterators first.

- iterator erase(iterator p) erases the item that p points to. erase returns an iterator that points to the item that comes immediately after the deleted item, or end().
- iterator insert(iterator p, T x) inserts x immediately before p and returns an iterator that points to the newly inserted element x.

Warning: Iterators may become "invalid" after an insert or an erase operation! You should therefore use the return value if possible! Example:

vector<int> v;
// ...
// same as v.pop_front(), if it existed.
v.erase(v.begin());

A more complex example: Delete all occurrences of "42" in a list:

list<int> l;
// ...
list<int>::iterator i = l.begin();
while (i != l.end()) {
    if ((*i) == 42)
        i = l.erase(i);
    else
        i++;
}

Another example:

list<int>::iterator i = l.begin();
// insert the number 0 at the beginning
i = l.insert(i, 0);
// make sure i points after the first element
i++;

Practice: Assume this given deque:

deque<int> d;
// d gets filled with some values

Write a loop that inserts a 42 before every occurrence of 0 in d. Two hints:
- Iterators to a deque become invalid after insertion. Make sure you use the return value!
- Don't write an infinite loop!
deque<int>::iterator i = d.begin();
while (i != d.end()) {
    if (*i == 0) {
        i = d.insert(i, 42);
        i++;
    }
    i++;
}

Possible test question: For a server application you need to write a FIFO (first in, first out) queue, so that all incoming jobs are processed in the order they arrive. Which STL container would you use for such a queue and why (1-2 sentences)?

Associative Containers

An associative container contains keys that can be quickly associated with values. There is:

- map Stores pairs of keys and associated values. The keys determine the order of the elements in the map. map requires unique keys. Header: <map>. Pg 202
- multimap Same as map, but allows duplicate keys. Header: <map>. Pg 608
- set Stores just keys in ascending order. set requires unique keys. Header: <set>
- multiset Same as set, but allows duplicate keys. Header: <set>

To declare a map we need two datatypes, the key and the value datatype:

map<string,int> m;

We can then use keys of the given type as an index to store and retrieve contents:

m["Jan"] = 1;
m["Feb"] = 2;
cout << m["Jan"] << endl;

Most operations on map can take a key as a parameter where we usually have to use iterators, e.g. erase:

m.erase("Jan");

Trying to use an element that was not set works fine. If the value type is a class, then it will even create a new object for you (calls the default constructor).

cout << m["Mar"] << endl;

Practice: To map from student ids to names it is usually wise to use a map, since we do not want to create an array with 10000000000 elements.
- Define a variable of a map type with "long" as the key type and "string" as the value type
- Fill in two random students of your discretion (do NOT use your real SSN!!!)

map<long,string> students;
students[123456789] = "Some";
students[987654321] = "One";
students[987654321] = "Else";

An iterator over a map<K,V> will give you a pair<K,V> for every element you access (this is the real pair, which is different from the one we used in lab).
You can access the key in the member variable first, and the value in the member variable second. Thanks to operator overloading you may either use the dereference (*) or the dereference-and-access-member (->) operator (or both, as you wish). Example:

for (map<string,int>::iterator i = m.begin(); i != m.end(); i++) {
    cout << (*i).first << " " << i->second;
}

Practice: Using an iterator, iterate over your students map defined earlier. For every student print something like this on the screen (remember: first is the key, second is the value):

Student: Max Berger
Student ID: 123456789

map<long,string>::iterator si;
for (si = students.begin(); si != students.end(); si++) {
    cout << "Student: " << si->second << endl;
    cout << "ID: " << si->first << endl;
}

Iterator categories

There are five categories of iterators:

- Input Permits you to read a sequence in one pass. The increment operator (++) advances to the next element, but there is no decrement operator. The dereference operator returns an rvalue, not an lvalue, so you can read elements but not modify them.
- Output Permits you to write a sequence in one pass. The increment operator (++) advances to the next element, but there is no decrement operator. You can dereference an element only to assign a value to it. You cannot compare output iterators.
- Forward Permits unidirectional access to a sequence. You can refer to and assign to an item as many times as you want. You can use a forward iterator whenever an input or an output iterator is required.
- Bidirectional Similar to a forward iterator, but also supports the decrement (--) operator to move the iterator back one position. Example: list<>::iterator.
- Random access Similar to a bidirectional iterator, but also supports the subscript [] operator to access any index in the sequence. Also, you can add or subtract an integer to move a random access iterator by more than one position at a time. Subtracting two random access iterators yields the distance between them.
You can compare two random access iterators with < or >. Thus, a random access iterator is most like a conventional pointer, and a pointer can be used as a random access iterator. Examples: deque<>::iterator, vector<>::iterator.

We will hardly ever see anything but bidirectional and random access iterators, but it is important to know the other types exist. With the exception of output iterators, each of these categories includes the capabilities of the ones above it (a bidirectional iterator is also a forward and an input iterator, etc.).

In this example we use the fact that we can do math with iterators to find the index of elements that match a certain value:

vector<int> v;
v.push_back(1);
v.push_back(2);
v.push_back(2);
v.push_back(3);
for (vector<int>::iterator i = v.begin(); i != v.end(); i++) {
    if ((*i) == 2) {
        cout << "I found the number 2 at index " << i - v.begin() << endl;
    }
}

Practice: Assume the vector definition from above. Write a for loop that uses an iterator to output every other element. Actually advance the iterator by 2. Here you'll have to use the < operator instead of !=.

for (vector<int>::iterator i = v.begin(); i < v.end(); i += 2) {
    cout << *i << endl;
}
https://max.berger.name/teaching/s06/script/ch16.html
In this chapter, we will cover the following recipes:

- Installing JDK 9 on Windows and setting up the PATH variable
- Installing JDK 9 on Linux (Ubuntu, x64) and configuring the PATH variable
- Compiling and running a Java application
- New features in Java 9
- Using new tools in JDK 9
- Comparing JDK 8 with JDK 9

Every quest for learning a programming language begins with setting up the environment to experiment with it, so we will start by installing JDK 9. Then, we'll end the chapter with a comparison between the JDK 8 and JDK 9 installations.

In this recipe, we will look at installing JDK on Windows and how to set up the PATH variable to be able to access the Java executables (such as javac, java, and jar, among others) from anywhere within the command shell.

Visit the JDK download page and accept the early adopter license agreement, which looks like this:

After accepting the license, you will get a grid of the available JDK bundles based on the OS and architecture (32/64 bit), as shown here:

The JDK tools, such as javac, java, and jar, among others, are available in the bin directory of your JDK installation. There are two ways you could run these tools from the command prompt:

- Navigate to the directory where the tools are installed and then run them, as follows:

 cd "C:\Program Files\Java\jdk-9\bin"

- Add the bin directory to the PATH environment variable so that the tools are available from any location. The variables defined under System variables are available across all the users of the system, and those defined under User variables for sanaulla are available only to the user, sanaulla.

- Click on New under User variables for <your username> to add a new variable, with the name JAVA_HOME, and its value as the location of the JDK 9 installation. For example, it would be C:/Program Files/Java/jdk-9 (for 64 bit) or C:/Program Files (x86)/Java/jdk-9 (for 32 bit):

- The next step is to update the Path variable: select it, click on Edit, then click on New.
- Any of the actions in the previous step will give you a popup, as shown in the following screenshot (on Windows 10):

The following image shows the other Windows versions:

- You can either click on New in the first picture and insert the value %JAVA_HOME%/bin, or you can append the value against the Variable value field by adding ;%JAVA_HOME%/bin. The semicolon (;) in Windows is used to separate multiple values for a given variable name.

- After setting the values, open the command prompt and then run javac -version; you should be able to see javac 9-ea as the output. If you don't see it, then it means that the bin directory of your JDK installation has not been correctly added to the PATH variable.

In this recipe, we will look at installing JDK on Linux (Ubuntu, x64) and also how to configure the PATH variable to make the JDK tools (such as javac, java, jar, and others) available from any location within the terminal.

- Follow Steps 1 and 2 of the Installing JDK 9 on Windows recipe.
- Once the download completes, you should have the relevant JDK available, for example, jdk-9+180_linux-x64_bin.tar.gz. You can list the contents by using $> tar -tf jdk-9+180_linux-x64_bin.tar.gz. You can even pipe it to more to paginate the output: $> tar -tf jdk-9+180_linux-x64_bin.tar.gz | more.
- Extract the contents of the tar.gz file under /usr/lib by using $> tar -xvzf jdk-9+180_linux-x64_bin.tar.gz -C /usr/lib. This will extract the contents into a directory, /usr/lib/jdk-9. You can then list the contents of JDK 9 by using $> ls /usr/lib/jdk-9.
- Update the JAVA_HOME and PATH variables by editing the .bash_aliases file in your Linux home directory:

 $> vim ~/.bash_aliases
 export JAVA_HOME=/usr/lib/jdk-9
 export PATH=$PATH:$JAVA_HOME/bin

Source the .bashrc file to apply the new aliases:

 $> source ~/.bashrc
 $> echo $JAVA_HOME
 /usr/lib/jdk-9
 $> javac -version
 javac 9
 $> java -version
 java version "9"
 Java(TM) SE Runtime Environment (build 9+180)
 Java HotSpot(TM) 64-Bit Server VM (build 9+180, mixed mode)

In this recipe, we will write a very simple modular Hello world program to test our JDK installation. This simple example prints Hello world in XML; after all, it's the world of web services.

- Now, also to test your JDK installation, the directory structure with the preceding files is as follows:
- Let's now compile and run the code from the directory hellowordxml, by using java --module-path mods -m com.packt/com.packt.HelloWorldXml. You will see the following output:

 <messages><message>Hello World in XML</message></messages>

Do not worry if you are not able to understand the options passed with the java or javac commands. You will learn about them in Chapter 3, Modular Programming.

The release of Java 9 is a milestone in the Java ecosystem. The much-awaited modular framework developed under Project Jigsaw will be part of this Java SE release. Another major feature is the JShell tool, which is a REPL tool for Java. Apart from this, there are other important API changes and JVM-level changes to improve the performance and debuggability of the JVM.

In a blog post, Yolande Poirier categorizes JDK 9 features into the following:

- Behind the scenes
- New functionality
- Specialized
- New standards
- Housekeeping
- Gone

The same blog post summarizes the preceding categorization in the following image:

In this recipe, we will discuss a few important features of JDK 9 and, wherever possible, also show a small code snippet of that feature in action.
Every new feature in JDK is introduced by means of JDK Enhancement Proposals, also called JEPs. More information about the different JEPs that are part of JDK 9 and the release schedule of JDK 9 can be found on the official project page.

We have picked a few features which we feel are amazing and worth knowing about. In the following few sections, we'll briefly introduce you to those features.

Java's Process API has been quite primitive, with support only to launch new processes and redirect the processes' output and error streams. In this release, the updates to the Process API enable the following:

- Get the PID of the current JVM process and of any other processes spawned by the JVM
- Enumerate the processes running in the system to get information such as PID, name, and resource usage
- Manage process trees
- Manage subprocesses

Let's look at some sample code, which prints the current PID as well as the current process information:

//NewFeatures.java
public class NewFeatures {
    public static void main(String[] args) {
        ProcessHandle currentProcess = ProcessHandle.current();
        System.out.println("PID: " + currentProcess.getPid());
        ProcessHandle.Info currentProcessInfo = currentProcess.info();
        System.out.println("Info: " + currentProcessInfo);
    }
}

Note: This feature is being included in the incubator module. This means that the feature is expected to change in subsequent releases and may even be removed completely. So, we advise you to use this on an experimental basis.

Java's HTTP API has been the most primitive. Developers often resort to using third-party libraries, such as Apache HTTP, RESTlet, Jersey, and so on. In addition to this, Java's HTTP API predates the HTTP/1.1 specification and is synchronous and hard to maintain. These limitations called for the need to add a new API.
The new HTTP client API provides the following:

- A simple and concise API to deal with most HTTP requests
- Support for the HTTP/2 specification
- Better performance
- Better security
- A few more enhancements

Let's see some sample code to make an HTTP GET request using the new APIs. Below is the module definition defined within the file module-info.java:

//module-info.java
module newfeatures {
    requires jdk.incubator.httpclient;
}

The following code uses the HTTP Client API, which is part of the jdk.incubator.httpclient module:

import java.net.URI;
import jdk.incubator.http.HttpClient;
import jdk.incubator.http.HttpRequest;
import jdk.incubator.http.HttpResponse;

public class NewFeatures {
    public static void main(String[] args) throws Exception {
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(HttpRequest.newBuilder(new URI("http://httpbin.org/get")).GET().build(),
                  HttpResponse.BodyHandler.asString());
        System.out.println("Response Body: " + response.body());
    }
}

In Java SE 7, underscores (_) were introduced in numeric literals, whereby a large number could be conveniently written by introducing _ between the digits. This helped in increasing the readability of the number, for example:

Integer large_Number = 123_123_123;
System.out.println(large_Number);

In Java SE 8, the use of _ on its own as an identifier, as shown earlier, resulted in a warning, but in Java SE 9, this use results in an error, which means that a variable can no longer be named simply _.

The other part of this JEP is support for private methods in interfaces. Java started with interfaces with absolutely no method implementations. Then, Java SE 8 introduced default methods that allowed interfaces to have methods with implementations, called default methods. So any class implementing this interface could choose not to override the default methods and use the implementation provided in the interface. Java SE 9 introduces private methods, wherein the default methods in an interface can share code between them by refactoring the common code into private methods.

Another useful feature is allowing effectively final variables to be used with try-with-resources. As of Java SE 8, we needed to declare a variable within the try-with-resources block, such as the following:

try (Connection conn = getConnection()) {} catch (Exception ex) {}
However, with Java SE 9, we can do the following:

try (conn) {} catch (Exception ex) {}

Here, conn is effectively final; that is, it has been declared and defined before, and is never reassigned during the course of the program execution.

Languages such as Scala come with a REPL (read-evaluate-print loop) tool, where you can quickly try out a program, as in scala> println("Hello World"). Some of the advantages of the JShell REPL are as follows:

- Helps language learners to quickly try out the language features
- Helps experienced developers to quickly prototype and experiment before adopting it in their main code base
- Java developers can now boast of a REPL

Let's quickly spawn our command prompts/terminals and run the JShell command, as shown in the following image:

There is a lot more we can do, but we will keep that for Chapter 13, The Read-Evaluate-Print Loop (REPL) Using JShell.

The new feature of multi-release JAR files allows developers to build JAR files with different versions of class files for different Java versions. The following example makes it clearer. Here is an illustration of a current JAR file:

jar root
- A.class
- B.class
- C.class

Here is how a multi-release JAR file is laid out, with a Java 9-specific version of A.class placed under META-INF/versions/9:

jar root
- A.class
- B.class
- C.class
- META-INF
  - versions
    - 9
      - A.class

On a Java 9 runtime, the class under the 9 folder is picked for execution. On a platform that doesn't support multi-release JAR files, the classes under the versions directory are never used. So, if you run the multi-release JAR file on Java 8, it's as good as running a simple JAR file.

In this update, a new class, java.util.concurrent.Flow, has been introduced, which has nested interfaces supporting the implementation of a publish-subscribe framework. The publish-subscribe framework enables developers to build components that can asynchronously consume a live stream of data by setting up publishers that produce the data and subscribers that consume the data via a subscription, which manages them. The four new interfaces are as follows:

- java.util.concurrent.Flow.Publisher
- java.util.concurrent.Flow.Subscriber
- java.util.concurrent.Flow.Subscription
- java.util.concurrent.Flow.Processor (which acts as both Publisher and Subscriber)
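The Flow interfaces listed above can be exercised with java.util.concurrent.SubmissionPublisher, the JDK's stock Flow.Publisher implementation (not covered in this chapter). The following is a minimal sketch; the class and method names FlowDemo and publishAndCollect are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

    // Publishes the given items and returns everything a simple Subscriber received.
    static List<String> publishAndCollect(List<String> items) {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();

        publisher.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;

            @Override
            public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);             // back-pressure: ask for one item at a time
            }

            @Override
            public void onNext(String item) {
                received.add(item);
                subscription.request(1);  // ready for the next item
            }

            @Override
            public void onError(Throwable t) {
                done.countDown();
            }

            @Override
            public void onComplete() {
                done.countDown();         // publisher.close() eventually lands here
            }
        });

        items.forEach(publisher::submit); // asynchronous delivery starts
        publisher.close();                // signals onComplete once all items are delivered
        try {
            done.await();                 // wait until the subscriber has seen everything
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(publishAndCollect(List.of("a", "b", "c"))); // [a, b, c]
    }
}
```

Note how the Subscription manages the flow: the subscriber only receives what it has requested, which is how the framework supports back-pressure.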
The main aim of this project is to introduce the concept of modularity, support creating modules in Java, and then apply the same to the JDK; that is, to modularize the JDK. Some of the benefits of modularity are as follows:

- Stronger encapsulation: Modules can access only those parts of another module that have been made available for use. So, the public classes in a package are not public unless the package is explicitly exported in the module info file. This encapsulation cannot be broken by using reflection (except in cases where the module is an open module or specific packages in the module have been made open).
- Clear dependencies: Modules must declare which other modules they will be using via the requires clause.
- Combining modules to create a smaller runtime, which can be easily scaled to smaller computing devices.
- Making applications more reliable by eliminating runtime errors. For example, you must have experienced your application failing during runtime due to missing classes, resulting in ClassNotFoundException.

There are various JEPs that are part of this project, as follows:

- JEP 200 - modular JDK: This applies the Java platform module system to modularize the JDK into a set of modules that can be combined at compile time, build time, or runtime.
- JEP 201 - modular source code: This modularizes the JDK source code into modules and enhances the build tools to compile the modules.
- JEP 220 - modular runtime images: This restructures the JDK and JRE runtime images to accommodate modules and to improve performance, security, and maintainability.
- JEP 260 - encapsulate most internal APIs: Currently, a lot of internal APIs can be accessed directly or via reflection. Accessing internal APIs that are bound to change is quite risky. To prevent their use, they are being encapsulated into modules, and only those internal APIs that are widely used are being made available until a proper API is in place.
- JEP 261 - module system: This implements the module system Java specification by changing the Java programming language, the JVM, and other standard APIs. This includes the introduction of a new construct, module { }, with its supported keywords, such as requires, exports, opens, and uses.
- JEP 282 - jlink, the Java linker: This allows packaging modules and their dependencies into smaller runtimes.

More details about Project Jigsaw can be found on the Project Jigsaw homepage.

There are quite a few features listed that are significant for developers, and we thought of grouping them together for your benefit:

- Enhance the Javadoc tool to generate HTML5 output; the generated Javadoc should support local search for classes and other elements.
- Make G1 the default garbage collector and remove GC combinations that were deprecated in Java 8. G1 is the new garbage collector (which has existed since Java SE 7), which focuses on reducing the pause times of the garbage collection activity. These pause times are very critical to latency-critical applications and, hence, such applications are moving towards adopting the new garbage collector.
- Change the internal representation of String to make use of a byte array rather than a character array. In a character array, each array element is 2 bytes, and it was observed that a majority of strings need only 1 byte per character. This resulted in wasteful allocation. The new representation also introduces a flag to indicate the type of encoding used.
- The new stack-walking API supports navigating the stack trace, which will help to do much more than just print the stack trace.
- Allow the image I/O plugin to support the TIFF image format.

There are a few new command-line tools introduced in JDK 9 to support new features. We will give you a quick overview of these tools, and each will be explained with recipes of its own in later chapters.
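The module construct and keywords introduced by JEP 261 come together in a module declaration file, module-info.java. The module and package names in this sketch are illustrative, not from the book:

```java
// module-info.java (illustrative names)
module com.example.inventory {
    requires java.sql;                        // this module depends on the java.sql module
    exports com.example.inventory.api;        // only this package is visible to other modules
    opens com.example.inventory.model;        // allow deep reflection on this package at runtime
    uses com.example.inventory.spi.Exporter;  // declare consumption of a service interface
}
```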
You should have JDK 9 installed and the PATH environment variable updated to add the path to the bin directory of your JDK installation. Also, you need to have tried out the HelloWorldXml example explained in the recipe Compiling and running a Java application.

We will look at a few of the interesting new command-line tools.

This tool is used for scanning the usage of deprecated APIs in a given JAR file, classpath, or source directory. Suppose we have a simple class that makes use of the deprecated method addItem of the java.awt.List class, as follows:

import java.awt.List;

public class Test {
    public static void main(String[] args) {
        List list = new List();
        list.addItem("Hello");
    }
}

Compile the preceding class and then use jdeprscan, as follows:

C:\Program Files\Java\jdk-9\bin> jdeprscan.exe -cp . Test

You will notice that this tool prints out class Test uses method java/awt/List addItem (Ljava/lang/String;)V deprecated, which is exactly what we expected.

This tool analyses your code base, specified by the path to a .class file, directory, or JAR; lists the package-wise dependencies of your application; and also lists the JDK module in which each package exists. This helps in identifying the JDK modules that the application depends on and is the first step in migrating to modular applications. We can run the tool on our HelloWorldXml example written earlier and see what jdeps provides:

$> jdeps mods/com.packt/
com.packt -> java.base
com.packt -> java.xml.bind
   com.packt -> java.io                      java.base
   com.packt -> java.lang                    java.base
   com.packt -> javax.xml.bind               java.xml.bind
   com.packt -> javax.xml.bind.annotation    java.xml.bind

This tool is used to select modules and create a smaller runtime image with the selected modules.
For example, we can create a runtime image by adding the com.packt module created in our HelloWorldXml example:

$> jlink --module-path mods/:$JAVA_HOME/jmods/ --add-modules com.packt --output img

Looking at the contents of the img folder, we should find that it has the bin, conf, include, and lib directories. We will learn more about jlink in Chapter 3, Modular Programming.

JMOD is a new format for packaging your modules. This format allows the inclusion of native code, configuration files, and other data that do not fit into JAR files. The JDK modules have been packaged as JMOD files. The jmod command-line tool allows you to create, list, describe, and hash JMOD files:

- create: This is used to create a new jmod file
- list: This is used to list the contents of a jmod file
- describe: This is used to describe the module details
- hash: This is used to record hashes of tied modules

Due to the application of a modular system to the JDK under Project Jigsaw, there have been a few changes in the directory structure of the installed JDK. In addition to these, there were a few changes undertaken to fix the JDK installation structure, which dates back to the time of Java 1.2. This has been deemed a golden opportunity by the JDK team to fix the issues with the JDK directory structure. To see the difference in the JDK 9 directory structure, you will need to install a pre-JDK 9 version. We have chosen JDK 8 to compare with JDK 9. So, go ahead and install JDK 8 before you proceed further.

- We did a side-by-side comparison of both the JDK installation directories, as shown in the following:
- Following are our observations from the preceding comparison:
- The jre directory has been completely removed and has been replaced by jmods and conf. The jmods directory contains the runtime images of the JDK modules, and the conf directory contains the configuration and property files that were earlier under the jre directory.
- The contents of jre\bin and jre\lib have been moved to the bin and lib directories of the JDK installation.
https://www.packtpub.com/product/java-9-cookbook/9781786461407
App packages and deployment (Windows Runtime apps)

[This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you're developing for Windows 10, see the latest documentation]

As a developer, you don't write routines to install or uninstall your Windows Runtime app. Instead, you package your app and submit it to the Store. Users acquire your app from the Store as an app package. The operating system uses info in an app package to install the app, and to ensure that all traces of the app are gone from the device after the user uninstalls it.

An app package is a container based on the Open Packaging Conventions (OPC) standard. OPC defines a structured way to store data and resources for the app by using a standard ZIP file.

For info about how to use Microsoft Visual Studio to deploy app packages, see Deploying Windows Runtime apps from Visual Studio.

Starting with Windows 8.1 and Windows Phone 8.1, new app bundles help to optimize the packaging and distribution of an app. And resource packs let you offer extras, like localization or assets for high-resolution displays, to customers who want them, without affecting disk space, bandwidth, or the app-purchase experience for customers who don't. Also, hard linking optimizes the installation of your app by eliminating data duplication, so the same file is never downloaded more than once.

Windows Runtime app deployment

The Windows Runtime app model is a declarative, state-driven process that provides all installation and update data and instructions for an app in a single package. In this declarative model, deployment operations are reliable. The files shipped in the package are immutable, which means that they haven't been modified since they were delivered to the device. Because the package owner doesn't need to write custom actions and code, the number of failure points is reduced.
During the update process, a new version of the app is downloaded and installed to the user's profile; immediately afterwards, the old version is removed from the device. In contrast to Windows Installer, there is no concept of patch files or any other files that are used to deploy a Windows Runtime app.

Note: On Windows, because Windows Runtime apps are installed into a user's profile, each user has complete control over their Windows Store apps. Apps can be installed, updated, and removed without affecting any other user's apps on the device.

For more info about deployment, see Deployment for Windows Runtime apps.

Windows Runtime app packages – .appx

All the components that define a Windows Runtime app are stored in a Windows Runtime app package. This Windows Runtime app package has a .appx file extension and is the unit of installation for a Windows Runtime app. Windows Runtime app packages are ZIP-based container files that are defined as a subset of the ISO and ECMA Open Packaging Conventions (OPC) standards. Each Windows Runtime app package contains the app's payload files plus info needed to validate, deploy, manage, and update the app. From a high-level view, each Windows Runtime app package contains these items:

App payload
App code files and assets. Payload files are the code files and assets that you author when you create your Windows Runtime app.

App manifest
App manifest file (Package.appxmanifest). The app manifest declares the identity of the app, the app's capabilities, and info for deploying and updating. For more info about the app manifest file, see App package manifest.

App block map
App package's block map file (AppxBlockMap.xml). The block map file lists all the app files contained in the package along with associated cryptographic hash values that the operating system uses to validate file integrity and to optimize an update for the app. For more info about the block map file, see App package block map.
App signature
App package's digital signature file (AppxSignature.p7x). The app package signature ensures that the package and contents haven't been modified after they were signed. If the signing certificate validates to a Trusted Root Certification Authorities certificate, the signature also identifies who signed the package. The signer of the package is typically the publisher or author of the app.

These preceding items comprise a fully self-contained Windows Runtime app that can be deployed to Windows 8 and later and Windows Phone 8.1 and later. You create the app payload and manifest files for your app. When Visual Studio packages your app, it automatically adds the app block map and signature files to the package. But you can also use the standalone MakeAppx and SignTool utilities if you want to manually package your app.

These sections describe how to package and deploy Windows Runtime apps:
- How to create an app package
- How to create an app package signing certificate
- How to sign an app package using SignTool
- How to troubleshoot app package signature errors
- How to programmatically sign an app package
- How to develop an OEM app that uses a custom file

Package identity

One of the most fundamental pieces of the app package is the 5-part tuple that defines the package. This tuple is known as the package identity and consists of this data:

Name
A general name that is used for the app package. For example, "myCompany.mySuite.myApp". Note: this name isn't necessarily what is displayed on the app tile.

Publisher
The publisher of the Windows Runtime app. In most cases, the publisher is the same as the account that was used to register for a developer account.

Version
A four-part version descriptor (major.minor.build.revision) that is used to service future releases of the app. For example, "1.0.0.0". Note: you must use all four parts of the version descriptor.

ProcessorArchitecture
The target architecture of the app package.
This value can be "x86", "x64", "arm", or "neutral". In many cases, this field can be "neutral" to represent all architectures.

ResourceID
Optional. A publisher-specified string that specifies the resources of the app package. This part of the tuple is used primarily if the app package has assets that are specific to a region, such as languages. If you are creating the package manually, see the Identity element.

Package format

Here we describe details about app packages, that is, the .appx file format.

App packages are read-only

Although app packages are based on a subset of OPC, we recommend that you not use existing APIs for manipulating OPC or ZIP files to edit app packages. After you create an app package, treat the package as read-only. The Visual Studio and MakeAppx processes that create app packages automatically generate and add the AppxBlockMap.xml file to the package. If you change any of the package contents, you need to update the package's block map file to match. To create a new package and block map file, you must rebuild the package with Visual Studio, the MakeAppx pack command, or the Windows 8 native-code IAppxPackageWriter APIs.

App package payload file names

To comply with OPC, the file path names for all files that are stored in an app package must be Uniform Resource Identifier (URI) compliant. File paths that are not URI compliant need to be percent-encoded when stored in an app package, and decoded back into the original file path when extracted from the package. For example, consider this payload file with a path and name that contains embedded spaces and the URI reserved characters '[' and ']':

\my pictures\kids party[3].jpg

When you store this payload file in the app package, the path for the file becomes:

/my%20pictures/kids%20party%5B3%5D.jpg

Visual Studio, the app packager (MakeAppx), and the Packaging APIs handle the percent-encoding and decoding of file paths automatically.
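The percent-encoding rule above can be reproduced with a standard URI-quoting routine. A minimal Python sketch (assuming backslashes are first normalized to forward slashes, and that the path separator itself stays unencoded; the helper name is illustrative):

```python
from urllib.parse import quote, unquote

def encode_payload_path(path: str) -> str:
    """Percent-encode a payload file path for storage in a ZIP-based package.

    Backslashes are normalized to forward slashes; '/' is kept as-is,
    while spaces and URI-reserved characters like '[' and ']' are
    percent-encoded.
    """
    return quote(path.replace("\\", "/"), safe="/")

encoded = encode_payload_path(r"\my pictures\kids party[3].jpg")
print(encoded)            # /my%20pictures/kids%20party%5B3%5D.jpg
print(unquote(encoded))   # /my pictures/kids party[3].jpg
```

Decoding with the matching unquote routine recovers the original path, which is why extracting with general ZIP tools (which skip this step) can leave paths percent-encoded.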
If you attempt to extract files directly from an app package by using general ZIP utilities or APIs, the file paths might remain percent-encoded. So we recommend that you extract files from an app package by using either the MakeAppx unpack command or the IAppxPackageReader APIs.

App package capacities

App packages support apps up to these capacity limits:

App package reserved path and file names

These path and file names are reserved, so don't use them for app payload files:

App package digital signatures

You must sign every app package before users can install it. While app package signing is partly automated through Authenticode, you must control the following features when you sign app packages:

- The subject name of the code signing certificate must match the Publisher attribute that is specified in the Identity element of the AppxManifest.xml file in the app package.
- The hash algorithm that is used to sign the app package must match the hash algorithm that is used to generate the AppxBlockMap.xml file in the app package. This hash algorithm is specified in the HashMethod attribute of the BlockMap element.
- An app package can't be time stamped independently of signing. It must be time stamped during signing if time stamping is desired.
- App packages don't support multiple enveloped signatures.

The signature of a package determines how the Windows Runtime app is licensed. How an app is licensed affects how it can be installed and run, so even two app packages with the same package identity might not be treated as equivalent during installation. For example, you can't install an app package with the same package identity as another already installed app if it doesn't also have the same signature.

Declarative install

Windows Runtime app deployment is an entirely declarative process that relies on the app package manifest. You use the app package manifest to capture your desired integration with the operating system.
For example, you use the app package manifest to declare the need to use a file type association, such as .jpg, on the operating system. By doing this, the operating system can completely manage the Windows Runtime app deployment process so that it's a consistent, dependable experience for each user on a multitude of devices. Moreover, because installation is declarative, uninstallation of a Windows Runtime app becomes deterministic and repeatable. Within the app package manifest, you can declare a wide range of technologies as part of Windows Runtime app installation.

App prerequisites

To successfully deploy an app, the operating system must satisfy all of that app's prerequisites that are referenced in the app package manifest. For example, if a version of the operating system exposes a new API that a Windows Runtime app calls, the app declares a prerequisite on that specific minimum version of the operating system. In this case, the proper operating system must be present on the target device before the app can be installed. In Windows 8 and later and Windows Phone 8.1 and later, you can declare these key types of prerequisites in the app package manifest:

OSMinVersion
Specifies the minimum version of the operating system and app model platform where this app is permitted to run.

OSMaxVersionTested
Specifies the maximum version of the operating system where this app was tested by the developer and known to be in a working state. This prerequisite field is used by the operating system to determine whether any app compatibility issue might arise during the app's usage. For example, if the app calls an API from the Windows 8 SDK and the API was later changed in a subsequent version of the operating system, the app might behave incorrectly. This prerequisite field helps ensure that the operating system can identify and correct this behavior so the app continues to function on all subsequent versions of the operating system.
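A prerequisite check like OSMinVersion is, at its core, a comparison of dotted version strings. A minimal Python sketch (the version values and helper names are illustrative, not part of any Windows API):

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted version string such as '6.3.9600.0' into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def meets_min_version(os_version: str, os_min_version: str) -> bool:
    """True if the device OS version satisfies the declared minimum version."""
    return parse_version(os_version) >= parse_version(os_min_version)

# Hypothetical values: a device OS build checked against a declared minimum.
print(meets_min_version("6.3.9600.0", "6.3"))   # True
print(meets_min_version("6.2.9200.0", "6.3"))   # False
```

Tuple comparison handles descriptors of different lengths naturally, which is why parsing into integer tuples is preferable to comparing the raw strings ("6.10" would otherwise sort before "6.3").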
Capabilities

Windows Runtime apps that need programmatic access to user resources such as Pictures, or to connected devices such as a webcam, must declare the appropriate capability. An app requests access by declaring capabilities in its app package manifest. You can declare capabilities by using the Manifest Designer in Visual Studio, or you can add them manually to the package manifest as described in How to specify capabilities in a package manifest. For more info about capabilities, see App capability declarations.

Dependencies

The Store hosts a unique set of app packages that contain operating system components that are serviced independently of the operating system. Windows Runtime apps can use these app packages by declaring a dependency in their app package manifest. These components contained in app packages hosted by the Store are called operating system component libraries. The Store manages the process of ensuring that the correct version of the component library is present when the app is installed on a device. These libraries, which include the Windows Library for JavaScript, the C++ Runtime Libraries (CRT), and PlayReady DRM, are essential to the creation of Windows Runtime apps. When an app deploys from the Store, the operating system satisfies the dependency declaration by downloading and installing the appropriate component library with the app that is being downloaded from the Store. For side loading Windows Store apps for testing or enterprise deployment, the Windows component library app package must be supplied and specified during deployment of the app package.

App bundles

Starting with Windows 8.1 and Windows Phone 8.1, you can use the app bundle (or .appxbundle package) to help optimize the packaging and distribution of a Windows Runtime app and resource packages to users all around the world. Note: create one app bundle for all your architectures rather than separate bundles for each architecture. You create the app bundle payload for your app.
Visual Studio creates and adds the bundle manifest. When Visual Studio bundles your app, it automatically splits the resources into separate packages and adds the app block map and signature files to the bundle. These items make up a fully self-contained Windows Runtime app that can be deployed to systems running Windows 8.1 and later and Windows Phone 8.1 and later. Each app bundle can contain many different resource packages to support different device configurations. Rather than directly referencing a resource package in your Windows Runtime app, we recommend that you rely on the app model to understand the contents of the app bundle and determine at installation time which app package and resource packages to install.

Resource packages

Starting with Windows 8.1 and Windows Phone 8.1, you can use a resource package to contain additional resources for the core app (for example, French-specific assets like strings or images). By using resource packages, you can separate the core app package from those additional resources. The resource package thus serves to tailor the app's overall experience without requiring download and installation of all resource packages to the device. The resource package is optional and can't be depended on by the app package. This means the app package must contain at least one set of default resources that can always be used in case no resource packages were installed on the device. This helps keep a couple of key promises: the app package can always be installed and launched properly on any device without resource packages, and if the installed resource package is not complete, the app package has resources to fall back on.

The resource package serves these purposes in the app model: Provides resource candidates that the resource-management system can use when the app runs, to tailor the app's experience.
Provides metadata that allows the resource package to target a specific resource qualifier (for example, user language, system scale, and DirectX features). Resource packages can target only one resource qualifier per package, but your app can have many resource packages. Resource packages must never contain code.

Hard linking

Starting with Windows 8.1 and Windows Phone 8.1, when the OS installs your app, it optimizes the installation by not downloading the same file more than once, whenever feasible. That is, if the OS determines that your app uses a file that was already installed on the device, the OS creates a shared version of that file and then makes your app hard link to the shared version. This reduces the app's installation time and disk footprint. These shared files can be libraries, runtimes, and so on. To take advantage of hard linking, we recommend that you follow best practices. For example, always attempt to reuse the same runtime or library binaries with each version of your app, unless you absolutely must update them.

Windows 8.1 Update: For the initial release of Windows 8.1, hard linking was limited to apps from the Windows Store. For Windows 8.1 Update, hard linking is also enabled among side-loaded enterprise apps. For more info about side loading, see Deploying enterprise apps.

Per-user deployments

Note: Windows Runtime app deployments are per user, which means they only affect the account of the user who installed them. Furthermore, in multi-user scenarios, users don't have any knowledge of what was installed for any other user. For example, suppose UserA installed the Notepad app while UserB installed the Calculator app. In this scenario, UserA and UserB have no knowledge of the apps the other user installed on the same computer (app isolation).

App isolation

Note: App isolation on the operating system is limited to the user portion of the Windows Runtime app.
All other data from the app is stored in a location that the operating system can access. For example, suppose UserA installed the Calculator app and UserB also installed the Calculator app; in this scenario, only one copy of the Calculator app binaries is stored on the drive (%ProgramFiles%\WindowsApps), and both users have access. UserA doesn't see UserB's app data and vice versa. While the runtime binaries are shared, the app data is still isolated. The %ProgramFiles%\WindowsApps directory can't be changed. This also applies to the underlying %ProgramFiles% directory as well as the %ProgramData% and %UserProfile% directories.

Multi-version existence

Note: In addition to containing all the Windows Store app binaries for all users on the system, the WindowsApps directory can also contain multiple versions of the same Windows Store app. For example, suppose both UserA and UserB installed the Notepad app, and UserA updated to version 2 of the Notepad app while UserB didn't. In this scenario, two versions of the Notepad app exist on the operating system. Because only one version is installed for each user, the multiple versions don't conflict with each other. This behavior also applies to dependency packages.

Deployment for Windows Runtime apps

These sections describe the flow of installing, updating, and removing Windows Runtime apps.

Installing Windows Runtime apps

This figure shows the flow of installing Windows Runtime apps:

The Windows Runtime app deployment process occurs in multiple phases. Initially, the OS acquires and validates the app manifest, app block map, and app signature. Next, the OS checks the app package's deployment criteria to ensure that the app deployment will be successful. Next, the OS stages the package binaries to the WindowsApps directory. Finally, the OS registers the package into the user's account.
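The sharing of binaries during staging relies on the hard-linking mechanism described earlier, which can be demonstrated with an ordinary filesystem hard link. A small Python sketch (POSIX-style paths; purely illustrative of the mechanism, not of the Windows deployment engine):

```python
import os
import tempfile

# Simulate a shared file that a previous install already staged,
# then "stage" a second app by hard linking instead of copying.
with tempfile.TemporaryDirectory() as root:
    shared = os.path.join(root, "shared_runtime.dll")
    with open(shared, "wb") as f:
        f.write(b"runtime bytes")

    app_copy = os.path.join(root, "app2_runtime.dll")
    os.link(shared, app_copy)  # hard link: no second copy of the data

    # Both names refer to the same underlying file data.
    same = os.path.samefile(shared, app_copy)
    links = os.stat(shared).st_nlink
    print(same, links)  # True 2
```

Because both directory entries point at the same data, the second "install" costs no additional disk space, which is the effect the OS achieves for shared libraries and runtimes.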
Deployment checks (validation)

This figure shows the phase where the OS performs deployment checks. After the user starts to install a Windows Runtime app, the OS must complete these checks before the deployment process can begin:

OSMinVersion must be satisfied
You specify app prerequisites within the app package manifest. They express the requirement for a specific minimum operating system version. For more info about app prerequisites, see App prerequisites.

App dependencies must be satisfied
Windows Runtime apps can express a dependency on another app package for added functionality that the app needs. For more info about app dependencies, see Dependencies.

Disk space must be sufficient
Each Windows Runtime app requires a certain amount of disk space to deploy. If there isn't enough disk space on the device to deploy the package, deployment fails.

App isn't already deployed
Within the context of the specific user installing the Windows Runtime app, the app can't be installed again, so the OS must check that the app isn't already installed.

App assets must pass the signature check
Using the already-validated block map, each file in the app package must have its integrity checked.

Package staging

This figure shows the phase where the OS stages the package. After the app model determines that the package can deploy on the device, the OS stores the package's contents on the disk in the %ProgramFiles%\WindowsApps directory, in a new directory named after the package identity.

Package registration

This figure shows the phase where the OS registers the package. Package registration is the final phase in the deployment process. During this phase, the extensions that are declared in the manifest are registered with the operating system. This enables the app to deeply integrate with the operating system.
For example, if you want your app to be able to open .txt files, declare a FileTypeAssociation extension as XML in your app package manifest and specify .txt as the file type. At deployment time, this XML is translated into the set of system changes that need to occur to properly register the app to handle .txt files. These changes are then made on behalf of the app, by the app model. The app model supports many different extensions. For more info about these extensions, see App contracts and extensions.

Updating Windows Runtime apps

This figure shows the flow of updating Windows Runtime apps. The updating workflow is similar to that of installing Windows Runtime apps, but there are a few key differences that make updating unique.

Deployment checks for updating

This figure shows the phase where the OS performs deployment checks when updating. If the currently installed app package's version is greater than or equal to the version the user is trying to install, deployment won't succeed.

Package staging (delta downloads)

This figure shows the phase where the OS stages the updated package. The staging process during an update is similar to the typical staging process during installation, but with a key difference: a pre-existing version of the package is already installed on the operating system. During the update, a set of payload files is downloaded and copied to the device. In many cases, many of those payload files won't change, or will change only slightly, in the updated version of the app package, so the payload files of the pre-existing package can be used to construct the updated app package content and assets. Because the BlockMap structure of the app package contains a list of hashes for each block of each file, the OS can compute the precise set of changes at the block level by comparing the former and new app BlockMap files.
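The block-level comparison can be sketched in a few lines; the following Python illustration uses SHA-256 over a hypothetical 64 KB block size (the real block map format and hash algorithm are properties of the package itself, this only shows the delta idea):

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # hypothetical block size, for illustration only

def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of a file, as a block map would list them."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def blocks_to_download(old: list, new: list) -> list:
    """Indices of new-version blocks whose hashes differ or are brand new."""
    return [i for i, h in enumerate(new)
            if i >= len(old) or old[i] != h]

old_file = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE
new_file = b"A" * BLOCK_SIZE + b"C" * BLOCK_SIZE + b"D" * BLOCK_SIZE

delta = blocks_to_download(block_hashes(old_file), block_hashes(new_file))
print(delta)  # [1, 2] -- block 0 is unchanged and can be reused locally
```

Only blocks 1 and 2 would need to come from the source; block 0 is satisfied from the already-installed version, which is what makes update downloads small.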
Here are the possible outcomes of this comparison:

A file was unchanged
The file is hard linked to the updated package folder location.

Blocks in a file were unchanged
The unchanged blocks are copied over into the updated package folder.

Blocks in a file were changed
The changed blocks are marked to be downloaded from the source.

An entire file is new
The entire file is downloaded from the source.

A file no longer exists
The file is not used for the update at all.

After the comparison completes, all the data that can be preserved is hard linked or copied, and any new data is downloaded from the source and used to make the updated files.

Package registration for updating

This figure shows the phase where the OS registers the updated package. When you update a package's registration, the OS also needs to update the registrations of the previous version. The app model automatically updates any existing extension registrations, removes obsolete registrations, and registers new extensions based on the declarations that are present in the manifests of the former and new versions of the app.

Apps in use

De-registering an app package from the operating system involves removing the internals that allow the OS to launch the Windows Runtime app. In some cases, the app can be running while the update occurs. In this scenario, the deployment engine requests that the app be suspended and subsequently terminated. The update process either succeeds or fails depending on the outcome of that request. When the operations succeed, the app is also prevented from launching for the short duration while the new app package is registered and the former app package is de-registered. After this phase completes, the app is allowed to launch again.

App data

App data is an entity that is versioned independently of the actual Windows Runtime app.
As such, if ApplicationData.Version wasn't updated along with the update for the Windows Runtime app, the app state isn't affected by the update.

Package de-staging

This figure shows the phase where the OS de-stages the updated package. After the registration operation successfully completes, if the pre-existing version of the package is not being used by any other user on the operating system, the package is marked for removal from the operating system. Initially, the OS copies the pre-existing version's app package folder into the %ProgramFiles%\WindowsApps\Deleted directory. When no other deployment operations are ongoing, the OS deletes the pre-existing version's app package folder.

Note: In multi-user scenarios, the app package might still be installed on the operating system for another user. In this case, the package content isn't de-staged from the operating system until all users have removed it.

Removing Windows Runtime apps

This figure shows the flow of removing Windows Runtime apps.

Note: Just as packages are installed per user on a computer, they are also removed per user. If a Windows Store app is installed for multiple users, it is only de-registered for the current user. For example, if UserA and UserB have the Calculator app installed and UserA uninstalls the app, it is removed only for UserA and not for UserB.

The removal process consists of the same basic phases as the updating process.

Package de-registration

This figure shows the phase where the OS de-registers the removed package. The de-registration process removes the Windows Runtime app's integration with the user's account. Any associated data that was installed to the user's account, such as app state, is also removed. Similar to the updating process, the deployment engine must request that the app be suspended and terminated via the Process Lifetime Manager (PLM) so the app can be de-registered from the user's account.
Note: After the PLM returns, the removal operation continues to de-register the Windows Runtime app from the user's account. The operation continues even if the PLM was not successful.

Package de-staging

This figure shows the phase where the OS de-stages the removed package. After the de-registration operation successfully completes, the package, if it is not being used by any other user on the operating system, is marked for removal from the operating system. Initially, the OS copies the package's app package folder into the %ProgramFiles%\WindowsApps\Deleted directory. When no other deployment operations are ongoing, the OS deletes the package's app package folder.

Note: In multi-user scenarios, the app package might still be installed on the operating system for another user. In this case, the package content isn't de-staged from the operating system until all users have removed it.

App bundle deployment

Starting with Windows 8.1 and Windows Phone 8.1, you can deploy app bundles to optimize the packaging and distribution of your app. The deployment of app bundles via the Store follows this workflow. The deployment process occurs in multiple phases. Initially, the OS acquires and validates the app bundle manifest, bundle block map, and bundle signature. Next, the OS checks the bundle manifest to ensure that there is an app that can be deployed on the current architecture. When the right app package has been found, the OS checks the app package's deployment criteria to make sure that the app deployment will be successful. Then the OS determines the subset of applicable resource packages for deployment and stages these package binaries to the \WindowsApps\ directory. Finally, the OS registers the app package and resource packages into the user's account.

Validation

After the user starts to install a Windows Runtime app, the OS must complete these checks before deployment can begin.
Package applicability

After the OS verifies that the app bundle can be deployed on the system, it then determines the resource packages to deploy alongside the app package to enhance the user's experience. Applicability is checked based on these three specific resource qualifiers.

Package staging

After the OS determines that the app bundle can be deployed on the system and which packages to deploy, the package contents are downloaded to the \WindowsApps\ directory. A new directory is created for each downloaded package and is named by using the package identity name value.

Inventory of packages

As Windows Runtime apps are installed, updated, and removed, a given user can at any time view which app packages are installed or pre-staged. Note: a user with Administrator privileges can also determine which app packages are installed for all users on the system. For more info about how to inventory packages on the operating system, see Tools and PowerShell cmdlets.

Frequently asked questions

How do I install multiple packages simultaneously?
You can install multiple packages by calling the Packaging APIs multiple times, or once for the entire set of packages to be installed. The underlying implementation of the app model allows any number of app packages to be awaiting deployment. But only up to 3 concurrent staging operations per user (a total of 7 simultaneous staging operations per operating system) and 1 registration operation per operating system are allowed.

What happens if the package is already installed?
If the package is already installed, a package with the same package identity is already registered for the current user's account. In this scenario, the identical package isn't installed again.

Can an update be made mandatory?
You can't make an update mandatory via the app model; the update model is strictly optional. In rare cases, the Store can deem an update appropriate to be distributed as a higher-priority update (such as a security fix).
In this case, the update can be forcefully deployed to clients. Note: in the Enterprise, you can force an update via group policy.

How can I roll back an update to a previous version?
You can't roll back a Windows Runtime app to a previous version of the app. Because the app package data is removed from the operating system shortly after the package is uninstalled, there is no way to restore it.

Can I move my %ProgramFiles%, %ProgramData%, or %UserProfile% directories?
This is not a supported scenario for Windows Runtime apps and will cause errors when using the app.

Packaging and deployment programming interfaces

Windows Runtime:
- Windows.ApplicationModel.Package class
- Windows.Management.Core namespace
- Windows.Management.Deployment namespace
- Windows.Phone.Management.Deployment namespace
https://docs.microsoft.com/en-us/previous-versions/windows/apps/hh464929(v=win.10)
- How to use renderer ? - Combobox Initial Value - Ext.get - [SOLVED (by me)] Combobox won't submit value - i want graphics with EXT - Is this possible in nested Layout? - DomQuery path select direct child from root - loadMask question - debugging in IE - [SOLVED] dateField formatting problem - Why does BLANK_IMAGE_URL in ext-yui-adapter.js refer back to? - Equivalent ext function to jQuery.fn.trigger() ?? - Problem: loading grid in accordion pan - [1.1, Tree] DnD node into root when rootVisible: false problem - json to form (easy) but the other way - focus issues - how to trigger a close function of contentpanel - Effect for UpdateManager - Populating Combobox with Ajax call and xml - FeedReader don't read full item's description - How to Change the row style of a grid when action is performed - How stretch grid to 100% - store, on reload bug - 'uncheck' event for disjunct radiobuttons (grouped through name-attribute) - scrollTo in Ext.ContentPanel or Ext.BorderLayout - Problem with too big TreePanel - TextField growMax on page load [UNSOLVED] - Grid demo not working - How to capture mouse X/Y on a DatePicker? - Is it possible to change response from server before storing into Datastore? - Post Grid Render Event - How to set titlebar to true after the BorderLayout has been created - DateField in grid editor - How can I access XML data loaded with Ext.data.HttpProxy? - Reducing Cursor Flicker on SlitBars - Basic layout query - Cookie in a IFrame - Form Layout - Width specification/UI Defacement - How to disable a HtmlEditor?? - Dynamically adding / removing checkboxes - Ext.ToolBar is not a constructor - Double click on a grid row opens new grid? - IE error in ext-all.js 'undefined' is null or not an object - Requesting file on other domain, load not works - Remove CSS Style on TextField - Ext.get and Google Maps - Using a slim Ext - How to destroy objects? 
Please help - other page's script recognize in div - combobox value didn't set correctly - Reconfigure grid question - How to get file upload into a datagrid cell
Well, here it is: CodePlex has gotten a new facelift. It's the result of countless hours of design discussion and of sifting through feedback from users who felt strongly enough to tell us how they think things should be, and we're hoping we've delivered an experience that most everyone will find acceptable. The neon green is gone! Now it won't seem as if every day is St. Patrick's Day on the site. You'll notice that we haven't added a lot of bells and whistles or graphical fluff. The team has always been of the opinion that CodePlex isn't about the home page; it's about the projects themselves. Unlike Frank Lloyd Wright's approach to the design of the Guggenheim Museum on 5th Ave. in New York, the intention here was NOT to try to outshine the content we're hosting, but simply to provide the functionality site users need to interact with it effectively. The site home page remains visually minimalistic and continues to occupy a supporting role for the real stars on the stage: the projects themselves.

Bug Fixing and Perf Optimization

In addition to the new site home page, a lot of bug fixing went into this release, and those efforts happened in parallel with several data center upgrades needed to improve site operations moving forward. We've had some major successes in improving usability and optimizing performance, and we'll continue to work on these things. Notice how much quicker it is to navigate through work items now, in both Basic and Advanced views. Some more work needs to be done, but great strides have been made; test it out and see for yourself.

New Project Subdomain URLs

A new feature was added that lets project owners point others to their projects using a more personalized CNAME project subdomain URL format.
Here is an example: Right now it only points to the project home page, and the URL reverts to the usual CodePlex domain when you click any of the other tabs in your project, but support for the other project areas is coming in the future.

Size Limit for Release File Uploads Has Been Increased

The size limit for individual files inside project releases has been increased to 250 MB. This means a single release can contain multiple files, each of which can now be up to 250 MB. This will help out the projects with large file requirements. Currently most projects won't need this kind of limit, but it's there if they do. Anyway, we hope you like the new changes, and as always we'd like to hear your thoughts about them. Here is the URL for the Discussions forum dedicated to conversations about the CodePlex site itself. Even brief comments can carry weight there:

Notice on CodePlex that a new project search UI appears once you make an initial search for a project, either by using the project search box itself or by clicking on a tag. There are some interesting new search enhancements and filters, and I'm going to list them all here to give the full picture of everything that has been done:

1. The search results now have the target keyword highlighted in yellow for quick identification within the result set. All related tags are also included in the result set, and any that contain the target keyword are color highlighted as well.

2. You can now filter your search to find ONLY release-quality projects by simply checking a checkbox that excludes projects whose releases are still in development.

3. There are now both a 'Simple Search' and an 'Advanced Search', and you can easily toggle between the two by clicking the new hyperlink provided to the right of the search input box. The new Advanced Search feature allows you to filter results by both project maturity and license type.
These features were added in direct response to requests from the community.

4. Notice the new dropdown control that allows you to further filter your search by the following parameters:

- Relevance
- Current Release
- Downloads
- Page Views
- Project Start Date
- Ratings
- Title

5. The rating of the project's current default release now also appears in the search results (with pretty gold stars), the rating being based on cumulative voting by registered members of the CodePlex community. The release names are hyperlinked, so you can go directly to them instead of having to go to the project home page first.

6. There have been several perf improvements made to project search as well. We keep looking for ways to optimize, and this release incorporates a new algorithm which, when used with the new search filters, promises to give the user far more relevant results. Try it out and compare to the last time you used it.

We're interested in continuing to improve search moving forward, and your feedback is critical in helping us drill down on the areas where things can be made better, so please let us know about your search experiences, either in the discussion forum dedicated to improving the CodePlex site or in the CodePlex Issue Tracker. Note that if you log a work item in the Issue Tracker, it becomes an action item for the CodePlex team and will gain increased priority as more of the community votes for it over time.

One of the new features we just released on CodePlex is a web tree view browser for looking at files under source control. This makes for a much improved experience for users who view code in the projects on the site. The visual presentation of source code files uses syntax highlighting with specific color matching for keywords, so that what you see is very similar to how files look in Visual Studio under its default display settings.
Click the 'Expand all' link to open up the tree view and see the next level of nested files under the top-level namespaces: This view below is of a C# file after expanding out the tree control. You can now quickly navigate between and drill down into the exact files in a changeset you are interested in. The code is, of course, fully copy-and-pastable, and notice a new addition: when you view a source file, you now get a unique URL for that specific file, based on a unique ID for that page at the end of the URL, so you can bookmark it for easy reference when you want to return to it later on. ex. Let us know what you think of the new functionality, and please offer any ideas for improving things by submitting a work item on the CodePlex Issue Tracker:
Storage classes in C

Reading time: 25 minutes | Coding time: 5 minutes

A storage class defines the scope (visibility) and lifetime of variables and/or functions within a C program. Storage class specifiers precede the type that they modify. In other words, storage classes describe the features of a variable or function: its scope, visibility, and lifetime, which help us trace the existence of a particular variable during the runtime of a program. These specifiers are keywords that can appear next to the top-level type of a declaration; they affect the storage duration and linkage of the declared object, depending on whether it is declared at file scope or at block scope.

Simply put, in C each variable has a storage class, which decides the following things:

- scope, i.e., where the value of the variable is available inside the program.
- default initial value, i.e., what the variable's initial value is if we do not explicitly initialize it.
- lifetime, i.e., for how long the variable exists.

The following storage classes are most often used in C programming:

- Automatic variables
- External variables
- Static variables
- Register variables

Automatic variables: Variables declared inside a block or function without an explicit storage class (or with the auto keyword) are automatic; they are created when the block is entered, destroyed when it is exited, and their default initial value is garbage. Auto variables can only be accessed within the block/function in which they have been declared, not outside it (which defines their scope). Of course, they can be accessed within nested blocks of the parent block/function in which the auto variable was declared. However, they can also be accessed outside their scope using pointers, by pointing to the exact memory location where the variable resides.
Example 1:

#include <stdio.h>

int main()
{
    int a;    // auto
    char b;
    float c;

    // printing the initial default values of the automatic variables a, b, and c
    printf("%d %c %f", a, b, c);
    return 0;
}

Output 1:

garbage garbage garbage

Example 2:

#include <stdio.h>

int main()
{
    int a = 10, i;
    printf("%d ", ++a);
    {
        int a = 20;
        for (i = 0; i < 3; i++)
        {
            printf("%d ", a);    // 20 is printed 3 times, since it is the local value of a
        }
    }
    printf("%d ", a);    // 11 is printed, since the scope of a = 20 has ended
}

Output 2:

11 20 20 20 11

External or global variables (extern): The external storage class tells the compiler that a variable declared as extern is defined with external linkage elsewhere in the program. Global variables remain available throughout the program's execution. By default, the initial value of a global variable is 0 (zero). One important thing to remember about global variables is that their values can be changed by any function in the program. Variables declared as extern are not allocated any memory; the declaration only specifies that the variable is defined elsewhere in the program. We can only initialize an extern variable globally, i.e., we cannot initialize it within any block or method. An external variable can be declared many times but can be initialized only once. If a variable is declared as extern, the compiler searches for its definition elsewhere in the program (which may be extern or static); if none is found, the compiler reports an error.

Example 1:

#include <stdio.h>

int main()
{
    extern int a;
    printf("%d", a);
}

Output 1:

main.c:(.text+0x6): undefined reference to `a'
collect2: error: ld returned 1 exit status

Example 2:

#include <stdio.h>

int a;    // a is defined globally, so the extern declaration below allocates no new memory

int main()
{
    extern int a;
    printf("%d", a);
}

Output 2:

0

Static variables: The static specifier gives a variable static storage duration. A static variable is initialized only once, its default initial value is 0, and it holds its value between multiple function calls.
Example 1:

#include <stdio.h>

static char c;
static int i;
static float f;
static char s[100];

void main()
{
    // the initial default values of c, i, f, and s are printed;
    // the all-zero static char array s prints as an empty string
    printf("%d %d %f %s", c, i, f, s);
}

Output 1:

0 0 0.000000

Example 2:

#include <stdio.h>

void sum()
{
    static int a = 10;
    static int b = 24;
    printf("%d %d \n", a, b);
    a++;
    b++;
}

void main()
{
    int i;
    for (i = 0; i < 3; i++)
    {
        sum();    // the static variables hold their values between function calls
    }
}

Output 2:

10 24
11 25
12 26

Register variables: The register keyword is used for a variable that should be stored in a CPU register; however, it is the compiler's choice whether or not the variable is actually kept in a register. We cannot take the address of a register variable, i.e., we cannot use the & operator on it. We can store pointers in register variables, i.e., a register variable can hold the address of another variable. A variable cannot be declared both static and register, since we cannot use more than one storage class specifier for the same variable. NOTE: We can never get the address of such variables.

Example 1:

#include <stdio.h>

int main()
{
    // the compiler may place a in a CPU register; like an auto variable,
    // its initial value is indeterminate (it happened to print 0 here)
    register int a;
    printf("%d", a);
}

Output 1:

0

Example 2:

#include <stdio.h>

int main()
{
    register int a = 0;
    printf("%u", &a);    // compile-time error: the address of a register variable cannot be taken
}

Output 2:

main.c:5:5: error: address of register variable 'a' requested
     printf("%u", &a);
                  ^~~~~~

Which storage class should be used and when?

To improve the execution speed of the program and to use the memory occupied by variables carefully, keep the following points in mind when using storage classes:

- Use the static storage class only when you want the value of a variable to remain the same between different function calls.
- Use the register storage class only for variables that are used very often in your program. CPU registers are limited and should be used carefully.
- Use the external or global storage class only for variables that are used by almost all the functions in the program.
- If none of the above storage classes fit your purpose, use the automatic storage class.
I am new to Java. I want to find the longest run of the same character in an input character array. For example, in the character array bddfDDDffkl the longest run is DDD, and in rttttDDddjkl the longest is tttt. I use the following code to deal with this problem, but I want to improve it. For example, if there are two runs of the same length (e.g. in rtttgHHH there are two longest runs: ttt and HHH), how can I handle that? Thanks in advance. My code:

import java.util.Scanner;

public class SeqSameChar {
    public static int index;    // start index of the longest run

    public static void main(String[] args) {
        int subLength = 0;
        Scanner sc = new Scanner(System.in);
        String[] num = sc.nextLine().split(" ");
        String[] number = new String[num.length];
        for (int i = 0; i < number.length; i++) {
            number[i] = String.valueOf(num[i]);
        }
        subLength = length(number, num.length);
        System.out.println(subLength);
        for (int i = index; i < index + subLength; i++) {
            System.out.print(number[i]);
        }
    }

    // calculates the longest contiguous run of equal elements
    public static int length(String[] A, int size) {
        if (size <= 0) return 0;
        int res = 1;
        int current = 1;
        for (int i = 1; i < size; i++) {
            if (A[i].equals(A[i - 1])) {
                current++;
            } else {
                if (current > res) {
                    index = i - current;
                    res = current;
                }
                current = 1;
            }
        }
        return res;
    }
}
This algorithm will work fine for what you want to develop. Before that, let me make it clear: to detect two different characters that are repeated the same number of times, you have to run a second loop in reverse over the string to identify the second character. If the character found by the reverse loop is different from the one found by the forward loop, but its run is the same length, you report both characters; otherwise both loops found the same character, and you report just that one.

import java.util.Scanner;

public class Main {

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter String 1: ");
        String A1 = sc.nextLine();
        MaxRepeat(A1);
    }

    public static void MaxRepeat(String A) {
        int count = 1;
        int max1 = 1;
        char mostrepeated1 = ' ';
        for (int i = 0; i < A.length() - 1; i++) {
            char number = A.charAt(i);
            if (number == A.charAt(i + 1)) {
                count++;
                if (count > max1) {
                    max1 = count;
                    mostrepeated1 = number;
                }
                continue;
            }
            count = 1;
        }
        count = 1;
        int max2 = 1;
        char mostrepeated2 = ' ';
        for (int i = A.length() - 1; i > 0; i--) {
            char number = A.charAt(i);
            if (number == A.charAt(i - 1)) {
                count++;
                if (count > max2) {
                    max2 = count;
                    mostrepeated2 = number;
                }
                continue;
            }
            count = 1;
        }
        if ((max1 == max2) && (mostrepeated1 == mostrepeated2)) {
            System.out.println("Most consecutively repeated character is: " + mostrepeated1
                    + " and it is repeated " + max1 + " times.");
        } else if ((max1 == max2) && (mostrepeated1 != mostrepeated2)) {
            System.out.println("Most consecutively repeated characters are: " + mostrepeated1
                    + " and " + mostrepeated2 + " and they are repeated " + max1 + " times.");
        }
    }
}
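As a side note, in a language with run-length grouping built in, the tie handling collapses to a few lines and generalizes to any number of tied characters (not just two). Here's a quick Python sketch for comparison; the function name `longest_runs` is just for illustration:

```python
from itertools import groupby

def longest_runs(s):
    """Return (length, [chars]) for every character whose consecutive
    run length ties for the maximum in s."""
    if not s:
        return 0, []
    # groupby collapses consecutive equal characters into runs
    runs = [(ch, sum(1 for _ in grp)) for ch, grp in groupby(s)]
    best = max(length for _, length in runs)
    winners = [ch for ch, length in runs if length == best]
    return best, winners

print(longest_runs("rtttgHHH"))      # ttt and HHH tie at length 3
print(longest_runs("rttttDDddjkl"))  # tttt alone, length 4
```

Because every run is kept, three or more ties are reported just as easily as two.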
Opened 10 years ago
Closed 9 years ago

#4239 closed (duplicate)

child classes of Form only include base_fields attributes of parents

Description

Child classes of Form only include the base_fields attributes of their parent classes, not both, as they should. Here's an example:

from django import newforms as forms
from django.db import models

class ExtendedForm(forms.Form):
    base_fields = {
        'new_field': forms.CharField()
    }

class MyModel(models.Model):
    first_field = models.CharField(maxlength=16)
    second_field = models.CharField(maxlength=16)

obj = MyModel.objects.get(id=1)
MyForm = form_for_instance(obj, form=ExtendedForm)
f = MyForm()

However, when I render f, only new_field shows up. It appears that the most reasonable place to fix this at the moment is in the DeclarativeFieldsMeta.__new__ method. The method expects the fields as class attributes, when instead the fields are in the attrs dict under the 'base_fields' key. I've attached a patch.

Attachments (1)

Change History (3)

Changed 10 years ago by

comment:1 Changed 10 years ago by

comment:2 Changed 9 years ago by

Closing as a dupe of #5050 which is a bit more comprehensive

Note: See TracTickets for help on using tickets.
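The merging behavior the patch is after can be sketched without Django at all. Below is a self-contained, illustrative Python example (the `Field`, `FormMeta`, and `BaseForm` names are invented for the sketch, not Django's API) of a declarative metaclass whose `__new__` collects parents' `base_fields` and then merges in the child's, so that a subclass extends rather than replaces the inherited fields:

```python
# Minimal sketch of declarative-field collection, inspired by the ticket above.
# All names here (Field, FormMeta, BaseForm) are illustrative, not Django's API.

class Field:
    def __init__(self, label):
        self.label = label

class FormMeta(type):
    def __new__(mcs, name, bases, attrs):
        fields = {}
        # Walk parent classes first, so child declarations win on key clashes.
        for base in bases:
            fields.update(getattr(base, "base_fields", {}))
        # Merge fields declared directly on this class, including an explicit
        # 'base_fields' dict -- the case the ticket describes.
        fields.update(attrs.pop("base_fields", {}))
        for key, value in list(attrs.items()):
            if isinstance(value, Field):
                fields[key] = attrs.pop(key)
        attrs["base_fields"] = fields
        return super().__new__(mcs, name, bases, attrs)

class BaseForm(metaclass=FormMeta):
    first_field = Field("first")
    second_field = Field("second")

class ExtendedForm(BaseForm):
    base_fields = {"new_field": Field("new")}

# The parent's fields survive alongside the child's explicit base_fields
print(sorted(ExtendedForm.base_fields))
```

With this merging in place, rendering `ExtendedForm` would show all three fields instead of only `new_field`.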
Recently, industry-leading programmable communications provider Twilio partnered with RapidAPI in order to make their excellent services that much more accessible. Twilio's services allow applications to programmatically make and receive phone calls, send and receive text messages, and perform other communication services via its RESTful APIs. We'll be looking at how to interact with Twilio's SMS API via RapidAPI using Python's Requests library, and then we'll apply what we cover in the RapidAPI UI to a simple Flask app that will fire off a text message when it receives an HTTP GET request.

Today, we're going to:

- Collect relevant account details
- Purchase a number from Twilio from which to send a text message
- Send your first text message!
- Put all this new knowledge and more into practice with a simple Flask app

How To Send SMS with Python

1. Sign up for Twilio on RapidAPI

Without further ado, let's go ahead and sign up. Head over to in your browser, sign up if you haven't already, and click "Subscribe."

Note: Sending a text message incurs a cost described in credits worth $0.0001 cents. In addition to the varying cost of sending text messages, you will also need to purchase a phone number from Twilio, which will cost $1.00.

2. Get your account particulars:

Then, you can fetch your account details. Simply select the topmost GET request in the left sidebar. Then, once logged in and subscribed, you can click the blue button shown below to execute the call.

3. Buy a phone number from Twilio:

Then, you can purchase a phone number from which to send your text messages. Simply select the endpoint POST Buy a Phone Number in the sidebar navigation, enter the required parameters, and click ol' blue again up at the top.
Purchasing a number from Twilio requires three parameters:

Note: If you don't have your SID yet, or if you just don't have it handy for a test, you can simply pass an `a` in the request, and RapidAPI will autocomplete it for you when it passes the request to Twilio.

4. Send your first text:

Alright, let's kick the tires and see if it runs! We'll need to pass the following:

If all goes well, you should receive your first text message within just a few moments.

5. Build something with it:

Finally, we'll build a simple Flask app that listens for a GET request and fires off a pre-written text message when one comes in. I plan to use this to notify me when some silly long process (i.e. db ingest) completes. 🙂

First, let's create a file called `app.py`. Then we'll handle our imports. You will need to import Flask and the Python requests library. If you don't have Flask installed, don't let that deter you; pip has you covered. To install, simply run:

pip3 install flask

We can also set the values that we are likely to want to change in the future as global variables, like so:

from flask import Flask
import requests

X_RAPID_API_KEY = 'x_rapid_api_key'
TWILIO_PHONE_NUMBER = 'phone_number_purchased_from_twilio'
TO_NUMBER = 'recipients_phone_number'
SID = 'your_sid'
MESSAGE_BODY = "Is your number {}? jk haha. I'm done.".format(TO_NUMBER)

Notice my hilarious use of Python 3's string formatting. The curly braces mark the spot in the string where the `format()` value is placed when the app runs.

Next, we'll create our app proper. To do so, we'll have to instantiate the Flask class. This leaves us with the magical variable `app` that lets us do all sorts of neat stuff, not least of which is exposing an API that will listen for our requests.

app = Flask(__name__)

if __name__ == '__main__':
    app.run()

Now that that's out of the way, let's create the first route that our app will handle.
To do so, we'll need to define a route and the HTTP request methods that will be allowed at that endpoint. You can see below that I have defined a route `/notify`, which will accept an HTTP GET request. Next, we'll need to write a function that will be executed when our app receives said GET request. This is where all of that progress we made above comes back into the picture.

@app.route('/notify', methods=['GET'])
def notify():
    # Base URL rebuilt from the x-rapidapi-host header below; the
    # '/2010-04-01/Accounts/' path segment is Twilio's standard REST path.
    url = "https://twilio-sms.p.rapidapi.com/2010-04-01/Accounts/{}/Messages.json".format(SID)
    params = {"from": TWILIO_PHONE_NUMBER, "body": MESSAGE_BODY, "to": TO_NUMBER}
    headers = {
        'x-rapidapi-host': "twilio-sms.p.rapidapi.com",
        'x-rapidapi-key': X_RAPID_API_KEY,
        'content-type': "application/x-www-form-urlencoded"
    }
    response = requests.post(url, headers=headers, params=params)
    return ''

if __name__ == '__main__':
    app.run()

Notice that we've included the required params in our call to the Twilio SMS API. We are returning an empty string because it makes Flask happy, and we don't really care about what response the requester gets back. Instead, we are mainly interested in getting the text message sent to the person who needs to be notified on behalf of the requester. Once we fire off our POST request to Twilio, we've done our part. The rest is up to Twilio's magical API!

Test it:

So, my motivation behind this is at least a little selfish. I plan to run this app locally and then simply add a GET request to any old script that points at `http://0.0.0.0:5000/notify`, so that I can leave long-running scripts unattended but get a heads up when they finish. To replicate this use case, we can do the following:

- In your terminal, run `python3 app.py`.
- Create a second file called `fake_process.py`.
- Paste in the following code.
- Open a second terminal, cd into your working directory, and run `python3 fake_process.py`.
- It will count to 10,000 to replicate the delay that would come with a real process running, and then send the GET request our API is listening for.
- Receive a text message from yourself like a boss!!!

import requests

def fake_process():
    count = 0
    while count < 10000:
        count += 1
        if count == 10000:
            # URL taken from the test steps above
            response = requests.get('http://0.0.0.0:5000/notify')
            print(response.status_code)

fake_process()

And that's it! You now have a nifty notifier app thingy, and (more importantly) you're ready to go build something exceptional with your newfound knowledge. Happy building!

Next steps:

Explore all of Twilio's services that are now offered on RapidAPI's platform.

Twilio Lookup: Reduce undeliverable messages, identify local-friendly number formats, and resolve caller names with Twilio Lookup. Find phone types, carriers, and more.

Twilio Verify Phone Number: Twilio Verify API makes it simple to add phone number verification to your web application. It supports codes sent via voice and SMS.

How to Use Twilio on RapidAPI

Here's a short video showing you how to use the Twilio API (with RapidAPI) and how to integrate it into your app:

- How to use an API with Python
- How to build an API in Python (Flask)
- How to Create an API in Python (Django Framework)
- How to use the OWM API in Python
- Best SMS APIs
- How to Send SMS with PHP (Twilio or Nexmo)
- How to use the Twilio API with Ruby
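One last practical note: the RapidAPI call details from this walkthrough are easier to test if you factor them into a small pure helper that just builds the request. Here's a hedged sketch; the `/2010-04-01/Accounts/{SID}/Messages.json` path is an assumption drawn from Twilio's public REST API, and the helper name is invented for the example:

```python
# Sketch: assemble the Twilio-via-RapidAPI SMS POST as plain data, so the
# request construction can be unit-tested without sending anything.
# Assumption: the '/2010-04-01/Accounts/{SID}/Messages.json' path follows
# Twilio's public REST API; it isn't spelled out in the article above.

def build_sms_request(sid, api_key, from_number, to_number, body):
    """Return (url, headers, params) for the SMS send call."""
    url = ("https://twilio-sms.p.rapidapi.com"
           "/2010-04-01/Accounts/{}/Messages.json".format(sid))
    headers = {
        "x-rapidapi-host": "twilio-sms.p.rapidapi.com",
        "x-rapidapi-key": api_key,
        "content-type": "application/x-www-form-urlencoded",
    }
    params = {"from": from_number, "body": body, "to": to_number}
    return url, headers, params

# The actual send is then one line (with the requests library installed):
#   requests.post(url, headers=headers, params=params)
url, headers, params = build_sms_request(
    "ACxxxxxxxx", "my-rapidapi-key", "+15550001111", "+15552223333", "ping")
print(url)
```

Keeping the network call on its own line also makes it trivial to stub out when you test the Flask route.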
recvfrom()

Receive a message from the socket at a specified address

#include <sys/types.h>
#include <sys/socket.h>

ssize_t recvfrom( int s,
                  void * buff,
                  size_t len,
                  int flags,
                  struct sockaddr * from,
                  socklen_t * fromlen );

libsocket

Use the -l socket option to qcc to link against this library.

The recvfrom() routine receives a message from the socket s, whether or not it's connection-oriented. If from is nonzero and the socket is connectionless, the source address of the message is filled in. The parameter fromlen is a value-result parameter: initialize it to the size of the buffer associated with from; on return, it's modified to indicate the actual size of the stored address.

This routine returns the length of the message on successful completion. If a message is too long for the supplied buffer, buff, excess bytes may be discarded, depending on the type of socket that the message is received from -- see socket(). If no messages are available at the socket, the receive call waits for a message to arrive, unless the socket is nonblocking -- see ioctl() -- in which case recvfrom() returns -1 and sets the external variable errno.

recv(), recvmsg(), select()
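The semantics described above (blocking receive, with the source address reported back through the from parameter) are mirrored closely by BSD-style socket APIs in most languages. As a quick illustration outside QNX C, here's a hedged sketch using Python's socket module over UDP on the loopback interface, where recvfrom() returns the payload together with the sender's address:

```python
import socket

# Two UDP sockets on the loopback interface; the OS assigns free ports.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.bind(("127.0.0.1", 0))

sender.sendto(b"hello", receiver.getsockname())

# recvfrom() blocks until a datagram arrives, then returns the payload
# together with the source address -- the analogue of the C call filling
# in the 'from' / 'fromlen' arguments.
data, from_addr = receiver.recvfrom(1024)
print(data, from_addr == sender.getsockname())

sender.close()
receiver.close()
```

As in the C API, the buffer size passed to recvfrom() (1024 here) caps how much of the datagram is returned.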
Part I

Putting Your Fundraising Ducks in a Row

In this part . . .

Before you can start bringing in the big bucks to fund your organization, you need to begin at the beginning — by figuring out the lay of the land and getting a sense of what's possible in your fundraising environment. Anytime you start something new, you have to take some time to get your feet under you and become familiar with the basics of your task. And in times of economic upheaval, being able to assess your starting point — and envision your end goal — is more important than ever. This part of the book introduces you to the foundation of your fundraising efforts: your passion, your mission, your board, and your message. Use this part to put the cornerstones in place as you begin building your fundraising approach.

Chapter 1

Fundraising in a Changing Economy

In This Chapter

- Keeping your thumb on the pulse of the economy
- Discovering your opportunity during an economic downturn
- Finding success by building relationships
- Taking advantage of an upcoming economic recovery

Chances are you love a challenge. You probably also enjoy people, have a passion for your cause, have skills that help you communicate easily, are personable, and know how to focus on details while keeping in mind the big picture. In your heart of hearts, you also may have a never-say-die belief that good causes need good people to raise the funds that keep them going. Congratulations! You're in the right line of work.
Fundraising may not be the easiest job you ever do in your life, but, as you gain understanding and experience, you discover that it offers great intrinsic and lasting rewards: relationships with passionate and dedicated people; the achievement of goals for a cause you believe in; the excitement of knowing your efforts are contributing to the common good — by way of putting food on the table for those who are hungry, opening doors for those who need them, or cleaning up the environment for the next generation. All along the way, you have the chance to be a matchmaker of good works and good people — bringing together people who have a desire to help with an organization that needs them. Even with all these inherent benefits, however, now isn’t an easy time to be a fundraiser. If you’ve been in the role for any length of time, you’ve probably spent a lot of time watching with a wary eye as the economy pitches and sways. You wonder whether donors will have anything left to give; you watch your endowment drop; you cringe at the economic forecasts. After all, in almost every industry today — education, healthcare, social services, environmental protection, public service, and so on — you find giving numbers down, corporations tightening their purse strings, foundations offering fewer grants, and government dollars slowing to a trickle. Although it’s important to have your eyes open, to know what’s happening in the world, and to discern how the current economic situation is impacting your organization, not everything is doom and gloom. As you see in the world around you, times of disequilibrium find their way back to balance. As the economy shifts and topples, you get the opportunity to look more closely at your foundation, your approach, your programs, your messaging, and your people. You now have the time to give a closer look to the areas you took for granted when times were good. How has your organization changed? What are your opportunities today? 
How can you work together with your staff and board more effectively — while improving your efficiency and cutting costs at the same time — so that when the numbers begin to rise again (and they will), you’re ready to move even more effectively into a time of abundance? This chapter offers practical in-the-trenches ideas for navigating through tough times, capitalizing on your successes, and planting seeds now for some major blossoming in the months and years ahead.

Looking at the Stark Realities

Just how bad is it? According to the Center on Philanthropy at Indiana University, the Philanthropic Giving Index (PGI), which evaluates confidence in charitable giving, reached an all-time low in 2009, dropping almost 49 percent since December 2007. When the PGI was calculated in the depths of the U.S. recession, more than 93 percent of fundraisers said the economy had a negative or very negative effect on their ability to raise funds. Even though the numbers show that donors who traditionally have given less than $1,000 are giving roughly the same amount they gave in previous years, donors who traditionally have given more than $1,000 are being impacted in a big way, and the size and number of gifts they are giving have been significantly reduced. Uncertainty is in the air, and even your more affluent donors may be experiencing difficult personal economic circumstances. Giving USA 2009, a report showing the results of philanthropic giving in 2008, illustrates just how bleak the numbers really are. Compared to the philanthropic giving total for 2007 (just over $314 billion), total giving in 2008 was just over $307 billion, a drop of 5.7 percent (adjusted for inflation). Individual giving — which represents a full 75 percent of all philanthropic gifts — dropped 6.3 percent. In most industry areas, fundraisers aren’t surprised that giving is down — in some cases, with dramatic drops.
Here’s a quick tally of the drops in giving from 2008 to 2009 in a selection of industry areas:

Arts, culture, and humanities: Down 9.9 percent
Education: Down 9 percent
Environment/animals: Down 9 percent
Human services: Down 15.9 percent
Health services: Down 10 percent
International affairs: Down 3.1 percent

For religious organizations and those categorized as public-society benefit groups (for example, civil rights organizations, community-improvement groups, and disaster-relief organizations), the numbers were slightly better:

Religious organizations: Up 1.6 percent
Public-society benefit groups: Up 1.5 percent

This means that unless you work in one of the few groups with an increase in giving, you are likely feeling the pinch in tough economic times, no matter where you’re fundraising, how well-known your organization may be, or how many successful campaigns you’ve run. Yet, even though the funds available for your services seem to be stretching thin at times, the need for services isn’t lacking in the slightest — in fact, the needs are undoubtedly increasing faster than you can supply them. It’s important to balance the dismal facts and figures that accompany economic downturns with a larger sense of the ebb and flow of philanthropic work. Money may be tight right now, but the number of people who care about your cause isn’t in short supply. Being able to tell your story in a positive way that clearly shows others how they can help is an important first step toward fulfilling your mission in any economy. With a little creativity, vision, collaboration, and passion, you may find that you can easily do more with less — while serving a greater number of people than you’d previously thought possible. In the following sections, we outline some of the difficulties your organization may be facing and point you in the right direction for coping with them.
Identifying cutbacks and understanding the reasons for them

Nothing about the fundraising climate today is business as usual. Giving is down, and, although an upswing is certainly on the way, nobody knows when with any certainty. This basic fact brings us to three stark realities that every fundraiser needs to recognize in times of economic challenge:

Reality 1: Giving is down.
Reality 2: Personal income is down.
Reality 3: Government expenditures are down.

For a fundraiser, these three realities add up to the realization that unless your organization hits it big with a major event, gets a huge grant you’ve been working on for a while, or suddenly discovers a sleigh full of major donors who are intact financially and ready and willing to give, your donations are likely to be lower than forecasted during an economic slump. The main reason for this slump in donations is Reality 2, the fact that personal income is down — no matter where or how you make (or made) a living. People actively working in every industry work harder and make less because costs are elevated. People who had counted on investment income have taken a big hit and may be more concerned today about making major gifts. To top it all off, if you’re an organization that relies less on people for donations and more on government support for your programs and services, you may find that your program has been reduced, underfunded, or even cut off from your source of state or federal funding. To help you navigate these choppy waters, we’ve included information in Chapter 8 on how to connect with your donors in a variety of low-cost, high-impact ways. Chapter 11 helps you think through your approach for writing engaging, inspiring grant proposals, and all chapters in Part IV focus on specific campaigns you can use to approach your donors in different ways.
Coping with staff reductions and shrinking budgets

Watching contributions slip and investment values fall inevitably strikes a cold fear in the heart of every nonprofit leader and fundraising team. Sure, the idea of reducing programs and services is a difficult one. But the toughest calls of all — for organizations founded on the idea of people helping people — are the ones that impact the lives of the people you serve and the friends and colleagues you work with. Does a decision you feel you have to make to cope with the current economic state mean that staff won’t get a raise this year? That some open positions will go unfilled? That layoffs are on the horizon?

Ready for some good news? In his book Democracy in America, Alexis de Tocqueville wrote that people generally rise to the occasion presented to them. Most people have seen evidence of this phenomenon in their lives, whether it’s in a neighborhood rallying for a sick child, a community raising funds for an after-school program, or dramatic and personal humanitarian efforts like the many people who dropped everything to help the victims of Hurricane Katrina. On the one hand, you have to face the facts and figures and deal with the dire predictions and circumstances that accompany down economies. But on the other hand, you’ve got the history and culture of what countless people have done in America to combat bad times in the past. Somehow or another, when times are tough, individuals and organizations alike develop more compassion for those in need and for those causes that are important for society. Your organization can do the same today as you figure out how to do a better job of working with what you’ve got — doing more with less. All the hardship you’re dealing with now offers you countless lessons to learn, and it may even result in a more efficient, focused, and streamlined organization.
As you make your way over the many hurdles, you gather lots of wisdom from the experience. Plus — and this is icing on the cake — when things begin to get better, you’ll have one heck of a good story to tell. In Chapter 5, we show you how to help your board tackle the tough decisions so you know what to plan for and what to expect. You may be surprised to find a little breathing room and discover that you can chart a course that is open and honest and that builds trust throughout the entire organization, even — and perhaps especially — in the midst of trying times. You may not be able to give staff a raise this year, but you can offer other benefits to offset that loss. Depending on the way your organization is structured, you may be able to offer flex time, give an extra personal day, or change other perks that don’t relate to an increase in the bottom line.

Dealing with hard times that linger

One of the biggest challenges in economic uncertainty is that the forecasting models that worked in the past don’t seem to fit — and you don’t know exactly what to expect. What can you do to prepare and preserve your organization if digging out of the recession takes longer than expected? What if your donations are down for another 6 months — or 12, or 24? Whether recovery comes quickly or eases in slowly over time, the smart thing to do is begin where you are today with a good, clear look at the building blocks of your organization. We show you how to prepare and preserve your organization with your case statement in Chapter 4, and then we show you how to use it to build a full fundraising plan in Chapter 6.

Finding reliable sources

One of the big shocks — and even bigger lessons — of the recession of 2009 is that as a society we need to be willing to look more closely at where we place our trust. Large corporations that seemed to be operating ethically and efficiently floundered and fell when bad lending practices put everyone at risk.
As we begin to pull together our recovery, the question of where we place our trust remains. Donors will be asking you the same thing. Who’s a reliable source? On what information will you base your major financial decisions? Can your organization be counted on? Will your organization be around tomorrow? Next month? Next year? In Chapter 3, we offer a number of resources to help you steer your organization effectively using ethical principles in fundraising. In that chapter, you discover a number of organizations that are designed to uphold the best ideals in fundraising, made up of people who work to guarantee that — troubled times or not — fundraising remains a noble endeavor.

Finding Your Opportunity: A Crisis Is Too Good to Waste

Even though at times the challenges you face as a fundraiser may feel more like mountains than molehills, you cross all challenges the same way: one step at a time. Wherever your organization finds itself — in financial peril, in economic uncertainty, or with lower-than-expected donations and few prospects for grant funding — you can find a path to more solid ground. Use the measures presented in the following sections to begin to restore a sense of stability.

Revisiting your mission

Your current situation offers you opportunities to look more closely at the programs and services you offer and to get clear about your priorities and positions. You can fine-tune your case statement, revisit your mission, get your board engaged, and maybe bring in some great new volunteers. In Chapters 4 through 6, we show you how to shine a light on your programs and services and reprioritize so the programs that meet the biggest need in your community right now are the ones that get your attention. By getting clear on your mission and exploring creative and innovative ways to deliver the programs and services that meet your goals, you streamline your efforts, which helps you do more of what works — and less of what doesn’t.
Paring your services (or pairing up to provide them!)

By definition, focusing on some services means giving less attention and effort to others. How can you help your organization decide what to keep and what to drop? Chapter 5 offers an exercise to help you evaluate your options. In some cases, you may be able to fulfill your mission by keeping all your programs and services alive through a creative partnership or merger. Today organizations are teaming up like never before to share costs, reduce overhead, and get more done. Find out more about teaming up with other organizations in Chapter 6. Collaboration isn’t a new idea in nonprofit work, but many organizations still like to run their own shows when it comes to delivering programs and services. If you can team up with another organization to meet the needs of a greater number of people at a reduced cost, go for it. Besides helping you serve more people, working together with other organizations also improves your overall image in the eyes of grantors (the organizations or foundations who offer grants to organizations like you), which can significantly boost your bottom line. After all, foundations like to see cooperative efforts on the grant proposals you submit. (Find out more about grants in Chapter 11.)

Nurturing the donor-agency relationship

In down economic times, it’s more important than ever to know your donor. Research the giving patterns of your constituents, find out what life is like for your regular donor, stay in touch with your major givers, and use your messaging to reassure, inform, and invite donors to stay engaged with you. Check out Chapters 7 through 10 to discover everything you need to know about researching and cultivating your relationships with your donors. Make sure your messaging is empathetic and honest — but stay away from crying "Wolf!" too often.
However, when the need is real, dramatizing a community crisis can be an effective way to gather new donors and encourage those who have given in the past to give again. For example, a food pantry recently discovered it was going to come up short in supply of its needs, so the board enlisted the support of the local mayor. This added support resulted in lots of free media and an uptick in giving. Crying "Wolf!" may work once in a while (especially when someone in your community is truly in dire need), but overusing dramatic urgency can leave your donors underimpressed. Better to be straightforward, present the opportunities, and invite your donors to be part of your cause. You can reduce the anxiety your staff and donors may be feeling for any number of reasons simply by being candid (although being candid does take a little courage). When you use candor with kindness to address the situation directly, people feel relief to know the straight story — even if it’s not good news — and they usually feel they can trust what you’re telling them.

Turning to cost-effective processes

In the middle of a thriving economic time, people tend to build on programs and services, layer new ideas over the tried-and-true, and take on expenses in a generally optimistic frame of mind. When money gets tight, people constrict that expansiveness and begin to look more closely at what they spend and why. This kind of close evaluation is really a good thing for your organization to do periodically, whether or not it accompanies financial hardship. By scrutinizing the giving patterns and donation flow in your organization, you can get a sense of which campaigns during the year have worked in the past and make some estimation of what may work again. You can also find out more about the who, what, when, where, and how of special events and explore areas in your organization where you tend to spend a lot of your money (in four-color print pieces, for example).
After you identify what processes you’ve been using to fulfill your organization’s mission, you can look for ways to make those processes more cost effective. In Chapters 14 through 17, we show you how to build community, get your message out, increase the visibility of your brand, and build your online presence by using free or low-cost tools that reach an ever-widening audience (for literally a fraction of the cost you’re paying for print pieces). In Chapter 19, we show you how to use another low-cost tool — Webinars — to share your organization with potential donors. Webinars provide a great opportunity to reduce heavy travel expenses for everyone in your organization while making meetings more flexible and time-effective. For a low cost, you can host an online session with presentations, a whiteboard, video, and other programs, while your organization’s leaders talk by phone.

Talking Up Your Successes and Building Relationships

It’s been said that it is human nature to create — we create with our ideas, our thoughts, our words, and our actions. When you talk up your recent successes, no matter how small they might be, you inspire the people listening to think positively about your organization. When you’re the voice of stability in a time of great change, you paint the possibility of better times in your listeners’ minds. When you make a real effort to build relationships with your donors — simply to build the relationship, whether they choose to donate or not — your donors realize that you value more than their financial contribution, which makes a huge difference in the amount of goodwill they feel for your organization and perhaps for you personally. One day that goodwill may convert to a dollar amount — or it may lead other potential donors to your door. The following sections discuss some key points to keep in mind as you promote your organization.
Telling your story well

Especially when daily news is filled with negativity, people love to hear a good story. Lucky for you, your organization likely gives you lots of positive stories to tell — people who have been helped by what you do, volunteers who love their work, improvements you have made, lives you have changed. Although these good things may happen against a backdrop of short-fall funding and delayed grants, share your successes out loud with your donors, your staff, and your public. In Chapters 12 and 13, we show you how to tell the stories of the good things that are going on in your organization. We explain how to make a splash in print (through your annual report), take your stories online, and even post them in videos. Whether you’re a lone fundraiser in your organization or you work with a team or committee, prepare a weekly dashboard that shows what donations came in, what expenses went out, and what kind of progress you’re making. This simple data serves as a quick-look guide to show your progress week by week; plus, it gets your whole team engaged in the effort of recognizing and talking about your organization’s successes — big or small.

Engaging people who care

People who cared about your organization before the economic downturn still care about it today — they just may be less certain about their own abilities to give. You can help your donors recognize the many ways they can give to your cause by staying in touch with them through e-mail, phone calls, and the Web — even during economic downturns. Stick with your normal pattern of communication, whatever that may be. Consistency is important in building donor relationships, and your donors will be paying careful attention to the way you navigate through this rocky time.
You can be creative about the ways in which you invite your donors to participate in your cause by increasing the number of giving options, spotlighting volunteering, offering matched-giving opportunities, or hosting low-cost, service-related events that enable donors to socialize and find out more about your organization. Chapter 8 shows you how to get to know your donors in a way that makes helping out your organization a natural next step.

Developing relationships with key businesses and funders

Corporate philanthropy takes a hit just like everything else in tumultuous economic times, but the corporate citizen also has a vested interest in doing good — both for tax reasons and for the general goodwill and morale of employees. Continuing to develop good relationships with businesses in your area or industry is important, and continuing to talk with potential major givers is a given. Chapter 22 takes a closer look at corporate philanthropy, and Chapters 9 and 21 show you what you need to do to cultivate those important major givers and secure their gifts. Meet as usual with your potential business sponsors and constituents — communicate, communicate, communicate. If you can find that perfect match — just the right fit for a particular corporate-giving program, for example — you may be surprised to discover how many people truly want to give, even though they feel limited in how much they can give right now. Over time, a $15-a-month donation may increase to five or ten times that amount.

Doing Your Best to Bring In the Dollars

Just because the numbers are down and people are forecasting difficult days doesn’t mean you can’t keep trying. During economically tough times, it’s more important than ever to ramp up your fundraising energy, double your efforts, and stay in a positive frame of mind.
Spending just as much time and energy as you always have in cultivating relationships, telling your story, looking for good fits, and managing your expenses effectively will pay off in the long run. Here are just a few of the ways you can make your hard work pay off in tough times:

Reduce costs on your annual fund drive by replacing most or all of your print appeals with e-mail.
Improve your Web site and make it possible for donors to give online.
Participate in community events to help keep your organization in the public eye.
Be willing to partner with other organizations who have similar or complementary missions.
Say thank you — graciously (even if you’re only getting a portion of what you’d hoped for).
Come up with creative, low-cost special events that help people socialize while doing something good for your organization.
Be willing to share your challenges with others and enlist their help in reaching your goals.

Chapters 14 through 17 show you how to reduce your print costs by moving many of your fundraising efforts online. These chapters also explain how to get listed in charity portals and work with online affinity groups. Chapter 19 is all about creating attention-getting special events (while keeping an eye on your budget).

Preparing Now for When Things Start Looking Up

As you can see, many of the challenges fundraisers face right now are really opportunities in disguise. A chance to streamline programs and processes. An opportunity to focus on what really matters. An invitation to take a look at the mission statement, goals, and objectives of your organization to see whether you need to update them, renovate them, or throw them out and rewrite them completely. You can position your organization to be prepared for the good times that will return (sooner or later) by completing some of the tasks we present in the next section.
If you’re still unsure about how your organization will face the challenges ahead, the final section in this chapter may provide the encouragement you need to forge ahead.

Laying the groundwork to take advantage of an economic recovery

What can you do now to get your organization ready to make the most of the good times when they return? Here are just a few ideas:

Get your case statement in sterling shape (see Chapter 4).
Take a look at your organizational chart — is your organization set up to run efficiently?
Build your donor relationships, day by day (see Part II).
Make sure your communications are clear, honest, and mission-driven (see Part III).
Figure out how to use social networking technologies to build community and goodwill (see Chapter 14).
Consider partnering with another organization to share costs and increase visibility.
Consider adding a contingency fund in next year’s budget.
Replace much of your print expenses by using e-mail for letters and newsletters (see Chapter 15).
Expand what you offer on your Web site, and provide more information for donors and readers (see Chapter 16).
Build your brand and use it everywhere (see Chapter 17).
Invite donor feedback to welcome your donors’ engagement with your organization (and be sure to acknowledge and use the feedback you get).

As you prepare your organization for the recovery that’s sure to come, be sure to take a look at Chapter 24, which offers a glimpse into fundraising in the future. Optimism is in order — better days are ahead! In Chapter 5, we invite you to plot the history of your organization. What you likely discover as you create your organization’s timeline is that you or your predecessors have survived downturns like the one you’re in now before. Your organization has, indeed, risen to difficult times in the past and overcome them, and, ultimately, things did improve.
What’s more, the difficult times in the past and those you experience today give your organization the chance to witness the creativity, compassion, and collaboration that show up when people face challenges together.

Moving forward with hope

Before we jump headfirst into the many fundraising approaches available to you, we take a look at two stories that offer slightly different approaches to fundraising in tough times. Consider what these two organizations did to survive the challenging trials that faced them, and then read the rest of this book to find out what you can do to help your own organization make the most of fundraising in both good times and bad.

Giving from a belief in abundance

A local church decided that good work isn’t done only by churches but rather is done by lots of nonprofit organizations. As a result, the church decided to set aside 10 percent of the collections it received each Sunday and donate that amount to a local nonprofit. The immediate reaction of the leadership team and the pastor to this idea was "We’re barely making it now — this is a hand-to-mouth organization!" However, after considering the options — and trying to answer honestly the question "Do we believe we live in an abundant society?" — they went ahead with the plan. To this day, this church has never missed a payroll and continues to donate 10 percent of its weekly donations to local nonprofit organizations. The minister used to call John (one of the authors of this book) and say, "John, I had to write that check and I thought, ‘I can’t do it . . . I can’t do it!’ But I did it anyway, and the money came." No matter what your mission may be or which constituency you serve, taking a good look at where your principles align — as individuals and as an organization — gives you the opportunity to see whether your actions are in alignment with your beliefs. Do you believe your good work will ultimately be funded?
Are you generous to other organizations with missions similar to your own, in terms of sharing community interest, expressing goodwill, and being willing to collaborate when possible? Examine your own attitudes about nonprofit work and consider what kinds of belief statements your organization makes through its daily operating practices.

Creativity and the power of collaboration

When John took on the role of lieutenant governor for the state of Indiana in 1980, the nation was deep in an economic downturn. Everything was in double digits: inflation, unemployment, and interest rates. As the new administration sought creative ways to respond to the widespread challenges, it realized that helping communities develop their economic potential was a powerful way to plant good seeds during a time of struggle. In communities all over the state, John and his staff gathered the nonprofit community (including faith-based organizations), the business community, and government representatives and presented a plan for a local economic development initiative that helped communities (1) identify the community’s assets; (2) identify the people in the community who could be enlisted to help (volunteer leaders who get things done); and (3) identify funders who could contribute to the initiative. More than 165 Local Economic Development Corporations (LEDOs) were born during that difficult time, and they’re still operating today. The inspired idea helped these communities stop wringing their hands and begin to move forward by recognizing their assets, mobilizing their talent, and beginning to build for the future. When dire times call for creative measures, you can trust the fact that people rise to the creative challenge. In your organization, you may be fretting about many things, but one thing is certain: If you’re inviting and listening to the input of people who care about your cause and are dedicated to your mission, new ideas will arise.
And you will know you have a winning idea when it includes the creative and collaborative effort of many people and brings the best to the table for both your organization and the people you serve.

Weathering the storm

As you know well, the fundraising landscape today is full of sand traps and snake pits and unseen twists and turns. Eventually, the way will feel easier and the path will look clearer, but when you encounter moments of challenge, you find the resources — internal and external — that you need to accomplish your mission. People mobilize to help you. Opportunities pop up unexpectedly. New programs bring new donors. And the good work continues. It’s your job as your organization’s fundraiser to put all the ideas you get from your donors, your board, your volunteers, your staff, and yourself to work through fundraising approaches that honor your mission, respect your donor, tell your story, and invite dedication and commitment to your cause.

Chapter 2
Identifying the Fruits of Your Fundraising Passion

In This Chapter
Finding the spark that first brought you to nonprofit work
Building service with passion
Plugging in with social media

Fundraising folks have an old saying: "People don’t give to causes. People give to people with causes." This saying means, in essence, you’re one of the most important parts of the fundraising process. Your inspiration, your perspiration, your passion. So now comes the hard part: What are you passionate about? Chances are good that passion for a particular cause led you to fundraising in the first place. Oh, sure, you find professional fundraisers out in the field who are interested first and foremost in turning a fast buck. But those people are few and far between in our experience. People are drawn to organizations because they see a need — perhaps up close and personal — and because they feel compelled to do what they can to make a difference.
When you’re part of a mission that’s close to your heart, the potential for creative effort and action increases and others are inspired and attracted to what you’re doing. Not only is that spark of passion the driving force behind your desire to help, but it’s also one of the best tools you can use as you fan the embers of possibility into a full fundraising flame. When you’re trying to fundraise in uncertain economic times, plugging in to your own passion — why you do what you do — is a vitally important part of telling your organization’s story with the energy that captures people’s attention. In this chapter, we take a look at having and staying in touch with that initial spark that brought about the birth of your organization, that keeps it going, and that you caught and are helping to flame. We also show you how to fan the flame to ignite others for your cause, give you the rundown on some basic fundraising lingo, and reveal just how many nonprofits you’re competing against to raise funds (so you know just how vast the industry is). And for those of you who are just breaking into the nonprofit world, we give you some advice on maintaining the buzz. Finally, we give you a taste of how to use social media to build excitement about your organization. Sparking Fundraising Action As anyone who’s ever had any experience with trying to raise money can attest, fundraising isn’t a pretty word. In fact, it’s a tough term to confront, a kind of oh-no-here-comes-the-pitch word. Some people say that fundraising is really friend-raising, but saying that is like putting a bit of polish on an otherwise slippery surface. Nonetheless, fundraising is a necessary part of any nonprofit organization — the part that puts the hinges on the doors so people can open them and the part that keeps the blankets on the beds and the food in the pantry. 
It pays the salary for the midwife, helps the senior citizen find affordable medication, and provides the day-camp scholarships for inner-city kids. But fundraising isn’t the main objective of a nonprofit organization, although you may sometimes feel like it gets the bulk of the focus. Fundraising is the means to an end, the way to fulfill your mission, whether that mission is reaching people who are homeless or in need, healing the sick, or promoting the art or music you’re passionate about. Taking the time to think through beliefs about money in general and fundraising in particular is important because your unexplored ideas may — for better or worse — affect your overall success in your role. When you consider the biases, apologies, and reactions that you battle against — within yourself and from the general population — when you set out to raise funds, you will be better informed, have a deeper understanding of donors, and be more likely to be successful in your role. In this section, we touch on what you need to do to spread your initial spark and passion about your cause to your potential donors. (Chapter 3, which deals with the ethics of fundraising, covers how you think about what you do.) We also introduce you to the nuts and bolts of the fundraising language so that you can talk the fundraising talk to the donors you’re trying to attract. Remembering why you signed on You may be involved with fundraising today, or you may be considering a request for involvement, but, either way, the initial spark that got you interested in your cause is what we’re talking about here. Like the Olympic flame, your spark gets carried from person to person and warms the very lifeblood of your organization, whether you’re a volunteer, staff member, or board member. 
You’ve lost that loving feeling If you feel that you’ve lost your initial spark or haven’t really analyzed what brings you to the cause you’re helping to promote, ask yourself the following questions and see whether that spark reignites (and if it doesn’t, you probably need to find a different cause to get involved with): When did I first become involved with this organization? What brought me here? What did I think was important about this cause? Why did I decide to help? What was going on in my life at the time that this cause appealed to me? What is my favorite success with this organization? A client’s happiness? A problem solved? A new connection made? What do I need in order to reconnect with my passion for this cause? How can I help others see what I see? Knowing your spark story is important for several reasons: When you share it, it inspires others. When you remember it, it inspires you. When you recognize its importance, it helps you remember your priorities. When you keep it in mind, it provides a common ground where you can meet — and enlist help from — others whom you bring into your organization’s cause. Helping your donor catch the spark We talk in this chapter about the importance of knowing what brought you to nonprofit work in the first place. That initial spark shows in your eyes and your smile. It carries in your voice and makes your story ring true. It shows in the manner in which you promote your organization and in the personal pride you take in your relationship to your work and your cause. This section presents a few key
The Open Toolkit library 1.0-beta-3
Posted Tuesday, 9 March, 2010 - 13:50 by the Fiddler
Size: 472 bytes
md5_file hash: d79af23824d692c8d08d601f8ddc385b
First released: 9 March, 2010 - 13:50
Last updated: 9 March, 2010 - 14:26

[Overview]
Please report any issues you encounter at

[Known issues]
* The Mac OS X port needs more testing. If you encounter a bug, please report it at.
* OpenGL 3.1 and 3.2 functions may be missing specific tokens. Please report any such issues at
* Mono 2.2 and 2.4.0 fail to compile OpenTK due to a compiler bug (). Please compile with Mono 2.0, 2.4.2+ or use the supplied binaries instead.
* The example browser should list summaries for available samples.
* MonoDevelop fails to sign assemblies (bugs and).
* XBuild <= 2.6.1 fails to compile OpenTK. This issue has been reported upstream.

[API changes]
Please note that binary compatibility is not preserved between beta releases. If you are upgrading from OpenTK 0.9.9-0 or earlier, you can simplify the upgrade process by adding a reference to OpenTK.Compatibility.dll and OpenTK.GLControl.dll (if necessary). OpenTK.Compatibility contains code and APIs that have been deprecated and removed from the core library, and supports applications written against the Tao framework (Tao.OpenGl, Tao.OpenAl and Tao.Platform.Windows.SimpleOpenGlControl).

[1.0 beta-3]
No API changes.

[1.0 beta-2]
1. NormalPointer(..., int) and FogCoordPointer(..., int) overloads no longer specify a 'size' argument.
Solution: this is the correct signature; the 'size' argument found in previous OpenTK versions was invalid. This bug is unlikely to appear in practice.

[1.0 beta-1]
1. Compiler errors in OpenTK.Graphics.ES20 about missing "All" enums.
Solution: The ES20 namespace now contains proper, type-safe enums, similar to the OpenGL and OpenAL namespaces. Please replace all instances of the All enum with the enums suggested by your compiler.
2.
DisplayDevice.AvailableDevices and AvailableResolutions now return IList<DisplayDevice> and IList<DisplayResolution> instead of DisplayDevice[] and DisplayResolution[], respectively.
Solution: Please store the return value in an IList<T> variable, instead of T[] (where T may be DisplayDevice or DisplayResolution). This issue is unlikely to come up in practice.
3. OpenGL|ES 1.0, 1.1 and OpenCL namespaces are missing.
Solution: These namespaces were not finalized in time for OpenTK 1.0 and will be distributed as separate libraries. Please upgrade to the latest 1.x-dev release in.

[0.9.9-3]
1. OpenTK.Matrix4d no longer contains an [int, int] indexer.
Solution: there is no solution at this time. If you were using this indexer, please file an issue report at
2. OpenTK.Graphics.ES20.GetProgramInfoLog now takes a StringBuilder instead of an 'out string' parameter.
Solution: the previous signature was incorrect. Please create and pass a StringBuilder to this method.
3. A number of OpenTK.Graphics.OpenGL 3.2 methods now take strongly-typed enums instead of the 'All' enum.
Solution: please replace the All enum with the correct one, as indicated by the compiler error.
4. GameWindow OnLoad and OnUnload methods are now protected instead of public.
Solution: change the access qualifier to protected when overriding these methods.

[0.9.9-2]
1. The OpenTK.Utilities assembly no longer exists.
Solution: add a reference to OpenTK.Compatibility.
2. OpenTK.GLControl no longer exists in OpenTK.dll.
Solution: add a reference to OpenTK.GLControl.
3. OpenTK.Graphics.GL has been moved to OpenTK.Graphics.OpenGL.GL.
Solution: add a reference to OpenTK.Compatibility, or change the relevant qualifiers from OpenTK.Graphics to OpenTK.Graphics.OpenGL, or add a using directive for OpenTK.Graphics.OpenGL.
4. OpenTK.Audio.AL has been moved to OpenTK.Audio.OpenAL.AL.
Solution: add a reference to OpenTK.Compatibility, or change the relevant qualifiers from OpenTK.Audio to OpenTK.Audio.OpenAL, or add a using directive for OpenTK.Audio.OpenAL.
5.
GameWindow events are no longer raised if the relevant On* method is overridden.
Solution: ensure that you call "base.On*" when you override one of the "On*" methods (e.g. OnLoad).
6. GameWindow OnLoad and OnUnload methods are now protected instead of public.
Solution: change the access qualifier to protected for overriding methods.
7. DisplayResolution and DisplayDevice classes have been moved from OpenTK.Graphics into the root OpenTK namespace.
Solution: change the relevant using directives and qualifiers from OpenTK.Graphics to OpenTK.
8. TextPrinter is marked as deprecated.
Solution: there is no solution at this time. The TextPrinter will continue to work as expected but is in need of a dedicated maintainer.
9. Tao.OpenGl, Tao.OpenAl and Tao.Platform.Windows.SimpleOpenGlControl are marked as deprecated.
Solution: use core OpenTK classes if possible. The Tao namespaces are only offered for compatibility with existing applications; new projects should avoid using them.

[0.9.9-1]
1. The OpenTK.Math namespace no longer exists. Please replace all references with 'OpenTK'. This can be easily achieved with the following Search & Replace operations:
'using OpenTK.Math;' -> 'using OpenTK;'
'OpenTK.Math.' -> 'OpenTK.'
2. OpenCL bitfields are now mapped to 'long' instead of 'int'. Casts from [Flags] enums to 'int' may now fail. Please avoid such casts or use 'long' instead.

[0.9.9]
1. GameWindow.Resize and GameWindow.OnResize have changed signatures:
ResizeEventHandler Resize(object, ResizeEventArgs) -> EventHandler Resize(object, EventArgs)
OnResize(ResizeEventArgs) -> OnResize(EventArgs)
Please replace all instances of "ResizeEventHandler" with "EventHandler" and replace "e.Width" / "e.Height" with "this.Width" and "this.Height".
2. All GameWindow.On* functions are now 'protected' instead of 'public'. Please mark all relevant overrides as 'protected'.
3. Glu is now marked as deprecated. Please use OpenTK instead.
4.
OpenTK.Input.[Keyboard|Mouse|Joystick]Device are marked as obsolete. Please continue using these classes normally. A future update will provide a much more versatile input API.

[0.9.8-1]
1. Parameters of OpenTK.Graphics.GL.GetShaderSource have been changed: you should now pass a single StringBuilder instead of a StringBuilder[] array.
2. Parameters of OpenTK.Graphics.GL.GetUniformIndices have been changed: you should now pass a string[] array instead of a single string.
3. Parameters of OpenTK.Graphics.GL.TransformFeedbackVaryings have been changed: you should now pass a string[] array instead of a single string.

[0.9.8-1]
This release renames GL.GetBoolea to the correct GL.GetBoolean.

[0.9.8]
1. OpenTK 0.9.8 replaces several instances of the "All" and "Version*" enums with strongly-typed equivalents. This is a breaking change. If you are affected by this change, replace these enums with the ones suggested by your compiler. The 'v' suffix has also been removed from several OpenTK.Graphics.GL functions. Please search and replace any of the following functions (list non-inclusive):
Uniform1v -> Uniform1
Materialv -> Material
Lightv -> Light
2. Several instances of the "Version12" enum have been replaced with strongly-typed equivalents. This is a breaking change that affects programs using the imaging subset of OpenGL 1.2. If you are affected by this change, please replace all relevant instances of "Version12" with the correct enum, as indicated by your compiler.
3. OpenTK 0.9.8 removes several OpenGL overloads that take arrays of a single item. This is a breaking change. If you are affected by this change, please use the 'ref' or 'out' overload for the relevant function.

[0.9.7]
OpenTK 0.9.7 replaces several instances of the "All" and "Version30" enums with strongly-typed equivalents. This is a breaking change that potentially affects programs using OpenGL 3.0 functionality.
If you are affected by this change, please replace the relevant instances of "All" or "Version30" with the correct enum, as reported by your IDE. OpenTK 0.9.7 also fixes the naming of several core and extension functions ending in "Instanced", "Indexed" or "Varyings". If you are affected by this change, please add the missing 'd' or 's' to the relevant functions.
Re: Select Statement with 4 tables help required please
From: David Browne (meat_at_hotmail.com)
Date: 08/12/04

- Next message: Peter The Spate: "Re: Opinions on procedural language being introduced into SQL Server 2005"
- Previous message: Peter The Spate: "Re: TRANSACTION question"
- In reply to: Joe Celko: "Re: Select Statement with 4 tables help required please"
- Messages sorted by: [ date ] [ thread ]

Date: Thu, 12 Aug 2004 10:14:35 -0500

"Joe Celko" <jcelko212@earthlink.net> wrote in message news:O0W9tJAgEHA.3928@TK2MSFTNGP11.phx.gbl...
> >> No. Singular names for relations is not only common, but correct. <<
>
> Correct based on what ANSI or ISO standard? It is in violation of
> ISO-11179 and common sense.
>
> Quick test #1: Look at the data element name "employee" and tell me on
> sight if it is a table or a column with your naming conventions. It is
> legal SQL syntax to use the same name for both a table and a scalar.

That's a good thing. It would be a column if it was the domain participating in another relation, and a table name if it was an enumeration of that domain.

> The worst thing that OO people trying to learn SQL do is use the same
> name for a table and a column. It is legal SQL syntax, but a mess to
> read.

"A mess to read" is not an argument. I say it's not a mess to read. Complete impasse.

> Quick test #2: Close your eyes and let someone say "employee" and a
> picture of Dilbert pops into your head -- a single incidence of a
> concrete entity. Close your eyes and let someone say "Personnel" and
> you get an abstract image of a class in a business organization. Big
> conceptual difference.

And where should we enumerate all the employees? In the employee table.

> >> Huh? He said "IPCRecommendation.RecommendationID = Recommendation.ID" <<
>
> Those are full table names and a bitch to read.

Again. Not a bitch to read.
> An alias would look
> like this: "R1.recommendation_id = IPC.recommendation_id" and the same
> data element name would be used in the entire schema. Why? Because it
> is the same data element in the entire schema.

Why not? Because it gives no information about the relationship. IPCRecommendation.RecommendationID = Recommendation.ID makes it clear that this is a foreign key relationship and that recommendation is the primary key table.

> >> A table named, say appointment, might have a column, date. The proper
> name for the column is "DATE" not "APPOINTMENT_DATE", which would be
> redundant. <<
>
> Wrong, totally, horribly wrong! First, never use a reserved word for a
> column name. DATE is also too vague even if you changed it to "date";
> it begs the question "date of what?" appointment? birth? hire?
> termination? death?

Obviously the date of the appointment. If an appointment had more than one date, then another name would be required.

> This is the old joke:
>
> "When I was a kid, we had three cats."
> "What were their names?"
> "Cat, cat and cat."
> "That sounds screwed up; how did you tell them apart?"
> "Who cares? They did not come when you called them anyway!"
>
> Again, read ISO-11179. Your data will not come when it is called either.

> >> But then if that column is referenced by another table it would be
> referenced as "APPOINTMENT_DATE". <<
>
> And if it appears in ten tables, you blindly construct ten new names and
> screw your data dictionary beyond repair. One hundred tables, one
> hundred new names! As you walk from room to room in your house, do you
> also change your name, based on your physical location?

Nonsense. Look, I have a livingroom television and a bedroom television. If I am in the living room I don't say "honey, turn off the livingroom television". In the livingroom it's just the television. But as we pull out of the driveway I say "honey, I left the livingroom television on".
The name is qualified when referred to from outside of its local namespace, unqualified from within.

> >> Similarly a table called recommendation should not have a primary key
> of recommendation_id. It should be just id. <<
>
> And when I look up this mystic, magic, all-purpose "id" in the data
> dictionary, I can quickly see that it appears in hundreds of tables and
> views, and that it has tens of various datatypes. Why would you find
> this desirable?

Why would you not? What's the problem? It's obviously an artificial key and it's obvious what table it belongs to.

> >> It is a basic rule of naming that you shouldn't repeat higher levels
> of the namespace in lower level names without a very good reason.
> Column names are scoped inside table names, and so shouldn't repeat the
> name of the table. <<
>
> NO, the data elements exist at the schema level in an RDBMS!! I put
> tables together from attributes (with the help of a data dictionary) to
> model entities in SQL.

No you don't, and no one else does either. The schema is made up of tables which are in turn made up of columns. No one ever has to go directly from the schema to the column.

> You can do joins because of this ability to build a table from those
> data elements at the schema level.
>
> In COBOL and traditional file systems, I have separate, disjoint files
> as my unit of work. That is where we get the rules about not repeating
> names in the unbreakable hierarchy of records and fields. I had to
> write them out with full qualification, just like you are still doing in
> SQL. This is an RDBMS and not a COBOL file system naming hierarchy, and
> records and fields are nothing like rows and columns.

Column names are scoped inside table names. Table names are scoped inside schema names, and schema names are scoped inside database names. Nothing like COBOL. Not a hangover from file processing.
> I am trapped in an extended stay hotel for another week while I wait for
> the movers,

For that, I am truly sorry.

> so I started to pool notes for a book on SQL programming
> style together. I spent a few years doing this kind of research for
> AIRMICS in the 1980's and I am amazed as to how much of that work is
> ignored today.

Yes, I too am appalled at the bad and sloppy SQL style I see every day.

David
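The two conventions under debate can be run side by side; here is a small sketch with hypothetical tables (sqlite3 in memory) showing the same join written both ways:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Browne-style: the key of Recommendation is just "ID"; it is qualified
# as RecommendationID only when referenced from another table.
cur.executescript("""
CREATE TABLE Recommendation (ID INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE IPCRecommendation (
    IPCID INTEGER,
    RecommendationID INTEGER REFERENCES Recommendation(ID)
);
INSERT INTO Recommendation VALUES (1, 'Normalize first');
INSERT INTO IPCRecommendation VALUES (10, 1);
""")
browne = cur.execute("""
    SELECT r.Title
    FROM IPCRecommendation AS ipc
    JOIN Recommendation AS r ON ipc.RecommendationID = r.ID
""").fetchall()

# Celko-style: the same data element name, recommendation_id, is used
# in every table where that element appears.
cur.executescript("""
CREATE TABLE Recommendations (recommendation_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE IPC_Recommendations (
    ipc_id INTEGER,
    recommendation_id INTEGER REFERENCES Recommendations(recommendation_id)
);
INSERT INTO Recommendations VALUES (1, 'Normalize first');
INSERT INTO IPC_Recommendations VALUES (10, 1);
""")
celko = cur.execute("""
    SELECT r1.title
    FROM IPC_Recommendations AS ipc
    JOIN Recommendations AS r1 ON ipc.recommendation_id = r1.recommendation_id
""").fetchall()

print(browne, celko)  # both: [('Normalize first',)]
```

Both queries return the same row; the disagreement is purely about which spelling reads better and which survives better in a data dictionary.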
#include <game.h>

List of all members.

- String version of the use join method.
- String version of the game goal.
- Send the game data.
- Write data that is needed for the server.
- Number of this game.
- Last changes to this game.
- Name of this game.
- General promotion of this game.
- Name of the user that created this game.
- Selection in the member join choices.
- Selection in the game goal choices.
- Name of the map of this game.
- The identification of the game server.
- Session of the game on the central server.
- Current state of the game.
- List of parties in this game.
Cloud computing might be one of the best ways to host applications. However, it takes dedicated effort to manage cloud applications. Also, it is now common for companies to rely on multiple cloud computing solutions. This brings us to the concept of the hybrid cloud, where it is common to use private, public and other third-party services. Hybrid clouds also force companies to manage different monitoring services, making it hard for them to have all the information under one roof. This gives rise to the need for a tool that can help manage multiple solutions in one single place. Grafana is a tool that lets you do just that. Yup, you can manage multiple cloud services through one simple dashboard. In this article, we will go through a simple tutorial on how to monitor AWS CloudWatch with Grafana. Before we dive deep into the actual tutorial, let's learn more about the platforms that we are using.

AWS CloudWatch: AWS CloudWatch is Amazon's monitoring service for cloud resources. It is used to collect and display metrics and other useful information. Administrators can use AWS CloudWatch to know what needs to be improved or get warned when something breaks. It works with all the AWS resources.

Grafana: Grafana is a popular open source dashboard tool. It works out of the box with different data sources, including Graphite, OpenTSDB, and InfluxDB. The user can easily edit and modify the dashboard according to their requirements. It also enables them to monitor multiple solutions from a single place. It uses different types of graphs, charts and other tools to give users a bird's-eye view.

Monitor AWS CloudWatch With Grafana

Grafana comes with an easy-to-use pluggable architecture. This means that you can create a dashboard with the widgets of your liking. It also comes with plugins and widgets. Monitoring AWS CloudWatch from Grafana is easy. All you need to do is follow the tutorial. So, without any delay, let's get started.
IAM Role Setup

As we already know, AWS can be accessed using IAM roles. An IAM role can be assumed by an entity or a third-party application. Create a role with the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics"
      ],
      "Resource": "*"
    }
  ]
}

Now that the IAM role is created, it is time to start an EC2 instance. You can start an EC2 instance from your AWS dashboard as shown in the image below. You need to use the user-data script below when you launch the EC2 instance; it has all the allowances that are required to run a successful Grafana server. You also need to make sure that the role you created earlier is associated with the instance.

#!/bin/sh
yum install -y
service grafana-server start
/sbin/chkconfig --add grafana-server

Creating a Grafana Account

Now that you have created an IAM role and an EC2 instance, the next step is to create your Grafana account. You can go to their official website and register a free account. As it is open source, you can either download the client or create a free instance on their website as shown in the image below. To make your instance work as intended, you need to open port 3000 for inbound traffic. This can easily be done through the instance's security group.

Connecting to your EC2 Instance using Grafana

With everything ready, it is now time to open your browser and open up the instance. To do so, type in the following in your browser. If you did everything correctly until now, you will be redirected to the Grafana login page as shown below. To log in, use "admin" as the username and "admin" as the password. Once you log in, you will be greeted by the beautiful interface of Grafana. You can access the different options from the side menu as shown in the image below. As Grafana comes with in-built support for CloudWatch, you don't have to install any additional plugin.
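Besides the UI steps that follow, Grafana also exposes an HTTP API for registering data sources (POST /api/datasources). Below is a minimal sketch of the payload for the built-in CloudWatch plugin; the name and region are illustrative, and authType "default" tells the plugin to use the instance's IAM role rather than stored keys:

```python
import json

# Illustrative payload for Grafana's data source API; "cloudwatch" is the
# built-in plugin id, and "proxy" access makes the Grafana server call AWS.
payload = {
    "name": "CloudWatch",               # display name (assumption)
    "type": "cloudwatch",
    "access": "proxy",
    "jsonData": {
        "authType": "default",          # use the EC2 instance's IAM role
        "defaultRegion": "us-east-1",   # placeholder region
    },
}
body = json.dumps(payload, indent=2)
print(body)

# With the server from this tutorial, the request could be sent as:
#   curl -u admin:admin -H 'Content-Type: application/json' \
#        -X POST http://<instance>:3000/api/datasources -d "$body"
```

This is handy when you provision several Grafana servers and don't want to click through the form on each one.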
Click on the gear icon in the left menu, and then go to Data Sources. Once you are there, click on "Add Data Source." Now, you need to set the type to CloudWatch from the drop-down menu. It will bring you to a new form that needs to be filled in. Type in the name and also select the default region. Once done, click on Save & Test. If it is working, it will show the message, "data source is working." Congratulations, you have successfully connected your EC2 instance to CloudWatch with Grafana.

Alternative approach: As we have used an IAM role, we don't have to fill in the credentials profile name. However, you can also go forward and connect without an IAM role by creating a simple credentials file that contains the AWS Secret and Access Keys. The file needs to be created under ~/.aws/credentials.

Creating a new Dashboard

With the data source connected, it is now time to create a dashboard. To do so, just click on the "create dashboard" option in the side menu. Add a new graph from the options available and now select AWS/EC2 as the required namespace. The other two things that you need to set are CPUUtilization as the metric and the instance id. That's it! You have successfully created a graph and started monitoring your EC2 instance variables. You can create more graphs and widgets on the dashboard according to your own requirements.

Conclusion:

Working with Grafana will surely give you an edge, as you can monitor multiple instances, both private and public. The CloudWatch APIs are used to communicate between AWS and Grafana. The good news is that Grafana comes with in-built support for AWS CloudWatch, and it won't take much of your time to set it up. So, what do you think about the tutorial? Are you all ready to build dynamic and interactive dashboards? Comment below and let us know.
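As a final check outside Grafana, the same CPUUtilization query the dashboard issues can be reproduced directly against the CloudWatch API. A sketch follows; the instance id is a placeholder, and the boto3 call itself (shown commented) requires AWS credentials:

```python
import datetime

# Parameters mirroring the dashboard graph: AWS/EC2 namespace,
# CPUUtilization metric, filtered to one (placeholder) instance id.
params = {
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "StartTime": datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    "EndTime": datetime.datetime.utcnow(),
    "Period": 300,               # 5-minute datapoints
    "Statistics": ["Average"],
}
print(params["Namespace"], params["MetricName"])

# With credentials configured, the call itself would be:
#   import boto3
#   cw = boto3.client("cloudwatch", region_name="us-east-1")
#   resp = cw.get_metric_statistics(**params)
#   print(resp["Datapoints"])
```

If this returns datapoints but the Grafana graph stays empty, the problem is in the data source configuration rather than in CloudWatch itself.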
Distributing Human Resources among Software Development Projects

to be assigned to development projects. The estimation calculates the optimal distribution from the economical point of view (i.e.: that which costs less). To accomplish this, we built an economical model to distribute human resources among projects in one organisation. The equations and algorithms used to do this are presented in the paper. We also briefly present a tool which does these estimations.

1. Introduction

In the last few years, an increasing demand for personnel qualified in Information Technologies has been observed, which greatly exceeds the offer: in 1998 there were in Europe 500,000 free positions in IT, with an expected increase of up to 2.4 million people for the year 2004 [1]. Some reports from the European Commission also confirm this tendency [2][3], which is reviewed by daily newspapers almost every month. The situation in the U.S.A. is quite similar: the yearly number of visas for foreigners qualified in Information Technologies has been increased to the point of 300,000 ones, although big software organizations have solicited a greater number of licenses.

With this situation, software organizations must do an adequate planning of their human resources among the different projects that they are carrying out. An additional difficulty is to fix the number of development projects to be accepted by a software organization, when many times it is known that the organization has not (and will not have) people enough to execute all the projects in time and budget. However, the rejection of a software project will probably cause the loss of that customer for the organization, and maybe a negative impact on other potential customers. Therefore, it is important to accept projects, but it is also a basic activity to distribute the human resources in an adequate way among all of them, and this is the main goal of this paper.

We propose to make an economical model of the portfolio of software development projects, in order to estimate the optimal quantity of human resources to be devoted to each project, during every day of the considered period. We have successfully tested the method with several projects which use different life cycle models, although we have reasons to believe that it can be easily adaptable to still-non-studied life cycles. This method is a very advanced evolution of the work presented last year in this same forum [4].

The paper is organized as follows: Section 2 explains the model we use to represent a portfolio of development projects from an economical point of view, which includes several equations. In Section 3, an algorithm to calculate the distribution is presented, as well as CRED, a tool we have developed to do estimations. Section 4 exposes our conclusions and future and immediate work lines.

This work is part of the MPM and MATIS projects. MPM is developed with Atos ODS, S.A. and partially supported by the Ministerio de Ciencia y Tecnología, Programa de Tecnologías de la Información y las Comunicaciones (IT ); MATIS is partially supported by the European Union and CICYT (D97-608TIC).

2. Economical model of a portfolio of development projects

One of the primary goals of the software organization is to obtain economical benefits from the developments. Independently of the life cycle selected for a given project, this one can be considered as the union of several subprojects. Sometimes, the consecution of a subproject will be a must to start another one (in the waterfall life cycle model, the requirement analysis comes before the design phase), but other times, the life cycle selected allows to develop in parallel different parts of the system. Depending on the contract between the customer and the supplier organizations, maybe some of those subprojects must be delivered according to a previously signed scheduling. For example, a system developed using the Unified Process can be delivered in several releases, each one with some increment of functionality with respect to the previous one. These successive releases constitute partial results whose date of delivery, prize and possible sanctions by delay may be covenanted in the development contract.

2.1. Maximization equation

The software organization must take all these factors into account to do an adequate resource assignment to each project and subproject, with the primary goal of maximizing its economical benefits. This can be represented by the following equation:

Max B = I − C = Σ_{i=1..J} (I_i − C_i)   (Eq. 1)

In Eq. 1, B represents the benefits produced by the development projects in the portfolio; I and C respectively are the total incomes and costs, whereas I_i and C_i represent the incomes and costs of the i-th project. As every project is an aggregation of subprojects, Eq. 1 can be rewritten in this way:

Max B = Σ_{i=1..J} Σ_{j=1..Ji} (I_ij − C_ij)   (Eq. 2)

In Eq. 2, Ji is the number of subprojects of the i-th project, and I_ij and C_ij are, respectively, the incomes and costs of the j-th subproject of the i-th project (hereinafter, the ij subproject). Depending both on the contract and on the selected life cycle, sometimes I_ij will be zero, since not all subprojects will be delivered to the customer and they will not produce any income. On the other hand, I_ij is typically covenanted in the development contract, since the customer organization needs to know the project budget before its assignment to the development organization.

Costs of each subproject are different: each one takes more or less time than another one, and the people in charge of them have different levels and salaries (analysts, programmers, test engineers...). Another influencing factor on the costs of a subproject (and, therefore, on the costs of the complete project) is the existence of delays in the date of delivery and of sanctions by delays. With this assumption, costs of the ij subproject can be represented in this manner:

C_ij = R_ij + Q_ij   (Eq. 3)

R_ij are the costs of the resources devoted to the subproject, whereas Q_ij is the quantity to be paid by sanctions due to delays in the ij subproject. We suppose that a subproject needs only one type of resource (analysts, for example): otherwise, it could be divided into more subprojects. With this consideration, R_ij can be expressed in this way:

R_ij = Σ_{t=1..T} (T_ijt · h_t)   (Eq. 4)

In Eq. 4, T_ijt is the number of hours of the t-th resource needed to execute the ij subproject; T is the number of different resources (analysts, programmers, test engineers, etc.); h_t represents the cost of one hour of resource of the type t.

The second variable of Eq. 3, Q_ij, depends on the scheduled date of delivery (or scheduled duration) for the ij subproject and on the real date (or real duration). At a first sight, Q_ij can be expressed in this way:

Q_ij = S_ij · (E_ij − P_ij)   (Eq. 5)

In Eq. 5, S_ij is the covenanted sanction for the ij subproject (obviously, if such subproject will not be delivered, it will not have any sanction, although it could have some influence on next subprojects). E_ij is the real number of days used to finish the ij subproject, whereas P_ij is the scheduled duration, also in days, which depends on the total number of hours of effort required by the ij subproject (Σ_{t=1..T} T_ijt). P_ij was estimated by the software organization using some estimation method and is covenanted in the contract. The value of E_ij depends on the number of hours devoted to the subproject. In a given subproject, we will reach the value of E_ij when the sum of the hours devoted each day to the ij subproject will be equal to the number of required hours of the t-th resource, T_ijt. In other words:

E_ij = min { m / Σ_{k=1..m} e_ijtk = T_ijt, ∀t }   (Eq. 6)

In Eq. 6, e_ijtk is the number of hours of the t-th resource devoted to the ij subproject in the k-th day.

At this point, we have characterized all the variables we need to rewrite Eq. 1:

Max B = Σ_{i=1..J} Σ_{j=1..Ji} (I_ij − C_ij) = Σ_{i=1..J} Σ_{j=1..Ji} [I_ij − R_ij − Q_ij] = Σ_{i=1..J} Σ_{j=1..Ji} [ I_ij − Σ_{t=1..T} (T_ijt · h_t) − S_ij · (E_ij − P_ij) ]   (Eq. 7)

In Eq. 7, the only unknown quantity is E_ij, since the rest have previously assigned values.

2.2. Constraints

To estimate the optimal distribution of resources, we must now identify the possible constraints to be applied to Eq. 7.

The first constraint represents the fact that the sum of the hours of a given resource devoted during a concrete day to all subprojects cannot be greater than the available number of hours of that kind of resource in that day (A_t1, A_t2, ...):

e_11t1 + e_12t1 + ... ≤ A_t1, ∀t ∈ [1, T]
e_11t2 + e_12t2 + ... ≤ A_t2, ∀t ∈ [1, T]
...

The subindex t is used to group resources by its type (i.e.: analysts, programmers...). Eq. 8 summarizes the previous equations:

Σ_{i=1..J} Σ_{j=1..Ji} e_ijtk ≤ A_tk, ∀t, ∀k   (Eq. 8)

The next constraint means that the time devoted to a subproject must be equal to the time required by it:

Σ_k e_ijtk = T_ijt, ∀i, ∀j, ∀t   (Eq. 9)

The following simple constraints determine the minimal values of the variables that intervene in Eq. 7:

I_ij ≥ 0; h_t ≥ 0; S_ij ≥ 0; T_ijt ≥ 0; e_ijtk ≥ 0; A_tk ≥ 0, ∀i, ∀j, ∀t, ∀k   (Eq. 10)

2.3. Complete model (for reference)

With all the equations stated in the previous subsections, the portfolio of projects can be modelled with the following equation:

Max B = Σ_{i=1..J} Σ_{j=1..Ji} [ I_ij − Σ_{t=1..T} (T_ijt · h_t) − S_ij · (E_ij − P_ij) ]

subject to:
Σ_{i=1..J} Σ_{j=1..Ji} e_ijtk ≤ A_tk, ∀t, ∀k
Σ_k e_ijtk = T_ijt, ∀i, ∀j, ∀t
I_ij ≥ 0; h_t ≥ 0; S_ij ≥ 0; T_ijt ≥ 0; e_ijtk ≥ 0; A_tk ≥ 0, ∀i, ∀j, ∀t, ∀k

The meaning of each variable in the left side is the following:
J = number of projects in the portfolio
Ji = number of subprojects in the i-th project
I_ij = incomes by the ij subproject
T = number of different types of resources
T_ijt = number of hours required of resource type t by the ij subproject
h_t = cost of each hour of resource of type t devoted to the ij subproject
S_ij = sanction to be paid by each day of delay in the delivery of the ij subproject
E_ij = real number of days used to finish the ij subproject
P_ij = the scheduled duration (in days) of the ij subproject
e_ijtk = resources devoted the k-th day to the ij subproject
A_tk = available resources of the type t in the k-th day

It is important to remember the following definition of E_ij (Eq. 6): E_ij = min { m / Σ_{k=1..m} e_ijtk = T_ijt, ∀t }. This equation is the basis to estimate the optimal distribution of resources.

3. Resources estimation

All we need to estimate the resources to be assigned to every development project in the portfolio is to resolve the equation shown in Section 2.3. As not all the constraints are linear (see Eq. 6), the simplex method cannot be used. Some of the candidate methods to resolve this kind of equations are Genetic algorithms or Simulated annealing. However, as the number of unknown values is very little, we can find a very good approximation to the optimal solution testing all the possible combinations.

3.1.
Algoryhm A coarse version of he algorihm could be he following: max =-MAXDOUBLE for k=0 o maxduraion for i= o do for = o for e=0 o A[,k] e[i,,k,]=e if Benefi()>max hen max=benefi() saveparameers(i,,k,e) endif end end end end igure. irs version of he algoryhm. Obviously, he code in igure can be very opimized hrough he applicaion of he consrains and some oher rules. or example: We will assign resources (assignmen of values o he e arrayinigure) when: o o The i proec needs hem In he k-h day, here are available resources of he ype which is being esed. We will calculae he benefi (Eq. 7) when he curren combinaion which is being esed implies he finalizaion of all he proecs in he porfolio. The e array can be a vecor (wih he consequen memory saving): only some ransformaion rules are required CRED: a ool for esimaing resources disribuion In order o do he auomaic esimaion of resources, we have developed CRED, a ool which helps in he calculus of he resources disribuion. I implemens an opimised version of he algorihm previous. igure 2 shows one he CRED s screens, in he momen of doing an esimaion. igure 2. CRED during he calculus of a disribuion. 7 CRED allows he racking of he proecs, o change he assignaion, ec. In his manner, several kind of repors, saisics and graphics can be obained, in order o analyze effors, resources, ec., and compare hem agains a baseline. igure 3 shows he screen used o change he emporary availabiliy of resources. igure 3. Changing resources availabiliy. igure 4 shows a view of he class diagrams used o build he ool: resources needed by a subproec are deermined by he associaion wih he Resourceecessiy class. Porfolio 0..* -mproecs Proec -m Subproecs..* Subproec -mrequired Resourceecessiy -mresource Resource..* igure 4. Concepual diagram Conex of CRED CRED is now being adaped o allow is full inegraion in MATIS [5], an inegral environmen for managing mainenance proecs. MATIS uses XMI o exchange informaion among he ools ha compose i. 
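To make the exhaustive search of Section 3.1 concrete, here is a small Python sketch on a toy instance with one subproject and one resource type. All numbers are invented for the example, and `benefit` is a hypothetical helper following Eqs. 6 and 7; this is an illustration, not the authors' CRED tool:

```python
from itertools import product

# Toy instance of the portfolio model (one subproject, one resource
# type) -- illustrative numbers, not taken from the paper.
I = 1000.0           # covenanted income for the subproject
h = 30.0             # cost of one resource-hour
T = 12               # hours of the resource the subproject requires
P = 3                # scheduled duration in days
S = 50.0             # sanction per day of delay
A = [4, 4, 4, 4, 4]  # available hours of the resource on each day

def benefit(e):
    """Benefit of a daily assignment e[k], following Eq. 7:
    income minus resource cost minus sanction for days beyond P."""
    spent, E = 0, None
    for k, hours in enumerate(e):
        spent += hours
        if spent >= T:   # Eq. 6: first day the required hours are reached
            E = k + 1
            break
    if E is None:
        return None      # subproject never finishes: infeasible assignment
    return I - T * h - S * max(0, E - P)

# Exhaustive search over all assignments (the "coarse" idea of Figure 1)
best = max(
    b for e in product(*(range(a + 1) for a in A))
    if (b := benefit(e)) is not None
)
print(best)  # 1000 - 12*30 = 640 when finishing within the 3 scheduled days
```

Even this tiny instance enumerates 5^5 assignments, which shows why the paper's pruning rules (assign only when needed, evaluate only complete schedules) matter as the portfolio grows.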
In this context, CRED should be capable of sending its information to MANTIS in a standardized format based on XML, which is one of our current works.

4. Conclusions and future work

This paper has presented a method and a tool to estimate the distribution of resources to be assigned to development projects. The estimation attempts to assure an optimal distribution from the economical point of view. Both the method and the tool must be modified in order to take into account:

- Dependencies (i.e.: a subproject cannot begin before the end of another one)
- Indirect costs (i.e.: the organization cannot accept a project because it will not have available resources)

5. References

[1] Bourke, T.M. (1999). Seven major ICT companies join the European Commission to work towards closing the skills gap in Europe (Press Release).
[2] European Commission (1999). The competitiveness of European enterprises in the face of globalisation. Available at: http://europa.eu.int/comm/research/pdf/com98-78en.pdf
[3] European Commission (2000). Employment in Europe. Available at: http://europa.eu.int/comm/employment_social/empl&esf/docs/empleurope2000_en.pdf
[4] Polo, M., Piattini, M. and Ruiz, F. (2000). Planning the non-planneable maintenance. Project Control, The Human Factor: Proceedings of the ESCOM-SCOPE 2000 Combined Conference. Munich, Germany.
[5] Ruiz, F., Piattini, M. and Polo, M. (2001). An Integrated Environment for Managing Software Maintenance Projects. In van Bon (Ed.): World Class IT Service Management Guide, 2nd edition. Addison Wesley.
How to optimize your Jupyter Notebook

Jupyter Notebook is nowadays probably the most used environment for solving Machine Learning/Data Science tasks in Python. Jupyter Notebook is a client-server application used for running notebook documents in the browser. Notebook documents are documents able to contain both code and rich text elements such as paragraphs, equations, etc. In this article, I will walk you through some simple tricks on how to improve your experience with Jupyter Notebook. We will start from useful shortcuts and we will end up adding themes, automatically generated tables of contents, etc.

Shortcuts can be really useful to speed up writing your code; I will now walk you through some of the shortcuts I found most useful in Jupyter. There are two possible ways to interact with Jupyter Notebook: Command Mode and Edit Mode. Some shortcuts work only in one mode or the other, while others are common to both. To enter Jupyter command mode, press Esc; to enter Jupyter edit mode, press Enter.

Not many users are aware of this, but it is possible to run shell commands in a Jupyter notebook cell by adding an exclamation mark at the beginning of the cell. For example, running a cell with !ls will return all the items in the current working directory, and running a cell with !pwd will instead print out the current directory file-path. This same trick can also be applied to install Python packages in Jupyter notebook:

!pip install numpy

If you are interested in changing how your Jupyter notebook looks, it is possible to install a package with a collection of different themes. The default Jupyter theme looks like the one in Figure 1; in Figure 2 you will see how we will be able to personalise its appearance.

Figure 1: Default Jupyter Notebook Theme

We can install our package directly in the notebook using the trick I showed you in the previous section:

!pip install jupyterthemes

We can then run the following command to list the names of all the available themes:

!jt -l

# Cell output:
# Available Themes:
#    chesterish
#    grade3
#    gruvboxd
#    gruvboxl
#    monokai
#    oceans16
#    onedork
#    solarizedd
#    solarizedl

Finally, we can choose a theme using the following command (in this example I decided to use the solarizedl theme):

!jt -t solarizedl

Once we have run this command and refreshed the page, our notebook should look like the one in Figure 2.

Figure 2: solarizedl notebook Theme

In case you wish at any time to come back to the original Jupyter notebook theme, you can just run the following command and refresh your page:

!jt -r

Notebook extensions can be used to enhance the user experience, offering a wide variety of personalization techniques. In this example, I will be using the nbextensions library in order to install all the necessary widgets (this time I suggest you first install the packages through the terminal and then open the Jupyter notebook). This library makes use of different Javascript modules in order to enrich the notebook frontend.

!pip install jupyter_contrib_nbextensions
!jupyter contrib nbextension install --system

Once nbextensions is installed you will notice that there is an extra tab on your Jupyter notebook homepage (Figure 3).

Figure 3: Adding nbextensions to Jupyter notebook

By clicking on the Nbextensions tab, we will be provided with a list of available widgets. In my case, I decided to enable the ones shown in Figure 4.

Figure 4: nbextensions widget options

Some of my favourite extensions are:

1. Table of Contents

Auto-generate a table of contents from markdown headings (Figure 5).

Figure 5: Table of Contents

2. Snippets

Sample code to load common libraries and create sample plots, which you can use as a starting point for your data analysis (Figure 6).

Figure 6: Snippets

3. Hinterland

Code autocompletion for Jupyter Notebooks (Figure 7).

Figure 7: Code autocompletion

The nbextensions library provides many other extensions apart from these three; I encourage you to experiment and test any others which may be of interest to you!

By default, the last output in a Jupyter Notebook cell is the only one that gets printed. If instead we want to automatically print all the outputs without having to use print(), we can add the following lines of code at the beginning of the notebook:

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

Additionally, it is possible to write LaTeX in a Markdown cell by enclosing the text between dollar signs ($).

It is possible to create a slideshow presentation of a Jupyter Notebook by going to View -> Cell Toolbar -> Slideshow and then selecting the slides configuration for each cell in the notebook. Finally, going to the terminal and typing the following commands will create the slideshow:

pip install jupyter_contrib_nbextensions
# and successively:
jupyter nbconvert my_notebook_name.ipynb --to slides --post serve

Magics are commands which can be used to perform specific tasks. Some examples are: inline plotting, printing the execution time of a cell, printing the memory consumption of running a cell, etc. Magic commands which start with a single % apply their functionality to just the one line of the cell where the command is placed. Magic commands which instead start with %% are applied to the whole cell. It is possible to print out all the available magic commands by using the following command:

%lsmagic
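As a quick illustration of what a line magic does under the hood: `%timeit` is essentially a convenience wrapper around the standard library's timeit module, so the same measurement can be made outside a notebook as well (a sketch with an arbitrary statement):

```python
import timeit

# Roughly what `%timeit sum(range(1000))` measures: run the statement
# many times and report the average time per loop.
n_loops = 1000
total = timeit.timeit("sum(range(1000))", number=n_loops)
per_loop = total / n_loops
print(f"{per_loop * 1e6:.2f} microseconds per loop")
```

The magic adds niceties on top (automatic choice of the number of loops, best-of-N runs), but the underlying timing mechanism is the same.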
On Wed, Apr 06, 2005 at 12:55:21AM -0700, Stephane Eranian wrote:
> David,
>
> > > I think we would be way on the safe side with:
> > > PFM_MAX_PMCS=320 (256+64)
> > > PFM_MAX_PMDS=320 (256+64)
> >
> > Ok. I suspect it will end up being massive overkill for nearly every
> > CPU we ever deal with, but who cares, really.
> >
> Yes, given how hard it is to get more counters I don't think we'll ever
> reach 256. But for PMUs which use indexed registers we may have, let's say,
> 32 registers scattered across the entire 256-entry namespace.

Very well.

> > Well, there are so few, I don't think we need to be particularly
> > clever. I suppose we could use the SPR numbers, minus some offset
> > (perfctr does this in the latest versions), but I was thinking
> > something simple, say:
> >
> > PMCs:
> >   0: MMCR0
> >   1: MMCR1
> >   2: MMCR2 (ppc32 only)
> >   3: MMCRA (ppc64 only)
> >
> > PMDs:
> >   0: timebase
> >   1: PMC1
> >   2: PMC2
> >   ...
> >   8: PMC8
> >
> > (The PowerPC documentation's use of PMC for "Performance Monitor
> > Counter" makes this look a little confusing).
> >
> From this description, it looks like you have 8 counters but you don't have
> 8 configuration registers for them. Do they always go in pairs?

No. The individual event selection fields, such as they are, are spread
across MMCR0 (32bit) and MMCR1 (64bit). The rest of MMCR0 and MMCRA have
general control bits, plus the settings for the various muxes which affect
the interpretation of the event selection fields. I still don't fully
understand the event selection logic, I have to admit - it's pretty baroque.

> > Incidentally, how were you planning to implement the perfctr-like
> > virtualized tsc and mmap() based sampling you were talking about?
> > That could have an impact on how we do things here.
> >
> See below for mmapping.
>
> > > > Hrm... I doubt it would really be all that costly to pack the holes,
> > >
> > > Related to event sets and the register virtual mapping that is done by perfctr.
> > > The same kind of mapping could be provided per-event set. It would be possible
> > > to return the address at which a set is visible. That would be automatic remapping.
> > > Of course, that goes against the model which I recently changed, whereby the user
> > > must call mmap() explicitly to do the remapping. But it would
> > > be hard to reuse the same call to map PMD registers of a set. There is only
> > > one file descriptor per context. So we need to find another trick to indicate
> > > which set to mmap. I think we could find a nice trick with the mmap offset.
> > > Offset=0 means sampling buffer, offset=1 means set0, offset=2 means set1 and
> > > so forth. Do you have any ideas on this?
> >
> > Yes, I've always thought using the offset as a selector for which
> > information to access would be a reasonable idea. However, they'll
> > need to be multiples of whole pages to work properly. I also think
> > the offsets should be somewhere up high: because you also support
> > read() on the fd, I think it would be counter-intuitive for an mmap()
> > at 0 *not* to return the same data as read().
> >
> What about on creation each set returns an opaque cookie which must be used
> as the offset to mmap? This way, the user does not have to deal with page size.
> We could add the cookie in the structures passed to PFM_CREATE_EVTSETS and
> PFM_GETINFO_EVTSETS. This would cover set0, which is always created by default.
> And yes, you need one page per set because each set can individually be destroyed.
> The cookie could correspond to a low or high offset depending on what is
> more convenient.

However, that's not actually what I was getting at. I don't know that it's
necessary to support mmap()ing the ring buffer. All I was saying is that if
it were possible to mmap() at offsets around the value of the file pointer
(so, near 0), it would be very peculiar for it to give you something other
than read() does. So I suggest that for this other information we put it at
a high offset and basically never allow the file pointer to reach up to it.

One other question: how are you planning to implement the mmap() based
sampling and tsc virtualization features from perfctr?

--
David Gibson                   | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
                               | _way_ _around_!
As our program grows bigger, it may contain many lines of code. Instead of putting everything in a single file, you can use modules to separate code into separate files by functionality. This makes our code organized and easier to maintain.

A module is a file that contains code to perform a specific task. A module may contain variables, functions, classes etc. Let's see an example.

Suppose a file named greet.js contains the following code:

// exporting a function
export function greetPerson(name) {
    return `Hello ${name}`;
}

Now, to use the code of greet.js in another file, you can use the following code:

// importing greetPerson from greet.js file
import { greetPerson } from './greet.js';

// using greetPerson() defined in greet.js
let displayName = greetPerson('Jack');

console.log(displayName); // Hello Jack

Here,

- The greetPerson() function in greet.js is exported using the export keyword:

export function greetPerson(name) { ... }

- Then, we imported greetPerson() in another file using the import keyword. To import functions, objects, etc., you need to wrap them in { }:

import { greetPerson } from './greet.js';

Note: You can only access exported functions, objects, etc. from the module. You need to use the export keyword for the particular function, object, etc. to import it and use it in other files.

Export Multiple Objects

It is also possible to export multiple objects from a module. For example,

In the file module.js:

// exporting the variable
export const name = 'JavaScript Program';

// exporting the function
export function sum(x, y) {
    return x + y;
}

In the main file:

import { name, sum } from './module.js';

console.log(name);
let add = sum(4, 9);
console.log(add); // 13

Here,

import { name, sum } from './module.js';

This imports both the name variable and the sum() function from the module.js file.

Renaming imports and exports

If the objects (variables, functions etc.) that you want to import are already present in your main file, the program may not behave as you want. In this case, the program takes the value from the main file instead of the imported file. To avoid naming conflicts, you can rename these functions, objects, etc. during the export or during the import.

1. Rename in the module (export file)

// renaming exports inside module.js
export { function1 as newName1, function2 as newName2 };

// when you want to use the module
// import in the main file
import { newName1, newName2 } from './module.js';

Here, while exporting the functions from the module.js file, new names (here, newName1 & newName2) are given to them. Hence, when importing those functions, the new names are used to reference them.

2. Rename in the import file

// inside module.js
export { function1, function2 };

// when you want to use the module
// import in the required file with different names
import { function1 as newName1, function2 as newName2 } from './module.js';

Here, while importing the functions, the new names (here, newName1 & newName2) are used for the function names. Now you use the new names to reference these functions.

Default Export

You can also perform a default export of the module. For example,

In the file greet.js:

// default export
export default function greet(name) {
    return `Hello ${name}`;
}

export const age = 23;

Then when importing, you can use:

import random_name from './greet.js';

While performing default export,

- random_name is imported from greet.js. Since random_name is not in greet.js, the default export (greet() in this case) is imported as random_name.
- You can directly use the default export without enclosing curly brackets {}.

Note: A file can contain multiple exports. However, you can only have one default export in a file.

Modules Always Use Strict Mode

By default, modules are in strict mode. For example,

// in greet.js
function greet() {
    // strict by default
}

export { greet };

Benefits of Using Modules

- The code base is easier to maintain because code with different functionalities lives in different files.
- It makes code reusable. You can define a module and use it numerous times as per your needs.

The use of import/export may not be supported in some browsers. To learn more, visit JavaScript import/export Support.
Best Answer kylomas, 12 December 2013 - 12:13 AM

esahtar90,

As a new user it is beneficial to understand what went wrong with the code you are having a problem with. _FileWriteToLine expects a file name; you gave it a file handle. So let's change your code as follows:

#include <File.au3>

;--------------------------------------------------------------------------------
; create test file
FileCopy(@ScriptDir & '\testinputfile.txt', @ScriptDir & '\testinputfile2.txt', 1)
;--------------------------------------------------------------------------------

Global $count = 0
$file = @ScriptDir & "\testinputfile2.txt"
$po_number = FileReadLine($file, 1)
$file_count = _FileCountLines($file)

For $i = 1 To $file_count
    $line = FileReadLine($file, $i)
    If StringInStr($po_number, $line) > 0 Then
        If _FileWriteToLine($file, $i, "", 1) <> 1 Then ConsoleWrite(@error & @LF) ; added an error check with console output
        $count += 1
    Else
        ExitLoop
    EndIf
Next

MsgBox(0, "Information:", "Count is " & $count & " PO number " & $po_number)

If you run this it will return "Count is 3 PO number 3022". The output file is left with two lines of 3022. This is because you are deleting the line that matches your number with _FileWriteToLine($file, $i, "", 1) while using an increment variable to read the file line by line.

1ST read in loop - file has 5 lines = 3022
2ND read in loop - file has 4 lines = 3022, and you start testing at line 2, skipping line 1 (which is 3022)
3RD read in loop - file has 3 lines = 3022, skipping lines 1 and 2...

Not only is this a logical error but it is terribly inefficient (see the HELP file doc for FileReadLine).

Let's say that we change the _FileWriteToLine call to _FileWriteToLine($file, $i, " ", 1) to write a blank space rather than delete the line. Now the message is "Count is 5 PO number 3022" and there are 5 blank lines in the output file (as we expect). Probably not what you want, but logically correct.

Given all of the above, it is better to read the file into either a string or an array, process the string/array, and write the file back from the string/array. A couple of examples of this have been posted in previous replies.

I hope this helps. Right now your best friend is the HELP file.

kylomas

edit: spelling
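For completeness, the read/filter/write-back approach recommended above could look roughly like this. This is an untested sketch: _FileReadToArray and _FileWriteFromArray come from File.au3, _ArrayAdd from Array.au3, and the variable names follow the original script:

```autoit
#include <File.au3>
#include <Array.au3>

; Sketch: read the whole file, keep only non-matching lines, rewrite once.
Local $file = @ScriptDir & "\testinputfile2.txt"
Local $aLines, $aKeep[1] = [0], $count = 0
Local $po_number = FileReadLine($file, 1)

_FileReadToArray($file, $aLines)        ; $aLines[0] holds the line count
For $i = 1 To $aLines[0]
    If $aLines[$i] = $po_number Then
        $count += 1                     ; matching line: drop it
    Else
        _ArrayAdd($aKeep, $aLines[$i])  ; keep everything else
    EndIf
Next
_FileWriteFromArray($file, $aKeep, 1)   ; write back in one pass

MsgBox(0, "Information:", "Count is " & $count & " PO number " & $po_number)
```

Because the file is read once and written once, there is no shifting-line problem, and the loop always sees the original contents.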
7.4.2 Providing your own templates

In addition to the template classes provided by the C++ standard library you can define your own templates. The recommended way to use templates with g++ is to follow the inclusion compilation model, where template definitions are placed in header files. This is the method used by the C++ standard library supplied with GCC itself. The header files can then be included with '#include' in each source file where they are needed.

For example, the following template file creates a simple Buffer<T> class which represents a circular buffer holding objects of type T.

#ifndef BUFFER_H
#define BUFFER_H

template <class T>
class Buffer
{
public:
  Buffer (unsigned int n);
  void insert (const T & x);
  T get (unsigned int k) const;
private:
  unsigned int i;
  unsigned int size;
  T *pT;
};

template <class T>
Buffer<T>::Buffer (unsigned int n)
{
  i = 0;
  size = n;
  pT = new T[n];
};

template <class T>
void Buffer<T>::insert (const T & x)
{
  i = (i + 1) % size;
  pT[i] = x;
};

template <class T>
T Buffer<T>::get (unsigned int k) const
{
  return pT[(i + (size - k)) % size];
};

#endif /* BUFFER_H */

The file contains both the declaration of the class and the definitions of the member functions. This class is only given for demonstration purposes and should not be considered an example of good programming. Note the use of include guards, which test for the presence of the macro BUFFER_H, ensuring that the definitions in the header file are only parsed once if the file is included multiple times in the same context.

The program below uses the templated Buffer class to create a buffer of size 10, storing the floating point values 0.25 and 1.0 in the buffer:

#include <iostream>
#include "buffer.h"

using namespace std;

int main ()
{
  Buffer<float> f(10);
  f.insert (0.25);
  f.insert (1.0 + f.get(0));
  cout << "stored value = " << f.get(0) << '\n';
  return 0;
}

The definitions for the template class and its functions are included in the source file for the program with '#include "buffer.h"' before they are used. The program can then be compiled using the following command line:

$ g++ -Wall tprog.cc
$ ./a.out
stored value = 1.25

At the points where the template functions are used in the source file, g++ compiles the appropriate definition from the header file and places the compiled function in the corresponding object file. If a template function is used several times in a program it will be stored in more than one object file. The GNU Linker ensures that only one copy is placed in the final executable. Other linkers may report "multiply defined symbol" errors when they encounter more than one copy of a template function--a method of working with these linkers is described below.
Susan Ibach, Technical Evangelist

AzureFest is coming to Kitchener! Why should you care? Well, let's face it, as students we don't like to attend long and boring lectures. We prefer to get straight to business and get our hands dirty. AzureFest will get you to do just that! Simply bring your own laptop and a credit card (the event and tools are free; the card is only required for account activation) and get ready to actually do stuff. If you can make it to or happen to be around Kitchener, be sure to attend.

Holiday Inn - Kitchener, 30 Fairway Road South, Kitchener, ON N2A 2N2
Tuesday, May 31, 2011
Click here to register

UPDATE: Our friends at Canada's Technology Triangle .NET User Group are giving away an XBOX 360 (250GB) to one lucky attendee who submits an evaluation form. Don't forget to submit the event evaluation form for a chance to win!

Cloud Computing is one of the most talked-about buzzwords these days. But what is it? What is Windows Azure? Windows Azure is Microsoft's Cloud platform that allows you to build and deploy highly available and scalable apps using .NET (or Java, PHP, Python, and Ruby).

As a student, why do I need to know about Cloud? Two reasons. So now it's up to you to take the plunge - become tomorrow's thought leader in this space and differentiate yourself amongst your fellow students. You don't have to, and shouldn't, wait to get started.

So how do I get started? If you're off for the summer, great! More time to learn. If you've just started your summer semester, then there isn't as much time to learn, but learning in short intervals of time will definitely get the job done (like your mother and/or father used to say when you were in grade school - "You have to do a little bit of homework every day").

Join me for a 3-part webinar series where we can get together and learn - in the library, in the lab, at home, or in res. Think of this webinar like a lab.
It's hands-on (just minus having to submit it at the end to be graded), meaning you'll be following along in your own environment and, by the end of the webinar, your application will be running on Windows Azure! We're just going to cover the basics, but you'll soon find that you'll be wanting to learn more about this exciting platform. Hopefully that'll be the case, at which point all you have to do is comment on this post, let me know, and I'll make sure that we continue the Windows Azure conversation moving forward. Looking forward to seeing you on the webinar.

In parts 1 and 2 of this series, we set up a multiplatform XNA solution that deploys to the PC, Xbox 360 and Windows Phone 7 devices seamlessly. In this 3rd and final part of the series, we'll implement platform-specific behavior within the same codebase. And we'll finally make our ship do something!

Conditional compilation symbols

When you created the three projects, Visual Studio set project-specific conditional compilation symbols. These flags can be used with preprocessing directives to create conditions in the compilation process. To view and edit these symbols, right-click on a game project, select Properties, and go to the Build tab. By default, Visual Studio added the WINDOWS symbol for the Windows project, the XBOX and XBOX360 symbols for the Xbox project, and the WINDOWS_PHONE symbol for the Windows Phone 7 project. In this article, we'll use these to set certain platform-specific properties and capture hardware-specific input.

Setting up graphics properties

We'll use the GraphicsDeviceManager to set up the screen properties for each platform. In the Initialize() method of the Game1 class, add the three #if preprocessor directives:

protected override void Initialize()
{
    // TODO: Add your initialization logic here
#if WINDOWS
#endif
#if XBOX
#endif
#if WINDOWS_PHONE
#endif

    base.Initialize();
}
However, we'll need to set an appropriate resolution for each one:

    // TODO: Add your initialization logic here
#if WINDOWS
    graphics.PreferredBackBufferWidth = GraphicsDevice.DisplayMode.Width;
    graphics.PreferredBackBufferHeight = GraphicsDevice.DisplayMode.Height;
#endif
#if XBOX
    graphics.PreferredBackBufferWidth = 1280;
    graphics.PreferredBackBufferHeight = 720;
#endif
#if WINDOWS_PHONE
    graphics.PreferredBackBufferWidth = 800;
    graphics.PreferredBackBufferHeight = 480;
#endif

    graphics.IsFullScreen = true;
    graphics.ApplyChanges();

For the PC, we retrieve the desktop resolution and set our back-buffer (the viewport) to match it. For the Xbox, we use its native 720p high-definition output by setting the back-buffer bounds to 1280 by 720 pixels. For the phone, we simply set the resolution to the maximum resolution dictated by the Windows Phone 7 platform. Finally, we set the IsFullScreen property to true and call the ApplyChanges() method to commit the above changes. Note that these last 2 lines of code are outside of any platform-specific preprocessor directives, since we want these two things to apply to all three of our builds. When building the solution, depending on the target project, each flag in the preprocessing directive will be checked. If it is set, the code inside the #if-#endif pair will be built. This means that you must be careful about fragmenting your code. For example, if you declare a field in a #if-#endif directive, but assign to it outside of the same condition, you will get an error for a specific platform (or a specific condition) about a missing declaration. At this point, if you were to run the project on any platform, the resolution will be automatically adjusted and you should see the ship in the platform's native full-screen resolution, without any scaling or stretching! Note: IsFullScreen is not strictly needed for the Xbox version, as all XNA games on the Xbox run in full-screen by default.
Capturing platform-specific input

Let's make things a bit more interesting and challenging by rotating the ship with a platform-specific input device. On the PC, the input comes primarily from the mouse/keyboard combination; on the Xbox 360, from the Xbox controller; on the Windows Phone 7, from the touch panel. The Xbox controller can also be used as a gamepad in Windows. A keyboard can be used on the Xbox 360, but you should avoid this scenario, since most Xbox 360 owners don't have a keyboard connected to their console. Windows Phone 7 devices also have additional buttons, such as the "back" button, which can be read in XNA. First, we'll add platform-independent code that will rotate the ship around the y-axis (also known as "yaw"). Add a float to hold the yaw angle:

public class Game1 : Microsoft.Xna.Framework.Game
{
    ...
    float yawAngle = 0.0f;

    public Game1()
    ...
}

To rotate the ship, we'll set the world matrix to the rotation-about-the-y-axis matrix that is constructed with the yawAngle, which we set with an input device.

protected override void Update(GameTime gameTime)
{
    world = Matrix.CreateRotationY(yawAngle);

    base.Update(gameTime);
}

Next we will capture input from the keyboard to rotate the ship with the arrow keys. Remember that keyboard input should be captured on Windows only; we check if WINDOWS is set.

#if WINDOWS
    if (Keyboard.GetState().IsKeyDown(Keys.Left))
        yawAngle += 0.05f;
    if (Keyboard.GetState().IsKeyDown(Keys.Right))
        yawAngle -= 0.05f;
#endif
    ...

We'll use the Xbox controller's left thumb-stick to rotate the ship left and right as well.

    yawAngle += GamePad.GetState(PlayerIndex.One).ThumbSticks.Left.X;

If you were to test this code now, you'll notice a little quirk. On Windows we're incrementing/decrementing the angle by a constant of 0.05 radians every update. But on the Xbox the angle depends on the X position of the thumb-stick, which can result in a large value.
The easiest solution is to simply dampen the value by a constant:

    yawAngle += GamePad.GetState(PlayerIndex.One).ThumbSticks.Left.X * 0.05f;

On Windows Phone 7, input is not as simple as keyboard strokes and thumb-stick positions. The phone captures input by recording touches. Touch points and displacement between them can be determined from raw touch data. Common higher-level motions, known as gestures, are already provided by the XNA framework, so you don't have to write your own. We'll use the horizontal drag gesture to rotate the ship. Before we go any further, however, we need to include a reference to the Microsoft.Xna.Framework.Input.Touch assembly. It wasn't included because our Windows Phone 7 project was created from the Windows project. Expand the Windows Phone Copy of XNAIntro project, Right-click on References and select Add Reference. Under the .NET tab select Microsoft.Xna.Framework.Input.Touch. We can now use the namespace in Game1. Since it is to be used for the phone only, be sure to place it in a proper compilation condition:

#if WINDOWS_PHONE
using Microsoft.Xna.Framework.Input.Touch;
#endif

namespace XNAIntro
{
    /// <summary>
    /// This is the main type for your game
    /// </summary>
    public class Game1 : Microsoft.Xna.Framework.Game
}

Before we can detect the horizontal drag gesture, we need to let the touch panel know that this gesture needs to be enabled.

    TouchPanel.EnabledGestures = GestureType.HorizontalDrag;

Once the gesture is enabled, you can access the gesture with the TouchPanel.ReadGesture() method:

    while (TouchPanel.IsGestureAvailable)
    {
        GestureSample gesture = TouchPanel.ReadGesture();
        yawAngle += gesture.Delta.X * 0.005f;
    }

We first check whether or not a gesture is available with the TouchPanel.IsGestureAvailable property. Failure to check for this condition will result in an exception thrown by the first TouchPanel.ReadGesture() call. We then read the gesture from the touch panel, which will return the next available gesture.
This means that when you have multiple gestures enabled, you would need to check the gesture.GestureType property to determine which gesture this is. In this case, the check is not required because the horizontal drag gesture is the only one we enabled. Also note that since the input feedback is different between all devices, we use a different dampening value.

Ready, Steady, Go!

We're done! Select any platform and run the little "game" to enjoy a truly breathtaking experience. Using a single codebase, you effectively created not one or two, but three game versions. All three are identical, and yet all three run on different hardware architectures and accept platform-specific input. Although this is the conclusion of the series, you can find plenty of resources to get you going further on the App Hub. Enjoy!

In part 1 of this series, we set up a tri-platform XNA solution that can be deployed to three platforms simultaneously. While the cornflower blue screen we saw in the previous part is a truly breathtaking achievement, in this part we'll write a little bit of code to make our game do something. XNA is a game development framework. Games often render stuff. Therefore, if A = B and B = C and albatrosses feed on both fish and krill, then we must conclude that by the end of this article we'll be drawing something! Be sure to download the space ship model included in this article. This model was "stolen" borrowed from the AppHub catalog, where it is used in numerous XNA samples. Open up the solution that we created in part 1 and brace yourself for the extremely complex and time-consuming process of importing the model into our project. Ready? Drag-and-drop Ship.fbx and ShipDiffuse.tga directly into the XNAIntroContent project in Visual Studio. Right-click on ShipDiffuse.tga and click Exclude From Project. You're done!
While you might be relieved to know how quick and simple the process really is, you are probably wondering why we excluded the texture from the Content project. ShipDiffuse.tga is Ship.fbx's UV map: a texture that is mapped onto the model when it is rendered in 3D. It is referenced directly by the model file, which means that the Content Pipeline will be aware of its physical location on disk during build time. When we dragged-and-dropped the two files into the Content project, they were physically copied to the project's folder, where they are expected to be. Since the texture is used by the model, we don't need to load or use it explicitly in our project, hence its exclusion. Including the texture in the project would not cause any technical issues, but it would be built twice (once by the model's processor and once by the default texture processor), resulting in the warning "Asset was built 2 times with different settings". Now that we have the model in our content project, let's write some code to load it and draw it on the screen. We'll begin by declaring a field for it:

Model ship;

public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";
}

You'll notice that the ContentManager's RootDirectory property is set to "Content". This references the name of the Content project, set in the Content project's properties under Content Root Directory. This means that if you have two or more Content projects in one solution, the ContentManager of each game project could have its RootDirectory property set to the desired Content project name. Let's use the ContentManager to load our model. In the LoadContent() method add:

ship = Content.Load<Model>("Ship");

The string "Ship" is the relative path to the asset in the Content project (our model in this case) minus the extension. There is no need to add "Content" to the path.
Note that ContentManager's Load method is generic and can therefore be used to load any type of asset, including models, sounds, textures, XML files and custom types. Now that we've loaded our 3D model, we want to draw it. In a perfect world, this would be done by simply calling a Model.Draw() method that would somehow magically read our minds and draw the model just as we want it. In the real world, things are not quite as simple. First, we need to use a number of matrices to define various properties needed to draw a 3D scene. Add the following matrix declarations after your model:

Matrix world;
Matrix view;
Matrix projection;

Before we do anything else, a quick rundown of what each matrix is and what it does: We'll set these parameters in the LoadContent() method. Right after loading our model, add the following:

world = Matrix.Identity;
view = Matrix.CreateLookAt(Vector3.Backward * 5000.0f, Vector3.Zero, Vector3.Up);
projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 1.0f, 10000.0f);

We set the world matrix to identity, which essentially translates to the world's "origin". We specify the "3D origin" (Vector3.Zero) as the target (this is where our model will be), place the camera an arbitrary distance backwards (Vector3.Backward * 5000.0f) and specify the positive Y-axis as the camera's "up" direction (Vector3.Up). Our projection matrix will have the aspect ratio of the viewport (GraphicsDevice.Viewport.AspectRatio), a field of view of Pi over 4 radians (or 45 degrees), and fairly arbitrary values for the near and far planes. The reason for so many arbitrary values is the limited scope of this article; their exact meaning is simply not important here. We are ready to draw our model.
Add the following private method in the Game1 class:

private void DrawModel(Model model)
{
    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.World = world;
            effect.View = view;
            effect.Projection = projection;
            effect.EnableDefaultLighting();
        }
        mesh.Draw();
    }
}

This method does three simple things. The first foreach loop iterates over all of the meshes in the model. The second loop iterates over each effect, where all of the mandatory and optional parameters are set. In actual fact, foreach BasicEffect in ModelMesh.Effects is a shortcut for iterating over each MeshPart and setting each MeshPart's Effect individually. Needless to say, that is definitely not within the scope of this article. The third and last step is drawing the mesh with mesh.Draw(). Notice that we make use of our three key matrices (world, view and projection) to set the three properties required to correctly draw the geometry (but not necessarily the color and lighting) of the model. We also use the EnableDefaultLighting() method to light our model using the three-point lighting method. We're almost ready to draw, but there is one last piece of the puzzle missing in our drawing code. We need to account for transforms that were applied to each mesh when the model was created. The transforms are specified in each Bone of the model. The collection of Bones defines the mesh hierarchy, where each mesh has a relation to its parent, and the transform specifies the mesh's transformation relative to its parent. For example, if a tree model has a mesh for each of its branches, then each branch (and ultimately the leaves) will be positioned relative to a parent branch.
We look up the transforms with two simple lines of code:

Matrix[] transforms = new Matrix[ship.Bones.Count];
ship.CopyAbsoluteBoneTransformsTo(transforms);

And then account for each transform when multiplying by the world matrix:

effect.World = transforms[mesh.ParentBone.Index] * world;

Our final drawing code now looks like this:

Matrix[] transforms = new Matrix[ship.Bones.Count];
ship.CopyAbsoluteBoneTransformsTo(transforms);
...
effect.World = transforms[mesh.ParentBone.Index] * world;

Let's draw our ship by calling DrawModel(ship) in the Draw() method of the Game1 class:

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    // TODO: Add your drawing code here
    DrawModel(ship);

    base.Draw(gameTime);
}

By this point, you might be wondering which platform this code was meant for. After all, we have three separate projects, which will be built for three separate and architecturally unique hardware platforms. You might even be convinced that at least some conditions must be placed in the code to ensure that it runs on three different CPUs and is rendered by three different GPUs. Oddly enough, it is with this concern that I would like to formally welcome you to the wonderful world of XNA: a true, tri-platform game development framework. Stay tuned for part 3!
I am having trouble: let's say I do not have any test scores and enter -1 right away; then there is an error of dividing by zero.

import java.util.Scanner;

public class myarray
{
    public static void main(String args[])
    {
        Scanner kbd = new Scanner(System.in);

        // total number of quiz scores possible in CS 200
        final int MAX_SIZE = 10;

        // Create (declare & instantiate) an int array of size 10, called: quizScores
        int[] quizScores = new int[MAX_SIZE];

        // Declare any other variable storage space needed by the program
        int number;
        int sum = 0;
        int ave = 0;
        int index;
        int count = 0;

        /* Initialize the first 7 values to your quiz scores thus far in this
           course from user keyboard input with a "do-while loop", a sentinel
           value (-1) to stop the loop, and a counter to keep track of how many
           values you have assigned to the array. */
        do
        {
            System.out.println("Please enter your quiz scores. Enter -1 to quit.");
            number = kbd.nextInt();
            quizScores[count] = number;
            if (quizScores[count] != -1)
            {
                count++;
            }
        } while (number != -1 && count < quizScores.length);

        /* Next, use the counter from the loop above to run a "for loop" to sum
           your quiz scores. */
        for (index = 0; index < count; index++)
        {
            sum = sum + quizScores[index];
        }

        /* Then, find the average score of your quizzes thus far using the sum
           divided by the counter. */
        ave = sum / count;

        // Display (echo) your individual scores
        for (index = 0; index < count; index++)
        {
            System.out.print(quizScores[index] + " ");
        }

        // Display the sum of the scores
        System.out.println();
        System.out.println("The sum of the quiz scores is: " + sum);

        // Display the quiz score average
        System.out.println("The average quiz score is: " + ave);
    } // closing main method
} // closing class header
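The crash comes from the line ave = sum / count: when the user enters -1 immediately, count is still 0 and the division fails. The fix is simply to guard the division (and skip or adjust the report) when no scores were entered. The logic of that guard, sketched here in Python rather than Java for brevity:

```python
def average(scores):
    # Guard against an empty score list: dividing by a zero count is
    # exactly what raises the error described above.
    if len(scores) == 0:
        return 0  # or print "no scores entered" and skip the report
    # Integer division, matching the int arithmetic in the Java code.
    return sum(scores) // len(scores)

print(average([]))             # no scores entered: no crash
print(average([80, 90, 100]))  # normal case
```

In the Java program, the same idea is an if (count > 0) check wrapped around the average calculation and the lines that print it.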
This node can be used to create ShotGrid Server work items. Session work items in this node are associated with a long-running ShotGrid process.

Note
The ShotGrid API is not provided with Houdini and must be installed manually. To install the ShotGrid API, perform the following steps:
Download the latest release from and extract it.
Copy the shotgun_api3 folder to a location in Houdini's python path. This can be found by running the following command in a command prompt or terminal: hython -c 'import sys; print(sys.path)' or by running import sys; print(sys.path) in Houdini's python shell.

In order to authenticate to ShotGrid, one of the following three methods is required.

Note
As Shotgun is migrating to the new name ShotGrid, the existing $PDG_SHOTGUN_* environment variables and shotgun.json have been deprecated and will be removed in a future release. Please migrate to their ShotGrid equivalents, $PDG_SHOTGRID_* and shotgrid.json respectively.

Create a shotgrid.json file in $HOUDINI_USER_PREF_DIR or within a defined $PDG_SHOTGRID_AUTH_DIR, with the following syntax:
{ "script_name": "", "api_key": "" }
Define $PDG_SHOTGRID_LOGIN and $PDG_SHOTGRID_PASSWORD environment variables.
Define $PDG_SHOTGRID_SCRIPT_NAME and $PDG_SHOTGRID_API_KEY environment variables.

The order of precedence for authentication is $PDG_SHOTGRID_LOGIN, then $PDG_SHOTGRID_SCRIPT_NAME, and then falling back to shotgrid.json. This is so that an override can be specified in the environment. Using shotgrid.json is recommended, as exposing credentials in the environment can be a security risk.

See command servers for additional details on the use of command chains.

Session Count from Upstream Items
When this toggle is enabled, the node will create a single server work item and a session with the server for each upstream work item. Otherwise, a server item will be created for each upstream work item.

Number of Sessions
The number of sessions to create with the server.
Each session work item will cook in serial with other sessions using the same server. The chain of work items starting from this session item down to the Command Server End node will cook to completion before starting the next session.

Server Port
The TCP port number the server should bind to (when Connect to Existing Server is off), or the port to use to connect to an existing server (when Connect to Existing Server is on). The default value 0 tells the system to dynamically choose an unused port, which is usually what you want. If you want to keep the ports in a certain range (and can guarantee the port numbers will be available), you can use an expression here such as 9000 + @pdg_index.

Connect to Existing Server
When this toggle is enabled, the work item will connect to an existing server rather than spawning a new one.

Server Address
The existing server address, when Connect to Existing Server is enabled.

Load Timeout
The timeout used when performing an initial verification that the shared server instance can be reached. When this timeout passes without a successful communication, the work item for that server will be marked as failed.

ShotGrid URL
The URL of the ShotGrid instance to communicate with. The default value is $PDG_SHOTGRID_URL, which expects that the environment variable will be defined when the work item executes. However, this can be set to an absolute URL if it's known.

HTTP Proxy
The URL of a proxy server used to communicate with the ShotGrid instance.

Custom CA Certs
A path to a custom list of SSL certificate authorities to use while communicating with the ShotGrid instance.

Feedback Attributes
When on, the specified attributes are copied from the end of each iteration onto the corresponding work item at the beginning of the next iteration. This occurs immediately before the starting work item for the next iteration cooks.

Tip
The attribute(s) to feed back can be specified as a space-separated list or by using the attribute pattern syntax.
For more information on how to write attribute patterns, see Attribute Pattern Syntax.

Feedback Output Files
When on, the output files from each iteration are copied onto the corresponding work item at the beginning of the next loop iteration. The files are added as outputs of that work item, which makes them available as inputs to work items inside the loop.

These parameters can be used to customize the names of the work item attributes created by this node.

Iteration
The name of the attribute that stores the work item's iteration number.

Number of Iterations
The name of the attribute that stores the total iteration count.

Loop Number
The name of the attribute that stores the loop number.
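The authentication precedence described above (login/password environment variables first, then script-name/api-key variables, then shotgrid.json) can be sketched in Python. The helper name and the returned dictionary shape here are illustrative only, not part of the Houdini or PDG API:

```python
import json
import os

def resolve_shotgrid_auth(environ, pref_dir):
    """Return the first credential source found, following the documented
    precedence. 'environ' is a mapping like os.environ; 'pref_dir' plays
    the role of $HOUDINI_USER_PREF_DIR (or $PDG_SHOTGRID_AUTH_DIR)."""
    # 1. User login/password override from the environment.
    if environ.get("PDG_SHOTGRID_LOGIN"):
        return {"login": environ["PDG_SHOTGRID_LOGIN"],
                "password": environ.get("PDG_SHOTGRID_PASSWORD", "")}
    # 2. Script-key credentials from the environment.
    if environ.get("PDG_SHOTGRID_SCRIPT_NAME"):
        return {"script_name": environ["PDG_SHOTGRID_SCRIPT_NAME"],
                "api_key": environ.get("PDG_SHOTGRID_API_KEY", "")}
    # 3. Fall back to shotgrid.json on disk.
    path = os.path.join(pref_dir, "shotgrid.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)  # expects {"script_name": ..., "api_key": ...}
    return None
```

Keeping credentials in shotgrid.json, as the documentation recommends, corresponds to the final branch; the environment variables exist so a per-job override can shadow the file.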
MySQL-python-1.2.2]$ /v/linux24_i386/lang/python/3.0rc1/bin/python3.0 setup.py
  File "setup.py", line 9
    raise Error, "Python-2.3 or newer is required"
               ^
SyntaxError: invalid syntax

Kyle VanderBeek, 2009-01-10
We should be using sys.hexversion, which appeared way back in Python 1.5.

--- setup.py (revision 565)
+++ setup.py (working copy)
@@ -4,7 +4,7 @@
 import sys
 from setuptools import setup, Extension
-if sys.version_info < (2, 3):
+if not hasattr(sys, "hexversion") or sys.hexversion < 0x02030000:
     raise Error("Python-2.3 or newer is required")

No, this is from PEP 3109. There are more lines we should modify to adapt to Python 3.

Kyle VanderBeek, 2009-02-06
Andy has put this fix in 1.2.3 beta 1, and it's also now in trunk.
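The point of the proposed patch is that sys.hexversion is a plain integer comparison that even very old interpreters can evaluate, whereas the original raise Error, "..." statement is itself a syntax error on Python 3, so the friendly message is never reached. A runnable sketch of the guard (RuntimeError stands in here for the script's Error class, which is not shown in the report):

```python
import sys

def require_python(minimum=0x02030000, hexversion=None):
    # sys.hexversion packs the version as 0xMMmmPPss (major, minor,
    # micro, serial); 0x02030000 is 2.3.0, the threshold in the patch.
    v = sys.hexversion if hexversion is None else hexversion
    if not isinstance(v, int) or v < minimum:
        raise RuntimeError("Python-2.3 or newer is required")

require_python()  # any modern interpreter is well past 2.3, so this passes
```

The hasattr(sys, "hexversion") check in the patch covers interpreters older than Python 1.5.2, where the attribute does not exist at all.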
Timeline May 26, 2007: - 10:28 PM Ticket #722 (svg support test crashes on IE 5.5) closed by - fixed: 2.4RC5 - 10:28 PM Ticket #720 (remove console.log() from OpenLayers.Format.WKT) closed by - fixed: 2.4RC4 - 10:28 PM Ticket #719 (SVG renderer does not always redraw LineStrings and Polygons) closed by - fixed: 2.4RC4 - 10:28 PM Ticket #718 (WMS.Untiled Clone doesn't work) closed by - fixed: 2.4RC4 - 10:28 PM Ticket #715 (layer.js needs sanity check) closed by - fixed: 2.4RC4 - 10:28 PM Ticket #711 (OpenLayers.Layer.Image requires OpenLayers.Tile.Image) closed by - fixed: 2.4RC4 - 10:27 PM Ticket #710 (Install instructions unclear) closed by - fixed: 2.4RC4 - 10:27 PM Ticket #708 (change WKT format to deal in features instead of geometries) closed by - fixed: 2.4RC4 - 10:27 PM Ticket #706 (Full CSS support fails when Control.OverviewMap is loaded) closed by - fixed: 2.4RC4 - 10:27 PM Ticket #703 (OpenLayers.Layer.Vector do not properly destroy its features) closed by - fixed: 2.4RC4 - 10:27 PM Ticket #701 (SVG render does not always clear features when map extent changes) closed by - fixed: 2.4RC4 - 10:27 PM Ticket #698 (add close box option to AnchoredBubble) closed by - fixed: 2.4RC4 - 10:27 PM Ticket #697 (Vector example to show how to use styles) closed by - fixed: 2.4RC4 - 10:26 PM Ticket #696 (events need to fall through the overview map extent rectangle) closed by - fixed: 2.4RC4 - 10:26 PM Ticket #695 (GeoRSS serializer is broken) closed by - fixed: 2.4RC4 - 10:26 PM Ticket #694 (Safari 1.3.2 doesn't work with OL 2.4) closed by - fixed: 2.4RC4 - 8:31 PM Ticket #726 (Patch for toggle type buttons in Panel.js) created by - Hi, would you please consider to apply the attached patch to the … - 12:20 PM Changeset [3194] by - OL 2.4 RC5 with animated zooming and panning - 12:18 PM Changeset [3193] by - - 12:18 PM Changeset [3192] by - - 12:17 PM Changeset [3191] by - - 12:16 PM Changeset [3190] by - - 12:11 PM Changeset [3189] by - OL 2.4 RC5 with animated zooming 
and panning - 10:51 AM Ticket #725 (Customizing LayerSwitcher labels) created by - Provide a way for customizing LayerSwitcher labels. By default it must … May 25, 2007: - 3:20 PM Ticket #724 (OpenLayers.Util.onImageLoad sometimes doesn't work) closed by - duplicate: Ah, I see the "sanity check" was applied in r2286. Excuse the … - 12:59 PM Ticket #724 (OpenLayers.Util.onImageLoad sometimes doesn't work) created by - For some reason, this.map is not always defined inside … - 10:18 AM Changeset [3188] by - copy trunk - 10:16 AM Changeset [3187] by - - 10:14 AM Changeset [3186] by - - 10:06 AM Changeset [3185] by - - 10:03 AM Changeset [3184] by - - 10:01 AM Changeset [3183] by - - 9:36 AM Ticket #723 (permalink to change url in browser adress bar (the yahoo way)) created by - It would be nice when the permalink feature would change the url in … - 5:56 AM Release/2.4/Announce/RC5 created by - - 5:49 AM Changeset [3182] by - Tag RC5. - 5:45 AM Changeset [3181] by - Pullup #722 from trunk for RC5. svn merge 3177:HEAD. - 5:35 AM Ticket #722 (svg support test crashes on IE 5.5) reopened by - - 4:52 AM Ticket #722 (svg support test crashes on IE 5.5) closed by - fixed: fixed in r3180 - 3:49 AM Changeset [3180] by - Some browsers (IE5.5) don't support documnet.implementation. Check if … May 24, 2007: - 11:56 PM Ticket #722 (svg support test crashes on IE 5.5) created by - document.implementation is only support for versions after 6.x under … - 10:42 AM Changeset [3179] by - changes to examples/geojson.html - 8:46 AM Ticket #721 (Add georss:box support and items agregation) created by - Working on the web interface of GeoNetwork (metadata catalog for … - 7:55 AM Changeset [3178] by - Tag RC4 release. - 7:51 AM Release/2.4/Announce/RC4 created by - - 7:43 AM Changeset [3177] by - Pullup trunk for RC4. 
Fixes: #694 Safari 1.3.2 doesn't work with … - 7:25 AM Changeset [3176] by - box shmox - 6:46 AM Changeset [3175] by - WMSManager (ticket #687): css updated - 6:35 AM Changeset [3174] by - WMSManager (ticket #687): added permalink function. created … - 12:16 AM Changeset [3173] by - Zoom the map in more, since in some cases a miscalculation of grid … - 12:08 AM Changeset [3172] by - Opera grids are slightly different, so instead of concocting a bounds … May 23, 2007: - 11:56 PM Changeset [3171] by - Fix typo in 'multimap' in test list. - 11:51 PM Changeset [3170] by - Add missing test file to SVN. - 11:47 PM Changeset [3169] by - remove console.log() from OpenLayers.Format.WKT , patch from Fredj, … - 11:31 PM Ticket #720 (remove console.log() from OpenLayers.Format.WKT) created by - see patch - 9:23 PM WikiStart edited by - (diff) - 9:23 PM WikiStart edited by - (diff) - 9:22 PM WikiStart edited by - (diff) - 9:15 PM WikiStart edited by - (diff) - 8:23 PM Ticket #391 (Render visible tiles first) closed by - wontfix: The answer to this is 'use different hostnames for different layers'. … - 8:14 PM Changeset [3168] by - Update install documentation from discussion in #710. - 8:10 PM Ticket #707 (editingtoolbar control) closed by - fixed: This turned out to be a relative linking/CSS issue that was resolved … - 8:07 PM Changeset [3167] by - #719 - until proper clipping is achieved, this solution redraws as … - 7:57 PM Ticket #719 (SVG renderer does not always redraw LineStrings and Polygons) created by - Same issue as #701 but not resolved for geometries other than points. 
… - 7:32 PM Changeset [3166] by - #701 - clear points from the SVG root that fall outside of the max … - 6:43 PM makeovers created by - - 4:35 PM Changeset [3165] by - minor change in example - 3:54 PM Changeset [3164] by - support for RFC-2 - read/write Point, MultiPoint, LineString, … - 1:18 PM Changeset [3163] by - Apply #718 with sde's approval, fixing WMS.Untiled clone method bug … - 7:55 AM Ticket #718 (WMS.Untiled Clone doesn't work) created by - Clone in WMS.Untiled copies the tile object, so it's shared. Fix it.r May 22, 2007: - 8:51 PM Ticket #717 (multimap map layer causes failure to load in IE) created by - try loading the attached HTML in IE - 8:49 PM Changeset [3162] by - Minor change to Layer.js as described in patch-like form in #715: … - 8:40 PM Changeset [3161] by - need to put MM layer last to get it to load in IE; filing bug ticket - 8:21 PM Changeset [3160] by - removing the OpenPlans WMS because it appears to only work with the … - 8:09 PM Changeset [3159] by - cleaning up the white space so this is more readable, adding a couple … - 4:37 PM Ticket #716 (Make LayerSwitcher support displayInLayerSwitcher for Base Layers) created by - If a map is to be rendered with only one base layer, having the base … - 2:14 PM Ticket #715 (layer.js needs sanity check) created by - I have been experiencing an intermittent timing problem in … - 12:58 PM Ticket #714 (Eliminate tile blanking when reloading a tile) created by - When tiles need to be reloaded (perhaps due to a mergeNewParams … May 21, 2007: - 7:23 PM Ticket #713 (SVG Renderer clipping) created by - The SVG renderer should draw clipped geometries where possible. 
This … - 7:24 AM Changeset [3158] by - #708 - make the WKT format like the other vector formats - … May 18, 2007: - 9:39 PM Ticket #712 (Get everything in the OpenLayers namespace) created by - Function, Number, and String methods would be nice to have in the … - 8:20 AM Changeset [3157] by - #711 - image layer needs jsdoc requires statement for single file … May 17, 2007: - 10:43 PM Ticket #711 (OpenLayers.Layer.Image requires OpenLayers.Tile.Image) created by - for profiles with the image layer, the doc comment is needed […] … - 12:37 PM Ticket #710 (Install instructions unclear) created by - Maybe I'm a newb, but I can't get a local install to work. The … - 11:01 AM Changeset [3156] by - add write support for multi-point, linestring, and polygon - 10:03 AM Changeset [3155] by - fix part two to #703 - dont try to erase a null geometry - 9:39 AM Changeset [3154] by - merge r3094:HEAD from trunk - 9:35 AM Changeset [3153] by - #706 - map constructor should not add duplicate theme related link … - 8:53 AM Ticket #709 (typo in the LinearRing class doc) closed by - invalid: Sorry the addPoint method is handle by the inherited multipoint class - 8:46 AM Ticket #709 (typo in the LinearRing class doc) created by - As far as I understand, the addPoint and removePoint methods are now … - 8:16 AM Changeset [3152] by - merge r3092:HEAD from trunk May 16, 2007: - 3:25 PM Ticket #708 (change WKT format to deal in features instead of geometries) created by - Nobody should really be doing much with geometries. We render … May 15, 2007: - 2:14 PM Changeset [3151] by - Since we have a destroyFeatures command, use it. 
Patch from fredj, … - 2:13 PM Changeset [3150] by - #694 - regarding inheritance of toString: for IE, check that instance … - 2:10 PM Changeset [3149] by - Apply patch from John Cole to make closeBox show up in AnchoredBubble … - 6:17 AM Ticket #707 (editingtoolbar control) created by - In Firefox 2: If the js is loaded directly, the editing toolbar … May 14, 2007: - 4:07 PM Ticket #706 (Full CSS support fails when Control.OverviewMap is loaded) created by - #460 patch/docs work great until a Control.OverviewMap is added to the … May 13, 2007: - 8:52 AM Changeset [3148] by - WMSManager (ticket #687): added: WMS legend images per OL layer - 6:57 AM Changeset [3147] by - WMSManager (ticket #687): added: WMS layer infos panel, WMS select … May 12, 2007: - 8:47 AM Ticket #705 (Allow Custom Icon for GeoRSS layer) created by - When I create a map with two different GeoRSS layers, I want to be … May 11, 2007: - 8:03 AM Ticket #704 (Netscape 7.0 no map images) created by - When using Netscape 7.0 map is not displaying any images I get no … May 10, 2007: - 6:45 AM Ticket #703 (OpenLayers.Layer.Vector do not properly destroy its features) created by - See patch, specially the HACK HACK comment. thanks - 3:20 AM Changeset [3146] by - Vector Features styling example, from Cameron Shorter. I did a bit of … - 2:30 AM Ticket #702 (OpenLayers.Ajax.Request does not support asynchronous = false) created by - The patch comes from prototype.js 1.5.1 rc4. Tested with firefox … May 9, 2007: - 10:21 PM Ticket #701 (SVG render does not always clear features when map extent changes) created by - Looks like the renderer may not be getting rid of all the features it … - 6:45 PM Changeset [3145] by - Give the layer it's own getZoomForResolution - because we're dealing … - 5:52 PM Changeset [3144] by - Attempt to fix Safari 1.3 brokenness. - 5:51 PM Changeset [3143] by - BaseTypes patch branch. 
May 8, 2007: - 11:40 PM Ticket #700 ([patch] Linecap style) created by - A small patch to let the user specify the linecap style of a feature. … - 6:43 PM Changeset [3142] by - Example which shows text labels on a polygon using a modified version … May 7, 2007: - 1:04 PM Changeset [3141] by - adding GeoJSON support to google sandbox - 12:56 PM Changeset [3140] by - basic read support for GeoJSON - Point, Line, and Polygon - 10:10 AM Ticket #699 (add mergeNewParams to WFS layer) created by - OpenLayers.Layer.WFS doesn't currently have a mergeNewParams method. […] - 9:49 AM Changeset [3139] by - get rid of this dateline wrapping nonsense - maps are flat for Pete's … - 6:56 AM Ticket #698 (add close box option to AnchoredBubble) created by - There is no option to open an AnchoredBubble with a close box. When … - 3:52 AM Changeset [3138] by - WMSManager (ticket #687): added multiple layer select and 'add layer' … - 12:00 AM Changeset [3137] by - watermarking examples May 6, 2007: - 9:20 PM Ticket #697 (Vector example to show how to use styles) created by - This patch shows how to apply a style to vector features. It is … - 2:14 PM Changeset [3136] by - WMSManager (ticket #687): added switch layer system for overlays, … May 5, 2007: May 4, 2007: - 9:54 AM Changeset [3135] by - WMSManager (ticket #687): probably added basic TileCache support - 9:03 AM Changeset [3134] by - WMSManager (ticket #687): fix bug when LatLonBoundingBox is not set. … - 8:00 AM Changeset [3133] by - #696: let events fall through the extent rectangle - 7:56 AM Ticket #696 (events need to fall through the overview map extent rectangle) created by - There is a problem with events stopping at the extent rectangle. A … - 7:35 AM Changeset [3132] by - Update GeoRSS serialization so that it works. Damn you, lat/lon! … - 7:34 AM Changeset [3131] by - WMSManager (ticket #687): added overlay capability, added Exceptions … - 7:25 AM Changeset [3130] by - Fix tests. 
- 7:24 AM Changeset [3129] by - Why can't we just live on a plane? - 7:17 AM Ticket #695 (GeoRSS serializer is broken) created by - Still uses geometry.lat, geometry.lon - 7:15 AM Ticket #694 (Safari 1.3.2 doesn't work with OL 2.4) created by - TypeError - Value undefined (result of expression … - 7:06 AM Changeset [3128] by - Add tests for GeoRSS serialization. - 6:55 AM Changeset [3127] by - Better HTML for narrow windows. - 6:54 AM Changeset [3126] by - Fix GeoRSS serializer. - 6:53 AM Changeset [3125] by - Branch for georss serializer fix. May 3, 2007: - 12:20 PM Ticket #693 (Problem with Layer Grid.js:moveTo() function with Gmap with an overlay) created by - From Louvy Joseph email (paraphased) Having a Gmap baselayer set … - 9:41 AM Changeset [3124] by - Fix copyright. - 9:11 AM Changeset [3123] by - Implemented changes to Class.create() and Class.inherit() to simplify … - 9:08 AM Changeset [3122] by - make superclass sandbox - 7:09 AM Changeset [3121] by - WMSManager (ticket #687): trying workaround for proxy request (hope … - 6:44 AM Changeset [3120] by - ArcWebServices Layer, and example of how to use it. Note that this … - 6:27 AM Changeset [3119] by - Create sandbox for ArcWebServices Layer. 
- 6:02 AM Changeset [3118] by - WMSManager (ticket #687): trying workaround for proxy request (hope … - 5:09 AM Changeset [3117] by - WMSManager (ticket #687): trying workaround for proxy request - 3:37 AM Ticket #692 (Individual Icons/Markers in WFS Layer) created by - hi there, i am looking for a way to create individual icon … - 3:25 AM Changeset [3116] by - WMSManager (ticket #687): trying workaround for proxy request - 3:11 AM Changeset [3115] by - WMSManager (ticket #687): trying workaround for proxy request - 2:48 AM Changeset [3114] by - WMSManager (ticket #687): first upload May 2, 2007: - 11:04 PM Ticket #691 (vectoring doesn't work properly on Ubuntu Feisty Firefox) created by - Several vector-related issues on Ubuntu Feisty Firefox (version … - 7:20 AM Changeset [3113] by - Tag OpenLayers 2.4-rc3. - 7:16 AM Release/2.4/Announce/RC3 created by - - 7:10 AM Ticket #690 (Invalid extent rectangle dimensions in OverviewMap must be handled) closed by - fixed: Applied to 2.4 branch, r3112 (2.4RC3) - 7:10 AM Ticket #683 (Markers Layer - Out of Range at startup) closed by - fixed: Applied to 2.4 branch, r3112 (2.4RC3) - 7:10 AM Ticket #682 (control panels don't pass through mouseup) closed by - fixed: Applied to 2.4 branch, r3112 (2.4RC3) - 7:09 AM Ticket #681 (markers don't draw when zooming to level 0 on the map) closed by - fixed: Applied to 2.4 branch, r3112 (2.4RC3) - 7:09 AM Ticket #680 (Deleting selected features from a vector layer causes unusual behavior) closed by - fixed: Applied to 2.4 branch, r3112 (2.4RC3) - 7:09 AM Ticket #679 (feature.fid is null in IE6) closed by - fixed: Applied to 2.4 branch, r3112 (2.4RC3) - 7:06 AM Changeset [3112] by - Merge changes from trunk to 2.4: svn merge trunk/openlayers/@3088 … - 7:01 AM Changeset [3111] by - fix tests -- ie destroys page and thereofre map object, which causes … May 1, 2007: - 4:14 PM Changeset [3110] by - for whatever reason, the html in this file was twice, and exactly. 
I … - 11:08 AM Changeset [3109] by - copying file for ominiverdi's sandbox - 11:04 AM Changeset [3108] by - Creating sandbox - 8:53 AM Ticket #127 (Ruler tool doesn't work on different projections) closed by - duplicate: Dupe of #173, since there is no longer a 'ruler tool' in OpenLayers. - 8:51 AM Ticket #96 (IE6: Map does not refresh until all the tiles have been downloaded.) closed by - wontfix: It seems to be the general consensus that there is no good solution … - 8:48 AM Ticket #78 (create a ViaMichelin data source) closed by - wontfix: Agreed, this should go away. Layer sources need to either be: * … - 8:21 AM Changeset [3107] by - #690 - force overview map extent rectangle to have non-negative … - 7:33 AM Release/2.4 edited by - let's not be pessimistic here. (diff) - 7:29 AM Release/2.4 edited by - add links to rc announces (diff) - 7:21 AM Changeset [3106] by - fix for #682 - let the mouseup event go through on the panel, just … - 7:20 AM Changeset [3105] by - The Vector layer tests fail in IE because there is no destroy on this … - 7:09 AM Changeset [3104] by - Apply Erik's patch from #680 to fix "removeFeatures() in Layer.Vector … - 7:04 AM Changeset [3103] by - fix for #683 -- fixes markers layer visibility/inrangeitude on startup Apr 30, 2007: - 2:15 AM Ticket #690 (Invalid extent rectangle dimensions in OverviewMap must be handled) created by - In some cases the layer used for OverviewMap can have different aspect … Apr 29, 2007: - 7:13 PM Changeset [3102] by - key for dev.openlayers.org - 6:34 PM Changeset [3101] by - example with the google key for openlayers.org - 3:25 PM Changeset [3100] by - overlay any layer (in EPSG:54005) on Google - 1:27 PM SettingZoomLevels edited by - typo: maxResolution min->max (diff) Apr 28, 2007: - 1:07 PM Changeset [3099] by - create google sandbox - 12:12 PM Ticket #689 (Multiple optional WMS layers on same map get turned on/off together) closed by - worksforme: Was a problem with the .map file - the MAP NAME was the 
same as the … - 10:50 AM Ticket #689 (Multiple optional WMS layers on same map get turned on/off together) created by - I define multiple WMS Layers in OpenLayers - each of which point to … - 8:45 AM Ticket #688 (Mouse cursor = "wait" on WFS loading) created by - This patch allows for the change of mouse cursor when it is over the … Apr 27, 2007: - 8:17 AM Ticket #687 (WMS multiple Servers connector (aka WMSManager)) created by - This is a first implementation of a connector for OL to add on the fly … - 8:12 AM Ticket #686 (Treat Google Layer as projected data) created by - Currently, Google is treated inside OpenLayers as unprojected data. … - 8:00 AM ApplyingPatches created by - - 7:49 AM CreatingPatches edited by - Add link to ApplyingPatches wiki (diff) - 6:05 AM Ticket #685 (WFS markers are drawn when layer is not in range...) created by - Using firebug on FF2, I noticed my WFS marker layer making requests … Apr 26, 2007: - 4:29 PM Changeset [3098] by - fix for #681 - Make sure markers layer always draws on the first … - 4:19 PM Ticket #684 (Map's Initial Zoom Level Should be 'null') created by - As related to #681, the initial zoom level of the map should be 'null' - 3:28 PM Changeset [3097] by - add tests for out of range markers (#683). This test can be used as a … - 2:58 PM Ticket #683 (Markers Layer - Out of Range at startup) created by - If the markers layer is out of range at startup, we need to make sure … - 8:41 AM Ticket #682 (control panels don't pass through mouseup) created by - Control Panel div elements block mouseup. They shouldn't. - 7:03 AM Ticket #681 (markers don't draw when zooming to level 0 on the map) created by - The OpenLayers.Map object has a default zoom of '0'. 
Because of this, … - 6:31 AM Changeset [3096] by - Add GML format test, which for the time being only has a constructor … - 4:35 AM Ticket #680 (Deleting selected features from a vector layer causes unusual behavior) created by - removeFeatures() in Layer.Vector does not remove features from the … - 3:29 AM Changeset [3095] by - Fix panel example in IE.
http://trac.osgeo.org/openlayers/timeline?from=2007-05-26T22%3A28%3A22-0700&precision=second
Starting from this Module, be careful with source code that spans more than one line: when you copy and paste into your text editor or compiler editor, rejoin the statement into one line! This Module is a transition from C to C++. C++ topics such as functions, arrays, pointers and structures that have already been discussed in the C Tutorial will not be repeated; they are reusable! This Module and the ones that follow can be a very good foundation for object-oriented programming, though it is an old story :o). The source code is available in C++ Encapsulation source code.

 1. // program start.cpp
 2. #include <iostream>
 3. using namespace std;
 4.
 5. struct item        // a struct data type
 6. {
 7.     int keep_data;
 8. };
 9.
10. int main()
11. {
12.     item John_cat, Joe_cat, Big_cat;
13.     int garfield;        // a normal variable
14.
15.     John_cat.keep_data = 10;   // assigning values
16.     Joe_cat.keep_data = 11;
17.     Big_cat.keep_data = 12;
18.     garfield = 13;
19.
20.     // displaying data
21.     cout<<"Data value for John_cat is "<<John_cat.keep_data<<"\n";
22.     cout<<"Data value for Joe_cat is "<<Joe_cat.keep_data<<"\n";
23.     cout<<"Data value for Big_cat is "<<Big_cat.keep_data<<"\n";
24.     cout<<"Data value for garfield is "<<garfield<<"\n";
25.     cout<<"Press Enter key to quit\n";
26.     // system("pause");
27. }

Next, please refer to the example program class.cpp. This program is identical to the last one except for a few portions. The first difference is that we have a class instead of a structure, beginning in line 7:

class item

The only difference between a class and a structure is that a class begins with a private section by default, whereas a structure begins with a public section. The keyword class is used to declare a class, as illustrated. The class named item is composed of the single variable named keep_data and two functions, set() and get_value(). A more complete definition of a class is: a group of variables (data) and one or more functions that can operate on that data.
In programming-language terms, the attributes or properties of an object are kept in its member variables, and its behaviors are implemented by its member functions.

 1. // program class.cpp using class instead of struct
 2. #include <iostream>
 3. using namespace std;
 4.
 5. // the class declaration part
 6.
 7. class item
 8. {
 9.     int keep_data;   // private by default, it is public in struct
10.     public:          // public part
11.         void set(int enter_value);
12.         int get_value(void);
13. };
14.
15. // class implementation part
16.
17. void item::set(int enter_value)
18. {
19.     keep_data = enter_value;
20. }
21. int item::get_value(void)
22. {
23.     return keep_data;
24. }
25.
26. // main program
27. int main()
28. {
29.     item John_cat, Joe_cat, Big_cat;
30.     // three objects instantiated
31.     int garfield;    // a normal variable
32.
33.     John_cat.set(10);   // assigning values
34.     Joe_cat.set(11);
35.     Big_cat.set(12);
36.     garfield = 13;
37.     // John_cat.keep_data = 100;
38.     // Joe_cat.keep_data = 110;
39.     // these are illegal because keep_data now is private by default
40.
41.     cout<<"Accessing data using class\n";
42.     cout<<"-------------------------\n";
43.     cout<<"Data value for John_cat is "<<John_cat.get_value()<<"\n";
44.     cout<<"Data value for Joe_cat is "<<Joe_cat.get_value()<<"\n";
45.     cout<<"Data value for Big_cat is "<<Big_cat.get_value()<<"\n";
46.     cout<<"\nAccessing data normally\n";
47.     cout<<"---------------------------\n";
48.     cout<<"Data value for garfield is "<<garfield<<"\n";
49.
50.     // system("pause");
51. }

All data at the beginning of a class defaults to private, and the private keyword is optional. This means the data at the beginning of the class cannot be accessed from outside of the class; it is hidden from any outside access. Therefore, the variable named keep_data, which is part of the object named John_cat and which lines 37 and 38 attempt to access, is not available for use anywhere in the main() program.
That is why we have to comment out the following code:

// John_cat.keep_data = 100;
// Joe_cat.keep_data = 110;

It is as if we have built a wall around the variables to protect them from accidental corruption by outside programming influences. The concept is shown graphically in Figure 12.1: the item class with its wall built around the data to protect it. You will notice the small peep holes (indicated by the arrows) we have opened up to allow the user to gain access to the functions set() and get_value(). The peep holes were opened by declaring the functions in the public section of the class.

void item::set(int enter_value)
{
    keep_data = enter_value;
}
int item::get_value(void)
{
    return keep_data;
}

These two function definitions are called the implementation of the functions. The class name is required because we can use the same function name in other classes, and the compiler must know with which class to associate each function implementation. Notice that the private data contained within the class is available within the implementation of the member functions of the class, for modification or reading in the normal manner. You can do anything with the private data within the function implementations that are part of that class; the private data of other classes, however, is hidden and not available within the member functions of this class. This is the reason we must prepend the class name to the function names of this class when defining them. Figure 12.2 depicts the data space following the program execution. It is legal to declare variables and functions in the private part, and additional variables and functions in the public part as well. In most practical situations, variables are declared only in the private section and functions only in the public part of a class definition. Occasionally, variables or functions are declared in the other part.
This sometimes leads to a very practical solution to a particular problem, but in general the entities are used only in the places mentioned, for consistency and good programming style. A variable with class scope is available anywhere within the scope of a class, including the implementation code, and nowhere else. Hence, the variable named keep_data has class scope. There is some terminology whose meaning you need to understand for this object-oriented programming discussion. We have defined that objects have attributes, and that by sending a message we can ask an object to do something, that is, to perform an action; when something is done, there is an event. Now, for the program named class.cpp, we can say that we have a class composed of one variable and two methods. The methods operate on the variable contained in the class when they receive messages to do so. Lines 11 and 12 of this program are actually the prototypes for the two methods, and are our first example of prototype usage within a class, as shown below:

void set(int enter_value);
int get_value(void);

Line 11 says that the method named set() requires one parameter of type int and returns nothing; hence the return type is void. The method named get_value(), according to line 12, has no input parameters but returns an int value to the caller.

12.3.4 Sending A Message Or Function Call?

After the definitions in lines 2 through 24, we finally come to the program where we actually use the class. In line 29 we instantiate three objects of class item and name the objects John_cat, Joe_cat and Big_cat. Each object contains a single data point which we can set through the use of the set() method or read through the use of the get_value() method, but we cannot directly set or read the value of the data point, because it is hidden within the wall around the class, as if it is in a container.
In line 33, we send a message to the object named John_cat instructing it to set its internal value to 10, and even though this looks like a function call, it is properly called sending a message to a method. It is shown below:

John_cat.set(10);   // assigning values

Remember that the object named John_cat has a method associated with it called set() that sets its internal value to the actual parameter included within the message. You will notice that the form is very much like the means of accessing the elements of a structure: you mention the name of the object with a dot connecting it to the name of the method. This means: perform the operation set(), with argument 10, on this instance of the object, John_cat. In a similar manner, we send a message to each of the other two objects, Joe_cat and Big_cat, to set their values to those indicated. Lines 37 and 38 have been commented out because the operations are illegal: the variable named keep_data is private by default and therefore not available to code outside of the object itself. Also, the data contained within the object named John_cat is not available within the methods of Joe_cat or Big_cat, because they are different objects. The other method defined for each object is used in lines 43 through 45 to illustrate its usage. In each case another message is sent to each object, and the returned result is output to the standard output (the screen) via the cout stream. There is another variable, named garfield, declared and used throughout this example program to illustrate that a normal variable can be intermixed with the objects and used in the normal manner. Compile and run this program. Try removing the comments from lines 37 and 38, and then see what kind of error messages your compiler issues.

Examine the program named robject.cpp carefully; it has a few serious problems that will be overcome in the next program example by using the principles of encapsulation.

 1. // program robject.cpp
 2. #include <iostream>
 3. using namespace std;
 4.
 5. // a function prototype
 6. int surface_area(int rectangle_height, int rectangle_width);
 7.
 8. struct rectangle
 9. {
10.     int height;   // public
11.     int width;    // public
12. };
13.
14. struct pole
15. {
16.     int length;   // public
17.     int depth;    // public
18. };
19.
20. // rectangle area
21. int surface_area(int rectangle_height, int rectangle_width)
22. {
23.     return (rectangle_height * rectangle_width);
24. }
25.
26. // main program
27. int main()
28. {
29.     rectangle wall, square;
30.     pole lamp_pole;
31.
32.     wall.height = 12;   // assigning values
33.     wall.width = 10;
34.     square.height = square.width = 8;
35.
36.     lamp_pole.length = 50;
37.     lamp_pole.depth = 6;
38.
39.     cout<<"Area of wall = height x width, OK!"<<"\n";
40.     cout<<"-------------------------------------"<<"\n";
41.     cout<<"----> Area of the wall is "<<surface_area(wall.height,
42.          wall.width)<<"\n\n";
43.     cout<<"Area of square = height x width, OK!"<<"\n";
44.     cout<<"-------------------------------------"<<"\n";
45.     cout<<"----> Area of square is "
46.          <<surface_area(square.height,square.width)<<"\n\n";
47.     cout<<"Non related area?"<<"\n = height of square x width of the wall?"
48.          <<"\n";
49.     cout<<"-------------------------------------"<<"\n";
50.     cout<<"----> Non related surface area is "
51.          <<surface_area(square.height,wall.width)<<"\n\n";
52.     cout<<"Wrong surface area = height of square"
53.          <<"\nx depth of lamp pole?"<<"\n";
54.     cout<<"-------------------------------------"<<"\n";
55.     cout<<"---->Wrong surface area is "
56.          <<surface_area(square.height,lamp_pole.depth)<<"\n";
57.
58.     // system("pause");
59. }

We have two structures declared, one being a rectangle and the other a pole. The depth of the lamp pole is the depth to which it is buried in the ground; the overall length of the pole is therefore the sum of the height and the depth. Figure 12.3 tries to describe the data space after the program execution.
You may be a bit confused by the result computed in lines 50 and 51, where we multiply the height of the square by the width of the wall; this is only possible because the data can be accessed publicly. Similarly, although the computation in lines 55 and 56 is legal, the result is meaningless: the product of the height of the square and the depth of the lamp_pole corresponds to nothing in any physical system we can think up, because the two values are unrelated, so the result is useless. The error is obvious in a program as simple as this, but in a large production program it is very easy for such problems to be inadvertently introduced into the code by a team of programmers, and the errors can be very difficult to find. If we had a program that defined all of the things we can do with a square's data, and another program that defined everything we could do with a lamp_pole's data, and if the data could be kept mutually exclusive, we could prevent these silly things from happening. If these entities must interact, they cannot be put into separate programs, but they can be put into separate classes to achieve the desired goal. Compile and run the program.

tenouk fundamental of C++ object oriented tutorial
http://www.tenouk.com/Module12.html
I am trying to connect to SQL Server 2008 using a JDBC connection. I have read some older posts on this topic (SQL Server 2000, SQL Server 2005) and I feel there may be some minor variations, so I am posting on this forum. I am running Win XP with SP 3. I have downloaded the MS SQL Server JDBC Driver 3.0 and appended the following CLASSPATH on the command prompt:

c:>set CLASSPATH= C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar

(The path and the spelling are correct.) When SQL Server 2008 is up and running, I am using the following program to test a connection to the database:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.DriverManager;
import java.lang.*;

class TestDB
{
    public static void main (String [] args)
    {
        try
        {
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        }
        catch(ClassNotFoundException cnfe)
        {
            System.out.println(cnfe);
        }
    }
}

It compiles fine but throws a runtime exception, java.lang.ClassNotFoundException. Can somebody pls help me troubleshoot. Thanks Raghu
https://www.daniweb.com/programming/software-development/threads/298726/ms-sql-server-2008-jdbc-connection-java-lang-classnotfoundexception
From: Stewart, Robert (stewart_at_[hidden])
Date: 2002-02-01 11:49:57

From: Karl Nelson [SMTP:kenelson_at_[hidden]]
> > > From: Karl Nelson [SMTP:kenelson_at_[hidden]]
> > >
> > > Does cout << format("%#010x" , complex<int>(20,21) ); mean
> > > A) the user wants all integers output to be show base with
> > > 12 charactors zero padded. 0x00000016+0x00000017i
> > > B) The total width should be 10 with showbase and zero padding?
> > > 0x16+0x17i
> >
> > I would definitely expect B. Otherwise, the width has relatively little
> > control over the output; any user-defined type could wreak havoc on
> > formatting.
> >.
> ==========================================================
> #include <iostream>
> #include <iomanip>
> #include <complex>
> using namespace std;
> int main()
> {
>   for (int i=1;i<1000000;i*=10)
>   {
>     cout << setw(5) << i << " ";
>     cout << setw(5) << complex<int>(i) << endl;
>   }
> }
>
>      1 (1,0)
>     10 (10,0)
>    100 (100,0)
>   1000 (1000,0)
>  10000 (10000,0)
> 100000 (100000,0)
> ===========================================================
>
> Thus either A or B would have been better.

Since your width was only five, you never got to see any padding with the complex<> output. cout was using setw(5) as indicating a minimum width, but the result was always at least five characters.

>.

Rob
Susquehanna International Group, LLP

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2002/02/24343.php
Abstract

Versioning of Atom is done with the namespace only; there is no "version" element or attribute. Extensibility is done using normal XML extensions; there is no mustUnderstand or mustIgnore attribute.

Rationale

The Working Group has had a hard time coming to consensus on versioning, possibly because we know in our hearts that we cannot predict the future about how either will be needed. Some folks want to try to make future changes happen as gracefully as possible, but implementing that today takes the ability to guess what kinds of changes we will want to make. We don't have that ability. The proposals for extensibility that include mustIgnore and mustUnderstand are, in essence, telling Atom applications how they should act, even though those actions don't affect technical interoperability. Some folks like that; others (particularly people who have already-deployed RSS readers) don't seem to. So, instead of trying to predict the future and control future changes, we might just admit that we have to live with whatever mistakes we make in Atom 1.0. If a mistake is egregious, it can get fixed by Atom 2.0 with a different namespace. By allowing Atom processors to do whatever they want with extensions they don't recognize, we will probably find that software quickly figures out what to do without our stern advice.

Proposal

Remove section 4.1 on the "version" attribute. Remove mention of the version attribute from a few places in the rest of the document, including from the example. Add a section on how to extend Atom. The section should say, basically:

- Extensions to Atom are done using normal XML extension mechanisms.
- An Atom-reading system MUST NOT fail if it sees an extension that it does not recognize.
- An Atom-reading system that sees an extension that it does not understand might ignore it completely, might show an indication that something was seen but not understood, might show the content of the extension, or something else as it pleases. - An Atom-producing system that includes extensions needs to understand that Atom-reading systems might not understand the extension and might act in ways that the producer could not predict.
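These rules can be illustrated with a concrete entry. The sketch below is mine, not part of the Pace: the ex: namespace and the rating element are hypothetical, and the Atom namespace URI shown is the one eventually adopted for Atom 1.0.

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:ex="http://example.org/hypothetical-ext">
  <title>Sample entry</title>
  <!-- ex:rating is a normal XML extension in a foreign namespace.
       A reader that does not recognize it MUST NOT fail; it may
       ignore it, indicate that something unrecognized was seen,
       or display its content, as it pleases. -->
  <ex:rating>5</ex:rating>
</entry>
```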
http://www.intertwingly.net/wiki/pie/PaceNoVersioningPlainExtensibility
On Mon, 24 Jan 2005 16:28:47 +0100, Nicolas Lehuen <nicolas.lehuen at gmail.com> wrote: > On Mon, 24 Jan 2005 09:12:24 -0500, Jorey Bump <list at joreybump.com> wrote: > > Nicolas Lehuen wrote: > > > On Sun, 23 Jan 2005 18:38:43 -0500, Graham Dumpleton > > > <grahamd at dscpl.com.au> wrote: > > > > >. > > > > I guess I'm on record as considering this to be normal Python behaviour, > > and not a bug. Talk of modifying Publisher's current import mechanism > > threatens the significant investment I have in the applications I've > > already developed. I'd rather see Publisher left alone, as other > > frameworks are maturing that address this issue. > > > > We should be clear about what we are talking about: > > > > Issue: > > > > An interpreter imports a module's name into the namespace independent of > > its location in the filesystem. > > > > Pros: > > > > - This is consistent with normal Python behaviour, where an interpreter > > loads modules from the current directory or PYTHONPATH. [1] > > > > - There is no ambiguity about imports, and no need to rewrite code when > > moving the module to another location. > > > > - The Python language documentation serves as a good resource, as native > > behaviours are not contradicted. [2] > > > > - General purpose code needs *very* little modification to run under > > mod_python.publisher. > > > > - The default Python behaviour of caching imported modules can be used > > to great leverage, and plays a significant role in the performance boost > > offered by mod_python. [3] > > > > - With careful planning, per-interpreter package repositories can be > > created. > > > > Cons: > > > > - This consistently trips up newbies, who are expecting PHP-like > > behaviour from their web applications. 
> > > > - Within an interpreter (a VirtualHost, for example), such care must be > > taken to create unique module names that porting an application or > > working in a team environment is significantly more fragile than in > > other web technologies, where it usually suffices to throw the code in > > its own directory. > > > > - Although a lot of issues can be addressed in .htaccess files, so much > > configuration occurs at the apache level that it is difficult to use > > mod_python effectively without apache admin privileges. [4] > > > > - The Multi-Processing Modules (MPMs) in apache 2 may have a unique > > effect on an application. After throwing in the different platforms, > > Python versions, and developer styles, it can be very hard to reach a > > consensus on whether an issue even exists. > > > > Possible solution: > > > > Automatically package module based on the file path. > > > > Pros: > > > > - Hopefully, this would provide better separation of modules in namespace. > > > > Cons: > > > > - Serious potential to break existing Publisher applications. > > > > - The same module might be loaded into a separate namespace for > > different applications, eliminating the advantage of module caching. [5] > > > > - This encourages the reuse of module names (such as index.py), which is > > considered a bad programming practice in Python. [1] > > > > - If the code assumes and uses a package name based on its location, it > > will need to be rewritten if it is moved. > > > > - The possibility that autopackaging will arbitrarily create a namespace > > clash with a standard module is unacceptably high, especially where > > third party packages are installed. > > > > - It severely limits the ability to test portions of the code from the > > command line, because it adds an element of unpredictability to the > > module search path. > > > > - Seasoned Python programmers might consider mod_python.publisher to be > > "broken" because it doesn't conform to Python standards. 
[6] > > > > [1] For a clear description of Python's module search path, see: > > > > > > [2] The documentation for mod_python is sparse because it simply embeds > > an interpreter in apache. See the Python documentation for programming > > issues: > > > > [3] This seems to be the main complaint levied against mod_python, in > > one form or the other. I consider it one of it's greatest strengths. For > > example, it allows me to import database handling code from a module > > once for that interpreter, allowing multiple applications to share the > > connection -- a poor man's database pool, if you will. > > > > [4] The same can be said about php.ini, but noone ever needs to restart > > apache to develop PHP applications. > > > > [5] What is the solution to the problems caused by module caching? It > > doesn't make much sense to force mod_python to act like it's spawning a > > new interpreter for every request. Run the app as a CGI instead of using > > mod_python. > > > > [6] The strength of Publisher is that it leverages the advantages of > > Python with very little overhead, simply adding a number of conveniences > > specific to web application development. mod_python.psp is a much better > > area to radically depart from this paradigm, as it is expected to > > provide PHP-like behaviour. > > Sorry, I don't have much time, so for now I'll give a short answer : I > agree that importing modules should be consistent between the standard > Python way (the import keyword) and the apache.import_module way. > > The problem is that mod_python.publisher does not follow the standard > Python way in that it imports published modules as root-level modules. > It's as if mod_python.publisher changed sys.path for each and every > published module. This is not the standard Python way, and the source > for many errors. 
>
> Note that the current package system in Python is way, way smarter
> than mod_python.publisher: in a package, you can import fellow
> modules by giving their local names. It's not recommended by
> pychecker, but it works. The smart part is that if I have this:
>
> /
>     package1.py
>     /subdir
>         __init__.py
>         package1.py
>         package2.py
>
> Then whatever I do, /subdir/package1.py is imported as subdir.package1
> and does not collide with /package1.py. mod_python.publisher is not as
> smart; this is very confusing and surprising, nearly everybody
> complains about this, and hence I think this should be filed as a bug.
>
> So maybe apache.import_module should not be modified (except for
> performance, but that's another issue which is already in the bugs
> database), but mod_python.publisher sure has to be changed.
>
> As far as I'm concerned, this bug was sufficient to have me implement
> a publisher which works the way I want; Daniel Popovitch and Graham
> Dumpleton did the same with different designs. If each and every
> developer develops his own publisher, it may be time to think about
> changing the built-in one.
>
> Regards,
> Nicolas

Or it could be the sign that mod_python.publisher should be completely erased from the project, since it poses many more problems than it solves. Maybe we should include both mpservlets and Vampire in mod_python, and forget about the default publisher (except maybe for compatibility issues).

Regards,
Nicolas
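As an aside, the package behaviour described above can be demonstrated with a small self-contained script. This sketch is an editorial illustration (the module contents are invented); it builds the same directory layout from the thread and shows that subdir.package1 does not collide with the top-level package1:

```python
# Demonstrates why package-qualified names avoid the collisions that
# mod_python.publisher's root-level imports suffer from.
import os
import sys
import tempfile
import importlib

root = tempfile.mkdtemp()

# Top-level module: /package1.py
with open(os.path.join(root, "package1.py"), "w") as f:
    f.write("WHO = 'top-level package1'\n")

# Package with a module of the same name: /subdir/package1.py
os.mkdir(os.path.join(root, "subdir"))
open(os.path.join(root, "subdir", "__init__.py"), "w").close()
with open(os.path.join(root, "subdir", "package1.py"), "w") as f:
    f.write("WHO = 'subdir.package1'\n")

sys.path.insert(0, root)
top = importlib.import_module("package1")
nested = importlib.import_module("subdir.package1")

# The two modules live under distinct dotted names in sys.modules and
# do not clobber each other.
print(top.WHO)     # top-level package1
print(nested.WHO)  # subdir.package1
```
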
https://modpython.org/pipermail/mod_python/2005-January/017200.html
CC-MAIN-2022-21
refinedweb
1,072
55.84
Memory Management in Java: Garbage Collection Tuning and Optimization

Tuning garbage collection is an important task in designing scalable applications in Java. It is similar to any other performance-tuning activity: blindly working through the 200 or so GC-related JVM parameters will make little difference. Instead, following a simple process will guarantee progress. The general steps in the process are: measure the system's current performance, set a new target, and optimize only the aspects required to meet the target.

General Concepts

Let us begin with an example from manufacturing by observing an assembly line in a factory. The line is assembling Tesla cars from ready-made components. Tesla Model X cars are built on this line in a sequential manner. Monitoring the line in action, we measure that it takes four hours to complete a car from the moment the frame enters the line until the assembled car leaves the line at the other end. Continuing our observations we can also see that one car is assembled after each minute, 24 hours a day, every day. Simplifying the example and ignoring maintenance windows, we can calculate that in any given hour such an assembly line assembles 60 Tesla cars.

Equipped with these two measurements, we now possess crucial information about the current performance of the assembly line in regard to latency and throughput:

- Latency of the assembly line: 4 hours
- Throughput of the assembly line: 60 cars/hour

Notice that latency is measured in time units. Throughput of a system is measured in completed operations per time unit. The demand for the Tesla cars has been steady for a while and the assembly line has kept producing cars with the latency of four hours and the throughput of 60 cars/hour for months. Now the demand for the cars suddenly doubles. Instead of the usual 60 * 24 = 1,440 cars/day the customers demand twice as many. The performance of the factory is no longer satisfactory and something needs to be done.
Elon Musk correctly concludes that the latency of the system is not a concern – instead he should focus on the total number of cars produced per day. Coming to this conclusion and assuming he is well funded through the sales of PayPal, Elon would immediately take the necessary steps to improve throughput by adding capacity. As a result we would now be observing not one but two identical assembly lines in the same factory. Both of these assembly lines would be assembling the very same car every minute of every day. By doing this, Tesla has doubled the number of cars produced per day. Instead of the 1,440 cars the factory is now capable of shipping 2,880 cars each day. It is important to note that we have not reduced the time to complete an individual car, as it still takes four hours to complete a car from start to finish.

In the example above a performance optimization task was carried out, coincidentally impacting both throughput and capacity. As in any good example we started by measuring the system's current performance, then set a new target and optimized the system only in the aspects required to meet the target. In this example an important decision was made – the focus was on increasing throughput, not on reducing latency. While increasing the throughput, we also needed to increase the capacity of the system. Instead of a single assembly line we now needed two assembly lines to produce the required quantity. So in this case the added throughput was not free; the solution needed to be scaled out in order to meet the increased throughput requirement.

An important alternative should also be considered for the performance problem at hand. The seemingly unrelated latency of the system actually hides a different solution to the problem. If the time between two finished cars leaving the line could have been reduced from one minute to 30 seconds, the very same increase of throughput would suddenly be possible without any additional capacity.
Whether or not reducing latency was possible or economical in this case is not relevant. What is important is a concept very similar to software engineering – you can almost always choose between two solutions to a performance problem. You can either throw more hardware towards the problem or spend time improving the poorly performing code.

Latency

Latency goals for the GC have to be derived from generic latency requirements. Generic latency requirements are typically expressed in a form similar to the following:

- All user transactions must respond in less than 10 seconds
- 90% of the invoice payments must be carried out in under 3 seconds
- Recommended products must be rendered to a purchase screen in less than 100 ms

When facing performance goals similar to the above, we would need to make sure that the duration of GC pauses during the transaction does not contribute too much to violating the requirements. "Too much" is application-specific and needs to take into account other factors contributing to latency, including round-trips to external data sources, lock contention issues and other safe points among these.

Let us assume our performance requirements state that 90% of the transactions to the application need to complete under 1,000 ms and no transaction can exceed 10,000 ms. Out of those generic latency requirements let us again assume that GC pauses cannot contribute more than 10%. From this, we can conclude that 90% of GC pauses have to complete under 100 ms, and no GC pause can exceed 1,000 ms. For simplicity's sake let us ignore in this example multiple pauses that can occur during the same transaction.

Having formalized the requirement, the next step is to measure pause durations. There are many tools for the job. Let us use GC logs, namely for the duration of GC pauses.
The information required is present in the GC log, so let us take a look at which parts of it are actually relevant. Consider an event that stopped the application threads for 0.0713174 seconds. Even though it took 210 ms of CPU time on multiple cores, the important number for us to measure is the total stop time for application threads, which in this case, where parallel GC was used on a multi-core machine, is equal to a bit more than 70 ms. This specific GC pause is thus well under the required 100 ms threshold and fulfils both requirements.

Extracting information similar to the example above from all GC pauses, we can aggregate the numbers and see whether or not we are violating the set requirements for any of the pause events triggered.

Throughput

Throughput requirements are different from latency requirements. The only similarity that the throughput requirements share with latency is the fact that again, these requirements need to be derived from generic throughput requirements. Generic requirements for throughput can be similar to the following:

- The solution must be able to process 1,000,000 invoices/day
- The solution must support 1,000 authenticated users each invoking one of the functions A, B or C every five to ten seconds
- Weekly statistics for all customers have to be composed in no more than six hours each Sunday night between 12 PM and 6 AM

So, instead of setting requirements for a single operation, the requirements for throughput specify how many operations the system must process in a given time unit. The GC tuning part now requires determining the total time that can be spent on GC during the time measured. How much is tolerable for the particular system is application-specific, but as a rule of thumb, anything over 10% would look suspicious. Let us now assume that the requirement at hand foresees that the system processes 1,000 transactions per minute.
Let us also assume that the total duration of GC pauses during any minute cannot exceed six seconds (10% of this time). Having formalized the requirements, the next step would be to harvest the information we need. The source to be used in the example is GC logs, from which we would get information similar to the snippet shown earlier. This time, however, we are interested in user and system times instead of real time. In this case we should focus on 23 milliseconds (21 + 2 ms in user and system times) during which the particular GC pause kept CPUs busy. Even more important is the fact that the system was running on a multi-core machine, translating to the actual stop-the-world pause of 0.0713174 seconds, which is the number to be used in the following calculations.

Extracting the information similar to the above from the GC logs across the test period, all that is left to be done is to verify the total duration of the stop-the-world pauses during each minute. In case the total duration of the pauses does not exceed 6,000 ms, or six seconds, in any of these one-minute periods, we have fulfilled our requirement.

Capacity

Capacity requirements put additional constraints on the environment where the throughput and latency goals can be met. These requirements might be expressed either in terms of computing resources or in cold hard cash. The ways in which such requirements can be described can take the following form:

- The system must be deployed on Android devices with less than 512 MB of memory

Such constraints must be satisfied while still fulfilling the latency and throughput requirements. With unlimited computing power, any kind of latency and throughput targets could be met, yet in the real world the budget and other constraints have a tendency to set limits on the resources one can use.

Example

Consider a Java program that submits two jobs to run every 100 ms.
Each job emulates a batch of object allocations, so we can immediately see the impact of GC in the log files, similar to the following:

    2015-06-04T13:34:16.119-0200: 1.723: [GC (Allocation Failure)
        [PSYoungGen: 114016K->73191K(234496K)] 421540K->421269K(745984K), 0.0858176 secs]
        [Times: user=0.04 sys=0.06, real=0.09 secs]
    2015-06-04T13:34:16.738-0200: 2.342: [GC (Allocation Failure)
        [PSYoungGen: 234462K->93677K(254976K)] 582540K->593275K(766464K), 0.2357086 secs]
        [Times: user=0.11 sys=0.14, real=0.24 secs]
    2015-06-04T13:34:16.974-0200: 2.578: [Full GC (Ergonomics)
        [PSYoungGen: 93677K->70109K(254976K)]
        [ParOldGen: 499597K->511230K(761856K)] 593275K->581339K(1016832K)
        [Metaspace: 2936K->2936K(1056768K)]]

Let us assume that we have a throughput goal to process 13,000,000 jobs/hour.
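The listing for the job-submitting program is not preserved in this copy. The following is a hypothetical reconstruction of such a workload, not the original code: two allocation-heavy jobs scheduled every 100 ms, so GC events show up in the logs when the JVM is run with GC logging enabled:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class GarbageProducer {
    static final AtomicLong BATCHES = new AtomicLong();

    // Each run allocates a batch of short-lived objects that die young,
    // putting pressure on the young generation collector.
    static void job() {
        Map<Object, Object> batch = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            batch.put(new Object(), new byte[512]);
        }
        BATCHES.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        // Two jobs, each submitted every 100 ms.
        pool.scheduleAtFixedRate(GarbageProducer::job, 0, 100, TimeUnit.MILLISECONDS);
        pool.scheduleAtFixedRate(GarbageProducer::job, 0, 100, TimeUnit.MILLISECONDS);
        Thread.sleep(1000); // let a few GC cycles happen
        pool.shutdownNow();
        System.out.println("allocated " + BATCHES.get() + " batches");
    }
}
```

Running such a program with -XX:+PrintGCDetails (or -Xlog:gc on newer JVMs) produces log lines like the ones above, from which the pause durations can be aggregated.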
https://iq.opengenus.org/memory-management-in-java-garbage-collection-tuning-and-optimization/
CC-MAIN-2018-51
refinedweb
1,764
50.46
Coordinate of each element in a list of list

Hello, I am new in Python. I am trying to find the distance between each vertex and its neighbours. I access the neighbours of each vertex with list(M.Vertices.GetVertexNeighbours(i)). However, I could not access the point coordinates of each neighbour by writing M.Vertices.GetVertexNeighbours(i[0]). Could you suggest a way to get a list of point coordinates of each neighbour?

Hi @mimosapudica, GetVertexNeighbours returns an array of vertex indices, not Points. To access the vertex point and measure the distance you would have to access one neighbour in the loop at line 17 like this:

neighbour_vertex_pt = M.Vertices[j]

then get the distance to the vertex you store at line 10 like this:

distance = V.DistanceTo(neighbour_vertex_pt)

_ c.

Thanks, Clement. I understood that neighbour_vertex_pt = M.Vertices[j] gives each neighbour of a node in the list. When I use DistanceTo, I get this error:

Runtime error (MissingMemberException): 'PlanktonVertex' object has no attribute 'DistanceTo'

I also tried to use the line below, but could not get the distance between each node and its neighbours; I got the same error:

distances = neighbour_vertex_pt.pos - V.pos

Here is the code:

import rhinoscriptsyntax as rs
import Rhino.Geometry as rg
import Rhino

V_count = M.Vertices.Count # number of all vertices
V_list = []
VN_list = []
for i in range(V_count):
    V = M.Vertices[i]
    #N = M.Vertices.GetVertexNeighbours(i)
    point = (V.X, V.Y, V.Z)
    nodes = rs.AddPoint(point)
    V_list.append(nodes)
    VN_idx = list(M.Vertices.GetVertexNeighbours(i))
    for j in VN_idx:
        number_of_Neighbours = VN_idx.Count
        neighbour_vertex_pt = M.Vertices[j]
        distances = neighbour_vertex_pt.pos - V.pos

Try converting your PlanktonMesh into a regular Rhinocommon Mesh before plugging it into the Python component, and make sure the type hint on the input is set to Mesh or None.
Also, if you are working directly with Rhino.Geometry types, try not to mix in rhinoscriptsyntax methods like rs.AddPoint, or there will be unnecessary trouble converting back and forth from Guids.

Is there a reason you use the Plankton mesh as input, or do you need the halfedge structure at all? If not, I would suggest using the regular Mesh component as input to your Python component; then you can omit GetVertexNeighbours and use the GetConnectedVertices method instead to get the neighbours. If you need the Plankton mesh as input, you can try to get a Point3d from the Plankton vertex, then DistanceTo works:

import Rhino.Geometry as rg
import Rhino

for i, p_vertex in enumerate(M.Vertices):
    # get 3d point from plankton vertex
    v_pt = rg.Point3d(p_vertex.X, p_vertex.Y, p_vertex.Z)
    # get neighbour indices
    neighbour_indices = M.Vertices.GetVertexNeighbours(i)
    print "Vertex {} has {} neighbours".format(i, neighbour_indices.Count)
    for n in neighbour_indices:
        # get 3d point from neighbour plankton vertex
        n_pt = rg.Point3d(M.Vertices[n].X, M.Vertices[n].Y, M.Vertices[n].Z)
        # measure distance
        distance = v_pt.DistanceTo(n_pt)
        print "Distance to neighbour with index {} = {}".format(n, distance)

_ c.

Thank you @clement. I solved the neighbouring problem. I will try to deform the mesh and relax it. I am trying to displace the vertices of the mesh with iteration. When I click Test in the GH Python component each time, it does not reset the mesh. I tried to find a copy-mesh method in GH Python, but it did not work. How can we make a complete copy of a mesh inside GH Python? How can I reset the mesh? 190525_Mesh_reset.gh (14.2 KB)

Hi @mimosapudica, I am getting an error if I try to open your file; MeshFromPoints is missing and I was not able to install it. Have you tried something like mycopy = mesh.Duplicate()? _ c.
https://discourse.mcneel.com/t/coordinate-of-each-element-in-a-list-of-list/63265
CC-MAIN-2018-51
refinedweb
638
68.26
Walkthrough: Authoring a Composite Control with Visual C#

Composite controls provide a means by which custom graphical interfaces can be created and reused. To begin, on the File menu, point to New, and then click Project to open the New Project dialog box. From the list of Visual C# projects, select the Windows Forms Control Library project template, type ctlClockLib in the Name box, and then click OK. In Solution Explorer, right-click UserControl1.cs, and then click Rename. Change the file name to ctlClock.cs. Click the Yes button when you are asked if you want to rename all references to the code element "UserControl1".

To add a Label and a Timer to your composite control: In Solution Explorer, right-click ctlClock.cs, and then click View Designer. In the Toolbox, expand the Common Controls node, and then double-click Label. A Label control named label1 is added to your control on the designer surface. In the designer, click label1. In the Properties window, set the required properties. On the File menu, click Save All to save the project.

In the next procedure, you will add properties to your control that enable the user to change the color of the background and text. To add a property to your composite control: In Solution Explorer, right-click ctlClock.cs, and then click View Code. On the File menu, click Save All to save the project.

Controls are not stand-alone applications; they must be hosted in a container. Test your control's run-time behavior and exercise its properties with the UserControl Test Container. For more information, see How to: Test the Run-Time Behavior of a UserControl. To test your control: Press F5 to build the project and run your control in the UserControl Test Container. In the test container's property grid, change the values of your control's properties and observe their effect.

In Solution Explorer, right-click ctlClockLib, point to Add, and then click User Control. The Add New Item dialog box opens. Select the Inherited User Control template. In the Name box, type ctlAlarmClock.cs, and then click Add. The Inheritance Picker dialog box appears. Under Component Name, double-click ctlClock. In Solution Explorer, browse through the current projects.

Adding the Alarm Properties.
To add properties to your composite control: In Solution Explorer, right-click ctlAlarmClock, and then click View Code. Locate the public class statement. Note that your control inherits from ctlClockLib.ctlClock. Beneath the opening brace ({) of the class, type the code for the alarm properties. [C#]

You can add controls to the inherited control in the same manner as you would add them to any composite control. To continue adding to your alarm clock's visual interface, you will add a label control that will flash when the alarm is sounding. To add the label control: In Solution Explorer, right-click ctlAlarmClock, and then click View Designer. [C#]

In the Code Editor, locate the closing brace (}) at the end of the class. Just before the brace, add the following code. [C#]

// If the alarm time matches the current time, flash an alarm.
if (AlarmTime.Date == DateTime.Now.Date
    && AlarmTime.Hour == DateTime.Now.Hour
    && AlarmTime.Minute == DateTime.Now.Minute)
{
    // Sets lblAlarmVisible to true, and changes the background color
    // based on the value of blnColorTicker.
}

Using the Inherited Control on a Form: In Solution Explorer, right-click ctlClockLib, and then click Build. In the designer, double-click dtpTest. The Code Editor opens to the private void dtpTest_ValueChanged event handler. Modify the code so that it resembles the following. [C#]
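The [C#] listings did not survive in this copy, but the fragment above hints at the shape of the alarm check. The following is a hypothetical sketch, not the original walkthrough listing; AlarmTime and blnColorTicker come from the fragment, while AlarmSet, lblAlarm, lblDisplay and timer1 are invented names:

```csharp
// Hypothetical reconstruction of a timer-driven alarm check
// (not the original MSDN listing).
private void timer1_Tick(object sender, EventArgs e)
{
    // Keep the inherited clock display current.
    lblDisplay.Text = DateTime.Now.ToLongTimeString();

    // If the alarm is set and its time matches the current time,
    // flash the alarm label.
    if (AlarmSet
        && AlarmTime.Date == DateTime.Now.Date
        && AlarmTime.Hour == DateTime.Now.Hour
        && AlarmTime.Minute == DateTime.Now.Minute)
    {
        lblAlarm.Visible = true;
        // Alternate the color on each tick to produce the flashing effect.
        lblAlarm.BackColor = blnColorTicker ? Color.Red : Color.Blue;
        blnColorTicker = !blnColorTicker;
    }
    else
    {
        lblAlarm.Visible = false;
    }
}
```

The pattern itself (a Timer tick that both refreshes the display and evaluates the alarm condition) is what the walkthrough is teaching, regardless of the exact member names.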
http://msdn.microsoft.com/en-us/library/a6h7e207(v=vs.100).aspx
CC-MAIN-2013-48
refinedweb
516
68.97
In 1669, Isaac Newton found an algorithm to solve for the roots (values for which the function equals zero) of a polynomial equation. In this method, one guesses a starting value and then repeatedly makes small changes that yield improved approximations to the solution. The process terminates once the desired precision is reached. Newton's original method did not use the derivative of f(x). Later, in 1690, Joseph Raphson found an improvement to Newton's method which did use the derivative of f(x), f'(x).

Each iterative step of the Newton-Raphson method is

    x[n+1] := x[n] - f(x[n]) / f'(x[n])

When the derivative is not available analytically, it can be approximated by the central difference

             f(x+h) - f(x-h)
    f'(x) ≈  ---------------
                  2·h

where h is small.

The steps to apply the Newton-Raphson method to find the root of an equation f(x) = 0 using an approximate derivative are implemented in the Zeno program below. Zeno 1.2 is an interpreter for the Zeno programming language. It is easy to learn and is suitable for educational purposes.

const TOLERANCE : real := 0.00000001
const MAX_ITER : int := 50

program
    var root : real
    var num_roots, i : int
    put "solving: f(x) = exp(x) - 3*x^2"
    repeat
        put "initial guess"...
        get root
        if newton_approx( root ) then
            put "root = ", root
        else
            put "could not solve"
        end if
    until not another
end program

% f(x) = exp(x) - 3*x^2
function Fx( x : real ) : real
    return exp(x) - 3*x^2
end function

% Solve for a root of a function of a single variable by starting
% from an initial guess and using the function value and an
% approximate derivative to search for the solution
function newton_approx( var root : real ) : boolean
    var iter : int := 0
    var h, diff : real
    repeat
        h := 0.01 * root
        % root could be small or zero
        if abs( root ) < 1 then
            h := 0.01
        end if
        % calculate guess refinement
        diff := 2*h*Fx(root) / (Fx(root + h) - Fx(root - h))
        % update guess
        root := root - diff
        iter := iter + 1
    until ( iter > MAX_ITER ) or ( abs( diff ) < TOLERANCE )
    if abs( diff ) <= TOLERANCE then
        return true
    end if
    return false
end function

% returns the absolute value of the argument
function abs( x : real ) : real
    if x < 0.0 then
        x := -x
    end if
    return x
end function

% ask user to continue or not
function another : boolean
    var ans : string
    put "another [Y|N]"...
    get ans
    return (ans[1] = 'Y') or (ans[1] = 'y')
end function

A sample run:

solving: f(x) = exp(x) - 3*x^2
initial guess? 0
root = -0.458962
another [Y|N]? y
initial guess? 1
root = 0.910008
another [Y|N]? y
initial guess? 2
root = 0.910008
another [Y|N]? y
initial guess? 3
root = 3.73308
another [Y|N]? n
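For readers without a Zeno interpreter, here is a close Python transcription of newton_approx (the repeat-until loop is rewritten as a bounded for loop; otherwise it follows the listing above):

```python
import math

TOLERANCE = 1e-8
MAX_ITER = 50

def fx(x):
    # f(x) = exp(x) - 3*x^2, as in the Zeno program
    return math.exp(x) - 3 * x**2

def newton_approx(root):
    """Newton-Raphson with a central-difference derivative.
    Returns (converged, root)."""
    for _ in range(MAX_ITER):
        # step size scales with the guess; fixed when the guess is small
        h = 0.01 * root if abs(root) >= 1 else 0.01
        # guess refinement: f(x) / f'(x), with the derivative
        # approximated by (f(x+h) - f(x-h)) / (2h)
        diff = 2 * h * fx(root) / (fx(root + h) - fx(root - h))
        root -= diff
        if abs(diff) < TOLERANCE:
            return True, root
    return False, root

print(newton_approx(0.0))  # converges to about -0.458962
print(newton_approx(1.0))  # converges to about 0.910008
```

The two printed roots match the sample run above for the initial guesses 0 and 1.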
http://home.att.net/~srschmitt/newtons_method.html
crawl-002
refinedweb
452
66.64
How many times have you wanted to rotate an image in a PictureBox control? I know I've wanted to do it on many occasions. This control makes it simple to rotate your images. Referencing it is easy in code - the namespace is System.Windows.Forms so you can easily declare one as RImage. This control inherits everything, so any property/method a PictureBox has, this control will have as well - however, several additional properties were built into the control to give you other options.

Public Property ShowThrough() As Boolean
Public Property Direction() As DirectionEnum
Public Property Rotation() As Integer
Public Property TransparentColor() As Color

ShowThrough will determine whether or not space not used by the image will be transparent and show the control below it. This is accomplished using regions. Direction, as you notice, is declared as DirectionEnum. This enumeration was created for the control. It tells the control which way to rotate the image: Clockwise, or Counter_Clockwise. Rotation is the angle of rotation of the image. It is in degrees. The range is 0 to 359; however, inputting a number outside of the range will scale it to the appropriate angle that is in the range. TransparentColor will allow you to set a transparent color for the image. The demo project has an image included with a Lime background. If the TransparentColor property is set to Lime, that color in the image will be painted transparent. Unfortunately, it doesn't show the controls below it through, just the parent's background color.

Right-click on an empty area of your toolbox and choose Add/Remove Items. At the dialog screen on the .NET Framework Components tab, choose Browse and navigate to the DLL file. Once you've done this, hit OK. You've now added the control to the toolbox. Using the control is simple. It works almost exactly like the PictureBox control. Add one to your form, and give it an image.
Once it has an image, ShowThrough will start working (if there is no image, OnPaint quits before it can set the transparent region) if it is set. Now just set your Direction from the drop-down box in the Properties window and enter an angle for your Rotation. If you'd like to set a transparent color for your image, you can do that with TransparentColor in the Properties window.

All of the rotation is handled in the OnPaint event. I start by getting the current corners of the image.

Dim bm_in As New Bitmap(MyBase.Image)
Dim wid As Single = bm_in.Width
Dim hgt As Single = bm_in.Height
Dim corners As Point() = { _
    New Point(0, 0), _
    New Point(wid, 0), _
    New Point(0, hgt), _
    New Point(wid, hgt)}

Next I grab the center of the image - the point we want to rotate around - and subtract from each of the corners.

Dim cx As Single = wid / 2
Dim cy As Single = hgt / 2
Dim i As Long
For i = 0 To 3
    corners(i).X -= cx
    corners(i).Y -= cy
Next

Now we need to get the Sine of theta, and the Cosine of theta - which means we need theta.

Dim theta As Single = CSng((_degree) * _direction) * PI / 180
Dim sin_theta As Single = Sin(theta)
Dim cos_theta As Single = Cos(theta)

Here is where the magic comes in; we need to apply the rotation formulas to all the corners.

Dim X As Single
Dim Y As Single
For i = 0 To 3
    X = corners(i).X
    Y = corners(i).Y
    corners(i).X = (X * cos_theta) - (Y * sin_theta)
    corners(i).Y = (Y * cos_theta) + (X * sin_theta)
Next

OK, we've got the rotated corners; let's fix the offset we created when finding the rotation point, and for that we'll need the minimum x and y values.

Dim xmin As Single = corners(0).X
Dim ymin As Single = corners(0).Y
For i = 1 To 3
    If xmin > corners(i).X Then xmin = corners(i).X
    If ymin > corners(i).Y Then ymin = corners(i).Y
Next
For i = 0 To 3
    corners(i).X -= xmin
    corners(i).Y -= ymin
Next

Now we can actually output the rotated image, but first I create a region, using a helper function, based on these corners.
Dim bm_out As New Bitmap(CInt(-2 * xmin), CInt(-2 * ymin))
Dim bgr As Graphics = Graphics.FromImage(bm_out)
Dim rg As Region = CreateTransRegion(corners)
Dim tp As Point = corners(3)
ReDim Preserve corners(2)
bgr.DrawImage(bm_in, corners)

Now we have the rotated image in an output buffer, plus a region to allow transparency for the parts of the control that won't draw the image. Now comes the implementation of the SizeMode. For StretchImage, we'll need the width and height of the rotated image - this is easy as it's stored in the corners array at index 3 - and for stretching or centering we'll need a new region.

Dim gr_out As Graphics = pe.Graphics
gr_out.FillRectangle(New SolidBrush(Me.BackColor), 0, 0, Me.Width, Me.Height)
bm_in.MakeTransparent(_transColor)
If _sizemode = PictureBoxSizeMode.StretchImage Then
    Dim maxW As Integer = tp.X
    Dim maxH As Integer = tp.Y
    For t As Integer = 0 To 2
        If maxW < corners(t).X Then maxW = corners(t).X
        If maxH < corners(t).Y Then maxH = corners(t).Y
    Next
    'get hscale
    Dim hscale As Double = Me.Width / maxW
    'get vscale
    Dim vscale As Double = Me.Height / maxH
    'convert points
    corners(0) = New Point(corners(0).X * hscale, corners(0).Y * vscale)
    corners(1) = New Point(corners(1).X * hscale, corners(1).Y * vscale)
    corners(2) = New Point(corners(2).X * hscale, corners(2).Y * vscale)
    gr_out.DrawImage(bm_out, 0, 0, Me.Width, Me.Height)
    Dim np(3) As Point
    np(0) = corners(0)
    np(1) = corners(1)
    np(2) = corners(2)
    np(3) = New Point(tp.X * hscale, tp.Y * vscale)
    rg = CreateTransRegion(np)

We don't need quite so much for centering the image.
ElseIf _sizemode = PictureBoxSizeMode.CenterImage Then
    Dim wadd As Integer = CInt((Me.Width / 2) - (bm_out.Width / 2))
    Dim hadd As Integer = CInt((Me.Height / 2) - (bm_out.Height / 2))
    corners(0) = New Point(corners(0).X + wadd, corners(0).Y + hadd)
    corners(1) = New Point(corners(1).X + wadd, corners(1).Y + hadd)
    corners(2) = New Point(corners(2).X + wadd, corners(2).Y + hadd)
    gr_out.DrawImage(bm_in, corners)
    Dim np(3) As Point
    np(0) = corners(0)
    np(1) = corners(1)
    np(2) = corners(2)
    np(3) = New Point(tp.X + wadd, tp.Y + hadd)
    rg = CreateTransRegion(np)

Any other size modes just get output to 0,0, so we don't need anything special there, except for AutoSize - which just changes the control's size to match the image.

Else
    gr_out.DrawImage(bm_in, corners)
End If
If _sizemode = PictureBoxSizeMode.AutoSize Then
    MyBase.Width = bm_out.Width
    MyBase.Height = bm_out.Height
End If

The last thing we take care of is the region (for transparency), but if the property ShowThrough is changed to False then we need to make sure to get rid of any existing region.

Me.Region = Nothing
If _showThrough Then
    Me.Region = rg
End If

And that's it! It is certainly possible to modify it to rotate around a point other than the center (although I haven't tried). If you want to send me any bugs/suggestions, please send them to codeproject@stdominion.net.
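The corner math in OnPaint can be checked independently of the control. Here is a standalone sketch, in Python for illustration, that performs the same three steps: translate the corners to the centre, rotate them, then shift so the minimum x/y land at the origin:

```python
import math

def rotated_corners(wid, hgt, degrees):
    """Return the four image corners after rotating about the centre
    and removing the offset, mirroring the VB.NET OnPaint logic."""
    cx, cy = wid / 2.0, hgt / 2.0
    theta = math.radians(degrees)
    s, c = math.sin(theta), math.cos(theta)
    pts = []
    for x, y in [(0, 0), (wid, 0), (0, hgt), (wid, hgt)]:
        # translate to the centre, then apply the rotation formulas
        x, y = x - cx, y - cy
        pts.append((x * c - y * s, y * c + x * s))
    # fix the offset created by rotating about the centre
    xmin = min(p[0] for p in pts)
    ymin = min(p[1] for p in pts)
    return [(x - xmin, y - ymin) for x, y in pts]

print(rotated_corners(100, 50, 90))
# a 100x50 image rotated 90 degrees occupies a 50x100 bounding box
```

This also makes the AutoSize behaviour obvious: the control's new width and height are simply the maximum x and y of the shifted corners.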
http://www.codeproject.com/KB/cpp/rimage.aspx
crawl-002
refinedweb
1,243
69.38
To promote encapsulation, a type or type member may hide itself from other types or other assemblies by adding one of the following five access modifiers to the declaration:

public — The type or type member is fully accessible. This is the implicit accessibility for enum members (see Section 3.6 later in this chapter) and interface members (see Section 3.5 later in this chapter).

internal — The type or type member in assembly A is accessible only from within A. This is the default accessibility for nonnested types, and so may be omitted.

private — The type member in type T is accessible only from within T. This is the default accessibility for class and struct members, and so may be omitted.

protected — The type member in class C is accessible only from within C, or from within a class that derives from C.

protected internal — The type member in class C and assembly A is accessible only from within C, from within a class that derives from C, or from within A.

Note that C# has no concept of protected and internal, whereby "a type member in class C and assembly A is accessible only from within C, or from within a class that both derives from C and is within A." Note that a type member may be a nested type.
Here is an example of using access modifiers:

// Assembly1.dll
using System;

public class A
{
    private int x = 5;
    public void Foo( ) { Console.WriteLine(x); }
    protected static void Goo( ) { }
    protected internal class NestedType { }
}

internal class B
{
    private void Hoo ( )
    {
        A a1 = new A( );          // ok
        Console.WriteLine(a1.x);  // error, A.x is private
        A.NestedType n;           // ok, A.NestedType is internal
        A.Goo( );                 // error, A's Goo is protected
    }
}

// Assembly2.exe (references Assembly1.dll)
using System;

class C : A // C defaults to internal
{
    static void Main( ) // Main defaults to private
    {
        A a1 = new A( );      // ok
        a1.Foo( );            // ok
        C.Goo( );             // ok, inherits A's protected static member
        new A.NestedType( );  // ok, A.NestedType is protected
        new B( );             // error, Assembly 1's B is internal
        Console.WriteLine(x); // error, A's x is private
    }
}

A type or type member cannot declare itself to be more accessible than any of the types it uses in the declaration. For instance, a class cannot be public if it derives from an internal class, or a method cannot be protected if the type of one of its parameters is internal to the assembly. The rationale behind this restriction is whatever is accessible to another type is actually usable by that type. In addition, access modifiers cannot be used when they conflict with the purpose of inheritance modifiers. For example, a virtual (or abstract) member cannot be declared private, since it would be impossible to override. Similarly, a sealed class cannot define new protected members, since there is no class that could benefit from this accessibility. Finally, to maintain the contract of a base class, a function member with the override modifier must have the same accessibility as the virtual member it overrides.
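The "cannot declare itself more accessible than the types it uses" rule can be illustrated with a short sketch that deliberately fails to compile (the type names are invented for this example):

```csharp
internal class Helper { }

// error CS0060: inconsistent accessibility -- the public class Widget
// would be more accessible than its internal base class Helper
public class Widget : Helper { }

public class Service
{
    // error CS0051: inconsistent accessibility -- the parameter type
    // Helper is less accessible than the public method Process
    public void Process(Helper h) { }
}
```

In both cases the compiler rejects the declaration because a caller in another assembly could see Widget or Process, yet could never name the internal type Helper they depend on.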
http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+I+Programming+with+C/Chapter+3.+Creating+Types+in+C/3.3+Access+Modifiers/
crawl-001
refinedweb
503
56.76
Learn how to design and build reusable user interface elements by using custom view subclasses from the UIKit framework in Swift.

The problem: UI, UX, design

Building user interfaces is the hardest part of the job! In a nutshell: design is a process of figuring out the best solution that fits a specific problem. Graphic design usually means the physical drawing on a canvas or a paper. UX is literally how the user interacts with the application; in other words: the overall virtual experience of the "customer" journey. UI is the visible interface that the user will see and interact with by touching the screen. 👆

If I have to put on the designer hat (or even the developer hat) I have to tell you that figuring out and implementing proper user interfaces is the most challenging problem in most cases. Frontend systems nowadays (mobile, tablet, even desktop apps) are just fancy overlays on top of some JSON data from a service / API. 🤷♂️

Why is it so hard? Well, I believe that if you want to be a good designer, you need a proper engineering mindset as well. You have to be capable of observing the whole system (the big picture), constructing consistent UI elements (that actually look the same everywhere), planning the desired experience based on the functional specification, and much more. It's also quite a basic requirement to be an artist, think outside of the box, and be able to explain (describe) your idea to others. 🤯

Now tell me whose job is the hardest in the tech industry? As a bonus, everyone is a designer nowadays, and some companies don't hire this kind of expert at all, but simply leave the work to the developers. Anyway, let's focus on how to create nice and reusable design implementations by using subclasses in Swift. 👍

Appearance, themes and styles

Let me start with a confession: I barely use the UIAppearance API. This is a personal preference, but I like to set design properties like font, textColor, backgroundColor directly on the view instances.
Although in some cases I found the appearance proxy very nice, it's still a little buggy. Maybe this will change with iOS 13 and the arrival of the long awaited dark mode. Dear Apple, please make an auto switch based on day / night cycles (you know, like the sunset / sunrise option in the Home app). 🌙

- Style is a collection of attributes that specify the appearance of a single view.
- Theme is a set of similar looking view styles, applied to the whole application.

Nowadays I usually create some predefined set of styling elements, most likely fonts and colors, but sometimes icons, etc. I like to go with the following structure:

Fonts
- title
- heading
- subheading
- body
- small

Colors
- title
- heading
- background

Icons
- back

You can have even more elements, but for the sake of simplicity let's just implement these with a really simple Swift solution using nested structs:

struct App {
    struct Fonts {
        static let title = UIFont.systemFont(ofSize: 32)
        static let heading = UIFont.systemFont(ofSize: 24)
        static let subheading = UIFont.systemFont(ofSize: 20)
        static let body = UIFont.systemFont(ofSize: 16)
        static let small = UIFont.systemFont(ofSize: 14)
    }
    struct Colors {
        static let title = UIColor.blue
        static let heading = UIColor.black
        static let background = UIColor.white
    }
    struct Icons {
        static let back = UIImage(named: "BackIcon")!
        static let share = UIImage(named: "ShareIcon")!
    }
}

// usage example:
App.Fonts.title
App.Colors.background
App.Icons.back

This way I get a pretty simple syntax, which is nice, although this won't let me do dynamic styling, so I cannot switch between a light / dark theme. I really don't mind that, because in most cases it's not a requirement. 😅

Structs vs enums: I could use enums instead of structs with static properties, but in this case I like the simplicity of this approach. I don't want to mess around with raw values or extensions that accept enums. It's just a personal preference.

What if you have to support multiple themes?
That's not a big issue: you can define a protocol for your needs and implement the required theme protocol as you want. The real problem is when you have to switch between your themes, because you have to refresh / reload your entire UI. ♻️

There are some best practices; for example, you can use the NSNotificationCenter class in order to notify every view / controller in your app to refresh if a theme change occurs. Another solution is to simply reinitialize the whole UI of the application, which means you basically start from scratch with a brand new rootViewController. 😱 Anyway, check the links below if you need something like this, but if you just want to support dark mode in your app, I'd suggest waiting until the first iOS 13 beta comes out. Maybe Apple will give us some shiny new API to make things easy.

Custom views as style elements

I promised styling by subclassing, so let's dive into the topic. Now that we have a good solution to define fonts, colors and other basic building blocks, it's time to apply those styles to actual UI elements. Of course you can use the UIAppearance API, but for example you can't simply set custom fonts through the appearance proxy. 😢

Another thing is that I love consistency in design. So if a title is a blue, 32pt bold system font somewhere in my application, I also expect that element to follow the same guideline everywhere else. I solve this problem by creating subclasses for every single view element that has a custom style applied to it.
So for example:

- TitleLabel (blue color, 32pt system font)
- HeadingLabel (blue color, 24pt system font)
- StandardButton (blue background)
- DestructiveButton (red background)

Another good thing about having subclasses, if you're working with Auto Layout constraints from code, is that you can put all your constraint creation logic directly into the subclass itself. Let me show you an example:

import UIKit

class TitleLabel: UILabel {

    override init(frame: CGRect) {
        super.init(frame: frame)
        self.textColor = App.Colors.title
        self.font = App.Fonts.title
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        self.textColor = App.Colors.title
        self.font = App.Fonts.title
    }

    func constraints(in view: UIView) -> [NSLayoutConstraint] {
        return [
            self.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor, constant: 16),
            self.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor, constant: -16),
            self.centerYAnchor.constraint(equalTo: view.centerYAnchor),
        ]
    }
}

As you can see, I only have to set the font & textColor attributes once, so after the view initialization is done, I can be sure that every single instance of TitleLabel will look exactly the same. The usage is pretty simple too: you just have to set the class name in Interface Builder, or you can simply create the view like this:

// loadView method in a view controller...
let titleLabel = TitleLabel()
self.view.addSubview(titleLabel)
NSLayoutConstraint.activate(titleLabel.constraints(in: self.view))

The thing I like the most about this approach is that my constraints are going to be just in the right place, so they won't bloat my view controller's loadView method. You can also create multiple constraint variations based on your current situation with extra parameters, so it's quite scalable for every situation. 👍

View initialization is hard

The downside of this solution is that view initialization is kind of messed up, because of the Interface Builder support. You have to subclass every single view type (button, label, etc.) and literally copy & paste your initialization methods again and again. I already have some articles about this; check the links below.
👇

In order to solve this problem I usually end up creating a parent class for my own styled views. Here is an example of an abstract base class for my labels:

class Label: UILabel {

    override init(frame: CGRect) {
        super.init(frame: frame)
        self.initialize()
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        self.initialize()
    }

    func initialize() {
        self.translatesAutoresizingMaskIntoConstraints = false
    }
}

So from now on I just have to override the initialize method.

class TitleLabel: Label {

    override func initialize() {
        super.initialize()
        self.font = App.Fonts.title
        self.textColor = App.Colors.title
    }
}

See, it's so much better, because I don't have to deal with the required view initialization methods anymore, and autoresizing will be off by default. ❤️

My final takeaway from this lesson is that you should not be afraid of classes and object oriented programming when it comes to the UIKit framework. Protocol oriented programming (also functional programming) is great if you use it in the right place, but since UIKit is quite an OOP framework, I believe it's still better to follow these paradigms instead of choosing some hacky way. 🤪

If you like my post, please also follow me on Twitter and subscribe to my monthly newsletter. It's 100% Swift-only content, no spam ever!

External sources

- UIKit init patterns
- Swift init patterns
- Custom UIView subclass from a xib file
- Protocol-Oriented Themes for iOS Apps
- UIAppearance
- UIAppearance Tutorial: Getting Started
- How I use UIAppearance to manage my app theme?
- iOS dark theme – Marcin Czachurski
- Improving Dynamic Type Support
- UIView styling with functions
https://theswiftdev.com/2019/02/19/styling-by-subclassing/
I have exactly the same problem that you have. FOP is essential to the project of my company. But I don't think going on with C1 is a good solution. As soon as you want to generate "large" PDF output, C1 gets performance problems, especially if you combine it (as I do) with XSLT. That is why we decided to test C2 in terms of stability, usability and "migration". As I am in the middle of getting into C2, I cannot yet tell you if it's still too early or not. But I don't think that the problems of "performance" and "fop support" can be easily fixed in C1.

Ulrich Mayring wrote:
>
> Alexander Weinmann wrote:
> >
> > I think that the real problem is inside the Xalan version
> > that comes with Cocoon. Xalan uses DOM Level 1 (no namespaces)
> > and fop0.14 relies on the dom nodes created with DOM Level 2.
> > This means that I do not see how to get XSLT+FOP working
> > with Cocoon1.8 and FOP 0.14.
> >
> > We need C2 for that.
>
> C2 is not even in Beta yet, so "cocoon" currently is defined as C1,
> right? Does this mean that cocoon officially will not support newer
> versions of fop anymore?
>
> If that is the case it would be nice to have an official announcement of
> some sort, so I can take that to my employer and tell him we need to
> migrate to C2 right now, if we want to make use of the new fop features.
> Migration is a big issue for us, we have much C1 stuff to port and I'd
> rather migrate, when C2 is ready. On the other hand fop support is
> mission-critical here :)
>
> cheers,
>
> Ulrich
>
> --
> Ulrich Mayring
> DENIC eG, Systementwicklung
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: cocoon-users-unsubscribe@xml.apache.org
> For additional commands, e-mail: cocoon-users-help@xml.apache.org

--
Alexander Weinmann | Web Developer
BCT Technology AG | D-77731 Willstätt/Germany
http://mail-archives.us.apache.org/mod_mbox/cocoon-users/200011.mbox/%3C3A0BE25A.CFCA226C@bct-technology.com%3E
Constructing Renko Charts Using Python

6 min read

Renko originated from the Japanese word Renga, which means brick. Renko charts are built using price movements, unlike other charts, which use price and standardized time intervals. When the price moves a certain amount, a new brick gets formed, and each brick is positioned at a 45-degree angle (up or down) to the last brick.

Let's look at both the candlestick and Renko chart for Apple stock. (Feel free to play with the charts)

Chart 1: Renko Chart for APPLE Stock
Chart 2: Candlestick Chart for APPLE Stock

As you can notice, Chart 1 (Renko) is much cleaner than Chart 2 (Candlestick), and that is precisely the beauty of Renko charts. They reduce the noise and give us a clear indication of the trend.

Now, let's talk about the different methods to plot Renko charts; later we will discuss their Python implementation and the pros and cons of using this charting type.

Disclaimer: Article for educational purposes only; no trading advice is being given or intended.

Possible Implementations

Method 1: Using fixed price bricks

For example, let's take a stock whose price is $500 and fix the price change for the bricks at $10. When the price hits $510, the Renko chart paints a green brick. Similarly, when the price hits $490, the Renko chart paints a red brick.

Method 2: Using percentage change for painting bricks

For example: when the stock price increases by 1%, the Renko chart paints a green brick; when the stock price decreases by 1%, the Renko chart paints a red brick.

TIP: For various investment horizons, it's good to adjust the percentage accordingly. An ideal approach is:

- 0.25% - 0.5% for short-term investment
- 1% for medium-term investment
- 3% - 5% for long-term investment

Method 3: Using Average True Range (ATR)

ATR is the most common method used for Renko charts in the industry as it's dynamic. ATR is a measure of volatility that fluctuates with time.
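Before looking at the ATR-based implementation, Method 1 (fixed price bricks) can be sketched in a few lines of plain Python. This is a minimal illustration, not this article's code; the function name and the +1 / -1 brick encoding are assumptions made for the sketch.

```python
# A plain-Python sketch of Method 1 (fixed price bricks).
def renko_bricks(prices, brick_size):
    """Return +1 for each completed up-brick and -1 for each down-brick."""
    bricks = []
    if not prices:
        return bricks
    last = prices[0]  # price level of the last completed brick
    for price in prices[1:]:
        # a large move can complete several bricks at once
        while price >= last + brick_size:
            bricks.append(1)
            last += brick_size
        while price <= last - brick_size:
            bricks.append(-1)
            last -= brick_size
    return bricks

# The $500 stock with $10 bricks from the example above:
print(renko_bricks([500, 511, 509, 489, 492], 10))  # → [1, -1, -1]
```

A full implementation would also record the open/high/low of each brick, but the core loop — only emitting a brick once the price has covered a full brick size — is the same.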
The fluctuating ATR value will be used as the box size in the Renko charts. For this article, I have used the ATR technique to plot Renko charts.

Python Implementation

Step 1: Installing the necessary packages and libraries

pip install yfinance   # for getting OHLCV data
pip install mplfinance # for plotting Renko charts

Step 2: Importing necessary libraries

import datetime as dt
import yfinance as yf
import pandas as pd
import mplfinance as fplt

Step 3: Getting OHLCV data

start_date = dt.datetime.today() - dt.timedelta(1825) # getting data of around 5 years
end_date = dt.datetime.today()
ticker_name = "TCS.NS"
ohlcv = yf.download(ticker_name, start_date, end_date)

We are downloading five years' worth of data from Yahoo Finance using the yfinance library. Once downloaded, the ohlcv dataframe should look like the below.

Step 4: Defining the Average True Range (ATR) function

The Average True Range is calculated from three ranges:

- High price (for that day) - low price (for that day)
- High price (for that day) - adjusted close (for the previous day)
- Low price (for that day) - adjusted close (for the previous day)

The MAX of all three above is our True Range; we calculate this True Range for a number of days and then use the average of it as the Average True Range (ATR).

Source: Investopedia

I hope that makes sense; if not, please feel free to clarify in the comments below.
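The three ranges above can be hand-checked with made-up numbers (this worked example is not from the article):

```python
# TR = max(H-L, |H-PC|, |L-PC|), mirroring the three ranges described above.
def true_range(high, low, prev_close):
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

# A day with High=110, Low=100 and a previous adjusted close of 95:
# H-L = 10, |H-PC| = 15, |L-PC| = 5, so TR = 15
print(true_range(110, 100, 95))  # → 15
```

Averaging this value over a rolling window of days gives the ATR, which is exactly what the pandas implementation below does column by column.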
# Function to calculate the Average True Range
def ATR(DF, n):
    df = DF.copy()  # making a copy of the original dataframe
    df['H-L'] = abs(df['High'] - df['Low'])
    df['H-PC'] = abs(df['High'] - df['Adj Close'].shift(1))  # high - previous close
    df['L-PC'] = abs(df['Low'] - df['Adj Close'].shift(1))   # low - previous close
    df['TR'] = df[['H-L', 'H-PC', 'L-PC']].max(axis=1, skipna=False)  # True Range
    df['ATR'] = df['TR'].rolling(n).mean()  # Average True Range
    df = df.drop(['H-L', 'H-PC', 'L-PC'], axis=1)  # dropping the unnecessary columns
    df.dropna(inplace=True)  # dropping null rows
    return df

Step 5: Calling the ATR function

We call the ATR function with n=50, which means that we are going to use the Average True Range of the last 50 days.

print(ATR(ohlcv, 50))

As you can see in the above example of TCS.NS, the ATR has decreased from 126.5 to 51.84 rupees on a 50-day rolling basis. So ideally, if we capture the latest ATR of the stock and use that in our Renko chart, that would be more useful.

bricks = round(ATR(ohlcv, 50)["ATR"][-1], 0)  # capturing the latest ATR
# rounding off the result to an integer
print(bricks)

Step 6: Plotting the Renko chart

The most straightforward piece, thanks to the mplfinance library.

fplt.plot(ohlcv, type='renko',
          renko_params=dict(brick_size=bricks, atr_length=14),
          style='yahoo', figsize=(18, 7),
          title="RENKO CHART WITH ATR {0}".format(ticker_name))

Great, so now you know how to calculate the ATR and plot Renko charts using Python; you can now automate this metric, monitor some stocks, and deploy your trading robot.

Trading idea: go long on the 3rd brick of three consecutive green bricks, and go short on the 3rd brick of three consecutive red bricks.

Just for our readers' benefit, I'd like to highlight that not all is good with Renko charts; there are pros and cons to everything, and using just one indicator for trading is not good enough in the modern world. Given that, here are a couple of pros and cons for you to consider.
Pros

- Reduces all sorts of market noise.
- Suggests clear levels of market support and resistance.
- Works great for price-action strategies, because Renko charts only carry price information; time is not considered.

Cons

- A Renko chart might not form a new brick every trading day if prices stay within the range of the defined brick size, which can confuse traders.
- The charts will miss price movement information if we choose a larger brick size (a larger range); what happens within this range is not shown in these charts. Solution: choose the brick size according to the Average True Range (ATR).
- It is challenging to decide on an optimum setting for the size of Renko bricks; there is no industry standard.

For a better understanding of the code, the complete code is available on Google Colab — do check it out if you liked this article. 😇
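The trading idea mentioned earlier (three consecutive same-coloured bricks) can be expressed as a tiny helper. The function name and the return values here are hypothetical, not from the article; it consumes the +1 / -1 brick encoding produced by a Renko computation:

```python
# Scan the most recent bricks for the "three in a row" pattern.
def brick_signal(bricks, n=3):
    """bricks: list of +1 (green) / -1 (red); returns 'long', 'short' or None."""
    if len(bricks) < n:
        return None
    tail = bricks[-n:]
    if all(b == 1 for b in tail):
        return "long"   # three consecutive green bricks
    if all(b == -1 for b in tail):
        return "short"  # three consecutive red bricks
    return None

print(brick_signal([1, 1, 1]))    # → long
print(brick_signal([1, -1, 1]))   # → None
```

As the pros/cons above suggest, a signal like this should be combined with other indicators rather than traded on its own.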
https://tradewithpython.com/constructing-renko-charts-using-python
Java 1.1 IO streams

Posted on March 1st, 2001

At this point you might be scratching your head, wondering if there is another design for IO streams that could require more typing. Could someone have come up with an odder design? Prepare yourself: Java 1.1 makes some significant modifications to the IO stream library. The old streams have been left in for backwards compatibility, and:

- New classes have been put into the old hierarchy, so it's obvious that Sun is not abandoning the old streams.
- There are times when you're supposed to use classes in the old hierarchy in combination with classes in the new hierarchy, and to accomplish this there are "bridge" classes: InputStreamReader converts an InputStream to a Reader, and OutputStreamWriter converts an OutputStream to a Writer.

As a result there are situations in which you have more layers of wrapping with the new IO stream library than with the old. Again, this is a drawback of the decorator pattern – the price you pay for added flexibility.

The most important reason for adding the Reader and Writer hierarchies in Java 1.1 is for internationalization. The old IO stream hierarchies support only 8-bit byte streams and don't handle 16-bit Unicode characters well, so the Reader and Writer hierarchies were added to support Unicode in all IO operations. In addition, the new libraries are designed for faster operations than the old.

As is the practice in this book, I will attempt to provide an overview of the classes but assume that you will use online documentation to determine all the details, such as the exhaustive list of methods.

Sources and sinks of data

Almost all of the Java 1.0 IO stream classes have corresponding Java 1.1 classes to provide native Unicode manipulation. It would be easiest to say "Always use the new classes, never use the old ones," but things are not that simple. Sometimes you are forced into using the Java 1.0 IO stream classes because of the library design; in particular, the java.util.zip libraries are new additions to the old stream library and they rely on old stream components.
So the most sensible approach to take is to try to use the Reader and Writer classes whenever you can, and you'll discover the situations when you have to drop back into the old libraries because your code won't compile.

Here is a table that shows the correspondence between the sources and sinks of information (that is, where the data physically comes from or goes to) in the old and new libraries. In general, you'll find that the interfaces in the old library components and the new ones are similar if not identical.

Modifying stream behavior

In Java 1.0, streams were adapted for particular needs using "decorator" subclasses of FilterInputStream and FilterOutputStream. Java 1.1 IO streams continue the use of this idea, but the model of deriving all of the decorators from the same "filter" base class is not followed. This can make it a bit confusing if you're trying to understand it by looking at the class hierarchy. Still, it's apparent that you're supposed to use the new versions instead of the old whenever possible (that is, except in cases where you're forced to produce a Stream instead of a Reader or Writer).

The PrintWriter class is the replacement for PrintStream in the Java 1.1 IO library. To make the transition to using a PrintWriter easier, it has constructors that take any OutputStream object. However, PrintWriter has no more support for formatting than PrintStream does; the interfaces are virtually the same.

Unchanged Classes

Apparently, the Java library designers felt that they got some of the classes right the first time, so there were no changes to these and you can go on using them as they are. The DataOutputStream, in particular, is used without change, so for storing and retrieving data in a transportable format you're forced to stay in the InputStream and OutputStream hierarchies.
An example

To see the effect of the new classes, let's look at the appropriate portion of the IOStreamDemo.java example modified to use the Reader and Writer classes:

//: NewIODemo.java
// Java 1.1 IO typical usage
import java.io.*;

public class NewIODemo {
  public static void main(String[] args) {
    try {
      // 1. Reading input by lines:
      BufferedReader in =
        new BufferedReader(
          new FileReader(args[0]));
      String s, s2 = new String();
      while((s = in.readLine()) != null)
        s2 += s + "\n";
      in.close();
      // 1b. Reading standard input:
      BufferedReader stdin =
        new BufferedReader(
          new InputStreamReader(System.in));
      System.out.print("Enter a line:");
      System.out.println(stdin.readLine());
      // 2. Input from memory
      StringReader in2 = new StringReader(s2);
      int c;
      while((c = in2.read()) != -1)
        System.out.print((char)c);
      // 3. Formatted memory input
      try {
        DataInputStream in3 =
          new DataInputStream(
            // Oops: must use deprecated class:
            new StringBufferInputStream(s2));
        while(true)
          System.out.print((char)in3.readByte());
      } catch(EOFException e) {
        System.out.println("End of stream");
      }
      // 4. Line numbering & file output
      try {
        LineNumberReader li =
          new LineNumberReader(
            new StringReader(s2));
        BufferedReader in4 =
          new BufferedReader(li);
        PrintWriter out1 =
          new PrintWriter(
            new BufferedWriter(
              new FileWriter("IODemo.out")));
        while((s = in4.readLine()) != null )
          out1.println(
            "Line " + li.getLineNumber() + s);
        out1.close();
      } catch(EOFException e) {
        System.out.println("End of stream");
      }
      // 5. Storing & recovering data
      try {
        DataOutputStream out2 =
          new DataOutputStream(
            new BufferedOutputStream(
              new FileOutputStream("Data.txt")));
        out2.writeDouble(3.14159);
        out2.writeBytes("That was pi");
        out2.close();
        DataInputStream in5 =
          new DataInputStream(
            new BufferedInputStream(
              new FileInputStream("Data.txt")));
        BufferedReader in5br =
          new BufferedReader(
            new InputStreamReader(in5));
        // Must use DataInputStream for data:
        System.out.println(in5.readDouble());
        // Can now use the "proper" readLine():
        System.out.println(in5br.readLine());
      } catch(EOFException e) {
        System.out.println("End of stream");
      }
      // 6. Reading and writing random access
      // files is the same as before.
      // (not repeated here)
    } catch(FileNotFoundException e) {
      System.out.println(
        "File Not Found:" + args[1]);
    } catch(IOException e) {
      System.out.println("IO Exception");
    }
  }
} ///:~

In general, you'll see that the conversion is fairly straightforward and the code looks quite similar. There are some important differences, though. First of all, since random access files have not changed, section 6 is not repeated. Section 1 shrinks a bit because if all you're doing is reading line input you need only to wrap a BufferedReader around a FileReader.
Section 1b shows the new way to wrap System.in for reading console input, and this expands because System.in is a DataInputStream and BufferedReader needs a Reader argument, so InputStreamReader is brought in to perform the translation. In section 2 you can see that if you have a String and want to read from it you just use a StringReader instead of a StringBufferInputStream and the rest of the code is identical. Section 3 shows a bug in the design of the new IO stream library. If you have a String and you want to read from it, you’re not supposed to use a StringBufferInputStream any more. When you compile code involving a StringBufferInputStream constructor, you get a deprecation message telling you to not use it. Instead, you’re supposed to use a StringReader. However, if you want to do formatted memory input as in section 3, you’re forced to use a DataInputStream – there is no “DataReader” to replace it – and a DataInputStream constructor requires an InputStream argument. So you have no choice but to use the deprecated StringBufferInputStream class. The compiler will give you a deprecation message but there’s nothing you can do about it. [48] Section 4 is a reasonably straightforward translation from the old streams to the new, with no surprises. In section 5, you’re forced to use all the old streams classes because DataOutputStream and DataInputStream require them and there are no alternatives. However, you don’t get any deprecation messages at compile time. If a stream is deprecated, typically its constructor produces a deprecation message to prevent you from using the entire class, but in the case of DataInputStream only the readLine( ) method is deprecated since you’re supposed to use a BufferedReader for readLine( ) (but a DataInputStream for all other formatted input). If you compare section 5 with that section in IOStreamDemo.java, you’ll notice that in this version, the data is written before the text. 
That’s because a bug was introduced in Java 1.1, which is shown in the following code: //: IOBug.java // Java 1.1 (and higher?) IO Bug import java.io.*; public class IOBug { public static void main(String[] args) throws Exception {()); } } ///:~ It appears that anything you write after a call to writeBytes( ) is not recoverable. This is a rather limiting bug, and we can hope that it will be fixed by the time you read this. You should run the above program to test it; if you don’t get an exception and the values print correctly then you’re out of the woods. Redirecting standard IO Java 1.1 has added methods in class System that allow you to redirect the standard input, output, and error IO streams using simple static method calls: Redirecting output is especially useful if you suddenly start creating a large amount of output on your screen and it’s scrolling past faster than you can read it. Redirecting input is valuable for a command-line program in which you want to test a particular user-input sequence repeatedly. Here’s a simple example that shows the use of these methods: //: Redirecting.java // Demonstrates the use of redirection for // standard IO in Java 1.1 import java.io.*; class Redirecting { public static void main(String[] args) { try { BufferedInputStream in = new BufferedInputStream( new FileInputStream( "Redirecting.java")); // Produces deprecation message:! } catch(IOException e) { e.printStackTrace(); } } } ///:~ This program attaches standard input to a file, and redirects standard output and standard error to another file. This is another example in which a deprecation message is inevitable. The message you can get when compiling with the -deprecation flag is: Note: The constructor java.io.PrintStream(java.io.OutputStream) has been deprecated. However, both System.setOut( ) and System.setErr( ) require a PrintStream object as an argument, so you are forced to call the PrintStream constructor. 
You might wonder: if Java 1.1 deprecates the entire PrintStream class by deprecating the constructor, why did the library designers, at the same time as they added this deprecation, also add new methods to System that require a PrintStream rather than a PrintWriter, which is the new and preferred replacement? It's a mystery.
http://www.codeguru.com/java/tij/tij0114.shtml
AttributeList

public interface AttributeList

Deprecated since API level 1. This interface has been replaced by the SAX2 Attributes interface, which includes Namespace support.

Summary

Public methods

int getLength()
Return the number of attributes in this list. The SAX parser may provide attributes in any arbitrary order, regardless of the order in which they were declared or specified. The number of attributes may be zero.

String getName(int i)
Return the name of an attribute in this list (by position). The names must be unique: the SAX parser shall not include the same attribute twice. Attributes without values (those declared #IMPLIED without a value specified in the start tag) will be omitted from the list. If the attribute name has a namespace prefix, the prefix will still be attached.

String getType(String name)
Return the type of an attribute in the list (by name). The return value is the same as the return value for getType(int). If the attribute name has a namespace prefix in the document, the application must include the prefix here.

String getType(int i)
Return the type of an attribute in the list (by position).

String getValue(String name)
Return the value of an attribute in the list (by name). The return value is the same as the return value for getValue(int). If the attribute name has a namespace prefix in the document, the application must include the prefix here.

String getValue(int i)
Return the value of an attribute in the list (by position). If the attribute value is a list of tokens (IDREFS, ENTITIES, or NMTOKENS), the tokens will be concatenated into a single string separated by whitespace.
https://developer.android.com/reference/org/xml/sax/AttributeList.html
The standard Python distribution doesn't come bundled with the NumPy module. A lightweight alternative is to install NumPy using the popular Python package installer, pip.

pip install numpy

The best way to enable NumPy is to use an installable binary package specific to your operating system. These binaries contain the full SciPy stack (inclusive of NumPy, SciPy, matplotlib, IPython, SymPy and nose packages along with core Python).

- Anaconda (from) is a free Python distribution for the SciPy stack. It is also available for Linux and Mac.
- Canopy () is available as a free as well as commercial distribution with the full SciPy stack for Windows, Linux and Mac.
- Python (x,y): a free Python distribution with the SciPy stack and Spyder IDE for Windows OS. (Downloadable from)

Package managers of the respective Linux distributions are used to install one or more packages in the SciPy stack.

On Ubuntu/Debian:

sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose

On Fedora:

sudo yum install numpy scipy python-matplotlib ipython python-pandas sympy python-nose atlas-devel

To build from source, core Python (2.6.x, 2.7.x and 3.2.x onwards) must be installed with distutils, and the zlib module should be enabled. The GNU gcc (4.2 and above) C compiler must be available. To install NumPy, run the following command.

python setup.py install

To test whether the NumPy module is properly installed, try to import it from the Python prompt.

import numpy

If it is not installed, the following error message will be displayed.

Traceback (most recent call last):
  File "<pyshell#0>", line 1, in <module>
    import numpy
ImportError: No module named 'numpy'

Alternatively, the NumPy package is imported using the following syntax:

import numpy as np
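Beyond a bare import, a quick sanity check (not part of the original tutorial) is to print the installed version and run a small array operation:

```python
import numpy as np

# The installed NumPy version, e.g. "1.26.4"
print(np.__version__)

# A tiny element-wise operation confirms the compiled core works
a = np.array([1, 2, 3])
print(a + a)  # → [2 4 6]
```

If both lines print without errors, the installation is working.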
https://www.tutorialspoint.com/numpy/numpy_environment.htm
tui-tree 3.5.3 • Public • Published

TOAST UI Component : Tree

Component that displays data hierarchically.

🚩 Table of Contents

- Collect statistics on the use of open source
- Browser Support
- Features
- Examples
- Install
- Usage
- Pull Request Steps
- Documents
- Contributing
- Dependency
- License

Collect statistics on the use of open source

TOAST UI Tree applies Google Analytics (GA) to collect statistics on the use of open source, in order to identify how widely TOAST UI products are used. To disable sending the statistics, set the usageStatistics option to false when creating the instance.

var options = {
    ...
    usageStatistics: false
}
var instance = new Tree(container, options);

🌏 Browser Support

🎨 Features

- Creates each node hierarchically by data.
- Folds or unfolds the children of each node.
- Supports optional features:
  - Selectable: Each node can be selected.
  - Draggable: Each node can be moved.
  - Editable: Each node can be edited.
  - ContextMenu: A context menu can be created for each node.
  - Checkbox: A checkbox can be added to each node and a 3-state checkbox is used.
  - Ajax: Requests the server and handles the CRUD for each node.
- Supports templates.
- Supports custom events.
- Provides the file of default css style.

🐾 Examples

- Basic: Example of using default options.
- Using checkbox: Example of adding a checkbox on each node and handling it.
- Using Ajax: Example of using server requests and the Selectable, Draggable, Editable, ContextMenu features.

More examples can be found on the left sidebar of each example page, and have fun with it.

💾 Install

npm

$ npm install tui-tree # Latest version
$ npm install --save tui-tree@<version> # Specific version

bower

$ bower install tui-tree # Latest version
$ bower install tui-tree#<tag> # Specific version

Via Contents Delivery Network (CDN)

TOAST UI products are available over the CDN powered by TOAST Cloud. You can use the CDN as below.
If you want to use a specific version, use the tag name instead of latest in the url's path.

The CDN directory has the following structure.

tui-tree/
├─ latest/
│  ├─ tui-tree.js
│  ├─ tui-tree.min.js
│  └─ tui-tree.css
├─ v3.3.0/
│  ├─ ...

Download Source Files

🔨 Usage

HTML

Add the container element to create the component.

JavaScript

This can be used by creating an instance with the constructor function. To get the constructor function, you should import the module using one of the following ways depending on your environment.

Using namespace in browser environment

var Tree = tui.Tree;

Using module format in node environment

var Tree = require('tui-tree'); /* CommonJS */

import Tree from 'tui-tree'; /* ES6 */

You can create an instance with options and call various APIs after creating the instance.

var container = document.getElementById('tree');
var instance = new Tree(container, {
    ...
});

For more information about the API, please see here.

🔧 Pull Request Steps

$ git clone https://github.com/{your-personal-repo}/tui.tree.git
$ cd tui.tree
$ npm install

📙 Documents

You can also see the older versions of the API page on the releases page.

💬 Contributing

🔩 Dependency

- tui-code-snippet >=1.5.0
- tui-context-menu >=2.1.1 (Optional, needed for using the ContextMenu feature)
- jQuery >=1.11.0 (Optional, needed for using the Ajax feature)

📜 License

This software is licensed under the MIT © NHN.
https://www.npmjs.com/package/tui-tree
Struts forum questions and answers (excerpts):

- Databases with Struts: MySQL is not mandatory; Struts applications can use Oracle (for example, Oracle 10g) as well.
- About the Struts processPreprocess method: can the database be accessed from processPreprocess, and can it be used for client-side validation?
- Struts / JBoss / iReport: a Java beginner wants to generate a database-backed report in an application using Struts and JBoss.
- Learning Struts: "I have no idea about Struts; please tell me briefly about it." Answer: see the Struts tutorials.
- Can an image be inserted into a Struts text field?
- SwitchAction: a request for detailed information about Struts' SwitchAction.
- Creating a Struts project in Eclipse, and whether a project can have more than one validation-rules.xml file.
- A login example with a JSP form and a LoginAction class extending Action.
- ActionServlet error: java.lang.ClassNotFoundException: org.apache.struts.action.ActionServlet when running a Struts application, even though the path is defined in web.xml.
- Errors while working through the Struts basics examples, and how to send a five-employee record table to a Struts Action class.
- Session tracking: sessions can be maintained with the HttpSession class; values stored there can be used later in any other JSP or servlet (Action class) until the session expires.
- Checkboxes whose values come from the database, where the number of checkbox values is not known in advance.
- File uploading in Struts, storing the file so users can download it again later; the Struts FormFile API could not be used in this case.
- Getting started with the Struts framework with only JSP/Servlet knowledge.
- A JSP page with three submit buttons, two similar and a third placed on a second form tag with its corresponding action.
- Radio buttons in Struts: each row has a different serial number, but a radio button passes only one value; how to know which option the user chose.
- How to create a web page, and questions about what a CMS is, when it is used, and its advantages.
- Using Struts in MyEclipse, with a database connection example.
- Comparing values in JSP scriptlet if conditions with Struts.
- The Struts html tag "id" attribute and how to reproduce it with Struts tags.
- Bean tags in Struts 2, with a reference to the Struts 2 UI tag examples.
- Deleting an uploaded file after a Struts upload.
- A Booksearch example (Booksearch.jsp, BooksearchForm.java, BooksearchAction.java) and its struts-config.xml entries; also a JasperException reporting /WEB-INF/struts-html.tld is not found when running on Tomcat 5.5.
- How can I learn Java in the shortest possible time, starting from an HTML background?
- Sample login code in Struts.
- iReport: creating a month parameter from a date, to view data with a value like "12/2008".
- Validation: ensuring the username in a registration form is not duplicated.
- A registration form in Struts, with Struts 1 / Struts 2 references.
- A shopping cart example using Struts with MySQL.
- Displaying Success.jsp after a successful registration, in a project with two Java pages and two JSP pages.
- Which data source should be configured in the struts-config file when using NetBeans with the GlassFish server?
- Struts MVC components: ActionServlet, Action, ActionForm, and struts-config.xml.
- Overviews of XML parsers and XML Java APIs and packages (SAX, JAXP).
- Displaying records from "select * from example" in a JSP via a bean, with Update and Delete links beside each record.
- Converting a servlet login page to Struts 1.1, including the struts-config.xml changes.
- Struts warnings about FormBeanConfig and a Cancel forward in a small contact application that otherwise saves information successfully.
- The use of Struts: what is its importance, and why use it?
- Multiboxes: a program with an array of multiboxes where unchecking does nothing; whether to fix it in JavaScript or in the Struts bean.
http://roseindia.net/tutorialhelp/comment/4640
Write a program that will read in a sentence of up to 100 characters and output the sentence with spacing corrected and with letters corrected for capitalization. In other words, in the output sentence all strings of two or more blanks should be compressed to a single blank. The sentence should start with an uppercase letter but should contain no other uppercase letters. Do not worry about proper names; if the first letter is changed to lowercase, that is acceptable. Treat a line break as if it were a blank space, in the sense that a line break and any number of blanks are compressed to a single blank. Assume that the sentence ends with a period and contains no other periods.

Example input:

the Answer to life, the Universe, and everything IS 42.

should produce the following output:

The answer to life, the universe, and everything is 42.

Here is the code I have so far:

```cpp
#include <iostream>
#include <string>
#include <cctype>
using namespace std;

void swap(char& v1, char& v2);
// Interchanges the values of v1 and v2.

string remove(const string& s);
// Returns a copy of s but with extra whitespaces removed.

string makeUpper(const string& s);
// Returns a copy of s that has all lowercase
// characters changed to uppercase, with other characters unchanged.
```
```cpp
int main()
{
    string str;
    cout << "Enter a sentence to be corrected\n"
         << "followed by pressing Return.\n";
    getline(cin, str);
    return 0;
}

void swap(char& v1, char& v2)
{
    char temp = v1;
    v1 = v2;
    v2 = temp;
}

string remove(const string& s)
{
    int start = 0;
    int end = s.length();
    string temp(s);
    {
        end--;
        swap(temp[start], temp[end]);
        start++;
    }
    return temp;
}

// Uses <cctype> and <string>
string makeUpper(const string& s)
{
    string temp(s);
    for (int i = 0; i < s.length(); i++)
        temp[i] = toupper(s[i]);
    return temp;
}

string removePunct(const string& s, const string& punct)
{
    string noPunct;  // initialized to empty string
    int sLength = s.length();
    int punctLength = punct.length();

    for (int i = 0; i < sLength; i++) {
        string aChar = s.substr(i, 1);        // a one-character string
        int location = punct.find(aChar, 0);  // find location of successive
                                              // characters of src in punct
        if (location < 0 || location >= punctLength)
            noPunct = noPunct + aChar;        // aChar is not in punct, so keep it
    }
    return noPunct;
}
```
https://www.daniweb.com/programming/software-development/threads/268172/correcting-a-sentence
Hi everybody. I defined an array pointer to hold a massive amount of data (the RGB data of a screen with 1024*768 pixels). The data is assigned from a temporary array to this array, within a function that calculates the values of the temporary array and then assigns them to the global one using for loops. The global array values should be kept, to be used in an OpenGL function as a store of image data, and then the image is displayed on the screen.

When I run the program, it works fine until it reaches a specific row of the screen (row 16 of 128 rows) and then crashes with the following message:

Unhandled exception at 0x7c812afb in OPENGL_.....exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012fbb4..

When I used delete to delete the array, it worked until it reached a different specific row (28); it runs longer, but the array is deleted before I use it, which is unwanted. In any case, I need the program to work to the end of the rows (128). Can anyone help me with this, please? How can I get rid of this error and make the program work? Is there a problem with the array pointer, either in how I define it or how I use it? If not, what is the problem here; any ideas? Many thanks in advance.

Here are parts of the OpenGL code, in a C++ environment:

```cpp
#include <iostream>
#include <stdlib.h>
//#include "glext.h"
//#include "glew.h"
. . .
#include <GL/glut.h>
#endif
#include "stdafx.h"
#include <windows.h>
using namespace std;
. . .
int column_index = 0;
int row_index = 0;
. . .
unsigned char *pixels_index = new unsigned char[3*1024*768];
. . .
void update(int value)
{
    . . .
    unsigned char *pixels_index_temp = new unsigned char[3*8*6];
    . . .
```
```cpp
    int x_column = 8*column_index;
    int y_row = 6*row_index;

    glReadPixels(x_column, y_row, 8, 6, GL_RGB, GL_UNSIGNED_BYTE, pixels_index_temp);

    for (int yi = 0; yi < 6; yi++) {
        for (int xi = 0; xi < 8; xi++) {
            int local_index = 3*(yi*w_temp + xi);
            int global_index = 3*((row_index*w*h_temp) + (yi*w)
                                  + (column_index*w_temp) + xi);
            pixels_index[global_index]     = pixels_index_temp[local_index];
            pixels_index[global_index + 1] = pixels_index_temp[local_index + 1];
            pixels_index[global_index + 2] = pixels_index_temp[local_index + 2];
        }
    }

    delete [] pixels_index_temp;
    . . .
    if (row_index < 128) {
        glutTimerFunc(100, update, 0);
    }
    else {
        . . .
        delete [] pixels_index;
    }
```
https://www.daniweb.com/programming/software-development/threads/313540/unhandled-exception-error
The GCC 4.4 release series differs from previous GCC releases in more than the usual list of new features. Some of these changes are a result of bug fixing, and some old behaviors have been intentionally changed in order to support new standards, or relaxed in standards-conforming ways to facilitate compilation or runtime performance. Some of these changes are not visible to the naked eye, and will not cause problems when updating from older GCC versions. However, some of these changes are visible, and can cause grief to users porting to GCC 4.4. This document is an effort to identify major issues and provide clear solutions in a quick and easily-searched manner. Additions and suggestions for improvement are welcome.

When using the preprocessor statement #elif, the argument is now evaluated even if earlier #if or #elif conditionals evaluated non-zero. This is done to make sure they are valid constant expressions. (For details, see bug 36320.) For example, the code

```c
#if 1
#elif
#endif
```

now produces the following diagnostic:

```
error: #elif with no expression
```

To fix this, either use #else without an argument or provide a constant expression when using #elif.

GCC warns about more cases of type-punning while optimizing, like the following:

```c
struct A {
    char data[14];
    int i;
};

void foo()
{
    char buf[sizeof(struct A)];
    ((struct A*)buf)->i = 4;
}
```

This now produces the following diagnostic:

```
warning: dereferencing type-punned pointer will break strict-aliasing rules
```

This can be temporarily worked around by using -fno-strict-aliasing or by ignoring this class of warning via -Wno-strict-aliasing. To fix, access the structure from pointers of an equivalent type, use a union, use memcpy, or (if using C++) use placement new. See the section "Casting does not work as expected when optimization is turned on" in our bug reporting documentation for more information.

Some of the standard C++ library include files have been edited to include only the smallest possible number of additional files.
As such, C++ programs that used std::printf without including <cstdio>, or used uint32_t without including <stdint.h>, will no longer compile. In detail:

- The file <cstdio> is no longer included as part of <string>, <ios>, <iomanip>, <streambuf>, or <locale>.
- The file <stdint.h> is no longer included as part of <string> or <ios>.

Some of the standard C++ library include files have been edited to use replacement overloads for some common C library functions (if available), with the goal of improving const-correctness: functions passed a const char* return const char*. The table below shows the functions and files that have been changed. An example:

```cpp
#include <cstring>
const char* str1;
char* str2 = strchr(str1, 'a');
```

gives the following compiler error:

```
error: invalid conversion from 'const char*' to 'char*'
```

Fixing this is easy, as demonstrated below:

```cpp
#include <cstring>
const char* str1;
const char* str2 = strchr(str1, 'a');
```

More information about the C++ standard requirements can be found in chapter 21, section "Null-terminated sequence utilities."

GCC by default no longer accepts code such as

```cpp
struct A { virtual ~A(); };
struct B : public A { int i; };

struct C {
    const B a;
    C() { bar(&a); }
    void bar(const B*);
};
```

but will issue the diagnostic

```
In constructor 'C::C()':
error: uninitialized member 'C::a' with 'const' type 'const B'
```

To fix, use a member initialization list to initialize the member, like so:

```cpp
C() : a(B()) { bar(&a); }
```

Jakub Jelinek, Results of a test mass rebuild of rawhide-20090126 with gcc-4.4.0-0.9

Copyright (C) Free Software Foundation, Inc. Verbatim copying and distribution of this entire article is permitted in any medium, provided this notice is preserved. These pages are maintained by the GCC team. Last modified 2018-09-30.
http://gcc.gnu.org/gcc-4.4/porting_to.html
CS50 programming style

Coding style

A computer program is meant for two audiences: the computer that compiles and runs it, and the people who must read, modify, maintain and test it. Think about writing a program the same way you think about writing a paper: structure, organization, word choice and formatting are just as important as content. A program that works but has a terrible style is unreadable, and therefore useless.

Real-world software development teams use common programming style guides. For example, if you are working on the Linux kernel, you would use Linus' Coding Style. If you are working on a GNU project, you would closely follow the instructions in Chapter 5 "Making the best use of C" of their GNU Coding Standards document. Other organizations might adopt other long-respected coding standards like the NetBSD source code style guide, or they might produce their own guidelines based on several others. Your company will most likely have one they prefer.

Style guides include things like formatting your source code, comment requirements, how certain C constructs should (or shouldn't) be used, variable naming conventions, cross-platform compatibility requirements, and more. We realize that coding style can be a very personal choice, but in the professional world you will seldom have the privilege of choosing your own style. Regardless of the style you choose, develop, or are forced to use, stick with it. Consistency is a PLUS!

CS50 style

For CS50 assignments involving C programming, please follow these guidelines (inspired by the K&R C book and by Linus):

- Avoid placing multiple statements on a single line.
- Break long statements (more than 80 characters) over multiple lines.
- Indent appropriately; emacs and other C-savvy text editors can indent automatically. See below.
- Place the opening brace at the end of the line, e.g., in if and for statements.
- Exception: for functions, place the opening brace at the beginning of the next line.
- Use spaces around binary operators, except struct and pointer references. Do not use spaces between a unary operator and its operand. See below.
- Use parentheses liberally when it helps to make an expression clear. Adding parentheses rarely hurts, and might actually prevent a mistake.
- Avoid calling exit() from anywhere other than main(). Unwind back to main using error-return values and exit cleanly.
- Always initialize variables, either when they are created, or soon thereafter. Initialize pointers to NULL if the target is not yet known.
- Declare function prototypes with type and name of formal parameters.
- Avoid using global variables. If they are absolutely necessary, restrict their use to a single source file using the static keyword.
- Avoid using goto unless absolutely necessary - you must have a really good reason for using a goto, in very exceptional cases.
- Avoid preprocessor macros; #define macros tend to be a source of difficult bugs. Instead, use const for constants and use real functions (or inline functions if you must).
- Don't use "magic" numbers in your code. Use const to create a named constant, e.g., const float pi = 3.1416;
- Use const wherever you can, to indicate a value that will not change.
- Use the bool type whenever a function should return a boolean value, or a variable should hold a boolean flag. Avoid old C conventions that use 0 for false and non-zero for true.
- Wrap calls to malloc() in type-specific helper functions; see below.
- Choose either camelCase or snake_case, and be consistent.
- Break up large programs into multiple files. Every file (except for the one containing main) should have a corresponding .h file that declares all functions, constants, and global variables meant to be visible outside the file.
- Break up large functions, aiming for strong cohesion and weak coupling.

Always remember: You are writing for clarity and communication, not to show how clever you are or how short and dense you can make your code.
Commenting:

Comment your code as you write it: it is much easier to write comments while your intentions are fresh in your mind than to go back later and do it. Keep comments short, simple and to the point. Comment wherever the code is not self-describing (see the reading assignments). Use the // style of commenting for one-line comments, and the /* ... */ style for multi-line block comments.

Use four types of comments:

- Start-of-file comments.
- Start-of-function comments.
- Paragraph comments
- End-of-line comments

Use them in the following fashion:

Start-of-file comments. You should place a block comment at the start of each file. This comment should include the names of programmers, the date the file was written, and a high-level description of the file's contents, e.g.,

```c
/*
 * stack.c    Bill Stubblefield    November 20, 1994
 *
 * This file contains the definitions of a stack class. It includes functions:
 *
 *   ... list functions, with brief descriptions (if needed)
 *
 */
```

Start-of-function comments. Write a header for each function. This comment should include a description of what the function does, the meaning of its parameters, and the meaning of its return value (if any). For example, if a function

```c
float sqrt(float number);
```

requires its argument to be positive, document it. Similarly, specify any constraints on the output. List all error conditions and what the function does with them. List any side effects. If the function algorithm is not obvious, describe it (often a good idea). Also, if you borrow the algorithm from another source, credit the source and author.

Paragraph comments. Often procedures can be divided into a series of steps, such as initialization, reading data, writing output. Place a small comment before each such section describing what it does.

End-of-line comments. Place a brief comment at the end of those lines where needed to clarify the code. Don't overdo it, but use them to call the reader's attention to subtleties in the code.
Align the comments so that all the comments for a function begin in the same column, although this column can vary for different functions.

Spacing:

Place a space after keywords like if, else, for, while, do, switch, etc., after commas in function calls, after semicolons in a for loop, between a right parenthesis and a left bracket, and around binary operators (except . and , and ->). Remember that assignment is a binary operator. I usually do not put spaces between a function name and its parameter list, or an array name and its subscripts. For example,

```c
for (i = 0; i < N; i++) {
    x = x + f(A[i], i);
}
```

Indenting:

Let your text editor help you auto-indent your code. Often, trouble with auto-indentation is a clue to your own syntax mistake (such as forgetting brackets). When you create or open a file, emacs will recognize C by the filename extension .c or .h and switch to "C mode"; you'll see this mode on the emacs status line. In C mode, hitting the TAB key while the cursor is on a given line indents it to the correct level, assuming that the preceding non-blank line has been indented correctly. Ending a line with a left bracket and hitting return will automatically indent the next line appropriately. Also, a line beginning with a right bracket will indent to the correct level. Finally, typing // on a new line will create a comment and indent it to the line of code.

Dynamic memory allocation

Avoid sprinkling calls to malloc() and free() throughout your code. Instead, think about the kinds of things you need to create and destroy, and write a type-specific wrapper for each such type. For example, if your program manipulates things of type struct listnode, you would write two functions:

```c
struct listnode *new_listnode(...);
void free_listnode(struct listnode *node);
```

The first function calls malloc(sizeof(struct listnode)) and initializes all of its contents, perhaps using parameters passed by the caller. The second calls free(node). Both involve careful error-checking code.
See example names6.c. There are many advantages to this approach:

- The mainline code is more readable, because it's clear what new_listnode() is doing.
- Code involving malloc can sometimes be tricky, and you isolate that trickiness in one spot and focus on getting it right once.
- Some new types might need multiple malloc calls, as in our linked-list example names6.c. All those malloc calls (and corresponding free calls) can be in the new/free functions.
- The new function acts like a 'constructor' in object-oriented languages and can ensure the newly returned memory is initialized, or at least, not random bytes.
- You can insert debugging output or reference-counting logic, or set debugger breakpoints, in these new/free functions and immediately have that feature apply to all occurrences of your program's work with this type.

Program structure

Although C allows us to be very flexible with where we put declarations, a standard layout makes it easier to read the code. A good convention is:

```c
/*
 * Start-of-file-comments
 */

#include <stdio.h>
#include <stdlib.h>
. . .

// global type and constant definitions
const float PI = 3.1416;
. . .

// function prototypes
void push(int item);

/* ***************************
 * Start-of-function-comments
 */
int main(const int argc, char *argv[])
{
    // local const, type and variable declarations

    // body of code
}

/* ***************************
 * Start-of-function-comments
 */
void push(int item)
{
    // local const, type and variable declarations

    // function body
}
```

Although you can declare variables at any time before they are used, it is sometimes best to place all declarations at the beginning of the function. That way a reader can easily find them. There are times when it is convenient or prudent to do otherwise; we'll come back to this issue.

It is also a good idea to break large programs up into multiple files. For example, a 'stack' module may be declared in 'stack.h', defined in 'stack.c', and used in 'main.c'.
Simplicity

The single most important thing you can do to write good code is to keep it simple. As William of Occam said in the 14th century: "Do not multiply entities without necessity." Simplicity has many aspects; a few of these include:

Make all functions small, coherent and specific. Every function should do exactly one thing. A good rule of thumb is that you should be able to describe what a function does in a single sentence. Generally, C functions occupy less than a page, with most functions occupying 10-30 lines.

Use small parameter lists. Avoid extremely long parameter lists. If you find the parameters to a function growing, ask yourself if the function is trying to do too much, or if the function is too vague in its intent.

Avoid deeply nested blocks. Structures such as if, for and while define blocks of code; blocks can contain other blocks. Try not to nest blocks too deeply. Any function with more than a couple levels of nesting can be confusing. If you find yourself with deeply nested structures, consider either simplifying the structure or defining functions to handle some of the nested parts.

Use the simplest algorithm that meets your needs. Einstein once said: "Things should be as simple as possible, but no simpler." This is good advice for selecting an algorithm. There are a great many extremely clever, complex algorithms in computer science. Make an effort to know them and use the algorithm that meets your needs for efficiency. Do not shun complex algorithms, but do not choose them without reason.

Be consistent. Consistency can come in many forms. A few of these include:

- Try to be consistent in numbers and types of function parameters. If two functions have a similar function, try to give them similar sets of parameters.
- Try to be consistent in your use of loops and other program constructs.
- Use consistent naming and commenting styles.

Don't be clever.
Samuel Johnson once said (I may not be quoting him exactly) "When you find something particularly clever in your writing, strike it out." C offers many constructs, such as conditional expressions, unary operators, etc., that make it possible to write extremely compact, dense, unreadable code. Use these features, but also ask yourself: "Will another programmer understand what I mean here?"

Practice defensive programming!

It is important that you write C programs defensively. That is, you need to check the input the program receives, make sure it is as expected (in range, correct datatype, length of strings, etc.), and, if it is not acceptable, provide appropriate message(s) back to the user in terms of the program usage. The user should never be able to cause your program to adversely impact any aspect of the system it's running on, including system files, other users' files or processes, or network access.

- Make sure command-line arguments and function parameters have legal values.
- Check the results of all calls to standard libraries or the operating system. For example, check all memory allocations (malloc) to detect out-of-memory conditions.
- Check all data obtained from users or other programs.
- Check limit conditions on loops and arrays. For example, what happens if you try to access a value that is out of bounds?

When you detect an error condition, first consider ways to modify the code to prevent the error from happening in the first place. If that is not possible, ask if there is a way the code can recover from the error. If there is no reasonable way of recovering, print an error message and exit the program. In short, if someone (such as the grader) can crash your program, you lose points, whether in this class or in a future job.

Required compiler options

For all C programming assignments in this class, you must use (at a minimum) the following gcc compile options:

```sh
gcc -std=c11 -Wall -pedantic ... program.c ...
```
These instruct the compiler to compile for the C11 language standard, display all possible warnings, and issue warnings if any non-ISO-standard C features provided by gcc are used, respectively. You will likely need to add other options to these; for example, if you use mathematics functions, you need to #include <math.h> in your C program and add -lm to the command line.

Recall that our standard .bashrc defines an alias mygcc to make it easy to apply these options every time:

```sh
alias mygcc='gcc -Wall -pedantic -std=c11 -ggdb'
```
https://www.cs.dartmouth.edu/~cs50/Resources/CodingStyle.html
TLabel in Borland Builder 5: help!

Can anyone give me an example on how to use the TLabel in Borland Builder? Thanks a lot! Pete

You choose the TLabel component from the component palette. Then you click on the form where you want to put it. You can then change the text in the label by changing the Caption property in the object inspector. Need any more info?

Thanks for your response. I tried that, but it doesn't give me the results I want. All I want is to print "Hello world", and all I get is a grey screen without output. As you can see, I'm just learning this, and any feedback on how to get that output will be appreciated. Thanks!

In other words, I want to have "hello world" printed by using code; either this code or any code that works with Builder 5:

```cpp
#include <iostream.h>

main()
{
    cout << "hello world";
    return 0;
}
```

Jim S.

Ok, I'll try to make it a little more detailed.

1) Click on the label component.
2) Click on the form (it's the grey window that will say something like "Form1" unless you changed that). It will display a label with the label's name as the caption.
3) There should be a window to the left of the form called the object inspector. The top drop-down list should say something like "Label1 : TLabel", and below the drop-down list there should be a list of properties. To change the text, you change the text in the Caption property. On the form it should now say what you put in that property.

If you still can't figure it out, I could give you an example "Hello World" project.

Hey, thanks again: I was able to print with the instructions. Is there any way I can print it just by using code, as in the example?
If you can give a "Hello World" example with code and where to insert it, it would be great. Thanks for your help! Jim S.

There might be a way, but why would you want to do that? If you want to change the Caption in code, you can use code like:

Label1->Caption = "whatever";

I don't really know if there is a way to make a label using JUST code in BCB. -kje11

Thanks once again for all your help! The only reason I wanted to do it like that is to practice what I learned in class today. In the lab, we have this old compiler called Turbo C++ and even though it's from Borland, it does things differently. With that program, the only way to get that output is by typing the code shown in the example. Thanks for all your help!!! Jim S.

BCB and VC++ have the capacity to build projects in text (aka console or DOS) mode and in graphical mode. TLabel is one way to get things shown on the screen in graphical mode. To use iostream and cout you would be in text/console/DOS mode, however. If you click on File and then New Application you should be given a menu of options as to which mode you wish to use to build/make the current project/application. Choose the console/text/DOS mode option if you want to use text-mode programming; leave it at the default setting if you wish to use graphical/Windows mode. In Windows, all output to the screen is in graphical mode; cout and cin play no (or very little) role. Things are "event" driven, meaning a button is pushed, the mouse moves over a given spot on the screen, a change occurs in an edit box secondary to user input, etc., none of which are pertinent to text mode. Although it isn't obvious when using BCB, use of classes, inheritance, etc. are useful in Windows, and knowing about control loops, lists, arrays, queues, and even the standard template library all have a big impact. Learning how to program in text mode before going to graphical mode, even though you can make a basic Windows program pretty easily with BCB, is something I think will be worth your while.
http://cboard.cprogramming.com/cplusplus-programming/10929-tlabel-borlandbuilder5-help.html
/*
 * Copyright 2003 ...
 */
package org.apache.avalon.fortress.util.dag;

import java.util.*;

/**
 * DirectedAcyclicGraphVerifier provides methods to verify that any set of
 * vertices has no cycles. A Directed Acyclic Graph is a "graph" or set of
 * vertices where all connections between each vertex go in a particular
 * direction and there are no cycles or loops. It is used to track dependencies
 * and ensure that dependencies can be loaded and unloaded in the proper order.
 *
 * @author <a href="mailto:dev@avalon.apache.org">Avalon Development Team</a>
 * @version CVS $Revision: 1.1 $
 */
public class DirectedAcyclicGraphVerifier
{
    /**
     * Verify that a vertex and its set of dependencies have no cycles.
     *
     * @param vertex The vertex we want to test.
     *
     * @throws CyclicDependencyException if there is a cycle.
     */
    public static void verify( Vertex vertex )
        throws CyclicDependencyException
    {
        // We need a list of vertices that contains the entire graph, so build it.
        List vertices = new ArrayList();
        addDependencies( vertex, vertices );

        verify( vertices );
    }

    /**
     * Recursively add a vertex and all of its dependencies to a list of
     * vertices.
     *
     * @param vertex Vertex to be added.
     * @param vertices Existing list of vertices.
     */
    private static void addDependencies( final Vertex vertex, final List vertices )
    {
        if ( !vertices.contains( vertex ) )
        {
            vertices.add( vertex );

            for ( Iterator iter = vertex.getDependencies().iterator(); iter.hasNext(); )
            {
                addDependencies( (Vertex)iter.next(), vertices );
            }
        }
    }

    /**
     * Verify a set of vertices and all their dependencies have no cycles. All
     * vertices in the graph must exist in the list.
     *
     * @param vertices The list of vertices we want to test.
     *
     * @throws CyclicDependencyException if there is a cycle.
     */
    public static void verify( List vertices )
        throws CyclicDependencyException
    {
        // Reset the orders of all the vertices.
        resetVertices( vertices );

        // Assert that all vertices are in the vertices list and resolve each of their orders.
        Iterator it = vertices.iterator();
        while ( it.hasNext() )
        {
            Vertex v = (Vertex) it.next();

            // Make sure that any dependencies are also in the vertices list. This adds
            // a little bit to the load, but if we didn't test this and the test would have
            // failed, it would lead to some very hard to track down problems elsewhere.
            Iterator dit = v.getDependencies().iterator();
            while( dit.hasNext() )
            {
                Vertex dv = (Vertex) dit.next();
                if ( !vertices.contains( dv ) )
                {
                    throw new IllegalStateException( "A dependent vertex (" + dv.getName() + ") of "
                        + "vertex (" + v.getName() + ") was not included in the vertices list." );
                }
            }

            v.resolveOrder();
        }
    }

    /**
     * Sort a set of vertices so that no dependency is before its vertex. If
     * we have a vertex named "Parent" and one named "Child" that is listed as
     * a dependency of "Parent", we want to ensure that "Child" always comes
     * after "Parent". As long as there are no cycles in the list, we can sort
     * any number of vertices that may or may not be related. Both "Parent"
     * and "Child" must exist in the vertices list, but "Child" will also be
     * referenced as a dependency of "Parent".
     *
     * <p>
     * <b>Implementation Detail:</b> This particular algorithm is a more
     * efficient variation of the typical Topological Sort algorithm. It uses
     * a Queue (Linked List) to ensure that each edge (connection between
     * two vertices) or vertex is checked only once. The efficiency is
     * O = (|V| + |E|).
     * </p>
     *
     * @param vertices
     * @throws CyclicDependencyException
     */
    public static void topologicalSort( final List vertices ) throws CyclicDependencyException
    {
        // Verify the graph and set the vertex orders in the process.
        verify( vertices );

        // We know that there are no cycles and that each of the vertices has an order
        // that will allow them to be sorted.
        Collections.sort( vertices );
    }

    /**
     * Resets all the vertices so that the visitation flags and indegrees are
     * reset to their start values.
     *
     * @param vertices
     */
    public static void resetVertices( List vertices )
    {
        Iterator it = vertices.iterator();
        while ( it.hasNext() )
        {
            ( (Vertex) it.next() ).reset();
        }
    }
}
http://kickjava.com/src/org/apache/avalon/fortress/util/dag/DirectedAcyclicGraphVerifier.java.htm
Hello guys, I'm starting to learn Python, and I installed Python 2.7 on my PC that runs Windows. The problem is I can not open the Python console to create programs. Can someone help me?

The default editor for Python, IDLE, is not very screen reader accessible to begin with. You can create scripts with a simple text editor like Notepad; just save the documents as plain text with *.py on the end. You can then run them from the command line by typing "python myscript.py", assuming your paths are set up correctly. There are also other more accessible IDEs around, such as Notepad++. Others may be along to share their preferred tools as well. -AudiMesh3D v1.0.0: Accessible 3D Model Viewer

Hi, I wouldn't call the interactive input Python offers a console, actually. When I hear console, I usually think of terminals like the Linux shell, but that's indeed not what Python has to offer. The two things Python can give us itself are: 1. a Scintilla-based IDE (which isn't that great and you should probably try to get something better) and 2. an interactive command input, which allows us to test commands, conditions and small programs on the fly. Neither of those should you be using to "create" programs. Well, you didn't really describe your problem yet. You only said "it doesn't let me do that", but why? What's your exact problem? I'd recommend you create a test program file (like test.py) and write some hello world example into it, like follows:

def main():
    print 'Hello world!'

main()

Open some command prompt, navigate into the folder where the test.py file is currently located, and type "python test.py". As long as your PATH environment variable is set up correctly, Python should interpret the file and print you some hello world statement. Best Regards. Hijacker

Man, people still want to learn Python 2... ah well...

Is there anything wrong with learning Python 2? The book I'm studying says you need to learn Python 2 before you can learn Python 3.
Most programs are made in Python 2 anyway. I'm sorry, but I'm not sure I got your point. Thanks for the tips, guys!

@SLJ, you don't need to learn Python 2 before learning Python 3. In fact, I'd say learning Python 2 before 3 hinders you a little because there are several differences between the two languages (like Python 2 having multiple different types of strings and Python 3 not having such a thing). Yes, a lot of stuff is written in Python 2. But a lot of stuff is written in Python 3, too.

As someone who has learned and used Python since 2009, I'll say go for Python 3. Much of my Python 2 code worked with little need for conversion. Of course, there are differences, but as a starter you won't even notice them at the beginning.

Hi. @SLJ, it is not true that you have to learn Python 2 before 3. That's probably something the author put in to force or scare you into using Python 2. Either one is fine; the problem you have is that once you get good at programming you will need libraries to do certain things that Python can't do itself. Python 2 has many libraries that have been finished, but Python 3 still has many libraries that are still being worked on, or have been abandoned. For example, wxPython, which handles mouse and keyboard input, is fully developed for Python 2, but the one for Python 3 is still being worked on today. What I'm saying is you can't use a Python 2 library for 3, or the other way around. Since you're a beginner, I would personally recommend using Python 2 while you're learning and then, once you have the basics down, move on to Python 3. Python 2 supports older systems, which is nice if you want backward compatibility; Python 3 is faster and a little easier to use. There really is no good and bad, it all depends on your personal preference. Hth.

@Diego, I don't know what you mean by console. Just stick to using Notepad for now; it will make things a lot easier for you. Playing music and coding, are kinds of real world magic.

Interesting.
Thanks for your replies.

11 2017-09-14 00:18:47 (edited by Hektor 2017-09-14 00:21:51)

If you are using Windows, one way of using Python in console mode would be to:

* From the desktop, bring up the Run dialog (Windows+R)
* Focus will be placed in a field; type: cmd
* Press Enter. This will place you in a window with a DOS prompt.
* Type: python
* Press Enter

This will start Python in console mode. You can also access Python in this same way from the Windows PowerShell. When you are in the PowerShell window just enter the same python command. When you are done with the Python console, just type exit() at the Python console prompt and you will be returned to the DOS or PowerShell prompt. It is also possible to set up a desktop icon that will do all of that for you and just drop you in a Python console if you wish. I can write that up later if you wish. The Python console is a good place to try out small snippets of code--especially when you are learning. I also find myself using it as a calculator.

If you want, you can use Visual Studio.

Hello. Python and Visual Studio? I don't think that works. Visual Studio is more for Microsoft languages: C#, C++, Visual Basic, things like that. I could be wrong but I'm pretty sure about this.

@Guitarman, VS is not just for Microsoft languages, and C++ is not a Microsoft language (unless you count the CLR extensions to it when you use the appropriate project type). But you can certainly add other languages to it that are not MS-specific: There's D for Visual Studio; Python Tools for Visual Studio; RemObjects Elements...

Interesting. Well, I learned something new. Thanks, Ethin.
http://forum.audiogames.net/viewtopic.php?pid=329347
I have two assignments due in the morning and I left my book in the computer lab and it's locked up. I think I got the first one done, but the second one I don't know. Here is the problem: make a program to allow a student to obtain his GPA. Each class may vary in number of credit hours; therefore there will be two inputs, grade and credit hours. Prompt the student to use a numeric value for the grade: 4 for A, 3 for B and so on. A student may take one or several classes. ID must be an input value. No need to echo input except for ID, but it should display the credit hours that the student has earned as well as the GPA. I did this when I had my book, but I think it is way off. This is our second program and we only know very simple steps. Any help would be great.

#include <iostream>
#include <cctype>
using std::cin;
using std::cout;

int main()
{
    int id, grade = 0, mark = 0;
    char yesno = 'y';

    do
    {
        do
        {
            cout << "Please enter ID: ";
            cin >> id;
            cout << "Please enter grade: ";
            cin >> grade;
            cout << "Please enter credit hours: ";
            cin >> mark;
        } while ( (mark < 0) || (mark > 100) );

        if (grade >= 4)
            cout << "Student " << id << " got a grade A";
        else if (grade == 3)
            cout << "Student " << id << " got a grade B";
        else if (grade == 2)
            cout << "Student " << id << " got a grade C";
        else if (grade == 1)
            cout << "Student " << id << " got a grade D";
        else
            cout << "Student " << id << " got a grade F";

        cout << "\nAgain?: ";
        cin >> yesno;
    } while (toupper(yesno) != 'N');

    return 0;
}
https://cboard.cprogramming.com/cplusplus-programming/25708-help.html
Today I will cover how to look at type information from the command line of windbg/kd. You can do all of this in the UI with a mouse, but that takes too long ;). I like to keep my hands on the keyboard and not move around. More importantly, by learning the command line way, you can embed commands to execute in a breakpoint statement so that every time the bp is hit, you get all of your information displayed for you without any additional typing. I am going to cover the "dt" and "??" commands. Here is a C++ program which I will use in demonstrating these commands. Before executing any of the commands, I set a bp on the "return 0;" statement and I am now at that point in the program.

#include <windows.h>
#include <Stdio.h>

typedef struct _BELOW
{
    _BELOW() : One(1) {}
    ULONG One;
} BELOW, *PBELOW;

typedef struct _TOP
{
    _TOP(PBELOW Below) : Three(3), Pointer(Below)
    {
        Embedded.Two = 2;
    }

    struct
    {
        ULONG Two;
    } Embedded;

    PBELOW Pointer;
    ULONG Three;
} TOP, *PTOP;

int _cdecl main(int argc, char *argv[])
{
    BELOW below;
    TOP top(&below);
    ULONG values[4], i;

    for (i = 0; i < sizeof(values)/sizeof(values[0]); i++)
    {
        values[i] = i + 1;
    }

    return 0;
}

dt (which stands for display type) is a very powerful command which can perform a lot of expression evaluation for you. You can have it simply dump a local variable [1]. You can refine the output to a specific field [2] or substructure [3] and [4]. Note the difference between the output of [3] and [4]: by including the ".", dt will dump all subfields of the structure.

[1] 0:000> dt top
   +0x000 Embedded : _TOP::<unnamed-tag>
   +0x004 Pointer  : 0x01002034 _BELOW
   +0x008 Three    : 3

[2] 0:000> dt top Three
   +0x008 Three : 3

[3] 0:000> dt top Embedded
   +0x000 Embedded : _TOP::<unnamed-tag>

[4] 0:000> dt top Embedded.
   +0x000 Embedded :
      +0x000 Two : 2

Even better, dt can iterate over an array [5] and can follow pointers from the structure [6]. But what if you want to dump all the fields of the pointer? dt top Pointer->.
doesn't work (because it is not a valid C expression). The "??" command comes to the rescue here; you can use it to evaluate the pointer value [7]. Not only will ?? evaluate C or C++ expressions that contain variables, it can also evaluate expressions using types [8].

[5] 0:000> dt -a values
Local var @ 0x6ff68 Type unsigned long[]
[0] @ 0006ff68
---------------------------------------------
   1
[1] @ 0006ff6c
---------------------------------------------
   2
[2] @ 0006ff70
---------------------------------------------
   3
[3] @ 0006ff74
---------------------------------------------
   4

[6] 0:000> dt top Pointer->One
   +0x004 Pointer :
      +0x000 One : 1

[7] 0:000> ?? top.Pointer
struct _BELOW * 0x01002034
   +0x000 One : 1

[8] 0:000> ?? sizeof(top)
unsigned int 0xc
0:000> ?? sizeof(_TOP)
unsigned int 0xc

This is just the tip of the iceberg with these commands. I suggest that you open up the debugger help (c:\debuggers\debugger.chm) and look up these commands and their syntax… it is pretty powerful stuff!

dump/display type, surely? Or is it historically dump type, and corrected to display type?

fixed. thanks for catching the typo, i meant type.

darn… I thought there'd be some interesting history there. Keep up the good blogging!

history of what? the debuggers or WDM?
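Tying this back to the point at the top about embedding commands in a breakpoint: dt and ?? can be placed in the command string of a bp so they run automatically on every hit. A hypothetical example (the module name and offset are made up for illustration):

```
0:000> bp myapp!main+0x42 "dt top; ?? top.Pointer; g"
```

Each time the breakpoint fires, the debugger dumps top, evaluates the pointer, and then resumes execution, with no extra typing required.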
https://blogs.msdn.microsoft.com/doronh/2006/03/22/debugger-commands-dt-that-make-my-life-easier-part-4/
5 years, 3 months ago.

How to read MPR121 status registers from inside the interrupt ISR

I have been trying to work this problem out for several hours now with no luck. I am using a Nucleo F767ZI board and an MPR121 sensor for capacitive touch sensing. As a test program to ensure that the sensor worked, I had the microcontroller continuously read the status register 0x00 (over I2C) in an infinite while loop, and when one of the first three electrodes was touched, a corresponding LED would light up. This worked perfectly. I am now attempting to modify the code to use InterruptIn so that the microcontroller isn't continuously checking the status register. The sensor has an IRQ out pin that is pulled high until a change in the status of one of the electrodes is detected, in which case it is pulled down until either register 0 or register 1 (the electrode status registers) is read from. When I run this code, the callback function is successfully called, but when register 0 is read it always seems to return 0, as if none of the electrodes were touched, yet it pulls the interrupt pin low correctly when touched and resets when I attempt to read. I don't know if this is a problem related to the sensor, or if the Nucleo board isn't reading the values correctly.
#include "mbed.h"
#include "MPR121.h"

//Serial pc(USBTX, USBRX);
DigitalOut red(PF_13);
DigitalOut green(PE_9);
DigitalOut blue(PE_11);

///if defined TARGET_LPC1768 || TARGET_LPC11U24
I2C i2c(PB_9, PB_8);
InterruptIn irq(PF_12);
MPR121 touch_pad(i2c, irq, MPR121::ADDR_VSS);

void checkPads()
{
    uint8_t value = touch_pad.readRegister(MPR121::ELE0_7_STAT);

    if (value % 2 < 1) red = 0;
    else red = 1;

    if (value % 4 < 2) green = 0;
    else green = 1;

    if (value % 8 < 4) blue = 0;
    else blue = 1;
}

void fallInterrupt()
{
    checkPads();
}

int main()
{
    touch_pad.init();
    touch_pad.enable();
    irq.fall(&fallInterrupt);

    while (1) {}
}

Essentially, if the checkPads() function is moved to the while loop it performs as expected, but moving it to the interrupt function causes it to not work properly, even though the fallInterrupt function is being called when expected. Any insight on the problem would be appreciated.

1 Answer

5 years, 3 months ago. One further thought: the library you are using for the touch sensor reads the status registers on an interrupt. You may want to try editing the library so that it's not clearing the status on every interrupt. Commenting out line 99 should do that.

I am assuming the line 99 you are referring to is that of MPR121.cpp: _irq->fall(this, &MPR121::handler); The handler doesn't appear to set any of the registers to zero. I commented it out anyway just to test it and it didn't fix the problem. The flag idea, however, is one way to approach it and I had thought about it. There are other sensors to consider in the project, though, so I don't want to have the while loop constantly reading the status of each flag variable, but if worst comes to worst it is certainly one way to go. I appreciate your suggestions! posted by 20 Jan 2017

Sorry - I can't help on the problem, I don't know either the board or sensor and I can't see anything obviously wrong. But for future reference, if you wrap your code in the forum's code tags, then the formatting of the code will be displayed correctly.
Without them, the formatting and which bits are comments get lost, making things a lot harder to read. posted by Andy A 20 Jan 2017

Thinking about it, can you try something a little different: have the interrupt set a flag that the registers need to be checked, and then in the main code look for that flag and then read the registers (and clear the flag, obviously). I have no idea whether this is suitable for your final application or not, but it would at least show that the problem is with the I2C read in an interrupt, rather than being caused by only performing a single register read as opposed to constant register reads. posted by Andy A 20 Jan 2017

Thank you for pointing this out to me, I have since fixed the formatting. posted by Robert Cook 20 Jan 2017
https://os.mbed.com/questions/76740/How-to-read-MPR121-status-registers-from/
I’ve already written the much-delayed blog on Hosting, but I can’t post it yet because it mentions a couple of new Whidbey features, which weren’t present in the PDC bits. Obviously Microsoft doesn’t want to make product disclosures through my random blog articles. I’m hoping this will be sorted out in another week or two. While we’re waiting, I thought I would talk briefly(!) about pumping and apartments. The CLR made some fundamental decisions about OLE, thread affinity, reentrancy and finalization. These decisions have a significant impact on program correctness, server scalability, and compatibility with legacy (i.e. unmanaged) code. So this is going to be a blog like the one on Shutdown from last August (see). There will be more detail than you probably care to know about one of the more frustrating parts of the Microsoft software stack. First, an explanation of my odd choice of terms. I’m using OLE as an umbrella which includes the following pieces of technology:

COM – the fundamental object model, like IUnknown and IClassFactory
DCOM – remoting of COM using IDL, NDR pickling and the SCM
Automation – IDispatch, VARIANT, Type Libraries, etc.
Active/X – Protocols for controls and their containers

Next, some disclaimers: I am not and have never been a GUI programmer. So anything I know about Windows messages and pumping is from debugging GUI applications, not from writing them. I’m not going to talk about WM_PENCTL notifications or anything else that requires UI knowledge. Also, I’m going to point out a number of problems with OLE and apartments. The history of the CLR and OLE are closely related. In fact, at one point COM+ 1.0 was known internally as COM98 and the CLR was known internally as COM99. We had some pretty aggressive ship targets back then! The bottom line is that OLE has had at least as much impact on Microsoft products and the industry, in its day, as .NET is having now. But, like anything else, OLE has some flaws.
In contrast to the stark architectural beauty of COM and DCOM, late-bound Automation is messy. At the time this was all rolled out to the world, I was at Borland and then Oracle. As an outsider, it was hard for me to understand how one team could have produced such a strange combination. Of course, Automation has been immensely successful – more successful than COM and DCOM. My aesthetic taste is clearly no predictor of what people want. Generally, people want whatever gets the job done, even if it does so in an ad hoc way. And Automation has enabled an incredible number of application scenarios.

Apartments

If there’s another part of OLE that I dislike, it’s Single Threaded Apartments. Presumably everyone knows that OLE offers three kinds of apartments:

Single Threaded Apartment (STA) – one affinitized thread is used to call all the objects residing in the apartment. Any call on these objects from other threads must perform cross-thread marshaling to this affinitized thread, which dispatches the call. Although a process can have an arbitrary number of STAs (with a corresponding number of threads), most client processes have a single Main STA and the GUI thread is the affinitized thread that owns it.

Multiple Threaded Apartment (MTA) – each process has at most one MTA at a time. If the current MTA is not being used, OLE may tear it down. A different MTA will be created as necessary later. Most people think of the MTA as not having thread affinity. But strictly speaking it has affinity to a group of threads. This group is the set of all the threads that are not affinitized to STAs. Some of the threads in this group are explicitly placed in the MTA by calling CoInitializeEx. Other threads in this group are implicitly in the MTA because the MTA exists and because these threads haven’t been explicitly placed into STAs. So, by the strict rules of OLE, it is not legal for STA threads to call on any objects in the MTA. Instead, such calls must be marshaled from the calling STA thread over to one of the threads in the MTA before the call can legally proceed.

Neutral Apartment (NA) – this is a recent invention (Win2000, I think). There is one NA in the process. Objects contained in the NA can be called from any thread in the process (STA or MTA threads). There are no threads associated with the NA, which is why it isn’t called NTA. Calls into NA objects can be relatively efficient because no thread marshaling is ever required. However, these cross-apartment calls still require a proxy to handle the transition between apartments. Calls from an object in the NA to an object in an STA or the MTA might require thread marshaling. This depends on whether or not the current thread is suitable for calling into the target object. For example, a call from an STA object to an NA object and from there to an MTA object will require thread marshaling during the transition out of the NA into the MTA.

Threading

The MTA is effectively a free-threaded model. (It’s not quite a free-threaded model, because STA threads aren’t strictly allowed to call on MTA objects directly). From an efficiency point of view, it is the best threading model. Also, it imposes the least semantics on the application, which is also desirable. The main drawback with the MTA is that humans can’t reliably write free-threaded code. Well, a few developers can write this kind of code if you pay them lots of money and you don’t ask them to write very much. And if you code review it very carefully. And you test it with thousands of machine hours, under very stressful conditions, on high-end MP machines like 8-ways and up. And you’re still prepared to chase down a few embarrassing race conditions once you’ve shipped your product. But it’s not a good plan for the rest of us. The NA model is truly free-threaded, in the sense that any thread in the process can call on these objects.
All such threads must still transition through a proxy layer that maintains the apartment boundary. But within the NA all calls are direct and free-threaded. This is the only apartment that doesn’t involve thread affinity. Although the NA is free-threaded, it is often used in conjunction with a lock to achieve rental threading. The rental model says that only one thread at a time can be active inside an object or a group of objects, but there is no restriction on which thread this might be. This is efficient because it avoids thread marshaling. Rather than marshaling a call from one thread to whatever thread is affinitized to the target objects, the calling thread simply acquires the lock (to rent the context) and then completes the call on the current thread. When the thread returns back out of the context, it releases the lock and now other threads can make calls. If you call out of a rental context into some other object (as opposed to the return pathway), you have a choice. You can keep holding the lock, in which case other threads cannot rent the context until you fully unwind. In this mode, the rental context supports recursion of the current thread, but it does not support reentrancy from other threads. Alternatively, the thread could release the lock when it calls out of the rental context, in which case it must reacquire the lock when it unwinds back and returns to the rental context. In this mode, the rental context supports full reentrancy. Throughout this blog, we’ll be returning to this fundamental decision of whether to support reentrancy. It’s a complex issue. If only recursion is supported on a rental model, it’s clear that this is a much more forgiving world for developers than a free-threaded model. Once a thread has acquired the rental lock, no other threads can be active in the rented objects until the lock has been released. And the lock will not be released until the thread fully unwinds from the call into the context. 
Even with reentrancy, the number of places where concurrency can occur is limited. Unless the renting thread calls out of the context, the lock won’t be released and the developer knows that other threads aren’t active within the rented objects. Unfortunately, it might be hard for the developer to know all the places that call out of the current context, releasing the lock. Particularly in a componentized world, or a world that combines application code with frameworks code, the developer can rarely have sufficient global knowledge. So it sounds like limiting a rental context to same-thread recursion is better than allowing reentrancy during call outs, because the developer doesn’t have to worry about other threads mutating the state of objects in the rental context. This is true. But it also means that the resulting application is subject to more deadlocks. Imagine what can happen if two rental contexts are simultaneously making calls to each other. Thread T1 holds the lock to rent context C1. Thread T2 holds the lock to rent context C2. If T1 calls into C2 just as T2 calls into C1, and we are on the recursion plan, we have a classic deadlock. Two locks have been taken in different sequences by two different threads. Alternatively, if we are on a reentrancy plan, T1 will release the lock for C1 before contending for the lock on C2. And T2 will release the lock for C2 before contending for the lock on C1. The deadlock has been avoided, but T1 will find that the objects in C1 have been modified when it returns. And T2 will find similar surprises when it returns to C2.

Affinity

Anyway, we now understand the free-threaded model of the MTA and NA and we understand how to build a rental model on top of these via a lock. How about the single-threaded affinitized model of STAs? It’s hard to completely describe the semantics of an STA, because the complete description must incorporate the details of pages of OLE pumping code, the behavior of 3rd party IMessageFilters, etc.
But generally an STA can be thought of as an affinitized rental context with reentrancy and strict stacking. By this I mean: - It is affinitized rental because all calls into the STA must marshal to the correct thread and because only one logical call can be active in the objects of the apartment at any time. (This is necessarily the case, since there is only ever one thread). - It has reentrancy because every callout from the STA thread effectively releases the lock held by the logical caller and allows other logical callers to either enter or return back to the STA. - It has strict stacking because one stack (the stack of the affinitized STA thread) is used to process all the logical calls that occur in the STA. When these logical calls perform a callout, the STA thread reentrantly picks up another call in, and this pushes the STA stack deeper. When the first callout wants to return to the STA, it must wait for the STA thread’s stack to pop all the way back to the point of its own callout. That point about strict stacking is a key difference between true rental and the affinitized rental model of an STA. With true rental, we never marshal calls between threads. Since each call occurs on its own thread, the pieces of stack for different logical threads are never mingled on an affinitized thread’s actual stack. Returns back into the rental context after a callout can be processed in any order. Returns back into an STA after a callout must be processed in a highly constrained order. We’ve already seen a number of problems with STAs due to thread affinity, and we can add some more. Here’s the combined list: - Marshaling calls between threads is expensive, compared to taking a lock. - Processing returns from callouts in a constrained fashion can lead to inefficiencies. 
For instance, if the topmost return isn’t ready for processing yet, should the affinitized thread favor picking up a new incoming call (possibly leading to unconstrained stack growth), or should it favor waiting for the topmost return to complete (possibly idling the affinitized thread completely and conceivably resulting in deadlocks)?
- Any conventional locks held by an affinitized thread are worthless. The affinitized thread is processing an arbitrary number of logical calls, but a conventional lock (like an OS CRITICAL_SECTION or managed Monitor) will not distinguish between these logical calls. Instead, all lock acquisitions are performed by the single affinitized thread and are granted immediately as recursive acquisitions. If you are thinking of building a more sophisticated lock that avoids this issue, realize that you are making that classic reentrancy vs. deadlock decision all over again.
- Imagine a common server situation. The first call comes in from a particular client, creates a few objects (e.g. a shopping cart) and returns. Subsequent calls from that client manipulate that initial set of objects (e.g. putting some items into the shopping cart). A final call checks out the shopping cart, places the order, and all the objects are garbage collected. Now imagine that all those objects are affinitized to a particular thread. As a consequence, the dispatch logic of your server must ensure that all calls from the same client are routed to the same thread. And if that thread is busy doing other work, the dispatch logic must delay processing the new request until the appropriate affinitized thread is available. This is complicated and it has a severe impact on scalability.
- STAs must pump. (How did I get this far without mentioning pumping?)
- Any STA code that assumed a single-threaded world for the process, rather than just for the apartment, might not pump. Such code breaks when we introduce the CLR into the process, as we will see.
Failure to Pump

Let’s look at those last two bullet points in more detail. When your STA thread is doing nothing else, it needs to be checking to see if any other threads want to marshal some calls into it. This is done with a Windows message pump. If the STA thread fails to pump, these incoming calls will be blocked. If the incoming calls are GUI SendMessages or PostMessages (which I think of as synchronous and asynchronous calls, respectively), then failure to pump will produce an unresponsive UI. If the incoming calls are COM calls, then failure to pump will result in calls timing out or deadlocking. If processing one incoming call is going to take a while, it may be necessary to break up that processing with intermittent visits to the message pump. Of course, if you pump you are allowing reentrancy to occur at those points. So the developer loses all his wonderful guarantees of single threading.

Unfortunately, there’s a whole lot of STA code out there which doesn’t pump adequately. For the most part, we see this in non-GUI applications. If you have a GUI application that isn’t pumping enough, it’s obvious right there on the screen. Those bugs tend to get fixed. For non-GUI applications, a failure to pump may not be noticed in unmanaged code. When that code is moved to managed code (perhaps by re-compiling some VB6 code as VB.NET), we start seeing bugs. Let’s look at a couple of real-world cases that we encountered during V1 of the CLR and how the lingering effects of these cases are still causing major headaches for managed developers and for Microsoft Support. I’ll describe a server case first, and then a client case.

ADO.NET and ASP.NET are a winning combination. But ASP.NET also supports an ASP compatibility mode. In this mode, legacy ASP pages can be served up by the managed ASP.NET pipeline.
Such pages were written before we invented our managed platform, so they use a threadpool of STA threads. The purpose of this STA threadpool was to allow legacy STA COM objects in general, and VB6 objects in particular, to be moved from the client to the server. The result suffers from the scaling problems I alluded to before, since requests are dispatched on up to 100 STA threads with careful respect for any affinity. Also, VB6 has a variable scope which corresponds to “global” (I forget its name), but which is treated as per-thread when running on the server. If there are more than 100 clients using a server, multiple clients will share a single STA thread based on the whims of the request dispatch logic. This means that global variables are shared between sets of clients in a surprising fashion, based on the STA that they happen to correspond to.

A typical ASP page written in VBScript would establish a (hopefully pooled) database connection from ADO, use it to service the request, and then release everything. The ASP page contains no explicit pumping code. Indeed, at no point was the STA actually pumped. Although this is a strict violation of the rules, it didn’t cause any problems. That’s because there are no GUI messages or inter-apartment COM calls that need to be serviced.

This technique of executing ASP pages on STAs ran into trouble when the pages were served up by the managed pipeline, because the CLR releases its COM objects from a separate Finalizer thread. (It’s important that finalization occurs on non-application threads, since we don’t want to be holding any application locks when we call the Finalize method. And today the CLR only has a single Finalizer thread, but this is an implementation detail. It’s quite likely that in the future we will concurrently call Finalize methods on many objects, perhaps by moving finalization duties over to the ThreadPool. This would address some scalability concerns with finalization, and would also allow us to make stronger guarantees about the availability of the finalization service.)

Our COM Interop layer ensures that we almost only ever call COM objects in the correct apartment and context.
The one place where we violate COM rules is when the COM object’s apartment or context has been torn down. In that case, we will still call IUnknown::Release on the pUnk to try to recover its resources, even though this is strictly illegal. We’ve gone backwards and forwards on whether this is appropriate, and we provide a Customer Debug Probe so that you can detect whether this is happening in your application. Anyway, let’s pretend that we absolutely always call the pUnk in the correct apartment and context. In the case of an object living in an STA, this means that the Finalizer thread will marshal the call to the affinitized thread of that STA. But if that STA thread is not pumping, the Finalizer thread will block indefinitely while attempting to perform the cross-thread marshaling. The effect on a server is crippling. The Finalizer thread makes no progress. The number of unreleased pUnks grows without bounds. Eventually some resource (usually memory) is exceeded and the process crashes. One solution is to edit the original ASP page to pump the underlying STA thread that it is executing on. A light-weight way to pump is to call Thread.CurrentThread.Join(0). This causes the current thread to block until the current thread dies (which isn’t going to happen) or until 0 milliseconds have elapsed – whichever happens first. I’ll explain later why this also performs some pumping and why this is a controversial aspect of the CLR. A heavier-weight way to pump is to call GC.WaitForPendingFinalizers. This not only performs pumping, but it also waits for the Finalization queue to drain. If you are porting a page that produces a modest number of COM objects, doing a simple Join on each page may be sufficient. If your page performs elaborate processing, perhaps creating an unbounded number of COM objects in a loop, then you may need to either add a Join within the loop or WaitForPendingFinalizers at the end of the page processing. 
The only way to really know is to experiment with both techniques, measuring the growth of the Finalization queue and the impact on server throughput.

ADO’s Threading Model

There was another problem with using ADO in this environment, and it has to do with the ThreadingModel that COM classes declare when they are registered. If these classes are registered as Single, OLE will carefully ensure that their instances can only be called from the thread that they were created on. This implies that the objects can assume a single-threaded view of the world and they do not need to be written in a thread-safe manner. If these classes are registered as Both, OLE will ensure that their instances are only called from threads in the right apartment. But if that apartment is the MTA, these objects better have been written in a thread-safe manner. For example, they had better be using InterlockedIncrement and Decrement, or an equivalent, for reference counting.

Unfortunately, the ADO classes did not fully live up to the threading model they were registered with. In fact, the legacy ADO objects harbored some single-threaded assumptions, as the following failure sequence shows:

- The page queries up an ADO Row object, which enters managed code via COM Interop as an RCW (runtime-callable wrapper).
- By making a COM call on this RCW, the page navigates to a field value. This field value also enters managed code via COM Interop as an RCW.
- The page now makes a COM call via ADO which results in a call out to the remote database. At this point, the STA thread is pumped by the DCOM remote call. Since this is a remote call, it’s going to take a while before it returns.
- The garbage collector decides that it’s time to collect. At this point, the RCW for the field value is still reachable and is reported. The RCW for the Row object is no longer referenced by managed code and is collected.
- The Finalizer thread notices that the pUnk underlying the Row’s RCW is no longer in use, and it makes the cross-apartment call from the Finalizer thread’s apartment (MTA) to the Row object’s apartment (STA).
- Recall that the STA thread is pumping for the duration of the remote database call (#3 above).
It picks up the cross-thread call from the Finalizer (#5 above) and performs the Release on the Row object. This is the final Release and deletes the unmanaged Row object from memory. This logical call unwinds and the Finalizer thread is unblocked (hurray). The STA thread returns to pumping.
- The remote database call returns back to the server machine. The STA thread picks it up from its pumping loop and returns back to the page, unwinding the thread.
- The page now updates the field value, which involves a COM call to the underlying object. ADO crashes or randomly corrupts memory.

What happened? The ADO objects assumed that their clients would drive them from a single thread in a simple, serial fashion, and in particular that a Row would never be released while one of its field values was still in use. This assumption worked fine in the days of ASP and VB6. So nobody even noticed the bug until the CLR violated those threading assumptions – without violating the underlying OLE rules, of course.

It was impractical to fix this by opening up the legacy ADO code. Instead, the problem had to be addressed from outside of ADO.

No Typelib Registered

Incidentally, we ran into another very common problem when we moved existing client or server COM applications over to managed code. Whenever an application uses a COM object, it tries hard to match the thread of the client to the ThreadingModel of the server. In other words, if the application needs to use a ThreadingModel=Main COM object, the application tries to ensure that the creating thread is in an STA. Similarly, if the application needs to use a ThreadingModel=Free COM object, it tries to create this object from an MTA thread. Even if a COM object is ThreadingModel=Both, the application will try to access the object from the same sort of thread (STA vs. MTA) as the thread that created the object. One reason for doing this is performance. If you can avoid an apartment transition, your calls will be much faster. Another reason has to do with pumping and reentrancy. If you make a cross-apartment call into an STA, the STA better be pumping to pick up your call. And if you make a cross-apartment call out of an STA, your thread will start pumping and your application becomes reentrant.
This is a small dose of free-threading, and many application assumptions start to break. A final reason for avoiding apartment transitions is that they often aren’t supported. For instance, most ActiveX scenarios require that the container and the control are in the same STA. If you introduce an apartment boundary (even between two STAs), bizarre cases like Input Synchronous messages stop working properly. The net result is that a great many applications avoid using COM objects across apartment boundaries. And this means that – even if that COM object is nominally marshalable across an apartment boundary – this often isn’t being tested. So an application might install itself without ensuring that the typelib of the COM component is actually registered. When the application is moved to managed code, developers are frustrated to see InvalidCastExceptions on the managed side. A typical sequence is that they successfully ‘new’ the COM object, implying that the CoCreate returned a pUnk which was wrapped in an RCW. Then when they cast it to one of the interfaces that they know is supported, a casting exception is thrown. This casting exception is due to a QueryInterface call failing with E_NOINTERFACE. Yet this HRESULT is not returned by the COM object, which does indeed support the interface. Instead, it is returned by a COM apartment proxy which sits between the RCW and that COM object. The COM apartment proxy is simply failing to marshal the interface across the apartment boundary – usually because the COM object is using the OLEAUT marshaler and the Typelib has not been properly registered. This is a common failure, and it’s unfortunate that a generic E_NOINTERFACE doesn’t lead to better debuggability for this case. Finally, I can’t help but mention that the COM Interop layer added other perturbations to many unmanaged COM scenarios that seemed to be working just fine. 
Common perturbations from managed code include garbage collection, a Finalizer thread, strict conformance to OLE marshaling rules, and the fact that managed objects are agile with respect to COM apartments and COM+ contexts (unless they derive from ServicedComponent). For instance, Trident required that all calls on its objects occur on the correct thread. But Trident also had an extension model where 3rd party objects could be aggregated onto their base objects. Unfortunately, the aggregator performed blind delegation to the 3rd party objects. And – even more unfortunate – this blind delegation did not exclude QI’s for IMarshal. Of course, managed objects implement IMarshal to achieve their apartment and context agility. So if Trident aggregated a managed object as an extension, the containing Trident object would attempt to become partially agile in a very broken way. Hopefully we found and dealt with most of these issues before we shipped V1.

Not Pumping a Client

I said I would describe two cases where non-pumping unmanaged code caused problems when we moved to managed code. The above explains, in great detail, how a server can get into trouble. Now consider the client. We all know that a WinForms GUI client is going to put the main GUI thread into an STA. And we know that there’s a lot of pumping in a GUI application, or else not much is going to show on the screen. Assume for a moment that a Console application also puts its main thread into an STA. If that main thread creates any COM objects via COM Interop, and if those COM objects are ThreadingModel=Apartment or Main, then the Finalizer thread must marshal its Release calls into that STA. If the Console application’s main thread never pumps, finalization stops making progress, just as it did on the server.

On a well-loaded server, that failure is quickly noticed by the developer or by the folks in operations. But on a client, this might be just a mild case of constipation. The rate of creation of finalizable objects may be low enough that the problem is never noticed. Or it may be noticed as a gradual build up of resources. If the problem is reported to Microsoft Support, the customer generally categorizes it as a garbage collection bug.
So what is the apartment of a Console application’s main thread? Well, it depends. If you build a Console application in Notepad, the main thread is likely to start off in the MTA. If you build a Console application with Visual Studio, then if you pick C# or VB.NET your main thread is likely to be in an STA. If you build a Console application with Visual Studio and you choose managed C++, your main thread is likely to be in an MTA for V1 or V1.1. I think it’s likely to be in an STA for our next release.

Wow. Why are we all over the place on this? Mostly, it’s because there is no correct answer. Either the developer is not going to use any COM objects in his Console application, in which case the choice doesn’t really matter, or the developer is going to use some COM objects and this should inform his decision. For instance, if the developer will use COM objects with ThreadingModel=Apartment or Main, an STA main thread avoids cross-apartment transitions; if he will use ThreadingModel=Free objects, the MTA is the appropriate choice. Either way, the developer has some responsibility. Unfortunately, the choice of a default is typically made by the project type that he selects in Visual Studio, or is based on the CLR’s default behavior (which favors MTA). And realistically the subtleties of apartments and pumping are beyond the knowledge (or interest) of most managed developers. Let’s face it: nobody should have to worry about this sort of thing.

The Managed CoInitialize Mess

There are three ways to select an apartment choice for the main thread of your Console application. All three of these techniques have concerns associated with them.

1) You can place either an STAThreadAttribute or MTAThreadAttribute onto the main method.
2) You can perform an assignment to Thread.CurrentThread.ApartmentState as one of the first statements of your main method (or of your thread procedure if you do a Thread.Start).
3) You can accept the CLR’s default of MTA.

So what’s wrong with each of these techniques? The first technique is the preferred method, and it works very well for C#.
After some tweaks to the VB.NET compiler before we shipped V1, it worked well for VB too. Managed C++ still doesn’t properly support this technique. The reason is that the entrypoint of a managed C++ EXE isn’t actually your ‘main’ routine. Instead, it’s a method inside the C-runtime library. That method eventually delegates to your ‘main’ routine. But the CLR doesn’t scan through the closure of calls from the entrypoint when looking for the custom attribute that defines the threading model. If the CLR doesn’t find it on the method that is the EXE’s entrypoint, it stops looking. The net result is that your attribute is quietly ignored for C++. I’m told that this will be addressed in Whidbey, by having the linker propagate the attribute from ‘main’ to the CRT entrypoint. And indeed this is how the VB.NET compiler works today. What’s wrong with the second technique? Unfortunately, it is subject to a race condition. Before the CLR can actually call your thread procedure, it may first call some module constructors, class constructors, AssemblyLoad notifications and AssemblyResolve notifications. All of this execution occurs on the thread that was just created. What happens if some of these methods set the thread’s ApartmentState before you get a chance? What happens if they call Windows services like the clipboard that also set the apartment state? A more likely scenario is that one of these other methods will make a PInvoke call that marshals a BSTR, SAFEARRAY or VARIANT. Even these innocuous operations can force a CoInitializeEx on your thread and limit your ability to configure the thread from your thread procedure. When you are developing your application, none of the above is likely to occur. The real nightmare scenario is that a future version of the CLR will provide a JIT that inlines a little more aggressively, so some extra class constructors execute before your thread procedure. 
In other words, you will ship an application that is balanced on a knife edge here, and this will become an App Compatibility issue for all of us. (See my other postings for more details on the sort of thing we worry about here.) In fact, for the next release of the CLR we are seriously considering making it impossible to set the apartment state on a running thread in this manner. At a minimum, you should expect to see a Customer Debug Probe warning of the risk here.

And the third technique from above has a similar problem. Recall that threads in the MTA can be explicitly placed there through a CoInitializeEx call, or they can be implicitly treated as being in the MTA because they haven’t been placed into an STA. The difference between these two cases is significant. If a thread is explicitly in the MTA, any attempt to configure it as an STA thread will fail with an error of RPC_E_CHANGED_MODE. By contrast, if a thread is implicitly in the MTA it can be moved to an STA by calling CoInitializeEx. This is more likely than it may sound. If you attempt a clipboard operation, or you call any number of other Windows services, the code you call may attempt to place your thread in the STA. And when you accept the CLR default behavior, it currently leaves the thread implicitly in the MTA, and the thread is therefore subject to reassignment.

This is another place where we are seriously considering changing the rules in the next version of the CLR. Rather than place threads implicitly in the MTA, we are considering making this assignment explicit and preventing any subsequent reassignment. Once again, our motivation is to reduce the App Compat risk for applications after they have been deployed.

Speaking of race conditions and apartments, the CLR has a nasty bug which was introduced in V1 and which we have yet to remove. I’ve already mentioned that any threads that aren’t in STAs or explicitly in the MTA are implicitly in the MTA. That’s not strictly true.
These threads are only in the MTA if there is an MTA for them to be in. There is an MTA if OLE is active in the process and if at least one thread is explicitly in the MTA. When this is the case, all the other unconfigured threads are implicitly in the MTA. But if that one explicit thread should terminate or CoUninitialize, then OLE will tear down the MTA. A different MTA may be created later, when a thread explicitly places itself into it. And at that point, all the unconfigured threads will implicitly join it. But this destruction and recreation of the MTA has some serious impacts on COM Interop. In fact, any changes to the apartment state of a thread can confuse our COM Interop layer, cause deadlocks on downlevel platforms, and lead to memory leaks and violation of OLE rules. Let’s look at how this specific race condition occurs first, and then I’ll talk about the larger problems here.

- An unmanaged thread CoInitializes itself for the MTA and calls into managed code.
- While in managed code, that thread introduces some COM objects to our COM Interop layer in the form of RCWs, perhaps by ‘new’ing them from managed code.
- The CLR notices that the current thread is in the MTA, and realizes that it must “keep the MTA alive.” We signal the Finalizer thread to put itself explicitly into the MTA via CoInitializeEx.
- The unmanaged thread returns out to unmanaged code where it either dies or simply calls CoUninitialize. The MTA is torn down.
- The Finalizer thread wakes up and explicitly CoInitializes itself into the MTA. Oops. It’s too late to keep the original MTA alive and it has the effect of creating a new MTA. At least this one will live until the end of the process.

As far as I know, this is the only race condition in the CLR that we haven’t fixed. Why have we ignored it all these years? First, we’ve never seen it reported from the field. This isn’t so surprising when you consider that the application often shares responsibility for keeping the MTA alive.
Many applications are aware of this obligation and – if they use COM – they always keep an outstanding CoInitialize on one MTA thread so the apartment won’t be torn down. Second, I generally resist fixing bugs by adding inter-thread dependencies. It would be all too easy to create a deadlock by making step 3 wait for the Finalizer thread to CoInitialize itself, rather than just signaling it to do so. This is particularly true since the causality of calls from the Finalizer to other threads is often opaque to us, as I’ll explain later. And we certainly don’t want to create a dedicated thread for this purpose. Dedicated threads have a real impact on Terminal Server scenarios, where the cost of one thread in a process is multiplied by all the processes that are running. Even if we were prepared to pay this cost, we would want to create this thread lazily. But synchronizing with the creation of another thread is always a dangerous proposition. Thread creation involves taking the OS loader lock and making DLL_THREAD_ATTACH notifications to all the DllMain routines that didn’t explicitly disable these calls. The bottom line is that the fix is expensive and distasteful. And it speaks to a more general problem, where many different components in a process may be individually spinning up threads to keep the MTA from being recycled. A better solution is for OLE to provide an API to keep this apartment alive, without requiring all those dedicated threads. This is the approach that we are pursuing for the long term. In our general cleanup of the CLR’s treatment of CoInitialize, we are also likely to change the semantics of assigning the current thread’s ApartmentState to Unknown. In V1 & V1.1 of the CLR, any attempt to set the state to Unknown would throw an ArgumentOutOfRangeException, so we’re confident that we can make this change without breaking applications. 
If the CLR has performed an outstanding CoInitializeEx on this thread, we may treat the assignment to Unknown as a request to perform a CoUninitialize to reverse the operation. Currently, the only way you can CoUninitialize a thread is to PInvoke to the OLE32 service. And such changes to the apartment state are uncoordinated with the CLR. Now why does it matter if the apartment state of a thread changes, without the CLR knowing? It matters because:

1) The CLR may hold RCWs over COM objects in the apartment that is about to disappear. Without a notification, we cannot legally release those pUnks. As I’ve already mentioned, we break the rules here and attempt to Release anyway. But it’s still a very bad situation and sometimes we will end up leaking.

2) The CLR will perform limited pumping of STA threads when you perform managed blocking (e.g. WaitHandle.WaitOne). If we are on a recent OS, we can use the IComThreadingInfo interface to efficiently determine whether we should pump or not. But if we are on a downlevel platform, we would have to call CoInitialize prior to each blocking operation and check for a failure code to absolutely determine the current state of the thread. This is totally impractical from a performance point of view. So instead we cache what we believe is the correct apartment state of the thread. If the application performs a CoInitialize or CoUninitialize without informing us, then our cached knowledge is stale. So on downlevel platforms we might neglect to pump an STA (which can cause deadlocks). Or we may attempt to pump an MTA (which can cause deadlocks).

Incidentally, if you ever run managed applications under a diagnostic tool like AppVerifier, you may see complaints from that tool at process shutdown that we have leaked one or more CoInitialize calls. In a well-behaved application, each CoInitialize would have a balancing CoUninitialize. However, most processes are not so well-behaved.
It’s typical for applications to terminate the process without unwinding all the threads of the process. There’s a very detailed description of the CLR’s shutdown behavior in an earlier posting. The bottom line here is that the CLR is heavily dependent on knowing exactly when apartments are created and destroyed, or when threads become associated or disassociated with those apartments. But the CLR is largely out of the loop when these operations occur, unless they occur through managed APIs. Unfortunately, we are rarely informed. For an extreme example of this, the Shell has APIs which require an STA. If the calling thread is implicitly in the MTA, these Shell APIs CoInitialize that calling thread into an STA. As the call returns, the API will CoUninitialize and rip down the apartment. We would like to do better here over time. But there are some pretty deep problems and most solutions end up breaking an important scenario here or there.

Back to Pumping

Enough of the CoInitialize mess. I mentioned above that managed blocking will perform some pumping when called on an STA thread. Managed blocking includes a contentious Monitor.Enter, WaitHandle.WaitOne, WaitHandle.WaitAny, GC.WaitForPendingFinalizers, our ReaderWriterLock and Thread.Join. It also includes anything else in FX that calls down to these routines. One noticeable place where this happens is during COM Interop. There are pathways through COM Interop where a cache miss occurs on finding an appropriate pUnk to dispatch a call. At those points, the COM call is forced down a slow path and we use this as an opportunity to pump a little bit. We do this to allow the Finalizer thread to release any pUnks on the current STA, if the application is neglecting to pump. (Remember those ASP Compat and Console client scenarios?) This is a questionable practice on our part. It causes reentrancy at a place where it normally could never occur in pure unmanaged scenarios.
But it allows a number of applications to successfully run without clogging up the Finalizer thread. Anyway, managed blocking does not include PInvokes directly to any of the OS blocking services. And keep in mind that if you PInvoke to the OS blocking services directly, the CLR will no longer be able to take control of your thread. Operations like Thread.Interrupt, Thread.Abort and AppDomain.Unload will be indefinitely delayed.

Did you notice that I neglected to mention WaitHandle.WaitAll in the list of managed blocking operations? That’s because we don’t allow you to call WaitAll from an STA thread. The reason is rather subtle. When you perform a pumping wait, at some level you need to call MsgWaitForMultipleObjectsEx, or a similar Msg* based variant. But the semantics of a WAIT_ALL on an OS MsgWaitForMultipleObjectsEx call are rather surprising and not what you want at all. It waits for all the handles to be signaled AND for a message to arrive at the message queue. In other words, all your handles could be signaled and the application will keep blocking until you nudge the mouse! Ugh.

We’ve toyed with some workarounds for this case. For example, you could imagine spinning up an MTA thread and having it perform the blocking operation on the handles. When all the handles are signaled, it could set another event. The STA thread would do a WaitHandle.WaitOne on that other event. This gives us the desired behavior that the STA thread wakes up when all handles are signaled, and it still pumps the message queue. However, if any of those handles are “thread-owned”, like a Mutex, then we have broken the semantics. Our sacrificial MTA thread now owns the Mutex, rather than the STA thread.

Another technique would be to put the STA thread into a loop. Each iteration would ping the handles with a brief timeout to see if it could acquire them. Then it would check the message queue with a PeekMessage or similar technique, and then iterate.
This is a terrible solution for battery-powered devices or for Terminal Server scenarios. What used to be efficient blocking is now busily spinning in a loop. And if no messages actually arrive, we have disturbed the fairness guarantees of the OS blocking primitives by pinging. A final technique would be to acquire the handles one by one, using WaitOne. This is probably the worst approach of all. The semantics of an OS WAIT_ALL are that you will either get no handles or you will get all of them. This is critical to avoiding deadlocks, if different parts of the application block on the same set of handles – but fill the array of handles in random order. I keep saying that managed blocking will perform “some pumping” when called on an STA thread. Wouldn’t it be great to know exactly what will get pumped? Unfortunately, pumping is a black art which is beyond mortal comprehension. On Win2000 and up, we simply delegate to OLE32’s CoWaitForMultipleHandles service. And before we wrote the initial cut of our pumping code for NT4 and Win9X, I thought I would glance through CoWaitForMultipleHandles to see how it is done. It is many, many pages of complex code. And it uses special flags and APIs that aren’t even available on Win9X. The code we finally wrote for the downlevel platforms is relatively simple. We gather the list of hidden OLE windows associated with the current STA thread and try to restrict our pumping to the COM calls which travel through them. However, a lot of the pumping complexity is in USER32 services like PeekMessage. Did you know that calling PeekMessage for one window will actually cause SendMessages to be dispatched on other windows belonging to the same thread? This is another example of how someone made a tradeoff between reentrancy and deadlocks. In this case, the tradeoff was made in favor of reentrancy by someone inside USER32. By now you may be thinking “Okay. Pump more and I get reentrancy. 
Pump less and I get deadlocks.” But of course the world is more complicated than that. For instance, the Finalizer thread may synchronously call into the main GUI STA thread, perhaps to release a pUnk there, as we have seen. The causality from the Finalizer thread to the main GUI STA thread is invisible to the CLR (though the CLR Security Lead recently suggested using OLE channel hooks as a technique for making this causality visible). If the main GUI STA thread now calls GC.WaitForPendingFinalizers in order to pump, there’s a possibility of a deadlock. That’s because the GUI STA thread must wait for the Finalizer thread to drain its queue. But the Finalizer thread cannot drain its queue until the GUI thread has serviced its incoming synchronous call from the Finalizer. Reentrancy, Avalon, Longhorn and the Client Ah, reentrancy again. From time to time, customers inside or outside the company discover that we are pumping messages during managed blocking on an STA. This is a legitimate concern, because they know that it’s very hard to write code that’s robust in the face of reentrancy. In fact, one internal team completely avoids managed blocking, including almost any use of FX, for this reason. Avalon was very upset, too. I’m not sure how much detail they have disclosed about their threading model. And it’s certainly not my place to reveal what they are doing. Suffice it to say that their model is an explicit rental model that does not presume thread affinity. If you’ve read this far, I’m sure you approve of their decision. Avalon must necessarily coexist with STAs, but Avalon doesn’t want to require them. The CLR and Avalon have a shared long term goal of driving STAs out of the platform. But, realistically, this will take decades. Avalon’s shorter term goal is to allow some useful GUI applications to be written without STAs. Even this is quite difficult. If you call the clipboard today, you will have an STA. 
Avalon also has made a conscious design choice to favor deadlocks over reentrancy. In my opinion, this is an excellent goal. Deadlocks are easily debugged. Reentrancy is almost impossible to debug. Instead, it results in odd inconsistencies that manifest over time. In order to achieve their design goals, Avalon requires the ability to control the CLR’s pumping. And since we’ve had similar requests from other teams inside and outside the company, this is a reasonable feature for us to provide. V1 of the CLR had a conscious goal of making as much legacy VB and C++ code work as was possible. When we saw the number of applications that failed to pump, we had no choice but to insert pumping for them – even at the cost of reentrancy. Avalon is in a completely different position. All Avalon code is new code. They are in a great position to define an explicit model for pumping, and then require that all new applications rigorously conform to that model. Indeed, as much as I dislike STAs, I have a bigger concern about Longhorn and its client focus. Historically, Microsoft has built a ton of great functionality and added it to the platform. But that functionality is often mixed up with various client assumptions. STAs are probably the biggest of those assumptions. The Shell is an example of this. It started out as a user-focused set of services, like the namespace. But it’s growing into something that’s far more generally useful. To the extent that the Shell wants to take its core concepts and make them part of the base managed Longhorn platform, it needs to shed the client focus. The same is true of Office. For instance, I want to write some code that navigates to a particular document through some namespace and then processes it in some manner. And I want that exact same code to run correctly on the client and on the server. On the client, my processing of that document should not make the UI unresponsive. 
On the server, my processing of that document should not cause problems with scalability or throughput. Historically, this just hasn’t been the case. We have an opportunity to correct this problem once, with the major rearchitecture that is Longhorn. But although Longhorn will have both client and server releases, I worry that we might still have a dangerous emphasis on the client. This may be one of the biggest risks we face in Longhorn.

Winding Down

Finally, I feel a little bad about picking something I don’t like and writing about it. But there’s a reason that this topic came up. Last week, a customer in

Unfortunately, MSHTML was written as client-side functionality. In fact, I’m told that it drives its own initialization by posting Windows messages back to itself and waiting for them to be pumped. So if you aren’t pumping an STA, you aren’t going to get very far. There’s that disturbing historical trend at Microsoft to combine generally useful functionality with a client bias again! We explained to the customer the risks of using client components on a server, and the pumping behavior that is inherent in managed blocking on an STA. After we had been through all the grisly details, the customer made the natural observation: None of this is written down anywhere.

Well, I still never talked about a mysterious new flag to CoWaitForMultipleHandles. Or how custom implementations of IMessageFilter can cause problems. Or the difference between Main and Single. But I’m sure that at this point I’ve said far more than most people care to hear about this subject.

I always like your blogs Chris. Nothing like taking the rest of the week off for a little light reading…… 😉
-Mathew Nolton

Excellent as always! It is a shame that you do not have 120 hours per day. You should write the MSDN documentation! Maybe MS should invest in some cloning vats…

>> Well, I still never talked about a mysterious new flag to CoWaitForMultipleHandles.
Or how custom implementations of IMessageFilter can cause problems. Or the difference between Main and Single. Or the relationship between apartments and COM+ contexts and ServicedComponents. Or the amazing discovery that OLE32 sometimes requires you to pump the MTA if you have DCOM installed on Win9X. <<

BTW I would like to hear some horror stories about them. Why don’t you make these long ones articles and not posts?

Chris,
>> Also, VB6 has a variable scope which corresponds to “global” (I forget its name), but which is treated as per-thread when running on the server.
From my experience, Global is its name (or Public – they are the same). Global variables in VB are always global to the apartment they are in. Hence, as you say:
>> This means that global variables are shared between sets of clients in a surprising fashion, based on the STA that they happen to correspond to.
Seeya

Haven’t freed up enough time to read the whole article yet, but when you see "NDR pickling" in the first screenful you just know you’re in for a treat. Keep up the good work Chris…

As an outsider, it was hard for me to understand how one team could have produced such a strange combination. Oops.. apparently I messed up the excerpt

Wouldn’t it be nice if we had source to the really confusing bits, so that when things didn’t do what we expected we could figure out why?

Chris, Thanks for another great post. Two questions:
1. Was it really necessary to build COM interop into the CLR, instead of building it on top of it? This way you need to support all the design mistakes forever. Just look at Remoting and MarshalByRefObject.
2. Do STAs use the message queue for synchronization or is there a separate queue? Using a separate queue would help a bit with reentrancy (at least you wouldn’t have to worry about getting WM_PAINT and messages posted to the GUI thread from the worker threads while in a call).
Dejan

Apartments and Pumping in the CLR

Chris, I couldn’t help thinking about a problem we have currently after reading this article. We are using BizTalk 2002 (MTA C++ unmanaged code) to call VB.NET components hosted under COM+ on Windows 2003. Now, we are suffering a slow memory leak and the dev team has been shrugging their shoulders trying to figure out what is wrong here. I understand that a .NET component which inherits from ServicedComponent has a reference to unmanaged code and the caller should call Dispose on the serviced component hosted in COM+. In our case the BizTalk unmanaged code is the caller and there is no way it is calling Dispose on our serviced components. I am figuring that under load the finalizer is not getting called for our serviced components because of some internal mechanism used to marshal between the managed and unmanaged code. If there is a break in the load then some internal mechanism gets invoked and cleans things up. Any good ideas here? There doesn’t seem to be a good way to have our components clean themselves up, and it gets further complicated because our components are participating in a DTC root transaction created by the unmanaged code, which I don’t believe will allow us to self-destruct our .NET components before returning to the calling MTA thread in the BizTalk code. I hope I am on the right track here, with my description. Feel free to correct anything where I am way wrong. Thanks for your help in advance!

Shawn Smith

For a problem like this, I would start off trying to get some additional data. There are two things that come to mind. First, I would use the managed object profiler to determine which types of objects are contributing to the leakage. I would also use !sos.finalizequeue to break into the application and see what objects are waiting to be finalized. You could put that ‘sos’ command on a breakpoint that you routinely hit, with ‘;g’ to proceed directly afterwards.
If you have a log open while debugging, this will give you a stream of snapshots of the finalizequeue. From this you can determine whether the queue is growing without bounds, or whether it’s generally empty. If it’s growing without bounds, the Finalizer thread isn’t keeping up with the work. Perhaps this is due to an apartment issue, and we can investigate that possibility. But if the queue is generally empty, then the problem is that the GC isn’t recovering the objects that are leaking. Ideas that spring to mind there are object pooling or large amounts of unmanaged memory that are the cause of the leak. Once you have the extra data, if you are still puzzled, feel free to email me directly and I will try to help.

I can’t help but ask about what’s happening. I have code running in an STA that creates a COM object A, then caches the interface it got back. Next, a thread in a thread pool makes use of that interface, and everything seems fine. After about 10-15 minutes, a thread in the pool tries to make another call using the same cached interface and it gets an exception. The object and interface are still OK to use from the original STA thread. What’s happening?

If I had to guess, DCOM timed out a reference and ran it down. I believe this timeout is 6 minutes. Historically, we’ve had some problems balancing between the two bugs of creating uncollectible cycles and being subject to the 6 minute rundown. Initially, we used the OLE handle table, but the performance was poor. More importantly, the lifetime flags for the handle table (MSHLFLAGS_NORMAL, *_TABLESTRONG & *_TABLEWEAK) are all interpreted differently based on whether you place a pUnk to a proxy into the table, or whether the pUnk is directly to the server object. Although we no longer use the OLE handle table, we still marshal pUnks between apartments using CoMarshalInterface and those flags. So one possibility is that we are screwing this up and creating a weak reference which is subsequently timed out.
Would you mind sending me your repro, so I can investigate this? Please send it directly to my email. Thanks.

Hello, thank you for the wonderful blog. I’m trying to translate your blog into Japanese on my blog. There is a question about the MTA.
>Some of the threads in this group are explicitly placed in the MTA by calling CoInitializeEx.
>Other threads in this group are implicitly in the MTA.
I think a thread comes to belong to the MTA by calling CoInitializeEx; what does it mean for other threads to belong to the MTA implicitly?

Apartments and Pumping in the CLR – Translate to Japanese

Kazuhiko, you are asking about the difference between a thread being implicitly vs. explicitly in the MTA. A thread is explicitly in the MTA when it calls CoInitializeEx to place itself there. Once an MTA exists, all the other non-STA threads are considered to be inside that MTA. They do not each have to call CoInitializeEx to get this behavior. However, if the last thread holding the MTA alive (by calling CoInitializeEx) happens to terminate or call CoUninitialize, then the MTA is destroyed. At this point, the threads that were implicitly considered to be in the MTA are no longer in the MTA. It is no longer legal to make COM calls on these threads without first calling CoInitializeEx on at least one of them, to create a new MTA for them. So being explicitly in the MTA is a much stronger position than being implicitly in the MTA.

Amazing, not only do you write technical stuff that makes me all dizzy, you also seem to be able to provide technical support for problems most of us would rather not get involved in. I am awed.

Chris, thanks so much for writing this. It really helps me understand the problems we were seeing with MS’ Java implementation back in 1999.
I remember well how baffled your support people were the first time they saw one of those processes with about 4 x 10^6 pUnks waiting to get cleaned up. Those were the days… 😉

Regarding reentrancy vs. concurrency in COM: I have always thought that reentrancy is the greater evil of the two. In my opinion, it is more difficult to write correct STA code than it is to write correct multithreaded code. It’s more or less obvious where concurrency can occur, and it can be controlled in a sane and structured manner with locks. Reentrancy on the other hand can happen in places where you least expect it. In a large project you effectively have to assume that any function call can allow the STA to be reentered. And the mechanisms for controlling STA reentrancy feel like ugly hacks (IMessageFilter). No thanks, I’ll take my multithreading and locks any day.

Pavel, I’m certainly not going to argue with you about the dangers of STA or IMessageFilter! Last week I was involved with a very nasty customer issue related to calling out of an STA via COM while in a synchronous cross-thread SendMessage. Also last week a reader of this blog sent me a V1.1 app (which we’ve fixed for Whidbey) that involved a deadlock between the Finalizer thread and an STA message dispatch in WinForms. However, I think that the rental model is often superior to free-threading. Really, the rental model is just a big lock, so you can view it as a special case of free-threading. With an STA, you never know if someone you call will inflict reentrancy on you. In Avalon’s rental model, I have heard that this risk is explicitly controlled. You can lock the tree or otherwise prevent reentrancy when you call into other code — and the code you call is forced to conform to this restriction. You may have deadlocks, but you won’t have reentrancy.
Regarding releasing COM objects on the wrong thread: we have an MFC application that is hosting the CLR. When we run our test suites we do some lengthy processing without doing any explicit pumping (we are doing cross-apartment calls so COM should perform some pumping, right?). After a while, the GC starts releasing COM objects on the wrong thread, which in our case is disastrous. Objects that depend on thread local storage crash. So I used CLRSPY to see what was going on, and it reports the following:

Disconnected Context in <myapp>.exe (PID 1044): The context (cookie 293312) is disconnected. Releasing the interfaces from the current context (cookie 294600).

I guess this is the debug probe mentioned in this article. Any idea why this is happening? The only way we have found so far to fix this is to actively pump all messages in the message queue. We have found that one way to provoke the error on NT4 is to return SERVERCALL_RETRYLATER from a custom message filter, but on XP the message filter is never invoked. Is there any way to prevent the runtime from releasing COM objects on the wrong thread? Since we are developing a client application we’d rather have memory leaks than random crashes.

My apologies — I have been out of town for the last 2 weeks. I would very much like to debug your scenario. Is it in a form where you can send it to me? If not, can you confirm that everything is local to one process, that you use apartments but not COM+ contexts, and that the STA threads are not being terminated? Also, is "lengthy processing" in excess of 6 minutes? Please contact me directly. And if we find anything of general interest from debugging, I will post our findings back here. Thanks.
https://blogs.msdn.microsoft.com/cbrumme/2004/02/02/apartments-and-pumping-in-the-clr/
I would like to stop images from loading, as in not even getting a chance to download, using Greasemonkey. Right now I have:

var images = document.getElementsByTagName('img');
for (var i = 0; i < images.length; i++) {
  images[i].src = "";
}

Almost all images are not downloaded, so your script is almost working as is. I've tested the following script:

// ==UserScript==
// @name stop downloading images
// @namespace
// @include *
// ==/UserScript==
var images = document.getElementsByTagName('img');
for (var n = images.length; n-- > 0;) {
  var img = images[n];
  img.setAttribute("src", "");
}

Alternatively, use a dedicated extension to manage images (something like ImgLikeOpera). If you'd like to filter images in all browsers, then a proxy with filtering capabilities might help, e.g., Privoxy.
https://codedump.io/share/bntJGb5H1EQr/1/removing-images-with-greasemonkey
Collection Data Structures in Swift

Learn about the fundamental collection data structures in this tutorial: arrays, dictionaries and sets.

Update note: This tutorial was updated for Swift 3.0 and Xcode 8.0 by Niv Yahel. Original post by Tutorial Team member Ellen Shapiro.

Imagine you have an application that needs to work with a lot of data. Where do you put that data? How do you keep it organized and handle it efficiently? If your program only manages one number, you store it in one variable. If it has two numbers then you’d use two variables. What if it has 1000 numbers, 10,000 strings or the ultimate library of memes? And wouldn’t it be nice to be able to find a perfect meme in an instant? In that case, you’ll need one of the fundamental collection data structures, such as Arrays, Dictionaries, and Sets. As you’ll learn, these collection data structures allow you and the user to manipulate huge databases with a swipe across a screen. Thanks to Swift and its ever-evolving nature, you’ll be able to use native data structures that are blazing fast!

Here’s how the tutorial will flow:
- First, you’ll review what a data structure is, and then you’ll learn about Big-O notation. It’s the standard tool for describing the performance of different data structures.
- Next you’ll observe these data structures by measuring the performance of arrays, dictionaries, and sets — the most basic data structures available in Cocoa development. Incidentally, it’ll also double as a rudimentary introduction to performance testing.
- As you proceed, you’ll compare the performance of mature Cocoa structures with newer, Swift-only counterparts.
- Finally, you’ll briefly review some related types offered by Cocoa. These are data structures that you might be surprised to learn are already at your fingertips!

Getting Started
A specific collection type might make some activities especially efficient, such as adding a new item, finding the smallest item, or ensuring you’re not adding duplicates. Without collection data structures, you’d be stuck trying to manage items one by one. A collection allows you to:
- Handle all those items as one entity
- Impose some structure
- Efficiently insert, remove, and retrieve items

What is “Big-O” Notation?

Big-O notation — that’s the letter O, not the number zero — is a way of describing the efficiency of an operation on a data structure. There are various kinds of efficiency: you could measure how much memory the data structure consumes, how much time an operation takes under the worst case scenario, or how much time an operation takes on average. It’s not the raw memory or time that we care about here though. It is the way the memory or time scales, as the size of the data structure scales. In this tutorial, you’ll measure how much time an operation takes on average. Common sense will tell you that an operation takes longer to perform when there are larger quantities of data. But sometimes there is little or no slowdown, depending on the data structure. Big-O notation is a precise way of describing this. You write an exact functional form that roughly describes how the running time changes based on the number of elements in the structure. When you see Big-O notation written as O(something-with-n), n is the number of items in the data structure, and something-with-n is roughly how long the operation will take. “Roughly”, ironically enough, has a specific meaning: the behavior of the function at the asymptotic limit of very large n. Imagine n is a really, really large number — you’re thinking about how the performance of some operation will change as you go from n to n+1.
The most commonly seen Big-O performance measures are as follows, in order from best to worst performance:

O(1) — (constant time) No matter how many items are in a data structure, this function calls the same number of operations. This is considered ideal performance.

O(log n) — (logarithmic) The number of operations this function calls grows at the rate of the logarithm of the number of items in the data structure. This is good performance, since it grows considerably slower than the number of items in the data structure.

O(n) — (linear) The number of operations this function calls will grow linearly with the size of the structure. This is considered decent performance, but it can grind along with larger data collections.

O(n (log n)) — (“linearithmic”) The number of operations called by this function grows by the logarithm of the number of items in the structure multiplied by the number of items in the structure. Predictably, this is about the lowest level of real-world tolerance for performance. While larger data structures perform more operations, the increase is somewhat reasonable for data structures with small numbers of items.

O(n²) — (quadratic) The number of operations called by this function grows at a rate that equals the size of the data structure, squared — poor performance at best. It grows quickly enough to become unusably slow even if you’re working with small data structures.

O(2^n) — (exponential) The number of operations called by this function grows by two to the power of the size of the data structure. The resulting very poor performance becomes intolerably slow almost immediately.

O(n!) — (factorial) The number of operations called by this function grows by the factorial of the size of the data structure. Essentially, you have the worst case scenario for performance. For example, in a structure with just 100 items, the multiplier of the number of operations is 158 digits long. Witness it for yourself on wolframalpha.com.
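The practical payoff of these growth rates is easy to see in Swift itself. The sketch below is my own illustration (all names are made up, not from this tutorial's sample project): it asks the same membership question of an Array, whose contains is O(n), and of a Set, whose contains is O(1) on average.

```swift
import Foundation

let count = 100_000
let numbers = Array(0..<count)   // ordered storage: search is linear, O(n)
let numberSet = Set(numbers)     // hashed storage: lookup is O(1) on average

let target = count - 1
// The Array call may examine up to n elements before it can answer...
let inArray = numbers.contains(target)
// ...while the Set call performs a single hash probe on average.
let inSet = numberSet.contains(target)
print(inArray, inSet)  // prints "true true"
```

Both calls answer the same question, but for a worst-case miss the array scan does roughly count times the work of the set lookup, which is the O(n) versus O(1) gap described above.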
Here’s a more visual representation of performance and how it degrades when there are more items in a collection, going from one to 25 items.

Did you notice that you can’t even see the green O(log n) line because it is so close to the ideal O(1) at this scale? That’s pretty good! On the other hand, operations that have Big-O notations of O(n!) and O(2^n) degrade so quickly that by the time you have more than 10 items in a collection, the number of operations spikes completely off the chart. Yikes!

As the chart clearly demonstrates, the more data you handle, the more important it is to choose the right structure for the job. Now that you’ve seen how to compare the performance of operations on data structures, it’s time to review the three most common types used in iOS and explore how they perform in theory and in practice.

Common iOS Data Structures

The three most common data structures in iOS are arrays, dictionaries and sets, each of which deserves your attention. In this section, you’ll:
- Consider how they differ in ideal terms as fundamental abstractions
- Examine the performance of the actual concrete classes that iOS offers for representing those abstractions.

For the three types, iOS offers multiple concrete classes that work for the same abstraction. In addition to the old Foundation data structures available in Swift and Objective-C, there are new Swift-only versions of data structures that integrate tightly with the language. Since its introduction, Swift has brought many performance improvements to Swift data structures and now outperforms Foundation data structures in the majority of cases. However, the “best” one to use still depends on the operation you want to perform.

Arrays

An array is a group of items placed in a specific order, and you can access each item via an index — a number that indicates its position in the order. When you write the index in brackets after the name of the array variable, this is subscripting.
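As a quick sketch of subscripting in practice (my own example, not from the tutorial's sample project):

```swift
var planets = ["Mercury", "Venus", "Earth"]

print(planets[0])        // prints "Mercury": index 0 is the first position
planets.append("Mars")   // declared with var, so the array can grow
planets[1] = "Cytherea"  // subscript assignment replaces the item at index 1
print(planets.count)     // prints 4
```

Reading or writing planets[i] is the fast indexed access discussed below; the index just has to be within bounds, or the program traps at runtime.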
Swift arrays are immutable if you define them as constants with let, and mutable if you define them as variables with var. In contrast, a Foundation NSArray is immutable by default. If you want to add, remove or modify items after creating the array, you must use the mutable variant class NSMutableArray.

An NSArray is heterogeneous, meaning it can contain Cocoa objects of different types. Swift arrays are homogeneous, meaning that each Array is guaranteed to contain only one type of object. However, you can still define a single Swift Array so it stores various types of Cocoa objects by specifying that the one type is AnyObject, since every Cocoa type is also a subtype of this.

Expected Performance and When to Use Arrays

The primary reason to use an array is when the order of variables matters. Think about those times when you sort contacts by first or last name, a to-do list by date, or any other situation when it’s critical to find or display data in a specific order.

Apple’s documentation includes three key expectations for Array performance in the CFArray header:
- Accessing any value at a particular index in an array is at worst O(log n), but should usually be O(1).
- Searching for an object at an unknown index is at worst O(n (log n)), but will generally be O(n).
- Inserting or deleting an object is at worst O(n (log n)), but will often be O(1).

These guarantees subtly deviate from the simple “ideal” array that you might expect from a computer science textbook or the C language, where an array is always a sequence of items laid out contiguously in memory. Consider it a useful reminder to check the documentation!

In practice, these expectations make sense when you think about them:
- If you already know where an item is, then looking it up in the array should be fast.
- If you don’t know where a particular item is, you’ll need to look through the array from beginning to end. Your search will be slower.
- If you know where you’re adding or removing an object, it’s not too difficult, although you may need to adjust the rest of the array afterwards, and that’s more time-consuming.

How well do these expectations align with reality? Keep reading to find out!

Note: Swift officially became open source in December of 2015. You can look through the Swift source code yourself to see how these data structures are implemented under the hood!

Sample App Testing Results

Download the sample project and open it in Xcode. It’s time to play around with testing methods that will create and/or test an array and show how long it took to perform each task.

Note: In the app, the Debug configuration automatically sets optimization to a level equal to the release configuration — this is so that when you test the application you get the same level of optimization you’d see in the real world.

You need a minimum of 1000 items to run tests with the sample app, so that results are large enough to detect. When you build and run, the slider will be set to 1000. Press the Create Array and Test button, and you’ll be testing in no time. Drag the slider over to the right side until it hits 10,000,000, and press Create Array and Test again to see the difference in creation time with a significantly larger array.

These tests were run against an iPhone 7 running iOS 10.0 from Xcode 8.0, which includes Swift 3.0. With 10,000 times as many items, creating the array only takes about 1,537 times as much time. In this case, for around 106.5 times the number of items, it took roughly 1,649 times as much time to create the array. Any more than a few dozen items, and this was unusable. Since the dark days of old, Swift has made tremendous performance improvements and you can now easily make huge arrays with few problems!

What about NSMutableArray?

You can still call Foundation classes from Swift without having to drop back down to Objective-C. Take a look inside the DataManipulators folder in Xcode.
Here you’ll find the various objects that handle the work of setting up the array and performing the various tasks that are then timed. You’ll notice there is a class called SwiftArrayManipulator. This conforms to ArrayManipulator, which is where the methods are defined that the timing code uses. There is also a class called NSArrayManipulator that also conforms to ArrayManipulator. Because of this, you can easily swap in an NSMutableArray for a Swift Array. The project code is simple enough that you can try an NSMutableArray with a single line change to compare the performance. Open ArrayViewController.swift and change line 27 from:

let arrayManipulator: ArrayManipulator = SwiftArrayManipulator()

to:

let arrayManipulator: ArrayManipulator = NSArrayManipulator()

Build and run again, and press Create Array and Test to test the creation of an NSMutableArray with 1000 items, and then again with 10,000,000 items.

Raw performance for creation is 30 times slower than Swift. Some of that may come from the need to jockey between objects that can be stored in an NSArray and its Swift counterparts. This is a substantial improvement since Swift 2.3, when they were roughly the same. However, you only create an array once, and you perform other operations on it far more often, such as finding, adding, or removing objects. In more exhaustive testing, such as when you’re using some of Xcode 8’s performance testing methods to call each of these methods 50 times, patterns begin to emerge:

- Creating a Swift Array and an NSArray degrade at roughly the same rate, between O(log n) and O(n). Swift is faster than Foundation by roughly 30 times. This is a significant improvement over Swift 2.3, where it was roughly the same as Foundation!
- Adding items to the beginning of an array is considerably slower in a Swift Array than in an NSArray, which is around O(1). Adding items to the middle of an array takes half the time in a Swift Array compared to an NSArray.
Adding items to the end of a Swift Array, which is less than O(1), is roughly 6 times faster than adding items to the end of an NSArray, which comes in just over O(1).

- Removing objects is faster in a Swift Array than an NSArray. From the beginning, middle or end, removing an object degrades between O(log n) and O(n). Raw time is better in Swift when you remove from the beginning of an Array, but the distinction is a matter of milliseconds.
- Looking up items in Swift is faster for the first time since its inception. Lookups by index grow at very close rates for both Swift arrays and NSArray, while lookup by object is roughly 80 times faster in Swift.

Dictionaries

Dictionaries are a way of storing values that don’t need to be in any particular order and are uniquely associated with keys. You use the key to store or look up a value. Dictionaries also use subscripting syntax, so when you write dictionary["hello"], you’ll get the value associated with the key hello. Like arrays, Swift dictionaries are immutable if you declare them with let and mutable if you declare them with var. Similarly on the Foundation side, there are both NSDictionary and NSMutableDictionary classes for you to use. Another characteristic that is similar to Swift arrays is that dictionaries are strongly typed, and you must have known key and value types. NSDictionary objects are able to take any NSObject as a key and store any object as a value. You’ll see this in action when you call a Cocoa API that takes or returns an NSDictionary. From Swift, this type appears as [NSObject: AnyObject]. This indicates that the key must be an NSObject subclass, and the value can be any Swift-compatible object.

When to use Dictionaries

Dictionaries are best used when there isn’t a particular order to what you need to store, but the data has meaningful association.
To help you examine how dictionaries and the other data structures in the rest of this tutorial work, create a Playground by going to File\New\Playground… and name it DataStructures.

For example, pretend you need to store a data structure of all your friends and the names of their cats, so you can look up the cat’s name using your friend’s name. This way, you don’t have to remember the cat’s name to stay in that friend’s good graces. First, you’d want to store the dictionary of people and cats. Add the following to the playground:

import Foundation

let cats = [
  "Ellen" : "Chaplin",
  "Lilia" : "George Michael",
  "Rose" : "Friend",
  "Bettina" : "Pai Mei"]

Thanks to Swift type inference, this will be defined as [String: String], a dictionary with string keys and string values. Now try to access items within it. Add the following:

cats["Ellen"] // returns Chaplin as an optional
cats["Steve"] // returns nil

Note that subscripting syntax on dictionaries returns an optional. If the dictionary doesn’t contain a value for a particular key, the optional is nil; if it does contain a value for that key, you get the wrapped value. Because of that, it’s a good idea to use the if let optional-unwrapping syntax to access values in a dictionary. Add the following:

if let ellensCat = cats["Ellen"] {
  print("Ellen's cat is named \(ellensCat).")
} else {
  print("Ellen's cat's name not found!")
}

Since there is a value for the key “Ellen”, this will print out “Ellen’s cat is named Chaplin.” in your Playground.

Expected Performance

Once again, Apple outlines the expected performance of dictionaries in Cocoa in the CFDictionary.h header file:

- The performance degradation of getting a single value is guaranteed to be at worst O(log n), but will often be O(1).
- Insertion and deletion can be as bad as O(n log n), but will typically be closer to O(1) because of under-the-hood optimizations.

These aren’t quite as obvious as the array degradations.
Due to the more complex nature of storing keys and values versus a lovely ordered array, the performance characteristics are harder to explain.

Sample App Testing Results

DictionaryManipulator is a protocol similar to ArrayManipulator, and it tests dictionaries. With it, you can easily test the same operation using a Swift Dictionary or an NSMutableDictionary. To compare the Swift and Cocoa dictionaries, use a similar procedure as you used for the arrays. Build and run the app and select the Dictionary tab at the bottom. Run a few tests – you’ll notice that dictionaries take significantly longer to create than arrays. If you push the item slider up to 10,000,000 items, you might even get a memory warning or an out-of-memory crash!

Back in Xcode, open DictionaryViewController.swift and find the dictionaryManipulator property:

let dictionaryManipulator: DictionaryManipulator = SwiftDictionaryManipulator()

Replace it with the following:

let dictionaryManipulator: DictionaryManipulator = NSDictionaryManipulator()

Now the app will use NSDictionary under the hood. Build and run the app again, and run a few more tests. Your findings should be similar to the results of more extensive testing:

- In raw time, creating Swift dictionaries is roughly 6 times faster than creating NSMutableDictionaries, but both degrade at roughly the same O(n) rate.
- Adding items to Swift dictionaries is roughly 100 times faster than adding them to NSMutableDictionaries in raw time, and both degrade close to the best-case-scenario O(1) rate promised by Apple’s documentation.
- Removing items from Swift dictionaries is roughly 8 times faster than removing items from NSMutableDictionaries, but the degradation of performance is again close to O(1) for both types.
- Swift is also faster at lookup, with both performing roughly at an O(1) rate. This version of Swift is the first where it beats Foundation by a significant amount.
These are amazing improvements over Swift 2.3, where both types of dictionaries performed roughly the same. Swift 3.0 has implemented substantial improvements to dictionaries. And now, on to the final major data structure used in iOS: Sets!

Sets

A set is a data structure that stores unordered, unique values. Unique is the key word; you won't be able to add a duplicate. Swift sets are type-specific, so all the items in a Swift Set must be of the same type. Swift added support for a native Set structure in version 1.2 – for earlier versions of Swift, you could only access Foundation's NSSet. Note that like arrays and dictionaries, a native Swift Set is immutable if you declare it with let and mutable if you declare it with var. Once again on the Foundation side, there are both NSSet and NSMutableSet classes for you to use.

When to use Sets

Sets are most useful when uniqueness matters, but order does not. For example, what if you wanted to select four random names out of an array of eight names, with no duplicates? Enter the following into your Playground:

let names = ["John", "Paul", "George", "Ringo", "Mick", "Keith", "Charlie", "Ronnie"]

var stringSet = Set<String>() // 1
var loopsCount = 0
while stringSet.count < 4 {
  let randomNumber = arc4random_uniform(UInt32(names.count)) // 2
  let randomName = names[Int(randomNumber)] // 3
  print(randomName) // 4
  stringSet.insert(randomName) // 5
  loopsCount += 1 // 6
}
// 7
print("Loops: " + loopsCount.description + ", Set contents: " + stringSet.description)

In this little code snippet, you do the following:

- Initialize the set so you can add objects to it. It is a set containing String objects.
- Pick a random number between 0 and the count of names.
- Grab the name at the selected index.
- Log the selected name to the console.
- Add the selected name to the mutable set. Remember, if the name is already in the set, then the set won't change, since it doesn't store duplicates.
- Increment the loop counter so you can see how many times the loop ran.
- Once the loop finishes, print out the loop counter and the contents of the mutable set.

Since this example uses a random number generator, you'll get a different result every time. Here's an example of the log produced while writing this tutorial:

John
Ringo
John
Ronnie
Ronnie
George
Loops: 6, Set contents: ["Ronnie", "John", "Ringo", "George"]

Here, the loop ran six times in order to get four unique names. It selected Ronnie and John twice, but they only wound up in the set once. As you're writing the loop in the Playground, you'll notice that it runs on a, well, loop, and you'll get a different number of loops each time. In this case, you'll need at least four loops, since there must always be four items in the set to break out of the loop. Now that you've seen Set at work on a small scale, it's time to examine performance with a larger batch.

Sample App Testing Results

Apple didn't outline overall expectations for set performance as they did for dictionaries and arrays, so in this case you'll just look at real-world performance. The Swift Set documentation outlines performance characteristics for a couple of methods, but NSMutableSet does not. The sample project has NSSetManipulator and SwiftSetManipulator objects in the SetViewController class, similar to the setup in the array and dictionary view controllers, and they can be swapped out the same way. In both cases, if you're looking for pure speed, using a Set probably won't make you happy. Compare the numbers on Set and NSMutableSet to the numbers for Array and NSMutableArray, and you'll see set creation is considerably slower -- that's the price you pay for checking that every single item in a data structure is unique.
Detailed testing reveals that Swift Set performance degradation and raw time for most operations are extremely similar to those of NSSet:

- Creation, removal, and lookup operations are all similar between Foundation and Swift in raw time, though for the first time, Swift comes out ahead.
- Creation degrades for both Swift and Foundation set types at a rate of around O(n). This is expected, because every single item in the set must be checked for equality before a new item may be added. When you need a data structure for a large sample size, a Set's initial creation time cost will be a major consideration.
- Removal and lookup both show around O(1) performance degradation across Swift and Foundation, making set lookup considerably faster than array lookup. This is largely because set structures use hashes to check for equality, and the hashes can be calculated and stored in sorted order.
- Overall, it appears that adding an object to an NSSet stays near O(1), whereas it can degrade at a rate higher than O(n) with Swift's Set structure.

Swift has seen very significant improvements in collection data structure performance in its short public life, and will hopefully continue to see them as Swift evolves.

Lesser-known Foundation Data Structures

Arrays, dictionaries and sets are the workhorses of data handling. However, Cocoa offers a number of lesser-known and perhaps under-appreciated collection types. If a dictionary, array or set won't do the job, it's worth checking if one of these will work before you create something from scratch.

NSCache

Using NSCache is very similar to using NSMutableDictionary – you just add and retrieve objects by key. The difference is that NSCache is designed for temporary storage for things that you can always recalculate or regenerate. If available memory gets low, NSCache might remove some objects.
NSCache is thread-safe, but Apple's documentation warns:

…The cache may decide to automatically mutate itself asynchronously behind the scenes if it is called to free up memory.

This means that an NSCache is like an NSMutableDictionary, except that Foundation may automatically remove an object at any time to relieve memory pressure. This is good for managing how much memory the cache uses, but can cause issues if you rely on an object that may potentially be removed. NSCache also stores weak references to keys rather than strong references.

NSCountedSet

NSCountedSet tracks how many times you've added an object to a mutable set. It inherits from NSMutableSet, so if you try to add the same object again it will only be reflected once in the set. However, an NSCountedSet tracks how many times an object has been added. You can see how many times an object was added with countForObject(). Note that when you call count on an NSCountedSet, it only returns the count of unique objects, not the number of times all objects were added to the set. To illustrate, at the end of your Playground, take the array of names you created in your earlier NSMutableSet testing and add each one to an NSCountedSet twice:

let countedMutable = NSCountedSet()
for name in names {
  countedMutable.add(name)
  countedMutable.add(name)
}

Then, print the set itself and find out how many times "Ringo" was added:

let ringos = countedMutable.count(for: "Ringo")
print("Counted Mutable set: \(countedMutable)) with count for Ringo: \(ringos)")

Your log should read:

Counted Mutable set: {(
  George,
  John,
  Ronnie,
  Mick,
  Keith,
  Charlie,
  Paul,
  Ringo
)}) with count for Ringo: 2

Note that while you may see a different order for the set, you should only see "Ringo" appear in the list of names once, even though you can see that it was added twice.
NSOrderedSet

An NSOrderedSet, along with its mutable counterpart, NSMutableOrderedSet, is a data structure that allows you to store a group of distinct objects in a specific order. "Specific order" -- gee, that sounds an awful lot like an array, doesn't it? Apple succinctly sums up why you'd want to use an NSOrderedSet instead of an array (emphasis mine):

You can use ordered sets as an alternative to arrays when element order matters and performance while testing whether an object is contained in the set is a consideration -- testing for membership of an array is slower than testing for membership of a set.

Because of this, the ideal time to use an NSOrderedSet is when you need to store an ordered collection of objects that cannot contain duplicates. Note that while NSCountedSet inherits from NSMutableSet, NSOrderedSet inherits from NSObject. This is a great example of how Apple names classes based on what they do, but not necessarily how they work under the hood.

NSHashTable and NSMapTable

Not this kind of Map Table. (Courtesy the Tennessee Valley Authority(!) via Flickr Creative Commons)

NSHashTable is another data structure that is similar to Set, but with a few key differences from NSMutableSet. You can set up an NSHashTable using any arbitrary pointers and not just objects, so you can add structures and other non-object items to an NSHashTable. You can also set memory management and equality comparison terms explicitly using the NSHashTableOptions enum. NSMapTable is a dictionary-like data structure, but with similar behaviors to NSHashTable when it comes to memory management. Like an NSCache, an NSMapTable can hold weak references to keys. However, it can also remove the object related to that key automatically whenever the key is deallocated. These options can be set from the NSMapTableOptions enum.

NSIndexSet

An NSIndexSet is an immutable collection of unique unsigned integers intended to represent indexes of an array.
If you have an NSArray of ten items where you regularly need to access items at specific positions, you can store an NSIndexSet and use NSArray's objectsAtIndexes: to pull those objects directly:

let items : NSArray = ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten"]

let indexSet = NSMutableIndexSet()
indexSet.add(3)
indexSet.add(8)
indexSet.add(9)

items.objects(at: indexSet as IndexSet) // returns ["four", "nine", "ten"]

You specify that items is an NSArray since, right now, Swift arrays don't have an equivalent way to access multiple items using an NSIndexSet or a Swift equivalent. An NSIndexSet retains the behavior of NSSet that only allows a value to appear once. Hence, you shouldn't use it to store an arbitrary list of integers unless only a single appearance is a requirement. With an NSIndexSet, you're storing indexes as sorted ranges, so it is more efficient than storing an array of integers.

Pop Quiz!

Now that you've made it this far, you can test your memory with a quick quiz about what sort of structure you might want to use to store various types of data. For the purposes of this quiz, assume you have an application where you display information in a library.

Q: What would you use to create a list of every author in the library?

[spoiler title="Unique List of Author Names"]
A Set! It automatically removes duplicate names, which means you can enter every single author's name as many times as you want, but you'll still have only a single entry -- unless you mistype an author's name, doh! Once you create the Set, you can use set.allObjects to access the Array of unique names, and then perform operations that depend on order, such as sorting.
[/spoiler]

Q: How would you store the alphabetically sorted titles of a prolific author's entire body of work?

[spoiler title="Alphabetically-Sorted Titles"]
An Array!
Since you're in a situation where you have a number of similar objects (all titles are Strings) and their order matters (titles must be sorted alphabetically), this is an ideal case for an Array.
[/spoiler]

Q: How would you store the most popular book by each author?

[spoiler title="Most Popular Book by each author"]
A Dictionary! If you use the author's name as the key and the title of that author's most popular book as the value, you can access the most popular book by any author like this:

mostPopularBooks["Gillian Flynn"] // Returns "Gone Girl"
[/spoiler]

Where to Go From Here?

I'd like to give special thanks to my fellow tutorial team member, Ellen Shapiro, who initially created this tutorial and updated it until Swift 1.2. I would also like to thank Chris Wagner, who started on an Objective-C version of this article before the SwiftHammer came down upon us all, for passing along his notes and sample project for me to use while pulling this tutorial together. I'll also say thanks to the Swift team at Apple -- constant improvements to Swift have made the native data structures stay true to the Swift name. In the foreseeable future, it seems that we won't have a need for the Foundation data structures anymore. :]

If you want to learn more about data structures for iOS, here are a few excellent resources:

- NSHipster is a fantastic resource for exploring little-known corners of the Cocoa APIs, including data structures.
- Peter Steinberger of PSPDFKit fame wrote an excellent article in ObjC.io issue 7 on Foundation data structures.
- Former UIKit engineer Andy Matuschak wrote an article in ObjC.io issue 16 about Struct-based data structures in Swift.
- AirspeedVelocity, a blog looking into some of the internals of Swift, has a post titled "Arrays, Linked Lists, and Performance" that closely follows implementing a stable merge sort.
- A super, super deep dive into the internals of NSMutableArray, and how changing items in an NSMutableArray affects memory.
- A fascinating study of how NSArray and CFArray performance changes with extremely large data sets. This is further evidence that Apple names classes not for what they do under the hood, but how they work for developers.
- If you want to learn more about algorithmic complexity analysis, Introduction to Algorithms will fill your brain with more than you're likely to ever need to know in practice, but perhaps exactly as much as you'll need to know to pass job interviews.

Got more questions? Go nuts in the comments below!
https://www.raywenderlich.com/1172-collection-data-structures-in-swift
I am doing an online intro to C++ programming class. I know I'm on the right track (ie, using the right tools) but the book I have is the wrong one and it's no help. This is the assignment:

"Create a class Rectangle. The class has private attributes length and width, each of which defaults to 1. It has member functions that calculate the perimeter and the area of the rectangle. It has set and get functions for both length and width. The set functions (i.e. setLength() and setWidth()) should verify that length and width are larger than 0.0 and less than 20.0. If the parameter does not satisfy, use default value = 1.0 The get functions (i.e. getLength() and getWidth() ) should return the value of the attributes. The constructor should use the set functions to initialize the attributes. The following is main function you should use. The main idea is that, given three rectangles a, b and c, output their lengths, widths, perimeters and areas. Include header files when necessary."

I thought I knew how to do it, but for some reason I'm stumbling on... something. I just don't know what's going wrong, and I know it must be really basic. I'm pretty sure I'm missing something, but I'm not sure what to look up in a tutorial so I can fix what's wrong, you know?
Code://this is my coding #include <iostream> using namespace std; class rectangle { private: int width, height; public: rectangle (); rectangle (int,int); int area (void) {return (x*y); int perimeter (void) {return (x*2 + y*2);} }; rectangle::rectangle () //a tutorial online said that's how you set defaults { x = 1; y = 1; } rectangle::rectangle (int x, int y) //derived from notes { width = x; height = y; } //this is what was given int main() { Rectangle a, b(4.0,5.0), c(67.0, 888.0); cout<< setiosflags(ios::fixed | ios::showpoint); cout<<setprecision(1); cout<<"a: length = " << a.getLength() << "; width = " << a.getWidth() << "area = " << a.area() <<'\n'; cout<< "b: length = " << b.getLength() << "; width = " << b.getWidth() << "; perimeter = " << b.perimeter() << "; area = " << b.area() << '\n'; cout << "c: length = " << c.getLenght() << "; width = " << c.getWidth() << "; perimeter =" << c.perimeter() << "; area = " << c.area() << endl; return 0; } //my coding again int setWidth() //because it is based on a, b, and c, it is blank, right? { if ((x < 0.0) || (x > 20.0)) { x = 1.0; } Return x; } //have to use X and Y because Width and Height are private, right? int setHeight() { if ((y < 0.0) || (y > 20.0)) { y = 1.0; } return y; } When I compile, I get three errors: "error C2535: '__thiscall rectangle::rectangle(void)' : member function already defined or declared" (that one I get twice) "fatal error C1004: unexpected end of file found Error executing cl.exe." This is of course due tonight (asking online was my last resort after trying to make it work on my own) and if someone can just tell me where I'm going wrong and how I should be doing it instead I'd appriciate it.
https://cboard.cprogramming.com/cplusplus-programming/64186-functions-classes-what-did-i-do-wrong.html
Convert enum in QProcess to string

I'm writing some code which responds to the error() signal in QProcess. What I'd like to do is easily output the value of the QProcess::ProcessError that gets passed in... I was going to take a generic approach when I dug through the code and found that they don't use Q_ENUMS inside of QProcess, which would have made the enum values easily QString-able. See:

As opposed to enums in the Qt namespace, which are Q_ENUMed:

Is there any "rule" for Qt developers like: if you have a class that uses enumerated values publicly, you should tell moc about them with Q_ENUMS so that people can write really friendly error handlers? Should I submit this as a bug/feature request? Is there some means of getting at this thing that I don't know about?

It is my understanding that one must use Q_ENUMS if you are going to try and use QMetaEnum. You could do something like:

@
#define ENUM_TO_STR(X,Y) do { if((X) == (Y)) return QString::fromLatin1( #Y ); } while(0)

QString error2str(const int &error)
{
    ENUM_TO_STR(error, QProcess::FailedToStart);
    ENUM_TO_STR(error, QProcess::Crashed);
    ENUM_TO_STR(error, QProcess::Timedout);
    ENUM_TO_STR(error, QProcess::ReadError);
    ENUM_TO_STR(error, QProcess::WriteError);
    ENUM_TO_STR(error, QProcess::UnknownError);
    return QString::fromLatin1("Invalid");
}
@

Yeah, I was trying to avoid writing any kind of specific case function - given Qt has a generic way of getting information about enums, it would be awesome if people registered their enums.

what about QIODevice::errorString() ? QIODevice is the base class of QProcess and there is an example that uses errorString() with a QProcess object, so you might just give it a try. :)

see here:

@
QProcess builder;
builder.setProcessChannelMode(QProcess::MergedChannels);
builder.start("make", QStringList() << "-j2");

if (!builder.waitForFinished())
    qDebug() << "Make failed:" << builder.errorString();
else
    qDebug() << "Make output:" << builder.readAll();
@
https://forum.qt.io/topic/40902/convert-enum-in-qprocess-to-string
#include <stdlib.h>

void *malloc( size_t size );

This function allocates storage of size bytes. malloc() returns the address of the first byte allocated. Unlike calloc(), this storage is not zero-initialized. If storage cannot be allocated, NULL is returned. See also: calloc().

Prototype:

void *malloc(size_t size);

What to include to use it:

#include <stdlib.h>

malloc() allocates unused memory space whose size in bytes is specified by size. The type is unspecified, as this memory space can now be used for anything, given enough room. Either returns a pointer to the memory address allocated or, if something goes wrong, the null pointer. The result of this function, barring errors, can be passed to the free() function, which will return the memory space to the operating system once the program is done with it. malloc() will fail if size is set to zero, or if insufficient memory is available. On POSIX implementations, it will set errno to ENOMEM in the latter case.

calloc, dynamic array, free, new, realloc

malloc() (n., from the Latin mal-, which means bad, and the Latin locus, which means place) 1. a function to return a bad place to store data; a routine characterized by slowing down a program and wasting space. "Half my goddamn students used ~ to store heads for their linked lists. Didn't they learn about the & operator in 15-213?"

malloc() is a good first order approximation of a general way to get memory, but before you call it, consider where else you might want to store your data. What's the actual scope of your data? For how long does it have to be alive? Consider the following:

void do_something_with_an_element(int a)
{
    list *l = malloc(sizeof(*l));
    l->value = a;
    list_enqueue(some_list, l);
    do_some_things_with_the_list();
    list_remove(some_list, l);
    free(l);
}

There might not be a whole lot of logic there, but boy is malloc() slow!
And as it turns out, in that case, the memory didn't need to come from malloc(); it could just as well have been stacked. When you use malloc(), you might not always have options. But wouldn't it be nice if everyone exercised them when they did?
https://everything2.com/title/malloc
rostopic publish/subscribe over serial port Hi, I'm trying to receive IMU data from simulink on a STM32 Nucleo board over port "ttyACM0" to a topic "/raw_imu" and also publish navigation plan coordinates from rostopic "/local_plan". I'm confused about the general method of doing this. Do I make a script to run on the host computer or the STM board? I've tried the tutorials on the the wiki and this is the best I can come up with but I'm not sure where to put it. This part is only for the board publisher. (I know it's very not right but any tips would be super helpful) #!/usr/bin/env python import roslib import rospy from nav_msgs.msg import Path import serial ros::NodeHandle nh; ser = serial.Serial('/dev/ttyACM0', 9600) def talker(): while not rospy.is_shutdown(): data= ser.read(2) rospy.loginfo(IMU) pub.publish(String(IMU)) rospy.sleep(1.0) if __name__ == '__main__': try: pub = rospy.Publisher('IMU', String) rospy.init_node('talker') talker() except rospy.ROSInterruptException: pass
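One tip that is independent of ROS: `ser.read(2)` hands back raw bytes, not a number, so the sample has to be unpacked before it can go into a message and be published. Here is a small, ROS-free sketch of just that decoding step; the little-endian signed 16-bit format and the divide-by-100 scale factor are assumptions that depend entirely on what the Nucleo firmware actually sends:

```python
import struct

def decode_imu_sample(raw: bytes) -> float:
    """Turn 2 raw bytes from the serial port into a signed reading.

    Assumes the firmware sends a little-endian signed 16-bit integer;
    swap '<h' for '>h' if the board transmits big-endian instead.
    """
    (value,) = struct.unpack('<h', raw)
    return value / 100.0   # assumed scale factor: firmware-dependent

# Example: bytes 0x10 0x27, little-endian, give 0x2710 = 10000 -> 100.0
print(decode_imu_sample(b'\x10\x27'))
```

Once a function like this produces a plain number, publishing it from a rospy node is the straightforward part; the script also needs to be a pure-Python node (the `ros::NodeHandle nh;` line is C++ roscpp syntax and will not run under Python).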
https://answers.ros.org/question/324573/rostopic-publishsubscribe-over-serial-port/?sort=latest
#include <skeletonQuery.h>

Primary interface to reading bound skeleton data. This is used to query properties such as resolved transforms and animation bindings, as bound through the UsdSkelBindingAPI. A UsdSkelSkeletonQuery can not be constructed directly, and instead must be constructed through a UsdSkelCache instance. This is done as follows:

Definition at line 70 of file skeletonQuery.h.

Definition at line 73 of file skeletonQuery.h.

Compute joint transforms in joint-local space, at time. This returns transforms in joint order of the skeleton. If atRest is false and an animation source is bound, local transforms defined by the animation are mapped into the skeleton's joint order. Any transforms not defined by the animation source use the transforms from the rest pose as a fallback value. If valid transforms cannot be computed for the animation source, the xforms are instead set to the rest transforms.

Compute joint transforms which, when concatenated against the rest pose, produce joint transforms in joint-local space. More specifically, this computes restRelativeTransform in:

Compute joint transforms in skeleton space, at time. This concatenates joint transforms as computed from ComputeJointLocalTransforms(). If atRest is true, any bound animation source is ignored, and transforms are computed from the rest pose. The skeleton-space transforms of the rest pose are cached internally.

Compute joint transforms in world space, at whatever time is configured on xfCache. This is equivalent to computing skel-space joint transforms with ComputeJointSkelTransforms(), and then concatenating all transforms by the local-to-world transform of the Skeleton prim. If atRest is true, any bound animation source is ignored, and transforms are computed from the rest pose.

Compute transforms representing the change in transformation of a joint from its rest pose, in skeleton space. That is, these are the transforms usually required for skinning.
Returns the animation query that provides animation for the bound skeleton instance, if any.

Returns an array of joint paths, given as tokens, describing the order and parent-child relationships of joints in the skeleton.

Returns the world space joint transforms at bind time.

Returns a mapper for remapping from the bound animation, if any, to the Skeleton.

Returns the underlying Skeleton primitive corresponding to the bound skeleton instance, if any.

Returns the bound skeleton instance, if any.

Returns the topology of the bound skeleton instance, if any.

Returns true if the size of the array returned by skeleton::GetBindTransformsAttr() matches the number of joints in the skeleton.

Returns true if the size of the array returned by skeleton::GetRestTransformsAttr() matches the number of joints in the skeleton.

Return true if this query is valid.

Definition at line 76 of file skeletonQuery.h.

Boolean conversion operator. Equivalent to IsValid().

Definition at line 79 of file skeletonQuery.h.

Inequality comparison. Return false if lhs and rhs represent the same UsdSkelSkeletonQuery, true otherwise.

Definition at line 91 of file skeletonQuery.h.

Equality comparison. Return true if lhs and rhs represent the same UsdSkelSkeletonQuery, false otherwise.

Definition at line 83 of file skeletonQuery.h.

Definition at line 238 of file skeletonQuery.h.
https://www.sidefx.com/docs/hdk/class_usd_skel_skeleton_query.html
Behaviours Tutorial

Jira includes several features that allow you and your team to be flexible in how you gather information to complete your work. One way to accomplish this flexibility is through field configurations. A field configuration customises how fields behave, based on the issue operation screen they appear on. A behaviour in ScriptRunner allows you to take that field customisation further. Behaviours are part of the overall Jira functionality, so you manage them from the Behaviours page in the Administration menu (Administration > Add-ons > Behaviours). However, each behaviour maps to a field for a project and/or issue type. A behaviour allows you to define field actions for a specific project or issue context.

Behaviours are similar to field configurations but enable you to work with additional options. As a reminder, a field configuration in Jira defines how fields in your instance act and can handle tasks such as setting hidden/visible fields or required/optional fields. A behaviour allows you to create additional requirements or restrictions on the field, and you can do so for specific issue types and/or projects. However, a behaviour cannot override field configurations; for example, if a field is required, you cannot use a behaviour to make it optional. You should also be aware that if you use a behaviour to link to other projects, the user interacting with the behaviour needs the appropriate permissions in those projects to do whatever it is you are having them do. For example, if you create a behaviour that allows a user to link to issues, that user must have the Link Issues permission in the other projects.

Why Use Behaviours?

Behaviours give you more control over your fields in Jira. They let you extend the standard field configuration options and provide you with additional settings so that you can require fields to be completed at different issue operation screens.
Some behaviours also let you add additional fields when a particular item is selected. For example, a project may need very specific customer information, but if it is a new customer, they are likely not in the system yet. Using a behaviour, you can include an option in a select list that, when chosen, causes a new text field to appear where you can add the new customer's information.

Behaviours also give you more power to get information from users in context. So, when a user needs to create a bug issue, a behaviour could be used to explain in the description what information is needed. Because behaviours can reveal additional fields based on input from existing fields, you gather information when it is needed and after it is known.

The Parts of a Behaviour

When you create a new behaviour, you need to update three main sections: general settings, where you define a guide workflow and set an initialiser (if using); fields, where you set the action for the behaviour; and mappings, where you set which projects and issue types use this behaviour.

Settings

A Required Name and Optional Description

You start your new behaviour with a Name and Description. The required name should identify the behaviour. The description is optional, but it can help other Jira administrators understand the purpose of the behaviour and how they can use it.

Behaviour Settings

The Behaviour Settings define some general options for the behaviour before you get into the specific field settings. They include a Use Validator Plugin option, which allows you to check your workflow for the use of a specific validator in Jira Suite Utilities; this option is turned off by default. You also see a menu for a Guide Workflow. A guide workflow helps when you add a condition to a field, by offering the workflow steps from the workflow you selected to use as a guide. Lastly, this section is where you set your initialiser, if you are using one. More on that soon.
Fields

Behaviours allow you to set additional options on fields in Jira. When you select a field, you can work with pre-built options similar to those you find in a field configuration: optional/required, writable/read-only, and shown/hidden. You can also add a pre-built condition to a field that sets additional restrictions or requirements based on the field you choose. For example, the Assignee field may include a Current User in Group condition or a User in Project Role condition. Lastly, if you need a server-side script to run on a field each time a user interacts with it, click Add Server-Side Script and add your script.

Mapping

This option sets which projects and/or issue types the behaviour applies to. If you are mapping to a Jira Service Desk project, you need to use the Use Service Desk Mapping option. Why? Service Desk projects work a bit differently from Jira Core or Jira Software projects: they map to service desks (which are projects) and request types instead of projects and issue types. Once you add a mapping, you can view it on the Fields page for the behaviour.

Once or Often?

As part of setting up a new behaviour, you need to determine how often it runs: one time for a field, or any time a user updates a field.

If the behaviour runs once, you set a script called an "initialiser." An initialiser script runs once and only once for each new instance. For example, you can set a Default Description behaviour that adds some guidance to the Description field the first time a user creates an issue. The script won't run again if that user edits the issue, because it only needs to run the first time. Not all behaviours need or use an initialiser.

The other option for behaviours is to run every time a user edits the field.
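To make the per-edit option concrete, here is a hedged sketch of a server-side script for the new-customer scenario described earlier: it runs each time the select list changes and reveals an extra text field only when "New Customer" is chosen. The field names ("Customer", "New Customer Details") and option value are assumptions for illustration; `getFieldById`, `getFieldByName`, and `getFieldChanged` are provided by the ScriptRunner behaviours scripting context, so this only runs inside a behaviour, not as a standalone script.

```groovy
// Sketch: server-side script attached to a hypothetical "Customer" select list.
// Runs every time the user changes that field on the issue screen.

// getFieldChanged() returns the ID of the field this script is attached to
def customerField = getFieldById(getFieldChanged())

// "New Customer Details" is an assumed custom text field for this example
def detailsField = getFieldByName("New Customer Details")

if (customerField.value == "New Customer") {
    // Customer isn't in the system yet: reveal the extra field and require it
    detailsField.setHidden(false)
    detailsField.setRequired(true)
} else {
    // Existing customer selected: hide the extra field and clear its value
    detailsField.setHidden(true)
    detailsField.setRequired(false)
    detailsField.setFormValue("")
}
```

Because the script runs on every edit of the select list, the extra field appears and disappears immediately as the user changes their selection.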
When you set this option, you choose the field or fields for the behaviour, and then add the server-side script that runs every time a user interacts with that field.

So when do you use each? It depends on what you need to achieve. If you don't want information to be overwritten by the behaviour (as in the Default Description example), you probably want an initialiser. On the other hand, if you need the behaviour to run any time the field is edited, add the server-side script to the field.

Examples of Behaviours

Add a Default Description Behaviour

In this example, we set a default description for renewal issues in the Great Adventure Licensing and Finance project. This description contains set text to help the licensing specialists complete the issues, acting as a template to gather the correct information.

1. From the Jira Administration menu, select Add-ons.
2. On the Manage Apps page, under Behaviours, click Behaviours. (You can also type . to open a shortcut dialog box, and then type Behaviours.)
3. Under Add Behaviour, enter a Name and Description for your new behaviour, then click Add. The new behaviour appears on the Behaviours page. Notice that it is not currently mapped.
4. For your new behaviour, click Fields, the first option in the list under the Operations column. This opens the Edit Behaviour page.
5. Under Behaviour Settings, there are several options. For this behaviour, we need to add an initialiser, so click Create Initialiser. An inline script editor opens.
6. Copy and paste the script below into the Initialiser inline script editor.

def desc = getFieldById("description")

def defaultValue = """\
h2. Renewal Information
* Confirm Company Name:
* Confirm Existing License:
* Confirm Number of Users:
* Confirm Type of License:

h3. Notes
Provide any notes on renewal. Copy/paste from proposals and email correspondence as needed.

h3. Final Actions
* Update Jira Issue with appropriate information.
* Assign issue to Licensing lead for approval.
""".stripIndent()

if (!desc.formValue) {
    desc.setFormValue(defaultValue)
}

This script adds a default value to the Description field in Jira as a prompt or template for users when they create issues. The first portion of the script points to the Description field, and the second portion defines the default value, including formatted text. The script is written for this specific example, but it can be edited.

7. After you've made any changes, click Save. A success message appears.
8. Near the top of the page, in the blue-outline box titled No Mapping Defined, click Add One Now.
9. On the Choose Applicable Context window, set your mapping options:
   - For Choose Mapping Type, select Use Project/IssueType Mapping.
   - For Choose Projects, select the correct project(s).
   - For Choose Issue Types, select the applicable issue types. If you want this behaviour to apply to all issue types, you can leave it at Any Issue Type. To apply it to specific issue types, select from the list (use Ctrl+click or Cmd+click to select multiple issue types).
10. When you are finished, click Add Mapping.

Back on the Behaviours page, the new behaviour now includes mapping. Note that you can still edit all of the options we just set, or even delete the behaviour.

Once you have your new behaviour mapped, you should test it in the project(s) you mapped it to. To check the Default Description, create a new issue in the project and issue type you indicated. If things went as planned, you should see the new default description.

For more examples of behaviours, see our documentation or the Adaptavist Library.

Built-in Behaviour Options

In addition to using recipes and writing custom behaviours, you can also update behaviours using built-in setting options. The process for these built-in options is similar to working with a script, as we saw in the Default Description behaviour example.
However, instead of adding a script, you manipulate the field through behaviour settings and conditions.

1. Create a new behaviour by entering a Name and clicking Add.
2. Add a mapping to the project and issue type you want this behaviour to affect.
3. Click Fields to open the behaviour and view its settings.
4. Under Behaviour Settings, there are several things you can change:
   - Leave Use Validator Plugin set to off.
   - Look at the Guide Workflow. This setting helps you pick the right workflow statuses and transitions when you use conditions for the behaviour.
   - Set an initialiser script if needed. Remember, initialisers run the first time the issue loads, but they require some Groovy scripting.
5. When you add a field, there are some specific settings. This area is where you can set field-configuration-style options for the behaviour and apply conditions to it. Choose a field and click Add. There are three options to set: Optional, Writable, and Shown.
6. Next are conditions for the behaviour. These check the requirements that allow the different configuration options to apply. For example, you could add a condition that allows only project administrators to edit a custom field.
7. Under Conditions, there is an option to add a server-side script field. If you use a behaviour that does not require an initialiser, you can add and update the custom script for the behaviour here, though you don't have to. Updating the field options and conditions can create a behaviour for the selected field without writing a script.
8. Finally, at the very bottom, you can add additional fields, so you could create a behaviour with multiple fields for a certain project and/or issue type.
9. When you are finished, click Save.
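Where the pre-built conditions don't cover a case, a similar effect can often be achieved in the server-side script field instead. The following is a hedged sketch, assuming the Jira Server/Data Center Java API (ComponentAccessor, GroupManager) and the ScriptRunner behaviours bindings; it mirrors a Current User in Group condition on the Writable option by making the field read-only for anyone outside the jira-administrators group. The group name is an assumption for illustration.

```groovy
import com.atlassian.jira.component.ComponentAccessor

// Sketch: server-side script attached to a field; runs when the form
// loads and whenever the field changes.
def user = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def isAdmin = ComponentAccessor.groupManager
        .isUserInGroup(user, "jira-administrators")  // assumed group name

// Equivalent in spirit to a "Current User in Group" condition on Writable:
// everyone else sees the field as read-only.
def field = getFieldById(getFieldChanged())
field.setReadOnly(!isAdmin)
```

For the project-role variant mentioned in step 6, the pre-built User in Project Role condition is usually simpler than scripting the check by hand.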