text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
The types of forms you should be looking to make reusable are what can be considered auxiliary forms. Those that display an application's About information, give spell check functionality, or are logon forms are all likely candidates. More specialized forms that are central to an application's primary function are likely to be too specific to that particular development to make designing them for reuse worthwhile. Alternatively, these specialized forms might still be considered worth making public for use by applications outside those in which they reside. Writing forms to be reusableAs programmers, we should all be familiar with the idea of reusing forms by now. Windows has had the common File, Print, and Font dialog boxes since its early versions, and these have been available to Visual Basic users through the CommonDialog control right from the first version of Visual Basic. Visual Basic 5 introduced the ability to reuse custom-developed forms in a truly safe way. Visual Basic 4 gave us the ability to declare methods and properties as Public to other forms. Prior to this, only code modules could have access to a form's methods and data. This limitation made form-to-form interaction a little convoluted, with the forms having to interface via a module, and generally made creating a reusable form as a completely encapsulated object impractical. Visual Basic 5 provided another new capability for forms. Like classes and controls, forms can now raise events, extending our ability to make forms discrete objects. Previously, if we wanted forms to have any two-way interaction, the code within each form had to be aware of the interface of the other. Now we have the ability to create a form that "serves" another form or any other type of module, without any knowledge of its interface, simply by raising events that the client code can deal with as needed. The ability to work in this way is really a prerequisite of reusable components. Without it, a form is always in some way bound, or coupled, to any other code that it works with by its need to have knowledge of that code's interface. In the following progress report example, you'll find out how to design a generic form that can be reused within many applications. You'll also see how to publicly expose this form to other applications by using a class, allowing its use outside the original application. This topic covers two areas of reuse: reuse of the source code, by which the form is compiled into an application; and reuse of an already compiled form from another application as a distributed object. Introducing the progress formThe form we'll write here, shown in Figure 14-7, is a generic progress form of the type you often see when installing software or performing some other lengthy process. This type of form serves two basic roles. First, by its presence, the form confirms that the requested process is under way, while giving the user the opportunity to abandon the process if necessary. Second, by constantly displaying the progress of a process, the form makes the process appear faster. With Visual Basic often wrongly accused of being slow, this subjective speed is an important consideration. Figure 14-7 A generic progress form in action This example gives us a chance to explore all the different ways you can interact with a form as a component. The form will have properties and methods to enable you to modify its appearance. Additionally, it will raise two events, showing that this ability is not limited to classes. When designing a form's interface, you must make full use of property procedures to wrap your form's properties. Although you can declare a form's data as Public, by doing so you are exposing it to the harsh world outside your component-a world in which you have no control over the values that might be assigned to that component. A much safer approach is to wrap this data within Property Get and Property Let procedures, giving you a chance both to validate changes prior to processing them and to perform any processing you deem necessary when the property value is changed. If you don't use property procedures, you miss the opportunity to do either of these tasks, and any performance gains you hope for will never appear because Visual Basic creates property procedures for all public data when it compiles your form anyway. It's also a good policy to wrap the properties of any components or controls that you want to expose in property procedures. This wrapping gives you the same advantages as mentioned previously, plus the ability to change the internal implementation of these properties without affecting your interface. This ability can allow you to change the type of control used. For example, within the example progress form, we use the Windows common ProgressBar control. By exposing properties of the form as property procedures, we would be able to use another control within the form or even draw the progress bar ourselves while maintaining the same external interface through our property procedures. All this prevents any changes to client code, a prerequisite of reusable components. The generic progress form uses this technique of wrapping properties in property procedures to expose properties of the controls contained within it. Among the properties exposed are the form caption, the progress bar caption, the maximum progress bar value, the current progress bar value, and the visibility of the Cancel command button. Although all of these properties can be reached directly, by exposing them through property procedures, we're able to both validate new settings and perform other processing if necessary. This is illustrated by the AllowCancel and ProgressBarValue properties. The AllowCancel property controls not only the Visible state of the Cancel command button but also the height of the form, as shown in this code segment: Public Property Let AllowCancel (ByVal ibNewValue As Boolean) If ibNewValue = True Then cmdCancel.Visible = True Me.Height = 2460 Else cmdCancel.Visible = False Me.Height = 1905 End If Me.Refresh End Property The ProgressBarValue property validates a new value, avoiding an unwanted error that might occur if the value is set greater than the current maximum: Public Property Let ProgressBarValue(ByVal ilNewValue As Long) ' Ensure that the new progress bar value is not ' greater than the maximum value. If Abs(ilNewValue) > Abs(gauProgress.Max) Then ilNewValue = gauProgress.Max End If gauProgress.Value = ilNewValue Me.Refresh End Property The progress form eventsThe progress form can raise two events. Events are most commonly associated with controls, but can be put to equally good use within other components. To have our form generate events, we must declare each event within the general declarations for the form as shown here: Public Event PerformProcess(ByRef ProcessData As Variant) Public Event QueryAbandon(ByRef Ignore As Boolean) Progress forms are usually displayed modally. Essentially, they give the user something to look at while the application is too busy to respond. Because of this we have to have some way for our progress form appear modal, while still allowing the application's code to execute. We do this by raising the PerformProcess event once the form has finished loading. This event will be executed within the client code, where we want our process to be carried out. Private Sub Form_Activate() Static stbActivated As Boolean ' (Re)Paint this form. Me.Refresh If Not stbActivated Then stbActivated = True ' Now this form is visible, call back into the calling ' code so that it may perform whatever action it wants. RaiseEvent PerformProcess(m_vProcessData) ' Now that the action is complete, unload me. Unload Me End If End Sub Components used in this way are said to perform a callback. In this case we show the form, having previously prepared code in the PerformProcess event handler for it to callback and execute once it has finished loading. This allows us to neatly sidestep the fact that when we display a form modally, the form now has the focus and no further code outside it is executed until it unloads. The final piece of sample code that we need to look at within our progress form is the code that generates the QueryAbandon event. This event allows the client code to obtain user confirmation before abandoning what it's doing. This event is then triggered when the Cancel command button is clicked. By passing the Ignore Boolean value by reference, we give the event handling routine in the client the opportunity to change this value in order to work in the same way as the Cancel value within a form's QueryUnload event. When we set Ignore to True, the event handling code can prevent the process from being abandoned. When we leave Cancel as False, the progress form will continue to unload. The QueryAbandon event is raised as follows: Private Sub cmdCancel_Click() Dim bCancel As Boolean bCancel = False RaiseEvent QueryAbandon(bCancel) If bCancel = False Then Unload Me End Sub From this code, you can see how the argument of the QueryAbandon event controls whether or not the form is unloaded, depending on its value after the event has completed. Using the progress formThe code that follows illustrates how the progress form can be employed. First we have to create an instance of the form. This must be placed in the client module's Declarations section because it will be raising events within this module, much the same way as controls do. Forms and classes that raise events are declared as WithEvents, in the following way: Private WithEvents frmPiProg As frmProgress We must declare the form in this way; otherwise, we wouldn't have access to the form's events. By using this code, the form and its events will appear within the Object and Procedure combo boxes in the Code window, just as for a control. Now that the form has been declared, we can make use of it during our lengthy process. First we must create a new instance of it, remembering that the form does not exist until it has actually been Set with the New keyword. When this is done we can set the form's initial properties and display it, as illustrated here: ' Instantiate the progress form. Set frmPiProg = New frmProgress ' Set up the form's initial properties. frmPiProg.FormCaption = "File Search" frmPiProg.ProgressBarMax = 100 frmPiProg.ProgressBarValue = 0 frmPiProg.ProgressCaption = _ "Searching for file. Please wait..." ' Now Display it modally. frmPiProg.Show vbModal, Me Now that the progress form is displayed, it will raise the PerformAction event in our client code, within which we can carry out our lengthy process. This allows the progress form to be shown modally, but still allow execution within the client code. Private Sub frmPiProg_PerformProcess(ProcessData As Variant) Dim nPercentComplete As Integer mbProcessCancelled = False Do ' Update the form's progress bar. nPercentComplete = nPercentComplete + 1 frmPiProg.ProgressBarValue = nPercentComplete ' Peform your action. ' You must include DoEvents in your process or any ' clicks on the Cancel button will not be responded to. DoEvents Loop While mbProcessCancelled <> True _ And nPercentComplete < frmPiProg.ProgressBarMax End Sub The final piece of code we need to put into our client is the event handler for the QueryAbandon event that the progress form raises when the user clicks the Cancel button. This event gives us the chance to confirm or cancel the abandonment of the current process, generally after seeking confirmation from the user. An example of how this might be done follows: Private Sub frmPiProg_QueryAbandon(Ignore As Boolean) If MsgBox("Are you sure you want to cancel?", _ vbQuestion Or vbYesNo, Me.Caption) = vbNo Then Ignore = True mbProcessCancelled = True End If End Sub From this example, you can see that in order to use the progress form, the parent code simply has to set the form's properties, display it, and deal with any events it raises. Making a form publicAlthough forms do not have an Instancing property and cannot be made public outside their application, you can achieve this effect by using a class module as an intermediary. By mirroring the events, methods, and properties of your form within a class with an Instancing property other than Private, making sure that the project type is ActiveX EXE or ActiveX DLL, you can achieve the same results as you can by making a form public. Using the progress form as an example, we will create a public class named CProgressForm. This class will have all the properties and methods of the progress form created earlier. Where a property of the class is accessed, the class will merely delegate the implementation of that property to the underlying form, making it public. Figure 14-8 shows this relationship, with the client application having access to the CProgressForm class but not frmProgress, but the CProgressForm class having an instance of frmProgress privately. To illustrate these relationships, we will show how the ProgressBarValue property is made public. First we need to declare a private instance of the form within the Declarations section of our class: Private WithEvents frmPiProgressForm As frmProgress Figure 14-8 Making a form public using a public class as an intermediary Here we see how the ProgressBarValue property is made public by using the class as an intermediary: Public Property Let ProgressBarValue(ByVal ilNewValue As Long) frmPiProgressForm.ProgressBarValue = ilNewValue End Property Public Property Get ProgressBarValue() As Long ProgressBarValue = frmPiProgressForm.ProgressBarValue End Property Similarly, we can subclass the PerformProcess and QueryAbandon events, allowing us to make public the full functionality of the progress form. For example, we could subclass the QueryAbandon event by reraising it from the class, in reaction to the initial event raised by the form, and passing by reference the initial Ignore argument within the new event. This way the client code can still modify the Ignore argument of the original form's event. Private Sub frmPiProgressForm_QueryAbandon(Ignore As Boolean) RaiseEvent QueryAbandon(Ignore) End Sub There is a difficulty with exposing the progress form in this way. The form has a Show method that we must add to the class. Because we're using the form within another separate application, this method cannot display the form modally to the client code. One solution is to change the Show method of the CProgressForm class so that it always displays the progress form modelessly. Another possible solution is to use a control instead of a public class to expose the form to the outside world. Those of you who have used the common dialogs before will be familiar with this technique. This enables you to make the form public in the same way as with CProgressClass, but additionally you can add a Display method, in which you call the form's Show method, showing it modally to the form that the control is hosted on. Public Sub Display(ByVal inCmdShow As Integer) ' Display the progress form. frmPiProgressForm.Show inCmdShow, UserControl.Parent End Sub
http://www.brainbell.com/tutors/Visual_Basic/Forms_as_Reusable_Components.htm
CC-MAIN-2018-43
refinedweb
2,487
50.87
CLR (Common language runtime) is nothing new now, .NET developers are familiar with the CLR, how to work with the object. There is no doubt that now we can design our component and application easily using CLR. We will not discuss about the CLR. We will try to focus on the DLR ((Dynamic Language Runtime). At first, we need to know what is DLR. In a simply world, we can say that the DLR has some new features which added to common language runtime (CLR) for Microsoft .NET Framework 4.0. More information about CLR can be found here. Let’s try to get some more. One of the powerful features of Microsoft .NET Framework 4.0 is the Dynamic Types. Dynamic Types allow us to write code in such a way that we can go around compile time checking. Note that bypass does not mean that we can remove the compile time errors, it means if the operation is not valid, then we cannot detect the error at compile time. This error will appear only in run time. Microsoft .NET Framework 4.0 provides us the System.Dynamic namespace. To use Dynamic Types, we need to add the references DLL of System.Dynamic. System.Dynamic Well , I hope that you get the basics of DLR ((Dynamic Language Runtime), so let's try to play with a few code snippets and a few keywords of Microsoft Visual Studio .NET. We are familiar with the keyword “var” we will play with this keyword, before that one interesting thing I must share is that it is possible to change an object type runtime? Probably your answer will be NO. But objects defined with dynamic keyword can change their type at runtime. Isn't it cool. A simple code example is given below:More information about System.Dynamic can be found here. var System.Dynamic Here in this example below, we have a class called Class1(). The Class1() has two functions displayMessage() and displayNumeric(). Both functions are returning different values. Class1() Class1() displayMessage() displayNumeric() static void Main(string[] args) { dynamic myDynamicVariable; // "dynamic" is keyword for declaring dynamic type variables var _objClass = new Class1(); // Creating object myDynamicVariable = _objClass.displayMessage(); Console.WriteLine("Value: " + myDynamicVariable + ". Type of dynamic variable is: " + myDynamicVariable.GetType()); // Displaying types myDynamicVariable = _objClass.displayNumeric(); Console.WriteLine("\nValue: My CP Member Id # " + myDynamicVariable + ". Type of dynamic variable is: " + myDynamicVariable.GetType()); // Displaying types Console.ReadLine(); } class Class1 { private string sMessage = "The Code Project is COOL"; public string displayMessage() { return sMessage; } public int displayNumeric() { return 1186309; } } If you look, we have declared a dynamic variable myDynamicVariable. myDynamicVariable dynamic myDynamicVariable; // "dynamic" is keyword for declaring dynamic type variables So we are assigning string value to it and printing it in console, value as well as its type. This is all about Dynamic type. string I hope this might be helpful to.
http://www.codeproject.com/Articles/115211/Introduction-DLR-Dynamic-Language-Runtime?msg=3625915
CC-MAIN-2014-52
refinedweb
474
59.7
From: Pete Zaitcev <zaitcev@redhat.com>This code appears to be more trouble than it's worth, considering thatno normal users reload drivers. So, we comment it for now. It is notremoved outright for the benefit of hackers (that is, myself).Signed-off-by: Pete Zaitcev <zaitcev@yahoo.com>Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>--- drivers/block/ub.c | 2 ++ 1 file changed, 2 insertions(+)--- scsi-2.6.orig/drivers/block/ub.c 2005-09-21 17:29:38.000000000 -0700+++ scsi-2.6/drivers/block/ub.c 2005-09-21 17:29:54.000000000 -0700@@ -2217,8 +2217,10 @@ * This is needed to clear toggles. It is a problem only if we do * `rmmod ub && modprobe ub` without disconnects, but we like that. */+#if 0 /* iPod Mini fails if we do this (big white iPod works) */ ub_probe_clear_stall(sc, sc->recv_bulk_pipe); ub_probe_clear_stall(sc, sc->send_bulk_pipe);+#endif /* * The way this is used by the startup code is a little specific.---To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2005/9/22/61
CC-MAIN-2017-51
refinedweb
190
62.24
A reflective enum implementation for C++ Because reflection makes you wise, not smart wise_enumis a standalone smart enum library for C++11/14/17. It supports all of the standard functionality that you would expect from a smart enum class in C++: - Tells you the number of enumerators - Lets you iterate over all enum values - Converts string to enum, and enum to string - Does everything in an idiomatic C++ way (friendly to generic programming, compile time programming, etc) Let's look at a bit of code. You can declare an enum like this: // Equivalent to enum Color {GREEN = 2, RED}; WISE_ENUM(Color, (GREEN, 2), RED) You can also declare an enum class instead of an enum, specify the storage explicitly, declare an enum nested inside a class, or even adapt an already declared enum: // Equivalent to enum class MoreColor : int64_t {BLUE, BLACK = 1}; WISE_ENUM_CLASS((MoreColor, int64_t), BLUE, (BLACK, 1)) // Inside a class, must use a different macro, but still works struct Bar { WISE_ENUM_MEMBER(Foo, BUZ) }; // Adapt an existing enum you don't control so it works with generic code namespace another_lib { enum class SomebodyElse { FIRST, SECOND }; } WISE_ENUM_ADAPT(another_lib::SomebodyElse, FIRST, SECOND) You can ask the enum how many enumerators it has: static_assert(wise_enum::size == 2, ""); Iterate over the enumerators: std::cerr << "Enum values and names:\n"; for (auto e : wise_enum::range) { std::cerr << static_cast(e.value) << " " << e.name << "\n"; } Convert between strings and enums: // Convert any enum to a string std::cerr << wise_enum::to_string(Color::RED) << "\n"; // Convert any string to an optional auto x1 = wise_enum::from_string("GREEN"); auto x2 = wise_enum::from_string("Greeeeeeen"); assert(x1.value() == Color::GREEN); assert(!x2); Check whether something is a wise enum at compile time: cpp static_assert(wise_enum::is_wise_enum_v, ""); static_assert(!wise_enum::is_wise_enum_v, ""); enum flub { blub, glub }; static_assert(!wise_enum::is_wise_enum_v, ""); It has a few notable design choices. First, when you use one of the macros to declare your enum, what gets declared (among other things) is exactly the vanilla enum (or enum class) that you would expect. Not an enum like class, or anything like that. That means that wise_enums can be used exactly like regular enums in non-reflective contexts, because they are regular enums. They can be used as non-type template parameters, and they can be used in switch case statements, unlike any user defined type. This also means that upgrading a regular enum already widely used (non-reflectively) in your codebase to a wise enum is never a breaking change. No strange behavior, or edge cases, when used with other third party libraries (e.g. serialization), or standard library type traits. Second, all the functionality in defining enums is preserved. You can define enumor enum classes, set storage explicitly or let it be implicit, define the value for an enumeration, or allow it to be determined implicitly. You can also define enums nested in classes, which isn't supported in some smart enum libraries. Third, it's quite compile-time programming friendly. Everything is constexpr, and a type trait is provided. This makes it easy to handle wise enums in a specific way in generic code. For example, if you have a logger, it can't intelligently log a vanilla enum without more information, but it can log a wise enum, so you can use the type trait to handle them differently (with no performance cost of course). Fourth, it's careful with regards to performance and generated assembly. It makes zero heap allocations and does zero dynamic initialization, and does not use exceptions. The enum -> string is an optimal switch-case. String -> enum is currently a linear search; this may be changed in the future (most alternatives are not trivial to implement without doing heap allocations or dynamic initialization). The best known alternative is probably Better Enums. The biggest issue with Better Enums is simply that its macros don't actually create enums or enum classes, but enum like classes. This carries all of the disadvantages discussed in the previous section, and for me was just a deal breaker. There are also more minor issues like not being able to define a nested enum, having a lower default enumeration limit. Conversely, I'm not aware of any advantages, except one: it does support C++03, which wise enum never will, so it's a good choice for older codebases. A recent interesting alternative is Meta Enum. This does declare actual enum/enum classes. As benefits, it doesn't have any limit on the number of enumerations by design, and it doesn't require different macros for declaring enums nested inside classes. As far as I can tell though, the approach means that it can't support switch case generation (e.g. for to_string), nor can it support 11. It currently only seems to support 17 but 14 support may be possible. As far as I saw, neither library has something like the adapt macro, though I think either one could add it pretty easily. There are other implementations, but most of the ones I've seen are very clearly very short projects, lacking support for basic features (e.g. controlling enum values) and documentation. Overall, I feel like wise enum is the best choice for an enum library for a typical, modern C++ codebase. If any of this information is incorrect, please let me know and I'll make correcting it the highest priority. Wise enum tries to target each language version idiomatically. In 11, template variables, which are the recommended interface in 14/17 are not available so using the typical class template static interface instead is necessary. Many functions lose constexprin 11. The difference between 14 and 17 is mostly in the types used, discussed in the next section. There are two types that you can customize in wise_enum, by defining macros: the optional type, and the string type. | Type | 11/14 default | 17 default | customize macro | type alias | | ------------- | ----------------- | ---------- | --------------- | --- | | optional | wise_enum::optional| std::optional| WISE_ENUM_OPTIONAL_TYPE| wise_enum::optional_type| | string | const char *| std::string_view| WISE_ENUM_STRING_TYPE| wise_enum::string_type| If you only support 17, the defaults should be fine. If you're on 11/14, the defaults are fine as well, but if you want to be forward compatible I'd consider rounding up a stringview implementation somewhere and using that. Otherwise, since const char*and `stringview` don't have the same interface, you may have breakages when upgrading. Finally, if you're supporting both 11/14 and 17 I'd definitely define both macros so the same type is used in your builds. You can define the macro either in your build system, or by having a stub header that defines them and then includes wise_enum.h, and only including via the stub. You can also customize the use of exceptions. If you use the CMake option NO_EXCEPTIONSor otherwise define the macro WISE_ENUM_NO_EXCEPT, then wiseenum should be fully compatible with -fno-exceptions. The API never directly throws exceptions anyhow; the only change in behavior is that the provided optional(and `compactoptional ) will abort rather than throw onvalue()` calls if the optional is empty, similar to the behavior of most (all?) standard library optional implementations when exceptions are disabled. Over time I'd like to leverage some of the capabilities of wise enum to do other useful enum related things. I have a compact optional implementation included now in wise enum. The key point is that it uses compile time reflection to statically verified that the sentinel value used to indicate the absence of an enum, is not a value used for any of the enumerators. If you add an enumerator to an enum used in a compact optional, and the value of the enum is the sentinel, you get a compilation error. One problem where C++ gives you little recourse is when you have a runtime value that you want to lift into a compile time value. Consider the following: template class MyDerived : MyInterface {...}; unique_ptr make(MyEnum e); It's hard to write makewithout boilerplate in the general case. You need to manually switch case over all enumerator values, and in each case put the compile time constant into the template. Wise enum will shortly (exact interface being worked out) provide a facility that takes an enum and a lambda, does the switch case over all values internally, and calls the lambda making the enum available as a compile time constant. Planned for the future. There are some known limitations: to_stringwill not work. You can declare the enum and use all the other API. This is both because it doesn't jive at all with the implementation, and even conceptually it's not clear how you would handle a conversion to string since multiple strings would be associated with the same value. create_generatedscript to create a file with as many as you need, and replace wise_enum_generatedwith that file. The default limit may be raised or lowered based on feedback. An alternative solution here would be to create headers in advance that raise this number, but leave the onus on the user to include them (so users who don't need a large number aren't slowed down) BOOST_PP? I started with it, but the limit on sequences was very disappointing (64) and there didn't seem to be any easy way to change it. So then I started down the codegen route, and once there, I wasn't using very much. I know there are always people who prefer to avoid the dependency, so I decided to drop it.
https://xscode.com/quicknir/wise_enum
CC-MAIN-2020-50
refinedweb
1,577
60.04
Ctrl+P This extension (previously known as DD-IDDE) enables developers of the open-domain chatbots with an option to design them within VS Code using customized Draw.io UI and Discourse Moves Recommendations to make dialog more smooth and natural. This extension uses DeepPavlov's Dialog Flow Framework as the runtime environment for the open-domain chatbots. It can be used to build simple chatbots using DF SDK or to build complex multi-skill AI Assistants using our DeepPavlov Dream platform. Discourse Moves Recommendation System has been built based on the Discourse & Speech Functions theory originally created by M.A.K. Halliday and further developed by Eggins & Slade in their book "Analysing Casual Conversation". This extension is built on top of the unnoficial integration of the Draw.io (also known as diagrams.net) into VS Code made by Henning Dieterichs, hediet on Github (Main Contributor / Author). .py Stay tuned for a demo! Here's a recording of introduction to DFF & DF Designer we've made back in the end of December. The image link below leads directly to the introduction of the DF Designer itself: git clone cd dialog_flow_sdk pip3 install -r requirements.txt docker-compose up --build NOTE: By default, the extension uses an SFC predictor running in the cloud, so you do not need to have the SDK running locally for predictions to work. You can still use a local predictor by changing the sfc-predictor-url in VS Code settings. sfc-predictor-url dialog_flow_sdk code . examples food.py start_node save Show Suggestions RESPONSE If you want to work not with speech functions, but with dialog acts: Dff > Vscode-drawio: Selected-predictor sfc midas Once you've designed your Discourse-Driven open-domain chatbot, you can run it: python3 examples/food.py git clone cd dream feat/speech-function-dist-book-skill local.yml python3 utils/create_local_yml.py -p -d assistant_dists/dream_sfc/ -s dff-book-sfc-skill /skills/dff_book_sfc_skill/ scenario main.py To use Discourse Moves Recommendation System using Speech Functions you need to add integration with Speech Functions classifier: /skills/dff_book_sfc_skill/scenario/sf_conditions.py dff_template_skill from df_engine.core.keywords import MISC import scenario.sf_conditions as dm_cnd line 14 start dream docker-compose -f docker-compose.yml -f assistant_dists/dream_sfc/docker-compose.override.yml -f assistant_dists/dream_sfc/dev.yml -f assistant_dists/dream_sfc/local.yml up --build docker-compose -f docker-compose.yml -f assistant_dists/dream_sfc/docker-compose.override.yml -f assistant_dists/dream_sfc/dev.yml -f assistant_dists/dream_sfc/local.yml exec agent python -m deeppavlov_agent.run -pl assistant_dists/dream_sfc/pipeline_conf.json Type your response. If you didn't edit the file of the demo dff_book_sfc_skill, you can type "Hi". After the response was returned by the bot, type "I love reading". As you custom Dream distribution is running (in Docker), you should see debug output from the system. 4. Alternatively, can talk directly via REST API. Go to localhost:4242 and send POST requests like this (where user_id should be different for each session): dff_book_sfc_skill localhost:4242 user_id { "user_id": "MyDearFriend", "payload": "Hi!" } then followed by: { "user_id": "MyDearFriend", "payload": "I love reading" } By this moment, we've arrived at ```book_start`` node. Now, you can see how Speech Function Classifier works in this situation. To trigger it, we want to say something that it would classify as "React.Rejoinder.Support.Track.Clarify". Here's one example: { "user_id": "MyDearFriend", "payload": "Why of course I do of course I do. Why do you ask?" } And at this very moment, you'll get the response that is conditioned by this very Speech Function: Bot: I think that reading is cool and all people should read books For a detailed description of each speech function and more examples of working with speech functions when building skills, see SF-augmented version of Book Skill: If you want to use dialog acts (from MIDAS) instead of Speech Functions: feat/is_midas Provide any user id and type your response. You can open the same *.py file with the Draw.io Dialog Designer and as .py file. They are synchronized, so you can switch between them as you like it. This is super pratical if you want to use find/replace to rename text or other features of VS Code to speed up your diagram creation/edit process. Use the View: Reopen Editor With... command to toggle between the text or the Draw.io-based DF Designer editor. You can open multiple editors for the same file. *.py View: Reopen Editor With... Special thanks to Yuri Kuratov, yurakuratov on Github (Senior Researcher at DeepPavlov.ai) If you like this extension, you might like our other Conversational AI tech too:
https://marketplace.visualstudio.com/items?itemName=deeppavlov.vscode-dd-idde
CC-MAIN-2022-33
refinedweb
771
50.84
On 07/27/2012 02:12 PM, Matthew Wilcox wrote:> On Fri, Jul 27, 2012 at 10:44:18AM -0600, Keith Busch wrote:>> Registers a character device for the nvme module and creates character>> files as /dev/nvmeN for each nvme device probed, where N is the device>> instance. The character devices support nvme admin ioctl commands so>> that nvme devices without namespaces can be managed.>> I don't see a problem here, but I'm no expert at sysfs / character devices.> Alan, Greg, anyone else see any problems with how this character device is> created / destroyed?This seems like something normally done via a control device that is addressible via bsg.This is -not- a NAK, but maybe the storage folks have a different preference for an admin-command path. Jeff
http://lkml.org/lkml/2012/7/27/370
CC-MAIN-2015-22
refinedweb
132
59.43
The following example shows how you can use advanced CSS to style the decrement and decrement buttons on a Spark TextArea control’s vertical scroll bar in Flex 4 by styling the #decrementButton and #incrementButton selectors. Full code after the jump. To use the following code, you must have Flash Player 10 and a Flex 4 SDK installed in your Flex Builder 3. For more information on downloading and installing the Flex 4 SDK into Flex Builder 3, see “Using the beta 4 SDK in Flex Builder 3”. <?xml version="1.0" encoding="utf-8"?> <!-- --> <s:Application <fx:Style> @namespace s "library://ns.adobe.com/flex/spark"; s|TextArea s|VScrollBar #decrementButton { baseColor: #FF0000; } s|TextArea s|VScrollBar #incrementButton { baseColor: haloGreen; } </fx:Style> <s:TextArea </s:Application>.
http://blog.flexexamples.com/2009/02/27/styling-the-decrement-button-and-increment-button-on-an-fxvscrollbar-control-in-flex-gumbo/
CC-MAIN-2017-22
refinedweb
127
53.21
You might have encountered some code like the one below, and wonder what is breakOut, and why is it being passed as parameter? import scala.collection.breakOut val map : Map[Int,String] = List("London", "Paris").map(x => (x.length, x))(breakOut) The answer is found on the definition of map: def map[B, That](f : (A) => B)(implicit bf : CanBuildFrom[Repr, B, That]) : That Note that it has two parameters. The first is your function and the second is an implicit. If you do not provide that implicit, Scala will choose the most specific one available. So, what’s the purpose of breakOut? Consider the example given at the beginning , You take a list of strings, transform each string into a tuple (Int, String), and then produce a Map out of it. The most obvious way to do that would produce an intermediary List[(Int, String)] collection, and then convert it. Given that map uses a Builder to produce the resulting collection, wouldn’t it be possible to skip the intermediary List and collect the results directly into a Map? Evidently, yes, it is. To do so, however, we need to pass a proper CanBuildFrom to map, and that is exactly what breakOut does. Let’s look, then, at the definition of breakOut: def breakOut[From, T, To](implicit b : CanBuildFrom[Nothing, T, To]) = new CanBuildFrom[From, T, To] { def apply(from: From) = b.apply() ; def apply() = b.apply() } Note that breakOut is parameterized, and that it returns an instance of CanBuildFrom. As it happens, the types From, T and To have already been inferred, because we know that map is expecting CanBuildFrom[List[String], (Int, String), Map[Int, String]]. Therefore: From = List[String] T = (Int, String) To = Map[Int, String] To conclude let’s examine the implicit received by breakOut itself. It is of type CanBuildFrom[Nothing,T,To]. We already know all these types, so we can determine that we need an implicit of type CanBuildFrom[Nothing,(Int,String),Map[Int,String]]. But is there such a definition? Let’s look at CanBuildFrom’s definition: trait CanBuildFrom[-From, -Elem, +To] extends AnyRef So CanBuildFrom is contra-variant on its first type parameter. Because Nothing is a bottom class (ie, it is a subclass of everything), that means any class can be used in place of Nothing. Since such a builder exists, Scala can use it to produce the desired output. A lot of methods from Scala’s collections library consists of taking the original collection, processing it somehow (in the case of map, transforming each element), and storing the results in a new collection. To maximize code reuse, this storing of results is done through a builder ( scala.collection.mutable.Builder), which basically supports two operations: appending elements, and returning the resulting collection. The type of this resulting collection will depend on the type of the builder. Thus, a List builder will return a List, a Map builder will return a Map, and so on. The implementation of the map method need not concern itself with the type of the result: the builder takes care of it. On the other hand, that means that map needs to receive this builder somehow. The problem faced when designing Scala 2.8 Collections was how to choose the best builder possible. For example, if I were to write Map('a' -> 1).map(_.swap), I’d like to get a Map(1 -> 'a') back. On the other hand, a Map('a' -> 1).map(_._1) can’t return a Map (it returns an Iterable). The magic of producing the best possible Builder from the known types of the expression is performed through this CanBuildFrom implicit. To better explain what’s going on, I’ll give an example where the collection being mapped is a Map instead of a List. I’ll go back to List later. For now, consider these two expressions: Map(1 -> "one", 2 -> "two") map Function.tupled(_ -> _.length) Map(1 -> "one", 2 -> "two") map (_._2) The first returns a Map and the second returns an Iterable. The magic of returning a fitting collection is the work of CanBuildFrom. Let’s consider the definition of map again to understand it. The method map is inherited from TraversableLike. It is parameterized on B and That, and makes use of the type parameters A and Repr, which parameterize the class. Let’s see both definitions together: The class TraversableLike is defined as: trait TraversableLike[+A, +Repr] extends HasNewBuilder[A, Repr] with AnyRef def map[B, That](f : (A) => B)(implicit bf : CanBuildFrom[Repr, B, That]) : That To understand where A and Repr come from, let’s consider the definition of Map itself: trait Map[A, +B] extends Iterable[(A, B)] with Map[A, B] with MapLike[A, B, Map[A, B]] Because TraversableLike is inherited by all traits which extend Map, A and Repr could be inherited from any of them. The last one gets the preference, though. So, following the definition of the immutable Map and all the traits that connect it to TraversableLike, we have: trait Map[A, +B] extends Iterable[(A, B)] with Map[A, B] with MapLike[A, B, Map[A, B]] trait MapLike[A, +B, +This <: MapLike[A, B, This] with Map[A, B]] extends MapLike[A, B, This] trait MapLike[A, +B, +This <: MapLike[A, B, This] with Map[A, B]] extends PartialFunction[A, B] with IterableLike[(A, B), This] with Subtractable[A, This] trait IterableLike[+A, +Repr] extends Equals with TraversableLike[A, Repr] trait TraversableLike[+A, +Repr] extends HasNewBuilder[A, Repr] with AnyRef If you pass the type parameters of Map[Int, String] all the way down the chain, we find that the types passed to TraversableLike, and, thus, used by map, are: A = (Int,String) Repr = Map[Int, String] Going back to the example, the first map is receiving a function of type ((Int, String)) => (Int, Int) and the second map is receiving a function of type ((Int, String)) => Int. I use the double parenthesis to emphasize it is a tuple being received, as that’s the type of A as we saw. With that information, let’s consider the other types. map Function.tupled(_ -> _.length): B = (Int, Int) map (_._2): B = Int We can see that the type returned by the first map is Map[Int,Int], and the second is Iterable[String]. Looking at map’s definition, it is easy to see that these are the values of That. But where do they come from? If we look inside the companion objects of the classes involved, we see some implicit declarations providing them. On object Map: implicit def canBuildFrom [A, B] : CanBuildFrom[Map, (A, B), Map[A, B]] And on object Iterable, whose class is extended by Map: implicit def canBuildFrom [A] : CanBuildFrom[Iterable, A, Iterable[A]] These definitions provide factories for parameterized CanBuildFrom. Scala will choose the most specific implicit available. In the first case, it was the first CanBuildFrom. In the second case, as the first did not match, it chose the second CanBuildFrom. Let’s see the first example, List’s and map’s definition (again) to see how the types are inferred: val map : Map[Int,String] = List("London", "Paris").map(x => (x.length, x))(breakOut) sealed abstract class List[+A] extends LinearSeq[A] with Product with GenericTraversableTemplate[A, List] with LinearSeqLike[A, List[A]] trait LinearSeqLike[+A, +Repr <: LinearSeqLike[A, Repr]] extends SeqLike[A, Repr] trait SeqLike[+A, +Repr] extends IterableLike[A, Repr] trait IterableLike[+A, +Repr] extends Equals with TraversableLike[A, Repr] trait TraversableLike[+A, +Repr] extends HasNewBuilder[A, Repr] with AnyRef def map[B, That](f : (A) => B)(implicit bf : CanBuildFrom[Repr, B, That]) : That The type of List("London", "Paris") is List[String], so the types A and Repr defined on TraversableLike are: A = String Repr = List[String] The type for (x => (x.length, x)) is (String) => (Int, String), so the type of B is: B = (Int, String) The last unknown type, That is the type of the result of map, and we already have that as well: val map : Map[Int,String] = So, That = Map[Int, String] That means breakOut must, necessarily, return a type or subtype of CanBuildFrom[List[String], (Int, String), Map[Int, String]]. This answer was originally submitted in response to this question on Stack Overflow.blog comments powered by Disqus Contents
http://docs.scala-lang.org/tutorials/FAQ/breakout
CC-MAIN-2017-04
refinedweb
1,399
60.24
Talk:Func capturezone From Valve Developer Community So, should the team be the color of the suitcase, or the other team who would be making the capture?--Foolster41 14:02, 13 Nov 2007 (PST) Attack/Defend Gameplay - I'm assuming there's not really any functionality really built-in for this gametype, but I'm trying to make it so there's a delay in capturing a point. Two things: (1) can I tie a flag to a specific capturezone and (2) can I delay the cap and use the control-point method (say that radial 5sec counter in the hud) to demote how much time is left to cap? ~ Hampster 21:09, 3 Dec 2007 (PST) - I don't think so, on both counts. However, you might be able to hack your way around it by using an actual capture point as part of the flag return area. I'm picturing three parts - a trigger that detects the flag (not sure how to do this part), a capture point, and a flag return. - When the trigger detects the flag, it enables the capture point via an Output, and disables it when the flag leaves the area. This way you can only capture the point if you have the flag. - Then, when the capture point is captured, it enables the flag return (func_capturezone) via an Output. Because the flag is already in the area, it gets captured. Just make sure you set the capture point to be worth 0 points. Then, set the flag's Outputs to reset the system when it's captured.
https://developer.valvesoftware.com/w/index.php?title=Talk:Func_capturezone&oldid=103138
CC-MAIN-2019-47
refinedweb
264
75.74
28 March 2008 15:18 [Source: ICIS news] TORONTO (ICIS news)--US-based Dow Chemical is aiming to achieve earnings per share (EPS) of well over $3 in the next industry downturn as it works to reposition its business away from the ups and down of the petrochemicals cycle, CEO Andrew Liveris said on Friday. ?xml:namespace> “My commitment to our stockholders is that at the next industry trough, Dow Chemical will have an earnings profile that is well north of $3 per share and we will provide steady earnings growth beyond that point," Liveris wrote in his annual letter to shareholders. As a result of strategic actions and its discipline, Dow was now positioned to move away from the ethylene-driven troughs that plagued its earnings profile for so long and transition to an earnings-growth company, he said in reviewing 2007. The $3 EPS target for a trough compares with EPS of $2.99 Dow reported in 2007, on sales of $53.5bn and net income of $2.9bn, and EPS of $3.82 in 2006, on sales of $49.1bn and net income of $3.7bn. The letter, along with Dow’s 2007 annual report published earlier on Friday, are available at on the company's website. Dow's shares were priced at $36.90 in early Friday morning trading in New York, up marginally by 0.5% from Thursday's closing price. Dow's shares were priced at $36.90 in early Friday morning trading in New York, up marginally by 0.5% from Thursday's closing price. (
http://www.icis.com/Articles/2008/03/28/9111775/dow-targets-eps-of-over-3-in-next-downturn.html
CC-MAIN-2015-18
refinedweb
262
73.17
A friendly place for programming greenhorns! Big Moose Saloon Search | Java FAQ | Recent Topics | Flagged Topics | Hot Topics | Zero Replies Register / Login JavaRanch » Java Forums » Java » Swing / AWT / SWT Author Bad Background Refresh Randall Fairman Greenhorn Joined: Apr 18, 2011 Posts: 29 posted Jan 23, 2012 13:26:39 0 This is complicated to explain, but the gist is that the JVM is painting a JPanel to its background color when I don't want that to happen. Or, if I fix that, then one window's outline shows through and appears in the content area of a window that's on top of it. The UI consists of a JFrame holding a JDesktopPane which holds three JInternalFrame objects. One of these JInternalFrame objects contains an instance of my own class, called STextArea. In some round-about way, the problem seems to be due the fact that STextArea starts a Timer, where each time that the Timer goes off, the action command sets a flag and calls repaint() for STextArea. This flag indicates that the next call to paintComponent() should only draw a caret (in xor mode) and draw nothing else. In the example code shown below, I just have paintComponent() do nothing and return when that flag is set rather than drawing the caret since the problem still manifests. Here's the code -- I stripped it down as much as possible. If this is executed, you will see three sub-windows, where the one in the rear is titled Problem. Click on the Problem window to bring it to the front. The click on the Messages window at the lower-right to bring it to the front. You'll see that the refresh of the middle Messages window bleeds through to the content area of the Problem window. It bleeds through because of the calls to isOpaque(). However, if I get rid of those settings, then the call to STextArea.paintComponent() causes the JVM to fill the contents with the background color the first time that the Timer goes off. public class Main { public static void main(String[] args) { javax.swing.SwingUtilities.invokeLater(new Runnable() { public void run() { MainWindow theWindow = new MainWindow(); } }); } } public class MainWindow extends JFrame implements ActionListener { public MainWindow() { super("Main Window"); JDesktopPane mainFrame = new JDesktopPane(); mainFrame.setOpaque(false); this.add(mainFrame); ProbWindow srcArea = new ProbWindow(); MessageWindow msgArea = new MessageWindow(); MessageWindow msgArea2 = new MessageWindow(); srcArea.setLocation(0,0); msgArea.setLocation(25,25); msgArea2.setLocation(50,50); mainFrame.add(srcArea); mainFrame.add(msgArea); mainFrame.add(msgArea2); srcArea.moveToBack(); msgArea.moveToFront(); msgArea2.moveToFront(); initMenus(); this.pack(); this.setSize(600,500); this.setVisible(true); } private void initMenus() { JMenuBar theMenuBar = new JMenuBar(); JMenu theMenu = new JMenu("File",false); theMenuBar.add(theMenu); JMenuItem theItem = new JMenuItem("Quit"); theItem.setAccelerator(KeyStroke.getKeyStroke(KeyEvent.VK_Q,ActionEvent.CTRL_MASK)); theItem.addActionListener(this); theMenu.add(theItem); this.setJMenuBar(theMenuBar); } public void actionPerformed(ActionEvent e) { System.exit(0); } } public class MessageWindow extends JInternalFrame { public MessageWindow() { super("Messages"); JTextArea theText = new JTextArea(); theText.setEditable(false); this.add(new JScrollPane(theText)); theText.setText("a message"); this.setSize(400,100); this.setVisible(true); } } public class ProbWindow extends JInternalFrame { public ProbWindow() { super("Problem"); STextArea textArea = new STextArea(); JScrollPane scroller = new JScrollPane(textArea); scroller.setOpaque(true); this.add(scroller); this.setOpaque(false); this.setSize(400,100); this.setVisible(true); } } public class STextArea extends JPanel implements ActionListener { private javax.swing.Timer caretTimer = null; private boolean regularRepaintPending = false; private boolean paintCaretOnly = false; public STextArea() { // Start the caret blinking. caretTimer = new javax.swing.Timer(600,this); caretTimer.start(); this.setOpaque(true); this.setSize(400,100); this.setVisible(true); } @Override synchronized public void repaint() { this.paintCaretOnly = false; this.regularRepaintPending = true; super.repaint(); } synchronized public void repaintCaret() { if (this.regularRepaintPending == false) { this.paintCaretOnly = true; super.repaint(); } } synchronized protected void paintComponent(Graphics g) { if (this.paintCaretOnly == true) { paintCaretOnly = false; return; } // We are about to do a full repaint, so clear the pending flag. g.clearRect(0,0,10000,10000); this.regularRepaintPending = false; g.drawString("xyzpdq",5,15); } synchronized public void actionPerformed(ActionEvent e) { this.repaintCaret(); } } Randall Fairman Greenhorn Joined: Apr 18, 2011 Posts: 29 posted Jan 23, 2012 14:15:15 0 To help give a more complete picture of what should happen, here's a code snippet that could be used in place of the STextArea.paintComponent() given above. It will flash a small yellow circle over the text, much like the way a caret blinks. The more fundamental problem(s) remain, but this shows the role of the Timer. synchronized protected void paintComponent(Graphics g) { if (this.paintCaretOnly == true) { paintCaretOnly = false; g.setColor(Color.yellow); g.setXORMode(Color.white); g.fillOval(5,5,20,20); g.setPaintMode(); g.setColor(Color.black); return; } // We are about to do a full repaint, so clear the pending flag. g.clearRect(0,0,10000,10000); this.regularRepaintPending = false; g.drawString("xyzpdq",5,15); } Rob Camick Ranch Hand Joined: Jun 13, 2009 Posts: 2409 8 posted Jan 23, 2012 14:16:35 0 You don't need all the synchronized methods. Swing painting code and Timer code is executed on the EDT which is single threaded. I don't see the problems using JDK6_7 on XP. Randall Fairman Greenhorn Joined: Apr 18, 2011 Posts: 29 posted Jan 23, 2012 14:37:16 0 You don't see the problem? That's weird. I'm using XP and Java 1.6.0_26. No matter what combination of setOpaque() values I use, I get some kind of strange behavior. Either the STextArea gets erased when the Timer trips or the window experiences bleed-through from a window that's behind it. Randall Fairman Greenhorn Joined: Apr 18, 2011 Posts: 29 posted Jan 24, 2012 07:30:57 0 To make sure that there wasn't some problem with my IDE or some other oversight that I might have made due to my environment, I copied the code directly from the posting above to several .java files, then compiled and ran it from the command-line with javac and java. The problem is definitely there. Even if it's specific to my version of Java (which I have not tested), the fact that this problem arises at all is troubling. Either there's an error on my part (but I don't see it) or the idea of updating only a portion of a window, while leaving the rest unchanged, in response to a Swing Timer is off-limits as a strategy. I agree. Here's the link: subject: Bad Background Refresh Similar Threads JInternalFrame not displaying JTable KeyListener Stops Working Why Glass Pane becomes visible when resizing JInternalFrame? JPanel & JScrollPane problem Swing Problem All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/565313/GUI/java/Bad-Background-Refresh
CC-MAIN-2015-40
refinedweb
1,116
56.15
I have been waiting for something like Wix Code for over a year now. My website is for a service company that I run. I'm slowly rolling out the services in different locations (Cites, States, etc.). With each new location brings different cost, Commercial or Residential services, limited zip codes in some states, different packages with each service, and the list goes on. Up until now I have been creating a page that fits everything I just mentioned. As you all know this is very time consuming and results in a site that's not user friendly. With Wix Code I believe I can make all this happen with a clean user friendly site. My problem is I've spent two days reading through the information on Wix Code and I can't figure out where to start. The above link is to an Amazon site that as just about all the features I'm looking for. 1. Zip code lookup which tells the user if the service is available in their area. If so the price is updated along with the information on the page related to the service. If service is not available a message is displayed saying service is currently not available in their area. 2. Different packages are shown and when selected the page is updated. I don't think it can be done yet but I would also like to tie everything in with Wix Booking so that after everything is selected the user will be taken directly to the right service to complete online booking. I would also like the url's to read something like this to help with SEO mysite.com/city-state/name-of-service Our services are broken down in categories Commercial > Service Group 1 > Service 1 > Service Group 1 > Service 2 > Service Group 1 > Service 3 > Service Group 2 > Service 1 > Service Group 2 > Service 2 > Service Group 2 > Service 3 etc. Residential > Service Group 1 > Service 1 > Service Group 1 > Service 2 > Service Group 1 > Service 3 > Service Group 2 > Service 1 > Service Group 2 > Service 2 > Service Group 2 > Service 3 etc. Am I looking for too much and will I be able to make these changes on my own with my limited knowledge? Thank you all for any information that can get me started. Hi Marvin and welcome to the Wix code community. The services should probably go inside a collection, both for manageability and ease of use. Note that you can add boolean fields to collections as well. With the help of boolean fields you can for example differentiate between residential and non residential services, is fridge cleaning included, is stove cleaning included etc. You will also need a number field for the price, and also a field for included areas. Since the area list is relevant for multiple services, you will probably want to add it as a separate collection and use Reference fields to connect it to your services collection. Reference fields allows you to connect one collection record to another collection record. See more about reference fields here Next, you will need to design a form. This is mainly done using built-in elements (text box, text, input boxes, buttons etc) and the $w namespace which allows you to interact with page elements (show, hide, get value of input box, disable..etc) The last step is connecting the form logic with the actual collection data. In general you should query the areas collection to first remove unwanted results, and then populate the form according to the results you find. Note that at the moment it is not possible to integrate WixCode and WixBooking. Feel free to submit your suggestion on our Feature Request forum *My suggestions are based on my minimal understanding of the process that you are trying to implement. Thank you for the help, I should be able to get things started with this information but I might end up finding someone to that understands this better to make this happen. Thank you again
https://www.wix.com/corvid/forum/community-discussion/site-overhaul
CC-MAIN-2019-47
refinedweb
675
68.7
OLED Interfaced to NodeMCU 2,596 41 1 Featured Intro: OLED Interfaced to NodeMCU OLED!!). Step 1: Need to Be Collected Here is the list of components required to get started with the Instructable, Hardware Components - NodeMCU - 0.96” SSD1306 OLED - Bread Board - Jumper Wires - Micro USB Cable Software Components - Arduino IDE Step 2: Connections Create an instance for the SSD1306 OLED display in SPI mode. Connection scheme: 1. CS - D1 2. DC - D2 3. Reset - D0 4. SDA - D4 5. SCL - D3 6. VDD - 3.3v 7. GND - GND Check the schematic and pin configuration to make connections. Step 3: Library Download Before you download library you need Arduino IDE to get started. To download Arduino IDE and for NodeMCU setup, you can check my previous instructacle. Interface Servo Motor with NodeMCU Here’s the library you need for this project: OLED can be easily coded with a library file called Ug8lib. Ug8lib is a graphics library with support for many different monochrome displays. The library file can be downloaded by following steps - Go to Sketch - Include Library - Manage Library - Download U8glib library file. Step 4: Time to Play With OLED CODE #include <U8glib.h> U8GLIB_SSD1306_128X64 u8g(5, 4, 16, 2, 0); void setup() { /* nothing to do here */ } void loop() { u8g.firstPage(); /* Keep looping until finished drawing screen */ do { u8g.setFont(u8g_font_osb18); u8g.drawStr(30, 20, "Hello"); //(horizontal spacing,vertical spacing,"string") u8g.drawStr(20, 50, "Makers!"); } while(u8g.nextPage()); } Download the "OLED 5: Output Yipeeee!! That's all makers! I hope you found this instructable most useful. You have successfully completed one more NodeMCU Instructable. Stay Tuned for more Projects! You can contact me by leaving a comment. If you like this instructable probably you might like my next ones. Discussions 1 year ago Great work ... what if OLED done not have CS pin ? these OLED displays are common with six pins gnd , vcc , clk , data_in , rst , DC
https://www.instructables.com/id/OLED-Interfaced-to-NodeMCU/
CC-MAIN-2018-47
refinedweb
320
78.14
Introduction The main plan was to develop a COVID-19 tracker in Django that shows local data in the Philippines - until the Department of Health hid and disabled page where the stats can be scraped. I did some crowdsourcing on Facebook and everyone shared a photo of the daily stats per city - which wasn't what I expected - but it was my fault, my question wasn't clear. Coding I already have experience in playing with API's so I thought it would be easier now since I could just use my code as a reference. I found Chris Michael's from google and found it easy to use. The API returned: { "report": { "country":"philippines", "flag":"", "cases":7579, "deaths":501, "recovered":862, "active_cases":[], "closed_cases":[] } } The data isn't complete but that was fine. So in calling and using the API, I ended up with: api.py import requests def get_data(): url = "" headers = {'Accept': 'application/json'} r = requests.get(url, headers=headers) stat = r.json() stat_data = { 'stat': stat['report'] } return stat_data Which was pretty much the same as my first API experience: A Python Program that Tweets Fetched Data from the icanhazdadjoke.com Vicente G. Reyes ・ Mar 6 '19 ・ 3 min read Although this was not my first time using an API in Python, this was, however, my first time integrating an API with Django. I ended up with: views.py ... from django.views.generic import ListView from covid_virus_tracker.users import api ... class HomeView(ListView): def get(self, request): stat_data = api.get_data() return render(request, 'pages/home.html', stat_data) However, the api.py was registering as a python module and I honestly had a hard time figuring out why. Until I peeked into the source code and figured out that when importing in cookiecutter-django, you've to start from the project, down to the app, then to the file, if coming from another app. Ex: from project_name.users.models import view Then on the pages/home.html, all I had to do was: {{ stat.recovered}} {{ stat.cases }} {{ stat.deaths }} However, I felt like the site needed something more. I felt like I had to let the users know what's happening all over the world, so I integrated an API that scrapes blogs with a COVID-19 keyword. I found RapidAPI, which I had an account already, and found the contextualsearch API. The API returned: { "type": "news", "didUMean": "", "totalCount": 2922, "relatedSearch": [ "coronavirus", "ist", "updated", "pm", "daily", "cases", "delhi", "india", "published", "times", "sign", "video online" ], "value": [ { "title": "Grounded jazz saxophonist plans to 'dream big' after lockdown", "url": "", "description": "The coronavirus Great Pause trashed Flora Carbo's plans for a world tour with a new album. Now she's fighting snails and making plans.", "body": "Very large text size\nARTIST IN THE TIME OF COVID-19: Flora Carbo\nJazz saxophonist Flora Carbo has recently released a new album, Voice. The next logical step? A launch then a tour two things the Fairfield artist cant do thanks to the coronavirus.\nI had a six-week European trip cancelled; I was going to play at the Amersfoort Jazz Festival [Netherlands], attend the Jazzahead! conference in Germany and collaborate with some friends.\nJazz saxophonist Flora Carbo.\nCarbo was also going to capitalise on some serious buzz with gigs at the Melbourne International Jazz Festival (now cancelled, natch), where shes previously played with Maceo Parker and the Meltdown Big Band.\nAdvertisement\nShe was all set to front Flora Carbo Trio in the lunchtime series and I was going to be playing with [trumpeter] Niran Dasika as part of his group\".\nNow, Carbo is fighting snails out of her vegie garden, re-booking gigs, completing sudokus and 100 per cent subsisting on hot chocolate\".\nIm very lucky to have be", "keywords": "covid,time", "language": "en", "isSafe": true, "datePublished": "2020-04-27T01:35:36", "provider": { "name": "smh" }, "image": { "url": "", "height": 450, "width": 800, "thumbnail": "", "thumbnailHeight": 168, "thumbnailWidth": 298, "base64Encoding": null, "name": null, "title": null, "imageWebSearchUrl": null } } ] } Integrating was easier now since I've done it on the api.py. I ended up with: import requests import environ env = environ.Env() def get_data(): url = "" querystring = {"autoCorrect": "false", "pageNumber": "1", "pageSize": "10", "q": "covid", "safeSearch": "true"} headers = { 'x-rapidapi-host': "contextualwebsearch-websearch-v1.p.rapidapi.com", 'x-rapidapi-key': env("x-rapidapi-key"), } r = requests.get(url, headers=headers, params=querystring) data = r.json() article_data = { 'data': data['value'] } return article_data Notice that one difference is having to import environ and creating an env = environ.ENV() variable. I need this to hide my API keys and this is how cookiecutter-django hides it. All I had to do was place the keys in the .envs/local/.django & .envs/production/.django and I was good to go. The views.py is: ... from django.views.generic import ListView from covid_virus_tracker.users import services ... class IndexData(ListView): def get(self, request): article_data = services.get_data() return render(request, 'pages/news.html', article_data) On the pages/news.html, all I needed to do was loop through the data to show it. Like this: {% for item in data %} {{ item.title }} {{ item.datePublished|slice:":10" }} # | naturaltime from humanize didn't work so I had to slice through the date {{ item.description }} {{ item.provider.name }} The site is live and can be seen at. Discussion (14) quick markdown/DEV tip, make sure you put the language of your code next to your code block. Hi, what's the use of putting the language at the top? It is what enables syntax hilighting Isn't it highlighted already before you gave the tip? Your article does not look highlighted to me. That's weird. Check out all the code blocks on all the themes on the photos below: Hey Vicente! Thanks for sharing and this looks pretty cool. I was wondering if you ever considered using a realtime stream of COVID updates to build an event-driven application instead of relying on polling an API for data updates? I wrote a blog post about it here dev.to/tweettamimi/how-i-used-covi... let me know if you're interested to collab 🙌 Hey Tamimi, what I actually have in mind is, since someone from the Philippines is taking down notes on where the budget for COVID is going, I'm planning on adding the said data on the site. But it's still on planning stage tho. Awesome blog post! I'm actually still a novice in Django but I'm in. I could learn a thing or two from you anyways. Sounds good! Have you done any front end development with python as a backend (i.e. using Django)? We can start with the code snippet in my blodpost as the event-driven skeleton receiving updates from any of the data sources. Subscription to topics could be as easy as subscribing to "jhu/csse/covid19/test/cases/+/update/Philippines/#"for example to get all cases updates from the Philippines. It'll be a cool exploration of using Python/Django and messaging protocol mqtt Let's speak in the chat 😄 Such a great community project!! Thanks for sharing! Good job I was part of the DICT team that planned to develop our local tracker but they used Java and Angular hence deciding to make my own. Thank you.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/highcenburg/developing-my-local-covid-19-side-project-2oa6
CC-MAIN-2021-43
refinedweb
1,208
65.01
Too often today, enterprise applications are built to work, but not to last. Custom modules and services are inflexible to change and hard to maintain and debug. For the services that J2EE provides, it has been a boone to the development of quality software, but what about the services that aren't offered yet, or need to be lighter-weight? Carbon exposes technical components with a consistent and robust mechanism for management and configuration while providing lifecyle and component control services. The substistuted with alternate implementations. The management capabilities are provided though JMX, facilitating the maintenance and monitoring of services. Carbon is designed for J2SE 1.3 or higher but works well within J2EE servers and can use their JMX services. However, the rich configuration capabilities can be used for any part of a Java application independent of the component model. Configuration documents maintain their formatting across edits and it is easy to define new complex data types and programmatically construct, edit and save data. The configuration services are also enriched by the concept of deployment settings which will allow complex environments to be modeled and stored in a single directory structure, Jar file or JNDI service, while taking on the appropriate customizations for their location. For more information, see the carbon project homepage at. Carbon Services Framework Released (12 messages) - Posted by: Greg Hinkle - Posted on: June 04 2003 16:18 EDT Threaded Messages (12) - Carbon Services Framework Released by fmarchioni fmarchioni on June 07 2003 04:17 EDT - RE: Cluster Cache by Greg Hinkle on June 08 2003 12:01 EDT - Who cares? by Ken Egervari on June 08 2003 22:10 EDT - Other configuration/component frameworks? by Eric Pederson on June 09 2003 09:09 EDT - Other configuration/component frameworks? by Greg Hinkle on June 10 2003 01:41 EDT - Turbine, and the need for Carbon by Davide Baroncelli on June 10 2003 12:21 EDT - Re: Other configuration/component frameworks? by Tim Schafer on June 12 2003 08:57 EDT - Re: Other configuration/component frameworks? by Greg Hinkle on June 15 2003 12:36 EDT - Carbon services SQL module by Vikas Hazrati on July 30 2003 07:27 EDT - Would be nice to have some sample code by Luciano Fiandesio on June 10 2003 04:46 EDT - Examples wanted by cesar varona on June 10 2003 09:34 EDT - Carbon Skeleton/Sample Application by Jordan Reed on June 10 2003 09:08 EDT Carbon Services Framework Released[ Go to top ] Has anybody tested if it's able to create a Cache for clustered environment too? - Posted by: fmarchioni fmarchioni - Posted on: June 07 2003 04:17 EDT - in response to Greg Hinkle RE: Cluster Cache[ Go to top ] Has anybody tested if it's able to create a Cache for clustered environment too? - Posted by: Greg Hinkle - Posted on: June 08 2003 12:01 EDT - in response to fmarchioni fmarchioni Carbon does not yet have a supported cluster capable cache. There is, however, an experimental one based on the JavaGroups multicast capable clustering support. It still needs more work though. Who cares?[ Go to top ] Who cares? This is top priority news of the day? Yeah right. - Posted by: Ken Egervari - Posted on: June 08 2003 22:10 EDT - in response to Greg Hinkle Other configuration/component frameworks?[ Go to top ] After using ATG Dynamo for the last 4 years or so, I am now in the pure J2EE world. But I really miss the Nucleus, which is (among other things) a very nice component/configuration framework. What other component/configuration frameworks are out there, besides Carbon and the Bean Factory/Application Context stuff from Rod Johnson's book? - Posted by: Eric Pederson - Posted on: June 09 2003 09:09 EDT - in response to Greg Hinkle Other configuration/component frameworks?[ Go to top ] There are a couple of other frameworks that have a similar concept to them out there. The biggest would probably be the Avalon framework from the Apache group and the Core Services Framework (CSF) from HP, though there have been others. - Posted by: Greg Hinkle - Posted on: June 10 2003 01:41 EDT - in response to Eric Pederson I'd love to see some discussion on the differences between these and some idea of what things people would want to see in these frameworks. From my perspective, HP-CSF was an early and neat idea into the concept of components with lifecycle. It was even considered and I think accepted as the basis for JSR-111, but I don't know what its status is. Avalon seems to have some more complexity to them in that they allow for different implementations of a container (and there are a few you can try). For us, Carbon was primarily a way to make the job of framework writers easier. We had done it for many years and built many reusable components, but they we're missing that basic layer that had to be reconstructed for each new service. This was a way to build structure around that process and we feel we've made good progress. We're happy with the ability to easily and repeatably build manageable, configurable services that have some fairly advanced features. For me, the configuration aspect is the part that is the farthest ahead of the current specifications and there hasn't been many projects to tackle it. JConfig, preferences and digester(sortof) are all that come to mind. (Stuart Dabbs Halloway also wrote about some of these ideas.) Turbine, and the need for Carbon[ Go to top ] Having developed until now with Turbine as a services framework, which seems to be somewhat inadequate for "full" j2ee development (at least in the version 2.1 that is used as the basis for our architecture) I was searching for something that provides something similar to its concept of service, and maybe Carbon may fit our needs: we will try it out in the next few weeks. In the meanwhile, thanks for making it available as open source! - Posted by: Davide Baroncelli - Posted on: June 10 2003 12:21 EDT - in response to Greg Hinkle Re: Other configuration/component frameworks?[ Go to top ] I noticed confix on - Posted by: Tim Schafer - Posted on: June 12 2003 20:57 EDT - in response to Greg Hinkle by "preferences" you mean ? Re: Other configuration/component frameworks?[ Go to top ] Those are also interesting extensions of the basic configurations that people have been using since PropertyBundles appeared. I like the way JPreferences extends the util.prefs (what I was originally talking about), to support JavaBean syntax. - Posted by: Greg Hinkle - Posted on: June 15 2003 12:36 EDT - in response to Tim Schafer Carbon configuration, however, was designed to do more. We wanted a mechanism that could truely support the complex configurations that are needed for a large scale enterprise application. We spent years supporting a configuration service that had functionality similar to util.preferences and decided that we needed more. Carbon config supports a namespace of documents that each represent a complex configuration. This namespace can live in ldap, jars or on the filesystem and are automatically found by the service. (No more finding and reading files manually). This also gives Carbon config the control to do intelligent caching, and an event system for notifying on config changes. Carbon configurations are also rich-object's that are defined by an interface and they support arbitrarily complex types. Also, the configuration files are directly mapped to the bean such that altering one property will not cause formatting changes to the rest of the document. A pluggable data validation service rounds out the capabilities to make a pretty neat service. Carbon services SQL module[ Go to top ] I tried using this module iwth whatever limited documentation available and failed. Do you have any sample application to demonstrate how it works? - Posted by: Vikas Hazrati - Posted on: July 30 2003 07:27 EDT - in response to Greg Hinkle Would be nice to have some sample code[ Go to top ] The Sapient Carbon framework looks very well designed, also the documentation is eventually GOOD. But, why not to include some sample code to clarify some of the framework issues? - Posted by: Luciano Fiandesio - Posted on: June 10 2003 04:46 EDT - in response to Greg Hinkle Examples wanted[ Go to top ] I completely agree with you. Although the available documentation is much - Posted by: cesar varona - Posted on: June 10 2003 09:34 EDT - in response to Luciano Fiandesio better than the average, it seems a daunting task to get it working without any example or tutorial. > The Sapient Carbon framework looks very well designed, also the documentation is eventually GOOD. But, why not to include some sample code to clarify some of the framework issues? Carbon Skeleton/Sample Application[ Go to top ] Taking a break from the sessions at JavaOne we packaged up the Skeleton/Sample application we had been working for people to download and take a look at. - Posted by: Jordan Reed - Posted on: June 10 2003 21:08 EDT - in response to cesar varona Inside the carbon_sample_2_0.zip's Implementation is a structure meant to be a good starting point for developing J2EE applications based on the Carbon framework. The basic jars and config files an application would need. There's also a demo directory that can be directly copied into the Implementation. The demo contains an example component with configuration that gets called from a JSP through either a JavaBean or EJB. Also, you can take a look at the jUnit test harnesses and test configurations in the main source download. This is a lot more complex than what's in the sample structure.
http://www.theserverside.com/discussions/thread.tss?thread_id=19664
CC-MAIN-2014-15
refinedweb
1,620
51.18
The average business loses $13.5M per year to ineffective training (per 1,000 employees). Keep ahead of the competition and combine in-person quality with online cost and flexibility by training with Linux Academy. using System.IO; ... using (StreamReader reader = new StreamReader(myFileName)) { while (!reader.EndOfStream) { string line = reader.ReadLine(); string[] columns = line.Split(','); string var1 = columns[0]; string var2 = columns[1]; // etc. } } I could download the file to the local hard drive and use StreamReader,I'm afraid that's what you're going to need to do. FTP is a file transfer protocol; it's not for opening up files on a remote server. If you are experiencing a similar issue, please ask a related question Join the community of 500,000 technology professionals and ask your questions. Connect with top rated Experts 10 Experts available now in Live!
https://www.experts-exchange.com/questions/28158160/How-to-read-each-line-of-a-file.html
CC-MAIN-2017-09
refinedweb
142
51.34
1 2 package org.campware.cream.om;3 4 5 import org.apache.torque.om.Persistent;6 7 /**8 * The skeleton for this class was autogenerated by Torque on:9 *10 * [Mon Apr 11 04:03:33 CEST 2005]11 *12 * You should add additional methods to this class to meet the13 * application requirements. This class will only be generated as14 * long as it does not already exist in the output directory.15 */16 public class InboxAttachment17 extends org.campware.cream.om.BaseInboxAttachment18 implements Persistent19 {20 }21 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/campware/cream/om/InboxAttachment.java.htm
CC-MAIN-2018-26
refinedweb
102
50.84
Flutter plugin adding ability to access WifiLock(); } Add this to your package's pubspec.yaml file: dependencies: wifi_lock: ^0.0.4 You can install packages from the command line: with Flutter: $ flutter pub get Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:wifi_lock/wifi_lock.dart'; We analyzed this package on Jun 12, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: Detected platforms: Flutter References Flutter, and has no conflicting libraries. Format lib/wifi_lock.dart. Run flutter format to format lib/wifi_lock.dart..
https://pub.dev/packages/wifi_lock
CC-MAIN-2019-26
refinedweb
109
50.53
Enabling JavaScript IntelliSense in External Libraries To get JavaScript IntelliSense working in VS 2008 SP1, you need to tell IntelliSense the location of the libraries that you're using. You do that by adding a special comment at the top of the .js file. It's three slashes followed by the reference in an XML element syntax. For example, my JScript.js file has a dependency on the jQuery library. So, here's what I add to the top of JScript.js: /// <reference path="jquery-1.2.6.js" /> IntelliSense then parses the referenced file. Just start typing and you start seeing the jQuery members: I'm not sure whether it's required to put the reference pointer at the very top of the page but my IntelliSense failed to work when I included comments *before* the reference. The following didn't work: /*-------------------------------------------------------------------- Don't do this!!!!!!!!!!!! --------------------------------------------------------------------*/ /// <reference path="jquery-1.2.6.js" /> If the developer of a library wants to provide even more info, IntelliSense will pick that up too. Just use the three-slash comments with the following syntax: trim: function(text) { /// <summary>Removes whitespace from the beginning and end</summary> /// <param name="text">The string to be trimmed</param> /// <returns>string</returns> return (text || "").replace(/^\s+|\s+$/g, ""); }, Here's how the summary appears in the IDE for the trim() function above: With IntelliSense, JavaScript debugging, and enhanced JavaScript formatting, I may start to hate JavaScript just a little less! <grin> Ken
http://weblogs.asp.net/kencox/enabling-javascript-intellisense-in-external-libraries
CC-MAIN-2014-42
refinedweb
246
63.19
Hello folks, In this document, I will write the basic java program to connect to HANA Database and retrieve the information. I am going to consume the data model which I created using “graphical” editor of “Calculation View”. The target of this document is to make you understand the how we can connect to the models in HANA database with JAVA. With this we can feel the power of united SAP and JAVA. I have created a view called “CALC_VIEW” using graphic editor of calculation view as shown below: In this view am trying to “Union” two tables “NYSE” and “NYSE_1” as you can see above. Now we have to launch eclipse IDE to write our java program. JAVA CODE: import java.sql.*; public class Connection { public static void main(String args[]) { try { Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); java.sql.Connection conn=DriverManager.getConnection("jdbc:odbc:HANA","userid","pwd"); Statement stmt = conn.createStatement(); ResultSet rs = stmt.executeQuery( "SELECT * FROM \"_SYS_BIC\".\"package/CALC_GRAPHIC\"" ); while( rs.next() ) { System.out.println( rs.getString(1)); System.out.println( rs.getString(2)); } rs.close() ; stmt.close() ; conn.close() ; } catch(Exception e) { System.out.println(e); } } } See we can now see fetched the records from “CALC_GRAPHIC” view using this java program as shown below: Thank your reading this blog 🙂 Do add your valuable suggestions to this document 🙂 Its important to note that the HANA JDBC drivers are not yet supported for custom development by SAP customers or partners. Hello Thomas, Thank you for mentioning it. I missed to mention it. I have one more doubt. Will PHP work on top of HANA DB? Am able to connect but “SQL” statements not getting executed. Will these be available in future versions of HANA? Regards, Krishna Tangudu We do want to open development to other platforms and techniques in the future. At some point we will support the ODBC/JDBC drivers for 3rd party development. I really couldn’t say if it will work in PHP, however, because I don’t know what technical problems you are hitting. Thanks for the information Thomas. hi Thomas Jung, What is update on this ? Are HANA JDBC drivers open for custom development yet ? Thanks Sandesh Yes, they are open for custom development now for quite some time. – Lars Krishna: One more good write-up! Thanks. Rama Hello Krishna, Good one Cheers, Avin Krishna Very nice… first of its kind blog…. Now I got a question , I am getting this error java.lang.ClassNotFoundException: com.sap.db.jdbc.Driver which means , its not being able to dynamically load the jdbc driver .. Nor does your class path work for me .. Can you elaborate on the lines below , and perhaps give suggestions Hi Can any other data structure can be used instead of result set for retrieving the data.
https://blogs.sap.com/2012/04/19/sap-hana-and-java-consuming-hana-information-models-using-java/
CC-MAIN-2018-17
refinedweb
466
58.58
Introduction to the Visio 2013 file format (.vsdx) Learn about the new file format in Visio 2013, explore some high-level concepts for working with the Visio 2013 file format programmatically, and create a simple console application that examines a Visio 2013 file. Visio 2013 introduces a new file format (.vsdx) for Visio that replaces the Visio binary file format (.vsd) and Visio XML Drawing file format (.vdx). Because the Visio 2013 file format is based upon Open Packaging Conventions and XML, developers who are familiar with these technologies can quickly learn how to work with Visio 2013 files programmatically. Developers who are familiar with the Visio XML Drawing file format (.vdx) from previous versions of Visio can find many of the same XML structures within the parts of .vsdx file format. Interoperability with Visio files is greatly increased since third-party software can manipulate Visio files at a file format level. The Visio 2013 file format is supported on Visio Services in Microsoft SharePoint Server 2013, without the need of an "intermediary" file format for publishing to SharePoint Server. There are several file types, by extension, that comprise the Visio 2013 file format. These extensions include: .vsdx (Visio drawing) .vsdm (Visio macro-enabled drawing) .vssx (Visio stencil) .vssm (Visio macro-enabled stencil) .vstx (Visio template) .vstm (Visio macro-enabled template) The Visio 2013 file format uses the Open Packing Conventions (OPC), which defines a structured means to store application data together with related resources using a container of some sort─for example, a ZIP file. At a basic level, a Visio 2013 file is really a ZIP container that contains other types of files. In fact, you can save a drawing in Visio 2013 as a .vsdx file, rename the file extension to “*.zip” in Windows Explorer, and then open the file like a folder to see the contents inside. Packages and Package Parts As started earlier, Visio 2013 files are ZIP containers or “packages” that hold other files (called “package parts”) within them. A package part can be an XML file, an image, even a VBA solution. The parts within the package can be further divided into two broad categories, “document parts” and “relationship parts.” The document parts contain the actual content and metadata of the Visio file, like the name of the file, the first page and all of the shapes that it contains, and even the data connections for the shapes. Images and text files within the package are considered document parts. Relationship parts are described in more detail later in this article. Package parts─both document parts and relationship parts─also have associated content types. These content types are strings that define a MIME media type. These content types specify and scope the kind of MIME types that can be contained in the file. Relationships The relationship parts (which end with the extension “*.rels” and are stored in a “_rels” folder) describe how the parts within the package relate to each other and provide the structure of the file. A standalone XML document uses the parent/child relationship of elements to determine the relationship of entities to each other. Other files may use other hierarchies or file folder structure to describe the interaction of content in the file. For the Visio 2013 file format, the package is a valid Visio file if it contains the correct set of parts and the package contains the relationships between the parts. Relationship parts are XML documents that describe the relationships between different document parts within the package. They define an association between two items: a specified source (defined by the name and location of the relationship file) and a specified target document part. For example, relationship parts are used to describe which shape masters are associated with the file, how pages relate to the file and to each other, or how images and objects relate to a specific page. Similarities and differences with Visio VDX schema As mentioned, past versions of Visio also included an XML-based file format, the Visio XML Drawing Format or .vdx. (In previous versions of Visio, the schema used for the Visio XML Drawing Format is called DatadiagramML.) Some pieces from the Visio XML schema have stayed the same between the two file formats. For example, the Windows element and its children remain unchanged─with the exception that the Windows element is now a root element of an XML document (window.xml). The largest difference between the XML Drawing Format and the Visio 2013 file format is the packaging. An XML Drawing Format file could be manipulated like a normal stand-alone XML; the Visio 2013 file format must be manipulated as a package. In the Visio 2013, the XML has been divided up into parts for easier consumption. Another noticeable change is that the Visio 2013 file format stores all document properties in document parts described by the OPC standard (app.xml, core.xml, custom.xml). However, there is one significant change that all Visio developers must be aware of: the introduction of the Cell, Row, and Section elements. In the XML Drawing File Format schema, individual rows and cells in the ShapeSheet are represented by named elements. For example, imagine that you have a document with a single page that contains a shape with a PinX value of “2” (meaning that the rotation pin of the shape is 2 inches from the left edge of the drawing). The relevant markup for that setting in the XML Drawing File Format is shown in the following code. Here, the PinX element is a child of the XForm element, which is in turn a child of the Shape element. This models the Visio ShapeSheet UI, where the PinX cell is included in the Shape Transform section of a shape. In the Visio 2013 file format, all cells in the ShapeSheet─PinX, LinePattern, an X cell in a MoveTo row in a Geometry section, etc.─are represented by one type of XML element, the Cell element. Different Cell elements are individuated from each other by the value of their N attribute. Thus, in the example from above, the data contained in the PinX cell in the ShapeSheet is stored in a VSDX file as shown in the following code. The Cell element for PinX (as well as other individual, named cells called “singleton cells” like LinePattern or LockSelect) is a direct child of the Shape element. No unique element is needed to represent the row that contains the PinX cell, as each shape can only have one PinX. What about sections that include tabular data, like Geometry sections? For the cells in those sections, the Visio 2013 file format schema uses Section and Row elements─also distinguished by their N attribute or T attribute as shown below─to contain the data. For example, the same shape from the previous example might contain data in the Geometry 1 section that looks like the following code in the XML Drawing schema. However, it looks like the following code in the Visio 2013 file. <Shape TextStyle=”3” FillStyle=”3” LineStyle=”3” Type=”Shape” ID=”1”> <!--- Other cells in the ShapeSheet --> <Section N="Geometry" IX="0"> <!--- Other cells and rows in the Geometry 1 section --> <Row IX="1" T="LineTo"> <Cell V="0" N="X" V=”Width*0” /> <Cell V="0" N="Y" V=”Height*0” /> </Row> </Section> </Shape> As explained above, the Visio 2013 file format leverages several well-understood technologies like ZIP files and XML to store data. To manipulate a Visio 2013 drawing at the file level, a solution need only to use the .NET Framework namespaces and classes associated with working with ZIP files or XML, like System.IO.Packaging or System.Xml. The key benefit to developers of the Visio 2013 file format is that you can read and write to Visio 2013 files without automating the Visio client application. Some scenarios that you might consider as a developer for working with Visio 2013 file format include: Checking individual Visio 2013 files for specific data. You can selectively read one item out of the ZIP container without having to extract the whole file. Updating libraries of Visio 2013 files with specific content. You can programmatically change the logo in all of the background pages to reflect new branding guidelines. Creating applications that consume Visio 2013 files. For example, you can build a tool that reads a Visio workflow diagram and then executes its own business logic based upon that workflow. Be aware that because these solutions use standard .NET Framework assemblies, most solutions that can be run on a client machine can also be run on a server! The most basic and fundamental task for any developer working with the Visio 2013 file format is opening the file as a package and then accessing individual parts within the package. The System.IO.Packaging.Package in the WindowsBase.dll contains many classes that enable you to open and manipulate packages and parts. In the following code sample, you can see how to open a .vsdx file, read the list of parts in the package, and get information about each part. To open a .vsdx file and view the document parts Open Visio 2013 and create a new document. Create a new document and save it to the Desktop. Open Visual Studio 2012. On the File menu, choose New, and then choose Project. Under Visual C# or Visual Basic, expand Windows, and then select Console Application. In the Name box, type ‘VisioFileExplorer’. The Console Application project opens. In the Solution Explorer, right-click VisioFileExplorer, and then click Add Reference. In the Add Reference dialog box, under Assemblies, expand Framework, and then choose WindowsBase. Paste the following code into the solution. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.IO.Packaging; using System.Diagnostics; namespace VisioFileExplorer { class Program { static void Main(string[] args) { try { Console.WriteLine("Opening the VSDX file ..."); // Need to get the folder path for the Desktop // where the file is saved. string dirPath = System.Environment.GetFolderPath( System.Environment.SpecialFolder.Desktop); DirectoryInfo myDir = new DirectoryInfo(dirPath); // It is a best practice to get the file name string // using a FileInfo object, but it isn’t necessary. FileInfo[] fInfos = myDir.GetFiles("*.vsdx"); FileInfo fi = fInfos[0]; string fName = fi.FullName; // We’re not going to do any more than open // and read the list of parts in the package, although // we can create a package or read/write what's inside. using (Package fPackage = Package.Open( fName, FileMode.Open, FileAccess.Read)) { // The way to get a reference to a package part is // by using its URI. Thus, we're reading the URI // for each part in the package. PackagePartCollection fParts = fPackage.GetParts(); foreach (PackagePart fPart in fParts) { Console.WriteLine("Package part: {0}", fPart.Uri); } } } catch (Exception err) { Console.WriteLine("Error: {0}", err.Message); } finally { Console.Write("\nPress any key to continue ..."); Console.ReadKey(); } } } } Press F5 to debug the solution. When the program has completed running, press any key to exit. For more information about the Visio 2013 file format, the Open Packaging Convention, or how to work with Visio 2013or Office OpenXML files programmatically, see the following resources:
http://msdn.microsoft.com/en-us/library/office/jj228622(v=office.15).aspx
CC-MAIN-2014-10
refinedweb
1,871
56.76
How can I get xmonad to match the left alt key and not the right alt key? Ubuntu binds these both to mod1 in xmodmap. You can look which keys are the mod keys with "xmodmap -pm". xmodmap also allows to change the mod key configuration. Just enter the apropiate keycodes into ~/.Xmodmap or ~/.xmodmaprc To find out which keycode the right alt key emits start "xev" like this: xev | grep -A2 --line-buffered '^KeyRelease' | sed -n '/keycode /s/^.*keycode \([0-9]*\) Not sure about alt, but the xmonad config archive has an example that uses Super instead of alt: import XMonad main = xmonad defaultConfig { modMask = mod4Mask -- Use Super instead of Alt , terminal = "urxvt" -- more changes } You'll need to compile the new config file, so you'll need haskell if you don't have it 504 times active 1 year ago Get the weekly newsletter! see an example newsletter
http://superuser.com/questions/68292/how-to-make-xmonad-ignore-the-right-alt-key-in-ubuntu/192268#192268
crawl-003
refinedweb
150
78.59
James Hague, a long time Erlanger, drives home a point or two regarding purity of paradigms in a couple of his latest blog posts. Here's his take on being effective with pure functional languages .. ." Purity is not necessarily pragmatic. In my last blog post I also tangentially touched upon the notion of purity while discussing how a *hybrid* model of SQL-NoSQL database stack can be effective for large application deployments. Be it with programming languages or with databases or any other paradigms of computation, we need to have the right balance of purity and pragmatism. Clojure introduced transients. Rich Hickey says in the rationale .. "If a pure function mutates some local data in order to produce an immutable return value, is that ok?". Transients in Clojure allow localized mutation in initializing or transforming a large persistent data structure. This mutation will only be seen by the code that does the transformation - the client gets back a version for immutable use that can be shared. In no way does this invalidate the benefits that immutability brings in reasoning of Clojure programs. It's good to see Rich Hickey being flexible and pragmatic at the expense of injecting that little impurity into his creation. Just like the little compromise (and big pragmatism) with the purity of persistent data structures, Clojure also made a similar compromise with laziness by introducing chunked sequences that optimize the overhead associated with lazy sequences. These are design decisions that have been taken consciously by the creator of the language that values pragmatism over purity. Enough has already been said about the virtues of purity in functional languages. Believe me, 99% of the programming world does not even care for purity. They do what works best for them and hybrid languages are mostly the ones that find the sweetest spots. Clojure is as impure as Scala is, considering the fact that both allow side-effecting with mutable references and uncontrolled IO. Even Erlang has uncontrolled IO and a mutable process dictionary, though its use is often frowned upon within the community. The important point is that all of them have proved to be useful to programmers at large. Why do creators infuse impurity into their languages ? Why aren't every language created as pure as Haskell is ? Well, it's mostly related to a larger thought that the language often targets to. Lisp started as an incarnation of the lambda calculus under the tutelage of John McCarthy and became the first significant language promoting the purely applicative model of programming without side-effects. Later on it added the impurities of mutation constructs based on the von Neumann architecture of the machines where Lisp was implemented. The obvious reason was to get an improved performance over purely functional constructs. Scala and Clojure both decided to go for the JVM as the primary runtime platform - hence both languages are susceptible to the pitfalls of impurity that JVM offers. Both of them decided to inherit all the impurities that Java has. Consider the module system of Scala. You can compose modules using traits with deferred concrete definitions of types and objects. You can even compose mutually recursive modules using lazy vals, somewhat similar to what Newspeak and some dialects of ML offer. But because you have decided to bite the Java pill, you can also wreak havoc through shared mutable state at the top level object that you compose. In his post titled A Ban on Imports Gilad Bracha discusses all evil effects that an accessible global namespace can bring to the modularity aspects of your code. Newspeak is being designed as pure in this respect, with all dependencies being abstract and need to be plugged together explicitly as part of configuring the module. Scala is impure in this respect, allows imports to bring in the world on to your module definitions, but at the same time opens up all possibilities of sharing the huge ecosystem that the Java community has built over the years. You can rightfully choose to be pure in Scala, but that's not enforced by the language. When we talk about impurity in languages, it's mostly related to how it handles side-effects and mutable state. And Haskell has a completely different take on this aspect than what we discussed with Lisp, Scala or Clojure. You have to use monads in Haskell towards any side-effecting operation. And people with a taste for finer things in life are absolutely fine with that. You cannot just stick in a printf to your program for debugging. You need to return the whole stuff within an IO monad and then do a print. The Haskell philosophy looks at a program as a model of mathematical functions where side-effects are also implemented in a functional way. This makes reasoning and optimization by the compiler much easier - you can make your pure Haskell code run as fast as C code. But you need to think differently. Pragmatic ? What do you think ? Gilad Bracha is planning to implement pure subsets of Newspeak. It will be really exciting to get to see languages which are pure, functional (note: not purely functional) and object-oriented at the same time. He observes in his post that (t)he world is slowly digesting the idea that object-oriented and functional programming are not contradictory concepts. They are orthogonal, and can be arranged to be rather complementary. This is an interesting trend where we can see families of languages built around the same philosophy but differing in aspects of purity. You need to be pragmatic to choose and even mix them depending on your requirements. 4 comments: > You cannot just stick in a printf to your program for debugging. You need to return the whole stuff within an IO monad and then do a print. Actually, you can do that - Debug.Trace.trace :: String -> a -> a. It uses the usual purity escape-hatch, System.IO.Unsafe.unsafePerformIO, and it works like it says on the tin: 'foo x = trace x (x++".txt")' etc. > It's good to see Rich Hickey being flexible and pragmatic at the expense of injecting that little impurity into his creation. Transients are fine, but Hickey is sacrificing purity for pragmatism and is receiving neither: "Transients do not support the persistent interface of the source data structure. assoc, conj etc will all throw exceptions, because transients are not persistent. Thus you cannot accidentally leak a transient into a context requiring a persistent." Yes, you can't leak them - he's traded runtime errors for exceptions. On the other hand, Haskell can give you the ST monad*. Inside of it you can operate & mutate as you please - with no worries about accidentally using a function which in this context only will throw exceptions - and then return a value to the outside, a value which appears pure & referentially transparent and which is in fact guaranteed to be so by the type system. Purity *and* pragmatism. * Great post! Purity is not a panacea. Thank you for this insightful exploration of pragmatic programming language trade-offs. I think purity will pay off when it comes to (automatic) parallelization. Having only read only values, for example, is great: one needs no access synchronization at all. @Ingo For cases where you need parallelization, you can preach purity through idioms and best practices even if your language does support impurity. Despite the fact that u have transients in Clojure you can very well localize their usage (as Rich Hickey has done in Clojure lib) and prevent them from appearing in code that need to be parallelized.
http://debasishg.blogspot.com/2010/01/pragmatics-of-impurity.html
CC-MAIN-2017-43
refinedweb
1,270
53.92
Roles are a robust feature of Ansible that facilitate reuse and further promote modularization of configuration, but Ansible roles are often overlooked in lieu of straightforward playbooks for the task at hand. The good news is that Ansible roles are simple to get set up and allow for complexity when necessary. Join me as I go through the basics of how to set up and deploy a simple Ansible role. (Not sure what Ansible is? Check out this post) BONUS: download the Ansible Roles cheat sheet below! The anatomy of an Ansible role The concept of an Ansible role is simple; it is a group of variables, tasks, files, and handlers that are stored in a standardized file structure. The most complicated part of a role is recalling the directory structure, but there is help. The built-in ansible-galaxy command has a subcommand that will create our role skeleton for us. Simply use ansible-galaxy init <ROLE_NAME> to create a new role in your present working directory. You will see here that several directories and files are created within the new role: The number of files and directories may appear intimidating, but they are fairly straightforward. Most directories contain a main.yml file; Ansible uses each of those files as the entry point for reading the contents of the directory (except for files, templates, and test). You have the freedom to branch your tasks and variables into other files within each directory. But when you do this, you must use the include directive in a directory’s main.yml to have your files utilized as part of the role. We will take a closer look at this after a brief rundown of each directory’s purpose. The defaults directory is designated for variable defaults that take the lowest precedence. Put another way: If a variable is defined nowhere else, the definition given in defaults/main.yml will be used. The files and templates directories serve a similar purpose. They contain affiliated files and Ansible templates (respectively) that are used within the role. The beautiful part about these directories is that Ansible does not need a path for resources stored in them when working in the role. Ansible checks them first. You may still use the full path if you want to reference files outside of the role, however, best practices suggest that you keep all of the role components together. The handlers directory is used to store Ansible handlers. If you aren’t familiar, Ansible handlers are simply tasks that may be flagged during a play to run at the play’s completion. You may have as many or as few handlers as are needed for your role. The meta directory contains authorship information which is useful if you choose to publish your role on galaxy.ansible.com. The meta directory may also be used to define role dependencies. As you may suspect, a role dependency allows you to require that other roles be installed prior to the role in question. The README.md file is simply a README file in markdown format. This file is essential for roles published to galaxy.ansible.com and, honestly, the file should include a general description of how your role operates even if you do not make it publicly available. The task directory is where most of your role will be written. This directory includes all the tasks that your role will run. Ideally, each logically related series of tasks would be laid out in their own files, and simply included through the main.yml file in the tasks directory. The test directory contains a sample inventory and a test.yml playbook. This may be useful if you have an automated testing process built around your role. It can also be handy as you are constructing your role but use of it is not mandatory. The last directory created is the vars directory. This is where you create variable files that define necessary variables for your role. The variables defined in this directory are meant for role internal use only. It is a good idea to namespace your role variable names, to prevent potential naming conflicts with variables outside of your role. For example, if you needed a variable named config_file in your baseline playbook, you may want to name your variable baseline_config_file, to avoid conflicts with another possible config_file variable defined elsewhere. Creating a simple role As stated earlier, Ansible roles can be as complex or as simple as you need. Sometimes, it is helpful to start simple and iterate into a more complex role as you shore up the base functionality. Let’s try that, and define a role called base_httpd that installs httpd with a simple configuration and a very simple website. To get started, we will need to create our role. We could create each directory and file by hand, but it is far simpler to let ansible-galaxy do the grunt work for us by simply running ansible-galaxy init base_httpd: Next, we can create our simple web page in the files directory. For our academic purposes, we can create a file named index.html containing some tried and true sample text: We will create a template of our httpd.conf file by copying an existing one from a fresh install of httpd. Let’s take this opportunity to define a couple of default variables in our role. We will do a default listening port of 80 and a LogLevel that will default to warn. We can do this by adding an entry to defaults/main.yml: You will notice here the template file has the .j2 extension as custom, and I have used grep to highlight where we have customized the template by replacing the default values in httpd.conf with Ansible variables. Then I show where the variables are defined in defaults/main.yml. Using defaults instead of vars here is preferred, as it allows for later customization without having to change the actual role. Now that we have our simple website and configuration, we will need to create the tasks to bring our webserver to life. For the sake of example, I will isolate our httpd setup to its own yaml file in tasks/httpd.yml and then include that file into the tasks/main.yml: We use the yum, template, copy, and service modules to install, configure, and start our webserver. It is worth noting here that I reference the httpd.yml, httpd.conf.j2, and index.html files without full paths as they are stored within the role. Deploying the role Most of the hard work was completed when we constructed the role itself. Deployment of the role, by comparison, is easy. We only need to set up a simple playbook and pull in the role using the appropriate keyword: We could easily add typical tasks after the role deployment, or we could also deploy additional roles by simply adding them to the list. Also, we can override the default variables we configured using the same variable names as shown below: Ansible roles provide a robust way to organize deployment artifacts and tasks for the purposes of reuse and sharing. The tooling provided in ansible-galaxy makes setting up a new role as simple as ever. Anybody with a foundational understanding of writing Ansible playbooks can just as easily create an Ansible role, download the cheat sheet below for future reference. To launch your first Ansible Hands-On Lab for free, read this post. For additional reading material regarding Ansible, here are some interesting posts: - Ansible Roles Cheatsheet - Using Ansible Facts to View System Properties - Ansible vs. Terraform: Fight! - Three Little-known File Manipulation Tactics in Ansible - Ansible and SSH Considerations - Learn Ansible by doing
https://linuxacademy.com/blog/red-hat/ansible-roles-explained/
CC-MAIN-2019-43
refinedweb
1,293
63.39
These are chat archives for angular/angular-2-ionic-2 ionViewCanEnter() { if(this.uls.isLoggedIn()){ return true; } else { return false; } } I have a little bit of an odd issue. Sometimes (not sure yet exactly what I did to cause this), buttons are not displayed correctly (unstyled with a gray rectangle as background) when I try to use them in components/pages. Also toolbar/navbarbuttons are displayed incorrectly/unstyled with icons being too small and the placement not being correct. Any hints? It seems that if I have a <button>Something</button>, it is not styled at all by default, unless I add the ion-button attribute, but that feels odd. I'm getting a feeling that I sort of need to inject/load something from Ionic, but most other ionic components and things seem to work just fine. (an example of a broken gray button) isLoggedIn()is probably a promise that you need to resolve ionic start testapp -t tabs --v2, added android ionic platform add androidand started it on android ionic run android. The error I'm getting on Android, when inspecting through Chrome states that none of the assets can be found, not main.js, main.css, polyfills.jsor cordova.js. What am I doing wrong? ionic build androidfirst node_modulesfolder and then run npm install node_modules, locking down all package versions (removing ^from in front of versions) and running npm install. Thanks for the tip Kyle ;) this.storage.set('user', data_json.nickname); this.storage.set('id', data_json.id); Storage The storage utilities have been moved outside of the framework to a separate library called @ionic/storage. This library can be installed by executing the following command: npm install @ionic/storage --save --save-exact It must be included in the app's NgModule list of providers: import { Storage } from '@ionic/storage'; ... @NgModule({ ... providers: [Storage] }) It can then be injected into any class that needs access to it: import { Storage } from '@ionic/storage'; ... export class MyAwesomePage { constructor(public storage: Storage) { } ionViewDidEnter() { this.storage.get('myKey').then( (value:any) => { console.log('My value is:', value); }); } }
https://gitter.im/angular/angular-2-ionic-2/archives/2016/10/23
CC-MAIN-2019-13
refinedweb
344
53.51
how do you align a text being shown like to the middle or the right? how do you align a text being shown like to the middle or the right? use gotoxy or other text spacing functions. in graphics.h there is centre_text,vert_dir etc etc.. - thanks for the reply bud but...i am the newbiest of the newbies...i dunnos what you just said =) im trying to create a program where it asks you to input anything and it will print of the screen what you just typed but instead, its aligned or justified to the right... also i read another topic where it says you can't or have to set a number of spaces to justify something to the right. is that true? I read that you can have anything a certain number of spaces somewhere... let me find it... here it is The manipulator to set something a certain number of spaces out it setw(# of spaces out) in order for setw to work, you need to include <iomanip.h>. here's how it would work #include <iostream.h> #include <iomanip.h> int main(void) { int a, b, c; cout <<"please enter three numbers with different numbers of digets\n"; cin >>a>>b>>c; cout <<"your numbers were, in reverce order:\n" << setw(5)<<c<<"\n"<<setw(5)<<b<<"\n"<<setw(5)<<a<<"\ n"; return 0; } well, very simple, but I think this is what you were looking for.
https://cboard.cprogramming.com/cplusplus-programming/11498-how-align-text.html
CC-MAIN-2017-22
refinedweb
243
82.04
Hello, when ending my app in the debugger, memory leaks will be shown like this: Detected memory leaks! Dumping objects -> C:\PROGRAM FILES\VISUAL STUDIO\MyProjects\leaktest\leaktest.cpp(20) : {18} normal block at 0x00780E80, 64 bytes long. Data: < > CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD Object dump complete. How can I tell Visual c++ also to show me the memory leaks in a called dll? thx Ralf VC++ program prints memory leaks by calling _CrtDumpMemoryLeaks just before the program is closed. Memory leaks dump includes all libraries in the process. In your case, if executable uses dll, this dll doesn't have memory leaks. It have!!!! To check it, I write this code: char* fgh=new char[10]; But it will not be shown. Ralf You need to be more specific. Is this MFC application or not? Do you request memory leaks dump? Do you redefine new to DEBUG_NEW? Where is the code that creates memory leaks in both your posts? What is Dll type, how it is loaded by main executable? Hello Alex, thx. I am one step further! The calling app is mfc. The dll non-mfc. Now, I included #define _CRTDBG_MAP_ALLOC #include <stdlib.h> #include <crtdbg.h> in the header files of the dll. This leads to the following: 1) when I call _CrtDumpMemoryLeaks() I get a dump. But it says for every memory block that it is created in crtdbg.h (552), not the actual line of code that calls "new". 2) there is no automatic dump of the dll-leaks when the app ends. Can you help?! Thx Ralf You don't need to call _CrtDumpMemoryLeaks. MFC calls this function when all user libraries are unloaded. _CrtDumpMemoryLeaks just prints all undeleted allocations. So, if you call this function, it just prints everything that is allocated for now. _CRTDBG_MAP_ALLOC is used for malloc tracking, not for C++ new operations. It is not related here. So, if the whole project is MFC, every C++ allocation which is not released, must be printed in the end. Are you sure that allocation line in Dll is executed and the pointer is not released? Try to create minimal MFC exe and minimal Dll with default parameters. Exe depends on this Dll. Call Dll function which makes allocation. It should work. If it doesn't work in your case, there must be something else. Maybe your Dll uses it own heap? Hallo Alex, thx. The problem were C++ new. I found this and it helps: Ralf Hi , Pl tell me how to use this function with MFC. When and Where should I call that and how Can I see the output ? Thanks a lot Hello, do you refer to my link? In MFC you do not need it. The leaks will be shown at the end of the programme in debug mode with VC++ automatically. Ralf Ok .. But while using debug mode , I have not seen this information. Can you guide me more ? maybe there are no memory leaks? Then there is no report! Hi , Pl tell me how to make memory leaks with MFC. When and Where should I call that and how Can I see the output ? Thanks a lot e.g. char* zz=new char[10]; // without releasing later! somewhere in the code. If the DLL uses MFC (it's a MFC-extension DLL or regular DLL using MFC) and you have source files, then for detecting the source of memory leaks is enough to have defined DEBUG_NEW in each implemetation file (as Alex suggested before). Have a look at this short article: How to detect memory leaks in MFC? [Later edit] I've added that to Codeguru FAQ's: Last edited by ovidiucucu; December 26th, 2012 at 03:44 AM. Ovidiu "When in Rome, do as Romans do." My latest articles: Now, if your DLL doesn't use MFC, all I've said isn't possible because MFC uses it's own allocator (basically by overloading operators new and delete). So, I have a little question: is any good reason for you to develop DLLs which do not use MFC in an MFC-based project, other than for generating memory allocation headaches? Last edited by ovidiucucu; December 26th, 2012 at 04:51 AM. View Tag Cloud Forum Rules
http://forums.codeguru.com/showthread.php?531349-Show-memory-leaks-in-dll&p=2097805
CC-MAIN-2017-43
refinedweb
721
77.64
Note: These pages are being reviewed. Can I test changes to the IDE without going through the license check and so on? If you set the system property netbeans.full.hack to true, the following IDE behaviors will be disabled to make it quicker or more reliable to test other functionality: Auto Update background check (to see if updates are available); you can still use AU via Tools > Plugin Manager prompting about still-running tasks when shutting down license dialog import of old user directory IDE registration dialog dialog suggesting that you submit usage statistics welcome screen displayed by default and RSS feed refreshed blocking dialog when some modules could not be loaded use of ~/NetBeansProjects/for newly created projects ( java.io.tmpdirwill be used instead) resizing gesture submit dialog ( SubmitStatus.resize) weekly Maven repository indexing (can be configured in Options dialog) long package name for default group ID in new Maven project ( testused instead) This property is set by default when you: run the IDE from sources using ant tryme run the IDE from a netbeans.org module project using Run Project ( ant run) run a functional test using NbModuleSuiteor a unit test using NbTestCase If you need to test one of the suppressed behaviors (e.g. you are working on the license dialog), just do not set this property. For the ant tryme and ant run cases, add tryme.args= to nbbuild/user.build.properties or ~/.nbbuild.properties. Apache Migration Information The content in this page was kindly donated by Oracle Corp. to the Apache Software Foundation. This page was exported from , that was last modified by NetBeans user Lfischmeistr on 2013-11-25T13:00:42Z. NOTE: This document was automatically converted to the AsciiDoc format on 2018-02-07, and needs to be reviewed.
http://netbeans.apache.org/wiki/DevFaqNetBeansFullHack.asciidoc
CC-MAIN-2018-47
refinedweb
297
51.38
You are browsing a read-only backup copy of Wikitech. The live site can be found at wikitech.wikimedia.org Portal:Toolforge/Admin/Kubernetes/RBAC and PSP This is a proposal for a design of Role-based Access Control (RBAC) and Pod Security Policy (PSP) system that will replace two of the four custom admission controllers currently in use in our Toolforge Kubernetes cluster in order to unblock the upgrade cycle. This design is live in the toolsbeta and tools 2020 Kubernetes clusters. Kubernetes RBAC Role-bindings Both PSPs and Roles are assigned at either the namespace level (rolebinding) or cluster level (clusterrolebinding) through bindings. A role binding links an API object to a user, serviceaccount or similar system object with one or more verbs. These verbs do not universally make sense for all API objects, and the documentation can be sparse outside of code-based, generated docs. In general, Toolforge user accounts are only permitted to act within their particular namespace, and therefore, they usually will have things applied via a rolebinding within the scope of their namespace. Pod Security Policies Full documentation on PSPs are available here: PSPs are a whitelisting system. This means that, at any given time, the object trying to take an action will use the most permissive policy it's rolebindings have allowed. The (cluster)rolebinding verb here is, literally, "use". PSPs are defined at the cluster scope, but they can be "use"d in a namespaced fashion, which helps us here. The privileged policy In the proposed PSP design, service accounts (automations) in the kube-system namespace can basically do anything. That way the cluster can actually function and controllers work. This "do anything" policy is named "privileged" and is as follows (in yaml). YAML Explanation This policy should also be applied to other overall controllers like the ingress controller and the registry-checking admission controller since they have to run in privileged mode. This policy is roughly the same as turning Pod Security Policies off for anything that can use it. System default policy This policy will not be applied to anything initially, but it is there to be used by services maintained by Toolforge administrators for the good of the system, not for tools themselves. This prevents a service from doing anything in a privileged scope or as root, but it does not specify any particular userid to run as. If we launch jobs or services that don't need to make changes inside Kubernetes itself, this would be the policy to apply. The current proposal for it is as follows: YAML Explanation This is something like what the Toolforge users will have except it does not specify a user ID (just not root) and prevents host mounts other than what users can see. This is meant to keep well-behaved services that need no special privs well-behaved. Toolforge user policies Toolforge user accounts, defined by their x509 certificates, each require an automatically-generated PSP in order to restrict their actions to the user id and group id of their accounts. This is defined inside the maintain_kubeusers.py script using API objects, but translated into YAML it looks like: YAML Explanation This is applied with a rolebinding, which means that the only place a Toolforge user can launch a pod is in their namespace. They also can only launch a service that has a security context including their user and group ID. They can apply supplemental groups other than the root group, but this is not likely to be used too often. The host paths are the ones currently allowed. Persisistent volumes are not currently in the design, but they are in there to "future proof" these policies. PSPs are defined at the cluster level, but each Toolforge user will have their own because of the UID requirement. That makes large changes annoying at least. Roles Root on the controlplane can use the "cluster-admin" role by default. Not much else should be using that. Special roles should be defined for Toolforge services that offer the minimum required capabilities only. Toolforge users can all use the same role defined at the cluster level (a "ClusterRole") with a namespaced role binding. Toolforge user roles The Toolforge users all share one cluster role that they can only use within their namespaces. YAML Explanation The easiest way to visualize all that is as a table. The reason there is so much apparent repetition is because in various editions of Kubernetes, the same resources appear under multiple APIs as features are graduated from alpha/beta/extensions into core APIs or the Apps API. In later editions (1.16, for instance) many of the resources under extensions are only found under apps. Most of this is likely not controversial, but there are some things to consider. Users can do nearly all of this in the current Toolforge. Something new is ingresses and networkpolicies. The reason they can launch ingresses is to be able to launch services that are accessible to the outside, and networkpolicies are, I think, required for ingresses to work properly. That last part about networkpolicies may be worth testing first. Each namespace should have quotas applied so scaling is not something I fear. "poddisruptionbudgets" are an HA feature that isn't something I think we should restrict, per se either. (see). Another consideration is that we may want to restrict deletecollection in some cases, particularly in configmaps where deleting all configmaps in their namespace will recycle their x509 certs and secrets where they might be able to revoke their own service account credentials inadvertently (rendering Deployments non-functional). One important note: for this and the PSP for Toolforge users to work right, it must be applied to both the toolforge user and the $namespace:default service account, which is what a replicationcontroller runs as (therefore the thing launching pods in a Deployment object). This last piece hasn't been included in maintain_users.py yet, but it will be before launch. Observer role Task T233372 See also Some other interesting information related to this topic:
https://wikitech-static.wikimedia.org/wiki/Portal:Toolforge/Admin/Toolforge_Kubernetes_RBAC_and_PSP
CC-MAIN-2022-40
refinedweb
1,012
60.75
This is the mail archive of the libstdc++@sources.redhat.com mailing list for the libstdc++ project. brent verner wrote: > > On 17 Aug 2000 at 14:46 (+0200), Levente Farkas wrote: > | hi, > | it seems to me that abs(int) is missing from std namespace!!! > > yes, it is. I'm currently _trying_ to sort out the c header madness, > but that work is going rather slow :\ Im willing to help it you give me some hints. > | int i = std::abs(4); > > | any tipp ? > > if this code _must work_ today, just remove the std:: from abs(int). I suppose you know not this code have to work:-) -- Levente "The only thing worse than not knowing the truth is ruining the bliss of ignorance."
http://gcc.gnu.org/ml/libstdc++/2000-08/msg00066.html
crawl-001
refinedweb
121
82.65
Rules Plugin logic is defined in rules: pure functions that map a set of statically-declared input types to a statically-declared output type. Each rule is an async Python function annotated with the decorator @rule, which takes any number of parameters (including zero) and returns a value of one specific type. Rules must be annotated with type hints. For example, this rule maps (int) -> str. from pants.engine.rules import rule @rule async def int_to_str(i: int) -> str: return str(i) Although any Python type, including builtin types like `int`, can be a parameter or return type of a rule, in almost all cases rules will deal with values of custom Python classes (defining a Python class creates a new type whose name is the same as the name of the class). Generally, rules correspond to a step in your build process. For example, when adding a new linter, you may have a rule that maps `(Target, Shellcheck) -> LintResult`: @rule async def run_shellcheck(target: Target, shellcheck: Shellcheck) -> LintResult: # Your logic. return LintResult(stdout="", stderr="", exit_code=0) You do not call a rule like you would a normal function. In the above examples, you would not say int_to_str(26) or run_shellcheck(tgt, shellcheck). Instead, the Pants engine determines when rules are used and calls the rules for you. Each rule should be pure; you should not use side-effects like subprocess.run(), print(), or the requests library. Instead, the Rules API has its own alternatives that are understood by the Pants engine and which work properly with its caching and parallelism. The rule graph All of the registered rules create a rule graph, with each type as a node and the edges being dependencies used to compute those types. For example, the list goal uses this rule definition and results in the below graph: @goal_rule async def list_targets( console: Console, addresses: Addresses, list_subsystem: ListSubsystem ) -> List: ... return List(exit_code=0) At the top of the graph will always be the goals that Pants runs, such as list and test. These goals are the entry-point into the graph. When a user runs ./pants list, the engine looks for a special type of rule, called a @goal_rule, that implements the respective goal. From there, the @goal_rule might request certain types like Console and Addresses, which will cause other helper @rules to be used. To view the graph for a goal, see: Visualize the rule graph. The graph also has several "roots", such as Console, AddressSpecs, FilesystemSpecs, and OptionsBootstrapper in this example. Those roots are injected into the graph as the initial input, whereas all other types are derived from those roots. The engine will find a path through the rules to satisfy the types that you are requesting. In this example, we do not need to explicitly specify Specs; we only specify Addresses in our rule's parameters, and the engine finds a path from Specs to Addresses for us. This is similar to Dependency Injection, but with a typed and validated graph. If the engine cannot find a path, or if there is ambiguity due to multiple possible paths, the rule graph will fail to compile. This ensures that the rule graph is always unambiguous. Rule graph errors can be confusing We know that rule graph errors can be intimidating and confusing to understand. We are planning to improve them. In the meantime, please do not hesitate to ask for help in the #engine channel on Slack. await Get - awaiting results in a rule body await Get- awaiting results in a rule body In addition to requesting types in your rule's parameters, you can request types in the body of your rule. Add await Get(OutputType, InputType, input), where the output type is what you are requesting and the input is what you're giving the engine for it to be able to compute the output. For example: from pants.engine.rules import Get, rule @rule async def run_shellcheck(target: Target, shellcheck: Shellcheck) -> LintResult: ... process_request = Process( ["/bin/echo", str(target.address)], description=f"Echo {target.address}", ) process_result = await Get(ProcessResult, Process, process_request) return LintResult(stdout=process_result.stdout, stderr=process_result.stderr, exit_code=0) Pants will run your rule like normal Python code until encountering the await, which will yield execution to the engine. The engine will look in the pre-compiled rule graph to determine how to go from Process -> ProcessResult. Once the engine gives back the resulting ProcessResult object, control will be returned back to your Python code. In this example, we could not have requested the type ProcessResult as a parameter to our rule because we needed to dynamically create a Process object. Thanks to await Get, we can write a recursive rule to compute a Fibonacci number: @dataclass(frozen=True) class Fibonacci: val: int @rule async def compute_fibonacci(n: int) -> Fibonacci: if n < 2: return Fibonacci(n) x = await Get(Fibonacci, int, n - 2) y = await Get(Fibonacci, int, n - 1) return Fibonacci(x.val + y.val) Another rule could then "call" our Fibonacci rule by using its own Get: @rule async def call_fibonacci(...) -> Foo: fib = await Get(Fibonnaci, int, 4) ... Getconstructor shorthand The verbose constructor for a Getobject takes three parameters: Get(OutputType, InputType, input), where OutputTypeand InputTypeare both types, and inputis an instance of InputType. Instead, you can use Get(OutputType, InputType(constructor arguments)). These two are equivalent: - Get(ProcessResult, Process, Process(["/bin/echo"])) - Get(ProcessResult, Process(["/bin/echo"])) However, the below is invalid because Pants's AST parser will not be able to see what the InputTypeis: process = Process(["/bin/echo"]) Get(ProcessResult, process) Why only one input? Currently, you can only give a single input. It is not possible to do something like Get(OutputType, InputType1(...), InputType2(...)). Instead, it's common for rules to create a "Request" data class, such as PexRequestor SourceFilesRequest. This request centralizes all of the data it needs to operate into one data structure, which allows for call sites to say await Get(SourceFiles, SourceFilesRequest, my_request), for example. See for the tracking issue. MultiGet for concurrency MultiGetfor concurrency Every time your rule has the await keyword, the engine will pause execution until the result is returned. This means that if you have two await Gets, the engine will evaluate them sequentially, rather than concurrently. You can use await MultiGet to instead get multiple results in parallel. from pants.engine.rules import Get, MultiGet, rule @rule async def call_fibonacci(...) -> Foo: results = await MultiGet(Get(Fibonnaci, int, n) for n in range(100)) ... The result of MultiGet is a tuple with each individual result, in the same order as the requests. You should rarely use a for loop with await Get - use await MultiGet instead, as shown above. MultiGet can either take a single iterable of Get objects or take multiple individual arguments of Get objects. Thanks to this, we can rewrite our Fibonacci rule to parallelize the two recursive calls: from pants.engine.rules import Get, MultiGet, rule @rule async def compute_fibonacci(n: int) -> Fibonacci: if n < 2: return Fibonacci(n) x, y = await MultiGet( Get(Fibonacci, int, n - 2), Get(Fibonacci, int, n - 1), ) return Fibonacci(x.val + y.val) Valid types Types used as inputs to Gets or Querys must be hashable, and therefore should be immutable. Specifically, the type must have implemented __hash__() and __eq__(). While the engine will not validate that your type is immutable, you should be careful to ensure this so that the cache works properly. Because you should use immutable types, use these collection types: tupleinstead of list. pants.util.frozendict.FrozenDictinstead of the built-in dict. pants.util.ordered_set.FrozenOrderedSetinstead of the built-in set. This will also preserve the insertion order, which is important for determinism. Unlike Python in general, the engine uses exact type matches, rather than considering inheritance; even if Truck subclasses Vehicle, the engine will view these types as completely separate when deciding which rules to use. You cannot use generic Python type hints in a rule's parameters or in a Get(). For example, a rule cannot return Optional[Foo], or take as a parameter Tuple[Foo, ...]. To express generic type hints, you should instead create a class that stores that value. To disambiguate between different uses of the same type, you will usually want to "newtype" the types that you use. Rather than using the builtin str or int, for example, you should define a new, declarative class like Name or Age. Dataclasses Python 3's dataclasses work well with the engine because: - If frozen=Trueis set, they are immutable and hashable. - Dataclasses use type hints. - Dataclasses are declarative and ergonomic. You do not need to use dataclasses. You can use alternatives like attrs or even normal Python classes. However, dataclasses are a nice default. You should set @dataclass(frozen=True) for Python to autogenerate __hash__() and to ensure that the type is immutable. from dataclasses import dataclass from typing import Optional @dataclass(frozen=True) class Name: first: str last: Optional[str] @rule async def demo(name: Name) -> Foo: ... Don't use NamedTuple NamedTuplebehaves similarly to dataclasses, but it should not be used because the __eq__()implementation uses structural equality, rather than the nominal equality used by the engine. Custom dataclass __init__() Sometimes, you may want to have a custom __init__()constructor. For example, you may want your dataclass to store a Tuple[str, ...], but for your constructor to take the more flexible Iterable[str]which you then convert to an immutable tuple sequence. Normally, @dataclass(frozen=True)will not allow you to have a custom __init__(). But, if you do not set frozen=True, then your dataclass would be mutable, which is dangerous with the engine. Instead, we added a decorator called @frozen_after_init, which can be combined with @dataclass(unsafe_hash=True). from dataclasses import dataclass from typing import Iterable, Tuple from pants.util.meta import frozen_after_init @frozen_after_init @dataclass(unsafe_hash=True) class Example: args: Tuple[str, ...] def __init__(self, args: Iterable[str]) -> None: self.args = tuple(args) Collection: a newtype for tuple Collection: a newtype for tuple If you want a rule to use a homogenous sequence, you can use pants.engine.collection.Collection to "newtype" a tuple. This will behave the same as a tuple, but will have a distinct type. from pants.engine.collection import Collection @dataclass(frozen=True) class LintResult: stdout: str stderr: str exit_code: int class LintResults(Collection[LintResult]): pass @rule async def demo(results: LintResults) -> Foo: for result in results: print(result.stdout) ... DeduplicatedCollection: a newtype for FrozenOrderedSet DeduplicatedCollection: a newtype for FrozenOrderedSet If you want a rule to use a homogenous set, you can use pants.engine.collection.DeduplicatedCollection to "newtype" a FrozenOrderedSet. This will behave the same as a FrozenOrderedSet, but will have a distinct type. from pants.engine.collection import DeduplicatedCollection class RequirementStrings(DeduplicatedCollection[str]): sort_input = True @rule async def demo(requirements: RequirementStrings) -> Foo: for requirement in requirements: print(requirement) ... You can optionally set the class property sort_input, which will often result in more cache hits with the Pantsd daemon. Registering rules in register.py register.py To register a new rule, use the rules() hook in your register.py file. This function expects a list of functions annotated with @rule. def rules(): return [rule1, rule2] Conventionally, each file will have a function called rules() and then register.py will re-export them. This is meant to make imports more organized. Within each file, you can use collect_rules() to automatically find the rules in the file. from fortran import fmt, test def rules(): return [*fmt.rules(), *test.rules()] from pants.engine.rules import collect_rules, rule @rule async def setup_formatter(...) -> Formatter: ... @rule async def fmt_fortran(...) -> FormatResult: ... def rules(): return collect_rules() from pants.engine.rules import collect_rules, rule @rule async def run_fotran_test(...) -> TestResult: ... def rules(): return collect_rules() Updated about a month ago
https://www.pantsbuild.org/docs/rules-api-concepts
CC-MAIN-2020-50
refinedweb
1,961
55.74
Arrays in Java. Working with Arrays - Kathlyn Reeves - 2 years ago - Views: Transcription 1 Arrays in Java So far we have talked about variables as a storage location for a single value of a particular data type. We can also define a variable in such a way that it can store multiple values. Such a variable is called a data structure. An array is a common data structure used to store a collection of values of the same type, where the collection s size does not change once it is declared. An array is a list of data items represented by a single variable name. It can be thought of as a collection of variables all of which are referenced using that single name. Each value in the array is called an element and individual elements are accessed using an index which is an integer enclosed in [ ]. The index specifies the position of a particular element in the array. Declaring arrays In defining an array in Java we have to indicate that it is an array and also specify the data type of the elements that will be stored there. For example if we wanted to define an array called grades that contained 5 elements of type double, we could write the following: final int NUM_GRADES = 5; double [] grades = new double [NUM_GRADES]; The word double indicates the data type of the elements in the array, the [] indicates that it will be an array, and grades is the name of the array. The expression to the right of the = operator instructs the compiler to allocate an array capable of storing 5 (the value of the constant NUM_GRADES) double values. NOTE: An array is a reference data type. The memory location of the array is an address to the starting location of the array itself. Array Initialization An array s items are automatically initialized to the default value of their type. For the numeric types the default values are 0.0 for floats and 0 for integers. For reference types the default value is null. Note that normal variables are not automatically initialized. This default initialization is for arrays only. Arrays can also be initialized using a special { notation. This notation can only be used at the time an array is being declared. It cannot be used for regular assignment statements. In the following example, a new int[] will be created and its size will be 7 (i.e., the number of values specified in brackets). int[] scores = {0, 1, 2, 3, 4, 5, 6 ; Accessing Array Elements An individual element of an array can be accessed by specifying the name of the array, followed by the element s position in the array (it s index) which is enclosed in brackets. Like characters 1 2 in a string, the first element of the array has an index of 0. An array element can be thought of as a variable whose type is the array s item-type. The array subscript operation validates each access to an array. If the subscript is outside the range of valid index values things can go wrong!!!! The array length property contains the number of elements in an array. It is found by giving the name of the array followed by a dot and the word length (e.g., grades.length). Note that length is a property, not a method so there are no ( ) after the word length as there is for a String object s length() method. Since the index of an array begins with 0, the index of the last element is always the array s length - 1. Thus the expression anarray[anarray.length - 1] can be used to access the last element of the array. Calling Methods with Arrays Arrays can be passed as parameters but again there has to be an indication that the parameter is an array. If we were to pass grades as a parameter, the receiving method needs to define a parameter capable of storing the array s handle. NOTE: When an array is passed as a parameter it really only passes the address of the starting location of the array (the handle). This is done by placing a pair of brackets between a parameter s type and its name. For example, if we had a method with an array parameter called anarray we could define it as follows: public static double average(double[] anarray) Processing Array Elements A for loop is ideal for accessing each of the elements of an array. For example, to get grade information into an array, we could write the following: System.out.print("Enter grade #" + (i+1)+ " of " + array.length + ": "); array[i] = keyboard.nextdouble(); A complete program for entering grades, determining their average, and printing them and the average is as follows: /* This program will read in grades, find the average and print out * the average of those grades Carter * */ import java.util.scanner; 2 3 public class Grades { public static void main(string[] args) { final int NUM_GRADES = 5; double[] grades = new double[num_grades]; readgradesinto(grades); // this is a call to a method that // passes an array parameter System.out.printf( "\nthe average is %.2f%n", average(grades)); print(grades); private static void readgradesinto(double[] array) { Scanner keyboard = new Scanner (System.in); System.out.println("To compute the grade "); System.out.print("Enter grade #" + (i+1)+ " of " + array.length + ": "); array[i] = keyboard.nextdouble(); // end of for loop public static double average(double[] array) { double sum = 0.0; sum = sum + array[i]; // end of for loop return (sum / array.length); public static void print(double[] array){ System.out.println("Grade #" + (i+1) + ": " + array[i]); The for each loop Java has a special version of the for loop called a for-each loop. In finding the average of grades we could write an average() method that calculates the sum of all grades as follows: double sum =0.0; for (double item : anarray) { sum = sum + item; return sum / anarray.length; This repeats the statement sum = sum + item; for each item in the array. NOTE: the for each loop can be used to read an array s items, but cannot be used to write into them. For example we could not use a for each loop for readgradesinto() above because that reads a number from the keyboard and writes it to an array element. However, we could use it for average() and print() since in those methods we are only reading an array s items. 3 4 Arrays and Memory An array s elements reside in adjacent memory locations and Java s subscript operator makes use of this adjacency to access any element of an array in the same amount of time. For example, when Java evaluates the expression anarray[i], it multiplies the index i by the size of one item (the amount of storage space that the data type uses) and adds the resulting product to the reference location specified in anarray. [0] [1] [2] [3] [4] [5] [6] The name of the array is actually a reference (handle) that points to the starting address of the array. anarray[4] multiplies 4 * size of one element resulting in the address of the fifth element of the array. An array is a random access data structure which means it takes the same amount of time to access any item in the. Arrays are not good, however, if we want to change the contents of the array by inserting an element in between existing elements. It is not uncommon for some positions in an array to be unused. For example, if we were reading an undetermined number of grades (with a maximum of 30) we would make our array size 30, but keep track of how many elements are actually used (i.e., read in). We could then pass that number as an additional parameter to any method that uses the array. The size of our array for practical purposes is not the actual size as determined by the length property, but rather the number of elements in the array that are actually used. The grades program would now be as follows. import java.util.scanner; public class GradesUndeterminedAmount { public static void main(string[] args) { final int NUM_GRADES = 30; double [] grades = new double[num_grades]; int sizeofarray; // The actual amount of grades read in sizeofarray = readgradesinto(grades); System.out.printf( "\nthe average is %.2f%n", average(grades, sizeofarray)); print(grades, sizeofarray); public static int readgradesinto(double[] array) { Scanner keyboard = new Scanner (System.in); int arraysize = 0; double grade; System.out.print("Enter grade #" + (arraysize + 1) + " enter 1 to quit ); grade=keyboard.nextdouble(); while (grade >= 0) { array[arraysize] = grade; arraysize++; // increase size by 1 4 5 System.out.print("Enter grade #" + (arraysize+1)+ " enter -1 to quit"); grade=keyboard.nextdouble(); return arraysize; public static double average(double [] anarray, int arraysize) { double sum = 0.0; for (int i = 0; i < arraysize; i++) { sum = sum + anarray[i]; return sum / arraysize; public static void print(double[] arr, int arraysize){ for (int i = 0; i < arraysize; i++) { System.out.println("Grade #" + (i+1) + ": " + arr[i]); Multidimensional Arrays Thus far we have only considered arrays that have a single dimension: length. That length determined the amount of space needed by the array. They are referred to as 1-dimensional arrays. We can also declare multiple dimensional arrays. For example, a 2-dimensional array could be used to store data in a table. A 2-dimensional array can be declared as follows: double [][] mytable = null; We use two pairs of brackets instead of one to indicate that this is a two-dimensional array. In general we would use N brackets for an N-dimensional array. We can allocate memory for a multi-dimensional array though the use of the new operator. mytable = new double[rows][columns]; Where rows represents the number of rows in our table and columns represents the number of columns. We can also initialize the positions in a multidimensional (in this case two) array as follows: double[][] anarray = { {0, 1, 2, 3, 4, 5, 6, {7, 8, 9, 10, 11, 12, 13 {14, 15, 16, 17, 18, 19, 20 ; This creates a two-dimensional array with 3 rows and 7 columns. To access elements in a one-dimensional array we used one subscript operator. To access an element in a 2-dimensional array, we use two subscript operators. return mytable[row][col]; // access element at row, col. 5 Introduction to Java Introduction to Java The HelloWorld program Primitive data types Assignment and arithmetic operations User input Conditional statements Looping Arrays CSA0011 Matthew Xuereb 2008 1 Java Overview A high Two-Dimensional Arrays Chapter 11 Two-Dimensional Arrays This chapter introduces Java arrays with two subscripts for managing data logically stored in a table-like format in rows and columns. This structure proves useful C Primer. Fall Introduction C vs. Java... 1 CS 33 Intro Computer Systems Doeppner C Primer Fall 2016 Contents 1 Introduction 1 1.1 C vs. Java.......................................... 1 2 Functions 1 2.1 The main() Function.................................... Part I. Multiple Choice Questions (2 points each): Part I. Multiple Choice Questions (2 points each): 1. Which of the following is NOT a key component of object oriented programming? (a) Inheritance (b) Encapsulation (c) Polymorphism (d) Parallelism ****** Introduction to Programming Introduction to Programming Lecturer: Steve Maybank Department of Computer Science and Information Systems sjmaybank@dcs.bbk.ac.uk Spring 2015 Week 2b: Review of Week 1, Variables 16 January 2015 Birkbeck Comp Arrays and Pointers. Class Notes. T h e G r o u p o f T h r e e Comp 2401 Arrays and Pointers Class Notes 2013 T h e G r o u p o f T h r e e Introduction To Arrays: In C programming, one of the frequently problem is to handle similar types of data. For example: if 8.1. Example: Visualizing Data Chapter 8. Arrays and Files In the preceding chapters, we have used variables to store single values of a given type. It is sometimes convenient to store multiple values of a given type in a single collection arrays C Programming Language - Arrays arrays So far, we have been using only scalar variables scalar meaning a variable with a single value But many things require a set of related values coordinates or vectors require 3 (or 2, or 4, or more) COUNTING LOOPS AND ACCUMULATORS COUNTING LOOPS AND ACCUMULATORS Two very important looping idioms are counting loops and accumulators. A counting loop uses a variable, called the loop control variable, to keep count of how many cycles Answers to Review Questions Chapter 7 Answers to Review Questions Chapter 7 1. The size declarator is used in a definition of an array to indicate the number of elements the array will have. A subscript is used to access a specific element AP Computer Science Java Subset APPENDIX A AP Computer Science Java Subset The AP Java subset is intended to outline the features of Java that may appear on the AP Computer Science A Exam. The AP Java subset is NOT intended as an overall 5 Arrays and Pointers 5 Arrays and Pointers 5.1 One-dimensional arrays Arrays offer a convenient way to store and access blocks of data. Think of arrays as a sequential list that offers indexed access. For example, a list of, Install Java Development Kit (JDK) 1.8 CS 259: Data Structures with Java Hello World with the IntelliJ IDE Instructor: Joel Castellanos e-mail: joel.unm.edu Web: Office: Farris Engineering Center 319 8/19/2015 Install Arrays. Atul Prakash Readings: Chapter 10, Downey Sun s Java tutorial on Arrays:. Arrays Atul Prakash Readings: Chapter 10, Downey Sun s Java tutorial on Arrays: 1 Grid in Assignment 2 How do you represent the state Introduction to Java Applications. 2005 Pearson Education, Inc. All rights reserved. 1 2 Introduction to Java Applications 2.2 First Program in Java: Printing a Line of Text 2 Application Executes when you use the java command to launch the Java Virtual Machine (JVM) Sample program Displays How to Program, 9/e Java How to Program, 9/e Education, Inc. All Rights Reserved. 1 Any computing problem can be solved by executing a series of actions in a specific order. An algorithm is a procedure for solving a 6.1. Example: A Tip Calculator 6-1 Chapter 6. Transition to Java Not all programming languages are created equal. Each is designed by its creator to achieve a particular purpose, which can range from highly focused languages designed) Some Scanner Class Methods Keyboard Input Scanner, Documentation, Style Java 5.0 has reasonable facilities for handling keyboard input. These facilities are provided by the Scanner class in the java.util package. A package is a Grade 5, Ch. 1 Math Vocabulary Grade 5, Ch. 1 Math Vocabulary rectangular array number model fact family factors product factor pair divisible by divisibility rules prime number composite number square array square number exponent exponential Object-Oriented Programming in Java CSCI/CMPE 3326 Object-Oriented Programming in Java Class, object, member field and method, final constant, format specifier, file I/O Dongchul Kim Department of Computer Science University of Texas Rio C++ Keywords. If/else Selection Structure. Looping Control Structures. Switch Statements. Example Program C++ Keywords There are many keywords in C++ that are not used in other languages. bool, const_cast, delete, dynamic_cast, const, enum, extern, register, sizeof, typedef, explicit, friend, inline, mut++ JAVA ARRAY EXAMPLE PDF JAVA ARRAY EXAMPLE PDF Created By: Umar Farooque Khan 1 Java array example for interview pdf Program No: 01 Print Java Array Example using for loop package ptutorial; public class PrintArray { public static C A short introduction About these lectures C A short introduction Stefan Johansson Department of Computing Science Umeå University Objectives Give a short introduction to C and the C programming environment in Linux/Unix Go 13 File Output and Input SCIENTIFIC PROGRAMMING -1 13 File Output and Input 13.1 Introduction To make programs really useful we have to be able to input and output data in large machinereadable amounts, in particular we have Topic 11 Scanner object, conditional execution Topic 11 Scanner object, conditional execution "There are only two kinds of programming languages: those people always [complain] about and those nobody uses." Bjarne Stroustroup, creator of C++ Copyright Using Two-Dimensional Arrays Using Two-Dimensional Arrays Great news! What used to be the old one-floor Java Motel has just been renovated! The new, five-floor Java Hotel features a free continental breakfast and, at absolutely no Lecture 4 Notes: Arrays and Strings 6.096 Introduction to C++ January 10, 2011 Massachusetts Institute of Technology John Marrero Lecture 4 Notes: Arrays and Strings 1 Arrays So far we have used variables to store values in memory for laterSci Sample CSE8A midterm Multiple Choice (circle one) Sample midterm Multiple Choice (circle one) (2 pts) Evaluate the following Boolean expressions and indicate whether short-circuiting happened during evaluation: Assume variables with the following names Using Files as Input/Output in Java 5.0 Applications Using Files as Input/Output in Java 5.0 Applications The goal of this module is to present enough information about files to allow you to write applications in Java that fetch their input from a file instead J a v a Quiz (Unit 3, Test 0 Practice) Computer Science S-111a: Intensive Introduction to Computer Science Using Java Handout #11 Your Name Teaching Fellow J a v a Quiz (Unit 3, Test 0 Practice) Multiple-choice questions are worth 2 points UNIT III: 1. What is an array? How to declare and initialize arrays? Explain with examples UNIT III: Arrays: Introduction, One-dimensional arrays, Declaring and Initializing arrays, Multidimensional arrays. Strings: Introduction to Strings, String operations with and without using String handling Two-Dimensional Arrays. Multi-dimensional Arrays. Two-Dimensional Array Indexing Multi-dimensional Arrays The elements of an array can be any type Including an array type So int 2D[] []; declares an array of arrays of int Two dimensional arrays are useful for representing tables VB.NET Programming Fundamentals Chapter 3 Objectives Programming Fundamentals In this chapter, you will: Learn about the programming language Write a module definition Use variables and data types Compute with Write decision-making statements Chapter 2 Introduction to Java programming Chapter 2 Introduction to Java programming 1 Keywords boolean if interface class true char else package volatile false byte final switch while throws float private case return native void protected break Lecture Set 2: Starting Java Lecture Set 2: Starting Java 1. Java Concepts 2. Java Programming Basics 3. User output 4. Variables and types 5. Expressions 6. User input 7. Uninitialized Variables CMSC 131 - Lecture Outlines - set Building Java Programs Building Java Programs Chapter 3 Lecture 3-3: Interactive Programs w/ Scanner reading: 3.3-3.4 self-check: #16-19 exercises: #11 videos: Ch. 3 #4 Interactive programs We have written programs that print Building Java Programs Building Java Programs Chapter 5 Lecture 5-2: Random Numbers reading: 5.1-5.2 self-check: #8-17 exercises: #3-6, 10, 12 videos: Ch. 5 #1-2 1 The Random class A Random object generates pseudo-random* numbers. EC312 Chapter 4: Arrays and Strings Objectives: (a) Describe how an array is stored in memory. (b) Define a string, and describe how strings are stored. EC312 Chapter 4: Arrays and Strings (c) Describe the implications of reading or writing A Comparison of the Basic Syntax of Python and Java Python Python supports many (but not all) aspects of object-oriented programming; but it is possible to write a Python program without making any use of OO concepts. Python is designed to be used interpretively. Data Structure. Lecture 3 Data Structure Lecture 3 Data Structure Formally define Data structure as: DS describes not only set of objects but the ways they are related, the set of operations which may be applied to the elements Memory management. Announcements. Safe user input. Function pointers. Uses of function pointers. Function pointer example Announcements Memory management Assignment 2 posted, due Friday Do two of the three problems Assignment 1 graded see grades on CMS Lecture 7 CS 113 Spring 2008 2 Safe user input If you use scanf(), include Java Review (Essentials of Java for Hadoop) Java Review (Essentials of Java for Hadoop) Have You Joined Our LinkedIn Group? What is Java? Java JRE - Java is not just a programming language but it is a complete platform for object oriented programming. Building Java Programs Building Java Programs Chapter 3 Lecture 3-3: Interactive Programs w/ Scanner reading: 3.3-3.4 self-check: #16-19 exercises: #11 videos: Ch. 3 #4 Interactive programs We have written programs that print Chapter 2: Elements of Java Chapter 2: Elements of Java Basic components of a Java program Primitive data types Arithmetic expressions Type casting. The String type (introduction) Basic I/O statements Importing packages. 1 Introduction Lecture 5: Java Fundamentals III Lecture 5: Java Fundamentals III School of Science and Technology The University of New England Trimester 2 2015 Lecture 5: Java Fundamentals III - Operators Reading: Finish reading Chapter 2 of the 2nd Moving from CS 61A Scheme to CS 61B Java Moving from CS 61A Scheme to CS 61B Java Introduction Java is an object-oriented language. This document describes some of the differences between object-oriented programming in Scheme (which we hope you Homework/Program #5 Solutions Homework/Program #5 Solutions Problem #1 (20 points) Using the standard Java Scanner class. Look at as an exampleof using the Programming Languages CIS 443 Course Objectives Programming Languages CIS 443 0.1 Lexical analysis Syntax Semantics Functional programming Variable lifetime and scoping Parameter passing Object-oriented programming Continuations Exception JAVA PRIMITIVE DATA TYPE JAVA PRIMITIVE DATA TYPE Description Not everything in Java is an object. There is a special group of data types (also known as primitive types) that will be used quite often in programming. For performance Part I:( Time: 90 minutes, 30 Points) Qassim University Deanship of Educational Services Preparatory Year Program- Computer Science Unit Final Exam - 1434/1435 CSC111 Time: 2 Hours + 10 Minutes 1 MG Student name: Select the correct choice: Introduction to Data Structures Introduction to Data Structures Albert Gural October 28, 2011 1 Introduction When trying to convert from an algorithm to the actual code, one important aspect to consider is how to store and manipulate A 2015 Free-Response Questions AP Computer Science A 2015 Free-Response Questions College Board, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks of the College Board. AP Central is the official Software and Programming 1 Software and Programming 1 Lab 3: Strings & Conditional Statements 20 January 2016 SP1-Lab3.ppt Tobi Brodie (Tobi@dcs.bbk.ac.uk) 1 Lab Objectives This session we are concentrating on Strings and conditional AP Computer Science Java Subset APPENDIX A AP Computer Science Java Subset The AP Java subset is intended to outline the features of Java that may appear on the AP Computer Science A Exam. The AP Java subset is NOT intended as an overall CS170 Lab 11 Abstract Data Types & Objects CS170 Lab 11 Abstract Data Types & Objects Introduction: Abstract Data Type (ADT) An abstract data type is commonly known as a class of objects An abstract data type in a program is used to represent (the Arrays - Introduction. Declaration of Arrays. Initialisation of Arrays. Creation of Arrays. Arrays, Strings and Collections [1] Arrays - Introduction Arrays, Strings and Collections [] Rajkumar Buyya Grid Computing and Distributed Systems (GRIDS) Laboratory Dept. of Computer Science and Software Engineering University of Melbourne, modifier returnvaluetype methodname(list of parameters) { // Method body; } JAVA METHODS METHODS A Java method is similar to function in C/C++. It is a collection of statements that are grouped together to perform an operation. When you call the System.out.println method, for Passing 1D arrays to functions. Passing 1D arrays to functions. In C++ arrays can only be reference parameters. It is not possible to pass an array by value. Therefore, the ampersand (&) is omitted. What is actually passed to the function, Syntax and logic errors Syntax and logic errors Teacher s Notes Lesson Plan Length 60 mins Specification Link 2.1.7/p_q Learning objective Candidates should be able to: (a) describe syntax errors and logic errors which may occur Basic Programming and PC Skills: Basic Programming and PC Skills: Texas University Interscholastic League Contest Event: Computer Science The contest challenges high school students to gain an understanding of the significance of computation as well as the details of Java Basics: Data Types, Variables, and Loops Java Basics: Data Types, Variables, and Loops If debugging is the process of removing software bugs, then programming must be the process of putting them in. - Edsger Dijkstra Plan for the Day Variables Building Java Programs Building Java Programs Chapter 4 Lecture 4-1: Scanner; if/else reading: 3.3 3.4, 4.1 Interactive Programs with Scanner reading: 3.3-3.4 1 Interactive programs We have written programs that print console Section 6 Spring 2013 Print Your Name You may use one page of hand written notes (both sides) and a dictionary. No i-phones, calculators or any other type of non-organic computer. Do not take this exam if you are sick. Once
http://docplayer.net/21723203-Arrays-in-java-working-with-arrays.html
CC-MAIN-2018-43
refinedweb
4,223
51.78
Local product inventory feed specification The local products inventory feed is a list of the products you sell in each store. Some attributes are required for all items, some are required for certain types of items, and others are recommended. Note: Not providing a required attribute may prevent that particular item from showing up in results, and not providing recommended attributes may impact the ad's performance. Full and incremental feeds Inventory price and quantity can change frequently and on a store-by-store basis. Use incremental feeds to make quick updates to inventory data. Full local product inventory feed: Submit daily and include all of your inventory. The feed type is 'Local product inventory.' Incremental local product inventory feed: If the price and/or quantity of your items per store changes throughout the day, submit only the items that have changed with their new details multiple times throughout the day. The feed type is 'Local product inventory update.' The local product inventory update feed type processes faster than the full local product inventory feed, allowing for more-up-to-date information in your local inventory ads. Submit local product inventory feeds File type: The local product inventory feed is only available as a delimited text file or via API. XML files are not supported for this feed type at this time. Registering a new feed: You’ll follow the standard steps to register a new data feed, but you’ll select either "local product inventory" or "local product inventory update" as the feed type. Important: Some attributes in this local product inventory feed spec contain spaces and underscores. To make sure you submit attributes with correct characters and spacing, follow the guidelines below for your file type: - CSV feeds: Spaces are required. If the attribute has underscores, use a space instead of the "_". - XML API or JSON API: Underscores are required, and are converted into whitespace when received. Summary of attribute requirements Required inventory details These attributes describe basic inventory information per item per store. A unique alphanumeric identifier for each local store. You must use the same store codes that you provided in your Google My Business account. When to include: Required for all items. A unique alphanumeric product identifier for an item across all stores. If you sell the same item in multiple stores, you will have the same itemid appear for multiple store codes. You should include one itemid per store and use quantity to indicate how many of each item is in stock in that store. If you have multiple feeds of the same type for one country, ids of items within different feeds must still be unique. If your SKUs are unique across your inventory and meet the requirements below, we suggest you use your SKUs for this attribute. When to include: Required for all items. Important: - Use the same itemidvalues in both your local products and local product inventory feeds. - Starting and trailing whitespaces and carriage returns (0x0D) are removed. - Each sequence of carriage return (0x0D) and whitespace characters (Unicode characters with the whitespace property) is replaced by a single whitespace (0x20). - Only valid unicode characters are accepted; this excludes the following characters: - control characters (except carriage return 0x0D) - function characters - private area characters - surrogate pairs - non assigned code points (in particular any code point larger than 0x10FFFF) - Once an item is submitted, the id must not change when you update your data feed. - Once an item is submitted, the id must not be used for a different product at a later point in time. - Only include products that are available for purchase in stores. The number of items in stock for the store. If you submit items that are temporarily out of stock, you must include a value of '0' for this attribute. When to include: Required for all items. Important: - Google considers "in stock" items to be those with 3+ availability, "limited availability" to be 1-2, and "out of stock" to be 0. - For local inventory ads, the number expressed in quantity may be a placeholder representing availability. For Google Express, the exact quantity must be shared. The regular price of your item. If you submit price here and in the local products feed, this price will override the price in the local products feed for the associated store. When to include: Required for all items. Important: - This attribute is required in either the local products feed for national default pricing or in this feed for any store-specific overrides. Optional inventory details You can use these attributes to give additional information about the price, quantity, and availability of your items. The advertised temporary sale price that denotes a store-specific override of the 'price' attribute in this feed and the local products feed. We recommend submitting the 'sale price effective date' attribute for any items with sale prices, as this will determine when your sale price should be live. If the 'sale price effective date' isn't submitted, the sale price will be in effect for that item for as long as it is submitted in your feed. Note: Any ‘price’ value submitted in an incremental feed will not automatically remove a ‘sale price’ value from a previous feed. To remove a ‘sale price’ using the incremental feed, include an expired value in the ‘sale price effective date’ attribute. The dates during which the advertised sale price is effective. Note: Timezone is optional [YYYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]. If timezone is absent, Google assumes the local timezone for each store. Additionally, note that we are using 24h time for the hours values. Learn more about the format for this attribute. - 'in stock': Indicates that the item is in stock at your local store. - 'out of stock': Indicates that the item is out stock at your local store. - 'limited availability': Indicates that only a few items are left in stock at your local store. - 'on display to order': Indicates that the item is on display to order at your local store (e.g. a refrigerator that needs to be shipped from a warehouse). For items on display to order, submit the value 'on display to order' along with the value '1' for the attribute 'quantity'. Important: - Google considers "in stock" items to be those with 3+ availability, "limited availability" to be 1-2, and "out of stock" to be 0. - If you use a different value, your item will not be processed. The value you provide for this attribute may or may not appear in Google Shopping results as submitted. Note: You should only submit items that are out of stock if they have the availability attribute with the value ‘out of stock’ and the quantity attribute with the value '0'. Estimate of how many weeks worth of inventory you have. To calculate, divide the quantity available for purchase by average weekly units sold. Optional store pickup details You can highlight the store pickup option by adding the following 2 attributes to your feed. Add these attributes to your local product inventory feed for store-specific pickup information or add them to your local products feed for any items where the values are true in all stores (e.g. a customer can pick up the XYZ television in any of your stores nationally). Specify whether store pickup is available for this offer and whether the pickup option should be shown as buy, reserve, or not supported. - 'buy': the entire transaction occurs online - 'reserve': the item is reserved online and the transaction occurs in-store - 'not supported': the item is not available for store pickup Specify the expected date that an order will be ready for pickup, relative to when the order is placed. - 'same day': indicates that the item is available for pickup the same day that the order is placed, subject to cutoff times - 'next day': indicates that the item is available for pickup the following day that the order is placed
https://support.google.com/merchants/answer/3061342
CC-MAIN-2017-39
refinedweb
1,327
61.56
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video. Introduction To Functions3:40 with Kenneth Love Writing code in order is great, but often we need to have things happen multiple times or we just don't want to write the same code again. That's where functions come in. Functions are a great way to reduce the amount of code you have to write to make a script really useful. Anything you feel like you might want to do more than once, or anything you find yourself writing more than once, turn into a function. New Terms function - A block of code that can be called when needed. def - A keyword that defines a function. return - A keyword that sends data from a function back to whatever called it, whether that's another function or a variable assignment. - 0:00 [MUSIC]. - 0:05 We've been writing our scripts as things that just run straight through. - 0:08 They might have a loop or two, but it's all just going from one line to the next. - 0:12 Lots of times though we don't want to write our code like that. - 0:15 Lots of times we want to write a little snippet of code, - 0:17 and be able to call it multiple times, or just on demand. - 0:20 We call these little snippets functions. - 0:23 Let's write a couple of functions in our shell. - 0:26 We have to name our functions and they have - 0:28 to follow the same rules as variables for naming. - 0:31 So you can't put in numbers in the front - 0:32 and you can't have any hyphens or special symbols. - 0:35 We use the keyword def, to say that - 0:37 we're making a function or we're defining a function. - 0:41 And we'll have to provide an area for parameters - 0:43 or arguments, but we can leave that empty for now. - 0:44 So let's name our function say hello, and it's not going to take any arguments. - 0:50 So we use a colon, and then we would tab in. - 0:56 And you can do this with spaces or tabs again and. - 1:00 Then we have to put in what happens in our function. - 1:03 So we're just going to have it print below, and - 1:07 then once we get back to the chevrons, which shows - 1:10 that we can enter in commands, we can call our - 1:14 function, say hello, press return, and we get back, hello. - 1:21 That's pretty awesome. - 1:22 Okay, so what about a function that needs to take an argument? - 1:25 So, again we use def, and again you put in, - 1:33 the name. - 1:34 And then inside the parentheses you name each - 1:37 of the arguments that you wanna put in. - 1:38 If you have more than one of them you separate them with commas. - 1:41 But let's just use a single one for now. - 1:45 We'll use an argument named num. - 1:48 And then on the inside, we will print out whatever the num is, plus the number two. - 1:58 All right? - 1:58 So let's call that. - 2:00 And we're going to pass in four, and we go back six. - 2:05 And again, we just call it by using it's name. - 2:08 But since we said there was an argument, we have to provide the argument. - 2:12 If we don't provide the argument, we get an error. - 2:14 If we split a string into a list, we're able - 2:17 to assign the output of that into a new variable. - 2:20 So, how can we make our functions do that? - 2:22 In fact let's test that. - 2:24 Let's say that six equals add to two and we're gonna pass in four. - 2:30 If I look at six, six doesn't have anything in - 2:34 it, because nothing came back from our function we just printed. - 2:39 What we want to do is use a return statement, or the return key word. - 2:45 This makes our function send back some data. - 2:48 So let's define a new function named square, and it takes a number as well. - 2:56 But it returns that number times itself. - 3:01 So now, let's call square on the number five. - 3:06 This gives us back 25, but that's only because we're inside of the shell. - 3:10 If we weren't working inside the shell, we wouldn't see the 25. - 3:13 So, let's try adding that to something. - 3:16 And let's say two squared is equal to square two. - 3:22 Nothing comes out. - 3:23 And let's check the value of two squared. - 3:27 It's four. - 3:29 Breaking your code up into functions is a great - 3:31 way to help you organize your code and your thoughts. - 3:34 The most repetitive code you can write, the better. - 3:36 In our next video, we'll do even more involving functions.
https://teamtreehouse.com/library/python-basics-retired/putting-the-fun-back-in-function/introduction-to-functions
CC-MAIN-2017-22
refinedweb
912
91.11
Ruminations on technology, software and philosophy. I'm writing some code using the new Managed C++/CLI syntax and I ran into this error: error C2039: 'Dispose' : is not a member of 'System::IDisposable' the code I started with was this: image->Dispose(); // image implements IDisposable which gave me the same compiler error, so I wanted to eliminate a class/namespace error so I rewrote it as this: ((IDisposable ^)image)->Dispose(); Which gave the above error. Yikes! Here's the fix: use delete. Managed C++ now hides Dispose() inside the finalizer. Just delete the object, it handles the rest. Freaky. You've been kicked (a good thing) - Trackback from DotNetKicks.com High CPU Usage Fixer can also speed up your computer performance, deal with specific technical areas of your computer system, safeguarding them from unwanted errors or manipulating your system to let your computer operate on its full abilities.
http://www.atalasoft.com/cs/blogs/stevehawley/archive/2008/08/01/managed-c-and-idisposable.aspx
CC-MAIN-2015-27
refinedweb
149
51.18
QFileDialog::getOpenFileName() not initially focussed I am seeing "unexpected" (but not fatal) behaviour with the static function QFileDialog::getOpenFileName()--- and with QFileDialog::getSaveFileName()--- under Linux (Ubuntu, Unity desktop), Qt 5.7, and using the native file dialog. I do not know what the behaviour is under other flavors of Linux/desktops, nor under Windows. I start from a QDialog. I click a button whose handler calls QFileDialog::getOpenFileName(), with the calling dialog as the parent; I am passing no options. Note that this means I do not pass in QFileDialog::DontUseNativeDialog. There is nothing of note in the code. At this point the native "file selector" dialog opens. It is up-front. However, it is not "focussed". This means, for example that its title, native close/expand buttons, content is "dimmed". I can click on it. As soon as I do so, it becomes "fully focussed", so that its buttons/content etc. "light up" instead of being dimmed. Once that has happened, it never returns to "dimmed", no matter where I click in the windows/desktop. I believe this is the case only via the QFileDialog::getOpenFileName()-type static functions, not the instance ones (though I could be mistaken). I believe this only happens if I use the native dialog, not the Qt one. Wondering if anyone else is able to reproduce this behaviour under Linux, may require Unity desktop, I don't know. But if you have that set up you could have a look. If nobody does have that environment, doubtless I won't get anywhere with this question. - mrjj Qt Champions 2017 Hi Test Case MainWindow->button->released() Dialog d(this); d.exec() d->Button->released()-> QFileDialog::getOpenFileName(); It had focus on Linux mint XFCE. Also on windows 10. I have no UNITY to test on as i dislike it as much as you do windows. :) - SGaist Lifetime Qt Champion Hi, You should share a minimal sample that allows people to test your issue on their system. @SGaist Couldn't be more minimal (no need for a calling dialog, apparently): import sys from PyQt5 import QtWidgets class Main(QtWidgets.QMainWindow): def __init__(self): super().__init__() self.button = QtWidgets.QPushButton("Click", self) self.button.clicked.connect(self.openFileDialog) def openFileDialog(self): QtWidgets.QFileDialog.getOpenFileName(self) if __name__ == '__main__': app = QtWidgets.QApplication(sys.argv) main = Main() main.show() sys.exit(app.exec_()) @mrjj, and @SGaist I had not realised, the pattern is: first time QFileDialogis shown it is correctly focussed, subsequent times it is shown unfocussed. So you must: - Click button to show file dialog. - Click any of the buttons to exit file dialog. - Click button to show file dialog a second time. [I am not able to illustrate how it looks "focussed" at #1 but "dimmed" at #3, because the act of taking a screenshot causes it to become "focussed"/"undimmed", just like clicking anywhere does.] @mrjj In the light of this discovery, would you mind doing your tests again with a second click to display the file dialog again? - mrjj Qt Champions 2017 Oh, that i did when testing. In the Dialog, i open/closed the file dialog multiple times (in same run) on both platforms, @mrjj Yeah, thank you, not unexpected. Since it only happens when using the native dialog from Qt, plus it's to do with windows & focussing. Which is why I said initially that unless someone has Unity to test on, it probably would not get anywhere (and so didn't bother showing code for it) .... Ah well. Unless someone does have a similar environment I'll have to put it down to probably Unity only.... - SGaist Lifetime Qt Champion Are you using your distribution provided PyQt and Qt ? @SGaist Absolutely yes! As are all my users. Everybody wants to use standard, supported apt-getfor installing all packages they use under Linux (for Qt we're all at 5.7, provided for Ubuntu 17.04), nobody is interested in downloading packages from individual-product sites, that's why we're using Linux and not Windows.
https://forum.qt.io/topic/85725/qfiledialog-getopenfilename-not-initially-focussed
CC-MAIN-2018-34
refinedweb
672
58.48
Create a global object Hi would like create a Global object to expose function. And is it possible to add it in a plugin? thanks [edit: fixed typos / $chetankjain ] If I understood you correctly, you can create a class with a static accessor. Something like: @class Global { static Global * self(); private: Global(); static Global * instance; };@ As for the plugin, you can just modify the accessor to get the object from the plugin. I want make a global object like Qt Object to expose functions : "": Or something like @import "file.js" as myObject@ And this global Object is added with import of my QML plugin. This object could be coded in C++ or JS [edit: fixed hyperlink / $chetankjain ] Sorry, didn't realize you are talking about scripting. Haven't played with QML that much, bit if things are inherited from the QtScript module, it should go something like this: @engine.globalObject().setProperty("myObject", engine.newQObject(object));@ After that, your object will be available to qtscript. As for making it pluggable with both c++ and js, I can't really help. (I could think of a dirty way to achieve it, but IMO it is better to wait for our friendly Trolls to tell us the proper way) Thanks. They do it in Qt Declarative code (QDeclarativeScriptEngine constructor in QDeclarativeEngine code) I've an access to QDeclarativeEngine which use a QScriptEngine (internaly) on plugin init function. But i don't find how to access to a QScriptEngine. You could try using setContextProperty() on the root context of the engine -- this will effectively add the object as a context property that is available to all the elements in the engine. From a plugin, you can do so inside the "initializeEngine()": function. I believe there are plans to look at this more for a future release, so plugins can easily add "namespaces" like the Qt object with functions, enum values, etc. With this method, how create QScriptValue with newXXXX function ? "": Another solution that i will test, it's create a component with QDeclarativeComponent "": Make @import "file.js" as myObject@ equivalent in qmldir file could be interesting. [edit: fixed hyperlink / $chetakjain ] hi yan, you could also use the link feature for hyperlinks [quote author="chetankjain" date="1283761818"]hi yan, you could also use the link feature for hyperlinks[/quote] Sorry. I don't see this forum don't make auto-linked. yan wrote: bq. Sorry. I don’t see this forum don’t make auto-linked. That I believe is in the roadmap ... :) [quote author="yan_" date="1283758523"]With this method, how create QScriptValue with newXXXX function ?[/quote] With the setContextProperty() approach, you would use a QObject with slots or Q_INVOKABLEs for your "global" object, rather than a QScriptValue. The Minehunt demo shows this approach (adds a MinehuntGame object in the root context). [quote author="mbrasser" date="1283814387"]With the setContextProperty() approach, you would use a QObject with slots or Q_INVOKABLEs for your "global" object, rather than a QScriptValue. The Minehunt demo shows this approach (adds a MinehuntGame object in the root context).[/quote] Thanks. I've ever test it and it's work perfectly ^^ I find a solution for JS code :D 1- create a qml file wich define function on the root element (I use QtObject) 2- in initializeEngine(), use a"QDeclarativeComponent ":. 3- set qml file path or use setData 4- use create to make a QObject 5- add this QObject with setContextProperty Like QDeclarativeComponent use qml file, i think it's so possible to mix JS and C++ code - johnlamericain It's really impressive, thanks for that !
https://forum.qt.io/topic/765/create-a-global-object
CC-MAIN-2018-30
refinedweb
595
61.56
Return an entry from the group database #include <grp.h> struct group* getgrent( void ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The getgrent() function returns the next entry from the group database, although no particular order is guaranteed. This function uses a static buffer that's overwritten by each call. The next entry from the group database. When you first call getgrent(), the group database is opened. It remains open until either getgrent() returns NULL to signify end-of-file, or you call endgrent(). The getgrent() function uses the following functions, and as a result, errno can be set to an error for any of these calls: /* * ); }
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/g/getgrent.html
CC-MAIN-2018-43
refinedweb
119
66.94
Dear MatPlotLib users, I am having trouble with the performance of matplotlib. For data analysis, I want to be able to place multiple graphs on screen, with multiple lines, each consisting of 16000 data points. I have benchmarked my solution, but it did not perform too well. For example: 6 graphs with 6 lines each, takes 12.5 seconds. This graph indicates my benchmark: In comparison, matlab takes only 2.48 seconds for drawing those. I also noticed that memory usage during the benchmark rises to too high levels. I have, during a different experiment, plotted 36 graphs with 1 line. This is about 9MB of total (x,y) data alltogether, but execution of the benchmark spikes 1GB of memory usage. My question: - Is this performance of matplotlib to be expected? - Can my code (see below) be improved in any way? Thank you very much in advance, Mike ··· ================================ The code I use for the benchmark for nr_of_graphs in range (1,7): for nr_of_lines in range(1,7): root = Tk.Tk() #nr_of_lines = int(argv[0]) #nr_of_graphs = int(argv[1]) m = myLinMultiPlot() m.drawxy("test {0}L on {1}G".format(nr_of_lines, nr_of_graphs), nr_of_graphs, nr_of_lines) root.mainloop() The code that plots the actual lines class myLinMultiPlot(Tk.Toplevel): def drawxy(self, test_name, plots, lines): pointsize = 16000 figure = Figure(figsize=(2,1), dpi=100) storage = [] axes_arr = [] for p in range(0,plots): for li in range(0,lines): shift = li * 100 axes = figure.add_subplot(plots,1,1 + p) axes_arr.append(axes) xarr = xrange(0,16000) yarr = [] for x in xarr: yarr.append(math.sqrt(x + shift)) strg = [xarr,yarr] storage.append(strg) startdraw = timeit.default_timer() for a in axes_arr: for l in storage: a.plot(l[0],l[1]) canvas = FigureCanvasTkAgg(figure, master = self) canvas._tkcanvas.pack(side=Tk.TOP, fill=Tk.BOTH, expand=1) canvas.show() canvas.blit() //This is the time depicted in my benchmark! durationdraw = timeit.default_timer() - startdraw
https://discourse.matplotlib.org/t/performance-after-benchmarking-is-low/16276
CC-MAIN-2021-43
refinedweb
318
61.12
. Just like constructors, destructors should not be called explicitly. However, destructors may safely call other member functions since the object isn’t destroyed until after the destructor executes. A destructor example Let’s take a look at a simple class that uses a destructor:. Under the RAII paradigm, objects holding resources should not be dynamically allocated. This is because destructors are only called when an object is destroyed. For objects allocated on the stack, this happens automatically when the object goes out of scope, so there’s no need to worry about a resource eventually getting cleaned up. However, for dynamically allocated objects, the user is responsible for deletion -- if the user forgets to do that, then the destructor will not be called, and the memory for both the class object and the resource being managed will be leaked!. Rule: If your class dynamically allocates memory, use the RAII paradigm, and don’t allocate objects of your class dynamically. Small thing. In the code for your "A destructor example" shouldn't lines 25,26 have the same indentation as line 28? For consistency's sake, yes. I've fixed the indentation. Thanks! hi. in a destructor example whats the necessary of this line : The getValue function is used on this line of code: It allows you indirect access to the value of the element at ar.m_array[5], which you can't access directly, because m_array is private. Can anyone explain for me what is the return type of getValue(int, int) of the above example, please: What does int& mean? A reference to an integer. See this lesson for more information. Hi! About "A warning about the exit() function": I believe, today OS allocate a virtual address space for each process, and deallocate it after the process has done its work. So, memory leaks aren't something you should worry about once you're done, only when you're working. Well, unless you're writing a code for some unusual OS, that can't do so (like a real-time OS -- googled). I'm not sure if I understand the section on RAII. Under the principles of RAII, is it correct to say that an object can dynamically allocate memory upon initialization, but an object should not be dynamically allocated itself? Right. Classes can clean up after themselves because they have a destructor. If you allocate those classes on the stack, they'll always be cleaned up properly (and resources will be released). However, if you dynamically allocate a class, and forget to delete it, then whatever resource the class was using will never be deallocated (until the program ends). So RAII says don't dynamically allocate stuff outside of classes, because it's too easy to accidentally end up in a situation where it doesn't get cleaned up properly. Hi Alex, I wondered why not simply wrote in line 25 of the first example: m_string = string , since both are pointers (after erasing the const, from constructor's parameter). So making a step by step debugging, I thought that the pointer "string" may point to an address in the stack, and when program leave the constructor's scope, we want the m_string pointer, point to an address in heap. Is the above a reasonable explanation? Yes. For future reference, this question is in regards to this example: Parameter string is a pointer to some array of const chars, but we can't assume that those chars have been allocated on the heap. If they've been allocated on the stack, then at some point they will die. When that happens, m_string would be left pointing to invalid memory, which would be bad. By making a copy of the array passed in on the heap, our class has control over the memory for that array. That said, I'm going to simplify this example. It's needlessly complex to introduce destructors, and we talk more about this subject in the next chapter when we get into shallow vs deep copies. 🙂 Hey Alex I think there's a problem in the first example on this page. In the 10th line you've declared string as a const but in the 14th line you're assigning null to it. This is actually okay. string is a pointer to a const char, not a const pointer itself. That means we can't alter the const data that the pointer is pointing to, but we can change what it is pointing at. On line 14, we simply change string to point at the string literal "". I'm not sure I understood RAII is it basically when you make a dynamically allocated variable with a constructor, and then once you're done with it and the destructor is called it "deletes" it Pretty much. It allows you to create objects that clean up after themselves. Hello Alex. I tried instantiating an object of MyString without any arguments, but the program failed to compile. I guess the default parameter for the class constructor does not work since it's a pointer. I don't really understand what may be wrong. Thanks for your help. The MyString class has a default constructor, so this should have worked. What error did you get? Hi! I have a question for this segment of code. How many times is destructor called? I guess for variables a and b, but what is with references and pointers? Thanks! Good question. References and pointers don't impact the lifetime of a variable, so there is no destructor call when they go out of scope. So there should be only two destructor calls here -- when a and b go out of scope. There's a memory leak though, because you've called "new" but didn't call "delete" Hello. I have two questions. Sorry for the long post. 1. I tried to add into your code a bit on the section: A destructor example, just to test my understanding of this tutorial. However, when I tried to create a vector of type Class (which has a constructor and destructor that deals with dynamically allocated memory), I had errors during the run-time of my program. I am assuming that I had a double delete error since vector follows the RAII paradigm and that the vector class cleans up by itself? I pasted the source code just to avoid any misunderstandings that I am making. I commented the section of code where the error is coming from. When I try to uncomment the code, that's where I encounter the run-time error. 2. I apologize in advance for this but the following code contains a mixture of C but I also tried to take some shortcuts on the code by using strdup function (from C POSIX library, to avoid writing all the copying string code again) just to see if new/delete is different from malloc/free and I still had the same error (when I try to uncomment the code below) even doing it on what it seems to be a "different way". I'm extremely grateful for these tutorials. Again, Thank you. First, use std::string instead of C-style strings. It's much easier. Second, this looks like a deep vs shallow copying issue. You're calling function push_back() with C-style strings instead of Items. So what C++ will do is see if it can turn the string into an Item, which it can, because you provided a constructor. So C++ will construct a temporary (anonymous) Item, and pass that to push_back(). Function push_back will make a copy. Here's where things start going wrong: When Vector makes a copy of your Item, it literally copies m_length and m_label -- this means the item in the Vector is pointing to the same allocated memory as the temporary object. Then, the temporary Item is destroyed, which deletes m_label. This means your Item in the Vector now has a hanging pointer, and thus, runtime error. There are a couple ways around this: 1) Create a copy constructor so that when Item is copied, it does a deep copy instead of a shallow copy. I talk more about these in lessons 9.11 and 9.12. 2) Use std::string. 🙂 Hello Alex, Thank you very much for your reply. I went ahead to read the lesson 9.11 (but not 9.12 yet) and finally implemented the copy constructor to do a deep copy :). Here's the code: I removed the code inside the destructor since I'm avoiding the use of dynamic memory allocation and the objects themselves will be destroyed once they go out of scope. I had trouble checking all the vector elements (using lesson 6.16 intro to vector and 6.12 For-each loops) and I decided to venture ahead to try out your Lesson 16.3 which talked about iterating over vector elements and that solved the problem :). Another interesting problem is that when I tried to explicitly initialize str::string to 0 (std::string str = 0) instead to an empty string (""), that gave me serious compilation errors: "...terminate called after throwing an instance of 'std::logic_error' what(): basic_string::_M_construct null not valid" which was a first time for me to see. Sorry for taking too much of your time :(. I took your advice to heart and use std::string! I'll be going over all your tutorials for sure since there is so much to learn. Again, thank you very much for your time. You're welcome. I think it's awesome that you did this. Experimentation is one of the best ways to learn! Will this delete the dynamic memory or there will be a leak? There are four problems here: 1) You're dereferencing pointer m_nID, but you've never assigned it to point at any memory address. So you're assigning nID to garbage memory. You probable intended to have your constructor dynamically allocate an integer for m_nID? 2) You're deleting m_nID, but you've never dynamically allocated it. This can cause bad things to happen. 3) delete[] is meant to be used for arrays, but you're deleting a scalar (single value). You should be using delete instead. 4) pSimple is not being deallocated. My guess is that deleting pSimple is crashing because of problems 1 and 2. hi alex, i am confused about your first example. How does getter function - char* getString() { return m_string; } return string literal of type char "Alex" while member variable char *m_string is the pointer that holds the adress of "A"? I would expect it to return the address of "A"... It does return just the address of "A". However, since 'l', 'e', 'x', and '\0' will be in sequential memory next to that 'A', we can use the char* pointer as a C-style string. Hey Alex! I'm a bit confused about the first example where in the constructor you had this bit Could you explain how this works to me and why did you do it? Thanks. Sure. This code starts m_length at 0. It then checks to see if string[m_length] is '\0' (the null terminator for the string). If so, we're done. Otherwise, m_length is incremented and we try again. By the time we're done with this loop, string[m_length] will be the terminator for the string, and m_length will be the index of the terminator. If I'm not wrong, the code "A destructor example" lacks from the "main" part where you create an object called "myName" and (I guess) you send the string "Alex" (assuming the are a "fixed" cout that says something as: Oops. I've fixed the example to include a main() function. 🙂 Sorry I’m a newbie, I have few questions in the first example: - Line 8: Is "const" necessary? - Line 12: Why do we have to plus 1 for the terminator? - Line 21: Why do we use "\0" instead of just “0” It's necessary if you want to be able to pass constant strings into the constructor. strlen() returns the length of the string without the terminator, so we have to add an extra character to ensure there's room for the terminator. '\0' is the null character. 0 would work as well, but I think '\0' makes it clearer that we're using this as a terminator. It's purely stylistic. What would happen if I put the two functions (GetString() & GetLength()) before the destructor inside the class (In the first example)? You mean just reorder the function so GetString() and GetLength() appear before the destructor? Nothing, the order of functions and member variables in a class doesn't matter. I'm lost on this line: public: MyString(const char *pchString="") So the constructor takes a const char* called pchString, but what is =" "??? Inst that a pointer? Is that a default value of one space? And how can you assign a value to it? Its a pointer? It's a default value for the pchString parameter. In this case, the default value is the C-style string "", which is an empty string (just a null terminator). That way, if we do something like this: s will be constructed with an empty string, which is what you'd probably expect/want. This uses concepts from lesson 6.8b -- C-Style string symbolic constants and lesson 7.7 -- Default parameter. Hello again Alex. I'm confused about passing "Alex" as a c-style string. I was under the impression that c-style strings had to be declared as an array as per lesson 6.6-- C-Style Strings. When your name is passed to MyString(const char *pchString) wouldn't *pchString be expecting a single char value vice a string? Shouldn't it be: Thanks. In this case, "Alex" is a C-style string literal. See lesson 6.8b -- C-style string symbolic constants for more information about this. in the lesson only main() is used to call the class and talking about destructors, i assume that it is called at the end of the block of any calling function. is that right? and what if the class call was made from other blocks nested in the function? A classes destructor is called when the class variable goes out of scope. For local variables, that's typically at the end of the block in which they're declared (which could be a nested block), but for other kinds of variables, it could also be elsewhere. For example, global variables get destructed when the program ends. Dynamically allocated variables get destructed when the program deletes them. hi alex, what are differenties between "Allocate a Simple dynamically" and "Allocate a Simple on the stack" as you mentioned at the bottom of this tutorials? i don't understand :/ Allocating a variable on the stack is just allocating a normal local variable. Allocating a variable dynamically is using dynamic memory allocation, as covered in lesson 6.9 -- Dynamic memory allocation with new and delete. If we instantiate a class by using new operator, we should call destructor explicitly. It won't be called automatically for the objects which are created on the heap. No. If you instantiate a class via new, you should destroy it via delete, not by calling the destructor. Destroying the object via delete will implicitly call the destructor. Simple well explained tutorial. In your last example on this lesson, you included the characters "->". Is this equivalent to the selector operator "."? The characters "->" are used for member selection from pointers. The following two lines are equivalent: i want to ask that for MyString(const char *pchString="") why "" are needed? what is the uses? The "" simbolyze an empty string. So, the function parameter has a default value of an empty string. For example, you can put "default string" and if the user didn't input any string when he called the MyString function, the parameter will have the default value of "default string". hi alex, just want to ask.destructor cannot be overloaded right? but it can be called more than 1 time. as soon as each object is destroyed. in this code: } // cSimple goes out of scope here Was the destructor called twice? Please explain. thanks Destructors can not be overloaded. They never take any parameters and never return anything. Destructors should not be called more than 1 time, which will happen automatically when the variable goes out of scope, or manually when you delete it. If you try to destruct a variable that has already been destructed, your program will crash. Shouldn't the line where memory for an array of Simple objects is being dynamically allocated read: Simple *pSimple = new Simple[2] (using brackets instead of parenthesis) And isn't it redundant to specify the object type at the beginning of the allocation? Couldn't the line read pSimple = new Simple[2] ? I thought that when dynamic memory is being allocated it was understood that the identifier before the = operator is a pointer of the type specified after the word new. If I was trying to allocate an array of Simple, then you would be correct. I was just trying to allocate a single one, passing the constructor the value 2. It IS redundant to specify the object type when doing an allocation -- however, C++ makes you do it anyway. In the next version of C++, you will be able to use the auto keyword to automatically infer the data type from the assignment. If I have a class named "thing" and I create a static instance of that class named "myThing" like this "static thing myThing". Is the destructor called on this instance? If so when? I would assume it would at the end of the programme but I really don't know. Well I just did a test myself and found that it was as I expected called at the end of the programme if anyone else wants to try this them selves this is what I did #include <iostream> using namespace std; class basic { public: basic() { cout << "constructor called" << endl; } ~basic() { cout << "destructor called" << endl; } }; static basic myThing; int main() { system("pause"); return 0; } It outputs: constructor called pause stuff destructor called I find a lot of time when I have questions in C++ I can answer them just by writing a test program like you did. The answer is that when you have static variables, they get constructed before main() and destructed just before your program ends (assuming you don't exit() early) Thanks, QUOTE: assuming you don’t exit() early ah, is there an easy way around this? to call the destructor on an exit() call? As far as I know, there's no way to explicitly call the destructor on an exit() call. However, instead of using exit() to terminate your program, there are other options. One way is to design your program so it "exits normally" (eg. at the end of main) instead of wherever it happens to be at the moment. A related idea is to throw an exception and catch it in main(), allowing the program to exit normally. Alex, I want to tell you how much I appreciate all of the time you have spent on this tutorial. I have used some of the other online tutorials, (cplusplus.com, cprogramming.com, etc) but I find myself most always coming back here whenever I dont understand something and need a really thorough, cogent, and easy to understand explanation. Hello, can I call the destructor explicitly? Sandor, technically you can... but generally you shouldn't. If the object is dynamically allocated, just delete it using the delete keyword, which will call the destructor anyway. If the object is not dynamically allocated, calling the destructor explicitly is actually dangerous. Consider what would happen if you explicitly call the destructor of a local variable -- the destructor would delete the variable. Then when the local variable went out of scope, the destructor would get called AGAIN. Then bad things happen. Ok, but how can I call it?.I thought of having for example several spaceship objects in the main function. Now for some reason one of those spaceships gets "shot down", so I want to call the destructor to "destroy" it, but I don't want to wait till the main function ends. Sandor, if you need objects that may be destroyed in the middle of a function, the best way to go is to dynamically allocate them in the first place and then delete them using the delete keyword. You should not need to explicitly call the destructor. In the case where you have multiple objects (eg. spaceships), usually the best way to go is an array of pointers. If you need a new spaceship, find the first empty array slot and allocate a new spaceship there. If a spaceship is destroyed, delete it and then set the pointer in the array to NULL. This way, you can have as many spaceships as there are elements in the array. This also gives you the ability to loop through the array to do things to all the spaceships (eg. move them). Use dynamic memory allocation to control the life time of the object. You can create anywhere and destroy it anywhere. When you delete an object, its destructor function will be executed Name (required) Website
http://www.learncpp.com/cpp-tutorial/8-7-destructors/comment-page-1/
CC-MAIN-2018-13
refinedweb
3,568
64.51
You can upgrade your Cloud Data Fusion instances and batch pipelines to the latest platform and plugin versions to obtain the latest features, bug fixes, and performance improvements. The upgrade process involves instance and pipeline downtime (see Before you start). Before you start Plan a scheduled downtime for the upgrade. The process takes up to an hour. Recommended: Before you upgrade, stop any running pipelines and disable any upstream triggers, such as Cloud Composer triggers. When the upgrade begins, all running pipelines stop. If you upgrade to versions 6.3 and above, if any pipelines are running beforehand, Cloud Data Fusion doesn't restart them. In earlier versions, Cloud Data Fusion attempts to restart them. - Install curl. Upgrading Cloud Data Fusion instances To upgrade a Cloud Data Fusion instance to a new Cloud Data Fusion version: In the Cloud Console, open the Instances page. Click on Instance Nameto open the Instance details page. This page lists instance information, including the instance id, region, current Cloud Data Fusion version, logging and monitoring settings, and any instance labels. Then perform the upgrade using either the Cloud Console or gcloud command-line tool: Console Click Upgrade for a list of available versions. Select the version that you prefer. Click Upgrade. Click View instance to access the upgraded instance. Verify that the upgrade was successful by reloading the Instance details page, and then clicking. gcloud Run the following gcloudcommand from a local terminal Cloud Shell session to upgrade to a new Cloud Data Fusion version. Add the --enable_stackdriver_logging, --enable_stackdriver_monitoring , and --labels flags if they apply to your instance. gcloud beta data-fusion instances update \ --project=PROJECT_ID \ --location=REGION \ --version=NEW_VERSION_NUMBER INSTANCE_ID After the command completes, verify that the upgrade was successful. From the Cloud Console, reload the Instance details page, and then click batch pipelines To upgrade your Cloud Data Fusion batch pipelines to use the latest plugin versions: Set environment variables. Recommended: Backup all pipelines. Run the following command, then copy the URL output to your browser to trigger a zip file download. echo $CDAP_ENDPOINT/v3/export/apps Unzip the downloaded file, then confirm that all pipelines were exported. The pipelines are organized by namespace. Upgrade pipelines. Create a variable that points to the pipeline_upgrade.jsonfile that you will create in the next step to save a list of pipelines (insert the PATH to the file). export PIPELINE_LIST=PATH/pipeline_upgrade.json Create a list of all of the pipelines for an instance and namespace using the following command. The result is stored in the $PIPELINE_LISTfile in JSONformat. You can edit the list to remove pipelines that do not need to be upgraded. Set the NAMESPACE_ID field to the namespace where you want the upgrade to happen. curl -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" ${CDAP_ENDPOINT}/v3/namespaces/NAMESPACE_ID/apps -o $PIPELINE_LIST Upgrade the pipelines listed in pipeline_upgrade.json. Insert the NAMESPACE_ID of pipelines to be upgraded. The command displays a list of upgraded pipelines with their upgrade status. curl -N -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" ${CDAP_ENDPOINT}/v3/namespaces/NAMESPACE_ID/upgrade --data @$PIPELINE_LIST to enable Replication Replication can be enabled in Cloud Data Fusion environments in version 6.3.0 or above. If you have version 6.2.3, upgrade to 6.3.0, and then enable Replication. Granting roles for upgraded instances If you upgrade an instance from Cloud Data Fusion version 6.1.x to versions 6.2.0 or above, after the upgrade completes, grant the Cloud Data Fusion runner role and Cloud Storage admin role to Dataproc service account in your project. Adding network tags Network tags are preserved in your compute profiles when you upgrade from Cloud Data Fusion versions 6.2.x and above to a higher version. If you upgrade from version 6.1.x to version 6.2.0 and above, network tags are not preserved. This might cause your Dataproc cluster to get stuck in provisioning state, especially if your environment has restrictive networking and security policies. Instead, in each updated instances, manually add your network tags to each of the compute profiles it uses. To add the network tags to a compute profile: In the Google Cloud Console, open the Cloud Data Fusion Instances page. Click View Instance. Click System Admin. Click the Configuration tab. Expand the System Compute Profiles box. Click Create New Profile. A page of provisioners opens. Click Dataproc. Enter your desired profile information, including your network tags. Click Create. After you add the tags, use the updated profile in your pipeline. The new tags are preserved in future releases. Available versions for your upgrade In general, when you upgrade, we recommend using the latest version of Cloud Data Fusion environment so that your instances run in a supported environment for the longest possible time frame. For more information, see the Version support policy. Depending on your original version, upgrades to some versions might not be available. In those cases, you can upgrade to a version that supports upgrades to your desired version. Cloud Data Fusion supports the following version upgrades: Troubleshooting When you upgrade to version 6.4, there is a known issue with the Joiner plugin where you cannot see join conditions. For more information, see the Troubleshooting page.
https://cloud.google.com/data-fusion/docs/how-to/upgrading
CC-MAIN-2021-49
refinedweb
884
58.38
Delinquent Posted by brucechapman on April 8, 2009 at 10:36 PM PDT With computers you can have the joy of delinquency without anyone getting hurt. Delinquency being different to maliciousness of course. So when I saw that Throwable.initCause() method could throw an IAE if the argument was "this", the delinquent in me saw an opportunity for some harmless fun. public class TheDevilMadeMeDoIt { public static void main(String[] args) { Throwable t1 = new Throwable("t1"); Throwable t2 = new Throwable("t2"); t1.initCause(t2); t2.initCause(t1); t1.printStackTrace(); } } I'll leave the result as an exercise. Blog Links >> - Printer-friendly version - brucechapman's blog - 1592 reads by brucechapman - 2009-04-13 15:22Prague_hotel, Once you try it out you'll see that it is something completely different. My code does not generate a IAE (tho' maybe it should?). :-{)> by prague_hotel - 2009-04-13 10:49Unxpected IAE at: java.awt.image.DirectColorModel.createCompatibleWritableRaster(DirectColorModel Yours seems easier to try out. :)
https://weblogs.java.net/blog/brucechapman/archive/2009/04/delinquent.html
CC-MAIN-2015-22
refinedweb
159
57.87
Package::Rename - Rename or copy package Version 0.02 This module allows you to rename, copy or even remove packages from the perl namespace. This module defines the following functions. They are all optionally exported. Give a package a different name. This is the equivalent of first linking a package, and then removing its original name. Make a 'hard link' of a package, thus giving it a second name. Remove a package from the namespace. You probably don't want to use this yourself unless you really know what you're doing. Copy the complete contents of a package. Leon Timmermans, <leont at cpan.org> This code can cause serious mayham. Use it with care. Please report any bugs or feature requests to bug-package-rename at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes. Perl looks up functions during compile time but methods run time. This fact can be useful (see namespace::clean for an example of that), but also to confusing. You can find documentation for this module with the perldoc command. perldoc Package::Rename You can also look for information at: This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~leont/Package-Rename-0.02/lib/Package/Rename.pm
CC-MAIN-2017-13
refinedweb
225
69.18
Multidimensional data analysis! @thomasaarholt Hi, I try to get quantified EDS from a sample with many elements (Al,C,Co,Cu,Cr,Mo,O,Pt,W,Zn) with a strong peak overlap between Cr-L (0.571 eV) and O-K (0.523 eV). One layer is expected to be CrC but appears as CrO... the quantification output 30% of O in it, which is very unexpected for several reasons. Checking the EDS spectrum manually I can see the peak shift by maybe 1 channel when seeing Cr-L instead of O-K but the model fitting doesn't. I am using the EDS model fitting and after several hours in the documentation I come here to ask: Is there a solution to this? is there a function somewhere that can for instance estimate Cr-L from the Cr-K intensity and subtract it from the O-K signal before saving the O-K intensity? I hope this question is clear enough! Regards, Olivier I am trying to run the following code : import hyperspy.api as hs input_filename = "file1.emd" spim = hs.load(input_filename)[-1] # the output of the load function is a list in which the last element is an EDSTEM object. spim.change_dtype("float") spim.crop(1,70,400) spim.crop(2,0.3,19.0) spim.decomposition(True) It ouputs : ValueError: All the data are masked, change the mask. It seems to me that the crop functions are at the source of this issue since when I comment them everything is fine. I am asking here to check if I am missing something. But I will post an issue on github if not. Hi, I'm not sure if it's just me missing something. But when I import images from Velox EMD (this particular file has 9538 HAADF frames), only the scaling from the first frame is retained in the axes_manager values. I've copied my code snippet below. %matplotlib qt import hyperspy.api as hs import numpy as np import matplotlib.pyplot as plt import scipy import hyperspy.misc as hsm #prevent figure opening plt.ioff() #load file s = hs.load("211217/HeatedTEM/HeatedTEM.emd") print("imported") print(s.axes_manager) #format and save image with time stamp for single_image in s: ####### Failed attempt at scaling single_image.axes_manager[1].scale single_image.axes_manager[0].scale single_image.axes_manager[1].offset single_image.axes_manager[0].offset single_image.axes_manager[1].units single_image.axes_manager[0].units a = single_image.plot(colorbar=False, scalebar=True, axes_ticks=False) plt.axis('off') plt.savefig('test/image %s.png' % str(s.axes_manager.indices), bbox_inches = 'tight', pad_inches = 0.1) plt.close() #single_image.save("test/image %s.png" % str(image_stack.axes_manager.indices)) But every single_image.axis_manager contains the same value as s.axis_manager (in this case 0.37nm) so the scaling is the same for every image Hello in Pycharm I had the below code that worked before with giving interactive hyperspy plots with s.plot() import matplotlib matplotlib.rcParams["backend"] = "Agg" import hyperspy.api as hs I am getting the following error and no plots shown. Please help if possible, and let me know if you need more information. Thank you! WARNING:hyperspy_gui_traitsui:The agg matplotlib backend is not compatible with the traitsui GUI elements. For more information, read WARNING:hyperspy_gui_traitsui:The traitsui GUI elements are not available. [<EDSTEMSpectrum, title: EDS, dimensions: (|4096)>, <EDSTEMSpectrum, title: EDS, dimensions: (|4096)>, <EDSTEMSpectrum, title: EDS, dimensions: (|4096)>, <EDSTEMSpectrum, title: EDS, dimensions: (|4096)>, <Signal2D, title: x, dimensions: (|512, 512)>, <Signal2D, title: HAADF,)>, <EDSTEMSpectrum, title: EDS, dimensions: (512, 512|4096)>] from scipy.misc import face img = face(gray=True) plt.figure() plt.imshow(img, cmap=cmap) plt.colorbar() Hey all, I am trying to get the decomposition of a sum of a section of the frames of a EDS signal. When I load the entire signal with sum_frames=True, the decomposition works fine. But when I load it with sum_frames=False and then sum all the frames using .sum('Time'), the resulting signal looks differently (lower intensity) and the decomposition works but the result is different (worse). What is the difference between the sum_frames=True argument and summing the frames after loading using .sum('Time')? Also, when I try to sum a section of the frames in time (e.g. the first 20) I get the error message: ValueError: All the data are masked, change the mask. Is there a way to avoid this error and get the decomposition over a range of time? Thank you! When I replace the test files with actual files I have on my device (same extensions), and I run the test script I get the following messages: WARNING:hyperspy.io:Unable to infer file type from extension 'ASW'. Will attempt to load the file with the Python imaging library. ERROR:hyperspy.io:If this file format is supported, please report this error to the HyperSpy developers. Hi, I am using EDS models and have trouble to restore the stored ones. I create a model m, fit it and store it, and also copy it to a EDS_model variable #create a model using all selected elements: m = si.create_model() m.fit() m.fit_background() #Reduce the element selection to the one of interest and quantify kfactors = Assign_elements2Quant() xray_lines_selected=si.metadata.Sample.xray_lines m_int_fit = m.get_lines_intensity(xray_lines_selected) m.store() EDS_model = m After this, the "si" is saved to a .hspy file. If i look at the EDS_model, it is fine, I can plot it and so on. EDS_model Out[80]: <EDSTEMModel, title: EDX> But then I try to load the .hspy file again, I see the model is there, with the components, but cannot restore it or plot it... why? l = hs.load(signal_type="EDS_TEM", escape_square_brackets=(True)) l.models Out[82]: └── a ├── components │ ├── Al_Ka │ ├── Al_Kb │ ├── C_Ka │ ├── Co_Ka │ ├── Co_Kb │ ├── Co_La │ ├── Co_Lb3 │ ├── Co_Ll │ ├── Co_Ln │ ├── Cr_Ka │ ├── Cr_Kb │ ├── Cr_La │ ├── Cr_Lb3 │ ├── Cr_Ll │ ├── Cr_Ln │ ├── Mo_Ka │ ├── Mo_Kb │ ├── Mo_La │ ├── Mo_Lb1 │ ├── Mo_Lb2 │ ├── Mo_Lb3 │ ├── Mo_Lg1 │ ├── Mo_Lg3 │ ├── Mo_Ll │ ├── Mo_Ln │ ├── O_Ka │ ├── W_La │ ├── W_Lb1 │ ├── W_Lb2 │ ├── W_Lb3 │ ├── W_Lb4 │ ├── W_Lg1 │ ├── W_Lg3 │ ├── W_Ll │ ├── W_Ln │ ├── W_M2N4 │ ├── W_M3O4 │ ├── W_M3O5 │ ├── W_Ma │ ├── W_Mb │ ├── W_Mg │ ├── W_Mz │ ├── Zn_Ka │ ├── Zn_Kb │ ├── Zn_La │ ├── Zn_Lb1 │ ├── Zn_Lb3 │ ├── Zn_Ll │ ├── Zn_Ln │ └── background_order_6 ├── date = 2022-02-08 12:09:22 └── dimensions = (96, 89|2048) Mymodel = l.models.a.restore() Traceback (most recent call last): File "C:\Users\oldo\AppData\Local\Temp/ipykernel_14272/3459143043.py", line 1, in <module> Mymodel = l.models.a.restore() File "C:\Users\oldo\.conda\envs\hspy_env\lib\site-packages\hyperspy\signal.py", line 82, in <lambda> self.restore = lambda: mm.restore(self._name) File "C:\Users\oldo\.conda\envs\hspy_env\lib\site-packages\hyperspy\signal.py", line 241, in restore return self._signal.create_model(dictionary=copy.deepcopy(d)) File "C:\Users\oldo\.conda\envs\hspy_env\lib\site-packages\hyperspy\_signals\eds_tem.py", line 745, in create_model model = EDSTEMModel(self, File "C:\Users\oldo\.conda\envs\hspy_env\lib\site-packages\hyperspy\models\edstemmodel.py", line 45, in __init__ EDSModel.__init__(self, spectrum, auto_background, auto_add_lines, File "C:\Users\oldo\.conda\envs\hspy_env\lib\site-packages\hyperspy\models\edsmodel.py", line 131, in __init__ Model1D.__init__(self, spectrum, *args, **kwargs) File "C:\Users\oldo\.conda\envs\hspy_env\lib\site-packages\hyperspy\models\model1d.py", line 279, in __init__ self._load_dictionary(dictionary) File "C:\Users\oldo\.conda\envs\hspy_env\lib\site-packages\hyperspy\model.py", line 306, in _load_dictionary id_dict.update(self[-1]._load_dictionary(comp)) File "C:\Users\oldo\.conda\envs\hspy_env\lib\site-packages\hyperspy\component.py", line 1223, in _load_dictionary raise ValueError( ValueError: _id_name of parameters in component and dictionary do not match environment.ymlfile. However, for my analysis I need to use nonlinear functionality, which isn't included in hyperspy v1.6.5 - but cloning RELEASE_next_minoris something which can't be tracked using an environment.ymlfile. Does anyone have a suggestion for how to get nonlinear functionality in a reproducible way? @LMSC-NTappy proposed forking RELEASE_next_minorto my own GitHub, pip installing the fork, and then leaving that fork unchanged - this is what I'll do for now. But I'm always grateful for other suggestions - thanks! line1=line_roi.interactive(data,color='yellow') ValueError: linewidth is not supported for axis with different scale. data.axes_manager Out[53]: <Axes manager, axes: (|118, 81)> Name | size | index | offset | scale | units ================ | ====== | ====== | ======= | ======= | ====== ---------------- | ------ | ------ | ------- | ------- | ------ x | 118 | | 0.043 | 0.14 | nm y | 81 | | -2.8 | 0.14 | nm Hi everyone, I am using Velox to acquire EM images and I am looking to use HyperSpy to help with the analysis. I do not have access to Velox on my personal computer, so I am using HyperSpy to load the EMD files. I am recording videos on Velox, however, to my knowledge it does not appear that I can save each individual frame of the video as a TIF in the Velox software itself (for single images I can easily export the data as a TIF on Velox). Using HyperSpy I am trying to convert the EMD files to TIF. However, I noticed that my scalebar is missing in the TIF when I open it later. Is there a way to save the scalebar information from the metadata using HyperSpy? I could come up with other solutions but I am looking for something more elegant since I know that HyperSpy can access the metadata. I see that there is a module for drawing scalebars in HyperSpy but I have not found a way to implement this correctly. Here is some very basic code that I have written in Python to save the EMD images to TIF: import hyperspy.api as hs import tkinter as tk from tkinter import filedialog root = tk.Tk() root.withdraw() file_path = filedialog.askopenfilename() s = hs.load(file_path) s.plot() # I can see the scalebar from the metadata save_input = input("Would you like to save? (Y/N)? ") if save_input == "Y" or save_input == "Yes" or save_input == "yes" or save_input == "y": save_path = filedialog.asksaveasfilename() s.save(save_path) # I can no longer see the scalebar here, how can I flatten the overlay? else: print("END OF PROGRAM.") Thanks for any help that you can provide! %matplotlib notebookdoes not work for the hyperspy plotting functions. notebookand widgetbackends, none of which seem to give a plot. Hi everyone, I have recorded a HAADF-STEM video in Velox with 500 image frames. During recording, I changed the magnification, zoomed in and out. I am already able to export each individual frame. However, the scalebar is the same for each frame (whereas in Velox it shows different scale bars for each individual frame). Do you have any solution for that? What did NOT work: single_image=complete_dataset.inav[frame_number] print(single_image.axes_manager[0].scale) (The type of the complete_dataset is signal2D). Thanks a lot for your help! :) Hi everyone, I am having trouble saving a signal or multiple signals loaded from .emd format to nexus format using the following command: file_writer("test.nxs",s) I got the following error : AttributeError: 'dict' object has no attribute 'dtype' it works fine using the following command , but the original metadata would be not saved: file_writer("test.nxs",s,save_original_metadata=False) Can someone help please, i need to save the metadata also . Hi all, I am new to hyperspy and still a bit lost getting the thickness out of an EEL spectrum. I have tried this: s_ll = hs.load("20220315/data.dm3") s_ll.set_microscope_parameters(beam_energy = 200, collection_angle=6.0, convergence_angle=0.021) s.set_microscope_parameters(beam_energy = 200, collection_angle=6.0, convergence_angle=0.021) s_ll.plot() s_ll.align_zero_loss_peak(subpixel=True, also_align=[s]) th = s_ll.estimate_elastic_scattering_threshold(window=10) density = 5.515 s_ll_thick = s_ll.estimate_thickness(threshold=th, density=density) and as a result I obtain this: s_ll_thick <BaseSignal, title: E8-FeNiO-PS71-T_0043 thickness (nm), dimensions: (1|)> What does it mean? I would like to obtain a value and not a signal. Sorry, if this is an obvious answer. ;-) Thank you for your help!
https://gitter.im/hyperspy/hyperspy?at=61f40070d41a5853f9725cfa
CC-MAIN-2022-21
refinedweb
1,945
50.94
Asked by: Bitmap decode byte array. Skia Decoder returns false Question - User8355 posted Hi everybody, I'm having a problem when transforming a byte array image to a bitmap. Some images are not shown and others yes(always the same images). All images are RGB, have the same dimensions and are got from a Blob field in a SQLite database. Debugging the process of transforming the byte array, the following error message is logged out: [skia] --- decoder->decode returned false I'm trying to modify my code // Loads a Bitmap from a byte array public static Bitmap bytesToBitmap (byte[] imageBytes) { Bitmap bitmap = BitmapFactory.DecodeByteArray(imageBytes, 0, imageBytes.Length); return bitmap; } However, I am unable to change some workarounds I've found here from java to C#. Could anyone help me? Thanks.Thursday, May 8, 2014 10:04 AM All replies - User44709 posted If it's JPG, then it's probably this bug. If possible, you might want to pre-convert all your images to PNG. If that's not possible, you might try loading it as a .NET Image, save it to a byte array as PNG and then load it into the Android Bitmap. Like this maybe? (this was all off the top of my head, but should give you the idea if it doesn't compile.) System.Drawing.Image image = null; using (MemoryStream ms = new MemoryStream(imageBytes)) { image = System.Drawign.Image.FromStream(ms); } byte[] newData = null; using (MemoryStream ms = new MemoryStream()) { image.Save(ms, ImageFormat.Png); newData = ms.ToArray(); } Bitmap bitmap = BitmapFactory.DecodeByteArray(newData , 0, newData .Length); return bitmap;Thursday, May 8, 2014 12:52 PM - User8355 posted Thank you @PeterDavis, Regarding the first thing you mention, it would be difficult for me to pre-convert the images because I have them only as blob files into a database, not as files. In reference of the second possible solution, is the type System.Drawing.Image available in Xamarin.Android? I can't find it and System.Drawing.Image doesn't compile. If there's no other solution I would have to transform all the blobs as jpg files, transform them into pngs and re-insert them into the database I use in my app. I hope not having to do all this process.Thursday, May 8, 2014 2:04 PM - User44709 posted Well that's embarrassing. That's what I get for coding off the top of my head. Sorry, didn't have a compiler handy. I have another possible solution. Give me a bit of time to get it put together.Thursday, May 8, 2014 2:20 PM - User44709 posted Sorry, I've been struggling to come up with a workable solution, but I'm not having a lot of luck. I'll need to think about this some more. Here's the issue: The solution is to create the FlushedInputStream as described here. You would then create the FlushedInputStream from your byte array and then use it. This gets around the bug in the decoder. The problem is you can't implement FlushedInputStream in C# because Xamarin wants a Stream passed to DecodeStream. It then wraps that in a regular InputStream which will then cause the same failure. The solution, I believe, is going to be to create java library that has the FlushedInputStream and loads your image via the FlushedInputStream and then returns the Android Bitamp to your app. I haven't done this before, so I'd be hesitant to give much more in the way of specifics, but short of using some other image library to handle your images, I don't see a better solution.Thursday, May 8, 2014 2:56 PM - User8355 posted Thanks a lot @PeterDavis?, Neither have I created a java library. I will try, if nobody gives other solutions, to do that searching how to do it. Thanks a lot for your time. And if someone else knows a solution, please share it ;)Thursday, May 8, 2014 3:11 PM - User44709 posted @RogierKoning? This may interest you. Good timing. and, May 8, 2014 7:04 PM - User30820 posted using System; using System.Net; using Android.Graphics; using System.Reflection; namespace Spotlight { class ImageFetchr { public static Bitmap GetImage(string url) { Bitmap _image = null; try { byte[] _byteArray; using(var _wClient = new WebClient()) { _byteArray = _wClient.DownloadData(url); } _image = BitmapFactory.DecodeByteArray(_byteArray, 0, _byteArray.Length); } catch(Exception _exception) { MethodBase _currentMethod = MethodInfo.GetCurrentMethod (); Console.WriteLine(String.Format("CLASS : {0}; METHOD : {1}; EXCEPTION : {2}" , _currentMethod.DeclaringType.FullName , _currentMethod.Name , _exception.Message)); } return _image; } } }Thursday, May 8, 2014 8:13 PM - User1669 posted Could you not decode the bitmap from a memory stream? Something like (off the top of my head): using(var ms = new MemoryStream(imageBytes)) { var myBitmap = BitmapFactory.DecodeStream(ms); } You may need to modify it as I'm doing this without a compiler handy at the moment.Thursday, May 8, 2014 9:05 PM - User44709 posted @rmacias? and @naynishchaughule.5750? - You cannot use DecodeByteArrayat all and you can't DecodeStreamwith a regular stream because of the bug that's causing the problem. The bug has to do with poor error handling in the decoder. The JPEGs he's reading are truncated and causing the decoding to fail. You can use DecodeStream, but you need to implement something like this And unfortunately you can't do that in C#. It has to be done in Java because of the way Mono wraps DecodeStream.Thursday, May 8, 2014 9:20 PM - User8355 posted Thanks a lot @rmacias?, @naynishchaughule.5750? and @PeterDavis?. DecodeByteArray doesn't work as well because the bug is in the BitmapFactory.Decode function. I'm trying to use the universalImageLoader project. However, after doing the parse of the java project I've realized that it only allows to load images from an uri. So I'm trying to modify the java project adding a public class FlushedInputStream extends ByteArrayInputStream which overrides the skip method to avoid the bug. I will continue on Monday and I'll tell you how was it.Friday, May 9, 2014 4:10 PM - User8355 posted I've been working on this for several days without success. I haven't been able to bind the project UniversalImageLoader modificated by me to a C# dll. What I've modified on the UniversalImageLoader project is what follows: interface ImageDownloader: Added method InputStream getStream(String imageUri, byte[] byteArray) throws IOException; BaseImageDownloader: Implemented method @Override public InputStream getStream(String imageUri, byte[] byteArray) throws IOException { return new FlushedInputStream(imagebytes); } FlushedInputStream: Created class public class FlushedInputStream extends ByteArrayInputStream { private ByteArrayInputStream in; public FlushedInputStream(final byte[] imagebytes) { super(imagebytes); in= new ByteArrayInputStream(imagebytes); } @Override public long skip(final long n) { long totalBytesSkipped = 0L; //If totalBytesSkipped is equal to the required number //of bytes to be skipped i.e. "n" //then come out of the loop. while (totalBytesSkipped < n) { //Skipping the left out bytes. long bytesSkipped = in.skip(n - totalBytesSkipped); //If number of bytes skipped is zero then //we need to check if we have reached the EOF if (bytesSkipped == 0L) { //Reading the next byte to find out whether we have reached EOF. int bytesRead = read(); //If bytes read count is less than zero (-1) we have reached EOF. //Cant skip any more bytes. if (bytesRead < 0) { break; // we reached EOF } else { //Since we read one byte we have actually //skipped that byte hence bytesSkipped = 1 bytesSkipped = 1; // we read one byte } } //Adding the bytesSkipped to totalBytesSkipped totalBytesSkipped += bytesSkipped; } return totalBytesSkipped; } } Now I have my personalized UniversalImageLoader.jar but when I try to bind it to C# with a Xamarin Studio binding project I get the following error: Error CS0234: The type or namespace nameDiskLruCache' does not exist in the namespace Com.Nostra13.Universalimageloader.Cache.Disc.Impl.Ext'. Are you missing an assembly reference? (CS0234) (UniversalImageLoader) DiskLruCache is a file I haven't modified at all. I'm exhausted of 'fighting' against this issue. Could anyone help me get rid of this error to be able to generate the binding successfully? Thanks.Wednesday, May 14, 2014 2:26 PM - User8355 posted I was able to generate the dll library from a jar of the UniversalImageLoader project. Editing a file inside the xamarin binding project called api.xml did the trick. On the line 180 it was the line: <class abstract="false" deprecated="not deprecated" extends="java.lang.Object" extends- And adding the visibility like this the binding was created: <class abstract="false" deprecated="not deprecated" extends="java.lang.Object" extends- However. The UniversalImageLoader has the same problem with the images. The images are not shown from an byteArray except the ones that where being shown properly in C# code as well. So unfortunately UniversalImageLoader hasn't done the trick.Friday, May 23, 2014 3:47 PM - User8355 posted Finally got it working!!! I had to make a workaround like this: /// Loads a Bitmap from a byte array public static Bitmap bytesToUIImage (byte[] bytes) { if (bytes == null) return null; Bitmap bitmap; var documentsFolder = Environment.GetFolderPath (Environment.SpecialFolder.Personal); //Create a folder for the images if not exists System.IO.Directory.CreateDirectory(System.IO.Path.Combine (documentsFolder, "images")); string imatge = System.IO.Path.Combine (documents, "images", "image.jpg"); System.IO.File.WriteAllBytes(imatge, bytes.Concat(new Byte[]{(byte)0xD9}).ToArray()); bitmap = BitmapFactory.DecodeFile(imatge); return bitmap; } Note that the file created was missing the ending byte of a .jpeg file "D9" so I had to add it manually. I know for fact that my images had this byte included, and I also tried to generate the bitmap through the byteArray adding "D9" with BitmapFactory.DecodeByteArray but it didn't work. So, the only workaround that works for me is creating a file from the byteArray and decoding that file. Hope it could help someone in the future. Thanks a lot @rmacias?, @naynishchaughule.5750? and of course @PeterDavis? for your time.Friday, May 23, 2014 4:12 PM - User60709 posted @RogierKoning? , Could You Please, post your biding project of the Universal Image Loader (all the changes you had to do). Cause i'm trying to get it right, but I only got the version 1.8.4 working. ThanksTuesday, July 8, 2014 10:20 PM - User35499 posted Yeah if you have this working, posting that code would be super helpful.Friday, August 22, 2014 7:05 PM - User205483 posted Hi guys! I may be late on this, but I went ahead and just used this library and resolved my issue:, June 27, 2016 2:23 PM - User240105 posted @RogierKoning? , thank you very much for your code. Your answer helped me to find my the problem with jpegs. However, the end-of-image is two bytes 0xFF, 0xD9. Also there is no need to save an new image bytes array. using (var stream = Assets.Open("480054_s800.jpg")) { var bytesWithEnd = ReadFully(stream).Concat(new byte[] { (byte)0xFF, (byte)0xD9 }).ToArray(); var bitmap = BitmapFactory.DecodeByteArray(bytesWithEnd, 0, bytesWithEnd.Length); var image = this.FindViewById<ImageView>(Resource.Id.fail_image); image.SetImageBitmap(bitmap); } public static byte[] ReadFully(Stream input) { using (System.IO.MemoryStream ms = new MemoryStream()) { input.CopyTo(ms); return ms.ToArray(); } }Tuesday, October 31, 2017 7:53 AM
https://social.msdn.microsoft.com/Forums/en-US/feff567e-4ca9-4e46-960e-51d98964bd6f/bitmap-decode-byte-array-skia-decoder-returns-false?forum=xamarinandroid
CC-MAIN-2021-31
refinedweb
1,845
58.48
Opened 9 years ago Last modified 4 years ago #2875 enhancement new IFTPShell access implementations are not complete Description (last modified by therve) This method is called in response of the CWD command in the ftp server. However the 2 implementations don't check enough things: - the one in ftp just do a listdir() - the one in vfs in trunk does nothing (and isn't totally fixed by #1264) The expected behavior should be documented, tests added, and both implementations fixed. Attachments (2) Change History (10) comment:1 Changed 9 years ago by therve comment:2 Changed 5 years ago by <automation> - Owner therve deleted comment:3 in reply to: ↑ description Changed 5 years ago by thijs - Cc thijs added comment:4 Changed 4 years ago by adiroiban - Cc adiroiban added - Keywords review added Hi, I would like to have this ticket closed. Below is the current code which checks that the folder/path exists and then checks that it can get the folder list. def access(self, path): p = self._path(path) if not p.exists(): # Again, win32 doesn't report a sane error after, so let's fail # early if we can return defer.fail(FileNotFoundError(path)) # For now, just see if we can os.listdir() it try: p.listdir() except (IOError, OSError), e: return errnoToFailure(e.errno, path) except: return defer.fail() else: return defer.succeed(None) From my point of view, this is a valid implementation which should work across multiple OS. How would you like this implementation fixed? Below is the code for the tests: def test_access(self): """ Try to access a resource. """ self.createDirectory('ned') d = self.shell.access(('ned',)) return d def test_accessNotFound(self): """ access should fail on a resource that doesn't exist. """ d = self.shell.access(('foo',)) return self.assertFailure(d,) There is a test for successful condition and a test for a missing path. There is no test for checking for permission errors. I added a integration test which uses 'chmod' for removing list permissions from a folder. The test is skipped on non Unix systems since there is no chmod command on Windows and I am not aware or any alternatives provided by FilePath. I have attached the diff. Please let me know what needs to be done to have this ticket closed. Thanks! Changed 4 years ago by adiroiban comment:5 Changed 4 years ago by therve - Keywords review removed - Owner set to adiroiban Listing is not enough, at least on posix systems. For example, if you have a directory owned by root, and the permissions are 644, you can do a listdir, but you can't change your path to that directory (as it's missing the 'x' bit). Changed 4 years ago by adiroiban comment:6 Changed 4 years ago by adiroiban - Keywords review added - Owner adiroiban deleted Hi, I have attached a new diff which used 'os.chdir' for Unix systems. I can also go for checking using os.access, but I went for EAFP technique as recommended by Python os.access documentation. --- On Windows, os.access will not work since it can not map execution permission. os.chdir also does not work since I think that internally it checks for execution permissions (maybe by using os.access). For example, if a set the permissions for a folder only to 'Read', and remove all other permissions (including List folder content), I can still browse folder's files in Windows Explorer, I can access child folders. os.listdir will work, but os.chdir will raise a WindowsError:5 permissions error. This is why I went for using os.chdir on Unix and os.listdir on non-Unix (Windows). The permission tests are skipped on non-Unix since os.chmod does not work on Windows. Please take a look at latest changes and let me know what needs to be changed. Cheers, comment:7 Changed 4 years ago by cyli - Owner set to cyli comment:8 Changed 4 years ago by cyli - Keywords review removed - Owner changed from cyli to adiroiban Thank you for working on this adiroiban! - I think it'd be better to add the cleanup as a callback to the Deferred that is returned by self.assertFailure - that way if the cleanup function raises any errors itself, the assertion itself will not fail (and the traceback may be more helpful). Also, I think it is unnecessary to remove the file, since each test has its own temporary directory in _trial_temp. Leaving the file may also be useful for debugging purposes, even if the permissions get reset. - On windows I think you can set permissions if pywin32 is installed:, if you wanted to test with that. - If there are any tests that you want to skip, a more detailed skip string would be great (for instance specifying briefly why the test is not supported). - Could you please include a news file with your patch? Replying to therve: fwiw, #4934 is gone now.
https://twistedmatrix.com/trac/ticket/2875
CC-MAIN-2016-30
refinedweb
827
73.07
This simple method construct Python data structure from XML in one simple step. Data is accessed using the Pythonic "object.attribute" notation. See the discussion below for usage examples. Discussion XML is a popular mean to encode data to share between systems. Despite its ubiquity, there is no straight forward way to translate XML to Python data structure. Traditional API like DOM and SAX often require undue amount of work to access the simplest piece of data. This method convert XML data into a natural Pythonic data structure. For example: >>>>> address_book = xml2obj(SAMPLE_XML) >>> person = address_book.person To access its data, you can do the following: person.gender -> 'm' # an attribute person['gender'] -> 'm' # alternative dictionary syntax person.name -> 'fred' # shortcut to a text node person.phone[0].type -> 'home' # multiple elements becomes an list person.phone[0].data -> '54321' # use .data to get the text value str(person.phone[0]) -> '54321' # alternative syntax for the text value person[0] -> person # if there are only one <person>, it can still # be used as if it is a list of 1 element. 'address' in person -> False # test for existence of an attr or child person.address -> None # non-exist element returns None bool(person.address) -> False # has any 'address' data (attr, child or text) person.note -> '"A <note>"' This function is inspired by David Mertz' Gnosis objectify utilities. The motivation of writing this recipe is for simplicity. With just 100 lines of code packaged into a single function, it can easily be embedded with other code for ease of distribution. known issues. A small nit. It should be noted that if your XML data has an attribute which is a Python keyword, this isn't going to work. For example, using "print" as an attribute is not going to work out well. You could fix this with a little work, say, wrapping attributes in an XMLAttr class, or something. Or, you could simply map names like "print" to python attributes "_print". Or, you can simply accept that this is a limitation of this recipe. :-) Overall, I think the second and third solutions are better than the first. use dictionary syntax. Support iteration. Fixed __getitem__() to better support iteration Multiple items. One thing about this that I find concerning is the possibility of having a schema (just in the abstract sense -- some structure in mind) where some element can have multiple children of the same name, but where that number could just as easily be one. It seems like in this situation, any code that uses this recipe will have to check whether or not the value is a list every time it accesses such a structure. Like, in your example -- the phonetag. If I were using this to insert into a database, I'd always want to get the phone numbers as a list, even if there were only one. (And it seems pretty silly to assume that everyone will have at least two.) Also, what about the reverse -- you're only expecting one value for some element, but it's an improperly constructed file that gives multiple. I suppose you could solve both of these with isinstance() idioms on a case-by-case basis, but it seems like that would get tedious. Can you think of an elegant, Pythonic solution to this? Because I actually encounter this problem all the time parsing similar data structures (GET query-strings, INI-style configuration files, etc.) and I have yet to find a solution I'm completely happy with. It becomes a list of 1. Hi Adam. I hear you. That't why it has some magic to treat a single element as a list of 1. For example there is only 1 person in this XML message. But you can do: If you get the error: TypeError: 'DataNode' object does not support item assignment. A simple fix... In rare cases, you may want to set an item back into the data structure. This worked for me (add to DataNode and fix indentation problems) def __setitem__(self, key, value): self._attrs[key] = value BTW, this is one of the best xml to object mapping snippets I have found. The array handling is particularly nice. If you are a perl programmer looking for a Python equivalent of XML::Simple this is the closest I have seen. I wish to read in an xml, extract data and put data in a .dbf. Can anyone help? Regards. davidgshi@yahoo.co.uk
http://code.activestate.com/recipes/534109/
crawl-002
refinedweb
744
66.44
more precisely this part: System.Collections.Generic.IEnumerable<GeometryBase> Thanks in advance. more precisely this part: System.Collections.Generic.IEnumerable<GeometryBase> Thanks in advance. Means a list of GeometryBase. The documentation of CreatePatch tells you what objects it takes: So create a list with your geometry, then feed it into CreateBrep. If you search the forum for IEnumerable and python you’ll find many discussions on the beast called IEnumerable… Hi @nathanletwory, It is no so much about creating a list of my curves but rather how to call this bloody overload method. I tried this before I ran out of ideas: brep = Rhino.Geometry.Brep.CreatePatch.Overloads[System.Collections.Generic.IEnumerable[Rhino.Geometry.GeometryBase],System.Int32,System.Int32,System.Double](curves_list,4,4,tol) # set up your parameters yourlist = [create your list here] yourstartingsurface = None # or the surface you retrieved somewhere from uspans = 4 vspans = 4 trim = True tangency = True # etc... note, fixedges is a list with four booleans, as per documentation # make the call thebrep = Rhino.Geometry.Brep.CreatePatch(yourlist, yourstartingsurface, uspans, vspans, trim, tangency, pointspacing, flexibility, surfacepull, fixedges, tolerance So I have to use Overloads[] only when I have same number of arguments? Update: We’re coming back to my original question. How to convert from list to IEnumerable? zip? Update2: NVM, I got it: lst = [1,2,3,4,5,6,7] list_enumerable = enumerate(lst) Why is this done? Why is enumerable used instead of a simple list? Nope that’s not it: Done: This is my convertor from python list to IEnumerable: That was tough to figure out, please add this to the examples McNeel def enumegator(py_list=None): if py_list == None:return net_enumerable = System.Collections.Generic.List[Rhino.Geometry.Curve]() for i in py_list: net_enumerable.Add(i) return net_enumerable IEnumerable as argument to a function is used to allow more flexible calls of said function. If the argument would be an array you could only call it by supplying an array. Since Array, List and some other collections all implement the enumerator interface they can be all used as argument on a function call which requires IEnumerable. Otherwise you would need to write a function overload for every type of collection. Ah, I see, this is that bad-typing disadvantage of .net (csharp) Hmm, actually if you think about it, this is a way csharp developers try to copy the beautiful duck-typing of python. Since this IEnumerable can be numerous list-types. if an rs… method exists, I always suggest you look at that code first to see how they structured it. In the case of AddPatch (which uses Brep.CreatePatch) you can see how they got all the different geometry types in there… Good point. I usually do that, in this case I started my development from a creating a planar surface, and got to patch with trial and error. At that point I was more focused on RhinoCommon and totally forgot to check rhinoscriptsyntax @Helvetosaur, another confusion is the decision where to look for the solution. The rhinoscriptsyntax is one thing, but they are not added to the documentation as examples. Also there are some examples in github, some examples in the documentation of RhinoCommon, and also some articles in the wiki.mcneel.* Also a lot of examples in discourse.mcneel.com I believe the solution should be some search engine to index all examples from these sourses. Yep, would be a good idea. I struggled with the same thing for an hour now, IEnumerable is tough to figure out… and to understand. Thanks for posting this here!
https://discourse.mcneel.com/t/how-to-use-brep-createpatch-method/84326
CC-MAIN-2021-04
refinedweb
597
56.35
I have found a scenario with the rules engine where the defaults in the Rule Composer do not produce the desired results when dealing with a node that contains repeating elements. The XML document that I was dealing with had the following structure. <Package> <Items> <Item>Description for Item 1</Item> <Item>Description for Item 2</Item> <Item>Description for Item 3</Item> </Items> </Package> In the Business Rule Composer when I add the schema in the Facts Explorer to the XML Schemas tab, the Rules Composer creates the following XPaths by default on the Item node. XPath Field: *[local-name()=’Item’ and namespace-uri()=”] XPath Selector: /*[local-name()=’Package’ and namespace-uri()=”]/*[local-name()=’Items’ and namespace-uri()=”] This set of default XPath statements will only Assert the first of the repeating elements. Before going into the modification for the XPath statements, lets look at what we are modifying. The way that I think of these XPath properties are that the selector XPath isolates a portion of the XML document (you can use many selectors within the same document) and the field XPath identifies specific items with the selector. In the rules engine, all fields inside the selector are grouped together as an object. If the selector matches multiple portions of the XML document then there are multiple objects asserted into the rules engine working memory. So in our situation we want each Item node of our XML to be a field object In order to get the rules engine to recognize that these elements repeat, you need to modify the XPath statements as follows: XPath Field: ‘.’ (without the single quotes) XPath Selector: /*[local-name()=’Items’ and namespace-uri()=”]/*[local-name()=’Item’ and namespace-uri()=”]’ Once this change has been made, these XPath settings will result in the Assertion of each of the three Item elements.
https://blogs.msdn.microsoft.com/skaufman/2004/12/30/the-rules-engine-and-repeating-elements/
CC-MAIN-2017-22
refinedweb
307
51.11
Displaying image overlays on image filenames in Emacs Posted March 21, 2016 at 11:21 AM | categories: emacs, orgmode | tags: | View Comments Table of Contents It has always bothered me a little that I have to add a file image after code blocks in org-mode to see the results. That extra work… I also don't like having to explicitly print the figure in the code, since that is the extra work, just in a different place. Today I look into two approaches to this. First, we consider something like tooltips, and second just putting overlays of image files right on the file name. The plus side of this is no extra work. The downside is they won't export; that will still take the extra work, but you needed that for the caption anyway for now. Here is a video illustrating the code in this post: Here is a test. import matplotlib.pyplot as plt plt.plot([0, 1, 2, 4, 16]) plt.savefig("test-fig.png") 1 Tooltip approach Building on our previous approach of graphical tooltips, we try that here to show the images. I have solved the issue of why the images didn't show in the tooltips before; it was related to how Emacs was built. I used to build it with "cocoa" support so it integrates well in OSX. Here, I have build it with gtk3, and the tooltips work with images. (defvar image-tooltip-re (concat "\\(?3:'\\|\"\\)\\(?1:.*\\." (regexp-opt '("png" "PNG" "JPG" "jpeg" "jpg" "JPEG" "eps" "EPS")) "\\)\\(?:\\3\\)") "Regexp to match image filenames in quotes") (defun image-tooltip (window object position) (save-excursion (goto-char position) (let (beg end imgfile img s) (while (not (looking-at image-tooltip-re)) (forward-char -1)) (setq imgfile (match-string-no-properties 1)) (when (file-exists-p imgfile) (setq img (create-image (expand-file-name imgfile) 'imagemagick nil :width 200)) (propertize "Look in the minibuffer" 'display img))))) (font-lock-add-keywords nil `((,image-tooltip-re 0 '(face font-lock-keyword-face help-echo image-tooltip)))) (font-lock-fontify-buffer) Now these both have tooltips on them: "test-fig.png" and 'test-fig.png'. 2 The overlay approach We might alternatively prefer to put overlays in the buffer. Here we make that happen. (defun next-image-overlay (&optional limit) (when (re-search-forward image-tooltip-re limit t) (setq beg (match-beginning 0) end (match-end 0) imgfile (match-string 1)) (when (file-exists-p imgfile) (setq img (create-image (expand-file-name imgfile) 'imagemagick nil :width 300)) (setq ov (make-overlay beg end)) (overlay-put ov 'display img) (overlay-put ov 'face 'default) (overlay-put ov 'org-image-overlay t) (overlay-put ov 'modification-hooks (list 'org-display-inline-remove-overlay))))) (font-lock-add-keywords nil '((next-image-overlay (0 'font-lock-keyword-face t))) t) Here is the example we looked at before. import matplotlib.pyplot as plt plt.plot([-0, 1, 2, 4, 16]) plt.savefig("test-fig.png") You may want to remove those overlays. Here is one way. Note they come back if you don't disable the font-lock keywords though. (ov-clear 'org-image-overlay) I know you want to do that so here is: (font-lock-remove-keywords nil '((next-image-overlay (0 'font-lock-keyword-face t)))) (ov-clear 'org-image-overlay) Note you still have to clear the overlays. Font lock doesn't seem to do that for you I think. Copyright (C) 2016 by John Kitchin. See the License for information about copying. Org-mode version = 8.2.10
http://kitchingroup.cheme.cmu.edu/blog/2016/03/21/Displaying-image-overlays-on-image-filenames-in-Emacs/
CC-MAIN-2017-22
refinedweb
594
55.03
There are several ways to save or read data from a mobile device. Reading and writing to the Internet can be done with Connection classes, but can be labourious without specific helper classes. The simplest way of saving your data is with a MoSync store. This tutorial looks at creating stores, writing to them and reading them. The simplest way of saving your data is with a MoSync store. Stores are saved on the device, and are supported on all MoSync-supported platforms. A store is a single file that can easily be read from and written to. On Symbian S60 platforms, stores can be shared between different applications; on other platforms stores are private and separate. To create a store, use the function maOpenStore(). This function takes two parameters, a const char* with the name of the store, and an int representing the options for the store. At the moment, there is only one option, you can choose to use it or not. This is defined as MAS_CREATE_IF_NECESSARY and it will create a new store if it doesn't exist. The function maOpenStore() returns an MAHandle, a reference to the store that you can use with other functions. MAHandle myStore = maOpenStore("MyStore", MAS_CREATE_IF_NECESSARY); If the store doesn't exist on the phone, it will be created and an MAHandle will be returned. If it does exist, it will just return the handle. If the file cannot be created (for example, if there is no room on the device or there is a fault on the device) the value returned will match one of the STERR error values. When you run your application in the MoRE emulator, you'll find that the stores are created in a folder called /stores in your output folder. Stores cannot currently be deployed with your application, and they have to be created at runtime. If you want to find out whether or not a store exists, call maOpenStore() with the flags set to 0. The value STERR_NONEXISTENT is returned if the store doesn't already exist. MAHandle testStore = maOpenStore("MyStore", 0); if(myStore == STERR_NONEXISTENT) { // Store doesn't exist. } else { // Store does exist, and testStore is a valid handle to it. } Stores don't have a sophisticated system for accessing and writing. They simply provide a method for saving application data to the device. Each time a store is written, it is overwritten. You cannot append to existing stores or edit the store directly. A store is written to from a data resource. This isn't resizable, but you can edit the contents, and you can write variables into it and read their values out again. You can create new data resources at runtime, but you need to define an MAHandle for it first. You also need to know how much data you need to store. To create a new data resource, you use the command maCreateData(). This returns either RES_OK if there is enough memory to create it or RES_OUT_OF_MEMORY if not. You need to pass the MAHandle to it as a parameter, along with the size. You can create a new, empty MAHandle with the function maCreatePlaceholder(). String password = "p45sw0rd"; MAHandle myData = maCreatePlaceholder(); if(maCreateData(myData, password.length()) == RES_OK) { } You can now write data using the method maWriteData(). String password = "p45sw0rd"; MAHandle myData = maCreatePlaceholder(); if(maCreateData(myData, password.length()) == RES_OK) { maWriteData(myData, password.c_str(), 0, password.length(); } The maWriteData() function needs the MAHandle of the data resource, a pointer to the object you want to write, the offset from the beginning of the data resource and the length of the data you wish to write. This means that you can write the variable to where ever you want in the data resource. If you are writing several pieces of data, you'll either need to keep track of their relative positions in the data, or user DataHandler as described below to do it for you. To write the data resource to the store, you use the method maWriteStore(). This takes the MAHandle of the store, and the MAHandle of the data. int result = maWriteStore(myStore, myData); Anything which was in the store previously is overwritten. If maWriteStore() returns > 0, then it has been written correctly. Other values should map to STERR. The one to watch out for here is STERR_FULL which means that there is no more storage left. int result = maWriteStore(myStore, myData); switch(result) { case > 0: // Everything is fine, and the data is saved. break; case STERR_FULL: // Failed, not enough space to save the data. break; case STERR_NONEXISTENT: // Failed, the store doesn't exist! break; case STERR_GENERIC: // Unknown error, possibly a device fault. break; } As a word of warning, although our example shows a password being written to a store you should be aware that this data is not completely private. Different systems provide different security measures. On a J2ME device, such data is fairly safe. A hacker would have to know a lot about the data to be able to access it. On S60 devices however, the user can freely browse the stores you've written using their file manager. Plain text stores can be opened up and read, and images you've written to the store can be displayed. If there is any sensitivity to the data you are writing, then it is strongly suggested that you encrypt it before you write it to the store. When you've finished writing, you also need to close the store. This is done with the function maCloseStore(). maCloseStore(myStore, 0); The second parameter (0 in this example), indicates whether or not the store should be deleted when you close it. If it is 0, you want to keep it; if you provide any other int value the store will be deleted. This is how stores are deleted, there is no other explicit function to delete stores. void deleteStore(const char* storeName) { MAHandle store = maOpenStore(storeName, 0); if(store != STERR_NONEXISTENT) { // Delete the store. maCloseStore(store, 1); } } Just as you write to stores from data resources, you read from them in the same way. The function maReadStore() copies the data in a store to a data resource, indicated by an MAHandle. MAHandle myData = maCreatePlaceholder(); MAHandle myStore = maOpenStore("MyStore", 0); if(myStore != STERR_NONEXISTENT) { // The store exists, so we can read from it. int result = maReadStore(myStore, myData); if(result == RES_OUT_OF_MEMORY) { // This store is too large to read into memory - error } } Once the store has been read into the data resource, the values can be read out of it. Of course you need to know how much data to read: you can get the size of the data resource with the function maGetDataSize(), and then read the data using the method maReadData(). char password[maGetDataSize(myData)]; maReadData(myData, &password, 0, maGetDataSize(myData)); To use maReadData, you need to know how many bytes to read from the data resource. In the example above, the password is the only data being stored, so we can easily read out all of the data. If the store contains both a username and a password, then it has to be a bit more sophisticated. You are probably not using fixed lengths for strings, so you need to store some extra data about how long each string is. If you put a byte containing the length of the string (or an int if it is a lot) before the actual string data, then you can read it back more easily. These are called Pascal strings or P-Strings, and you can also use them in MoSync resource files. To read out a P-string, you need to read the first byte. // Read a p-string. byte len = 0; // You now need to track your position in the data. int position = 0; // Read the length. maReadData(myData, &len, position, 1);// Read 1 byte from position // Move onto the next byte. position++; // Read the string. char password[len + 1]; //String may not be null-terminated. maReadData(myData, &password, position, len); // Add a null-terminator. password[len+1] = '/0'; // Move the position on, so we're ready for the next read. position += len; So, if you are reading more than one item out of the data, then you need to keep track of your place in the data. Once you've finished with the data, you can release it from memory. maDestroyObject(myData); This deletes the data resource from memory, but does not destroy the store. If you want to write several different values to the data resource, and then to the store, then there is a utility called DataHandler. This can help you write several values to the resource by tracking the offsets for you. You need to include the header file MAUtil/DataHandler.h #include "MAUtil/DataHandler.h" using namespace MAUtil; ... String username = "m0sync"; String password = "p45sw0rd"; // Save the values to a data resource. MAHandle myData = maCreatePlaceholder(); // The size of the data we need, including four bytes for // the length of each string. int size = username.length() + 4 + password.length() + 4; if(maCreateData(myData, size) == RES_OK) { DataHandler* handler = new DataHandler(myData); int usernameLength = username.length(); handler->write(&usernameLength, 4); handler->write(username.c_str(), usernameLength); int passwordLength = password.length(); handler->write(&passwordLength, 4); handler->write(password.c_str(), passwordLength); } int result = maWriteStore(myStore, myData); if(result > 0) { // Written successfully. maCloseStore(myStore, 0); } else { // Failed, delete the store. maCloseStore(myStore, 1); } As you can see from the above example, the DataHandler class takes away some of the complexity of reading and writing to stores, treating them more like a Java StreamReader or StreamWriter class. Unlike the example above where we were reading data out of the resource and having to keep track of the position, the DataHandler will manage it for us, and allow serial access.
http://www.mosync.com/docs/sdk/cpp/guides/storage/reading-and-writing-data/index.html
CC-MAIN-2013-48
refinedweb
1,621
65.32
The Gallery application is the display for the repository of images located on the SD card and accessible by various applications. Launch it, choose an image, and then select Menu→Share. A list of applications (such as Picasa, Messaging, and Email) appears, a convenient way to upload media from the device to another destination (see Figure 9-1). The flash.media.CameraRoll class is a subclass of the EventDispatcher class. It gives you access to the Gallery. It is not supported for AIR desktop applications. Selecting an Image You can test that your device supports browsing the Gallery by checking the supports BrowseForImage property: import flash.media.CameraRoll; if (CameraRoll.supportsBrowseForImage == false) { trace(“this device does not support access to the Gallery”); return; } If your device does support the Gallery, you can create an instance of the CameraRoll class. Make it a class variable, not a local variable, so that it does not lose scope: var cameraRoll:CameraRoll = new CameraRoll(); You can add listeners for three events: - A MediaEvent.SELECT when the user selects an image: import flash.events.MediaEvent; cameraRoll.addEventListener(MediaEvent.SELECT, onSelect); - An Event.CANCEL event if the user opts out of the Gallery: import flash.events.Event; cameraRoll.addEventListener(Event.CANCEL, onCancel); function onCancel(event:Event):void { trace(“user left the Gallery”, event.type); } - An ErrorEvent.ERROR event if there is an issue in the process: import flash.events.ErrorEvent; cameraRoll.addEventListener(ErrorEvent.ERROR, onError); function onError(event:Event):void { trace(“Gallery error”, event.type); } Call the browseForImage function to bring the Gallery application to the foreground: cameraRoll.browseForImage(); Your application moves to the background and the Gallery interface is displayed, as shown in Figure 9-2. When you select an image, a MediaEvent object is returned. Use its data property to reference the image and cast it as MediaPromise. Use a Loader object to load the image: import flash.display.Loader; import flash.events.IOErrorEvent; import flash.events.MediaEvent; import flash.media.MediaPromise; function onSelect(event:MediaEvent):void { var promise:MediaPromise = event.data as MediaPromise; var loader:Loader = new Loader() loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onImageLoaded); loader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR,onError); loader.loadFilePromise(promise); } The concept of MediaPromise was first introduced on the desktop in a drag-and-drop scenario where an object doesn’t yet exist in AIR but needs to be referenced. Access its file property if you want to retrieve the image name, its nativePath, or its url. The url is the qualified domain name to use to load an image. The nativePath refers to the hierarchical directory structure: promise.file.name; promise.file.url; promise.file.nativePath; Let’s now display the image: function onImageLoaded(event:Event):void { addChild(event.currentTarget.content); } Only the upper-left portion of the image is visible. This is because the resolution of the camera device is much larger than your AIR application stage. Let’s modify our code so that we can drag the image around and see all of its content. We will make the image a child of a sprite, which can be dragged around: import flash.events.MouseEvent; import flash.display.DisplayObject; import flash.geom.Rectangle; var rectangle:Rectangle; function onImageLoaded(event:Event):void { var container:Sprite = new Sprite(); var image:DisplayObject = event.currentTarget.content as DisplayObject; container.addChild(image); addChild(container); // set a constraint rectangle to define the draggable area rectangle = new Rectangle(0, 0, -(image.width – stage.stageWidth), -(image.height – stage.stageHeight) ); container.addEventListener(MouseEvent.MOUSE_DOWN, onDown); container.addEventListener(MouseEvent.MOUSE_UP, onUp); } function onDown(event:MouseEvent):void { event.currentTarget.startDrag(false, rectangle); } function onUp(event:MouseEvent):void { event.currentTarget.stopDrag(); } It may be interesting to see the details of an image at its full resolution, but this might not result in the best user experience. Also, because camera resolution is so high on most devices, there is a risk of exhausting RAM and running out of memory. Let’s now store the content in a BitmapData, display it in a Bitmap, and scale the bitmap to fit our stage in AIR. We will use the Nexus One as our benchmark first. Its camera has a resolution of 2,592×1,944. The default template size on AIR for Android is 800×480. To complicate things, the aspect ratio is different. In order to preserve the image fidelity and fill up the screen, you need to resize the aspect ratio to 800×600, but some of the image will be out of bounds. Instead, let’s resize the image to 640×480. The image will not cover the whole stage, but it will be fully visible. Take this into account when designing your screen. First, detect the orientation of your image. Resize it accordingly using constant values, and rotate the image if it is in landscape mode: import flash.display.Bitmap; import flash.display.BitmapData; const MAX_HEIGHT:int = 640; const MAX_WIDTH:int = 480;; if (isPortrait) { bitmap.width = MAX_WIDTH; bitmap.height = MAX_HEIGHT; } else { bitmap.width = MAX_HEIGHT; bitmap.height = MAX_WIDTH; // rotate a landscape image bitmap.y = MAX_HEIGHT; bitmap.rotation = -90; } addChild(bitmap); } The preceding code is customized to the Nexus One, and it will not display well for devices with a different camera resolution or screen size. We need a more universal solution. The next example shows how to resize the image according to the dynamic dimension of both the image and the stage. This is the preferred approach for developing on multiple screens:; // choose the smallest value between stage width and height var forRatio:int = Math.min(stage.stageHeight, stage.stageWidth); // calculate the scaling ratio to apply to the image var ratio:Number; if (isPortrait) { ratio = forRatio/bitmapData.width; } else { ratio = forRatio/bitmapData.height; } bitmap.width = bitmapData.width * ratio; bitmap.height = bitmapData.height * ratio; // rotate a landscape image and move down to fit to the top corner if (!isPortrait) { bitmap.y = bitmap.width; bitmap.rotation = -90; } addChild(bitmap); } Beware that the browseForImage method is only meant to load images from the Gallery. It is not for loading images from the filesystem even if you navigate to the Gallery. Some devices bring up a dialog to choose between Gallery and Files. If you try to load an image via Files, the application throws an error. Until this bug is fixed, set a listener to catch the error and inform the user: cameraRoll.browseForImage(); cameraRoll.addEventListener(ErrorEvent.ERROR, onError); function onError(event:Event):void { if (event.errorID == 2124) { trace(“you can only load images from the Gallery”); } } If you want to get a list of all the images in your Gallery, you can use the filesystem as follows: var gallery:File = File.userDirectory.resolvePath(“DCIM/Camera”); var myPhotos:Array = gallery.getDirectoryListing(); var bounds:int = myPhotos.length; for (var i:uint = 0; i < bounds; i++) { trace(myPhotos[i].name, myPhotos[i].nativePath); } Adding an Image You can add an image to the Gallery from within AIR. To write data to the SD card, you must set permission for it: <uses-permission android:name=”android.permission.WRITE_EXTERNAL_STORAGE” /> Check the supportsAddBitmapData property to verify that your device supports this feature: import flash.media.CameraRoll; if (CameraRoll.supportsAddBitmapData == false) { trace(“You cannot add images to the Gallery.”); return; } If this feature is supported, create an instance of CameraRoll and set an Event.COM PLETE listener. Call the addBitmapData function to save the image to the Gallery. In this example, a stage grab is saved. This feature could be used for a drawing application in which the user can draw over time. The following code allows the user to save his drawing, reload it, and draw over it again: var cameraRoll:CameraRoll; cameraRoll = new CameraRoll(); cameraRoll.addEventListener(ErrorEvent.ERROR, onError); cameraRoll.addEventListener(Event.COMPLETE, onComplete); var bitmapData:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight); bitmapData.draw(stage); cameraRoll.addBitmapData(bitmapData); function onComplete(event:Event):void { // image saved in gallery } Remember that the image that is saved is the same dimension as the stage, and therefore it has a much smaller resolution than the native camera. At the time of this writing, there is no option to specify a compression, to name the image, or to save it in a custom directory. AIR follows Android naming conventions, using the date and time of capture. In every course cenconted to programming i had there was some place where you can find and download examples from lessons to check the code and study it little more. Not sure about Multimedia authoring part 1 though, just don’t remember it was kind of not very involving course anyway, too fast, too neutral to students in my opinion Will be nice to have such a place for students in this course, so, if i miss one class, i can follow it’s learning curve Thanx!
https://www.blograby.com/developer/the-gallery-application-and-the-cameraroll-class.html
CC-MAIN-2018-30
refinedweb
1,452
50.53
> [snip] >>We've got font debs ready to go. > > > Please use non-reserved font names, so that Debian is allowed to add > missing glyphs to the fonts. Hi, I'm not sure I understand what you mean here. The idea behind using reserved font names is to avoid conflicting namespace between upstream and the various derivatives offering different Unicode coverage and features. It's about keeping users from expecting a feature which may not be present in a particular font. (See FAQ entries 2.7 and 2.8 for more details: ) . And yes, we'd like to get the right free software font licensing model *recognized* by the Debian community so that *any DD or Debian contributor* can improve the fonts either through patches sent upstream or through a derivative offering better coverage of a specific script (or Unicode block). It's true that there are *a lot* of missing glyphs that need adding. What we intend the license to provide is a good collaborative layer that will ultimately allow many more users to enjoy Debian in their own language :-D Ps: sorry for the late reply -- Nicolas
https://lists.debian.org/debian-legal/2005/12/msg00074.html
CC-MAIN-2018-09
refinedweb
188
63.29
- If every click or letter typed becomes a Redux action, then it can actually slow your app down. - Using the right directory structure for your app makes imports easier to understand and modify later on. - Choose the right directory structure for your app - Redux is a technology that you can gradually phase into your React app. - Once you get the hang of it, you can use Redux for new features you implement, as well as slowly optimizing your app to get the most out of Redux. When I first arrived at Agolo a year ago, I was handed a codebase developed entirely in Meteor 1.0. Meteor then (and now) defaulted to… @meteorjs: Our #Redux Migration (and 5 tips for adoption in a mature codebase) by @tomgoldenberg #reactjs When I first arrived at Agolo a year ago, I was handed a codebase developed entirely in Meteor 1.0. Meteor then (and now) defaulted to Blaze as a rendering engine, with some pros and cons. One of the first things I advocated for was migrating the front-end code to React. The migration from Blaze to React took some time (2–3 weeks), but was worth it. It gave me a chance to get my hands on the entire codebase and rewrite parts that were confusing and under-documented. Many months later, when our client-side application state became more complex, I explored Redux as a possible solution and pitched its advantages to my team. Although I was given the go-ahead to refactor our application state in Redux, I struggled with the task at the time. It has taken experience to realize some of my mistakes in structuring the code and data flow. So, I’d like to share some lessons learned from the experience in the hope that it may save time for other developers :). There are several approaches to structuring your code, which can make things confusing for a beginner. In a simple Redux example, the directory structure might look like this: Because of these examples, you might be tempted to put all actions in the actions folder, all components in the components folder, and so on. This quickly becomes a big mess in a complex app. Instead, a structure like this reads a lot easier: By doing this, you get nice references like: import * as actions from ‘../actions’; instead of this: import * as actions from ‘../../splash/actions’; A big part of writing good code is readability. Using the right directory structure for your app makes imports easier to understand and modify later on. 2. Remember: You don’t always need to Redux Observe the following tweet by Dan Abramov, creator of Redux: When you first learn about Redux (or Flux), the idea of a single source of truth for application state sounds like brilliant, novel idea. This can easily make one over-zealous, forcing one to convert every piece of state into a Redux reducer. However, this doesn’t make for very scaleable apps. If every click or letter typed becomes a Redux action, then it can actually slow your app down. First decide what state should be shared across views, and then use Redux for that. Other state that only affects a single view might be better off in a localized state. Find the right balance for your application, but remember that all good things can be taken too far. 3. Use descriptive names for action types Again, in the simple examples, you’ll often find action types such as this: export const UPDATE_USER = ‘UPDATE_USER’; export const SET_POSTS = ‘SET_POSTS’; It helps to be more descriptive, linking each action type with a particular view: export const UPDATE_USER = ‘accounts/UPDATE_USER’; export const SET_POSTS = ‘posts/SET_POSTS’; This makes for more readable logs, which can make debugging less painful. 4. Learn properly before refactoring your codebase By now, there are a plethora of resources for getting started with Redux. This can be good and bad, because you might not know where to start. Here is what I would recommend as steps to take before introducing Redux to your company. 5. Have a plan and pitch the business value As a programmer, it’s easy to fall into the trap of pitching an idea solely on its technical merits. Often, you will need to get the green light from someone who wants to know the business value of the proposal. In the case of Redux, I highlighted how it would make debugging much easier and our app easier to extend in the future. I also used my free time out of work to give a simple prototype of how our company app would look with Redux, and a realistic estimate on how long it would take to bring the changes to the entire app. In any organization, incremental changes are always easier to get approval for than radical, ground-up ones. Luckily, Redux is a technology that you can gradually phase into your React app. Just pick an aspect of your application state that is heavily shared across components and focus on bringing that into a Redux store. A good place to start is user account data, since this is usually shared across your application. Once you get the hang of it, you can use Redux for new features you implement, as well as slowly optimizing your app to get the most out of Redux.
http://blog.agileactors.com/blog/2016/9/27/our-redux-migration-and-5-tips-for-adoption-in-a-mature-codebase
CC-MAIN-2018-39
refinedweb
894
58.21
Description: Library for HopeRF RFM22 transceiver module ported to mbed. Original Software from Mike McCauley (mikem@open.com.au) . See « Back to documentation index RF22Router::RoutedMessageHeader Struct Reference Defines the structure of the RF22Router message header, used to keep track of end-to-end delivery parameters. More... #include <RF22Router.h> Detailed Description Defines the structure of the RF22Router message header, used to keep track of end-to-end delivery parameters. Definition at line 139 of file RF22Router.h. Field Documentation Destination node address. Definition at line 141 of file RF22Router.h. Originator flags. Definition at line 145 of file RF22Router.h. Hops traversed so far. Definition at line 143 of file RF22Router.h. Originator sequence number. Definition at line 144 of file RF22Router.h. Originator node address. Definition at line 142 of file RF22Router.h.
http://mbed.org/users/charly/code/RF22/docs/tip/structRF22Router_1_1RoutedMessageHeader.html
CC-MAIN-2013-20
refinedweb
135
54.29
I assume that you are familiar with the publish-subscribe pattern, the .NET way, through events. The publisher fires an event and the subscriber consumes that event. I also assume that you are familiar with the Invoke() method and InvokeRequired property. In a multithreading environment the UI thread is the only thread allowed to manipulate the controls that it creates. The running thread may use the InvokeRequired property to determine if a thread-context-switch is needed. Invoke() InvokeRequired Your best way to read this article is on a dual monitor system where the article is displayed on one monitor and the accompanying code is displayed on the other monitor. If a dual monitor system is not available to you, then you may care to flip between the article and the accompanying code displays. After you finish reading the article, read the accompanying code on your own in its entirety, as there are aspects of the code I did not go through in this article. I expect that you will to be able to follow the code on your own—enjoy! When Sir Isaac Newton, the man who is considered the Father of classical physics, was asked how he succeeded in coming up with a simple set of laws that govern the motion of objects, Isaac replied that he stood on the shoulders of giants. Isaac, of-course was referring to Galileo Galilei, Johannes Kepler and others who laid the foundation for his own work. In a similar manner I have stood on the shoulders of a giant in order to come up with this control. When I needed a SplitButton control I found an article in CodeProject entitled: SplitButton: an XP style dropdown split button, by Gladstone. The article took me halfway to where I needed to be. I needed to hide the internals of the control from the consumer of the SplitButton and provide a simple interface to the consumer. (I encourage you to take a look at Gladstone's article and demo.) I would like to further emphasize that this article does not come to out-do Gladstone's article. On the contrary, I consider Gladstone's contribution as great work. He took a ContextMenuStrip control and attached it to a regular button; thereby creating a SplitButton in a simple and easy to understand way. I will not repeat his work here nor will I explain it, but I will build upon it. I will further encourage you, the reader, to improve upon my work, publish your contribution and give Gladstone and me due credit. In order to make the button a self-contained button that follows the principles of object orientation, it needs two kinds of support mechanisms and interface sets: When the user of the application clicks on a SplitButton, see Figure 1 below, we need to distinguish between clicking on the entire button, the Button-Part and the Split-Part. Figure 1 OnClick() base.OnClick(e); OnMouseUp(MouseEventArgs mevent) base.OnClick() ButtonClick(this, e); OnClick(EventArgs e) Also, take a look at the event properties of the SplitButtonDropDown, part of the SplitButton design surface (SplitButton.cs [Design] view). See Figure 2 below. The ItemClick event is responsible for the specific menu selection. Figure 2 Towards the end of this article we will look more closely at the ItemClicked event. This is a good time to run the demo and see the events displayed in the window. Figure 3 is an example of the screen displaying the events when the button-part is clicked. <Figure 3 The SplitButton Control class is divided into the following regions: ButtonClick Protected Let's look at the OnClick() method, see Listing 1. The method has signature and body as follows: 1 protected override void OnClick(EventArgs e) 2 { 3 base.OnClick(e); 4 5 if (!IsMouseInSplit()) 6 { 7 EventFire(ButtonClick, e); 8 } 9 } Which bears the need for explaining the unexpected function call (line 7): EventFire(ButtonClick, e); One would expect the following code in Listing 2 instead of the EventFire(ButtonClick, e) call. EventFire(ButtonClick, e); EventFire(ButtonClick, e) 1 if (ButtonClick !=null) 2 ButtonClick(this, e); The above code, displayed in Listing 2, is the customary pattern for handling a button click event (or any other event). Line 1 (of Listing 2 checks to see if any invocation methods are bound to the event handler and line 2 calls the invocation methods (all of them). However, in a multithreading environment we may experience a thread-context-switch after line 1 and before line 2. Now if the new running thread (after the thread-context-switch) subtracted (or added) an invocation method from the event delegate, like so: splitButton1.ButtonClick -= new System.EventHandler(splitButton1_ButtonClick); Then the ButtonClick() EventHandler delegate in line 2 will be a different EventHandler than the one in line 1. Delegates are immutable and therefore adding or removing an invocation method from a delegate results in a new delegate instance. ButtonClick() EventHandler EventHandler So, if the operation within the second thread leaves the ButtonClick delegate devoid of invocation methods, then calling line 2 will result in an exception. An excellent explanatory work covering this topic can be found in Juval Lowy's extraordinary book, entitled: Programming .NET Components, 2nd Edition. See the explanation starting in Chapter 6's section entitled "Publishing Events Defensively" and ending in Chapter 8. A short synopsis of the explanation is as follows: Listing 2 can be rewritten as that in Listing 3, in an attempt to avoid the potential problem from a thread-context switch: EventHandler tmp = ButtonClick; if (tmp != null) tmp(this, e); However, the .NET optimizing complier may convert the code in Listing 3 back to the already known to be bad code in Listing 2 (it is an optimizing compiler). Listing 4 can solve this problem. Note that in Listing 4, the decoration of the method EventFire(): [MethodImpl(MethodImplOptions.NoInlining)] prevents the compiler from optimizing the function call away by inlining the EventFire(..) method and nullifying the effect we are trying to achieve. EventFire() [MethodImpl(MethodImplOptions.NoInlining)] EventFire(..) // ButtonClick is deffered to a method called from the publisher, // OnClick() method EventFire(ButtonClick, e); . . . // Helper method [MethodImpl(MethodImplOptions.NoInlining)] private void EventFire(EventHandler evntHndlr, EventArgs e) { // Make sure that the handler has methods bound to it. if (evntHndlr == null) return; evntHndlr(this, e); } The above code in Listing 4 does not handle correctly the possibility that one of the invocation methods bound to evntHndlr may throw an exception. Listing 5 solves this issue. evntHndlr [MethodImpl(MethodImplOptions.NoInlining)] private void EventFire(EventHandler evntHndlr, EventArgs ea) { if (evntHndlr == null) return; foreach (Delegate del in evntHndlr.GetInvocationList()) { try { evntHndlr.DynamicInvoke(new object[] { this, ea }); } catch (Exception /*ex*/) { // // Eat the exception // } } The above code in Listing 5 does not take into account the possibility that one of the methods may come from a section that requires a thread-context-switch. For example, code updating a control needs to run on the same UI thread that created the control. Hence the last evolution of the code is presented in Listing 6 below. [MethodImpl(MethodImplOptions.NoInlining)] private void EventFire(EventHandler evntHndlr, EventArgs ea) { if (evntHndlr == null) return; foreach (Delegate del in evntHndlr.GetInvocationList()) { try { ISynchronizeInvoke syncr = del.Target as ISynchronizeInvoke; if (syncr == null) { evntHndlr.DynamicInvoke(new object[] { this, ea }); } else if (syncr.InvokeRequired) { syncr.Invoke(evntHndlr, new object[] { this, ea }); } else { evntHndlr.DynamicInvoke(new object[] { this, ea }); } } catch (Exception ex) { System.Diagnostics.Debug.WriteLine(string.Format( "SplitButton failed delegate call. Exception {0}", ex.ToString())); } } } Six iterations and now we have a robust EventFire(..) helper method. There are two methods that I provided as the interface specific for the SplitButton: ClearDropDownItems() AddDropDownItemAndHandle(string text, EventHandler handler) See the "Additional Interface Methods" region in the accompanying code for the above mentioned methods. This list of two methods can easily be expanded to include more methods and functionality in order to: I feel that adding menu items will be the most heavily used functionality and I expect that ClearDropDownItems() will be used rarely. I do not believe that the rest of the functionality will be used. However, I encourage you to enhance the control if you need the additional functionality. The client cannot bind an invocation method to a menu drop-down item since the menu drop-down is created on the fly after the client clicks on the split-part of the button. Therefore, we need a different mechanism. I have decided to use a Dictionary<> construct like so: Dictionary<> Dictionary<string, EventHandler> _dropDownsEventHandlers = new Dictionary<string, EventHandler>(); Where the key to this dictionary, the first generic type, is the text display of the drop-down menu item; and the EventHandler, the second generic type, is the delegate to which the invocation method is bound. An immediate consequence of this choice is the fact that we cannot have two positionally distinct drop-down items that share the same display text and behave differently. So, for example, we cannot have a menu item in position 0 displaying the text: "this is a menu item" then a menu item in position 1 displaying the text: "this is a menu item" and expect them to be bound to two distinct event handlers. However, I consider it to be a good choice. The above should make the adding functionality of a drop-down item, in the method AddDropDownItemAndHandle(), clear. See Listing 7. By the same token, ClearDropDownItems() should be clear as well. Please review the code in the accompanying code. AddDropDownItemAndHandle() #region Additional Interface Methods public void ClearDropDownItems() { SplitButtonDropDown.Items.Clear(); _dropDownsEventHandlers = new Dictionary<string, EventHandler>(); } public void AddDropDownItemAndHandle(string text, EventHandler handler) { // Add item to menu SplitButtonDropDown.Items.Add(text); // Add handler if (! _dropDownsEventHandlers.ContainsKey(text)) _dropDownsEventHandlers.Add(text, handler); } #endregion Listing 7 For emphasis' sake note that we did not bind any method to the drop-down items. Instead we have bound it to our Dictionary<> construct. Therefore, we will need to come up with an alternative method to bind the drop-down items to whatever invocation methods are now stored in this _dropDownsEventHandlers Dictionary<>. Dictionary<> _dropDownsEventHandlers Dictionary<> Let's review the steps in an event life cycle: We have two players For the sake of this discussion let's have a simple example to follow, see Listing 8 below. 01 public class Publisher 02 { 03 public event EventHandler SomeEvent; 04 05 public void FireEvent() 06 { 07 if (SomeEvent != null) // Step 2 -- Check 08 SomeEvent(this, EventArgs.Empty); // Step 3 -- call 09 } 10 } 11 12 public class Subscriber 13 { 14 public void OnSomeEvent(object sender, EventArgs e) 15 { 16 // Step 4 –- Do something useful 17 } 18 } 19 20 public class Controller 21 { 22 Publisher pub; 23 Subscriber sub; 24 25 public void Initializer() 26 { 27 pub = new Publisher(); 28 sub = new Subscriber(); 29 pub.SomeEvent += new EventHandler(sub.OnSomeEvent); 30 } 31 32 public void Doer() 33 { 34 pub.FireEvent(); // Step 1 -- trigger the event 35 } 36 } Note that after my lengthy explanation in the section entitled Event handling as to the fact that the code in lines 7 and 8 of Listing 8 is wrong for a multithreading environment; I turn around and go against my own advice. This is not an oversight. I would rather keep the code as simple as possible for the sake of this discussion. Otherwise, the above code in lines 7 and 8 are inappropriate for a control that may be consumed by a multithreading environment client and these lines should be replaced with a function call to EventFire(..). The event, in its lifecycle, will follow these milestones: FireEvent() Publisher Event Handler Subscriber OnSomeEvent() Now, we get to handle the click events from the drop-down, see within the accompanying code, you will find the SplitButtonDropDown_ItemClicked method: SplitButtonDropDown_ItemClicked private void SplitButtonDropDown_ItemClicked(object sender, ToolStripItemClickedEventArgs e) { // // Close the drop down first // SplitButtonDropDown.Close(); // // Translate the ItemClicked, event that was just fired by the // drop-down menu, to the event the user bound its handling // to in _dropDownsEventHandlers[<name of the drop down>] // string textDisplay = e.ClickedItem.Text; EventHandler adaptorEvent = _dropDownsEventHandlers[textDisplay]; // // Fire the new event // EventFire(adaptorEvent, EventArgs.Empty); } ContextMenuStrip ItemClicked SplitButtonDropDown_ItemClicked() EventHandlers _dropDownsEventHandlers The signature of SplitButtonDropDown_ItemClicked() is as follows: private void SplitButtonDropDown_ItemClicked( object sender, ToolStripItemClickedEventArgs e) The EventArgs derived class, ToolStipItemClickedEventArgs, contains the information as to which item was clicked. More importantly, it contains the display text of the drop-down item which we use as a key to retrieve the handler from the Dictionary<>. EventArgs oolStipItemClickedEventArgs Therefore, we can switch from handling the ItemClicked event and fire a secondary event as follows: string textDisplay = e.ClickedItem.Text; EventHandler adaptorEvent = _dropDownsEventHandlers[textDisplay]; EventFire(adaptorEvent, EventArgs.Empty); This concludes our handling of the click event from the button side. The consumer of our SplitButton will need to add its callback functionality. The form, Form1, in the demo project, during the load event, adds three drop-down items: splitButton1.AddDropDownItemAndHandle("Test 1", Test1Handler); splitButton1.AddDropDownItemAndHandle("Testing Testing", Testing2Handler); splitButton1.AddDropDownItemAndHandle("Testing testing testing", Testing3Handle); These items are added to the drop-down list in the order that they are called. Each method takes a signature of an EventHandler callback. These three EventHandler callback methods need to be defined and they are. For example Test1Handler is defined as follows: Test1Handler private void Test1Handler(object sender, EventArgs e) { textBox1.Text += "Test 1 was fired" + Environment.NewLine; } This is very simple for the consumer; all the heavy lifting is done within the SplitButton control. Moreover, if we decide to change the SplitButton control we may do so for as long as we do not change the interface to the client and the client will need no code change. PersistDropDownName These are some of the potential improvements and there are many more ways to improve upon this SplitButton. If you improve upon the current SplitButton, publish your work, and you will make the world a better place to live in. We have achieved our goal of providing a self-contained button following the object oriented methodology that provides a simple and easy interface to its consumers. The developer may drag the SplitButton onto the design-surface, and add drop-down items from the consumer side, albeit programmatically. Where do we go from here? We need to add design time support so the consumer, of the SplitButton, will be able to drag the button onto a design surface and in the properties window set the drop-down items. Until next.
http://www.codeproject.com/Articles/18447/SplitButton-a-NET-WinForm-control-Part
CC-MAIN-2014-41
refinedweb
2,401
53.71
Ask A Question NoMethodError in Contacts#edit Hi all, I am new to Ruby as well as Rails I am working my way through a treehouse course and I have hit a bit of an obstacle. This is what my routes file looks like I am working my way through a treehouse course and I have hit a bit of an obstacle. This is what my routes file looks like Rails.application.routes.draw do # For details on the DSL available within this file, see get '/contacts/:id/edit', to: 'contacts#edit' get '/contacts', to: 'contacts#index', as: 'home' get '/contacts/new', ato: 'contacts#new', as: 'new' get '/contacts/:id', to: 'contacts#show', as: 'shows' post '/contacts', to: 'contacts#create' end the method in the controller is as follows def edit @contact = Contact.find(params[:id]) end and this is what the edit.html.erb file looks like <%= form_for(@contact) do |c| %> <div> <%= c.label 'First Name' %> <%= c.text_field :fname %> </div> <div> <%= c.label 'Last Name' %> <%= c.text_field :lname %> </div> <div> <%= c.label :number %> <%= c.text_field :number %> </div> <div> <%= c.submit %> </div> <% end %> as is this will result in the following when I I try to load the edit page (example:) NoMethodError in Contacts#editShowing/Users/donovanneethling/Ruby/addressbook/app/views/contacts/edit.html.erbwhere line #1raised:undefined method `contact_path' for #<#<Class:0x007fca54438cf0>:0x007fca57e82a78> Did you mean? contacts_path However when I amend the edit route as follows get '/contacts/:id/edit', to: 'contacts#edit', as: 'contact' the page loads correctly. Would someone be able to explain this to me, I am guessing it has something to do with form_for... Would someone be able to explain this to me, I am guessing it has something to do with form_for... Is there any specific reason you adding the routes manually instead of using `resources :contacts` (this will make sure you get all the restful urls you need)? That should definitely fix your routing problems. If you keep to the conventions, Rails is easier to handle and doesn't give you headaches like this. If you keep to the conventions, Rails is easier to handle and doesn't give you headaches like this. As @jack mentioned, it's probably better using resources. This should fix things. If it doesn't please post your error with any potential stacktrace and the community will have a look.
https://gorails.com/forum/nomethoderror-in-contacts-edit
CC-MAIN-2021-04
refinedweb
392
65.32
Section 8.3 Files THE DATA AND PROGRAMS IN A COMPUTER'S MAIN MEMORY survive only as long as the power is on. For more permanent storage, computers use files, which are collections of data stored on the computer's hard disk, on a floppy disk, on a CD-ROM, or on some other type of storage device. Files are organized into directories (sometimes called "folders"). A directory can hold other directories, as well as files. Both directories and files have names that are used to identify them. Programs can read data from existing files. They can create new files and can write data to files. In Java, input and output is generally done using streams. Data is read from a file using an object belonging to the class FileInputStream, which is a subclass of InputStream. Similarly, data is written to a file through an object of type FileOutputStream, a subclass of OutputStream. It's worth noting right at the start that applets which are downloaded over a network connection are generally not allowed to access files. This is a security consideration. You can download and run an applet just by visiting a Web page with your browser. If downloaded applets had access to the files on your computer, it would be easy to write an applet that would destroy all the data on any computer that downloads it. To prevent such possibilities, there are a number of things that downloaded applets are not allowed to do. Accessing files is one of those forbidden things. Standalone programs written in Java, however, have the same access to your files as any other program. When you write a standalone Java application, you can use all the file operations described in this section. The FileInputStream class has a constructor which takes the name of a file as a parameter and creates an input stream that can be used for reading from that file. This constructor will throw an exception of type FileNotFoundException if the file doesn't exist. This exception type requires mandatory exception handling, so you have to call the constructor in a try statement (or inside a routine that is declared to throw FileNotFoundException). For example, suppose you have a file named "data.txt", and you want your program to read data from that file. You could do the following to create an input stream for the file as follows:FileInputStream data; // declare the variable before the // try statement, or else the variable // is local to the try block try { data = new FileInputStream("data.dat"); // create the stream } catch (FileNotFoundException e) { ... // do something to handle the error -- maybe, end the program } Once you have successfully created a FileInputStream, you can start reading data from it. But since FileInputStreams have only the primitive input methods inherited from the basic InputStream class, you will probably want to wrap your FileInputStream in either a DataInputStream object or an AsciiInputStream object. You can use the built-in DataInputStream class if you want to read data in binary, machine-readable format. Use the non-standard class, AsciiInputStream, which was described in the previous section, if the data in the file is in human-readable ASCII-text format. To create an AsciiInputStream for reading from a file, you could say:AsciiInputStream data; try { data = new AsciiInputStream(new FileInputStream("data.dat")); } catch (FileNotFoundException e) { ... // handle the exception } Once you have an AsciiInputStream named data, you can read from it using such methods as data.getInt() and data.getWord(), exactly as you would from any other AsciiInputStream. Working with output files is no more difficult than this. You simply create an object belonging to the class FileOutputStream. You will probably want to wrap this output stream in an object of type DataOutputStream (for binary data) or PrintStream (for ASCII text). For example, suppose you want to write data to a file named "result.dat". Since the constructor for FileOutputStream can throw an exception of type IOException, you should use a try statement:PrintStream result; try { result = new PrintStream(new FileOutputStream("result.dat")); } catch (IOException e) { ... // handle the exception } If no file named result.dat exists, a new file will be created. If the file already exists, then the current contents of the file will be erased and replaced with the data that your program writes to the file. An IOException might occur if, for example, you are trying to create a file on a disk that is "write-protected," meaning that it cannot be modified. After you are finished using a file, it's a good idea to close the file, to tell the operating system that you are finished using it. (If you forget to do this, the file will probably be closed automatically when the program terminates or when the file stream object is garbage collected, but it's best to close a file as soon as you are done with it.) You can close a file by calling the close() method of the associated file stream. Once a file has been closed, it is no longer possible to read data from it or write data to it, unless you open it again as a new stream. (Note that for most stream classes, the close() method can throw an IOException, which must be handled; however, both PrintStream and AsciiInputStream override this method so that it cannot throw such exceptions.) As a complete example, here is a program that will read numbers from a file named data.dat, and will then write out the same numbers in reverse order to another file named result.dat. It is assumed that data.dat contains only one number on each line, and that there are no more than 1000 numbers altogether. Exception-handling is used to check for problems along the way. At the end of this program, you'll find an example of the use of a finally clause in a try statement. When the computer executes a try statement, the commands in its finally clause are guaranteed to be executed, no matter what.import java.io.*; // assume that the AsciiInputStream class is also available public class ReverseFile { public static void main(String[] args) { AsciiInputStream data; // stream for reading data PrintStream result; // stream for output double[] number = new double[1000]; // array to hold the numbers // read from the input file int numberCt; // number of items stored in the array try { data = new AsciiInputStream(new FileInputStream("data.dat")); } catch (FileNotFoundException e) { System.out.println("Can't find file data.dat!"); return; // end the program by returning from main() } try { result = new PrintStream(new FileOutputStream("result.dat")); } catch (IOException e) { System.out.println("Can't open file result.dat!"); System.out.println(e.toString()); data.close(); // close the input file return; // end the program } try { // read the data from the input file, numberCt = 0; while (!data.eof()) { // read to end-of-file number[numberCt] = data.getlnDouble(); numberCt++; } // then output the numbers in reverse order for (int i = numberCt-1; i >= 0; i--) result.println(number[i]); } catch (AsciiInputException e) { // some problem reading the data from the input file System.out.println("Input Error: " + e.getMessage()); } catch (IndexOutOfBoundsException e) { // must have tried to put too many numbers in the array System.out.println("Too many numbers in data file."); System.out.println("Extras will be ignored."); } finally { // finish by closing the files, whatever else may have happened data.close(); result.close(); } } // end of main() } // end of class File Names, Directories, and File Dialogs The subject of file names is actually more complicated than I've let on so far. To fully specify a file, you have to give both the name of the file and the name of the directory where that file is located. A simple file name like "data.dat" or "result.dat" is taken to refer to a file in a directory that is called the current directory (or "default directory" or "working directory"). The current directory is not a permanent thing. It can be changed by the user or by a program. Files not in the current directory must be referred to by a path name, which includes both the name of the file and information about the directory where it can be found. To complicate matters even further, there are two types of path names, absolute path names and relative path names. An absolute path name uniquely identifies one file among all the files available to the computer. It contains full information about which directory the file is in and what its name is. A relative path name tells the computer how to locate the file, starting from the current directory. Unfortunately, the syntax for file names and path names varies quite a bit from one type of computer to another. Here are some examples: - data.dat -- on any computer, this would be a file named data.dat in the current directory. - /home/eck/java/examples/data.dat -- This is an absolute path name in the UNIX operating system. It refers to a file named data.dat in a directory named examples, which is in turn in a directory named java,.... - C:\eck\java\examples\data.dat -- An absolute path name on a DOS or Windows computer. - Hard Drive:java:examples:data.dat -- Assuming that "Hard Drive" is the name of a disk drive, this would be an absolute path name on a Macintosh computer. - examples/data.dat -- a relative path name under UNIX. "Example" is the name of a directory that is contained within the current directory, and data.data is a file in that directory. The corresponding relative path names for Windows and Macintosh would be examples\data.dat and examples:data.dat. Similarly, the rules for determining which directory is the current directory are different for different types of computers. It's reasonably safe to say, though, that if you stick to using simple file names only, and if the files are stored in the same directory with the program that will use them, then you will be OK. In many cases, though, you would like the user to be able to select a file for input or output. If you let the user type in a file name, you will just have to assume that the user understands how to work with files and directories. But in a graphical user interface, the user expects to be able to select files using a file dialog box, which is a special window that a program can open when it wants the user to select a file for input or output. Java provides a platform-independent method for using file dialog boxes in the form of a class called FileDialog. This class is part of the package java.awt. There are really two types of file dialog windows: one for selecting an existing file to be used for input, and one for specifying a file for output. You can specify which type of file dialog you want in the constructor for the FileDialog object, which has the formpublic FileDialog(Frame parent, String title, int mode) where parent is meant to be the main application window (but can be null, at least on a Macintosh), title is meant to be a short string describing the dialog box, and mode is one of the constants FileDialog.SAVE or FileDialog.LOAD. Use FileDialog.SAVE if you want an output file, and use FileDialog.LOAD if you want a file for input. You can actually omit the mode parameter, which is equivalent to using FileDialog.LOAD. Once you have a FileDialog, you can use its show() method to make it appear on the screen. It will stay on the screen until the user either selects a file or cancels the request. The instance method getFile() can then be called to retrieve the name of the file selected by the user. If the user has canceled the file dialog, then the String value returned by getFile() will be null. Since the user can select a file that is not in the current directory, you will also need directory information, which can be retrieved by calling the method getDirectory(). For example, if you want the user to select a file for output, and if the main window for your application is mainWin, you could say:FileDialog fd = new FileDialog(mainWin, "Select output file", FileDialog.SAVE); fd.show(); String fileName = fd.getFile(); String directory = fd.getDirectory(); if (fileName != null) { // (otherwise, the user canceled the request) ... // open the file, save the data, then close the file } Once you have the file name and directory information, you will have to combine them into a usable file specification. The best way to do this is to create an object of type File. The file object can then be used as a parameter in a constructor for a FileInputStream or a FileOutputStream. For example, the body of the if statement in the above example could include:File file = new File(directory, fileName); PrintStream out = new PrintStream(new FileOutputStream(file)); ... // write the data to the output stream, out out.close(); Of course, you'll have to do something about handling possible exceptions, in particular the IOException that could be generated by the constructor for the FileOutputStream. But for the most part, FileDialogs and streams provide a reasonably easy-to-use interface to the file system of any computer. [ Next Section | Previous Section | Chapter Index | Main Index ]
http://math.hws.edu/eck/cs124/javanotes1/c8/s3.html
crawl-001
refinedweb
2,227
63.29
SSIS+XML+XSD+C# : Detect and log errors - Thursday, February 14, 2013 6:43 PM Hi every body, While validating my xml file with an .xsd schema, i need to detect all errors in order to load the line of data (or the key) + the error message ... then save it into a log file or idealy directly to a database. Exemple: ID Name Sex Age 1 Jo M L 2 Sandy F 18 3 Sam 1 30 Now, in my .xsd, the sex must be either M or F ... and the Age must be an integer. so i need to detect that both lines 1 and 3 are not good according to my xsd.... however the line 2 is good. In my Errors table, i need to log something like this (or anything else similar: 1 Age 'the value is not an integer' 3 Sex 'The value is not in the enum {M,F}' Thank you for your help. Nacer CREPUQ BI Developer All Replies - Thursday, February 14, 2013 7:06 PMModerator If the incoming XML is in non-compliance to XSD it would abort the package execution or you need to ignore errors and keep processing handling the error in an Event Handler, but the XML source does not capture the specifics you mentioned. I suggest you use an XML parser outside the package or inside a Script Task using pure code and .Net XML classes. You then log the results as you wish. - Thursday, February 14, 2013 7:12 PM Thank you arthur ... you said : ... inside a Script Task using pure code and .Net XML classes. You then log the results as you wish. This is exactly what i was trying to do. I wasn't able to find a .net script on the internet. can you recommand me one ? Thank you Nacer CREPUQ BI Developer - Thursday, February 14, 2013 7:39 PM Arthur's idea works by validating in the data flow. The code below is used to validate against an XML Schema file. This code is based on. This is not yet fleshed out and it writes to the Console (which doesn't exist in the SSIS package). The function that is defined as an argument of the Validate method (the (o, e) => etc) is going to fire everytime an error occurs. My thought is to add information for each node that fails the validation. You could get the ID for the node and the error. I just get the error and put it into a List<string>. Next you can iterate through that list and do something. If you use this in a Script component (not task) which is part of the data flow, you could use this as a Data Source and create an output into which you could send each error as it is found. using System.Xml.Linq; using System.Xml; using System.Xml.Schema; using System.IO; static void Main(string[] args) { List<string> errorList = new List<string>(); XmlSchemaSet schemas = new XmlSchemaSet(); schemas.Add("", XmlReader.Create(new StreamReader(@"XMLData\XMLData\File.xsd"))); XDocument doc1 = XDocument.Load(@"\XMLData\XMLData\DataElement.xml"); Console.WriteLine("Validating doc1"); bool errors = false; doc1.Validate(schemas, (o, e) => { Console.WriteLine("{0}", e.Message); errorList.Add(e.Message); errors = true; }); Console.WriteLine("doc1 {0}", errors ? "did not validate" : "validated"); Console.WriteLine(); foreach (string item in errorList) { // Do something with the error Console.WriteLine("Nother error {0}", item); } Console.ReadLine(); } <?xml version="1.0" encoding="utf-8" ?> <Root> <row ID="1" Name="Jo" Sex="M" Age="L"/> <row ID="2" Name="Sandy" Sex="F" Age="18"/> <row ID="3" Name="Sam" Sex="1" Age="39"/> </Root> <?xml version="1.0" encoding="utf-8"?> <xs:schema <xs:element <xs:complexType> <xs:sequence> <xs:element <xs:complexType> <xs:attribute <xs:attribute <xs:attribute <xs:simpleType > <xs:restriction <xs:enumeration <xs:enumeration </xs:restriction> </xs:simpleType> </xs:attribute > <xs:attribute </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> Russel Loski, MCT, MCSA SQL Server 2012, 2008, MCITP Business Intelligence Developer and Database Developer 2008 Twitter: @sqlmovers; blog: - Edited by Russ Loski Thursday, February 14, 2013 8:45 PM Spelled name wrong - - Thursday, February 14, 2013 9:10 PM Thank you very much Russel, i used this code to show somme data : MessageBox } .Show(args.Severity.ToString() + " *** "+ args.Exception.ToString() + " $$$ "+ args.Exception.LineNumber.ToString() + " ### "+ args.Exception.LinePosition.ToString()); Here is the result for both errors (see the 2 screen shots bellow...sorry it is in french) ... however this is not enough. i need : 1. the key (1 and 3) 2. The name of the attribut (Age and Sex) 3. The value (L and 1) Your help is much appreciated. Thank you Nacer CREPUQ BI Developer I updated my code as bellow ... the SourceObject is supposed to get the xml node that caused the exception. However my problem is that SourceObject is always null. public void ValidationEventHandler(object sender, ValidationEventArgs args) { XmlSchemaValidationException ex = (XmlSchemaValidationException) args.Exception; XmlNode node = (XmlNode)ex.SourceObject; - Edited by NacerCREPUQ Thursday, February 14, 2013 10:49 PM -
http://social.technet.microsoft.com/Forums/en-US/sqlintegrationservices/thread/ca2ec8f7-2f36-48b7-956a-ddf819dba2a9
CC-MAIN-2013-20
refinedweb
841
67.15
This directory contains the common Web Platform stuff that needs to be shared by renderer-side and browser-side code. Things that live in third_party/blink can directly depend on this directory, while the code outside the Blink directory (e.g. //content and //chrome) can only depend on the common stuff via the public headers exposed in blink/public/common. Anything in this directory should NOT depend on the non-common stuff in the Blink directory. See DEPS and BUILD.gn files for more details. Code in this directory would normally use blink namespace. Unlike other directories in Blink, code in this directory should: Use Chromium‘s common types (e.g. //base ones) rather than Blink’s ones (e.g. WTF types) Follow Chromium's common coding style guide
https://chromium.googlesource.com/chromium/src/+/437ee022676233d01e05b7d640c808e4d37bd05a/third_party/blink/common/
CC-MAIN-2019-43
refinedweb
129
68.16
Red Hat Bugzilla – Bug 421241 Review Request: php-ZendFramework - Leading open-source PHP framework Last modified: 2014-10-13 18:57:57 EDT Spec URL: SRPM URL: Description:. I'm getting a 502 proxy error trying to access the above links. Sorry - the host will be available again on January 2. I had a chance to briefly look at the spec file before it went down, and I think you should consider splitting the package in several subpackages, possibly at the "component" granularity as described in: it will be more work but otherwise you loose one of the big win of the Framework, that is, components are loosely coupled so it's possible to "pick" only the ones required for a given work. Gianluca, I guess you're right. However I'll need some time to get this done as many spec files and subsequent package review requests are needed thus I'm removing this request for now. Well, if you want to defer this it is ok. However, please note you just need one spec and one review. I was only suggesting that the single spec file would build a number of binary RPMS instead of a single one. This is done in several other packages and (more or less) detailed in Reopening. Updated SPEC URL: Updated SRPM URL: -- Since 1.5.0 is about to reach stable, I'm using the release candidates series for now. All components are split up into subpackages with interdependencies resolved via manual search and unit tests applied. All components have virtual requires and provides in the php-Zend(x) namespace. All significant rpmlint warnings and errors refer to files that are needed by the unit tests (subpackage -tests) and must not be modified in the distribution, otherwise complex patches to the tests have to be created and maintained. One issue remaining is that some unit tests explicitly try to create a cache directory (grep for cache_dir) locally below their _files directories, thus failing. Simply symlinking each _files directory below /var/tmp/php-Zendframework is not a clean solution since most of those already contain static test data so maybe /var/lib would be a better candidate. Any suggestions here are welcome. Seems like I missed the final release just by a few minutes. Updated SPEC URL: Updated SRPM URL: As mentioned on this bug: maybe y'all can collaborate on one packaging of this? *** Bug 439015 has been marked as a duplicate of this bug. *** Alexander, did you get in touch with Jess Portnoy from Zend to see if you can work together on the package? I guess he could have some insights to fix the test issue. Alternatively, you could post on fedora-devel or fedora-packaging to ask for help on the issue; chances are that's not a blocking one. Hi Alexander, Would be happy to collaborate with you. Please contact me at: jess at zend.com and we can discuss this further. Thanks, About the problem with the getTmpDir() function in the unit tests, I think a good solution would be to allow overriding of the default TMPDIR [currently evaluated from: dirname(__FILE__)/zend_cache_tmp_dir], maybe as an ENV var or better yet, a directive in Zend/tests/Zend/Config/_files/config.ini. This way, it can be set to a convenient location, sort of the same idea as the TMPDIR ENV variable. Does that sound reasonable to you? I will discuss it with the Zend Framework developers and keep you posted. oh, and in the meanwhile 1.5.2 is out Hi Jess, thank you for helping me here. For the review request the latest stable version has to be packaged always so I need to update for 1.5.2 first, as Gianluca mentioned. The config.ini solution sounds most reasonable to me as this requires only a single file to be provided or patched once in Fedora's package. On the other hand, using ENV only would require a user to properly set his environment first, i.e. it doesn't work out of the box and requires providing a special README.fedora file. Will the config.ini be part of 1.5.3? The other issue are the subpackages: Do you know any feasible way to accomplish interdependency determination between the components of the framework, preferably programmatic? Wil is concerned here, although I'm pretty optimistic my method of unit testing each component individually and isolated suffices as long as code coverage is near complete - I think this process could be automated as well utilizing mock, our package building chroot tool. We (the Fedora community) emphasize high package quality implying high granularity and removing redundancy where reasonable/possible. If you are certain that splitting up Zend Framework brings more problems than benefits, I'll remove the split. From my personal experience with the framework, many components can be used either individually or pulling in only few dependencies. Especially I don't see why everyone must have all Zend_Service:s installed as many rely on registration with the service providers and/or are very specific. ). Hi Alexander, In general, I am all for segmentation but I think here it's a bit over segmented. For instance, I think there's much sense in placing the unit tests and demos in separate RPMs. Maybe it would make more sense to group modules into a few meta packages? This way, you could have a package of "common" modules, another for demos, a third for unit tests and poteniotally a few other extra packages containing less popular modules. This way, the user won't have to go through a huge list of packages related to the framework but, on the other hand, won't have many redundant packages he will never use. Does that sound reasonable? My $0.02: I also think it's over-segmented, currently counting 59 packages. The only reason I'd want ZF segmented is to control which binary dependencies yum installs. I don't mind the bloat of PHP source files but I might not want certain extensions installed. How about using the packages' extension dependencies as a guide to grouping them? Extension dependencies: All the Zend_Service packages can definitely go in one package. That alone takes care of eleven subpackages. Hi Jess, hi Brad, taking both of your inputs into consideration I propose the following package setup: - php-ZendFramework (base package) Contains all commonly used modules including the MVC components and all non-base components not requiring additional binary modules (PHP extensions) Pulls in php-ctype, php-xml - php-ZendFramework-Services Contains Zend_Service* excluding those listed below, base package and per-service provider subpackages Pulls in php-soap - php-ZendFramework-Cache-Backend-Apc Pulls in php-pecl-apc - php-ZendFramework-Locale-Math Pulls in php-bcmath - php-ZendFramework-Search-Lucene Pulls in nothing as php-pecl-bitset doesn't seem to be available for Fedora (yet) but extra package anyway as it is not commonly used (I guess) - php-ZendFramework-Http-Client-Adapter-Curl Pulls in php-curl - php-ZendFramework-Pdf Pulls in php-gd - php-ZendFramework-Db-Adapter-Db2 Same situation as with Zend_Search_Lucene, ibm_db2 not available as of now (proprietary?) - php-ZendFramework-Db-Adapter-Firebird Same as Zend_Search_Lucene, no i(nter)base - php-ZendFramework-Feed Pulls in php-mbstring - php-ZendFramework-Cache-Backend-Memcached Pulls in php-pecl-memcache - php-ZendFramework-Db-Adapter-Mysqli Pulls in php-mysql (also contains mysqli) - php-ZendFramework-Db-Adapter-Oracle Its required extension is proprietary but available in the free beer manner, we can either drop it or require the user to take care of the dependency manually - php-ZendFramework-Db-Adapter* Pull in php-pdo - php-ZendFramework-Cache-Backend-Sqlite Fedora's PHP was complied with --without-sqlite and I can only find the PDO driver, drop the component or file a bug against php..? - php-ZendFramework-tests - php-ZendFramework-demos (php-ZendFramework-manual-en and -api-doc are submitted as separate packages / review requests already) One problem with Zend_Http_Client: I don't know about mime_magic (cannot find it) and it's declared deprecated, can it use php-pecl-Fileinfo automatically instead? The following required extensions are included with Fedora's PHP by default already: dom, hash, iconv, json, pcre, posix, Reflection, session, SimpleXML, SPL, standard, zlib Any suggestions? The setup still looks pretty cluttered, maybe you've got another idea for the grouping details. Please take note that I will be on vacation and thus unavailable for one week starting tomorrow, details are at Hi Alexander, This package segmentation seems better. I think all php-ZendFramework-Db-Adapters can be included in one package, though it is true one would usually need only one or maximum two of these. Maybe it's better to have all common DBs in one package which requires php-pdo and the rest in another package. This would mean you will install the Oracle stuff even if you only want to use DB2 which is not so nice but on the other hand, there's something logical about deviding the whole DB support into common and misc DBs. I think most PHP developers use PDO to access MySQL, PGSQL, SQLite, etc and a substantial lesser amount of developers use DB2 and OCI8 directly so the separation makes sense. MySQLi is somewhat hard to decide upon because it does not belong with the PDO and is pretty commonly used... I admit I am not sure as to whether it should be in it's own package or included somewhere else. About mime_magic, it is true that php.net declares it as deprecated by Fileinfo from PECL but their APIs are not entirely compatible and I think many people still use mime_magic. It's source is included in the PHP 5.2.6 [latest stable] source tree under ext/mime_magic and it would be very easy to make an RPM out of it. I will send you a spec file shortly. About mime_magic, after a second look: it seems that it's quite easy to keep using existing code originaly coded for mime_magic without changes as all you need to do is add one compatibility wrapper function. As stated in ext/mime_magic/DEPRECATED. Since it is so, I will discuss this matter with the ZF developers. I think the best thing would be to include this wrapper function in the event we have the Fileinfo extension loaded and not mime_magic and so it will be able to use both. Does this sound logical to you? (In reply to comment #14) > ). Well, given my interest in using the framework, I think I could go for it. Just ping me when the next iteration of the package is available Thank you, Gianluca. Will do. I am waiting for a new version which will include my patch to getTmpDir() and my suggested fix for the mime_magic issue. As soon as it's done we can go forward. Hi Jess, this is my status: Current packages (base) demos tests Cache-Backend-Memcached Cache-Backend-Sqlite Db-Adapter-Mysqli Db-Adapter-Db2 Db-Adapter-Firebird Db-Adapter-Oracle Feed Gdata (why doesn't this exist as a service component?) Pdf Search-Lucene Services There are some issues here: Cache-Backend-Sqlite requires php-sqlite but this is not available in Fedora's PHP build, it was disabled Mon Nov 8 2004 by Joe Orton, most probably in favor of pdo-sqlite. Can the back end handle pdo-sqlite instead of sqlite? If not we probably have to exclude it. Db-Adapter-Db2, Db-Adapter-Firebird and Db-Adapter-Oracle require extensions that must be compiled in while having the respective database software installed, in case of DB2 and Oracle this is impossible for Fedora since they are proprietary software, I don't know about InterBase/Firebird however it's also missing. We can of course offer Zend's adapters but the user is required to compile PHP and enable the affected dependencies himself. Feed has been separated because it requires mbstring; same goes for Pdf because of GD, Search-Lucene because of bitset; Locale-Math has _not_ been separated as planned since Date uses bcmath as well and cannot be separated as a major dependency for other base components. About mime_magic: I've checked our PHP build, the extension was disabled Wed Mar 21 2007, indeed in favor of Fileinfo. I'd really appreciate Fileinfo support from your side! A wrapper functions sounds completely reasonable. The total list of extensions required for the base framework package as of now: bcmath, ctype, curl, dom, hash, iconv, json, pcre, posix, reflection, session, simplexml, spl, zlib. Only two of them, bcmath and dom, require additional extensions to be pulled in which is pretty acceptable. About the getTmpDir() issue: Please keep in mind you're targeting a multi-user platform, setting a fixed value in a configuration file can still create clashes, especially if no cleanup is performed afterwards. Given both choices only, I'd rather go with the environment variable; if you want a much more flexible solution involving the configuration file, I suggest append a hash, timestamp or the uid to the base path given. Actually this is what I do in my tests :) Alternatively, you could go with world writable modes for all files and directories created during the tests. @Gianluca Thanks in advance for volunteering! Hi Alexander, About SQLite and the proprietary DB extensions, I think it's perfectly OK for the user to take care of the missing deps should he so desire. The unit tests themselves always check whether the extensions required are loaded and throw an exception if they're not so the user can easily tell what's required and take care of it. I see no problem with it as there are over 100 PHP extensions, naturally you can't provide them all. About getTmpDir(), I suggested: return getenv('TMPDIR').DIRECTORY_SEPARATOR . 'zend_cache_tmp_dir_'.date("mdyHis"); This way, the directory will be created with a timestamp which will avoid conflicts just as you suggested. I have also submitted a wrapping function to the ZF developers and am now waiting for them to apply both the getTmpDir() patch and this wrapper. Hi Jess, I will ask on fedora-devel how to handle absent or unavailable package requirements when providing the requiring package makes sense. Could be that it is not possible to provide the affected extensions at all because I (as the would-be package maintainer) cannot provide support (verifying/fixing bugs etc.) for them without non-Fedoran packages. Your getTmpDir() solution looks perfectly suitable to solve all issues identified so far, thank you! Hi Alexander, Are there any additional extensions are you missing in the FC repo besides mime_magic and third party, non open DB vendors?. The change is available from ZF's SVN trunk: About mime_magic vs. fileinfo, my wrapper function was not yet applied but I will become a ZF committer soon and just apply it myself. Please let me know how you did with getTmpDir(). Created attachment 309009 [details] Test run on ZendFramework 1.5.2 with PHPunit PHP 5.2.5 (cli) (built: Apr 24 2008 10:37:47) PHPUnit 3.2.19 shell$ phpunit -d memory_limit=512M --verbose AllTests.php \ > ~/ZendFramework-1.5.2-testall-$(date +%F)-tap.log 2>&1 All packages and dependencies identified so far were installed. Hi Jess, as you can see in above's comment I've attached the log from a complete 1.5.2 test run, maybe this is helpful to you. (In reply to comment #25) > Are there any additional extensions are you missing in the FC repo besides > mime_magic and third party, non open DB vendors? Surprisingly, yes: php-bitset (php-pecl-bitset) is not available for Fedora. Pecl says someone from Zend maintains it but there has never been an update after initial release in 2005. What are the drawbacks of using Search_Lucene without bitset? Is it still considered maintained? If not, becoming the packager of php-pecl-bitset would also mean becoming the maintainer in Fedora World and I don't have any spare capacities to cope with this. Otherwise, i.e. if Zend is still maintaining it, I'd package it. >. This improves our situation of testing most components properly right now but I wouldn't consider it the final solution. Our goal should be to have a blank PHPunit run on AllTests.php walk through after a fresh installation without any failures and/or errors. > The change is available from ZF's SVN trunk: > I've just checked out the trunk but will make an inspection and test run tomorrow. > About mime_magic vs. fileinfo, my wrapper function was not yet applied but I > will become a ZF committer soon and just apply it myself. Sounds good. > Please let me know how you did with getTmpDir(). Will do next time. Hi Alexander, About bitset, the person listed as maintainer no longer works at Zend. I don't believe it will be maintained further. However, the use of bitset is not mandatory. Comment in library/Zend/Search/Lucene/Index/SegmentInfo.php on line 167 reads: bitset if bitset extension is loaded or array otherwise. Therefore it is possible to use w/o having bitset loaded. This is also true to mime_magic, BTW. I will review your log and see what I can do about it. Hello again, I reviewed the log and the good news are most of the problems are mostly around the same lines of the getTmpDir(), that is, the tests are trying to create and manipulate files within the ZF tree which a normal user cannot do if the tree is located under /usr/share. The problem here is more vast than just getTmpDir() it appears as though the entire concept was not considered when the tests were designed. I will discuss it with the ZF developers and see how fast we can address this. Thanks, Hi Jess, any news? I just did a test run on revision 9862 but getTmpDir() doesn't seem to be in usage yet with the exception of a few optional tests. Regards Alex Hi Alex, Regarding the permissions issue, which is cross wide and not only specific to the cache component as was quite obvious from the report you attached, I am still waiting for the ZF maintainers to come up with a general fix. I have explain the issue to them and suggested a few options to fix this. The mime_magin vs. fileinfo issue is suppose to be resolved. Please see: For more inforamtion. I will, of course, notify you as you soon as they release a version that does not suffer from these issues. For now, can you please verify the Zend/Http/Client.php fix? Thanks, Sorry guys, I'm a bit lost by now... Are the remaining issues really a blocker or we can go ahead with the review in the meanwhile? Well, I don't want to speak on Alex's behalf but this is how I see our status: 0. The mime_magic vs. fileinfo issue is resolved [assuming it works well] 1. About the unit tests, some of them work properly, some do not due to permissions issues. I don't think this is a show stopper, it should most certainly be fixed but I think we can release the first RPM version as is and work on it as we go along. The problem here is not periodic and I am trying to motivate the ZF maintainers to address it. Alex, are there any other issues I can help promote? Thanks, Hi Jess, hi Gianluca, (In reply to comment #31) > For now, can you please verify the Zend/Http/Client.php fix? I've just verified this in HEAD: With php-Fileinfo installed, the correct mime type is returned; without, the default (application/octet-stream). I consider this fixed and not having this feature in 1.5.2 yet is definitely not a blocker issue. There's one minor non-blocker issue you could actually try to promote: Since Zend has decided to not only provide a zipped version of the framework but also a tarball which is most likely meant to target platforms like Fedora (UN*X-likes), please fix file permissions before every release. In 1.5.2, all Gdata-related files have their executable bit set. Right now, this needs to be handled by my installation script. Gianluca: I consider all previous blockers fixed and will upload a 1.5.2 package right away. Updated SPEC URL: Updated SRPM URL: Changes: - update to 1.5.2 - new package split - removed Cache-Backend-Sqlite, Db-Adapter-Db2, Db-Adapter-Firebird, Db-Adapter-Oracle - removed optional php-bitset requirement from Search-Lucene, not available - removed virtual requires and provides, not necessary anymore Oops, old package URL. Correct one: Hi Alex and Gianluca, Just wanted to let you know the ZF team will release the 1.6 version in approx. 2 weeks. It will include the patches for both the mime_magic/fileinfo and the fix for the cache unit test. I have also written to them about the package permissions issue [the executable bit being turned on where it is not needed] and I asked for them to fix it. Hey Alex and Gianluca, ZF 1.6RC1 is out. This includes the fix for the mime_magic-fileinfo issue. The unit tests issue is not yet addressed unfortunately but the ZF developers are aware of it. Thanks, OT: too bad is still open ;) Updated SPEC URL: Updated SRPM URL: Change: - update to 1.6.0RC1 - added php-Fileinfo dependency The Fileinfo/mime_magic issue is fixed now. SQLite support via PDO in Zend_Cache_Backend_Sqlite is still missing (non-blocker). Also, we can definitely go without the working unit tests for now but can we continue collaboration on issues after the Fedora/RHEL release? Anyway, I'll open a Zend Tracker account. Gianluca: Ready for a review or do you want to wait for 1.6 final? Talking about ZF issues still open, I'd love to see ZF-2151 fixed, maybe utilizing libyaml/php-yaml. Hi Alex and Gianluca, About the unit tests, yes, of course you can count on our collaboration. I find this issue to be very important and I was assured it will get proper consideration in the future. About 2151: There is a PECL extension that can do the trick [] and also a nice article about how to utilize it []. I will discuss this with the ZF guys and see how I can further promote it. Things are a bit busy for me so I am not sure I commit to taking ownership and writing this ZF component but we'll see :) I urge you to open an account so we can collaborate in an even more affective manner :) P.S Great work and thanks Hi Jess, thank you for your pledge. About 2151: I know all YAML alternatives for PHP pretty well and all of them have their pitfalls, including syck; ATM syck-php segfaults for Fedora, at least on x86_64 and also i386 IIRC. spyc and Horde_Yaml are too slow because processing is done via pure PHP code and while seeming mature, syck seems to be unmaintained (0.55 released 2005 May 18) and still sticks around with YAML 1.0; libyaml on the other hand is considered alpha state despite being very stable (I'm using it for production!), is under active development, offers YAML 1.1 support and has php-yaml as third-party binding which is at least capable of deserialization already; I've added a patch to get parser errors into PHP and sent it to the original author but no reply so far. If necessary I'll add serialization support myself. Please consider the alternatives :) Best regards Hi Alex, I will checkout the YAML issue. Maybe it's worth improving syck, making it support YAML 1.1 and fix the seg fault[s]. I am not sure it is but it's worth looking into. I will review the code first time I get. Also, I run: find /tmp/ZendFramework-1.6RC1/ -type f -perm /u+x,g+x And discovered they still have the exec bit set for some files without cause, I will open an issue about it and remind them once more. Again, thanks for all the hard work. I think this process is very productive in helping ZF become better. Hi again, I looked at PECL and saw the last update on the syck extension was issued on 2007-11-22 [version 0.9.2]. It's declared to be beta but that doesn't necessarily mean anything. I compiled it and it loaded properly on my FC 8. Is there a certain scenario that causes the seg fault? I am now interested :) Ok, I think it's time to get this into Fedora, let's see if I can carve some time for the review Hi Jess, hi Gianluca, I'd really love to see syck back maintained and supporting YAML 1.1 or even 1.2 ( does it); as far as I can judge from the alternatives' code basis, libyaml seems to be most suitable for extension as it clearly follows formal grammar / parser theory patterns (BNF tokens etc.) but maybe I'm just wrong here and underestimating syck. Fedora's php-syck still seems to come from the original syck tarball instead of php-pecl-syck, thus being utterly obsoleted. I'll file a bug report here right away. Actually the file permissions issue is a classic amongst developers working in heterogeneous environments as soon as the Redmond OS is involved. Btw. you can also use -perm /111 to get u+x,g+x,o+x. Hey, collaboration is what keeps the community alive and Free Software is what makes us able to help our neighbor :) Gianluca: Great! I know we're all busy and really appreciate your commitment here. ==== REVIEW CHECKLIST ==== - package named according to package naming guidelines - spec filename matches %{name} - package meet packaging guidelines - package licensed with open source compatible license (BSD) - license matches actual license - license file included in %doc - spec written in American english - spec legible - source match upstream 196aef8904be20c199e536480f92c5c9 ./ZendFramework-1.6.0RC1.tar.gz - successfully builds in mock for rawhide x86_64 - no locales - no shared libraries - package is not relocatable - package owns all directories it creates - file permissions set properly - contains proper %clean - macro usage is consistent - package contains code - no large documentation - not a GUI app needing a .desktop file - rpmlint output not clean php-ZendFramework-Cache-Backend-Apc.noarch: W: no-documentation php-ZendFramework-Cache-Backend-Memcached.noarch: W: no-documentation php-ZendFramework-Cache-Backend-Memcached.noarch: W: filename-too-long-for-joliet php-ZendFramework-Cache-Backend-Memcached-1.6.0-0.1.rc1.fc9.noarch.rpm php-ZendFramework-demos.noarch: W: no-documentation php-ZendFramework-demos.noarch: E: htaccess-file /usr/share/php/Zend/demos/OpenId/mvc_auth/html/.htaccess php-ZendFramework-Feed.noarch: W: no-documentation php-ZendFramework-Gdata.noarch: W: no-documentation php-ZendFramework-Pdf.noarch: W: no-documentation php-ZendFramework-Search-Lucene.noarch: W: no-documentation php-ZendFramework-Services.noarch: W: no-documentation php-ZendFramework-tests.noarch: W: no-documentation php-ZendFramework-tests.noarch: E: zero-length /usr/share/php/Zend/tests/Zend/Filter/_files/file.1 php-ZendFramework-tests.noarch: E: non-executable-script /usr/share/php/Zend/tests/Zend/Text/Figlet/GenerateDummies.sh 0644 php-ZendFramework-tests.noarch: E: zero-length /usr/share/php/Zend/tests/Zend/Text/Figlet/InvalidFont.flf php-ZendFramework-tests.noarch: W: hidden-file-or-dir /usr/share/php/Zend/tests/Zend/Auth/Adapter/Digest/_files/.htdigest.1 php-ZendFramework-tests.noarch: E: zero-length /usr/share/php/Zend/tests/Zend/Soap/_files/cert_file php-ZendFramework-tests.noarch: E: zero-length /usr/share/php/Zend/tests/Zend/Http/Client/_files/testHeaders.php php-ZendFramework-tests.noarch: W: file-not-in-%lang /usr/share/php/Zend/tests/Zend/Translate/_files/test_fileerror.mo php-ZendFramework-tests.noarch: W: file-not-in-%lang /usr/share/php/Zend/tests/Zend/Translate/_files/testmsg_en.mo php-ZendFramework-tests.noarch: W: file-not-in-%lang /usr/share/php/Zend/tests/Zend/Translate/_files/testmsg_ru(koi8-r).mo php-ZendFramework-tests.noarch: W: file-not-in-%lang /usr/share/php/Zend/tests/Zend/Translate/_files/translate_bigendian.mo 10 packages and 0 specfiles checked; 6 errors, 15 warnings. I think the "no-documentation" warnings could be ignored on subpackages. You may consider adding the license file to silence it filename-too-long-for-joliet is annoying if this is going to be burned on a CD/DVD. I am not sure if that's a blocker htaccess-file IIRC htaccess files are ignored by the default apache installation. please check if we can remove it without impacting the demo. I think all the -tests W and E can be safely ignored. Please just double check the .htdigest.1 and the zero lenght files are really the test targets. One last remark. If there is some post installation steps the user is supposed to do, please add a README.Fedora file with the appropriate note. So, I can't see any blocker here, the package is APPROVED. Gianluca: I've fixed the "no-documentation" issue by adding the license to all subpackages. The joliet problem should go away with 1.6 stable; for the rest I've verified the files are necessary and correct for the unit tests. Right now there's nothing special required to have Zend Framework run on Fedora, if it ever will, I'll add a README. Thanks again! Jess: Which way do you recommend for staying in contact, do you want to be set CC for Zend Framework related bugs in RedHat Bugzilla or should I open a Zend Jira account and forward bug reports where applicable? Sorry guys for taking so long but whenever I tried posting Bugzilla's update seemed to get into my way.. New Package CVS Request ======================= Package Name: php-ZendFramework Short Description: Leading open-source PHP framework Owners: akahl Branches: F-9 InitialCC: Cvsextras Commits: yes Hi Alex, I'd love to be set CC. It can't hurt :) cvs done. All builds successful. If anyone wants this into EPEL/RHEL, I'll consider it - but for now, the implicated long-term responsibility is quite daunting. Thank you Jess and Gianluca for your help! Hi Alex, I was thinking, it might be a good idea for the RPM %post to run the following: sed 's@\(^include_path\s*=\s*".*\)"@\1:%{_datadir}/php"@' -i /etc/php.ini So that the prefix for ZF is found within PHP's include path. I don't think it's a must and can also think of reasons why not to interfere with the php.ini but still, thought I'd suggest it none the less. Hi Jess, nice idea but actually useless for Fedora: include_path is not set in php.ini by default, instead it's hard-coded by a patch to .:/usr/share/pear:/usr/share/php (in C: INCLUDE_PATH=.:$EXPANDED_PEAR_INSTALLDIR:${EXPANDED_DATADIR}/php) Our policy is to install PEAR packages into /usr/share/pear and all other PHP packages into /usr/share/php; if anyone wishes to override the path, he or she has to make sure the default still applies. OK, I would love to hear to reason for this patch someday but it seems in that case it is indeed useless :) Package Change Request ====================== Package Name: php-ZendFramework New Branches: el5 el6 Owners: heffer Ah, sorry. Drop the el5. PHP in EL5 is too old for ZendFramework. Git done (by process-git-requests). EL-5 dropped. For the records, RHEL 5.6 got a php53 package which I guess is possible to use as a dep for the EL-5 branch Package Change Request ====================== Package Name: php-ZendFramework New Branches: el7 Owners: heffer Git done (by process-git-requests).
https://bugzilla.redhat.com/show_bug.cgi?id=421241
CC-MAIN-2017-26
refinedweb
5,331
55.13
11 September 2012 10:05 [Source: ICIS news] SINGAPORE (ICIS)--Taiwan’s state oil refiner CPC Corp has bought by tender 5,000-10,000 tonnes of methyl tertiary butyl ether (MTBE) for delivery in November, market players said on Tuesday. The parcels, which were for delivery into ?xml:namespace> CPC is now covered for November and will next seek for spot December cargoes, they added. The origins of the potential one or two cargoes have yet to be defined by the seller, traders said. “It is really a good buy,” one Chinese trader said. MTBE demand has been under downward pressure, buying interest dwindles ahead of “The market is rather stagnant now. There are offers, but just no buyers,” the Chinese trader added. “Supply condition for MTBE is quite good for October,” one trader said. MTBE is used as an additive to boost octane levels in
http://www.icis.com/Articles/2012/09/11/9594353/Taiwans-CPC-buys-5000-10000-tonnes-MTBE-for-Nov-delivery.html
CC-MAIN-2015-22
refinedweb
147
72.56
Why the hell would I want to mix Tensorflow+Windows? I am a huge Linux fan, I have been using Linux distros on my laptop since 2008-2009. On Linux, everything installs well and everything is good – Tensorflow even had a build guide for windows in version <1.0 which just said switch to Linux. I use a Thinkpad X1 extreme as my personal laptop and I like to mix between casual browsing + movies + data science + machine learning. Despite the hate for Windows, let me take 10 seconds here to explain why I want(ed) Tensorflow on Windows. On Linux, you have the open source GPU drivers and then you have the proprietary NVIDIA drivers. NVIDIA drivers are great, but since they don’t have 100% kernel level integration, they can’t do switching, and I need that. When I want to do machine learning, I want to be able watch a movie or so, without having to end my experiments or restart my workspace. On Linux, that’s the biggest drawback. No distro, driver or spaghetti script allowed me run CUDA without having to restart my desktop environment and re-route to the NVIDIA GPU from the Intel one. Optimus is a big win! This is pretty much an instruction guide to get Tensorflow 2.0alpha working on Windows with VS2019 and CUDA 10.1. Head down below for the pre-installation guide, build guide and a usage guide. You tried pip install tensorflow-gpu==2.0.0-alpha0 and it failed! Same here. I installed it under a conda environment, and it miserably failed. The install went through and the moment you do import tensorflow, everything goes bonkers. Initially as a casual thought, I got on Windows to try this experiment. I spawned up a miniconda environment, and ran pip install tensorflow and I ran into DLL hell on import. On a different note, Miniconda is great, because of their MKL support, being soooo windows friendly, and not annoying like their big brother Anaconda. Instead of quitting here, the most natural response after this seemed to be to build tensorflow for source, rather than using a pre-packaged version. A customly built tensorflow also has the advantage of being perfect for the set of libraries I have, and the hardware I have. For example, I don’t have to compile for every possible architecture of compute capability for my graphic card, but I can do it for my specific version alone. Let’s get Ready Things are not so direct with Tensorflow 2.0. There is a reason it is still in alpha, and not even in Beta. Cuda 10.1 isn’t officially supported by tensorflow, and neither is Visual Studio 2017. There are a couple of tricks here and there, which we got to jump through before we have a full fledged build environment. Assumptions - You have Windows 10 - You wanted the latest and greatest and went ahead to install Cuda v10.1. You probably have also managed to install CudNN 7.0 - Again, you live on the edge, so you went ahead and downloaded Visual Studio 2019. I had installed the Python Development tools as well while installing this. - You have installed Python one way or the other. Either you have installed it using the Anaconda distribution, or you have installed it using the official installer of Python from python.org . I’ve used Miniconda to create an environment based on Python 3.6. Anaconda is great because of its built in support for MKL etc. Also, a lot of modules are packaged, so you do not have to waste time setting up an environment. - You seem to want to break an arm and get Tensorflow 2.0 alpha on this unsupported environment. Ofcourse, the reasons for which you want it include Eager execution and the possibility of a Keras frontend. - You seem to have certain build tools – such as Git (SublimeMerge is a great visual Git client). You also seem to know how to work with the command line to a certain extent. I use the barebones git for windows. - You probably tried the official build guide and gave up The Basic stuff The following is pretty much a copy paste of Tensorflow’s official install guide. I’ve made some tweaks here and there. This would probably get outdated soon once 2.0 goes stable. - Download the Tensorflow 2.0 source : This involves a git clone. You could also download the entire source archive from - You might have to switch to the 2.0 alpha release branch if you used Git to download it. This is possible with git checkout r2.0 - Do a git pullafter a checkout, just to see whether there are some newer commits deviating from the master branch. - Alternatively, you could check out the fixed branch called tf2_winfixes from my fork of tensorflow at - Download Bazel v20. The latest version at the time of writing is Bazel 24.1 and that semi supports Visual Studio 2019. It also gets confused with Cuda 10.1, but that is probably because of a tensorflow script. However, no support is better than semi support. - Copy this to somewhere in Windows where your PATH variable points to , or add the path of the folder in which bazel exists to PATH. - Cheat 1 – Cheat Tensorflow/Bazel into thinking you really have Visual Studio 2017. This seemed to be the most seemless way to get an older version of Bazel to work with our newer versions of Visual Studio. - Navigate to the folder in which most of Visual Studio lies in. On my PC, this is : C:\Program Files (x86)\Microsoft Visual Studio - Make a hard link! mklink /D 2017 2019 - This just makes a fake folder called 2017 which in reality points to 2019. This is better than a shortcut, because a shortcut is identified with an extension, and hardlinks are stored on the filesystem. - Cheat 2 – Modify relevant Bazel scripts and Cuda Scripts to prevent build errors in the following step - For your ease, I’ve made a fork of tensorflow, with my fixes. Have a look at . Ideally, just copy the changes from the two files I have modified, or download my version of the files and patch it to your tensorflow folder. - You could download the entire repository if you want, and switch to the branch tf2_winfixes instead of cloning the official tensorflow branch. I would NOT suggest that as you would miss future changes of Tensorflow 2.0 (I will not attempt to keep the repo updated You have the all the building blocks. All you have to do is create an environment and just press build. Build! So now that you have your code and tools ready, lets set up a build environment and start the build. This is pretty much the same as Tensorflow’s official guide. 1. Make a Python Virtualenv Python Virtual environments are great! They prevent you from polluting the other python libraries already installed in your system, and to an extent, they offer portability. I use conda for my virtual environments, but you have a bunch of options – You could you virtualenv barebones, or the virtualenvwrapper or Pyenv, or even pipenv. Open up an Anaconda Prompt (or a prompt where your python commands work) conda create --name tf_build_env python=3.6.5 For a change, we will use a version of Python that is officially supported, therefore do not forget to introduce python=3.6.5 2. Install Tensorflow’s dependencies I’m using Miniconda, but I wouldn’t take the risk to use conda. So this is where we mix regular pip with a conda environment. Let’s get your tensorflow dependencies in order. The keras stuff actually allow to use tensorflow’s keras as a frontend (new as of 2.0 alpha) conda activate tf_build_env # Don't forget to activate your environment! pip install six numpy wheel pip install keras_applications==1.0.6 --no-deps pip install keras_preprocessing==1.0.5 --no-deps In addition, Install MSYS2 and add that to your PATH. Download it from here. If you install it to the default location, append C:\msys64\usr\bin to your path. Don’t forget to install the unzip and the patch packages for MSYS, I was stuck here for a while without reading that detail! pacman -S unzip patch If you have problems with your path, that is not going to work. Instead, navigate to the bin/ folder of your MSYS installation, start an MSYSterminal, and execute the command over there. 3. Configure, Build and Install. To configure, execute the following, from within the tensorflow source : python ./configure.py This is going to ask you multiple questions. It’s quite interactive, you don’t have to be a pro Linux or a windows user to understand that. We are almost there. To build, do the following :- bazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package --define=no_tensorflow_py_deps=true --copt=-nvcc_options=disable-warnings That would take an hour or so. In my case, I left it on the entire night. Just follow it for the first few minutes to check if something is broken. If you’re using a laptop, plug it in, this is going to be CPU intensive. Once you have a build, it’s time to put all those different things that you built into a python package. The following does that and stores the built package in C:\tmp\tensorflow_pkg :- bazel-bin\tensorflow\tools\pip_package\build_pip_package C:/tmp/tensorflow_pkg All you have to do now is to install it with pip. Pip supports wheels, and the package you built is a wheel so all you need to do is the following (remember to replace the version number with the version of your build, or just install whatever tensorflow package was built) :- pip install C:/tmp/tensorflow_pkg/tensorflow-version-cp36-cp36m-win_amd64.whl That’s it, you’re good to go. You go tensorflow installed. Baby Steps – Your first code If you’re not sure what your first script could be, just use the script I used to train and test MNIST. It is a copy-paste of the code on the tensorflow website. As usual, if you have any issues with the installation, comment and I’ll try to help you out. And, feel free to comment, as well, if you want to thank me xd,. 5 thoughts on “Tensorflow 2.0 alpha + Cuda 10.1 + Visual Studio 2019 Windows Build guide” YOU’RE A SAINT InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found. Saint saint! “This would probably get outdated soon once 2.0 goes stable.” 2.0 is officially released. Is there an updated version of this most-valuable guide in the offing? I shall have a look at it 🙂
https://prassanna.io/blog/tensorflow-2-0-alpha-cuda-10-1-visual-studio-2019-windows-build-guide/
CC-MAIN-2021-04
refinedweb
1,809
74.19
font size in matplotlib Hello! Could somebody please help me find a way to change the font size in Matplotlib plots. Particularly, I would like to change the size of the axis ticks labels. Thank you. vladimir Hello! Could somebody please help me find a way to change the font size in Matplotlib plots. Particularly, I would like to change the size of the axis ticks labels. Thank you. vladimir Is this example from the docs what you want?... Or even just plot(x^2,(x,0,2), fontsize=20) Note that these answers are for sagecommands. If you are looking for how to deal with matplotlib plots, it might be better to post to the matplotlib help list. Or personally, I usually just look at the matplotlib gallery (...) and find something close to what I want. Yes, I was talking about the direct usage of Matplotlib. An currently the only solution I found is this one: yticks(fontsize=SIZE) should work: import pylab import matplotlib pylab.clf() #pylab.rc('text', fontsize=18) #This is a 'bad' but effective way to change the default font size pylab.plot(xdata, ydata, '--') pylab.xlabel(r"$x$",fontsize=24) pylab.ylabel(r"$y$",fontsize=24) pylab.yticks(fontsize=40) pylab.yticks(fontsize=40) pylab.savefig('transspeclin.pdf', format='pdf') pylab.savefig('transspeclin') pylab.show() Asked: 2011-12-05 01:00:49 -0500 Seen: 13,110 times Last updated: Dec 05 '11 How to set matplotlib backend from SageTeX? How can I add arrows at the end of sage plot axis? sage vs. python integers and floats in pandas, matplotlib etc. Animate wireframe in matplotlib using IDLE How to change axes labels font size in PGF (possible bug?)? Plotting arrows at the edges of a curve How to make a custom divergent colormap? How can matplotlib graph axis be moved? Sage Os X app and MatPlotLib / LaTex connection
https://ask.sagemath.org/question/8520/font-size-in-matplotlib/
CC-MAIN-2017-34
refinedweb
314
67.35
Cheat Sheet Sage Instant Accounts For Dummies : F1: Wherever you are on Sage, you can press F1 to get the relevant help topics for that screen. F2: This handy key pulls up a small on-screen calculator you can use to check your numbers – to see that an invoice adds up correctly before entering it, for example. F3: Press this key when you’re entering details on a product invoice to open up the Edit Item Line window. Accounts Professional users can also use this key to open up the Edit Item Line window from a sales or purchase order. F4: By pressing this function key, you can view the full list of a field with a dropdown arrow. It opens a calculator in a numbers field or a calendar in a date field. F5: This key opens a currency converter in a numeric field and a spell checker in a text field. F6: Press this key to copy the information from the cell above into the current cell you’re working on. This function comes in handy when you’re entering a batch of invoices. F7: This key inserts a line above the one you’re working on. F8: Press this key to delete the line you’re on. F9: This key calculates the net amount of an invoice and the VAT element if you only have the gross amount (the amount inclusive of VAT). your sales and purchase invoices. Enter all receipts and payments from cheque stubs and paying-in slips. Enter directs debits, Bankers’ Automated Clearing Services (BACS), transfers and so on from bank statements. Reconcile bank accounts, including credit cards. Enter journals (or run wizards) for accruals, prepayments, depreciation and so on. Enter stock journals or run the Open/Closing Stock wizard Run VAT return (if due). Enter your PAYE journals and VAT journals and run the wizards if required. Run Aged Debtors and Aged Creditors reports for the period. Run Profit and Loss and Balance Sheet for the period. Run month-end (Tools >Period-End >Month-End).
http://www.dummies.com/how-to/content/sage-instant-accounts-for-dummies-cheat-sheet-uk-e.html
CC-MAIN-2016-07
refinedweb
342
73.17
In previous post, I’ve explained I want to pilot my sprinklers with a netduino board. I’ve already write couple of articles around it, including how to create a HTTP web server, set up the date and time, manage parameters, launch timers, securing the access, pilot basic IO. I’ve also shown couple of examples including this Sprinkler solution during the French TechDays. The video is available. I just love .NET Microframework (NETMF) so good to have no OS such as Linux or Windows, just a managed .NET environment! During the TechDays, I get questions on the electronic part of this demo. So in this post, I’ll explain how I did it and show code example to make it happen. Back to my Sprinklers, the brand is Gardena. The electro valves I have to pilot are bi valves. They need a positive 9V pulse to open and a 9V negative one to close. Gardena do not publish any information regarding there valves but that is what I found with couple of tests. The netduino board have a 3.3V and a 5V alimentation and the intensity is limited if alimented with the USB port. So not really usable to generate a 9V pulse. Plus I don’t want to mix the netduino electric part and the valve one. So I will use simple photosensitive octocouplers. The way it’s working is simple, you have a led and a photosensitive transistor, when lighted, the transistor open. The great advantage is you have a very fast switching totally isolated circuit. I pick a cheap circuit with 4 octocouplers (ACPL-847-000E) as I will need 4 per valves. The basic idea is to be able to be able to send some current in one way to open the valve and in the other to close it. And to pilot it, I will use the digital IO from the netduino. I will need 2 IO per vavle. One to pilot the “Open” and one to pilot the “Close”. I just can’t use only one IO as I will need to send short pulses to open and short pulses to close. I want to make sure I’ll close the valve as well as opening it. and not only one single pulse. One IO won’t be enough as I need to have 3 states: open, close and “do nothing”. When I will have the first IO open (let call it D0) at 1, I will open the valve. When the second one (D1) will be set at 1, I will close the valve. And of course when both will be at 0, nothing will happen as well as when both will be at 1. So I will need a bit of logic with the following table: So with a bit of logic, you get quickly that Pin On = D0 && !D1 and Pin Off = !D0 && D1 (I’m using a programming convention here). So I will need couple of inverters and AND logical gates. I’ve also choose simple and cheap ones (MC14572UB and CD74HC08EE4). They costs couple of euro cents. Those components have all what I need. For the purpose of this demo, I will use 2 inverted led (one green and one red) and will not send pulse but a permanent current. So it will be more demonstrative in this cold winter where I just can’t test all this for real with the sprinklers! I’ll need a new post during spring Now, when I put everything, here is the logical schema: I will have to do this for each of my sprinklers. I have 3 sprinklers in total. And here is a picture of a real realization: You can also see a push button in this picture (on the left with white and blue wires). I’m using it to do a manual open and close of the sprinklers. I’m using here the IO D10. When I’ll push the switch, it will close the valve if it is open and open it if it is closed. I’m done with the hardware part! Let see the code to pilot all this. The overall code for the Sprinkler class looks like this: public class Sprinkler { private bool MySpringlerisOpen = false; private int MySprinklerNumber; private bool MyManual = false; private OutputPort MySprOpen; private OutputPort MySprClose; private Timer MyTimerCallBack; private InterruptPort MyInterPort; private long MyTicksWait; public Sprinkler(int SprNum) { MySprinklerNumber = SprNum; MyTicksWait = DateTime.Now.Ticks; switch (SprNum) { case 0: MySprOpen = new OutputPort(Pins.GPIO_PIN_D0, false); MySprClose = new OutputPort(Pins.GPIO_PIN_D1, true); MyInterPort = new InterruptPort(Pins.GPIO_PIN_D10, false, Port.ResistorMode.PullUp, Port.InterruptMode.InterruptEdgeHigh); break; case 1: MySprOpen = new OutputPort(Pins.GPIO_PIN_D2, false); MySprClose = new OutputPort(Pins.GPIO_PIN_D3, true); MyInterPort = new InterruptPort(Pins.GPIO_PIN_D11, false, Port.ResistorMode.PullUp, Port.InterruptMode.InterruptEdgeHigh); break; case 2: MySprOpen = new OutputPort(Pins.GPIO_PIN_D4, false); MySprClose = new OutputPort(Pins.GPIO_PIN_D5, true); MyInterPort = new InterruptPort(Pins.GPIO_PIN_D12, false, Port.ResistorMode.PullUp, Port.InterruptMode.InterruptEdgeHigh); break; } if (MyInterPort != null) MyInterPort.OnInterrupt += new NativeEventHandler(IntButton_OnInterrupt); } // manual opening based on an interupt port // this function is called when a button is pressed static void IntButton_OnInterrupt(uint port, uint state, DateTime time) { int a = -1; switch (port) { case (uint)Pins.GPIO_PIN_D10: a = 0; break; case (uint)Pins.GPIO_PIN_D11: a = 1; break; case (uint)Pins.GPIO_PIN_D12: a = 2; break; } if (a >= 0) { //wait at least 2s before doing anything if ((time.Ticks - MyHttpServer.Springlers[a].MyTicksWait) > 20000000) { if (!MyHttpServer.Springlers[a].MySpringlerisOpen) { MyHttpServer.Springlers[a].Manual = true; MyHttpServer.Springlers[a].Open = true; } else { MyHttpServer.Springlers[a].Open = false; } MyHttpServer.Springlers[a].MyTicksWait = DateTime.Now.Ticks; } } } // open or close a sprinkler public bool Open { get { return MySpringlerisOpen; } set { MySpringlerisOpen = value; //do harware here if (MySpringlerisOpen) { MySprOpen.Write(true); MySprClose.Write(false); } else { MySprOpen.Write(false); MySprClose.Write(true); MyManual = false; } } } public bool Manual { get { return MyManual; } set { MyManual = value; } } //read only property public int SprinklerNumber { get { return MySprinklerNumber; } } public Timer TimerCallBack { get { return MyTimerCallBack; } set { MyTimerCallBack = value; } } } Have a look at the previous posts to understand how to use it thru a web server. This part, is only the class to pilot the sprinklers. I know I only have 3 sprinklers so there are many things hardcoded. It’s embedded and no one else will use this code. It’s more efficient like this. The size of the program has to be less than 64K (yes K and not M or G!). The netduino board has only 64K available to store the program. The initialization of the class will create 2 OutputPort per valve. As explain in the hardware part, one to open and one to close the valve. It will also create one InterruptPort to be able to manually open and close the valve. In order to understand how those ports are working, please refer to this post.The initialization will setup to port with default values. False for the pin D0 which pilot the “open” valve and True for the pin D1 which pilot the “close” valve. The IntButton_OnInterrupt function will be called when a switch will be pressed. Depending on the pin, it will close or open the valve linked to the specific pin. The Open property will open or close the valve. In my project, I’ll use pulse to open the valve, for this demo, I’m using continued output so the led will be either red (close) or green (open). The 2 leds are mounted in an opposite way so when the current is in one way it will be red and in the other it will be green. The TimerCallBack function is used when a Sprinkler need to be switch off. The associated code is: static void ClockTimer_Tick(object sender) { DateTime now = DateTime.Now; Debug.Print(now.ToString("MM/dd/yyyy hh:mm:ss")); //do we have a Sprinkler to open? long initialtick = now.Ticks; long actualtick; for (int i = 0; i < SprinklerPrograms.Count; i++) { SprinklerProgram MySpr = (SprinklerProgram)SprinklerPrograms[i]; actualtick = MySpr.DateTimeStart.Ticks; if (initialtick>=actualtick) { // this is the time to open a sprinkler Debug.Print("Sprinkling " + i + " date time " + now.ToString("MM/dd/yyyy hh:mm:ss")); Springlers[MySpr.SprinklerNumber].Manual = false; Springlers[MySpr.SprinklerNumber].Open = true; // it will close all sprinkler in the desired time of sprinkling. Timer will be called only once. //10000 ticks in 1 milisecond Springlers[MySpr.SprinklerNumber].TimerCallBack = new Timer(new TimerCallback(ClockStopSprinkler), null, (int)MySpr.Duration.Ticks/10000, 0); SprinklerPrograms.RemoveAt(i); } } The ClockTimer_Tick fonction is called every 60 seconds. It check if a sprinkler need to be switch one. If yes, a timer is created and associated with the TimerCallBack timer. And this timer will be called after the amount of time programmed to be open. static void ClockStopSprinkler(object sender) { Debug.Print("Stop sprinkling " + DateTime.Now.ToString("MM/dd/yyyy hh:mm:ss")); //close all sprinklers if automatic mode for (int i = 0; i < NUMBER_SPRINKLERS; i++) { if (Springlers[i].Manual == false) { Springlers[i].Open = false; Springlers[i].TimerCallBack.Dispose(); } } } The function is quite simple, it just call the Open property to close all the spinklers. I’ve decided to do this as in any case, I don’t have enough pressure to have all them open. Of course, to be complete, all timers will be close. The Manual check will not close the sprinkler. So that’s it for this post. I hope you’ll enjoy it! And this time, I’m not in a plane to write this post, I’m on vacation
http://blogs.msdn.com/b/laurelle/archive/2012/02/20/some-hard-to-pilot-a-sprinkler-with-net-microframework.aspx
CC-MAIN-2014-52
refinedweb
1,578
68.06
Hi On Mon, Mar 20, 2006 at 08:47:48PM +0200, Oded Shimon wrote: > On Mon, Mar 20, 2006 at 11:54:18AM -0500, Robert Edele wrote: > > Here's part three of my snow asm patch, which covers the mmx and sse2 > > implementations of ff_snow_vertical_compose(). > > You might as well send all your mmx, a patch is only seperable when the > different parts really cover different things or are logical as seperate. > this might as well be just a single patch. > > > Index: libavcodec/i386/dsputil_mmx.c > > =================================================================== > > RCS file: /cvsroot/ffmpeg/ffmpeg/libavcodec/i386/dsputil_mmx.c,v > > retrieving revision 1.113 > > diff -u -r1.113 dsputil_mmx.c > > --- libavcodec/i386/dsputil_mmx.c 7 Mar 2006 22:45:56 -0000 1.113 > > +++ libavcodec/i386/dsputil_mmx.c 20 Mar 2006 16:45:48 -0000 > > @@ -2564,6 +2564,9 @@ > > } > > #endif > > > > +extern void ff_snow_vertical_compose97i_sse2(DWTELEM *b0, DWTELEM *b1, DWTELEM *b2, DWTELEM *b3, DWTELEM *b4, DWTELEM *b5, int width); > > +extern void ff_snow_vertical_compose97i_mmx(DWTELEM *b0, DWTELEM *b1, DWTELEM *b2, DWTELEM *b3, DWTELEM *b4, DWTELEM *b5, int width); > > I still HIGHLY dislike these declerations. A much better approach would be > to either use static functions and #include the .c file directly, or to use > some common header. I preffer the former as an added bonus there is no > (additional) namespace bloat. iam against #including the c file, dsputil_mmx.c is large enough and patch looks ok [...] -- Michael
http://ffmpeg.org/pipermail/ffmpeg-devel/2006-March/014687.html
CC-MAIN-2015-06
refinedweb
227
64.91
Introduction Chainspace is a smart contract platform offering speedy consensus and unlimited horizontal scalability. Our goal is to build a blockchain platform that scales like the web scales: adding machines should increase overall capacity, and there should be no built-in scalability limitations. At present, our running code has two main components: blockmaniais a consensus component. It implements the leaderless consensus protocol detailed in the Blockmania academic paper. It takes transactions to any participating node as input, and outputs an immutable and deterministic total ordering of transactions. Each set of Blockmania nodes produces an immutable blockchain. chainspaceis a sharding component. It provides an implementation of the Sharded Byzantine Atomic Commit (SBAC) protocol detailed in the Chainspace academic paper. Within the overall Chainspace system, each Blockmania blockchain acts as a shard, and SBAC enables cross-shard transactions. You should use the Go implementation at. Originally, both of these components existed within the same Go codebase. As of this writing, we are halfway between splitting these two components, so Blockmania exists in two places: - the repo - the repo (where it’s built into the project) A project wanting only fast consensus, but no sharding, should be able use Blockmania by itself. For projects that need the added horizontal scalability of sharding, the Chainspace component would be added. From a platform development point of view, getting a build working with the extracted Blockmania codebase in the blockmania repo is a big priority. But at least this way you can use Blockmania on its own, if that’s what you want. Quickstart Installation - Install Go 1.11. Earlier versions won’t work. - Install tmux git clonethe code into $GOPATH, typically ~/go/src/chainspace.io/prototype make installwill install to $GOPATH/bin/chainspace. Make sure this location is on your path. make contract If everything worked, the chainspace command should be installed and available. Running it will give an overview of available functionality. Help flags are available, have a look. Initialize a network The chainspace init {networkname} command, by default, creates a network consisting of 12 nodes grouped into 3 shards of 4 nodes each. Initialize a new Chainspace network called “foonet” chainspace init foonet The setup you get from that is heavily skewed towards convenient development rather than production use. It will change as we get closer to production. Have a look at the config files for the network you’ve generated (stored by default in ~/.chainspace/{networkname}). The network.yaml file contains public signing keys and transport encryption certificates for each node in the network. Everything in network.yaml is public, and for the moment it defines network topology. Later, it will be replaced by a directory component. Each node also gets its own named configuration directory, containing: - public and private signing and transport encryption keys for each node - a node configuration file - log output for the node Running nodes In the default localhost setup, nodes 1, 4, 7, and 10 comprise shard 1. Run those node numbers if you’re only interested in seeing consensus working. Otherwise, start all nodes to see sharding working as well. Here’s how you can init a network and start each node individually chainspace run foonet 1 chainspace run foonet 4 chainspace run foonet 7 chainspace run foonet 10 A convenient script runner is also included, so you don’t need to start nodes individually in development. The short way to run it is script/run-testnet foonet This will fire up a single shard which runs consensus, and make it available for use. REST documentation Many parts of the system are available to poke at via a RESTful HTTP server interface. After starting a node locally, you can see what’s available by going to - where 9001 is the node’s HTTP server port as defined in ~/.chainspace/{networkname}/node-{nodenumber}/node.yaml. In the default generated config, REST port number is:{9000+nodenumber} for each node. Consensus Introduction Consensus algorithms allow nodes in a distributed system to agree on a specific order of transactions without a central authority. Incoming transactions arrive at nodes in an arbitrary order, potentially even at the same time. Participating nodes then talk to each other, and agree on an ordering of transactions which is guaranteed to be the same for all of them. Blockmania is a Byzantine Fault Tolerant consensus algorithm. It functions correctly even if 1⁄3 participants are faulty or actively acting as attackers, with no loss of liveness or safety. In conditions where more than 1⁄3 of nodes are bad, Blockmania prioritises safety over liveness. Attackers or faulty nodes can stop processing for the cluster, but they cannot inject bad data. Blockmania nodes group incoming transactions into blocks, and exchange signed data and witness statements which are then propagated to all participating nodes. Each node writes out the same signed sequence of blocks, or blockchain, as every other node participating in a given consensus group. A chain of concatenated block hashes ensures that data cannot be arbitrarily changed by any participating nodes, even dishonest ones. If you’re curious about how it works at a deeper level, please read the Blockmania academic paper. The Chainspace codebase builds Blockmania into itself, but doesn’t expose any network interface for sending transactions. If you’re a Go coder, you can however access it programmatically using Go, something like this: Sending data to consensus whne you have compiled a client. This only works in Go. import "chainspace.io/chainspace-go/node" s, err := node.Run(cfg) if err != nil { log.Fatal(err) } s.Broadcast.AddTransaction(txdata, fee) Using Blockmania by itself It is also possible to use Blockmania by itself. - Clone the Blockmania repo - Run make install - Run blockmania init -network-name foomania This will generate, by default, a 4-node Blockmania network. Config files are at ~/.blockmania/foomania. Each node gets its own separate configuration. You can run each node (1-4) by doing: blockmania run -network-name foomania -node-id 1 blockmania run -network-name foomania -node-id 2 blockmania run -network-name foomania -node-id 3 blockmania run -network-name foomania -node-id 4 For development purposes, there’s also scripts/run-testnet which will generate a testnet and run it in a tmux. Using REST Each Blockmania node starts a REST API by default. Docs are available at (for node 1, each successive node increments the port number by 1). Swagger docs are available at Subscribing to Blockmania events You should be able to see the websocket routes in the included Swagger documentation on each node (although you won’t be able to try it out as our Swagger server doesn’t support websockets). For standalone Blockmania, the websockets server(s) start on (for node 1, each successive node increment the port number by 1). Socket.io websockets don’t work, as they’re incompatible with the Go websocket implementation we’re using. Other websockets should work. Alternately, you can use the pubsublistener, a Go utility we’ve included for you to watch events as they happen. It works like this: pubsublistener -addrs 1=localhost:7001,2=localhost:7002,3=localhost:7003,4=localhost:7004 Try sending a transaction like this one through the { "tx": [ 1 ] } You should see the pubsublistener print out the Base64-encoded value of your input a few seconds after you send it. Sharding Introduction Chainspace allows multiple Blockmania blockchains to talk to each other. Each grouping of Blockmania nodes becomes a Chainspace shard. Shards can operate independently of each other, so the system as a whole can scale horizontally and benefit from parallelism. Chainspace is a protocol which coordinates operations across more than one shard. Crucially, operations are atomic across shards: a transaction touching multiple contracts in multiple shards will either succeed or fail in its entirety. Architectural overview There are a few different moving parts: - Clients - Contracts - Checkers - Subscriptions The basic application flow is as follows: - The client sends information to a contract as a network request (currently implemented as a REST service). The contract checks its local datastore, and returns a transaction. - The client sends the transaction to all nodes in the shard. Each node runs a checker on the transaction; the checker validates the transaction against the current state of the chain, signs the transaction and returns it. - The client bundles up all the transaction signatures, and sends the bundled signatures and transaction data to one of the nodes in the shard. If more than 2⁄3 of nodes agreed that the transaction is valid, the transaction is then sequenced into the blockchain. - Clients may subscribe to the blockchain to receive speedy notifications of updates. Clients, contracts, and checkers can be implemented in any language. Your first application Assuming you know one of the languages we’ve documented, follow along by selecting it from the code examples on the right hand side of the page. The easiest way to learn is by doing, so we’ll show you how to build a smart contract for coins as a learning exercise, and explain Chainspace in more detail along the way. Clients and client libraries Client libraries exist in multiple languages, and we’re in the process of writing more. Please let us know if you’re interested in contributing a client in your favourite language. Contract Checker Emulator Sending transactions TODO. Stu’s got a transaction we can use now I think. This will be where we introduce the idea of different clients for each language. We can make that point first (hopefully to reduce cognitive load on our users). But later, we can detail how the whole process works, so people can develop their own clients in languages we don’t yet support. Using clients Using REST Determinism Key-Value store The Key-Value store allows you to retrieve the current value of a given key. To use it: - Install Docker for your platform - Test that Docker is working. Run docker run hello-worldin a shell. Fix any errors before proceeding. - Set up a testnet Building Build the docker image for your contracts. You can do this using the makefile at the root of the repository: $ make contract In the future chainspace will pull the docker image directly from a docker registry according to which contract is being run. At present during development we’ve simply hard-coded in a dummy contract inside a Docker container we’ve made. You may need to initialize the contract with your network: run chainspace contracts <yournetworkname> create. Next, run the script script/run-sharding-testnet. $ ./script/run-sharding-testnet 1 This script will: - initialize a new chainspace network testnet-sharding-1with 1shard. - start the different contracts required by your network (by default only the dummy contract) - start the nodes of your network The generated configuration exposes a HTTP REST API on all nodes. Node 1 starts on port 8001, Node 2 on port 8002 and so on. Seeding trial objects using httptest In order to test the key value store you can use the httptest binary (which is installed at the same time as the chainspace binary when you run make install). $ httptest -addr "0.0.0.0:8001" -workers 1 -objects 3 -duration 20 This will run httptest for 20 seconds, with one worker creating 3 objects per transaction. When it starts running you should see output like this on stdout: seeding objects for worker 0 with 0.0.0.0:8001 new seed key: djEAAAAA7rrTmXyvwexDRbDXWAU4n/gJPBJOkB8BiXYe4+VKmaNHpYCMrXNVoA2Siiau+e9ouPOZOG5CNLhiCDQ2KAzU+9+36tPibLbBwYx/B7M9TpGbDgD7VBL5XakoVf87VQWx new seed key: djEAAAAAhXIV02TbSOW4+H1I/qOQ+8hOYnXRb/xVEAgkSzuEPgHM/BjK7g1Rv8IE5LmwmfxGnMrBMlO2XNX3W1wiZNxYkDB1ywDd210TGUt7Q7ZEqCqa/SCB7L3q6tfk2hy22cCU new seed key: djEAAAAATS95HL0ehRGfXleJaTkfdR8SqFSwtC1G34YhoGRhu4gqeqi6LMlzVkxTkbN/niEXcQI7dpFwSfcVuUQBmfHWZf8ZRuNNhyDqWHiR2nOEb5Y1vNiQPu3PVepaoaJFYZN4 creating new object with label: label:0:0 creating new object with label: label:0:1 creating new object with label: label:0:2 seeds generated successfully This shows you the different labels which are created by the tests and associated to objects. You can then use these to get the id / value of your objects. Retrieving objects Call this http endpoint in order to retrieve the chainspace object’s versionId associated to your label. curl -v -X POST -H 'Content-Type: application/json' -d '{"label": "label:0:1"}' Call this http endpoint to retrieve the object associated to your label: curl -v -X POST -H 'Content-Type: application/json' -d '{"label": "label:0:1"}' You can see that the versionId associated to your label evolves over time, when transactions are consuming them. Multiple shards Running the Key-Value store with multiple shards should work fine, but there’s a caveat. Objects will change their versionIds on each update, and currently we’re sharding on versionId. So if you’re retrieving using versionId, your object may appear to migrate between shards whenever you update it. If you retrieve your object by label instead, you should be able to retrieve it, but only if you know which shard it’s living in. TODO: this interacts with the planned Directory component. Pubsub Interested clients can receive streaming notifications of changes to the blockchain through either websockets or raw TCP sockets. This allows developers to: - react immediately to changes with low latency - stream changes to any datastore when query complexity is greater than what the built-in key-value store can provide. Websockets --enable-websockets chainspace footest --shard-count 1 --enable-pubsub true You should be able to see the websocket routes in the included Swagger documentation on each node (although you won’t be able to try it out as our Swagger server doesn’t support websockets). Assuming you’re connecting on the first node on your local machine, you can make a websocket connection: <script> var ws = new WebSocket("ws://0.0.0.0:9001/api/pubsub/ws"); ws.onmessage = function (event) { console.log(event.data); } </script> TCP sockets Port {7000 + nodenumber} by default (increments with the nodenumber). You can specify the port Smart contracts Overview The moving parts: - Contracts - Checkers - Emulator - Language-specific clients Contracts Checkers Emulator Clients Note: we have a healthcheck URL at /healthcheck for all contracts. Running a testnet So far, this documentation has assumed that all nodes are running on a single machine. What if you want to run a testnet, with each node on a separate machine on a local network, or across the internet? We have not yet implemented seed nodes, or a cryptographically secure method of peer discovery. So we have implemented some simple methods for stitching together networks until we are ready to commit to a final system. Nodes currently find each other in two ways: - mDNS discovery - registry mDNS discovery In development or on private networks, nodes can discover each other using mDNS broadcast. This allows zero-configuration setups for nodes that are all on the same subnet. Registry Use the Registry when configuring nodes across the public internet. We run a public registry at. You can run your own Registry if you want. Init your network with the --registry flag if you plan to use a Registry server: chainspace init foonet --registry registry.chainspace.io The Registry will then appear in each node’s node.yaml: registries: - host: registry.chainspace.io - token: 05b16f5d45377baff52c25e2c154a00b126f7b75b7345794d3e15535b49a03f955b9c355 The randomly-generated registry token ensures that a unique shared secret is used on a per-network basis so that multiple networks can share the same registry without any additional setup. Nodes will automatically register themselves with the network’s registry server when they start up. It is possible to use both the Registry and mDNS discovery at the same time, and have a sort of mixed network in operation. Performance testing Consensus The chainspace genload {networkname} {nodenumber} command starts up the specified node, and additionally a Go client which floods the consensus interface with simulated transactions (100 bytes by default). To get some consensus performance numbers, run this in 4 separate terminals: Starting genload on nodes manually (make sure you’ve already got nodes running!) rm -rf ~/.chainspace/foonet chainspace init foonet chainspace genload foonet 1 chainspace genload foonet 4 chainspace genload foonet 7 chainspace genload foonet 10 The client keeps increasing load until it detects that the node is unable to handle the transaction rate, based on timing drift when waking up to generate epochs. At that point the client backs off, and in general a stable equilibrium is reached. The genload console logs then report on average, current, and highest transactions per second throughput. A convenient script runner is included. Using the convenience script, you don’t need to start the nodes yourself; the script will handle it rm -rf ~/.chainspace/foonet chainspace init foonet script/genload-testnet foonet This will start nodes 1, 4, 7 and 10 in tmux terminals, pumping transactions through automatically. Sharding [TODO: Jeremy’s got some perf test scripts which we can document.] FAQs Blockmania If other consensus protocols could work, why did you build Blockmania? When we needed a consensus protocol for Chainspace, we couldn’t find a PBFT implementation we liked in a language we liked. Tendermint was the best candidate available, but at the time there wasn’t really formal description of how it works (this was remedied in summer 2018). So we decided to come up with our own consensus protocol, which we understood from the ground up. What’s the difference between Blockmania and similar protocols? - Blockmania is leaderless - Blockmania radically separates network code from consensus code Leaderlessness A lot of the complexity in other PBFT-descended protocols comes from the fact that each round of consensus is typically determined by a single leader. For the next round, or block, leadership then moves to another leader. This is fine when the leader is honest; but if the leader is an attacker, the other nodes all need to be able (a) to detect that the leader is an attacker, and (b) elect a new leader, while the older leader is sending potentially confusing or contradictory messages. The code needed to handle that is complex. We started with a simple idea. If leaders add a lot of complexity, can we get rid of leaders? And we found that we could. This allowed us to write simpler code. Simpler code is typically more secure and also easier to implement, so it was a big win. Separation of networking and consensus The fact that Blockmania is leaderless also allowed us to separate networking code from consensus. Nodes simply take transactions in, make blocks, and broadcast them to other nodes, as fast as they can. A separate step, totally separate from network code, allows each node to independently reach a view of what other nodes have seen based on a simulation and their signed witness statements. The complete separation of network code from consensus again simplifies the protocol, making it easier to implement and probably more secure than it would otherwise have been. Developer documentation TODO: there’s platform dev, and also a need for SBAC client dev in multiple languages. Platform development You will need to the following to get Chainspace running locally: To test that Docker is working run docker run hello-world in a shell. Fix any errors before proceeding. With these requirements met, run make install. This will build and install the chainspace binary as well as the httptest load generator. You can generate a new set of shards, start the nodes, and hit them with a load test. See the help documentation ( chainspace -h and httptest -h) for each binary. Committing Please use Git Flow - work in feature branches, and send merge requests to develop. Versioning The version of the Chainspace application is set in the VERSION file found in the root of the project. Please update it when creating either a release or hotfix using the installed Git hooks mentioned above. Adding dependencies Dependency management is done via dep. To add a dependency, do the standard dep ensure -add <dependency>. We attempted to use go mod (and will go back to it when it stabilises). Right now mod breaks too many of our tools. Client development We currently have a Chainspace client for JavaScript. More would be nice.
https://chainspace.io/docs/
CC-MAIN-2019-13
refinedweb
3,334
53.71
I have read about implementing the bowling game XP-style many years ago in Robert Martin's book 'Agile Software Development'. The episode can be found online as well. Recently he has recently been learning Clojure and attempted to implement the bowling game in Clojure. It is a nice exercise, and although I like Clojure, I do not regard myself capable in any way to repeat such an attempt. Apart from that Stuart Halloway, author of the excellent 'Programming in Clojure' book, has already done this in a much better way than I ever could. I'm slightly more familiar with Scala, so I thought it would be a nice exercise to try some functional bowling using that. My Scala knowledge is in a deplorable state, stuck at pre-beginner level, so I run the risk of making a complete fool of myself. However I'll take the chance and at least try to learn from the experience. First, let's re-iterate the rules of bowling: - The game consists of 10 frames.. So in some way we need to keep track of frame scores for a game. For simplicity, I'm just using a sequence of integers that represents a game. Each integer just represents the number of pins knocked down by each throw. This sequence should be divided, or transformed if you like, into a sequence of frames. I started with something like this: case class Frame(first:Int, second:Int, third:Int) { override def toString() = {"firstThrow: " + first + " secondThrow" + second + " thirdThrow: " + third} } def frames(g:List[Int]):List[Frame] = { g match { case Nil => Nil case x::Nil => List(new Frame(x, 0, 0)) case x::xs::Nil => List(new Frame(x, xs, 0)) case 10 :: x::xs => new Frame(10, x, xs.head) :: frames(xs.tail) case x::xs::xss if ((x + xs) == 10) => new Frame(x, xs, xss.head) :: frames(xss.tail) case x::xs => new Frame(x, xs.head, 0) :: frames(xs.tail) } } But this might be an excellent candidate for the thedailywtf. This surely cannot be the way how to write proper Scala, and apart from that, it's not very generic and extensible. So, after thinking about the matter a bit, a second attempt. First of all, why a class when we have functions? A sequence of frames can just be expressed as a list of lists, like so: def frames(g:List[Int]):List[List[Int]] = { if (g.isEmpty) Nil else { val throws = throws_for_frame(g) g.take(throws)::frames(g.drop(throws)) } } def frames(g:List[Int]):List[List[Int]] = { if (g.isEmpty) Nil else g.take(throws_for_frame_score(g))::frames(g.drop(throws_in_frame(g))) } def throws_for_frame_score(rolls:List[Int]):Int = { if (strike(rolls) || spare(rolls)) 3 else 2 } def throws_in_frame(rolls:List[Int]):Int = { if (strike(rolls)) 1 else 2 } def strike(rolls:List[Int]):Boolean = { rolls.headOption.getOrElse(false) == 10 } def spare(rolls:List[Int]):Boolean = { rolls.take(2).foldLeft(0)(_+_) ==10 } Not too bad, if you look at it from a distance some little helper functions like strike and spare even seem to be related to some of the bowling rules defined above. A frame is now just a list consisting of either 2 or 3 elements, depending on whether a strike or spare has been scored. Each element is an integer containing the pins that are knocked down. Scoring a game now becomes a rather trivial affair, we just take the 10 frames that are bowled and add up the pins scored in each frame. def score_game(g:List[Int]):Int = { framescores(g).foldLeft(0)(_+_) } def framescores(game:List[Int]):List[Int] = { frames_for(game).take(10).map(l => l.foldLeft(0)(_+_)) } def framesThrown(g:List[Int]) = { frames(g).length } In the REPL, you can easily test these functions: scala> framesThrown(List(4,5)) res1: Int = 1 scala> framesThrown(List(4,5,10,3,4,6,7,2)) res2: Int = 4 scala> framescores(List(4,5)) res3: List[Int] = List(9) scala> framescores(List(4,5,6,3)) res4: List[Int] = List(9, 9) scala> framescores(List(5,5,6,3)) res5: List[Int] = List(16, 9) scala> framescores(List(5,5,6,3,10,10,3)) res6: List[Int] = List(16, 9, 23, 13, 3) scala> framescores(List(10,10,10,10)) res4: List[Int] = List(30, 30, 20, 10) And to satisfy Uncle Bob, some unit tests: class BowlingTest { def repeat[T](n: Int)(what: => T): List[T] = { if (n==0)List.empty else what::repeat(n-1)(what) } @Test def scoreForTwoThrows = { assertEquals(9, score(List(4,5))) } @Test def strikeShouldGiveTwoExtraThrowsForScore = { assertEquals(List(30,30,20,10), framescores(repeat(4)(10))) assertEquals(24, score(List(5, 3, 4, 5, 3,4))) assertEquals(32, score(List(10, 3, 4, 5, 3))) } @Test def spareShouldGiveOneExtraThrowsForScore = { assertEquals(24, score(List(5, 3, 4, 5, 3, 4))) assertEquals(30, score(List(5, 5, 4, 5, 3, 4))) } @Test def spareAtEndShouldGiveOneExtraThrowsForScore = { assertEquals(60, score(List(4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 6, 5))) assertEquals(54, score(List(4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 5, 5))) } @Test def strikeAtEndShouldGiveTwoExtraThrowsForScore = { assertEquals(65, score(List(4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 10, 5, 5))) assertEquals(54, score(List(4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 5, 5))) } @Test def tears = { assertEquals(299, score(List(10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 9))) } @Test def perfectGameShouldScore300 = { assertEquals(300, score(repeat(12)(10))) } @Test def allOnesShouldScore20 = { assertEquals(20, score(repeat(20)(1))) } } It's still a bit simplistic (for example, framesThrown doesn't really check whether a frame is finished, i.e. whether two or three throws have been made, there's no validation that the number of pins knocked down cannot be larger than 10, games can have an infinite length, etc, etc). However I'll leave it at this for the moment, and will try to perfect it later. It has already been a nice exercise so far in any case. As stated, my Scala knowledge is such that this implementation can most likely be heavily improved upon. If you have suggestions for improvements (or have an implementation of your own) your comments are highly appreciated. Update As Ilian Berci has pointed out, my satisfaction at my original attempt was completely misguided. Somehow I managed to misinterpret the bowling rules completely. I managed to get into my mind that a strike would give two extra throws, in each frame. However, if you have once taken up a game of bowling yourself you know that this is complete nonsense. It is only so that the next two throws (which are then part of another frame) contribute to the frame in which the strike is scored. It is only at the tenth frame that the bowler gets an extra frame (consisting of maximum two throws) if he scores a strike at his final games. So, my original framescores function and helper function looked like below: def frames(g:List[Int]):List[List[Int]] = { if (g.isEmpty) Nil else { val throws = throws_for_frame(g) g.take(throws)::frames(g.drop(throws)) } } def throws_for_frame(rolls:List[Int]):Int = { if (strike(rolls) || spare(rolls)) 3 else 2 } which is utterly wrong, since it places all the throws that contribute to a frame score in the same frame (and removes them from the remaining games list). This makes that a game in my original version could take up to 30 throws, instead of the 21 maximum throws possible. So, in my original version, the framescores functions behaved like this: scala> framescores(List(5,5,6,3)) res14: List[Int] = List(16, 3) scala> framescores(List(10,10,10,10,10)) res15: List[Int] = List(30,20) Fixing it didn't take much time however, which was a relief because I thought I had completely messed up. I've updated the post with a version which is (hopefully) now correct (so you can see the fix in the frame function and helper functions throws_for_frame_score and throws_in_frame), and also updated the tests into some more sensible ones. The wtf version is still left to its original, fixing that would probably even make it more horrible than its original. Thanks to Ilian for pointing this out, clearly can't beat Uncle Bob. Andrew Phillips - July 27, 2009 at 8:00 am Would Spare, Strike and "Normal" (for want of a better name) not be good candidates for a case class? In your second version you've defined functions for them that essentially know how to "extract" the relevant scores for this type of frame from the complete list of scores, which looks rather like a constructor. This would also seem to be a fairly natural place for validation. Furthermore, you'd be back to List[Frame] as the type of a game, which is a little more expressive than List[List[Int]]. Arjan Blokzijl - July 28, 2009 at 6:07 pm Hi Andrew, Indeed, my version deals with data structures, and functions that essentially know how to handle that (for which the first attempt was embarrassingly wrong by the way, see the update on the post). But your suggestion could certainly be a viable alternative. Perhaps I might give that a try as well, to see how it looks compared to my current version. For me, there's still a steep learning curve in writing proper Scala code. Jason Zaugg - July 27, 2009 at 8:59 am >rolls.headOption.getOrElse(false) == 10 I would write this: rolls.headOption.map(_ == 10) getOrElse false Or even better, using the typesafe equals from Scalaz 4 import scalaz.Scalaz._ rolls.headOption.map(_ === 10) getOrElse false Or even: import scalaz.Scalaz._ ~rolls.headOption.map(_ === 10) This expands with the help of implicits to: val mappedOption = rolls.headOption.map((i: Int) => Identity.ToIdentity(i).===(10)(Eq.IntEqual)) val wrappedOption = OptionW.OptionTo(mappedOption) val result = wrappedOption.unary_~(Zero.BooleanZero) // Zero.BooleanZero defines that 'false' is the 'zero', ie the default, for the type Boolean. println(result) ilan berci - July 27, 2009 at 8:59 pm Uncle bob would not be satisfied because the algorithm is flawed.. Remember that in the case of spares and strikes, some of the balls count their value in 2 or more frames.. scala> framescores(List(5,5,6,3)) res14: List[Int] = List(16, 3) should be: List(16, 9) Another example where the test case only asserts the error condition.. (The class example also exposes the same logic error).. please write your test cases prior to writing the code if you really want to appease uncle Bob. Arjan Blokzijl - July 28, 2009 at 7:09 am Hi Ilian, You're correct, my reading of the bowling rules was completely wrong. I should just read the rules better, or possibly go out more... In any case, I've updated the post with your comments to a version that now (hopefully) is correct, thanks for pointing this out. Sander Mak - July 27, 2009 at 9:50 pm I thought the pattern matching approach wasn't completely out of whack, this would be how I'd do it: Also note the type synonym Frame, which gives the nice name (though only used once in my code) without having to define case classes. Oh, and I noticed that I forgot to take(10) of the tokenized list. Furthermore I took the behavior from your example, so it suffers from the same problem ilan just pointed out I guess. Arjan Blokzijl - July 28, 2009 at 9:27 am Hello Sander, Thanks for your reply, I agree that you're pattern match at least looks better than mine. However, it is indeed flawed as Ilan has pointed out. Personally I find the implementation using little functions a bit more expressive, demonstrating a bit more what the rules of the game are. I've updated to post with a more correct implementation (although I left the first pattern match intact, would perhaps a good exercise to try if we still can capture the whole game in one pattern match). Arjan
http://blog.xebia.com/2009/07/25/functional-bowling-in-scala/
CC-MAIN-2015-11
refinedweb
2,060
68.3
Step 1. Make it COM-visible. Right click on the project In VS2005 and select "Properties". Under the "Application Tab" click the "Make assembly COM-Visible" checkbox. Then click on the "Build" tab and check the "Register for COM interop" checkbox in the "Output" section. Save. If you try and build the project now you will find that you can reference the project in VB6, and can instantiate it in code, but you cannot view the methods in the Object Browser or get VB6 IntelliSense in the methods. You can call the method and get your result back though. Step 2. Implement an interface. Your class should implement an interface. Convert your class from this: public class MyClass { public String testInOut(String sIn) { return ret = "Returning [" + sIn + "] at " + System.DateTime.Now.ToString(); } } to this: public interface IMyClass { String testInOut(String sIn); } public class MyClass : IMyClass { public String testInOut(String sIn) { return ret = "Returning [" + sIn + "] at " + System.DateTime.Now.ToString(); } } Or, even easier, select the class and method definition and right-lick, choosing "Refactor-Extract Interface..." and check the method in the box that pops up. This will generate an interface into a new file and modify your class to implement it. Step 3. Expose your methods. Get .Net to expose your methods to VB6. Do this by adding the [ClassInterface(ClassInterfaceType.AutoDual)] attribute to the public class, and importing the InteropServices library: ... using System.Runtime.InteropServices; namespace MyNamespace { [ClassInterface(ClassInterfaceType.AutoDual)] public class MyClass : MyProject.IMyClass { If you rebuild your .Net dll now, you should be able to see the methods in the VB6 Object Browser. Now we need to make sure we can deploy it remotely. Follow these additional steps. Step 4. Control your GUIDs. Add GUIDs to both the Interface and the public class. To get a GUID, click "Tools-Create GUID" and choose option 4 "Registry Format". Click "New GUID" and then "Copy". Paste as a "Guid" attribute into your interface, stripping out the curly brackets. Then generate a new GUID and paste an attribute into your public class. The GUID against the public class is the key one because it is the one that is looked up in the registry to determine which DLL your program will use. Specifying the GUIDs in these attributes ensure that the same GUID is used when you compile your build. [Guid("3A7E8E37-3B6B-4cda-9A47-EBD0D1D11812")] interface IMyClass and [ClassInterface(ClassInterfaceType.AutoDual)] [Guid("87E9EBBD-CE79-4336-BB7F-F070483C442C")] public class MyClass : MyProject.IMyClass Step 5. Sign the assembly with a strong name. Go into the "Project-Properties-Signing" tab and select "Sign Assembly" and choose the "string name" combo entry. Now we are read to deploy this remotely. Follow the steps below on the remote machine: Step 6. Install into the GAC. Copy the file to the remote machine (which already has the .Net framework installed) into some directory {myfolder}. Then copy it into the c:\windows\assembly folder. This is the location of the GAC (Global Assembly Cache). .Net applications on this machine will now be able to use it. However VB6 will still not be able to find it. Step 7. Register the DLL. Register the DLL with COM via the regasm tool: regasm c:\{myfolder}\MyProject.dll The regasm tool is installed as part of the .Net framework and can be found here: C:\WINDOWS\Microsoft.NET\Framework\v2.0.5072 This will register the type library. If you try to use the COM object via VB6 it will pick up the version from the GAC. The version you have copied to {myfolder} can be deleted if required. And that is it! 1 comment: Step 6 didn't work for me; I used "GACUTIL /i my.dll" at the command line and that did the trick
http://juststuffreally.blogspot.com/2008/02/steps-to-make-your-net-dll-useable-from.html
CC-MAIN-2017-30
refinedweb
629
69.28
30 December 2008 18:36 [Source: ICIS news] TORONTO (ICIS news)--Dow Chemical may be able to walk away from its takeover of Rohm and Haas “net cash positive” after Kuwait cancelled the K-Dow joint venture, analysts at HSBC said in a research note on Tuesday. ?xml:namespace> Kuwait – which could be as high as $2.5bn, according to some reports - as it was keen to maintain good business relations in the oil and gas rich Also, Dow would have to prove in court that In a comparable case, US petrochemicals producer Huntsman this month pocketed $1bn from private equity firm Apollo after Apollo pulled out of a deal to take over Huntsman and merge it with Hexion Specialty Chemicals. HSBC, for its part, remains “overweight” on Dow Chemical with a share price target of $30.00, based on an attractive valuation and a dividend yield of over 8% which Dow was not likely to cut, it said. Dow's stock was up 0.78% to $15.44 on Tuesday in Meanwhile, Rohm and Haas' shares were up 9.86% to $58.6 on speculation that Dow would be forced to go through with the takeover deal. ($1 = €0.71)
http://www.icis.com/Articles/2008/12/30/9181123/dow-could-walk-from-rohm-deal-cash-positive-hsbc.html
CC-MAIN-2015-22
refinedweb
201
69.21
ROS2 Publish and subscribe in same node I am trying below code where I have a ROS2 foxy node that first publishes camera frames to a topic camera_frame and then is supposed to be listening to another topic drive_topic. I have created a timer for continuously publishing messages. The issue is that this node would only publish messages, but it never triggers the listener_callback method. I suspect this could be because I have a timer for publishing messages, and it never comes out of that loop. I have confirmed that messages are being present on drive_node and from rqt_graph that the current node is subscribed to this topic. Can anyone help me why this node is not able to subscribe to drive_topic. import rclpy from rclpy.node import Node from sensor_msgs.msg import CameraInfo, Image from cv_bridge import CvBridge, CvBridgeError from std_msgs.msg import String from std_msgs.msg import Int16MultiArray import cv2, math, time class DroneNode(Node): def __init__(self): super().__init__('drone_node') self.publisher_ = self.create_publisher(Image, 'camera_frame', 10) timer_period = 0.5 # seconds self.timer = self.create_timer(timer_period, self.timer_callback) self.i = 0 # Create a VideoCapture object # The argument '0' gets the default webcam. self.cap = cv2.VideoCapture(0) # Used to convert between ROS and OpenCV images self.br = CvBridge() # Create the subscriber. This subscriber will receive an Image # from the video_frames topic. The queue size is 10 messages. self.subscription = self.create_subscription( Int16MultiArray, 'drive_topic', self.listener_callback, 10) self.subscription # prevent unused variable warning self.array = Int16MultiArray() def listener_callback(self, data): """ Callback function. """ # Display the message on the console print("Inside listener callback") self.get_logger().info('Receiving drone driving instructions') speed = data.data[0] fb = data.data[1] print(speed, fb) def timer_callback(self): while True: # Capture frame-by-frame # This method returns True/False as well # as the video frame. ret, img = self.cap.read() img = cv2.resize(img, (360, 240)) image_message = self.br.cv2_to_imgmsg(img) # cv2.imshow("Image", img) # cv2.waitKey(1) if ret == True: self.publisher_.publish(image_message) self.get_logger().info('Publishing images') self.i += 1 def main(args=None): rclpy.init(args=args) drone_node = DroneNode() rclpy.spin(drone_node) # Destroy the node explicitly # (optional - otherwise it will be done automatically # when the garbage collector destroys the node object) drone_node.destroy_node() rclpy.shutdown() if __name__ == '__main__': main() Keep this line after self.array = Int16MultiArray(), I think your code will work fine. The reason is subscribed is not initializing and your timer method directly start publishing. Remove this line. Thank you so much for reviewing the code above. I was able to get it to work by removing the while Truestatement within timer_callback(). Looks like I created an infinite loop within that call and it never returned to listen to subscribed topic. Thanks a lot for your help. Much appreciated. Can you please! drop your solution in answer and make your answer correct so your solution can help others. Thanks Looks like I can't make my own answer correct yet. But added here. Try now, I think now you have enough karma It worked, thanks
https://answers.ros.org/question/383279/ros2-publish-and-subscribe-in-same-node/
CC-MAIN-2022-27
refinedweb
508
53.78
>Listing One provides the code for an STL-style backtracking algorithm. The algorithm is implemented as a functor, which is a class with operator() defined so the class can be used like a function [1]. Listing Two contains a program that performs map coloring on the U.S. Listing One: An STL-style backtracking functor #ifndef BackTrack_h #define BackTrack_h template <class T, class I, class V> class BackTrack { public: // precondition: first <= last BackTrack(const T& first, const T& last); // Finds the next solution to the problem. Repeated calls // will find all solutions to a problem if multiple solutions // exist. // Returns true if a solution was found. // // Set first to true to get the first solution. // bool operator() (const I& begin, I end, bool& first); private: // Finds the next valid sibling of the leaf (end-1). // Returns true is if a valid sibling was found. bool FindValidSibling (const I& begin, const I& end); // Backtracks through the decision tree until it finds a node // that hasn't been visited. Returns true if an unvisited // node was found. bool VisitNewNode (const I& begin, I& end); void CreateLeftLeaf (I& end); T left_child; T right_child; V IsValid; }; template <class T, class I, class V> BackTrack<T,I,V>::BackTrack(const T& first, const T& last) : left_child (first), right_child (last) { } template <class T, class I, class V> bool BackTrack<T,I,V>::VisitNewNode(const I& begin, I& end) { // ALGORITHM: // // If the current node is the rightmost child we must // backtrack one level because there are no more children at // this level. So we back up until we find a non-rightmost // child, then generate the child to the right. If we back // up to the top without finding an unvisted child, then all // nodes have been generated. // Back up as long as the node is the rightmost child of // its parent. while (end-begin > 0 && *(end-1) == right_child) --end; I back = end-1; if (end-begin > 0 && *back != right_child) { ++(*back); return true; } else return false; } template <class T, class I, class V> bool BackTrack<T,I,V>::FindValidSibling (const I& begin, const I& end) { // Note that on entry to this routine the leaf has not yet // been used or tested for validity, so the leaf is // considered a "sibling" of itself. I back = end-1; while (true) { if (IsValid (begin, end)) return true; else if (*back != right_child) ++(*back); else return false; } } template <class T, class I, class V> inline void BackTrack<T,I,V>::CreateLeftLeaf (I& end) { *(end++) = left_child; } template <class T, class I, class V> bool BackTrack<T,I,V>::operator () (const I& begin, I end, bool& first) { const int size = end - begin; // If first time, need to create a root. // Otherwise need to visit new node since vector // contains the last solution. if (first) { first = false; end = begin; CreateLeftLeaf (end); } else if (!VisitNewNode (begin, end)) return false; while (true) { if (FindValidSibling (begin, end)) { if (end - begin < size) CreateLeftLeaf (end); else return true; } else if (!VisitNewNode (begin, end)) return false; // the tree has been exhausted, // so no solution exists. } } Listing 2: A program that colors a U.S. map with four colors, such that no adjacent states are the same color #include <assert.h> #include <functional> #include <vector> #include "BackTrack.h" enum State {ME, NH, VT, MA, CT, RI, NY, PA, NJ, DE, MD, DC, VA, NC, WV, SC, GA, FL, AL, TN, KY, OH, IN, MI, MS, LA, AR, MO, IL, WI, IA, MN, ND, SD, NE, KS, OK, TX, NM, CO, WY, MT, ID, UT, AZ, NV, CA, OR, WA}; const int NumberStates = 49; const int MaxNeighbors = 8; enum Color {Blue, Yellow, Green, Red}; inline Color& operator++ (Color& c) { c = Color (c + 1); return c; } inline State& operator++ (State& c) { c = State (c + 1); return c; } typedef std::vector<Color> Map; typedef Map::iterator MapIter; // store neighbor's of each state. // Neighbor [i][0] == # of neighbors of state i // Neighbor [i][j] == jth neighbor of state i State Neighbor [NumberStates][MaxNeighbors+1]; inline Connect (State s1, State s2) { int count = ++Neighbor [s1][0]; Neighbor [s1][count] = s2; count = ++Neighbor [s2][0]; Neighbor [s2][count] = s1; assert (Neighbor [s1][0] <= MaxNeighbors); assert (Neighbor [s2][0] <= MaxNeighbors); } void BuildMap () { for (int i = 0; i < NumberStates; i++) Neighbor [i][0] = State(0); Connect (ME,NH); Connect (NH,VT); Connect (NH,MA); Connect (VT,MA); Connect (VT,NY); Connect (MA,NY); Connect (MA,CT); Connect (MA,RI); Connect (CT,RI); Connect (CT,NY); Connect (NY,NJ); Connect (NY,PA); Connect (NY,OH); Connect (PA,NJ); Connect (PA,DE); Connect (PA,MD); Connect (PA,WV); Connect (PA,OH); // ... omitted to save space -- full source code available // on CUJ ftp site (see p. 3 for downloading instructions) Connect (UT,NV); Connect (UT,AZ); Connect (AZ,NV); Connect (AZ,CA); Connect (NV,OR); Connect (NV,CA); Connect (CA,OR); Connect (OR,WA); } struct ColorIsValid : public std::binary_function<MapIter, MapIter, bool> { bool operator() (const MapIter& begin, const MapIter& end) { State LastState = State (end-begin-1); Color LastColor = *(end-1); bool Valid = true; for (int i = 1; i <= Neighbor [LastState][0]; i++) { State NeighborState = Neighbor [LastState][i]; if (NeighborState < LastState && *(begin+NeighborState) == LastColor) return false; } return true; } }; int main (int argc, char* argv []) { Map tree (NumberStates); BackTrack <Color, MapIter, ColorIsValid> ColorMap (Blue, Red); BuildMap (); bool FirstTime = true; // find first 100 valid colorings of the U.S. for (int i = 0; i < 100; i++) bool Valid = ColorMap (tree.begin(), tree.end(), FirstTime); return 0; } There are five steps in using the backtracking algorithm:1. Define the types. 2. Define the operators. 3. Define a validator function. 4. Construct a BackTrack object. 5. Call it. 1. Define the Types The BackTrack class has three template arguments: template <class I, class T, class V> class BackTrack {...}; I is the iterator type for the container. T is the type of data being stored in the container, and V is a user-defined boolean function that will return true if the current decision tree is valid. You will typically use a discrete type for T, such as int or enum, and an STL container iterator for I. For the map coloring problem you may choose to use an enum for the colors, and to store them in a vector. Thus: enum Color { Blue, Yellow, Green, Red }; Add some typdefs to make simpler type names, and declare the container: typedef vector<Color> Map; typedef Map::iterator MapIter; Map tree (49); 2. Define the Operators BackTrack uses the following operators on T: &, ==, !=, =, and ++ (prefix). The compiler predefines these operators for int. If you are using int for T, your work is done. However, ++ is not defined for enum. So define your own: Color& operator++ (Color& c){...} 3. Define a Validator Function Derive your validator function from std::binary_function: template<class T> struct ColorIsValid: public std::binary_function<MapIter, MapIter, bool> { bool operator() (const MapIter& begin, const MapIter& end) const { // .. put validation code here } }; The validator function takes iterators pointing to the beginning and end of the container holding the tree. This function should return true if the coloring is valid. Remember that at the time of the call every element except the last has already been checked for validity, so you only need to check that the back element, *(end-1), is valid. The problem statement almost always exactly defines what the validator needs to test. For example, the n-color problem requires that no adjacent area have the same color. In pseudocode the test would look like this: for (each neighbor of *(end-1)) if (neighbor's color == *(end-1)'s color) return false; return true; 4. Construct a BackTrack Object To generate the decision tree the algorithm needs to know the valid range for T, the type. The BackTrack constructor accepts these values as parameters: BackTrack(const T& first, const T& last); Now you can construct a BackTrack object: BackTrack<Color, MapIter, ColorIsValid> ColorMap (Blue, Red); 5. Call the BackTrack Object BackTrack's operator() takes the begin and end iterators to your container, along with a bool parameter that specifies whether this is the first call. (I discuss that parameter shortly.) Finding a valid coloring now requires one line of code: bool FirstTime = true; bool IsValid = ColorMap(tree.begin(), tree.end(), FirstTime); This function returns true if a valid solution was found. If one was found, the tree now contains the coloring for each state. Many problems, including map coloring, have multiple solutions. Once you have found the first solution you can call operator() again to get the next solution. This works because the current position in the tree and the solution have the same representation. Backtracking from the current solution will result in the next valid solution to be found. The variable FirstTime is set to false when operator() is called, guaranteeing that subsequent calls find the next solution. Implementation Details operator() starts by creating an empty tree. It generates the first node by calling CreateLeftLeaf. This private function takes an iterator pointing to the end of the container, and appends the first valid value of T, which is stored in the private variable left_child. operator() now enters a loop that will generate the lower levels of the tree. The loop first passes the decision tree to the function bool FindValidSibling(). This function finds the first valid sibling of the current leaf. At the time of call the current leaf has not been checked for validity, so the function first checks if this leaf is valid. It does this by calling the user-defined validator function stored in the private variable IsValid. If the leaf is valid, FindValidSibling returns true without changing the decision tree. However, if the leaf is invalid, FindValidSibling will generate the next sibling to the leaf by using operator++ to increment the leaf's value. FindValidSibling successively calls IsValid and then increments the leaf until IsValid returns true or all siblings have been generated without success. If a valid sibling is found, true is returned. operator() checks the result of FindValidSibling. If it returned true, the node is valid. If the tree has been completely generated, the solution is valid and operator() returns true. Otherwise operator() must generate the next level in the tree and test it. So it calls CreateLeftLeaf and returns to the top of the loop, causing FindValidSibling to be called on the new tree. However, if the node was invalid, operator() must now backtrack to a valid node. Function VisitNewNode accomplishes this. It backs up one level as long as the current leaf is the last valid value of T, which is stored in private variable right_child. Once VisitNewNode finds a leaf unequal to right_child, it creates a new leaf by incrementing the current leaf's value and returns true. VisitNewNode returns false if it returns all the way to the root, having searched the entire tree. Efficiency How a problem is represented greatly influences how long it takes to find a solution. You must consider two things. First, the nearer to the root you can prune the tree the greater the reduction in run time. Second, you should remember that whenever the validator function is called, all but the last element have already been validated. Consider map coloring. You can color the states in any order you want. Suppose you order the states alphabetically. The algorithm would first color Alabama, then Arizona. Because these states are not adjacent, you would never be able to prune the tree at this level, since all possible colorings of these two states can lead to valid solutions. However, if you order the states by adjacency, so that Maine is colored first, followed by New Hampshire, you can prune at this level. Of the 16 nodes at level 2, four will be pruned, which removes over 1029 nodes from consideration. Conclusion Using my BackTrack class is straightforward. The hardest task is writing the validator function, but even this is easy, as the problem statement completely defines its functionality. I can typically whip up a solution in half an hour or less. After a few attempts I expect other users will be able to do the same. Choosing the best algorithm for a combinatorial problem is beyond the scope of this paper. There are many combinatorial problems for which backtracking does not work, or does not work quickly enough. The best survey of these algorithms that I have found is the book Heuristics: Intelligent Search Strategies for Computer Problem Solving, by Judea Pearl [2]. While out of print, the book's clear presentation of the topic will more than repay the effort it may take to find it. Even when backtracking is not the best choice, my class can still be useful. You can use it to unit test your specialized algorithm. Better yet, use BackTrack as a backup debug-only algorithm to double check the results of the specialized algorithm, using #ifdef #endif blocks to remove the code from your production builds. (This technique is discussed in Steve Maguire's Writing Solid Code [3].) References [1] Bjarne Stroustrup. The C++ Programming Language, 3rd Edition (Addison-Wesley, 1997), p. 514. [2] Judea Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving (Addison-Wesley, 1984). [3] Steve Maguire. Writing Solid Code (Microsoft Press, 1993), pp. 33-38. Roger Labbe is the C++ Simulation Manager for DCS Corporation in Virginia, where he develops and manages avionics simulations as well as flight planning software. He has previously developed embedded software for flight management computers..
http://www.drdobbs.com/cpp/solving-combinatorial-problems-with-stl/184401194?pgno=2
CC-MAIN-2015-27
refinedweb
2,237
61.87
scikit metrics. If you're going to use optimise a model in scikit-learn then it better optimise towards the right thing. This means that you have to understand metrics in scikit-learn. This series of videos will give an overview in how they work, how you can create your own and how the gridsearch interacts with it. Notes This is the code used run the gridsearch. from sklearn.model_selection import GridSearchCV grid = GridSearchCV( estimator=LogisticRegression(max_iter=1000), scoring={'precision': make_scorer(precision_score), 'recall': make_scorer(recall_score)}, param_grid={'class_weight': [{0: 1, 1: v} for v in range(1, 4)]}, refit='precision', return_train_score=True, cv=10, n_jobs=-1 ) grid.fit(X, y); This is the code to view the results. pd.DataFrame(grid.cv_results_) Feedback? See an issue? Something unclear? Feel free to mention it here. If you want to be kept up to date, consider getting the newsletter.
https://calmcode.io/scikit-metrics/refit.html
CC-MAIN-2020-34
refinedweb
146
57.37
Barney Hilken <b.hilken <at> ntlworld.com> writes: > > After more pondering, I finally think I understand what the DORFistas want. Barney, 1) please don't personalise. I designed DORF, built a proof of concept, got it running, asked for feedback (got some very useful thoughts from SPJ), built a prototype, posted the wiki pages. I'm not a "DORFista"; I'm somebody who wants to improve the Haskell records namespace issue. 2) whether or not you think you know what some people want, you don't understand DORF, and you've mixed it up with SORF. You've then caused a great long thread of confusion. In particular, at exactly where DORF is designed to avoid (what I see as) a weakness in SORF, you've alleged DORF has that weakness. >Here is an example: > ... (by the way, you've used SORF syntax in those examples) > > It doesn't make any sense to apply your functions to my records or vice- versa, Exactly! and that's what the DORF design avoids, whereas SORF suffers from it. > but because we both chose the > same label, SORF uses the "same label" in the sense of the same String Kind. > the compiler allows it. Putting the code in separate modules makes no difference, since > labels are global. DORF's labels are not global, they're proxy _types_ so that the scope is controlled in the usual way. So using separate modules makes all the difference. > > Here is a simple solution, using SORF: > ... I think your "solution" would work just as well 'translated' into DORF. (But then it's a "solution" to something that isn't a problem in DORF.) > >... than building the mechanism in to the language as DORF does, ... > > Barney. > DORF is not "building the mechanism in to the language", nor is it introducing any new language features, only sugar. The prototype runs in GHC v7.2.1. All I've done is hand-desugarred. (Look at it for yourself, it's attached to the implementor's page.) SORF, on the other hand, needs user-defined Kinds, which are only just being introduced in v7.4, and don't yet include String Kind. AntC
http://www.haskell.org/pipermail/glasgow-haskell-users/2012-February/022013.html
CC-MAIN-2014-35
refinedweb
362
75.71
Bootstrapping A third source of seed money for a new venture is referred to as bootstrapping. Bootstrapping is finding ways to avoid the need for external financing or funding through creativity, ingenuity, thriftiness, cost-cutting, or any means necessary. Many entrepreneurs bootstrap out of necessity. Income $1000 Expenses Rent $300 Car $200 Food $200 Utilities $150 Entertainment $100 $950 Student Budget What can a business do to immediately increase cash flow in order to meet their short term expenses? Think liquid assets… Personal Funds –The majority of founders contribute personal funds, along with sweat equity, to their ventures. Sweat equity represents the value of the time and effort that a founder puts into a new venture. Friends and Family –Friends and family are the second source of funds for many new ventures. Sources of Personal Financing Negotiate an interest only line of credit with your bank Interest paid on a loan is an expense Paying principle on a loan is not considered an expense and doesn't show up on your P&L: It does not affect your profitability, but it does affect your cash flow. Always Have access to cash Most businesses fail due to a lack of CAPITAL ($) How to control Cash Flow in your business Plan well! Timing is everything! Control your expenses! Set a Budget. Control your accounts receivable and payable Stay as liquid as possible: Don’t tie up all of your money in non-liquid assets. - Be frugal on initial start up costs, major improvements, equipment and machinery (Von’s) - Keep your inventories lean with-out missing sales What can a business do to quickly improve cash flow? How money flows in and out of your pocket (or checking account) Personal Cash Flow Every item on your financial statement can be looked at as a percentage of Gross sales. Every industry has a standard to which you can compare and budget your business: ie: Retail Industry: Gross Sales = $100,000 Payroll 8-10% Rent 7-10% Advertising 2%-2.5% Percentage of Sales It’s easier to do good things and help make a positive impact on the world with money than it is without it Making a profit does not mean that you are Greedy! Even Social Entrepreneurs and Non-Profits need to have more money coming in than they have going out of their business Without a profit they will not be able to continue their mission to help make the world a better place. What is a LOSS? In a for Profit business: revenue = Sales What is revenue? You made $25 Gross Income GI/NS = GM GI $25/ NS $75 = 33.3% GM Or R-C / R = GM% R $75-C$50=$25/R$75=33.3% Gross Margin = Gross income/ divided by Net Sales expressed as a percentage % Gross Income is the difference between what you paid for and how much your customer paid you for Gross margin of profit Total Liabilities/Shareholder Equity A high debt/equity ratio generally means that a company has been aggressive in financing its growth with debt (loans). Debt to Equity Ratio Friends and Family Credit Cards Should be used sparingly if at all. Peer-to-Peer Lending Networks Examples include Prosper.com and Zopa.com. Organizations that Lend Money to Specific Groups Count Me In, is an organization that provides loans of $500 to $10,000 to women starting or growing a business. Other Sources of Debt Financing 10-* Paul Roales helps start and fund small business Pearl Street Venture Funds Thursday: Guest speaker Business Angels Are individuals who invest their personal capital directly in start-ups. The prototypical business angel is about 50 years old, has high income and wealth, is well educated, has succeeded as an entrepreneur, and is interested in the startup process. The number of angel investors in the U.S. has increased dramatically over the past decade. Business Angels 1 of 2 10-* Net Worth = How much you own (Assets) - How much you owe (liabilities) Target Net Worth = (Age – 27) X Annual Pre-Tax Income / 5 Net Worth Budgets Are itemized forecasts of a company’s income, expenses, and capital needs and are also an important tool for financial planning and control. Think very hard before spending money to determine if it will benefit your business financially. Every dollar you choose to spend in your business should result in an increase in sales and profit. Pay off your debts and begin to show a profit and then you can afford to be generous. Return on Investment What might negatively affect a business's cash flow? Jill: Income = $1000 Expenses Rent (35%)- $350 Car (20%) - $200 Food (22.5%) -$225 Utilities (12.5%) -$125 Fun (10%) - $100 (100%) = $1000 Jack: Income = $1000 Expenses Rent (30%) - $300 Car (15%) - $150 Food (20%) - $200 Utilities (10%) - $100 Fun (20%)- $200 (95%) = $950 Student Income statement Expenses are greater than Revenue A Loss means Profit defined: Revenue - Expenses = positive Number Entrepreneurship A successful business has to make a profit Generating a large volume of sales doesn't always mean a business is profitable Importance of Financial Statements To assess whether its financial objectives are being met, firms rely heavily on analysis of financial statements. A financial statement is a written report that quantitatively describes a firm’s financial health. The income statement, the balance sheet, and the statement of cash flow are the financial statements entrepreneurs use most commonly. The Process of Financial Management 8-* Banks Historically, commercial banks have not been viewed as a practical sources of financing for start-up firms. This sentiment is not a knock against banks; it is just that banks are risk adverse, and financing start-ups is a risky business. Banks are interested in firms that have a strong cash flow, low leverage, audited financials, good management, and a healthy balance sheet. Commercial Banks 10-* Initial Public Offering An initial public offering (IPO) is a company’s first sale of stock to the public. When a company goes public, its stock is traded on one of the major stock exchanges. An IPO is an important milestone for a firm. Typically, a firm is not able to go public until it has demonstrated that it is viable and has a bright future. IPO 1 of 3 10-* Venture Capital (continued) Venture capitalists invest money in start-ups in “stages,” meaning that not all the money that is invested is disbursed at the same time. Some venture capitalists also specialize in certain “stages” of funding. Venture Capital 3 of 3 10-* Venture Capital Is money that is invested by venture-capital firms in start-ups and small businesses with exceptional growth potential. Venture-capital firms are limited partnerships of money managers who raise money in “funds” to invest in start-ups and growing firms. The funds, or pool of money, are raised from wealthy individuals, pension plans, university endowments, foreign investors, and similar sources. A typical fund is $75 million to $200 million and invests in 20 to 30 companies over a three- to five-year period. Charles River Ventures Final project Venture Capital 1 of 3 10-* Matching a New Venture’s Characteristics with the Appropriate Form of Financing or Funding Preparing to Raise Debt or Equity Financing3 of 3 10-* Preparing to Raise Debt or Equity Financing1 of 3 10-* Why Most New Ventures Need Financing or Funding 10-* Getting Financing or Funding Bruce R. Barringer R. Duane Ireland Chapter 10 10-* Forecasts: Are an estimate of a firm’s future income and expenses, based on past performance, its current circumstances, and its future plans. New ventures typically base their forecasts on an estimate of sales and then on industry averages or the experiences of similar start-ups regarding the cost of goods sold and other expenses. The Process of Financial Management 2 of 4 8-* Jill: Income $1100 Expenses Rent (32%) $350 Car (18%) $200 Food (20%) $225 Utilities (11%) $125 Entertainment (9%) $100 (90%) $1000 Jack: Income $1000 Expenses Rent (30%) $300 Car (15%) $150 Food (20%) $200 Utilities (10%) $100 Entertainment (20%) $200 (95%) $950 Student Income statement Cash Flow How cash flows in and out of your business Cash Flow is Key! You can show a profit on your P&L and still have cash flow problems – Why? PROFITABILITY Is the ability to earn a profit. Many start-ups are not profitable during their first one to three years while they are investing in resources, training employees and building their brands. CASH FLOW - a company’s ability to meet its short-term financial obligations. LIQUIDITY: how quickly assets can be converted into CASH (Which assets can you sell fast?) 8-* Make sure your gross margin percentage is greater than your expense percentage How to make a PROFIT $ Financial Management = Controlling money Financial management deals with two things: 1.) Earning money (revenue) and 2.) Managing a company’s finances (expenses) in a way that achieves the highest rate of return (profit) Financial Management 8-* The SBA Guaranteed Loan Program Approximately 50% of the 9,000 banks in the U.S. participate in the SBA Guaranteed Loan Program. The loans are for small businesses that are not able to obtain credit elsewhere. The 7(A) Loan Guaranty Program The most notable SBA program available to small businesses. SBA Guaranteed Loans 1 of 2 10-* Business Angels (continued) Business angels are valuable because of their willingness to make relatively small investments. These investors generally invest between $10,000 and $500,000 in a single company. Are looking for companies that have the potential to grow between 30% to 40% per year. Business angels are difficult to find. Business Angels 2 of 2 10-* Are we making or losing money? How much cash do we have on hand? Do we have enough cash to meet our short-term obligations? How do our expenses compare to those of our industry peers? The financial management of a firm deals with questions such as the following on an ongoing basis: Financial Management 8-* Yearly Sales = $200 million # of Stores = 220 Sales per store = $1 million Stores Ave size – 2200 Sq Ft Price per burger = $3.99 Yearly Sales = $490 million # of stores = 246 Sales per store = $2 million Stores Ave size – 3200 Sq Ft Price per burger = $2.95 Which is more profitable? The first step towards prudent financial management is keeping good records. Importance of Keeping Good Records 8-* Financial Objectives of a Firm1 of 3 8-* SBA Guaranteed Loans Commercial Banks Sources of Debt Financing 10-* Assessing a New Venture’s Financial Strength and Viability Bruce R. Barringer R. Duane Ireland Chapter 8 8-* Initial Public Offerings Business Angels Venture Capital Sources of Equity Funding 10-* Reasons that Motivate Firms to Go Public Initial Public Offering 2 of 3 Raises a firm’s public profile, making it easier to attract high-quality customers and business partners. Is a way to raise equity capital to fund current and future operations. Reason 2 Reason 1 11-* Two Most Common Alternatives Preparing to Raise Debt of Equity Financing2 of 3 Is getting a loan. Means exchanging partial ownership in a firm, usually in the form of stock, for funding. Debt Financing Equity Funding 11-* Creative Sources Debt Financing Equity Capital Personal Funds Alternatives for Raising Money for a New Venture 10-* Hiring interns. Sharing office space or employees with other Businesses. Buying items cheaply but prudently via options such as eBay. Avoiding unnecessary Expenses. Minimizing personal expenses. Obtaining payments in advance from customers. Leasing equipment instead of buying. Coordinate purchases with other businesses. Buying used instead of new equipment. Examples of Bootstrapping Methods 10-* Have you ever had a cash flow problem? What do you do about it? You paid your supplier $50 for the sweater Your customer bought it from you for $75 Net Profit $ = Gross Margin - Expenses Net Profit $/ Sales $ = Net Profit % What could happen to a business when they don't have a positive cash flow? What can a business do to prevent cash flow No description byTweet beth carrollon 22 March 2016 Please log in to add your comment.
https://prezi.com/990ctin66iiw/chapter-8/
CC-MAIN-2017-22
refinedweb
2,024
59.94
Haskell/Alternative and MonadPlus In our studies so far, we saw that both Maybe and lists can represent computations with a varying number of results. We use Maybe to indicate a computation can fail somehow (that is, it can have either zero results or one result), and we use lists for computations that can have many possible results (ranging from zero to arbitrarily many results). In both of these cases, one useful operation is amalgamating all possible results from multiple computations into a single computation. With lists, for instance, that would amount to concatenating lists of possible results. The Alternative class captures this amalgamation in a general way. Definition[edit] Note The Alternative class and its methods can be found in the Control.Applicative module. Alternative is a subclass of Applicative whose instances must define, at a minimum, the following two methods: class Applicative f => Alternative f where empty :: f a (<|>) :: f a -> f a -> f a empty is an applicative computation with zero results, while (<|>) is a binary function which combines two computations. Here are the two instance definitions for Maybe and lists: instance Alternative Maybe where empty = Nothing -- Note that this could have been written more compactly. Nothing <|> Nothing = Nothing -- 0 results + 0 results = 0 results Just x <|> Nothing = Just x -- 1 result + 0 results = 1 result Nothing <|> Just x = Just x -- 0 results + 1 result = 1 result Just x <|> Just y = Just x -- 1 result + 1 result = 1 result: -- Maybe can only hold up to one result, -- so we discard the second one. instance Alternative [] where empty = [] (<|>) = (++) -- length xs + length ys = length (xs ++ ys). In the example below, for instance, we consume a digit in the input and return the digit that was parsed. The possibility of failure is expressed by using Maybe. digit :: Int -> String -> Maybe Int digit _ [] = Nothing digit i (c:_) | i > 9 || i < 0 = Nothing | otherwise = do if [c] == show i then Just i else Nothing The. Now, (<|>) can be used to run two parsers in parallel. That is, we use the result of the first one if it succeeds, and otherwise, we use the result of the second. If both fail, then the combined parser returns Nothing. We can use digit with (<|>) to, for instance, parse strings of binary digits: binChar :: String -> Maybe Int binChar s = digit 0 s <|> digit 1 s Parser libraries often make use of Alternative in this way. Two examples are (+++) in Text.ParserCombinators.ReadP and (<|>) in Text.ParserCombinators.Parsec.Prim. This usage pattern can be described in terms of choice. For instance, if we want to give binChar a string that will be successfully parsed, we have two choices: either to begin the string with '0' or with '1'. MonadPlus[edit] MonadPlus is a class which is closely related to Alternative: class Monad m => MonadPlus m where mzero :: m a mplus :: m a -> m a -> m a This definition is exactly like that of Alternative, only with different method names and the Applicative constraint being changed into Monad. Unsurprisingly, for types that have instances of both Alternative and MonadPlus, mzero and mplus should be equivalent to empty and (<|>) respectively. One might legitimately wonder why the seemingly redundant MonadPlus class exists. Part of the reason is historical: just like Monad existed in Haskell long before Applicative was introduced, MonadPlus is much older than Alternative. Beyond such accidents, there are also additional expectations about how the MonadPlus methods should interact with the Monad ones that do not apply to Alternative, and so saying something is a MonadPlus is a stronger claim than saying it is both an Alternative and a Monad. We will make some additional considerations about this issue in the following section. Alternative and MonadPlus laws[edit] Like most general-purpose classes, Alternative and MonadPlus are expected to follow a handful of laws. However, there isn't universal agreement on what the full set of laws should look like. The most commonly adopted laws, and the most crucial for providing intuition about Alternative, say that empty and (<|>) form a monoid. By that, we mean: -- empty is a neutral element empty <|> u = u u <|> empty = u -- (<|>) is associative u <|> (v <|> w) = (u <|> v) <|> w There is nothing fancy about "forming a monoid": in the above, "neutral element" and "associative" here is just like how addition of integer numbers is said to be associative and to have zero as neutral element. In fact, this analogy is the source of the names of the MonadPlus methods, mzero and mplus. As for MonadPlus, at a minimum there usually are the monoid laws, which correspond exactly to the ones just above... mzero `mplus` m = m m `mplus` mzero = m m `mplus` (n `mplus` o) = (m `mplus` n) `mplus` o ... plus the additional two laws, quoted by the Control.Monad documentation: mzero >>= f = mzero -- left zero m >> mzero = mzero -- right zero If mzero is interpreted as a failed computation, these laws state that a failure within a chain of monadic computations leads to the failure of the whole chain. We will touch upon some additional suggestions of laws for Alternative and MonadPlus at the end of the chapter. Useful functions[edit] In addition to (<|>) and empty, there are two other general-purpose functions in the base libraries involving Alternative. asum[edit] A common task when working with Alternative is taking a list of alternative values, e.g. [Maybe a] or [[a]], and folding it down with (<|>). The function asum, from Data.Foldable fulfills this role: asum :: (Alternative f, Foldable t) => t (f a) -> f a asum = foldr (<|>) empty In a sense, asum generalizes the list-specific concat operation. Indeed, the two are equivalent when the lists are the Alternative being used. For Maybe, asum finds the first Just x in the list and returns Nothing if there aren't any. It should also be mentioned that msum, available from both `Data.Foldable` and `Control.Monad`, is just asum specialised to MonadPlus. msum :: (MonadPlus m, Foldable t) => t (m a) -> m a guard[edit] When discussing the list monad we noted how similar it was to list comprehensions, but we didn't discuss how to mirror list comprehension filtering. The guard function from Control.Monad a list monad do-block is: pythags = do z <- [1..] x <- [1..z] y <- [x..z] guard (x^2 + y^2 == z^2) return (x, y, z) The guard function can be defined for all Alternatives like this: guard :: Alternative m => Bool -> m () guard True = pure () guard _ = empty guard will reduce a do-block to empty if its predicate is False. Given the left zero law... mzero >>= f = mzero -- Or, equivalently: empty >>= f = empty ... an empty on the left-hand side of an >>= operation will produce empty again. As do-blocks are decomposed to lots of expressions joined up by (>>=), an empty at any point will cause the entire do-block to become empty. Let's examine in detail what guard does in the pythags. an empty list produced by the call to guard in gd will cause gd to produce an empty list, with \_ -> ret x y z, which would otherwise add a result, not being actually |_________________________... | | | z 1 2 3 | |____ |____________ | | | | | | x 1 1 2 1 2 3 | |_ | |___ |_ | | | | | | | | | | | y 1 1 2 2 1 2 3 2 3 3 Each combination of z, x and y represents a route through the tree. Once all the functions have been applied, the results of each branch are concatenated together, starting from the bottom. Any route where our predicate doesn't hold evaluates to an empty list, and so has no impact on this concatenation. Exercises[edit] Relationship with monoids[edit] When discussing the Alternative laws, we alluded to the mathematical concept of monoids. It turns out that there is a Monoid class in Haskell, defined in Data.Monoid. A fuller presentation of monoid will be given in a later chapter. For now, it suffices to say that = (++) Looks familiar, doesn't it? In spite of the uncanny resemblance to Alternative and MonadPlus, there is a key difference. Note the use of [a] instead of [] in the instance declaration. Monoids are not necessarily "wrappers" of anything, or parametrically polymorphic. For instance, the integer numbers form a monoid under addition with 0 as neutral element. Alternative is a separate type class because it captures a specific sort of monoid with distinctive properties − for instance, a binary operation (<|>) :: Alternative f => f a -> f a -> f a that is intrinsically linked to an Applicative context. Other suggested laws[edit] Note Consider this as a bonus section. While it is good to be aware of there being various takes on these laws, the whole issue is, generally speaking, not one worth losing sleep over. Beyond the commonly assumed laws mentioned a few sections above, there are a handful of others which make sense from certain perspectives, but do not hold for all existing instances of Alternative and MonadPlus. The current MonadPlus, in particular, might be seen as an intersection between a handful of hypothetical classes that would have additional laws. The following two additional laws are commonly suggested for Alternative. While they do hold for both Maybe and lists, there are counterexamples in the core libraries. Also note that, for Alternatives that are also MonadPlus, the mzero laws mentioned earlier are not a consequence of these laws. (f <|> g) <*> a = (f <*> a) <|> (g <*> a) -- right distributivity (of <*>) empty <*> a = empty -- right absorption (for <*>) As for MonadPlus, a common suggestion is the left distribution law, which holds for lists, but not for Maybe: (m `mplus` n) >>= k = (m >>= k) `mplus` (n >>= k) -- left distribution Conversely, the left catch law holds for Maybe but not for lists: return x `mplus` m = return x -- left catch It is generally assumed that at least one of left distribution and left catch will hold for any MonadPlus instance. Finally, it is worth noting that there are divergences even about the monoid laws. One case sometimes raised against them is that for certain non-determinism monads typically expressed in terms of MonadPlus the key laws are left zero and left distribution, while the monoid laws in such cases lead to difficulties and should be relaxed or dropped entirely. Some entirely optional further reading, for the curious reader: - The Haskell Wiki on MonadPlus (note that this debate long predates the existence of Alternative). - Distinction between typeclasses MonadPlus, Alternative, and Monoid? and Confused by the meaning of the 'Alternative' type class and its relationship to other type classes at Stack Overflow (detailed overviews of the status quo reflected by the documentation of the relevant libraries as of GHC 7.x/8.x − as opposed to the 2010 Haskell Report, which is less prescriptive on this matter.) - From monoids to near-semirings: the essence of MonadPlus and Alternative by Rivas, Jaskelioff and Schrijvers (a formulation that includes, beyond the monoid laws, right distribution and right absorption for Alternative, as well as left zero and left distribution for MonadPlus). - Wren Romano on MonadPlus and seminearrings (argues that the MonadPlusright zero law is too strong). - Oleg Kiselyov on the MonadPlus laws (argues against the monoid laws in the case of non-determinism monads). - Must mplus always be associative? at Stack Overflow (a discussion about the merits of the monoid laws of MonadPlus).
https://en.wikibooks.org/wiki/Haskell/MonadPlus
CC-MAIN-2017-34
refinedweb
1,885
50.57
Singleton cluster servicePavel Orehov Mar 15, 2012 3:03 PM Hi, There was something called Clustered Singleton Service in previous JBoss versions (). Is there is something like that in AS 7 ? Thanks, Pavel 1. Re: Singleton cluster servicePavel Orehov Mar 18, 2012 10:00 AM (in response to Pavel Orehov) Still looking for the answer. 2. Re: Singleton cluster servicejaikiran pai Mar 19, 2012 2:15 AM (in response to Pavel Orehov) See if this helps 3. Re: Singleton cluster servicePaul Ferraro Mar 19, 2012 5:50 PM (in response to Pavel Orehov) There's an example of using SingletonService here: In summary, you can use SingletonService to decorate a Service<?> such that the service will only ever be started on one node in a cluster at any given time. 4. Re: Singleton cluster serviceAlexander Radzishevsky Jun 22, 2012 8:15 AM (in response to Paul Ferraro) Hi Paul, This example works fine bu failes when application is redeployed. Service is failed to start second time. Can you suggest how to fix it ? 5. Re: Singleton cluster serviceJoshua Davis Jun 29, 2012 6:40 PM (in response to Paul Ferraro) Paul, I've got an AS5 app that use a POJO service bean cluster singleton, just like Pavel (and others)l. I looked over this example, and I'm not sure how I would use that in the place of what I now have in AS5: A cluster singleton service that can be accessed from any node on the cluster by looking it up in HAJNDI. - How do you access this cluster singleton MSC bean from other nodes in the cluster? - It looks like the MSC bean has only the one getValue() method that can be called. AS5 POJO service beans have regular EJB-like interfaces with many methods. Is the MSC bean supposed to register a remote interface to a singleton EJB somehow? All I realy need is a way to get a remote interface to a singleton on the cluster. 6. Re: Singleton cluster serviceWolf-Dieter Fink Jun 30, 2012 12:55 PM (in response to Joshua Davis) You may have a look into the quickstart overview and this cluster-ha-singleton-ejb quickstart 7. Re: Singleton cluster serviceJoshua Davis Jul 1, 2012 6:28 PM (in response to Wolf-Dieter Fink) I've cloned that repo and switched to the cluster-hasingleton-ejb branch. I get the part where you are using an SLSB remote interface called ServiceAccess to provide access to the service, and I see how ServiceAccessBean is locating the real service and delegating to it. This is exactly what I'm doing in my app, except the inner service is a POJO Service, which the SLSB looks up in HA-JNDI. I'm not sure this answers my second question. In this example EnvironmentService is the cluster singleton, right? It only has a 'getValue()' method as far as I can tell. Can it have other methods that are called from ServiceAccessBean? In AS5, you could make a POJO service with an interface that had as many methods as you wanted. 8. Re: Singleton cluster servicePaul Ferraro Jul 2, 2012 1:11 PM (in response to Joshua Davis) The EnvironmentService referenced in the quickstart is a simple Service<String> whose getValue() returns a String. The target type of a service can be whatever you want, e.g. Service<ServiceAcccess>. 9. Re: Singleton cluster serviceJoshua Davis Jul 2, 2012 2:56 PM (in response to Paul Ferraro) Thanks Paul. So the service can be sort of a 'provider'... okay, but how would it return something that other nodes on the cluster can call? Would the MBean implementation implement the business interface as well as the Service<?> interface? I guess I don't really understand the 'service' concept that well. It seems very different from the AS 5 POJO service definition. 10. Re: Singleton cluster serviceJoshua Davis Jul 2, 2012 3:14 PM (in response to Joshua Davis) I read this in the Javadoc for org.jboss.msc.service.Service<T>: The value type specified by this service is used by default by consumers of this service, and should represent the public interface of this service, which may or may not be the same as the implementing type of this service. So, that means the service MBean can implement another interface and return itself via getValue()? 11. Re: Singleton cluster servicePaul Ferraro Jul 5, 2012 12:32 PM (in response to Joshua Davis) Correct. 12. Re: Singleton cluster serviceJoshua Davis Jul 5, 2012 4:08 PM (in response to Paul Ferraro) Thanks Paul. One more thing, just to make sure I understand before I go off and try it ... public class MyServiceBean implements implements Service<MyInterface>, MyInterface { ... public MyBusinessInterface getValue() { return this; } ... } The return value will be remotely invokeable from all other nodes in the cluster? 13. Re: Singleton cluster serviceWesley Janik Sep 5, 2012 10:22 PM (in response to Joshua Davis) Hi Joshua - did you ever resolve this issue? I'm struggling with the same scenario. I'm looking for a way to return something akin to a remote EJB proxy pointing specifically to the node the singleton service is running on. However, if I understand the JBoss remote proxies correctly, they purposely don't point to the node they were created on. In other words, if MyServiceBean in your example above looks up a remote reference to an EJB and returns that reference from getValue, that proxy isn't guaranteed to invoke the bean on the singleton node when it is used on some other node in the cluster. In essense, I'm trying to find a way to get something useful out of getValue that will allow me to invoke methods on an EJB guaranteed to be within the same node as the singleton service. Any ideas are greatly appreciated. 14. Re: Singleton cluster serviceJoshua Davis Sep 7, 2012 7:46 AM (in response to Wesley Janik) I have not had a chance to fully try out what Paul suggested. Busy with other things at the moment.
https://community.jboss.org/message/745218
CC-MAIN-2015-35
refinedweb
1,014
61.06
Spring 5: key features, trends and its love of reactive programming One of the biggest topics in Spring Boot 2.0 is the support for Spring 5. What does this mean in terms of features and how does Spring 5 embrace reactive programming? Stéphane Nicoll, software engineer at Pivotal answers these questions and more in anticipation of his JAX talk next week. JAXenter: One of the big topics in Spring Boot 2.0 is the support for Spring 5. What does this mean exactly? What’s new, feature wise? Stéphane Nicoll: Spring Framework 5 introduces a comprehensive support for reactive programming as an alternative to the traditional blocking model based on the Servlet API. Spring Boot 2 obviously integrates those changes with a new “webflux” starter based on Netty that gives you the same getting started experience than the one you know with the web starter. Spring 5 also embraces functional programming and provides several building blocks that allow you to compose your applications. If you decide to define your router that way (rather than using annotations) we will detect that and configure the server accordingly. A natural follow-up is to allow users to be “functional all the way” and our first-class support of the Kotlin programming language is quite natural there. Spring Boot 2 will also provide dedicated extensions for Kotlin so that you can use all the power of the language. JAXenter: Can you give us a short example of how Spring 5 embraces reactive programming? Stéphane Nicoll: The Spring Framework provides a consistent model and allows you to reuse prior knowledge to new paradigms. The support of reactive programming is in no way different. Here is a basic example of a reactive endpoint with Spring Framework 5: @RestController public class SpeakerController { private final SpeakerRepository repository; public SpeakerController(SpeakerRepository repository) { this.repository = repository; } @GetMapping("/speakers/{id}") public Mono<Speaker> getById(@PathVariable String id) { return repository.findOne(id).otherwiseIfEmpty(Mono.error( new ResponseStatusException(HttpStatus.NOT_FOUND, "not fou,d"))); } @GetMapping("/speakers") public Flux<Speaker> getAll() { return repository.findAll(); } @PostMapping("/speakers") public Mono<Void> save(Mono<Speaker> speaker) { return repository.save(speaker).then(); } } This controller is reactive end-to-end and feels quite natural to a user of the current generation. As usual, we react to method signatures and we manage the boilerplate for you. Another example is the use of our new web client that allows to easily compose several remote services invocation in a reactive fashion. As Spring Boot is much more opinionated, a new generation allows us to revisit these opinions and rework some of them when it makes sense. JAXenter: How is reactive programming reflected in Spring Boot 2.0? Stéphane Nicoll: Mainly as a supporting role: if you decide to use the WebFlux stack, we will auto-configure the necessary bits the same way we do it for WebMvc today. And if you decide to use the functional route, we will detect that as well. We have plans to rework existing components to support the reactive stack but mostly there is not going to be many user-facing changes. WebFlux will be the entry point, much like WebMvc is today. JAXenter: What’s new in Spring Boot 2.0 apart from Spring 5 support? Stéphane Nicoll: As Spring Boot is much more opinionated, a new generation allows us to revisit these opinions and rework some of them when it makes sense. For instance, we completely rewrote the Gradle plugin from scratch with the feedback of the community and we have heavily reworked how configuration keys are handled internally. This release is mainly about setting up firm foundations and a good base to build on. SEE ALSO: Spring Boot – what it is and how to get started JAXenter: What is your personal highlight in Spring Boot 2.0? And how does this highlight help developers? Stéphane Nicoll: Supporting two different web application paradigms is quite a challenge and requires revisiting some core principles in Spring Boot. Our goal is that the ‘getting started’ experience with WebFlux is as natural as the one we have now with WebMvc, which will allow you to easily try reactive programming. JAXenter: What is the main message of your JAX session that all visitors should take away? Stéphane Nicoll: Reactive programming is a new choice that you can make depending on the app that you are writing. If you decide to go that route, we will accompany you throughout your journey: you’ll find a familiar programming model and Spring Boot will auto-configure the infrastructure for you. You can even go more functional later if you decide to! And of course, there is nothing wrong with building traditional MVC apps: our support for Servlet-based web stacks continues to be around as a first-class option, remaining compatible with Servlet-based technologies as well as traditional datastore interaction styles. Thank you very much! Stéphane Nicoll will be delivering one talk at JAX 2017 which will focus on shedding some light on Spring 5, its themes and trends.
https://jaxenter.com/spring-5-interview-nicoll-pivotal-133805.html
CC-MAIN-2019-13
refinedweb
838
54.32
stat(2) [opensolaris man page] stat(2) System Calls stat(2) NAME stat, lstat, fstat, fstatat - get file status SYNOPSIS #include <fcntl.h> #include <sys/types.h> #include <sys/stat.h> int stat(const char *restrict path, struct stat *restrict buf); int lstat(const char *restrict path, struct stat *restrict. _ATTR_TRIGGER is set in the flag argument and the vnode is a trigger mount point, the mount is performed and the function returns the attributes of the root of the mounted filesystem. The buf argument is a pointer to a stat structure into which information */ Descriptions of structure members are as follows: st_mode). st_mtime Time when data was last modified. Some of the functions that change this member are: creat(), mknod(), pipe(), utime(), and write(2). st_ctime Time when file status was last changed. Some of the functions that change this member are: chmod(2), chown(2), creat(2), link(2), mknod(2), pipe(2), rename(2), unlink(2), utime(2), and write(2).. st_fstype A null-teminated string that uniquely identifies the type of the filesystem that contains the file. RETURN VALUES Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error. ERRORS The stat(), fstat(), lstat(), and fstatat() functions will fail if: EIO An error occurred while reading from the file system. A loop exists in symbolic links encountered during the resolution of the path argument. ENAMETOOLONG The length of the path argument exceeds {PATH_MAX}, or the length of a path component exceeds {NAME_MAX} while _POSIX_NO_TRUNC is in effect. ENOENT A component of path does not name an existing file or path is an empty string.. The fstat() and fstatat() functions will fail if: EBADF The fildes argument is not a valid open file descriptor. The fildes argument to fstatat() can. The stat(), fstat(), and lstat() functions may fail if: EOVERFLOW One of the members is too large to store in the stat structure pointed to by buf. The stat() and lstat() functions may fail if: ELOOP More than {SYMLOOP_MAX} symbolic links were encountered during the resolution of the path argument. ENAMETOOLONG As a result of encountering a symbolic link in resolution of thepath argument, the length of the substituted pathname strings exceeds {PATH_MAX}. The stat() and fstatat() functions may fail if: ENXIO The path argument names a character or block device special file and the corresponding I/O device has been retired by the fault management framework. EXAMPLES Example 1 Use stat() to obtain); Example 2 Use stat() to get directory information. The following example fragment gets status information for each entry in a directory. The call to the stat() function stores file informa- tion in the stat structure pointed to by statbuf. The lines that follow the stat() call format the fields in the stat structure for presen- tation owners name if it is found using getpwuid(). */ if ((pwd = getpwuid(statbuf.st_uid)) != NULL) printf(" %-8.8s", pwd->pw_name); else printf(" %-8d", statbuf.st_uid); /* Print out group name if it ", datestring, dp->d_name); } Example 3 Use fstat() to obtain); Example 4 Use lstat() to obtain); USAGE. The stat(), fstat(), and lstat() functions have transitional interfaces for 64-bit file offsets. See lf64(5). ATTRIBUTES See attributes(5) for descriptions of the following attributes: +-----------------------------+-----------------------------+ | ATTRIBUTE TYPE | ATTRIBUTE VALUE | +-----------------------------+-----------------------------+ |Interface Stability |Committed | +-----------------------------+-----------------------------+ |MT-Level |Async-Signal-Safe | +-----------------------------+-----------------------------+ |Standard |See below. | +-----------------------------+-----------------------------+ For stat(), fstat(), and lstat(), see standards(5). SEE ALSO access(2), chmod(2), chown(2), creat(2), link(2), mknod(2), pipe(2), read(2), time(2), unlink(2), utime(2), write(2), fattach(3C), stat.h(3HEAD), attributes(5), fsattr(5), lf64(5), standards(5) SunOS 5.11 10 Oct 2007 stat(2)
https://www.unix.com/man-page/opensolaris/2/stat/
CC-MAIN-2020-16
refinedweb
614
64.61
> Le 23 août 2018 à 07:34, Akim Demaille <address@hidden> a écrit : > > > >> Le 21 août 2018 à 10:46, Hans Åberg <address@hidden> a écrit : >> >> >>> On 19 Aug 2018, at 17:23, Akim Demaille <address@hidden> wrote: >>> >>>>> I would strongly suggest that you look at the examples/ in Bison, the C++ >>>>> calculator >>>> >>>> I did look at it - this example scared me off c++ parsers :-( >>> >>> Really??? Then I failed. Can you be more specific? >> >> I noticed: It does not have a Makefile, so one can’t just copy it and >> compile. > > Why not adding one, indeed. I installed this. commit de64159e7f3e0b1b7acd8c23cbd76495fe17c311 Author: Akim Demaille <address@hidden> Date: Sat Aug 25 08:04:37 2018 +0200 examples: calc++: a Makefile and a README * examples/calc++/Makefile, examples/calc++/README: New. * examples/calc++/local.mk: Ship and install them. * doc/bison.texi: Formatting changes. diff --git a/README b/README index 5370f335..8689bab8 100644 --- a/README +++ b/README @@ -41,6 +41,7 @@ note that the range specifies every single year in that closed interval. Local Variables: mode: outline +fill-column: 76 End: Copyright (C) 1992, 1998-1999, 2003-2005, 2008-2015, 2018 Free Software diff --git a/doc/bison.texi b/doc/bison.texi index 6cd52917..b7945343 100644 --- a/doc/bison.texi +++ b/doc/bison.texi @@ -11066,14 +11066,13 @@ use characters such as @code{':'}, they must be declared with @code{%token}. @node A Complete C++ Example @subsection A Complete C++ Example -This section demonstrates the use of a C++ parser with a simple but -complete example. This example should be available on your system, -ready to compile, in the directory @dfn{.... +This section demonstrates the use of a C++ parser with a simple but complete +example. This example should be available on your system, ready to compile, +in the directory @dfn{.../share/doc. @menu * Calc++ --- C++ Calculator:: The specifications @@ -11086,11 +11085,10 @@ actually easier to interface with. @node Calc++ --- C++ Calculator @subsubsection Calc++ --- C++ Calculator -Of course the grammar is dedicated to arithmetics, a single -expression, possibly preceded by variable assignments. An -environment containing possibly predefined variables such as address@hidden and @code{two}, is exchanged with the parser. An example -of valid input follows. +Of course the grammar is dedicated to arithmetics, a single expression, +possibly preceded by variable assignments. An environment containing +possibly predefined variables such as @code{one} and @code{two}, is +exchanged with the parser. An example of valid input follows. @example three := 3 diff --git a/examples/README b/examples/README index bfdd8c68..9780d829 100644 --- a/examples/README +++ b/examples/README @@ -5,18 +5,19 @@ A C example of a multi-function calculator. Extracted from the documentation. * calc++ -A C++ version of the canonical example for parsers: a calculator. -Also uses Flex for the scanner. Extracted from the documentation. +A C++ version of the canonical example for parsers: a calculator. Also uses +Flex for the scanner. Extracted from the documentation. * variant.yy -A C++ example that uses variants (they allow to use any C++ type as -semantic value type) and symbol constructors (they ensure consistency -between declared token type and effective semantic value). +A C++ example that uses variants (they allow to use any C++ type as semantic +value type) and symbol constructors (they ensure consistency between +declared token type and effective semantic value). ----- Local Variables: mode: outline +fill-column: 76 End: Copyright (C) 2018 Free Software Foundation, Inc. diff --git a/examples/calc++/Makefile b/examples/calc++/Makefile new file mode 100644 index 00000000..6b6c4998 --- /dev/null +++ b/examples/calc++/Makefile @@ -0,0 +1,27 @@ +# This Makefile is designed to be simple and readable. It does not +# aim at portability. It requires GNU Make. + +BISON = bison +CXX = g++ +FLEX = flex + +all: calc++ + +%.cc %.hh: %.yy + $(BISON) $(BISONFLAGS) -o $*.cc $< + +%.cc: %.ll + $(FLEX) $(FLEXFLAGS) -o$@ $< + +%.o: %.cc + $(CXX) $(CXXFLAGS) -c -o$@ $< + +calc++: calc++.o driver.o parser.o scanner.o + $(CXX) -o $@ $^ + +calc++.o: parser.hh +parser.o: parser.hh +scanner.o: parser.hh + +clean: + rm -f calc++ *.o parser.hh parser.cc scanner.cc diff --git a/examples/calc++/README b/examples/calc++/README new file mode 100644 index 00000000..679c4a1a --- /dev/null +++ b/examples/calc++/README @@ -0,0 +1,51 @@ +This directory contains calc++, a simple Bison grammar file in C++. + +Please, read the corresponding chapter in the documentation: "A Complete C++ +Example". It is also available on line (maybe with a different version of +Bison): + + +To use it, copy this directory into some work directory, and run `make` to +compile the executable, and try it. It is a simple calculator which accepts +several variable definitions, one per line, and then a single expression to +evaluate. + +The program calc++ expects the file to parse as argument; pass `-` to read +the standard input (and then hit <Ctrl-d>, control-d, to end your input). + +$ ./calc++ - +one := 1 +two := 2 +three := 3 +(one + two * three) * two * three +<Ctrl-d> +42 + +You may pass `-p` to activate the parser debug traces, and `-s` to activate +the scanner's. + +----- + +Local Variables: +mode: outline +fill-column: 76 +End: + +Copyright (C)Words: mfcalc calc parsers yy MERCHANTABILITY diff --git a/examples/calc++/local.mk b/examples/calc++/local.mk index 06c1ed67..28f9f908 100644 --- a/examples/calc++/local.mk +++ b/examples/calc++/local.mk @@ -83,3 +83,4 @@ endif calcxxdir = $(docdir)/examples/calc++ calcxx_DATA = $(calcxx_extracted) +dist_calcxx_DATA = %D%/README %D%/Makefile
https://lists.gnu.org/archive/html/bug-bison/2018-08/msg00052.html
CC-MAIN-2019-13
refinedweb
890
58.48
2020 is a Euros year, and guaranteed THE year that England end x years of hurt. Of course, it will not be plain sailing and I’m sure that there will be ups and downs along the way. To console and to celebrate, we need the England classics as the soundtrack. But how can we find the right songs for the right moments?! Fortunately, Spotify provides us with the songs AND the data to find the right tune to fit the mood. In this tutorial, we’re going to use the Spotipy module to extract data on a playlist of England songs. Then for each song, we’ll get a load of data points that tell us some details about the song – how happy it is, how easy it is to dance to and so on. Finally, we’ll make a table and plot to show how we can find the song to accompany England’s tournament! Packages in place and let’s go! import pandas as pd import matplotlib.pyplot as plt import matplotlib.patches as patches import seaborn as sns import spotipy import spotipy.util as util from spotipy.oauth2 import SpotifyClientCredentials import spotipy.oauth2 as oauth2 Before we do the fun stuff, we need to get authentication from Spotify to extract data. It is super simple, you just need to register here, start an ‘app’ and get an ID and secret. The Spotipy module then makes it easy to use the ID and secret to set up a session where we can interact with the Spotify API. There are loads of use cases for it here, but this tutorial will take us through how to get and make use of song characteristics. We’ll load the client ID and secret into variables, then use Spotipy’s authentication process to start a session. CLIENT_ID = "xxx" CLIENT_SECRET = "xxx" client_credentials_manager = SpotifyClientCredentials(client_id=CLIENT_ID, client_secret=CLIENT_SECRET) sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager) sp.audio_features(['4uLU6hMCjMI75M1A2tKUQC']) [{'danceability': 0.721, 'energy': 0.939, 'key': 8, 'loudness': -11.823, 'mode': 1, 'speechiness': 0.0376, 'acousticness': 0.115, 'instrumentalness': 3.79e-05, 'liveness': 0.108, 'valence': 0.914, 'tempo': 113.309, 'type': 'audio_features', 'id': '4uLU6hMCjMI75M1A2tKUQC', 'uri': 'spotify:track:4uLU6hMCjMI75M1A2tKUQC', 'track_href': '', 'analysis_url': '', 'duration_ms': 213573, 'time_signature': 4}] So we get some really cool data on a song, which Spotify has calculated based on features that it programatically identifies – if there is a distinct rhythm, it gets a high danceability score, if no voices are detected, it is high on the instrumentalness scale, and so on. We’ll go through a couple more later in the article, but all of the definitions of these audio features are here. What we need to do now is to create a dataset of these features for England songs. We could collect them individually, but surely a playlist exists somewhere with all these bangers. Fortunately, Spotify user ‘Cuffley Blade’ has done this for us. You can save the playlist for later listening here. We can call a playlist just like the track above with the .playlist() function, and feeding it an ID. This returns a huge dictionary with playlist data, then a track dictionary for each song in the playlist. It is way too big to feature here, so we’re going to navigate through the playlist dictionary and find the first track’s name and artist below: sp.playlist('28gX2hq23N4WonSnRtRcUu')['tracks']['items'][0]['track']['name'] 'World in Motion' sp.playlist('28gX2hq23N4WonSnRtRcUu')['tracks']['items'][0]['track']['artists'][0]['name'] 'New Order' Strong start for the playlist. But one song at a time would take forever, so let’s write something that will loop through the tracks in the playlist and take the artist, name, popularity score and ID, and store them in lists: #Separate out the track listing from the main playlist object playlistTracks = sp.playlist('28gX2hq23N4WonSnRtRcUu')['tracks']['items'] #Create empty lists for each datapoint we want to take artistName = [] trackName = [] trackID = [] trackPop = [] #Loop through each track and append the relevant information to the list for index, track in enumerate(playlistTracks): artistName.append(track['track']['artists'][0]['name']) trackName.append(track['track']['name']) trackID.append(track['track']['id']) trackPop.append(track['track']['popularity']) Let’s test this, and see if we have the songs that we saw in the database earlier: trackName ['World in Motion', 'Back Home', 'Vindaloo', 'Three Lions', 'Eat My Goal', 'Jerusalem', 'Come On England', "We're on the Ball - Official England Song for the 2002 Fifa World Cup", 'Is This The Way To The World Cup', 'Shout', 'Meat Pie, Sausage Roll - England Edit', "I'm England 'Till I Die", 'Whole Again', 'God Save The Queen'] Bloody. Yes. Crouch at the back post, Beckham straight down the middle, Joe Cole from his own half 😍😍😍 Couple of odd bits though, with songs have a “-” and other information. Let’s tidy those up but splitting the titles on the hyphen and keeping the first half trackName[7] = trackName[7].split(" - ")[0] trackName[10] = trackName[10].split(" - ")[0] trackName ['World in Motion', 'Back Home', 'Vindaloo', 'Three Lions', 'Eat My Goal', 'Jerusalem', 'Come On England', "We're on the Ball", 'Is This The Way To The World Cup', 'Shout', 'Meat Pie, Sausage Roll', "I'm England 'Till I Die", 'Whole Again', 'God Save The Queen'] Much better. We also took the track ID for each. Just like before, we can use these to get the song’s features. World in Motion was the first song in our list, let’s use the trackID list to get its features. sp.audio_features(trackID[0]) [{'danceability': 0.603, 'energy': 0.955, 'key': 1, 'loudness': -4.111, 'mode': 1, 'speechiness': 0.0458, 'acousticness': 0.0239, 'instrumentalness': 0.0451, 'liveness': 0.119, 'valence': 0.787, 'tempo': 123.922, 'type': 'audio_features', 'id': '08po8QZK3tihnLBZWATAki', 'uri': 'spotify:track:08po8QZK3tihnLBZWATAki', 'track_href': '', 'analysis_url': '', 'duration_ms': 270827, 'time_signature': 4}] Works just as before. We can now loop through these IDs and append relevant data to lists, like we did for the songs themselves. Brief definitions of the data we’re taking, but a reminder that the full information is here###. #How suitable the track is to bust a move, from 0 - 1 danceability = [] #Detects presence of an audience in the audio, 0 - 1 liveness = [] #How happy the track is, 0 - 1 valence = [] #How much the track is spoken word, vs song, 0 - 1 speechiness = [] #BPM tempo = [] #Is the track acoustic? 0 - 1 acousticness = [] #How intense the song is, 0 - 1 energy = [] for index, track in enumerate(sp.audio_features(trackID)): danceability.append(track['danceability']) liveness.append(track['liveness']) valence.append(track['valence']) speechiness.append(track['speechiness']) tempo.append(track['tempo']) acousticness.append(track['acousticness']) energy.append(track['energy']) Between these features, the track name, artist and popularity, we have 10 lists. A dataframe would make this much easier to read. Let’s join them up and take a look at our data dataframe = pd.DataFrame({'Track':trackName, 'Artist':artistName, 'Popularity':trackPop, 'Danceability':danceability, 'Liveness':liveness, 'Happiness':valence, 'Speechiness':speechiness, 'Tempo':tempo, 'Acousticness':acousticness, 'Energy':energy}) dataframe And now we have a data source for matching England songs to the tournament mood. Want something danceable at a high energy? Eat My Goal. Sad and low energy? Jerusalem. We can even use the dataframes .sort_values() functionality to do the lookup for us based on what we want to see: dataframe.sort_values("Happiness", ascending = False).head(3) Now we have the 3 happiest songs in the playlist ready to go, and tough to argue with any of these. Of course, you’d be unlikely to take a Jupyter notebook down the pub, or to your nearest riot, so I’d recommend making a print out graphic to take with you. #Set base style and size plt.style.use('fivethirtyeight') plt.figure(num=None, figsize=(6, 4), dpi=100) #Set subtle St. George's Cross underneath, don't want to come across strong rect = patches.Rectangle((0.4,0),0.2,1, color="red", alpha=0.01) plt.gca().add_patch(rect) rect2 = patches.Rectangle((0,0.4),0.4,0.2, color="red", alpha=0.01) plt.gca().add_patch(rect2) rect3 = patches.Rectangle((0.6,0.4),1,0.2, color="red", alpha=0.01) plt.gca().add_patch(rect3) #Plot data ax = sns.scatterplot(x="Danceability", y="Happiness", data=dataframe, s=100, color='#b50523') #Set title ax.text(x = 0.05, y = 1.15, s = "Finding the England Song for the Mood", fontsize = 15, alpha = 0.9) #Set Annotations ax.text(x = 0.79, y = 0.76, s = "Eat My Goal", fontsize = 10, alpha = 1) ax.text(x = 0.17, y = 0.38, s = "Jerusalem", fontsize = 10, alpha = 1) ax.text(x = 0.6, y = 0.28, s = "Vindaloo", fontsize = 10, alpha = 1) ax.text(x = 0.45, y = 0.95, s = "Is This The Way...", fontsize = 10, alpha = 1) #Set mood examples ax.text(x = 0.85, y = 0.95, s = "Trippier FK", fontsize = 10, alpha = 0.4) ax.text(x = 0.03, y = 0.05, s = "Mandzukic ghosts by Stones", fontsize = 10, alpha = 0.4) #Remove grid and add axis lines ax.grid(False) ax.axhline(y=0.005, color='#414141', linewidth=1.5, alpha=.5) ax.axvline(x=0.005, color='#414141', linewidth=1.5, alpha=.5) #Set axis limits ax.set(ylim=(0,1)) ax.set(xlim=(0,1)) #Set axis labels ax.set_yticklabels(labels=['0', '20', '40', '60', '80','100%'], fontsize=12, color='#414141') ax.set_xticklabels(labels=['0', '20', '40', '60', '80','100%'], fontsize=12, color='#414141') #Set axis titles plt.xlabel('Danceability', fontsize=13, color='#2a2a2b') plt.ylabel('Happiness', fontsize=13, color='#2a2a2b') #Plot ax.plot() In this tutorial, we have seen how we can navigate the Spotify API by using the Spotipy module. We have found out how we can get data about songs, and navigate a playlist to do this programatically for a group of tracks. As for wider Python skills, we have practiced how to loop through items and store information about each one. We have then joined this up into a dataframe for analysis and visualisation.
https://fcpython.com/blog/finding-the-right-england-songs-with-spotify-data-python
CC-MAIN-2021-43
refinedweb
1,664
59.4
Created on 2015-05-06 12:16 by magnusc, last changed 2015-06-29 07:05 by rbcollins. This issue is now closed. Hi I have a little issue with the current assertRaises implementation. It seems that it might produce a little different results depending on how you use it. self.assertRaises(IOError, None) will not produce the same result as: with self.assertRaises(IOError): None() In the first case everything will be fine due to the fact that assertRaises will actually return a context if the second callable parameters is None. The second case will throw a TypeError 'NoneType' object is not callable. I don't use None directly, but replace it with a variable of unknown state and you get a little hole where problems can creep in. In my case I was testing function decorators and that they should raise some exceptions on special cases. It turned our that I forgot to return the decorator and instead got the default None. But my tests didn't warn me about that. Bottom line is that if I use the first assertRaises(Exception, callable) I can't rely on it to check that the callable is actually something callable. I do see that there is a benefit of the context way, but in my opinion current implementation will allow problems to go undetected. My solution to this would be to rename the context variant into something different: with self.assertRaisesContext(Exception): do_something() A side note on this is that reverting back to the original behavior would allow you to reevaluate issue9587 for returning the actual exception. I don't think this is a bug. I think it is just a case that you have to be careful when calling functions, you actually do call the function. And that it returns what you think it does. I think the critical point is this: "It turned our that I forgot to return the decorator and instead got the default None. But my tests didn't warn me about that." The solution to that is to always have a test that your decorator actually returns a function. That's what I do. Possible solution is to use special sentinel instead of None. The patch looks like a nice improvement. Why not sentinel = Object()? +1 for the patch, once tests are added. This may "break" code in maintenance releases, but presumably that will be finding real bugs. It is hard to imagine someone intentionally passing None to get the context manager behavior, even though it is documented in the doc strings (but not the main docs, I note). If anyone can find examples of that, though, we'd need to restrict this to 3.5. "The solution to that is to always have a test that your decorator actually returns a function. That's what I do." Yes, I agree that with more tests I would have found the problem, but sometimes you forget things. And to me I want the tests to fail by default or for cases that are unspecified. I think the sentinel solution would come a long way of solving both the issue that I reported but still keep the context solution intact. Out of curiosity, would it be a solution to have the sentinel be a real function? def _sentinel(): pass Updated patch includes tests for the function is None. There were no tests for assertRaises(), the patch adds them, based on tests for assertWarns(). > Why not sentinel = Object()? Only for better repr in the case the sentinel is leaked. Other variants are to make it named object, such as a function or a class, as Magnus suggested. Made a couple of review comments on the English in the test comments. Patch looks good to me. New changeset 111ec3d5bf19 by Serhiy Storchaka in branch '2.7': Issue #24134: assertRaises() and assertRaisesRegexp() checks are not longer New changeset 5418ab3e5556 by Serhiy Storchaka in branch '3.4': Issue #24134: assertRaises(), assertRaisesRegex(), assertWarns() and New changeset 679b5439b9a1 by Serhiy Storchaka in branch 'default': Issue #24134: assertRaises(), assertRaisesRegex(), assertWarns() and Thank you David for your corrections. I noticed this is backwards incompatible for a small feature in Django. If you want to leave this feature in Python 2.7 and 3.4, it'll break things unless we push out a patch for Django; see. Given one failure there are probably more. So we should probably back this out of 2.7 and 3.4. Agree, this change breaks general wrappers around assertRaises, and this breakage is unavoidable. Likely we should rollback changes in maintained releases. The fix in Django doesn't LGTM. It depends on internal detail. More correct fix should look like: def assertRaisesMessage(self, expected_exception, expected_message, *args, **kwargs): return six.assertRaisesRegex(self, expected_exception, re.escape(expected_message), *args, **kwargs) I used special sentinel because it is simple solution, but we should discourage to use this detail outside the module. Proposed patch (for 3.5) uses a little more complex approach, that doesn't attract to use implementation details. In additional, added more strict argument checking, only the msg keyword parameter now is acceptable in context manager mode. Please check changed docstrings. It is possible also to make transition softer. Accept None as a callable and emit the deprecation warning. Yeah, the general case of wrappers is something I hadn't considered. Probably we should go the deprecation route. Robert, what's your opinion? I didn't find any problems while testing your proposed new patch for cpython and your proposed patch for Django together. New changeset 0c93868f202e by Serhiy Storchaka in branch '2.7': Reverted issue #24134 changes. New changeset a69a346f0c34 by Serhiy Storchaka in branch '3.4': Reverted issue #24134 changes (except new tests). New changeset ac13f0390866 by Serhiy Storchaka in branch 'default': Issue #24134: assertRaises(), assertRaisesRegex(), assertWarns() and Interesting, the last patch exposed a flaw in test_slice. def test_hash(self): # Verify clearing of SF bug #800796 self.assertRaises(TypeError, hash, slice(5)) self.assertRaises(TypeError, slice(5).__hash__) But the second self.assertRaises() doesn't call __hash__. It is successful by accident, because slice(5).__hash__ is None. New changeset cbe28273fd8d by Serhiy Storchaka in branch '2.7': Issue #24134: Use assertRaises() in context manager form in test_slice to New changeset 3a1ee0b5a096 by Serhiy Storchaka in branch '3.4': Issue #24134: Use assertRaises() in context manager form in test_slice to New changeset 36c4f8af99da by Serhiy Storchaka in branch 'default': Issue #24134: Use assertRaises() in context manager form in test_slice to Unfortunately, the revert wasn't merged to the 2.7 branch until after the release of 2.7.10. I guess this regression wouldn't be considered serious enough to warrant a 2.7.11 soon, correct? Hi, catching up (see my mail to -dev about not getting tracker mail). Deprecations++. Being nice for folk whom consume unittest2 which I backport to everything is important to me :).
https://bugs.python.org/issue24134
CC-MAIN-2018-26
refinedweb
1,149
67.45
SO life is tough and i had to fiddle around with this for a while before i could figure out which AMI to use! i swear i started at least 10 instances so far… i’m not sure if it’s because i’m stingy (i also chose t2.micro for FREE at the beginning hahahah) i just wanted to ‘get my feet wet’ as they all say right, but there were like a lot of cuda issues and stuff i think. i’m not even sure what was going wrong! finally, i found THIS: amazing stuff - step by step instructions!! apparently some problems can be resolved by a reboot, and then reinstalling some stuff (lucky he included the installers!) so…just use this guy man. ami name: ami-ccba4ab4 as usual i was stingy and used the cheapest possible - c4.xlarge *hint: it takes about 4 minutes to initialise so don’t faint!! i git cloned in, verified that my program can finally run (THANK GOD) - at the same speed as within my macbookpro LOL hands trembling (very drama, i know) - i clicked to spin up an instance with GPU… *p.s. - remember to terminate all your previous instances!!!!! if not you will be CHARGED. p2.xlarge GPU IS RUNNING NOW HOLY COW i can testify - it REALLY runs at 25 times the speed of a normal CPU… this is miraculous /#nevergoingbacktocpu ok one tiny error: _tkinter.TclError: no display name and no $DISPLAY environment variable look here for an answer!! i’m trying to add the below at the top of my script. fingers crossed that this works!!! import matplotlib matplotlib.use('Agg') to download your files from the remote location: scp -i newpython.pem -r [email protected]:~/gan_project/ .
https://www.yinglinglow.com/blog/2018/01/04/EC2-p2xlarge
CC-MAIN-2021-49
refinedweb
291
73.78
08 February 2011 06:33 [Source: ICIS news] By Felicia Loo ?xml:namespace> SINGAPORE Spot values of the octane booster in gasoline were steady at $940-950/tonne (€696-703/tonne) FOB (free on board) Import requirements from China were expected to surge after the holidays, as driving demand in the world’s top energy user as well as the number-one global auto market would peak in the next few months, traders said. Car sales in Meanwhile, Spot MTBE premiums in “Premiums are at the high $20/tonne [levels]. Blending margins are quite good,” said a trader. Blending margins, or the spread between naphtha and gasoline, had increased to $10.70/bbl on Monday’s market close from $9.30/bbl in late January, traders said. Import needs from In “ The cracker turnaround will be heavier this year in LG Chem is scheduled to shut its 760,000 tonne/year naphtha cracker in Daesan for a month-long turnaround and expansion works, starting on 15 March. The plant’s ethylene capacity will be increased to around 900,000 tonnes/year after the debottlenecking work. In addition, MTBE supply was generally tight, especially as petrochemical giant Saudi Basic Industries Corp (SABIC) was scheduled to take its 700,000 tonne/year MTBE plant off line from February to March. The plant is located at Al-Jubail in. In ($1 = €0
http://www.icis.com/Articles/2011/02/08/9433043/asia-mtbe-may-gain-on-higher-demand-plant-turnarounds.html
CC-MAIN-2014-41
refinedweb
229
62.48
I have a collection of hierarchical items in an unsorted collection. Each of these items has a field previousItem: public class Item{ public Item previousItem; } What is the most efficient way of processing these items so that the output is a collection, where the Item without any previousItem is the first Item in the collection and each subesequent item's previousItem is the previous item in the collection? My first idea would be to implement the Comparable interface in the Item class: public int compareTo(Item that) { final int BEFORE = -1; final int EQUAL = 0; final int AFTER = 1; if(this.previousItem==null){ return BEFORE; } if(that.previousItem==null){ return AFTER; } if(this.previousItem.equals(that){ return AFTER; }else if(that.previousItem.equals(this){ return BEFORE; } return EQUAL; } and then loop through the items an add them to a TreeSet: SortedSet<Item> itemSortedSet = new TreeSet<Item>(); for (Item item : itemCollection) { itemSortedSet.add(item); } Is there a more efficient way (less time to process/number of iterations needed) to order the collection so that they are in logical, hierarchical order? Your comparator would not work: it does not provide transitivity, i.e. it will do the wrong thing if you compare items A and C in the case of A->B->C. If no items can have the same previous item, your Item objects essentially form a basic linked list. If you happen to know which one is that last item, you can start from there with a single loop and unravel the whole structure: Item last = ...; while (last != null) { result.add(last) last = last.previousItem } If you do not have a way to find out which item is the last one, then you could use an IdentityHashMap to map each previousItem value to the Item object that uses it: IdentityHashMap<Item,Item> m = new IdentityHashMap<Item,Item>(itemset.size()); for (Item i : itemset) m.put(i.previousItem, i); Item i = m.get(null); while (i != null) { result.add(i); i = m.get(i); } This will unravel your unsorted collection starting from the item that has no previous node. Both methods have a roughly linear complexity w.r.t. the number of items, but they make two assumptions that may not be valid in your case: That each item can only be the previous item of at most one other node, i.e. that your items are a list instead of a tree. That there is a single "thread" of items. If either of these assumptions is not valid you will need a far more complex algorithm to sort this out. From what I see, your Item is already implementing a linked-list (if not by name), so the natural (for me) way to put it in a collection is to transform it into a real list recursively. After the items are transformed, you may need to reverse the list, according to your specification. public class Item { Item previousItem; public void putToCollection(ArrayList<Item> list) { list.add(this); if (previousItem == null) { Collections.reverse(list); } else { previousItem.putToCollection(list); } } } If I understand you correctly, you have a handle on the Item object without previous object. You could just loop through the Items (e.g. with a while-loop) and add them to an ArrayList or a LinkedList, because the add()-Method of those List implementations just appends a new Item at the end of the list. This way, you get through it in O(n)-time. You can use a HashSet<Item> in which you initially put all the items. Then you go over the items in a loop and remove every previousItem from your set. At the end you should be left with a single element in the set, the last one, which was not a previousItem for any other item. And then you can simply add items starting from that one and going through the links. Similar Questions
http://ebanshi.cc/questions/3883635/java-having-a-collection-of-items-where-each-item-has-a-field-previousitem
CC-MAIN-2017-22
refinedweb
648
63.8
4703Re: [Cheetahtemplate-discuss] [PATCH] Remove unnecessary dir()/set() calls in Template.__init__() Expand Messages - Oct 13, 2009On Mon, Oct 12, 2009, R. Tyler Ballance wrote: >This seems like a safe change. I would even go a step further in > When running cheetah.Tests.Performance.DynamicMethodCompilationTest > with 100000 iterations set, Template.__init__() is the most performance > sensitive call. performance and reducing namespace pollution: Template.Reserved_SearchList = set(dir(Template)) You can then go back to using self.Reserved_SearchList, which should be faster than a global lookup. (I'm only mentioning this because you're using the lookup in a loop.) -- Aahz (aahz@...) <*> "To me vi is Zen. To use vi is to practice zen. Every command is a koan. Profound to the user, unintelligible to the uninitiated. You discover truth everytime you use it." --reddy@... ------------------------------------------------------------------------------! _______________________________________________ Cheetahtemplate-discuss mailing list Cheetahtemplate-discuss@... - << Previous post in topic Next post in topic >>
https://groups.yahoo.com/neo/groups/cheetah-archive/conversations/messages/4703
CC-MAIN-2015-11
refinedweb
148
55.1
When I first came across Zustand, I couldn’t believe how easy it was to use. The learning curve is incredibly thin. If you are familiar with how immutable state works in React, then you will feel right at home working with Zustand. So, without further ado, let’s make ourselves a to-do list. Step 1: Setting Up Our Project The first thing that we are going to need to do is to get a base project setup, and we can do this by using the below command in our terminal/Powershell. npx create-react-app zustand-todo-demo --template typescript After running this command, this will create a basic starter typescript project for us in React. Next, go ahead and navigate to the newly-created directory. cd zustand-todo-demo And run these commands to install the packages we will be using. npm install zustand @material-ui/core @material-ui/icons uuid npm install --save-dev @types/uuid The next thing we need to do is get rid of all the unnecessary files in the src folder and also add a few of our own. Our src folder should look like this when we are done. src ├── model │ └── Todo.ts ├── App.tsx ├── index.tsx ├── react-app-env.d.ts └── todoStore.tsx With these steps out of the way, it’s now time to open our project up in a code editor. I will be using VS Code, but feel free to use any editor that you prefer. Now it’s time to start coding! Step 2: Creating Our Todo Model First, before introducing Zustand, we will create a model for our to-do list to describe what the data structure of each to-do will look like. Open the Todo.ts file and put the following code into it. export interface Todo { id: string; description: string; completed: boolean; } There’s not much to explain in the above code, but we are defining a type that TypeScript can use to provide us with auto-completion and ensure the proper data is passed around. With our model now created, it is time to introduce the crux of this tutorial, Zustand. Step 3: Creating Our Zustand Store We will now be creating the state management logic for our to-do app. This is where Zustand comes into play. The below code is what our finished store will look like. Don’t worry if you don’t understand everything, because I will be going over it line-by-line. import create from "zustand"; import { v4 as uuidv4 } from "uuid"; import { Todo } from "./model/Todo"; interface TodoState { todos: Todo[]; addTodo: (description: string) => void; removeTodo: (id: string) => void; toggleCompletedState: (id: string) => void; } export const useStore = create<TodoState>((set) => ({ // initial state todos: [], // methods for manipulating state addTodo: (description: string) => { set((state) => ({ todos: [ ...state.todos, { id: uuidv4(), description, completed: false, } as Todo, ], })); }, removeTodo: (id) => { set((state) => ({ todos: state.todos.filter((todo) => todo.id !== id), })); }, toggleCompletedState: (id) => { set((state) => ({ todos: state.todos.map((todo) => todo.id === id ? ({ ...todo, completed: !todo.completed } as Todo) : todo ), })); }, })); Lines 6 — 11 What we have here is an interface defining what our store is going to look like. There are 4 parts to this. We have todos on line 7 which is a list of type Todo. On line 8 we have a method called addTodo which, as you may have guessed, will be used for adding todos to our todo. On line 9 we have a method which we are using to remove todos from our to-do list. On line 10 we have a method that, given the id of our todo item in the list, will toggle the completed state of it. With our interface now defined, let’s dive into Zustand and how we manage state with it. Line 13 We are creating our store. You will notice that we will be using the hook naming convention (useStore), and that is because, well, we tap into our store using hooks provided by Zustand. More on this later. We are using a method built into Zustand called create, which, as it sounds, is responsible for creating our store. The create function takes an arrow function that has 3 different parameters (we will only be using the first parameter for this super simple tutorial) and returns an object which will match the interface that we defined above. The parameter set inside of this arrow function is very important, as it allows us to change the state of our store. You will see this in use in the code to follow. Line 15 What we are doing here is setting the initial state of our to-do list. In this instance, we are setting it to an empty list every time. Perhaps in a more real-world scenario, we would be making a network call or checking data stored locally to determine what the initial state is. Lines 17 — 28 We are defining how our addTodo method will perform when it is called in our code later. Things of note with this method are that it only needs to take the description for the todo item, we are using a library called uuid, and we are using a spread operator to add items to the array (this is because state is immutable and must be re-created every time). You are going to see on line 18 that we are calling the set method which I described above briefly. Basically, it is an arrow function that takes a parameter that matches the type of our interface that we defined above. This arrow function returns an object, and in that object, we can have what our partially updated state is gonna look like (in this case, it will be our todos being updated). We are not updating things like our different state managing functions, so we don’t have to pass them to the object. Lines 29 — 32 We are defining our removeTodo method will perform when it is called. All the things that I described about set and immutable state still apply here. Something to note here is that we are passing an id so that we know which element to remove. The way that the removal of elements is working for me here is by using the filter function to filter out all elements from the array that match the id passed (obviously, there should only be one). Lines 34 — 42 We are defining our toggleCompletedState method. Once again, everything that I discussed above with the set function still applies here. We are leveraging yet another array function built into JavaScript (the Array Map function). This function will take an array (in this case, our todos array) and return another array of the same length, except it will be transformed into a different structure. In our case, what we are doing here with our map function is leveraging another fancy JavaScript feature known as the ternary operator. So if the condition on line 37 is true, then we will run the code on line 38 and if it is false, the code on line 39 will be executed. On line 38 we are flipping the completed state on the object and on line 39 we are simply returning the same object because we don’t want to change the completed state unless it matches our id. With our store now created, there is really only one last step, and that is to use the code that we just wrote inside of our UI for the React App. Step 4: Using Zustand Inside of our React App Before we go any further. I would like to clean up the code in our index.tsx file to look like the below code. import React from "react"; import ReactDOM from "react-dom"; import App from "./App"; ReactDOM.render( <React.StrictMode> <App /> </React.StrictMode>, document.getElementById("root") ); With that out of the way, let’s connect Zustand to the UI. Below is the code for what our UI using Zustand is going to look like. No need to worry, as I will be going over the important parts below. import { useState } from "react"; import { Button, Checkbox, Container, IconButton, List, ListItem, ListItemIcon, ListItemSecondaryAction, ListItemText, makeStyles, TextField, Typography, } from "@material-ui/core"; import DeleteIcon from "@material-ui/icons/Delete"; import { useStore } from "./todoStore"; const useStyles = makeStyles((theme) => ({ headerTextStyles: { textAlign: "center", marginBottom: theme.spacing(3), }, textBoxStyles: { marginBottom: theme.spacing(1), }, addButtonStyles: { marginBottom: theme.spacing(2), }, completedTodoStyles: { textDecoration: "line-through", }, })); function App() { const { headerTextStyles, textBoxStyles, addButtonStyles, completedTodoStyles, } = useStyles(); const [todoText, setTodoText] = useState(""); const { addTodo, removeTodo, toggleCompletedState, todos } = useStore(); return ( <Container maxWidth="xs"> <Typography variant="h3" className={headerTextStyles}> To-Do's </Typography> <TextField className={textBoxStyles} label="Todo Description" required variant="outlined" fullWidth onChange={(e) => setTodoText(e.target.value)} value={todoText} /> <Button className={addButtonStyles} fullWidth variant="outlined" color="primary" onClick={() => { if (todoText.length) { addTodo(todoText); setTodoText(""); } }} > Add Item </Button> <List> {todos.map((todo) => ( <ListItem key={todo.id}> <ListItemIcon> <Checkbox edge="start" checked={todo.completed} onChange={() => toggleCompletedState(todo.id)} /> </ListItemIcon> <ListItemText className={todo.completed ? completedTodoStyles : ""} key={todo.id} > {todo.description} </ListItemText> <ListItemSecondaryAction> <IconButton onClick={() => { removeTodo(todo.id); }} > <DeleteIcon /> </IconButton> </ListItemSecondaryAction> </ListItem> ))} </List> </Container> ); } export default App; I’m not going to be touching on how Material-UI works in this walkthrough as it is outside the scope of this tutorial, but essentially it is going to allow us to easily style and layout our app. Without further, ado let’s see how Zustand works within our UI. You are going to see on line 44 that we are calling the function useStore() which is going to tap into our Zustand store, which we created above. Thanks to object destructuring, we can pull out all the functions as well as the state which we will be using below. On line 67 we are calling the addTodo method which is going to take a description and create a new to-do item on the list. On line 81 we are toggling the completed state of our to-do item. This gets toggled whenever we check or un-check a checkbox on each list item of our to-do list. On line 93 we are removing a to-do item from the list. This action will happen when the trash icon is clicked on. And finally, you are going to see on line 75 that we are looping over our list and generating a ListItem for each item in the to-do list. Wrapping Up Zustand is perhaps the simplest and least boiler-plately state-management solution that I have yet to use in React. It’s enjoyable to work with and doesn’t have a steep learning curve for those who already understand React. As always, I’m open to any comments and feedback that you might have. Feel free to share your favorite React state-management library. I’d love to hear. Additional Tutorials Here are some tutorials for other state-management libraries in React for those of you who are interested. Making a React To-Do List With Redux Toolkit in TypeScript Managing Local State in React with Apollo, Immer, and TypeScript Source Code on Github Further Reading Sharing Types Between Your Frontend and Backend Applications
https://plainenglish.io/blog/using-zustand-and-typescript-to-make-a-to-do-list-in-react
CC-MAIN-2022-40
refinedweb
1,865
62.17
Laravel News Digest – Behind the Scenes I’ve been writing a weekly Laravel newsletter for 6 weeks now. Every week I try to share a personal story, Laravel tip, or some form of story that’s on my mind. I also include a list of posts, random web links, quotes, and other things. Here is a full archive if you want to see them. In this post I wanted to share how I create these. At the start I created each one by hand, and by hand I mean I literally copy and pasted html tables all over the place and after I finally felt I had some sort of structure I would spend another hour or two making sure it worked in every mail client my devices support. Design I’m a big fan of minimal sites. I like the cleanliness and the focus on the fonts and words. More so than images and graphics. Of course with email you can’t get very fancy anyway or you could be in a world of hurt. I decided to go with a very clean response white and grey layout. I don’t have any stats to back this up but my thoughts are since I send these on a Sunday most of my readers will receive it on a mobile device, so having a minimal responsive theme was important. For the template I used Zurb email templates and just went with their basic example. One nice thing I did come across however is that you can use custom fonts in email with a few caveats. You can use Google fonts, or you can use your own if you purchase the font and import into CSS. In my case I just picked two from Google, Lato and Oswald, and went with it. I’ll be honest I’m not sure how widely these are supported but they worked on all my devices, plus I used the good old fashioned fallbacks of Arial and Helvetica. Another trick is to import the css file directly into the template. This way when you copy and paste into an inliner, it can pull styles from the source. Here is a simple way in blade: <style type="text/css"> {{ include(public_path().'/css/newsletter.css') }} </style> HTML I finally got fed up with the process of copy and pasting blocks of html tables and decided to make it better, and when I say better I mean better for me. You might think this is all dumb. :) The first step was getting away from HTML as much as possible. Nested tables are brutal without Dreamweaver and I’m not going down that path. I do love markdown and that’s what I wanted to write in. This gives me lots of benefits and my workflow is simple. Each week I create a new file and throughout the week I add to it anything I find interesting: notes, links, quotes, etc. I then take the finished markdown file and parse it. I follow this structure in my file: first section --break-- second section --break-- --posts-- <- This tag gets converted to a list of posts --break-- With this I just “break” apart each section, run it through markdown, and then loop it in a view: @section('content') @foreach ($contents as $section) @if (trim(strip_tags($section)) == '--posts--') @include(theme_view('newsletter.inc.posts')) @else @include(theme_view('newsletter.inc.section-start')) {{ $section }} @include(theme_view('newsletter.inc.section-end')) @endif @endforeach @stop The finished parsed file is then ready for me to go through as many times as I need. I read and re-read these way more than I should. I pretend I’m a sniper with only one shot, so I strive to make it the best I can. Once I’m comfortable with all the text then all that is left is to view source, copy and paste into the inliner, and schedule it for Sunday afternoon in my newsletter system. Stats To share a few stats so far I am getting around an 80% open rate with 1% unsubscribes. I’ve heard from a few sources that the bigger the list grows the open rate will typically decrease to around 50%, and the 1% is average. So I’m overall happy with my first few weeks into this experiment. When I started this I never imagined the time and effort that I would put into it. Thinking about it now I should have just imported a list of this weeks posts, sent those, and be done. :) Newsletter Join the weekly newsletter and never miss out on new tips, tutorials, and more. Laravel Jobs
https://laravel-news.com/laravel-news-digest-behind-the-scenes/
CC-MAIN-2018-47
refinedweb
773
78.59
src/examples/lee.c Tidally-induced internal lee waves This example illustrates the importance of non-hydrostatic effects for the generation of internal waves by barotropic tides flowing over a sill, as first studied by Xing & Davies, 2007. A close up of the full domain close to the sill is illustrated in the movie below. The sill geometry (hidden in Berntsen et al, 2009) is given by \displaystyle \begin{aligned} H(x) & = -50 + \frac{35}{1+(x/500)^4} & \text{ if }x < 0 \\ H(x) & = -100 + \frac{85}{1+(x/500)^4} & \text{ if }x > 0 \\ \end{aligned} The barotropic tide is imposed as inflow on the left boundary. Breaking, non-hydrostatic internal waves are generated on the lee side. Evolution of the temperature field The results at t=2/8 T and t = 3/8 T below, with T the M2 tidal period, can be compared, for example, with the corresponding results of Klingbeil & Burchard, 2013, Figure 9, a1 and b1. Temperature field at t=2/8 T Temperature field at t=3/8 T References Code We use the 1D (horizontal) non-hydrostatic multilayer solver. #include "grid/multigrid1D.h" #include "layered/hydro.h" #include "layered/nh.h" The Boussinesq density perturbation is given as a function of the “temperature” field T. #define drho(T) (1e-3*(T - 13.25)/(8. - 13.25)) #define T0(z) (8. + (13.25 - 8.)*(z + 100.)/100.) #include "layered/dr.h" The layer positions are remapped to \sigma levels and performance statistics are displayed. #include "layered/remap.h" #include "layered/perfs.h" // #include "profiling.h" The horizontal viscosity is set to 0.1 m/s2. 100 layers are used and the horizontal resolution of ~10 metres matches that of Klingbeil & Burchard, 2013. Monotonic limiting is used for vertical remapping. nl = 100; N = 2048; cell_lim = mono_limit; The maximum timestep is set to 100 seconds. The actual timestep is limited to about 5 seconds due to the CFL condition based on the maximum horizontal velocity and spatial resolution. Note that this is still much larger than the timestep used by Klingbeil & Burchard (0.56 seconds) and Bernsten et al., 2009 (0.3 seconds). An explanation for such a small timestep could be that the CFL restriction due to vertical motions can be quite restrictive for a vertically-Eulerian discretisation. For this (non-hydrostatic) example, the vertical velocities are of the same order as the horizontal velocities and since the vertical resolution is approx. 10 times larger than the horizontal resolution, the vertical CFL criterion is correspondingly smaller. This restriction is avoided with the vertically-Lagrangian solver. The M2 tidal period (in seconds). #define M2 (12.*3600. + 25.2*60.) The temperature profile T0(z) is imposed at inflow. The slightly complicated function below computes the vertical coordinate of a layer and returns the corresponding temperature. double Tleft (Point point) { double H = 0.; double zc = zb[]; for (int l = - point.l; l < nl - point.l; l++) { H += h[0,0,l]; if (l < 0) zc += h[0,0,l]; } zc += h[]/2.; return T0(zc); } The M2 tide with an amplitude of 0.3 m/s is imposed at inflow (left boundary) as well as the temperature profile. The outflow (right boundary) is free. event init (i = 0) { u.n[left] = dirichlet (0.3*sin(2.*pi*(t/M2))); u.n[right] = neumann(0.); h[right] = dirichlet(100./nl); T[left] = Tleft(point); T[right] = Tleft(point); The sill geometry, initial layer depths and initial temperature profile. foreach() { zb[] = x < 0. ? -50. + 35./(1. + pow(x/500.,4)) : -100. + 85./(1. + pow(x/500.,4)); double z = zb[]; foreach_layer() { h[] = - zb[]/nl; z += h[]/2.; T[] = T0(z); z += h[]/2.; } } } A naive discretisation of the horizontal viscosity. event viscous_term (i++) { if (nu_H > 0.) { scalar d2u[]; foreach_layer() { foreach() d2u[] = (u.x[1] + u.x[-1] - 2.*u.x[])/sq(Delta); foreach() u.x[] += dt*nu_H*d2u[]; #if NH foreach() d2u[] = (w[1] + w[-1] - 2.*w[])/sq(Delta); foreach() w[] += dt*nu_H*d2u[]; boundary ({w}); #endif // NH } boundary ((scalar *){u}); } } Outputs // fixme: plotting is (almost) the same as overflow.c void setup (FILE * fp) { fprintf (fp, #if ISOPYCNAL "set pm3d map corners2color c2\n" #else "set pm3d map interpolate 2,2\n" #endif "# jet colormap\n" )\n" "unset key\n" "set cbrange [8:13.5]\n" "set xlabel 'x (m)'\n" "set ylabel 'depth (m)'\n" "set xrange [-1500:2000]\n" "set yrange [-100:1]\n" ); } void plot (FILE * fp) { fprintf (fp, "set title 't = %.2f M2/8'\n" "sp '-' u 1:2:4\n", t/(M2/8.)); foreach (serial) { double z = zb[]; fprintf (fp, "%g %g %g %g\n", x, z, u.x[], T[]); foreach_layer() { z += h[]; fprintf (fp, "%g %g %g %g\n", x, z, u.x[], T[]); } fprintf (fp, "\n"); } fprintf (fp, "e\n\n"); // fprintf (fp, "pause 1\n"); fflush (fp); } event gnuplot (t += M2/1024.) { static FILE * fp = popen ("gnuplot 2> /dev/null", "w"); fprintf (fp, "set term x11 size 1024,300\n"); if (i == 0) setup (fp); plot (fp); fprintf (fp, "set term pngcairo font \",10\" size 1024,300\n" "set output 'plot-%04d.png'\n" "replot\n", i); } event figures (t <= M2/2.; t += M2/8.) { FILE * fp = popen ("gnuplot 2> /dev/null", "w"); fprintf (fp, "set term pngcairo font \",10\" size 1024,300\n" "set output 'T-%g.png'\n", t/(M2/8.)); setup (fp); plot (fp); } event moviemaker (t = end) { system ("for f in plot-*.png; do convert $f ppm:- && rm -f $f; done | " "ppm2mp4 movie.mp4"); }
http://basilisk.fr/src/examples/lee.c
CC-MAIN-2021-39
refinedweb
920
67.04
Django vs. Flask (2019 Comparison) 2018 which means Django is 13 years old and Flask is 8 years old. They are both extremely popular frameworks to build web applications with Python but they are so different in principle from each other. While Django follows a battery included approach making it a full packed, complete and opinionated framework, Flask is a micro framework, an un-opinionated framework that lets you choose what tools you can use for building a web app from the ORM to the templating engine. In this post we'll discuss some points you might want to consider if you need to learn Django or Flask or maybe both? Which framework you should consider using to build your next project? And in which situations? They are both so popular so there are more and more websites built using both of them and more job demands for both frameworks. Flask interface out of the box. Django has a predefined project directory structure while you can structure a Flask project as you want. When you every common web development task so you don't have to reinvent the wheel or waste your time building what other developers have already created. Just use the existing packages and concentrate on building the specific requirements for your project. The Django ORM is easy to grasp and straightforward and lets you express your businness domain requirements clearly then you have the Django admin interface, a complete web application that lets you do crud operations on your models such as creating, updating, deleting and displaying database records from an intuitive user interface generated on the fly without writing extra code. So thanks to these features, Django is your choice for either quick prototypes or final products. Now let's see how we can create a simple Hello World web application in both frameworks: Starting with Flask. Create a Python file: $ touch webapp.py Next, import Flask from flask package from flask import Flask Next, create an instance app = Flask(__name__) Also, create a view function that responds to HTTP requests when the main route / is visited: @app.route("/") def hello(): return "Hello, World!" This function responds with a Hello, World! response. Finally, add the code to run the application: if __name__ == "__main__": app.run() First we make sure, this Python file is running as an executable not included as a package then we call the app run() method to start the web application. Now you can run this app from terminal: $ python webapp.py You should get Running on (Press CTRL+C to quit) If you visit this address with a web browser you should get a web page with Hello, World! output. For Django, you should first generate a project (After installing Django) with the following command: $ django-admin startproject djwebapp $ cd djwebapp Next you should create a Django application: $ python manage.py startapp myapp Then change your settings.py to include this app in the Installed Apps array. Next open myapp/views.py and create a view function: from django.http import HttpResponse def index(request): return HttpResponse("Hello, World!") After that, you need to need to create the same example with Django. Conclusion Choosing the right framework depends on many things. You should take into consideration your goals. Are you just learning server side web development or you are building a project for a client? You should also consider your project requirements, some projects may be better developed in Django, other projects can be better created in Flask. Also remember if you need grained control over each part of your framework or you need to swap and make use of different existing tools you should use Flask where you only have the minimum functionalities and you can choose the other components by yourself.
https://www.techiediaries.com/django-vs-flask/
CC-MAIN-2021-39
refinedweb
631
61.67
unwrap Method (SQLServerStatement) Returns an object that implements the specified interface to allow access to the Microsoft JDBC Driver for SQL Server-specific methods. Parameters iface A class of type T defining an interface. An object that implements the specified interface. The unwrap method is defined by the java.sql.Wrapper interface, which is introduced in the JDBC 4.0 Spec. Applications might need to access extensions to the JDBC API that are specific to the Microsoft JDBC Driver for SQL Server. The unwrap method supports unwrapping to public classes that this object extends, if the classes expose vendor extensions. When this method is called, the object unwraps to the SQLServerStatement class. For example code, see Updating Large Data Sample, or unwrap Method (SQLServerCallableStatement). For more information, see Wrappers and Interfaces. isWrapperFor Method (SQLServerStatement) SQLServerStatement Members SQLServerStatement Class
https://technet.microsoft.com/en-us/library/dd571334.aspx
CC-MAIN-2016-36
refinedweb
138
50.63
- NAME - DESCRIPTION - JAVASCRIPT INTERFACE - Calling semantics NAME PerlSub - Encapsulate a perl CODE reference in JavaScript DESCRIPTION Perl subroutines that you pass to javascript will be wrapped into instances of PerlSub. More generally, any perl CODE reference that enters javascript land will become a PerlSub instance. ... $ctx->bind_function(perl_rand => sub { rand }); $ctx->bind_function(add => sub { my($a, $b) = @_; return $a + $b; }); ... $ctx->eval(q{ val = add(10, 20); // val now 30 say("10 + 20 = " + val); // 'say' itself is defined in the // stock context v1 = perl_rand(); v2 = perl_rand(); say(v1 + " + " + v2 + " = " + add(v1, v2)); }); JAVASCRIPT INTERFACE When you send a subroutine from perl to javascript, you'll get an instance of PerlSub. PerlSubs behave pretty much like native javascript Functions. Like Function instances, instances of PerlSub are invoked with a parenthesized list of arguments. var foo = perl_rand(); // invoke perl_rand, foo now random number var bar = perl_rand; // now 'bar' is another reference to perl_rand var baz = bar(); // invoke it, baz now a random number And as with any other object, you can manipulate properties and call methods of instances of PerlSub. perl_rand.attr = 'This is a function imported from perl'; add.toString(); // Please try this ;-) add.toSource(); // Or this! Instance methods PerlSub.prototype implements methods analogous to Function.prototype's. - call(thisArg, ARGLIST) Analogous to Function.prototype.call. - apply(thisArg, somerray) Analogous to Function.prototype.apply. - toSource( ) Analogous to Function.protype.toSource. Tries to uncompile and returns the perl code associated, depends on B::Deparse. - toString( ) Returns the literal string "sub {\n\t[perl code]\n}" Instance properties - name string Store the name on the original perl subroutine if named or the string "(anonymous)". - prototype object Remember that any Function instance has a property prototype, and PerlSub instances have one also. Please read your javascript documentation for details. - $wantarray boolean Determines the perl's context, false-> 'scalar', true-> 'list', to use when the subrotine is called. Defaults to true!!. See "Perl contexts". Constructor You can construct new perl functions from inside javascript. - new PerlSub(PERLCODE) var padd = new PerlSub("\ my($a, $b) = @_;\ return $a +$b\ "); Returns a new instance of PerlSub, that can be called in the normal way: padd(5, 6); // 11 PERLCODE, a string, is the body of your new perl subroutine, it's passed verbatim to perl for compilation. Syntax errors will throw exceptions at construction time. If you ever pass to perl instances constructed by PerlSub you'll see normal CODE references indistinguishable from anonymous subs. Calling semantics When you invoke a PerlSub from javascript, in most cases JSPL does the right thing. But there are differences in the way than javascript and perl behave respect to function calling that you should be aware of. Mainly when you export to javascript arbitrary perl functions or expose perl namespaces to javascript. Read this section for the gory details. Perl contexts In perl you call a subroutine in either scalar or list context. Perl determines this context automatically using the form of the expression in which the function is called. And perl lets your subroutine know which context is being used (via "wantarray" in perlfunc). Some subroutines behave differently depending on which context they where called. In javascript this concept doesn't exists. To make the problem of calling perl subroutines from javascript even more interesting, perl subroutines can return list of values where javascript functions always return a single value. To solve the first problem, the context in which the perl subroutine call will be made is taken from the $wantarray property of the instances of PerlSub. $wantarray defaults to true, which is the correct value to use in the vast majority of the cases. We explain why and when to use a false value below. For the return value of a PerlSub call you will get either a single value or a PerlArray. You'll get a single value when $wantarray is false or when the list returned has a single element, otherwise you'll get a PerlArray. You'll never receive arrays with a single element in them. This behaviour may be unfortunate but it makes the rest of the cases much more simpler. Besides, you can check trivially for that condition as follows: res = perl_sub_that_returns_a_list(); if(res instanceof PerlArray) { ... } else { ... } Having $wantarray default to true is the best thing to do because, on one side, perl subroutines returning single values are not affected, and on the other side it's the correct value to use for subroutines returning lists. You'll need to set $wantarray to false when: you need to call a perl subroutine that uses wantarray and/or you need to force 'scalar' context for the call. $wantarray usage example: // Asuming that 'perlfunc' is a PerlSub perlfunc.$wantarray = true; // this is the default listres = perlfunc(); // Called in 'list' context perlfunc.$wantarray = false; scalres = perlfunc(); // Called in 'scalar' context this In javascript every call to a function is a method call and a reference to the caller is visible inside the function as this. The function author decides if behave as a method, using this or as a simple function ignoring this. In perl method calls use a syntax different from regular calls. A subroutine called as a method sees its caller in front of the other arguments. When the caller to a PerlSub is either a PerlObject or a Stash, JSPL's engine will push it in the arguments. The call will use perl's method call semantics. In every other case, JSPL's engine assumes that you are creating or extending regular JavaScript objects with PerlSub-based methods and you need a way to get the value of this in a transparent way. Thats the purpose of the magical variable "$This" in JSPL. Perl code not aware of being called from JavaScript will see its arguments unmodified. Perl code that needs JavaScript's this gets it in $JSPL::This. And everyone is happy.
https://metacpan.org/pod/PerlSub
CC-MAIN-2015-40
refinedweb
981
64
I am using inheritance in a pretty standard exercise here. I have a Card class and then a few classes that extend the Card class of different Card Types. I however have a different type of class that still extends my Card class but is dealing with a LinkedList. I have all of this accomplished, however in the exercise, there is a step that states "Have a main() method populate a Billfold object with Card objects, and then call printCards()." Here is the BillFold class. import java.util.*; public class Billfold extends Card { LinkedList<Card> cards; Card firstCard; Billfold billf; public static void main(String[] args) { billf = new Billfold(); printCards(); } public Billfold() { cards = new LinkedList<Card>(); } public void addCard(Card newCard) { if(firstCard == null) firstCard = newCard; else cards.add(newCard); } public void printCards() { for(int i = 0; i < cards.size(); i++) { if(cards.get(i) != null) { super.print(); } } } } And then here is the original Card class BillFold is extending import java.util.*; import java.io.*; public class Card { private String name; private Card nextCard; public Card() { name = ""; } public Card(String n) { name = n; } public Card getNext() { return nextCard; } public boolean isExpired() { return false; } public void print() { System.out.println("Card holder: " + name); } public void setNext(Card c) { nextCard = c; } } So my question is: What does it mean to populate a Billfold object with Card objects in my Billfold Class? *Don't be confused by my inheritance info, this question is not about inheritance. Thanks in advance.
http://www.javaprogrammingforums.com/object-oriented-programming/5722-populate-object-other-objects.html
CC-MAIN-2016-07
refinedweb
246
62.48
On 23 Oct 2001, at 11:39, Robert Marcano wrote: Fo2pdf serializer based on Fop builds whole document in memory. That is why it uses so much memory. The only way to optimizy it is to optimize Fop. > I have doubts about using Cocoon+FOP to serve medium to large sized PDF > reports. I was doing more tests and noted that my servers needs 40 to > 60Mb of RAM per concurrent request to generated a 40 pages PDF report. I > think that the fo2pdf serializer is the responsible for this memory > usage, but i may be wrong. > > Someone has a clue of where to look to optimize this. I have tested > JInfonet JReport Enterprise server and it doesn't use this amounts of > memory (It is expensive and I will serve only a few reports). > > > > Robert Marcano wrote: > > > I haven't used Cocoon2 for about three months, and for the first time > > I need to generate a PDF file with data retrieved using SQL. My > > question is related to memory usage when using large xml structures. > > > > I generated a xml file using XSP and the ESQL stylesheets (a table > > with 1336 records) in order to try to isolate the problem source, this > > static file was saved and I copied it multiple times with diferent > > names to my web application in order to transform them with a XSL > > styleshhet to the XSL:FO namespace (a simple table with 3 columns), > > and serialized it with "fopdf" > > > > This is the sitemap fragment used: > > > > <map:match > > <map:generate > > <map:transform > > <map:serialize > > </map:match> > > > > I replaced the pipelines in cocoon.xconf with the NonCaching > > alternatives. When i access the first pdf file, it is generated and > > the heap grow to about 60Mb of RAM, when I access another one it grows > > near 40Mb, and it continues to grow with each new pdf request. The > > free JVM memory always remains low (around 10 or 15 Mb) so the memory > > is not reclaimed with garbage collection. > > > > I don't know what may be causing my problem, but if I'm not using > > caching pipelines, this Cocoon2 behavior is not normal. > > > > Note: I'm using Cocoon2rc1 > > > > Thanks in advance > > > > > > > > > > > -- > Robert Marcano (office: robmv@promca.com, personal: robert@marcanoonline.com) > System Architect IBM OS/2, VisualAge C++, Java, Smalltalk certified > > aol/netscape screen id: robmv > jabber id: robmv@jabber.org > msn messenger id: robert@marcanoonline.com > icq id: 101913663 > > > > > --------------------------------------------------------------------- > To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org > For additional commands, email: cocoon-dev-help@xml.apache.org > Maciek Kaminski maciejka@tiger.com.pl --------------------------------------------------------------------- To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org For additional commands, email: cocoon-dev-help@xml.apache.org
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200110.mbox/%3C3BD5B06F.309.49E18D1@localhost%3E
CC-MAIN-2016-26
refinedweb
447
54.63
12 July 2011 By clicking Submit, you accept the Adobe Terms of Use. In Part 2 of this series on creating Flex mobile skins, I discussed the effect of screen density (DPI) on component skinning and layout of mobile applications. I also showed how to use application scaling, density-specific bitmaps, and CSS media queries to adjust and accommodate for various DPI values. Aside from screen size, screen density, and form factor differences, Flex mobile application developers now have to address differences across platforms. In addition to Android, Flex 4.5.1 and Flash Builder 4.5.1 support two more platforms to target for Flex Mobile projects: Apple iOS and BlackBerry Tablet OS. Each of these platforms has their own distinct look and feel, UI patterns, and Human Interface Guidelines (HIG). The Mobile theme in Flex 4.5 doesn't cater to any one specific platform. Like Spark, the Mobile theme has a neutral look and feel with design elements that generally work across platforms. Depending on your needs and your customers' needs, you have the freedom to change the appearance of your applications quickly using CSS styling or you might opt for additional control with more advanced mechanisms such as custom skinning, FXG, or even platform-specific skins that blend in well with other native applications. Each platform supported by AIR has its own unique features, characteristics, and challenges. Flex 4.5.1 accounts for a few of these challenges already and gives you the option to add new, platform-specific behavior when necessary. This section briefly covers the major differences that can have an effect on both the visual and behavioral design of your application. AIR on Android adds keyboard support for buttons found on Android hardware: Home, Back, Menu, and Search. These buttons may be on the physical hardware or may be present on-screen. Some device manufacturers omit the Search button. The Back button on Android is used to navigate back to the previous activity, even if that activity is not part of the current application. iOS devices and the BlackBerry PlayBook don't have a dedicated Back button. Instead, iOS and PlayBook applications typically place an optional soft Back button in the top left corner of the screen. For these two platforms, the Back button's navigation behavior is local to each application. The Menu button on Android opens an options menu (that is, the ViewMenu mobile component in Flex 4.5). The options menu displays a list of commands (as buttons) that are used in the current activity. The closest equivalent on iOS is the action sheet. The action sheet displays a list of actions available for the currently active or currently selected item. The BlackBerry PlayBook has a touch-sensitive bezel that applications can use to display a container of arbitrary controls. The Search button on Android is typically used for search data within the application context. For example, a contacts application would provide a text search to filter contact data. The search activity on Android has a consistent look and feel for all applications. Neither iOS nor BlackBerry Tablet OS have application-specific search UI design elements. Android 2.x and iOS show a status bar at the top of the application that is hidden when an application is using full screen mode. In Android 3.x Honeycomb, the status bar at the bottom of the screen is always visible, full screen or not. Applications for the BlackBerry PlayBook are run in full screen by default. A swipe down from the top right corner will show the system status bar, but this will not resize the AIR application window. The default sans serif device font used by AIR differs by device. The PlayBook uses Myriad Pro, iOS uses Helvetica, and Android uses Droid Sans. To be clear, even when using the same font size across devices with the same DPI, font metrics will be different due to the different fonts in use. The amount of text you can fit in a given area will vary across screen sizes and platforms (see Figure 1). Single lines of text are generally ascent centered (that is, not including descenders in text height) across all platforms. Skins in the Mobile theme also use ascent centering. All three platforms have soft keyboard support in both portrait and landscape orientations. Some Android phones, however, have a permanent hardware keyboard and no soft keyboard (for example, Motorola Droid Pro), and others have an optional hardware keyboard that supersedes the soft keyboard when open. Android also allows third-party soft keyboards. Keyboards such as Swype and SwiftKey integrate seamlessly with AIR. The size of the soft keyboard varies by platform, device, and even device orientation. Soft keyboards for all three platforms are aligned with the bottom edge of the screen in all orientations. By default, Flex Mobile projects built in Flash Builder are automatically configured to shrink and grow the application height when the soft keyboard is activated and deactivated respectively. On Android and BlackBerry PlayBook, text entry in an AIR application's TextField and the new StyleableTextField controls work without any issues. The implementation of these controls on iOS, however, is a special case. In order to get full featured error correction and text selection on iOS, text editing occurs in a native control that is overlaid on top of the stage. This technique creates a few limitations when editing TextField and StyleableTextField content on iOS, including: restrictproperty is not supported while editing. The restrict behavior is applied after exiting a TextField. scrollRectproperties specified on ancestor containers. AIR on Apple iOS and BlackBerry Tablet OS supports 32-bit color. AIR on Android is currently limited to 16-bit color using RGB565. Each platform vendor has its own set of Human Interface Guidelines (HIG) also known as UI guidelines: Following these conventions will help you create Flex mobile applications that look and feel like their native counterparts. I've barely scratched the surface with respect to platform-specific issues. For more on this topic, you may want to visit and pttrns.com, two sites that are continuing to grow their catalogs of UI patterns. While AIR allows designers and developers to build applications on a common runtime, it's still important to know when and how to take platform-specific differences into account while designing your applications. Flex mobile projects give you a range of options from using a single UI across all platforms to redesigning your UI for each supported platform. Flex 4.5.1 and Flash Builder 4.5.1 provide basic tools for defining and using platform-specific styles and skins in your projects. In Part 2 of this series on creating Flex mobile skins, I discussed how to use the application-dpi custom media feature in a CSS media query to set DPI-specific styles such as font size and padding values. Flex 4.5 supports one additional custom media feature: os-platform . This feature allows developers to specify platform-specific styles. Here's a simple example that sets a default ActionBar chromeColor value, as well as Android-specific and iOS-specific values: ActionBar { chromeColor: #000000; } @media (os-platform: "Android") { ActionBar { chromeColor: #999999; /* dark gray */ } } @media (os-platform: "IOS") { ActionBar { chromeColor: #6DA482; /* blue */ } } This example code uses the default ActionBarSkin skin class from the Mobile theme and changes the chromeColor property. You can use this same technique to completely replace the default skin with your own platform-specific skin. The main drawback of using CSS for platform-specific skins is that all skins and their assets, regardless of platform, are compiled into your application. This is necessary since CSS media queries are computed at runtime, not compile-time. The end result is a larger binary file (APK, BAR, or IPA file), which increases download time and your application's footprint. Another way to get platform-specific skins and styles is to compile your application separately for each platform, each time overlaying a different theme via the theme compiler argument. This gives you platform-specific styling without bloating your application binary. Note that Flash Builder 4.5.1 does not have built-in support for per platform compiler arguments in a single project. You have three options: The third option provides the most control, giving you more options to add platform-specific behavior (for example, an on-screen back button for iOS and BlackBerry Tablet OS). If you choose to use themes this way, choose a build option that works best for your workflow. For more about theme support in Flex 4.x, see About themes in the Flex documentation. As usual, the simplest way to make changes to appearance is to use CSS styles. For this example, I'll expand the ActionBar example styles I showed earlier to add a few more platform-specific nuances. I have an app that I'm developing for a fictitious business reviews startup named holr!. They want their ActionBar to look like those in native applications on all three platforms, but they don't have the budget to spend on a custom skin. For this tutorial, I'll show CSS examples for each platform. Remember that these styles should be defined in the application MXML file either in an <fx:Style> tag inline or an external file. I'll start with the following application MXML, which I created using the View-based Application template from the Flex Mobile project wizard: <?xml version="1.0" encoding="utf-8"?> <s:ViewNavigatorApplication xmlns:fx= xmlns: <fx:Style> @namespace s "library://ns.adobe.com/flex/spark"; </fx:Style> <s:navigationContent> <s:Button </s:navigationContent> <s:actionContent> <s:Button </s:actionContent> </s:ViewNavigatorApplication> Ideally, if my goal is to have a single binary, I would add some logic to suppress the back button in the ActionBar when running on Android. For this styling example, I'll leave it alone to keep things simple. The Mobile theme does a pretty good job of reflecting the look of today's Android applications. The flat-styled buttons are perfect for the job. All I have to do is change the chromeColor property to use the client's signature red color (see Figure 2). Since I'll be using the same color across all platforms, I don't need to wrap this rule in a media query. s|ActionBar { chromeColor: #990000; } Titles in the ActionBar are left-aligned by default. If you want to center the title (see Figure 3), add titleAlign:center to the ActionBar style rule. ActionBar styling for iOS gets a little bit trickier. The title is center-aligned and the buttons have a distinct shape and some non-zero padding. Luckily, the ActionBar has a defaultButtonAppearance style with two options for normal and beveled. Setting the defaultButtonAppearance style of the ActionBar to beveled automatically sets several style values, including beveled buttons, padding around the navigation and action groups, and center title alignment (see Figure 4) . To see all the settings used for defaultButtonAppearance=beveled , find the Mobile theme's defaults.css file at 4.5.1/frameworks/projects/mobiletheme/defaults.css and look for usage of the .beveled style name selector. @media (os-platform: "IOS") { s|ActionBar { defaultButtonAppearance: beveled; } } Styling the ActionBar for BlackBerry can be a little more complicated. The reason I say that is because RIM provides their own Flash UI framework with their own MovieClip-based components. While it's possible to integrate the native RIM components, in this example, I'll stick with Flex. RIM uses an ActionBar-like container at the top of their built-in applications. Many of these apps use a background image or texture that shows through a transparent ActionBar. Like iOS, RIM uses center-aligned titles. However, RIM uses a normal font weight for their titles instead of bold. RIM uses rounded buttons on the left and right of the ActionBar. Unlike iOS however, RIM does not use an arrow-styled back button (see Figure 5). The PlayBook has an actual screen DPI of 170. This is high enough that the fontSize the Flex team selected for the 160 DPI classification is a tad smaller than what RIM recommends. I'll make some fontSize adjustments in this example to compensate. For a full set of styles with updated fontSize values for the PlayBook, download the defaults_BlackBerry_PlayBook.css sample file. You can get pretty close to the RIM look and feel, but you'll have to override a few CSS rules from the Mobile theme that use advanced CSS selectors. @media (os-platform: "QNX") { s|ActionBar { defaultButtonAppearance: beveled; } s|ActionBar #titleDisplay { fontSize: 22; fontWeight: normal; } s|ActionBar.beveled s|Group#navigationGroup s|Button { /* use the rounded button instead of the arrow button */ skinClass: ClassReference("spark.skins.mobile.BeveledActionButtonSkin"); } s|ActionBar.beveled s|Group#actionGroup s|Button, s|ActionBar.beveled s|Group#navigationGroup s|Button { fontSize: 16; fontWeight: normal; } } The final application file looks like this: <?xml version="1.0" encoding="utf-8"?> <s:ViewNavigatorApplication xmlns: <fx:Style> @namespace s "library://ns.adobe.com/flex/spark"; @namespace s "library://ns.adobe.com/flex/spark"; s|ActionBar { chromeColor: #990000; } @media (os-platform: "IOS") { s|ActionBar { defaultButtonAppearance: beveled; } } @media (os-platform: "QNX") { s|ActionBar { defaultButtonAppearance: beveled; } s|ActionBar #titleDisplay { fontSize: 24; fontWeight: normal; } s|ActionBar.beveled s|Group#navigationGroup s|Button { /* use the rounded button instead of the angled back button */ skinClass: ClassReference("spark.skins.mobile.BeveledActionButtonSkin"); } s|ActionBar.beveled s|Group#actionGroup s|Button, s|ActionBar.beveled s|Group#navigationGroup s|Button { fontSize: 20; fontWeight: normal; } global { fontSize: 20; } } </fx:Style> <s:navigationContent> <s:Button </s:navigationContent> <s:actionContent> <s:Button </s:actionContent> </s:ViewNavigatorApplication> A quick note about the advanced CSS selectors: The Mobile theme uses styleName and ID selectors to isolate styles to specific ActionBar skin parts. I do this to avoid inadvertently styling other components in the ActionBar content groups. For example, I don't want the buttons in the ActionBar to have the same fontSize value as the title text. I really want the title to stand out with larger text. Styling the built-in Mobile theme skins is pretty simple. Although platform-specific styling for the ActionBar is more complex, it isn't too difficult thanks to the defaultButtonAppearance and titleAlign styles. Now it's time for the hard part. This second tutorial will help you lay the groundwork to build your own themes for Flex Mobile projects. If you prefer, you can skip the main tutorial and go straight to my example iOS theme. This theme is just a proof of concept, but it does demonstrate how powerful Flex skinning can be. To clarify, the tutorial that follows explains how to create a theme overlay that uses the Mobile theme as a base theme. In general, Flex 4 theme authors should create overlays for the Spark theme for desktop projects or the Mobile theme for mobile applications. Many important style attributes are only valid when using a specified theme name. For example, chromeColor is only valid in the Spark and Mobile themes. If you create and use a standalone theme and reference chromeColor in CSS or MXML, the compiler will log an error. The first step to creating a custom theme is to create a library project. This project will output the theme SWC that you can reuse across any compatible project. Since you're extending the Mobile theme, you need to add its SWC in order to extend classes and reference other dependencies from the theme. Once it is added to the Build Path Libraries tree, the SWC should be expanded. This is required so that dependencies that you add from the Mobile theme SWC aren't also compiled into your custom theme. The default mobile project configuration (see /path/to/4.5.1/frameworks/airmobile-config.xml) adds the Mobile theme SWC by default. Later on when creating the Flex Mobile project, you don't have to add the Mobile theme SWC separately. This is required to support features such as Go To Definition for classes in the Mobile theme. The path that Flash Builder initially uses after you add the SWC is incorrect. Now that you have a project, a few more steps are required to configure it as a theme. -include-file defaults.css ../defaults.css. This completes the basic setup of a theme SWC. After defining styles in defaults.css and adding additional skin classes, the SWC file in the project bin folder can be used as a theme overlay in two ways: If you're producing themes, you can add metadata and a preview image along with your theme SWC or CSS file for a more polished importing experience. For more information on theme production, see Creating Themes. After setting up the theme, but before creating my skins, I'll set up a project to test my theme as I'm developing it. In this project, I use a few techniques to make iterating on my theme a bit easier. <fx:Script>tag. A simple statement such as ThemeClasses;is sufficient. You might be wondering why I don't use the Flex Theme properties page in the project Properties dialog box to set up a theme. The short answer is that the theme feature in Flash Builder was designed mainly for importing prebuilt themes that don't change. As I make incremental changes to the theme project, I want my mobile project to recompile. These steps are the minimum required to get Flash Builder to recompile the mobile project when the theme project changes. This phase makes up the bulk of the work. I've already described the mobile skinning development process in detail in Part 1 and Part 2 of this series on creating Flex mobile skins. To recap, the process is as follows: hostComponentproperty. chromeColor). skinClassand style values that are not affected by DPI. application-dpimedia query feature to filter values specific for each DPI for font size and padding styles. Since I've covered this process already in earlier articles, I'll skip the gory details. If you've followed the steps in this tutorial so far, you can copy the CSS example from the first tutorial in this article into the theme library project defaults.css file. When you run the mobile project, it will pick up the style rules from the theme. The iOS theme that I created relies heavily on custom skins based on the MobileSkin base class and the default skins from the Flex 4.5 Mobile theme. Since themes are intended to be used with any type of project, I didn't use the os-platform media query feature to filter style rules based on platform. This gives me the flexibility to use my iOS theme on Android or BlackBerry Tablet OS applications if desired. To try out the iOS theme with Flash Builder 4.5.1, download the mobiletheme_ios_usage.fxp sample file and follow the instructions below. Flash Builder automatically creates a run configuration for you. To package and deploy this project to an iOS device, you'll need to set up an Apple Developer account. For more information, read Andrew Shorten's article Using Flash Builder 4.5 to package applications for Apple iOS devices. To learn more about mobile skinning and the work I've done in this proof of concept, see my blog post titled Example: iOS Theme for Flex Mobile Projects. As you build mobile applications, consider which platform-specific patterns that you might apply to your project. As the smartphone and tablet market continues to grow and diversify, AIR and Flex will continue to provide a powerful cross-platform approach to mobile application development. For more on mobile development with the Adobe Flash Platform see The Mobile and Devices Developer Center. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work are available at Adobe.
https://www.adobe.com/devnet/flex/articles/mobile-skinning-part3.html
CC-MAIN-2015-32
refinedweb
3,326
55.13
I think wsdlgen with wstk is supposed to create the wsdl. Shashi Anand Senior Software Engineer Infogain India B 15 Sec 58, NOida, UP 201301, India -----Original Message----- From: Max Stolyarov [mailto:MStolyarov@Novarra.com] Sent: Monday, January 21, 2002 11:14 PM To: soap-dev@xml.apache.org Subject: Need Help on generating XSD document for RPC Web Service I created a Web Service that has a SOAP interface and which uses RPC messaging. I am using Apache SOAP2.2 implementation and Tomcat to run the service. My service is just a simple class registered as a web service, such as: public class myservice { public Hashtable send( MyMessage msg ) { Hashtable ht = new Hashtable(); return ht; } } As part of this class I defined a single method send, which accepts a single parameter which is an instance of MyMessage class. public class MyMessage { public MyMessage(){}; public void setMsgID( String id ) {}; public String getMsgID() { return msgID; }; } This class is implemented as a Java Bean. I have the service working fine, but now I am trying to create an XSD and WSDL document for it, and that is where I run into a problem. Does anyone know how to do it and where I can get an example of how to do it. Also, after creating an XSD document, what should I do to validate an instance XML documents against defined schemas. Thanks in advance. Max Stolyarov Max Stolyarov NOVARRA 3232 Kennicott Ave Arlington Heights, Illinois 60004 Phone: (847) 368-7800 x 252 Facsimile: (847) 590-8144
http://mail-archives.us.apache.org/mod_mbox/ws-soap-dev/200201.mbox/%3C310C7BE7CAC2D411AA5E0080C851C8FEE914E9@WINNT_4%3E
CC-MAIN-2019-18
refinedweb
254
60.95
UPDATE: Check out step 8 for the latest version of my scanner and a download link for the python scripts. Hi, :-) Step 1: Setting Up the Hardware I made the poles out of fairly cheap multiplex wood using a 2mm cutting bit on my CNC machine. This allowed me to pre-drill 2mm mounting holes for the Raspberry, so I just needed 2.5mm screws that would instantly fix the raspberry to my frame. For the PI Camera, I designed a small and easy to print bracket (as I need 40 of them, so it needed to be small) that can hold the camera securely and would easily allow me to change the angle the camera would be pointing at. To fancy up the poles I also added a 1meter strip of 60 LEDs to each one, to provide some extra light for the photos and just because it looked cool :-) Step 2: Connecting Everything Up Connecting 40 computers with ethernet and power was going to be messy, but I wanted to do it as efficient as possible. Unfortunately the Raspberry PI does not support Power-over-Ethernet, so I had to make this myself. I cut 40 ethernet cables, each 5 meters long. I kept all cables the same length so I know that what ever voltage I would lose over this distance would be equal for all and I would be able to adjust this on the power supply to get a very accurate 5v. As 100mb ethernet only requires 4 of the 8 cables inside an ethernet cable, I could use 2 for providing the 5v to the raspberry. So I ended up putting 80 (2x 40) connectors on the cables using just 6 of the 8 wires (2 not being used).. On the other side, I build a "power distribution board" from my single 60A 5v power supply to where I could easily connect all the 5v and ground wires to coming from each ethernet cable. Step 3: The Software I am using Raspian OS, just the default download from the raspberry pi website. To collect all the images, I am using a central file server (in my case I am using a Qnap). I configured the raspbian image to connect to the file server using cifs. This is done in the /etc/fstab file. I am also using the central file server to store my software, so I can make modifications without having to update every raspberry on its own. After I completed this image, I used dd (on my mac) to clone the SD card 40x for each raspberry. I wanted to write a "listening" script that each raspberry would run, listening to a particular network broadcast package that would trigger the camera and then save the photo and copy it to the file server. As I want all the images to be stored in a single directory (one directory per shot), I am using the local IP address of each raspberry (the last 3 digits) for a prefix of the filename. Here the python listening script I am using: #!/usr/bin/python import socket import struct import fcntl import subprocess import sys MCAST_GRP = '224.1.1.1' MCAST_PORT = 5007 sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(('', MCAST_PORT)) mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY) sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)])4 + '.jpg' pid = subprocess.call(cmd, shell=True) print "photo uploaded" To initiate all the raspberries to take a photo, I created a "send script". That would ask for a name. This name is send to the raspberries to include in the prefix of the filename. So I know who the images are from. Here the python send script: import socket import sys import time print 'photo name:' n = sys.stdin.readline() n = n.strip('\n') MCAST_GRP = '224.1.1.1' MCAST_PORT = 5007 sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2) sock.sendto(n, (MCAST_GRP, MCAST_PORT)) The listening script checks the name received. If the name is reboot, reload or restart it does a special action, instead of shooting a photo. To configure what options I want to use for raspistill (the default image capture software on the raspberry for the PI camera) I am using an options.cfg file to configure this. Again this is stored on the central file server, so I can easily change the options. I did some testing to see how in-sync all the Raspberry Pies would take the photo. As they all receive the network broadcast package at the exactly same time, I found this worked great. I did a setup test with 12 units all taking a photo of my iPhone running the stopwatch app. Each photo captured he exact same 1/10th of a second.:
http://www.instructables.com/id/Multiple-Raspberry-PI-3D-Scanner/
CC-MAIN-2017-26
refinedweb
813
62.38
17489/pods-ip-address-from-inside-a-container-in-the-pod I'm trying to run Aerospike cluster in kubernetes and the config files need its own IP address. How do I get pods IP address from inside a container in the pod. Make sure that your pod yaml file has an environment variable that sets the ip address. You can do this by adding the config blog defined bellow: env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP Recreate the pod/rc and then try echo $MY_POD_IP Just do port forward. kubectl port-forward [nginx-pod-name] 80:80 kubectl ...READ MORE You can make sure that the replication ...READ MORE You can define a health check in ...READ MORE You can use the writemany Follow these steps Add --bind-address=0.0.0.0 option to the line Created ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/17489/pods-ip-address-from-inside-a-container-in-the-pod
CC-MAIN-2022-40
refinedweb
180
67.96
No Settings are available in "Preferred format", only preset defaults are used Bug Description = Transmageddon = [ Description ] Transmageddon ships gstreamer preset files in a shared directory, /usr/share/ [ Development fix ] Fixed in Quantal in an identical way. [ Regression potential ] If the fix doesn't work then it's possible that transmageddon won't be able to find its presets and will not be able to encode anything. [ Testing ] There should be no functional difference. Test with both the old and new transmageddon that you can still transcode some videos using a variety of settings, and that the settings you choose are the ones which are pplied. = Rhythmbox = [ Description ] No encoding preset is shipped with Rhythmbox, meaning that subpar defaults for encoding are selected. [ Development Fix ] Quantal's Rhythmbox has support for custom encoding settings. There, we've shipped the same set of "ubuntu-default" presets, but have set the default (in rhythmbox.gep) to use the user's selected custom settings. [ SRU fix notes ] We Break and Replace old transmageddons, which have a file conflict with this version. ** Thus, the Rhythmbox SRU MUST NOT be accepted, both into -proposed and -updates, without the transmageddon one. Otherwise we will introduce upgrade failures. ** [ Regression potential ] Perhaps the new settings will fail to apply and Rhythmbox will pick up a worse set than it had before. [ Testing ] Rip a CD with the new Rhythmbox with the default settings and determine that the quality and other encoder parameters are what they should be. [ Original Description ] This has been going on since 11.10 dev, maybe should be addressed In Edit > Preferences, on the Music tab the Settings button for Preferred Format is greyed out & unavailable. So there is no user available means to adjust encoding for vorbis, mp3, mp4, ect. Even using something like sound-juicer to change the very same 'Preferred.. ' is of no use, Rhythmbox doesn't use. ProblemType: Bug DistroRelease: Ubuntu 12.04 Package: rhythmbox 2.95-0ubuntu3 ProcVersionSign Uname: Linux 3.2.0-17- NonfreeKernelMo ApportVersion: 1.94-0ubuntu1 Architecture: i386 Date: Sat Mar 3 19:25:18 2012 InstallationMedia: Ubuntu 12.04 LTS "Precise Pangolin" - Alpha i386 (20120302) ProcEnviron: TERM=xterm PATH=(custom, user) LANG=en_US.UTF-8 SHELL=/bin/bash SourcePackage: rhythmbox UpgradeStatus: No upgrade log present (probably fresh install) Related branches - Ubuntu branches: Pending requested 2012-07-31 - Diff: 70 lines (+40/-1)4 files modifieddebian/changelog (+7/-0) debian/control (+1/-1) debian/patches/series (+1/-0) debian/patches/transmageddon_specific_preset_directory.patch (+31/-0) Testing 12.04 beta I tried to rip a CD (Helb Alpert - he's the king right?) and found that it seems to only be encoding at 32kb/s by default. test@precisepan High Performance MPEG 1.0/2.0/2.5 Audio Player for Layer 1, 2, and 3. Version 0.2.13-4 (2011/09/29). Written and copyrights by Joe Drew, now maintained by Nanakos Chrysostomos and others. Uses code from various people. See 'README' for more! THIS SOFTWARE COMES WITH ABSOLUTELY NO WARRANTY! USE AT YOUR OWN RISK! Title : The Lonely Bull Artist : Herb Alpert Album : Definitive Hits Year : 2001 Comment : Genre : Unknown Playing MPEG stream from 01 - The Lonely Bull.mp3 ... MPEG 1.0 layer III, 32 kbit/s, 44100 Hz joint-stereo [2:17] Decoding of 01 - The Lonely Bull.mp3 finished. I played the file using VLC and found the codec info agrees with this stating 32kb/s - however when looking at the statistics the bitrate seems to be fluctuating which makes me think that the default settings are for very low quality VBR. Anyone know where Rhythmbox stores it's settings for calling the encoder (gstreamer?) these days? I've been playing about with rhythmbox.gep, but it's not particularly well documented.. I talked with Jonathan today about this. Currently the only way to get rhythmbox to use a new lame preset, is to first create the preset. In python this looks like something like: import gst; l = gst.element_ l.props.quality = 2.0; l.save_ This will create the preset in ~/.gstreamer- We will need to ship the preset so it ends up /usr/share/ Attached you will find the preset and the updated rhythmbox.gep which references this preset. And the gep file to be shipped so as the default player uses our new Lame preset. So just for info, if or when you have a moment I'm sure it will become apparent when it ships - The user will have a choice of some presets, somewhat similar to how soundconverter works , (descriptive presets, I believe 6 And - there are still some apps (third party), that use the gst profiles pipelines in gconf, are those profiles due to be removed? The encoder name is the preset is incorrect. Please use this one. I still though have 'Additional software needed...' when i select the mp3 preset ? On 04/20/2012 02:03 PM, Conor Curran wrote: > I still though have 'Additional software needed...' when i select the > mp3 preset ? > See the same thing Got this to work but have to go out so the location of the preset I'll check later In comment 4 you said "~/.gstreamer- I think the preset folder is where to go As far as the .gep & the preset name - got rid of "12:04-default" & just went with a plain word, doesn't really matter what , used high (no quotes) Also adjust the name in the .psr That stopped RB from complaining about missing decoder - Ran some test encodes of same track altering the quality= in the .psr, (used 2,0 & 6) & results where as expected, could be a little adjustment could be done to the preset as it currently is, for the most part good (been a while since I've used mp3 It was the quotes in the gep, please use this gep instead. Silly me. Doug, You beat me to it :) So what would you recommend for a vbr setting ? Right now it's on 2. Which I would hope averages out at around 190 +. I think the quality=2 is just fine, should avg. around 192 Plus users can adjust if inclined to explore a bit Possibly someone who uses/knows mp3 well could better advise but I'd be inclined to set the bitrate= line to either 96 or even 64 IIRC in VBR it just sets the absolute lower limit, so leaving down a bit gives VBR some room to do it's thing Just to add- don't know what the final on this is but a little fooling shows that adding a Profile is pretty straight up - so if there are complaints about no choice I'll just advise users as either editing the .psr as needed or even adding an additional profile or 2 screen is a "low" profile quality@ 6, br@ 32 just to mention - gnome has set the default quality of ogg vorbis to 0.3 (avg. 112) from the previous of 0.5 (avg 160 So it would be nice if that could get a preset also Have tried - cannot see how, all attempts lead to the 'Additional software needed...' Using the info from gst-inspect vorbisenc but no go? vorbis is working fine here now from a preset - stupid mistake, was naming the file with a .psr instead of .prs (GstVorbisEnc.prs Thanks Doug, Seb can we package this vorbis preset also as part of that SRU ? Just to note - sound-juicer is also using the same deal, it actually has a rhythmbox.gep in /usr/share/ So it can use the very same .prs's as RB, it just needs the preset=line(s) in it's included rhythmbox.geb (should this be a new bug or can S-J be marked as affected or too late?? Doug can you attach the vorbis preset ? Sure - everything but the quality= represents defaults as seen in gst-inspect. Have q @ 6 which represents about a 192 avg. The previous default was 5, current is 3 Seems to give consistent times checking with various players (vlc, audacious, totem, RB As far as the mp3 preset I get better time results acroos all players with perfect- And again, if possible. it seems soundconverter just copies & includes the current rhythmbox.geb which obviously is lacking the preset= line for mp3, vorbis, ect. I tried to follow your solution and was not able to reproduce it. I have a FLAC file that I transcode to ogg using Rhythmbox. I did not change the rhythmbox.gep. In ~/.gstreamer-0.10 I dropped the GstVorbisEnc.prs Doug McMahon. I also created a "presets" folder and placed a copy GstVorbisEnc.prs. I tried it three times. The first two times, only one copy of the GstVorbisEnc.prs existed at one of the previous location. The third time I placed a copy at both locations. I opened up Rhythmbox, transcoded the file and looked at its metadata. The bit rate was always 112. In previous versions (10.04), whatever setting I had in the ogg preset was used to transcode to ogg. I then compared the Rhythmbox.gep file in here and the default one. There were no changes to the ogg vorbis section. This is a workaround that I want to function. Still, the bug is not resolved as the settings button is grayed out and no changes to encoding quality can be made through the GUI. @Jesse Avilés - You must edit the rhythmbox.gep file & add a preset= line in oggvorbis section - not really the place here to provide support see here for how to, any questions ask in that thread http:// Note - this was fixed in rhythmbox ( 2.97-0ubuntu1) in 12.10, consideration to adding to 12.04 should be given 2.97-0ubuntu1 source should be a fairly easy build in 12.04 I confirmed this phenomena on 12.04 amd64. The preference button at Sound Juicer does not appear. I tried some workaround introduced on the Net (such as https:/ Everything is in place in RB 2.97 for the use of 'custom settings' in RB except there is no preset= in /usr/share/ Seriously don't get it - like pulling a tigers tooth to get something simple done. How could I use xingmux with the described method for rhythmbox 2.96? The question from ed10vi86 is important. Without xingmux, all variable rate mp3 created without xingmux will report an incorrect lenght. We used to be able to specify the xingmux option in the pipeline string in the previous version. I confirm that the mp3 encoded have the wrong length computed. Xingmux is missing. This used to be the working pipeline: audio/x- If someone could put it again, that'd be great. So I'm looking at patching this for Q and P now. I have some questions/confusion that hopefully you can help me out with Precise doesn't have the custom settings code at all, right? So what I'll do here is to add the new LAME and vorbis profiles and modify the .gep to use those. So there the defaults will be better, but still won't be customisable. For Quantal I'll upload a fix to use the rhythmbox- Am I understanding this right? Bah, looks like transmageddon ships these files with different settings. But it has http:// This bug was fixed in the package transmageddon - 0.20-1ubuntu2 --------------- transmageddon (0.20-1ubuntu2) quantal; urgency=low * Backport upstream commit 5a25dfee to use a transmageddon specific directory for the presets. (LP: #945987) * Revert changes from previous upload which aren't necessary now that there will be no file conflict. -- Iain Lane <email address hidden> Mon, 30 Jul 2012 17:15:29 +0100 Is there any way that users encoding to mp3 can get xingmux, it appears to me it's a different plugin/element than GstLameMP3Enc so don't see how it can be in the preset itself? This bug was fixed in the package rhythmbox - 2.97-1ubuntu3 --------------- rhythmbox (2.97-1ubuntu3) quantal; urgency=low [ Sebastien Bacher ] * debian/ - backport upstream patch to set keywords in the desktop entry, it makes easier to find totem in the unity dash or gnome-shell (lp: #1029964) [ Iain Lane ] * debian/*.prs, debian/ OGG and MP3 preset with improved defaults to use when encoding. Patch rhythmbox.gep to work with user-supplied custom settings (LP: #945987) * debian/ upstream bug#680842 (by Ryan Lortie) to not crash when opening preferences dialog by trying to free an already consumed value. (LP: #1030295) * Add Breaks and Replaces on versions of transmageddon which shipped the .prs files we are now shipping in -data. -- Iain Lane <email address hidden> Mon, 30 Jul 2012 17:46:42 +0100 I'm not sure; I don't see how to do it myself. If someone wants to investigate how we go about doing that then I'm happy to include it in a separate upload, but for now let's take care of just this issue. On 07/30/2012 02:06 PM, Iain Lane wrote: > I'm not sure; I don't see how to do it myself. If someone wants to > investigate how we go about doing that then I'm happy to include it in a > separate upload, but for now let's take care of just this issue. > Sounds reasonable - It may be that a bug should be filed with gstreamer, though what plugin not sure, xingmux shows in both the good & bad. Though considering the current lack of attention in Debian/Ubuntu to the .10 branch of gstreamer plugins it won't be me filing a new one upstream. Hi Doug, or anyone else, Would you please review the merge request here? I'd like to get this into precise because it is blocking another gstreamer bug causing cheese to hog cpu, especially on low power devices. The merge request is here: https:/ As far as I can tell, (& I've never used transmageddon before), the new package in 12.10 is broken & will not transcode anything. "$ transmageddon Traceback (most recent call last): File "transmageddon.py", line 863, in on_transcodebut self. File "transmageddon.py", line 736, in _start_transcoding self. File "/usr/share/ Gst. NameError: global name 'Gst' is not defined " If I were to purge RB* & install the previous transmageddon then it appears to work fine in a limited test. Does that need a new bug or is this break known & or expected?'d call this a fail just like in 12.10 due to Bug 1031439 Atm transmageddon does work in 12.10 when adjusted per the above bug report & with a patched gst-python, at least with-in the limitations of current bad plugin Just to fill out - in 12.04 adjusting transcoder_ gst.preset_ & providing a patched gst0.10-python, (gst/gst.defs), allows transmageddon to again transcode most files, exceptions noted ad nauseum elsewhere... A new transmageddon is in the queue. It should go in no earlier than gst0.10-python (bug #1031439). Preferably at the same time; the versioned dep should keep everything in line. Not sure from comment #42 why this failed verification, but as there is at the moment nothing further to be sponsored, I'm unsubbing sponsors for now. On 08/16/2012 03:43 PM, Bryce Harrington wrote: > Not sure from comment #42 why this failed verification, but as there is > at the moment nothing further to be sponsored, I'm unsubbing sponsors > for now. > Verification was asked on transmageddon in proposed, (comment 41 . It was a fail because the proposed transmageddon package failed to encode anything https:/ The new gst0.10-python is in now, so we should be able to verify this time. By "in", I mean in -proposed. Not sure what needs verification here To recap from here - All is fine in 12.10 - python-gst0.10 (0.10.22-3ubuntu1, transmageddon (0.21-1), RB As far as precise - with the new gst0.10-python (0.10.22- The previous upload to proposed is gone & was no good anyway (used Gst The only avail. transmageddon in precise now is 0.20-1 which works because it uses the gstreamer presets As mentioned in comment 43, taking the previously proposed transmageddon 0.20-1ubuntu0.1 & editing Gst to gst then all is fine in 12.04 as far as transmageddon Is there a redone transmageddon for precise? Hello Doug, or anyone else affected, Accepted rhythmbox:/ Well it's fixed in transmageddon I'm not sure how this can be considered fixed in precise when there is no option in RB to alter the settings, So a ~/gstreamer- Re-opening RB or should I file a new one ? In 12.10 you all are now providing a rhythmbox.gep & 2 presets that are incompatible. The effect of this is both mp3 & ogg encoding are unavailable & a "Install additional software required to use this format" box appears. Clicking it will be to no avail., when a preset is wrong the box appears & will not go away until the preset or .gep is corrected. The 2 .prs's supplied have this as the name - (in /usr/share/ [ubuntu-default] The rhythmbox.gep has this - preset = rhythmbox- So they need to be the same, pick one or the other (precise- /usr/share/ /usr/share/ (precise- preset = ubuntu-default preset = ubuntu-default I don't understand. Since you seem to understand the issue which clearly nobody else does then I suggest you provide a patch, lest we "screw it up" again. Will apologize for the comment - not really meant as written & will say the reason it doesn't work 'smoothly' OOTB in 12.10 is for a different reason than I thought. When a user first opens RB > Edit > Prefs> Music, ~/.gstreamerr- This causes the dialog about needing to install additional software which if clicked on for ogg will return a not found & the dialog will remain For mp3 it will install, * again the dialog will still be there. Only when a valid .psr is in ~/.gstreamerr- This will occur if the settings dropdown is expanded & Custom settings is chosen, it won't if Default or default is chosen So maybe not a big issue because sooner or later the user will pick Custom settings from the dropdown & a .prs will be created in the plugins folder & the dialog will go away. The 2 ways around if this was considered an issue would be to have ~/.gstreamerr- Doug, this sounds like an entirely different issue. Would you explain what this has to do with transmageddon moving it's presets files? Especially when most default installations have rhythmbox installed but not transmageddon? Hi James, On Wed, Sep 05, 2012 at 03:34:50PM -0000, James M. Leddy wrote: > Doug, this sounds like an entirely different issue. Would you explain > what this has to do with transmageddon moving it's presets files? > Especially when most default installations have rhythmbox installed but > not transmageddon? This bug is about rhythmbox mainly; it's just linked into the transmageddon and other SRUs by shared files and other shuffling that's required. I think a further /code/ patch to RB is needed to implement the behaviour Doug described (very clearly, thank you) in #55. But I do not think this necessarily needs to fail this SRU? Happy for it to do so and for RB to fall out again if you think it's critical for the issue to be properly fixed (and that the 12.04 released version is better in this respect than the proposed one), but I don't plan on working on such a patch myself. I think the transmageddon and python-gst0.10 parts are fine and can go in independently of the rhythmbox part. Cheers, -- Iain Lane [ <email address hidden> ] Debian Developer [ <email address hidden> ] Ubuntu Developer [ <email address hidden> ] Thanks for the explaination, I was getting confused there :) I hate to say it but I think we may need to open anther bug to track the original RB issue. The problem is, this has now been looped in with the transmageddon fix, and this bug is now blocking a cheese bug 981803 that is only semi-related. I can make it easier by opening another rhythmbox bug. Does everyone agree that's the right thing to do? I would think so except - for which RB? (& possibly what bug? The bug is fixed in 12.10 with the small exception that until a user actually picks a Custom setting they will get confusing & possibly incorrect info. As far as 12.04 it may be committed fixed to the extent that anyone intends to fix. A ubuntu-default preset is in place but because RB in 12.04 doesn't expose that Custom setting menu it's up to the user to create a .prs in ~/.gstreamerr- Additionally there could be a 12.10 bug on whether it should of gotten the 12.04 fix, (ubuntu-default), along with the RB fix of Custom Settings. Atm it doesn't look like having both presets avail. & in ~/.gstreamerr- In any event transmageddon is ok here transmageddon - 0.20-1ubuntu0.2 --------------- transmageddon (0.20-1ubuntu0.2) precise-proposed; urgency=low * Use correct variable name 'gst' instead of 'Gst', resolving failure to transcode. * Version python-gst0.10 dep to ensure we get the version which first exposed preset_set_app_dir which we require to change the preset path. transmageddon (0.20-1ubuntu0.1) precise-proposed; urgency=low * Backport upstream commit 5a25dfee to use a transmageddon specific directory for the presets. (LP: #945987) -- Iain Lane <email address hidden> Sun, 05 Aug 2012 23:01:07 +0100 This bug was fixed in the package rhythmbox - 2.96-0ubuntu4.2 --------------- rhythmbox (2.96-0ubuntu4.2) precise-proposed; urgency=low * debian/*.prs, debian/ OGG and MP3 preset with improved defaults to use when encoding. Patch rhythmbox.gep to work with user-supplied custom settings (LP: #945987) * Add Breaks and Replaces on versions of transmageddon which shipped the .prs files we are now shipping in -data. [ Sebastien Bacher ] * debian/ - backport upstream patch to set keywords in the desktop entry, it makes easier to find rhythmbox in the unity dash or gnome-shell (lp: #1029964) -- Iain Lane <email address hidden> Mon, 30 Jul 2012 18:03:50 +0100? On 09/23/2012 01:16 PM, Simon Butcher wrote: >? > Simon - generally LP isn't for support but seeing as how convoluted this report became a short explain. The fix in 12.04 with RB 2.96 is just to provide 2 new presets to override the default gnome ones which produced low quality encodes for .mp3 & .ogg You can see them in /usr/share/ adjust using available gstreamer options Or you can re-create the above locally in your home folder (~/.gstreamer- ~/.gstreamer- using available gstreamer options as seen in gst-inspect lamemp3enc , gst-inspect vorbisenc The fix for 12.10, RB 2.97 _additionally_ added a gui 'Custom Settings' that will create ~/.gstreamer- as specified thru the gui Sorry but When and how the fix for rhythmbox was released ? Please clean up the description field The bug is present with Ubuntu 13.04 (Raring Ringtail) AMD 64 when using Sound Juicer : The preference button at Sound Juicer does not appear. xingmux semmes to be stille needed, but I don't know how to add it. xingmux seems to be still needed, but I don't know how to add it. (sorry for the previous wrong message) Unassigning myself, anyone else feel free to work on any remaining issues for this bug. raring ringtail is still attempting to rip cd's at 11khz which might ok if you are listening to herb alpert (probably makes it more interesting) but if you are listening to anything made in the last 20 years (i.e electronic music with a huge dynamic range like say Reso , Noisia or Frederick Robinson) its annoying. Can we not just have a dialog pop up when you first try ripping that asks you which settings you want to use ? Cheers Amias Sorry that got ignored for all that long, I've renamed the profiles as Doug suggested in comment #53, closing that bug since I used bug #1086806 in the changelog and they are duplicate. Now we just need to tweak the gsettings key to make the Ubuntu profiles default Sorry to bring this up again, but despite all fixes, it still doesn't work here. Rhythmbox will ignore all settings and invariably create a quality 0.3 vorbis file, no matter what I do. I cannot change the quality via the gstreamer presets either: when a preset is given in the rhythmbox.gep file, Rhythmbox will complain that the gstreamer plugin for vorbis is missing. (maybe another bug?) Status changed to 'Confirmed' because the bug affects multiple users.
https://bugs.launchpad.net/ubuntu/+source/rhythmbox/+bug/945987
CC-MAIN-2017-04
refinedweb
4,147
73.68