How to access an id across QML files

Hello, I am not getting how to access an id from one QML file in another QML file. Note: the QML files are located in different directories. Below is the code:

```qml
// QML1.qml
Rectangle {
    property alias rect1: rect1
    property string title: "some text"
    id: rect1
}
```

```qml
// QML2.qml
Text {
    text: rect1.title  // ERROR: ReferenceError: rect1 is not defined
}
```

I'm not sure if QML1 and QML2 are valid component names, but assuming they were, you might end up with some code in another file along the lines of

```qml
Item {
    QML1 { id: rect1 }
    QML2 { text: rect1.title }
}
```

at least that's how I generally end up structuring things when I find I need to "wire up" elements and properties in different files. If things are more dynamic, I've also used the objectName property (a string) to give elements unique names, and then iterated over the children of the containing element looking for a specific objectName. Not an approach I'd want to get into the habit of using, though: I'm wary of the performance impact if overused, and I generally take it to be a "bad code smell" telling me I should probably be doing something cleverer.

@pra7 Listen to @timday. The problem in your two files is that a QML file is a component, and you can't use components (types) without first creating objects of those types. You didn't give the whole files; you may already have something like

```qml
ColumnLayout {
    QML1 { }
    Text {
        text: rect1.title  // ERROR: ReferenceError: rect1 is not defined
    }
}
```

in your QML2. In that case the problem is that you can't use an id defined inside a component file outside that file. You have to give the object an id when you create it, for example:

```qml
QML1 { id: rect1 }
Text { text: rect1.title }  // no error
```

The id there can be the same as the one inside the component file, or it can be something else; it doesn't matter.
@Eeli-K Thanks for the example, but I need to access QML1 without creating an object in QML2; if I create a QML1 object in QML2, the object will be re-created and I may not get the correct value. I think I should use a singleton.

@pra7 You haven't given us enough information to solve this problem. When you write your component files, they are either independent (they don't use each other) or one uses another. The former case is timday's example, the latter mine. Logically there are no other possibilities, and one of these will solve your problem one way or another. Maybe your real problem is that you don't understand well how ids are used and what they are. You have to understand scope. Have you already read and understood the basic QML documentation about "Scope and Naming Resolution" and "QML Documents", and "The id Attribute" in "QML Object Attributes"? You probably don't need a singleton; you just have to create one object and use it. That's what timday's minimal example does. But you may have to do it in a more complicated way. Again, you didn't give us enough information.

@Eeli-K I agree that there is not enough information, and I have read "Scope and Naming Resolution", "QML Documents", and "The id Attribute" in "QML Object Attributes". I just wanted to know whether there are any other possibilities that would make this simpler to implement. As I am still in the design phase I don't have much information to share; I'm just exploring the possible implementations. My question is: if QML1 is a singleton object, can I use it anywhere without initializing it? In my case, if QML1 is a singleton, can I later use the QML1 object in QML2 just by importing it, without initializing it?

@pra7 You can't use a QML type as a singleton like that. If you've got a component in file QML1.qml, there simply is no accessible object of that type before you write QML1 {} somewhere (and before it's evaluated).
If you want a singleton object that can be used in all your QML files, you can make a C++ singleton and register it appropriately. But its usefulness compared to a normal object is limited. So, what do you mean by "singleton" here, and why do you need that instead of just an object?

@Eeli-K I am referring to the following link: According to the above link, I can use QML controls as a singleton. Is my understanding correct?

@pra7 OK, now I understand better. I haven't used that. But notice that in the example the singleton is used only to carry values and is a QtObject, whereas you were trying to use a Rectangle, which is a visual type. Second, you were trying to use an id, while in the example the singleton doesn't have an id but is referred to by its type name, Style. (Note that property names and id names must begin with a lowercase letter; type names, and therefore component file names, must begin with an uppercase letter.) Notice also "pragma Singleton" and `import "."`. If it still doesn't work, you have to give the complete contents of both (or all) files here; we have already seen how incomplete information leads to long, fruitless discussion and speculation.

This got me thinking about how I structure things, which generally results from starting with a small one-file prototype and then breaking bits of it off into different files.
Here's an example (all of these can be run with qmlscene main.qml). First you might have main.qml:

```qml
// main.qml
import QtQuick 2.7

Rectangle {
    id: main
    width: 640
    height: 480
    property string msg0: "Some text"
    property string msg1: "Some more text"
    Column {
        anchors.centerIn: parent
        Text { text: main.msg0 }
        Text { text: main.msg1 }
    }
}
```

then you might try to organize things a bit:

```qml
import QtQuick 2.7

Rectangle {
    id: main
    width: 640
    height: 480
    Item {
        id: config
        property string msg0: "Some text"
        property string msg1: "Some more text"
    }
    Column {
        anchors.centerIn: parent
        Text { text: config.msg0 }
        Text { text: config.msg1 }
    }
}
```

then you might split that out into a Config.qml:

```qml
import QtQuick 2.7

Item {
    property string msg0: "Some text"
    property string msg1: "Some more text"
}
```

with main.qml now simplified to:

```qml
import QtQuick 2.7

Rectangle {
    id: main
    width: 640
    height: 480
    Config { id: config }
    Column {
        anchors.centerIn: parent
        Text { text: config.msg0 }
        Text { text: config.msg1 }
    }
}
```

and then you start moving out other bits of functionality, e.g. adding a Messages.qml:

```qml
import QtQuick 2.7

Column {
    anchors.centerIn: parent
    Text { text: config.msg0 }
    Text { text: config.msg1 }
}
```

with main.qml now simplified to:

```qml
import QtQuick 2.7

Rectangle {
    id: main
    width: 640
    height: 480
    Config { id: config }
    Messages {}
}
```

I've never felt any need for singletons in QML code at all. (Having the hosting C++ set some context properties based on environment or command-line options is probably the closest I've come.)
https://forum.qt.io/topic/81851/how-to-access-an-id-across-qml-files
I. The w3wp process tries to access registry keys but does not have the permissions. After granting the WSS_WPG group Full Control (you can probably get away with a little less) on the following registry keys, the errors went away:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BITS\Performance
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WmiApRpl\Performance

And that's it! Cheers, Wesley.

If you decide to cancel the action, do not call the base implementation. Wes

Just a short note to self. A lot of Microsoft applications use WebDAV. If you encounter very slow WebDAV performance, just disable "Automatically detect settings" in IE: Tools -> Internet Options -> Connections -> LAN Settings. That sped performance up ten times in my case. Regards.

Here's a life-saving tip for all you SharePoint developers thinking about integrating with Dynamics CRM 2011 through BCS. The new CRM SDK is built with .NET Framework 4.0, while SharePoint 2010 runs on 3.5! So you cannot use the new CRM 2011 SDK if you would like to create an External Content Type. I do have some good news for you, though: stick to the CRM 4.0 SDK. As CRM 2011 is backward compatible, your code will run just fine when you use the "old" SDK to connect to the "new" CRM.

```csharp
/// <summary>
/// The XFrameOptionsModule loosens the X-Frame-Options policy from DENY to SAMEORIGIN.
/// </summary>
public class XFrameOptionsModule : IHttpModule
{
    private const string XFrameOptionsHeaderName = "X-FRAME-OPTIONS";

    /// <summary>
    /// Initializes the module and hooks the PreSendRequestHeaders event.
    /// </summary>
    public void Init(HttpApplication context)
    {
        context.PreSendRequestHeaders += ChangeXFrameOptionsHeaderToSameOrigin;
    }

    public void Dispose()
    {
    }

    /// <summary>
    /// Changes the X-Frame-Options "DENY" header to "SAMEORIGIN".
    /// </summary>
    /// <param name="sender">The HttpApplication that triggers the event.</param>
    /// <param name="e">The <see cref="System.EventArgs"/> instance containing the event data.</param>
    private void ChangeXFrameOptionsHeaderToSameOrigin(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        HttpResponse response = application.Response;
        string headerValue = response.Headers[XFrameOptionsHeaderName];
        if (headerValue != null && headerValue.Equals("DENY", StringComparison.OrdinalIgnoreCase))
        {
            response.Headers[XFrameOptionsHeaderName] = "SAMEORIGIN";
        }
    }
}
```

All you have to do now is add this module to the web.config using an SPWebConfigModification inside a feature receiver.

```sql
-- Use the Specify Values for Template Parameters
-- command (Ctrl-Shift-M) to fill in the parameter
-- values below.
USE

DECLARE @className nvarchar = ''
DECLARE @assemblyName nvarchar = ''

SELECT DISTINCT pages.DirName + '/' + pages.LeafName + '?contents=1' AS Page
FROM dbo.AllWebParts wp
JOIN dbo.AllDocs pages
  ON pages.SiteId = wp.tp_SiteId
 AND pages.Id = wp.tp_PageUrlID
WHERE wp.tp_Assembly LIKE '%' + @assemblyName + '%'
  AND wp.tp_Class LIKE '%' + @className + '%'
```

Have you ever noticed that there are a lot of articles (265,000 results) out there that talk about linearized / "Fast Web View" PDF files? These articles talk about the purpose of the feature, or about software that is capable of creating this kind of PDF file. As a SharePoint developer, I was really interested in testing this with SharePoint and Bit Rate Throttling, so I started looking for a sample file: some large (around 5 MB) linearized PDF. Given the number of articles, you might think this would be no problem at all. Well, guess again! Not a single one of the authors of these articles ever bothered to attach an example of a linearized file. None! 0 out of 265,000! (Granted, I did not check all of them; I gave up after 1.5 hours of searching.) Sigh... what an amazing world we live in.
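As a footnote to the XFrameOptionsModule earlier on this page: the same DENY-to-SAMEORIGIN rewrite can be sketched outside ASP.NET. Here is a hypothetical Python WSGI middleware doing the equivalent header rewrite; the names (`loosen_x_frame_options`, `demo_app`) are my own illustration, not from the original post.

```python
# Hypothetical Python analogue of the C# XFrameOptionsModule above: a WSGI
# middleware that rewrites an X-Frame-Options "DENY" header to "SAMEORIGIN".
def loosen_x_frame_options(app):
    def middleware(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            # Rewrite only a DENY value; leave every other header untouched.
            fixed = [
                (name, 'SAMEORIGIN')
                if name.upper() == 'X-FRAME-OPTIONS' and value.upper() == 'DENY'
                else (name, value)
                for name, value in headers
            ]
            return start_response(status, fixed, exc_info)
        return app(environ, patched_start)
    return middleware

# Minimal demonstration with a fake WSGI app and start_response.
def demo_app(environ, start_response):
    start_response('200 OK', [('X-Frame-Options', 'DENY')])
    return [b'ok']

captured = {}
def fake_start_response(status, headers, exc_info=None):
    captured['status'] = status
    captured['headers'] = dict(headers)

body = loosen_x_frame_options(demo_app)({}, fake_start_response)
```

The structure mirrors the IHttpModule: wrap the point where headers are about to be sent, and patch the one offending value in place.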
Stupid *&$^#! I have been working on a custom workflow action which allows you to post data to another server. This can be used to send messages to a web service, for example. One thing I really wanted to have in there is security, and by security I mean the Secure Store. It is, however, very difficult to use the Secure Store from inside a workflow action. Although SharePoint actions are so-called "impersonated" as the initiator of the workflow, the process really runs under its own account. Depending on how busy your server is, and on the type of workflow, it is executed by the SPUserCodeHost, OWSTimer or W3WP process, and your code actually runs under the account of that process. The so-called "impersonation" applies to SharePoint actions only; it is accomplished by handing the SPContext of the initiating step to the workflow. The Secure Store stores credentials for users or groups, and only while running under that user or group account can you get those credentials out of the Secure Store. That's great, of course, but that's also where the trouble starts: we have no way of impersonating the actual initiator of the workflow, and thus no way to get the credentials per user. One option would be to get a ticket in the Initialize method and use that ticket during the Execute method to retrieve the user credentials. Yes, this works, most of the time, because most of the time the Initialize method will run inside the W3WP process, which has an HttpContext, which in turn has a WindowsIdentity we can use to impersonate. Unfortunately, if the server gets busy, the Initialize method might just as well run inside the OWSTimer process. Another problem with the ticketing system is that tickets are valid for a specified amount of time only; think minutes rather than hours. But workflows... well, they sometimes take a few days or weeks before they finally arrive at your workflow action.
```csharp
// Sample code for retrieving network credentials from the Secure Store.
private static ICredentials GetSecureStoreCredentials(string applicationId, SPServiceContext context)
{
    string username = String.Empty;
    string password = String.Empty;

    ISecureStoreProvider provider = SecureStoreProviderFactory.Create();
    if (provider == null)
    {
        throw new InvalidOperationException("Unable to get an ISecureStoreProvider");
    }

    ISecureStoreServiceContext providerContext = provider as ISecureStoreServiceContext;
    providerContext.Context = context;

    using (var credentials = provider.GetCredentials(applicationId))
    {
        foreach (var credential in credentials)
        {
            switch (credential.CredentialType)
            {
                case SecureStoreCredentialType.UserName:
                case SecureStoreCredentialType.WindowsUserName:
                    username = credential.Credential.ToClrString();
                    break;
                case SecureStoreCredentialType.Password:
                case SecureStoreCredentialType.WindowsPassword:
                    password = credential.Credential.ToClrString();
                    break;
            }
        }
    }

    return new NetworkCredential(username, password);
}
```

One thing I would really like to stress is that you should not store the credentials in workflow variables during Initialize. Workflows can get persisted anywhere (depending on the implementation of the WorkflowPersistenceService), and probably not securely. The other problem is that those credentials might not be valid anymore by the time your custom action executes. There simply is no clean solution. The only thing you can do is create a so-called Application Account (a group account used by all users) in the Secure Store and add the service accounts of the SPUserCodeHost, OWSTimer and W3WP processes to it. The downside is that every workflow action with the same Secure Store ApplicationId then has to use the same credentials. Wes
http://weblogs.asp.net/wesleybakker/default.aspx
This post will help you quickly understand how to manage a MongoDB instance using Python. Topics covered: first, download and install MongoDB from the link below. Let's get started.

Install the Python module for MongoDB:

```shell
pip install pymongo
```

Connect to MongoDB:

```python
import pymongo

# Connecting to the locally running MongoDB instance
dbConn = pymongo.MongoClient("mongodb://localhost:27017/")
```

Connect to the database named demoDB; if the database is not present, MongoDB will automatically create it:

```python
dbname = 'demoDB'
db = dbConn[dbname]
```

Show all the…

Are you a beginner in the field of machine learning, wondering how to bring your project to life? I was in the same situation when I started learning ML. Most ML courses focus on EDA, feature engineering, and model tuning, and ignore model deployment. The end goal of any machine learning model is to make it available for end users to consume. However, deploying a model has its own challenges, and they differ depending on where you are going to deploy the model (Azure, AWS, GCP, Heroku, etc.). In this blog I will help you deploy your model using…

Have you ever wondered how Google image search works, or how Amazon can retrieve products similar to an image that we upload in the app or site? To achieve this task we will use one simple method: pick a pre-trained deep learning model, remove the top layers, and extract the convolutional features for the images in our dataset. Then we use these feature vectors to find similar images with sklearn's nearest-neighbor algorithm. Let's get started.

Import libraries:

```python
import requests
import os
import numpy as np
from numpy.linalg import norm
import joblib as pickle
```
from…
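The retrieval step described above ranks feature vectors by similarity. As a self-contained sketch of the underlying idea, using made-up toy vectors in place of real CNN features (sklearn's NearestNeighbors would do the same job at scale), cosine-similarity ranking can be done with plain NumPy:

```python
import numpy as np
from numpy.linalg import norm

# Toy stand-in for CNN feature vectors: 5 "images", 4-dimensional features.
# Rows 0/1 and rows 3/4 are deliberately near-duplicates.
features = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.9, 0.1],
])

def most_similar(query, feats, k=2):
    # L2-normalise, then rank by cosine similarity (a dot product).
    q = query / norm(query)
    f = feats / norm(feats, axis=1, keepdims=True)
    sims = f @ q
    return np.argsort(-sims)[:k]

# Image 0's nearest neighbours (including itself) are images 0 and 1.
print(most_similar(features[0], features))
```

Swapping `features` for vectors extracted from a headless pre-trained network gives exactly the image-retrieval behaviour the post describes.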
https://manikantaleela.medium.com/?source=post_internal_links---------3----------------------------
- Sort | Array: When you use the FileSystemObject object to get a list of files in a directory, you find that you can't control how they are sorted, such as by name, by extension, or by file size; let's try sorting them with an array. If you w…
- ActionScript 3.0 provides a new feature that uses ByteArray and SoundMixer. The code is as follows: function func(a:Number) { return num * Math.sin(a); } function DrawFunction(func:Function, thickness:Number, color:Number) { graphics.lineStyle(thick…
- Notes: The new for each... in, in addition to traversing XML, can also be used to traverse arrays and objects. Creating a million entries: var testarr:Array = new Array(); for (var i:Number = 0; i < 1000000; i++) { testarr.push(i); } Previous for and for..…
- Affected systems: PHP 3.00. Description: PHP version 3.0 is an HTML-embedded scripting language. Most of its syntax is ported from C, Java, and Perl, combined with PHP's own features…
- Suddenly, like a spring breeze in the night, a thousand pear trees blossom: BT is now more popular than MUDs, Legend, and the like. The question we often hear now is: "Have you used BT today?"…
- If your existing scripts were written for Flash 6 or earlier but you want to publish for the Flash 7 player, you may need to modify them so that they meet the requirements of the Flash 7 player and work as designed. Here we will introduce…
- Control | Loops | Statements: As is well known, Flash animation depends on the timeline; in the absence of a script, the animation plays from the first frame straight through to the last frame, and then plays again or simpl…
- Menu. Previous section: Fireworks 8 Dream Trip (4): Other panels. Part II: the menu section. In Fireworks 8 there are some improvements to the menus, and it should be said that some commands play an important role in actual use.
- In this section we focus on security. I once read a foreign author's article which I can no longer remember well, but, enthusiastic as any young person, I discussed it with him by email. A number of sites may have this situation, leveraging .inc and .asa con…
- Stored Procedures | Data | Database | Website Construction: I. Introduction. With the gradual development of the computer network of the central branch of the People's Bank and the implementation of the second phase of the intranet project, m…
- Access | Page | Data | Database: Just used this; it works well, let me show you. The main idea: use a statement to count the number of records (rather than querying the RecordCount property), cache the slow count in cookies, and skip the statistics on jumps. Connect to the database…
- The following is a brief introduction to several ADO connection methods: ODBC DSN, ODBC DSN-less, OLE DB Provider, and the "MS Remote" Provider. 1. ODBC DSN connection: oConn.Open "Dsn=advworks" & _ "Uid=admin;" & _ "P…
- What is a keyword? As with paper writing, a site's keywords are the subject of the site, the core of its content. It can even be understood that the content of the site revolves around a center. For exam…
- Access to | Data | Database: Unless you Telnet to your ISP's server and log on there, you cannot connect to the database via DSN. Here are a few ways to go beyond your ISP and connect to your database easily: 1. SQL Server &…
- Matt Cutts once said in 2007: we hear complaints about some kinds of searches (such as esoteric or long-tail queries) where Google returns too many results from the same domain; we will improve the algorithm in the next few weeks to reduce that chance…
- Bold body: VB code label b; VB code substitution <b>{param}</b>; VB code example: Bold; VB code description: the [b] tag allows you to display bold text. Use {option}? Whethe…
- Create | Dynamic: In C# programming, in some cases we might use an .INI file. For example, to create "dynamic help" for an input interface: we set a label below the input interface, and when the user moves the cursor to each textbox or other…
- Microsoft keywords: .NET, XML (Extensible Markup Language), SOAP (Simple Object Access Protocol), Windows DNA, assembly, Common Language Runtime (CLR), IL (intermediate language), metadata, namespace, C#. I. Pream…
- Personalized search: it looks very beautiful, but in practice it is a very bumpy road. Here you can see the small and beautiful, the big data, the collision of the masses and the long tail, and also how technology empowers business…
- Active | ActiveX | Resolution: C# tip: use a strong name when referencing ActiveX/COM components. From Mittermeyer's blog. Problem: the .NET platform provides a relatively complete class library, but the first version…
https://topic.alibabacloud.com/c/others_8_283
Hi there, I'm looking into how the publisherID value is generated in Adobe AIR 1.5.3 applications.

If your app is a 1.5.3 app, by default the publisher ID is null. You can set the publisher ID in the application descriptor if you want. Have you set the namespace to 1.5.3 or 2.0beta2, if you use the 2.0 SDK? The problem is you have the wrong namespace; you can't use <publisherID> with a namespace before 1.5.3. -ted

Hey ted, my application's descriptor file has been pointing at 1.5 all the time: <application xmlns="">. I tried setting it to 1.5.3, but then I get this error: error 102: Invalid namespace. Sean

Thanks ted. On second inspection, I was using an old version of the SDK to build the app. Switching to 1.5.3 solves my issue. Sean
http://forums.adobe.com/message/2650066
Created on 2009-12-09 17:27 by kristjan.jonsson, last changed 2021-06-18 11:07 by iritkatriel. This issue is now closed.

In urllib2 you will find these lines:

```python
# Wrap the HTTPResponse object in socket's file object adapter
# for Windows.  That adapter calls recv(), so delegate recv()
# to read().  This weird wrapping allows the returned object to
# have readline() and readlines() methods.
# XXX It might be better to extract the read buffering code
# out of socket._fileobject() and into a base class.
r.recv = r.read
fp = socket._fileobject(r, close=True)
```

This, storing a bound method on the instance, will cause a reference cycle that the user knows nothing about. I propose creating a wrapper instance with a recv() method instead. Or, is there a standard way of storing bound methods on instances? A 'weakmethod', perhaps?

I have two solutions for this problem. The first is a mundane one, and what I employed in our production environment:

```python
class RecvAdapter(object):
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def recv(self, amt):
        return self.wrapped.read(amt)

    def close(self):
        self.wrapped.close()

...
fp = socket._fileobject(RecvAdapter(r), close=True)
```

The second solution is a bit more interesting. It involves applying what I call a weak method: a bound method that holds a weak reference to the object instance:

```python
import weakref

class WeakMethod(object):
    def __init__(self, bound):
        self.weakself = weakref.proxy(bound.im_self)
        self.methodname = bound.im_func.func_name

    def __call__(self, *args, **kw):
        return getattr(self.weakself, self.methodname)(*args, **kw)
```

We then do:

```python
r.recv = WeakMethod(r.read)
fp = socket._fileobject(r, close=True)
```

I've had many uses for a WeakMethod through the years. I wonder if such a class might be considered useful enough to be put into the weakref module. The WeakMethod idea is not new; there are prior references.

The weak method idea seems interesting. I have not used it yet, but in this case it seems okay.

Cool. The mindtrove one in particular seems nice. I didn't realize that one could build bound objects oneself. Is there any chance of getting a weakmethod class into the weakref module?

On Fri, Dec 11, 2009 at 09:40:40AM +0000, Kristján Valur Jónsson wrote:
> Is there any chance of getting a weakmethod class into the weakref module?

That is a separate feature request on the weakref module; it could be opened and discussed there. Let's just investigate the circular-reference part in this ticket.

Does this issue still exist? I did a little poking around and could not find the quoted code.

It is in python/trunk/Lib/urllib2.py, line 1161. It doesn't appear to be an issue in py3k.

This is still a horrible, horrible kludge. I've recently done some work in this area and will suggest a different approach.

Could you please provide some tests?

Here it is. Notice the incredible nesting depth in Python 2.7: the socket itself is found at response.fp._sock.fp._sock. There are two socket._fileobjects in use!

Sounds like urlopen() is relying on garbage collection to close the socket and connection. Maybe it would be better to explicitly close the socket, even if you do eliminate all the garbage reference cycles. My test code for Issue 19524 might be useful here; it verifies close() has been called on the HTTP socket.

No, the socket is actually closed when the response's close() method is called. The problem is that the HTTPResponse object, buried deep within the nested classes returned from do_open(), has a circular reference, and _it_ will not go away. No one is _relying_ on garbage collection in the sense that this is not, I think, designed behaviour, merely an unintentional effect of storing a bound method on the object instance. As always, circular references should be avoided when possible, since relying on gc is not something to be done lightly. Now, I think that changing the complicated wrapping at this stage is not possible, but merely replacing the bound method with a weak method might just do the trick.
> It doesn't appear to be an issue in py3k.

I agree, I can't find this code anywhere. Closing.
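As it happens, the standard library did eventually grow this feature: since Python 3.4, weakref.WeakMethod is a callable wrapper that holds the method's instance weakly and re-binds on demand, exactly the idea discussed in this thread. A minimal demonstration (the `Reader` class is an invented example):

```python
import gc
import weakref

class Reader:
    def read(self, amt):
        return b'x' * amt

r = Reader()
wm = weakref.WeakMethod(r.read)

# Calling the WeakMethod returns a fresh bound method, or None once the
# instance is gone, so storing wm on r would not create a reference cycle.
method = wm()
data = method(3)

del r, method
gc.collect()
dead = wm() is None  # the instance has been collected
```

The indirection through `wm()` is what breaks the cycle: the wrapper never owns a strong reference to `r`.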
https://bugs.python.org/issue7464
qnx_osstat - get the status of the node with the given ID

```c
#include <sys/osstat.h>

int qnx_osstat( nid_t nid, struct _osstat *osdata );
```

The qnx_osstat() function returns status information for the node indicated by nid. This information can be used to determine how busy a node is at each priority level. The _osstat structure contains at least the following members: The period over which cpu_load is averaged is set by the sac utility. The numbers are relative to each other (see the example). Any errno value that can be set by qnx_vc_attach() may be set, as well as the following: See /usr/free/qnx4/os/utils/misc/os_info.tgz. Classification: QNX. See also: errno, qnx_osinfo(), qnx_psinfo().
https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/qnx_osstat.html
Getting started

If you would like to start developing server plugins with Source.Python, this is the right place for you! In order to use Source.Python you first need to install it on your game server. To do so, please follow the instructions described here. As soon as you have successfully installed Source.Python, you can start writing your first plugin.

Writing your first plugin

The first plugin will be a very simple one. But before writing any code, you have to create the plugin files. To do so, please create a directory in Source.Python's plugin directory (../addons/source-python/plugins). All plugins are located in this directory and must have their own sub-directory. Give the newly created directory an arbitrary name (e.g. test1). Now you need to create the actual plugin file. It must be named like its directory, so if you have created a test1 directory, you have to create a test1.py in that directory.

The first plugin should simply print a message to the server console when the plugin has been loaded, and another message when it has been unloaded. You can easily do that by adding a load and an unload function to your plugin file. These two functions will be called by Source.Python when the plugin is loaded or unloaded. To print a message to the console you can use Python's print function.

Note: this only prints a message to the server console. It will not appear in any log files.

Your plugin should now look like this:

```python
def load():
    print('Plugin has been loaded successfully!')

def unload():
    print('Plugin has been unloaded successfully!')
```

To load your plugin, enter sp plugin load test1 in your server console. To unload or reload it, you can use sp plugin unload test1 or sp plugin reload test1. Source.Python plugins are not loaded automatically, so you need to add the statement sp plugin load test1 to your autoexec.cfg if you wish that behaviour.

Modifying your first plugin

Just sending a message to the server console is not very exciting.
Moreover, players on your server won't even notice it. Therefore, this section will show you how to send the message to the chat of all players. To send a message to a player you can make use of the messages module. It provides various classes to send different kinds of messages; for example, you can send messages to the player's chat or directly to the center of the player's screen. In this example we will use messages.SayText2, because we want to print the message in the chat of each player. The basic procedure to send a message is very simple. Your plugin should now look like this:

```python
from messages import SayText2

def load():
    SayText2('Plugin has been loaded successfully!').send()

def unload():
    SayText2('Plugin has been unloaded successfully!').send()
```

Using events in your first plugin

Admittedly, this plugin is still very boring, but that will change immediately once you listen to events. Before continuing, please read the event introduction; it will give you a short overview of what events are and how to listen to them. We will now listen to the player_spawn event to give every player a little extra HP when they spawn. To modify a player's health you need an object of the players.entity.Player class. Its constructor only requires a player index. Since the player_spawn event provides the user ID of the player that spawned, you can use players.helpers.index_from_userid() to convert the user ID into a player index. Your plugin should now look like this:

```python
from events import Event
from players.entity import Player
from players.helpers import index_from_userid

# Extra HP granted on spawn; pick any value you like.
EXTRA_HP = 25

@Event('player_spawn')
def on_player_spawn(game_event):
    # Get the user ID of the spawned player
    userid = game_event['userid']
    # Convert the user ID into a player index
    index = index_from_userid(userid)
    # Create a Player object...
    player = Player(index)
    # ... to add some extra HP
    player.health += EXTRA_HP
```

Alternatively, you can use the classmethod players.entity.Player.from_userid().
It’s a wrapper around players.helpers.index_from_userid() and will shorten your code in events. from events import Event from players.entity import Player): # Create a Player object from the user ID... player = Player.from_userid(game_event['userid']) # ... and add some extra HP player.health += EXTRA_HP What’s next?¶ You should definitely take a look at the module tutorials section. It contains detailed tutorials about some Source.Python modules/packages. Moreover, you should take a look at the Module Index. It’s a list of all Source.Python modules/packages and contains the API documentation.
http://wiki.sourcepython.com/developing/getting-started.html
- Advertisement JPatrickMember Content count42 Joined Last visited Community Reputation258 Neutral About JPatrick - RankGDNet+ New Years Resolution JPatrick posted a blog entry in Skipping a DUnfortunately, it appears as if DBP10 is dramatically sooner than I thought it would be. There's no way I'll be able to get anything ready by march, and I don't relish the prospect of waiting another year. Perhaps it's an opportunity though. I'm growing increasingly fond of SlimDX, and would enjoy no longer having to constantly second guess every method and algorithm to avoid 360 performance pitfalls. If I punt XNA entirely, I rid myself of that baggage, plus make distribution much easier (unless they start packing the XNA runtimes in windows update, nobody will every download them). This also has the distinct benefit of dodging the content pipeline, which while an interesting concept, always felt too "magic" to me, and too tied into the IDE and project files. Being able to use a "plain" project is appealing. Also, from what I've been hearing, XBLIG is a bit of a black hole, unless you have a total breakaway hit that strikes just the right chords like avatar drop. Let's face it, a 4D game is pretty esoteric and difficult to even explain, much less play (most likely, we'l just have to see when it's done). I don't even own my own 360, and have no particular desire to own one, much less give my payment credentials to a service that can't be cancelled without jumping through frustrating support call hoops. It's a shame to lose the huge install base of 360 and being able to just say "it's on XBLIG, check it out.", but I think the pros outweight the cons. Plus, making a SlimDX framework for the game will be something that's reusable later for other projects that I want to do after this. Which brings me around to the new years resolution part. I want to put this thing to bed within 2010, so that I can move on to other, less eccentric, projects. 
A playable concept demo should be enough to gauge interest, and could be expanded on from there if it ends up being worthwhile. Otherwise, at least I can say I did it, and move on without feeling like a complete quitter :/

Almost forgot. Speaking of quitting, I've shelved the emulator project. Not due to any technical hurdles or anything; there was just no reason to continue. It was a wrapper of bsnes, a fantastic and extremely accurate SNES emulator. My goal was to add shader support and such, but recent developments on the emulator itself have rendered that goal redundant. I don't consider it a waste in the least though. I learned a lot about SlimDX that will serve me well, and a good change of pace is always nice. I just need to focus completely on the 4D project in the coming months and see if I can at least post some progress before my GDNet+ expires.

New Approach
JPatrick posted a blog entry in Skipping a D

I suppose taking a break from the editor actually did some good after all. I was constantly dreading working on it, and putting it off to do other things. Recently, however, I sat down and really thought about why that is. The main reason I suppose is obvious and hardly unique to me: GUI programming is complicated, but boring. There are a million details that have to be "just so" for a GUI-heavy app to be considered decent and presentable. Any particular detail may be simple in itself, but their sum can add up to a level of complexity that can quickly become pathological if not carefully architected first. I started to find myself buried under a pile of nuance and minutiae that was incredibly demotivating. The second reason was also pretty clear, but until now I was plugging my ears and humming a tune, ignoring it. My entire concept of a good editor was based upon experience using editors that were released for consumption, widely used, and officially supported products in their own right.
This editor will only ever be used by me, is applicable only to this specific project, requires no support of any kind, and doesn't have a team of dedicated tool developers to make it their focus. Any effort spent making it into a "proper" product is wasted effort, distracting from the ultimate goal of making a game. The editor exists solely to ease the burden of producing levels, and nothing more. And that was it. My initial concept of "easing the burden" was by default a full-on GUI app like all the others I'd ever used. It was either that, or just writing a raw text file and throwing it to the engine; there is nothing in between. Or is there?

I paused to consider the only other "editor" I was using at the time: the WPF designer. It had never occurred to me until recently, but in all the time that I've used it, I've NEVER clicked, dragged, instantiated, deleted, altered, moved, or otherwise manipulated a single thing in the designer window itself. Everything I ever did was done in the markup window, the designer window existing solely as feedback for my alterations. All that time, and I'd never considered my burden to be anything but sufficiently eased. GUI layout is plenty different from level layout, but in the end, my levels are going to be very simple arrangements of basic 4D shapes. Anything so complicated as to require delicate manipulation with a mouse-rich interface will be too complex for the player to comprehend and navigate in the game, to say nothing of too complex to render in real time on the 360.

So, to drive this home, I plan to simplify my efforts significantly to a markup description of the scene, and a feedback window to view the results from different angles. While perhaps not as ideal as a proper editor, I feel the burden will be sufficiently eased to meet my needs, while freeing most of my architectural efforts to be used on the game itself. In other words, I'm too lazy and incompetent to produce a proper tool.
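For what it's worth, the "markup plus feedback window" approach can stay extremely small. Here is a sketch in Python of what parsing such a scene description might look like; the keyword, field layout, and file format are all invented for illustration and are not the project's actual format:

```python
# Hypothetical sketch: a tiny plain-text scene format placing 4D boxes,
# parsed into simple records. Format and field names are invented.

def parse_scene(text):
    """Parse lines like 'platform x y z w sx sy sz sw' into dicts."""
    objects = []
    for lineno, line in enumerate(text.splitlines(), 1):
        line = line.split("#")[0].strip()   # allow trailing comments
        if not line:
            continue
        kind, *nums = line.split()
        if len(nums) != 8:
            raise ValueError(f"line {lineno}: expected 8 numbers, got {len(nums)}")
        x, y, z, w, sx, sy, sz, sw = map(float, nums)
        objects.append({"kind": kind, "pos": (x, y, z, w), "size": (sx, sy, sz, sw)})
    return objects

scene = parse_scene("""
# floor platform, thin along y
platform 0 0 0 0  10 1 10 10
platform 5 2 0 1  2 1 2 2   # raised step, offset along w
""")
```

A feedback window then only has to render whatever this returns; all "editing" happens in a text editor, exactly like the XAML workflow described above.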
Crazy Corruption
JPatrick posted a blog entry in Skipping a D

Everything was going pretty smoothly. The emulator core was wrapped up in a nice managed interface, the WPF UI was hosting a winforms control that could be rendered to via SlimDX, XAudio2 output via SlimDX, etc. All the pieces were falling into place... except for the occasional startup crash. I was using VS2010 Beta 1, so I was just writing it off as some possible incompatibility with SlimDX and the beta framework. After doing a little refactoring, however, I was suddenly getting crashes 15-30 seconds in, and startup crashes much more frequently. I was making use of a few threads for the different components (one for the core itself, one for audio, with rendering and UI on the main thread). Intermittent crashes absolutely scream "synchronization problem" so naturally I started there. Reviewing my WorkerThread class carefully though, I just couldn't see what might be wrong. Why not use the debugger? Unfortunately, the startup crashes seemed to kill the app even with the debugger attached, which is why I expected something external was going wrong at first. Oh how wrong I was, but that's getting ahead of myself. After refactoring though, the crashes were happening later as well, not just during startup, so I decided to see if the debugger could catch them again. Fired it up, waited a little bit, and: FatalExecutionEngineError. That's... not good. Especially when that was only the error half the time. Other times I'd get NullReferenceException from things that can't possibly be null, StackOverflowExceptions when the call stack couldn't possibly be that deep, the "this" reference being the wrong type or even missing altogether, or maybe just a good old fashioned AccessViolationException. It was absolute insanity. The managed heap was obviously becoming deeply corrupted somehow.
The emulator core has a fibers library to implement coroutines for synchronizing all the various parts of the system, which is part of the reason I was interested in it. I've always been a fan of coroutines, and don't think they get enough play in general, but I digress. I was wary in the beginning of what effect that kind of thing might have on the CLR, but I'd seen articles around that illustrated how to use fibers with .NET, so perhaps it wasn't a problem. The coroutine library, in addition to implementing them in assembly, also has support for just using the Windows fiber API directly. Just to make sure nothing was going wrong there, I switched it to use the Windows fibers version, but no joy. Content for the moment that the problem wasn't there, I turned back to SlimDX. I commented out the video code; still crashing. I commented out the audio code; still crashing. Updated to the latest version of the emulator source; still crashing. What the FRACK. Seriously. I was failing utterly at debugging. Maybe I'd have to use WinDbg or something and look at the core dump, but I'm just not that hardcore. Without any kind of clue as to what was REALLY wrong, I'd just have to shelve the whole thing and go back to working on the Marble4D editor. Giving up on it for the night, I tried to get some sleep (which REALLY sucks when you have a mysterious unresolved bug). In the morning I took a step back and tried to reason through it. I just KNEW it had to be something relating to fibers, since in the beginning I was unsure if it'd work while hosted by the CLR at all. Perhaps there was something to that. Wouldn't video and audio always have crashed though if that were the case, since the events are triggered by the core?
Well, video refresh is triggered after the coroutine scheduler comes back to the main thread, and the way I was buffering up audio, I have the video refresh callback in the native interface also fire the audio event with all the samples buffered during that entire frame. If not those, then what... Then it hit me. In one of the obscure crashes, the "this" reference for my worker thread manager object was corrupted and listed as a DevicePolledEventArgs. INPUT! The emulator core waits as long as it can to poll input, to get the very latest state. The implication of this was that it was happening in a fiber, rather than the main thread. The input request triggers a callback that crosses the managed barrier back into my app, which fires an input event with a DevicePolledEventArgs. One of the things I changed in refactoring was having the input event create a new one each time the event fires, rather than reusing the same one over and over. I was getting that strange feeling where you just KNOW that the problem is there, even if you're not sure exactly why. I commented out the input callbacks, crossed my fingers, and ran it again. No crash. I ran it again and again and again and again both from inside the IDE and from explorer. Still nothing (yet). It would seem that fibers and other forms of "fake" threads interfere with the managed heap and the GC, at least in beta1. This is a definite case in support of my personal programming mantra: a brain is the best debugger. The unfortunate implication is that I'll have to poll input before the frame starts, rather than waiting until the last possible moment, but them's the breaks. If anyone reads this and is an expert on fibers, I'd be very interested in your thoughts. Is calling back into CLR code from a native fiber known to be unsafe? Am I just doing something wrong? At any rate, once I have some kind of input up and running, I'll post some screenshots to reveal what system and emulator specifically that I'm wrapping. 
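For readers unfamiliar with the pattern: the cooperative scheduling that a fiber library provides can be illustrated safely with generators, which the runtime fully understands. This is a Python sketch of the concept only, not bsnes's actual scheduler; the "chip" names are invented:

```python
# Generators standing in for fibers: each "chip" yields to hand control back
# to the scheduler, which round-robins them. This is the same cooperative
# pattern the emulator core uses, minus the raw stack switching that upset
# the CLR's view of the managed heap.

def cpu():
    for cycle in range(3):
        trace.append(f"cpu {cycle}")
        yield                       # cooperative yield back to the scheduler

def apu():
    for cycle in range(3):
        trace.append(f"apu {cycle}")
        yield

trace = []
tasks = [cpu(), apu()]
while tasks:                        # round-robin until every coroutine finishes
    for t in list(tasks):
        try:
            next(t)
        except StopIteration:
            tasks.remove(t)
```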
I'll probably keep working on this mostly until around the new year perhaps, then switch gears back to Marble4D with a fresh outlook. Hopefully I'll be able to get some kind of simple demo ready for DBP 10. It's definitely nice to crawl out from under all the limitations imposed by the 360 CLR for a while and use more features and APIs. Definitely learning a LOT from this, and SlimDX rules. :)

Reacquainted with an old friend (enemy?)
JPatrick posted a blog entry in Skipping a D

Disappointed after my XNA module player idea went bust, I was itching for another side project to change things up and keep from getting burned out working on the same thing constantly. Most of the other ideas I had, while simpler than a 4D game to be sure, were full-fledged projects in their own right. I wanted something smaller and less architecturally taxing, so I could get charged up from brisk progress and tangible results. In addition, I wanted something that would exercise a different skillset than just another XNA project. I've been a fan of emulators for a long time, so when the notion got floated for a more Windows-centric fork of one of my favorite emulators, I had an idea. The emulator is written in native code, as most are. Unfortunately, it's been years since I've touched the stuff. However, there was a middle ground: C++/CLI. Using C++/CLI, the core could be wrapped into a CLI-compliant object model, thus facilitating the use of my more recent skillset. I'm no fan of C++ to be perfectly honest, but the simple fact is that it's ubiquitous, particularly in the games industry. So I figure it's best to maintain a moderate working knowledge of it, particularly since I had already invested the effort to learn it years ago. To its credit though, C++/CLI does a fantastic job of interop between native and managed code. I was constantly googling the syntax to get things like events and properties working, but that was to be expected. Also to be expected was general rustiness all around.
I was constantly forgetting things like the semicolon after the class definition, the ^ before reference types, proper includes, etc. Speaking of, one thing I sure as hell don't miss is header files. They feel so clumsy and archaic compared to a proper module system, to which I had grown accustomed. Preprocessor macros are also a minefield of errors waiting to happen. That aside, it's still relatively painless to get a managed class up and running that can then be consumed by C#. I was constantly trying to break it with different combinations of managed/unmanaged calls and marshalling, but every reasonable scenario I could think of was working. I'm sure there are scenarios where things get nasty, but for now it's working great, even in x64 mode (which took registry hacks to get working in VC++ Express, but thankfully I found an automated script that handled that for me). So we'll see how it goes. I'm definitely enjoying this a lot, as a fresh change of pace, and am finding it easier to work on it for longer. I'll discuss it in more detail and show pictures if and when it gets working. Ideas are still brewing for the 4D game, and in time I'll be able to attack it again from a fresh perspective.

*Crickets*
JPatrick posted a blog entry in Skipping a D

Since it's been over a month without a post, I feel compelled to make one, even if just to make sure there's an entry for August. The wheels are still turning, albeit slowly. Another heatwave a while back sapped my motivation, and more recently I've been tempted by other, much simpler, projects. I'm not going to give up on this though. A 4D game simply must exist, and come hell or high water it's going to happen. Anyway, work on the editor progresses slowly, and has revealed cause for refactoring. The first thing wrong was my unnecessarily complex polychoron object model. I had a "Polychoron Part" class, which a polychoron would contain one or more of.
As an example, I have a PlatformTesseract which will be the main surface object that the levels will be made of. The top and bottom of these have a grid-line shader applied, and all 6 sides (left/right, front/back, kata/ana) are just a solid color. This requires 2 separate geometry buffers, one for each shader. One polychoron part would be for the top and bottom, the other for the sides. This worked, but was ugly and unwieldy. Since polychora are subclassed for specific purposes anyway, and they already contain a list of cells that they're made of, I shifted the responsibility of managing different surface materials to the polychoron subclass, rather than this awkward, extraneous part class. This worked swimmingly and greatly simplified slice code, allowing me to completely delete the part class. If refactoring were Tetris (and it kind of is), deleting an entire source file is like clearing 4 lines at once. With that fresh, clean refactored feeling, I set out to allow my game objects to be consumed by the editor. This presented another problem though (of course it does, why wouldn't it). In the editor, 4D objects are to be displayed in 2D panels that cover each permutation of axes. The question is then, what's the best way to flatten the object into the 2D plane without them just becoming jumbled messes. I decided that objects should draw the least number of lines possible (in other words, only their actual edges). Unfortunately, I have no way to achieve this yet. Currently, polychora are just a list of 3D cells, each of which is sliced into a 2D face for rendering; the sum of the 2D faces making up the full 3D "slice" of the object, as detailed in one of my first posts. However, nothing in this process has anything to do with edges. My first instinct was just to draw the slices as wireframe, then flatten them down into 2D for the editor, but this will yield a lot of useless extra lines where the slice seams are (and there are a lot). 
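As a sketch of the general "slice one dimension down" idea this pipeline keeps reusing, here is the face-to-segment case in Python. The (x, w) representation and the handling of degenerate cases are simplified assumptions for illustration, not the project's actual code:

```python
# Intersect each edge of a 2D face with the hyperplane w = w0, collecting
# the crossing points. Two crossings give the 1D line segment that is this
# face's contribution to a wireframe slice. Edges lying exactly on the
# plane are ignored in this simplified sketch.

def slice_face(vertices, w0):
    """vertices: 2D polygon as (x, w) tuples, in edge order."""
    points = []
    n = len(vertices)
    for i in range(n):
        (x1, w1), (x2, w2) = vertices[i], vertices[(i + 1) % n]
        if (w1 - w0) * (w2 - w0) < 0:       # edge straddles the plane
            t = (w0 - w1) / (w2 - w1)        # linear interpolation factor
            points.append(x1 + t * (x2 - x1))
    return tuple(sorted(points))             # the segment's endpoints

# unit square spanning x, w in [0, 1], sliced at w = 0.25
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
segment = slice_face(square, 0.25)           # spans x = 0 to x = 1
```

The same shape of loop, one dimension up, is how the existing cell-to-face slicer works: swap edges for faces and points for edges.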
These seams are typically invisible in 3D, except for the very occasional single-pixel gap wrought by floating point error. A little AA can smooth over those gaps for 3D, but what to do in 2D with all the extra lines? Trying to detect "useless" lines dynamically would just be a mess, so I'm not even going to go down that road. Instead, what I plan to do is further elaborate on the Cell class so that it knows what 2D faces it's made of. I can then write a slicing algorithm analogous to the current one, which will slice the 2D faces into 1D line segments. This should give me an optimal wireframe ideal for editing. Additionally, slicing an object down to edges will allow me to render objects as wireframe in-game, which could be useful for some kind of extra-dimensional awareness mechanic to alert the player of nearby objects on the hidden axis. My new long-term goal is to regroup and try for DBP 10, if there is one. Seems to be a pretty successful contest each year though, so I don't see why they'd stop.

Sounds Like a Hack
JPatrick posted a blog entry in Skipping a D

While the side project is shelved for now, I did discover something interesting that might be useful to other XNA users working with sound. XNA 3.0 added the SoundEffect API to bypass the complexity of XACT. Unfortunately, the sounds must still be authored ahead of time and can only be instantiated through the content pipeline... Right? NOT SO! This is a total unabashed hack, but it works on both PC and 360. I successfully generated a sine wave at runtime with custom loop points, and it works great (after getting the units right for the loop point, that is, but more on that later). Also, this is NOT suitable for "interactive" audio, which is to say you can't have a rolling buffer of continuously generated sound data. It almost works for that, but the gap between buffers is noticeable, and especially jarring on the 360. Here's to hoping they improve that in a future XNA release.
Nevertheless, the ability to generate sound effects at runtime still provides interesting possibilities. Anyway, down to business. The first thing that bars our way is the fact that SoundEffect has no public constructor. This can be easily remedied with the crowbar that is reflection:

    _SoundEffectCtor = typeof(SoundEffect).GetConstructor(
        BindingFlags.NonPublic | BindingFlags.Instance,
        Type.DefaultBinder,
        new Type[] { typeof(byte[]), typeof(byte[]), typeof(int), typeof(int), typeof(int) },
        null);

As can be seen, SoundEffect has a private constructor that takes 2 byte arrays and 3 ints. Fantastic. So... what are they? Digging deeper with Reflector (which is a tool any .NET developer should have handy) we find that the first byte array is a WAVEFORMATEX structure, and the second byte array is the PCM data. The first 2 ints are the loop region start and the loop region length (measured in samples, NOT bytes), and the final int is the duration of the sound in milliseconds. I'm not sure why that's a parameter, since it could be computed from the wave format and the data itself, but whatever. While most of the parameters are straightforward, we'll need to construct a WAVEFORMATEX byte by byte. Fortunately, the MSDN page for it tells us what we need to know.
Eventually, I came up with this:

    #if WINDOWS
    static readonly byte[] _WaveFormat = new byte[]
    {
        // WAVEFORMATEX little endian
        0x01, 0x00,             // wFormatTag
        0x02, 0x00,             // nChannels
        0x44, 0xAC, 0x00, 0x00, // nSamplesPerSec
        0x10, 0xB1, 0x02, 0x00, // nAvgBytesPerSec
        0x04, 0x00,             // nBlockAlign
        0x10, 0x00,             // wBitsPerSample
        0x00, 0x00              // cbSize
    };
    #elif XBOX
    static readonly byte[] _WaveFormat = new byte[]
    {
        // WAVEFORMATEX big endian
        0x00, 0x01,             // wFormatTag
        0x00, 0x02,             // nChannels
        0x00, 0x00, 0xAC, 0x44, // nSamplesPerSec
        0x00, 0x02, 0xB1, 0x10, // nAvgBytesPerSec
        0x00, 0x04,             // nBlockAlign
        0x00, 0x10,             // wBitsPerSample
        0x00, 0x00              // cbSize
    };
    #endif

The first thing that should be apparent is that it's different for the PC and the 360. This is because the 360 is big-endian, whereas PCs are little-endian. This also applies to the PCM data itself. The first member is the format of the wave (0x1 for PCM). Next is the number of channels (2 for stereo). The sample rate (44100Hz in hex). Bytes per second (sample rate times atomic size). Bytes per atomic unit (two 2-byte samples). Bits per sample (16), and size of the extended data block (0, since PCM doesn't have one). This will give us a pretty standard 44.1kHz, 16-bit, stereo wave to work with. It could just as easily be made mono with the appropriate adjustments. The next parameter is the sound data itself. This is stored as a series of 16-bit values alternating between the left and right channels.
Here's a snippet that generates a sine wave:

    _WavePos = 0.0F;
    float waveIncrement = MathHelper.TwoPi * 440.0F / 44100.0F;
    for (int i = 0; i < _SampleData.Length; i += 4)
    {
        short sample = (short)(Math.Round(Math.Sin(_WavePos) * 4000.0));
    #if WINDOWS
        _SampleData[i + 0] = (byte)(sample);
        _SampleData[i + 1] = (byte)(sample >> 8);
        _SampleData[i + 2] = (byte)(sample);
        _SampleData[i + 3] = (byte)(sample >> 8);
    #elif XBOX
        _SampleData[i + 0] = (byte)(sample >> 8);
        _SampleData[i + 1] = (byte)(sample);
        _SampleData[i + 2] = (byte)(sample >> 8);
        _SampleData[i + 3] = (byte)(sample);
    #endif
        _WavePos += waveIncrement;
    }

This will generate a 440Hz (A) tone. Again notice the endian difference, and how the 16-bit sample is sliced into 2 bytes for placement into the array. It's written to the array twice so that the tone will sound in both channels. Next we have the loop region. The loopStart is the inclusive sample offset of the beginning of the loop, and loopStart + loopLength is the exclusive ending sample. In this context, a sample includes both the left and right channel samples, so really a 4-byte atomic block. If you pass in values measured in bytes, playback will run past the end of your sound and the app will die a sudden and painful death. Finally, the duration parameter. I just calculate the length of the sound in milliseconds and pass it in (soundData.Length * 250 / 44100). I'm not sure if this parameter actually has an effect on anything, but it's still prudent to set it. Once you have all this, you can just invoke the constructor and supply your arguments, and you should get a nice new SoundEffect from which you can spawn instances and play it just as you would with one you'd get from the content pipeline. That about covers it. Certainly not as useful as full real-time audio would be, but I thought it was cool anyway, and would hopefully be useful for some scenarios at least.
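For anyone who wants to play with the byte layout outside of XNA, the same recipe can be sketched portably. This Python version only illustrates the arithmetic (the endianness split, the 4-byte sample frames, and the duration formula); it is not a drop-in replacement for the C# above:

```python
import math
import struct

# struct's '<'/'>' prefixes stand in for the WINDOWS/XBOX endian split. The
# two unit conversions the post warns about are spelled out: loop points are
# counted in 4-byte sample frames, not bytes, and len * 250 / 44100 is just
# the byte length converted to milliseconds (len / 4 frames / 44100 Hz * 1000).

RATE, CHANNELS, BITS = 44100, 2, 16
BLOCK_ALIGN = CHANNELS * BITS // 8          # bytes per atomic frame

def waveformatex(big_endian=False):
    order = ">" if big_endian else "<"
    return struct.pack(order + "HHIIHHH",
                       1,                    # wFormatTag: PCM
                       CHANNELS,             # nChannels
                       RATE,                 # nSamplesPerSec
                       RATE * BLOCK_ALIGN,   # nAvgBytesPerSec
                       BLOCK_ALIGN,          # nBlockAlign
                       BITS,                 # wBitsPerSample
                       0)                    # cbSize: none for plain PCM

def sine_pcm(freq, frames, amp=4000, big_endian=False):
    frame = (">" if big_endian else "<") + "hh"
    step = 2 * math.pi * freq / RATE
    return b"".join(struct.pack(frame, s, s)  # same sample in both channels
                    for s in (round(math.sin(i * step) * amp)
                              for i in range(frames)))

data = sine_pcm(440.0, RATE)                    # one second of A440
loop_length_frames = len(data) // BLOCK_ALIGN   # frames, NOT bytes
duration_ms = len(data) * 250 // RATE
```

The little-endian `waveformatex()` output matches the hand-written `_WaveFormat` array above byte for byte, which is a nice sanity check on both.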
Undo/Redo and Debug Anecdote
JPatrick posted a blog entry in Skipping a D

Oddly enough, undo/redo was actually rather easy to implement. WPF's routed command facility makes it a snap to wire the shortcuts and add the callbacks. The implementation is simply two stacks of an IUndoRedo interface. New actions are pushed onto the Undo stack, and the Redo stack is cleared. If an action is undone, it is popped from the undo stack and pushed onto the redo stack, and vice versa when it is redone. Custom actions can implement this interface and their Undo and Redo methods will be invoked appropriately. Currently I have a MultiPropertyChangedAction that listens to the MultiPropertyGrid and will save all the previous values of a property when it is about to change, so that it can be easily undone. That alone covers a lot of what needs to be undo-able. Other things will include spawning a new object, deleting objects, dragging an object around in the viewports, etc.

----------
EDIT: Yoink. I jinxed the side project by talking about it. Turns out SoundEffect has more limitations than I realized. Pitch shifting is constrained to +/- 1 octave, and there's no effective way to start playing from an offset within a sound. Oh well, back to the editor I guess.
----------

Since the side project has been shelved for now, I'll instead regale any readers with a humorous tale of debugging woe. While running the game and mouse-looking around, things were working great. For some reason I alt-tabbed out to look at something else, and clicked the "show desktop" button to minimize everything, including the game. After bringing the game up again, I noticed something was missing... All of my tesseracts were gone. Empty nothingness staring back at me like the void of anxiety inside at the realization of another obscure bug. Did it freeze? Was there another race condition in my task manager? Thankfully, the FPS counter was still in the corner dutifully ticking away, so it didn't freeze.
The numbers looked right, too, and went down just as they should as I pushed the key to spawn more tesseracts. It was still running, but why wasn't I seeing anything? Being relatively new to shaders and all that jazz, I immediately suspected something was broken in my rendering code. It was going blank after a minimize, so maybe some kind of lost device situation? To test, I ran it again, and hit ctrl-alt-del to bring up the interrupt menu, which I'm pretty sure causes a lost device. Canceling and going back to the desktop, all the tesseracts were still there. They would ONLY disappear after a minimize, not for any other reason. Even so, maybe my shader constants were getting messed up somehow. XNA claims to be able to recover most kinds of graphics assets fully after a lost device, but maybe there was some kind of bug with minimizing? I added a special key that would reset the constants to appropriate values when pressed. I ran the game, and before minimizing, I pressed the key. The display didn't change, since the values were still correct, and the console reported that the values had been set. So far, so good. I minimize and reopen to emptiness. Crossing my fingers, I press the key. Nothing. Seething with frustration, I mash the key and fill the console output with "minimize test" but my rage was insufficient to sway the program to render once more. What the hell was wrong? Maybe I just wasn't asking it nicely enough. RenderStates.PrettyPlease | RenderStates.WithCherry? I start reading my Update and Draw methods again and again trying to find out what the eff was wrong. If all my shader constants and render states were fine, it had to be something else. Maybe camera updates were going wonky or something. In desperation, I completely comment out the camera code so the view can't be moved at all, and run again. Holding my breath, I minimize and reopen, only to be met with... A floating red tesseract. YES.
I took a quick break to relish the discovery of the problem area. Relaxed and confident, I plow into the camera code. It was obviously getting moved in such a way that you could no longer see the scene. How though? There was code to clamp the camera coordinates to reasonable values on both rotational axes, so even the most erratic movement should be fine. Perplexed, I add a line to print out the camera angles when a key is pressed. Before minimizing I get typical -180 to 180 horizontal and -90 to 90 vertical. Minimizing and reopening yet again, I push the key and still see typical values of -Infinity and NaN. Maybe next I'll- wait, what? I don't care how high your mouse DPI is, you're not going to be scrolling to -Infinity anytime soon. Besides, my input manager will normalize the coordinates based on the client window size, so- Oh. Seems that when you come back from being minimized, the IsActive flag in Game becomes true a few updates before the client width and height are set back to nonzero values. Slapped an if around it and all is well. NaN is fun stuff.

Success! Now I can... Wait, what was I doing again
JPatrick posted a blog entry in Skipping a D

It's embarrassing that it took a month to finish just this one control, but I think I can finally put it to rest and move on. It helps to remind myself that this control could be useful in any future WPF app I ever make. Out of the box it can edit any class that provides a string converter, flags and non-flags enums (of any underlying type), arbitrary structs, and classes that provide a default constructor. I think that's sufficient coverage for a lot of cases, even without adding custom type editors. I'll still probably do that anyway, at least for stuff like colors. Here's a few shots of the control: The control before items are added, and after a single item is added. The flags, struct, and class editors. Simply check the boxes in the flags dropdown for the combination you want.
The struct and class editors are just a dialog with a nested MPG, the main difference being that for classes the "null" checkbox is available. A red outline is shown around editors whose contents cannot be assigned back to the property. The background of the editor cell will be gray if the value differs among the objects selected by the MPG. Getting all the keyboard input, focus, mouse capture, data binding, etc. of this thing working correctly was at times enormously frustrating. I could itemize the challenges, but honestly I'm so tired of this control that I don't really want to drag my memory through it again. If anyone is using WPF and is interested though, I'd be happy to discuss it and share the code.

Switching gears... Somewhere along the line I took a break and refactored my XNA input manager, since I wasn't quite happy with it. The primary reason for making one was to provide quick methods for checking if buttons were just pressed or released during the current frame. From that though, I also wanted to rein in the inconsistent input APIs. The keyboard, gamepad, and mouse classes all exposed their states in slightly different ways, and I wanted to have one single flat state for all digital inputs, and all analog inputs. For digital, I combined all inputs exposed by the 3 devices into a single, large enum, and allow the user to query if an input is down, up, just pressed, or just released. This should make it easy to bind controls to any device, or any combination of devices. Similarly for analog, I made an enumeration of all axes available, and normalize them to a range of -1.0 to 1.0. The user can then query the current sample, or the delta from the last sample. This works great for all the gamepad axes, but was slightly awkward for the mouse. To make it work, the input manager recenters the pointer after every mouse sampling.
This allows one set of code to correctly handle camera control from either the mouse or the gamepad (albeit with different sensitivities). For the gamepad, camera movement is proportional to the current sample. If the player is holding the stick steady at 1.0, the camera will move at a certain speed, a different speed at 0.5, etc. For the mouse, camera movement is typically handled with the mouse delta, rather than the mouse sample. Recentering the mouse, however, effectively turns the sample into a delta, and we get the expected behavior. One final challenge was the aspect ratio of the mouse. Normalizing horizontal and vertical axes to -1 to 1 means that movement along the axes has different magnitudes, as the screen is not square. To deal with this, I provide aspect-corrected mouse axes, as well as the non-aspect ones (which are themselves useful for tracking pointer position in view space). Now I still have the undo/redo stack to implement, then it's finally back to working with 4D things. The level format ought to be interesting, especially if I add spatial partitioning...

Bleh
JPatrick posted a blog entry in Skipping a D

Anyone in the Seattle area right now is, I'm sure, aware of the loathsome heatwave taking place. 90-plus-degree temperatures in an area where it's not uncommon to have no AC at all do not foster motivation. I'm not a big fan of the day star to begin with, and record-breaking heat is doing little to sway me. Anyway, the icy wind blasting through the window now is doing much to revitalize me, and I'm continuing to pursue the MultiPropertyGrid. Types that support String in their TypeConverter are editable via a simple text box, while enums are handled with a combo box. User-entered strings are validated through the type converter to make sure they're valid, and if they fail the user is notified with a message box and the control's error template is activated. Flags enums might be a little more complicated. Maybe a multi-select listbox.
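However it ends up being presented (checkboxes or a multi-select listbox), the flags-enum mapping underneath is just bit arithmetic. A minimal sketch, with flag names and values invented for illustration:

```python
# Each checkbox/list item is one bit: checking items ORs the bits together,
# and populating the control from an existing value tests each bit.

FLAGS = {"Visible": 1, "Solid": 2, "CastsShadow": 4, "Animated": 8}

def combine(checked):
    """Checkbox states -> flags value."""
    value = 0
    for name in checked:
        value |= FLAGS[name]
    return value

def split(value):
    """Flags value -> which checkboxes to tick."""
    return [name for name, bit in FLAGS.items() if value & bit]

value = combine(["Visible", "CastsShadow"])   # 1 | 4 == 5
ticked = split(value)
```

Round-tripping through `combine(split(...))` is a handy invariant to assert while wiring the control's data binding.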
Beyond that, types will need to supply their own WPFTypeEditorBase-derived editor control through the Editor attribute. The type is then retrieved through the TypeDescriptor interface and instantiated and data-bound into the property grid. Once this is out of the way I'll need to handle an undo-redo stack somehow, then maybe I'll finally be able to focus on the level format. At this point it's abundantly clear that there's no way I'll be ready in time for DBP 09, but it's still a useful deadline to keep motivation up.

Editor Braindump
JPatrick posted a blog entry in Skipping a D

Almost a week since the last update, so I thought I'd just talk a bit about the editor. For some reason, writing one feels kind of like how doing homework felt. Well, minus the procrastination anxiety. It definitely has that fatiguing tedium of a project that you just want to put behind you so you can get on to other things; a project that, while technically necessary, doesn't really give you the feeling that you're getting any closer to your goal. Instead, you're building the boat that will get you across the river that separates you from your goal, the whole time wondering if maybe you could just swim it and skip all the bother. In this case, "swimming it" would be my crazy (lazy?) idea of trying to make levels out of plain text files. Maybe draw the box with hyphens and pipes and junk, and annotate it with some XML or something. When faced with the prospect of reimplementing a property grid control in WPF, that was sounding really tempting. At first I was cursing WPF for not just including one out of the box, but then realized I would need to customize it anyway, so perhaps it doesn't make much difference. Still though, it feels like for every two steps forward, I take another step back; a net gain, but frustrating nonetheless.
I suppose part of the problem is that such a project is nontrivial in any windowing framework, so it's exposing all the weakest parts of my understanding of WPF, of which there are many, even after reading a rather comprehensive book about it. It is by far the most enormous and complex API I have ever used (hardcore Win32 hackers will probably laugh, but hey, we're all bound by the limits of our experience). Not just in terms of the number of classes, methods, properties, events, etc. (seriously though, there's a lot; pull up any FrameworkElement-derived class and try to mousewheel through it as fast as you can), but in terms of how those classes interrelate and all the functionality they expose: XAML, dependency properties, routed events, commands, data binding, retained rendering, measure/arrange layout, templates, styles, triggers... With all of this power available, I feel a constant nagging doubt that I'm not using it properly, or enough. If I just google a little harder, just follow that next link, maybe a whole new and better way of solving the problem will be revealed. In fact, this actually happened when trying to make an XNA control. I was looking for all kinds of ways to render arbitrary pixel data. One way I read about involved using reflection to crack open a class and force the data through; another involved using an interop bitmap class and some Win32 methods. Then almost by accident I stumbled across some posts about WriteableBitmap, which was added in a later version of the framework and did exactly what I wanted, but that I had no clue existed. I don't want this to seem like I'm coming down on WPF, 'cause it's really quite good. In particular, the resolution independence is something I've been wanting for a long time, what with poor vision and the constantly increasing resolution of displays over the years. It's just that learning to use it was very intimidating, at least for me.
I think I'm finally starting to come to grips with it though, and most importantly I'm learning to relax and just solve the problem, instead of obsessing over the perfectly elegant and proper solution using every single feature to its fullest. I have an OrthographicViewport control that I'll use to display geometry viewed top-down in the XY, XW, and WY planes, with 3 more in a tab control that will show height with the XZ, YZ, and WZ planes. The views can be scrolled, zoomed, and updated, with custom major/minor gridlines for each axis, with numbers on the major lines. They're all data-bound together so they all use the same list of objects, and sync up their zoom factors and scrollbars (scrolling right on the XY display will also scroll right on the XW and XZ displays, etc.). Next I'm trying to make a MultiPropertyGrid that, given a collection of objects, will show only the properties those objects have in common, and allow you to edit them with custom type editors. I'll post some screenshots when it's working.

Level Editor Design
JPatrick commented on Matt328's blog entry in Journal of Matt328

Interesting observations. I'm currently attempting to make an editor as well and find myself grappling with similar issues. I'll definitely follow this to see how it goes, especially to see the WinForms approach, as opposed to WPF, which is the path I chose. You mentioned in your first entry that you weighed both options and eventually chose WinForms, and that you might discuss that in a future entry. I for one would be interested in seeing the details on that, and the pros and cons you saw in each. Anyway, best of luck.

Multi-Threading FTW: Part 2
JPatrick commented on JPatrick's blog entry in Skipping a D

Thanks for the vote of confidence.
It felt a little weird just starting up when all the other journals in the list have been around for a while and have tens or hundreds of thousands of views, but we all have to start sometime, I suppose :)

Multi-Threading FTW: Part 2
JPatrick posted a blog entry in Skipping a D

Previously on Skipping a D:

Quote:
- First, before we can improve performance, we have to know where we stand.
- ...we then have a full 24,000 intersection checks.
- That's a LOT of computation, especially on the 360.
- 25 tesseracts. Release build. No debugger.
- It was time to use... (zoom in on face) THREADS.
- Something kept bugging me though.
- I wanted to try and make it lockless to further reduce any blocking.
- If it didn't work I could always roll back...

And now the conclusion. Well, to get started, any thread pool is going to need threads, so we need to set some up:

    _Threads[0] = Thread.CurrentThread;
    Thread.SetData(_ThreadIndexStoreSlot, 0);
#if XBOX
    Thread.CurrentThread.SetProcessorAffinity(XBoxCoreMap[0]);
#endif
    if (!forceSingleThread)
    {
        for (int i = 1; i < ThreadCount; i++)
        {
            _Threads[i] = new Thread(delegate()
            {
#if XBOX
                Thread.CurrentThread.SetProcessorAffinity(XBoxCoreMap[i]);
#endif
                Thread.SetData(_ThreadIndexStoreSlot, i);
                _TaskInitWaitHandle.Set();
                ThreadProc();
            });
            _Threads[i].Start();
            _TaskInitWaitHandle.WaitOne();
        }
    }

_Threads is an array of all the worker threads held by the manager. At the top, we set the first element to be the current thread, set its thread-local index value to 0, and set its hardware thread affinity to the first entry in the mapping. XBoxCoreMap is an array holding the indices of the hardware thread slots that the worker threads should use. I define it as { 1, 3, 4, 5 }, because slots 0 and 2 are reserved by the XNA framework itself. So the main thread gets slot 1 (which is actually its default, but I set it explicitly just to be thorough). After the main thread setup, we have a loop that spins up the other workers.
We new up a thread for each entry, giving it a ThreadStart delegate that sets its processor affinity and its thread-local index, then signals a wait handle. This wait handle is crucial. As I mentioned in part 1, it keeps the main thread in sync with the worker threads, allowing each one to read the current value of 'i' before the main thread changes it for the next loop iteration. After the wait handle, it drops into the main worker thread procedure, which we'll see later. Now that we have our workers ready to go, how do we actually go about performing tasks? The DoTasks method is the main entry point for the task manager that other code will call when it wants parallel work done:

    public void DoTasks(List<Action> tasks)
    {
        if (_Disposing)
        {
            throw new InvalidOperationException("Cannot run after disposal.");
        }
        if (_Running)
        {
            throw new InvalidOperationException("Cannot run while already running.");
        }
        _Running = true;
        _Tasks = tasks;
        try
        {
            if (_ForceSingleThread)
            {
                for (int i = 0; i < tasks.Count; i++)
                {
                    tasks[i]();
                }
            }
            else
            {
                _CurrentTaskIndex = 0;
                _WaitingThreadCount = 0;
                _ManagerCurrentWaitHandle.Set();
                TaskPump();
                while (_WaitingThreadCount < ThreadCount - 1)
                {
                    Thread.Sleep(0);
                }
                if (_Exceptions.Count > 0)
                {
                    DoTasksException e = new DoTasksException(_Exceptions);
                    _Exceptions.Clear();
                    throw e;
                }
            }
        }
        finally
        {
            _ManagerCurrentWaitHandle.Reset();
            _ManagerCurrentWaitHandle = _ManagerCurrentWaitHandle == _ManagerWaitHandleA ? _ManagerWaitHandleB : _ManagerWaitHandleA;
            _Tasks = null;
            _Running = false;
        }
    }

The first thing we have is a pair of simple checks to make sure the manager hasn't been disposed of, and that it's not already running (on another thread). Next we mark it as running, and set the internal _Tasks list to the list that got passed in. This list will be read in parallel by the worker threads, so I use a concrete List instead of an IList.
With an interface, there's no way of knowing what's actually going on inside the accessors, and whether they're actually thread safe, so instead I use a List, which is clearly defined as being an array internally. Here we see the _ForceSingleThread flag again. If this is set by the constructor, the task manager will not create any threads at all and will execute all the tasks serially. This is mainly for diagnostic and comparison purposes. The meat is in the 'else' clause, which prepares to execute the tasks by setting the start index to 0, and the number of finished threads to 0. It then signals a ManualResetEvent that simultaneously activates all the worker threads. This avoids the one-by-one activation of my previous implementation. It then enters TaskPump, which is the method that actually acquires and executes work items. Thus, my other goal of getting the main thread to do work as well is achieved. When it comes back from that, it waits in a loop for the other threads to finish. Once all threads are finished, we check to see if any threads added an exception to the internal _Exceptions list. If they did, we aggregate them all into a single DoTasksException and throw that back to the caller so they can examine it, or allow it to fall through to a debugger. Not to be overlooked, the finally block plays a critical role. First it resets the wait handle we signaled earlier to start the workers. Next, it switches the current wait handle to the other of the 2 defined in the class. This will make more sense when we see ThreadProc:

    void ThreadProc()
    {
        while (true)
        {
            Interlocked.Increment(ref _WaitingThreadCount);
            _ManagerWaitHandleA.WaitOne();
            if (_Disposing)
            {
                return;
            }
            else
            {
                TaskPump();
            }
            Interlocked.Increment(ref _WaitingThreadCount);
            _ManagerWaitHandleB.WaitOne();
            if (_Disposing)
            {
                return;
            }
            else
            {
                TaskPump();
            }
        }
    }

This is the loop that all worker threads spend their time in.
The first thing they do as they enter is safely increment a value that indicates the current number of threads in a waiting state, then wait on wait handle A. At first it seems that the loop repeats the same code twice. The second portion is the same as the first, except it waits on handle B. This plays into the "current wait handle" we saw on the main thread. The first time DoTasks is called, it will signal handle A, while handle B stays reset. This means that all workers will run TaskPump, then block on handle B. The main thread then resets handle A and uses handle B next time. Similarly, the workers will move when B is signaled, run TaskPump, then block once more on A. This A/B/A/B pattern makes it simple to tell all workers that they can start running, while at the same time being sure that they'll stop when you want them to. With only a single handle, you would need to wait until all workers are confirmed to be activated, but not wait too long or one worker might finish its work and pass the wait handle again before it resets. As a final note, the _Disposing flag is set when the manager is disposed of and tells the workers to return, thereby ending the thread. The final piece of the puzzle is TaskPump itself:

    void TaskPump()
    {
        List<Action> tasks = _Tasks;
        int taskCount = tasks.Count;
        while (_CurrentTaskIndex < taskCount)
        {
            int taskIndex = _CurrentTaskIndex;
            if (taskIndex == Interlocked.CompareExchange(ref _CurrentTaskIndex, taskIndex + 1, taskIndex) && taskIndex < taskCount)
            {
                try
                {
                    tasks[taskIndex]();
                }
                catch (Exception e)
                {
                    lock (_ExceptionsLock)
                    {
                        _Exceptions.Add(new TaskException(tasks[taskIndex], e));
                    }
                }
            }
        }
    }

In this method, we enter a loop that will continuously execute until _CurrentTaskIndex indicates that all work items have been fetched. The next bit is the dangerous lockless part. First we make a local copy of the current task index. Next, we compare that local copy to the result of an Interlocked.CompareExchange call.
This call will safely store the second argument into the location of the first, but ONLY if the third argument is equal to the first before the replacement, all as an atomic operation. If our local copy of the task index matches the shared copy, then we effectively "claim" that index for the executing thread. If it doesn't match, then another thread has altered the value between when we made our local copy and when we tried to make the check. This means that another thread has claimed the index and we must try again with the new value. If we pass the check, then we use our claimed index to retrieve the corresponding task from the task list, and execute it. If it throws an exception, we catch it, lock the exception list, and add it in for the main thread to sort out later. I don't try to avoid locks with this part, because if task code is throwing exceptions on a regular basis, then something's broken and should be fixed. With all the pieces in place, it was time for the moment of truth. I hooked everything up, deployed to 360, and... 150 tesseracts. AWESOME. I was totally stoked. I now had a full 3x improvement over the single threaded version. Maybe this lockless thing wasn't as hard as they said! Of course, several runs later, I was finally punished for my hubris. A seemingly random crash out of nowhere. I went cold. Thankfully I was currently in debug mode and caught the exception, and I knew I had to try and fix it then and there, since if it was some kind of arcane synchronization bug, I might not be able to get it to happen again for a long time. As it turned out, one of the workers was trying to read a task just past the end of the list. But how? The check in the while loop should catch that right? Wrong. Let's say 2 threads pass the check in the while loop, but pause before attempting to claim an index. 
Further, let's say one of the threads moves through the compare-exchange and grabs an index successfully, and that immediately after that, the other thread does the same. The current index counter has now been incremented twice. This is fine and dandy, but what if there was only actually 1 work item left in the list? This is where the "&& taskIndex < taskCount" check that I glossed over in my explanation comes in. After slapping that in, no more crashes (yet). Is it foolproof now? Honestly, who knows. It's working great so far, and I'll be continuing to rely on it for the foreseeable future, so we'll see how it goes. In part 1 I promised a bonus, so here's the full code for the manager for those interested: If anyone tries it out, please let me know how it works out for your program and if it actually gets you more performance. So my tale of threading comes to an end. Perhaps next time I'll talk a bit about trying to wrangle a 4D editor out of WPF.

Multi-Threading FTW: Part 1
JPatrick posted a blog entry in Skipping a D

These entries are a bit rapid fire so far, but that's mainly the result of having a bunch of backlogged topics that finally added up enough for me to break down and write them. Things will slow down pretty soon. Switching gears to a more concrete issue this time, I'm going to talk about the performance challenges of these algorithms, and a few things I've done to fight back. This is part 1 of a 2-part entry, because it got pretty long as I was writing it. Part 2 will have a bonus that other XNA developers might hopefully find useful. First, before we can improve performance, we have to know where we stand. As discussed in the previous entry, a tesseract is composed of 40 tetrahedra, each of which must potentially be sliced to produce a 3D scene. Each slicing consists of 6 intersection checks, each successful one yielding a linear interpolation factor that we use to generate a 3D vertex from the two 4D vertices of the edge.
For a modest scene of, say, 100 visible tesseracts (meaning none of them are early-outed by lying entirely outside the camera realm) we then have a full 24,000 intersection checks (100 tesseracts × 40 tetrahedra × 6 checks). Many of these will require further linear interpolations to generate the geometry. I'd estimate roughly half on average for visible polychora. That's a LOT of computation, especially on the 360, which means I was in for a rude awakening, and indeed it came. I want to ideally run at 60fps, so I spawned visible tesseracts in my test app to see how many I could get before dropping below that. Going into this I had read up about the performance pitfalls of XNA, did my best to avoid creating garbage, used ref passing for structs where I could; thought I had all my ducks in a row. Well, I got to... (drum roll) 25. 25 tesseracts. Release build. No debugger. I was... less than encouraged. Was this all a waste? Should I just pack it in and call it quits? My dream was crumbling, the sky was falling, the- Oops. Logic error... The geometry was being sliced twice per frame instead of once. 50. Okay, so that's... less slow. You can't scoff at a 2x performance boost right off the bat, I suppose. This includes disabling the back-face calculation I mentioned in the previous entry, which bought me about 3 or 4 I think (I really wish I had kept detailed notes of this as it was happening). Even so, I was starting to face harsh reality. The naive approach of just plowing through all the work on the main thread with ears plugged and humming a tune pretending the other cores don't exist was right down the toilet. It was time to step up and get with the times. It was time to use... (zoom in on face) THREADS. Reading the XNA forums had previously made it clear to me that the standard .NET thread pool was ill-suited for use on the 360 because on that platform, you must manually assign your threads to a hardware thread slot; no scheduler does the work for you. So it was time to go googling again.
I eventually came upon a thread pool implementation designed for use on the 360. I didn't feel comfortable just slapping it into my engine though. This is also a learning exercise after all, and parallelism is an interesting topic, so I instead used the code as a guide and wrote my own. It went pretty well actually. I used a standard .NET queue to store the work items, spun up some worker threads, assigned them to 360 hardware threads, and used an AutoResetEvent to block the workers until the main thread called DoTasks on my task component. When this happened, the main thread would awaken the workers one at a time by signaling the wait handle, waiting for each worker to signal back with another wait handle that it was activated. Once activated, the worker would lock the task queue, dequeue a work item, then run it. Once the main thread saw that there were no more tasks to be dispatched, it would wait for all outstanding tasks to complete, then return. This wasn't too bad for a first attempt, I thought, so I set about making my game use it. Every polychoron was now a work item, slicing its geometry into a thread-local buffer. Once all work items were done, the main thread would render each of the worker thread geometry buffers. Unsurprisingly, there was a bug. I used a loop to spin up my worker threads, and used the loop index inside a closure delegate to assign each thread a unique index so they could write to their own element of shared data arrays. For some reason though, all the indices were coming out as 2, which was the terminal loop value on my Windows build, instead of being 0 and 1 like they should be. As it turns out, C# closures capture the loop variable itself rather than its value, and by the time the worker threads were activated, the loop counter had progressed to its final value. As a fix, I added a wait handle to the initialization so the main thread would wait until the worker was done reading the local values before proceeding to the next iteration.
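The capture bug described above is easy to reproduce outside C#. Here's a minimal Java analogue (a sketch, not the journal's code; Java forbids capturing a mutable loop variable directly, so a one-element array stands in for the captured variable, and the names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class CaptureBug {
    // Build one Runnable per "worker"; all of them read the SAME holder,
    // just as the C# delegates all read the same captured loop variable.
    public static List<Integer> observedIndices(int workerCount) {
        int[] holder = new int[1];   // stands in for the captured 'i'
        List<Runnable> workers = new ArrayList<>();
        List<Integer> observed = new ArrayList<>();
        for (int i = 0; i < workerCount; i++) {
            holder[0] = i;
            workers.add(() -> observed.add(holder[0]));
        }
        // "Activate" the workers only after the loop has finished,
        // mirroring threads that start late: every one sees the last value.
        for (Runnable w : workers) {
            w.run();
        }
        return observed;
    }
}
```

Every worker reports the final value of the shared variable instead of its own index, which is exactly the symptom in the journal. The fix mirrors the one described above: either copy the loop variable into a fresh local per iteration, or synchronize so each worker reads its value before the loop moves on.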
I'm suspicious, although not sure (because I haven't actually run it), that the linked implementation might suffer from it as well. If anyone has used it (specifically on the 360) I'd be curious to know how it went; otherwise perhaps I'll investigate at some point. Anyway, this worked out pretty nicely, and I was quite pleased to see my new performance number: 100. The clouds are parting now. Starting to get to the point where I consider it adequate, especially if I was willing to settle for 30fps, because then I could reach around 200. I declared a preliminary victory and changed focus to other things, like the editor. Something kept bugging me though. The 360 was now using 3 hardware threads to do the work instead of only 1, but I was "only" getting about 2x performance. I initially shrugged it off as the fact that 2 of those threads are actually on the same core, but then I thought "why not let the main thread do work too instead of just waiting for the other threads to do it." So I tried to add yet another worker thread and assign it to the same hardware thread as the main thread. Much to my dismay though, this had essentially no impact on performance. I shrugged it off again, and again went back to the editor. Working on an editor is dry business though, so as a distraction I decided to completely tear down the task manager and try to fix 3 main problems that I suspected. First, activating workers one at a time felt inefficient; there had to be a way to activate them all at once and let them take care of acquiring their own work items. Second, I was still convinced that getting the main thread to consume work items as well would help. Third, I wanted to try and make it lockless to further reduce any blocking. Experts frequently warn against trying that, and for good reason as I'll detail next time, but still I wanted to try, even if just for fun. If it didn't work I could always roll back... TO BE CONTINUED...
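The lockless "claim an index" pattern that this teardown led to (shown in TaskPump in the Part 2 entry above) can be sketched in Java, with AtomicInteger.compareAndSet standing in for C#'s Interlocked.CompareExchange. This is a hedged sketch of the technique, not the journal's actual code, and the names are illustrative:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class TaskClaimer {
    private final AtomicInteger nextIndex = new AtomicInteger(0);

    /** Runs tasks from the shared list until none are left.
     *  Safe to call from many threads at once: each index is
     *  handed out exactly once via compare-and-set. */
    public void pump(List<Runnable> tasks) {
        int count = tasks.size();
        while (nextIndex.get() < count) {
            int claimed = nextIndex.get();
            // The CAS succeeds only if no other thread advanced the counter
            // between our read and here; the extra bound check guards the
            // race where several threads pass the while test at once.
            if (nextIndex.compareAndSet(claimed, claimed + 1) && claimed < count) {
                tasks.get(claimed).run();
            }
        }
    }
}
```

A failed compareAndSet simply means another thread claimed that slot first, so the loop re-reads the counter and tries again, with no lock held at any point.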
Geometry Slicing
JPatrick commented on JPatrick's blog entry in Skipping a D

Thanks. I've always thought this kind of thing is perfect for exploration in a game, because it literally cannot exist in our universe (well, as far as I know :)

Quote (original post by MrCpaw): "In 4D games, wouldn't collision detection be a lengthy process?"

And how. I'm still not entirely sure how I'm going to attack it. In 3D, you have common bounding volumes like sphere, box, etc. I've extended that to 4D by having bounding "bulks" (which is a common term for 4D volume) like glome and axis-aligned tesseract. Collision detection on those isn't too bad, because like a sphere, checking collision with a glome is just checking a distance. With an axis-aligned tesseract, you can just check along each axis separately. Doing arbitrary tetrahedron collision still confuses me though, and that's just the tip of the iceberg. Once I have collisions I'll need to do 4D physics, which is simultaneously fascinating and nauseating. I've been putting it off to work on other things like the editor, but I'll definitely do a writeup about that once I get there though.
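The glome check mentioned in the comment above really is just a distance test with one extra coordinate: two glomes intersect when the distance between their centers is at most the sum of their radii, same as 3D spheres. A minimal sketch (hypothetical class and field names, not code from the journal):

```java
public class Glome {
    public final double x, y, z, w, radius;

    public Glome(double x, double y, double z, double w, double radius) {
        this.x = x; this.y = y; this.z = z; this.w = w;
        this.radius = radius;
    }

    /** Same rule as 3D spheres, with one extra squared term for the
     *  w axis; comparing squared distances avoids the sqrt. */
    public boolean intersects(Glome o) {
        double dx = x - o.x, dy = y - o.y, dz = z - o.z, dw = w - o.w;
        double distSq = dx * dx + dy * dy + dz * dz + dw * dw;
        double r = radius + o.radius;
        return distSq <= r * r;
    }
}
```

The axis-aligned tesseract case mentioned above generalizes the same way: an AABB overlap test over four axes instead of three.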
https://www.gamedev.net/profile/137371-jpatrick/
stephen.teacher, 4 years ago:

hi all, this is some extra credit from class. we sat around cleaning up code today, and this is what we came up with:

public boolean equals2(IntTree t2) {
    return equals2(this.overallRoot, t2.overallRoot);
}

private boolean equals2(IntTreeNode r1, IntTreeNode r2) {
    if (r1 == null || r2 == null) {
        return r1 == null && r2 == null;
    }
    return (r1.data == r2.data) && equals2(r1.left, r2.left) && equals2(r1.right, r2.right);
}

we get five points extra credit if we can shorten it any further. So my question is: can someone explain to me the ^ character, and is that the right path to shortening this code?

Tags: binary, class, java, recursive, shorten, tree
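For reference on the question: in Java, ^ applied to booleans is exclusive OR, true when exactly one operand is true. A small hedged sketch of how it behaves (this only illustrates the operator, and makes no claim about the shortest version of the tree code):

```java
public class XorDemo {
    // a ^ b is true iff exactly one of a, b is true.
    public static boolean exactlyOne(boolean a, boolean b) {
        return a ^ b;
    }

    // Applied to null checks: true when exactly one reference is null,
    // which is the case where two trees structurally differ at a node.
    public static boolean exactlyOneNull(Object a, Object b) {
        return (a == null) ^ (b == null);
    }
}
```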
https://www.daniweb.com/programming/software-development/threads/480447/can-someone-help-condence-this-code-recursive-binary-tree
PhEventEmit()

Emit an event

Synopsis:

int PhEventEmit( PhEvent_t const *event,
                 PhRect_t const *rects,
                 void const *data );

Library:

ph

Description:

This function emits the event described by the given PhEvent_t structure. PhEmit() does the same things as PhEventEmit(), but provides a cleaner API.

The rects argument points to an array of PhRect_t structures that define the rectangles associated with the event. If event->num_rects isn't 0, then rects must point to an array of event->num_rects valid rectangles.

The data argument points to variable-length event-specific data. If event->data_len isn't 0, then data must point to a buffer of at least event->data_len bytes.

If you set the collector ID (event->collector.rid) to zero, the event is enqueued to every appropriately sensitive region that intersects with the event. If you set collector.rid to a region ID, only that region notices the event.

The Photon library fills in the collector and translation fields in the PhEvent_t structure after a copy of the event is enqueued to an application.

Returns:

- 0 - Success
- -1 - An error occurred, or no further events are pending. Check the value of errno:
  - If errno is ENOMSG, Photon had no messages enqueued to your application at the time you emitted the event.
  - If errno isn't ENOMSG, an error occurred.

These return codes are useful for applications that spend most of their time emitting events and want to retrieve an event only if there's one pending for them.

Examples:

The following example emits an expose event from the device region.
Because the event covers the entire event space, any visible part of the event space is refreshed:

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <time.h>
#include <Ph.h>

int main( int argc, char *argv[] )
{
    PhEvent_t event = { 0 };
    PhRect_t rect;

    if( NULL == PhAttach( NULL, NULL ) ) {
        fprintf( stderr, "Couldn't attach Photon channel.\n" );
        exit( EXIT_FAILURE );
    }

    event.type = Ph_EV_EXPOSE;
    event.subtype = 0;
    event.flags = 0;
    event.num_rects = 1;
    event.data_len = 0;
    event.emitter.rid = Ph_DEV_RID;

    rect.ul.x = rect.ul.y = SHRT_MIN;
    rect.lr.x = rect.lr.y = SHRT_MAX;

    PhEventEmit( &event, &rect, NULL );

    return EXIT_SUCCESS;
}

Classification:

Photon

See also:

PhEmit(), PhEmitmx(), PhEvent_t, PhEventEmitmx(), PhEventNext(), PhEventPeek(), PhEventRead(), PhRect_t, PtSendEventToWidget()

"Emitting events" in the Events chapter of the Photon Programmer's Guide.
http://www.qnx.com/developers/docs/6.4.0/photon/lib_ref/ph/pheventemit.html
rfm69 and atc

Hi. I have played a little bit with the ATC examples from lowpowerlab, which work well, but I have some trouble including it in MySensors. Here's a description of ATC for RFM69. What I like is:

- "green" RF (no need to scream at full TX level)
- auto-adjusts RSSI when a node is freshly made, or when you move it (rare)
- optimized low power, as it adjusts to the target RSSI
- dynamic

Very basic setup I am trying to compile:

- from, add rfm69_atc.h and .cpp to the rfm69 driver folder
- it will need an ifdef in some places, but for a dirty test, in MyTransportRFM69:

//#include "drivers/RFM69/RFM69.h"
#include "drivers/RFM69/RFM69_ATC.h"

// RFM69 _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM);
RFM69_ATC _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM);

- still in MyTransportRFM69.cpp, in transport init:

_radio.enableAutoPower(-70); // fixed for tests

That would need some define conf, for example:

- MY_ENABLE_ATC to enable ATC mode
- MY_ENABLE_ATC_LEVEL_RFM69 for the target RSSI level

I'm getting these really dumb errors!

sketch\SensebenderMicro.ino.cpp.o: In function `transportInit()':
C:\Users\scalz\Documents\Arduino\libraries\MySensors/core/MyTransportRFM69.cpp:42: undefined reference to `RFM69_ATC::initialize(unsigned char, unsigned char, unsigned char)'
sketch\SensebenderMicro.ino.cpp.o: In function `transportReceive(void*)':
C:\Users\scalz\Documents\Arduino\libraries\MySensors/core/MyTransportRFM69.cpp:84: undefined reference to `RFM69_ATC::sendACK(void const*, unsigned char)'
collect2.exe: error: ld returned 1 exit status
exit status 1

Looking at each of rfm69, rfm69_atc, or the rfm transport, I don't understand why it's undefined... the RFM69_ATC class is simply derived from the RFM69 class. I am thinking about bad linking, a bad inheritance declaration, or something not "in sync" with some params of the core class methods (but I don't see where or why)?? Can you explain it to me? I'm feeling blind, and would like to learn.. @hek, I'm sure you know what's wrong.
don't laugh. I will add more things I need when I understand my mistake here. thx

Do you include your new ATC cpp here?

@hek no, I didn't see this place! thx. I have just added it but still the same errors. I am still looking.. Edit: I am using the dev branch of course! And for the test, I am trying to compile the Sensebender sketch.

@hek: sorry, it's oki for the cpp. I added the .h instead. So now I have a handful of errors due to the addition; I will look at how to fix them and report here if I have another problem or success. thx

You might need to include both...

both in mysensors.h? I will try. I need to understand well how it is organized, I think.

Yes, include both .cpps; looks like the new class inits the old:

yes, I kept them both. but whether I remove rfm69.cpp or not, I still have these new errors I'm looking into:

In file included from C:\Users\scalz\Documents\Arduino\libraries\MySensors/MySensor.h:265:0,
                 from C:\Users\scalz\AppData\Local\Temp\arduino_modified_sketch_741351\SensebenderMicro.ino:65:
C:\Users\scalz\Documents\Arduino\libraries\MySensors/drivers/RFM69/RFM69_ATC.cpp: In member function 'void RFM69_ATC::sendFrame(uint8_t, const void*, uint8_t, bool, bool, bool, int16_t)':
C:\Users\scalz\Documents\Arduino\libraries\MySensors/drivers/RFM69/RFM69_ATC.cpp:111:18: error: 'RFM69_CTL_SENDACK' was not declared in this scope
   SPI.transfer(RFM69_CTL_SENDACK | (sendRSSI?RFM69_CTL_RESERVE1:0)); // TomWS1 TODO: Replace with EXT1
C:\Users\scalz\Documents\Arduino\libraries\MySensors/drivers/RFM69/RFM69_ATC.cpp:118:32: error: 'RFM69_CTL_REQACK' was not declared in this scope
   SPI.transfer(_targetRSSI ? RFM69_CTL_REQACK | RFM69_CTL_RESERVE1 : RFM69_CTL_REQACK);
C:\Users\scalz\Documents\Arduino\libraries\MySensors/drivers/RFM69/RFM69_ATC.cpp:129:66: error: 'RF69_TX_LIMIT_MS' was not declared in this scope
   while (digitalRead(_interruptPin) == 0 && millis() - txStart < RF69_TX_LIMIT_MS); // wait for DIO0 to turn HIGH signalling transmission finish
exit status 1

Too much for my brain at this hour.

You should probably start looking for the missing RFM69_CTL_REQACK define. Good night

yep. thank you very much. good night

cool, it compiles after updating the rfm69 lib. I'll do some defines and tests, and I will try my first PR. I also need to check all the diffs between the old and new lib. can't wait to try listenmode now...

I will keep a very close eye on this! Listen mode is very interesting, but also the possible prolonged battery life for the other sensors!!! Big thumbs up!

thx. so far so good. I had a little bit of time last night... I think I will rename this topic "improvements for our mysensors rfm69 lib" lol. I diff-checked the mysensors rfm69 driver lib against the latest lowpowerlab rfm69 lib (master + spi transaction version) + a few other variants I found, to see if something could be missing. So I started from the mysensors rfm69 driver and added the changes step by step, and of course didn't forget to keep the board defines (atsam, esp, 328...) + checked the purpose of these changes. The list of improvements I noticed:

- ATC, Automatic Transfer Power Control: merged, working (not the biggest part)
- small improvements on the spi transaction part: merged but not fully tested. I don't use an ethernet shield, so I just tested with eeprom, but I think this change was mostly made for things like the w5100 shield... if I have time I will try to make more tests on this..
- ListenMode: still in progress, but in a good way I think. Almost merged but not tested yet (it was too late!!). For the moment it compiles.
It will need some tests, I think, to see if it all works properly, and to check power consumption.. At lowpowerlab they get very few µA (on the order of 1-2 µA) in listenmode. sounds great, I hope to have the same success. I need to register on their forum, at least to thank them. not done. booo

- with this listenmode, I plan to use gammon sketch J. I already did tests and noticed better low-power consumption than the lowpowerlab sleep in mysensors, but I will check if that's still the case
- still about sleep mode, I will look at whether it's possible to improve wakeup time; I've read interesting things.
- when all this is ok, I will try to see if it's interesting to use a sort of WDT ListenMode: a watchdog done by RSSI using listenmode. but that's at the end of the list! and I'll need to see then whether it's better or not than common wdt power consumption. but it looks tempting, because a common wdt doesn't do listenmode...

Files impacted, for those interested:

- rfm69 libs updated
- rfm69_atc added
- myTransportRFM69 updated
- one cpp include in mysensor.h
- a few defines in myconfig.h
- mysensors core, transport, hw.. to add a SleepWithListenMode method

...For the moment I add my own sleep method to not break anything and keep the mysensors architecture... Some stuff; I hope I will have everything ok! But I'm very happy doing this, as I am now a lot more confident with the mysensors architecture. very cool, see you soon

very good work! especially for ATC and Listenmode. Are the code mods on Github for us to look at?

Thanks @Francois sorry, this work is still in progress, I will work on it this week. This is handled in the rfm69 lib, and lowpowerlab explains it well (copied from lowpowerlab): "The basic idea behind this extension is to allow your nodes to dial down transmission power based on the received signal strength indicator (RSSI)." It requires some small changes in the libs, but for the moment I have to check/think through a few things for the listenmode, to have everything well packaged with mysensors.. not finished yet.
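The ATC idea quoted from lowpowerlab above boils down to a small feedback loop: each ACK reports the RSSI the receiver saw, and the sender steps its power level down when the link is stronger than the target and up when it is weaker. A rough sketch of that loop, in Java for illustration only (the real implementation lives in RFM69_ATC.cpp and works in radio register units; all names and step sizes here are assumptions):

```java
public class AutoTransmitControl {
    private int powerLevel;          // current TX power step
    private final int minLevel, maxLevel;
    private final int targetRssi;    // e.g. -70 dBm, as in enableAutoPower(-70)

    public AutoTransmitControl(int initialLevel, int minLevel, int maxLevel, int targetRssi) {
        this.powerLevel = initialLevel;
        this.minLevel = minLevel;
        this.maxLevel = maxLevel;
        this.targetRssi = targetRssi;
    }

    /** Called with the RSSI reported back in each ACK: nudge power
     *  down when the link is stronger than needed, up when weaker. */
    public int onAckRssi(int reportedRssi) {
        if (reportedRssi > targetRssi && powerLevel > minLevel) {
            powerLevel--;            // signal too strong: save power
        } else if (reportedRssi < targetRssi && powerLevel < maxLevel) {
            powerLevel++;            // signal too weak: speak up
        }
        return powerLevel;
    }
}
```

Over repeated ACKs the power level settles around whatever step yields roughly the target RSSI at the receiver, which is where the "green RF" and battery savings come from.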
have you tested the code enough to release it for our use and testing?? Thanks

@lafleur no, not yet. sorry for delay. I have no time actually..and I think perhaps the lowpowerlab team is preparing few changes in their lib. so I am delaying a bit to finish other things in the meantime, and to see if they add new features. then, if nothing new, I will finish this if someone didn't beat me on this but I will try to do my best

@scalz I have it working to some extent, send me what you have and I will add your changes to what I've done to make sure I did not miss anything... Then I will post the changes to the development branch Thanks tom --at-- lafleur --.-- us

I have all this working now and have 7 devices on it to a serial gateway... Using new RFM69 driver and RFM69_ATC... It's interesting to see the power levels change as packets flow... If I can figure out how to do a PULL request, I will make it happen to 2.0b development branch... tom

sorry for delayed answer, a bit busy, I have actually no time to look at/share my experiments I found this interesting too, seeing powerlevels change. btw I didn't finish the listenmode part, I'm on other things for the moment, so that's great if you are doing the job thx

hi guys. just a little update to say that I'm back on this it's still a wip so I will share/release a bit later I have this working in mysensors dev for the moment.
- I can get rssi value.
- atc power mode.
- listenmode : an mqtt esp8266 GW is periodically waking up a proto node which is in deepsleep (the node is woken up by INT0 triggered by the radio of course). I will mainly use this for sort of remote watchdog for some of my nodes etc..

@scalz As you noticed I have just started with rfm69 and thanks for your help in the other thread. Really looking forward to your work as ATC is something I am missing in MySensors.

@alexsh1 : thx.
i know i have not PR'd this yet..it's still working locally but I have not done extended tests as I'm busy improving the mqtt wifi gw..but i will do my best

Hi Scalz, any progress here? Can't wait to use ATC and the rssi-value for my nodes Thanks for your work regards david

@Fleischtorte So I am not the only one waiting for ATC and RSSI, David.

@lafleur @scalz Hi guys, Have one of you succeeded with the PR, so we can enjoy at least the ATC feature? This is really the down side with the RFM69HW, power consumption so your work could be really helpful ! Really thank you for that

I have it all working, but my development environment and Jenkins are not in alignment, so my PR was rejected by the development team. They provide me NO help in resolving the issue... They tend to favor and support only the older NRF24L01 radio and have little interest in the newer, better performing RFM69 or RFM95 radios. So have a look at my code changes in the closed PR. It was not hard to implement. I have moved on.....

I'm sorry if your feelings got hurt. But you simply wouldn't do what the team instructed you in PR #440 to get it merged.

Thanks @lafleur, found your PR, I will try to do my own thing based on your work. Shouldn't be a big deal.

I did exactly what you asked, but it continued to fail in building the examples... In another PR, you pointed out that there were issues in building the examples under IDE 1.6.9. My feelings were not hurt, but I did NOT want to waste any more time in dealing with Jenkins without guidance.... Also there is NO guide on what Jenkins expects to see in its development environment. It's all trial and error... FYI... I have developed RFM95 and TTN transport layers for my snapshot of your code.

@frencho @lafleur sorry for this delay. i'm busy, running..sometimes i refresh myself doing some sw but I admit i spent more time on hw than looking how to cleanly do this PR. ouch but for me it has to be hobby even if I'm always rushing myself.
that's not true in fact the mysensors team is not nrf24 only as soon as i can, i will look. it could be a bit time consuming as it would be my first PR, why i always delay..boo lazy i am

The best way I think (as, now, I'm not up to date with dev branch, I will need to diff-check my stuff):
- I would start/improve PR437 with the minor changes needed. As the mysensors drivers compile ok, i would do it this way rather than taking the lowpowerlab lib first (all warnings enabled of course)
- then a separate one for ATC + or another one for ListenMode. to have a better history and not breaking anything.

so, I would separate the PR. I don't know if you tried like this (if i'm overhead), or what were your PR issues. I will look a bit later for curiosity or in case I would run in a similar issue hihi. Cool if you have it working

@lafleur thanks for your work! On the weekend i was able to upgrade the driver and enable ATC with RSSI-Report. With the instructions from PR440 it was very easy

It's great to see that you got it all working from the PR440. I hope others will find your work useful.... In my test, it works very well.....

@Fleischtorte would you accept to share the work ? I started to play with the RFM69, but didn't get to the ATC part yet. It could save me a couple of hours, and debug ^^

- BenCranston last edited by @lafleur @Fleischtorte @frencho I think I'm a bit dense today. I don't understand where the code currently sits in terms of something that could be tested. I see that PR440 was closed and referenced to go back to PR437 or open a new PR? I'd love to give the ATC code a go on my RFM69HW's. Is it being integrated into the development branch? How would I go about testing at this time? I did a quick look at the codebase in git and there is no mention of ATC in the MySensors dev branch... A quick diff of the RFM69.cpp code from Felix and MySensors shows a few differences, so they are not 100% in sync. Alas, I'm at a loss on how to apply the work already done to test..
I can "git clone" like a banshee, but beyond that I'm lost with Jenkins. Sorry, again, dense today... Any advice on how I can help is appreciated. I can test pretty easily. All of my nodes are Moteino's with RFM69HW radios. Thanks again for everyone's efforts and work to make ATC a reality in the MySensors codebase!

- Fleischtorte last edited by Fleischtorte

@frencho @BenCranston This is my implementation of PR440:

First download the new RFM69 driver from LowPowerLab and replace the files in libraries\MySensors\drivers\RFM69 (copy all and replace).

Change in file RFM69.cpp line 31-32

```
#include <RFM69.h>
#include <RFM69registers.h>
```

to

```
#include "RFM69.h"
#include "RFM69registers.h"
```

in RFM69_ATC.cpp line 32-34

```
#include <RFM69_ATC.h>
#include <RFM69.h>          // include the RFM69 library files as well
#include <RFM69registers.h>
```

to

```
#include "RFM69_ATC.h"
#include "RFM69.h"          // include the RFM69 library files as well
#include "RFM69registers.h"
```

i think this was the driver.. next was mysensors: in file libraries/MySensors/MySensor.h line 268

```
#include "drivers/RFM69/RFM69_ATC.cpp"
```

in file libraries/MySensors/core/MyTransportRFM69.cpp first in line 24

```
#include "drivers/RFM69/RFM69_ATC.h"
```

line 25-26

```
RFM69 _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM);
uint8_t _address;
```

to

```
#ifdef MY_RFM69_Enable_ATC
RFM69_ATC _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM);
#else
RFM69 _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM);
#endif
uint8_t _address;
```

and line 53 (idk if this is necessary)

```
return _radio.sendWithRetry(to,data,len);
```

to

```
return _radio.sendWithRetry(to,data,len,5);
```

btw i use not the dev version see comment from trlafleur

there is my testing node (molgan PIR):

```
/** *_NODE_ID 4
#define MY_RADIO_RFM69
#define MY_RFM69_FREQUENCY RF69_868MHZ
#define MY_RFM69_NETWORKID 121
#define MY_RFM69_ENABLE_ENCRYPTION
#define MY_RFM69_Enable_ATC
#define CHILD_ID_RSSI 7 // Id for RSSI Value

// Initialize motion message
MyMessage msg(CHILD_ID, V_TRIPPED);
// Initialize RSSI message
MyMessage rssiMsg(CHILD_ID_RSSI, V_TEXT);

void setup() {
#ifdef MY_RFM69_Enable_ATC
  _radio.enableAutoPower(-70);
  Serial.println("ATC Aktiviert");
#endif
  pinMode(DIGITAL_INPUT_SENSOR, INPUT); // sets the motion sensor digital pin as input
}

void presentation() {
  // Send the sketch version information to the gateway and Controller
  sendSketchInfo("Molgan-PIR", "1.0");
  // Register all sensors to gw (they will be created as child devices)
  present(CHILD_ID, S_DOOR);
  present(CHILD_ID_RSSI, S_INFO);
}

void loop() {
  // Read digital motion value
  boolean tripped = digitalRead(DIGITAL_INPUT_SENSOR) == HIGH;
  Serial.println(tripped);
  send(msg.set(tripped?"1":"0")); // Send tripped value to gw
  int var1 = _radio.RSSI;
  send(rssiMsg.set(var1)); // Send RSSI value to gw
  // Sleep until interrupt comes in on motion sensor. Send update every two minute.
  sleep(digitalPinToInterrupt(DIGITAL_INPUT_SENSOR), CHANGE, SLEEP_TIME);
}
```

i hope this helps im just learning mysensors & co david

- BenCranston last edited by @Fleischtorte Sweet!! This I can do. I'll make the changes and start testing tonight. thanks for the explicit help. -Ben

@Fleischtorte thanks for the details, I'll look into it, as soon as I have my RFM69 GW talking to my RFM69 node

@Fleischtorte is your code example working ? I mean do you see the RSSI going down to -70 ? Cause on this page Felix says we must use a radio.sendwithretry ?! I'm a little confused it's not working so I'm digging @BenCranston did you get it to work ?

- Fleischtorte last edited by Fleischtorte @Frencho radio.sendwithretry is used (see line 53 in libraries/MySensors/core/MyTransportRFM69.cpp) and ATC must be enabled on GW/Node (use _radio.enableAutoPower(-70); only on the node side). It seems you need continuous traffic to see the effect of ATC (i use a simple relay sketch which reports the rssi with every switch command). How is the ATC working out?
I think it is a neat feature, but am reluctant to mess with the core MySensors libraries.

hi , it works but not very stable... after a while the sensors go offline so i reverted to the stable version of MySensors. how is the status now?? Is the ATC working now?

@cablesky it is not included in MySensors yet as it breaks compatibility with the current packet protocol. It's planned in an upcoming release, and working well
https://forum.mysensors.org/topic/3483/rfm69-and-atc
Application Security

How will you make sure to develop Java/J2EE projects/applications secured from hackers? Justify how Java projects are more secure than other languages.

Monika Gupta - May 7th, 2012
Java projects are more secure than other languages because after compilation of a java program byte code is generated. And byte code is unreadable code. Any person can't read the byte code, so the project becomes more secure.

suvojyotysaha - Apr 22nd, 2012
J2EE applications can be made secure by implementing: 1. Authentication 2. Authorization 3. Encrypt the information by making the transport guarantee as Confidential. Authentication verifies if the us......

Explain the concept of design in a project and its life cycle

amreenkhader - Jan 20th, 2012
Projects need to be designed carefully and must be meaningful according to the readers and learners..They must be planned in such a way that they can be followed easily; a project should contain meaningful data related to the prescribed topic

Ashwinshenoy7 - May 8th, 2011
Let me start with an example. Consider a development project. Initiation: Requirements gathering. Planning: From scratch to the end product based on the needs and requirements provided. Design: Once t...

What is DAO?

amreenkhader - Jan 20th, 2012
DAO decides to receive the appropriate application from java and implementing it at runtime...during execution..

chetan.nellekeri - Apr 18th, 2011
DAO is Data Access Object. This pattern is used in most projects throughout the industry. This class is the main medium between JAVA and JDBC.

how to clear screen using c# console application...? like clrscr() in c++..!

Anuj Malik - Oct 5th, 2012
Console.Clear(); it works absolutely in C#
"c# using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication9 { class stud...

amreenkhader - Jan 20th, 2012
To clear the screen we can use the clrscr() command followed by a ; to clear the input given to the program

what is the difference between java and java script?

amreenkhader - Jan 20th, 2012
Java is a basic object oriented programming language and JavaScript can be used in style sheet making and in HTML documentation

mani.sivapuram - Feb 23rd, 2011
java-script is also much the same as java. But java is used in server side scripting and javascript is used in client side scripting. I will say a real time scenario here consider the normal user...

Why do threads block on I/O

Threads block on I/O (that is, enter the waiting state) so that other threads may execute while the I/O operation is performed.

amreenkhader - Jan 20th, 2012
A thread gets blocked because when two threads try to access a single resource simultaneously the threads get blocked, and when one thread is using the resource the other should be put to the sleep state b...

saravanakumar - Aug 27th, 2005
Sir, i want one doubt, how to open a new window from a click on the java frame using event handling. please give the idea to me. i know that it's a comment page. But u have using the doubt page. thanking u.
http://www.geekinterview.com/user_answers/659485
The main theme in this Thinking XML column has been semantic transparency, to clarify the meaning of constructs in XML schemata and instances. I've talked about bottom-up approaches to semantic transparency, where you define terms and concepts at the discrete level (data dictionaries, in effect), independently of their documents and schemata, and then you can apply these broadly. Industries have built up many initiatives for bottom-up semantic transparency, some of which have survived better than others. In this column I've discussed ISO Basic Semantics Register (BSR), RosettaNet Dictionaries, ebXML Core Components, Universal Data Element Framework (UDEF), ISO 15022 in the financial services space, and more. But no major industry initiatives tackle terminology to drive schema development in most applications of XML. More often you need to define your own specialized data dictionaries. More and more, architects realize that data dictionaries alone are not enough to support richer information integration. In XML documents you need to refer to people, places, and things with inter-relationships ranging from general to more specific, part and kind, and synonym and antonym. You need to describe connections to geographical points, to key times and dates, to policies, and to business rules. Sometimes you expand your scope beyond your specialized information space toward larger industry conventions. These details are why Semantic Web technology is such a good fit for supporting XML development, and it makes sense to start with the most modest, simplest Semantic Web technologies. Simple Knowledge Organization System (SKOS) is just such a technology, presently in the last call stage of the Working Draft process, but already well understood, implemented, and discussed. 
SKOS has unfortunately lost some of its simplicity in the latest drafts, as its committee ties it to the far more complex Web Ontology Language (OWL), but it's still quite useful if you ignore some of the more arcane flourishes. It at least provides the word-relationship aspect of connecting basic meaning relationships of terms, and this is a great first step for enriching XML schemata. From concept to expression It's best to capture concepts as early as you can in the development process. During the analysis for a new application or integration of existing ones, you should write down the main concepts, preserving as much of the original context as possible. The Turtle syntax for Resource Description Framework (RDF) is a useful way to capture this information in a format that non-technical users can review. Such close cooperation between the developers and the business interest is key to bolster the value of information managed in XML. In this article, I'll use the scenario of a snowboard manufacturer, Fluffy Boards, developing a format to capture marketing information about a new model of board called the Cumulus. Listing 1 is a subset of a SKOS definition of relevant concepts in Turtle syntax. Listing 1. SKOS definition of key concepts for snowboard marketing information in Turtle format See Resources for full introductions to SKOS, but briefly, the first two lines are Turtle versions of namespaces declarations. The f namespace is proprietary to Fluffy Boards. The next block defines the concept f:product, specifying a preferred label as well as optional alternates. skos:definition is a brief but precise description of the concept. The skos:broader property is a way to organize related concepts. "Person" is a broader concept than "boy", and "boy" a narrower concept than "person". You will generally deal with XML versions of SKOS vocabularies in schemata. Once you have a Turtle version, you can use tools to automatically convert it to RDF/XML (and vice versa). 
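The listings themselves did not survive in this copy, but from the description above a Listing 1-style Turtle fragment would look roughly like this (a sketch only: the fluffyboards.example namespace URI and the label/definition strings are my assumptions, not the article's actual text):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix f: <http://fluffyboards.example/vocab#> .

f:product a skos:Concept ;
    skos:prefLabel "product" ;
    skos:altLabel "merchandise", "goods" ;
    skos:definition "An item manufactured and offered for sale by Fluffy Boards." .

f:snowboard a skos:Concept ;
    skos:prefLabel "snowboard" ;
    skos:definition "A board for gliding on snow, ridden standing up." ;
    skos:broader f:product .
```

The skos:broader triple records that "product" is a broader concept than "snowboard", mirroring the person/boy example in the text.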
Listing 2 is the XML form of Listing 1. Listing 2. SKOS definition of key concepts for snowboard marketing information in RDF/XML form Once you have the conceptual base expressed in a semi-formal language such as SKOS, it's a matter of your own good XML design sense and creativity to incorporate it into a schema. Suppose one use of XML in the Fluffy Boards marketing department is to store reviews for each board. They might come up with a W3C XML Schema (WXS) definition as in Listing 3. Listing 3.The W3C XML Schema definition The constructs that come directly from the SKOS are in bold font. Notice that I import xml.xsd so that I can use the xml:base attribute. Besides SKOS annotations, another useful Semantic Web practice in XML schema design is to use URIs for IDs. XML Base allows you to unambiguously abbreviate such URIs. The snowboard element is attached to the abstract concept of the same name through expression within the schema annotation. I also lift the SKOS concept description into the xs:documentation element, which helps smooth integration with WXS-aware tools. Notice that there isn't always a one-to-one correlation between a SKOS concept and a construct in the XML vocabulary. Some of the schema elements have no attached concepts, and some of the concepts don't appear in the schema. The review element attaches to the concept of the same name, but its source child attaches to the customer concept, and in this way you have an implicit rule that sources of reviews are customers. I recommend also explicitly expressing such rules from concept bases, but you get a sense from this example how expressive it can be to just attach concepts to schemata. Listing 4 is a sample instance of the schema in Listing 3. Listing 4. Sample instance based on schema in Listing 3 Automating the connection One problem with the approach in the previous section is that the conceptual information is duplicated in schemata, which can lead to skew as one or the other is updated. 
A better way is to use links from the schema to the conceptual information. You can always automate merging this information whenever that's useful. Listing 5 is a variation on Listing 3 using links to the SKOS rather than copying constructs. Listing 5. WXS that uses links into a SKOS document Notice my improvised use of owl:imports to formalize the incorporation of the SKOS concepts into the schema. To illustrate how easy it is to traverse these links and automate generating the fuller form in Listing 5 from the shorter form in Listing 3, I wrote a little XSLT to do the job. Listing 6 isn't meant to be robust (it won't handle arcane RDF constructs, nor even simple ones such as rdf:ID and xml:base), and if you do end up using SKOS a lot in XML schema design, you'll want to have some proper RDF tools handy. But Listing 6 illustrates the idea, and handles the examples in this article. Listing 6. XSLT to expand simple SKOS references in WXS Attaching SKOS concepts to constructs in richer schema languages such as RELAX NG and Schematron (both of which I personally prefer) is even easier than for WXS. In such cases you can put the SKOS elements in-line wherever it makes sense, thanks to its separate namespace. What you gain, regardless of schema language, is not a magic wand that suddenly makes all XML documents transparent to every person and application. What you get is what I call an anchor in this column—a hand-hold that gives people the clues they need to direct integration and to improve data quality. Schema annotations connected to overall information sharing tools such as wikis make it possible for all the people involved in an interest to collaborate and contribute regardless of their technical ability. SKOS is a good language to express the technical substance of such interchange. In this article, you learned how easy it is to attach domain concepts to XML schema definitions. 
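The linking variant described for Listing 5 can be sketched in WXS along these lines (again an assumption-laden sketch rather than the article's actual listing; the imported resource URI and element name merely follow the snowboard scenario):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:owl="http://www.w3.org/2002/07/owl#">
  <xs:annotation>
    <xs:appinfo>
      <!-- Link to the concept base rather than copying it; an XSLT pass
           like the article's Listing 6 can expand these references in place. -->
      <owl:imports rdf:resource="http://fluffyboards.example/concepts.rdf"/>
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="snowboard" type="xs:string"/>
</xs:schema>
```

Because SKOS lives in its own namespace, the same trick drops into RELAX NG or Schematron annotations without any schema-language machinery at all.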
Learn - The W3C SKOS Simple Knowledge Organization System Primer: Learn about SKOS from the main source. - SKOS: Keep in touch with developments in knowledge organization systems (KOS) such as thesauri, classification schemes, subject heading systems and taxonomies within the framework of the Semantic Web. - Subject classification with DITA and SKOS (Erik Hennum, Robert Anderson, and Colin Bird; developerWorks; October 2005): Learn about SKOS in the context of DITA, which can be used for organization of large content libraries. Be careful because this article uses a slightly out of date version of SKOS. - Use data dictionary links for XML and Web services schemata (Uche Ogbuji, developerWorks May 2004): Learn more about the basic technique elaborated in this article, including its usage in RELAX NG and Schematron in the author's tip. - Create a maintainable extensible XML format ( Eric de Jonge and Sally Slack, developerWorks, August 2008): Learn the basic use of WXS schema annotations. - New to XML page: Check out the XML zone's updated resource central for XML. Readers of this column may be too advanced for this page, but it's a great place to get your colleagues started. All XML developers can benefit from the XML zone's coverage of many XML standards. - Review related, earlier installments of this column by Uche Ogbuji: - State of the art in XML modeling (developerWorks, March 2005): Discover what developers need to know about the various approaches to semantic transparency. - Schema standardization for top-down semantic transparency (developerWorks, April 2005): Explore the state of the art in XML modeling that includes reuse of models designed by others. - Schema annotation for bottom-up semantic transparency (developerWorks, May 2005):Push schemata beyond syntax into semantics. - IBM XML certification: Find out how you can become an IBM-Certified Developer in XML and related technologies. 
- XML technical library: See the developerWorks XML Zone for a wide range of technical articles and tips, tutorials, standards, and IBM Redbooks, including previous installments of the Thinking XML column. - developerWorks technical events and webcasts: Stay current with technology in these sessions. - The technology bookstore: Browse for books on these and other technical topics. - developerWorks podcasts: Listen to interesting interviews and discussions for software developers. Get products and technologies - RDF Validator and Converter: Use Joshua Tauberer's tool for quick interchange between RDF formats. - the Thinking XML forum: Post your comments on this article, or any others in this column. Uche Ogbuji is a partner at Zepheira, LLC, a solutions firm specializing in the next generation of Web technologies. Mr. Ogbuji is lead developer of 4Suite, an open source platform for XML, RDF, and knowledge-management applications, and.
http://www.ibm.com/developerworks/xml/library/x-think42/
getaddrinfo now supports glibc-specific International Domain Name (IDN) extension flags: AI_IDN, AI_CANONIDN, AI_IDN_ALLOW_UNASSIGNED, AI_IDN_USE_STD3_ASCII_RULES. getnameinfo now supports glibc-specific International Domain Name (IDN) extension flags: NI_IDN, NI_IDN_ALLOW_UNASSIGNED, NI_IDN_USE_STD3_ASCII_RULES. Slightly improve randomness of /dev/random emulation. Allow to use advisory locking on any device. POSIX fcntl and lockf locking works with any device, BSD flock locking only with devices backed by an OS handle. Right now this excludes console windows on pre Windows 8, as well as almost all virtual files under /proc from BSD flock locking. The header /usr/include/exceptions.h, containing implementation details for 32 bit Windows' exception handling only, has been removed. Preliminary, experimental support of the posix_spawn family of functions. New associated header /usr/include/spawn.h. Change magic number associated with process information block so that 32-bit Cygwin processes don't try to interpret 64-bit information and vice-versa. Redefine content of mtget tape info struct to allow fetching the number of partitions on a tape. Added CYGWIN environment variable keyword "wincmdln" which causes Cygwin to send the full windows command line to any subprocesses.. Drop support for Windows 2000 and Windows XP pre-SP3. Add support for building a 64 bit version of Cygwin on x86_64 natively. Add support for creating native NTFS symlinks starting with Windows Vista by setting the CYGWIN=winsymlinks:native or CYGWIN=winsymlinks:nativestrict option. Add support for AFS filesystem. Preliminary support for mandatory locking via fcntl/flock/lockf, using Windows locking semantics. New F_LCK_MANDATORY fcntl command. New APIs: __b64_ntop, __b64_pton, arc4random, arc4random_addrandom, arc4random_buf, arc4random_stir, arc4random_uniform. Added Windows console cursor appearance support. 
Show/Hide Cursor mode (DECTCEM): "ESC[?25h" / "ESC[?25l" Set cursor style (DECSCUSR): "ESC[n q" (note the space before the q); where n is 0, 1, 2 for block cursor, 3, 4 for underline cursor (all disregarding blinking mode), or > 4 to set the cursor height to a percentage of the cell height. For performance reasons, Cygwin does not try to create sparse files automatically anymore, unless you use the new "sparse" mount option. New API: cfsetspeed. Support the "e" flag to fopen(3). This is a Glibc extension which allows to fopen the file with the O_CLOEXEC flag set. Support the "x" flag to fopen(3). This is a Glibc/C11 extension which allows to open the file with the O_EXCL flag set. New API: getmntent_r, memrchr. Recognize ReFS filesystem. CYGWIN=pipe_byte option now forces the opening of pipes in byte mode rather than message mode. Add mouse reporting modes 1005, 1006 and 1015 to console window. mkpasswd and mkgroup now try to print an entry for the TrustedInstaller account existing since Windows Vista/Server 2008. Terminal typeahead when switching from canonical to non-canonical mode is now properly flushed. Cygwin now automatically populates the /dev directory with all existing POSIX devices. Add virtual /proc/PID/mountinfo file. flock now additionally supports the following scenario, which requires propagating locks to the parent process:
(
  flock -n 9 || exit 1
  # ... commands executed under lock ...
) 9>/var/lock/mylockfile
Only propagation to the direct parent process is supported so far, not to grand parents or sibling processes. Add a "detect_bloda" setting for the CYGWIN environment variable to help finding potential BLODAs. New pldd command for listing DLLs loaded by a process. New API: scandirat. Change the way remote shares mapped to drive letters are recognized when creating the cygdrive directory. If Windows claims the drive is unavailable, don't show it in the cygdrive directory listing. Raise default stacksize of pthreads from 512K to 1 Meg.
It can still be changed using the pthread_attr_setstacksize call. Drop support for Windows NT4. The CYGWIN environment variable options "envcache", "strip_title", "title", "tty", and "upcaseenv" have been removed. If the executable (and the system) is large address aware, the application heap will be placed in the large memory area. The peflags tool from the rebase package can be used to set the large address awareness flag in the executable file header. The registry setting "heap_chunk_in_mb" has been removed, in favor of a new per-executable setting in the executable file header which can be set using the peflags tool. See the section called “Changing Cygwin's Maximum Memory” for more information. The CYGWIN=tty mode using pipes to communicate with the console in a pseudo tty-like mode has been removed. Either just use the normal Windows console as is, or use a terminal application like mintty. New getconf command for querying confstr(3), pathconf(3), sysconf(3), and limits.h configuration. New tzset utility to generate a POSIX-compatible TZ environment variable from the Windows timezone settings. The passwd command now allows an administrator to use the -R command for other user accounts: passwd -R username. Pthread spinlocks. New APIs: pthread_spin_destroy, pthread_spin_init, pthread_spin_lock, pthread_spin_trylock, pthread_spin_unlock. Pthread stack address management. New APIs: pthread_attr_getstack, pthread_attr_getstackaddr, pthread_attr_getguardsize, pthread_attr_setstack, pthread_attr_setstackaddr, pthread_attr_setguardsize, pthread_getattr_np. POSIX Clock Selection option. New APIs: clock_nanosleep, pthread_condattr_getclock, pthread_condattr_setclock. clock_gettime(3) and clock_getres(3) accept per-process and per-thread CPU-time clocks, including CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID. New APIs: clock_getcpuclockid, pthread_getcpuclockid. GNU/glibc error.h error reporting functions. New APIs: error, error_at_line. 
- New exports: error_message_count, error_one_per_line, error_print_progname. Also, perror and strerror_r no longer clobber strerror storage.
- C99 <tgmath.h> type-generic macros.
- /proc/loadavg now shows the number of currently running processes and the total number of processes.
- Added /proc/devices and /proc/misc, which list supported device types and their device numbers.
- Added /proc/swaps, which shows the location and size of Windows paging file(s).
- Added /proc/sysvipc/msg, /proc/sysvipc/sem, and /proc/sysvipc/shm, which provide information about System V IPC message queues, semaphores, and shared memory.
- /proc/version now shows the username of whoever compiled the Cygwin DLL as well as the version of GCC used when compiling.
- dlopen now supports the Glibc-specific RTLD_NODELETE and RTLD_NOOPEN flags.
- The printf(3) and wprintf(3) families of functions now handle the %m conversion flag.
- Other new API: clock_settime, __fpurge, getgrouplist, get_current_dir_name, getpt, ppoll, psiginfo, psignal, ptsname_r, sys_siglist, pthread_setschedprio, pthread_sigqueue, sysinfo.
- Drop support for Windows NT4 prior to Service Pack 4.
- Reinstantiate Cygwin's ability to delete an empty directory which is the current working directory of the same or another process. Same for any other empty directory which has been opened by the same or another process.
- Cygwin now ships the C standard library fenv.h header file and implements the related APIs (including GNU/glibc extensions): feclearexcept, fedisableexcept, feenableexcept, fegetenv, fegetexcept, fegetexceptflag, fegetprec, fegetround, feholdexcept, feraiseexcept, fesetenv, fesetexceptflag, fesetprec, fesetround, fetestexcept, feupdateenv, and predefines both default and no-mask FP environments. See the GNU C Library manual for full details of this functionality.
- Support for the C99 complex functions, except for the "long double" implementations. New APIs: cacos, cacosf, cacosh, cacoshf, carg, cargf, casin, casinf, casinh, casinhf, catan, catanf, catanh, catanhf, ccos, ccosf, ccosh, ccoshf, cexp, cexpf, cimag, cimagf, clog, clogf, conj, conjf, cpow, cpowf, cproj, cprojf, creal, crealf, csin, csinf, csinh, csinhf, csqrt, csqrtf, ctan, ctanf, ctanh, ctanhf.
- Fix the width of "CJK Ambiguous Width" characters to 1 for singlebyte charsets and 2 for East Asian multibyte charsets. (For UTF-8, it remains dependent on the specified language, and the "@cjknarrow" locale modifier can still be used to force width 1.)
- The strerror_r interface now has two flavors; if _GNU_SOURCE is defined, it retains the previous behavior of returning char * (but the result is now guaranteed to be NUL-terminated); otherwise it now obeys POSIX semantics of returning int.
- /proc/sys now allows unfiltered access to the native NT namespace. Access restrictions still apply. Direct device access via /proc/sys is not yet supported. File system access via block devices works. For instance (note the trailing slash!): bash$ cd /proc/sys/Device/HarddiskVolumeShadowCopy1/
- Other new APIs: llround, llroundf, madvise, pthread_yield. Export program_invocation_name, program_invocation_short_name.
- Support TIOCGPGRP, TIOCSPGRP ioctls.
- Partially revert the 1.7.6 change to set the Win32 current working directory (CWD) always to an invalid directory, since it breaks backward compatibility too much. The Cygwin CWD and the Win32 CWD are now kept in sync again, unless the Cygwin CWD is not usable as a Win32 CWD. See the reworked section called "Using the Win32 file API in Cygwin applications" for details.
- Make sure to follow the Microsoft security advisory concerning DLL hijacking. See the Microsoft Security Advisory (2269637) "Insecure Library Loading Could Allow Remote Code Execution" for details.
- Allow linking against -lbinmode instead of /lib/binmode.o. Same for -ltextmode, -ltextreadmode and -lautomode. See the section called "Programming" for details.
- Add new mount options "dos" and "ihash" to allow overriding Cygwin's default behaviour on broken filesystems not recognized by Cygwin.
- Add new mount option "bind" to allow remounting parts of the POSIX file hierarchy somewhere else.
- New interfaces mkostemp(3) and mkostemps(3) are added.
- New virtual file /proc/filesystems.
- clock_gettime(3) and clock_getres(3) accept CLOCK_MONOTONIC.
- DEPRECATED with 1.7.7: Cygwin handles the current working directory entirely on its own. The Win32 current working directory is set to an invalid path to be out of the way. [...]
- Support for DEC Backarrow Key Mode escape sequences (ESC [ ? 67 h, ESC [ ? 67 l) in the Windows console.
- Support for GB2312/EUC-CN. These charsets are implemented as aliases to GBK. GB2312 is now the default charset name for the locales zh_CN and zh_SG, just as on Linux.
- Modification and access timestamps of devices reflect the current time.
- Support the various locale modifiers to switch charsets as on Linux.
- Default charset in the "C" or "POSIX" locale has been changed back from UTF-8 to ASCII, to avoid problems with applications expecting a singlebyte charset in the "C"/"POSIX" locale.
- Still use(2), dup3(2), and pipe2(2).
- Windows 95, 98 and Me are not supported anymore. The new Cygwin 1.7 DLL will not run on any of these systems.
- Add support for Windows 7 and Windows Server 2008 R2.
- Mount points are no longer stored in the registry. Use /etc/fstab and /etc/fstab.d/$USER instead. Mount points created with mount(1) are only local to the current session and disappear when the last Cygwin process in the session exits.
- Cygwin creates the mount points for /, /usr/bin, and /usr/lib automatically from its own position on the disk. They don't have to be specified in /etc/fstab.
- If a filename cannot be represented in the current character set, the character will be converted to a sequence Ctrl-X + UTF-8 representation of the character. This allows access to all files, even those not having a valid representation of their filename in the current character set. To always have a valid string, use the UTF-8 charset by setting the environment variable $LANG, $LC_ALL, or $LC_CTYPE to a valid POSIX value, such as "en_US.UTF-8".
- PATH_MAX is now 4096. Internally, path names can be as long as the underlying OS can handle (32K).
- struct dirent now supports d_type, filled out with DT_REG or DT_DIR. All other file types return DT_UNKNOWN for performance reasons.
- The CYGWIN environment variable options "ntsec" and "smbntsec" have been replaced by the per-mount option "acl"/"noacl".
- The CYGWIN environment variable option "ntea" has been removed without substitute.
- The CYGWIN environment variable option "check_case" has been removed in favor of real case-sensitivity on file systems supporting it.
- Creating filenames with the special DOS characters '"', '*', ':', '<', '>', '|' is supported.
- Creating files with special DOS device filename components ("aux", "nul", "prn") is supported.
- File names are case sensitive if the OS and the underlying file system support it. Works on NTFS and NFS. Does not work on FAT and Samba shares. Requires changing a registry key (see the User's Guide). Can be switched off on a per-mount basis.
- Due to the above changes, managed mounts have been removed.
- Incoming DOS paths are always handled case-insensitively and get no POSIX permissions, as if they were mounted with the noacl,posix=0 mount flags.
- unlink(2) and rmdir(2) try very hard to remove files/directories even if they are currently accessed or locked. This is done by utilizing the hidden recycle bin directories and marking the files for deletion.
- rename(2) rewritten to be more POSIX conformant.
- access(2) now performs checks using the real user ID, as required by POSIX; the old behavior of querying based on the effective user ID is available through the new faccessat(2) and euidaccess(2) APIs.
- New open(2) flags O_DIRECTORY, O_EXEC and O_SEARCH.
- Make the "plain file with SYSTEM attribute set" style symlink the default again when creating symlinks. Only create Windows shortcut style symlinks if CYGWIN=winsymlinks is set in the environment.
- Symlinks now use UTF-16 encoding for the target filename for better internationalization support. Cygwin 1.7 can read all old style symlinks, but the new style is not compatible with older Cygwin releases.
- Handle NTFS native symlinks available since Vista/2008 as symlinks (but don't create Vista/2008 symlinks due to unfortunate OS restrictions).
- Recognize NFS shares and handle them using native mechanisms. Recognize and create real symlinks on NFS shares. Get correct stat(2) information and set real mode bits on open(2), mkdir(2) and chmod(2).
- Recognize MVFS and work around problems manipulating metadata and handling DOS attributes.
- Recognize NetApp Data ONTAP drives and fix inode number handling.
- Recognize Samba versions beginning with Samba 3.0.28a using the new extended version information negotiated with the Samba developers.
- Stop faking hardlinks by copying the file on filesystems which don't support hardlinks natively (FAT, FAT32, etc.). Just return an error instead, just like Linux.
- List servers of all accessible domains and workgroups in // instead of just the servers in their own domain/workgroup.
- New openat family of functions: openat, faccessat, fchmodat, fchownat, fstatat, futimesat, linkat, mkdirat, mkfifoat, mknodat, readlinkat, renameat, symlinkat, unlinkat.
- Other new APIs: posix_fadvise, posix_fallocate, funopen, fopencookie, open_memstream, open_wmemstream, fmemopen, fdopendir, fpurge, mkstemps, eaccess, euidaccess, canonicalize_file_name, fexecve, execvpe.
- New implementation for blocking sockets and select on sockets which is supposed to allow POSIX-compatible sharing of sockets between threads and processes.
- send/sendto/sendmsg now send data in 64K chunks to circumvent an internal buffer problem in WinSock (KB 201213).
- New send/recv option MSG_DONTWAIT.
- IPv6 support. New APIs: gethostbyname2, iruserok_sa, rcmd_af, rresvport_af, getifaddrs, freeifaddrs, if_nametoindex, if_indextoname, if_nameindex, if_freenameindex. Add /proc/net/if_inet6.
- Reworked pipe implementation which uses overlapped IO to create more reliable interruptible pipes and fifos. The CYGWIN environment variable option "binmode" has been removed.
- Improved fifo handling by using native Windows named pipes.
- Detect when a stdin/stdout which looks like a pipe is really a tty. Among other things, this allows a debugged application to recognize that it is using the same tty as the debugger.
- Support UTF-8 in the console window.
- In the console window the backspace key now emits DEL (0x7f) instead of BS (0x08), and Alt-Backspace emits ESC-DEL (0x1b,0x7f) instead of DEL (0x7f), same as the Linux console and xterm. Control-Space now emits an ASCII NUL (0x0) character.
- Support up to 64 serial interfaces using /dev/ttyS0 - /dev/ttyS63.
- Support up to 128 raw disk drives /dev/sda - /dev/sddx.
- New API: cfmakeraw, get_avphys_pages, get_nprocs, get_nprocs_conf, get_phys_pages, posix_openpt.
- The default locale in the absence of one of the aforementioned environment variables is "C.UTF-8". Supported charsets include "KOI8-R", "KOI8-U", "SJIS", "GBK", "eucJP", "eucKR", and "Big5".
- Allow multiple concurrent read locks per thread for pthread_rwlock_t.
- Support for WCONTINUED, WIFCONTINUED() added to waitpid and wait4.
- New APIs: _Exit, confstr, insque, remque, sys_sigabbrev, posix_madvise, posix_memalign, reallocf, exp10, exp10f, pow10, pow10f, lrint, lrintf, rint, rintf, llrint, llrintf, llrintl, lrintl, rintl, mbsnrtowcs, strcasestr, stpcpy, stpncpy, wcpcpy, wcpncpy, wcsnlen, wcsnrtombs, wcsftime, wcstod, wcstof, wcstoimax, wcstok, wcstol, wcstoll, wcstoul, wcstoull, wcstoumax, wcsxfrm, wcscasecmp, wcsncasecmp, fgetwc, fgetws, fputwc, fputws, fwide, getwc, getwchar, putwc, putwchar, ungetwc, asnprintf, dprintf, vasnprintf, vdprintf, wprintf, fwprintf, swprintf, vwprintf, vfwprintf, vswprintf, wscanf, fwscanf, swscanf, vwscanf, vfwscanf, vswscanf.
- Getting a domain user's groups is hopefully more bulletproof now.
- Cygwin now comes with a real LSA authentication package. This must be manually installed by a privileged user using the /bin/cyglsa-config script. The advantages and disadvantages are noted in the User's Guide.
- Cygwin now allows storage and use of user passwords in a hidden area of the registry. This is tried first when Cygwin is called by privileged processes to switch the user context. This allows, for instance, ssh public key sessions with full network credentials to access shares on other machines.
- New options have been added to the mkpasswd and mkgroup tools to ease use in multi-machine and multi-domain environments. The existing options have slightly changed behaviour.
- New ldd utility, similar to Linux.
- New link libraries libdl.a, libresolv.a, librt.a.
- This warning may be disabled via the new CYGWIN=nodosfilewarning setting.
- The CYGWIN environment variable option "server" has been removed. Cygwin automatically uses cygserver if it's available.
- Allow an environment of arbitrary size instead of a maximum of 32K.
- Don't force an uppercase environment when started from a non-Cygwin process. Except for certain Windows and POSIX variables which are always uppercased, preserve environment case. Switch back to the old behaviour with the new CYGWIN=upcaseenv setting.
- Detect and report a missing DLL on process startup.
- Add /proc/registry32 and /proc/registry64 paths to access the 32 bit and 64 bit registry on 64 bit systems.
- Add the ability to distinguish registry keys and registry values with the same name in the same registry subtree. The key is called "foo" and the value will be called "foo%val" in this case.
- Align /proc/cpuinfo more closely to Linux content.
- Add /proc/$PID/mounts entries and a symlink /proc/mounts pointing to /proc/self/mounts as on Linux.
- Optimized strstr and memmem implementation.
- Remove backwards compatibility with old signal masks. (Some *very* old programs which use signal masks may no longer work correctly.)
- Cygwin now exports wrapper functions for the libstdc++ operators new and delete, to support the toolchain in implementing full C++ standards conformance when working with shared libraries.
- Different Cygwin installations in different paths can be run in parallel without knowing of each other. The path of the Cygwin DLL used in a process is a key used when creating IPC objects, so different Cygwin DLLs run in different namespaces.
- Each Cygwin DLL stores its path and installation key in the registry. This allows troubleshooting of problems which could be a result of having multiple concurrent Cygwin installations.
http://sourceware.org/cygwin/cygwin-ug-net/ov-new1.7.html
import from svn for esys-escript failing

The automatic lp import for the esys-finley project started failing on 10th of May. Example fail log:

2017-05-13 00:10:26 INFO Starting job.
2017-05-13 00:10:26 INFO Getting exising bzr branch from central store.
2017-05-13 00:10:26 INFO [chan bzr SocketAsChannel
2017-05-13 00:10:26 INFO 35 bytes transferred
2017-05-13 00:10:51 INFO fetching svn revision info 1/3
2017-05-13 00:10:51 INFO
2017-05-13 00:10:51 INFO Importing branch.
2017-05-13 00:10:52 INFO determining revisions to fetch 0/6559
2017-05-13 00:10:52 INFO
2017-05-13 00:10:52 INFO copying revision 0/2
Import failed: Traceback (most recent call last): Failure: twisted.

Question information
- Language: English
- Status: Answered
- Assignee: No assignee
- Last query: 2017-05-16
- Last reply: 2017-05-17

This was due to a bug in the serf library used by Subversion which we ran into after upgrading the code import machines from Ubuntu 12.04 to 16.04. It was fixed in this upstream commit: https://svn.apache.org/viewvc?view=revision&revision=1712790 I cherry-picked that patch, and your import is working fine now.
https://answers.launchpad.net/launchpad/+question/632656
This is a reader-contributed article. Technology has changed dramatically in the last decade. OpenSSH is one of the best projects; it allows you to control a remote Linux / UNIX server using command line or GUI tools. Do you miss GUI server management tools such as Debian's network-admin or RedHat/CentOS system-config-* tools/utilities while administrating a Linux server? Do you want to run GUI admin tools on a remote Linux server and get the display on your local desktop or laptop X server? I have been using OpenSSH X11 forwarding and it works very well with DSL / ADSL / cable connections.

Sample setup

Our setup is as follows:
#1: Remote RedHat Enterprise / Debian Linux server. IP address: 203.199.92.106
#2: My IBM laptop running Ubuntu Linux connected via a high-speed ADSL connection. IP address: Dynamic

Step #1: Server Setup

You must have the OpenSSH server installed. Open the SSHD configuration file /etc/ssh/sshd_config:
$ sudo vi /etc/ssh/sshd_config
Turn on X11 forwarding by setting the X11Forwarding parameter to yes:
X11Forwarding yes
Save and close the file. Restart the OpenSSH server so the changes take effect:
$ sudo /etc/init.d/ssh restart
If you are using a RedHat server, use:
# /etc/init.d/sshd restart
Logout and close the ssh connection.

Step #2: Running a command remotely

Debian Linux has the Network Administration Tool. It allows you to specify the way your system connects to other computers and to the internet. Let us run this tool from the laptop and make some changes to the remote server's networking. Open an X terminal and type the command:
$ ssh -X 203.199.92.106 /usr/bin/network-admin
Run the RedHat Linux system-config-httpd tool (Apache server configuration tool):
$ ssh -X 203.199.92.106 /usr/bin/system-config-httpd
[X11 Forwarding in action]
Within a few seconds (depending upon your network speed) you should see a network-admin or system-config-httpd GUI locally.
The client desktop/laptop does not need any extra configuration 🙂 Another option is to connect to the remote server and use X port forwarding:
$ ssh -X 203.199.92.106
Make sure you replace the IP address 203.199.92.106 with the actual hostname or IP address.

A note for Apple OS X Tiger users

Instead of the -X option, use the -Y option:
$ ssh -Y 203.199.92.106
$ ssh -Y 203.199.92.106 system-config-network

Optional: Turn on Automatic X11 Forwarding

You can turn on automatic forwarding by adding the following two lines to the local OpenSSH client configuration file /etc/ssh/ssh_config or ~/.ssh/config:
$ sudo vi /etc/ssh/ssh_config
Set the configuration parameter:
Host *
ForwardX11 yes
Save the changes. You can run almost any GUI program locally 🙂

Reference Links:
- Refer to the OpenSSH man pages (man sshd, sshd_config, ssh_config)
- VNC – an alternative to X forwarding (VNC over SSH2 – A TightVNC Tutorial)
- OpenSSH project home page

About the author: Rocky Jr. is an engineer with VSNL – a leading ISP / global telecom company in India – and a good friend of nixCraft.

Forget plain old X11 forwarding. Check out NoMachine's NX software. It runs over SSH as well, but the responsiveness of the NX protocol is unlike any other remote GUI. Don't be misled, you can get it all for free if you click the right links on their page =)

This technique can also be used to serve up Linux applications on a Windows desktop. Brilliant for me – I have the best (and worst) of both worlds. You need to install an X server on the Windows machine – read: Cygwin. And use something like PuTTY to connect up to the server and run your commands from there. I'm still having a problem though – I'm still having to run xhost +[Server Address] on my desktop for some strange reason, despite all the instructions being followed.

If you use -XC instead of just -X then the performance will be far better. C is for Compression.

@Nevyn: "xhost +ServerIP" is dangerous on potentially hostile networks since an IP can be forged.
You’re probably having an authentication problem. Check the man page of “xauth”, basically you need to extract the authentication token on the X server (i.e. SSH client) and merge it into the XAuth database on the X client side (i.e. SSH server side). “A note for apple OS X tiger users” Say what I need to be able to make this work on iBook? I got this error: /usr/share/system-config-network/netconfpkg/NC_functions.py:30: DeprecationWarning: rhpl.log is deprecated and will be removed; use python’s logging instead import rhpl.log TERM environment variable needs set. But why not just use TightVNC? You might also have to throw the “ForwardAgent yes” setting in /etc/ssh/ssh_config in order for this to work. I’ve run into that. Good tip, though. I use X11 forwarding at work all the time, and it’s very handy. Way better than the old telnet/xhost way. Could you do an article on installing X on a windows machine. That’s all my university has. Better yet, X on a disk-on-key, like Portable Apps, would be great. Thanks. It’s best not to enable X11Forwarding for all connections (i.e. don’t enable it in /etc/ssh/ssh_config)… it used to be the default and there’s a very good reason why it is no longer. It’s dangerous. It turned out that a remote attacker could sniff your keyboard and mouse movements (or possibly even more horrible things) just because you ssh’d onto a box that they had cracked. In short, it turned into a great way of cracking the next host up the chain. I generally define two connections in .ssh/config for any machine, say foo.example.com — one connection I call “foo” and the other “xfoo”. I then have X11Forwarding turned on for only one of them. And of course, by using the config file, I can automatically get all the other benefits it brings — I turn on compression and can specify an identity file for pubkey login too. Very useful. Thank you! Nice ! This is very usefull and simple ! 
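The two-entry approach the commenter above describes can be sketched in ~/.ssh/config like this (the hostname and identity file here are invented for illustration):

```text
# Plain connection: no X11 forwarding
Host foo
    HostName foo.example.com
    ForwardX11 no
    Compression yes

# X-enabled alias for the same machine: `ssh xfoo` behaves like `ssh -X foo`
Host xfoo
    HostName foo.example.com
    ForwardX11 yes
    Compression yes
    IdentityFile ~/.ssh/id_rsa
```

This keeps X11 forwarding off by default and turns it on only for the alias you deliberately choose, which addresses the security concern raised in the comments.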
I tried this method but was unsuccessfull in launching GUI applications on the local system. I checked and made sure that X11Forwarding is set as YES on the remote system. I connect to the remote system using ssh -X . Now, when i type gedit on the terminal window, the gedit editor open in the remote terminal. Please advice as to what settings i am missing which will enable me to open the GUI on the local system. Thanks, vinay hi friend if i wrote this as bellow [abdulrahim@redhat ~]$ ssh -X ali@redhat.com /usr/bin/network-admin bash: /usr/bin/network-admin: No such file or director this is result. what’s the problem? i don’t understand hi friend scenario is: I pc but no any network just local. so can i use (ssh) between users in same pc as bellow is right no any problem [abdulrahim@redhat ~]$ ssh -X ali@redhat.com Last login: Sat Jan 14 18:29:45 2012 from redhat.com [ali@redhat ~]$ but if i use this command as bellow is [abdulrahim@redhat ~]$ ssh -X ali@redhat.com /usr/bin bash: /usr/bin: is a directory this problem the same before this comment so what’s problem for that? it must go in this folder right or not????!!!!!!! Have a question? Post it on our forum!
https://www.cyberciti.biz/tips/linux-mac-osx-x11-forwarding-over-ssh-howto.html
mingw64\bin - I also noticed that the make utility that is included in bin... cd C:\xgboost\python-package. This points to the python-package directory of XGBoost. Then type:

C:\xgboost\python-package> python setup.py install

Next we open a Jupyter notebook and add the path to the g++ runtime libraries to the OS environment PATH variable with:

import os
mingw_path = 'C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'

This Post Has 2 Comments

Nice post!

I have just spent five weeks trying to get this XGBoost software working on Windows and I find that following any of the instructions does not give the results of the program being loaded and working. We live in the 21st century where software should be easy to download and use, not the 60s and 70s where software was a hack job and did not always work. We need a simple-to-install program that self-installs and sets itself up for each of the operating systems available. Your instructions were not clear and did not work. I followed them but they did not help.
https://adataanalyst.com/machine-learning/installing-xgboost-for-windows-10/
I have a PHP file which is called upon through AJAX. The PHP file "fetch.php" searches the database with the given JS variables, an x and y co-ordinate, through GET, and brings back the user id associated with those co-ordinates. So I then get that users information... I'm then trying to bring those PHP variables back into the AJAX call on the original page and display the results... here's what I've got so far... Javascript/jQuery <script type="text/javascript"> jQuery.fn.elementlocation = function() { var curleft = 0; var curtop = 0; var obj = this; do { curleft += obj.attr('offsetLeft'); curtop += obj.attr('offsetTop'); obj = obj.offsetParent(); } while ( obj.attr('tagName') != 'BODY' ); return ( {x:curleft, y:curtop} ); }; $(document).ready( function() { $("#gdimage").mousemove( function( eventObj ) { var location = $("#gdimage").elementlocation(); var x = eventObj.pageX - location.x; var x_org = eventObj.pageX - location.x; var y = eventObj.pageY - location.y; var y_org = eventObj.pageY - location.y; x = x / 5; y = y / 5; x = (Math.floor( x ) + 1); y = (Math.floor( y ) + 1); if (y > 1) { block = (y * 200) - 200; block = block + x; } else { block = x; } $("#block").text( block ); $("#x_coords").text( x ); $("#y_coords").text( y ); if (x == x_org + 5 || x == x_org - 5 || y == y_org + 5 || x == y_org - 5) { $.ajax({ type: "GET", url: "fetch.php", data: "x=" + x + "&y=" + y + "", dataType: "json", async: false, success: function(data) { $("#user_name_area").html(data.username); $("#user_online_area").html(data.online); } }); } }); }); </script> The displayed PHP file with the Javascript/jQuery: <div class="tip_trigger" style="cursor: pointer;"> <img src="gd_image.php" width="1000" height="1000" id="gdimage" /> <div id="hover" class="tip" style="text-align: left;"> Block No. 
<span id="block"></span><br /> X Co-ords: <span id="x_coords"></span><br /> Y Co-ords: <span id="y_coords"></span><br /> User: <span id="user_name_area"></span> Online: <span id="user_online_area"></span> </div> </div> PHP file (fetch.php) called through AJAX <? require('connect.php'); $mouse_x = $_GET['x']; $mouse_y = $_GET['y']; $grid_search = mysql_query("SELECT * FROM project WHERE project_x_cood = '$mouse_x' AND project_y_cood = '$mouse_y'") or die(mysql_error()); $user_exists = mysql_num_rows($grid_search); if ($user_exists == 1) { $row_grid_search = mysql_fetch_array($grid_search); $user_id = $row_grid_search['project_user_id']; $get_user = mysql_query("SELECT * FROM users WHERE user_id = '$user_id'") or die(mysql_error()); $row_get_user = mysql_fetch_array($get_user); $user_name = $row_get_user['user_name']; $user_online = $row_get_user['user_online']; echo "{ username: $user_name }"; echo "{ online: $user_online }"; } else { echo "{ username: }"; echo "{ online: }"; } ?> Everything is working fine, except one thing, I've check the php file and it does echo a username so I know there is a result being sent back from my query. It's just I can't seem to get the user_name_area and user_online_area to display the results from the fetch.php file. Am I parsing them correctly, do I have to assign them to a variable or is what I'm trying to do impossible? I've never attempted this whole json thing before and I seem to be having some problems with it, so if anyone could give me a hand cause I've been trying for the past couple nights to get it to work and just... can't!! Use Chrome first, then use its console (CTRL + SHIFT + I) to check what you're getting is actually valid JSON. Then, to make your life easier - you can use PHP's native function called json_encode() that will properly make stuff in JSON format without having you worry whether you constructed something right or not. 
So, the php part of your script can look like this:

$user_name = $row_get_user['user_name'];
$user_online = $row_get_user['user_online'];
$json['username'] = $user_name;
$json['user_online'] = $user_online;
echo json_encode($json);

That should at least put you on the right track and give you an insight into whether you have errors in your JS code or not.

I've tried the json_encode function already tonight, but slightly differently to how you've written it. I used:

$user_name = $row_get_user['user_name'];
echo json_encode($user_name);

That didn't actually work, and changing it to your approach didn't work either... HOWEVER, I have just got rid of the if statement around the AJAX call, and it appears to be working just fine now! I'm also using your method for json_encode too, since I can put query results into an array if I'm not mistaken?

For json_encode to work you need to make sure an array is being sent to it and not a string. E.g.

echo json_encode($user_name);

Won't work because it's a string, not an array.

echo json_encode($row_get_user);

Will work because $row_get_user is already an array and can be converted to a JSON object array. Within your JS script you can then use:

$.ajax({
  success: function(data) {
    alert(data['user_name']);
  }
});

Ah, I see! So if I was to, for example, bring back a list of say 5 of the usernames and place them in an array, how would you separate them to display them within the success function of the AJAX, which could be placed in the span tags?

$("#user_name_1_area").html(data.username1);
$("#user_name_2_area").html(data.username2);
etc...

Or would you need to do something like:

$("#user_name_1_area").html(data.username[0]);
$("#user_name_2_area").html(data.username[1]);
etc...

Thanks for explaining the string/array thing though, I wasn't aware of that!

It depends how you build your array.

Example 1
echo json_encode(array('username1' => $username1, 'username2' => $username2));

Example 2
echo json_encode(array('username' => array($username1, $username2)));
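To make the difference between those two shapes concrete on the client side, here is a plain-JavaScript sketch using JSON.parse (the field names and values are invented; jQuery populates data the same way when dataType is "json"):

```javascript
// Example 1 shape: one key per username
const r1 = JSON.parse('{"username1": "alice", "username2": "bob"}');
console.log(r1.username1); // "alice"
console.log(r1.username2); // "bob"

// Example 2 shape: a single key holding an array, accessed by index
const r2 = JSON.parse('{"username": ["alice", "bob"]}');
console.log(r2.username[0]); // "alice"
console.log(r2.username[1]); // "bob"
```

The second shape scales better when the number of usernames varies, since you can loop over data.username instead of hard-coding data.username1, data.username2, and so on.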
http://community.sitepoint.com/t/jquery-ajax-fetch-php-variables-json/7042
Secs sell! How I cache my entire pages (server-side) 10 May 2012 Python, Django I've blogged before about how this site can easily push out over 2,000 requests/second using only 6 WSGI workers excluding latency. The reason that's possible is because the whole page(s) can be cached server-side. What actually happens is that the whole rendered HTML blob is stored in the cache server (Redis in my case) so that no database queries are needed at all. I wanted my site to still "feel" dynamic in the sense that once you post a comment (and it's published), the page automatically invalidates the cache and thus, the user doesn't have to refresh his browser when he knows it should have changed. To accomplish this I used a hacked cache_page decorator that makes the cache key depend on the content it depends on. Here's the code I actually use today for the home page: def _home_key_prefixer(request): if request.method != 'GET': return None prefix = urllib.urlencode(request.GET) cache_key = 'latest_comment_add_date' latest_date = cache.get(cache_key) if latest_date is None: # when a blog comment is posted, the blog modify_date is incremented latest, = (BlogItem.objects .order_by('-modify_date') .values('modify_date')[:1]) latest_date = latest['modify_date'].strftime('%f') cache.set(cache_key, latest_date, 60 * 60) prefix += str(latest_date) try: redis_increment('homepage:hits', request) except Exception: logging.error('Unable to redis.zincrby', exc_info=True) return prefix @cache_page_with_prefix(60 * 60, _home_key_prefixer) def home(request, oc=None): ... try: redis_increment('homepage:misses', request) except Exception: logging.error('Unable to redis.zincrby', exc_info=True) ... 
And in the models I then have this:

@receiver(post_save, sender=BlogComment)
@receiver(post_save, sender=BlogItem)
def invalidate_latest_comment_add_dates(sender, instance, **kwargs):
    cache_key = 'latest_comment_add_date'
    cache.delete(cache_key)

So this means:
- whole pages are cached for a long time for fast access
- updates immediately invalidate the cache for the best user experience
- no need to mess with ANY SQL caching

So, the next question is: if posting a comment means that the cache is invalidated and needs to be populated, what's the ratio of hits versus hits where the cache is cleared? Glad you asked. That's why I made this page:

It allows me to monitor how often a new blog comment or a general time-out means poor Django needs to re-create the HTML using SQL. At the time of writing, one in every 25 hits to the homepage requires the server to re-generate the page. And still the content is always fresh and relevant.

The next level of optimization would be to figure out whether a particular page update (e.g. a blog comment posted on a page that isn't featured on the home page) should or should not invalidate the home page.

esp: I also found that with fully cached pages, you should make sure the following is set as well:

CACHE_MIDDLEWARE_ANONYMOUS_ONLY = False

Otherwise, Django accesses request.session and the user table, resulting in a DB query for every request. With this setting set to False, Django can run purely from cache.
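The pattern in this post, a cache key that changes whenever the underlying content changes, can be reduced to a small framework-free sketch. The dict-backed cache and the names below are stand-ins for illustration, not the post's actual cache_page_with_prefix implementation:

```python
import functools

cache = {}  # stand-in for Redis/memcached

def cache_page_with_prefix(key_prefixer):
    """Cache a view's rendered page; the prefixer folds content state into the key."""
    def decorator(view):
        @functools.wraps(view)
        def wrapper(request):
            prefix = key_prefixer(request)
            if prefix is None:              # e.g. non-GET requests bypass the cache
                return view(request)
            key = (view.__name__, prefix)
            if key not in cache:            # miss: render once and store
                cache[key] = view(request)
            return cache[key]
        return wrapper
    return decorator

latest_comment_date = "2012-05-10"  # bumped whenever a comment is posted

def home_key_prefixer(request):
    return latest_comment_date

renders = []

@cache_page_with_prefix(home_key_prefixer)
def home(request):
    renders.append(request)
    return f"<html>rendered at {latest_comment_date}</html>"

home("req1")
home("req2")                        # served from cache, no render
latest_comment_date = "2012-05-11"  # new comment -> new key -> implicit invalidation
home("req3")                        # renders again
```

The key insight mirrors the post: nothing is ever explicitly deleted from the page cache; posting a comment simply changes the prefix, so the next request computes a new key and misses.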
http://www.peterbe.com/plog/secs-sell!-how-i-cache-my-entire-pages-(server-side)
There are many profilers available in the market. I could not use some of them because my platform/IDE is not supported, or because they are licensed products. And the commercial products are costly and cannot be customized the way we want. For example, how do you exclude some functions from being profiled? Say I have an MFC application where we get a flurry of paint messages and I don't want to profile them. How do you do it?

So I started to write my own profiler to profile C++ programs. Keep in mind that this is not the kind of profiler which profiles other applications; rather, we are modifying the target process to profile itself. Yeah, that's correct, I'm able to read your voice: there are many open source codes for profilers, so why don't we use those? I searched over the web for a long time for profiler code which can be used on the x64 platform. Most of these profilers support x86 and not x64. I even pinged some of the authors who implemented profilers on x86 and asked them how to do it on x64. Either they are engaged with their routine work or they are not very sure about x64. So I planned: instead of investing the time searching for an x64 profiler, why don't we develop one ourselves?

My requirement is very simple: I would like to measure the approximate time taken by all the functions executed during an application's lifetime. And obviously I would like to filter out some of the functions called indirectly by our functions, like std::xxxx functions, C++ library function calls, etc.

There are a few limitations on this work, and let me list them out at the very beginning, so that the audience can decide whether to really go ahead further or not.

When I started to search for a profiler, the first thing ( fortunately ) I came across was the compiler flags supported by the Microsoft C/C++ compiler, /GH and /Gh. These options allow the user to inject functions that are called before any function's execution and at function exit.
And when I went through some of the existing x86 profilers, they were using this as their base. So I got a base to go ahead with. OK, let's start!!!!

My idea was very simple. Pretty simple, and it sounds good. But when I started to put my hands on it, I was able to see some serious issues pile up.

Issue 1: I just added _penter and _pexit and compiled my app. Oh no, I could not succeed, because these functions should follow the naked calling convention. This calling convention is not supported on x64.

Issue 2: Say somehow I'm able to add them; _penter and _pexit do not do anything by themselves. They are mere skeletons. It is up to the user to provide the implementation. And the documentation says that these functions should take care of managing the procedure prolog and epilog, by pushing the required registers and popping them back. This implication results in multiple questions. Let's take them one by one and untie the knot.

As _penter and _pexit are not supported on x64, I decided to move them to assembly code. But I already mentioned that inline assembly is not supported by VS on x64. Yes, inline assembly is not supported, but we can have them in separate assembly files [ Refer ]. Let's create an assembly file ( .asm extension ) and add the skeletons for _penter and _pexit. Done. But how do we compile the .asm file? As Visual Studio does not have any built-in support to compile .asm files by itself, we have to borrow an external tool to do this job. This can be done using the MASM assembler. Let me tell you the simple steps. After adding the .asm file, specify how these files are to be compiled: right-click the project -> Build Customizations -> select masm.

Stack Manipulation: I hope all of you are aware of how the stack is manipulated and managed during a function call. If not, please refer here. Even though it gives you details on x86, you should be aware of the stack management. Let's see the disassembly code for a simple program without the /GH and /Gh flags.
    #include <iostream>
    using namespace std;

    int Foo( int a, int b )
    {
        return ( a + b );
    }

    void main( )
    {
        cout << Foo(5,6);
    }

For the above code, while entering main, the stack pointer [ RSP ] points to 00000000002FF9E8. Let's see the content at this memory location. It is 6d 40 03 3f 01 00 00 00, which is nothing but the little-endian notation of 000000013f03406d, the return address of main. Figure 1.

I just wanted to show you that at the start of the function, RSP holds the return address. The other thing to note is that rdi refers to the frame pointer, whereas in the case of x86, ebp points to the frame pointer.

The above trace is for the program when compiled without the /Gh and /GH switches. But we are going to compile our project with the /Gh and /GH flags, so let me show another sample program with these switches:

    #include <iostream>
    using namespace std;

    extern "C" void _penter();
    extern "C" void _pexit();

    void Subtract( int a, int b )
    {
        int c = a - b;
        cout << " c = " << c << endl;
    }

    void main()
    {
        Subtract(5,3);
    }

Figure 2. We can see the entries for _penter and _pexit for the main function. Similarly, if you look at the disassembly of the Subtract function, you can find the entries for _penter and _pexit there too.

What to stuff into _penter and _pexit? As it is the user's responsibility to provide the definitions of _penter and _pexit, we should know what to put in them. The documentation also says that these functions should take care of pushing the register contents on entry and popping them back on exit. As said already, on x86 pushad and popad are readily available. What about x64? Which of the x64 registers should be considered? [ Refer details of x86 registers ]. We should take care of the volatile registers [ Refer here ]. As we are not manipulating the floating point registers, our targets are RAX, RCX, RDX, R8, R9, R10 and R11. We have to push and pop these registers explicitly, one by one, as there is no single instruction available on x64 to do this job. Now we know which registers have to be saved and restored inside _penter and _pexit.
    _penter proc
        push r11
        push r10
        push r9
        push r8
        push rax
        push rdx
        push rcx

Stack Alignment: x64 expects the stack pointer to be 16-byte aligned. At this moment, having pushed 7 registers ( each one 8 bytes in size ), the stack would seem to go misaligned ( 7 * 8 = 56 bytes, which is not a multiple of 16 ). But on entry to _penter, the return address was also pushed on the stack. This is the address of the instruction to which control is transferred back once _penter completes its execution. Because of this, the stack is now perfectly aligned ( 8 [ return address ] + 56 = 64 bytes ). You can notice that at the start of _penter, the stack pointer points to this return address.

x64 Calling Convention: At this point you should be aware of the x64 calling convention [ Refer ]. It is similar to the __fastcall calling convention, where the first 4 integer parameters of a function are passed in the registers [ RCX, RDX, R8 and R9 ], and any parameters after that are pushed on the stack. If the parameters are, say, floats, then the first 4 are passed via XMM0 to XMM3 and the remaining ones are pushed on the stack. In _penter we should reserve space for the first 4 registers, even though the arguments are moved via RCX, RDX, R8 and R9. This may be because, later inside the function, if the address of a parameter is referred to, this space on the stack will be used. I'm not very sure on this.

    _penter proc
        push r11
        push r10
        push r9
        push r8
        push rax
        push rdx
        push rcx
        ; reserve space for 4 registers [ RCX, RDX, R8 and R9 ]
        sub rsp, 20h

Figure 3.

Compute Function Address: Now, to pick up the return address of the function, we have to climb 88 bytes up the stack ( because after the return address was pushed, 7 registers were pushed [ 7 * 8 = 56 bytes ] and 32 bytes were reserved, so the stack has grown by a total of 56 + 32 = 88 [ 58h ] bytes ). What we find there is nothing but the return address of _penter.
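The byte counts above can be sanity-checked with a few compile-time constants. This is purely illustrative arithmetic, not part of the profiler; all the names are mine:

```cpp
#include <cassert>

// Compile-time check of the stack bookkeeping described above.
// All names here are illustrative, not from the profiler source.
constexpr int kPushedRegs  = 7;     // r11, r10, r9, r8, rax, rdx, rcx
constexpr int kRegSize     = 8;     // every push is 8 bytes on x64
constexpr int kReturnAddr  = 8;     // pushed by the CALL into _penter
constexpr int kShadowSpace = 0x20;  // home space reserved for RCX, RDX, R8, R9

// 56 bytes of pushes plus the 8-byte return address is 64,
// a multiple of 16, so RSP stays 16-byte aligned.
static_assert((kPushedRegs * kRegSize + kReturnAddr) % 16 == 0,
              "stack must stay 16-byte aligned inside _penter");

// Distance from RSP back up to _penter's return address: 56 + 32 = 88 (58h).
static_assert(kPushedRegs * kRegSize + kShadowSpace == 0x58,
              "return address sits 88 bytes above RSP");
```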
Once we know the return address, we can find the address of the instruction which actually calls _penter by subtracting 5 bytes from the return address, because the call instruction is 5 bytes: the opcode [ CALL ] is 1 byte and the operand [ function address ] is 4 bytes. This is fine for x86, but isn't it 9 bytes for x64 [ 1-byte opcode + 8-byte operand ]? No, it's still only 5 bytes. How come? In the case of x86, the operand is a 4-byte absolute value: CALL DWORD PTR [xxxxxxxx], where xxxxxxxx is the address of the function. For x64, the operand is an offset relative to the address of the instruction from which the call is made: 00300200 : CALL DWORD PTR [xxxxxxxx] means the function actually resides at the address 00300200 + xxxxxxxx, where xxxxxxxx is a 4-byte offset from 00300200. As the operand is a 4-byte offset, the instruction is still only 5 bytes. You can notice this in Figure 2: return address ( 000000013F4C14BA ) - 5 = 000000013F4C14B5.

This address ( 000000013F4C14B5 ) has to be passed to the exported function in the dll ( FindSymbol ), which finds the name of the function. To pass this address to the function, we just move the value into the RCX register, because parameters are passed via registers. Refer.

Once we have the address of a function, we can get its name using the Debug Help functions. This is implemented in a separate dll. Keep in mind that this dll is compiled without the /Gh and /GH flags. The dll has an exported function, FindSymbol, which takes the address of the calling function and finds its name using the debug helper functions; this is the function that is called from _penter. OK, we know the address of the function; how do we get its name?

InitSymbols: Let's do some reverse engineering to find out the name of a given symbol. The dll has the entry point function DllMain.
When this function is called with the reason DLL_PROCESS_ATTACH ( this usually happens when the dll is loaded at process start, in the case of implicit linking ), we get the base address at which the dll has been loaded. I use this address to initialize the symbol handler for the current process; this is done in the function InitSymbols.

FindSymbol: When this function is called from _penter, an address is passed to it as a parameter. It calls a helper function, FindFunction, which allocates memory for a SYMBOL_INFO structure and calls the function SymFromAddr to get the name of the function. The received name ( available at SYMBOL_INFO::Name ) may be a decorated one because of the C++ name mangling schemes; to get the undecorated name, call the UnDecorateSymbolName function. After getting the function name, an instance of the ProfileInfo structure is created. This structure stores the function name, the thread id, the start time of the function, the end time of the function, etc. Now we record the time using QueryPerformanceCounter; this is the approximate start time of the function. The instance is then added to the map g_mapProfileInfo.

g_mapProfileInfo: This map's key is the thread id and the value is a vector of ProfileInfo instances; each element of the vector belongs to one function call. So this map ultimately stores the profiling information for all the functions called during each thread's execution. The map is protected using a critical section.

Once the function has been recorded by calling FindSymbol, control comes back to _penter. The next thing to do in _penter is the stack clean-up. This is done in 2 steps:

    ; release the reserved space by moving the stack pointer up by 32 bytes
    add rsp, 20h
    ; pop back the pushed registers
    pop rcx
    pop rdx
    pop rax
    pop r8
    pop r9
    pop r10
    pop r11
    ret

The assembly code of _pexit is exactly the same as _penter's. The only difference is that the function called from the dll is different; this time it is FindSymbol_1.
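For illustration, here is a portable sketch of the bookkeeping just described. The article's actual code uses a Win32 critical section and QueryPerformanceCounter; std::mutex and raw tick values stand in here, and all field and function names are my guesses, not the real SymbolServer.dll source:

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <string>
#include <vector>

// Rough shape of the per-call record (field names are illustrative).
struct ProfileInfo {
    std::string functionName;
    unsigned long threadId;
    long long startTicks;  // recorded by FindSymbol
    long long endTicks;    // filled in later by FindSymbol_1
};

// Key: thread id. Value: every call recorded on that thread.
std::map<unsigned long, std::vector<ProfileInfo>> g_mapProfileInfo;
std::mutex g_mapLock;  // stand-in for the article's critical section

// What FindSymbol does after resolving the name: record the start
// time and file the record under the calling thread.
void RecordFunctionEntry(unsigned long threadId, std::string name,
                         long long nowTicks) {
    std::lock_guard<std::mutex> lock(g_mapLock);
    g_mapProfileInfo[threadId].push_back({std::move(name), threadId, nowTicks, 0});
}
```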
This function is the same as FindSymbol, but it records the end time of the function which called _pexit. FindSymbol_1 records the time at the very start of the function, whereas FindSymbol measures the time just before adding the ProfileInfo instance to the map.

The profiling information is dumped just before the program completes its execution. This is done by the function DisplayProfileData, which is called when DllMain is invoked with the reason DLL_PROCESS_DETACH. We can modify this however we want. For example, we may want to display all the profiling info for a thread when it completes its execution; this can be done in the DLL_THREAD_DETACH case. Just find out which thread has completed its execution, look that thread up in g_mapProfileInfo and display only that information.

There are 3 sample programs attached to this article. All of them use the dll called SymbolServer.dll, which actually does the job of finding the function name from its address, starting the timers, collecting the profile info for each function, etc.:

1. The first program is a simple console program named ProfilerX64.
2. The second one is also a console application, but it uses multiple threads instead of one.
3. A simple MFC application.

These sample programs use the x64 Debug configuration because, as mentioned already, we need the debug information to get the name of the function in SymbolServer.dll.

I'm very new to assembly programming, so it really took some time to understand the stack management, register manipulation, etc. I invested some time to get to know these things and it was really interesting.

In the next version, I'm planning to record the calling function of each function ( a kind of call stack ). For example, in the case of main I would like to record _tmainCRTStartup in the ProfileInfo associated with main. Maybe we can add a pointer to a ProfileInfo in the ProfileInfo structure ( a kind of linked list ) which will point to the ProfileInfo of the calling function.
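When DisplayProfileData walks the map, each record's elapsed time can be derived from the two recorded counter values. A minimal conversion helper, assuming QueryPerformanceCounter-style ticks plus a frequency in ticks per second (the names are mine, not the article's):

```cpp
#include <cassert>

// Convert a start/end pair of performance-counter ticks to microseconds.
// 'frequency' is the counter frequency in ticks per second
// (what QueryPerformanceFrequency returns on Windows).
long long ElapsedMicroseconds(long long startTicks, long long endTicks,
                              long long frequency) {
    return (endTicks - startTicks) * 1000000LL / frequency;
}
```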
This would give you the profile info of the calling function. Similarly, we can have a vector of ProfileInfo pointers in ProfileInfo, to which the ProfileInfo of each child function can be added. This gives you the profile info of all the functions called by a particular function.

I would like to thank all who educated me on assembly programming concepts, either directly or indirectly, when I started this work, who answered my questions without any delay, and the authors of the various articles on profilers, assembly programming and x64. In particular, I would like to convey my sincere thanks to the MSDN forum members Mike Danes and Crescens for answering all my questions posted on the forum.

5th August 2014: Article.
https://www.codeproject.com/Articles/800172/A-Simple-Cplusplus-Profiler-on-x
# vue-virtual-scroller

Blazing fast scrolling of any amount of data.

## Installation

```
npm install --save vue-virtual-scroller
```

### Default import

Install all the components:

```js
import Vue from 'vue'
import VueVirtualScroller from 'vue-virtual-scroller'

Vue.use(VueVirtualScroller)
```

Use specific components:

```js
import Vue from 'vue'
import { RecycleScroller } from 'vue-virtual-scroller'

Vue.component('RecycleScroller', RecycleScroller)
```

⚠️ A css file is included when importing the package:

```js
import 'vue-virtual-scroller/dist/vue-virtual-scroller.css'
```

### Browser

```html
<link rel="stylesheet" href="vue-virtual-scroller/dist/vue-virtual-scroller.css"/>
<script src="vue.js"></script>
<script src="vue-virtual-scroller/dist/vue-virtual-scroller.min.js"></script>
```

If Vue is detected, the plugin will be installed automatically. If not, install the components yourself:

```js
Vue.use(VueVirtualScroller)
```

Or register them with a custom name:

```js
Vue.component('RecycleScroller', VueVirtualScroller.RecycleScroller)
```

## Usage

There are several components provided by vue-virtual-scroller:

- `RecycleScroller` is a component that only renders the visible items in your list. It also reuses components and DOM elements to be as efficient and performant as possible.
- `DynamicScroller` is a component that uses `RecycleScroller` under the hood and adds dynamic height management on top of it. The main use case for it is not knowing the height of the items in advance: the Dynamic Scroller automatically "discovers" each item's height as it renders new items while the user scrolls.
- `DynamicScrollerItem` must wrap each item in a `DynamicScroller` to handle size computations.
- `IdState` is a mixin that eases local state management in reused components inside a `RecycleScroller`.

## RecycleScroller

RecycleScroller is a virtual scroller that only renders the visible items. As the user scrolls, it reuses all the components and DOM trees.
### Basic usage

Use the scoped slot to render each item in the list:

```html
<template>
  <RecycleScroller
    class="scroller"
    :items="list"
    :item-height="32"
  >
    <div slot-scope="{ item }" class="user">
      {{ item.name }}
    </div>
  </RecycleScroller>
</template>

<script>
export default {
  props: {
    list: Array,
  },
}
</script>

<style scoped>
.scroller {
  height: 100%;
}

.user {
  height: 32px;
  padding: 0 12px;
  display: flex;
  align-items: center;
}
</style>
```

### Important notes

- ⚠️ You need to set the size of the virtual-scroller element and of the item elements (for example, with CSS). Unless you are using variable height mode, all items should have the same height to prevent display glitches.
- It is not recommended to use functional components inside RecycleScroller, since the components are reused (so it will actually be slower).
- The components used in the list should expect the `item` prop to change without being re-created (use computed props or watchers to properly react to prop changes!).
- You don't need to set `key` on the list content (but you should on all nested `<img>` elements to prevent load glitches).
- Browsers have a height limitation on DOM elements; it means that currently the virtual scroller can't display more than ~500k items, depending on the browser.
- Since DOM elements are reused for items, it's recommended to define hover styles using the provided `hover` class instead of the `:hover` state selector (e.g. `.vue-recycle-scroller__item-view.hover` or `.hover .some-element-inside-the-item-view`).

### How does it work?

- The RecycleScroller creates pools of views to render visible items to the user.
- A view holds a rendered item and is reused inside its pool.
- For each type of item, a new pool is created so that the same components (and DOM trees) are reused for the same type.
- Views can be deactivated if they go off-screen, and can be reused at any time for a newly visible item.
Here is what the internals of RecycleScroller look like:

```html
<RecycleScroller>
  <!-- Wrapper element with a pre-calculated total height -->
  <wrapper :style="{ height: totalHeight }">
    <!-- Each view is translated to the computed position -->
    <view v-for="view of pool" :style="{ transform: 'translateY(' + view.top + 'px)' }">
      <!-- Your elements will be rendered here -->
      <slot :item="view.item" :index="view.index" :active="view.active" />
    </view>
  </wrapper>
</RecycleScroller>
```

When the user scrolls inside RecycleScroller, the views are mostly just moved around to fill the new visible space, and the default slot properties are updated. That way we get the minimum amount of component/element creation and destruction, and we use the full power of Vue's virtual DOM diff algorithm to optimize DOM operations!

### Props

- `items`: list of items you want to display in the scroller.
- `itemHeight` (default: `null`): display height of the items in pixels, used to calculate the scroll height and position. If it is set to `null` (the default value), variable height mode is used.
- `minItemHeight`: minimum height used if the height of an item is unknown.
- `heightField` (default: `'height'`): field used to get the item's height in variable height mode.
- `typeField` (default: `'type'`): field used to differentiate the different kinds of components in the list. For each distinct type, a pool of recycled items will be created.
- `keyField` (default: `'id'`): field used to identify items and optimize the render view management.
- `pageMode` (default: `false`): enable Page mode.
- `prerender` (default: `0`): render a fixed number of items for Server-Side Rendering.
- `buffer` (default: `200`): amount of pixels to add to the edges of the scrolling visible area to start rendering items further away.
- `emitUpdate` (default: `false`): emit an `'update'` event each time the virtual scroller content is updated (can impact performance).

### Events

- `resize`: emitted when the size of the scroller changes.
- `visible`: emitted when the scroller considers itself to be visible in the page.
- `hidden`: emitted when the scroller is hidden in the page.
- `update (startIndex, endIndex)`: emitted each time the views are updated, only if the `emitUpdate` prop is `true`.

### Default scoped slot props

- `item`: the item being rendered in a view.
- `index`: reflects each item's position in the `items` array.
- `active`: whether the view is active. An active view is considered visible and is being positioned by RecycleScroller. An inactive view is not considered visible and is hidden from the user. Any rendering-related computations should be skipped if the view is inactive.

### Other slots

```html
<main>
  <slot name="before-container"></slot>
  <wrapper>
    <!-- Reused view pools here -->
  </wrapper>
  <slot name="after-container"></slot>
</main>
```

### Page mode

The page mode expands the virtual-scroller and uses the page viewport to compute which items are visible. That way, you can use it in a big page with HTML elements before or after it (like a header and a footer). Just set the `page-mode` prop to `true`:

```html
<header>
  <menu></menu>
</header>

<RecycleScroller page-mode>
  <!-- ... -->
</RecycleScroller>

<footer>
  Copyright 2017 - Cat
</footer>
```

### Variable height mode

⚠️ This mode can be performance heavy with a lot of items. Use with caution.

If the `itemHeight` prop is not set or is set to `null`, the virtual scroller switches to variable height mode. You then need to expose a number field on the item objects with the height of the item element.

⚠️ You still need to set the height of the items with CSS correctly (with classes, for example).

Use the `heightField` prop (default is `'height'`) to set the field used by the scroller to get each item's height.

Example:

```js
const items = [
  {
    id: 1,
    label: 'Title',
    height: 64,
  },
  {
    id: 2,
    label: 'Foo',
    height: 32,
  },
  {
    id: 3,
    label: 'Bar',
    height: 32,
  },
]
```

### Buffer

You can set the `buffer` prop (in pixels) on the virtual-scroller to extend the viewport considered when determining the visible items.
For example, if you set a buffer of 1000 pixels, the virtual-scroller will start rendering items that are 1000 pixels below the bottom of the scroller's visible area, and will keep the items that are 1000 pixels above the top of the visible area. The default value is 200.

```html
<RecycleScroller :buffer="1000" />
```

### Server-Side Rendering

The `prerender` prop can be set to the number of items to render on the server inside the virtual scroller:

```html
<RecycleScroller :prerender="10" />
```

## DynamicScroller

This works like RecycleScroller, but it can render items with unknown heights!

### Basic usage

```html
<template>
  <DynamicScroller
    class="scroller"
    :items="items"
    :min-item-height="54"
  >
    <template slot-scope="{ item, index, active }">
      <DynamicScrollerItem
        :item="item"
        :active="active"
        :size-dependencies="[item.message]"
      >
        <div class="avatar">
          <img :src="item.avatar" :key="item.avatar">
        </div>
        <div class="text">{{ item.message }}</div>
      </DynamicScrollerItem>
    </template>
  </DynamicScroller>
</template>

<script>
export default {
  props: {
    items: Array,
  },
}
</script>

<style scoped>
.scroller {
  height: 100%;
}
</style>
```

### Important notes

- `minItemHeight` is required for the initial render of items.
- `DynamicScroller` won't detect size changes on its own, but you can put the values that can affect the item size in `size-dependencies` on `DynamicScrollerItem`.
- You don't need to have a `height` field on the items.

### Props

All the RecycleScroller props.

- It's not recommended to change the `heightField` prop, since all the height management is done internally.

### Events

All the RecycleScroller events.

### Default scoped slot props

All the RecycleScroller scoped slot props.

### Other slots

All the RecycleScroller other slots.

## DynamicScrollerItem

The component that should wrap all the items in a DynamicScroller.

### Props

- `item` (required): the item rendered in the scroller.
- `active` (required): whether the holding view is active in RecycleScroller. Will prevent unnecessary size recomputation.
- `sizeDependencies`: values that can affect the size of the item. This prop is watched, and if one value changes, the size is recomputed. Recommended instead of `watchData`.
- `watchData` (default: `false`): deeply watch `item` for changes to re-calculate the size (not recommended, can impact performance).
- `tag` (default: `'div'`): element used to render the component.
- `emitResize` (default: `false`): emit the `resize` event each time the size is recomputed (can impact performance).

### Events

- `resize`: emitted each time the size is recomputed, only if the `emitResize` prop is `true`.

## IdState

This is a convenience mixin that can replace `data` in components being rendered inside a RecycleScroller.

### Why is this useful?

Since the components in RecycleScroller are reused, you can't directly use the standard Vue `data` properties: otherwise they would be shared with different items in the list!

IdState instead provides an `idState` object which is equivalent to `$data`, but linked to a single item via its identifier (you can change which field is used with the `idProp` param).

### Example

In this example, we use the `id` of the item to have a state "scoped" to the item:

```html
<template>
  <div class="question">
    <p>{{ item.question }}</p>
    <button @click="idState.replyOpen = !idState.replyOpen">Reply</button>
    <textarea
      v-if="idState.replyOpen"
      v-model="idState.replyText"
    />
  </div>
</template>

<script>
import { IdState } from 'vue-virtual-scroller'

export default {
  mixins: [
    IdState({
      // You can customize this
      idProp: vm => vm.item.id,
    }),
  ],

  props: {
    // Item in the list
    item: Object,
  },

  // This replaces data () { ... }
  idState () {
    return {
      replyOpen: false,
      replyText: '',
    }
  },
}
</script>
```

### Parameters

- `idProp` (default: `vm => vm.item.id`): field name on the component (for example: `'id'`) or a function returning the id.
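The fixed-height windowing math that makes this kind of scroller fast can be sketched independently of Vue. This is an illustrative model under my own names, not the library's actual implementation:

```javascript
// Given a scroll offset and viewport size, compute which items to render
// in fixed height mode, extending the range by `buffer` pixels on each
// side as described above. Illustrative only; not vue-virtual-scroller code.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount, buffer = 200) {
  const start = Math.max(0, Math.floor((scrollTop - buffer) / itemHeight))
  const end = Math.min(itemCount, Math.ceil((scrollTop + viewportHeight + buffer) / itemHeight))
  // Render items[start..end - 1]; translate the view pool to start * itemHeight.
  return { start, end }
}
```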
https://vuejsexamples.com/blazing-fast-scrolling-for-any-amount-of-data/
On Thu, Jun 12, 2008 at 9:46 PM, Guido van Rossum <guido at python.org> wrote: > The intention was for these dicts to be used as namespaces. I think of > it as follows: > > (a) Using non-string keys is a no-no, but the implementation isn't > required to go out of its way to forbid it. That will allow easier and more efficient implementation, good! > (b) Using non-empty string keys that aren't well-formed identifiers > should be allowed. ok. Is it allowed to "normalize" subclasses of strings to regular string, e.g. after: class mystring(str): pass class C: pass x = C() setattr(x, mystring('foo'), 42) is it allowed that the dict of x contains a regular string 'foo' instead of the mystring instance? - Willem
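For reference, a runnable version of the example from the post. Whether the stored key keeps its subclass type is exactly the implementation detail being asked about, so this sketch only checks the behavior that must hold either way (the subclass instance hashes and compares equal to the plain string):

```python
class mystring(str):
    pass

class C:
    pass

x = C()
setattr(x, mystring('foo'), 42)

# Whatever the implementation stores as the key, attribute lookup still
# works, because mystring('foo') hashes and compares equal to 'foo'.
# The key's exact type (mystring vs str) is implementation-defined.
key = next(iter(vars(x)))
print(getattr(x, 'foo'), key == 'foo')  # prints: 42 True
```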
https://mail.python.org/pipermail/python-dev/2008-June/080310.html
EFF Assails YouTube For Removing "Downfall" Parodies 294 Locke2005 writes "In what promises to be one of the quickest threads to become Godwin'ed, YouTube has pulled scores of parodies of the 'Hitler Finds Out' scene from the movie The Downfall. Ironically, I had never heard of this movie before this — and now I want to watch it." Here is the EFF complaint. David Weinberger has posted some details on Google's Content Identification tool, which is being used in the shotgun takedowns. Ich bin Hitler (Score:5, Funny) und ich bin erste! (first post, thread is now godwinned) Re:Ich bin Hitler (Score:5, Informative) it's easy to get around the content filter really. how do people not bother? Just change the audio pitch by...I think it's 1 half step? Or 1.1 half steps? Once you do that, the automated scanner will not be able to find your video at all. It will sound practically identical, as well. Just shows how pitiful the attempts by copyright groups are, since they don't even review the videos. For video that relies on the graphic, you just have to create a single vertical line (maybe green or something, 1 pixel wide) going down the entire frame of the video, and then the graphic filter won't find it either. Just shows ya, the more you try to stifle, less it works. Re:Ich bin Hitler (Score:5, Informative) ahh, here it is. The how-to. [rit.edu] Hitler's video got removed? (Score:2) Couldn't happen to a nicer bloke. Re: (Score:2) Apparently all of those videos on that page have been pulled. Including the pure white noise one [youtube.com]. I wonder how they justify that one. Re:Ich bin Hitler (Score:5, Funny) Apparently all of those videos on that page have been pulled. Including the pure white noise one. I wonder how they justify that one. Germany considers it too racist. There should have been a representative mix of white and black noise. Re: (Score:2) Duh. 
The real question is, how long did those videos last before being publicized, and how long compared to other "illegal" streams of bytes. Re: (Score:2) those videos were up for more than a year. You could repost them and they still pass the filter *today* with the pitch change. Re: (Score:2) regardless, I don't the the fuhrer is going to appreciate it. It's not the first time google has messed with him. [youtube.com] [youtube.com] [youtube.com] Re: (Score:2) Hitler wouldn't dare run Spelljammer. (Score:4, Insightful) Unfortunate (Score:3, Insightful) Re:Unfortunate (Score:5, Insightful) Re: (Score:3, Informative) Whose line is it anyway often did that. Weird Al also always asks permission and won't put it on his CD if asked not to (in the case of pitiful, but he released that for free since Blunt was okay with it but his label wasn't) Re:Unfortunate (Score:5, Informative) He only does that because he's a nice guy though. Legally he could put any of his parodies on his CDs if he wanted to. Re:Unfortunate (Score:4, Informative) No, the reason he asks permission is to be (a) a nice guy, and(b) being a nice guy gives makes the musicians more open to letting him do stuff like use their actual music video sets for the parodies and other cool things. Re: (Score:3, Informative) Re:Unfortunate (Score:5, Informative) making a parody where the subtitles are the only original content and everything else is from the copyrighted work is not gonna fly in court. It doesn't matter, for the purpose of determining fair use, how much additional material was added. Rather, it depends on how much of the underlying work was used, and how important that portion was to the underlying work. Your criterion is often invoked by infringers who legitimately claim that because they added so much to the portion used, their use was fair, and is just as often rejected by the courts, who don't care about that. 
In any event, this isn't "the entirety of the video from 'The Downfall.'" The Hitler scene is just one part -- albeit a rather powerful part -- of an entire movie about the last days of the Nazis in Berlin during the war. Of course, the thing that might trip them up is the ridiculous dividing line that the courts have been drawing between parody and satire. When a use is a parody, it makes fun of the underlying work itself, and therefore must draw at least somewhat from that underlying work, in order to come about. It is essentially commentary that ridicules the work, or is at least itself ridiculous. Imagine, for example, making fun of Mickey Mouse and Disney by having the Sorcerer's Apprentice scene from Fantasia involve Mickey summoning up a destructive horde of copyright attorneys. (We are indeed capable of reproducing by fragmentation; fear us) That could be a parody. Satires, however, are making a point about society generally, or at least about something other than the underlying work. In that case, it doesn't absolutely need to borrow from an underlying work, and the courts have not been as generous to satire as they have been to parody. For example, there was a case in which someone was making fun of the OJ Simpson trial by using Dr. Seuss characters and artwork. Because the use wasn't commenting on the used material, but just borrowing it for an unrelated purpose (unless OJ was right, and the murderer was the Lorax or something), it wound up not being a fair use. Now, I think this is a dumb distinction. The main issue should be whether the use is transformative, even if it doesn't 'need' to use the underlying work (although a showing of necessity should count for something, considering other doctrines, such as merger, where it is also relevant), along with the rest of the fair use analysis, in particular, the fourth factor (harm to the market for the underlying work). But that's what we're stuck with at the moment. 
And since most of the Downfall videos (though not all -- the one where Hitler is upset about how many Downfall videos there are would seem to be okay, ironically) don't make fun of anything that requires the use of Downfall in order to do it, things may not go well. Now, how long until someone follows up on this, does a bit of research, and has Hitler upset about this particular aspect of Fair Use under US copyright law, citing the statute and caselaw? Perhaps Generals Keitel, Jodl, Krebs, and Burgdorf (the four guys that he has stay in the room) could each stand for one of the four prongs of the analysis? Re:Unfortunate (Score:5, Insightful) However, directly using the entirety of the video from "The Downfall" is not going to be seen as fair use. Nobody is using the entirety of the video. They are using a clip that's less than 4 minutes out of a 178 minute film. Re: (Score:3, Insightful) They are using a clip that's less than 4 minutes out of a 178 minute film. True, but that doesn't necessarily make the 4 minutes free to use, especially for purposes other than making a comment on that particular film or its authors. Re: (Score:3, Insightful) Weird Al is still part of the "system" and doesn't have anything to do with Fair Use... He's on a big label and can get rights to whatever he wants. Remember MOST performers on the radio DON'T write their songs, and the "company" often owns them anyway. The company can license to whoever they want..."NOW", Kidz Bop, etc. The "performer" has nothing to say because they signed over ownership. What company executive is going to pass up easy money if Weird Al wants to riff on a song! Re: (Score:2) Also it doesn't matter how much of the original work is used, different subtitles change the entire context of the clip and there for is protected under fair use. Re: (Score:3, Insightful) Weird Al recreates the backing track in his studio. The downfall parodies do not restage the scene with different actors. 
That's one way a is different from b. Re: (Score:3, Insightful) Let's see: a) Weird Al takes a portion of the audio track and overlays his own original audio and replaces the video portion with his own original re-performance. b) Downfall videos take the entirety of the audio track and the entirety of the video track and simply overlays subtitles. Nothing at all is an original recording. They might as well show the original unaltered Downfall clip on one side and have a second video that plays beside it showing the subtitles. A parody should mock the original work. The Re: (Score:2, Insightful) Let's talk about Shakespeare (Score:5, Insightful) Imagine if you will that Wm. Shakespeare had to contend with modern copyright law. He's only one example - any remembered artist will do. How much of the works of "Bard of Avon" would be permitted under current law? Actually, almost none of it. A sonnet or two. And because his unsourced output was so small we would not know of him at all. England's national poet would have been silenced by copyright law as we know it. Almost all of the stories he retold as plays would now be lost forever because they were derived from bardic tales or previous plays that would have been protected by copyright. We grant him great respect now not because he invented these stories, but because he told them well . Every play, each story, was derived or influenced - as was common in that day and should be common still - by the bardic tales passed down in oral tradition that today would be protected. It was in his wry telling of these tales, the wit that he added, that made them so durable that we know them still. If he had not retold them in his special way they would be lost to time. Today he would be Disney'd out of his art - as a great many grand geniuses are today being silenced by the tyranny of copyright monopolies. 
Every creative person needs to understand and acknowledge the source of their creation, or at least that they've built upon one. And they need to submit to a future where others build upon their work. We call this evolution culture. Modern copyright law admits no such culture. Each of them needs to understand that modern copyright law dooms them to ignominy, as our current masters of culture need new sales to drive their market numbers and this works against literary immortality. It's a Devil's bargain. And so, breeding a generation devoid of culture we reap what we sow. If kids can't adopt the culture of their parents because they're proscribed from experiencing what it was by copyright law, they will invent their own. These inventions will by necessity be primal. Primitive. Animalistic. That can be art, but it can't be durable art. So, artists and inventors are actually harmed by the current state of law. They should oppose it as it prevents their art from going viral and being a part of our culture. By preventing the natural course of social evolution through copyright law, we naturally regress to the primitive at an abhorrent rate. That's not the purpose of copyright enshrined in the US Constitution. The purpose of that clause was to "promote the progress of science and the useful arts." Re:Unfortunate (Score:5, Interesting) I read an interview with the director of Der Untergang where he said that he liked all those Untergang parodies. Not every filmmaker has his pivotal scene become such a big internet meme, and he was very flattered by that, and tried to watch every one of them. Clearly he's not the one who calls these shot. Re:Unfortunate (Score:4, Insightful) Clearly he's not the one who calls these shot. Well, think of the thousands of people who saw the original scene because of the parody, were drawn in by Bruno Ganz's amazing performance, and then went ahead and watched the full movie. 
If I was the director, I'd be happy with the amazing viral marketing.

Uh-oh!! (Score:5, Funny)

Wait till Hitler finds out about this!!! woooooohhh boy!!

Re:Uh-oh!! (Score:5, Funny)

Yeah, he's pretty pissed [youtube.com].

Re: (Score:2)

He has. Cue links to Hitler as a Meme video...

Re: (Score:2)

Apparently he's not happy... [youtube.com]

BBC already wrote good article on this (Score:5, Informative)

A good summary of the whole story of the meme before the YouTube action.

Re: (Score:3, Insightful)

something posted in 5000 different iterations on the internet, with dream of humour = meme, no?

Re: (Score:3, Informative)

No. A meme is not a fashion or a fad. Fad is the proper term. The term Internet meme, pronounced meem, is used to describe a concept that spreads quickly via the Internet.[1] The term is a reference to the concept of memes, although this concept refers to a much broader category of cultural information. [wikipedia.org]

Re: (Score:2)

"Let's make parodies of Hitler having a spaz" is pretty weak as ideas go. It's pretty derivative of MST3K (which goes back to the late 80s, before either the Internet or the use of the word meme became mainstream). There's also not much in the way of memetic evolution possible with do

Re:BBC already wrote good article on this (Score:4, Informative)

Re: (Score:2)

Yes, but you're on the internet, so local namespace dictates that "meme" refers to "internet meme", just like "post" by default refers to "post on a website" rather than "post office".

Downfall is a really good movie (Score:3, Interesting)

...but difficult to watch if you're squeamish about real-world evil. The parodies that I've seen, though (of the approximately 700,000 of them on YouTube), are hit and miss, though I'm pretty sure this is exactly the kind of thing that's defensible as fair use.

Re: (Score:2)

Re: (Score:2)

The same scene over and over does not count.

Re:Downfall is a really good movie (Score:5, Funny)

I watched different scenes on Youtube.
Pretty strange movie, though. The actors do the same things over and over, and only the subtitling is different?!

Re:Downfall is a really good movie (Score:5, Insightful)

The funny part is, I never would have heard about the movie (and subsequently bought a copy on DVD) if not for the Youtube parodies. Free advertising? Bah!

Re: (Score:2)

wouldn't this story generate even more free ads?

Re:Downfall is a really good movie (Score:5, Funny)

Re: (Score:2)

"Yeah! I never would have heard about the movie and subsequently downloaded it from The Pirate Bay if not for the Youtube parodies too!"

I would ditto that, but I can't quite bring myself to dedicate 12 to 20 hours of bandwidth so that I can watch some movie that I may like or not. I'm sure not going to spend MONEY on it!!

Well what does the director have to say about it? (Score:5, Informative)

Re:Well what does the director have to say about i (Score:4, Interesting)

By the way, I saw this movie in the theater at a foreign film festival. It made it all the more funny to see the viral videos start popping up, since I remembered the scene vividly and it's a pretty powerful movie. Although, I saw it with a German girl and her comment was that Hitler movies were passe in Germany since so many had been made. I thought it was good though.

Re: (Score:2)

Re: (Score:3, Interesting)." [nymag.com].

Re: (Score:2)

The article also ends with the director saying "If only I got royalties for it, then I'd be even happier." But removing the videos from youtube wouldn't help him with getting royalties, so yeah. It is rather stupid. Doesn't youtube have a revenue-sharing system for MAFIAA-sourced content? I know that some stuff they take down saying that the MAFIAA told them to block it, and some stuff they tell you (when you post it) is OK because they have some sort of agreement with the copyright owner of the original materials.
Re: (Score:3, Insightful)

To continue with my point (I hit submit instead of preview) - I bet the reason the director can't get any royalties is because his contract with the studio doesn't mention youtube clips, so the studio gets to keep any money generated all for themselves. That's the kind of bullshit that "hollywood accounting" is famous for.

Re:Well what does the director have to say about i (Score:4, Insightful)

Well really it's stupid regardless of what the director has to say. I could imagine the director taking it all very seriously and being upset that people were making fun of his movie or making light of Hitler's actions. Still, forcing these clips to be taken down would be stupid. These parodies aren't being done for profit. They're not competing with the movie. They're not taking away from the movie. Nobody is going to watch these clips and say, "Well I don't need to see this movie now." This isn't what copyright was created for. The whole thing might even be covered under the first amendment as parody.

Re: (Score:2)

Parodies are directly protected under fair use. So he can scream and yell about it, but youtube is just proving with an automated system it has no clue.

Re: (Score:3, Insightful)

MS Flight Simulator X parody (Score:2)

There used to be an MS Flight Simulator X parody that was roll-on-the-floor hilarious, with a constant stream of in-jokes about the frustrations of Flight Simulator enthusiasts with the last, initially buggiest (still not all bugs resolved) and resource-hungry version of the simulator. On initial release you had to do all sorts of tweaking to get a usable system. Two service packs and an addon pack later it was more usable, but still many hobbyists were divided between FS2004 (the previous version) and

Re: (Score:2)

Why not move to X-plane or flightgear? Seems like a better solution for players and the developers that want to make addons for them.

Re: (Score:3, Informative)

Why not move to X-plane or flightgear?
Seems like a better solution for players and the developers that want to make addons for them.

Many many reasons:

- They are STILL not as sophisticated or feature complete. Some of it is extreme. Joystick support is still not as easy as it should be in Flightgear.
- Momentum - not nearly as many addons now means it's harder to get the ball rolling.
- Both simulators keep changing even in minor releases. Makes it difficult for part-time content creators to keep up, and less worthwhile when you know a new version will break it. Well, FSX isn't fantastic for backward compatibility - one of many mistakes, but FS

Re: (Score:2, Redundant)

Wow! Something totally un-MS related and some fucktard still finds a way to slip it into the conversations. What a fucking zealot.

Actually I wrote extensively against Microsoft's DRM on FSX in a number of places, including the MS Flight Sim newsgroup at the time. It's the second hit if you type FSX and DRM into Google. I totally hate that they killed off the franchise. Don't let reality get in the way of your anonymous name calling though. By the way, you should get a refund on your education. You clearly fail comprehension: you don't understand how a parody of FSX based on a movie is related to the topic of parodies of that movie. You, sir, make those Jehovah's Witness kooks look like mere amateurs.

Well Ridiculous (Score:2, Insightful)

Some of these parodies should be in the Smithsonian. Constantin Films, just like any other company run by idiots, certainly enjoys the free hosting of their movie trailers and whatever else they have to promote their stupid movies.

this is what intellectual property means: (Score:5, Insightful)

the impoverishment of our culture

no story, no art, is ever original. it all borrows or reinvents or reinterprets something that came before.
and if the thread of our cultural output is artificially taxed, strained and stamped out by demands for cash, then all of us, all of our lives, are less rich for that

maybe content creators would understand that parodies like this downfall clip actually create interest in the original, and are really just a form of advertisement. instead, imagine all the culturally relevant art that we will never see, that can never see the light of day, because a greedy selfish system would rather lock art behind lock and key, where it earns no cash, rather than let it get out there and bloom, and create more art, and create more COMMERCE

art, music, movies, all creative output has the unique property of being richer when it is allowed to flow freely and freely intermingle. why do we have to lead less rich cultural lives only because some fucking trolls in the bank vault can't see that? that if there were no such thing as intellectual property, the ancillary streams they could tap in the free flow of cultural output would be richer sources of cash than their feeble and failed approaches to control what they cannot and will never be able to control?

Re: (Score:2)

maybe content creators would understand that parodies like this downfall clip actually create interest in the original, and are really just a form of advertisement.

Too true. Youtube and these parodies must have driven rentals and sales of the DVD through the roof.

Re: (Score:2)

Re: (Score:2)

Posts like this deserve more than +5

Re: (Score:3, Interesting)

I'll admit that intellectual property kills culture a little bit. Right after you admit that unbridled sharing kills culture a lot more. Besides, as you pointed out, leaving these clips up could be a form of advertisement. The only problem is if the free advertisement ends up being a substitute for the non-free whole package. So, the concept of intellectual property, and sharing small portions (or small parodies) of the work, are far from mutually exclusive.
Re: (Score:2)

Ayn Rand never talked about company-owned copyright, IIRC. Now, she did discuss copyright itself, and supported lifetime of author plus 50 years. However, I doubt she would have supported the idea of corporation-owned copyright, and if she did, she probably would want it treated as if the author had 'died'. So a corp has 50 years to play with the material.

sounds like a bad business decision (Score:5, Interesting)

Sure, hardly anyone posting a youtube vid will be interested in licensing the scene. It's short-sighted to consider only that aspect and think of it as lost revenue. This meme is a big one. If properly nurtured, it could ensure future rental revenue in the way that only cult movie status can. I also only --and legally-- rented the movie after watching the Xbox Live parody. The movie was a large international success upon its release, but it didn't make my radar. The parodies can be so funny because the banality of the fake subtitles is so incongruent with the remarkably powerful acting. My thought process went from "this is hilarious" to "wow what a great scene... I need to watch this movie".

Re: (Score:2)

Just this minute I showed my GF a Downfall parody. It was her first. The first thing she said after it was over was "Wow, that makes Downfall look like an interesting movie. We should get it." I can only conclude that the guy who thought takedowns for Downfall were a good idea was the same guy who thought "New Coke" was a good idea.

Re: (Score:2)

I can only conclude that the guy who thought takedowns for Downfall were a good idea was the same guy who thought "New Coke" was a good idea.

You're right, if you mean pure genius! Both are free advertising on top of free advertising. I had never heard of Downfall until they sent their takedown notices; now I've watched this parody _and_ I kind of want to watch the movie.

Mine's still up (Score:5, Informative)

I received a "Notice of potential infringement" from YouTube very soon after posting this one [youtube.com] a week ago.
The video, which had initially been accessible, was pulled from the site. There was an option to appeal the takedown notice, and I filled it out, providing as a reason "Parody is a recognized fair use under US copyright law." I'm actually not sure if you can play the fair-use card when using the content owner's IP to mock an unrelated subject, but in any event the appeal seemed to be accepted by YouTube, because access to the video was restored within a few hours. So, for what it's worth, if your video gets pulled by Youtube, try filling out the appeal form.

Re:Mine's still up (Score:4, Informative)

So it sounds as though you're going to want to read over 17 USC 512(g), which covers this sort of thing. Long story short, the idea is that if material is taken down due to a DMCA notification, which service providers (including YouTube, given how that term is defined in the law) obey in order to be protected from lawsuits regarding things other people do with their service, it can be put back up in a way that continues to protect the service provider. But the two opposing parties are made aware of each other so that they can hash the issue out in court, possibly with the court ordering that the material be taken down again. Here's the relevant subsection:

Re: (Score:2)

There was an option to appeal the takedown notice, and I filled it out, providing as a reason "Parody is a recognized fair use under US copyright law."

That's a common misconception, largely due to the press doing its usual poor job of reporting Supreme Court decisions (Campbell v. Acuff-Rose Music, Inc.). An accurate statement of the law is that parody may be fair use. Basically, the district court said parody was fair use. The appeals court said it wasn't. The Supreme Court said it could be--it's one of the things you consider when weighing the nature of the work--and sent the case back down to the district court to try again.

Glad they got on that before anybody saw them..
(Score:2, Funny)

Der Untergang (Score:5, Informative)

I saw the original movie, Der Untergang [imdb.com], which is its original German name, in my German Studies class in high school, and recommend it to anybody interested in more than just Godwin's Law. Watch it. Must see.

Re:Der Untergang (Score:4, Interesting)

It is indeed a fantastic film, highly recommended. Being married to a German, having lived in Berlin for seven years, and with both my kids having been born there, I have long felt that it's absolutely incumbent upon me to really try and understand what happened in 1930s and 1940s Germany, rather than continuing to hide behind the simplistic "we won, you lost, you killed lots of Jews, Germans are bad" attitude that was drummed into most of us growing up in the UK in the 1960s and 1970s. That's not to condone or forgive anything at all, but it's important that we understand why a deeply civilised nation went so catastrophically off the rails in the first half of the twentieth century, if only to look inwards and ask ourselves, each and every one of us, what would it take for me to go down a similar road. Only then, I believe, can you try and avoid it. Again, it's too trivial to say "never" without thinking about it: we're all human and all capable of extreme actions in extreme circumstances, I believe. In that regard, Der Untergang is a truly crucial addition to the literature (be it written or visual) on this very important topic.

EFF's own parody video... (Score:3, Informative)

The clip: [youtube.com]

Re: (Score:2)

That was great. I almost burst into tears of laughter at the line about Stallman.

Anyone have a link to the Cloud Computing Episode? (Score:2)

Send them an email and let them know how you feel! (Score:3, Informative)

In related news... (Score:3, Interesting)

Hitler's relatives sue Constantin Films for copyright infringement of his private conversations while in the bunker.

Tie to Gov't Publication Service, Please?
(Score:2)

Earlier today I saw this most worthwhile project by Google to publish Government takedowns and data requests: [slashdot.org]

Now this article makes me ponder... Open Letter to Google/YouTube: I can totally dig that the volume of possible copyright infringement -- and hence the volume of takedown notices -- on YouTube is enormous. So large that automated processing is effectively required to keep compliance costs at a manageable level. So how

Hitler finds out about the death star (Score:2)

[youtube.com]

420 (Score:2)

Seems an appropriate date for this story.

This comes at a very bad time (Score:2, Funny)

Because a parody video of Hitler as Steve Jobs discovering the loss of the iPhone 4G prototype just must be made.

I thought I'd find you holding his leash (Score:2)

Wow - holy overreacting Batman (Score:2)

The "Hitler reacts to iPad's release" was one of my all-time favorite parodies. I simply can't believe Youtube removed it. Hey, Google, you're smart guys; this move is wholly out of character.

Lawyers with time on their hands (Score:2)

Monty Python did this right (Score:3, Insightful)

[slashdot.org]

See plenty of their clips (legally & for free) here: [youtube.com]

Since the director of the film apparently *likes* the parodies, why not organise a competition, with a YouTube channel for the winners? Yippee, instant good karma for the movie industry (for a change), instead of this Streisand effect boomerang. All the parody clips will be back, or posted elsewhere, within minutes anyway....

These are not parodies. (Score:3, Informative)

In reality, the bunker scene depicts Hitler reacting furiously to the news that the war is lost as Soviet troops close in on Berlin. The internet parody leaves the video and audio intact, but replaces the subtitles with Hitler reacting to ridiculous everyday events, like having his Xbox Live account canceled, or finding out that Michael Jackson died.
I don't see how this qualifies as parody when the only thing changed is the subtitle text. The clips are humorous when done well, and an argument might be made for fair use, but this is not parody. Parody requires imitation, whereas this is closer to annotation.

Re: (Score:3, Funny)

Identifacition? Really? What? That's a perfectly cromulent word that embiggens us all.

Re:JUST LIKE THE NAZIS! (Score:5, Funny)

Re: (Score:2, Funny)

Re: (Score:3, Interesting)

That's why I do speak up when they come for the fascists [wikipedia.org].

Re: (Score:2, Funny)

No, that was "JewTube".

+1 Correct (Score:2)

These automatic content detectors CANNOT evaluate whether or not the content is used under Fair Use. AFAICT, they have no copyright-based justification for removal of these videos. If this is in response to anything DMCA-related, the video submitters can strike penalties against YouTube or the complaining party if this is a bogus takedown of protected content, right? Incidentally, I had no idea what the name of the parodied movie was until this /. story. I've wanted to check it out.

Re: (Score:3, Informative)

I'm pretty sure the production qualities of the original are not a factor in determining fair use. There's the amount and substantiality of the part of the work used (which is based on the whole movie, not the clip), effect on the market for the original work (zip or net positive), purpose of use, and nature of the work. Parody i

Re: (Score:2, Informative)

The legal definition [thefreedictionary.com] of "parody" is: A form of speech protected by the First Amendment as a "distorted imitation" of an original work for the purpose of commenting on it. The key words (from both our definitions) are "imitated" and "imitation". The work in question is not an imitation. It is an exact copy with some minor modifications. I should also point out that the work in question was not providing any type of commentary on the original.
Now, there may indeed be some fair use protection provided by the four factors [cornell.edu] outlined in the law, but nevertheless, this was a bad example that they themselves created. More convincing would have been

Re: (Score:2)

Re: (Score:2, Interesting)

Re:Mirror of "Hitler Finds out his videos are remo (Score:2)

So, do you spell your name Sairam or Sariam (it's spelled both ways on your story's page)? Doesn't matter, I guess. It'll be gone soon enough.
Swapping elements of a C++ STL container is the same as swapping normally:

```cpp
// Swapping 2 elements of a STL container
#include <algorithm>
std::swap(ivec.at(0), ivec.at(1)); // Swap [0] & [1]
```

C++ function templates are a great way to reduce the amount of repeated code written to do the same operations on objects of different types. For example, a minimum function that handles any type could be written as:

```cpp
template <typename T>
T min(T a, T b)
{
    return (a > b) ? b : a;
}
```

There are 2 ways to organize the template code: the inclusion model and the separation model. In the inclusion model, the complete function definition is included in the header file. In the separation model, the function declaration is in the header file and the function definition is put in the source file. However, that won't work as expected: the keyword export needs to be used with the function definition (i.e., export template <typename T> …). Though this is Standard C++, even this won't work! Welcome to the real world of C++. No major compiler out there supports this clean separation model (the export keyword). Even the latest VC++ 8 does not support it. Nor does gcc.

One of the most popular families of texture compression algorithms used in OpenGL is the DXTn series, which was introduced by S3 Graphics. Hence, they're known as S3TC. The working of this algorithm can be found in the Appendix of GL_EXT_texture_compression_s3tc. There are 5 versions available, ranging from DXT1 to DXT5. DXT1 is briefly explained below:

Compression: A 4×4 texel block (48 bytes if a texel is RGB) is compressed into 2 16-bit color values (c0 and c1) and a 4×4 2-bit lookup block.
c2 and c3 are calculated from c0 and c1 as follows:

If c0 <= c1:
    c2 = (c0 + c1) / 2; c3 = not defined
else:
    c2 = (2 * c0 + c1) / 3; c3 = (c0 + 2 * c1) / 3

Decompression: Decompression is extremely fast. It is just a lookup of 2-4 precomputed values. Read the 2-bit value of each compressed pixel: if 00, then read the RGB of c0; if 01, then read the RGB of c1; and so on.

VTC: VTC (GL_NV_texture_compression_vtc) is also based on the above ideas; it just extends the texel blocks in the z direction.

When programming in C++, mixing up C++ and C data types becomes an ugly inevitability. It always throws up some quirky behaviour. POD (Plain Old Data) is one of these I discovered today. C macros can be used unchanged under C++. But, the correct behaviour under C++ depends on the type of data being operated on. It needs to be of POD type. Here is some information about the POD type from the excellent C++ FAQ Lite [26.7]: references are not allowed as members. In addition, a POD type can't have constructors, virtual functions, base classes, or an overloaded assignment operator.

On Visual C++ 2005, I allocated a large local array in a function. The program got a stack overflow exception and ended inside chkstk.asm. I'm used to the stack size limit on Linux/Cygwin, which is usually 2MB. The limit can be found using the bash builtin command ulimit:

```shell
$ ulimit -s
2042    # (KB)
```

But, the array I was allocating under VC++ 2005 was just a bit larger than 1MB. On further digging, I found that the default stack size on VC++ 2005 is 1MB. This stack size limit can be modified using: Project → Properties → Configuration Properties → Linker → System → Stack Reserve Size. More information on the stack size limit can be found on the MSDN page for the /STACK linker option.

I find myself having to indicate the libraries I want linked in every time I do this. I found a neat (non-portable) trick in Visual Studio to do this.
Use the #pragma comment(lib, "libfile") [1] preprocessor directive to hint your compiler/linker to include these library files for linking. For example:

```cpp
// Link Cg libraries
#pragma comment(lib, "cg.lib")
#pragma comment(lib, "cggl.lib")
```

[1] msdn.microsoft.com/library/en-us/vclang/html/_predir_comment.asp (via Adding MSDEV Libraries)

A colleague informed me today that my name had appeared in the April 2005 issue of the Embedded Systems Programming magazine. Back in December 2004, I had commented to Dan Saks about his article More ways to map memory on the usage of the available C fixed-width integer types. We had an email discussion on it and I forgot all about it. I had blogged earlier about these types. In his latest article, Sizing and aligning device registers, he mentions that email conversation. I know this is not anything significant, but this is the first time my name has appeared in a deadwood tech magazine! 🙂

Ashwin N (ashwin.n@gmail.com) suggested yet another way to define the special_register type: If you want to use an unsigned four-byte word, shouldn't you be doing:

```cpp
#include <stdint.h>
/* ... */
typedef uint32_t volatile special_register;
```

This should work with all modern standard C compilers/libraries. The typedef uint32_t is an alias for some unsigned integer type that occupies exactly 32 bits. It's one of many possible exact-width unsigned integer types with names of the form uintN_t, where N is a decimal integer representing the number of bits the type occupies. Other common exact-width unsigned types are uint8_t and uint16_t. For each type uintN_t, there's a corresponding type intN_t for a signed integer that occupies exactly N bits and has two's complement representation. I have been reluctant to use <stdint.h>. It's available in C99, but not in earlier C dialects nor in Standard C++. However, it's becoming increasingly available in C++ compilers, and likely to make it into the C++ Standard someday.
Moreover, as Michael Barr observed, if the header isn't available with your compiler, you can implement it yourself without much fuss. I plan to start using these types more in my work. Again, using a typedef such as special_register makes the exact choice of the integer type much less important. However, I'm starting to think that uint32_t is the best type to use in defining the special_register type.

Came across a puzzling piece of code today. The actual code is confusing; however, it basically boils down to this:

```c
#include <stdint.h>

int main()
{
    uint32_t val = 1;
    uint32_t count = 32;
    val = val >> count;
    return 0;
}
```

What do you think will be the result in val? Me thought 0. Turned out to be 1. After further investigation, I found that this was due to a combination of undefined behaviour in C, vague behaviour of certain IA-32 architecture operations, and my ignorance of both.

On examining the code above, it is natural to think that 32 right shifts applied on val would boot out the puny 1 and the result would be 0. Though this is right almost always, it has some exceptions. From The C Programming Language [1]: The result is undefined if the right operand is negative, or greater than or equal to the number of bits in the left expression's type. (Taking val >> count as an example, the left expression is val and the right operand is count.) So, that explains why the result should not be relied on. But why is val 1? On digging deeper, I found that the compiler [2] generated the Intel instruction sar or shr (or its variants) for the C shift operation. And here lies another nasty bit of info. From the IA-32 Intel Architecture Software Developer's Manual [3], the shift count is masked to 5 bits.

So, not only is the behaviour in C undefined; on code generated for IA-32 processors, a 5-bit mask is applied to the shift count. This means that on IA-32 processors, the range of a shift count will be 0-31 only.

[1] A7.8 Shift Operators, Appendix A.
Reference Manual, The C Programming Language
[2] Observed with both Visual C++ and GCC compilers
[3] SAL/SAR/SHL/SHR – Shift, Chapter 4. Instruction Set Reference, IA-32 Intel Architecture Software Developer's Manual
A unit file for the 404 micro-service1:

```ini
[Unit]
Description=404 micro-service

[Service]
Type=notify
ExecStart=/usr/bin/404
WatchdogSec=30s
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The classic way for a daemon to signal its readiness is to daemonize2. With systemd, we would use Type=forking in the service file. However, Go's runtime does not support that. Instead, we use Type=notify. In this case, systemd expects the daemon to signal its readiness with a message to a Unix socket3. Instead of using panic(), it could signal its situation before dying. Another dedicated component could try to resolve the situation by restarting the faulty component. If it fails to reach a healthy state in time, the watchdog timer will trigger and the whole service will be restarted.

1. Depending on the distribution, this should be installed in /lib/systemd/system or /usr/lib/systemd/system. Check with the output of the command pkg-config systemd --variable=systemdsystemunitdir.
2. This highly depends on the NTP daemon used. OpenNTPD doesn't wait unless you use the -s option. ISC NTP doesn't either unless you use the --wait-sync option.
3. An example of an exceptional condition is to reach the limit on the number of file descriptors. Self-healing from this situation is difficult and it's easy to get stuck in a loop.

I was a happy user of rxvt-unicode until I got a laptop with… Let's start small, with a terminal with the default settings. We'll write that in C. Another supported option is Vala.

```c
#include <vte/vte.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);
    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *terminal = vte_terminal_new();

    /* Start a shell in the terminal widget */
    gchar *command[] = { "/bin/sh", NULL };
    vte_terminal_spawn_sync(VTE_TERMINAL(terminal),
        VTE_PTY_DEFAULT,
        NULL,       /* working directory */
        command,    /* command */
        NULL,       /* environment */
        0,          /* spawn flags */
        NULL, NULL, /* child setup */
        NULL,       /* child pid */
        NULL, NULL);

    gtk_container_add(GTK_CONTAINER(window), terminal);
    gtk_widget_show_all(window);
    gtk_main();
}
```

Compile it with:

```shell
gcc $(pkg-config --cflags --libs vte-2.91) term.c -o term
```

And run it with ./term. From here, you can have a look at the documentation to alter behavior or add more features. Here are three examples1 (one of them also enables background transparency):

```c
vte_terminal_set_rewrap_on_resize(VTE_TERMINAL(terminal), TRUE);
vte_terminal_set_mouse_autohide(VTE_TERMINAL(terminal), TRUE);
```

This will rewrap the text when the window is resized and hide the mouse pointer while typing.
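The Type=notify handshake described in the systemd section above boils down to writing READY=1 to the datagram socket systemd passes in $NOTIFY_SOCKET. A minimal hand-rolled sketch (real services would rather use a library such as go-systemd; the function name and structure here are mine):

```go
package main

import (
	"fmt"
	"net"
	"os"
)

// notify sends a state string such as "READY=1" or "WATCHDOG=1" to the
// datagram socket that systemd advertises through $NOTIFY_SOCKET.
// It is a silent no-op when not running under systemd.
func notify(state string) error {
	socket := os.Getenv("NOTIFY_SOCKET")
	if socket == "" {
		return nil // not supervised by systemd
	}
	conn, err := net.DialUnix("unixgram", nil,
		&net.UnixAddr{Name: socket, Net: "unixgram"})
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = conn.Write([]byte(state))
	return err
}

func main() {
	// ... set up listeners and handlers here, then: ...
	if err := notify("READY=1"); err != nil {
		fmt.Fprintln(os.Stderr, "notify:", err)
	}
}
```

The same function can be reused to feed the watchdog: with WatchdogSec=30s, the service would periodically send "WATCHDOG=1" at some interval safely below 30 seconds.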
UPDATED: you can also have a look at Filippo Valsorda's example or my own take, which I describe in more details here. This is not meant to be a universal Makefile but a relatively short one with some batteries included. It comes with a simple "Hello World!" application. Let's take a look at the various "features" of the Makefile.

GOPATH handling

We define two rules2: simply issue go get3 to download and build golint. In ❷, the lint rule executes golint on each package contained in the $(PKGS) variable. We'll explain this variable in the next section.

Some commands need to be provided with a list of packages. Because we use a vendor/ directory, the shortcut ./... is not what we expect, as we don't want to run tests on our dependencies4. The rules for those tools are similar to the rule for golint described a few sections ago. In ❹, for each package to test, we run go test with the -coverprofile argument. We also explicitly provide the list of packages to instrument to -coverpkg by using go list to get a list of dependencies for the tested package and keeping only our own.

1. If you don't want to automatically update glide.lock when a change is detected in glide.yaml, rename the target to deps-update and make it a phony target.
2. There is some irony in bad-mouthing go get and then immediately using it because it is convenient.
3. I think ./... should not include the vendor/ directory by default. Dependencies should be trusted to have run their own tests in the environment they expect them to succeed. Unfortunately, this is unlikely to change.

Let's replace the shouldDenyRequest() method from the org.xwalk.core.internal.SslUtil class with this version:

```java
// In SslUtil class
public static boolean shouldDenyRequest(int error) {
    return false;
}
```

While the creation of Debian packages is abundantly documented, most tutorials are targeted at packages implementing the Debian policy. Moreover, Debian packaging has a reputation of being unnecessarily difficult1 and many people prefer to use less constrained tools2 like fpm or CheckInstall. However, I would like to show how using Debian tools to build packages can be straightforward. The first file is debian/compat; just put 9 in it:

```shell
echo 9 > debian/compat
```

The second one has the following content: memcached …

At this point, we can iterate and add several improvements to our memcached package. None of those are mandatory, but they are usually worth the additional effort. dpkg-buildpackage will complain if the dependencies are not met. If you want to install those packages from your CI system, you can use the following command4:

```shell
mk-build-deps \
  -t 'apt-get -o Debug::pkgProblemResolver=yes --no-install-recommends -qqy' \
  -i -r debian/control
```

You may also want to investigate pbuilder or sbuild, two tools to build Debian packages in a clean isolated environment.
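For reference, the recipe driving such a pragmatic package can be a tiny debian/rules file that hands everything to the dh sequencer (a generic sketch of the standard debhelper boilerplate; a real package like memcached would add a few override targets on top):

```make
#!/usr/bin/make -f

# Delegate every target (clean, build, install, binary, …) to debhelper.
%:
	dh $@
```

With this in place, dpkg-buildpackage can already produce a .deb; everything else described below refines that baseline.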
Moreover, Debian packaging has a reputation of being unnecessarily difficult1 and many people prefer to use less constrained tools2 like fpm or CheckInstall. However, I would like to show how. Just put 9 in it: echo 9 > debian/compat The second one has the following content: memcached . At this point, we can iterate and add several improvements to our memcached package. None of those are mandatory but they are usually worth the additional effort. those. dpkg-buildpackage will complain if the dependencies are not met. If you want to install those packages from your CI system, you can use the following command4: mk-build-deps \ -t 'apt-get -o Debug::pkgProblemResolver=yes --no-install-recommends -qqy' \ -i -r debian/control You may also want to investigate pbuilder or sbuild, two tools to build Debian packages in a clean isolated environment.) Most packaged daemons come with some integration with the init system. This integration ensures the daemon will be started on boot and restarted on upgrade. For Debian-based distributions, there are several init systems available. The most prominent ones are: Writing a correct script for the System-V init is error-prone. Therefore, I usually prefer to provide a native configuration file for the default init system of the targeted distribution (Upstart and systemd). If you want to provide a System-V init script, have a look at /etc/init.d/skeleton on the most ancient distribution you want to target and adapt it5.6. Like for Upstart, the directive Type is quite important. We used forking as memcached is started with the -d flag. is not part of debhelper7. Without those additional modifications, the unit will get installed but you won’t get a proper integration and the service won’t be enabled on install or boot. Many daemons don’t need to run as root and it is a good practice to ship a dedicated user. In the case of memcached, we can provide a _memcached user8.. 
It is possible to leverage debhelper to reduce the recipe size and to make it more declarative. This section is quite optional and it requires understanding a bit more how a Debian package is built. Feel free to skip it. There are four steps to build a regular Debian package:
debian/rules clean should clean the source tree to make it pristine.
debian/rules build should trigger the build. For an autoconf-based software, like memcached, this step should execute something like ./configure && make.
debian/rules install should install the file tree of each binary package. For an autoconf-based software, this step should execute make install DESTDIR=debian/memcached.
debian/rules binary will pack the different file trees into binary packages.
You don’t directly write each of those steps [9]. While make install installed the essential files for memcached, you may want to put additional files in the binary package. You could use cp in your build recipe, but you can also declare them:
debian/memcached.docs will be copied to /usr/share/doc/memcached by dh_installdocs,
debian/memcached.examples will be copied to /usr/share/doc/memcached/examples by dh_installexamples.
Those files make the build process more declarative. It is a matter of taste and you are free to use cp in debian/rules instead. You can review the whole package tree on GitHub. The goal of those examples is to demonstrate that using Debian tools to build Debian packages can be straightforward. Hope this helps. People may remember the time before debhelper 7.0.50 (circa 2009) when debian/rules was a daunting beast. However, nowadays, the boilerplate is quite reduced. ↩ The complexity is not the only reason. Those alternative tools enable the creation of RPM packages, something that Debian tools obviously can’t do. ↩ You also need to install the devscripts and equivs packages. ↩ It’s also possible to use a script provided by upstream. However, there is no such thing as an init script that works on all distributions.
Compare the proposed script with the skeleton, check if it is using start-stop-daemon and if it sources /lib/lsb/init-functions. See #822670. For packages targeting Debian Stretch or Ubuntu Yakkety or more recent, putting 10 in debian/compat should be enough to remove dh-systemd from the build dependencies and --with systemd from the dh invocation. ↩ A drawback of this last solution is that the name is likely to be replaced by the UID in ps and top because of its length. ↩ We could call dh_auto_clean at the end of the target to let it invoke make clean. However, it is assumed that a fresh checkout is used before each build. ↩ Those tests leverage Linux network namespaces to set up a lightweight and isolated environment for each test. They run through pytest, a powerful testing tool. pytest is a Python testing tool whose primary use is to write tests for Python applications, but it is versatile enough for other creative usages. It is bundled with three killer features: the assert keyword, … The general plan to test a feature in lldpd is the following: …, start an lldpd process in each namespace, … Several fixtures are provided to keep the actual tests short: lldpd is a factory to spawn an instance of lldpd. When called, it will set up the current namespace (setting up the chroot, creating the user and group for privilege separation, replacing some files to be distribution-agnostic, …), then call lldpd with the additional parameters provided. The output is recorded and added to the test report in case of failure. The module also contains the creation of the pytest.config.lldpd object that is used to record the features supported by lldpd and skip non-matching tests. You can read fixtures/programs.py for more details. lldpcli is also a factory, but it spawns instances of lldpcli, the client to query lldpd. Moreover, it will parse the output into a dictionary to reduce boilerplate. You can read fixtures/namespaces.py for more details. It is quite reusable in other projects [2].
links contains helpers to handle network interfaces: creation of virtual ethernet links between namespaces, creation of bridges, bonds and VLANs, etc. It relies on the pyroute2 module. You can read fixtures/network.py for more details. … /proc must be mounted in children only. The third limitation is that, for some namespaces (PID and user), all threads of a process must be part of the same namespace. Therefore, don’t use threads in tests. Use the multiprocessing module instead. ↩
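The factory pattern described above (a fixture returning a callable that spawns a process and parses its key=value output into a dictionary) can be sketched in plain Python. This is a hypothetical illustration of the idea, not the actual lldpcli fixture; the command and output format are assumptions for the example:

```python
import subprocess

def make_cli_factory(base_cmd):
    """Sketch of an lldpcli-style factory fixture: run `base_cmd` with
    extra arguments and parse 'key=value' output lines into a dict to
    reduce boilerplate in the tests."""
    def run(*args):
        completed = subprocess.run(
            list(base_cmd) + list(args),
            capture_output=True, text=True, check=True,
        )
        return dict(
            line.split("=", 1)
            for line in completed.stdout.splitlines()
            if "=" in line
        )
    return run
```

In a real pytest suite, a function decorated with @pytest.fixture would build and return such a factory so each test receives a preconfigured instance.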
https://vincent.bernat.im/en/blog/atom.xml
For a long time I was using Python 2.5 to do all this fine but recently upgraded to 2.7 since building stuff for 2.5 is a real pain. I also updated mod_wsgi to 3.3 for Python 2.7. Everything is working fine with Apache + mod_wsgi on CentOS and also in the Django runserver on both Windows and CentOS, but not with Apache + mod_wsgi on Windows. Whenever I try to access a page in my Django app I get the following (note that Apache starts fine): ImportError at / DLL load failed: The specified module could not be found. Which is caused by things like: from Crypto.Cipher import AES Etree and others cause the exact same error and it is not limited to any specific packages. Anything with pyd files fails. Googling around suggests reinstalling Python "for all users", but the installer doesn't give you that option anymore anyway. For good measure I've tried reinstalling Python 2.7 as an administrator and also told it to register itself as the default version of Python, but neither helped. I think the solution might have something to do with: A solution that worked for me that allowed me to use Python 2.7 (although not very desirable) is to build the Crypto module with MinGW. Download the Crypto source package and run setup.py build --compiler=mingw32. See this question for more information: I ran into similar problems, which eventually appeared to be related to this and also issue 4120 (new style of DLL hell). Using Python 2.5 (the version before these bugs started) solved this for me.
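When chasing this kind of mismatch, it helps to confirm which interpreter and module search path the embedded interpreter is actually using, since a "DLL load failed" usually means mod_wsgi is not resolving the same DLLs as the command-line Python. A minimal diagnostic WSGI app (hypothetical, not tied to any particular Django project) could look like this:

```python
import sys

def application(environ, start_response):
    # Report the interpreter, version and module search path so they can
    # be compared with what the command-line interpreter prints for
    # sys.executable and sys.path.
    body = "\n".join([sys.executable, sys.version] + sys.path).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Pointing WSGIScriptAlias at this file and comparing the output against the shell interpreter quickly shows whether two different Python installations are in play.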
http://serverfault.com/questions/180620/windows-django-mod-wsgi-dll-load-failed
Schema inspection for PostgreSQL (and potentially others in the future). Inspects tables, views, materialized views, constraints, indexes, sequences and functions. Limitations: Function inspection only confirmed to work with SQL/PLPGSQL languages so far. Doesn’t inspect function modifiers (IMMUTABLE/STABLE/VOLATILE, STRICT, RETURNS NULL ON NULL INPUT, etc). Basic Usage Get an inspection object from an SQLAlchemy session or connection as follows: from schemainspect import get_inspector from sqlbag import S with S('postgresql:///example') as s: i = get_inspector(s) The inspection object has attributes for tables, views, and all the things it tracks. At each of these attributes you’ll find a dictionary (OrderedDict) mapping from fully-qualified-name-of-thing-in-database to information object. For instance, the information about a table books would be accessed as follows: >>> books_table = i.tables['"public"."books"'] >>> books_table.name 'books' >>> books_table.schema 'public' >>> [each.name for each in books_table.columns] ['id', 'title', 'isbn'] Documentation Documentation is a bit patchy at the moment. Watch this space!
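As a small illustration of working with this mapping, here is a hypothetical helper (not part of schemainspect) that uses only the interface documented above, i.e. the tables dictionary and each column's name attribute, to find tables lacking a given column:

```python
def tables_missing_column(tables, column):
    """Given a schemainspect-style mapping of qualified table names to
    table-info objects (each exposing `.columns` with a `.name`), return
    the names of tables that have no column called `column`."""
    return [
        name
        for name, info in tables.items()
        if all(col.name != column for col in info.columns)
    ]
```

With a live inspection object this would be called as, say, tables_missing_column(i.tables, "id") to audit a naming convention across a schema.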
https://test.pypi.org/project/schemainspect/
Thanks HSE, I have implemented that in the attached UASM. Biterider, try if it fixes that problem. It is still with the #if fix. Hi LiaoMi, did you have a problem with the "#if" hack? This could be important to know to understand a possible compatibility issue. Biterider The release version of UASM.exe posted above is with the fix for #if, affecting only "if 0 and", which is safe and should be rejected by MASM as well. IF defined(SYMBOL) and (SYMBOL ge 1020) gets converted in the first pass to IF 0 and (SYMBOL ge 1020) if SYMBOL is not defined, and IF 1 and (SYMBOL ge 1020) if SYMBOL is defined. There are some other small fixes included. Use this one until Johnsa builds it and uploads it on GIT and Terraspace.

if (defined(__midl) and (501 lt __midl))
    INT_PTR typedef SDWORD
    …
else
    if defined(_WIN64)
        INT_PTR typedef QWORD
        …
    else
        INT_PTR typedef _W64 …
    endif
endif

??AND_001 = 0
if defined(__midl)
    if 501 lt __midl
        INT_PTR typedef SDWORD
        …
    else
        ??AND_001 = 1
    endif
else
    ??AND_001 = 1
endif
if ??AND_001 eq 1
    if defined(_WIN64)
        INT_PTR typedef QWORD
        …
    else
        INT_PTR typedef _W64 …
    endif
endif
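The first-pass conversion described above can be modelled in a few lines. This is a hypothetical sketch of the behaviour for illustration, not UASM's actual implementation:

```python
import re

def first_pass(line, defined_symbols):
    """Model of the described fix: replace every `defined(NAME)` with
    1 or 0 before the rest of the expression is evaluated, so that
    `IF 0 and (SYMBOL ge 1020)` never dereferences an undefined symbol."""
    return re.sub(
        r"defined\((\w+)\)",
        lambda m: "1" if m.group(1) in defined_symbols else "0",
        line,
    )
```

Running it on the example from the post reproduces both rewrites: an undefined SYMBOL yields `IF 0 and (SYMBOL ge 1020)` and a defined one yields `IF 1 and (SYMBOL ge 1020)`.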
http://masm32.com/board/index.php?topic=7229.0
SET 01 Set 1
1. The company has a rule ------- entrance to the warehouse without official authorization. (A) prohibited (B) prohibiting (C) prohibit (D) prohibits
2. Customers can call our hotline to hear about ------- of the discounts and special offers available to them. (A) so (B) such (C) ones (D) some
3. A deposit must ------- to the manager in order to secure ones reservation. (A) be paid (B) be paying (C) have paid (D) to pay
4. Of the five applicants for the position, four are too inexperienced, while ------- has a bad attitude. (A) other (B) the others (C) others (D) the other
5. A full health examination is mandatory for ------- the employees participating in the charity climbing expedition. (A) many (B) how (C) whom (D) all
6. Part-time staff members will receive ------- …
7. … the selection submitted by the advertising department. (A) choose (B) chosen (C) to choose (D) choosing
8. The chairman ------- to the business convention if his schedule had allowed for it. (A) will go (B) went (C) have gone (D) would have gone
9. ------- can call our customer service center if you require more information. (A) You (B) Your (C) Yours (D) Yourself
10. If you wish to make an appointment with Professor Coltrane, please contact ------- by email. (A) him (B) his own (C) he (D) his
11. … the marketing department last month, she received a salary increase of 15%. (A) moves (B) has moved (C) moving (D) moved
12. The door wont open ------- the password has been entered correctly. (A) when (B) unless (C) in case (D) given
13. ------- the interviewers were impressed with him, Ben was not offered the position. (A) Despite (B) Although (C) Because (D) In addition to
16. … assembly line will be ------- starting Monday. (A) operational (B) operate (C) operation (D) operations
17. ------- testing is required before the project can enter the production stage. (A) Additional (B) Addition (C) Additionally (D) Additions
18. Modern cellular phones boast ------- …
14.
All software changes must be dealt with interactive features and entertainment options. (A) every (B) many (C) a lot (D) little19. Mr. Smith found Kevins advice ------- before the ------- version is shown to investors. (A) finalize (B) finalist (C) finals (D) final15. The concert organizers insist ------- no when he began working at the company. (A) helpfully (B) help (C) helpful (D) helps20. Many companies ------- success reliesSet 10 recording equipment be brought into the venue. (A) so (B) that (C) while (D) unless on customer loyalty offer incentives and discounts to long-term customers. (A) that (B) his (C) whose (D) which consists of around 12,000 people, many of ------- are employed by the Department of Medicine. (A) which (B) whose (C) whom (D) those22. Be sure to frequently save your work ------- network features a broadband connection to the internet. (A) Before (B) Instead (C) Unlike (D) Contrary27. Some of the workers ------- discontent toward the proposed changes. (A) express (B) expresses (C) expressing (D) to express28. Each ------- must prepare a mock avoid any loss of information due to system errors. (A) for (B) to (C) so (D) when23. Motivating and encouraging workers is a presentation for his second interview. crucial part of ------- an effective manager. (A) be (B) being (C) been (D) to be24. Few businesses doubt the ------- of that involves many weeks of negotiations. advertising in the commercial market. (A) powerfully (B) powered (C) powerful (D) power25. Delivery of your items will be made ------- three days from the date of payment. (A) within (B) toward (C) before (D) among the emergency services immediately and do not attempt by any means to extinguish the blaze -------. (A) yourself (B) itself (C) himself (D) themselves 300 SET 02 Set 21. Mr. Howard is knowledgable ------- in 6. The dramatic rise in crime rates ------- been business to start his own printing company. (A) enough (B) too much (C) very (D) well2. 
The basement level of the factory will be linked to the rise in unemployment. (A) has (B) have (C) having (D) to have7. Some ------- information about upcoming closed for ------- during the next three months. (A) renovation (B) renovate (C) renovated (D) renovator3. ------- enough time is available, the weekly local events is listed on the web site. (A) interesting (B) interests (C) interested (D) interest8. While ------- a new firewall software, we meeting will be followed by a presentation. (A) The fact that (B) In view of (C) Providing (D) Nevertheless4. Richard Harris was instructed to search had our web site repeatedly shut down by a series of hacking attacks. (A) installed (B) installing (C) install (D) installment9. The board of directors did not accept ------- for the largest convention hall ------Bakersfield, California. (A) of (B) in (C) from (D) to5. The recent ------- are likely to benefit anyone of the changes to the health and safety regulations that were suggested by the inspector. (A) any (B) those (C) them (D) every10. ------- companies have signed an working in the health industry. (A) reformed (B) reformer (C) reforms (D) reform agreement obligating them to actively promote each others new product line. (A) Each (B) Any (C) Every (D) Both never allow defective products to pass the inspection. (A) your (B) yours (C) you (D) yourself12. ------- the chairman thinks you are suitable aspects of computing, the company started giving him more complicated assignments. (A) has mastered (B) masters (C) had mastered (D) is mastering17. ------- the seminar was free to attend, not for the public relations position, he will contact you to schedule a second interview. (A) If (B) Though (C) Whether (D) While13. During the flight, we will serve ------- many employees showed up. (A) Instead of (B) Because (C) Although (D) Due to18. People like to replace their cars every few passengers a hot meal and beverage of their choice. (A) us (B) we (C) our (D) ourselves14. 
Unauthorized personnel do not have access years ------- when their car is in perfect condition. (A) about (B) even (C) although (D) quite19. Each staff member is personally responsible to ------- of these floors. (A) every (B) either (C) much (D) those15. A number of local merchants ------- a variety for any ------- expenses incurred during the business trip. (A) incidental (B) incidents (C) incidence (D) incidentally20. We are considering remodeling the office Set 10 of products to the charity since it was first established five years ago. (A) donate (B) donating (C) has donated (D) have donated ------- the meeting room can be more spacious. (A) in order to (B) so that (C) because of (D) just as more efficient ------- coal or oil-based power. (A) that (B) as (C) than (D) to22. The man that I spoke ------- on the phone that many people don't consider when applying for a job. (A) factoring (B) factored (C) factor (D) to factor27. The director does not know ------- should told me to wait until I get the test results. (A) to (B) him (C) for (D) to him23. Readers of News World magazine can now be done about the new office building. (A) those (B) what (C) whether (D) there28. The design team is hoping to complete the receive daily email updates by registering ------- through the publications homepage. (A) electronic (B) electronically (C) electronics (D) electrical24. Your account information is available online ------- for the new company logo by the end of this week. (A) propose (B) proposed (C) proposal (D) proposing29. The aim of the district survey is ------- public so you can access it ------- necessary. (A) whichever (B) whatever (C) whoever (D) whenever25. The marketing department supervisor is opinion on the planned construction of a new shopping center. (A) gather (B) gathered (C) to gather (D) having gathered30. Exercise is always necessary but ------- can unsure where ------- for the annual company trip. 
(A) to go (B) going (C) to going (D) go to damage your health. (A) rare (B) extreme (C) excessive (D) too much SET 03 Set 31. Each of the office computers ------- checked 6. Because of the stock market fluctuation, and upgraded on a monthly basis. (A) is (B) are (C) being (D) been2. It ------- the department manager who some of ------- for Ledal corp. are seeking the help of financial experts. (A) investor (B) investors (C) the investors (D) investment7. Laboratory researchers at the Crop recommended you for promotion. (A) was (B) were (C) been (D) being3. Passengers are asked to turn off all Research Institute must take care of plant samples that need ------- at regular intervals. (A) to water (B) be watered (C) to be watered (D) to watering8. NeoSys Inc. employs 300 workers, some of electronic devices ------- takeoff and landing. (A) during (B) to (C) at (D) within4. Although all the tables at Sortinos have ------- live in the company dormitory. (A) what (B) where (C) which (D) whom9. Mr. Robinson spoke at the debate ------- the already been reserved, we can call you ------- there is a last-minute cancellation. (A) therefore (B) even if (C) in case (D) despite5. Trinity Business Tower is composed ------- audience were allowed to ask questions. (A) which (B) with which (C) during which (D) that10. Interns need to learn about the recent 20 office floors, an underground aquarium, and a revolving restaurant on the uppermost floor. (A) by (B) of (C) at (D) on10 scientific developments, ------- our technology is based. (A) which (B) what (C) that (D) on which construction of the new electronics factory will be completed by March 25. (A) go (B) goes (C) gone (D) going12. During the late 1990s, the stock market management, very few members of the companys ------- board of directors remain. (A) formal (B) forming (C) formation (D) former17. Your vehicle insurance will remain active ------- close to crashing due to the failure of many Internet-based companies. 
(A) coming (B) come (C) comes (D) came13. The online banking system is better ------- you continue to make regular payments each month. (A) as long as (B) in case (C) whereas (D) besides18. ------- Zantassi XQ23 color printer comes than ------- due to upgrades to account management and security. (A) once (B) never (C) not (D) ever14. Although the Langford GX Hall is an with a two-year warranty covering the cost of all repairs and replacement parts. (A) Various (B) Every (C) Several (D) Many19. Employees must immediately return to their Set 8 Set 9 acceptable conference venue, the Richmond Exhibition Center seems -------. (A) better (B) more better (C) at best (D) more best15. Investors expect that the restaurant will be workstations ------- the meeting has been finished. (A) that (B) when (C) since (D) so that20. By the time the companys presentation ------- in six months. (A) operationally (B) operational (C) operation (D) operate began, most of the investors ------- from the place. (A) are disappearing (B) will have disappeared (C) disappear (D) had disappeared 11 carry hand luggage that is larger than what is stipulated in their luggage allowance policy. (A) no (B) not (C) not to (D) do not22. Harwood Bank insists that we ------- the thirty minutes from platform four, and hourly from platform five. (A) every (B) each (C) all (D) some27. Because of the approaching tropical storm, outstanding balance of our short-term loan within 60 working days. (A) paid (B) are paying (C) will pay (D) pay23. The construction project for the new national Martins restaurant will be ------- early on Saturday. (A) close (B) closes (C) to close (D) closing28. ------- presentations and seminars, library was a long ------- uncomplicated process. (A) for (B) yet (C) not (D) and24. ------- applicants possess the skills and convention attendees will have the chance to view some of the most advanced technologies available in the market. 
(A) Include (B) In addition to (C) Because (D) Owing29. Smith and Gracie Associates is controlled qualifications required to successfully fulfill the role of lead designer at Info-tech Services. (A) Almost of (B) Most all (C) Almost all (D) Mostly of25. To guarantee the health and safety of factory by two attorneys, ------- established the firm more than ten years ago. (A) who (B) whom (C) that (D) whose30. Anyone interested in attending the seminar workers, Brownlow Inc. ensure that all machinery is well-maintained and completely -------. (A) relying (B) reliant (C) reliance (D) reliable12 has ------- the end of today to register. (A) ahead (B) until (C) during (D) before SET 04 Set 41. ------- already spent two months in training, 6. ------- in the department will have an Mr. Wallace was eager to begin his new job. (A) Having (B) Had (C) To have (D) Have2. Your ------- for updating company policies opportunity to apply for the marketing position. (A) Everyone (B) Whoever (C) Whomever (D) Everywhere7. After the surveys had been completed, Mr. will be reviewed by the board of directors. (A) recommendation (B) recommendable (C) recommending (D) recommend3. A performance by the Moscow State Circus Rogers gathered the forms and submitted ------- to the personnel manager. (A) that (B) them (C) they (D) these8. If the company cared about overseas ------- at the London O2 Arena. (A) holds (B) has held (C) is holding (D) is being held4. Public computers, along with photocopiers, expansion, it ------- more money on global marketing. (A) will spend (B) would spend (C) spend (D) spent9. Telewest Cables deluxe package ------- ------- on the second floor of the library. (A) is locating (B) located (C) locate (D) are located5. ------- who wish to be refunded for travel a variety of channels to cater to a wider number of viewers. (A) offering (B) offers (C) to offer (D) be offering10. 
Help us to maintain our high qua lity of expenses should fill out the appropriate form at reception. (A) These (B) This (C) That (D) Those service by filling out a comment card before you ------- the restaurant. (A) to leave (B) had left (C) leaving (D) leave 14 conceived and designed ------- for novice users. (A) expressing (B) expresses (C) expressly (D) expressive12. Despite Mr. Fullertons lack of supervisory economical, the location of our new branch will either be in the Mason Building ------the Sorenton Tower. (A) or (B) yet (C) and (D) also17. Commuting by subway is recommendable, experience, he ------- managed to motivate his employees to work effectively. (A) any (B) still (C) more (D) same13. There ------- is a large amount of money ------- it can get uncomfortably crowded during rush hour. (A) except that (B) wide of (C) aside from (D) long since18. Due to the rapidly deteriorating state of the stored in the main vault. (A) normally (B) normality (C) normal (D) normalcy14. Some workers have still not been informed Set 9 ------- of the plans to merge departments. (A) adequate (B) adequacy (C) adequately (D) adequateness15. You must acquire more experience ------Set 10 to inquire about vacant positions. (A) call (B) call to (C) calling (D) to call20. Sales representatives should let clients you are considered for promotion. (A) but (B) before (C) enough (D) afterward ------- about the terms of the contract. (A) know (B) to know (C) knowing (D) and know 15 ------- its final destination at approximately 2 PM. (A) arrives (B) comes (C) reaches (D) gets22. The renowned architect Frank Gehry has failed to meet our clients -------. (A) require (B) requiring (C) required (D) requirements27. Insufficient ------- via advertising and been asked to design a building ------- is both stylish and practical. (A) that (B) what (C) where (D) who23. One of our waiters will let you know ------- promotion is a major reason why certain products fail in the market. 
(A) expose (B) exposing (C) exposure (D) exposed28. Over two thousand Zanussi 362AW washing your table is ready. (A) what (B) who (C) which (D) when24. The engineer needed ------- the broken air machines have been sold ------- the past eight weeks. (A) over (B) between (C) beyond (D) by29. Mr. Grayson would like to talk to you ------- conditioning unit. (A) replacing (B) to replace (C) having replaced (D) replaced25. Ms. Phillips enjoyed ------- with the the errors found in your financial report. (A) regard (B) regards (C) regarding (D) regardless30. Both the leather chair and the books on the overseas investors at the annual company stockholders meeting. (A) talk (B) talking (C) talked (D) to talked shelf ------- to the previous occupant. (A) belong (B) belongs (C) belonging (D) to belong 16 SET 05 Set 51. Once -------, the three departments will be 6. Mrs. Halliday and I might struggle to agree under the supervision of only one manager. (A) merging (B) merged (C) merge (D) to merge2. The rental price varies, ------- on the car on the issue of budget restrictions as her views are completely opposite to -------. (A) my (B) me (C) mine (D) myself7. The conference can only be led by ------- with the appropriate credentials. (A) anyone (B) someone (C) a one (D) one of8. ------- of the project proposals is likely to activities to the control center. (A) asking (B) asks (C) are asked (D) asked for4. Belenux Aeronautics researchers ------- to attract the interest of foreign investors. (A) Much (B) Neither (C) Both (D) Some9. Had Mr. Osborne ------- the 7 AM train, unveil their latest aircraft engine at the 2011 Geneva Aerospace Convention. (A) plan (B) are planned (C) planning (D) have plan5. ------- is a complex electronic security he might have been on time for the weekly department meeting. (A) catch (B) been caught (C) caught (D) being caught10. Please make sure that the items you buy system which can only be locked by using the appropriate key and password. 
(A) Those (B) Them (C) This (D) They from the supermarket ------- not past their expiration date. (A) are (B) is (C) been (D) being 18 number of faults in the new system. (A) use (B) using (C) used (D) will use12. Many of the car models that they ------- authorized to enter the building at night. (A) Almost (B) Only (C) Enough (D) Neither17. Remember to pack appropriate clothing for Set 2 Set 3 in the past decade will be shown at the forthcoming automobile convention. (A) produce (B) will produce (C) produced (D) produces13. As of next week, Sarah ------- as a health the research trip to Brazil as it will probably be raining ------- for the duration of your stay. (A) heaviness (B) heavies (C) heavy (D) heavily18. Mr. Devor has ------- competent personnel care official in Africa for approximately 18 months. (A) to work (B) worked (C) had worked (D) will have worked14. Access to the building C has been restricted Set 7 that he seldom has to supervise them at work. (A) so (B) such (C) too (D) much19. An ------- will be present to assist us in ------- it was declared unsafe by health inspectors. (A) if (B) until (C) about (D) since15. Employees cant leave early ------- the discussions with the Japanese CEO. (A) interpret (B) interpreting (C) interpretation (D) interpreter20. ------- seats are available but I can place you formal authorization of their department manager. (A) into (B) until (C) among (D) without on the waiting list. (A) Any (B) Not (C) None (D) No 19 are becoming less -------. (A) frequented (B) frequent (C) frequently (D) frequency22. The project manager views Mr. Walters workshop this weekend should notify the personnel office by this afternoon. (A) Who (B) That (C) Whoever (D) Anyone27. Cell phone service is available ------- you ------- an integral member of the development team. (A) upon (B) to (C) as (D) with23. Please ensure that you do thorough travel in South Korea. (A) whoever (B) wherever (C) whatever (D) whichever28. 
When it comes to writing optimized code, image loading plays an important role in computer vision. This process can be a bottleneck in many CV tasks, and it is often the culprit behind bad performance. We need to get images from the disk as fast as possible.

The most obvious example of the importance of this task is the implementation of a Dataloader class in any CNN training framework. It is crucial to make image loading fast; if it is not, the training procedure becomes CPU-bound and wastes precious GPU time.

Today we are going to look at some Python libraries which allow us to read images most efficiently:

- OpenCV
- Pillow
- Pillow-SIMD
- TurboJpeg

Also, we will cover alternative methods of image loading from databases using:

- LMDB
- TFRecords

Finally, we will compare the loading time per image and find out which one is the winner!

Installation

Before we start, we need to create a virtual environment:

$ virtualenv -p python3.7 venv
$ source venv/bin/activate

Then, install the required libraries:

$ pip install -r requirements.txt

Now we can go forward with our tasks.

Ways to load images

Structure

Usually, we need to load several images that are stored either in a database or just as a folder. In our scenario, an abstract image loader should be able to store the path to such a database or folder and load one image at a time from it. Moreover, we need to measure the time of some parts of the code. Optionally, some initialization may be required before the loading starts.
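The loaders below measure elapsed time with a simple start/stop pattern around each step. Here is a minimal, self-contained sketch of that pattern, assuming `timeit.default_timer` as the clock (this is what the `timer()` calls in the loader code refer to); the throwaway file stands in for a real image:

```python
import os
import tempfile
from timeit import default_timer as timer


def timed_read(path):
    """Read a file's raw bytes and return (data, elapsed_seconds)."""
    start = timer()
    with open(path, "rb") as f:
        data = f.read()
    return data, timer() - start


# demo on a throwaway file (a fake JPEG payload, just for illustration)
fd, tmp_path = tempfile.mkstemp(suffix=".jpg")
with os.fdopen(fd, "wb") as f:
    f.write(b"\xff\xd8\xff\xe0 fake jpeg payload")

payload, elapsed = timed_read(tmp_path)
os.remove(tmp_path)
```

The same start/stop bracketing is reused in every loader class, including around the optional color-mode conversion, so the reported time covers exactly the work we want to compare.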
Our ImageLoader class looks like this:

import os
from abc import abstractmethod


class ImageLoader:
    extensions: tuple = \
        (".png", ".jpg", ".jpeg", ".tiff", ".bmp", ".gif", ".tfrecords")

    def __init__(self, path: str, mode: str = "BGR"):
        self.path = path
        self.mode = mode
        self.dataset = self.parse_input(self.path)
        self.sample_idx = 0

    def parse_input(self, path):
        # single image or tfrecords file
        if os.path.isfile(path):
            assert path.lower().endswith(self.extensions), \
                f"Unsupported extension, please, use one of {self.extensions}"
            return [path]

        if os.path.isdir(path):
            # lmdb environment
            if any([file.endswith(".mdb") for file in os.listdir(path)]):
                return path
            else:
                # folder with images
                paths = \
                    [os.path.join(path, image) for image in os.listdir(path)]
                return paths

    def __iter__(self):
        self.sample_idx = 0
        return self

    def __len__(self):
        return len(self.dataset)

    @abstractmethod
    def __next__(self):
        pass

Image decoding functions in different libraries can return images in different formats – RGB or BGR. In our case, we use BGR color mode as default, but it can always be converted into the required format. In case you want to know the fun reason why OpenCV uses BGR format, click on this link.

Now we can inherit new classes from the base class and use them for our task.

OpenCV

The first one is the OpenCV library. We can use one simple function to read an image from the disk – cv2.imread.

from timeit import default_timer as timer

import cv2


class CV2Loader(ImageLoader):
    def __next__(self):
        start = timer()
        # get image path by index from the dataset
        path = self.dataset[self.sample_idx]
        # read the image
        image = cv2.imread(path)
        full_time = timer() - start
        if self.mode == "RGB":
            start = timer()
            # change color mode
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            full_time += timer() - start
        self.sample_idx += 1
        return image, full_time

Before image visualization, we need to mention that the OpenCV cv2.imshow function requires an image in BGR format.
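Because the base class implements __iter__ and __len__, any subclass only needs __next__ and can then be consumed with a plain for loop. Here is a minimal, self-contained sketch of that contract: ImageLoader is trimmed to its iteration logic, and RawBytesLoader is a hypothetical subclass that skips decoding entirely (unlike the article's loaders, which rely on the caller to stop at len(loader), this sketch raises StopIteration so the loop terminates on its own):

```python
import os
import tempfile
from timeit import default_timer as timer


class ImageLoader:
    """Trimmed copy of the base class: just enough to show the iteration contract."""

    def __init__(self, path):
        self.path = path
        self.dataset = [os.path.join(path, f) for f in sorted(os.listdir(path))]
        self.sample_idx = 0

    def __iter__(self):
        self.sample_idx = 0
        return self

    def __len__(self):
        return len(self.dataset)


class RawBytesLoader(ImageLoader):
    """Hypothetical subclass: returns raw file bytes instead of a decoded image."""

    def __next__(self):
        if self.sample_idx >= len(self.dataset):
            raise StopIteration  # deviation from the article's loaders, see lead-in
        start = timer()
        with open(self.dataset[self.sample_idx], "rb") as f:
            image = f.read()
        self.sample_idx += 1
        return image, timer() - start


# demo folder with two fake "images"
folder = tempfile.mkdtemp()
for name in ("a.jpg", "b.jpg"):
    with open(os.path.join(folder, name), "wb") as f:
        f.write(name.encode())

times = [t for _, t in RawBytesLoader(folder)]
```

Every real loader in this post follows the same shape: pick the path by sample_idx, do the timed work, advance the index, return (image, time).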
Some libraries use RGB image mode as default; in this case, we convert images to BGR for a correct visualization.

You can try to load your image using our example with this function. To test the OpenCV library, please use this command:

$ python3 show_image.py --path images/cat.jpg --method cv2

This and the next commands in the text will show you the image and its loading time using different libraries. If everything goes well, you will see the image in a window together with its loading time.

Also, you can show all images from a folder. Instead of using a specific image, you can mention a path to the folder with images:

$ python3 show_image.py --path images/pexels --method cv2

This will show you all images from the folder one at a time together with their loading times. To stop the demo, you can press the ESC button.

Pillow

Let's now try the PIL library. We can read an image using the Image.open function.

from timeit import default_timer as timer

import cv2
import numpy as np
from PIL import Image


class PILLoader(ImageLoader):
    def __next__(self):
        start = timer()
        # get image path by index from the dataset
        path = self.dataset[self.sample_idx]
        # read the image as numpy array
        image = np.asarray(Image.open(path))
        full_time = timer() - start
        if self.mode == "BGR":
            start = timer()
            # change color mode
            image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
            full_time += timer() - start
        self.sample_idx += 1
        return image, full_time

We also convert the Image object to a Numpy array since it is likely we would want to apply some augmentations or pre-processing as a next step, and Numpy is the default choice for that.

To check this out on a single image you can use:

$ python3 show_image.py --path images/cat.jpg --method pil

If you want to use it on the folder with images:

$ python3 show_image.py --path images/pexels --method pil

Pillow-SIMD

There is a fork-follower of the Pillow library with higher performance. Pillow-SIMD uses new techniques which allow reading and transforming images faster with the same API as standard Pillow.
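The cv2.cvtColor calls in these loaders only reorder channels: RGB to BGR (and back) is a reversal of the last axis. If NumPy is already in play, the same swap can be done with a slice and no OpenCV dependency — a sketch, not part of the original benchmark code:

```python
import numpy as np

# a 2x2 "image" with distinct R, G, B values per pixel
rgb = np.array(
    [[[255, 0, 0], [0, 255, 0]],
     [[0, 0, 255], [10, 20, 30]]],
    dtype=np.uint8,
)

# reversing the channel axis swaps R and B, i.e. RGB -> BGR
bgr = rgb[..., ::-1]
```

Note that the slice returns a view, so it is essentially free; cv2.cvtColor produces a new contiguous array, which some downstream libraries require.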
Pillow and Pillow-SIMD cannot be used simultaneously in the same virtual environment – Pillow-SIMD will be used by default.

To use Pillow-SIMD and avoid mistakes caused by Pillow and Pillow-SIMD being installed together, you need to create a new virtual environment and use

$ pip install pillow-simd

Or you can uninstall the previous Pillow version and install Pillow-SIMD:

$ pip uninstall pillow
$ pip install pillow-simd

You don't need to change anything in the code – the previous example still works. To check that everything is fine, you can use the commands from the previous Pillow part:

$ python3 show_image.py --path images/cat.jpg --method pil
$ python3 show_image.py --path images/pexels --method pil

TurboJpeg

There is another library called TurboJpeg. As follows from the title, it can read only images compressed with JPEG. Let's create an image loader using TurboJpeg.

from timeit import default_timer as timer

from turbojpeg import TurboJPEG


class TurboJpegLoader(ImageLoader):
    def __init__(self, path, **kwargs):
        super(TurboJpegLoader, self).__init__(path, **kwargs)
        # create TurboJPEG object for image reading
        self.jpeg_reader = TurboJPEG()

    def __next__(self):
        start = timer()
        # open the input file as bytes
        file = open(self.dataset[self.sample_idx], "rb")
        full_time = timer() - start
        if self.mode == "RGB":
            mode = 0
        elif self.mode == "BGR":
            mode = 1
        start = timer()
        # decode raw image
        image = self.jpeg_reader.decode(file.read(), mode)
        full_time += timer() - start
        self.sample_idx += 1
        return image, full_time

TurboJpeg requires decoding of the input image, which is stored as a string of bytes. You can try it with the following commands. But remember that TurboJpeg only allows processing of .jpeg images:

$ python3 show_image.py --path images/cat.jpg --method turbojpeg
$ python3 show_image.py --path images/pexels --method turbojpeg

LMDB

A commonly used approach to image loading when speed is a priority is to convert the data into a better representation – a database or serialized buffer – beforehand.
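To make the "serialized buffer" idea concrete before introducing LMDB: the win comes from replacing many small per-file opens with one buffer read sequentially. A toy, hypothetical length-prefixed format (not LMDB's actual on-disk layout) shows the mechanics:

```python
import struct


def pack_blobs(blobs):
    """Concatenate byte strings into one buffer, each prefixed by its length."""
    out = bytearray()
    for blob in blobs:
        out += struct.pack("<I", len(blob))  # 4-byte little-endian length
        out += blob
    return bytes(out)


def unpack_blobs(buffer):
    """Read the length-prefixed records back out of a single buffer."""
    blobs, offset = [], 0
    while offset < len(buffer):
        (size,) = struct.unpack_from("<I", buffer, offset)
        offset += 4
        blobs.append(buffer[offset:offset + size])
        offset += size
    return blobs


packed = pack_blobs([b"jpeg-1", b"jpeg-two", b""])
```

A real database adds an index, memory mapping, and transactions on top of this, which is exactly what LMDB and TFRecords provide.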
One of the largest advantages of such "databases" is that they operate with zero system calls per data access, while the file system requires several system calls per data access. We can create an LMDB database that will collect all images in key-value format.

The following function allows us to create an LMDB environment with our images. LMDB's "environment" is essentially a folder with special files created by the LMDB library. This function only requires a list of image paths and a save path:

import os

import cv2
import lmdb
import numpy as np


def store_many_lmdb(images_list, save_path):
    # number of images in our folder
    num_images = len(images_list)
    # all file sizes
    file_sizes = [os.path.getsize(item) for item in images_list]
    # the maximum file size index
    max_size_index = np.argmax(file_sizes)
    # maximum database size in bytes
    map_size = num_images * cv2.imread(images_list[max_size_index]).nbytes * 10
    # create lmdb environment
    env = lmdb.open(save_path, map_size=map_size)
    # start writing to environment
    with env.begin(write=True) as txn:
        for i, image in enumerate(images_list):
            with open(image, "rb") as file:
                # read image as bytes
                data = file.read()
                # get image key
                key = f"{i:08}"
                # put the key-value into database
                txn.put(key.encode("ascii"), data)
    # close the environment
    env.close()

There is a Python script which creates an LMDB environment with images:

- the --path argument should contain the path to your collected images folder
- the --output argument should be a directory where the LMDB will be created

$ python3 create_lmdb.py --path images/pexels --output lmdb/images

Now, as the LMDB environment has been created, we can load our images from it. Let's create a new loader class. In the case of loading images from a database, we need to open this database for reading. There is a new function called open_database. It returns an iterator to navigate through the opened database.
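One detail in store_many_lmdb worth noting: the keys are written as f"{i:08}", i.e. zero-padded to eight digits. LMDB's cursor walks keys in lexicographic byte order, so padding keeps that order identical to the insertion order, whereas plain decimal keys would sort "10" before "2". A quick check:

```python
# zero-padded keys sort lexicographically in numeric order...
padded = [f"{i:08}" for i in (0, 1, 2, 10, 100)]
padded_sorted = sorted(padded)

# ...while plain decimal strings do not
unpadded = [str(i) for i in (0, 1, 2, 10, 100)]
unpadded_sorted = sorted(unpadded)
```

Eight digits is enough for up to 100 million images; widen the pad if your dataset is larger.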
Also, when this iterator comes to the end of the data, we need to return it to the start of the database using the __iter__ function. LMDB allows us to store the data, but there is no built-in image decoder; for the lack of one, we will use the cv2.imdecode function here.

from timeit import default_timer as timer

import cv2
import lmdb
import numpy as np


class LmdbLoader(ImageLoader):
    def __init__(self, path, **kwargs):
        super(LmdbLoader, self).__init__(path, **kwargs)
        self.path = path
        self._dataset_size = 0
        self.dataset = self.open_database()

    # we need to open the database to read images from it
    def open_database(self):
        # open the environment by path
        lmdb_env = lmdb.open(self.path)
        # start reading
        lmdb_txn = lmdb_env.begin()
        # create cursor to iterate through the database
        lmdb_cursor = lmdb_txn.cursor()
        # get number of items in full dataset
        self._dataset_size = lmdb_env.stat()["entries"]
        return lmdb_cursor

    def __iter__(self):
        # set the cursor to the first database element
        self.dataset.first()
        return self

    def __next__(self):
        start = timer()
        # get raw image
        raw_image = self.dataset.value()
        # convert it to numpy
        image = np.frombuffer(raw_image, dtype=np.uint8)
        # decode image
        image = cv2.imdecode(image, cv2.IMREAD_COLOR)
        full_time = timer() - start
        if self.mode == "RGB":
            start = timer()
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            full_time += timer() - start
        start = timer()
        # step to the next element in database
        self.dataset.next()
        full_time += timer() - start
        return image, full_time

    def __len__(self):
        # get dataset length
        return self._dataset_size

After we have created the environment and the loader class, we can check its correctness and show images from it. Now the --path argument should point to the LMDB environment. Remember that you can stop the demo using the ESC button.

$ python3 show_image.py --path lmdb/images --method lmdb

TFRecords

Another useful database is TFRecords.
To read data efficiently, it can be helpful to serialize your data and store it in a set of files (100–200 MB each) that can each be read linearly (TensorFlow manual). Before we create the tfrecords file, we need to choose the structure of the database. TFRecords allows keeping items with many additional features. You can save the file name, or the image width and height, if needed. All these things should be collected in a Python dictionary, i.e.

image_feature_description = {
    "height": tf.io.FixedLenFeature([], tf.int64),
    "width": tf.io.FixedLenFeature([], tf.int64),
    "filename": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
    "image_raw": tf.io.FixedLenFeature([], tf.string),
}

In our example, we will use only the image in raw byte format and its unique key called "label."

import os

import tensorflow as tf


def _byte_feature(value):
    """Convert string / byte into bytes_list."""
    if isinstance(value, type(tf.constant(0))):
        # BytesList can't unpack string from EagerTensor.
        value = value.numpy()
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))


def _int64_feature(value):
    """Convert bool / enum / int / uint into int64_list."""
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))


def image_example(image_string, label):
    feature = {
        "label": _int64_feature(label),
        "image_raw": _byte_feature(image_string),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))


def store_many_tfrecords(images_list, save_file):
    assert save_file.endswith(
        ".tfrecords"
    ), 'File path is wrong, it should contain "*myname*.tfrecords"'

    directory = os.path.dirname(save_file)
    if not os.path.exists(directory):
        os.makedirs(directory)

    # start writer
    with tf.io.TFRecordWriter(save_file) as writer:
        # cycle by each image path
        for label, filename in enumerate(images_list):
            # read the image as bytes string
            image_string = open(filename, "rb").read()
            # save the data as tf.Example object
            tf_example = image_example(image_string, label)
            # and write it into the database
            writer.write(tf_example.SerializeToString())

Please note that when reading from the database, we decode images using the tf.image.decode_jpeg function because all our images are stored as JPEG files. You can also use tf.image.decode_image as a universal decoder.

To check the correctness of the created database you can show images from it:

$ python3 show_image.py --path tfrecords/images.tfrecords --method tfrecords

Now we have five different methods of image loading. Let's find out which one is the best!

We will use some open images from pexels.com with different shapes and the jpeg extension, and all time measurements will be averaged over 5000 iterations. Moreover, averaging will mitigate the impact of OS/hardware-specific logic, for example, data caching. It is expected that the first iteration of the first method under evaluation will suffer from the initial loading of the data from disk into a cache, while the other methods will be free of that.
All experiments are run for both BGR and RGB image modes to cover different tasks and potential needs. Please remember that Pillow and Pillow-SIMD cannot be used in the same virtual environment; to create the final comparison table we ran two separate experiments for Pillow and Pillow-SIMD. To run the measurements use:

$ python3 benchmark.py --path images/pexels --method cv2 pil turbojpeg lmdb tfrecords --iters 100 --mode BGR

Moreover, it is interesting to compare database reading speed with the same decoder function, which shows which database loads its data faster. In this case, we use the cv2.imdecode function for both TFRecords and LMDB. All experiments were run on:

- Intel® Core™ i7-2600 CPU @ 3.40GHz × 8
- Ubuntu 16.04 64-bit
- Python 3.7

Summary

In this post, we considered several approaches to image loading and compared them with each other. The comparison results on JPEG images are really interesting. TurboJpeg is the fastest library for loading images as numpy arrays, but with one exception: it can only read files with the JPEG extension. Another important thing to mention is that Pillow-SIMD is faster than the original Pillow; in our task the loading speed increased by nearly 40%. If you plan to use an image database, TFRecords shows better mean results than LMDB, in particular because of its built-in decoder function. On the other hand, LMDB lets us read images faster. Surely, you can always combine a decoder function and a database, for example, use TurboJpeg as the decoder and LMDB as the image storage.
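As a rough illustration of the averaging procedure described above, the per-image timing can be sketched like this. This is a minimal, hypothetical harness: the `measure_loader` name and the decoder passed in are assumptions, not part of the original benchmark script.

```python
from statistics import mean
from timeit import default_timer as timer

def measure_loader(load_fn, image_paths, iters=100):
    """Average per-image load time in milliseconds.

    Repeating the loop many times averages out OS/hardware effects
    such as disk caching, as discussed above.
    """
    times = []
    for _ in range(iters):
        for path in image_paths:
            start = timer()
            load_fn(path)  # e.g. cv2.imread, or a TurboJPEG/Pillow decode wrapper
            times.append(timer() - start)
    return 1000.0 * mean(times)
```

With load_fn set to cv2.imread (or any decode function with the same call shape), the same harness times each method under identical conditions.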
https://learnopencv.com/efficient-image-loading/
Why Mobile Product Managers Have to Consider More SDK Platforms for 2020

How can this happen? One reason is the high competition, which forces apps to shut down. This is a sign that a few apps will take over the ever-increasing app market. Therefore, as a mobile product manager in today's market, I constantly look for the edge to survive, outdo the competition and take over market share. Delivering a great experience will give my app a competitive advantage, and my SDK stack is a vital factor here. Whoever has the best SDKs will win. It's that simple today.

We live in a world with powerful solutions. I can do things with SDKs that weren't possible 10 years ago. Pick a successful app in any industry, and you'll find a handpicked SDK stack. As a result, app professionals have to be willing to invest in high quality SDKs to compete. That is a huge change from the last 10 years, when apps were just starting out. With the help of today's SDK platforms, I can track (almost) everything:

- Mobile KPIs
- State of UX
- A/B Tests
- Marketing Attribution
- And more

However, there's one problem: I can't just integrate one SDK to cover all my needs. There isn't an all-in-one SDK. This leaves mobile product managers with the only option of having to integrate multiple SDKs. But here's the truth: the number of SDKs on the market is so high that researching all of them would take weeks. Additionally, many SDKs are unstable and not worth considering. So how do I know which SDKs to choose? Fortunately, you'll find help below. I will show you best practices on how to choose your mobile SDK stack, and I will give you a free guide with the best SDKs on the market at the end of this article.

1. Map out your Needs

Your SDK stack has to cover essential areas like marketing, UX and A/B testing. However, which specific SDK platforms fit your needs best depends on the app you have and the market you serve.
For example, some apps rely heavily on the service of a location-tracker SDK, while other apps don't need this at all.

2. Make Sure that Integration is Easy

As a mobile product manager, you know that your developers' time is valuable. If you're low on resources, you don't want to bother them with an SDK platform that takes a week to customize. Therefore, integration should be quick and easy. The best case is a simple drag and drop integration of the library.

import com.uxcam.UXCam;
UXCam.startWithKey("App-key from UXCam");

Example: UXCam's Android SDK integrates with 2 lines of code.

3. Get in Touch with the SDK's Customer Support

You should get in touch with your SDK provider's customer support as soon as possible. The major points that drive people away from businesses are a bad employee attitude and unfriendly service. And that's understandable: bad customer support destroys the trust you need to have a great work relationship. Building a relationship with customer support has two major benefits. First, they will help you integrate the SDK correctly. Second, you'll learn whether you can rely on the company's customer support. If that's not the case, it's a red flag.

4. Test the SDK in your Development Environment

Make sure that everything is working smoothly in a test environment before putting your app into production. Testing your mobile app with the SDK internally first will let you see whether everything works as expected. You'll also be able to test for any impact on your app's performance.

The Complete SDK Guide

- Quantitative Analytics
- Qualitative Analytics
- Crash Analytics
- A/B Testing
- Push Notifications
- Advertising
- Payments
- User Testing
- Attribution
- Surveys
- Location Data

To avoid the wrong SDKs in these categories, download the guide below.
https://coinerblog.com/mobile-product-manager-toolkit-top-11-sdk-platforms-ye1q32w5/
As mentioned earlier, JSTL includes tags that fit into four areas, each of which is exposed via its own tag library descriptors (TLDs). To use a tag, its corresponding library must be referenced in the JSP. For example, to use the JSTL XML tags in a JSP page, the following taglib directive should be included before the tag is used: As a result of this relative directive, when the JSP is compiled into a servlet, the container will look for the URI in the Web application's WEB-INF file and its corresponding tag library descriptor. <taglib> <taglib-uri>/jstl-x</taglib-uri> <taglib-location>/WEB-INF/x.tld</taglib-location> </taglib> Alternatively, the JSP can follow the absolute declaration, where the library is referenced by its absolute namespace, in which case the container will resolve this to the appropriate tag library: <%@ taglib uri="" prefix="x" %> You must have the descriptor (e.g., the x.tld file) and JSTL implementation classes available to the Web application that uses these tags. The TLD and JAR files are packaged with the Java WSDP reference implementation of JSTL. (The JSTL TLD files are in <JWSDP_HOME>/tools/jstl/tlds and the JAR files are in <JWSDP_HOME>/tools/jstl/standard/lib/standard.jar and <JWSDP_HOME>/tools/jstl/jstl.jar). Table B.2 summarizes details of the different actions; Table B.3 gives the absolute URIs for JSTL tags.
http://www.yaldex.com/java_tutorial_2/Fly0172.html
Introduction to Multimap in C++

A multimap in C++ is an associative container, quite similar to a map: it stores a sorted list of key-value pairs while permitting multiple elements with the same key. The main difference between map and multimap is that a multimap can hold several elements with the same key, i.e., multiple entries with duplicate keys, which a map does not allow.

Let's have a look at the syntax of a multimap. Suppose you want to create a multimap of integers and characters; this is how you would define it:

Syntax:

template <class Key, class Type, class Traits = less<Key>,
          class Allocator = allocator<pair<const Key, Type>>>
class multimap;

Parameters of Multimap in C++

Now let's discuss the parameters of a multimap in C++. From the syntax above you can see the parameters we have used to define a multimap.

1. Key

Every element in the map is identified by a key value. The key can be of different types; this is the data type of the keys stored in the multimap container.

multimap::key_type

2. Type

Different from the key, this is the data type of the elements stored in the multimap container. Each multimap element stores some data as its mapped value.

multimap::mapped_type

3. Traits

We can use the keyword Compare instead of Traits, as both serve the same purpose: a binary predicate that takes two keys as arguments and returns a boolean value. It provides a function object to compare two element keys.

multimap::key_compare

4. Allocator

It represents the object stored in the allocator, which is used to define the storage allocation model.
The Allocator class we use is the simplest memory allocator and is value-independent.

multimap::allocator_type

Member Functions of Multimap in C++

Now that we have seen the parameters of a multimap, it's time to look at its member functions:

- Member functions with iterators
- Member functions with modifiers
- Member functions with lookup
- Member functions with capacity

Examples of Multimap in C++

Now let's see some C++ programming examples to understand multimap properly:

Example #1

Code:

#include <iostream>
#include <map>

struct Dot {
    double i, j;
};

struct DotCompare {
    bool operator()(const Dot& lhs, const Dot& rhs) const {
        return lhs.i < rhs.i; // NB: ignores j on purpose
    }
};

int main() {
    std::multimap<int, int> n = {{1, 1}, {2, 2}, {3, 3}};
    for (auto& p : n)
        std::cout << p.first << ' ' << p.second << '\n';

    // comparison with a custom comparator
    std::multimap<Dot, double, DotCompare> mag{
        {{5, 12}, 13},
        {{3, 4}, 5},
        {{8, 15}, 17},
    };
    for (auto p : mag)
        std::cout << "The magnitude of (" << p.first.i << ", " << p.first.j
                  << ") is " << p.second << '\n';
}

Output:

Example #2

Here is another C++ example, this one comparing two multimaps with the relational operators.
Code:

#include <iostream>
#include <map>

int main() {
    // defining the multimaps
    std::multimap<char, int> food, chocobar;
    food.insert(std::make_pair('p', 20));
    food.insert(std::make_pair('q', 45));
    chocobar.insert(std::make_pair('y', 128));
    chocobar.insert(std::make_pair('y', 178));

    // food ({{p,20},{q,45}}) vs chocobar ({{y,128},{y,178}}):
    if (food == chocobar) std::cout << "food and chocobar are equal\n";
    if (food != chocobar) std::cout << "food and chocobar are not equal\n";
    if (food < chocobar)  std::cout << "food is less than chocobar\n";
    if (food > chocobar)  std::cout << "food is greater than chocobar\n";
    if (food <= chocobar) std::cout << "food is less than or equal to chocobar\n";
    if (food >= chocobar) std::cout << "food is greater than or equal to chocobar\n";
    return 0;
}

Output:

Conclusion

Multimaps can save coders a huge amount of time. Suppose you have a collection of items that share the same key value and you want to run a search for some value; it is then better to use an iterator and run through the multimap instead of vectors and maps.

Recommended Articles

This is a guide to Multimap in C++. Here we discuss the syntax, parameters, and member functions of multimap in C++ along with examples and code implementation. You may also look at the following articles to learn more -
https://www.educba.com/multimap-in-c-plus-plus/?source=leftnav
I would like any tips to increase performance on Windows (Python 2.6.2). I use a product which can internally use Python as a scripting language (2.6.2); I'm not a regular Python user, so my apologies for any gaffes made. We had a requirement to split a large(ish) file into smaller chunks to pass on to downstream processing, so I wrote a quick script to loop through and split the file. I was amazed to find that on my MacBook (Python 2.7) it was 10x faster than running under Windows. I tried a number of Python versions (2.6 - 3.3) but it is always faster on Mac OS X. I've also tried removing the opens/writes, which had little effect on Mac OS X, but gave a 5x increase on Windows. I can't change the deployment platform from Windows (Python 2.6.2) and am a little frustrated that my laptop performs better than a 32-core 64GB Windows server!

Code:

#!/usr/bin/python
import os
import sys
import re
from time import time

t = time()

"""
# Split a pre-sorted text file into multiple outputs based on the leftmost element
# delimited by spaces.
# The second element can be used for an additional sort and will be stripped from the
# output when 'isLeadingSort=1'
#
# parameters:
#   path:          char  path for the input file
#   outPath:       char  path for the output files
#   isLeadingSort  int   use the 2nd or 3rd element as output data
#   isdbg          int   enable debug prints
"""

# Just use the cmd at the moment for test
path = sys.argv[1]
outPath = sys.argv[2]
isLeadingSort = int(sys.argv[3])
isdbg = int(sys.argv[4])
# outPath = os.getcwd()
# isLeadingSort = 0
# isdbg = 0

# define all the functions up front
def printStr(s):
    """print when the debug option is set"""
    if isdbg:
        print(s)

def testPath(path):
    """raise an exception if we can't find the path or file"""
    if not os.path.exists(path):
        raise Exception('File not found: ' + path)
    return False

#
# This is where we start
#
# check that the paths exist or raise an exception
testPath(path)
testPath(outPath)
printStr('paths ok')

# init
arLine = []
fnameOut = chr(1)  # init the output filename
line = object()
fOut = object()

# open the input file for reading and process through it in a loop
with open(path, 'r') as f:
    for line in f:
        printStr('for line in f: ')
        if isLeadingSort:
            wrds = 2
        else:
            wrds = 1
        arLine = re.split('[ \n]+', line, wrds)
        newFname = arLine[0]
        outLine = arLine[len(arLine) - 1]
        if newFname == fnameOut:
            printStr('writing to open file: ' + fnameOut)
        else:
            fnameOut = newFname
            printStr('opennextfile: ' + fnameOut + ' - closing: ' + str(fOut))
            try:
                fOut.close()
            except:
                pass
            if fnameOut in ('', '\n'):
                raise Exception('Filename is not the first element of the data: ')
            fOut = open(os.path.join(outPath, fnameOut), 'w')  # open new
        # write
        fOut.write(outLine)

try:
    fOut.close()
except:
    pass

print('timediff : ' + str(time() - t))
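Not part of the original post, but a commonly suggested rewrite for this kind of script is to drop the regular-expression split in favor of str.split and to reopen the output file only when the key changes. A hedged sketch follows; the `split_file` name and the exact newline handling are assumptions.

```python
import os

def split_file(in_path, out_dir, leading_sort=False):
    """Split a pre-sorted, space-delimited file into one output per leading key.

    str.split is usually faster than re.split for a fixed delimiter, and the
    output file is only reopened when the leading key actually changes.
    """
    maxsplit = 2 if leading_sort else 1
    fname_out = None
    f_out = None
    with open(in_path, 'r') as f:
        for line in f:
            parts = line.split(' ', maxsplit)
            new_fname = parts[0]
            out_line = parts[-1]
            if new_fname != fname_out:
                if f_out is not None:
                    f_out.close()
                fname_out = new_fname
                f_out = open(os.path.join(out_dir, fname_out), 'w')
            f_out.write(out_line)
    if f_out is not None:
        f_out.close()
```

On Windows, write buffering and on-access antivirus scanning of newly created files are also frequent culprits, so reducing per-line function calls and file open/close churn tends to matter more there than on macOS.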
http://python-forum.org/viewtopic.php?f=10&t=1454
Like every month since the launch of COTI's staking program, November 1st was highlighted by a successful staking reward distribution to over 600 COTI MainNet stakers. All stakers who claimed their reward received the guaranteed return (between 20% and 35%, according to the staking plan to which they are registered), including the new node that was added in November, the BetterNode. We're happy to report that November's monthly transaction volume has grown to $12.8M, and we expect next month's volume to grow even more thanks to the Ultra eWallet and Blockchain Dollars. We are aware of the ever-growing demand to join COTI's Staking 2.0 program, and when processing volume allows it, we open up opportunities for new nodes. We hope to be able to continue adding new nodes as months go by, as the monthly processing volume continues to grow.
https://medium.com/cotinetwork/november-staking-rewards-have-been-successfully-distributed-dbf9b45b1d09
NameError question - def(self,master) - master not in namespace within class? Discussion in 'Python' started by harijay, Oct 9, 2008.
http://www.thecodingforums.com/threads/nameerror-question-def-self-master-master-not-in-namespacewithin-class.639137/
#include <UT_XMLWriter.h>

Definition at line 38 of file UT_XMLWriter.h.

Standard constructor. Initializes the writer to write either to memory or to a file.

Ends an element started earlier.

Sets the indentation of the XML document elements.

Begins an element that will contain other elements or data.

Sets an attribute for the current element that was started but not yet ended.

Writes a CDATA element. If the data contains "]]>" (which is the CDATA termination sequence), it is split into several CDATA elements (ending in "]]" and starting with ">") to prevent the occurrence of this reserved sequence.

Writes a comment.

Writes the whole element, i.e.: <tag>string</tag>

Writes the string as raw characters. The characters are written exactly as they appear, so be careful to use valid XML format, etc.

Writes a text string as the contents data for the current element that was started but not yet ended.
http://www.sidefx.com/docs/hdk11.1/class_u_t___x_m_l_writer.html
A popular approach in JavaScript APIs these days is to pass a string that matches a property name as a parameter to some function. These are sometimes called "magic strings" and they are very code smelly. And why shouldn't we use them? After all, JavaScript allows us to access object properties in either of these ways:

obj.property
obj['property']

Using strings to denote property names is extremely error prone and not refactor-friendly. We have enough to worry about when it comes to keeping the code properties and the bound properties in our markup in sync; let's at least eliminate this worry when it comes to our code, using one or more of the following three methods. Pick your favorite or the most appropriate method for your situation.

We can demonstrate this with a simple "Hello World" application. We can quickly generate a NativeScript app with the common "Hello World" template and TypeScript using this simple CLI command:

tns create magic-strings-be-gone --tsc

Open up the project folder in a code editor and open the main-view-model.ts file, which defines the HelloWorldModel class. The message property that we see there with a getter and a setter is bound to a label in the markup. We won't need a getter and a setter for this exercise; a simple public property will do. Remove the getter and setter for the message property and just add a public message property of the string type. You can also remove the private _message field if you want, but it's not bothering anyone, right? Oh, just remove it already, you know you want to. At this point our class should look like this:

export class HelloWorldModel extends Observable {
    private _counter: number;

    constructor() {
        super();
        this._counter = 42;
        this.updateMessage();
    }

    public message: string;

    public onTap() {
        this._counter--;
        this.updateMessage();
    }

    private updateMessage() {
        if (this._counter <= 0) {
            this.message = 'Hoorraaay! You unlocked the NativeScript clicker achievement!';
        } else {
            this.message = `${this._counter} taps left`;
        }
    }
}

The problem here is that when we update the message property, we don't get notifications and our UI won't update. To get the free UI updates, we have to use Observable's set() function to set the message property, which internally triggers notification. So change our updateMessage() method to this:

private updateMessage() {
    if (this._counter <= 0) {
        this.set('message', 'Hoorraaay! You unlocked the NativeScript clicker achievement!');
    } else {
        this.set('message', `${this._counter} taps left`);
    }
}

There are those magic strings! Observable's API requires a string to be sent to the set() method and this is where our problems began. If we refactor the code and change the property name or the magic string (or worse, one of the magic strings) then we will have runtime problems that will be extremely hard to diagnose. Let's fix this, shall we?

TypeScript has a handy operator that allows us to create a "list" of strings that are the public property names of a class. Well, that's not entirely accurate: we can create a type that is all the public properties of a class as strings. We can use the amazing keyof operator to do this. Just above our HelloWorldModel class definition, define a new type and use the keyof operator:

type MessageType = keyof HelloWorldModel;

If our editor provides TypeScript intellisense as Visual Studio Code does, when we hover over the MessageType identifier, we'll see that the new type we created is all the public properties of the HelloWorldModel class as strings. This is perfect because now we can create a constant that will be just one of the properties, specifically the message property:

const messageType: MessageType = 'message';

This constant looks like a string, but it's not a string; it's of type MessageType. If we try misspelling message, we'll immediately see compilation errors.
Since we now have a strongly typed constant, we can just use it in the set() method call of our updateMessage() method:

private updateMessage() {
    if (this._counter <= 0) {
        this.set(messageType, 'Hoorraaay! You unlocked the NativeScript clicker achievement!');
    } else {
        this.set(messageType, `${this._counter} taps left`);
    }
}

Now if we change the property name, we will get compilation errors, which is a whole lot better than getting a runtime error.

If we want a more generic approach and don't want to add a constant for every property we want to update, we can use this next method, which is similar in that we still use the keyof operator. Create a new file called observable-extensions.ts and add the following code to the file:

import { Observable } from 'data/observable';

export function getObservableProperty<T extends Observable, K extends keyof T>(obj: T, key: K) {
    return obj.get(key);
}

export function setObservableProperty<T extends Observable, K extends keyof T>(obj: T, key: K, value: T[K]) {
    obj.set(key, value);
}

We're exporting two functions here that act on Observables. Take a look at the generic setObservableProperty() function. It specifies that the first parameter is of type T, which has to inherit from Observable, and our HelloWorldModel is such a class. The second parameter is of type K, which extends keyof T, that is, the list of all public properties of the class that extends Observable. There's a lot to unpack there, but trust me, it works.

We can now import the setObservableProperty() function in the main-view-model.ts file:

import { setObservableProperty } from './observable-extensions';

And modify the updateMessage() method once again to use the new function:

private updateMessage() {
    if (this._counter <= 0) {
        setObservableProperty(this, 'message', 'Hoorraaay! You unlocked the NativeScript clicker achievement!');
    } else {
        setObservableProperty(this, 'message', `${this._counter} taps left`);
    }
}

This is deceptively simple looking. It even looks like we have our magic string back, but in reality the 'message' parameter is NOT a string at all; it is a type. If we try misspelling it, we'll immediately get TypeScript compilation errors, which is something we didn't get at the start of this article.

The extension function approach we saw in method 2 is great, and it doesn't allow us to mess up the property names. However, the syntax for calling the observable property extension function is not as intuitive as we would like. Wouldn't it be great if we could just set our message property the way we always have done and just forget about it? Property decorators to the rescue!

Thanks to Peter Staev for providing a decorator method that can help us do just that. Here's how we can keep our view model code even cleaner. Create another file, let's call it observable-decorator.ts, and add this code:

import { Observable } from "data/observable";

export function ObservableProperty() {
    return (obj: Observable, key: string) => {
        let storedValue = obj[key];
        Object.defineProperty(obj, key, {
            get: function () {
                return storedValue;
            },
            set: function (value) {
                if (storedValue === value) {
                    return;
                }
                storedValue = value;
                this.notify({
                    eventName: Observable.propertyChangeEvent,
                    propertyName: key,
                    object: this,
                    value,
                });
            },
            enumerable: true,
            configurable: true
        });
    };
}

This file provides a decorator called ObservableProperty that can be applied to a property in your view model that you want to keep synchronized with your UI. As far as our example is concerned, every time you update the message property in your view model, you want to run a piece of code that notifies the UI.
If you take a look at the set function definition in the code above, you'll notice that we store the updated value in a local field and then call the notify method to do just that. In order to apply this decorator to the message property in our view model class, just add the decorator to the property like this:

@ObservableProperty()
public message: string;

Now your updateMessage() method can be simplified back to this:

private updateMessage() {
    if (this._counter <= 0) {
        this.message = 'Hoorraaay! You unlocked the NativeScript clicker achievement!';
    } else {
        this.message = `${this._counter} taps left`;
    }
}

and everything still works. Here is the entire view model file with method 3 implemented for your reference:

import { Observable } from 'data/observable';
import { ObservableProperty } from './observable-decorator';

export class HelloWorldModel extends Observable {
    private _counter: number;

    constructor() {
        super();
        this._counter = 42;
        this.updateMessage();
    }

    @ObservableProperty()
    public message: string;

    public onTap() {
        this._counter--;
        this.updateMessage();
    }

    private updateMessage() {
        if (this._counter <= 0) {
            this.message = 'Hoorraaay! You unlocked the NativeScript clicker achievement!';
        } else {
            this.message = `${this._counter} taps left`;
        }
    }
}

This method does give you the ability to keep your model code clean, but you lose some of the strong typing that you get with the keyof operator. So it's up to you which method you pick, but all three methods will definitely help keep our code safer. TypeScript provides us with some powerful tools to help us not mess up, and the keyof operator is a really awesome one. While these techniques are demonstrated in the context of a NativeScript Core application, we can use them in any application where an API requires a magic string as a property name.

You might also like a video version of this article, which is available here. If you enjoy video learning and you're interested in more NativeScript development techniques that are beginner to advanced, check out NativeScripting.com for video courses. Happy coding.
https://blog.nativescript.org/nativescript-observable-magic-string-property-name-be-gone/
How to Use Labels, Annotations, and Legends in MatPlotLib To fully document your MatPlotLib graph, you usually have to resort to labels, annotations, and legends. Each of these elements has a different purpose, as follows: - Label: Provides positive identification of a particular data element or grouping. The purpose is to make it easy for the viewer to know the name or kind of data illustrated. - Annotation: Augments the information the viewer can immediately see about the data with notes, sources, or other useful information. In contrast to a label, the purpose of annotation is to help extend the viewer’s knowledge of the data rather than simply identify it. - Legend: Presents a listing of the data groups within the graph and often provides cues (such as line type or color) to make identification of the data group easier. For example, all the red points may belong to group A, while all the blue points may belong to group B. The following information helps you understand the purpose and usage of various documentation aids provided with MatPlotLib. These documentation aids help you create an environment in which the viewer is certain as to the source, purpose, and usage of data elements. Some graphs work just fine without any documentation aids, but in other cases, you might find that you need to use all three in order to communicate with your viewer fully. Adding labels Labels help people understand the significance of each axis of any graph you create. Without labels, the values portrayed don’t have any significance. In addition to a moniker, such as rainfall, you can also add units of measure, such as inches or centimeters, so that your audience knows how to interpret the data shown. 
The following example shows how to add labels to your graph: values = [1, 5, 8, 9, 2, 0, 3, 10, 4, 7] import matplotlib.pyplot as plt plt.xlabel('Entries') plt.ylabel('Values') plt.plot(range(1,11), values) plt.show() The call to xlabel() documents the x-axis of your graph, while the call to ylabel() documents the y-axis of your graph. Annotating the chart You use annotation to draw special attention to points of interest on a graph. For example, you may want to point out that a specific data point is outside the usual range expected for a particular data set. The following example shows how to add annotation to a graph. values = [1, 5, 8, 9, 2, 0, 3, 10, 4, 7] import matplotlib.pyplot as plt plt.annotate(xy=[1,1], s='First Entry') plt.plot(range(1,11), values) plt.show() The call to annotate() provides the labeling you need. You must provide a location for the annotation by using the xy parameter, as well as provide text to place at the location by using the s parameter. The annotate() function also provides other parameters that you can use to create special formatting or placement on-screen. Creating a legend A legend documents the individual elements of a plot. Each line is presented in a table that contains a label for it so that people can differentiate between each line. For example, one line may represent sales from the first store location and another line may represent sales from a second store location, so you include an entry in the legend for each line that is labeled first and second. The following example shows how to add a legend to your plot: values = [1, 5, 8, 9, 2, 0, 3, 10, 4, 7] values2 = [3, 8, 9, 2, 1, 2, 4, 7, 6, 6] import matplotlib.pyplot as plt line1 = plt.plot(range(1,11), values) line2 = plt.plot(range(1,11), values2) plt.legend(['First', 'Second’], loc=4) plt.show() The call to legend() occurs after you create the plots, not before. You must provide a handle to each of the plots. 
Notice how line1 is set equal to the first plot() call and line2 is set equal to the second plot() call. By default, MatPlotLib picks the "best" open corner for the legend, which here turned out to be the upper-right corner of the plot and proved inconvenient for this particular example. Adding the loc parameter lets you place the legend in a different location; loc=4 corresponds to the lower-right corner. See the legend() function documentation for additional legend locations.
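An equivalent sketch of the same legend attaches labels at plot time and uses a named location string ('lower right' is the named equivalent of loc=4):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs without a display
import matplotlib.pyplot as plt

values = [1, 5, 8, 9, 2, 0, 3, 10, 4, 7]
values2 = [3, 8, 9, 2, 1, 2, 4, 7, 6, 6]
# Labeling each line at plot time means legend() needs no explicit label list.
plt.plot(range(1, 11), values, label='First')
plt.plot(range(1, 11), values2, label='Second')
leg = plt.legend(loc='lower right')  # named equivalent of loc=4
```

Named location strings read better than the numeric codes and are harder to get wrong.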
https://www.dummies.com/programming/use-labels-annotations-legends-matplotlib/
Pattern Synonyms

Most language entities in Haskell can be named so that they can be abbreviated instead of written out in full. This proposal provides the same power for patterns. See the implementation page for implementation details.

Relevant open tickets:

Motivating example

Here is a simple representation of types:

data Type = App String [Type]

Using this representation the arrow type looks like App "->" [t1, t2]. Here are functions that collect all argument types of nested arrows and recognize the Int type:

collectArgs :: Type -> [Type]
collectArgs (App "->" [t1, t2]) = t1 : collectArgs t2
collectArgs _ = []

isInt (App "Int" []) = True
isInt _ = False

Matching on App directly is both hard to read and error-prone to write. The proposal is to introduce a way to give patterns names:

pattern Arrow t1 t2 = App "->" [t1, t2]
pattern Int = App "Int" []

And now we can write:

collectArgs :: Type -> [Type]
collectArgs (Arrow t1 t2) = t1 : collectArgs t2
collectArgs _ = []

isInt Int = True
isInt _ = False

Here is a second example from pigworker on Reddit. Your basic sums-of-products functors can be built from this kit:

newtype K a x = K a
newtype I x = I x
newtype (:+:) f g x = Sum (Either (f x) (g x))
newtype (:*:) f g x = Prod (f x, g x)

and then you can make recursive datatypes via

newtype Fix f = In (f (Fix f))

e.g.,

type Tree = Fix (K () :+: (I :*: I))

and you can get useful generic operations cheaply because the functors in the kit are all Traversable, admit a partial zip operation, etc. You can define friendly constructors for use in expressions:

leaf :: Tree
leaf = In (Sum (Left (K ())))

node :: Tree -> Tree -> Tree
node l r = In (Sum (Right (Prod (I l, I r))))

but any Tree-specific pattern matching code you write will be wide and obscure. Turning these definitions into pattern synonyms means you can have both readable type-specific programs and handy generics without marshalling your data between views.
Uni-directional (pattern-only) synonyms

The simplest form of pattern synonyms is the one from the examples above. The grammar rule is:

pattern conid varid1 ... varidn <- pat
pattern varid1 consym varid2 <- pat

- Each of the variables on the left hand side must occur exactly once on the right hand side.
- Pattern synonyms are not allowed to be recursive. Cf. type synonyms.

Pattern synonyms can be exported and imported by prefixing the conid with the keyword pattern:

module Foo (pattern Arrow) where ...

This is required because pattern synonyms are in the namespace of constructors, so it's perfectly valid to have

data P = C
pattern P = 42

You may also give a type signature for a pattern, but as with most other type signatures in Haskell it is optional:

pattern conid :: type

E.g.

pattern Arrow :: Type -> Type -> Type
pattern Arrow t1 t2 <- App "->" [t1, t2]

Together with ViewPatterns we can now create patterns that look like regular patterns to match on existing (perhaps abstract) types in new ways:

import qualified Data.Sequence as Seq

pattern Empty <- (Seq.viewl -> Seq.EmptyL)
pattern x :< xs <- (Seq.viewl -> x Seq.:< xs)
pattern xs :> x <- (Seq.viewr -> xs Seq.:> x)

Simply-bidirectional pattern synonyms

In cases where pat is in the intersection of the grammars for patterns and expressions (i.e. is valid both as an expression and a pattern), the pattern synonym can be made bidirectional, and can be used in expression contexts as well. Bidirectional pattern synonyms have the following syntax:

pattern conid varid1 ... varidn = pat
pattern varid1 consym varid2 = pat

For example, the following two pattern synonym definitions are rejected, because they are not bidirectional (but they would be valid as pattern-only synonyms):

pattern ThirdElem x = _:_:x:_
pattern Snd y = (x, y)

since the right-hand sides are not closed expressions of the variables {x} and {y}, respectively. In contrast, the pattern synonyms for Arrow and Int above are bidirectional, so you can e.g.
write:

arrows :: [Type] -> Type -> Type
arrows = flip $ foldr Arrow

Explicitly-bidirectional pattern synonyms

What if you want to use Succ in an expression:

pattern Succ n <- n1 | let n = n1 - 1, n >= 0

It's clearly impossible since its expansion is a pattern that has no meaning as an expression. Nevertheless, if we want to make what looks like a constructor for a type we will often want to use it in both patterns and expressions. This is the rationale for the most complicated form of pattern synonym declaration, which pairs the pattern with a where clause giving its meaning as an expression:

pattern Succ n <- n1 | let n = n1 - 1, n >= 0 where
  Succ n = n + 1

so that one can write, e.g., fac (Succ n) = Succ n * fac n.

Associated pattern synonyms

One could go one step further and leave out the pattern keyword to obtain associated constructors, which are required to be bidirectional. The capitalized identifier would indicate that a pattern synonym is being defined. For complicated cases one could resort to the where syntax (shown above).

TODO: Syntax for associated pattern synonym declarations to discern between pattern-only and bidirectional pattern synonyms:

P :: ty requires CReq provides CProv where ty
P :: b -> T a requires (Show a) provides (Eq.)
https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms?version=27
Having been busy developing Carbon, I haven't had the time to play much with it myself. 🙂 One of the most essential features in a disassembler is the capability to let the users write scripts and modify the disassembly itself. Carbon has a rich SDK and this is a little tutorial to introduce how it works. Before trying out any of the scripts in this tutorial, make sure to update to the newest 3.0.2 version, as we just fixed a few bugs related to the Carbon Python SDK. So let's start!

I wrote a small program with some encrypted strings.

#include <stdio.h>

unsigned char s1[13] = { 0x84, 0xA9, 0xA0, 0xA0, 0xA3, 0xE0, 0xEC, 0xBB, 0xA3, 0xBE, 0xA0, 0xA8, 0xED };
unsigned char s2[17] = { 0x98, 0xA4, 0xA5, 0xBF, 0xEC, 0xA5, 0xBF, 0xEC, 0x9F, 0x9C, 0x8D, 0x9E, 0x98, 0x8D, 0xED, 0xED, 0xED };
unsigned char s3[11] = { 0x82, 0xA3, 0xE0, 0xEC, 0xBE, 0xA9, 0xAD, 0xA0, 0xA0, 0xB5, 0xE2 };

char *decrypt(unsigned char *s, size_t n)
{
    for (size_t i = 0; i < n; i++)
        s[i] ^= 0xCC;
    return (char *) s;
}

#define DS(s) decrypt(s, sizeof (s))

int main()
{
    puts(DS(s1));
    puts(DS(s2));
    puts(DS(s3));
    return 0;
}

The decryption function is super-simple, but that's not important for our purposes. I disassembled the debug version of the program, because I didn't want release optimizations like the decrypt function getting inlined. Not that it matters much, but in a real-world scenario a longer decryption function wouldn't get inlined.

By going to the decrypt function, we end up at a jmp which points to the actual function code.

.text:0x0041114A decrypt proc start
.text:0x0041114A ; CODE XREF: 0x00411465
.text:0x0041114A ; CODE XREF: 0x00411487
.text:0x0041114A ; CODE XREF: 0x004114A9
.text:0x0041114A E9 71 02 00 00 jmp sub_4113C0

At this point, the SDK offers us many possible approaches to find all occurrences of encrypted strings. We could, for instance, enumerate all disassembled instructions. But that's not very fast.
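Before scripting the disassembler, it's easy to sanity-check the cipher itself; this stand-alone Python sketch XORs the first array from the C source with the same 0xCC key:

```python
# Byte values copied from s1[] in the C program above
s1 = [0x84, 0xA9, 0xA0, 0xA0, 0xA3, 0xE0, 0xEC,
      0xBB, 0xA3, 0xBE, 0xA0, 0xA8, 0xED]

def decrypt(data, key=0xCC):
    # Same single-byte XOR as the C decrypt() routine
    return bytes(b ^ key for b in data).decode("utf-8")

print(decrypt(s1))  # Hello, world!
```

Knowing the expected plaintext up front makes it easy to verify the disassembler script later.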
A better approach is to get all xrefs to the decrypt function and then proceed from there.

First we get the current view.

v = proContext().getCurrentView()
ca = v.getCarbon()
db = ca.getDB()

Then we get all xrefs to the decrypt function.

xrefs = db.getXRefs(0x0041114A, True)

We enumerate all xrefs and we extract the address and length of each string.

it = xrefs.iterator()
while it.hasNext():
    xref = it.next()
    # retrieve address and length of the string
    buf = ca.read(xref.origin - 6, 6)
    slen = buf[0]
    saddr = struct.unpack_from("<I", buf, 2)[0]

We decrypt the string.

s = ca.read(saddr, slen)
s = bytes([c ^ 0xCC for c in s]).decode("utf-8")

At this point we can add a comment to each push of the string address with the decrypted string.

comment = caComment()
comment.address = xref.origin - 5
comment.text = s
db.setComment(comment)

As a final touch, we tell the view to update, in order to show us the changes we made to the underlying database.

v.update()

Here's the complete script, which we can execute via Ctrl+Alt+R (we have to make sure that we are executing the script while the focus is on the disassembly view, otherwise it won't work).

from Pro.UI import proContext
from Pro.Carbon import caComment
import struct

def decrypt_strings():
    v = proContext().getCurrentView()
    ca = v.getCarbon()
    db = ca.getDB()
    # get all xrefs to the decryption function
    xrefs = db.getXRefs(0x0041114A, True)
    it = xrefs.iterator()
    while it.hasNext():
        xref = it.next()
        # retrieve address and length of the string
        buf = ca.read(xref.origin - 6, 6)
        slen = buf[0]
        saddr = struct.unpack_from("<I", buf, 2)[0]
        # decrypt string
        s = ca.read(saddr, slen)
        s = bytes([c ^ 0xCC for c in s]).decode("utf-8")
        # comment
        comment = caComment()
        comment.address = xref.origin - 5
        comment.text = s
        db.setComment(comment)
    # update the view
    v.update()

decrypt_strings()

It will result in the decrypted strings shown as comments. This could be the end of the tutorial.
However, in the upcoming 3.1 version I just added the capability to overwrite bytes in the disassembly. This feature is available both from the context menu (Edit bytes) under the shortcut "E" and from the SDK via the Carbon write method. What it does is to patch bytes in the database only: the original file won't be touched!

So let's modify the last part of the script above:

# decrypt string
s = ca.read(saddr, slen)
s = bytes([c ^ 0xCC for c in s])
# overwrite in disasm
ca.write(saddr, s)

Please notice that I removed the '.decode("utf-8")' part of the script, as now I'm passing a bytes object to the write method. This is the result.

.text:0x0041145E push 0xD
.text:0x00411460 push 0x418000 ; "Hello, world!"
.text:0x00411465 call decrypt
.text:0x0041146A add esp, 8
.text:0x0041146D mov esi, esp
.text:0x0041146F push eax
.text:0x00411470 call dword ptr [0x419114] -> MSVCR120D.puts
.text:0x00411476 add esp, 4
.text:0x00411479 cmp esi, esp
.text:0x0041147B call sub_411136
.text:0x00411480 push 0x11
.text:0x00411482 push 0x418010 ; "This is SPARTA!!!"
.text:0x00411487 call decrypt
.text:0x0041148C add esp, 8
.text:0x0041148F mov esi, esp
.text:0x00411491 push eax
.text:0x00411492 call dword ptr [0x419114] -> MSVCR120D.puts
.text:0x00411498 add esp, 4
.text:0x0041149B cmp esi, esp
.text:0x0041149D call sub_411136
.text:0x004114A2 push 0xB
.text:0x004114A4 push 0x418024 ; "No, really."
.text:0x004114A9 call decrypt
.text:0x004114AE add esp, 8
.text:0x004114B1 mov esi, esp
.text:0x004114B3 push eax
.text:0x004114B4 call dword ptr [0x419114] -> MSVCR120D.puts

I didn't add any comments: the strings are now detected automatically by the disassembler and shown as comments. Perhaps for this particular task it's better to use the first approach instead of changing bytes in the database, but the capability to overwrite bytes becomes important when dealing with self-modifying code or other tasks.

I hope you enjoyed this first tutorial. 🙂 Happy hacking!
https://blog.cerbero.io/?p=1804
We are given a spiral matrix of odd order, in which we start with the number 1 at the center and move to the right in a clockwise direction.

Examples:

Input : n = 3
Output : 25
Explanation : spiral matrix =
7 8 9
6 1 2
5 4 3
The sum of diagonals is 7+1+3+9+5 = 25

Input : n = 5
Output : 101
Explanation : spiral matrix of order 5
21 22 23 24 25
20  7  8  9 10
19  6  1  2 11
18  5  4  3 12
17 16 15 14 13
The sum of diagonals is 21+7+1+3+13+25+9+5+17 = 101

If we take a closer look at the spiral matrix of n x n, we can notice that the top right corner element has value n^2. The value of the top left corner is (n^2) - (n-1). [Why? Note that moving anti-clockwise from the top right corner along the outer ring, each corner is n-1 steps from the previous one, so we get the value of the top left corner by subtracting n-1 from the top right.] Similarly, the value of the bottom left corner is (n^2) - 2(n-1) and of the bottom right corner is (n^2) - 3(n-1). After adding all four corners we get 4(n^2) - 6(n-1).

Let f(n) be the sum of diagonal elements for an n x n matrix. Using the above observations, we can recursively write f(n) as:

f(n) = 4(n^2) - 6(n-1) + f(n-2)

From the above relation, we can compute the sum of all diagonal elements of a spiral matrix with a simple recursive method:

spiralDiaSum(n)
{
    if (n == 1)
        return 1;

    // as the order should be odd,
    // we should pass only odd integers
    return (4*n*n - 6*n + 6 + spiralDiaSum(n-2));
}

(Implementations: C++, Java, Python3, C#, PHP)

Output : 261 (for n = 7)
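The recurrence translates directly into Python; this sketch checks it against the worked examples above (25 for n = 3, 101 for n = 5) and against 261, which is the quoted output for n = 7:

```python
def spiral_dia_sum(n):
    # n must be odd: sum of both diagonals of an n x n clockwise spiral
    if n == 1:
        return 1
    # the four corners of the outer ring contribute 4*n*n - 6*(n-1);
    # recurse on the inner (n-2) x (n-2) spiral
    return 4 * n * n - 6 * n + 6 + spiral_dia_sum(n - 2)

print(spiral_dia_sum(3), spiral_dia_sum(5), spiral_dia_sum(7))  # 25 101 261
```

The recursion depth is n/2, so for very large odd n an iterative loop over the same formula avoids hitting the recursion limit.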
https://tutorialspoint.dev/data-structure/matrix-archives/sum-diagonals-spiral-odd-order-square-matrix
Re: MathPlayerSetup installation

One problem in this case (and you will run into more later if you keep using MathML and Mathematica) is that you have not included the necessary markup to trigger MathPlayer. If you want to do this in a way that is compatible with other MathML implementations, this means using xhtml. Also, MathPlayer is somewhat limited in that it requires the proper sgml doctype instead of just an xml namespace declaration. In the case of files served from the web, you will need an application/xhtml+xml (I think that's right) mime type. If you are reading files off your hard disk, I don't know how to get MathPlayer to fire in that case.

On 2/13/07, xq386 at hotmail.com <xq386 at hotmail.com> wrote:
> I installed MathPlayerSetup in my Internet Explorer 6.0.2900.2180.
> Then I tried to display the following html file:
>
> <html>
> <body>
> <math xlmns="http:">
> <mfrac>
> <mi>a</mi>
> <mi>b</mi>
> </mfrac>
> </math>
> </body>
> </html>
>
> I hoped that a formula would be displayed, but I see just "a b". What am I
> doing wrong?
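For reference, a sketch of a corrected minimal document looks something like the following; the MathML namespace URI is the standard one, but whether MathPlayer actually fires still depends on the DOCTYPE and MIME-type caveats described in the reply above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <mfrac>
        <mi>a</mi>
        <mi>b</mi>
      </mfrac>
    </math>
  </body>
</html>
```

Served as application/xhtml+xml, this renders a over b as a fraction in MathML-aware browsers.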
http://forums.wolfram.com/mathgroup/archive/2007/Feb/msg00353.html
General :: Undelete Or Recover Previously Deleted Files? Oct 22, 2010

There are some files on my external disk drive that are corrupted and not identifiable. How can I recover these files?

If it is not at all possible with public tools, is it possible for experts to recover the files? (as in, pay someone to do it) What happened is I accidentally deleted a few folders containing family photos and my text files for work, address books etc, just personal stuff I don't have backups of (from an ext4 partition 'root'). Feel free to call me stupid, but I didn't notice I had deleted them before I did the following... I ended up copying enough data to the ext4 partition which completely filled it (less than 1mb remained); once I backed those up I deleted them (trash empty), and a few days later I ran sfill to erase the originals.

I removed some games, and was going to remove the .desktop files, but I accidentally typed "rm /usr/share/applications/*" instead of "rm /usr/share/applications/Games/*". I'd like to know if I could "undelete" and recover these files. The only other way I can think of to get them back is to reinstall all of the programs. I'm using Slackware 13 and have an ext4 file system, if that helps.

I lost the files mentioned below on my Apache server. How can I recover libphp5.so and mod_actions.so?

Is there anything like EasyRecovery for Linux? Free open source command-line based software strongly preferred. Expecting something like:

$ fat32_recovery --some-arcane-options dump.img dir/
Recovery in progress...
~ILE1.TXT -> dir/XILE1.TXT
[code]....

I am having a problem with my ubuntu 9.10 operating system. Is there any way I can roll back the settings and files that I deleted previously without totally reinstalling the system?

I had a backup of the files of someone. He realized that he wants those files after I had deleted them.
And now I need to recover them. How can I do that?

I deleted a whole bunch of files. I have backed them up, but the backup was from about 2 weeks ago, and as I had added loads of stuff in the meantime I urgently need to recover the files. Ubuntu 9.10. Any and every file recovery program you know, please. Preferably one that allows me to recover an entire directory, not just individual files, but it'll be fine if that is it.

Yesterday I accidentally deleted all files from my desktop (with rm). Now I am looking for a way to recover them. I tried to use scalpel to recover them, which found many files (more than 800000 zip files). I stopped the process because it would take ages. I would like to recover files only from the desktop folder. Is this possible? Is there some other good recovery program? Using Ubuntu 10.04.

I moved a few files from a directory in my home directory structure to the KDE trash folder, and then deleted them from the trash folder. About a minute later I regretted this, and now I'd like to see if there's any way to recover the files. First, are there any good utilities for restoring accidentally deleted files? If so, where would I look for these files? Does the KDE trash config file actually correspond to a physical directory somewhere, or do the files just remain hidden in their original location?

I wiped out 60% of my VirtualBox .vdi files on one of my partitions. The file sizes ranged from 3gb to 9gb. (I did have some backups, but from 4 months ago.) Needless to say I'll be backing my files up more often (especially my Virtual Image .vdi files). So here are the steps...: [Look, I know it seems like a lot of steps, but it's worth it in the end!!] (By the way, these are all ext3 filesystems; I would imagine you could recover fat32 [windows] type filesystems too, but I just did this under Linux filesystems.)

1--> If you find you've deleted any files, try to unmount the partition.
(In my case it was an external 2 1/2" hard drive; the command used to unmount is sudo umount /dev/sda3)

1b--> If you only have one partition, then I'd suggest shutting down your computer and putting a Live CD in it (preferably the Ubuntu Live CD).

2--> Whether 1 or 1b applies to you, install ext3grep from Synaptic or any package manager. (If you had to reboot via a live CD, make sure you unmount the partition that has the deleted files (example: umount /dev/sda1, or in my case it was umount /dev/sda3). If you're on the LiveCD of Ubuntu, I believe it will let you install the ext3grep package using Synaptic Package Manager, and it will put it in RAM under the Live Desktop Session.)

3--> Now here's the important part before you proceed any further. If the partition that has the deleted files is taking up 30gb (yes, 30gb used space), then you have to mount an existing partition with GREATER than 30gb of ***FREE*** space. I happened to have another partition /media/sda7 already mounted that had 50gb free. So at this point, you must go to any directory under your recovery partition (i.e. my 50gb partition /media/sda7). To do this, run the command cd /media/sda7; now you're in your recovery partition. You can make a new directory if you want, or just use any existing directory on the /media/sda7 partition. (I made a directory, something like mkdir ./Yikes.) So I get into the directory with cd /media/sda7/Yikes, then run the following command:

ext3grep --restore-all /dev/sda3

4--> ***Keep in mind, you just ran that command from the /media/sda7/Yikes directory on your recovery partition. ***This will create a folder called "RESTORED_FILES" under/in the Yikes directory.*** The ext3grep command you just submitted will try to recover every single file on that partition that has the deleted files (i.e. /dev/sda3). There is a way to restore single files and their paths, but I got frustrated and just did a full restore.
5--> Depending on the partition size and number of files, it could take 30 minutes to 2 hours or more before you start to see messages in the terminal screen saying "Restored file... Abc.txt or sam.jpg". Let it finish!!! At first you will see it saying "Group 1, Group 2" and crazy characters going across the screen; that's normal. You know it's beginning the actual restore process when you start to see "Restored file...".

6--> At this point you can open a DIFFERENT terminal screen and do cd /media/sda7/Yikes/RESTORED_FILES to see the files being restored under the various directories. This does work, because I was able to restore at least 25gb worth of files. Again, file sizes ranged from 3gb to 9gb!!
any tools that will allow me to see if there is anything that can be recovered.View 5 Replies View Related I just downloaded, burned, and tried the ISO image. only to find out it's not a bootable, live CD, but rather a Windows program, ie. it requires booting into Windows and running it from the CD, which is not a good idea since the first thing to do in this case is to quit the OS to prevent it from using those newly available sectors to write new data. can a Linux-based live CD try and recover files recently deleted in an NTFS partition?View 4 Replies View Related i want to know how can i recover deleted files in ext3 partition manually(not using any tools)?? probably using the 'grep' command. if someone know pls tell me... (i have recoverd deleted files in an ext2 partition with debugfs and dump . but dumping doesnt work for ext3) How can I recover My deleted files in ubuntu? What's the difference between "foremost" and "scalpel"? And is there any other program(or package?) For this purpose in ubuntu? I am running ubuntu 9.10View 1 Replies View Related I am using CentOS 5.5.I suppose this is an oft repeated question. I accidentally deleted, using rm command, 2 wmv files. The files were in a single ext3 1Tb drive, with just 1 partition --- the ext3 one. Each file is 600 - 800mb. The 1Tb drive has only about 20Gb data.Immediately after deleting the files i unmounted the drive (/dev/sdc1). Thereafter i searched the the net and came to know of the recovery tools foremost and photorec. I have installed both of them. I am currently running both of them as root --- foremost is just showing a lot of * signs on the terminal and photorec has managed to find some txt and png files --- but no wmv.For foremost i used: /usr/sbin/foremost -t wmv -i /dev/sdc1For photorec i followed some instructions available on the web. In the meantime, based on some post on the net i ran debugfs as root, then cd into the directory where the files were deleted. 
Then on typing ls -d i managed to get the inodes of the 2 deleted files and the names of the deleted files are also correct. The instructions on the net tell me to run fsstat and dls both of which i am unable to find in /bin, /usr/bin, /usr/sbin and /sbin. So i am unable to proceed further. I cannot boot into by Ubuntu 9.1 machine.... Trying either GUI or rescue mode gives me the following error messages (which i copied by hand since they were in cli) Code: mount : mounting /dev/disk/by-uuid/64e5cb0d-058a-4a4c-af4b-7afb6427a72e3 on root failed : invalid argument mount : mounting /sys on /root/sys failed : no such file or directory mount : mounting /dev on /root/dev failed : no such file or directory mount : mounting /proc on /root/proc failed : no such file or directory Target doesnt have /sbin/init The only thing i remember doing before this is deleting some bootloader files... but they were on another disk so I didn't think that it would affect my ubuntu install. Guess I was wrong how I can recover my system? I've been trying to discover how can I recover data from a hard drive that as been previously configured with LVM, although not encrypted,but I have been having mixed results. I've been using LVM more and more lately to configure hard drives and my greatest fear is to have a motherboard fail on me and get locked out of the contents.View 8 Replies View Related Say I have a file that's downloading (from a source that's hard to re-download from), but accidentally deleted from the filesystem namespace (/tmp/blah), and I'd like to recover this file. Normally I could just cp /proc/$PID/fd/$FD /tmp/blah, but in this case that would only get me a partial snapshot, since the file is still downloading. Furthermore, once the download completes, the downloading process (e.g. Chrome) will close the FD. Any way to recover by inode/create a hard link? Any other solutions? 
If it makes any difference, I'm mainly concerned with ext4.View 3 Replies View Related I was installing windows vista on my computer, so I backed up everything to a external drive which was formatted with ext2. I then proceeded to install windows vista. When I got to the partition section I tried installing windows vista to my raid 0. When it didn't work I decided that I would delete all my existing partitions and create a new one. Well in my haste I accidentally deleted my ext2 partition from my backup drive that was still connected. As soon as I realized what I had done I shutdown the windows install and disconnected my external drive. This is the current state of my drive from parted: Model: WD 15EADS External (scsi) Disk /dev/sdb: 1500GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags I know that the drive only had one partition before and that it took up the entire disk and it was ext2 (maybe ext3). recover deleted files using "debugfs" & "extundelete" by running:Code:sudo debugfs /dev/sda3 and find inode number of deleted file using "ls -d" command and then running:Code:sudo extundelete /dev/sda3 --restore-file <inode#>but when my desired file was in a deleted folder I can't find my desired file inode number using debugfs I'm running Mac OS X 10.5.8. I was recently uninstalling mysql5 from /opt/local/bin.I typed:rm -rf /opt/local/bin mysql*instead ofrm -rf /opt/local/bin/mysql*This deleted my entire /opt/local/bin directory which puts me in a bit of a bind.Is there any way to recover those files? If not, I have a friend that is using a similar set of programs, would it be possible to use the contents of his folderIf I end up needing to reinstall everything in this folder, what is the best way to go about doing this?View 2 Replies View Related How to recover a removed file under linux Is there any free undelete software for the Mac? 
I have accidentally deleted a very-very important file in my Linux (Ubuntu) machine using the command rm. Is there any way to recover it? $ cat important_file > /dev/null & [1] 9711 $ rm important_file [code].... I cut and paste files today and they seemed to have gone missing or got deleted in the process. How do I recover them? As a first guess I would think they are sitting on a clip board somewhere, but I can't find any information anywhere about fedora14 and the default clipboard that ships with it. I have accidentally ended up in deleting my root directory while I blindly fired command while watching movie. I fired following command #rm -rf ~/<SPACE>*.out instead of this command #rm -rf ~/*.out Things already done: 1) Created /root directory relogged to get some of basic settings of gnome and Desktop. 2) Things went well now when I login my desktop ,gnome environment and other things looks to be working well only prompt on my terminal has changed. I can fix it any ways. Things I want to ask: 1) I haven't studied much about contents of /root directory to best of my knowledge is it like other user's home directory with some basic configuration files for mostly required applications. SO my question is have I lost any thing important system file or something? 2) If I have lost any important configuration or system data how can I recover it without reinstalling whole system? (My opinion about this is, It is quite possible but to do so, as far as I know capabilities of linux. But I still want comments from experts before I try any things on it because I don't want to backup my whole HDD and reinstall the whole stuff again for me and also my sister's stuff in MS.) I have just had to reinstall my OS (Sabayon) onto a new and larger hard drive (dying old disk). I quickly saved all my old docs in /home on an external USB drive (formatted and then created an ext4 file system) before the swap and installation. 
After getting the new disk running I connected the USB external disk. First I could not access the drive at all, but that seems to be fixed. Now I want to bring all my files back to my new /home folder but apparently they (especially the former MS Office .doc, .xls, etc files, not so much the OpenOffice files) are �read only� and I don't have permissions anymore. I am able to create directories on the external, and can move files back and forth, but don't seem to own many of them.Sabayon automatically mounted my external disk in /media/disk, rather than /mnt, so I've left that alone for now. After searching here and elsewhere for info, I tried a few things (below): For access I added a line to /etc/fstab: Code: /dev/sdb1 /media/disk ext4 noauto,rw 0 0 Here's what I've done to try and fix ownership already: in /media I: Code: # chmod 775 disk which gives output of: Code: media # ls -l total 4 drwxrwxr-x 2 root root 4096 Nov 1 21:30 disk [code].... I'm not sure what the �total 4� refers to since there is only the one directory �disk� inside /media but I assume that's not part of this issue... Does adding umask=0 have anything to do with this, and if so where does that go?
http://linux.bigresource.com/General-Undelete-or-recover-Previously-Deleted-Files--nFiGKtgKg.html
Asked by: Call method only when two properties changed

I want to call a method only when two properties changed. How can I accomplish that? For example:

private int x;
public int X
{
    get { return x; }
    set
    {
        x = value;
        RaisePropertyChanged(() => this.X);
        currentVirtualCamera.X = X;
        //CALL THIS ONLY IF Y CHANGED AS WELL
        orchestrator.SelectLayout(CurrentLayout);
    }
}

private int y;
public int Y
{
    get { return y; }
    set
    {
        y = value;
        RaisePropertyChanged(() => this.Y);
        currentVirtualCamera.Y = Y;
        //CALL THIS ONLY IF X CHANGED AS WELL
        orchestrator.SelectLayout(CurrentLayout);
    }
}

All replies
The assumption here is that callers will typically assign both X and Y together, though since they're separate properties they must be assigned one at a time, and you only want to generate a notification for the pair rather than for each individual assignment. DistinctUntilChanged drops consecutive notifications with the same value. Applying this operator with CombineLatest defines a query that generates a notification whenever either property truly changes, though not until both properties have been assigned at least once. You could also apply DistinctUntilChanged with Zip. There are many other possibilities as well; I've just included a few common examples.

- Dave

Awesome! The Zip is what I was looking for. Rx (Reactive Extensions) is so awesome, but I'm a newbie at it. I guess it's a different way of thinking. Well, thanks!

Hi,

I'd be remiss if I didn't also mention that you should consider defining a single property of type Point instead of having two individual X and Y properties. That way you'll force callers to specify both values simultaneously, and then you don't have to use a reactive query at all. This also makes the contract part of your type rather than assuming that callers are always going to assign X and Y in a pairwise fashion.

Keep in mind that using Zip means that if a caller assigns X1, X2, X3, then Y1, only a single notification will be generated for (X1, Y1). Then, a subsequent assignment to Y2 will immediately generate a notification for (X2, Y2) without waiting for another X. In other words, if callers don't adhere perfectly to your assumption, then your query may still result in a consecutive sequence of layout updates, even though that is exactly what you were trying to avoid.
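A rough sketch of that Point suggestion, reusing the question's own hypothetical helpers (RaisePropertyChanged, currentVirtualCamera, orchestrator — none of these are shown in full in the thread), might look like this:

```csharp
// Sketch only: collapse the two int properties into one Point property,
// so both coordinates change atomically and raise exactly one update.
private Point position;
public Point Position
{
    get { return position; }
    set
    {
        position = value;
        RaisePropertyChanged(() => this.Position);
        // Point stores doubles; the camera in the question uses ints.
        currentVirtualCamera.X = (int)value.X;
        currentVirtualCamera.Y = (int)value.Y;
        orchestrator.SelectLayout(CurrentLayout); // one call per pair
    }
}
```

Callers must then write `vm.Position = new Point(x, y)`, which makes the "assign both together" contract explicit instead of relying on a reactive query to pair the assignments.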
- Dave

Ok, it seems that Zip did not resolve my problem. My problem is that I have a Joystick class with X and Y properties, which are updated by timers, each one having its own:

    public class Joystick : Control
    {
        private Timer timerx;
        private Timer timery;

        private void TimerxElapsed(object sender, ElapsedEventArgs e)
        {
            if (Math.Abs(distance.X) < 15)
                timerx.Interval = 80;
            else if (Math.Abs(distance.X) < 30)
                timerx.Interval = 30;
            else if (Math.Abs(distance.X) < 40)
                timerx.Interval = 15;
            else
                timerx.Interval = 2;

            Dispatcher.BeginInvoke((Action)(() =>
            {
                X += (distance.X > 0 ? 1 : (distance.X < 0 ? -1 : 0));
            }));
        }

        private void TimeryElapsed(object sender, ElapsedEventArgs e)
        {
            if (Math.Abs(distance.Y) < 15)
                timery.Interval = 80;
            else if (Math.Abs(distance.Y) < 30)
                timery.Interval = 30;
            else if (Math.Abs(distance.Y) < 40)
                timery.Interval = 15;
            else
                timery.Interval = 2;

            Dispatcher.BeginInvoke((Action)(() =>
            {
                Y += (distance.Y > 0 ? 1 : (distance.Y < 0 ? -1 : 0));
            }));
        }

        // More stuff...

The Joystick's X and Y properties are bound to the ViewModel I showed in the first post. Before your Zip answer, whenever X changed the layout was updated, and whenever Y changed the layout was also updated. What I'm controlling is in fact a sort of virtual camera, so by not changing X and Y together, when you move the Joystick diagonally the virtual camera does not move diagonally: it moves along the X axis and then along the Y axis. Everything happens very fast, but it is noticeable that it is not moving diagonally. So that's what I'm trying to accomplish: move the virtual camera diagonally as you move the Joystick diagonally. Now, as I see it, even if I change the Joystick properties to a Point, I don't see any improvement, because I would have to instantiate a new Point, and as soon as I do so, the property in the ViewModel would change (two-way binding).
Hi,

Why are you using two separate timers? I see that the distance of X affects the rate of its timer, and likewise for Y, but I don't understand why. In other words, the resolution of the calculated joystick movement depends on where the cursor is located on the screen, right? How is that useful to you? Why not use a static resolution instead? Another way of looking at it: X is useless without Y, and vice versa, so shouldn't they be updated together as a Point? That would not only give you a perfect diagonal, but also perfect movement in any direction. Sorry if I'm missing something here; maybe it's related to some kind of gameplay dynamic?

To answer your original question based on your new example with 2 timers, I think it's important to first notice that you may get a sequence of mutations that looks like this:

    X X X X Y X X X X Y

or like this:

    Y Y Y Y X Y Y Y Y X

depending upon which axis currently has a faster rate. For the former you want to wait for each Y to perform updates. For the latter you want to wait for each X to perform updates. In other words, you want a query that generates notifications based on the least frequently updated value, which changes dynamically. I'm concerned that a query such as this isn't really the correct solution to whatever problem you're trying to solve by using 2 timers. Would you care to elaborate?

- Dave

    Why are you using two separate timers? I see that the distance of X affects the rate of its timer, and likewise for Y, but I don't understand why. - Dave

The reason is velocity. The Joystick has a Radius of 50. So, as the user drags the Joystick's thumb farther away, the velocity must increase; that is, X or Y should be updated more frequently. Also, note that X and Y must each be updated only by 1 at a time. To picture the scenario, imagine you want to move a rectangle on the screen, but to move it you will not use the mouse; instead, you will use a Joystick.
Hi,

Thanks for clarifying. I don't own a joystick anymore, but I understand what you mean now. :)

How about this: instead of changing the frequency of the timers and incrementing by 1, define an asynchronous UI loop (e.g., via async and await) that with each iteration increments by the current magnitude of the joystick (times some averaging factor perhaps, for more consistency across heterogeneous systems), synchronously updates the layout, and then asynchronously awaits the dispatcher for the next iteration. That seems like it would be the most efficient implementation. It avoids the extra costs of introducing concurrency with the timers and the complications of having to define a reactive query in order to avoid over-updating the UI. In other words, by running the (async) loop on the UI thread, it will update as fast as possible (with respect to any limiting factors that you include) to give the smoothest possible visual experience without wasting system resources. Does that make sense?

Regardless, here's the problem with the query concept: perhaps it's not only about defining a query that generates notifications based on the least frequently updated value. What if the joystick is perfectly horizontal?

    X X X ... X X Y

Consider if X is at its fastest possible rate and Y is at its slowest possible rate. Y's rate would dictate the layout rate of both X and Y, so then what's the point of having the timer on X? Or am I missing something else?

- Dave

Hi,

Alternatively, does the joystick raise a "Changing" event? If so, then you could use that instead of my async loop recommendation. Though I doubt it exists, since the joystick would have to raise the event at an interval even while it's stationary.
I think the async loop is probably better anyway because it's tied to the capabilities of the UI, not some arbitrary device interval.

- Dave

We tried using one timer and changing X and Y by a factor greater than 1; however, the result was not satisfactory. The "rectangle" jumped from one location to another; it was not moving smoothly. That's why you need to move 1 by 1. My colleague suggested creating another timer that would fire an event with the 2 new values of X and Y. Maybe that works, but it's ugly enough.

Hi,

Well, my suggestion was to use 0 timers. :) In other words, do everything on the UI thread, asynchronously. I can provide a code example if you're not familiar with the technique.

- Dave

Hi,

Here's an example of what I meant by running everything on the UI thread, asynchronously. Note that async/await wasn't even necessary for this simple example. The animation seems quite smooth to me, even though I'm only using the UI thread without any timers. Furthermore, the example also uses the mouse to simulate a joystick on the UI thread (because I don't have an actual device), yet it doesn't seem to have any effect on the quality of the sprite animation. Please let me know if your experience is different.

Important: Make sure to run this program without attaching the debugger; otherwise, the animation will be choppy.

- Create a new WPF Application project in Visual Studio 2012. (I think it'll work in VS 2010 as well, but I haven't tested it.)
- Add a NuGet package reference for Rx-WPF.
- Open MainWindow.xaml and paste the code below.
- Open MainWindow.xaml.cs and paste the code below.
- Run the application without attaching the debugger.
- Drag the white circle with your mouse to simulate joystick movement. Watch the red sprite accelerate based on the angles of the joystick.
- Note: The mouse is captured while using the joystick and the cursor disappears; however, it's still bounded by the extent in which it can move on your screen. If you move the application window near the bottom of the screen, then you may not be able to move the joystick down as far as it can go. Therefore, you might not want to maximize the window.

MainWindow.xaml

    <Window x:
      <Grid>
        <Grid.RowDefinitions>
          <RowDefinition />
          <RowDefinition Height="Auto" />
          <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <Border Grid.
          <Canvas Name="spriteBox">
            <Ellipse Name="sprite" Fill="Red" Width="60" Height="60" Canvas.
          </Canvas>
        </Border>
        <Border Grid.
          <Canvas Name="joystickBox" Height="100" Width="100">
            <Ellipse Fill="Black" Stroke="Blue" Width="50" Height="50" Canvas.
            <Line Name="joystickShaft" X1="50" Y1="50" X2="50" Y2="50" Stroke="DarkGray" StrokeThickness="26" StrokeStartLineCap="Round" />
            <Ellipse Name="joystick" Fill="White" Stroke="Blue" Width="26" Height="26" Canvas.
          </Canvas>
        </Border>
        <TextBlock Grid.
          Drag the white dot to simulate joystick movement, which will accelerate the red sprite.
        </TextBlock>
      </Grid>
    </Window>

MainWindow.xaml.cs

        AnimateSprite();
    }

    private void AnimateSprite()
    {
        // Poll the joystick (on the UI thread).
        var offset = currentJoystickOffset;

        // Synchronously move the sprite (on the UI thread).
        UpdateLayout(offset);

        // Temporarily yield to allow the UI thread to process other messages.
        Dispatcher.InvokeAsync(AnimateSprite, DispatcherPriority.Background);
    }

    private void UpdateLayout(Point offset)
    {
        var x = Canvas.GetLeft(sprite);
        var y = Canvas.GetTop(sprite);

        const double slowFactor = .001d;

        x += offset.X * slowFactor;
        y += offset.Y * slowFactor;

        Canvas.SetLeft(sprite, Math.Max(Math.Min(x, spriteBox.ActualWidth - sprite.Width), 0));
        Canvas.SetTop(sprite, Math.Max(Math.Min(y, spriteBox.ActualHeight - sprite.Height), 0));
    }

    private IObservable<Point> GetJoystickOffsets()
    {
        var joystickTopLeft = new Point(Canvas.GetLeft(joystick), Canvas.GetTop(joystick));
        var bounds = new Size(joystickBox.Width, joystickBox.Height);
        var origin = new Point(bounds.Width / 2, bounds.Height / 2);
        var extent = new Point(bounds.Width * 2.2, bounds.Height * 2.2);

        var joystickGrabs = Observable.FromEventPattern<MouseButtonEventHandler, MouseButtonEventArgs>(
            eh => joystick.MouseDown += eh,
            eh => joystick.MouseDown -= eh);

        var joystickReleases = Observable.FromEventPattern<MouseButtonEventHandler, MouseButtonEventArgs>(
            eh => joystick.MouseUp += eh,
            eh => joystick.MouseUp -= eh);

        var joystickMouseMovements = Observable.FromEventPattern<MouseEventHandler, MouseEventArgs>(
            eh => joystick.MouseMove += eh,
            eh => joystick.MouseMove -= eh);

        var joystickChanges =
            (from _ in joystickGrabs.Do(_ =>
             {
                 joystick.Cursor = Cursors.None;
                 joystick.CaptureMouse();
             })
             from e in joystickMouseMovements
             let mouse = e.EventArgs.GetPosition(joystickBox)
             let offset = new Point(
                 Math.Max(Math.Min(mouse.X, extent.X), -extent.X + joystick.Width) + -origin.X,
                 Math.Max(Math.Min(mouse.Y, extent.Y), -extent.Y + joystick.Height) + -origin.Y)
             let x = joystickTopLeft.X + (offset.X / bounds.Width) * 20
             let y = joystickTopLeft.Y + (offset.Y / bounds.Height) * 20
             let location = new Point(x, y)
             select new { Offset = offset, Location = location })
            .TakeUntil(joystickReleases)
            .Concat(Observable.Return(new { Offset = default(Point), Location = default(Point) }))
            .Finally(() =>
            {
                joystick.ReleaseMouseCapture();
                joystick.Cursor = Cursors.Arrow;
                Canvas.SetLeft(joystick, joystickTopLeft.X);
                Canvas.SetTop(joystick, joystickTopLeft.Y);
                joystickShaft.X2 = origin.X;
                joystickShaft.Y2 = origin.Y;
            })
            .Repeat();

        return joystickChanges
            .Do(data =>
            {
                Debug.WriteLine(data);
                Canvas.SetLeft(joystick, data.Location.X);
                Canvas.SetTop(joystick, data.Location.Y);
                joystickShaft.X2 = data.Location.X + joystick.Width / 2;
                joystickShaft.Y2 = data.Location.Y + joystick.Height / 2;
            })
            .Select(data => data.Offset);
        }
    }
    }

- Dave

Hi,

You'll probably notice that the CPU usage of my example program is relatively high, even when it's idle. That's because it's continuously running the update loop as fast as possible. This gives the smoothest animation possible because it uses the highest possible frame rate and adjusts it dynamically based on real-time UI latency. Theoretically, you could reduce CPU usage to 0 while the sprite is idle by stopping the loop when the joystick is not in use; however, CPU usage will still remain high while using the joystick.

Alternatively, the frame rate can be controlled to greatly reduce CPU usage and still produce a smooth-enough animation. Furthermore, it will help to ensure consistency across heterogeneous systems. It can be done simply by using a single DispatcherTimer instead of recursion, and instead of other timers that introduce unnecessary concurrency. The interval of course depends upon how much work the application does during each update: the longer the updates take, the longer the interval must be to accommodate them, and thus the slower the frame rate. My example program is very simple, so I can use a very small timer interval to get a very high frame rate.

The following example replaces MainWindow.xaml.cs in my previous post, though I've omitted GetJoystickOffsets for brevity since it hasn't changed.
I've set the frame rate to 10ms, which greatly reduces CPU usage, yet the animation still appears to be smooth.

Important: Run the program without attaching the debugger; otherwise, the animation will be choppy.

        var timer = new DispatcherTimer(DispatcherPriority.Background, Dispatcher);
        timer.Interval = TimeSpan.FromMilliseconds(10);
        timer.Tick += (_, __) => AnimateSprite();
        timer.Start();
    }

    private void AnimateSprite()
    {
        // Poll the joystick (on the UI thread).
        var offset = currentJoystickOffset;

        // Synchronously move the sprite (on the UI thread).
        UpdateLayout(offset);
    }

    private void UpdateLayout(Point offset)
    {
        var x = Canvas.GetLeft(sprite);
        var y = Canvas.GetTop(sprite);

        const double slowFactor = .05d;

        x += offset.X * slowFactor;
        y += offset.Y * slowFactor;

        Canvas.SetLeft(sprite, Math.Max(Math.Min(x, spriteBox.ActualWidth - sprite.Width), 0));
        Canvas.SetTop(sprite, Math.Max(Math.Min(y, spriteBox.ActualHeight - sprite.Height), 0));
    }
    }
    }

- Dave

Thanks man. Awesome examples!

Hi,

I still need some more help; I'm back on this issue. Unfortunately, I can't share the solution/project — it would be much easier to explain if you could see it in action — so I'll share some code and some screenshots.

The problem is that I can't use the DispatcherTimer on the UI thread. I need to use the Timer class, because I'm not actually moving another control in the UI with the Joystick as your example does. What my application does with the values of X and Y is up to the developer. In this particular case, we are using X and Y in the ViewModel, which passes them to the Orchestrator class, which passes them to the MixerAdapter class. So, ultimately, X and Y will be used in the following code:

    public void ConfigureScenes(IEnumerable<MixerScene> scenes)
    {
        //omitted...
        foreach (var scene in scenes.OrderByDescending(s => s.Target))
        {
            //omitted...
            var sx = scene.Source.PercentageLeft + scene.VirtualCamera.X / 100;
            var sy = scene.Source.PercentageTop + scene.VirtualCamera.Y / 100;
            //omitted...
        }
    }

You see scene.VirtualCamera.X and scene.VirtualCamera.Y? Those are the values of the Joystick. Here is the Joystick:

    public class Joystick : Control
    {
        private Thumb thumb;
        private const double Radius = 50;
        private Point center;
        private Timer timer;
        private Vector currentDistance;
        private double slowFactor = 0.01d;

        public Joystick()
        {
            currentDistance = new Vector();
            timer = new Timer(10);
            //timer.Interval = TimeSpan.FromMilliseconds(10);
            //timer.Tick += Tick;
            timer.Elapsed += Elapsed;
        }

        static Joystick()
        {
            DefaultStyleKeyProperty.OverrideMetadata(typeof(Joystick),
                new FrameworkPropertyMetadata(typeof(Joystick)));
        }

        public static readonly DependencyProperty PositionProperty =
            DependencyProperty.Register("Position", typeof(Point), typeof(Joystick), new PropertyMetadata());

        public Point Position
        {
            get { return (Point)GetValue(PositionProperty); }
            set { SetValue(PositionProperty, value); }
        }

        public override void OnApplyTemplate()
        {
            base.OnApplyTemplate();
            thumb = GetTemplateChild("PART_thumb") as Thumb;
            if (thumb != null)
                Initialize();
        }

        private void Initialize()
        {
            var initialLeft = Canvas.GetLeft(thumb);
            var initialTop = Canvas.GetTop(thumb);
            center = new Point(initialLeft, initialTop);

            GetJoystickOffsets().Subscribe(distance => currentDistance = distance);

            //var timer = new DispatcherTimer(DispatcherPriority.Background, Dispatcher);
            //timer.Interval = TimeSpan.FromMilliseconds(10);
            //timer.Tick += Tick;
            //timer.Start();
        }

        private void Elapsed(object sender, ElapsedEventArgs e)
        {
            Debug.WriteLine("X = " + currentDistance.X * slowFactor + " Y = " + currentDistance.Y * slowFactor);
            //Position = new Point(currentDistance.X * slowFactor, currentDistance.Y * slowFactor);
            Dispatcher.BeginInvoke((Action)(() =>
            {
                Position = new Point(currentDistance.X * slowFactor, currentDistance.Y * slowFactor);
            }));
        }

        private IObservable<Vector> GetJoystickOffsets()
        {
            var joystickGrabs = Observable.FromEventPattern<DragStartedEventHandler, DragStartedEventArgs>(
                eh => thumb.DragStarted += eh,
                eh => thumb.DragStarted -= eh);

            var joystickReleases = Observable.FromEventPattern<DragCompletedEventHandler, DragCompletedEventArgs>(
                eh => thumb.DragCompleted += eh,
                eh => thumb.DragCompleted -= eh);

            var joystickMouseMovements = Observable.FromEventPattern<DragDeltaEventHandler, DragDeltaEventArgs>(
                eh => thumb.DragDelta += eh,
                eh => thumb.DragDelta -= eh);

            var joystickChanges =
                (from _ in joystickGrabs.Do(_ => { timer.Start(); })
                 from e in joystickMouseMovements
                 let newLeft = Canvas.GetLeft(thumb) + e.EventArgs.HorizontalChange
                 let newTop = Canvas.GetTop(thumb) + e.EventArgs.VerticalChange
                 let newPoint = new Point(newLeft, newTop)
                 let location = IsInsideBoundaries(newPoint) ? newPoint : GetNearestPointOnCircle(newPoint)
                 let distance = newPoint - center
                 select new { Distance = distance, Location = location })
                .TakeUntil(joystickReleases)
                .Concat(Observable.Return(new { Distance = default(Vector), Location = default(Point) }))
                .Finally(() =>
                {
                    MoveTo(center);
                    timer.Stop();
                })
                .Repeat();

            return joystickChanges
                .Do(data => { MoveTo(data.Location); })
                .Select(data => data.Distance);
        }

        private void MoveTo(Point point)
        {
            Canvas.SetLeft(thumb, point.X);
            Canvas.SetTop(thumb, point.Y);
        }

        private bool IsInsideBoundaries(Point point)
        {
            var powX = Math.Pow(point.X - center.X, 2);
            var powY = Math.Pow(point.Y - center.Y, 2);
            var powRadius = Math.Pow(Radius, 2);
            return powX + powY < powRadius;
        }

        private Point GetNearestPointOnCircle(Point point)
        {
            // V = (P - C)
            // NearestPoint = C + V / |V| * R;
            // where P is the point, C is the center, R is the radius, and |V| is the length of V.
            var vPoint = point - center;
            var distance = Math.Sqrt(Math.Pow(vPoint.X, 2) + Math.Pow(vPoint.Y, 2));
            var nearestX = center.X + vPoint.X / distance * Radius;
            var nearestY = center.Y + vPoint.Y / distance * Radius;
            return new Point(nearestX, nearestY);
        }
    }

You probably can't tell from the code, but my Joystick does not have a rectangular box (which would be the bounds). What I did was set a Radius of 50 and, on top of that, calculate the bounds of the Thumb. So here is a picture of it:

With the current code I'm showing here, the UI freezes forever when I try to move the Thumb. Please help if you can.
https://social.msdn.microsoft.com/Forums/en-US/2f404116-7d81-4542-9a55-21733da8d46d/call-method-only-when-two-properties-changed
The mixin style of importing, in which classes and traits are defined within traits, as seen in scala.reflect.Universe, ScalaTest, and other Scala styles, seems to be infectious. By that, I mean that once you define something in a trait to be mixed in, then to produce another reusable module that calls that thing, you must define another trait, and so must anyone using your module, and so on and so forth. You effectively become "trapped in the cake". However, we can use type parameters that represent singleton types to write functions that are polymorphic over these "cakes", without being defined as part of them or mixed in themselves. For example, you can use this to write functions that operate on elements of a reflection universe, without necessarily passing that universe around all over the place. Well, for the most part. Let's see how far this goes.

Let's set aside the heavyweight real-world examples I mentioned above in favor of a small example. Then, we should be able to explore the possibilities in this simpler space.

    final case class LittleUniverse() {
      val haystack: Haystack = Haystack()

      final case class Haystack() {
        def init: Needle = Needle()
        def iter(n: Needle): Needle = n
      }

      final case class Needle()
    }

For brevity, I've defined member classes, but this article equally applies if you are using abstract types instead, as any functional programmer of pure, virtuous heart ought to!

Suppose we have a universe.

    scala> val lu: LittleUniverse = LittleUniverse()
    lu: LittleUniverse = LittleUniverse()

The thing that Scala does for us is not let Haystacks and Needles from one universe be confused with those from another.

    val anotherU = LittleUniverse()

    scala> lu.haystack.iter(anotherU.haystack.init)
    <console>:14: error: type mismatch;
     found   : anotherU.Needle
     required: lu.Needle
           lu.haystack.iter(anotherU.haystack.init)
                                              ^

The meaning of this error is "you can't use one universe's Haystack to iter a Needle from another universe".
This doesn't look very important given the above code, but it's a real boon in more complex scenarios. You can set up a lot of interdependent abstract invariants, verify them all, and have the whole set represented with the "index" formed by the singleton type, here lu.type or anotherU.type.

Refactoring in macro-writing style seems to be based upon passing the universe around everywhere. We can do that.

    def twoInits(u: LittleUniverse): (u.Needle, u.Needle) =
      (u.haystack.init, u.haystack.init)

    def stepTwice(u: LittleUniverse)(n: u.Needle): u.Needle =
      u.haystack.iter(u.haystack.iter(n))

The most important feature we're reaching for with these fancy dependent method types, and the one that we have to keep reaching for if we want to write sane functions outside the cake, is preserving the singleton type index.

    scala> twoInits(lu)
    res3: (lu.Needle, lu.Needle) = (Needle(),Needle())

    scala> stepTwice(anotherU)(anotherU.haystack.init)
    res4: anotherU.Needle = Needle()

These values are ready for continued itering, or whatever else you've come up with, in the confines of their respective universes. That's because they've "remembered" where they came from.

By contrast, consider a simple replacement of the path-dependencies with a type projection.

    def brokenTwoInits(u: LittleUniverse)
        : (LittleUniverse#Needle, LittleUniverse#Needle) =
      (u.haystack.init, u.haystack.init)

    scala> val bti = brokenTwoInits(lu)
    bti: (LittleUniverse#Needle, LittleUniverse#Needle) = (Needle(),Needle())

That seems to be okay, until it's time to actually use the result.

    scala> lu.haystack.iter(bti._1)
    <console>:14: error: type mismatch;
     found   : LittleUniverse#Needle
     required: lu.Needle
           lu.haystack.iter(bti._1)
                                ^

The return type of brokenTwoInits "forgot" the index, lu.type. When we pass a LittleUniverse to the above functions, we're also, in a sense, passing in a constraint on the singleton type created by the argument variable.
That's how we know that the returned u.Needle is a perfectly acceptable lu.Needle in the caller scope, when we pass lu as the universe. However, as the contents of a universe become more complex, there are many more interactions that need not involve a universe at all, at least not directly.

    def twoInitsFromAHaystack[U <: LittleUniverse](
        h: U#Haystack): (U#Needle, U#Needle) =
      (h.init, h.init)

    scala> val tifah = twoInitsFromAHaystack[lu.type](lu.haystack)
    tifah: (lu.Needle, lu.Needle) = (Needle(),Needle())

Since we didn't pass in lu, how did it know that the returned Needles were lu.Needles?

- lu.haystack is lu.Haystack, which is shorthand for lu.type#Haystack.
- U = lu.type, and our argument meets the resulting requirement for a lu.type#Haystack (after expanding U).
- h.init is u.Needle forSome {val u: U}. We use an existential because the relevant variable (and its singleton type) is not in scope.
- That widens to U#Needle, satisfying the expected return type.

This seems like a more complicated way of doing things, but it's very freeing: by not being forced to necessarily pass the universe around everywhere, you've managed to escape the cake's clutches much more thoroughly. You can also write syntax enrichments on various members of the universe that don't need to talk about the universe's value, just its singleton type. Unless, you know, the index appears in contravariant position.

stepTwice

One test of how well we've managed to escape the cake is to be able to write enrichments that deal with the universe. This is a little tricky, but quite doable if you have the universe's value. With the advent of implicit class, this became a little easier to do wrongly, but it's a good start.

    implicit class NonWorkingStepTwice(val u: LittleUniverse) {
      def stepTwiceOops(n: u.Needle): u.Needle =
        u.haystack.iter(u.haystack.iter(n))
    }

That compiles okay, but seemingly can't actually be used!
    scala> lu stepTwiceOops lu.haystack.init
    <console>:15: error: type mismatch;
     found   : lu.Needle
     required: _1.u.Needle where val _1: NonWorkingStepTwice
           lu stepTwiceOops lu.haystack.init
                                        ^

There's a hint in the fact that we had to write val u, not u, nor private val u, in order for the implicit class itself to compile. This signature tells us that there's an argument of type LittleUniverse, and a member u: LittleUniverse. However, whereas with the function examples above we [and the compiler] could trust that they're one and the same, we have no such guarantee here. So we don't know that an lu.Needle is a u.Needle. We didn't get far enough, but we don't know that a u.Needle is an lu.Needle, either.

Instead, we have to expand a little bit, and take advantage of a very interesting, if obscure, element of the type equivalence rules in the Scala language.

    class WorkingStepTwice[U <: LittleUniverse](val u: U) {
      def stepTwice(n: u.Needle): u.Needle =
        u.haystack.iter(u.haystack.iter(n))
    }

    implicit def WorkingStepTwice[U <: LittleUniverse](u: U)
        : WorkingStepTwice[u.type] =
      new WorkingStepTwice(u)

Unfortunately, the ritual of expanding the implicit class shorthand is absolutely necessary; the implicit class won't generate the dependent-method-typed implicit conversion we need. Now we can get the proof we need.

    scala> lu stepTwice lu.haystack.init
    res7: _1.u.Needle forSome { val _1: WorkingStepTwice[lu.type] } = Needle()
    // that's a little weird, but reduces to what we need

    scala> res7: lu.Needle
    res8: lu.Needle = Needle()

How does this work?

- The conversion applies to lu, giving us a conv: WorkingStepTwice[lu.type].
- conv.u: lu.type, by expansion of U.
- Therefore conv.u.type <: lu.type.

The next part is worth taking in two parts. It may be worth having §3.5.2 "Conformance" of the language spec open for reference. First, let's consider the return type (a covariant position), which is simpler.

- The return type is conv.u.type#Needle.
- The # projection is covariant in its left side, so because conv.u.type <: lu.type (see above), the return type widens to lu.type#Needle.
- lu.Needle is a shorthand for that.

It was far longer until I realized how the argument type works. You'll want to scroll up in the SLS a bit, to the "Equivalence" section. Keep in mind that we are trying to widen lu.Needle to conv.u.Needle, which is the reverse of what we did for the return type.

- lu.Needle expands to lu.type#Needle.
- The spec says that if a path p has a singleton type q.type, then p.type ≡ q.type. From this, we can derive that conv.u.type = lu.type. This is a stronger conclusion than we reached above!
- We rewrite the left side of # using the equivalence, giving us conv.u.type#Needle.

I cannot characterize this feature of the type system as anything other than "really freaky" when you first encounter it. It seems like an odd corner case. Normally, when you write val x: T, then x.type is a strict subtype of T, and you can count on that, but this carves out an exception to that rule. It is sound, though, and an absolutely essential feature!

    val sameLu: lu.type = lu

    scala> sameLu.haystack.iter(lu.haystack.init)
    res9: sameLu.Needle = Needle()

Without this rule, even though we have given it the most specific type possible, sameLu couldn't be a true substitute for lu in all scenarios. That means that in order to make use of singleton type indices, we would be forever beholden to the variable we initially stored the value in. I think this would be extremely inconvenient, structurally, in almost all useful programs. With the rule in place, we can fully relate the lu and conv.u variables, letting us reorganize how we talk about universes and values indexed by their singleton types in many ways.

Let's try to hide the universe. We don't need it, after all. We can't refer to u in the method signature anymore, so let's try the same conversion we used with twoInitsFromAHaystack. We already have the U type parameter, after all.

    class CleanerStepTwice[U <: LittleUniverse](private val u: U) {
      def stepTwiceLively(n: U#Needle): U#Needle = ???
    }

    implicit def CleanerStepTwice[U <: LittleUniverse](u: U)
        : CleanerStepTwice[u.type] =
      new CleanerStepTwice(u)

This has the proper signature, and it's cleaner, since we don't expose the unused-at-runtime u variable anymore. We could refine a little further and replace it with a U#Haystack, just as with twoInitsFromAHaystack. This gives us the same interface, with all the index preservation we need. Even better, it infers a nicer return type.

    scala> def trial = lu stepTwiceLively lu.haystack.init
    trial: lu.Needle

Now, let's turn to implementation.

    class OnceMoreStepTwice[U <: LittleUniverse](u: U) {
      def stepTwiceFinally(n: U#Needle): U#Needle =
        u.haystack.iter(u.haystack.iter(n))
    }

    <console>:18: error: type mismatch;
     found   : U#Needle
     required: OnceMoreStepTwice.this.u.Needle
           u.haystack.iter(u.haystack.iter(n))
                                           ^

This is the last part of the escape! If this worked, we could fully erase the LittleUniverse from most code, relying on the pure type-level index to prove enough of its existence! So it's a little frustrating that it doesn't quite work. Let's break it down. First, the return type is fine.

- u: U, so u.type <: U. (This is true, and useful, in the scope of u, which is now invisible to the caller.)
- iter returns a u.type#Needle.
- u is not in scope for the caller, so if we returned this as is, it would effectively widen to the existentially bound u.type#Needle forSome {val u: U}. But the same logic in the next step would apply to that type.
- By the left-side covariance of #, u.type#Needle widens to U#Needle.

Pretty simple, by the standards of what we've seen so far. But things break down when we try to call iter(n). Keep in mind that n: U#Needle and the expected type is u.Needle. Specifically: since we don't know in the implementation that U is a singleton type, we can't use the "singleton type equivalence" rule on it! But suppose that we could; that is, suppose that we could constrain U to be a singleton type.

- n: U#Needle.
- u: U, and u is stable, so u.type = U.
- Rewriting the left side of #, we get u.type#Needle.
That is, u.Needle.

If we are unable to constrain U in this way, though, we are restricted to places where U occurs in covariant position when using cake-extracted APIs. We can invoke functions like init, because they only have the singleton index occurring in covariant position. Invoking functions like iter, where the index occurs in contravariant or invariant position, requires being able to add this constraint, so that we can use singleton equivalence directly on the type variable U. This is quite a bit trickier. We have the same problem with the function version.

def stepTwiceHaystack[U <: LittleUniverse](
    h: U#Haystack, n: U#Needle): U#Needle =
  h.iter(h.iter(n))

<console>:18: error: type mismatch;
 found   : U#Needle
 required: _1.Needle where val _1: U
         h.iter(h.iter(n))
                       ^

Let's walk through it one more time.

- n: U#Needle.
- h.iter expects a u.type#Needle for all val u: U.
- Suppose we could constrain U to be a singleton type: then u.type = U, by singleton equivalence.
- By left side equivalence of #, h.iter expects a U#Needle.

The existential variable complicates things, but the rule is sound. As a workaround, it is commonly suggested to extract the member types in question into separate type variables. This works in some cases, but let's see how it goes in this one.

def stepTwiceExUnim[N, U <: LittleUniverse{type Needle = N}](
    h: U#Haystack, n: N): N = ???

This looks a lot weirder, but should be able to return the right type.

scala> def trial2 = stepTwiceExUnim[lu.Needle, lu.type](lu.haystack, lu.haystack.init)
trial2: lu.Needle

But this situation is complex enough for the technique to not work.
def stepTwiceEx[N, U <: LittleUniverse{type Needle = N}](
    h: U#Haystack, n: N): N =
  h.iter(h.iter(n))

<console>:18: error: type mismatch;
 found   : N
 required: _1.Needle where val _1: U
         h.iter(h.iter(n))
                       ^

Instead, we need to index Haystack directly with the Needle type, that is, add a type parameter to Haystack so that its Needle arguments can be talked about completely independently of the LittleUniverse, and then to write h: U#Haystack[N] above. Essentially, this means that any time a type talks about another type in a Universe, you need another type parameter to redeclare a little bit of the relationships between types in the universe. The problem with this is that we already declared those relationships by declaring the universe! All of the non-redundant information is represented in the singleton type index. So even where the above type-refinement technique works (and it does in many cases), it's still redeclaring things that ought to be derivable from the "mere" fact that U is a singleton type.

(The following is based on enlightening commentary by Daniel Urban on an earlier draft.)

Let's examine the underlying error in stepTwiceEx more directly.

scala> def fetchIter[U <: LittleUniverse](
         h: U#Haystack): U#Needle => U#Needle = h.iter
<console>:14: error: type mismatch;
 found   : _1.type(in method fetchIter)#Needle where type _1.type(in method fetchIter) <: U with Singleton
 required: _1.type(in value $anonfun)#Needle where type _1.type(in value $anonfun) <: U with Singleton
         h: U#Haystack): U#Needle => U#Needle = h.iter
                                                  ^

It's a good thing that this doesn't compile. If it did, we could do

fetchIter[LittleUniverse](lu.haystack)(anotherU.haystack.init)

which is unsound. §3.2.1 "Singleton Types" of the specification mentions this Singleton, which is in a way related to singleton types.

A stable type is either a singleton type or a type which is declared to be a subtype of trait scala.Singleton.

Adding with Singleton to the upper bound on U causes fetchIter to compile!
This is sound, because we are protected from the above problem with the original fetchIter.

def fetchIter[U <: LittleUniverse with Singleton](
    h: U#Haystack): U#Needle => U#Needle = h.iter

scala> fetchIter[lu.type](lu.haystack)
res3: lu.Needle => lu.Needle = $$Lambda$1397/1159581520@683e7892

scala> fetchIter[LittleUniverse](lu.haystack)
<console>:16: error: type arguments [LittleUniverse] do not conform to method fetchIter's type parameter bounds [U <: LittleUniverse with Singleton]
       fetchIter[LittleUniverse](lu.haystack)
                ^

Let's walk through the logic for fetchIter. The expression h.iter has type u.Needle => u.Needle for some val u: U, and our goal type is U#Needle => U#Needle. So we have two subgoals: prove u.Needle <: U#Needle for the covariant position (after =>), and U#Needle <: u.Needle for the contravariant position (before =>).

First, covariant:

1. u: U, so u.type <: U.
2. Since # is covariant, #1 implies u.type#Needle <: U#Needle.
3. So u.Needle <: U#Needle, which is the goal.

Secondly, contravariant. We're going to have to make a best guess here, because it's not entirely clear to me what's going on.

1. u has a singleton type U (if we define "has a singleton type" as "having a type X such that X <: Singleton"), so u.type = U by the singleton equivalence.
2. In particular, U <: u.type.
3. Since # is covariant, #2 implies that U#Needle <: u.type#Needle.
4. So U#Needle <: u.Needle, which is the goal.

I don't quite understand this, because U doesn't seem to meet the requirements for "singleton type", according to the definition of singleton types. However, I'm fairly sure it's sound, since type stability seems to be the property that lets us avoid the universe-mixing unsoundness. Unfortunately, it only seems to work with existential vals; we seem to be out of luck with vals that the compiler can still see.

// works fine!
def stepTwiceSingly[U <: LittleUniverse with Singleton](
    h: U#Haystack, n: U#Needle): U#Needle = {
  h.iter(h.iter(n))
}

// but alas, this form doesn't
class StepTwiceSingly[U <: LittleUniverse with Singleton](u: U) {
  def stepTwiceSingly(n: U#Needle): U#Needle =
    u.haystack.iter(u.haystack.iter(n))
}

<console>:15: error: type mismatch;
 found   : U#Needle
 required: StepTwiceSingly.this.u.Needle
         u.haystack.iter(u.haystack.iter(n))
                                         ^

We can work around this by having the second form invoke the first with the Haystack, thus "existentializing" the universe. I imagine that most, albeit not all, cakes can successfully follow this strategy. So, finally, we're almost out of the cake.

This article was tested with Scala 2.12.1.

Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.
This article shows how you can use the python-cloudfiles Python binding to interact with your Cloud Files account. It is designed as a companion to the Cloud Files Curl cookbook article: it shows how to use the Python binding to do all the same things that the curl article shows how to do with curl. Python is a scripting language. You can use it to write programs that automate or simplify tasks you would otherwise have to carry out using a GUI or by typing in a series of commands. The Python binding for the Cloud Files API makes it easier for you to use the API with Python. It does this by providing a series of Python objects that you can use to manipulate the contents of your Cloud Files account. For instructions on installing the python-cloudfiles binding, please take a look at our Python installation guide. This article assumes you have a certain amount of knowledge regarding the use of Python; however, it has been broken down such that most people should be able to follow along. Although this guide is not designed to teach you how to use Python, it is worth covering some basics to make this article a bit more accessible.

To create a Python script, first open a text file, enter the code you want to run, then save the file, e.g. "my_script.py". To run the script at the command line, type "python my_script.py" then hit Enter.

Load Binding: Now that you know how to create a script or start the interactive Python prompt, we can begin. The first thing you need to do in your script or the interpreter is to load the code provided by the binding. This is done with the import statement.

import cloudfiles

Create Connection: Now that we have made the code available to use in our scripts, we can make use of it. Let's start by making a connection object. This object holds all the information required to make an authenticated request to the Cloud Files API.

Note: By default the connection object will authenticate against the US API.
If you have a UK Cloud Files account, an extra parameter is required.

#For US accounts
conn = cloudfiles.Connection('myusername', 'myapikey')

#For UK accounts
conn = cloudfiles.Connection('myusername', 'myapikey', authurl=cloudfiles.consts.uk_authurl)

Using this connection object, we can now start to interact with our Cloud Files account.

Note: if you are using the binding on a Cloud Server located in the same data centre as your Cloud Files account, you can specify the parameter servicenet=True when you create the connection, so that the traffic uses the data centre's internal network. Traffic over ServiceNet does not incur bandwidth charges.

Listing Containers: The first thing we're going to try is to list the containers on your account. One thing to bear in mind with the binding is that it can represent the data in multiple ways. One way is to produce a list of strings, each one the name of a different container. Another representation is as a list of container objects that can then be used to manipulate individual containers.

Here conts is a list of four strings, each one the name of a container.

conts = conn.list_containers()
print conts
['backup', 'cloudservers', 'images', 'static']

Here conts is a container item result set which holds container objects; these container objects hold information about the respective container and are used to manipulate the container.

conts = conn.get_all_containers()
print conts
ContainerResults: 4 containers

It is also possible to list the containers with more details, including information such as the container size and the number of objects within the container.
conts = conn.list_containers_info()
print conts
[{u'count': 1, u'bytes': 54770176, u'name': u'backup'},
 {u'count': 0, u'bytes': 0, u'name': u'cloudservers'},
 {u'count': 3, u'bytes': 174686, u'name': u'images'},
 {u'count': 0, u'bytes': 0, u'name': u'static'}]

Creating a Container: Creating a container is nice and simple; once you have a name for the container you just need to pass it as an argument to the create_container method. The object returned is a container object representing the new container you have just created.

cont = conn.create_container('newcontainer')
print cont
newcontainer

Deleting a Container: A container may be deleted by issuing a DELETE request, but keep in mind that only empty containers may be deleted. If a container currently has objects within it, then those objects must first be deleted.

conn.delete_container('newcontainer')
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/lib/python2.7/site-packages/python-cloudfiles/cloudfiles/connection.py", line 275, in delete_container
    raise ContainerNotEmpty(container_name)
cloudfiles.errors.ContainerNotEmpty: Cannot delete non-empty Container newcontainer

When a container is successfully deleted, nothing is returned.

conn.delete_container('newcontainer')

Once you have a container object, it is possible to list the objects within that container. By default only the first 10,000 objects are returned.
The following lists the objects of a container called images:

cont = conn.get_container('images')
print cont.list_objects()
['banner.jpg', 'cloud.png', 'rackspace.jpg']

It is also possible to get a more detailed object listing to view additional information about the objects, such as size and content-type:

print cont.list_objects_info()
[{u'bytes': 26326, u'last_modified': u'2012-02-01T12:47:22.911070', u'hash': u'9ed61f72b1f3777ff01b7ce128a67244', u'name': u'banner.jpg', u'content_type': u'image/jpeg'},
 {u'bytes': 106206, u'last_modified': u'2012-02-01T12:47:24.130690', u'hash': u'8b31a0109b2998a0e42efa29859bde24', u'name': u'cloud.png', u'content_type': u'image/png'},
 {u'bytes': 42154, u'last_modified': u'2012-02-01T12:47:25.235390', u'hash': u'cbf00714f4dcc0724ddbe4028f76814e', u'name': u'rackspace.jpg', u'content_type': u'image/jpeg'}]

It is possible to list more than 10,000 objects, or to filter for specific objects, by passing some additional parameter options. Please refer to the List Objects API documentation for more detailed information on the available parameters and how they work, and take a look at the binding code to see how to use them.

Downloading an Object: To download an object we have to get it and then save it. The following gets the object cloud.png and saves it in the current directory on your computer with the filename cloud.png.

obj = cont.get_object('cloud.png')
obj.save_to_filename('cloud.png')

Uploading an Object: To upload a new object to a container, you need to create the object and then load the file. The below example will upload a file called style.css to a container called static with a content-type of text/css.

Note: Setting the correct content-type is important depending on where the object will be used. The binding will attempt to guess the content-type of any object if it is not explicitly set.
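The guessing mentioned in that note is typically extension-based. A minimal sketch of how such a guess can be made with only Python's standard-library mimetypes module (the actual heuristics inside python-cloudfiles may differ, and guess_content_type here is a made-up helper name, not part of the binding):

```python
import mimetypes

def guess_content_type(object_name, default="application/octet-stream"):
    """Guess a Content-Type from the object name's extension,
    falling back to a generic binary type when nothing matches."""
    guessed, _encoding = mimetypes.guess_type(object_name)
    return guessed or default

print(guess_content_type("css/style.css"))   # text/css
print(guess_content_type("rackspace.jpg"))   # image/jpeg
print(guess_content_type("backup.unknownext"))
```

Setting obj.content_type explicitly, as in the examples above, is still the safer choice whenever you know the type in advance.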
cont = conn.get_container('static')
obj = cont.create_object('style.css')
#if the content_type is not specified the binding will attempt to guess the correct type
obj.content_type = 'text/css'
obj.load_from_filename('style.css')

It is also possible to upload an object to make it appear as if the object is part of a directory structure. This is accomplished by naming the object with the full path, including the directory components. This is useful when publishing containers to the CDN, as the resulting CDN URL will appear as if it was part of a directory structure and is generally more organised. For example, repeating the above, but instead naming the style.css object so that it becomes css/style.css:

obj = container.create_object('css/style.css')
#if the content_type is not specified the binding will attempt to guess the correct type
obj.content_type = 'text/css'
obj.load_from_filename('style.css')

Deleting an Object: Deleting an object requires you to run the container's delete object method and specify the object to be deleted.

cont.delete_object('style.css')

Performing a Server Side Copy: By doing a server side copy it is possible to copy an existing object to a new storage location without having to go through the expense of performing the data upload from the client side. By combining server side copies with object deletion, it is possible to do object moves and rename operations by first copying the object to the new location or name and then performing a delete on the original. Copying is not restricted to a single container. The following example will copy the rackspace.jpg file to a new name of rackspace.jpeg in the images container:

obj = cont.get_object('rackspace.jpg')
obj.copy_to('images', 'rackspace.jpeg')

Updating Object Headers: It is possible to update certain HTTP headers corresponding to an object with the sync_metadata method. The most useful usage of this functionality is to correct an incorrectly set Content-Type header.
Other headers that may also be set include the CORS and Content-Disposition headers. The following is an example of how to update the Content-Type of an image to image/jpeg:

obj = cont.get_object('rackspace.jpg')
obj.headers['Content-Type'] = 'image/jpeg'
obj.sync_metadata()

The binding also includes the ability to modify parameters relating to the CDN.

Listing CDN Enabled Containers: The connection object has the relevant method for listing the public containers:

conn.list_public_containers()
['files', 'static']

CDN Enabling a Container: Before a container can have CDN operations performed on it, the container needs to be CDN enabled. The following will enable and publish the images container:

cont = conn.get_container('images')
cont.make_public()

Viewing a Container's CDN Details: A container object has a number of attributes set which relate to the CDN. Viewable information includes whether the container is CDN enabled, whether CDN logging is turned on, and the currently set TTL, as well as the various CDN domain names that can be used to reference content within the container. The following shows how to print out the CDN details of the images container, and the responses:

print cont.is_public()
True
print cont.public_uri()
''
print cont.public_ssl_uri()
''
print cont.public_streaming_uri()
''
print cont.cdn_ttl
259200
print cont.cdn_log_retention
False

Updating a Container's CDN Attributes: The following shows how to update some of the attributes associated with a CDN container:

#To remove a container from the CDN
cont.make_private()
#To make a container publicly accessible
cont.make_public()
#To change the TTL on a container or make it public with a specific TTL
cont.make_public(ttl=172800)
#To enable CDN log retention
cont.log_retention(log_retention=True)

Purging an Object: It is possible to force content removal from the CDN by issuing a CDN purge request. The purge request can only be done on a per object basis.
The purge request will schedule a job with the CDN to have the indicated content removed from all member nodes of the CDN, and it can take some time to complete. It is possible to pass through a comma-separated list of e-mails so that a notification will be sent when the purge is completed. At the time of writing, only 25 purges may be done per account per day. If a container level purge needs to be performed, please contact support via a ticket naming the container to be purged and we can do this on your behalf. If a notification e-mail is required, please also let us know the desired e-mail(s) to use. The following will perform a CDN purge of the rackspace.png file from the images container with a notification e-mail set to user@example.com:

cont = conn.get_container('images')
obj = cont.get_object('rackspace.png')
obj.purge_from_cdn(email='user@example.com')

Summary: This article is meant to serve as a quick reference to the more common Cloud Files API requests and how to make them with the Python binding for Cloud Files. For more information on other features that the Cloud Files API offers, please refer to our Cloud Files API documentation. For more information on the Python binding, take a look at the code on GitHub.

© 2011-2013 Rackspace US, Inc. Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
Welcome back! In this section, we'll add a scoring system.

The plan… For our scoring system, we'll use the bubble image we have in the art folder. We'll set up our game so that the player needs to pop bubbles for points. To display the score, you'll need UI components. We'll use the Unity3D UI system to accomplish this.

UGUI – The Unity3D GUI system The Unity GUI system will allow us to easily draw text and graphics over our game. Getting started with the GUI system is pretty simple; we just need to add a GUI game object.

Add a Text object by selecting GameObject->UI->Text

In your Hierarchy, you should see TWO new GameObjects. There's the Text that you added and a Canvas that is its parent. The Canvas is the root of your UI, and all UI elements must be children of such a Canvas. Because we didn't already have a Canvas in our Scene, the editor automatically created one for us and added our new Text GameObject as a child. In your game view, you should see our new Text object in the bottom left.

Let's edit our Text object to say something more meaningful than "New Text". With the Text GameObject selected, look at the Text component in the Inspector and notice the "Text" field. Change it to say "Score: 0"

Anchoring Right now, our UI component isn't positioned correctly. We could drag it around until it looks close, but right now it's Anchored from the middle, so it would be at different areas depending on the screen size. What we want for this game is to anchor the score text to the top left of the screen. To accomplish this, we'll use the "Anchor Presets". On the transform for the Text, click the Anchor tool (the red cross with a white box in the middle; the mouse is over it in the screenshot). You'll be presented with this dialog. We want to select the top left point, but before you click it, notice the text at the top of the dialog. For this UI component, we want to set the position along with the anchor, so before you click the pivot, hold the Alt key.
You should now notice your Score Text has moved to the top left corner of the screen. Before we go too much further, let's re-name the Text to something useful. Change the name to "ScoreText"

Experiment Try playing with the text component for a few minutes. Adjust things like the font size, style, and color to get familiar with some of your options. If you increase the font over 26, you'll notice that it's no longer visible in your game view. This is because the text is too large to fit into the component. You can fix this by increasing the size of the text component.

Let's Code It's time to hook up some code to make our score counter work. The first thing we want to do is create a script to handle keeping score.

Create a new script named "ScoreKeeper"

Change the ScoreKeeper.cs file to look like this

using UnityEngine;
using UnityEngine.UI;

public class ScoreKeeper : MonoBehaviour
{
    private int _currentScore = 0;

    public void IncrementScore()
    {
        _currentScore++;
        Text scoreText = GetComponent<Text>();
        scoreText.text = "Score: " + _currentScore.ToString();
    }

    void Update()
    {
        IncrementScore();
    }
}

Attach the "ScoreKeeper" script to the ScoreText GameObject

Before you hit play, look at the script and see if you can guess what it's going to do.
…
…
…
…
…
…
Let's inspect the different parts of this script to see what's going on.

using UnityEngine.UI;

You've seen the "using" statements before. What they do is tell the engine that we want to "use" components from that namespace. In computing, a namespace is a set of symbols that are used to organize objects of various kinds, so that these objects may be referred to by name. In this instance, we're just telling Unity that we'll be "using" UI components in our script.

private int _currentScore = 0;

Here, we're defining a variable to hold our current score and setting its value to 0.
public void IncrementScore()
{
    _currentScore++;
    Text scoreText = GetComponent<Text>();
    scoreText.text = "Score: " + _currentScore.ToString();
}

The IncrementScore method does 3 things.

- Adds one to the score using the ++ operator. This could also be written as _currentScore = _currentScore + 1;
- Gets the Text Component and assigns it to a local variable named "scoreText"
- Sets the .text property of the "scoreText" object to have our new current score. ToString() gives us a text representation of a non-text object

The last bit of code we have is the Update method. All it's doing is calling IncrementScore. Because Update is called every frame, our IncrementScore method is called every frame, which in turn makes our score increase. In this instance, the faster our game is running, the faster our score will increase.

void Update()
{
    IncrementScore();
}

The Update method is really just implemented so we can see something working. For our game, we'll have a more complicated scoring system, so let's delete the Update method.

Change your ScoreKeeper script to look like this

using UnityEngine;
using UnityEngine.UI;

public class ScoreKeeper : MonoBehaviour
{
    private int _currentScore = 0;

    public void IncrementScore()
    {
        _currentScore++;
        Text scoreText = GetComponent<Text>();
        scoreText.text = "Score: " + _currentScore.ToString();
    }
}

Now that our code has changed, we need another way to call the IncrementScore method…

In comes the bubble Look to your art folder in the Project View. Drag a bubble from the Art folder into your scene. Now, add a CircleCollider2D component to the newly placed bubble. Check the IsTrigger box.

Let's Code We've got our bubble placed, but we really need to get some code in to make things work.
using UnityEngine;

public class Bubble : MonoBehaviour
{
    [SerializeField]
    private float _moveSpeed = 1f;

    // Update is called once per frame
    void Update()
    {
        transform.Translate(Vector3.up * Time.deltaTime * _moveSpeed);
        if (transform.position.y > 10)
        {
            Reset();
        }
    }

    void Reset()
    {
        transform.position = new Vector3(transform.position.x, -10, transform.position.z);
    }

    void OnTriggerEnter2D(Collider2D other)
    {
        if (OtherIsTheFish(other))
        {
            ScoreKeeper scoreKeeper = GameObject.FindObjectOfType<ScoreKeeper>();
            scoreKeeper.IncrementScore();
            Reset();
        }
    }

    bool OtherIsTheFish(Collider2D other)
    {
        return (other.GetComponent<Fish>() != null);
    }
}

Attach the bubble script to your bubble in the Hierarchy. Now try playing and see if you can catch the bubble. If you placed your bubble like I placed mine, it can't be caught. The reason for this is that we only ever change the Y position of the bubble, so it never moves past us. Instead of adding more code to the bubble, we've got another trick we'll be using.

Set the bubble's position to [2.5, -4, 0]. Now, make the bubble a child of the seaweed parent. In the Inspector, hit the Apply button so that all our seaweed parents get a bubble. Now give your game another play and enjoy your great work! If all went well, it should look a bit like this
On Wed, May 30, 2007 at 11:31:56AM +0200, Stepan Kasal wrote:
> I'm afraid there are some misunderstandings here. I'll try to
> make things more clear.
>
> Let's start with your very first mail:
> Until yesterday, the manual said:
>
> | -- Macro: AC_TYPE_INT8_T
> |     If `stdint.h' or `inttypes.h' defines the type `int8_t', define
> |     `HAVE_INT8_T'. Otherwise, define `int8_t' to ...
>
> As you can see, if configure had to define `int8_t', then the symbol
> `HAVE_INT8_T' is not defined.
>
> So your implementation was wrong, instead of
> +  if test "$ac_cv_c_int$1_t" != no; then
> +    AC_DEFINE_UNQUOTED([HAVE_INT$1_T], [1], [Define if int$i_t exists.])
> +  fi
>
> you should rather call AC_DEFINE only if "$ac_cv_c_int$1_t" is "yes".

Right - so my intended use of testing HAVE_INT8_T to see whether or not
int8_t could be used was flawed anyway.

[...]

> Consistently, when you followed my suggestion and used:
>
>   AC_TYPE_UINT8_T
>   AC_TYPE_SIZE_T
>   AC_CHECK_TYPES([uint8_t, size_t])
>
> then HAVE_UINT8_T would get defined only if `uint8_t' exists on the
> system, not if a substitute was defined by AC_TYPE_UINT8_T.
>
> AC_CHECK_TYPES([uint8_t]) does not actually perform a second check,
> it uses the findings of AC_TYPE_UINT8_T.
> Observe the output of the configure script:
> checking for uint8_t... no
> checking for size_t... no
> checking for uint8_t... (cached) no
> checking for size_t... (cached) no
>
> (You are right, if AC_CHECK_TYPES([uint8_t]) performed the check for
> the second time, it might say "yes", because the just defined
> `uint8_t' macro would have been found.)

Here I have 2... - but it seems good to test again, as otherwise I don't
see the use-case for AC_TYPE_INT8_T.
For a contrived example, say a system doesn't have int64_t but does have
a 64 bit long long int. I was hoping something like the following would
do the right thing:

AC_TYPE_INT64_T            <- finds long long int as suitable for int64_t
AC_CHECK_TYPES([int64_t])  <- sets HAVE_INT64_T, as it is usable given above ? or not ?

#ifdef HAVE_INT64_T
  code using int64_t
#else
  long winded int32_t alternative
#endif

So in a sense, I don't care whether int64_t is "native", but just that
it can be used. If AC_CHECK_TYPES doesn't notice the int64_t defined by
AC_TYPE64_T, the 32bit alternative would compile despite the usable long
long int alternative. Is that intended? (It seems that AC_CHECK_TYPES
checks again, so all is well as it is from my point of view, but is that
by accident?)

[... good explanation of #define & confdefs.h ...]

> Hope you find this mosaic of comments useful,

Yes, thank you!
Patrick
What the program should do:
Gets a string from the user (a character's first/last name). The string goes through a switch to set int i to # and string n to "[###]" (where # appears, it is replaced by actual numbers). The program displays an image depending on what "i" is set to (i + ".png"). The program reads a text file (an embedded resource) with the following information in it:

Quote
[000]
Reimu Hakurei
Reimu's description here
[001]
Marisa Kirisame
Marisa's description here
[002]
Sanae Kochiya
Sanae's description here

The program then should search the text file for [###] (whichever number n is set to after going through the switch). Then it grabs the next line after the [###] (the character's name) and inputs it to characterName. It then grabs the line after the character's name and inputs it in characterDesc. Then it displays characterName and characterDesc.

What it does: gets a string from the person using the program, then nothing happens when I hit enter/press the search button. I did get the program to work fine when the information was in arrays, but I don't want an array that has 130 slots available with over 100 words in each slot. (There are over 130 characters in the game series and I'm going to have all of them along with a menu for their theme songs eventually.)

I am just learning C# right now also, so I don't know very much C#. I am in Software Development classes in college but currently have only taken Java and C++, getting into C# next quarter. So if anyone helps with code, please comment stuff so I can understand it better.
My code so far:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Text.RegularExpressions;
using System.IO;

namespace TouhouCharacters
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
            characterName.Hide();
            characterImage.Hide();
            searchBox2.Hide();
            enterSearch2.Hide();
        }

        public String n;
        public int i = 0;

        public void getSearch(object sender, EventArgs e)
        {
            String search = searchBox.Text.ToLower();
            switch (search)
            {
                case "reimu": n = "[000]"; i = 0; return;
                case "hakurei": n = "[000]"; i = 0; return;
            }

            //Starts erroring here, getting the text file and reading it and searching through it.
            System.Reflection.Assembly textFile;
            textFile = System.Reflection.Assembly.GetExecutingAssembly();
            textFile.GetManifestResourceStream("example.txt"); //Using an embedded resource.
            string allRead = textFile.ReadToEnd();
            //I got the .ReadToEnd() from , and when I copy/paste it, ReadToEnd doesn't have
            //a red line under it, but when I changed things around it starts erroring.
            string regMatch = n;
            if (Regex.IsMatch(allRead, regMatch))
            {
                characterName.Text = "\n" + textFile;
                System.Reflection.Assembly thisExe;
                thisExe = System.Reflection.Assembly.GetExecutingAssembly();
                System.IO.Stream cimage = thisExe.GetManifestResourceStream("TouhouCharacters." + i + ".png");
                characterImage.Image = Image.FromStream(cimage);
                searchBox.Hide();
                enterSearch.Hide();
                enterSearch2.Show();
                searchBox2.Show();
                characterImage.Show();
                characterName.Show();
            }
            //Errors ends here.
        }

        private void getSearch2(object sender, EventArgs e)
        {
            String search = searchBox2.Text.ToLower();
            switch (search)
            {
                case "reimu": n = "[000]"; i = 0; return;
                case "hakurei": n = "[000]"; i = 0; return;
            }
        }

        private void searchBox_TextChanged(object sender, EventArgs e)
        {
            this.AcceptButton = enterSearch;
        }

        private void searchBox2_TextChanged(object sender, EventArgs e)
        {
            this.AcceptButton = enterSearch2;
        }
    }
}

Can anyone please help with the searching a text file with embedded resources? Everything I do makes it error, and I have been looking through google for different solutions but none of them have worked. I also can't figure out how to make the characterName get the next line of the text file for the character's name. Same with the description. I assume doing characterName.Text = "\n" + textFile will NOT work; I was testing that just to try everything I could think of. Thank you for your time and any help would be greatly appreciated.
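For what it's worth, the lookup described above (find the [###] marker line, then take the next line as the name and the line after that as the description) can be sketched independently of the resource-loading problem. The sketch below is in Python only so it stays compact; the same loop translates directly to C# over the array returned by File.ReadAllLines or a StreamReader. The data mirrors the example file quoted in the question:

```python
def find_character(lines, marker):
    """Scan lines for `marker` (e.g. '[000]'); return the next two lines
    as (name, description), or None if the marker is absent."""
    for i, line in enumerate(lines):
        if line.strip() == marker and i + 2 < len(lines):
            return lines[i + 1].strip(), lines[i + 2].strip()
    return None

data = """[000]
Reimu Hakurei
Reimu's description here
[001]
Marisa Kirisame
Marisa's description here
""".splitlines()

print(find_character(data, "[001]"))  # ('Marisa Kirisame', "Marisa's description here")
```

Keeping the parsing separate from the stream-reading code also makes each half easier to debug on its own.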
http://www.dreamincode.net/forums/topic/296243-problem-game-character-info-program/
Question

The interest rate on a conventional 10-year Government of Canada bond is 6% per year, and the interest rate on 10-year real-return (i.e., inflation-adjusted) Government of Canada bonds is 3.5% per year. You have $10,000 to invest in one of them. If you expect the average inflation rate to be 4.5% per year, which bond offers the higher expected rate of return? Ignoring any difference in risk, which would you prefer to invest in?
https://www.coursehero.com/tutors-problems/Finance/8200124-The-interest-rate-on-a-conventional-10-year-government-of-Canada-bond/
mysqldump requires at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, LOCK TABLES if the --single-transaction option is not used, and (as of MySQL 5.6.49) PROCESS if the --no-tablespaces option is not used.

A dump made using PowerShell on Windows with output redirection creates a file that has UTF-16 encoding:

mysqldump [options] > dump.sql

However, UTF-16 is not permitted as a connection character set (see Impermissible Client Character Sets), so the dump file cannot be loaded correctly. To work around this issue, use the --result-file option, which creates the output in ASCII format:

mysqldump [options] --result-file=dump.sql

See Section 24.2, "MySQL Enterprise Backup Overview". If your tables are primarily MyISAM tables, consider using mysqlhotcopy instead, for better backup and restore performance than mysqldump. See Section 4.6.10, "mysqlhotcopy — A Database Backup Program".

Connection Options

--bind-address=ip_address

On a computer having multiple network interfaces, use this option to select which interface to use for connecting to the MySQL server.

--compress, -C

Compress all information sent between the client and the server if possible. See Section 4.2.6, "Connection Compression Control".

--default-auth=plugin

A hint about which client-side authentication plugin to use. See Section 6.2.11, "Pluggable Authentication".

--enable-cleartext-plugin

Enable the mysql_clear_password cleartext authentication plugin. (See Section 6.4.1.5, "Client-Side Cleartext Pluggable Authentication".) This option was added in MySQL 5.6.28.

--host=host_name, -h host_name

Dump data from the MySQL server on the given host. The default host is localhost.

--plugin-dir=dir_name

The directory in which to look for plugins. Specify this option if the --default-auth option is used to specify an authentication plugin but mysqldump does not find it. See Section 6.2.11, "Pluggable Authentication".

--port=port_num, -P port_num

For TCP/IP connections, the port number to use.

--secure-auth

Do not send passwords to the server in old (pre-4.1) format. Accounts that use the pre-4.1 password hashing method are deprecated; expect support for them to be removed in a future MySQL release. For account upgrade instructions, see Section 6.4.1.3, "Migrating Away from Pre-4.1 Password Hashing and the mysql_old_password Plugin".

Note: This option is deprecated; expect it to be removed in a future release. As of MySQL 5.7.5, it is always enabled and attempting to disable it produces an error.

--ssl*

Options that begin with --ssl specify whether to connect to the server using encryption and indicate where to find SSL keys and certificates. See Command Options for Encrypted Connections.

--user=user_name, -u user_name

The user name of the MySQL account to use for connecting to the server.

Option-File Options

These options are used to control which option files to read.

--defaults-extra-file=file_name

Read this option file after the global option file but (on Unix) before the user option file. If the file does not exist or is otherwise inaccessible, an error occurs. If file_name is not an absolute path name, it is interpreted relative to the current directory. For additional information about this and other option-file options, see Section 4.2.2.3, "Command-Line Options that Affect Option-File Handling".

--defaults-file=file_name

Use only the given option file. If the file does not exist or is otherwise inaccessible, an error occurs. If file_name is not an absolute path name, it is interpreted relative to the current directory.

--no-defaults

Do not read any option files. If program startup fails due to reading unknown options from an option file, --no-defaults can be used to prevent them from being read. The exception is that the .mylogin.cnf file is read in all cases, if it exists. This permits passwords to be specified in a safer way than on the command line even when --no-defaults is used. To create .mylogin.cnf, use the mysql_config_editor utility. See Section 4.6.6, "mysql_config_editor — MySQL Configuration Utility".

Debug Options

--debug[=debug_options], -# [debug_options]

Write a debugging log; the default log file is /tmp/mysqldump.trace. This option is available only if MySQL was built using WITH_DEBUG. MySQL release binaries provided by Oracle are not built using this option.

--debug-check

Print some debugging information when the program exits. This option is available only if MySQL was built using WITH_DEBUG. MySQL release binaries provided by Oracle are not built using this option.

--debug-info

Print debugging information and memory and CPU usage statistics when the program exits. This option is available only if MySQL was built using WITH_DEBUG. MySQL release binaries provided by Oracle are not built using this option.

Replication Options

These options are used when dumping data from a replication source server or a replica server in a replication configuration. The following options apply to dumping and restoring data on replication source and replica servers.

--apply-slave-statements

For a replica dump produced with the --dump-slave option, add a STOP SLAVE statement before the CHANGE MASTER TO statement and a START SLAVE statement at the end of the output.

--delete-master-logs

On a source replication server, delete the binary logs by sending a PURGE BINARY LOGS statement to the server after performing the dump operation. This option requires the RELOAD privilege as well as privileges sufficient to execute that statement. This option automatically enables --master-data.

--dump-slave[=value]

This option is similar to --master-data except that it is used to dump a replica server to produce a dump file that can be used to set up another server as a replica that has the same source as the dumped server. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped replica's source. These are the source server coordinates from which the replica should start replicating. --dump-slave causes the coordinates from the source to be used rather than those of the dumped server, as is done by the --master-data option. In addition, specifying this option causes mysqldump to stop the dumped replica's SQL thread before the dump and restart it again afterward. --dump-slave sends a SHOW SLAVE STATUS statement to the server to obtain information, so it requires privileges sufficient to execute that statement. In conjunction with --dump-slave, the --apply-slave-statements and --include-master-host-port options can also be used.
--include-master-host-port

For the CHANGE MASTER TO statement in a replica dump produced with the --dump-slave option, add MASTER_HOST and MASTER_PORT options for the host name and TCP/IP port number of the replica's source.

--master-data[=value]

Use this option to dump a source replication server to produce a dump file that can be used to set up another server as a replica of the source. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server. These are the source server coordinates from which the replica should start replicating after you load the dump file into the replica. If the option value is 2, the CHANGE MASTER TO statement is written as an SQL comment and thus is informative only; it has no effect when the dump file is reloaded. If the option value is 1, the statement takes effect when the dump file is reloaded; this is also the default if no option value is specified. --master-data sends a SHOW MASTER STATUS statement to the server to obtain information, so it requires privileges sufficient to execute that statement. This option also requires the RELOAD privilege, and the binary log must be enabled.

XML output from mysqldump includes the XML namespace, as shown here:

If you require stored routines and events to be re-created with their original attributes, dump and reload the contents of the corresponding grant tables directly, using a MySQL account that has appropriate privileges for the mysql database.

For large-scale backup and restore operations, consider a physical backup instead, or mysqlhotcopy for MyISAM-only databases. Performance is also affected by the transactional options, primarily for the dump operation.

--delayed-insert

For those nontransactional tables that support the INSERT DELAYED syntax, use that statement rather than regular INSERT statements. As of MySQL 5.6.6, DELAYED inserts are deprecated; expect this option to be removed in a future release.

--flush-privileges

With this option, the dump file contains a FLUSH PRIVILEGES statement, so reloading the file requires privileges sufficient to execute that statement.

To select the effect of --opt except for some features, use the --skip option for each feature. To disable extended inserts and memory buffering, use --opt --skip-extended-insert --skip-quick. (Actually, --skip-extended-insert --skip-quick is sufficient because --opt is on by default.) To reverse --opt for all features except index disabling and table locking, use --skip-opt --disable-keys --lock-tables.

mysqldump does not dump the NDB Cluster ndbinfo information database. If you encounter problems backing up views due to insufficient privileges, see Section 20.9, "Restrictions on Views" for a workaround.
https://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
I'm approaching you guys for help because I really want to get this assignment right.

First: my main looks like this (it's rough, but it will give you an idea of what I'm doing):

Code:
#include <iostream>
#include <fstream>   // needed for ifstream
#include <string>
#include <cstring>
using namespace std;

int main()
{
    ifstream inFile;
    string fileName;
    string message;
    cout << "Please enter the file name you want to decrypt or encrypt: " << endl; //the encryption part was a maybe extra credit, so ignore.
    cin >> fileName;
    inFile.open(fileName.c_str());
    system("pause");
}

I want to create an array (a C string, I think?), but I've stopped at the [ because I want to know for sure. I was thinking of beginning a function something like the following. This is where I would like to send it to a reader function outside; the reader function would use the array I just mentioned to store the chars.

Code:
char inChar(char one&, one);
{
    while (inFile)
    {
        inFile >> one;
        if (one > '\0')
        {

My questions: how many spots should I give the array, and why? (Explain how the compiler "sees" characters so that I can best understand what to tell it.) How do I "properly" call my array from the first function to send the data? My instructor covered this in class, but I didn't manage to get all the notes (she types very fast). Lastly, since I can't nest functions in functions, can I nest any number of different operations of different data types within my function, or is there a limit to that? I know I can't break an if or then, but are there some other conflicts I should be wary of when attempting such integration? Thank you in advance for any help.
https://cboard.cprogramming.com/cplusplus-programming/116200-assignment-decrypt-text.html
Hello everybody.

The Problem

Consider the following alphabet grid:

+----+----+----+----+----+
| w | p | a | o | u |
+----+----+----+----+----+
| p | i | e | d | q |
+----+----+----+----+----+
| l | l | n | r | o |
+----+----+----+----+----+
| e | c | a | u | w |
+----+----+----+----+----+
| y | d | x | s | z |
+----+----+----+----+----+

This is an alphabet grid with different words hidden in it. A character x is adjacent to another character * in the grid if * is horizontally, vertically or diagonally adjacent to x. The following diagram shows the adjacent positions of character * with respect to x:

+----+----+----+
| * | * | * |
+----+----+----+
| * | x | * |
+----+----+----+
| * | * | * |
+----+----+----+

The task is to find a given string in the grid, with the condition that all the characters of the given string must be adjacent in the order in which they appear in the string, and no character in the grid may be used more than once to construct the given string. For example, the word linux can be constructed from the following index positions of the above grid:

+----+----+----+----+----+
| | | | | |
+----+----+----+----+----+
| | I | | | |
+----+----+----+----+----+
| L | | N | | |
+----+----+----+----+----+
| | | | U | |
+----+----+----+----+----+
| | | X | | |
+----+----+----+----+----+

Solution Idea

The idea for solving this problem is simple backtracking. We can visualize the grid as a graph, where each cell of the grid is a node, and two nodes have an edge between them if the corresponding characters are adjacent in the grid. The solution approach is then to start from each node of the graph, perform a depth-first search (DFS), and check whether there exists a simple path in the graph that corresponds to the given string in the grid.
To do this we start from each node of the graph, and at each iteration select an adjacent branch and visit the node on the other end of that branch. We keep track of the depth the DFS is currently at. After visiting a node at depth d, we check whether the d-th character of the given string matches the character at that node. If it does, we continue the DFS from this node; otherwise the partial path seen so far cannot lead to a solution (mismatch at level d), so we backtrack to level (d - 1) and continue the DFS by selecting the next branch at that level, trying to find another path. If the top-level node of the DFS is at depth 0, then the string is found in the grid/graph if the DFS reaches a depth equal to the length of the given string.

The DFS can be implemented without explicitly representing the graph corresponding to the grid structure. To perform the DFS we need the children of each node, and at each node the children are simply the cells adjacent to that cell in the grid. Therefore we can construct a recursive function that accepts the depth and the grid index of the current node, and passes on the grid index of the next child to be processed in DFS order. One point to note is that the edge cells of the grid do not have 8 children: the corner cells have 3, and the other edge cells have 5. Therefore we need to make the recursive calls carefully so that the DFS respects this constraint.
The function can be constructed as follows:

/* `mat' is the alphabet grid of dimension MAX x MAX
 * `pat' is the string to be found in the grid
 * `len' is the length of `pat'
 * `d' is the current depth of the DFS
 * `i' and `j' represent the cell of the grid/node of the graph
 */
int grid_match (char mat[MAX][MAX], char *pat, int len, int d, int i, int j)
{
    /* Base conditions */
    /* [1] Return success if length of `pat' is equal to depth `d' */
    /* [2] Return failure if the index `i' or `j' is out of bounds */
    /* [3] Return failure if the character at this child
     *     at `mat[i][j]' is not equal to `pat[d]'
     * [4] Return failure if the character at this child is already
     *     visited by this path.
     */

    /* Mark the `mat[i][j]' cell/node as visited, so that only simple
     * paths are searched. */

    /* Perform DFS on the children of cell mat[i][j] in the following order.
     * Accumulate success or failure values in `r'.  If any of the branches
     * returns success, then we have found one solution. */
    r = r | grid_match (mat, pat, len, d+1, i-1, j-1); /* Top Left cell */
    r = r | grid_match (mat, pat, len, d+1, i-1, j);   /* Top cell */
    r = r | grid_match (mat, pat, len, d+1, i-1, j+1); /* Top Right cell */
    r = r | grid_match (mat, pat, len, d+1, i, j+1);   /* Right cell */
    r = r | grid_match (mat, pat, len, d+1, i+1, j+1); /* Bottom Right cell */
    r = r | grid_match (mat, pat, len, d+1, i+1, j);   /* Bottom cell */
    r = r | grid_match (mat, pat, len, d+1, i+1, j-1); /* Bottom Left cell */
    r = r | grid_match (mat, pat, len, d+1, i, j-1);   /* Left cell */

    /* Unmark the `mat[i][j]' cell/node, so that it can be visited again
     * along some other simple path. */

    /* Return the value of `r', which tells the previous depth whether
     * this path leads to a match. */
}

The above pseudocode expresses the solution idea; the comments indicate how each step is implemented. The | is the bitwise OR operator in the expression r = r | expression, so the variable r ends up with the value 1 if any of the calls to grid_match () returns 1.
This is because we want to return a success value (1 in this case) when at least one path matching the string is found. To implement the condition that "no character in the tile may be used more than once to construct the given string", we need to ensure that the path created by the DFS does not intersect itself; that is, we perform a graph search instead of a tree search. For this, the code marks a node at depth d whenever it is valid (passes the base conditions). When the recursion rolls back to level (d - 1), we unmark that node so it can be visited by some other path in this DFS. The success (1) and failure (0) values are passed up the DFS tree. Note that the pseudocode considers the grid to be square; the solution generalizes to an m x n grid simply by changing the dimensions. The parameter len holds the length of pat; this avoids re-computing the string length at every recursion depth.

To check whether the entered string is present in the alphabet grid, we perform a DFS starting from every cell of the grid, i.e. we search for a path representing the string in the equivalent graph starting at each node. This can be done as follows:

/* `mat' is the alphabet grid, `pat' is the pattern to search for */
for i = 0 to number of rows
    for j = 0 to number of columns
        r = r | grid_match (mat, pat, len, 0, i, j);

The above pseudocode simply starts a DFS at each cell by making a recursive call at every index (i, j) with the depth set to 0. If r is success (1) then the given string is in the grid; otherwise it is not.

Source Code

Now I will show the entire source code, which accepts any grid and any string and performs the search. In addition, this code highlights the string in the grid if it is found.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <ctype.h>

#define STR_MAX 128

typedef struct _grid_t {
    char **mat;
    int r_max, c_max;
} grid_t;

/* MAIN Search Functions START */

/* grid_t *grid   : grid
 * char *pat      : pointer to string to be searched
 * int len        : length of `pat', ie. `strlen (pat)'
 * unsigned int d : the current depth
 * int i          : the current row of the grid at this path depth
 * int j          : the current column of the grid at this path depth
 */
int grid_match (grid_t *grid, char *pat, int len, unsigned int d, int i, int j)
{
    int r = 0;
    char temp;

    /* [1] Return success if length of `pat' is equal to depth `d' */
    if ((int) d == len)
        return 1;

    /* [2] Return failure if the index `i' or `j' is out of bounds */
    if ((i < 0) || (j < 0) || (i >= grid->r_max) || (j >= grid->c_max))
        return 0;

    /* [3] Return failure if the character at this child at `mat[i][j]'
     * is not equal to `pat[d]' */
    if (grid->mat[i][j] != pat[d])
        return 0;

    /* [4] Return failure if the character at this child is already
     * visited by this path. */
    if (isupper (grid->mat[i][j]))
        return 0;

    /* Save the character and mark the location to indicate that the DFS
     * has visited this node, to stop other branches entering here and
     * crossing over the path. */
    temp = grid->mat[i][j];
    grid->mat[i][j] = 0;

    /* Only return the first encountered match.  If a success (r = 1) is
     * found in any branch we make no more DFS calls, because we have
     * found the first occurrence. */
    r = grid_match (grid, pat, len, d + 1, i - 1, j - 1);
    if (r == 0) r = grid_match (grid, pat, len, d + 1, i - 1, j);
    if (r == 0) r = grid_match (grid, pat, len, d + 1, i - 1, j + 1);
    if (r == 0) r = grid_match (grid, pat, len, d + 1, i, j + 1);
    if (r == 0) r = grid_match (grid, pat, len, d + 1, i + 1, j + 1);
    if (r == 0) r = grid_match (grid, pat, len, d + 1, i + 1, j);
    if (r == 0) r = grid_match (grid, pat, len, d + 1, i + 1, j - 1);
    if (r == 0) r = grid_match (grid, pat, len, d + 1, i, j - 1);

    /* restore value */
    grid->mat[i][j] = temp;

    /* Mark on success by making the matched characters uppercase;
     * grid_show_match () prints only the uppercase chars and prints
     * the lowercase chars as dots `.' */
    if ((grid->mat[i][j] == pat[d]) && (r == 1))
        grid->mat[i][j] = toupper (grid->mat[i][j]);

    return r;
}

/* Find the match by executing DFS starting at each cell */
int grid_find_match (grid_t *grid, char *pat)
{
    int i, j, r = 0, len;

    /* Make all the grid characters lowercase; an uppercase char means
     * it was part of a previously successful path. */
    for (i = 0; i < grid->r_max; i++)
        for (j = 0; j < grid->c_max; j++)
            grid->mat[i][j] = tolower (grid->mat[i][j]);

    len = strlen (pat);
    for (i = 0; i < grid->r_max; i++) {
        for (j = 0; j < grid->c_max; j++) {
            r |= grid_match (grid, pat, len, 0, i, j);
            /* only find one occurrence of the string */
            if (r == 1)
                break;
        }
        if (r == 1)
            break;
    }
    return r;
}

/* MAIN Search Functions END */

/* AUXILIARY Functions START */

/* allocate grid */
grid_t *grid_allocate (int r, int c)
{
    grid_t *grid;
    int i;

    grid = malloc (sizeof (grid_t));
    grid->mat = malloc (sizeof (char *) * r);
    for (i = 0; i < r; i++)
        grid->mat[i] = malloc (sizeof (char) * c);
    grid->r_max = r;
    grid->c_max = c;
    return grid;
}

/* free allocated memory */
void grid_free (grid_t *grid)
{
    int i;

    if (grid == NULL)
        return;
    for (i = 0; i < grid->r_max; i++)
        free (grid->mat[i]);
    free (grid->mat);
    free (grid);
}

/* get the alphabet grid from stdin */
void grid_input (grid_t *grid)
{
    int i, j;

    for (i = 0; i < grid->r_max; i++)
        for (j = 0; j < grid->c_max; j++)
            scanf (" %c", &grid->mat[i][j]);
}

/* Simply show the grid */
void grid_show (grid_t *grid)
{
    int i, j;

    for (i = 0; i < grid->r_max; i++) {
        for (j = 0; j < grid->c_max; j++)
            printf ("%c ", grid->mat[i][j]);
        printf ("\n");
    }
}

/* Highlight the matched portion of the grid: print matched characters
 * on the found path as uppercase, and other characters as a dot `.' */
void grid_show_match (grid_t *grid)
{
    int i, j;

    for (i = 0; i < grid->r_max; i++) {
        for (j = 0; j < grid->c_max; j++) {
            if (isupper (grid->mat[i][j]))
                printf ("%c ", grid->mat[i][j]);
            else
                printf (". ");
        }
        printf ("\n");
    }
}

/* AUXILIARY Functions END */

/* A sample DRIVER code to demonstrate the grid_match () function */
int main (void)
{
    grid_t *grid;
    int r, c;
    char pat[STR_MAX];
    int i;

    printf ("\nEnter grid size (r, c): ");
    scanf (" %d %d", &r, &c);
    grid = grid_allocate (r, c);

    printf ("\nEnter the characters of the grid in row major order: ");
    grid_input (grid);

    printf ("\n\nEntered grid:\n");
    grid_show (grid);

    while (1) {
        printf ("\n\nEnter pattern to be searched (\"x\" to quit): ");
        scanf ("%s", pat);
        for (i = 0; pat[i] != '\0'; i++)
            pat[i] = tolower (pat[i]);
        if (strcmp (pat, "x") == 0)
            break;

        r = grid_find_match (grid, pat);
        if (r == 0) {
            printf ("\nThe pattern \"%s\" is not found in the grid", pat);
        } else {
            printf ("\nThe pattern \"%s\" was found in the grid", pat);
            printf ("\nShowing the match:\n\n");
            grid_show_match (grid);
        }
    }
    grid_free (grid);
    printf ("\n");
    return 0;
}

Code Issues

One issue is that if there is more than one occurrence of the string in the grid, this code will highlight and match only the first occurrence encountered by the DFS. Which one that is depends on the order in which the children of each node are expanded. Also, if one path splits into two or more (for example, a first portion matches "orac" and there are two possible directions in which "le" matches), only the first path of the split will be matched. If we want to highlight all occurrences, we need to continue the search even after one successful path is found, that is, even when r = 1 at some level. This can be implemented by removing the if (r == 0) condition before each call to grid_match (). The modification will look like this:

int grid_match (grid_t *grid, char *pat, int len, unsigned int d, int i, int j)
{
    . . .
    r |= grid_match (grid, pat, len, d+1, i-1, j-1);
    r |= grid_match (grid, pat, len, d+1, i-1, j);
    r |= grid_match (grid, pat, len, d+1, i-1, j+1);
    r |= grid_match (grid, pat, len, d+1, i, j+1);
    r |= grid_match (grid, pat, len, d+1, i+1, j+1);
    r |= grid_match (grid, pat, len, d+1, i+1, j);
    r |= grid_match (grid, pat, len, d+1, i+1, j-1);
    r |= grid_match (grid, pat, len, d+1, i, j-1);
    . . .
}

and in the grid_find_match () function:

int grid_find_match (grid_t *grid, char *pat)
{
    int i, j, r = 0, len;

    len = strlen (pat);
    for (i = 0; i < grid->r_max; i++) {
        for (j = 0; j < grid->c_max; j++) {
            r |= grid_match (grid, pat, len, 0, i, j);
            /* we allow the search to continue even when r = 1,
             * that is, even after a match was found */
        }
    }
    return r;
}

The problem with matching more than one occurrence of the string is that the matched characters will be highlighted in the grid, but because no direction is indicated and multiple characters are highlighted, it can be difficult to trace the matched paths. For example, in the grid at the beginning of the post the string "oracle" matches three split paths as follows:

. . . . .
. . E . .
L L . R O
E C A . .
. . . . .

Another minor issue is that all the matching is done in lowercase. The input string to be matched is converted to lowercase, and the toupper () marking changes the original grid, in which all the characters are stored in lowercase; at each call of grid_find_match () all the uppercase characters from the previous call are converted back to lowercase. To reject input strings with characters not present in the grid, we could first check that every character of the string appears in the grid at least as many times as it appears in the string (the current problem definition does not allow using a single cell more than once), and only then proceed.
Modifying the Definition of Adjacent Cells

If we want to change the definition of adjacent cells from the existing one to only the cells which are horizontally or vertically adjacent, we simply remove the recursive calls which continue the DFS into the diagonal cells of the grid.

Output

Let me show the output when the grid shown at the beginning of this post is input to the program. We have a 5x5 grid. In this grid the following words are present: linux, windows, apple, oracle, dell, and probably one or two more which I forgot (I made the grid around two hours before writing this section).

Enter grid size (r, c): 5 5

Enter the characters of the grid in row major order: w p a o u p i e d q l l n r o e c a u w y d x s z

Entered grid:
w p a o u
p i e d q
l l n r o
e c a u w
y d x s z

Enter pattern to be searched ("x" to quit): linux

The pattern "linux" was found in the grid
Showing the match:

. . . . .
. I . . .
L . N . .
. . . U .
. . X . .

Enter pattern to be searched ("x" to quit): windows

The pattern "windows" was found in the grid
Showing the match:

W . . . .
. I . D .
. . N . O
. . . . W
. . . S .

Enter pattern to be searched ("x" to quit): apple

The pattern "apple" was found in the grid
Showing the match:

. P A . .
P . E . .
. L . . .
. . . . .
. . . . .

Enter pattern to be searched ("x" to quit): oracle

The pattern "oracle" was found in the grid
Showing the match:

. . . . .
. . . . .
L . . R O
E C A . .
. . . . .

Enter pattern to be searched ("x" to quit): dell

The pattern "dell" was found in the grid
Showing the match:

. . . . .
. . E D .
L L . . .
. . . . .
. . . . .
Enter pattern to be searched ("x" to quit): acer

The pattern "acer" is not found in the grid

Enter pattern to be searched ("x" to quit): ibm

The pattern "ibm" is not found in the grid

Enter pattern to be searched ("x" to quit): hello

The pattern "hello" is not found in the grid

Enter pattern to be searched ("x" to quit): x

Complexity

The memory requirements are very good, because only the children along the currently traversed partial path are expanded; the space requirement therefore grows only with the depth of the recursion, i.e. with the length of the search string. The problem is to find a path in a graph which satisfies a certain condition, and the time complexity is not straightforward. To study the worst case, consider the following 6x6 grid of alternating a's and b's:

+----+----+----+----+----+----+
| a | b | a | b | a | b |
+----+----+----+----+----+----+
| a | b | a | b | a | b |
+----+----+----+----+----+----+
| a | b | a | b | a | b |
+----+----+----+----+----+----+
| a | b | a | b | a | b |
+----+----+----+----+----+----+
| a | b | a | b | a | b |
+----+----+----+----+----+----+
| a | b | a | b | a | b |
+----+----+----+----+----+----+

If we input the string "abababababababababababababababababaa" to be searched, it takes an insane amount of time to report failure. Why? Because the input string matches the grid for its first 35 characters and then fails to match the last one. The DFS therefore starts at each of the 36 cells, and each such DFS goes down to depth (n - 1) just to discover failure, backtrack, and try another branch. Note that in the above example the problem is basically finding a Hamiltonian path in the graph formed by the alphabet grid. Finding a Hamiltonian path in a graph is NP-complete, and an instance of the Hamiltonian path problem can be reduced in polynomial time to a worst-case instance of this character search (formal proof not shown); therefore finding a string in the alphabet grid is, in the worst case, NP-hard as well.
Therefore, unless P = NP, no polynomial-time algorithm exists that bounds the worst-case time. This conclusion comes from the answer by missingno to the Stack Overflow post "Improve a word search game worst case". For moderate-length strings that do not come close to forming such a Hamiltonian-path situation, in which the string matches most of its length at almost every DFS start and fails only at the end, the search performs fast.

The Original Grid Game

In the original grid game only strings matched horizontally, vertically and diagonally are allowed, not zigzag patterns as in the generalized version above. To search for strings horizontally, vertically or diagonally, we can implement six functions, one for each direction: two for left-to-right and right-to-left horizontal strings, two for top-to-bottom and bottom-to-top vertical strings, and two for the two diagonal directions, and run each function on each input string. Each of these functions can be constructed by removing the appropriate recursive calls from grid_match (). For example, for the horizontal matcher keep only the calls which continue the DFS into the Left and Right cells; for the vertical matcher keep only the calls into the Top and Bottom cells. For the diagonals, create two functions, one for the 45-degree diagonal and one for the 135-degree diagonal, similarly keeping only the appropriate diagonal calls. I am not posting the source code for this in the post, as the modification is pretty much straightforward. In fact the original word search game can be solved even more easily, since the search is along straight lines: to find horizontally spanning strings, take each row of the grid and check whether the input string or its reverse is a substring of the row; vertical and diagonal strings can be checked for substrings similarly.
Links The problem was taken from tech-queries.blogspot.in/2011/07/find-if-given-pattern-is-present-in.html Update Info - 20.07.2012 : Code modified. Removed re-computation of strlen (pat). Instead we are now passing the length of the pat in the grid_match ()recursive function. Minor text addition to reflect this change.
https://phoxis.org/2012/05/04/find-a-string-in-an-alphabet-grid/
First I couldn't download content over SSL; after a couple of hours I figured it out (mozroots --import --machine and copy-pasting a lot of "yes"). Surprisingly, after that my code still didn't work… Debugging code on mod_mono (at least interactive debugging) is a mystery I've yet to uncover, but a bit of "vim" and "gmcs" magic allowed me to find the source of the problem, as the source isn't that complex or hard to split into manageable bits. When I found the problem I couldn't (and still can't) believe it. I even tested it on Windows and again on my box using the very same binary… still Mono fails to override that damn method! Here's my test case:

using System;
using System.Net;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            TestWebClient t = new TestWebClient();
            t.DownloadData("");
        }
    }

    class TestWebClient : WebClient
    {
        protected override WebRequest GetWebRequest(Uri address)
        {
            Console.WriteLine("Override Called");
            return base.GetWebRequest(address);
        }
    }
}

On Windows the message gets printed, but not on Mono. I've even looked at the source code for the WebClient class here, and another odd thing is that the Mono 2.4.3.2 tarball doesn't resemble that file at all; it is in fact about 10 months old. Anyway, I was using that WebClient override to manage cookies automatically, but I guess I'll need to find another way. If anyone has a clue about the problem or produces a patch, please let me know!

Tags: bug, Mono, WebClient

It's a trivial one 😉 In the WebClient class, replace the call to "WebRequest.Create (uri)" with "GetWebRequest (uri)", and make "GetWebRequest" call "return WebRequest.Create (uri);" instead of 'SetupRequest', and you're done. You shouldn't have to change anything else, but this approach covers at least 90% of the issue anyway. If you can test that fix and let us know whether it works or not, that'd be great. Either way, filing it on Bugzilla makes it easier for us to track and fix it.
If you can attach a patch which fixes the bug, along with additional NUnit tests covering the change, it’s likely to get fixed faster.

Switch the first mention of ‘WebRequest’ with ‘WebClient’. That’s where you need to do the method switching. My bad! Finally, this is the actual WebClient class. You were looking at the Silverlight one. The class is in the ‘System’ assembly in the ‘System.Net’ namespace.

Hi, thanks for the help, it was kinda obvious I guess… just didn’t look in the right places. Anyway, currently I don’t have a devel Linux box and making a patch/unit test would be kinda difficult. All my debugging was done directly on the server, and that’s far from recommended 😀

Well, if you can get a patch in very soon and let me know, I can try my best to make sure it gets into 2.6.

Lol, love how much developers don’t like filing bugs… even those responsible for projects!

Lol, personally I don’t mind filing bug reports, except that with Mono I’m a bit frustrated due to past experiences. I did a lot of patches to fix SqlServer bugs and DNS bugs, and even did autocompletion on MonoDevelop 2 years before they implemented it. The sad thing is that my work tends to go into a dark hole for years, even with my stalking on IRC, personal mail and even blog posts, lol. Anyway @Alan, I’ve downloaded an Ubuntu VM, and although I didn’t touch Mono’s code for a long time I’m going to give it a shot; if nothing else it’s a nice test to check if my commit access is still up 😉

It’s not laziness or hatred of filing bugs; the reason I won’t file this is because I have no way of verifying the fix works. Therefore how could I possibly mark the bug as fixed? If Alex files it, I can notify him (by updating the bug) that a fix is in SVN. He can then test it and notify us (by updating the bug) if it resolved his issue or not.
It’s a paper trail which makes it much easier to ensure things get fixed. Secondly, if there are 100 bugs in my area, pretty much everything with an attached testcase will be prioritised. If I have to 1) find the bug, 2) write a testcase, 3) write a patch, 4) check back with the original dev as to whether it fixed their issue or not, 5) go back to 1 if it didn’t, I waste a lot of time.

Hiya, don’t suppose you fancy sharing the subversion hook? Would be dead handy.

It was a bit of an ugly hack really. I can’t recall, but at the time it was the best or fastest solution. Anyway, what I’ve done was create a small C app with this code:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    execl("/usr/bin/svn", "svn", "update", "/var/www/whotouchesnet", (const char *) NULL);
    return EXIT_FAILURE;
}

Compile it with "gcc -o binname filename.c" and then "chmod a+x binname". After that you just might need to fine-tune the permissions on the working copy a bit; just remember that the executable and the svn up command will be run with the Apache user’s credentials. Might not be the best solution, but it is simple enough and works like a charm! Never fails: I commit and my sites are updated. Just a word of caution though: make sure you’re using branches to manage the production/test versions of the site or service. Personally I’ve a production branch which goes live, and then I’ve the trunk/main branch going to test.domain.com; that way I can test/debug freely without concerns about bugs.

@Alan I’ve done it all. Patch, test case and bug report. Please take a look at it; I’ve tested it to make sure the test is accurate and no regressions on the other tests were detected. I did a similar fix on my Mono 2.4.3.2 sources on the production server (this machine) and everything started to work like it was supposed to. If you think it’s OK to commit to Mono’s trunk, let me know. I haven’t done a Mono commit in a while and it’s kinda like candy for hackers 😛 I just love it.
http://www.alexandre-gomes.com/?p=435
Adding a jQuery script to the Django admin interface

I'm gonna slightly simplify the situation. Let's say I've got a model called Lab.

from django.db import models

class Lab(models.Model):
    acronym = models.CharField(max_length=20)
    query = models.TextField()

The field query is nearly always the same as the field acronym. Thus, I'd like the query field to be automatically filled in after entering text in the acronym field in the Django admin interface. This task must be performed by a jQuery script.

So if I take an example: you want to add a new lab to the database through the Django admin interface. You click the add button and you land on the empty form with the two fields. You manually fill in the acronym field with a value such as ABCD, and then the query field should automatically be completed with the same value, that means ABCD. How should I proceed?
https://prodevsblog.com/questions/149778/adding-a-jquery-script-to-the-django-admin-interface/
Integrating the batch reactor mole balance

Posted February 18, 2013 at 09:00 AM | categories: ode | tags: reaction engineering
Updated March 03, 2013 at 10:36 AM

An alternative approach of evaluating an integral is to integrate a differential equation. For the batch reactor, the differential equation that describes conversion as a function of time is: \(\frac{dX}{dt} = -r_A V/N_{A0}\).

Given a value of initial concentration, or volume and initial number of moles of A, we can integrate this ODE to find the conversion at some later time. We assume that \(X(t=0)=0\). We will integrate the ODE over a time span of 0 to 10,000 seconds.

from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt

k = 1.0e-3
Ca0 = 1.0  # mol/L

def func(X, t):
    ra = -k * (Ca0 * (1 - X))**2
    return -ra / Ca0

X0 = 0
tspan = np.linspace(0, 10000)

sol = odeint(func, X0, tspan)

plt.plot(tspan, sol)
plt.xlabel('Time (sec)')
plt.ylabel('Conversion')
plt.savefig('images/2013-01-06-batch-conversion.png')

You can read off of this figure to find the time required to achieve a particular conversion.

Copyright (C) 2013 by John Kitchin. See the License for information about copying.
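For this particular second-order rate law the ODE also has a closed-form solution, X(t) = k*Ca0*t / (1 + k*Ca0*t), which is handy for sanity-checking the numerical result without reading it off the plot. A small, dependency-free sketch (forward Euler stepping, so only approximate):

```python
k = 1.0e-3
Ca0 = 1.0  # mol/L

def conversion_analytic(t):
    # closed-form solution of dX/dt = k*Ca0*(1 - X)**2 with X(0) = 0
    return k * Ca0 * t / (1.0 + k * Ca0 * t)

def conversion_euler(t_end, steps=100000):
    # crude forward-Euler integration of the same ODE, for cross-checking
    dt = t_end / steps
    X = 0.0
    for _ in range(steps):
        X += dt * k * Ca0 * (1.0 - X) ** 2
    return X

# At t = 1000 s, k*Ca0*t = 1, so the conversion should be exactly 0.5.
```

The same cross-check works against the odeint solution above: interpolating the solver output at t = 1000 s should also give X close to 0.5.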
http://kitchingroup.cheme.cmu.edu/blog/2013/02/18/Integrating-the-batch-reactor-mole-balance/
Hello,

I’ve seen many people write things like:

(1...1000000).to_a. # anything that is a huge enumerable can work as an example
  collect{|x| num_appearances(x)}.select{|x| x.is_norwegian?}.collect{|x| x.paycheck}.each{|x| p x}

I almost wrote something like that myself today. The problem, of course, is the huge memory footprint: collect, select and collect each create a new temporary array to store the results in. Here’s a solution to that problem, exchanging it for a bit of speed (anything that uses those methods could obviously live with that, otherwise another language or method would be used):

module Enumerable
  def serially(&b)
    self.collect{|x| *([x].instance_eval(&b))}
  end
end

Just beware not to use it with any of the “!” methods or with sort or sort_by.

Aur
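The memory problem being solved here, chained map/filter/map stages each materializing a full temporary array, is the same one lazy pipelines address. For comparison, a sketch in Python where generator expressions fuse the stages so values flow through one at a time; the three functions are hypothetical stand-ins for the methods named in the Ruby snippet:

```python
def num_appearances(x):
    # hypothetical stand-in for the num_appearances call in the example
    return x % 7

def is_norwegian(n):
    # hypothetical stand-in for is_norwegian?
    return n % 2 == 0

def paycheck(n):
    # hypothetical stand-in for .paycheck
    return n * 100

def lazy_pipeline(limit):
    # Each stage is lazy: no intermediate million-element array is ever built;
    # every value runs through all three stages before the next one starts.
    stage1 = (num_appearances(x) for x in range(1, limit + 1))
    stage2 = (n for n in stage1 if is_norwegian(n))
    stage3 = (paycheck(n) for n in stage2)
    return sum(stage3)
```

The result is identical to the eager collect/select/collect chain, but the peak memory use stays constant regardless of the input size.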
https://www.ruby-forum.com/t/enumerable-serially-those-nifty-functions-w-o-memory-footp/105504
Design Guidelines, Managed code and the .NET Framework

Recently I needed to show some progress indicator on a long-running console application. I recall the good old days of my college years with console-based SMTP clients such as elm and pine… as I recall, these clients showed progress via a simple ASCII spinner. I was impressed with how simple this is to do with .NET Framework 2.0. I started off writing the usage code I wanted to enable, then built a simple class that met those requirements. I highly recommend this process, especially for more complex designs. Here is the client code:

static void Main(string[] args)
{
    ConsoleSpiner spin = new ConsoleSpiner();
    Console.Write("Working....");
    while (true)
    {
        spin.Turn();
    }
}

And here is the class I came up with.

public class ConsoleSpiner
{
    int counter;

    public ConsoleSpiner()
    {
        counter = 0;
    }

    public void Turn()
    {
        counter++;
        switch (counter % 4)
        {
            case 0: Console.Write("/"); break;
            case 1: Console.Write("-"); break;
            case 2: Console.Write("\\"); break;
            case 3: Console.Write("-"); break;
        }
        Console.SetCursorPosition(Console.CursorLeft - 1, Console.CursorTop);
    }
}

PS – does anyone still use elm\pine? Are there clients that work with Exchange? Also, I was always told that pine was an acronym for “Pine Is Not Elm”, but according to the official site it is not.
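The same spinner idea is a couple of lines in other runtimes too; here is a hedged Python sketch of it, using itertools.cycle for the frame sequence and a carriage return in place of SetCursorPosition (the class and method names are just illustrative):

```python
import itertools
import sys
import time

class ConsoleSpinner:
    """Cycles through four frames; '\r' rewinds the cursor to column 0."""

    def __init__(self):
        # same frame order as the C# version above: /, -, \, -
        self._frames = itertools.cycle("/-\\-")

    def next_frame(self):
        # exposed separately so the frame sequence can be inspected/tested
        return next(self._frames)

    def turn(self):
        sys.stdout.write("\r" + self.next_frame())
        sys.stdout.flush()

if __name__ == "__main__":
    spinner = ConsoleSpinner()
    for _ in range(12):
        spinner.turn()
        time.sleep(0.01)
```

Separating next_frame from turn keeps the terminal I/O at the edge, which makes the animation logic trivially testable.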
http://blogs.msdn.com/brada/archive/2005/06/11/428308.aspx
Notice that when we want to invoke the randint() function we call random.randint(). This is default behavior where Python offers a namespace for the module. A namespace isolates the functions, classes, and variables defined in the module from the code in the file doing the importing. Your local namespace, meanwhile, is where your code is run. Python defaults to naming the namespace after the module being imported, but sometimes this name could be ambiguous or lengthy. Sometimes, the module’s name could also conflict with an object you have defined within your local namespace. Fortunately, this name can be altered by aliasing using the as keyword: import module_name as name_you_pick_for_the_module Aliasing is most often done if the name of the library is long and typing the full name every time you want to use one of its functions is laborious. You might also occasionally encounter import *. The * is known as a “wildcard” and matches anything and everything. This syntax is considered dangerous because it could pollute our local namespace. Pollution occurs when the same name could apply to two possible things. For example, if you happen to have a function floor() focused on floor tiles, using from math import * would also import a function floor() that rounds down floats. Let’s combine your knowledge of the random library with another fun library called matplotlib, which allows you to plot your Python code in 2D. You’ll use a new random function random.sample() that takes a range and a number as its arguments. It will return the specified number of random numbers from that range. Instructions Below import codecademylib3_seaborn, import pyplot from the module matplotlib with the alias plt. Import random below the other import statements. It’s best to keep all imports at the top of your file. Create a variable numbers_a and set it equal to the range of numbers 1 through 12 (inclusive). 
Create a variable numbers_b and set it equal to a random sample of twelve numbers within range(1000). Now let’s plot these number sets against each other using plt. Call plt.plot() with your two variables as its arguments. Now call plt.show() and run your code! You should see a graph of random numbers displayed. You’ve used two Python modules to accomplish this ( random and matplotlib).
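Putting the exercise steps together, here is a sketch with the variable names the instructions use; the matplotlib lines are left commented out so the snippet runs even without plotting support installed:

```python
import random as rnd  # aliasing: the module's namespace is reachable under the shorter name

numbers_a = list(range(1, 13))           # the numbers 1 through 12, inclusive
numbers_b = rnd.sample(range(1000), 12)  # 12 distinct random values from 0..999

# Plotting steps from the exercise (require matplotlib):
# import matplotlib.pyplot as plt
# plt.plot(numbers_a, numbers_b)
# plt.show()
```

Note that random.sample draws without replacement, so the twelve values in numbers_b are guaranteed to be distinct.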
https://production.codecademy.com/courses/learn-python-3/lessons/modules-python/exercises/modules-python-namespaces
Hi All,

I have the situation where I have something like the following:

type Foo = TVar Int

data Clasp t = Clasp t

makeClasp x f = do
    let c = Clasp x
    mkWeakRef c (Just (f x))
    return c

finis p = do
    atomically $ do
        x <- readTVar p
        writeTVar p (x - 1)

....
makeClasp p finis
....

The Foo object has a greater lifetime than the Clasp. The bit I want you to focus on is the three lines:

let c = Clasp x
mkWeakRef c (Just (f x))
return c

We can only be sure that (f x) will be called after we let go of the returned value if we can be sure that the language implementation preserves the (explicit) sharing of 'c' between the call to mkWeakRef and return. In particular, if the implementation rewrote the fragment as:

mkWeakRef (Clasp x) (Just (f x))
return (Clasp x)

then the finalizer might be called sooner than we expect. The problem is that, as I read it, making a distinction between the two fragments would be a violation of referential transparency. So here's the question: is there a way of writing this that is guaranteed to work on all correct implementations of Haskell?

cheers,
T.

ps FWIW, the actual situation is that we have a TVar pointing to a record representing a 'page' of an external data structure. For some operations we need to make sure the page is in memory, but when all the operations using a particular page are finished, we'd like to flush it back to disk if it has been modified. Now it'd be really nice to use the garbage collector to tell us when no one is using the in-memory version of the page, so we can write it out. However, if we can't be certain that we've got a genuine hard reference to the object, then we can't be sure it will not be prematurely flushed.

--
Dr Thomas Conway
drtomc at gmail.com

"You are beautiful; but learn to work, for you cannot eat your beauty." -- Congo proverb
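The flush-on-collect idea in the postscript maps onto finalizers in other managed runtimes too. A sketch of the pattern in Python's weakref module (names are illustrative; note that CPython's reference counting makes the collection point predictable here, which is precisely the guarantee the post worries about under lazy evaluation and GHC's optimizer):

```python
import weakref

flushed = []

class Page:
    """Stand-in for the in-memory wrapper around an external 'page'."""
    def __init__(self, page_id):
        self.page_id = page_id

def flush(page_id):
    # stand-in for writing a dirty page back to disk
    flushed.append(page_id)

page = Page(42)

# Register the finalizer with plain data as arguments, never with the object
# itself, or the callback would keep the page alive forever.
finalizer = weakref.finalize(page, flush, page.page_id)

del page  # with CPython refcounting, the finalizer runs right here
```

The subtlety raised in the thread still applies in spirit: the finalizer fires when the wrapper becomes unreachable, so correctness depends on every live user actually holding a hard reference to the wrapper object.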
http://www.haskell.org/pipermail/haskell-cafe/2007-March/023225.html
So, the DTO should always be created by the BO as a response to the Delegate call and then forwarded from the Delegate to the client, right? Does it mean that I have to write such a DTO-packaging method like getData() for every business method that returns something other than void and that I would like to send through the network? I understand that this is a typo in the BO listing you posted and it should be "return wrapReturnInTO(getPlayerNames());", right? Should the IGame business interface declare these remote exceptions (and every other possible one, as there might be a number of other network-related exceptions)?

Frits Walraven wrote: Does it mean that I have to write such a DTO-packaging method like getData() for every business method that returns something other than void and that I would like to send through the network?

Yes, but don't forget that your business method could actually gather data from multiple data interactions.

So, this is known as a Composite Entity which uses the Transfer Object Assembler pattern, right?

Is it allowed to make some Business Object methods public when they don't return a Transfer Object, like getPlayerAddresses() in the below code:

..... left out code ....

Or is it a bad design to introduce this approach, and every BO public method should return either nothing or a TransferObject? So, the business delegate doesn't work as a proxy for the Business Object, does it? So, is the GameDelegate allowed to do such things, or should it map 1:1 to the Business Object's public functions? So if I have an "addPlayer(PlayerDTO)" method in the Game BO, then in my BusinessDelegate it should also be "addPlayer(PlayerDTO)"?

So the BO is sending the DTO, so I understand that: IGame - business interface; Game - Business Object; PlayerDTO, GameDetailsDTO, WeaponDTO, etc. - are my DTOs related to the Game BO? Is this correct? So, the Business Object doesn't come with only one DTO; a BO can encompass many DTOs. If so, then does the BO have any properties / fields within? I mean, should it look like:

public class Game implements IGame {
    private Integer playersNum;
    private String gameTitle;
    private DifficultyLevelEnum difficultyLevel;
    private List<Player> players; // Assume that Player is a different BO
    ...
    // Public and/or private methods here
}

So, a Business Object can have a different Business Object as its field? But should it have fields like "gameTitle", "playersNum", etc. (which are already a part of the GameDTO)?

Frits Walraven wrote: Yes, the DTO is only for the outside world; the properties describe the BO.

And so on for every Entity in the application? There is a feeling of duplicated code creation (one property added to the Player object needs to be added in several other places, like the DB entity property, the DTO object property, ...).

Data Access Object: Consequences - Adds Extra Layer: The DAOs create an additional layer of objects between the data client and the data source that need to be designed and implemented to leverage the benefits of this pattern. But the benefit realized by choosing this approach pays off for the additional effort.

A Transfer Object is likely to duplicate code (getters and setters will shadow those of the EJB). Here we are assuming that there is a class called TransferObject.
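The BO/DTO split being discussed can be sketched in a few lines of Python; the names mirror the thread's Game example and are purely illustrative. The business object keeps its own state and rules, and to_dto() packages a flat, behavior-free snapshot for the wire (the wrapReturnInTO step from the thread):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GameDTO:
    # flat, serializable snapshot meant to cross the network boundary
    game_title: str
    players_num: int
    player_names: list

class Game:
    """Business object: owns state and rules, never leaves the server side."""

    def __init__(self, game_title):
        self._game_title = game_title
        self._players = []

    def add_player(self, name):
        # business rule lives in the BO, not in the DTO
        if name in self._players:
            raise ValueError("duplicate player")
        self._players.append(name)

    def to_dto(self):
        # copy state into the DTO rather than sharing mutable internals
        return GameDTO(self._game_title, len(self._players), list(self._players))
```

The duplication the poster complains about is visible here too (gameTitle appears in both classes); the trade-off is that the DTO can evolve with the wire format while the BO evolves with the business rules.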
http://www.coderanch.com/t/525212/java-Web-Component-SCWCD/certification/Business-Delegate-pattern-code
The article is about the use of the Java do while loop. The Java do while loop, like the for loop and while loop, has a condition which defines its termination point. However, unlike them, it checks its condition at the bottom of the loop instead of the top. This results in the do while loop always running at least once, no matter whether the condition is true or false, unlike the while loop, which will not run at all if the condition is false.

Syntax

Below is the general syntax for the Java do while loop.

do {
    // statements
} while(condition);

Example

A basic example of the Java do while loop. Other supporting code was removed to improve readability and keep the code short.

int x = 0;
do {
    System.out.println(x);
    x = x + 1;
} while (x < 5);

0
1
2
3
4

As of this point, the do while loop still seems to be identical to the while loop. We'll now show an example where both of them act differently.

Example 2

The following code makes use of the Java Scanner class to take input from the user.

import java.util.Scanner;

public class example {
    public static void main(String[] args) {
        int x;
        int sum = 0;
        System.out.println("Input a negative number to exit");
        Scanner scan = new Scanner(System.in);
        do {
            x = scan.nextInt(); // takes input from user
            sum = sum + x;
            System.out.println(sum);
        } while(x > 0);
        System.out.println(sum);
    }
}

The above code is designed to keep taking input from the user, adding the values up in the variable sum, until a negative number is input to terminate the loop. The do while loop fits here because at least one read is always needed. If you don't understand how we're taking input here, see the Java input guide. It's just a better way of taking input than System.in.read().

Using Multiple Statements

do {
    x = x + 1;
    y = y + 2;
    System.out.println("X: " + x + " " + "Y: " + y);
} while (x < 5 && y < 6);

The above loop stops after the value of y reaches 6, even though x was still less than 5. You can use other operators like OR and XOR to change things up. See the Java Operators guide to learn more.

Continue statement

We'll be modifying a portion of the code from one of the previous examples here. In this example, we want to write a program that takes the sum of all the positive odd numbers.

do {
    x = scan.nextInt(); // takes input from user
    if (x % 2 == 0)
        continue;
    sum = sum + x;
    System.out.println(sum);
} while (x > 0);

Break statement

The break statement, when called, terminates a loop during its execution. It is an effective way of forcing your loop to stop if some unexpected outcome has been encountered.

import java.util.Scanner;

public class example {
    public static void main(String[] args) {
        int x;
        int count = 0;
        Scanner scan = new Scanner(System.in);
        do {
            x = scan.nextInt();
            if (x < 0) {
                System.out.println("Please input positive numbers");
                break;
            }
            count = count + 1;
            System.out.println(x);
        } while(count < 5);
    }
}

The above code runs a total of 5 times, taking input of positive numbers from the user. To ensure the user doesn't input a negative number, we use an if statement that leads to a break statement and an error message. When the user inputs a negative number, the error message is printed and the loop terminates due to the break statement.

5
5
2
2
-6
Please input positive numbers

This marks the end of the Java do while loop article. Any suggestions or contributions for CodersLegacy are more than welcome. You can ask any relevant questions in the comments section below.
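As an aside, many languages have no do-while construct at all; the standard emulation is an infinite loop with the condition checked at the bottom. A Python sketch mirroring the semantics of Example 2 (note that, exactly as in the Java version, the terminating non-positive value is still added before the check runs):

```python
def sum_until_nonpositive(inputs):
    """Do-while emulation: the body always runs at least once."""
    it = iter(inputs)
    total = 0
    while True:                # do { ...
        x = next(it)           #   body: read one value
        total += x             #   body: accumulate (even the final value)
        if not (x > 0):        # } while (x > 0);
            break
    return total
```

Placing the condition at the bottom of a while True loop is exactly what the do-while guarantees: one unconditional pass, then a post-condition check.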
https://coderslegacy.com/java/java-do-while-loop/
Problem Formulation

- Given an image stored at image.jpg,
- a target width and height in pixels, and
- a target starting point (upper-left) x and y in the coordinate system.

How to crop the given image in Python OpenCV so that the resulting image has width * height size?

Here’s an example of how the original image is cropped to a smaller area from (100, 20) upper-left to (540, 210) bottom-right:

Solution: Slicing

To crop an image to a certain area with OpenCV, use NumPy slicing img[y:y+height, x:x+width] with the (x, y) starting point on the upper left and (x+width, y+height) ending point on the lower right. Those two points unambiguously define the rectangle to be cropped.

Here’s an example of how to crop an image to width=440 and height=190 pixels with the upper-left starting point at x=100 and y=20 pixels, as shown in the graphic before.

import cv2

# Load Image
img = cv2.imread("image.jpg")

# Prepare crop area
width, height = 440, 190
x, y = 100, 20

# Crop image to specified area using slicing
crop_img = img[y:y+height, x:x+width]

# Show image
cv2.imshow("cropped", crop_img)
cv2.waitKey(0)

Here’s the original image: And here’s the cropped image:

Alternative: Crop Image Using PIL

You can also use the standard PILLOW library to crop an image in Python. Here’s my blog post that shows you how to accomplish this, and here’s the video guide. You can find the full article about how to crop an image with PIL here: [Article] How to Crop an Image With PIL

Thanks for studying the whole article.
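Since the crop is nothing but slicing, its behavior can be checked without OpenCV or an image file at all. A small dependency-free sketch mimicking the same row/column convention on a synthetic "image" built from nested lists (in a real cv2 image the same expression works directly on the NumPy array):

```python
def crop(img, x, y, width, height):
    # same convention as the article's NumPy slice: rows are y, columns are x
    return [row[x:x + width] for row in img[y:y + height]]

# synthetic 300-row by 600-column "image": each pixel stores its (row, col)
img = [[(r, c) for c in range(600)] for r in range(300)]

# crop the article's rectangle: upper-left (100, 20), size 440 x 190
crop_img = crop(img, x=100, y=20, width=440, height=190)
```

The crop keeps rows 20..209 and columns 100..539, which matches the (100, 20) to (540, 210) rectangle described in the problem formulation (Python slices exclude the end index).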
https://blog.finxter.com/how-to-crop-an-image-using-opencv/
Integrating SoapUI tests into SpecFlow

I'm trying to integrate existing SoapUI tests into a new in-house automation framework based on SpecFlow, so that we get better visibility into what is being tested, as well as be able to have tests running against multiple interfaces, fit in better with existing dev practices, have proper code reviews, etc.

My initial idea was that I would use the public classes and import them into my code, as described for instance here:. It's not available in .NET, which I should have guessed, so then I thought I would create a small Java app that would wrap the classes, and I would call it from the automation framework. But then I found that the last available jar is anyway from version 4.0.

So I had a look at Ready TestServer, which looked promising, but the licensing model means that we couldn't have it in a build that runs frequently. It is beginning to feel like SmartBear really, really don't want me to integrate SoapUI in this way!

Eventually, I want to be able to use our API to set up for UI tests, and also to act as a verification step. Also, I'd like to be able to examine the responses in the automation framework and not in SoapUI.

Does anybody have any further ideas on what to consider to get this done?

Re: Integrating SoapUI tests into SpecFlow

Sounds like an interesting use case! It might be worth checking out our Swagger-Assert4J library; while it doesn't support the full ReadyAPI suite (which would be TestServer), it does have similar functionality to our open source SoapUI and provides an interface that might be a good fit for your project.
https://community.smartbear.com/t5/ReadyAPI-Questions/Integrating-SoapUI-tests-into-SpecFlow/td-p/163244
C++ program to change the case of all characters in a string

Introduction:

In this C++ tutorial, we will learn how to change the case of all characters of a user-provided string. It will take the string as an input from the user. With this program, you will learn how to read a string as input, how to iterate through the characters of a string, how to check the case of a character and how to change the case of a character.

Before showing you the program, let me show you the functions we are going to use:

islower(): This function is used to check if a character is lowercase or not. It is defined as below:

int islower ( int c );

It returns zero if the result is false, and a nonzero value otherwise.

toupper(): This function is used to convert a lowercase letter to uppercase. It is defined as below:

int toupper ( int c );

It takes the character to be converted as the parameter, cast to an integer. The return value is the uppercase equivalent of c: it returns the int representation of the uppercase character if one exists; else, it returns the integer representation of the same argument character.

tolower(): This function is used to convert an uppercase letter to lowercase. It is defined as below:

int tolower ( int c );

Similar to toupper, it converts c to its lowercase equivalent and returns the value as an integer. Both toupper and tolower return the integer representation of a character. We can implicitly cast it to a character.

C++ program:

#include <iostream>
using namespace std;

int main()
{
    //1
    char str[100];

    //2
    cout << "Enter a string :" << endl;
    cin.get(str, 100);

    //3
    for (int i = 0; str[i] != '\0'; i++)
    {
        //4
        if (islower(str[i]))
        {
            str[i] = char(toupper(str[i]));
        }
        else
        {
            str[i] = char(tolower(str[i]));
        }
    }

    //5
    cout << "Final string " << str << endl;
}

Explanation:

The commented numbers in the above program denote the step numbers below:

- str is a character array to hold the user input string.
- Ask the user to enter a string. Read the string and store it in the str variable using the cin.get() method.
- Run one for loop to iterate through the characters of the string. The end character of a string is '\0'. This loop will iterate through the characters starting at i = 0. It will stop only when str[i] is equal to '\0'.
- Inside the loop, we check each character to see whether it is lowercase or not. The islower function is used to check if the character is lowercase. If it is a lowercase character, change it to uppercase using the toupper function. Else, convert it to lowercase using the tolower function. We also update the character of the string str once its case is changed. toupper and tolower return the integer representation of a character, so we use the char cast to change it back to a character.
- Finally, print out the final string to the user.

Sample Output:

Enter a string :
Hello World
Final string hELLO wORLD

Enter a string :
ThiS is A sAmPlE StRiNg
Final string tHIs IS a SaMpLe sTrInG

Conclusion:

In this tutorial, we have learned how to change the character case in a string in C++. Try to run the example program we have seen in this tutorial and drop a comment below if you have any queries.
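As an aside, the same per-character flip is a one-liner in some other languages; for comparison, here is a minimal Python sketch of the identical logic (Python's built-in str.swapcase does the same thing):

```python
def swap_case(text):
    # mirror of the C++ loop: uppercase the lowercase letters, lowercase the rest;
    # non-letters are returned unchanged by upper()/lower()
    return "".join(c.upper() if c.islower() else c.lower() for c in text)

print(swap_case("Hello World"))  # hELLO wORLD
```

The outputs match the sample runs of the C++ program above.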
https://www.codevscolor.com/c-plus-plus-change-character-case-string/
buck.woody

Some platform management system designs focus on Task-Based events. You'll see this in programs like the MySQL Admin tool, especially the version for the Mac. I installed this to manage the MySQL system on my Mac at home, and the top row of icons contains what is basically a task list based on things you want to do.

Other systems, such as the Oracle 9i box I run, use objects as the paradigm. They start out with things like namespaces and work their way down the tree, at least in the Enterprise Manager GUI. They also have a few wizards and actions you can launch from there.

At Microsoft we use a little of both: you'll see objects referenced in the left-hand side of the screen in the "Object Explorer", and then our right-clicks, menu items and icon bars bring up task panels, which are usually logically ordered for some outcome. Others are just a "bag of properties" that you can use to set and change things on your system.

So which is best? Personally, I like a task-based approach to a GUI, and commands and scripting for object work. But that's just me. We have a lot of inputs here, as I've described, and what we end up with is the best of the common approach, hence the object/tasks we have now.

What do you think? Task-based, Object-based, or both?
http://blogs.msdn.com/b/buckwoody/archive/2007/10/24/task-based-or-object-based.aspx
Unity: How to make an overview map Posted by Dimitri | May 19th, 2011 | Filed under Programming . To make things easier to understand, in this tutorial, the examples will use the plane as the only element in the level. Now, it’s time to start doing some math to obtain some values that will be important for the scripts and for the map’s background to be correctly rendered. Open the plane at the Inspector and divide the X Scale value by the Z Scale, this way, obtaining the plane’s width/height ratio. The next step is to launch Photoshop or any image editing software. Make an image that will be the background of the map with the same ratio and customize it any way you see fit. The size doesn’t matter, as long as it is smaller than the screen resolution and the ratio is the same as the one calculated above. This image is going to be 300x150, meeting the 2:1 ratio requirement. Save this image and drag and drop it inside Unity, into the Inspector tab. Also, import other images you would like to display in the map into Unity as well. Now, create a GUISkin object, by right clicking the Project tab and selecting Create->GUISkin. Add the overview map background and any other image made with the image editing software as elements of the Custom Styles array. Insert this images in the Normal state Background slot, like this: Besides the background, this example has only one extra element to be displayed on the map. The third step is to open this link and download the ShowSize script. Add this script to your ‘Editor’ folder under the Inspector tab. If there is no folder with that name, just create it and then place the script inside it: The 'ShowSize' script inside the 'Editor' folder. Open this script window by clicking on Windows->Show Size. A new window will pop up. Select the plane on the scene, and Show Size script will display its size in the X, Y and Z axis. Write down these values as they are going to be important to the scripts explained below. 
With these values stored somewhere, create a new Cube game object (GameObject->Create Other->Cube). Align its center at each of the plane’s edges, writing down X and Z coordinates. These values are also relevant to the code. Center the cube at the 4 edges of the plane. Take note of its position. Finally, let’s write some scripts! The first one is a simple GUI script that only renders the map’s background: using UnityEngine; using System.Collections; public class GUIBackground : MonoBehaviour { //the GUISkin public GUISkin guiSkin; void OnGUI() { //Set the depth to one or a greater value GUI.depth = 1; //begin GUI group, with the same size as the GUI background image GUI.BeginGroup(new Rect(20,20,300,150)); //render the GUI background image GUI.Label(new Rect(0,0,300,150),"",guiSkin.customStyles[0]); //close the group GUI.EndGroup(); } } The above code should be self explanatory. Line 12 pushes this GUI to the bottom of the GUI layers. Line 15 starts a new GUI group, that will render the map 20 pixels away from the top left border of the screen. The GUI group must have the same size as the GUI background image. This other script will display, on the map, the location of the Game Object it’s attached to. 
using UnityEngine;
using System.Collections;

public class PlayerGUI : MonoBehaviour
{
    //the GUISkin
    public GUISkin guiSkin;
    //this game object's transform
    private Transform goTransform;

    void Awake()
    {
        //obtain the game object's Transform (assign to the field, not to a new local variable)
        goTransform = this.GetComponent<Transform>();
    }

    void OnGUI()
    {
        //set the depth to zero or any value smaller than the one in the background script
        GUI.depth = 0;
        //begin a GUI group with the same parameters as in the background script
        GUI.BeginGroup(new Rect(20, 20, 300, 150));
        //15 is the map background's width in pixels divided by the plane's size on the Z axis
        GUI.Label(new Rect((goTransform.position.z * 15) - 68.213905f, (goTransform.position.x * 15) - 237.28775f, 15, 15), "", guiSkin.customStyles[1]);
        GUI.EndGroup();
    }
}

This script is a little different, since it uses the game object's X and Z positions to draw its representation on the overview map at the correct position. The GUI.Label call makes it all happen. The X and Z positions are multiplied by 15 because that is the map background image's width (in pixels) divided by the plane's size on the Z axis: 300/20 = 15. It's the map image/plane ratio. The same value can be found by dividing the map background image's height (also in pixels) by the plane's size on the X axis. The 68.213905 value is the position of the Cube game object on the Z axis when it's centered on the left border of the plane, multiplied by 15, plus the game object's half width, also multiplied by 15. The 237.28775 value is obtained the same way, except it corresponds to the X position of the Cube game object when it was centered at the top border of the plane. This image explains it better:

Image that explains the values used in the GUI.Label call.

That's it. Here's an image of what the example project looks like:

Thanks for another GREAT tutorial, this one is going to be very helpful to me in the near future, thanks!

I hope it was clear enough. Thanks for the feedback!

Thank you for this tutorial ..
I do every thing but my cube not appear, any suggestion please?

Thank you again, that really helpful ………… i apply it right on my project but there is still a problem >>>> i want the overview map to be like there is Direction (North, south, east and west), so when i move up the character pin on map goes north and when i go east it goes right and so on …… please can you help me in this ……. thnx in advance :)

GUI.Label(new Rect((goTransform.position.x*0.33333333f)+285f, (goTransform.position.z*0.33333333f)+250f, 30, 30), "", guiSkin.customStyles[1]);

Sir thank you very much for very informative unity3d tutorials… more power sir! ^_^

thx, this is what I'm searching for
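The magic numbers in the PlayerGUI script (15, 68.213905, 237.28775, and the 0.33333333/offset pair a commenter used for a different map) are all instances of one linear world-to-map transform. A hypothetical Python sketch of that math (the function and parameter names are mine, not from the tutorial):

```python
def world_to_map(world_x, world_z, plane_origin, plane_size, map_size):
    """Convert a world-space (x, z) position to overview-map pixel coordinates.

    plane_origin -- world (x, z) of the plane corner that maps to pixel (0, 0)
    plane_size   -- (size_x, size_z) of the plane in world units
    map_size     -- (width_px, height_px) of the map background image
    """
    # pixels per world unit; with a 300x150 image over a 10x20 plane both equal 15
    px_per_z = map_size[0] / plane_size[1]
    px_per_x = map_size[1] / plane_size[0]
    # z drives the horizontal pixel axis and x the vertical one,
    # mirroring the Rect(...) arguments in the tutorial's GUI.Label call
    u = (world_z - plane_origin[1]) * px_per_z
    v = (world_x - plane_origin[0]) * px_per_x
    return u, v
```

Plugging the plane-corner position recorded with the helper Cube in as plane_origin reproduces the subtracted constants: origin_z times the ratio and origin_x times the ratio are exactly the role played by the 68.213905 and 237.28775 offsets (up to the half-width tweak the tutorial describes).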
http://www.41post.com/3939/programming/unity-how-to-make-an-overview-map
FormUrls Schema Overview

Last modified: October 06, 2009

Applies to: SharePoint Foundation 2010 | Available in SharePoint Online

The FormUrls Schema describes optional XML you can include in a content type as custom information. This XML node must be stored within an XMLDocument element in the content type definition. For more information, see Custom Information in Content Types. This schema enables you to specify client-side redirects to different Display, Edit, and New form pages for items of this content type.

The schema has the following elements:

FormUrls
The root element. The FormUrls element has the following attribute:
xmlns - Required Text. Represents the XML namespace of the schema. The namespace for this schema is:

Display
Optional Text. Specifies the URL of the custom Display form page to use.

Edit
Optional Text. Specifies the URL of the custom Edit form page to use.

New
Optional Text. Specifies the URL of the custom New form page to use.

Form pages are .aspx pages that replace the entire default SharePoint Foundation page, including the framing elements, or chrome, such as the top and side navigation bars. For form pages, you must create any navigational links or other elements you want that are usually found in the SharePoint Foundation chrome. The URLs you specify must be relative to the root location of the content type.

If you do not include this XML document in your content type definition XML, SharePoint Foundation uses the default values. In that case, SharePoint Foundation renders the forms automatically for you.

The following example specifies custom client-side redirects to different Display, Edit, and New form pages for items of this content type.
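The example referred to above is not reproduced in this extract; a sketch of what such a block typically looks like follows. The namespace URI and the .aspx paths are illustrative and should be verified against the SharePoint SDK:

```xml
<XmlDocuments>
  <XmlDocument NamespaceURI="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms">
    <FormUrls xmlns="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms">
      <Display>_layouts/CustomDisplay.aspx</Display>
      <Edit>_layouts/CustomEdit.aspx</Edit>
      <New>_layouts/CustomNew.aspx</New>
    </FormUrls>
  </XmlDocument>
</XmlDocuments>
```

Each child element simply redirects the corresponding form to the page you name; omit an element to keep the default form for that action.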
https://msdn.microsoft.com/en-us/library/ms473210.aspx
AWS S3 uploading hidden files by default

I have created a bucket in AWS and a couple of IAM users. The IAM users by default are included in a group with read-only access. However, when I created my bucket I generated a policy granting specific IAM users access to list, put and get. Now I'm trying to run a simple command to put a file with one of these AWS IAM users from my site:

aws s3api put-object --bucket <bucket name> --key poc/test.txt --body <windows path file>

The output is successful, meaning the files are always uploaded. However, when I look at the bucket in AWS I have to click on Show, because all uploads appear as hidden bucket content. The account that I'm using to verify the files uploaded in AWS has manager access in S3, and I'm going through the web console. How should I upload the files without the hidden mark? Thanks.

Answer: Versioning is enabled in your bucket (docs.aws.amazon.com/AmazonS3/latest/user-guide/….). The file upload is successful, as you can see the file in the console. When versioning is enabled, the console is designed to show only the latest version of a particular file unless you click on the Show option. Similarly, when you make a get-object API call, you will get the latest version of the file. To be able to retrieve a specific version, you must add the --version-id parameter with the desired version identifier.
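Assuming the bucket really is versioned, the behaviour can be confirmed and a particular revision retrieved from the CLI; the bucket name, key and version id below are placeholders:

```
# list every stored version of the object (what the console's "Show" view displays)
aws s3api list-object-versions --bucket <bucket name> --prefix poc/test.txt

# fetch one specific, possibly non-latest, version
aws s3api get-object --bucket <bucket name> --key poc/test.txt \
    --version-id <version id from the listing> out.txt
```

Without --version-id, get-object always returns the latest version, which is the behaviour described in the answer.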
https://www.edureka.co/community/23923/aws-s3-uploading-hidden-files-by-default?show=23924
Stefano Mazzocchi wrote:
>
> Per Kreipke wrote:
> >
> > > > > Again, I think WebDAV access through Cocoon is more than worth the
> > > > > effort.
> > > >
> > > > Definitely worth having, not clear how :-)
> > >
> > > Uh, what a thread :)
> > >
> > > Ok, let's try to summarize a little.
> > >
> > > This is what I believe is a good chain for publishing:
> > >
> > > user agent <- publishing <- store <- CMS <- edit agent
> >
> > I agree with Gianugo that the CMS should hide the store:
> >
> > user agent <- publishing <- CMS <- edit agent
> >                                 |
> >                               store
>
> Yes, good point. This is definitely easier to componentize.
>
> > > which is a native XML DB. IMO, it lacks a few features to turn it into
> > > such a storage system (most notably, versioning, locking and a few other
> > > things that things like CVS has), but it's a great place to start and a
> > > great way for the publishing system to have access to the content (thru
> > > XQL or the like).
> >
> > One thing I'm not thrilled about with dbXML is that it's document centric
> > [my opinion may be out of date, I'm not sure], it's hard to query across XML
> > documents.
>
> Look: XIndice is not dbXML moved to Apache, it's a new project "seeded"
> with the current dbXML implementation so that users can take a look at
> it and shape the further direction of the project.
>
> So, I totally agree that document-centric view sucks. IMO, a native XML
> DB should be seen as ONE BIG XML DOCUMENT and then you have an XQL
> frontend to query information and a WebDAV mapping backend so that
> people can know if an element should be considered part of a directory
> or part of a file.
>
> (this can easily be achieved with some namespaces attribute on an
> element that states that it is a root element for a document)
>
> But this is a talk for the XIndice mail list (when it will show up).
>
> > > user agent <-(http)- cocoon <-(xql)- xindice <-(?)- ? <-(webdav)- ?
> > > which leaves three big question marks.
> > >
> > > The unknown contract will probably be filled by the xindice guys (XMLDB
> > > API being the probable candidate, but maybe a JSR would add that
> > > functionality into JDBC4, I don't know at this point, but I'm confident
> > > that something will come out as soon as the xindice people (with us)
> > > tackle the problem in detail).
> >
> > Ok, so given that we eventually define all the contracts, why do we need to
> > choose the implementations? I think that, in the J2EE vein, being able to
> > choose my own implementation for 'standard services' is a great one.
> >
> > Isn't it ok to define the missing contract but leave the implementations as
> > '?'.
>
> Yep, it is. But around here, it's good to provide both so that you
> actually have something that works and don't have to wait years for the
> JCP to approve your proposal, get the spec out and then buy WebSphere
> 15.0 that implements it. :)
>
> > > 1) CMS because Cocoon shouldn't implement CMS stuff but should simply
> > > "use it". Things like user management, locking, revisioning, metadata,
> > > scheduling and all that should be a layer built around the storage. The
> > > contract between CMS and cocoon being some API or some avalon behavioral
> > > services.
> >
> > Great!
> >
> > > At the same time, content editors don't give a shit about "where"
> > > resources are located and I disagree vehemently with Gianugo when he
> > > says that you have to give them the harddisk metaphor otherwise they
> > > are screwed.
> > >
> > > Well, they would be anyway: in fact, when they approach an editing
> > > system, they don't ask the question "where", but "what". The translation
> > > between "what content to modify" and "where to save it", mixes concerns.
> >
> > Based on some experience, I agree with you Stefano. One example, newspaper
> > editors deal with departments, 'beats', teams, campaigns, what have you,
> > many of which cross over. Their organization certainly doesn't have to be
> > hierarchical. And newspaper writers are very accustomed to filing stories
> > from far away without a 'desktop' metaphor at all. In fact, they're so
> > _extremely_ time sensitive (down to beating other competing outlets by
> > seconds with a news story), they rarely use cumbersome, slow, generic word
> > processors, they use highly customized apps.
>
> Bingo!! exactly. I'm trying to cover exactly those use cases where
> people want to focus on their job (writing content) rather than figuring
> out where to save it in order not to break the system!
>
> > > How so? A XUL-powered mozilla IDE that integrates skeleton-based
> > > in-place editing capabilities.
> >
> > I don't understand. Why not just say 'XUL for in-place editing'? Why does it
> > have to be mozilla based?
>
> Good point. :)
>
> You're right: "xul for in-place editing", then Mozilla is just one of
> the possible implementations.
>
> Note: XUL is proprietary, but can you say that something is proprietary
> if it's done by an open source project? IE HTML is, in fact, more
> proprietary than XUL from that point of view, even if it's a well known
> standard.

XUL isn't so proprietary: other implementations and tools are coming. Do you know JXUL (), a XUL engine written in Java/Swing, and Cocktail (), which contains a XUL GUI builder?

> > > So, should webdav be integrated into Cocoon? well, at this point, it's
> > > early to tell and it depends on the shape that the CMS part will take.
> > >
> > > One thing in my mind is for sure: Cocoon shouldn't become a CMS, but
> > > might well adapt the CMS API with the XUL-powered solution and, in that
> > > case, it might require some WebDAV capabilities of some sort.
> >
> > Ah, there it is without any choices about what CMS or editor I have to tie
> > myself to :-)
>
> Exactly: webdav can allow all usage editing solutions to coexist.
> Wouldn't this be great?

--
Sylvain Wallez
Anyware Technologies
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200111.mbox/%3C3BE7DA81.434DAC57@anyware-tech.com%3E
Are you sure? This action might not be possible to undo. Are you sure you want to continue? 6 A6 M952 Muhammad Anwar ........... 1TIONAL I~LAMIC UNIVERSITY MALAYSIA ABSTRACT Economists, bankers, jurists, and other Islamic scholars interested in the discipline of banking have mainly focused on the issue of whether interest is riba, and if yes then how to conduct interestfree banking? To this end, several alternatives to interest have discovered and very successfully put into practice by the contemporary Islamic banks. The alternatives include some trading models like murabaha, bai mu'ajjai, bai salam, bai eenab and others. The current study identifies a major problem in using these trading modes for the purpose of financing. This analysis also uncovers some other discrepancies between application of these modes and related injunctions from the Qur'an and Sunnah. The concentration on the issue of riba in banking has kept a larger issue out of sight. -. The issue is whether a banking system itself, comprising central banks and commercial banks, irrespective of the use of interest, is consistent with the injunctions of Islam. The concepts of the Money Expansion Multiplier and Quantity Theory of Money reveal that the central banks and commercial banks, irrespective of the use of interest, is consistent with the injunctions of Islam. The concepts of the Money Expansion Multiplier and Quantity Theory of Money reveal that the central banks and commercial banks usurp people's property in the form of seignorage by, expanding money supply. Such outcome is also condemned by express verses in the Qur'an. It is therefore concluded that the ehtte, bin\Bl1£ ~~tcb;b · and the operations of contemporary Islamic banks are of doubtful validity. Hence some measures are suggested for restructuring of banking to advance the cause of Islamization. 
111 ISLAMICITY OF BANKING AND MODES OF ISLAMIC BANKING Muhammad Anwar Department of Economics Kulliyyah of Economics and Management Sciences International Islamic University Malaysia Financing on the basis of interest has been declared an illegitimate mode of finance from an Islamic point of view. Therefore several interest-free banking institutions, called Islamic banks, have sprung in many countries since early 1960s that cater for the banking needs of the Muslim population. Although these banks have successfully replaced the practice of interest with other modes like mudarabah, musharakah, bai murabahah, bai bithaman ajil, bat' eenahj• and ijarah. doubts regarding their lslamicity still persist. The current study explains the reasons behind those doubts. In addition, the study also examines the Islamicity of the functions of central banks and the commercial banking system. It is found that functions of both the central banks and the commercial banking system are contrary to teachings of the Qur'an. Therefore some measures are forwarded that, if adopted, would enhance Islamicity of the entire banking system. The analysis here proceeds as follows. Some intellectual and business developments are reported in section 2 that indicate an astounding success in the area of Islamic banking. In section 3, Islamicity of the modes of financing applied by the Islamic banks 1 is examined in the light of a definition of riba derived from the Qur'an and Sunnah. Validity of the central banking and commercial banking system is discussed in sections 4 and 5 respectively while measures to enhance Islamization of banking are presented in the last section. Intellectual and Business Progress Towards Islamic Banking: An Overview Declaring interest as riba and consequent detection of interestfree modes from the fiqh literature to replace banking operations paved the way for emergence of Islamic banks that have, in general, shown a remarkable success. 
The intellectual and business success covering a debate on the question whether interest is riba, discovery of interest-free modes for banking and a business profile of Islamic banks are briefly reported here. A Debate on Riba All people who deal with banks in the capacity of depositors as well as borrowers are well aware of the practice of interest in banking operations. Resemblance of interest with riba naturally made devout Muslims restless because of the prohibition of riba by _ the Qur'an in the following words "give up whatever is left in lieu of riba if you are indeed believers, If you do not do so then take a-notice of war against you from Allah and His prophet"0/Baqarah, 2:278-279). This shows importance of the debate that raged among the supporters and opponents of banking interest on the question whether interest is riba. Numerous studies have discussed the matter and the scholars have advanced opposing opinions on this issue. A group of scholars opine interest is not riba. For example, Hashmi claims that interest is nothing but 2 mudarabah (PLD, 2000, 654) while Tantawi, the Grand Shaikh of Al-Azhar, declared that the interest-based banking is akin to mudarabah and murabahah.1 However, majority of the contemporary writers holds the view that interest is riba. Comprehensive discussions on the views expressed and justifications provided by various camps are available in the Pakistan's Federal Shariat Court Judgement on Interest (PLD, 1992) and the Supreme Court Judgements on Riba (PLD, 2000).2 Discovery of Interest-Free Modes for Financing Declaring interest as riba intensified the search for discovery of interest-free modes to conduct banking business. This pursuit has resulted in the discovery of twenty-one (2) operational modes to perform all types of banking transactions on interest-free basis. 
These modes comprise mudar.abah, musharakah, musharakah mutnaqisah, ijarah, ijarah wa iktina, murabahah, bai salam, hai mu'ajjal (bai hithaman aji~, hai istisna, boi eenab, muzara'a, musaqah, qardhulbasan, wakala, service charge, sale on installments, development charges, equity participation, sale and purchase of shares, purchase of trade bills and fmancing through auqaf Institutions providing financing on the basis of these modes are called Islamic banks. This change of name from banking to Islamic banking represents a worthwhile psychological achievement as participants in the banking industry now make use of Islamic terminology in their business discussions. This is an important step towards Islamization of banking even though, as shown below, it is merely a change in form rather than substance of banking business. Replacement of interest was conceived due to strong verdict against rib« and equivalency of interest with riba. Therefore the supporters of Islamic banking addressed practical difficulties and sought solutions to those difficulties that could arise if interest 3 was rejected. The approach has been to retain banlcing structure with all its ramifications except interest. In this pursuit, operational technicalities have taken precedence over other- concerns. The situation is deplorable because adoption of the pragmatic approach has led to a sort of Islamic banlcing that, as found below, also falls within the ambit of nba. However, whatever has been accomplished so far is laudable because its successful progress over its conventional counterpart certainly indicates presence of a yeaming for an Islamic banking system that would be different from the prevailing interest-based system. A Business Profile of Islamic Banks Proportions of financing under each mode €an indicate importance of each technique to the Islamic banks. 
The minimum and maximum proportions in bank financing under each mode, averaged for ten (10) leading Islamic banks, are as follows: murabahah-cumbai mu'ajjaI45%-93%, mu!harakah 1%-20%, mudarabah tD/o-17%, ijarah (leasing) 0%-14% and other modes 0%-30% (Pill, 2000, 308). Husain (n.d, 10) reported Bank Islam Malaysia Berhad's financing by mode for the year ended on 30 June 1999 as: m~r.4bab,!h-C1Im- bai bithaman ajil 91.55%, ijarah 3.41 %, musharakah 0.52%, mudarabah 0.47%, and qardhul-ha!an 2.63%. It is obvious from these figures that murabahah (including bai bithaman aji4 is the most popular mode with the Islamic banks. Of course, one wonders why only murabaha has gained such prominence out of the 21 modes listed above. This question will be addressed later in this study. A related question is how the Islamic banks have fared? According to a report by the International Union of Islamic Banks, there are 176 Islamic banking institutions in the world out of which 47% are in the South and South East Asia, 27% in Gee and the Middle East, 20% in Africa and 6% in the Western countries. 4 ?ER?UST AKAAN SULTAN ABDUL 'SAMAO UNlVEASlTJ PUTRA MALAYSIA Deposits and total assets of the Islamic banks are US$112.6 billion and U5$147.7 billion respectively. Islamic banking is growing at a rate of 10-15% per annum compared with the growth rate of 7% recorded by the global financial services industry (PLD, 2000, 739- 740). According to another account provided by M. Iqbal Khan, head of the Islamic banking division of HSBC London, there are over 200 Islamic banks operating in 65 countries with a population of around 1.3 billion. Islamic banks' capital is US$90 billion that is growing at the rate of 15% per annum. (PLD, 2000, 385-387 & 739). The latest available comparison of performance of Islamic banks with conventional banks is the one by Munawar Iqbal. 
He compared performance of both types of banks of equivalent size during 1990-1998 in terms of equity, deposits, investments, assets, capital-asset ratio, liquidity ratio, deployment ratios, cost-income ratio, return on assets (ROA) and. return on equities (ROE). He concludes "Islamic banks as a group out-performed the former in almost all areas and in almost all years." For example, ROA and ROE for Islamic banks were 2.3% and 22.6% compared with 1.35% and 15% for the conventional banks (Iqbal, 2000, 424-30). By all counts one can conclude that the Islamic banks have fared much better than the interest-based banks. It would be interesting to know the reasons behind this success that are partly addressed in the rest of the paper. One may notice, however, that the techniques applied to measure the performance of Islamic banks are the same as used by the interest-based banks. This shows the Islamic banks are competitors of interest-based banks. In other words, both types of banks are in the same business: taking deposits and extending credit. Proper application of the Islamic modes would have set Islamic banks apart from the conventional banks. It did not happen in reality because the Islamic banks, which were supposed to turn 07 NOV 2006 5 into traders and entrepreneurs, manipulated the financing modes in ways that retained their identity as lenders rather than entrepreneurs. Therefore such measures are needed that would ensure that Islamic banks are transformed from the traditional lenders to the Islamic entrepreneus. Otherwise, question regarding Islamicity of their operations, discussed below, would continue to confuse the Muslim mind. Islamicity of the Modes Used by the Islamic Banks It would have been ideal to find validity of the modes applied by the Islamic banks by direct references from the Qur'an and Sunnah. The records in the revealed sources on the use of financing techniques favorite to the Islamic banks are insufficient. 
Therefore consequences of Islamic banking operations, rather than the techniques themselves, are assessed in the light of the injunctions of Qur'an and Sunnab to determine their Islamicity. Towards this end, first it would be established that time value of money is synonymous with riba in case of all deferred exchange transactions. Thereafter, Islamicity of the fmancing modes and some ancillary concepts applied by the Islamic banks would be evaluated, pardy in terms of the concept of the time value of money. Time Value of Money and Riba Rjba is generally classified into two types: riba al1adhl and riba alnasia. A common component in both types of riba involves exchange of two similar commodities' in different amounts. The difference so accrued to a party is called riba.4 A reading of the ahaditb and theQur'anicverses related to riba shows any gain resulting from exchange of two similar commodities in different amounts is riba. This definition holds true for both spot as well as deferred 6 exchange contracts. Riba in spot exchanges of two similar commodities is evident in many abaditb. For example, the prophet (s.a.w) said «gold for gold, silver for silver, wheat for wheat, barley for barley, dates for dates, salt for salt, like for like, equal for equal, hand to hand. If these types differ, then sell them as you wish, if it is hand to hand" (Muslim). So any gain resulting from spot (hand to hand) exchange of similar commodities, like dates of different qualities, in different amounts was pronounced riba and disallowed by the prophet. Riba in deferred exchanges of two similar commodities is dealt within the Qur'an. Qur'an says, "if you repent (from riba) then your capital sums are for you, deal not unjustly, and you shall not be dealt with unjustly" (al-Baqarah, 2:279). Deferred exchanges normally result in credits and loans. The verse indicates that there . would be no riba if the creditors retrieve only the principal amount from their debtors. 
That means, whatever commodity is the subject of deferred exchange that commodity shall be returned in the original amount irrespective of the period of indebtedness and any amount charged above the principal would be riba. Money' is treated as a commodity in exchange transactions because gold and silver were money during the advent of Islam. Therefore if the debt is in the form of money then the lenders are entitled to receive only the amount lent. Otherwise riba will take place. It is obvious therefore that riba in deferred transactions is nothing but a charge for the period of indebtedness. In the economics literature this charge is also called a time value of money that represents a rental for the use of money for a certain period. That is why interest representing time value of money stands prohibited. This position is confirmed from the verse "if the debtor is in a difficulty then grant him time till it is easy for him to pay)! (al-Baqarah. 2:280). Therefore one must conclude that, according to the Qur'an, time value of money is riba.6 An analysis of the contemporary Islamic banking practices 7 shows that time value of money is a part and parcel of all financing transactions. An Analysis of Islamic Banking Practices As noted above, the Islamic banks have mainly relied on trading modes, like murabahah and bai bithaman ajil, to finance their customers' needs. Heavy reliance on trading modes makes sense for two reasons: firstly, it indicates compliance of the injunctions "Allah has permitted trading and forbidden ribd' (al-Baqarah, 2: 275) and "0 believers, do not devour your properties among yourself by wrong means except selling with mutual willingness»? (Nisaa, 4:29). Secondly, it is convenient to charge time value of moriey in the naI?f ~f f,l"tQiJifd~rion their financing in place of the interest charged by th~~p~ff·tional banks. Similarly. 
mudarabah financing that .Jas in' voglle ~ d~ring' early Islam and was practiced by the prophet (s.a.w) himself has been extended into a two-tier mudarabah because then it would be possible for banks to charge profits in the sense of time value of money. However, it did not gain popularity because the risks are relatively much higher in mudarabah than murabaha. Similar misuses are detected in the use of other Islamic principles applied by the Islamic banks. Therefore Islamic banking practices remain of doubtful validity. Anyway a detailed analysis of salient principles applied by the Islamic banks follows. The basic function of a bank is to accumulate deposits at a cheaper rate to deploy the same at a higher rate and thus earn a spread. Exchanging two similar commodities or two heterogeneous commodities can provide income. If a gainful exchange of two similar commodities is riba then a gainful exchange of two heterogeneous commodities must be selling because ''Allah has permitted selling and prohibited riba" (al-Baqarah, 2:275).8 8 Therefore, earning of legitimate profits must involve exchange of two heterogeneous commodities. 9 Otherwise the earnings would be riba. Both types of banks, Islamic and interest-based, issue credit to seek returns. Islamic banks do not engage in trading activities because they are not interested to become entrepreneurs. Instead they prefer to loan money to the entrepreneurs, like the interestbased banks. Therefore they must find ways and means to charge time value of money, like interest. lOne way is to pose as traders by engaging in a fictitious purchase, adding profit component to the purchase price to arrive at a selling price of the purchased item and then sell the item to the customer at deferred price. So treat the selling price as a credit (loan) due. The difference between the sale price and the purchase price is time value of money that is _. equivalent to interest. 
This is the essence of all financing transactions based on trading modes including bai ad-d'!Jn, murabaha, bai bithaman afil, ijarah and bai eenab. ) The difference is that the interest-based hanks treat the amount advanced (equivalent to the purchase price) as principal loan while Islamic banks treat the amount due at maturity (selling price) as principal loan. However, any observer shall have no qualm about agreeing that the principal has to be the amount that a bank advances in favor of the customer and not the amount the bank expects to retrieve. In this way it is clear that the profit added to the principal is nothing but riba. It is also true because Islamic banks use the same formulas and annuity tables for computing amount due and monthly installments for (say) bai bithaman ajt'l and ijarah transactions" which are used by the interestbased banks. In sum, the loan from an Islamic bank represents the amount advanced plus time value of money. Normally the customer is asked to purchase the desired item but in the name of the bank. If the selling price (debt) is payable in lump sum then the transaction 9 'f tJ 0 0 513 :; j 2 (in Malaysia) is called a murabaha (instead ofbai mu'ajjal). If the debt is payable in installments then the transaction is called a bai bithaman ajiI. Sometimes a customer is interested in buying use of a commodity and not the commodity per se. So the utility of a commodity may be sold to a customer by an Islamic bank under an ijarah (lease) contract. There are two types of lease: operating lease and financial lease. Banks practice only financial leasing because it is convenient to embed time value of money, as in case of bai bithaman ajil, so the amount due on the financial lease becomes a debt due. 11 All these financing transactions fall under the category of baiad-deyn because baiad.dayn refers to a transaction whereby a commodity (or service) is bought at a deferred price. Dayn is permitted in the Qur' an. 
12 In Malaysia, bai ad.dayn refers to a situation of buying and selling a debt, withoutengaging a commodity (BIMB, 1994, 104-105). Scholars outside Malaysia do not accept this interpretation because an authentic hadith prohibits sale of kali bil·kali, that is, debt against debt. (PLD, 2000, 566) Sometimes the bank may buy an item from the customer himself (instead of requiring him to buy something on behalf of the bank) at a lower spot price and sell the same back to the same customer at a higher deferred price. This is a bai eenah transaction. The bank, in order to conduct a bai eenah transaction may reverse the buying and selling role. That is, a bank may sell something to the customer at a higher deferred price and buy it back from the same customer at a lower spot price. Bai eenah is prohibited on the basis of an authentic hadith that refers to a conversation between a lady, Umm Muhibbah, and Aisha (r.a.a). The lady sold an item to Zaid bin Arqam (r.a.a) at a deferred price of 800 dirhams. Later on he decided to sell that item and the lady bought the same item for 600 dirhams. On hearing this, Aisha (r.a.a) became furious and said it was a wrong deal. She shall inform Zaid bin Arqam (r.a.a) that he 10 has wasted his hajj and jihad by doing so. (Kakakhel, 1984, 10).13 Notice that these are apparently two heterogeneous exchanges yet they are prohibited because these two exchanges boil down to a single gainful exchange of money with money. Commodity may be brought into the picture to put a trading label on a lending transaction that yields riba. If bai eenah is prohibited for this reason then other transactions like murabahah involving debt are also subject to prohibition. It is instructive to judge these transactions in the light of the following hadith. Imam Awzai reported that prophet (p.b.u.h) has said: "A time shall come to mankind when they willlegalise riba under the garb of trade" (PLD, 2000, 518). 
The main problem in each of these transactions is that time value of money (riba) creeps into the banking transactions whenever a trading device is applied to render a financing facility. One of the difficulties faced in Islamization of the banking system, argues Ebrahim Sidat, is that people consider murabahah as a financing instrument instead of treating it as a trading device (PLD, 2000, 373). Allah has made it obligatory to document ad-dayn transactions (see al-Baqarah, 2:282-283 in footnote 12 above). The documentation shall be witnessed by at least two persons. The debtor shall dictate the contents of the document. Allah deems the documentation to be just, suitable for evidence and convenient to prevent doubts. Collateral, to serve these aims, is permitted provided preparation of documentation is not feasible. Therefore collateral (rehn) is meant to serve as a proof of the deferred transaction in lieu of documentation, not as a surety to recover the debts. Islamic banks require collateral14 as surety. Lawyers and solicitors prepare sophisticated documentation at the behest of the bank, not the debtor. Yet documentation cost is charged from the debtor because "preparation of the document of loan has been held to be the responsibility of the borrower which naturally means that if the documentation involves some expenses, they will be borne by the borrower" (PLD, 2000, 315). These transactions are also justified on the pretense of a willing buyer willing seller situation to comply with the injunction on "trading with mutual consent" (Nisaa, 4:29). Trading with mutual consent, a condition that is dictated in the Qur'an, shall not be violated. However, fulfillment of this condition is not sufficient to legitimize every transaction. It is well known that transactions involving riba, gambling and illegitimate sex are prohibited even if the condition of willing seller willing buyer is met.
In the case of murabahah, for example, the fact that a customer agrees to buy an item (say a house) from an Islamic bank at an exploded price15 of $227,644.80 that he himself bought at a much lower price of $100,000 on behalf of the bank from the market is itself a proof that the transaction is made under duress because the customer lacks funds to buy the item. This arrangement provides him a debt against time value of money of $127,644.80. Therefore mutual willingness of an Islamic bank and its customer for the conduct of transactions containing a charge over and above the principal (market price) due to consideration of time is not a sufficient reason for validity of the transactions. Moreover, in Malaysia, the customers are entitled to a discount, calculated on the basis of the same formulae applied to calculate bank profit, for the period of early payment if the debt is cleared before maturity. This confirms that there is no difference between the profit charged by the Islamic banks and the interest charged by the conventional banks. Hence, by all counts, Islamic banks are operating on the basis of time value of money that, of course, is riba. Banks may acquire deposits on the basis of mudarabah and advance the same to a third party to conduct business on the basis of another mudarabah. This is called two-tier mudarabah in the literature. This way the banks can benefit by exchanging money with money in different amounts. Otherwise the banks would have to directly channel deposits into trading activities themselves to make profits. Mudarabah financing was commonplace during the life of the prophet (p.b.u.h). However, no instance can be traced whereby one party obtained funds on mudarabah from another party and forwarded the same on mudarabah to a third party to conduct business.
In this regard, Khattab writes, "fuqaha are in agreement that a mudarib is not entitled to forward mudarabah money to a third party for business" (Khattab, 1998, 58).16 Naturally, the validity of the two-tier mudarabah is questionable. In Islamic banking, qardhul-hasan refers to a zero-interest loan. Under this view, hasan is seen as the forgone interest that banks could make otherwise. Calling interest hasan is problematic because that would mean recognition of interest as legitimate earnings. Qardhul-hasan is mentioned in the Qur'an at least six times.17 The Qur'an uses it to imply spending in the way of Allah because every time it is commanded to "lend qardhul-hasan to Allah." So it refers to loans from people to Allah only. In fact, hasan refers to the sacrificed principal, not the interest that is sacrificed when a loan is given to Allah. In other words, qardhul-hasan is a form of sadaqah.18 It does not represent loans among the people. Therefore, it is suggested that use of qardhul-hasan for zero-interest loans shall be avoided. The principle of hasan-e-ada (better repayment) refers to reimbursement of loans with a voluntary additional amount to the lenders. Paying an extra amount voluntarily on a borrowed sum is encouraged by the prophet who has himself set the precedent by paying more than the borrowed sum (PLD, 1992, 70). Under the Islamic banking practices the voluntary payments have been so institutionalized that they have assumed the status of interest. For example, Islamic banks regularly pay returns on current and savings deposits, like the conventional banks pay interest. Similarly, the Malaysian government regularly pays returns to holders of Government Investment Securities issued on the basis of qardhul-hasan. Hence, Islamic banking has assumed the practice of interest in the name of 'voluntary' payments by the borrowers to the lenders. In sum, all the practices analyzed here, being contrary to the Islamic injunctions, are of doubtful validity.
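As a concrete illustration of the time value of money embedded in the trade-based modes discussed above, the sketch below applies the standard annuity formula that both conventional and Islamic banks use for installment facilities. The 10% annual rate and 20-year tenor are illustrative assumptions, not the actual terms behind the $227,644.80 example; the point is only that the deferred selling price equals the amount advanced plus a computed time value of money.

```python
# Illustrative sketch: deriving a deferred selling price on a bai bithaman
# ajil / murabahah facility from the standard annuity formula.
# The 10% rate and 20-year tenor are assumptions for illustration only.

def monthly_installment(principal, annual_rate, years):
    """Standard annuity formula: installment = P*r / (1 - (1+r)^-n)."""
    r = annual_rate / 12          # periodic (monthly) profit rate
    n = years * 12                # number of installments
    return principal * r / (1 - (1 + r) ** -n)

principal = 100_000.00            # amount the bank actually advances
installment = monthly_installment(principal, 0.10, 20)
selling_price = installment * 240 # total deferred "selling price" (the debt)
time_value = selling_price - principal

print(f"installment:   {installment:,.2f}")
print(f"selling price: {selling_price:,.2f}")
print(f"time value:    {time_value:,.2f}")
```

The same formula, run with a conventional interest rate in place of the profit rate, produces an identical schedule, which is the identity of formulas the text criticizes.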
However, it would be found below that it is a minor problem compared with problems related to the functions of central banks and the banking system as a whole.

Islamicity of Central Banking

Whenever a government runs a deficit, there are two methods to finance it. These are: (i) borrowing by issuance of additional government bonds and, (ii) printing additional high-powered money. The choice between borrowing and money creation is made by the central banks. The central banks issue fiat money that acquires the status of high-powered money. Fiat money is money that does not represent a claim to any physical commodity but instead is backed by laws that require money to be accepted in all legal transactions (Farmer, 1999, 186). When more high-powered money is issued, nominal money supply grows and that, in turn, increases aggregate demand in the economy. Expansion of money supply through bank advances leads to a situation of too much money chasing too few goods. This excessive demand for goods turns into growth of output and prices. How the increased demand influences the output growth and inflation depends on the elasticity of the supply of goods and services. If the supply is completely inelastic then all the money supply growth turns into equivalent inflation. This is apparent from the classical quantity theory of money (Cobham, 1998, 54-56). The quantity theory of money can be stated as follows:

MV = PQ .... (1)

where M stands for the quantity of money supply, V for the velocity of circulation of money, P for the price level and Q for real output. Equation (1) is an identity that shows that the quantity of money times the velocity of money must equal the price level times the real output. The same equation, in growth terms, can be re-written as:

M̂ + V̂ = P̂ + Q̂ .... (2)

where a hat over a variable denotes its growth rate. Equation (2) shows that the sum of the growth rates of money supply and velocity of circulation must equal the sum of the growth rates of prices (inflation) and output.
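The accounting in equations (1) and (2) can be checked numerically. In the sketch below the growth figures (8% money growth, constant velocity, 3% real output growth) are assumed for illustration; the growth-rate form is the usual approximation, so the exact inflation implied by the level identity differs slightly.

```python
# Numerical check of the quantity-theory identity MV = PQ and its
# growth-rate form M^ + V^ = P^ + Q^ (hats denote growth rates).
# All growth figures are illustrative assumptions.

g_money    = 0.08   # growth of money supply (M^)
g_velocity = 0.00   # growth of velocity (V^), assumed constant here
g_output   = 0.03   # real output growth (Q^)

# Rearranging the (approximate) growth identity: inflation = M^ + V^ - Q^
inflation = g_money + g_velocity - g_output
print(f"implied inflation (approx): {inflation:.2%}")

# Cross-check in levels: MV = PQ must hold before and after.
M, V, P, Q = 1000.0, 4.0, 1.0, 4000.0   # base year, so MV = PQ = 4000
M2 = M * (1 + g_money)
V2 = V * (1 + g_velocity)
Q2 = Q * (1 + g_output)
P2 = (M2 * V2) / Q2                     # price level restoring MV = PQ
print(f"implied inflation (exact):  {P2 / P - 1:.2%}")
```

With completely inelastic supply (zero output growth), the same arithmetic turns all money growth into inflation, which is the inelastic-supply case stated in the text.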
Assuming no change in the velocity of circulation, any growth in money supply due to the printing action of a central bank will be translated into inflation and growth in output. If all the growth in money is translated into growth in output only, then the increased output is transferred to the central bank. Otherwise the government would get people's property to the extent of the "inflation rate times real high-powered money" (Gordon, 2000, 385). This transfer to the state is called seignorage, and is also known as inflation tax. Seignorage is defined as the "difference between face value and intrinsic value of money" (Anwar, 1987, 295). It results from expansion of money supply because the real value of currency units held by the public reduces due to inflation. In other words, seignorage represents a transfer of ownership from the holders of money to the creators of money because of inflation. Injection of fiat money directly creates seignorage (inflation tax) for the government and so transfers real property of unaware people to the state authorities. Channeling of public funds to the authorities by foul means is in direct violation of Allah's command "do not eat up your property among yourselves by foul means nor channel it to the authorities ... wrongfully and knowingly" (al-Baqarah, 2:188). Therefore, it is obvious that the institution of central banking cannot be Islamic because it violates the express Qur'anic verdict regarding wrongful channeling of peoples' property to the authorities. Money is whatever people accept as a general medium of exchange. "The prophet of Islam is reported to have said that Allah has created gold and silver to be the natural money" (PLD, 2000, 482). Central banks channel public property to the government coffers by creation of fiat money. As paper money issued by the governments is invalid,19 it is up to the people to revert to a valid form of money. If people wish they might revert to the use of gold and silver as money.
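The transfer Gordon quantifies as "inflation rate times real high-powered money" is straightforward arithmetic, sketched below with assumed figures for illustration.

```python
# Sketch of the inflation-tax (seignorage) calculation cited from Gordon:
# transfer to the state = inflation rate * real high-powered money.
# Both inputs are assumed figures for illustration.

def inflation_tax(inflation_rate, real_high_powered_money):
    """Real purchasing power transferred from money holders to the issuer."""
    return inflation_rate * real_high_powered_money

real_h = 10_000.0          # real high-powered money held by the public
pi = 0.05                  # 5% inflation driven by money printing
tax = inflation_tax(pi, real_h)
print(f"inflation tax: {tax:,.2f}")
```

Under a fully backed commodity standard the inflation attributable to money printing would be zero, and so would this transfer, which is the motivation for the reversion to gold and silver discussed here.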
In fact, moving from fiat money to gold and silver will also prevent diversion of peoples' property to the authorities by foul means. Central banks also contribute to the accrual of seignorage to the commercial banking system when they borrow money from the commercial banks by issuing bonds to meet budget deficit requirements of the state. As governments borrow money on the basis of interest, debt servicing leads to an increase in budget deficits. This necessitates, again, issuance of more high-powered money and further borrowing from the banking system.20 Seignorage accrues to the state according to the fiat money created by the central banks. Incidentally the same fiat money, being high-powered money, becomes the basis for deposits and seignorage for the commercial banking system. In this way, not only do central banks themselves violate the Qur'anic injunctions; they are also responsible for sowing the seeds for the wrongful growth of seignorage that accrues to the commercial banks. How it happens is elaborated in the next section.

Islamicity of Commercial Banking System

Banking has grown because: (i) a fraction of the deposits is kept as reserves to meet withdrawals by the depositors and (ii) receipts are accepted in lieu of money (Farmer, 1999, 184-87). Suppose21 someone deposits $100 of cash (high-powered money) into the banking system. Assuming that the depositors rarely withdraw more than 10% of their deposits, the bank decides to keep reserves equal to just 10% of total deposits and grants loans equal to the remaining 90% of total deposits. This depositing and lending of the high-powered money starts the process of money creation by the commercial banks. Suppose the merchant from whom the borrower bought the merchandise redeposits the $90 into a bank. This raises total deposits to $190 and the bank again has the $100 in cash. Keeping 10% of all deposits requires the banks to retain $19 as reserves; the remaining $81 can be loaned out by the banks.
The banks can continue retaining 10% of total deposits in the form of reserves and loaning out the rest. This process can continue until total deposits equal $1000 and all the $100 cash (being 10% of total deposits) is withheld by the banks as reserves. However, four conditions must hold for the banks to turn $100 of high-powered money into a money supply of $1000. These conditions are: (i) bank deposits (i.e. checks) must be accepted as means of payment, (ii) any consumer or business firm receiving a cash or check payment must deposit it back into the banking system, (iii) the bank must hold some fraction (e.g., 10%) of its reserves in the form of cash, and (iv) businesses and households must be willing to borrow whatever amount the banks want to lend (Gordon, 2000, 428). The money creation process can mathematically be expressed as follows:

D = H/e .... (3)

where H stands for the amount of high-powered money, e for the reserve-holding ratio of commercial banks and D for the amount of deposits (that become money supply). The ratio (1/e) is called the money creation multiplier. The multiplier tells us by how many dollars the money supply will expand for every dollar of high-powered money deposited into the banking system. Of course, the value of the multiplier in reality depends on several factors. However, one principle is obvious: the lower (higher) the reserve ratio, the higher (lower) the money supply and so the concomitant seignorage for the banking system. For example, if e = 0% then the multiplier would be infinity. This means that even one dollar deposited into the banking system has the potential to stretch into an unlimited amount of money, at least in theory. That is why the central banks impose a required reserve ratio that curtails the unlimited power of the commercial banks to create money supply.22 If e = 5%, then the multiplier equals twenty (20).
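The round-by-round deposit expansion described above and the closed form D = H/e in equation (3) can be cross-checked against each other. The sketch below assumes the four conditions from Gordon hold exactly.

```python
# Iterative deposit creation versus the closed-form multiplier D = H/e.
# H = 100 of high-powered money and reserve ratio e = 10%, as in the example.

def total_deposits(high_powered, reserve_ratio, rounds=500):
    """Simulate: deposit arrives, (1 - e) is loaned out and redeposited."""
    deposits = 0.0
    injection = high_powered
    for _ in range(rounds):
        deposits += injection                # new deposit enters the system
        injection *= (1 - reserve_ratio)     # loaned-out part is redeposited
    return deposits

H, e = 100.0, 0.10
simulated = total_deposits(H, e)
closed_form = H / e                          # equation (3): D = H/e
print(f"simulated deposits:  {simulated:,.2f}")
print(f"closed form D = H/e: {closed_form:,.2f}")
print(f"multiplier 1/e:      {1 / e:.0f}")
```

Running the same function with e = 5% gives total deposits of $2000 from the same $100, i.e. a multiplier of twenty, the case taken up next.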
That means every dollar deposited will expand into twenty (20) dollars, out of which 19 dollars go to the coffers of the commercial banks. These nineteen dollars accrued to the banking system are referred to as seignorage in the literature. If the banking system can own 19 dollars for each dollar of deposits then imagine how much of peoples' wealth goes to the coffers of the commercial banks due to the entire amount of deposits of high-powered money. The expanded money supply resulting from depositing of the high-powered money comes under ownership of the banking system. How much of this goes to each bank depends on the reflux ratio, the percentage of each bank's deposits that are re-deposited into the same bank (Jaffee, 1989, 339). This is definitely a wrongful devouring of people's property by the commercial banking system which contradicts the repeated command "do not devour your properties among yourselves through false means" (al-Baqarah, 2:188; Nisaa, 4:29). This seignorage has two consequences: (i) bankers acquire ownership of wealth to the extent of the seignorage without corresponding delivery against it and, (ii) the increased money supply is responsible for increased inflation that causes havoc in the society. This seignorage would accrue to the banking system even when loans are advanced at a zero rate of interest. Whatever the banks earn in the form of contractual interest or profit is over and above the seignorage amount. Would banks advance loans if there were no interest charge on the loans? As the seignorage will accrue to the banks even if the advances were made free of charge, the absence of interest would not deter banks from lending money because otherwise they would lose their share in the seignorage. Supporters of Islamic banking in the ranks of fuqaha, economists, bankers, and others mainly focused on the interest-based transactions as deals between banks and their clients.
This outlook, perhaps inadvertently, led to negligence of the larger issue of the legitimacy of the banking system itself. The banking system, representing the institutional arrangements for collecting deposits and making advances in a fractional reserve system, is in violation of the explicit Qur'anic verdict that forbids devouring of peoples' money in the following words: "O believers, do not eat your properties among yourselves through false means" (Nisaa, 4:29). Accrual of seignorage to the banking system depends on several factors (Gordon, 2000, 428) that are determined by the attitude and behavior of the public toward the banks. If there were no deposits into the banking system then there would be no seignorage for the banks. Even if the people deposit but they do not borrow then, again, there would be no seignorage. If people borrow but do not redeposit their borrowings into the banking system then, again, there will be reduced seignorage. In a nutshell, accrual of seignorage to the banking system is in the hands of the public. The banks cannot accumulate seignorage if the people do not provide the opportunity for it. It is the attitude of people that opens up opportunities for the banks to behave greedily and dishonestly. Suppose all the money created by the banks is translated into real growth so that there is no inflation and no reduction in the real value of deposits. Then do the commercial banks have any advantage? Yes, because they, being the creditors, still become owners of the deposits created through this process. Therefore, inflation or no inflation, the commercial banks would enjoy undue advantage thanks to the money expansion multiplier process. The government enjoys a similar advantage when money supply is increased by the central banks as it acquires ownership of the peoples' property in exchange to the tune of the money so created.
The difference is that the central banks create money directly while the commercial banks create money indirectly through leveraging the deposits. If all the depositors turned to the banks to withdraw their deposits then would they be able to receive back their deposits? Surely not, because of the fractional reserve system that grew out of the dishonest practices of the early goldsmiths with whom people used to keep their trusts. Therefore Allah's orders like "Allah commands you to render back your trusts to those to whom they are due" (Nisaa, 4:58) and "do not misappropriate knowingly things entrusted to you" (Anfaal, 8:27) can never be complied with in the presence of the fractional reserve system. Moreover, due to expansion of money supply by the banks, the resulting inflation means that the real value of the deposits falls. This means that the deposits withdrawn from the banks have diminished value. This amounts to a clear violation of not only the above commands but also of those commands that require to "give not short measure or weight" (Hud, 11:84) and "give full measure and full weight" (Hud, 11:85). Accrual of the seignorage to the commercial banks by devaluing the money holdings of people through inflation necessarily favors concentration of real wealth into few hands, an outcome contrary to the Qur'anic command that wealth "does not make a circuit among the wealthy among you" (al-Hashar, 59:7). The tendency of concentration of wealth into few hands is due to the seignorage that would happen even if the loans were issued at a zero rate of interest. Therefore, the credit system imposes a larger problem compared to the practice of interest.

Measures Towards Enhanced Islamization of Banking

It is clear from the above analysis that central banks provide high-powered money whereby they devour peoples' property wrongfully. The high-powered money is the genesis for the deposits made by the people to the commercial banking system.
The commercial banks advance those deposits to needy individuals and businesses and so they reap benefits in the form of an implicit return (seignorage) and an explicit return (interest or profit) on their credit. Both types of returns are found to be inconsistent with the commands of Allah. Therefore, further Islamization is needed in: (i) creation of fiat money by the central banks, (ii) creation of money supply by the commercial banks and (iii) elimination of the riba component from the Islamic banking transactions. Therefore some measures are presented below which, if implemented, would enhance Islamization of banking. The measures suggested include: (i) the replacement of fiat money with commodity money by the central banks, and (ii) the splitting of all commercial banking activities into two subgroups of investment banks and social banks.23 This action is meant to reform the financing side of the commercial banks for transforming them from mere financial intermediaries to entrepreneurs. These measures are elaborated below. At the level of central banking, efforts shall be made to revert from the present fiat money standard to the commodity money standards, specifically gold and silver, which prevailed at the advent of Islam. In this regard, the paper money may continue to circulate but it shall be fully backed by gold and silver. Extraneous paper money shall be withdrawn from the economy, perhaps by selling state assets to the public. Adoption of commodity money standards will eliminate the seignorage-generating role of the central banks. It is a formidable challenge to meet these requirements. Some writers, like al-Jarhi and Kahf, have proposed a 100% reserve requirement for commercial banks to eliminate the seignorage generated to them (Siddiqi, 1983, 45). This would mean that the banks would become mere depositories and the money would be hoarded in their vaults. But hoarding of money is condemned by Allah (al-Humazah, 104:1-3).
That is why a restructuring of the entire commercial banking system is suggested here along the following lines. Firstly, it is suggested that all the functions and activities of contemporary commercial banks shall be classified into several types of entrepreneurial tasks so that each class is akin to a business of some industry outside the banking arena. These business houses may be called investment banks. In this way all commercial banks shall be split into various types of specialized investment banks. The investment banks can solicit deposits on the basis of mudarabah and musharakah for employing the funds themselves in production and trading activities of their choice. However, they shall be barred from extending those deposits to a third party. The depositors shall be duly rewarded with their share in the profits, if any. Each investment bank may specialize in a suitable productive activity to earn profits by carrying on business in trading, industry, manufacturing, agriculture, leasing and services. In a nutshell, most of the banking operations shall be transformed into usual entrepreneurial business entities. Once the commercial banks operate according to their specialized categories then they would no more be involved in the business of advancing credit to earn riba. They will have to use their entrepreneurial expertise, like all other profit-seeking businesses, to earn profits. Secondly, transform all banking activities of accepting return-free deposits into a single network of social banks under the management of a trust comprising representatives from all walks of life in the society. The network can collect return-free deposits and issue return-free loans for socioeconomic purposes. The representatives would ensure fair distribution and use of credit for meeting essential needs of different segments of the population.
However, it would be distinct from the contemporary banking system in several respects: (i) there will be no riba involved in financial transactions on either deposits or loans, and (ii) loans will be issued only to serve identified individual and social needs. Despite giving return-free loans the social banks will continue to enrich themselves through the seignorage. We have noticed that the seignorage is a result of the inflation arising from the money supply created by the lending activities of the depository institutions. Therefore the seignorage would accrue to the social banks as they will be involved in soliciting deposits and advancing loans, although inflation will be much lower compared to the contemporary banking system. The social banks can spend the seignorage so created to contribute in the direction of observance of Allah's command that the funds shall not make a circuit among the rich people.24 The deposits shall be utilized to issue return-free loans to needy people as well as loans to state organs for the sake of social projects. In any case, the seignorage that is the property of the society will be used for development of the society rather than enriching private commercial banks. Those people who are unable to sustain themselves in the face of inflation, or otherwise, have a right to get financial grants from the social banks. Providing grants and assistance from the seignorage to those individuals who are hurt by inflation would be their legitimate right. The main idea of this proposal is to transform the seignorage into a form of sadaqah so that what is extracted from the society is spent on the society. In other words, society pays and society receives. This is unlike the seignorage accrued to the commercial banks that thrive at the expense of the entire society.25 This is a brief exposition of what course of action is consistent with the Qur'an and Sunnah in the discipline of banking.
The views expressed here are undoubtedly drastically different from the views held by contemporary Muslim scholarship. However, it is expected that Muslim scholars will sooner or later realize the truth and make concerted efforts to replace the present banking system with another system that would reflect a faithful observance of the commands of Allah and His messenger (p.b.u.h).

Endnotes

1. Incidentally, the current study will demonstrate below how the applications of mudarabah and murabahah by Islamic banks resemble interest-based banking. If it is accepted that the nature of returns to Islamic banks is not different from the interest to conventional banks then all the evils ascribed to interest would remain intact despite changing the conventional banking into Islamic banking. The list of moral, social and economic evils associated with the practice of interest is indeed very long. See PLD, 2000, 529-537 for a detailed discussion.

2. In fact, the judgement delivered by a full bench of the shariah judges is an authentic document that sums up the present state of theoretical and applied knowledge on Islamic banking. It also contains all shades of opinions expressed by the prominent jurists, bankers, lawyers, statesmen and economists. It covers all issues related to Islamic banking. That is why the current study has heavily relied on it.

3. Similar commodities cover both homogeneous as well as differentiated products like dates of all types.

4. Sometimes the transactions that generate riba are also referred to as riba, as seen in the following verse: "Allah has permitted bai' and prohibited riba" (al-Baqarah, 2:275).

5. Money has the same status in exchange as any other commodity. In other words, money must be a commodity with its own intrinsic value in an Islamic economic system, otherwise recovery of the principal stressed in the Qur'an (al-Baqarah, 2:275) would be meaningless.

6. A recent fatwa has declared that use of paper money is haram. See Vadillo for details.
7. Views regarding prohibition of time value of money differ. Time value of money is a key concept in the financing business. A detailed discussion on the issue of time value of money as riba is available in Alkaff (1986).

8. Mutual willingness is an essential factor in trading. However, exploitation on the pretense of mutual willingness is not allowed. For example, riba is prohibited even though it happens with mutual consent of the parties. Similarly, talaqqi al-jalab, the practice of the people of Madina to meet farmers outside the town and purchase grain from them, was disallowed (Afzal-ur-Rahman, 1975, 44) by the prophet (s.a.w) even though the trading must have been conducted with mutual willingness. A detailed discussion of tenets and injunctions related to trading and marketing is given in Anwar and Saeed (1996).

9. The preceding part of the same verse says, "they say, selling is like riba". Selling is confused with riba transactions because both are income generating exchange contracts. They are reminded that income from exchange of heterogeneous commodities is due to selling but income from exchange of similar (including money) commodities is riba. In exchange activities, earnings accrue to sellers. Therefore, one may focus on the selling aspect in exchange as stated in the Qur'an while referring to the distinction between selling and riba.

10. Please refer to BIMB Institute of Research and Training Sdn. Bhd. (1996). Compare the formulas for computing monthly rentals given on page 10 with the formulas for monthly installments on a bai bithaman ajil contract given on page 13. Both formulas are identical.

11. In the case of a financial lease the commodity is in the control of the customer while ownership is with the bank. In the case of bai bithaman ajil the customer is the owner but the bank keeps the title of ownership as collateral.

12. The Qur'an states: "O ye who believe! When ye deal with each other in transactions involving future obligations in a fixed period of time, reduce them to writing.
Let a scribe write down faithfully between the parties; and let not the scribe refuse to write as Allah has taught him, so let him write. Let him who incurs the liability dictate, but let him fear Allah. Disdain not to reduce to writing (your contract) for a future period, whether it is small or big. It is more just in the sight of Allah, more suitable as evidence, and more convenient to prevent doubts among you. But if it is a transaction that you carry out on the spot among you, there is no blame on you if you reduce it not to writing. But take witnesses whenever you make a commercial transaction and let neither scribe nor witnesses suffer harm. If you do (such harm), it would be wickedness in you. So fear Allah because it is Allah that teaches you. Allah is well acquainted with all things. If you are on a journey and cannot find a scribe, then a pledge with possession (may serve the purpose) ... Allah knows all that you do." (al-Baqarah, 2:282-283)

13. The Fiqh Academy of the Organisation of Islamic Conference (OIC) has permitted such sale on the condition that the second sale transaction should be concluded with a party other than the party in the first sale (PLD, 2000, 359). Can this condition be a barrier to the exchange of money with money by the banks? Is it difficult for banks to circumvent this condition by indulging a fictitious customer as a third party?

14. Collateral is also justified on the pretense that the prophet himself had mortgaged his armor to a Jew. The question is whether the collateral was submitted in lieu of documentation because the prophet (p.b.u.h) was on a journey or was it submitted in addition to the documents.

15. Figures cited here are taken from an example given in the BIMB Institute of Research and Training Sdn. Bhd. (1996).

16. Khattab also notes that fuqaha allow such an arrangement provided the depositor grants permission to do so. This is the basis for recommending two-tier mudarabah. Islamic banks do not practice much of it, anyway.
17. See al-Baqarah: 245, al-Maidah: 12, al-Hadid: 11 & 18, al-Taghabun: 17 and al-Muzammil: 20.

18. A comparison of the contents of al-Baqarah: 261 with any of the verses on qardhul-hasan assures that qardhul-hasan represents nothing but spending (infaq) in the way of Allah.

19. A fatwa was issued in Granada that declares "The use of paper money in any form of exchange is usury and is therefore haram" (Vadillo, 1991, 48).

20. Central banks also contribute towards limiting the expansion of money supply by the banking system because they require the banking institutions to retain a certain percentage of deposits in the form of reserves. Regulations pertaining to reserve requirements limit the capability of commercial banks for expansion of money supply.

21. This example is adopted from Gordon (2000).

22. Imagine what may happen to the money supply if the world adopts a system whereby only electronic money would be used.

23. This can be accomplished like Bank Bumiputra Malaysia Berhad (BBMB) was split into Bumiputra Commerce Bank and Bank Muamalat in Malaysia.

24. See al-Hashar, 59:7 that states "it may not merely make a circuit between the wealthy among you."

25. Chapra has argued that "creation of deposits by commercial banks ... may be recognised in the Islamic system provided that (a) appropriate measures are taken to ensure that the creation of derivative deposits is in accordance with the non-inflationary financing needs of the economy, and (b) that the seignorage realised from derivative deposits benefits society as a whole and not a vested interest group" (1985, 158). The idea of instituting social banks allows utilization of the seignorage for the benefit of the society as envisaged by Chapra.

References

Afzal-ur-Rahman (1975), Economic Doctrines of Islam, Vol. II, Lahore: Islamic Publications Ltd.

Alkaff, Syed Hamed Abdul Rahman (1986), Does Islam assign any value/weight to time factor in economic and financial transactions?
Karachi: Islamic Research Academy.

Anwar, Muhammad and Mohammad Saeed (1996), "Promotional Tools of Marketing: An Islamic Perspective," Intellectual Discourse, Vol. 4, No. 1-2, pp. 15-30.

_____, (1987), "Reorganization of Islamic Banking: A New Proposal," American Journal of Islamic Social Sciences, Vol. 4, No. 2, pp. 295-304.

Bank Islam Malaysia Berhad (1994), Islamic Banking Practice from the Practitioner's Perspective, Kuala Lumpur.

BIMB Institute of Research and Training Sdn. Bhd. (1996), "Al-Bai Bithaman Ajil Financing & Al-Ijara Financing (Module 2)," presented in the Third International Integrated Course on Islamic Banking and Finance during 5-10 August in Kuala Lumpur, Malaysia.

Chapra, M. Umar (1985), Towards a Just Monetary System, Leicester: Islamic Foundation.

Cobham, D. (1998), Macroeconomic Analysis: An Intermediate Text, Second edition, London: Longman.

Farmer, Roger E.A. (1999), Macroeconomics, Cincinnati: South-Western College Publishing.

Gordon, J. Robert (2000), Macroeconomics, Eighth Edition, Reading (Massachusetts): Addison Wesley Longman, Inc.

Holy Qur'an.

Husain, Ahmad Sanusi (n.d.), "Debt Financing," a seminar paper on Islamic Banking and Finance organized by the BIMB Institute of Research and Training Sdn. Bhd.

Iqbal, Munawar (2000), "Islamic and Commercial Banking in the Nineties: A Comparative Study," in the proceedings of the Fourth International Conference on Islamic Economics and Banking held in Loughborough University during August 13-15, pp. 409-431.

Jaffee, M. Dwight (1989), Money, Banking and Credit, New York: Worth Publishers, Inc.

Kakakhel, Mufti Sayyah ud-Din (1984), "Bai-Muajjal and Bai-Murabahah" (Urdu), paper presented in the Seminar on Islamic Financing Techniques organized by the International Institute of Islamic Economics on 22-24 December.
Khattab, Muhammad Sharfuddin (1998), Mudharaba System in Islamic Fiqh, translated into Urdu by Muhammad Tahir Mansuri, Islamabad: International Institute of Islamic Economics, International Islamic University.

P.L.D. (Pakistan Legal Decisions) (1992), Federal Shariat Court Judgement on Interest, Lahore: P.L.D. Publishers.

_____, (2000), Supreme Court Judgement on Riba, Lahore: P.L.D. Publishers.

Rizwan-ul-Haque, Muhammad (n.d.), "Debt, Riba and Islam," Unpublished.

Siddiqi, Muhammad Nejatullah (1983), Issues in Islamic Banking: Selected Papers, Leicester: Islamic Foundation.

Vadillo, Umar (1991), Fatawa on Paper Money, Granada: Madinah Press.
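As a footnote to the money-creation notes above (reserve requirements and the example adopted from Gordon (2000)), the textbook deposit-multiplier arithmetic can be sketched as follows. The figures are illustrative only and are not taken from the paper's own example:

```python
# Deposit-multiplier sketch under a fractional reserve requirement.
# Illustrative numbers; see Gordon (2000) for the textbook treatment.
def deposit_multiplier(reserve_ratio):
    # With required reserve ratio r, each deposit round re-lends (1 - r);
    # the geometric series of deposits converges to 1 / r.
    return 1.0 / reserve_ratio

def simulate_expansion(initial_deposit, reserve_ratio, rounds=200):
    # Iterative version: track each round of re-depositing loan proceeds.
    total, new_deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += new_deposit
        new_deposit *= (1.0 - reserve_ratio)
    return total

# A 10% reserve ratio turns a 100-unit deposit into ~1000 units of deposits.
print(deposit_multiplier(0.10) * 100.0)
print(simulate_expansion(100.0, 0.10))
```

The iterative simulation converges to the closed-form answer, which is the sense in which reserve requirements cap the banking system's expansion of the money supply.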
Parses FIXM data into Python data structures

Project description

About

pyfixm is a library that contains Python wrappers for the FIXM XML Schemas, plus the US NAS extension for the FAA. Currently the library is built for FIXM v3.0, as this is what the FAA uses to publish data through SWIM.

Usage

import pyfixm

xml = pyfixm.parse("./fixm_file.xml")

Building pyfixm manually

To build pyfixm, either use the supplied build-pyfixm PyCharm run configuration or manually run scripts/build.py. Both methods build the library within a Docker image and then extract the built library to ./pyfixm on the host computer. Remember to install Docker if you haven't already.

License

This project has two licenses. Because this repository essentially produces a transpilation of the FIXM XSD files, the generated library is treated as a distribution of the upstream work rather than a novel codebase, and no further copyright is asserted over the built library. Both components are licensed under the BSD 3-Clause, but the copyright holder is different.

Source Repo

The pyfixm library-generating source code is licensed under the BSD 3-Clause license.

Generated Library

The generated library (the part that gets published to PyPI) is licensed under the same license as the upstream FIXM XSD files. Note that the copyright is attributed to the FIXM copyright holders to avoid any copyright complexities.
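Under the hood, a schema wrapper like this amounts to namespace-aware XML handling; if you only need to pull a field out of a FIXM message without the generated classes, the standard library suffices. The namespace URI and element/attribute names below are illustrative placeholders, not the actual FIXM 3.0 schema:

```python
# Stdlib sketch of the namespace-aware parsing a FIXM wrapper performs.
# "urn:example:fixm" and the element names are hypothetical stand-ins.
import xml.etree.ElementTree as ET

SAMPLE = """<fx:Flight xmlns:fx="urn:example:fixm">
  <fx:FlightIdentification aircraftIdentification="UAL123"/>
</fx:Flight>"""

NS = {"fx": "urn:example:fixm"}  # prefix -> namespace URI mapping

root = ET.fromstring(SAMPLE)
ident = root.find("fx:FlightIdentification", NS)
callsign = ident.get("aircraftIdentification")
print(callsign)  # UAL123
```

The value of a generated library such as pyfixm is that it replaces this kind of string-keyed lookup with typed attribute access validated against the XSD.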
0.5.8-release From OpenSimulator r5110 | ckrinke | 2008-06-14 16:51:35 -0700 (Sat, 14 Jun 2008) | 5 lines Change VersionInfo string from: "OpenSimulator trunk (post 0.5.7)" to "OpenSimulator release 0.5.8" in preparation for tagging this minor release. r5109 | justincc | 2008-06-14 13:52:42 -0700 (Sat, 14 Jun 2008) | 2 lines - minor: A few miscellaneous doc comments before I break and start on something else r5108 | teravus | 2008-06-14 13:33:03 -0700 (Sat, 14 Jun 2008) | 1 line - Vintage 2, a good year. r5107 | teravus | 2008-06-14 13:04:48 -0700 (Sat, 14 Jun 2008) | 1 line - Fixes: 0001554: r5106 update fails to load on some regions with NullRef error on volume portion of maptile drawing routine. r5106 | justincc | 2008-06-14 10:47:25 -0700 (Sat, 14 Jun 2008) | 3 lines - Start recording asset request failures - This includes problems such as connection failures and timeouts. It does not include 'asset not found' replies from the asset service. r5105 | teravus | 2008-06-13 19:39:27 -0700 (Fri, 13 Jun 2008) | 4 lines - Enables maptile display in grid mode for simulators that are not on the same instance. - Only generates a new maptile after a refresh interval - Maptile names have the UnixTimeSinceEpoch that they were generated and the regionUUID they're from, so you can know which ones are no longer necessary. - Updates RegionInfo, so backup your /bin/Region/*.xml files. r5104 | sdague | 2008-06-13 12:41:13 -0700 (Fri, 13 Jun 2008) | 3 lines save_assets_to_file path shouldn't always assume uploaded content are images and use .jp2 for the file extension. r5103 | sdague | 2008-06-13 12:15:27 -0700 (Fri, 13 Jun 2008) | 4 lines the first pass at Asset Fridays. Contribution of a handshake animation from Mo Hax at IBM. This took us a while to sort out the conversion path, expection more efficiency in the future. 
r5102 | sdague | 2008-06-13 11:20:30 -0700 (Fri, 13 Jun 2008) | 2 lines rename to index.xml just to make this more consistant r5101 | justincc | 2008-06-13 11:04:01 -0700 (Fri, 13 Jun 2008) | 3 lines - refactor: catch asset service request exceptions at the AssetServerBase level rather than in the GridAssetClient - this is to enable logging of asset request exceptions soon r5100 | justincc | 2008-06-13 10:11:33 -0700 (Fri, 13 Jun 2008) | 4 lines - minor: Remove LINK_SET debug Console Writeline - only appeared in DotNetEngine's LSL_BuildIn_Commands.cs - Nice spot Ewe Loon () r5099 | justincc | 2008-06-13 09:58:24 -0700 (Fri, 13 Jun 2008) | 2 lines - minor: Print out uptime as well as stats in periodic diagnostics logging, so it's easier to tell which isntances each print out of information is from r5098 | justincc | 2008-06-13 09:32:32 -0700 (Fri, 13 Jun 2008) | 2 lines - Double timeout on region registration XMLRPC call to the grid service r5097 | justincc | 2008-06-13 09:23:31 -0700 (Fri, 13 Jun 2008) | 2 lines - minor: comment out confusing DefaultTimeout field in RestClient, which is currently not actually used r5096 | justincc | 2008-06-13 09:17:27 -0700 (Fri, 13 Jun 2008) | 2 lines - If appropriate, start printing out the inner exception from the grid -> region status check, so we can tell a bit better what the problem was r5095 | sdague | 2008-06-13 07:27:46 -0700 (Fri, 13 Jun 2008) | 2 lines add indexes for sqlite inventory r5094 | ckrinke | 2008-06-12 18:54:53 -0700 (Thu, 12 Jun 2008) | 2 lines A little minor cleanup and harmonizing between LSL_BuiltIn_Commands.cs and its copy LSL_ScriptCommands.cs r5093 | chi11ken | 2008-06-12 17:21:53 -0700 (Thu, 12 Jun 2008) | 1 line Update svn properties, clean up formatting, refactor out duplicate hard-coded port numbers. r5092 | sdague | 2008-06-12 13:48:06 -0700 (Thu, 12 Jun 2008) | 6 lines look mom, migrations in action. 
This adds a couple of indexes to mysql regions that should help on performance of some of the selects. We should start capturing more data on performance bits to figure out where else we are missing indexes and add them via migrations as well. r5091 | teravus | 2008-06-12 13:19:42 -0700 (Thu, 12 Jun 2008) | 2 lines - Split the World Map code into a module. - Implemented a hack so regions beyond the 10,000m range will show the map without having to click on the map before they'll start to show. The hack shows regions around the one you're in, but it won't show the one you're in.. you still need to click on the map to get that (not sure why yet). Additionally, the map still only shows pictures for regions that are hosted on the same instance (no change). r5090 | sdague | 2008-06-12 11:44:58 -0700 (Thu, 12 Jun 2008) | 6 lines Fix mysql migrations. This is tested with an existing up to date schema, and no schema. It should also work with a non up to date schema as well. Btw, meetings in which I can get code done are the right kind of meetings. r5089 | justincc | 2008-06-12 11:18:59 -0700 (Thu, 12 Jun 2008) | 2 lines - minor: Remove and tidy duplicate 'storing object to scene' messages in log r5088 | justincc | 2008-06-12 10:49:08 -0700 (Thu, 12 Jun 2008) | 4 lines - refactor: For new objects, move attach to backup to occur when adding to a scene, rather than on creation of the group - Adding to a scene is now parameterized such that one can choose not to actually persist that group - This is to support a use case where a module wants a scene which consists of both objects which are persisted, and ones which are just temporary for the lifetime of that server instance r5087 | justincc | 2008-06-12 09:54:04 -0700 (Thu, 12 Jun 2008) | 2 lines - refactor: rename CreatePrimFromXml to CreatePrimFromXml2 r5086 | sdague | 2008-06-12 08:47:33 -0700 (Thu, 12 Jun 2008) | 5 lines this, in theory, adds migration support to mysql for all data sources besides the grid store. 
It is only lightly tested so the less adventurous should wait a couple of checkins before upgrading. r5085 | sdague | 2008-06-12 08:21:34 -0700 (Thu, 12 Jun 2008) | 4 lines check in region store initial migration definition, now on to integrating this approach into the mysql driver. Beware the next couple of checkins. r5084 | sdague | 2008-06-12 07:44:52 -0700 (Thu, 12 Jun 2008) | 2 lines check in migration files for mysql r5083 | teravus | 2008-06-12 04:06:31 -0700 (Thu, 12 Jun 2008) | 1 line - Insulate maptile volume draw routine against TextureEntry oddities. r5082 | teravus | 2008-06-11 18:11:57 -0700 (Wed, 11 Jun 2008) | 4 lines - Added Prim drawing to the mainmap tile generation.. you can see blocks representing the prim now on the mainmap. - It isn't perfect since the blocks are square, however it's pretty good. - Performance is also pretty good, however, if it takes too long for you, you can disable it in the OpenSim.ini - You can see how long it takes in milliseconds on the console when it finishes. r5081 | sdague | 2008-06-11 14:01:33 -0700 (Wed, 11 Jun 2008) | 5 lines check in working migration code fore SQLite. This is now using migrations instead of the old model to create tables. Tested for existing old tables, and for creating new ones. r5080 | sdague | 2008-06-11 13:04:01 -0700 (Wed, 11 Jun 2008) | 2 lines updated resources for current sqlite schema for migrations r5079 | teravus | 2008-06-11 12:45:17 -0700 (Wed, 11 Jun 2008) | 3 lines For people receiving: Exception: System.ArgumentException: Value of -2147483648 is not valid for red, I've added the following message; [MAPIMAGE]: Your terrain is corrupted in region {0}, it might take a few minutes to generate the map image depending on the corruption level And, I've also kept it from crashing... 
r5078 | mingchen | 2008-06-11 10:31:43 -0700 (Wed, 11 Jun 2008) | 1 line - Parcel Prim Count Maximums moved to their own functions so modules can override the default method of calculating how many prims a parcel can have. r5077 | ckrinke | 2008-06-11 07:02:16 -0700 (Wed, 11 Jun 2008) | 3 lines Mantis#1514. Thank you kindly, Boscata for an InventoryServer patch to allow the InventoryServer to work with MSSQL.. r5076 | ckrinke | 2008-06-11 06:57:32 -0700 (Wed, 11 Jun 2008) | 3 lines Mantis#1528. Thank you kindly, Boscata for: MSSQL Avatar appearance solved. Appearance functions and modified table. r5075 | justincc | 2008-06-11 04:25:29 -0700 (Wed, 11 Jun 2008) | 4 lines - Drop periodic stats logging back down to 60 minutes to reduce console spam. - Please feel free to comment if the periodic logging is causing you problems in some way - I'm loathe to add yet another switch to OpenSim.ini but will if it proves necessary r5074 | joha1 | 2008-06-10 21:19:30 -0700 (Tue, 10 Jun 2008) | 1 line Mantis 1370. 
Thanks lulurun for the patch r5073 | justincc | 2008-06-10 18:33:08 -0700 (Tue, 10 Jun 2008) | 2 lines - Fix the string substitutions in the last commit r5072 | justincc | 2008-06-10 18:31:39 -0700 (Tue, 10 Jun 2008) | 3 lines - From inspecting OSGrid WP logs, it appears one particular client is failing because they are giving an illegal initial position to ScenePresence.MakeRootAgent() - If we detected an illegal position (x, y outside region bounds or z < 0), then print out the illegal position and substitute an emergency <128, 128, 128> instead r5071 | justincc | 2008-06-10 17:41:07 -0700 (Tue, 10 Jun 2008) | 3 lines - Add 'show info' command to all servers, which prints the directory in which the server was started - This is potentially useful if you're using screen on a region console without knowing where it was originally started from r5070 | justincc | 2008-06-10 16:47:33 -0700 (Tue, 10 Jun 2008) | 2 lines - minor: Reduce statistic log snapshots to every 20 minutes to get more information r5069 | justincc | 2008-06-10 16:42:42 -0700 (Tue, 10 Jun 2008) | 3 lines - minor: Report cache figures in rounded up KB instead of with decimal places in show stats - trade easier readability for pointless accuracy r5068 | justincc | 2008-06-10 16:35:04 -0700 (Tue, 10 Jun 2008) | 3 lines - minor: Properly clear the pushed asset cache statistics where the clear-assets command is used on the region console - stop waiting for garbage collection when GC total memory used is requested, in case the periodic request of this lags the sim r5067 | justincc | 2008-06-10 16:19:38 -0700 (Tue, 10 Jun 2008) | 3 lines - If a server has statistics, print these out to the log every hour to get some idea of how these evolve - When returning GC.GetTotalMemory(), force collection first in order to get more accurate figures r5066 | sdague | 2008-06-10 16:17:18 -0700 (Tue, 10 Jun 2008) | 4 lines I'm going to need the Version property to manage migrating from the old to the new system. 
Silly legacy code. r5065 | sdague | 2008-06-10 15:57:20 -0700 (Tue, 10 Jun 2008) | 4 lines update of migration code to be more sane on version tracking, and support sub types that we'll need for nhibernate. r5064 | chi11ken | 2008-06-10 15:54:19 -0700 (Tue, 10 Jun 2008) | 1 line Update svn properties. r5063 | ckrinke | 2008-06-10 15:41:39 -0700 (Tue, 10 Jun 2008) | 3 lines Mantis#1529. Thank you kindly, Grumly57 for a patch to xengine to: Replaces "presence.Name" => "presence.ControllingClient.Name" to return avatar's name. r5062 | justincc | 2008-06-10 11:10:57 -0700 (Tue, 10 Jun 2008) | 3 lines - Add memory currently allocated to OpenSimulator to 'show stats' statistics - This is the GC.GetTotalMemory() method, which I'm guessing does not include memory used by the VM (hence the memory usage reported in top on linux would be much higher) r5061 | ckrinke | 2008-06-10 09:02:18 -0700 (Tue, 10 Jun 2008) | 2 lines Mantis#1501. Thank you kindly, Nebadon, for a patch that addresses the 'terrain fill 0' error. r5060 | chi11ken | 2008-06-10 01:35:46 -0700 (Tue, 10 Jun 2008) | 1 line Update svn properties. Formatting cleanup. r5059 | teravus | 2008-06-09 17:18:00 -0700 (Mon, 09 Jun 2008) | 2 lines - This completes ObjectDuplicateOnRay. - In English, that means that Copy Selection works now, including Copy Centers and Copy Rotates. r5058 | sdague | 2008-06-09 15:20:28 -0700 (Mon, 09 Jun 2008) | 2 lines actually create and populate the migrations table correctly. r5057 | sdague | 2008-06-09 15:01:21 -0700 (Mon, 09 Jun 2008) | 4 lines migrations seem to not break anything at this point. Tomorrow I'll start trying to integrate them into sqlite to see if this works right for table migration. 
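The migration scheme sketched in r5058-r5053 (a version table plus ordered migration scripts, in the spirit of Ruby on Rails migrations) can be illustrated as follows. This is a Python/SQLite analogue for exposition only; the project's actual Migration code is C#, and the table names here are hypothetical:

```python
import sqlite3

# Rails-style migration sketch: apply each schema change exactly once,
# recording the highest applied version in a bookkeeping table.
MIGRATIONS = {
    1: "CREATE TABLE regions (uuid TEXT PRIMARY KEY, name TEXT);",
    2: "CREATE INDEX regions_name ON regions(name);",
}

def migrate(conn):
    # The version table records which migrations have already run.
    conn.execute("CREATE TABLE IF NOT EXISTS migrations (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM migrations").fetchone()[0] or 0
    # Apply only migrations newer than the stored version, in order.
    for version in sorted(v for v in MIGRATIONS if v > current):
        conn.executescript(MIGRATIONS[version])
        conn.execute("INSERT INTO migrations VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # safe to run again: nothing newer than version 2 exists
print(conn.execute("SELECT MAX(version) FROM migrations").fetchone()[0])  # 2
```

Running migrate() on an empty database builds the full schema; running it on an up-to-date one is a no-op, which is exactly the property the commits above are testing for ("tested for existing old tables, and for creating new ones").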
r5056 | sdague | 2008-06-09 14:40:16 -0700 (Mon, 09 Jun 2008) | 4 lines move Migration support into OpenSim.Data, because it really turned out to be small enough to not need it's own assembly r5055 | sdague | 2008-06-09 12:37:13 -0700 (Mon, 09 Jun 2008) | 2 lines fill out some more migration facilities r5054 | sdague | 2008-06-09 12:11:49 -0700 (Mon, 09 Jun 2008) | 3 lines more work in progress migration code, still a while before this becomes useful r5053 | sdague | 2008-06-09 11:24:07 -0700 (Mon, 09 Jun 2008) | 4 lines start in on the shell for a generic database versioning module. My intent is to create an easier way to manage database table versions like the model used for ruby on rails migrations. r5052 | mingchen | 2008-06-09 08:20:08 -0700 (Mon, 09 Jun 2008) | 2 lines - Fixed bug that caused failure when System.Console.Readline returns null (no stdin) - Fixed bug that would crash the simulator if there were two physics/meshing engines loaded with the same name. r5051 | mingchen | 2008-06-09 07:48:28 -0700 (Mon, 09 Jun 2008) | 1 line - Patched CreateItemsTable.sql (MSSQL). Patch by Kyle and Chris from G2. r5050 | chi11ken | 2008-06-09 01:46:33 -0700 (Mon, 09 Jun 2008) | 1 line Update svn properties. Formatting cleanup. r5049 | ckrinke | 2008-06-08 18:06:59 -0700 (Sun, 08 Jun 2008) | 11 lines Mantis#1469. Thank you kindly, Mikem for a patch that addresses: Currently LSL code such as below does not compile on OpenSim, but compiles fine in Second Life: list mylist = []; mylist += [1, 2, 3]; mylist += "four"; list newlist = mylist + 5.0; The problem is that the LSL_Types.list class does not have an operator for adding a string to a list. I am including a patch which implements adding a string, integer or float to a list. I am also including tests. The file LSL_TypesTestList.cs belongs in OpenSim/Tests/OpenSim/Region/ScriptEngine/Common/. 
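The LSL list semantics restored in r5049 (adding a string, integer, or float to a list, as well as list-to-list concatenation) map naturally onto operator overloading. A Python analogue of what the C# LSL_Types.list operators implement might look like this; it is a sketch, not the project's code:

```python
# Python analogue of the LSL_Types.list '+' operators from r5049.
class LSLList(list):
    def __add__(self, other):
        # list + list concatenates; list + scalar appends one element.
        if isinstance(other, list):
            return LSLList(list(self) + list(other))
        if isinstance(other, (str, int, float)):
            return LSLList(list(self) + [other])
        return NotImplemented

    def __iadd__(self, other):
        # '+=' must not iterate over strings character by character,
        # so route it through the same element-wise logic as '+'.
        return self + other

# The LSL snippet from the bug report, transliterated:
mylist = LSLList()
mylist += [1, 2, 3]
mylist += "four"
newlist = mylist + 5.0
print(newlist)  # [1, 2, 3, 'four', 5.0]
```

Overriding `__iadd__` is the subtle part: the inherited list behavior would splice `"four"` in as four one-character elements rather than appending it whole.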
r5048 | teravus | 2008-06-08 15:53:52 -0700 (Sun, 08 Jun 2008) | 2 lines - Fixed it so you can do a lot more llDetected* methods in many additional situations and have it work. - script Collision reporting works now in DotNetEngine r5047 | teravus | 2008-06-08 14:15:44 -0700 (Sun, 08 Jun 2008) | 1 line - Added compiler pre-processor, #if SPAM to SensorRepeat... so if you really want to see, "[AsyncLSL]: GetSensorList missing localID" and SetSensorEvent, then you can #define SPAM r5046 | mingchen | 2008-06-08 13:26:39 -0700 (Sun, 08 Jun 2008) | 1 line - Updated prebuild.xml for support with monodevelop r5045 | teravus | 2008-06-08 12:54:49 -0700 (Sun, 08 Jun 2008) | 1 line - Fixes llDetectedKey. r5044 | ckrinke | 2008-06-08 10:36:41 -0700 (Sun, 08 Jun 2008) | 3 lines Added a "if(entity != null)" before the call to UpdateEntityMovement() to try to preclude the occaisional System.NullReferenceException in scene. r5043 | ckrinke | 2008-06-08 07:51:59 -0700 (Sun, 08 Jun 2008) | 5 lines Mantis#1498. Thank you Melanie for an XEngine patch that addresses: The attatched patch makes the changed() event fire properly and lets scripts run properly. NOTE: All existing state files must be deleted: rm ScriptEngines/*/*.state r5042 | ckrinke | 2008-06-07 17:34:00 -0700 (Sat, 07 Jun 2008) | 4 lines Mantis#1499. Thank you kindly, DMiles for a patch that: was incorrectly sending the command along with the args to the CommandDelegate help was getting lost on top of normal help & help was getting missed except in an exact match (and only returning the first) r5041 | ckrinke | 2008-06-07 15:37:48 -0700 (Sat, 07 Jun 2008) | 5 lines Mantis#1496. Thank you kindly, Melanie for a patch that: Adds full implementation of all llDetected* functions for sensors, collisions and touches. Adds changed(CHANGED_REGION_RESTART) event to allow restarting of eye-candy functionality not currently persisted with the prim. r5040 | ckrinke | 2008-06-07 15:02:28 -0700 (Sat, 07 Jun 2008) | 3 lines Mantis#1495. 
Thank you kindly, Kinoc for: 0001495: [PATCH] Adds an API for for plugins to create new Console commands and Help r5039 | mingchen | 2008-06-07 10:48:45 -0700 (Sat, 07 Jun 2008) | 1 line Potential Fix #1 for 0001392: Shift+Drag now causes an unhandled 'Object reference not set to an instance of object' exception r5038 | adjohn | 2008-06-07 10:43:07 -0700 (Sat, 07 Jun 2008) | 1 line Patch for mantis#1493: Several patches to xengine. Thanks Melanie! r5037 | ckrinke | 2008-06-07 08:46:43 -0700 (Sat, 07 Jun 2008) | 4 lines Mantis#1476. Thank you kindly, Melanie for a patch that: 0001476: [PATCH] Allow larger script state files to be loaded The previous limitation on load file size was too small for larger script projects r5036 | ckrinke | 2008-06-07 08:43:16 -0700 (Sat, 07 Jun 2008) | 4 lines Mantis#1475. Thank you kindly, Kinoc for a patch that: This patch brings the Yield Prolog in sync with the YP r669. Biggest item is support for functions asserta and assertz , providing dynamic databases. r5035 | mingchen | 2008-06-06 17:24:43 -0700 (Fri, 06 Jun 2008) | 1 line - Fixing another object counting bug r5034 | mingchen | 2008-06-06 16:20:02 -0700 (Fri, 06 Jun 2008) | 1 line - Made Object Counting correct with linked objects and turned the previously protected functions that only return object counts to public so it can be easily used by outside classes. r5033 | teravus | 2008-06-06 15:44:48 -0700 (Fri, 06 Jun 2008) | 1 line - llSetPrimitiveParams PRIM_FLEXIBLE is now supported. r5032 | teravus | 2008-06-06 15:28:52 -0700 (Fri, 06 Jun 2008) | 1 line - Added Light control from script in LLSetPrimitiveParams. r5031 | teravus | 2008-06-06 14:39:42 -0700 (Fri, 06 Jun 2008) | 1 line - Added a configuration option for allowing god script lsl methods.. such as llSetObjectPermMask. By default it's off. 
r5030 | sdague | 2008-06-06 13:42:12 -0700 (Fri, 06 Jun 2008) | 4 lines revert 5028, as this approach to 1 nick per avatar isn't going to work, however, I think I understand now how to make it work. I just don't want to have this broken for people this weekend. r5029 | sdague | 2008-06-06 13:21:25 -0700 (Fri, 06 Jun 2008) | 3 lines experimental IRC changes, because it's friday, and I'm curious if this will work. r5028 | teravus | 2008-06-06 12:58:39 -0700 (Fri, 06 Jun 2008) | 1 line - Adds semi broken PRIM_FLEXIBLE support for prim. It's semi-broken because it won't do the setting of the prim flexi from not-flexi, however, it'll tweak the parameters of an already existing flexi prim. r5027 | teravus | 2008-06-06 07:33:01 -0700 (Fri, 06 Jun 2008) | 1 line - How tall are you? Certainly not 127 meters! r5026 | teravus | 2008-06-06 06:33:45 -0700 (Fri, 06 Jun 2008) | 1 line - true and not true or - not true and not true and. r5025 | teravus | 2008-06-06 06:24:40 -0700 (Fri, 06 Jun 2008) | 1 line - This limits avatar to the heightfield height if they teleport or cross a border to a position below it. After teleporting, you can go under the terrain if you like as usual. r5024 | teravus | 2008-06-06 05:51:20 -0700 (Fri, 06 Jun 2008) | 2 lines - This wraps the autopilot request to the client's sit response. An interesting, but successful way to do it. - This also takes care of a few error situations that were previously never seen. r5023 | teravus | 2008-06-06 01:05:09 -0700 (Fri, 06 Jun 2008) | 1 line - Fixes incorrect message server startup prompt r5022 | teravus | 2008-06-06 01:03:12 -0700 (Fri, 06 Jun 2008) | 1 line Fixes scale property with regards to the physics engine. r5021 | joha1 | 2008-06-05 22:28:26 -0700 (Thu, 05 Jun 2008) | 1 line Fixed a build problem with r5019 (Mikems patch) r5020 | chi11ken | 2008-06-05 18:19:15 -0700 (Thu, 05 Jun 2008) | 1 line Minor formatting cleanup. r5019 | ckrinke | 2008-06-05 18:03:37 -0700 (Thu, 05 Jun 2008) | 2 lines Mantis#1333. 
Thank you kindly, Mikem for a patch to sync online documentation with source code every continuous build run. r5018 | mingchen | 2008-06-05 17:56:51 -0700 (Thu, 05 Jun 2008) | 1 line MSSQL Inventory Fix. Patch by Kyle and Chris from G2 r5017 | chi11ken | 2008-06-05 17:25:43 -0700 (Thu, 05 Jun 2008) | 1 line Update svn properties. r5016 | ckrinke | 2008-06-05 16:36:59 -0700 (Thu, 05 Jun 2008) | 2 lines Mantis#1451. Thank you kindly, mikem for additional tests for LSL types and strings. r5015 | ckrinke | 2008-06-05 13:18:15 -0700 (Thu, 05 Jun 2008) | 5 lines Mantis#1460. Thank you, CMickeyb for a patch that addresses: I'm getting an unhandled exception in openxmlrpcchannel during simulator initialization. I have two objects in different regions that open remote data channels in the state_entry event. It appears that the state_entry call is executing before the postinitialize method is called in xmlrpcmodule (the exception occurs because m_openChannels is not initialized). r5014 | ckrinke | 2008-06-05 12:30:35 -0700 (Thu, 05 Jun 2008) | 6 lines Mantis#1459. Thank you kindly, CMickeyb for a patch that: the function that reports errors in event handling is not computing the line numbers correctly for windows paths (and probably linux paths). As a result, the conversion to int throws an exception. note... i'm not sure why we extract the line number, convert it to an int, then convert it back to a string... but hey... :-) r5013 | lbsa71 | 2008-06-05 07:31:07 -0700 (Thu, 05 Jun 2008) | 4 lines - Applied 9085B_[5004]_xengine_abort_regression.patch from #1437 Thank you, Melanie. And Thank you ckrinke. Bigups! r5012 | ckrinke | 2008-06-05 07:22:53 -0700 (Thu, 05 Jun 2008) | 2 lines Mantis#1438. Thank you kindly, Melanie for a patch that: This patch implements the llLoopSound patch from Xantor for the XEngine r5011 | ckrinke | 2008-06-05 07:18:53 -0700 (Thu, 05 Jun 2008) | 5 lines Mantis#1437. Patch 3 of 4. 
Thank you kindly, Melanie for: Corrects the XEngine's script startup semantics. Completes llRequestAgentData Implements llDetectedLink Fixes a few minor issues r5010 | ckrinke | 2008-06-05 07:17:22 -0700 (Thu, 05 Jun 2008) | 5 lines Mantis#1437. Patch 2 of 4. Thank you kindly, Melanie for: Corrects the XEngine's script startup semantics. Completes llRequestAgentData Implements llDetectedLink Fixes a few minor issues r5009 | ckrinke | 2008-06-05 07:15:15 -0700 (Thu, 05 Jun 2008) | 5 lines Mantis#1437. Patch one of four. Thank you kindly, Melanie for: Corrects the XEngine's script startup semantics. Completes llRequestAgentData Implements llDetectedLink Fixes a few minor issues r5008 | ckrinke | 2008-06-05 07:03:08 -0700 (Thu, 05 Jun 2008) | 2 lines Mantis#1455. Thank you kindly, Mikem for a patch that addresses the client thread terminating when creating a new script. r5007 | ckrinke | 2008-06-05 06:57:58 -0700 (Thu, 05 Jun 2008) | 7 lines Mantis#1450. Thank you kindly, Boscata for a patch that addresses: I have detected a bug of conversion data type in OpenSim.Data.MSSQL.MSSQLInventoryData.addInventoryItem(InventoryItemBase item) in the GroupOwned field. My sollution is to change the flield to bit in the table. In the readInventoryItem(IDataReader reader) change too item.Flags = (uint) reader["flags"]; to item.Flags = Convert.ToUInt32(reader["flags"]); Now Inventory runs fine. r5006 | ckrinke | 2008-06-05 06:54:20 -0700 (Thu, 05 Jun 2008) | 10 lines Mantis#1451. Thank you kindly, Mikem for a patch that addresses: LSL scripts in which a float type is cast to a string or a string type is cast to a float do not compile. When the script is translated from LSL to C#, the LSL float type is translated into double. There is no string <-> double cast in C#, so compilation fails. There is a LSLFloat type, however it seems unfinished and is not used. I am attaching a patch that implements the LSLFloat type. 
I have also added two methods to the LSLString type to facilitate float <-> string casts. r5005 | teravus | 2008-06-05 06:24:59 -0700 (Thu, 05 Jun 2008) | 2 lines - This sends collision events to the script engine. - Unfortunately, there's some kludges with the Async manager and the llDetected functions that I have yet to decipher... so llDetected functions don't work with collision events at the moment.... r5004 | teravus | 2008-06-05 03:44:46 -0700 (Thu, 05 Jun 2008) | 1 line - Don't create ghost prim when rezzing objects from inventory r5003 | chi11ken | 2008-06-04 22:43:22 -0700 (Wed, 04 Jun 2008) | 1 line Update svn properties. r5002 | justincc | 2008-06-04 19:12:44 -0700 (Wed, 04 Jun 2008) | 2 lines - minor: Yet another minor logging message tweak following on from the last commit r5001 | justincc | 2008-06-04 18:55:45 -0700 (Wed, 04 Jun 2008) | 2 lines - minor: Increase verbosity of "new user request denied" incoming session warning for debugging purposes r5000 | justincc | 2008-06-04 18:29:52 -0700 (Wed, 04 Jun 2008) | 3 lines - refactor: rename now inaccurate textureUuids to assetUuids - 5000 commits in this repository! r4999 | justincc | 2008-06-04 18:20:17 -0700 (Wed, 04 Jun 2008) | 2 lines - If a client thread crashes, make an attempt to notify the client and clean up the resources r4998 | justincc | 2008-06-04 17:29:02 -0700 (Wed, 04 Jun 2008) | 3 lines - exprimental: Export and reimport all items within a prim except Objects - Not yet ready for public use r4997 | justincc | 2008-06-04 17:01:38 -0700 (Wed, 04 Jun 2008) | 2 lines - Change archiver 'textures' dir back to 'assets' r4996 | justincc | 2008-06-04 16:57:27 -0700 (Wed, 04 Jun 2008) | 3 lines - Dearchive using assets metadata rather than assuming everything is a texture - However, still not actually archiving anything except textures r4995 | chi11ken | 2008-06-04 15:31:47 -0700 (Wed, 04 Jun 2008) | 1 line Update svn properties. 
r4994 | justincc | 2008-06-04 11:50:58 -0700 (Wed, 04 Jun 2008) | 3 lines - Start writing out assets metadata file for archiver - Ignoring it on reload as of yet r4993 | drscofield | 2008-06-04 11:09:55 -0700 (Wed, 04 Jun 2008) | 10 lines - adding XmppPresenceStanza and deserialization/reification support having reached the intermediate level of .NET's XmlSudoku, i've now figured out how to do deserialization using different XmlSerializers (this stuff begins to grow on me, sigh). [still not used code, work-in-progress] - adding convenience property on OSHttpRequest.cs (from awebb) r4992 | sdague | 2008-06-04 10:43:07 -0700 (Wed, 04 Jun 2008) | 4 lines change clientCircuits_reverse to a synchronized hash table. This removes a lock on every SendPacketTo call, which was shown to have good performance benefits by the IBM China Research Lab. r4991 | justincc | 2008-06-04 09:30:44 -0700 (Wed, 04 Jun 2008) | 2 lines - Start recording abnormal client thread terminations r4990 | teravus | 2008-06-04 09:27:35 -0700 (Wed, 04 Jun 2008) | 1 line - Added a check for a non-finite heightfield array value passed to the ODEPlugin. This may, or may not fix anything. r4989 | ckrinke | 2008-06-04 07:47:12 -0700 (Wed, 04 Jun 2008) | 4 lines Mantis#1447. Thank you kindly, Kinoc for a patch that: llKey2Name fix to show avatar name instead of "Basic Entity" One line fix. Replaces "presence.Name" => "presence.ControllingClient.Name" to return avatar's name. r4988 | ckrinke | 2008-06-04 07:40:17 -0700 (Wed, 04 Jun 2008) | 4 lines Mantis#1441. Thank you kindly, Kinoc for a patch that: This patch adds the prolog interperter helper object ONLY for YP code, and not every script compiled. Mirrors the other languages like JS and VB more closely. r4987 | ckrinke | 2008-06-04 07:37:16 -0700 (Wed, 04 Jun 2008) | 2 lines Mantis#1440. 
Thank you kindly, Melanie for a patch that "Hooks up the plumbing from previous patch" r4986 | ckrinke | 2008-06-04 07:34:35 -0700 (Wed, 04 Jun 2008) | 2 lines Mantis#1446. Thank you kindly, Grumly57 for a patch that solves "trees are too small when rezzed" r4985 | ckrinke | 2008-06-04 07:31:36 -0700 (Wed, 04 Jun 2008) | 2 lines Mantis#1439. Thank you kindly, Melanie for a patch that plumbs in the events for on_rez. r4984 | drscofield | 2008-06-04 06:06:24 -0700 (Wed, 04 Jun 2008) | 12 lines - fleshing out XMPP entities, adding XmppWriter and XmppSerializer having spent the last couple of days wrestling with .NET XmlSerializer and trying to get it to do what is required by XMPP (RFC 3920 & 3921) this is the preliminary result of that wrestling (you should see the other guy!): XmppSerializer allows us to serialize Xmpp stanza (and theoretically deserialize [or reify] them), XmppWriter helps avoiding various gratuitous crap added in by off-the-shelf XmlSerializer. this is currently not used anywhere but the plan is to use it for at least an XMPPBridgeModule. r4983 | mw | 2008-06-04 05:16:26 -0700 (Wed, 04 Jun 2008) | 1 line applied patch from mantis #1268 , thanks mikem r4982 | teravus | 2008-06-04 03:57:05 -0700 (Wed, 04 Jun 2008) | 4 lines - From Dahlia - Committing : 0001449: Patch implements X and Y Top Shear parameters for torus prim physical mesh generation (PATCH attached) - The included patch implements the X and Y Top Shear parameter adjustments to the mesh generator for the torus prim physical mesh. These are approximations as I was unable to determine their exact function but they appear to generate meshes which quite closely duplicate their counterparts in the viewer. - Thanks Dahlia!!!! r4981 | chi11ken | 2008-06-04 02:59:27 -0700 (Wed, 04 Jun 2008) | 1 line Formatting cleanup, minor refactoring, svn properties. 
r4980 | justincc | 2008-06-03 18:25:31 -0700 (Tue, 03 Jun 2008) | 4 lines
- If a ThreadAbortException reaches AuthUser() then let it pass through unmolested
- These are only thrown on client shutdown anyway
- This stops the console (harmlessly) spewing stack traces when a client logs off

r4979 | justincc | 2008-06-03 14:00:37 -0700 (Tue, 03 Jun 2008) | 3 lines
- minor: Remove my own stupidity in the last doc comment - it wouldn't actually be all that tricky to try better clean-up on a client thread crash. Haven't actually implemented this, though

r4978 | justincc | 2008-06-03 13:55:56 -0700 (Tue, 03 Jun 2008) | 2 lines
- minor: Change comment on last commit. My English - not so good.

r4977 | justincc | 2008-06-03 13:27:52 -0700 (Tue, 03 Jun 2008) | 2 lines
- Stop the crash-to-bash of the entire region server when a client thread fails, by catching the exception in AuthUser() instead of letting it propagate out of the thread

r4976 | justincc | 2008-06-03 10:17:24 -0700 (Tue, 03 Jun 2008) | 3 lines
- experimental: archive out and reload textures within a prim's inventory
- no other prim items are archived yet

r4975 | justincc | 2008-06-03 09:52:44 -0700 (Tue, 03 Jun 2008) | 2 lines
- Change single assets/ archiver directory to be textures/ instead

r4974 | sdague | 2008-06-03 06:58:54 -0700 (Tue, 03 Jun 2008) | 3 lines
temporarily disable the last bit of code as it prevents startup on mono. Need to sort that out with DJ shortly.

r4973 | sdague | 2008-06-03 06:49:58 -0700 (Tue, 03 Jun 2008) | 5 lines
From: Dong Jun Lan <landj@cn.ibm.com>
Set udp flags correctly to prevent "Socket forcibly closed by host" errors.
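The log doesn't show which flags the r4973 patch set, but the standard fix for "Socket forcibly closed by host" on a Windows UDP socket is the `SIO_UDP_CONNRESET` ioctl, which stops ICMP port-unreachable replies from surfacing as errors on later receives. A sketch of that fix (`CreateServerSocket` is an illustrative name, not the OpenSim API):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class UdpSetup
{
    // SIO_UDP_CONNRESET = IOC_IN | IOC_VENDOR | 12 = 0x9800000C,
    // which is -1744830452 as a signed 32-bit int.
    const int SIO_UDP_CONNRESET = -1744830452;

    public static Socket CreateServerSocket(int port)
    {
        var sock = new Socket(AddressFamily.InterNetwork,
                              SocketType.Dgram, ProtocolType.Udp);
        sock.Bind(new IPEndPoint(IPAddress.Any, port));
        try
        {
            // Tell Winsock not to report ICMP "port unreachable" replies
            // as connection-reset errors on subsequent ReceiveFrom calls.
            // Only meaningful on Windows; elsewhere the ioctl is rejected
            // and can safely be ignored.
            sock.IOControl(SIO_UDP_CONNRESET, new byte[] { 0 }, null);
        }
        catch (SocketException) { /* non-Windows: not supported */ }
        catch (PlatformNotSupportedException) { }
        return sock;
    }
}
```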
r4972 | justincc | 2008-06-03 01:34:38 -0700 (Tue, 03 Jun 2008) | 2 lines
- minor: Attempted method documentation clarifications related to last two commits

r4971 | justincc | 2008-06-03 01:17:33 -0700 (Tue, 03 Jun 2008) | 2 lines
- Remove what should be unnecessary locking in InnerScene.GetEntitites()

r4970 | justincc | 2008-06-03 01:11:04 -0700 (Tue, 03 Jun 2008) | 3 lines
- Remove what should be unnecessary locking of GetScenePresences()
- May help with mantis 1434 though I doubt it

r4969 | teravus | 2008-06-03 00:12:09 -0700 (Tue, 03 Jun 2008) | 1 line
- This should fix presence issues.

r4968 | teravus | 2008-06-02 22:44:28 -0700 (Mon, 02 Jun 2008) | 1 line
- It's probably safe to remove the 'Warning Duplicate packet detected Packet Dropping.' message

r4967 | sdague | 2008-06-02 13:28:26 -0700 (Mon, 02 Jun 2008) | 2 lines
provide slightly more sane defaults in the file-based asset loader

r4966 | sdague | 2008-06-02 13:27:40 -0700 (Mon, 02 Jun 2008) | 5 lines
remove the prolog parser from all LSL/C# scripts (it was adding overhead to every script in most environments). This will break prolog support. Prolog code needs to generate its template script more like how javascript does.

r4965 | teravus | 2008-06-02 11:22:15 -0700 (Mon, 02 Jun 2008) | 1 line
- Fixed default ports on the MessagingServer config.

r4964 | justincc | 2008-06-02 11:20:30 -0700 (Mon, 02 Jun 2008) | 2 lines
- minor: doc tweak on last commit

r4963 | justincc | 2008-06-02 11:18:20 -0700 (Mon, 02 Jun 2008) | 3 lines
- Add information and documentation about web region loading to OpenSim.ini.example
- Also a very little bit of tidying up of this file - it's becoming a bit of a junkyard

r4962 | justincc | 2008-06-02 10:54:43 -0700 (Mon, 02 Jun 2008) | 2 lines
- experimental: Once we've received all the required assets from the asset service, launch the actual writing of the archive on a separate thread (to stop tying up the asset cache received notifier thread)

r4961 | justincc | 2008-06-02 10:23:13 -0700 (Mon, 02 Jun 2008) | 2 lines
- experimental: Make OpenSimulator archiver save and reload all prim textures when not all faces have the same texture

r4960 | teravus | 2008-06-02 09:37:28 -0700 (Mon, 02 Jun 2008) | 1 line
- Submitting 3 files for the messagingserver that I've kept to myself.

r4959 | justincc | 2008-06-02 09:28:04 -0700 (Mon, 02 Jun 2008) | 2 lines
- Add 'show version' help information into base OpenSimulator server

r4958 | teravus | 2008-06-02 09:16:07 -0700 (Mon, 02 Jun 2008) | 4 lines
- This update enables grid-wide presence updates.
- You'll need to start up the MessagingServer and set it up. It sets up like any of the other grid servers.
- All user presence data is kept in memory for speed, while the agent is online. That means if you shut down the messaging server or the messaging server crashes, it forgets who's online/offline.
- Occasionally the region cache will get stale if regions move around a lot. If it gets stale, run clear-cache on the messaging server console to clear the region cache.

r4957 | teravus | 2008-06-02 03:19:22 -0700 (Mon, 02 Jun 2008) | 1 line
Fixed half-completed comment in OpenSim.ini.example.

r4956 | teravus | 2008-06-02 03:01:02 -0700 (Mon, 02 Jun 2008) | 1 line
- Fixes a bug saving the current sun phase to the estate_settings file.

r4955 | drscofield | 2008-06-02 01:43:05 -0700 (Mon, 02 Jun 2008) | 2 lines
cleanup: uncommenting null-op else tree in TaskInventoryItem.cs

r4954 | teravus | 2008-06-02 01:31:34 -0700 (Mon, 02 Jun 2008) | 3 lines
PATCH: 0001431: corrections to torus physical mesh for default hollow shape and taper orientation along path. From Dahlia! Thanks Dahlia!!! The attached patch reinstates the default hollow shape of the physics mesh of the torus prim type and corrects the orientation of the effects of taper on the profile along the path.

r4953 | teravus | 2008-06-02 01:13:13 -0700 (Mon, 02 Jun 2008) | 1 line
- While I couldn't reproduce it, I was able to see how it *might* happen, so therefore; fix to: 0001058: Physics crash when changing Type of Prim intersecting with ground.

r4952 | teravus | 2008-06-01 07:13:29 -0700 (Sun, 01 Jun 2008) | 3 lines
- This enables grid-wide instant messaging in a peer-to-peer-with-tracker style way over XMLRPC.
- Friend status updates are still only local, so you still won't know before instant messaging someone if they're online.
- The server each user is on and the user server must be updated or the instant message won't get to the destination.

r4951 | teravus | 2008-06-01 03:05:22 -0700 (Sun, 01 Jun 2008) | 1 line
- Committing more unfinished stuff. Nothing significant at the moment. IM related.

r4950 | teravus | 2008-05-31 21:33:07 -0700 (Sat, 31 May 2008) | 3 lines
- Applying Dahlia's patch: 0001429: Patch to fix prism physical mesh and add path start and end to skew z offset of circular path prim meshes (PATCH attached)
- Apparently this fixed a bug in my code that caused PushX to appear to work, and PushX didn't appear to work after the patch.. so I fixed that after applying this patch and PushX actually works now.

r4949 | chi11ken | 2008-05-31 20:01:33 -0700 (Sat, 31 May 2008) | 1 line
Update svn properties.

r4948 | teravus | 2008-05-31 19:43:50 -0700 (Sat, 31 May 2008) | 1 line
- Committing some stuff I'm working on to make it so I can commit an upcoming patch from Dahlia. IM type stuff. No big deal, not done.

r4947 | justincc | 2008-05-31 19:02:20 -0700 (Sat, 31 May 2008) | 2 lines
- Move most bookending startup/shutdown messages to BaseOpenSimServer so they appear in non-console servers too

r4946 | justincc | 2008-05-31 18:34:46 -0700 (Sat, 31 May 2008) | 3 lines
- Fix build break by eliminating remaining IScenePermissions references - must remember to nant clean
- Hook all server startups into base opensim server startup method

r4945 | justincc | 2008-05-31 18:25:03 -0700 (Sat, 31 May 2008) | 2 lines
- Put IScenePermissions out of its misery

r4944 | justincc | 2008-05-31 18:22:19 -0700 (Sat, 31 May 2008) | 2 lines
- Move log version printing up into BaseOpenSimServer

r4943 | justincc | 2008-05-31 18:01:16 -0700 (Sat, 31 May 2008) | 2 lines
- Refactor: Split opensim background server into a separate class

r4942 | teravus | 2008-05-31 17:37:44 -0700 (Sat, 31 May 2008) | 1 line
- Updates permission module so that GenericCommunicationPermission returns true. Instant messages and inventory transfers use this, and it was always returning false.

r4941 | justincc | 2008-05-31 14:54:13 -0700 (Sat, 31 May 2008) | 2 lines
- Duh, actually returning from the CreateAsset method once we know the asset exists would be better than carrying on

r4940 | justincc | 2008-05-31 14:53:17 -0700 (Sat, 31 May 2008) | 2 lines
- Remove the mysql logging noise I accidentally left in a few commits ago

r4939 | justincc | 2008-05-31 14:48:14 -0700 (Sat, 31 May 2008) | 4 lines
- Enable loading of textures in OpenSimulator archives with load-oar/save-oar
- Right now, this only saves and reloads textures that have been applied to the entire prim (not ones which have been applied to individual faces).
- This is work in progress - it is currently experimental, hacky, inefficient, completely unsupported and liable to change rapidly at short notice :)

r4938 | justincc | 2008-05-31 14:44:57 -0700 (Sat, 31 May 2008) | 2 lines
- Change MySQL to check whether an asset already exists before inserting it into the database

r4937 | justincc | 2008-05-31 14:21:46 -0700 (Sat, 31 May 2008) | 2 lines
- minor: comment out old debugging messages in task inventory item restoration routines

r4936 | justincc | 2008-05-31 14:20:04 -0700 (Sat, 31 May 2008) | 3 lines
- Put in preparatory code to restore whole prim textures on archive load
- No user functionality yet

r4935 | ckrinke | 2008-05-31 13:47:14 -0700 (Sat, 31 May 2008) | 5 lines
Mantis#1428. Thank you kindly, fdg, for a patch that solves: When you copy an item in inventory and paste it, the name gets lost. Also, when you use "Save as" in the Appearance Editing window, the created item in inventory always has the name "New <item-type>", regardless of what you typed in as the name.

r4934 | justincc | 2008-05-31 13:35:12 -0700 (Sat, 31 May 2008) | 3 lines
- Make version information common to all servers
- Now all servers respond to the "show version" command on the console

r4933 | lbsa71 | 2008-05-31 13:01:09 -0700 (Sat, 31 May 2008) | 1 line
- Made UpdateUserCurrentRegion a bit more forgiving.
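Revisions r4938 and r4941 above together describe a check-before-insert pattern: test for an existing asset row, and return early instead of carrying on. The real change queries MySQL; the sketch below substitutes an in-memory dictionary for the database, and `ExistsAsset`/`CreateAsset` are only loosely modeled on the names in the log:

```csharp
using System;
using System.Collections.Generic;

class AssetStore
{
    private readonly Dictionary<Guid, byte[]> rows = new Dictionary<Guid, byte[]>();

    private bool ExistsAsset(Guid uuid)
    {
        // In the real module this would be a SELECT against the assets
        // table; a dictionary stands in for MySQL here.
        return rows.ContainsKey(uuid);
    }

    // Returns true if a new row was written, false if the asset was
    // already present. Returning early on the duplicate is the r4941
    // half of the fix; the existence check is the r4938 half.
    public bool CreateAsset(Guid uuid, byte[] data)
    {
        if (ExistsAsset(uuid))
            return false;
        rows[uuid] = data;
        return true;
    }
}
```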
r4932 | justincc | 2008-05-31 12:13:38 -0700 (Sat, 31 May 2008) | 3 lines
- Propagate OpenSimMain hack to stop mono-addins scanning warnings to the grid managing
- This hack just temporarily sends console output to /dev/null when we make the relevant addins calls, restoring it afterwards

r4931 | lbsa71 | 2008-05-31 11:48:45 -0700 (Sat, 31 May 2008) | 1 line
- Ignored some bins and gens

r4930 | lbsa71 | 2008-05-31 11:47:26 -0700 (Sat, 31 May 2008) | 1 line
- Enabled the Yield Prolog Script Engine

r4929 | justincc | 2008-05-31 11:43:19 -0700 (Sat, 31 May 2008) | 2 lines
- minor: Add copyright statement

r4928 | justincc | 2008-05-31 11:36:45 -0700 (Sat, 31 May 2008) | 2 lines
- Remove rogue ? to get things compiling again

r4927 | ckrinke | 2008-05-31 10:52:44 -0700 (Sat, 31 May 2008) | 5 lines
Mantis#1314. Thank you kindly, Kinoc, for YieldProlog. I have added everything *except* the patch to .../LSL/Compiler.cs. The Compiler.cs patch has a namespace issue. Let's make a second patch to close the gap.

r4926 | teravus | 2008-05-31 05:18:29 -0700 (Sat, 31 May 2008) | 4 lines
- Implements UserServer logoff in a few situations
- User tries to log in but is already logged in. UserServer will send a message to the simulator the user was in, to log the user out there.
- From the UserServer, admin types 'logoff-user firstname lastname message'.
- Some regions may not get the message because they're not updated yet.

r4925 | ckrinke | 2008-05-30 17:45:37 -0700 (Fri, 30 May 2008) | 2 lines
Mantis#1425. Thank you kindly, Melanie, for a patch that: 0001425: [PATCH] Correct llResetOtherScript() behavior in XEngine

r4924 | teravus | 2008-05-30 16:53:20 -0700 (Fri, 30 May 2008) | 1 line
- If you check fixed sun in the estate tools 'terrain tab', the sun will fix in the location you set. (However, the checkbox doesn't get re-populated properly yet, so it'll uncheck again even though the message got through to the server.)

r4923 | teravus | 2008-05-30 16:41:51 -0700 (Fri, 30 May 2008) | 1 line
- You can set the sun phase via the estate tools now. It doesn't persist across reboots though.

r4922 | justincc | 2008-05-30 11:32:18 -0700 (Fri, 30 May 2008) | 3 lines
- Hook up archive loading to load in prim xml data
- This now has equivalent functionality to load-xml2 - no asset data is restored yet

r4921 | justincc | 2008-05-30 11:01:28 -0700 (Fri, 30 May 2008) | 2 lines
- Refactor: Change multiple requests for a module interface to use a stored reference instead.

r4920 | justincc | 2008-05-30 10:52:14 -0700 (Fri, 30 May 2008) | 2 lines
- Crudely migrate SceneXmlLoader into the Serializer module

r4919 | ckrinke | 2008-05-30 09:37:17 -0700 (Fri, 30 May 2008) | 3 lines
Mantis#1422. Thank you kindly, Xantor, for your llLoopSound() patch, and I apologize for my confusion with the interim patch earlier.

r4918 | justincc | 2008-05-30 09:16:03 -0700 (Fri, 30 May 2008) | 2 lines
- Stop the IRC module throwing an nre on shutdown if it isn't actually being used

r4917 | justincc | 2008-05-30 09:08:28 -0700 (Fri, 30 May 2008) | 2 lines
- Successfully pick out prims.xml file from archive

r4916 | ckrinke | 2008-05-30 08:34:54 -0700 (Fri, 30 May 2008) | 6 lines
Mantis#1422. Thank you kindly, Xantor, for a patch that:
- volume doesn't change with a new llLoopSound(same sound, new volume);
- SendFullUpdateToClients sends 0's in all sound-related fields when there's no sound on the prim, thereby improving the amount of data being sent out on these prims (fixes zeropack)
- Removed some code duplication between llStartSound, llLoopSound and llParticleSystem() calls

r4915 | justincc | 2008-05-30 08:18:40 -0700 (Fri, 30 May 2008) | 3 lines
- Read all files from tar archive
- No reload functionality implemented yet

r4914 | drscofield | 2008-05-30 05:29:30 -0700 (Fri, 30 May 2008) | 16 lines
while investigating why IRCBridgeModule.Close() was having no effect, i noticed that Scene.Close() will only call Close on non-shared region modules. i've now added code to SceneManager.Close() to collect all shared region modules from each scene before calling Scene.Close() on it, and then, once all Scenes are closed, go through the list of collected shared region modules and close them as well. SceneManager.Close() is only called when we initiate a shutdown --- i've verified that a Scene restart does not trigger the shutdown of shared modules :-) also, this adds a couple of bug fixes to the IRCBridgeModule (which after all didn't take kindly to being closed) as well as a check to InterregionModule's Close() call. finally, this fixes the RestPlugin's XmlWriter so that it no longer includes the "xsd=..." and "xsi=..." junk.

r4913 | teravus | 2008-05-30 05:27:06 -0700 (Fri, 30 May 2008) | 1 line
- This is Melanie's XEngine script engine. I've not tested this real well; however, it's confirmed to compile, and OpenSimulator runs successfully without this script engine active.

r4912 | teravus | 2008-05-30 04:25:21 -0700 (Fri, 30 May 2008) | 2 lines
- Fixed a dangling event hook that I added.
- Added a non-finite avatar position reset. This will either handle the <0,0,0> avatar gracefully, or send the avatar to 127,127,127 if that also doesn't work. (I've only been able to reproduce this error once on my development workstation.)

r4911 | chi11ken | 2008-05-30 01:35:57 -0700 (Fri, 30 May 2008) | 1 line
Update svn properties. Formatting cleanup.

r4910 | drscofield | 2008-05-30 00:38:45 -0700 (Fri, 30 May 2008) | 6 lines
thanks krtaylor for a patch to clean up some incorrect parsing, boundary conditions and error checking in the llGetNotecardLine and llGetNumberOfNotecardLines functions.

r4909 | teravus | 2008-05-29 22:25:50 -0700 (Thu, 29 May 2008) | 1 line
- Added helper method to the Sun module to get the Linden hour based on the math in the sun module. This populates the sun phase slider on the terrain tab in the estate tools according to the current sun phase. Display purposes only for now. Need to go the other way for setting the sun phase based on the Linden hour in the estate tools.

r4908 | teravus | 2008-05-29 17:48:57 -0700 (Thu, 29 May 2008) | 1 line
- Updated sun module to only send sun updates to root agents. Because it was sending updates to both root and child agents, you'll still get sun jitter until this revision is adopted by every region nearby.

r4907 | teravus | 2008-05-29 16:36:37 -0700 (Thu, 29 May 2008) | 3 lines
- Caches UUIDName requests
- Looks up UUIDNames for script time and colliders in a separate thread.
- Hopefully this'll allow you to look at top scripts on a region that has a lot of scripts without crashing your client thread.

r4906 | teravus | 2008-05-29 13:50:38 -0700 (Thu, 29 May 2008) | 1 line
- Fixes a few taper/top-shear situations that were previously having issues.

r4905 | teravus | 2008-05-29 13:20:50 -0700 (Thu, 29 May 2008) | 4 lines
- Applying Dahlia's interim path curve patch. It adds initial support for some tori/ring parameters. Thanks Dahlia!
- Some situations do not match the client's render of the tori; we know and are working on it. This is an initial support patch, so expect it to not be exact.
- Some tapers are acting slightly odd. Will fix.

r4904 | ckrinke | 2008-05-29 12:09:21 -0700 (Thu, 29 May 2008) | 2 lines
Mantis#1416. Thank you very much, Melanie, for a patch that: Creates a method to find out if a prim inventory contains scripts

r4903 | teravus | 2008-05-29 09:36:11 -0700 (Thu, 29 May 2008) | 1 line
- Ruling out another potential cause of zombie-ism

r4902 | teravus | 2008-05-29 09:21:41 -0700 (Thu, 29 May 2008) | 3 lines
- Fix string literal with URL + LLcommand();
- Added 'detected around: value' when a x.Y detect occurs, to help debug.
- Fixed object text that is too long to store to the database (wikilith)

r4901 | drscofield | 2008-05-29 08:46:54 -0700 (Thu, 29 May 2008) | 3 lines
this is a snapshot of the OSHttpServer work-in-progress. it's an initial skeleton, far from complete; just want to check in early and often.

r4900 | sdague | 2008-05-29 08:01:26 -0700 (Thu, 29 May 2008) | 3 lines
attempting to get to the bottom of unresponsive grid servers by adding back in a few messages on exceptions.

r4899 | ckrinke | 2008-05-29 06:55:02 -0700 (Thu, 29 May 2008) | 3 lines
Mantis#1411. Thank you kindly for Dataserver.cs and a patch that adds a function stub to request region info by name, and adds llRequestSimulatorData() and the dataserver event

r4898 | drscofield | 2008-05-29 06:55:01 -0700 (Thu, 29 May 2008) | 2 lines
cleaning up returned XML REST doclet (no more xsi, xsd)

r4897 | ckrinke | 2008-05-29 06:42:29 -0700 (Thu, 29 May 2008) | 7 lines
Mantis#852. Thank you kindly, cmickeyb, for a patch that: There appears to be a problem with the mapping of scripts when an llHTTPRequest completes. CheckHttpRequests() looks for a function that maps to the localID associated with the http request. However, the only context in which it looks is that of the first region. That is, m_CmdManager.m_ScriptEngine.m_ScriptManager is the same no matter where the script executed that initiated the llHTTPRequest. Since scripts appear to be loaded into a region-specific scriptmanager on startup, the event handler is only found for requests coming from the first region.

r4896 | teravus | 2008-05-28 19:14:27 -0700 (Wed, 28 May 2008) | 2 lines
- Added a child agent check to the ChildAgentData update, to make sure that you're a child agent before applying the changes from the grid comms. Doing this to rule it out as a source of a few bugs, such as the Zombie bug and the Express Train to 0,0,0 bug.

r4895 | afrisby | 2008-05-28 16:52:24 -0700 (Wed, 28 May 2008) | 3 lines
- Fixed a slight issue with the LLRAW exporter.
- Linden uses a neutral height channel of 128.0 on their multiplier. OpenSimulator was using a neutral of 127.0; this has been changed to 128.0. This may cause files exported to the .RAW format to look slightly different when loaded back in - it is highly recommended to use the R32 format instead, which avoids these sorts of issues.
- Made a tweak to the Terrain Plugin loading process.

r4894 | mingchen | 2008-05-28 16:20:01 -0700 (Wed, 28 May 2008) | 1 line
- Added a few external checks relating to scripts, including the separation of runscript into 3 different situations (rez, start, stop)

r4893 | sdague | 2008-05-28 14:43:41 -0700 (Wed, 28 May 2008) | 11 lines
From: Kurt Taylor <krtaylor@us.ibm.com>
Attached is an initial implementation of llGetNotecardLine and llGetNumberOfNotecardLines. I decided to go ahead and send these out for comment while I continue to work on the second part of the proper implementation. These functions work and return the values requested, as initially defined in the code, but should be properly implemented to return the requested information via a dataserver event. This event will be added and these functions fixed and included in a second patch shortly.

r4892 | sdague | 2008-05-28 12:40:42 -0700 (Wed, 28 May 2008) | 2 lines
update the nhibernate inventory item base definition

r4891 | sdague | 2008-05-28 11:12:32 -0700 (Wed, 28 May 2008) | 4 lines
actually use the database_connect string for mysql. This means you can run all the OpenSimulator grid services without needing a mysql_connection.ini

r4890 | sdague | 2008-05-28 10:59:46 -0700 (Wed, 28 May 2008) | 2 lines
let Grid Servers specify a connect string in their configuration.

r4889 | justincc | 2008-05-28 10:56:00 -0700 (Wed, 28 May 2008) | 2 lines
- Minor: Another small log adjustment

r4888 | justincc | 2008-05-28 10:54:12 -0700 (Wed, 28 May 2008) | 2 lines
- Minor: Log message clean-up in archiver code

r4887 | justincc | 2008-05-28 10:49:34 -0700 (Wed, 28 May 2008) | 3 lines
- Put in stubs for "load-oar" command, including ultra-primitive temporary tar loading code
- Currently, as a test, this will successfully load only the first file of an opensim archive and do absolutely nothing with it

r4886 | sdague | 2008-05-28 10:35:34 -0700 (Wed, 28 May 2008) | 3 lines
spring cleaning: remove a bit of db4o grid server code that was still in tree.

r4885 | justincc | 2008-05-28 09:37:43 -0700 (Wed, 28 May 2008) | 4 lines
- Put textures into a separate assets/ directory in the opensim archive
- Fix nre where the asset couldn't be found
- Not ready yet

r4884 | sdague | 2008-05-28 08:02:04 -0700 (Wed, 28 May 2008) | 4 lines
remove terrain bloat: only keep the last terrain revision for mysql. For active terraformers this should return a lot of database space.

r4883 | sdague | 2008-05-28 07:57:24 -0700 (Wed, 28 May 2008) | 3 lines
remove an erroneous line that fetched the terrain table in a way that isn't actually used.

r4882 | sdague | 2008-05-28 07:47:33 -0700 (Wed, 28 May 2008) | 2 lines
fix

r4881 | ckrinke | 2008-05-28 07:03:08 -0700 (Wed, 28 May 2008) | 6 lines
Mantis#1406. Thank you kindly, Xantor, for a patch that: llLoopSound sends out one packet to clients in view, so it doesn't work anymore when clients enter later on, or the prim is modified in any way. Solution: Store sound data on the prim and send a full update instead. llStartSound and llLoopSound now accept both LLUUIDs to a sound as well as object inventory sound names. llStopSound clears prim data and sends a full update.

r4880 | ckrinke | 2008-05-28 06:56:15 -0700 (Wed, 28 May 2008) | 3 lines
Mantis#1398. Thank you kindly, cmickeyb, for a patch that: small patch to encode and send the outbound_body parameter in an http request. This enables post messages to send a body

r4879 | teravus | 2008-05-28 01:40:22 -0700 (Wed, 28 May 2008) | 2 lines
- Implements duplicate packet tracking. This virtually eliminates object duplication causing 2-3 duplicates, depending on the UDP connection quality. This also eliminates duplicated chat, etc.
- It's verbose currently, since this is new. You'll see: [CLIENT]: Warning Duplicate packet detected X Dropping. After this is sufficiently tested we'll remove that m_log.info line.

r4878 | chi11ken | 2008-05-27 20:44:49 -0700 (Tue, 27 May 2008) | 1 line
Formatting cleanup.

r4877 | ckrinke | 2008-05-27 19:47:24 -0700 (Tue, 27 May 2008) | 2 lines
Thank you kindly, Melanie, for a patch that fixes: When renaming items in task inventory, they become useless. Fix attached

r4876 | ckrinke | 2008-05-27 19:10:16 -0700 (Tue, 27 May 2008) | 5 lines
Thank you very much, ChrisIndigo, for a patch that: If a script updates an object to the same position or rotation offset, the object triggers an update and storage of the object. This becomes more prevalent in sensor and timer events, which may be firing frequently.
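The duplicate packet tracking introduced in r4879 above amounts to remembering recently seen packet sequence numbers per client and dropping resends. The log doesn't show how OpenSim bounds that memory; the sketch below uses one common approach, a fixed-size sliding window (class and member names are illustrative):

```csharp
using System;
using System.Collections.Generic;

// Remembers the last `capacity` packet sequence numbers so that resent
// UDP packets can be dropped instead of re-applied (which is what caused
// the 2-3x object duplication the r4879 log entry describes).
class PacketDupeTracker
{
    private readonly HashSet<uint> seen = new HashSet<uint>();
    private readonly Queue<uint> order = new Queue<uint>();
    private readonly int capacity;

    public PacketDupeTracker(int capacity) { this.capacity = capacity; }

    // Returns true if this sequence number was already processed recently;
    // otherwise records it and returns false.
    public bool IsDuplicate(uint sequence)
    {
        if (seen.Contains(sequence))
            return true;
        seen.Add(sequence);
        order.Enqueue(sequence);
        if (order.Count > capacity)       // evict the oldest entry so the
            seen.Remove(order.Dequeue()); // window stays bounded
        return false;
    }
}
```

The window size trades memory against how late a resend can arrive and still be recognized as a duplicate.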
r4875 | mingchen | 2008-05-27 19:07:43 -0700 (Tue, 27 May 2008) | 1 line
- Hiding the warnings about scanning assemblies when initialising

r4874 | ckrinke | 2008-05-27 19:06:56 -0700 (Tue, 27 May 2008) | 4 lines
Thank you, Grumly57, kindly for: This patch proposes a new function: osOpenRemoteDataChannel(key channelID) that allows opening an XMLRPC channel for the remote_data event. The difference is that the channelID can be customized instead of being randomly generated.

r4873 | ckrinke | 2008-05-27 19:00:43 -0700 (Tue, 27 May 2008) | 4 lines
Thank you kindly, Melanie, for a patch that adds a two-stage check. It seems there may be a race. For me, this patch, just as it is here, fixes it.

r4872 | teravus | 2008-05-27 18:47:33 -0700 (Tue, 27 May 2008) | 1 line
- Resolves comment removal in string literals in the LSL2CSConverter

r4871 | chi11ken | 2008-05-27 17:35:10 -0700 (Tue, 27 May 2008) | 1 line
Change a couple of Windows directory separators in the SVN module to be platform agnostic.

r4870 | chi11ken | 2008-05-27 17:26:00 -0700 (Tue, 27 May 2008) | 1 line
Update svn properties. Fix inconsistent newlines.

r4869 | justincc | 2008-05-27 16:29:59 -0700 (Tue, 27 May 2008) | 2 lines
- Include prims.xml file in archive

r4868 | justincc | 2008-05-27 16:20:53 -0700 (Tue, 27 May 2008) | 2 lines
- Add .jp2 extension to archived textures

r4867 | justincc | 2008-05-27 15:49:34 -0700 (Tue, 27 May 2008) | 4 lines
- Write prim archives out as v7 tar files temporarily for testing purposes - not even gzipping yet!
- Using hacked-up code to create the correct tar archive headers - this stuff should really go away again before too long
- No user functionality yet

r4866 | sdague | 2008-05-27 15:25:14 -0700 (Tue, 27 May 2008) | 3 lines
another take on the whole string cleansing, by adding specific poison keywords in foo.bar strings. Add items to the poison array to block them.
r4865 | afrisby | 2008-05-27 14:06:48 -0700 (Tue, 27 May 2008) | 2 lines
- Added new InstallPlugin interface to ITerrainModule.
- This is to allow other region modules to install Terrain Effects.

r4864 | teravus | 2008-05-27 12:07:57 -0700 (Tue, 27 May 2008) | 3 lines
- Revert last commit, as it opens sim owners up to all sorts of nasty scripts.
- If the regex that we're using isn't good enough, we really need to make it better.

r4863 | sdague | 2008-05-27 11:40:49 -0700 (Tue, 27 May 2008) | 3 lines
comment out the x.y security check in the script engine because it's so aggressive it blocks string = "", among other things.

r4862 | ckrinke | 2008-05-27 07:36:23 -0700 (Tue, 27 May 2008) | 2 lines
Thank you kindly, Melanie, for a patch that adds: GetSerializationData() and CreateFromData() methods

r4861 | justincc | 2008-05-27 07:21:32 -0700 (Tue, 27 May 2008) | 3 lines
- Implement asynchronous asset requests for archiving
- No user functionality yet

r4860 | ckrinke | 2008-05-27 06:40:00 -0700 (Tue, 27 May 2008) | 5 lines
Thank you very much, Xantor, for a patch that: If a request is made for an asset which is not in the cache yet, but has already been requested by something else, queue up the callbacks on that requester instead of swamping the asset server with multiple requests for the same asset.

r4859 | drscofield | 2008-05-27 06:16:44 -0700 (Tue, 27 May 2008) | 2 lines
fixes a CTB when IRCBridgeModule is not configured.

r4858 | drscofield | 2008-05-27 05:24:29 -0700 (Tue, 27 May 2008) | 3 lines
cleaning up: coding style guidelines violation in RestPlugin.cs. adding support for enabled = true|false for IRCBridgeModule

r4857 | drscofield | 2008-05-27 01:42:48 -0700 (Tue, 27 May 2008) | 2 lines
updating URL for LSL status.

r4856 | drscofield | 2008-05-27 01:21:59 -0700 (Tue, 27 May 2008) | 11 lines
I'm dropping the ISimChat interface, as that has now been replaced by EventManager events. also, i've added instructions to README.txt about running runprebuild.sh and on how to report bugs. plus some minor fixes (dropping m_log statement left over from debugging llOwnerSay, nicer catch of exception in IRCBridgeModule)

r4855 | afrisby | 2008-05-26 15:11:56 -0700 (Mon, 26 May 2008) | 1 line
- Assigns a random UUID to a region if the Sim UUID is null.

r4854 | afrisby | 2008-05-26 14:53:32 -0700 (Mon, 26 May 2008) | 1 line
- Potential fix for Mantis#167, 332 - MySQL Thread collision.

r4853 | afrisby | 2008-05-26 14:39:01 -0700 (Mon, 26 May 2008) | 1 line
- Patch from jhurliman - Implements a binary search in the LLRAW exporter, which dramatically speeds up exports.

r4852 | ckrinke | 2008-05-26 09:16:48 -0700 (Mon, 26 May 2008) | 2 lines
Thank you kindly, Melanie, for a patch for script reset that creates the event handler chain ready to hook by script engines

r4851 | drscofield | 2008-05-26 08:53:04 -0700 (Mon, 26 May 2008) | 2 lines
disabling m_log again.

r4850 | drscofield | 2008-05-26 08:37:31 -0700 (Mon, 26 May 2008) | 3 lines
This cleans up a merge mess from the earlier checkin and implements llOwnerSay() via the newly created Scene.SimBroadcast() call.

r4849 | drscofield | 2008-05-26 04:56:04 -0700 (Mon, 26 May 2008) | 10 lines
Adding OnChatBroadcast event logic to EventManager, providing a clean interface for Sim broadcasts. Added SimBroadcast support to ChatModule. Removing all code from IRCBridgeModule dealing with agent/client directly. Cleaning up ChatModule. Polishing IRC messages, adding support for "/me" (both directions).
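The asset-cache change in r4860 above is a classic request-coalescing pattern: when an asset is already in flight, later callers are queued on the existing request instead of hitting the asset server again. A minimal sketch of that idea (the class and method names are illustrative, not the OpenSim asset-cache API):

```csharp
using System;
using System.Collections.Generic;

class AssetRequestCoalescer
{
    private readonly Dictionary<Guid, List<Action<byte[]>>> pending =
        new Dictionary<Guid, List<Action<byte[]>>>();
    private readonly object sync = new object();

    // How many requests actually went to the asset server.
    public int ServerFetches { get; private set; }

    public void RequestAsset(Guid id, Action<byte[]> callback)
    {
        lock (sync)
        {
            List<Action<byte[]>> waiters;
            if (pending.TryGetValue(id, out waiters))
            {
                waiters.Add(callback);   // piggyback on the in-flight request
                return;
            }
            pending[id] = new List<Action<byte[]>> { callback };
            ServerFetches++;             // only the first caller hits the server
        }
    }

    // Called when the asset server replies: fire every queued callback once.
    public void OnAssetReceived(Guid id, byte[] data)
    {
        List<Action<byte[]>> waiters;
        lock (sync)
        {
            if (!pending.TryGetValue(id, out waiters)) return;
            pending.Remove(id);
        }
        foreach (var cb in waiters) cb(data);
    }
}
```

However many scripts or archiver threads ask for the same texture at once, the server sees one fetch and every caller still gets its callback.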
r4848 | justincc | 2008-05-25 19:17:03 -0700 (Sun, 25 May 2008) | 2 lines
- Minor: method documentation fiddling in SceneObjectGroup

r4847 | justincc | 2008-05-25 19:12:32 -0700 (Sun, 25 May 2008) | 3 lines
- Break out baby archiving code into a separate class, ready for async asset requesting
- No user functionality yet

r4846 | justincc | 2008-05-25 18:50:40 -0700 (Sun, 25 May 2008) | 3 lines
- Extract and boil down necessary texture UUIDs for an archive of the scene prims
- no user functionality yet

r4845 | justincc | 2008-05-25 18:06:50 -0700 (Sun, 25 May 2008) | 2 lines
- Refactor: Where possible, change visibility on InnerScene methods to protected internal, on the basis that they shouldn't be manipulated by outsiders

r4844 | justincc | 2008-05-25 17:47:36 -0700 (Sun, 25 May 2008) | 2 lines
- Refactor: remove code duplication between add ScenePresence methods in InnerScene

r4843 | justincc | 2008-05-25 17:38:04 -0700 (Sun, 25 May 2008) | 2 lines
- Refactor: Separate out RemoveScenePresence and add into InnerScene to match existing AddScenePresence

r4842 | chi11ken | 2008-05-25 16:27:38 -0700 (Sun, 25 May 2008) | 1 line
Update svn properties. Formatting cleanup.

r4841 | teravus | 2008-05-25 13:50:45 -0700 (Sun, 25 May 2008) | 2 lines
- A hacky Top Scripts display. It isn't accurate as far as ms accounting; however, you can use it to help find out what scripts are causing your simulator to cry.
- Access it from the Estate tools/Debug tab.

r4840 | ckrinke | 2008-05-25 12:29:25 -0700 (Sun, 25 May 2008) | 4 lines
Thank you very much, Melanie, for a patch that: If the m_controllingClient member of a ScenePresence is null, that would cause a CTB. This patch fixes it.

r4839 | ckrinke | 2008-05-25 12:26:21 -0700 (Sun, 25 May 2008) | 7 lines
Thank you very much, Xantor, for a patch that: Copying, resetting and dragging scripts cause unnecessary recompilation, slowing down the simulator and filling up the ScriptEngines directory with compiled .dll and misc. files. This patch keeps track of compiled assets since the last simulator restart, and only recompiles new assets. (Editing a script generates a new asset, so no problems there.)

r4838 | ckrinke | 2008-05-25 12:21:21 -0700 (Sun, 25 May 2008) | 3 lines
Thank you kindly, Tiffany, for a patch that helps: Drag-copy a prim, and the prim that is moved persists. The prim that is created does not survive a restart.

r4837 | ckrinke | 2008-05-25 10:58:10 -0700 (Sun, 25 May 2008) | 2 lines
Thank you kindly, Grumly57, for a patch to improve XMLRPCModule.cs: RemoteDataReply() and XMLRpcResponse()

r4836 | teravus | 2008-05-25 04:22:05 -0700 (Sun, 25 May 2008) | 1 line
- Adds Top Colliders when using ODE. Access it from the estate tools/debug tab.

r4835 | teravus | 2008-05-24 21:15:32 -0700 (Sat, 24 May 2008) | 1 line
- phantom sculpties don't request the sculpt texture anymore.

r4834 | teravus | 2008-05-24 19:56:00 -0700 (Sat, 24 May 2008) | 1 line
- Yet another way to optimize the sculpt mesh generator

r4833 | teravus | 2008-05-24 19:50:17 -0700 (Sat, 24 May 2008) | 1 line
- kill a potentially large float array.

r4832 | teravus | 2008-05-24 19:39:58 -0700 (Sat, 24 May 2008) | 1 line
- Releases pinned vertex/index list in ODE on next mesh request.

r4831 | justincc | 2008-05-24 18:09:14 -0700 (Sat, 24 May 2008) | 4 lines
- Disabling isSelected check on object persistence backup (at least temporarily), since it appears we sometimes either don't receive or don't register deselect packets when prims are shift-copied.
- A better long-term solution may be to address the problem of why we're not always seeing the deselects

r4830 | justincc | 2008-05-24 17:09:08 -0700 (Sat, 24 May 2008) | 2 lines
- Refactor: Collapses parts of different code paths in scene used when deleting and unlinking an object

r4829 | justincc | 2008-05-24 16:11:07 -0700 (Sat, 24 May 2008) | 3 lines
- Refactor: Collapse some multiple remove-object paths
- Push some delete functionality into InnerScene to match what's already there for adding objects

r4828 | justincc | 2008-05-24 15:48:21 -0700 (Sat, 24 May 2008) | 2 lines
- Refactor: Remove some unused methods in Scene/InnerScene

r4827 | justincc | 2008-05-24 15:45:13 -0700 (Sat, 24 May 2008) | 2 lines
- Refactor: Change the previous commit's Object methods to SceneObject methods instead, on the basis that this is less likely to cause confusion with C#'s base object type

r4826 | justincc | 2008-05-24 15:10:14 -0700 (Sat, 24 May 2008) | 2 lines
- Refactor: Renaming various *Entity*() methods to *Object*() methods, on the basis that they all take SOG parameters, to improve code readability for now

r4825 | justincc | 2008-05-24 14:57:00 -0700 (Sat, 24 May 2008) | 2 lines
- Refactor: Push some dictionary initialization down from Scene into InnerScene

r4824 | justincc | 2008-05-24 14:36:27 -0700 (Sat, 24 May 2008) | 2 lines
- Refactor: Make some inner scene dictionaries internal rather than public

r4823 | teravus | 2008-05-24 14:13:44 -0700 (Sat, 24 May 2008) | 2 lines
- Fixes endless loop in the Land Module when selecting any object.
- Fixes returning objects when the object owner hasn't been in the simulator since the simulator last started up.

r4822 | justincc | 2008-05-24 12:21:57 -0700 (Sat, 24 May 2008) | 4 lines
- Get the xml2 entities serialization representation in the archiver module
- Not yet reusing the serialization module - this will happen in the future
- No user functionality yet

r4821 | justincc | 2008-05-24 11:27:57 -0700 (Sat, 24 May 2008) | 2 lines
- If the SVN build version is not available, state this in the About box explicitly, rather than leaving it out completely and possibly engendering confusion

r4820 | justincc | 2008-05-24 11:21:28 -0700 (Sat, 24 May 2008) | 2 lines
- Bump reported svn trunk revision number up to 0.5.7

r4819 | justincc | 2008-05-24 11:17:31 -0700 (Sat, 24 May 2008) | 3 lines
- Temporary fix for mantis 1374
- If the agent throttle byte array is unexpectedly empty, then log a warning and drop the packet
http://opensimulator.org/wiki/0.5.8-release
I admit I'm an almost total Ruby / RubyMine newbie, so maybe my expectations are out of whack here, but from what I have read this seems weird, and I could do with some feedback from more knowledgeable folks...

My environment:
- OS X 10.6.8
- Ruby 1.8.7
- RubyMine 3.1 (and also tried with later versions just in case, right up to 4.5)

I'm trying to play around with the Gosu game development library () and so to start with installed the corresponding gem with:

sudo gem install gosu

That seems to have worked. I can follow along with their tutorial materials with no bother using vim and the programs I write run just fine. The thing that is puzzling me is that RubyMine doesn't seem to recognize this gem, or perhaps I misunderstand the extent to which it can recognize and offer support (e.g. autocomplete) for a gem. For example... consider the following code:

require 'rubygems'
require 'gosu'

class Tutorial < Gosu::Window
  def initialize
    super( 640, 480, false )
    self.caption = "Gosu Tutorial Game"
    @background_image = Gosu::Image.new( self, "media/Space.png", true )
  end

This (and more) will run just fine (both from RubyMine or the command line). But the Gosu in Gosu::Window is underlined and the message when I hover over is "Cannot find Gosu". Presumably because of this failure to recognize the Gosu::Window, the caption in self.caption is underlined with the message upon hovering over of "Cannot find 'caption=' for type 'Tutorial'".

Lastly, when I try and instantiate the new image with Gosu::Image.new( self, "media/Space.png", true ) the Gosu of Gosu::Image is underlined again with a "Cannot find Gosu" message.

The gem is properly "attached" when I go look at Ruby SDK and Gems in Settings. I've tried detaching and reattaching a few times.

So I suppose my main question is, am I wrong to expect it to recognize Gosu, and have an understanding of inherited methods/properties that come from the Tutorial class extending Gosu::Window?
Should it not be able to autocomplete identifiers from the Gosu library? I kind of imagine it should, but simply cannot seem to get it to do so. Any explanation or suggestions would be very welcome!
Thanks, Jon

Hi Jon, do you use bundler? And if yes then what do you have in your Gemfile?
Oleg.

Hi Oleg and thanks for your reply. I don't believe bundler is involved -- I just created a new empty project, added a single "tutorial.rb", attached the gosu gem and started working.
Jon

I've filed a bug () about the problem. It should be fixed in RM 4.5.4. In the meantime you can use bundler to work around the problem.
Regards, Oleg.

Awesome, using Bundler seems to work. Thanks v. much!
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206069749-RubyMine-and-Gosu-game-development-library
Ticket #23 (closed enhancement: fixed)

Dictionary output not sorted.

Description

It is convenient to have YAML's output be consistent across runs. Some representers in PyYAML do this already, but the dictionaries do not. I implemented a quick fix by replacing represent_dict with:

def represent_dict(self, data):
    items = data.items()
    items.sort()
    return self.represent_mapping(u'tag:yaml.org,2002:map', items)

It might also be convenient to have custom sort orders as PyYAML legacy does, but that's a bigger change.

Change History
Fixed in [222]. Now represent_mapping converts a dictionary to a list of pairs and sorts it. The easiest way to have custom sort is to define a custom representer and pass a suitably sorted list of pairs to represent_mapping.
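The closing comment's suggestion can be sketched like this — a hypothetical custom representer (the function name and the reverse ordering are just illustrative stand-ins for "any custom order"); when represent_mapping receives a pre-built list of pairs rather than a dict, it keeps the list's order:

```python
import yaml

def reverse_sorted_dict(dumper, data):
    # Pre-sort the items ourselves; represent_mapping keeps the order of a
    # list of (key, value) pairs as-is, so any custom sort order survives.
    items = sorted(data.items(), reverse=True)
    return dumper.represent_mapping('tag:yaml.org,2002:map', items)

yaml.add_representer(dict, reverse_sorted_dict)

# Keys now come out in the custom (here: reverse) order.
print(yaml.dump({'a': 1, 'c': 3, 'b': 2}, default_flow_style=False))
```

The same pattern works for any ordering — swap the `sorted(..., reverse=True)` call for whatever comparison the legacy behaviour needs.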
http://pyyaml.org/ticket/23
Hello everyone, I'm just returning to C++ after a (very) long break. Ignore the form and purpose of this program, it's just an experiment to get me reacquainted with the language. I put my problem in the comments... thanks in advance.

// Trying to get back into C++.
// It has definitely been a WHILE... so bear
// with me.
// The question I have is in the global setData function.

#include <iostream>
#include <string>

using namespace std;

const string endr = "\n\n";

//--------------
// Entry class...
//--------------
class Entry
{
public:
    Entry(string NAME = "<undef>", string AGE = "<undef>", string JOB = "<undef>");
    void set(string NAME, string AGE, string JOB);
    void display();

private:
    string name;
    string age;
    string job;
};

Entry::Entry(string NAME, string AGE, string JOB)
{
    name = NAME;
    age = AGE;
    job = JOB;
}

void Entry::display()
{
    cout << "Name: " << name << endl;
    cout << " Age: " << age << endl;
    cout << " Job: " << job << endr;
}

void Entry::set(string NAME, string AGE, string JOB)
{
    name = NAME;
    age = AGE;
    job = JOB;
}
//------------
// end Entry class
//------------

//------------
// globals...
//------------
void setData(Entry *entry, string NAME, string AGE, string JOB)
{
    entry->set(NAME, AGE, JOB);
    // ^ Why does that work instead of *entry->set(NAME, AGE, JOB)?
    // I always thought just "entry" would refer to the address rather than
    // the data at the address. This works, but I don't understand why!
}
//------------
// end globals
//------------

int main()
{
    Entry joe;
    setData(&joe, "Joseph Moore", "20", "Freeloader");
    joe.display();

    return 0;
}
https://www.daniweb.com/programming/software-development/threads/79278/why-does-this-work
T5 for multi-task QA and QG

This is a multi-task t5-small model trained for question answering and answer aware question generation tasks.

For question generation the answer spans are highlighted within the text with special highlight tokens ( <hl>) and prefixed with 'generate question: '. For QA the input is processed like this

question: question_text context: context_text </s>

You can play with the model using the inference API. Here's how you can use it

For QG

generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>

For QA

question: What is 42 context: 42 is the answer to life, the universe and everything. </s>

For more details see this repo.

Model in action 🚀

You'll need to clone the repo.

from pipelines import pipeline

nlp = pipeline("multitask-qa-qg")

# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]

# for qa pass a dict with "question" and "context"
nlp({
    "question": "What is 42 ?",
    "context": "42 is the answer to life, the universe and everything."
})
=> 'the answer to life, the universe and everything'
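If you would rather not clone the repo, the input formats above can be built by hand and fed to the model through the standard transformers API. This is a sketch: the helper names (qg_prompt, qa_prompt, generate) are mine, not part of the model card, and calling generate() downloads the checkpoint.

```python
MODEL_ID = "valhalla/t5-small-qa-qg-hl"

def qg_prompt(text, answer):
    # Highlight the first occurrence of the answer span with <hl> tokens and
    # add the task prefix, following the input format described above.
    highlighted = text.replace(answer, "<hl> " + answer + " <hl>", 1)
    return "generate question: " + highlighted + " </s>"

def qa_prompt(question, context):
    # QA input format from the card: "question: ... context: ... </s>"
    return "question: " + question + " context: " + context + " </s>"

def generate(prompt):
    # Requires `pip install transformers torch`; downloads the checkpoint.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_length=64)
    return tok.decode(out[0], skip_special_tokens=True)

print(qg_prompt("42 is the answer to life, the universe and everything.", "42"))
```

In practice the repo's pipelines wrapper does this prompt construction (plus answer extraction) for you; the sketch just makes the input format explicit.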
https://huggingface.co/valhalla/t5-small-qa-qg-hl
Where are we at?

This is what we did so far:

- In part 0, we downloaded our data from MovieLens, did some EDA and created our user item matrix. The matrix has 671 unique users, 9066 unique movies and is 98.35% sparse
- In part 1, we described 3 of the most common recommendation methods: User Based Collaborative Filtering, Item Based Collaborative Filtering and Matrix Factorization
- In part 2, we implemented Matrix Factorization through ALS and found similar movies
- In part 3, this part, we recommend movies to users based on what movies they've rated. We also make an attempt to clone Netflix's "because you watched X" feature and make a complete page recommendation with trending movies

Recommending Movies to users

We pick up our code where we trained the ALS model from the implicit library. Previous code to load and process the data can be found in the previous posts in this series or on my Github.

model = implicit.als.AlternatingLeastSquares(factors=10, iterations=20,
                                             regularization=0.1, num_threads=4)
model.fit(user_item.T)

First let's write a function that returns the movies that a particular user had rated:

def get_rated_movies_ids(user_id, user_item, users, movies):
    """
    Input
    -----
    user_id: int
        User ID
    user_item: scipy.sparse matrix
        User item interaction matrix
    users: np.array
        Mapping array between user ID and index in the user item matrix
    movies: np.array
        Mapping array between movie ID and index in the user item matrix

    Output
    -----
    movieTableIDs: python list
        List of movie IDs that the user had rated
    """
    user_id = users.index(user_id)

    # Get matrix ids of rated movies by selected user
    ids = user_item[user_id].nonzero()[1]

    # Convert matrix ids to movie IDs
    movieTableIDs = [movies[item] for item in ids]

    return movieTableIDs

movieTableIDs = get_rated_movies_ids(1, user_item, users, movies)
rated_movies = pd.DataFrame(movieTableIDs, columns=['movieId'])
rated_movies

def get_movies(movieTableIDs, movies_table):
    """
    Input
    -----
    movieTableIDs: python list
        List of movie IDs that the user had rated
    movies_table: pd.DataFrame
        DataFrame of movies info

    Output
    -----
    rated_movies: pd.DataFrame
        DataFrame of rated movies
    """
    rated_movies = pd.DataFrame(movieTableIDs, columns=['movieId'])
    rated_movies = pd.merge(rated_movies, movies_table, on='movieId', how='left')

    return rated_movies

movieTableIDs = get_rated_movies_ids(1, user_item, users, movies)
df = get_movies(movieTableIDs, movies_table)
df

Now, let's recommend movie IDs for a particular user ID based on the movies that they rated.

def recommend_movie_ids(user_id, model, user_item, users, movies, N=5):
    """
    Input
    -----
    user_id: int
        User ID
    model: ALS model
        Trained ALS model
    user_item: scipy.sparse matrix
        User item interaction matrix so that we do not recommend already rated movies
    users: np.array
        Mapping array between User ID and user item index
    movies: np.array
        Mapping array between Movie ID and user item index
    N: int (default = 5)
        Number of recommendations

    Output
    -----
    movies_ids: python list
        List of movie IDs
    """
    user_id = users.index(user_id)
    recommendations = model.recommend(user_id, user_item, N=N)
    recommendations = [item[0] for item in recommendations]
    movies_ids = [movies[ids] for ids in recommendations]

    return movies_ids

movies_ids = recommend_movie_ids(1, model, user_item, users, movies, N=5)
movies_ids
> [1374, 1127, 1214, 1356, 1376]

movies_rec = get_movies(movies_ids, movies_table)
movies_rec
display_posters(movies_rec)

movies_ids = recommend_movie_ids(100, model, user_item, users, movies, N=7)
movies_rec = get_movies(movies_ids, movies_table)
display_posters(movies_rec)

Because You Watched

Let's implement Netflix's "Because You Watched" feature. It's about recommending movies based on what you've watched. This is similar to what we already did, but this time, it's more selective. Here's how we will do it: We will choose 5 random movies that a user had watched and for each movie recommend similar movies to it. Finally, we display all of them in a one page layout.

def similar_items(item_id, movies_table, movies, N=5):
    """
    Input
    -----
    item_id: int
        MovieID in the movies table
    movies_table: DataFrame
        DataFrame with movie ids, movie title and genre
    movies: np.array
        Mapping between movieID in the movies_table and id in the item user matrix
    N: int
        Number of similar movies to return

    Output
    -----
    df: DataFrame
        DataFrame with selected movie in first row and similar movies for N next rows
    """
    # Get movie user index from the mapping array
    user_item_id = movies.index(item_id)

    # Get similar movies from the ALS model
    similars = model.similar_items(user_item_id, N=N+1)

    # ALS similar_items provides (id, score), we extract a list of ids
    l = [item[0] for item in similars[1:]]

    # Convert those ids to movieID from the mapping array
    ids = [movies[ids] for ids in l]

    # Make a DataFrame of the movieIds
    ids = pd.DataFrame(ids, columns=['movieId'])

    # Add movie title and genres by joining with the movies table
    recommendation = pd.merge(ids, movies_table, on='movieId', how='left')

    return recommendation

def similar_and_display(item_id, movies_table, movies, N=5):
    df = similar_items(item_id, movies_table, movies, N=N)
    df.dropna(inplace=True)
    display_posters(df)

import random  # needed for random.sample below

def because_you_watched(user, user_item, users, movies, k=5, N=5):
    """
    Input
    -----
    user: int
        User ID
    user_item: scipy sparse matrix
        User item interaction matrix
    users: np.array
        Mapping array between User ID and user item index
    movies: np.array
        Mapping array between Movie ID and user item index
    k: int
        Number of recommendations per movie
    N: int
        Number of movies already watched chosen
    """
    movieTableIDs = get_rated_movies_ids(user, user_item, users, movies)
    df = get_movies(movieTableIDs, movies_table)

    movieIDs = random.sample(list(df.movieId), N)

    for movieID in movieIDs:
        title = df[df.movieId == movieID].iloc[0].title
        print("Because you've watched ", title)
        similar_and_display(movieID, movies_table, movies, k)

because_you_watched(500, user_item, users, movies, k=5, N=5)

Trending movies

Let's also implement trending movies. In our context, trending movies are movies that have been rated the most by users.

def get_trending(user_item, movies, movies_table, N=5):
    """
    Input
    -----
    user_item: scipy sparse matrix
        User item interaction matrix to use to extract popular movies
    movies: np.array
        Mapping array between movieId and ID in the user_item matrix
    movies_table: pd.DataFrame
        DataFrame for movies information
    N: int
        Top N most popular movies to return
    """
    binary = user_item.copy()
    binary[binary != 0] = 1

    populars = np.array(binary.sum(axis=0)).reshape(-1)

    # Map the most-rated matrix indices back to movie IDs before looking up titles
    ids = populars.argsort()[::-1][:N]
    movieIDs = [movies[i] for i in ids]

    movies_rec = get_movies(movieIDs, movies_table)
    movies_rec.dropna(inplace=True)

    print("Trending Now")
    display_posters(movies_rec)

get_trending(user_item, movies, movies_table, N=6)
Trending Now

Page recommendation

Let's put everything in a timeline method. The timeline method will get the user ID and display trending movies and recommendations based on similar movies that that user had watched.

def my_timeline(user, user_item, users, movies, movies_table, k=5, N=5):
    get_trending(user_item, movies, movies_table, N=N)
    because_you_watched(user, user_item, users, movies, k=k, N=N)

my_timeline(500, user_item, users, movies, movies_table, k=5, N=5)
Trending Now

Export trained models to be used in production

At this point, we want to get our model into production. We want to create a web service where a user will provide a user ID to the service and the service will return all of the recommendations including the trending and the "because you've watched". To do that, we first export the trained model and the used data for use in the web service.

import scipy.sparse

scipy.sparse.save_npz('model/user_item.npz', user_item)
np.save('model/movies.npy', movies)
np.save('model/users.npy', users)
movies_table.to_csv('model/movies_table.csv', index=False)

from sklearn.externals import joblib
joblib.dump(model, 'model/model.pkl')

Conclusion

In this post, we recommended movies to users based on their movie rating history. From there, we tried to clone the "because you watched" feature from Netflix and also display trending movies as movies that were rated the most number of times. In the next post, we will try to put our work in a web service, where a user requests movie recommendations by providing their user ID. Stay tuned!
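As a quick sanity check on the export step, the saved artifacts can be loaded back with the mirror-image calls. This is a minimal sketch, assuming the same model/ directory layout used above; note that on newer scikit-learn versions joblib is imported directly rather than from sklearn.externals.

```python
def load_artifacts(model_dir="model"):
    # Mirror of the save calls above; imports are local so the function only
    # needs scipy/pandas/joblib when it is actually called in the service.
    import joblib  # on older scikit-learn: from sklearn.externals import joblib
    import numpy as np
    import pandas as pd
    import scipy.sparse

    user_item = scipy.sparse.load_npz(model_dir + "/user_item.npz")
    movies = list(np.load(model_dir + "/movies.npy", allow_pickle=True))
    users = list(np.load(model_dir + "/users.npy", allow_pickle=True))
    movies_table = pd.read_csv(model_dir + "/movies_table.csv")
    model = joblib.load(model_dir + "/model.pkl")
    return model, user_item, users, movies, movies_table
```

The mapping arrays are converted back to lists so the .index() lookups in the functions above keep working.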
https://hackernoon.com/metflix-because-you-watched-x-26fe9bdfca7f
jEdit 4.1pre9 is now available from <>.

jEdit 4.1final is getting very, very close. If you currently use jEdit 4.0, please try out 4.1.

Thanks to Chris Petersen, Iain Hewson, Kris Kopicki, Randolf Mock and Reinout van Schouwen for contributing to this release.

+ Syntax Highlighting Changes:

- Added gettext mode (primarily used on Unix for translating programs to other languages) (Reinout van Schouwen).
- Added APDL mode (Randolf Mock).
- Updated CSS, PHP, Perl syntax highlighting (Chris Petersen).

+ Miscellaneous Changes:

- Global options dialog box remembers its size and position now.
- Much faster auto indent.
- Prefix keys (C+e, C+e n, C+m, C+r) now show a status bar message.
- Added a "Docking Options" item to the docking button's right-click

+ Installer Changes:

- On Unix systems, a jedit.1 manual page is now installed.

+ Bug Fixes:

- Saving a file from one file system to another using the "Save a Copy As" command could cause problems if the destination VFS implemented the _saveComplete() method. This method was only passed a buffer instance, and there was no way to obtain the destination path.
- Making changes above the first line did not scroll the text area.
- Fixed a bug with nested compound edits. I'm not sure if it ever caused any problems, however it did affect performance.
- More fixes for non-standard mouse button handling under MacOS X (Kris Kopicki).
- Fixed possible incorrect enabling and disabling of controls, and an ArrayIndexOutOfBoundsException when selecting the last edit mode in the Editing option pane's list.
- Fixed possible ArrayIndexOutOfBoundsException while scrolling in a text area that was narrowed to a specific region in the buffer.
- Fixed problem with Gutter marker tooltips not showing up sometimes if the text area was scrolled horizontally.
- JEditTextArea.getPhysicalLineOfScreenLine() was broken. This caused problems for JDiff.
- Fixed problem when scrolling with soft wrap where the last visible line would get clipped.
- The jeditshell directory was missing from the source download.
- Plugin properties would override site properties.
- Various modes were missing a "lineUpClosingBracket" property.
- Editing option pane's "Use default settings" check box was broken.
- Fixed an auto-indent bug that would give lines incorrect indent if they contained both a closing and an opening bracket.
- Fixed a problem with undo -- after undoing the full undo queue, further edits were not being added. This problem has been there since the 4.0 pre-releases, damn.
- Fixed problem with grabbing keystrokes with the Shift modifier set and all other modifiers unset (Iain Hewson).

+ API Changes:

- VFS._saveComplete(Object session, Buffer buffer, Component comp) is now VFS._saveComplete(Object session, Buffer buffer, String path, Component comp).

+ API Additions:

- HyperSearchResults.getTree() method added.

-- Slava Pestov

Hi all-

The plugin update announcement incorrectly stated that IRC 1.9.1 required jEdit 4.0final, but it really requires jEdit 4.1pre5. It was built against the correct version, but I got confused by conflicting information in the messages I received containing the release details (one message said "same requirements as 1.9", the other said "4.1pre5"). Apologies to anyone who downloaded it with jEdit 4.0x.

-md

Greetings jEdit users-

Today, I have released a new batch of plugins to Plugin Central, including one new plugin (EBrowse) and five updates to old plugins. These plugins are all for jEdit 4.1, with the exception of AntelopePlugin 2.25, which works with either jEdit 4.0 or 4.1.
* AntelopePlugin 2.25: bug fixes; feature enhancements; requires jEdit 4.0pre1, Console 3.3, ErrorList 1.2, CommonControls 0.7, and JDK 1.4

* EBrowse 0.5.3: initial Plugin Central release; requires jEdit 4.1pre1, Console 3.3, and JDK 1.3

* IRC 1.9.1: changed default server to irc.freenode.net; requires jEdit 4.0final and JDK 1.3

* ProjectViewer 1.0.6: jEdit 4.1 look & feel enhancements; bug fixes; small feature enhancements; requires jEdit 4.1pre1 and JDK 1.3

* XML 0.10: includes Xerces 2.2.1; element completion supports namespaces and schemas (schema support is still somewhat experimental and incomplete); completion popups now include entries for inserting comments and <![CDATA[ sections; the entity resolver now uses the VFS API, so it should be possible to download DTDs from FTP servers that require authentication, etc.; the XHTML 1.1 DTD is now bundled; bug fixes; requires jEdit 4.1pre5, ErrorList 1.2, and JDK 1.3

* XSLT 0.4.1: stylesheet settings can now be saved and loaded (the settings are saved to file in an Ant build format which can be run with Ant independently of the XSLT plugin); stylesheet parameters now definable at the GUI; improved DTD handling, now uses same entity resolution mechanism as the XML plugin; fixed bug running XPath on remote files; enforce user definition of source, stylesheet and result settings; fixed bug running XPath on files containing CDATA; fixed bug that happened when running transformation with no stylesheet parameters defined; requires jEdit 4.1pre5, XML 0.10, and JDK 1.3; includes Xalan 2.4.1

Some of you may have expected FTP 0.5 to be in this batch, but it was delayed due to dependency problems with the SSHTools library used by the new SFTP functionality (it depends on the non-redistributable JAXB extension from Sun). Expect FTP 0.5 to be released once this can be resolved.

-md

Hello jEdit aficionados-

This evening, I have released the latest batch of jEdit plugins.
This release consists of one new plugin (HeadlinePlugin 1.0.2) and five updates. Except for Console 3.4, all of these plugins work with both jEdit 4.0 and 4.1.

* BufferTabs 0.7.8: fixed a bug where the pop-up menu was not displayed on OS X (right click will only work in jEdit 4.1pre8 and newer); fixed a bug where the path was not updated after doing "Save As"; delayed pop-up menu creation; requires jEdit 3.2final and JDK 1.3

* CommonControls 0.7: new version of kappalayout; requires jEdit 4.0pre8 and JDK 1.3

* Console 3.4: built-in command name completion now supported; file name and command completion can now be used in the console tool bar; "cd -" now goes to the last visited directory; updated for jEdit 4.1 icon theme; on Windows NT/2000/XP, all commands are now run through cmd.exe (this allows batch files to be run by entering their name); when running on Windows ME, child processes should now inherit jEdit's environment variables (there is still no way to change them from within jEdit though); added getenv() and setenv() BeanShell commands for obtaining and changing environment variables; %run can now run scripts in any scripting language supported by an installed plugin, not just BeanShell; various bug fixes; requires jEdit 4.1pre8, ErrorList 1.2, and JDK 1.3

* HeadlinePlugin 1.0.2: initial Plugin Central release; requires jEdit 4.0.3 and JDK 1.4

* PrologConsole 0.3: added lazy initialization of the Prolog engine, in order to lighten the burden on the jEdit startup time; the Console text area is now enabled to display the engine's spy information when solving a Prolog goal (see the use of the spy/0 and nospy/0 predicates in the tuProlog documentation); added a missing docs property in the PrologPlugin.props file; requires jEdit 4.0final, Console 3.2, and JDK 1.3; includes tuProlog 1.2.0

* XSLT 0.3.2: fixed transformation bug due to missing property in the XSLT.props file; fixed some indent problems related to handling tab characters (Robert Fletcher);
requires jEdit 4.0pre7, XML 0.8.1, and JDK 1.3

-md

jEdit 4.1pre8 is now available from <>.

Thanks to Carmine Lucarelli, Chris Petersen, Jonathan Revusky, Kris Kopicki, Silas Smith and Will Varfar for contributing to this release.

+ Syntax Highlighting Changes:

- Icon syntax highlighting (Silas Smith).
- Redcode syntax highlighting (Will Varfar).
- Minor updates to JavaScript, PHP and ShellScript modes (Chris Petersen).
- Updated FreeMarker syntax highlighting (Jonathan Revusky).
- Java mode now highlights "assert" and "strictfp" keywords.

+ Auto Indent Changes:

- "indentPrevLine" buffer-local property now named "indentNextLine". Update your modes.
- Added a new "indentNextLines" property that indents all subsequent lines, not just the next line. This finally gives correct auto-indent behavior in Python files.
- Added a new "lineUpClosingBracket" property. If false, then a closing bracket will not unindent the current line, but rather the next line. This gives us semi-correct indent behavior in Lisp and Scheme modes.

+ Miscellaneous Changes:

- Plugin manager progress window now only has one progress bar.
- In the file system browser, S+ENTER with a directory selected opens the directory in a new browser window.
- Pressing LEFT in the filename field with the caret already at the left-most position goes to the parent directory location.
- Added View->Show Buffer Switcher command (shortcut: A+BACK_QUOTE). It shows the buffer switcher combo box if it is enabled.
- Improved scrolling performance when soft wrap is on. In previous 4.1 pre-releases and especially in 4.0, the editor was noticeably less responsive if soft wrap was on. No longer.
- Documentation updates.
- Added some more tips of the day.

+ Bug Fixes:

- Sometimes the status bar was not updated properly in newly created views.
- Fixed a few help viewer quirks.
- Fixed several bugs in the text area scrolling code.
- Fixed Global Options dialog box resizing problems.
- Fixed possible ArrayIndexOutOfBoundsExceptions when deleting text.
- Fixed various minor bugs with the "Format Paragraph" command.
- Fixed some popup menu display bugs.
- Fixed possible NullPointerException in Syntax Highlighting option pane.
- Highlighting of m{...} and s{...}{...} in Perl mode was broken.
- The 4.1pre7 changelog claimed that roots: now listed the desktop. This was broken and has now been fixed (Carmine Lucarelli).
- Fixed NPE when a client instance of jEdit requested an already opened buffer to be shown in a new view if that buffer had been changed on disk.
- Contents of filename field in "Save As" dialog box now take precedence over the currently selected file in the file list.
- Fixed possible deadlock on startup if jEdit was opening a file stored on a remote server using, say, the FTP plugin and the plugin displayed a modal dialog box.
- If "two stage save" was disabled, saving a file did not send VFS update messages, so features that relied on that message didn't work (reload of file system browser, reload of edit modes).
- Fixed minor XSLT syntax highlighting glitch.
- Fixed minor CSS syntax highlighting glitch.
- jEdit would misdetect Windows ME as Windows NT. This caused problems for the Console plugin.
- Right mouse button should work on MacOS X now.
- Markers menu showed wrong line text (Ollie Rutherfurd).
- "Write HyperSearch Results" macro updated to work with multiple-results code (Rudi Widmann).
- Fixed various auto indent bugs.
- Wheel mouse page scroll now scrolls the correct amount if soft wrap is on.
- Starting a rectangular selection on a bracket should no longer leave a bogus bracket block selection in place.

+ API Additions:

- Added org.gjt.sp.jedit.gui.AnimatedIcon class (Kris Kopicki).

-- Slava Pestov <slava@...>

Hello again-

A couple of the packages released yesterday had problems (XSLT and JDiffPlugin), so I am releasing updated versions of them, along with a new version of CommonControls.
* CommonControls 0.6: changes to PopupList (Calvin Yu); requires jEdit 4.0pre8 and JDK 1.3

* JDiffPlugin 1.3.2: supersedes erroneous 1.3.1 release (which was actually 1.3); requires jEdit 4.1pre1 and JDK 1.3

* XSLT 0.3.1: Indent XML feature now uses the buffer options to determine the indent amount and whether to indent using tabs; Indent action is now performed as a compound edit, therefore only requiring one undo command to undo; requires jEdit 4.0pre7, XML 0.8.1, and JDK 1.3

-md

Hello everyone- A busy holiday season, my recent engagement to my girlfriend of five years, and an unexpected flu have distracted me from jEdit stuff for the last month, so this batch of 7 updated and 3 new plugins covers from December up to the present.

* AxisHelper 0.1: initial Plugin Central release; requires jEdit 4.0pre4 and JDK 1.3; requires separate installation of Apache Axis

* ClearCasePlugin 0.27: initial Plugin Central release; requires jEdit 4.1pre1, Console 3.1, and JDK 1.4; works with Rational ClearCase 3.2 and 4.1

* DragAndDrop 0.2.8: adds jEdit buffer scrolling when you are dropping a text selection; requires jEdit 3.2pre2 and JDK 1.3

* GruntspudPlugin 0.2.2-beta: home location tool bar icons have been removed and put on the main tool bar; option to turn back on long path names in the home location drop down menu; sorting by size and status fixed; import file table now saves table column widths; importing takes notice of .cvsignore and sets the type accordingly; connection types now have icons.
This icon is shown where a connection profile can be selected, and on the status bar in standalone version; connection profiles can be sorted; default settings (window sizes, table column widths etc) improved again; many more changes; requires jEdit 4.0final, Console 3.1, JDiffPlugin 1.3, InfoViewer 1.1 (optional), and JDK 1.3

* JarMaker 0.4: fixed image button; requires jEdit 4.0pre3 and JDK 1.3

* JazzyPlugin 0.1.0: initial Plugin Central release; requires jEdit 4.1pre4 and JDK 1.3

* JDiffPlugin 1.3.1: updated documentation; options now use jEdit's ColorWellButton; requires jEdit 4.1pre1 and JDK 1.3

* SourceControl 0.3: added undo checkout capability; lazy initialization to help speed up jEdit startup; requires jEdit 4.0final and JDK 1.3

* XML 0.9.2: the DTD downloader would re-download DTDs every time (the cache was broken); fixed various bugs and memory leaks when working with multiple views; fixed various tag highlighting and matching bugs; with some DTDs, elements could show up multiple times in completion lists; requires jEdit 4.1pre5, ErrorList 1.2, and JDK 1.3

* XSLT 0.3: improved display of XPath expression results; new implementation of Indent XML feature that does not replace entity values in the XML; fix for problems displaying newline characters under Windows; new "4.1" theme icons on buttons; requires jEdit 4.0pre7, XML 0.8, and JDK 1.3

-md
http://sourceforge.net/p/jedit/mailman/jedit-announce/?viewmonth=200301
Below are the show notes for the first video in a series on building a Drupal site using layout builder and Bootstrap. To help with the build process we're going to follow a template and in each live stream session, we'll build a component from the template. And the plan is to have a semi-complete website after a few live streams. In this first video, we're focusing on the three-card components below the homepage carousel. We built the three-card components using a custom block type and layout builder. We're not using paragraphs in this video. So here are the show notes for the video.

💻 Get a copy of the built site from GitHub.

Project Set-Up

Download Drupal

First go download Drupal using Composer.

composer create-project drupal/recommended-project FOLDER_NAME --no-interaction

Set Up Lando

In this video, we used Lando for our local environment.

lando init --source cwd --recipe drupal8 --name bs-build --webroot web --full

Download Modules and Themes

Go download the following:

- Display Suite
- Field Group
- Bootstrap4 (Drupal theme)
- Drush

composer require drupal/ds drupal/field_group drupal/bootstrap4 drush/drush

Install Modules

Install the following modules:

- Layout Builder (core)
- Media Library (core)
- Field Group
- Display Suite

drush en ds field_group media_library layout_builder

Sub-theme

Generate Sub-theme

The Bootstrap 4 theme allows you to create a sub-theme in two ways; using a script or via the settings form. We're going to look at using it via the settings form.

1. Install the Bootstrap4 theme
2. Click on Settings, and scroll to the bottom and expand the "Subtheme" section.
3. Enter in details into the form, then click on the Create link.

NOTE: In the video, this step didn't work: I got a fatal error because I didn't have the Symfony\Component\Filesystem\Filesystem library.
The only way I was able to get this library was by installing Drush using Composer: composer require drush/drush.

Configure Sub-theme

Once the sub-theme has been generated, go and install it and set it as the default. Go to the theme settings page. Configure the settings like the image below, then click on "Save configuration".

Configure Block Placement

You'll need to add the correct blocks into the right regions or the site will look broken. Arrange the blocks like the image below:

Compile Sass in Sub-theme

We'll use Laravel Mix to compile Sass, which I find the easiest to set up. Before you begin, make sure you download and install Node.js and you can run the npm command.

1. Go into your sub-theme and run the following command:

npm init -y

Just follow the prompts and you should end up with a package.json file.

2. Then install Laravel Mix with the following command:

npm install laravel-mix cross-env --save-dev

3. In the sub-theme create another file called webpack.mix.js and add the following to it:

let mix = require('laravel-mix');

mix.sass('scss/style.scss', 'css/');
mix.options({ processCssUrls: false });

4. Open up package.json and replace the scripts section with Laravel Mix's standard dev and production scripts:

"scripts": {
    "dev": "cross-env NODE_ENV=development node_modules/webpack/bin/webpack.js --progress --config=node_modules/laravel-mix/setup/webpack.config.js",
    "production": "cross-env NODE_ENV=production node_modules/webpack/bin/webpack.js --no-progress --config=node_modules/laravel-mix/setup/webpack.config.js"
},

You should run the production script when you're ready to deploy to production. It'll minify the CSS and JavaScript file to reduce the size.

NOTE: If you're having problems setting it up then please refer to the official documentation.

Twig Debugging

When you're modifying templates it's best to turn on Twig debugging so you can see where the templates are stored. Follow the steps in this link to turn on Twig debugging.

Drupal Site Building

Display Suite Settings

Before we start the site building process, go to Display Suite and configure the following:

1. Go to Structure, "Display Suite", click on Settings and check "Enable Field Templates".
2.
Click on the Classes tab and add the following into "CSS classes for regions": card|Card card-deck|Card Deck Configure Image Media Type We'll be using the Media field to handle the image on the Card component which we'll set up next. But we need to make a few changes to the Image media type. Create Bare View Mode Go to Structure, "Display Modes", "View modes" and create a Bare view mode for the Media entity. To do so, just click on "Add new Media view mode". Configure Bare View Mode on Image Media Type Then go ahead and configure the Bare view mode on the image media type. 1. Go to Structure, "Media types" and click on "Manage display" on the Image row. 2. Click on "Custom display settings" and enable Bare. 3. Once enabled click on the Bare tab. 4. Select "Reset layout" as the Display Suite layout at the bottom. 5. Scroll up and make sure the Image field is the only field in the content region. Also, click on the cog-wheel and change the "Choose a Field Template" to "Full reset". 6. Scroll down and click on Save. The reason why we created a new view mode is that we need to remove all the markup that gets generated on fields. We only want the image element and that's it. We do not want the DIVs which wrap the fields. Create Card Block Type To handle the individual card component we'll create a custom block type called Card. 1. Go to Structure, "Block layout", "Custom block library" and click on "Block types". 2. Click on "Add custom block type" and create the Card block type with the following fields: - Description (field_description), Text (formatted, long) - Image (field_image), Media (can't see the Media field? Install the Media module.) - Title (field_title), Text (plain) Configure Card Formatters While on the Card block type click on "Manage display". 1. Scroll to the bottom of the page and select "One column layout" from the "Layout for card in default". 2. 
In the field section, move Image to top, then click on the cog-wheel and select Bare as the "View mode" and "Full reset" as the field template.
3. We need to create a DIV which will wrap the Title and Description; this will be called the "Card body". We'll create a field group for this. Click "Add field group" at the top and select "HTML element", give the label "Card body":

- Element: div
- Extra CSS classes: card-body (this is very important, make sure you enter in the right class)

Place the "Card body" field group below the Image field.
4. Place the Title and Description fields as child elements inside of "Card body".
5. Click on the Title field cog-wheel and select Expert as the field template. Check "Field item" and enter h5 into Element and card-title into Classes.
6. Click on the Description cog-wheel and select "Full reset" from the field template. Once everything is complete make sure the formatters are configured like below:
7. Last but not least, scroll down to "Custom classes" and make sure you select the Card option in "Class for layout". (NOTE: Make sure you've added the "Card" class in the Display Suite settings.)

Layout Builder

Enable Layout Builder "Basic page" Content Type

Make sure you've enabled the Layout Builder module because we'll use it to display the cards on a page.

1. Go to Structure, "Content types" and click on "Manage display" on the "Basic page" row.
2. Scroll to the bottom and click on "Custom display settings" and enable "Full content".
3. Click on the "Full content" tab and check:

- "Use Layout Builder"
- "Allow each content item to have its layout customized."

Create Page Content

Now go to Content, "Add content", "Basic page". Create a test page then click on "Layout".

Create Card Row Section in Layout Builder

1. Click on "Add section" and select "One column layout" as the layout.
2. From the "Configure section" click on "Custom classes" and select "Card Deck" from the "Class for layout".
(NOTE: Make sure you've added the "Card Deck" class in the Display Suite settings.)

Add Cards to Section

Once the section has been added, go ahead and add the cards.

1. Click on "Add block" in the section.
2. Click on "Create custom block" on the right.
3. Then click on Card.
4. Fill out the form and click on "Add block".

Override Block Template

For the cards to float next to each other correctly, we need to override a single block template.

- Copy the block template from /core/themes/classy/templates/block/block.html.twig and paste it into /themes/custom/ww_bootstrap4/templates/block/block--inline-block--card.html.twig. Make sure you change the file name from block.html.twig to block--inline-block--card.html.twig.

Change it from this:

<div{{ attributes.addClass(classes) }}>
  {{ title_prefix }}
  {% if label %}
    <h2{{ title_attributes }}>{{ label }}</h2>
  {% endif %}
  {{ title_suffix }}
  {% block content %}
    {{ content }}
  {% endblock %}
</div>

To this:

{{ title_prefix }}
{% if label %}
  <h2{{ title_attributes }}>{{ label }}</h2>
{% endif %}
{{ title_suffix }}
{% block content %}
  {{ content }}
{% endblock %}

All we're doing is removing the wrapping DIV in the template so the card deck is aligned correctly.

Summary

In this first video, we have focused on the project set up and built a Card component using custom blocks.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/webwash/drupal-live-site-build-part-1-project-set-up-build-bootstrap-card-component-using-layout-builder-1kjm
My blog posts often serve as "external memory", allowing me to go back and remember how specific things work months or years after I spent the time to learn about them. So far it's worked amazingly well! Instead of having a hazy memory of "oh um… i did bicubic interpolation once, how does that work again?" I can go back to my blog post, find the details with an explanation and simple working code, and can very rapidly come back up to speed. I seriously recommend keeping a blog if you are a programmer or similar. Plus, you know you really understand something when you can explain it to someone else, so it helps you learn to a deeper level than you would otherwise.

Anyways, this is going to be a very brief post on vector interpolation that I want to commit to my "external memory" for the future.

This is an answer to the question… "How do I interpolate between two normalized vectors?" or "How do I bilinearly or bicubically interpolate between normalized vectors?"

As an answer I found the three most common ways to do vector interpolation:

- Slerp – short for "spherical interpolation", this is the most correct way, but is also the costliest. In practice you likely do not need the precision.
- lerp – short for "linear interpolation", you just do a regular linear interpolation between the vectors and use that as a result.
- nlerp – short for "normalized linear interpolation", you just normalize the result of a lerp. Useful if you need your interpolated vector to be a normalized vector.

In practice, lerp/nlerp are pretty good at getting a close interpolated direction so long as the angle they are interpolating between is sufficiently small (say, 90 degrees), and nlerp is of course good at keeping the right length, if you need a normalized vector. If you want to preserve the length while interpolating between non-normalized vectors, you could always interpolate the length and direction separately.

Here is an example of the three interpolations on a large angle.
Dark grey = start vector, light grey = end vector. Green = slerp, blue = lerp, orange = nlerp.

Here is an example of a medium sized angle (~90 degrees) interpolating the same time t between the angles.

Lastly, here's a smaller angle (~35 degrees).

You can see that the results of lerp / nlerp are more accurate as the angle between the interpolated vectors gets smaller.

If you do lerp or nlerp, you can definitely do both bilinear as well as bicubic interpolation since they are just regularly interpolated values (and then optionally normalized).

Using slerp, you can do bilinear interpolation, but I'm not sure how bicubic would translate.

Did I miss something important? Leave a comment and let me know!

Code

Here's some GLSL code for slerp, lerp and nlerp. This code is for vec2's specifically but the same code works for vectors of any dimension.

//============================================================
// adapted from source at:
//
vec2 slerp(vec2 start, vec2 end, float percent)
{
     // Dot product - the cosine of the angle between 2 vectors.
     float cosAngle = dot(start, end);
     // Clamp it to be in the range of Acos()
     // This may be unnecessary, but floating point
     // precision can be a fickle mistress.
     cosAngle = clamp(cosAngle, -1.0, 1.0);
     // Acos(cosAngle) returns the angle between start and end,
     // And multiplying that by percent returns the angle between
     // start and the final result.
     float theta = acos(cosAngle)*percent;
     vec2 RelativeVec = normalize(end - start*cosAngle); // Orthonormal basis
     // The final result.
return ((start*cos(theta)) + (RelativeVec*sin(theta))); } vec2 lerp(vec2 start, vec2 end, float percent) { return mix(start,end,percent); } vec2 nlerp(vec2 start, vec2 end, float percent) { return normalize(mix(start,end,percent)); } Links An interactive shadertoy demo I made, that is also where the above images came from: Shadertoy: Vector Interpolation Further discussion on this topic may be present here: Computer Graphics Stack Exchange: Interpolating vectors on a grid Good reads that go deeper: Math Magician – Lerp, Slerp, and Nlerp Understanding Slerp, Then Not Using It
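For readers outside shader land, the same three functions translate directly to plain code. Here is a Python sketch of them (mine, not from the post; it assumes unit-length inputs, and slerp additionally assumes the two vectors are not parallel):

```python
import math

def slerp(a, b, t):
    # a, b: unit-length 2D vectors as (x, y) tuples; t in [0, 1].
    d = max(-1.0, min(1.0, a[0] * b[0] + a[1] * b[1]))  # clamped dot product
    theta = math.acos(d) * t
    # Component of b perpendicular to a, normalized (orthonormal basis).
    # Zero-length if a and b are parallel, hence the assumption above.
    rx, ry = b[0] - a[0] * d, b[1] - a[1] * d
    rlen = math.hypot(rx, ry)
    rx, ry = rx / rlen, ry / rlen
    return (a[0] * math.cos(theta) + rx * math.sin(theta),
            a[1] * math.cos(theta) + ry * math.sin(theta))

def lerp(a, b, t):
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def nlerp(a, b, t):
    x, y = lerp(a, b, t)
    length = math.hypot(x, y)
    return (x / length, y / length)
```

Interpolating halfway between (1, 0) and (0, 1) shows the difference described above: slerp and nlerp both land on the unit circle, while plain lerp lands at (0.5, 0.5), which is noticeably shorter than unit length.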
https://blog.demofox.org/2016/02/19/normalized-vector-interpolation-tldr/
I've hit a wall. I'm trying to open a .txt file and then copy its contents into an array of character arrays. Can someone point me in the right direction? I've been at this for about 2 days now and haven't made any headway. Below is a sample of what I'm trying to do:

using namespace std;
#include <iostream>
#include <fstream>

int main()
{
    char food[10][15] = {' '};
    char buffer[15];

    ifstream iFile("food.txt");
    iFile.close();

    ofstream oFile("food.txt");
    oFile << "Candy" << endl << "turkey" << endl << "steak" << endl << "potatoes" << endl;
    oFile.close();

    // Up to here, everything's hunky dory, when I try to read into the char array
    // is where I'm just plain stuck.
    iFile.open("food.txt");
    int r = 0;
    int c = 0;
    while(!iFile.eof())
    {
        iFile >> buffer;
        c = 0;
        while(buffer[c] != '\0')
        {
            food[r][c] = buffer[c];
            food[r][c + 1] = '\0';
            c++;
        }
        r++;
    }

    for(int i = 0; i < 6; i++)
    {
        cout << food[i] << endl;
    }

    ofstream oFood("oFood.txt");
    r = 0;
    c = 0;
    while(food[r][c] != '\0')
    {
        oFood << food[r];
        r++;
        oFood << endl;
    }
    oFood.close();

    return 0;
}

output:

Candy
turkey
steak
potatoes
potatoes

Why am I getting this extra "potatoes"?
https://www.daniweb.com/programming/software-development/threads/481550/read-from-ifstream-file-into-an-array-of-char-arrays
WDIO CLI Options

WebdriverIO comes with its own test runner to help you get started with integration testing as quickly as possible. All the fiddling around hooking up WebdriverIO with a test framework belongs to the past. The WebdriverIO runner does all the work for you and helps you to run your tests as efficiently as possible.

Starting with v5 of WebdriverIO, the testrunner will be bundled as a separate NPM package, wdio-cli.

To see the command line interface help, just type the following command in your terminal:

$ npm install wdio-cli
$ ./node_modules/.bin/wdio --help

WebdriverIO CLI runner

Usage: wdio [options] [configFile]
Usage: wdio config
Usage: wdio repl <browserName>

config file defaults to wdio.conf.js
The [options] object will override values from the config file.
An optional list of spec files can be piped to wdio that will override configured specs

Commands:
  wdio.js repl <browserName>  Run WebDriver session in command line

Options:
  --help                prints WebdriverIO help menu                 [boolean]
  --version             prints WebdriverIO version                   [boolean]
  --host, -h            automation driver host address                [string]
  --port, -p            automation driver port                        [number]
  --user, -u            username if using a cloud service as automation
                        backend                                       [string]
  --key, -k             corresponding access key to the user          [string]
  --watch               watch specs for changes                      [boolean]
  --logLevel, -l        level of logging verbosity
                          [choices: "trace", "debug", "info", "warn", "error"]
  --bail                stop test runner after specific amount of tests have
                        failed                                        [number]
  --baseUrl             shorten url command calls by setting a base url
                                                                      [string]
  --waitforTimeout, -w  timeout for all waitForXXX commands           [number]
  --framework, -f       defines the framework (Mocha, Jasmine or Cucumber) to
                        run the specs                                 [string]
  --reporters, -r       reporters to print out the results on stdout   [array]
  --suite               overwrites the specs attribute and runs the defined
                        suite                                          [array]
  --spec                run only a certain spec file - overrides specs piped
                        from stdin                                     [array]
  --mochaOpts           Mocha options
  --jasmineOpts         Jasmine options
  --cucumberOpts        Cucumber options

Sweet! Now you need to define a configuration file where all information about your tests, capabilities and settings are set. Switch over to the Configuration File section to find out what that file should look like. With the wdio configuration helper it is super easy to generate your config file. Just run:

$ ./node_modules/.bin/wdio config

and it launches the helper utility. It will ask you questions and generates a config file depending on the answers you give. This way you can generate your config file in less than a minute.

Once you have your configuration file set up, you can start your integration tests by calling:

$ ./node_modules/.bin/wdio wdio.conf.js

That's it! Now you can access the Selenium instance via the global variable browser.

Run the test runner programmatically

Instead of calling the wdio command, you can also include the test runner as a module and run it within any arbitrary environment. For that you need to require the wdio-cli package as a module, the following way:

import Launcher from 'wdio-cli';

After that you create an instance of the launcher and run the test. The Launcher class expects the path to the config file as its first parameter, and an object with options that will overwrite the values in the config as its second.

const opts = {}; // optional overrides for values in wdio.conf.js
const wdio = new Launcher('/path/to/my/wdio.conf.js', opts);

wdio.run().then((code) => {
    process.exit(code);
}, (error) => {
    console.error('Launcher failed to start the test', error.stacktrace);
    process.exit(1);
});

The run command returns a promise. It is resolved if the tests ran successfully or failed, and it is rejected if the launcher was not able to start running the tests.
http://beta.webdriver.io/docs/clioptions.html
XML DTDs Vs XML Schema

XML is a very handy format for storing and communicating your data between disparate systems in a platform-independent fashion. XML is more than just a format for computers — a guiding principle in its creation was that it should be human readable and easy to create. XML allows UNIX systems written in C to communicate with Web Services that, for example, run on the Microsoft .NET architecture and are written in ASP.NET. XML is, however, only the meta-language that the systems understand — and they both need to agree on the format that the XML data will be in. Typically, one of the partners in the process will offer a service to the other: one is in charge of the format of the data.

The definition serves two purposes: the first is to ensure that the data that makes it past the parsing stage is at least in the right structure. As such, it's a first level at which 'garbage' input can be rejected. Secondly, the definition documents the protocol in a standard, formal way, which makes it easier for developers to understand what's available.

DTD – The Document Type Definition

The first method used to provide this definition was the DTD, or Document Type Definition. This defines the elements that may be included in your document, what attributes these elements have, and the ordering and nesting of the elements. The DTD is declared in a DOCTYPE declaration beneath the XML declaration contained within an XML document:

Inline Definition:

<?xml version="1.0"?>
<!DOCTYPE documentelement [definition]>

External Definition:

<?xml version="1.0"?>
<!DOCTYPE documentelement SYSTEM "documentelement.dtd">

The actual body of the DTD itself contains definitions in terms of elements and their attributes. For example, the following short DTD defines a bookstore. It states that a bookstore has a name, and stocks books on at least one topic. Each topic has a name and 0 or more books in stock. Each book has a title, author and ISBN number.
The name of the topic, and the name of the bookstore, are defined as being the same type of element: these store PCDATA, just text data. The title and author of the book are stored as CDATA -- text data that won't be parsed for further markup by the XML parser. The ISBN number is stored as an attribute of the book:

<!DOCTYPE bookstore [
<!ELEMENT bookstore (name,topic+)>
<!ELEMENT topic (name,book*)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT book (title,author)>
<!ELEMENT title (#CDATA)>
<!ELEMENT author (#CDATA)>
<!ELEMENT isbn (#PCDATA)>
<!ATTLIST book isbn CDATA "0">
]>

An example of a book store's inline definition might be:

<?xml version="1.0"?>
<!DOCTYPE bookstore [
<!ELEMENT bookstore (name,topic+)>
<!ELEMENT topic (name,book*)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT book (title,author)>
<!ELEMENT title (#CDATA)>
<!ELEMENT author (#CDATA)>
<!ELEMENT isbn (#PCDATA)>
<!ATTLIST book isbn CDATA "0">
]>
<bookstore>
  <name>Mike's Store</name>
  <topic>
    <name>XML</name>
    <book isbn="123-456-789">
      <title>Mike's Guide To DTD's and XML Schemas</title>
      <author>Mike Jervis</author>
    </book>
  </topic>
</bookstore>

Using an inline definition is handy when you only have a few documents and they're offline, as the definition is always in the file. However, if, for example, your DTD defines the XML protocol used to talk between two separate systems, re-transmitting the DTD with each document adds an overhead to the communications. Having an external DTD eliminates the need to re-send each time.
We could remove the DTD from the document, and place it in a DTD file on a Web server that's accessible by the two systems:

<?xml version="1.0"?>
<!DOCTYPE bookstore SYSTEM "">
<bookstore>
  <name>Mike's Store</name>
  <topic>
    <name>XML</name>
    <book isbn="123-456-789">
      <title>Mike's Guide To DTD's and XML Schemas</title>
      <author>Mike Jervis</author>
    </book>
  </topic>
</bookstore>

The file bookstore.dtd would contain the full definition in a plain text file:

<!ELEMENT bookstore (name,topic+)>
<!ELEMENT topic (name,book*)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT book (title,author)>
<!ELEMENT title (#CDATA)>
<!ELEMENT author (#CDATA)>
<!ELEMENT isbn (#PCDATA)>
<!ATTLIST book isbn CDATA "0">

The lowest level of definition in a DTD is that something is either CDATA or PCDATA: Character Data, or Parsed Character Data. We can only define an element as text, and with this limitation, it is not possible, for example, to force an element to be numeric. Attributes can be forced to a range of defined values, but they can't be forced to be numeric. So for example, if you stored your application's settings in an XML file, it could be manually edited so that the window's start coordinates were strings — and you'd still need to validate this in your code, rather than have the parser do it for you.

XML Schemas

XML Schemas provide a much more powerful means by which to define your XML document structure and limitations. XML Schemas are themselves XML documents. They reference the XML Schema Namespace (detailed here), and even have their own DTD. What XML Schemas do is provide an Object Oriented approach to defining the format of an XML document. XML Schemas provide a set of basic types. These types are much wider ranging than the basic PCDATA and CDATA of DTDs. They include most basic programming types such as integer, byte, string and floating point numbers, but they also expand into Internet data types such as ISO country and language codes (en-GB for example).
A full list can be found here. The author of an XML Schema then uses these core types, along with various operators and modifiers, to create complex types of their own. These complex types are then used to define an element in the XML Document. As a simple example, let's try to create a basic XML Schema for defining the bookstore that we used as an example for DTDs. Firstly, we must declare this as an XSD Document, and, as we want this to be very user friendly, we're going to add some basic documentation to it:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">

<xsd:annotation>
  <xsd:documentation xml:lang="en">
    XML Schema for a Bookstore as an example.
  </xsd:documentation>
</xsd:annotation>

Now, in the previous example, the bookstore consisted of the sequence of a name and at least one topic. We can easily do that in an XML Schema:

<xsd:element name="bookstore" type="bookstoreType"/>

<xsd:complexType name="bookstoreType">
  <xsd:sequence>
    <xsd:element name="name" type="xsd:string"/>
    <xsd:element name="topic" type="topicType" minOccurs="1"/>
  </xsd:sequence>
</xsd:complexType>

In this example, we've defined an element, bookstore, that will equate to an XML element in our document. We've defined it of type bookstoreType, which is not a standard type, and so we provide a definition of that type next. We then define a complexType, which defines bookstoreType as a sequence of name and topic elements. Our "name" type is an xsd:string, a type defined by the XML Schema Namespace, and so we've fully defined that element. The topic element, however, is of type topicType, another custom type that we must define. We've also defined our topic element with minOccurs="1", which means there must be at least one element at all times. As maxOccurs is not defined, there is no upper limit to the number of elements that might be included. If we had specified neither, the default would be exactly one instance, as is used in the name element. Next, we define the schema for the topicType.
<xsd:complexType name="topicType">
  <xsd:element name="name" type="xsd:string"/>
  <xsd:element name="book" type="bookType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:complexType>

This is all similar to the declaration of the bookstoreType, but note that we have to re-define our name element within the scope of this type. If we'd used a complex type for name, such as nameType, which defined only an xsd:string — and defined it outside our types, we could re-use it in both. However, to illustrate the point, I decided to define it within each section. XML gets interesting when we get to defining our bookType:

<xsd:complexType name="bookType">
  <xsd:element name="title" type="xsd:string"/>
  <xsd:element name="author" type="xsd:string"/>
  <xsd:attribute name="isbn" type="isbnType"/>
</xsd:complexType>

<xsd:simpleType name="isbnType">
  <xsd:restriction base="xsd:string">
    <xsd:pattern value="[0-9]{3}-[0-9]{3}-[0-9]{3}"/>
  </xsd:restriction>
</xsd:simpleType>

So the definition of the bookType is not particularly interesting. But the definition of its attribute "isbn" is. Not only does XML Schema support the use of types such as xsd:nonNegativeNumber, but we can also create our own simple types from these basic types using various modifiers. In the example for isbnType above, we base it on a string, and restrict it to match a given regular expression. Excusing my poor regex, that should limit any isbn attribute to match the standard of three groups of three digits separated by a dash. This is just a simple example, but it should give you a taste of the many things you can do to control the content of an attribute or an element. You have far more control over what is considered a valid XML document using a schema. You can even

- extend your types from other types you've created,
- require uniqueness within scope, and
- provide lookups.

It's a nicely object oriented approach. You could build a library of complexTypes and simpleTypes for re-use throughout many projects, and even find other definitions of common types (such as an "address", for example) from the Internet and use these to provide powerful definitions of your XML documents.
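The isbnType restriction pins the isbn attribute to three groups of three digits, which is exactly the shape of the isbn="123-456-789" attribute in the instance documents earlier. As a sanity check, that same pattern can be tested against the instance document with Python's standard library (this sketch, including the names invalid_isbns and ISBN_PATTERN, is mine and not part of the original article):

```python
import re
import xml.etree.ElementTree as ET

# The instance document from the DTD examples above.
doc = """<bookstore>
  <name>Mike's Store</name>
  <topic>
    <name>XML</name>
    <book isbn="123-456-789">
      <title>Mike's Guide To DTD's and XML Schemas</title>
      <author>Mike Jervis</author>
    </book>
  </topic>
</bookstore>"""

# The same constraint the isbnType restriction expresses.
ISBN_PATTERN = re.compile(r"^[0-9]{3}-[0-9]{3}-[0-9]{3}$")

def invalid_isbns(xml_text):
    """Return the isbn attributes that fail the pattern."""
    root = ET.fromstring(xml_text)
    return [book.get("isbn") for book in root.iter("book")
            if not ISBN_PATTERN.match(book.get("isbn", ""))]

print(invalid_isbns(doc))  # prints []
```

A full schema-aware validator would enforce this automatically; the point here is only that the pattern in the simpleType is an ordinary regular expression you can reason about directly.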
DTD vs XML Schema

The DTD provides a basic grammar for defining an XML Document in terms of the metadata that comprise the shape of the document. An XML Schema provides this, plus a detailed way to define what the data can and cannot contain. It provides far more control for the developer over what is legal, and it provides an Object Oriented approach, with all the benefits this entails. So, if XML Schemas provide an Object Oriented approach to defining an XML document's structure, and if XML Schemas give us the power to define re-useable types such as an ISBN number based on a wide range of pre-defined types, why would we use a DTD? There are in fact several good reasons for using the DTD instead of the schema.

Firstly, and rather an important point, is that XML Schema is a new technology. This means that whilst some XML parsers support it fully, many still don't. If you use XML to communicate with a legacy system, perhaps it won't support the XML Schema. Many systems interfaces are already defined as a DTD. They are mature definitions, rich and complex. The effort in re-writing the definition may not be worthwhile. DTD is also established, and examples of common objects defined in a DTD abound on the Internet — freely available for re-use. A developer may be able to use these to define a DTD more quickly than they would be able to accomplish a complete re-development of the core elements as a new schema.

Finally, you must also consider the fact that the XML Schema is an XML document. It has an XML Namespace to refer to, and an XML DTD to define it. This is all overhead. When a parser examines the document, it may have to link this all in, interpret the DTD for the Schema, load the namespace, and validate the schema, etc., all before it can parse the actual XML document in question. If you're using XML as a protocol between two systems that are in heavy use, and need a quick response, then this overhead may seriously degrade performance.
Then again, if your system is available for third party developers as a Web service, then the detailed enforcement of the XML Schema may protect your application a lot more effectively from malicious — or just plain bad — XML packets. As an example, Muse.net is an interesting technology. They have a publicly-available SOAP API defined with an XML Schema that provides their developers more control over what they receive from the user community. On the other hand, I was recently involved in designing a system to handle incoming transactions from multiple devices. In order to scale the system, the chosen service that processes requests is a SOAP server. However, the system is completely closed, and a simple DTD on the server is enough to ensure that the packets sent from the clients arrive complete and uncorrupted, without the additional overhead of XML Schema.
https://www.sitepoint.com/xml-dtds-xml-schema/
CC-MAIN-2018-26
refinedweb
2,058
54.32
I have a situation where I want to use a content field in a part that exists in code. I could do that in the migration by adding the content field as a dynamic property using .WithField(...).WithSettings(...). But that way I can't use any validation attributes like [Required] or [Regex(myMinimaldateRegex)]. So I see two possible solutions:

1) Adding the field in code to the part:

using MyFieldNamespace;

public class MyPart : ContentPart<MyPart>
{
    [Required]
    public string Name { get; set; }

    [Required]
    [RegEx(myMinimaldateRegex)]
    public DateTimeField Date { get; set; }
}

but I wonder if this works, because I can't set the settings this way (the date-only setting, for example).

2) Adding validation attributes dynamically to the dynamic field.

Any ideas? I would like to use content fields because of the functionality they provide; the conceptual idea seems so nice to me.

Why does it have to be a field? Also, the validation parameters should be settings on the field. If the field you want to use doesn't have such options, build your own.

A field because I like the basic idea of a field. It would be nice to provide a suite of fields for parts to use. In our situation we're going to generate everything from a custom tool whose definitions are written to XML, and that XML is going to be parsed with T4 into modules, controllers, drivers, handlers, parts, models, viewmodels, view templates, etc. It is IMHO not an option to generate fields; they can be complex, and the idea is to provide them to the custom tool to be used in a view template. On the other side there is a business rule engine with rules, again defined in the custom tool, that are mostly going to be DataAnnotations and custom validations. Attaching DataAnnotations to a field or property seems like a good idea to me, except that fields aren't usable in code??

There are other solutions I have in mind:

- using another ModelBinder (no one replied...)
  where model errors are added, including validations for the field;
- create an HtmlHelper for each field and use the field that way;
- use the [Shape] attribute above a function;
- or just regenerate the stuff over and over.

So you did not consider adding the settings that you need to the field types?

You mean write a field that parses its settings and generates them along in HTML, jQuery, etc.? A field would act differently in another part, being required in one part while in another part it won't be:

.WithPart("MyPart").WithField(MyField).WithSettings("required", "true")

for example. I did consider it; it would mean that we can only use our own written fields then. My question was just whether it would be possible to use a field declared in code, as in 1) adding the field in code to the part.

Yes, that's what I meant. Generic validation would be hard to do, as we as a framework have no idea what's in there. It really should be the responsibility of the author of the field type. Even if it's not your own field, depending on the license, you should be able to improve existing ones.

Perhaps, but I think it won't be easy then. For example, a DateTimeField is nothing more than an input element with JavaScript, provided by jQuery, to make it all fancy. Normally I would use a DisplayFormat attribute above the property so MVC's built-in support renders JavaScript along with the input, including data-val attributes. Somehow that MVC built-in support should still be rendered if an input is being rendered; how should the Orchard field declare that, any idea? Of course I could add my own DataAnnotationsModelValidatorProvider and register my own validators for each custom Orchard field. No idea if it would work; it's just an idea, keeping client-side validation for the future in mind.
http://orchard.codeplex.com/discussions/246083
I am building a website that will fetch data from XML feeds. I am building it as an ASP.NET MVC application. The application will need to get data from the XML files located on the provider's server every 2 minutes, for example. These data will then be stored.

My problem is that I want this procedure to run without interruption from the moment the website is uploaded to the server. I was thinking of using a timer in the main method(?) to get the data every 2 minutes. But I have searched other topics here and I found out that using AppFabric would be the solution. The problem is that I am an absolute beginner and I find it difficult to use it... Is there another, simpler way to achieve this continuous update of data from the XML file?

I would recommend you use Quartz to handle the scheduling instead of using the built-in timer. I have used Quartz in my last two projects and I have been very impressed. It can handle about any schedule you can think of.

Example job creation:

using Quartz;
using Quartz.Impl;

IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
scheduler.Start();

IJobDetail integrationJob = JobBuilder.Create<IntegrationJob>().Build();
ITrigger integrationTrigger = TriggerBuilder.Create()
    .StartNow()
    .WithSimpleSchedule(x => x.WithIntervalInSeconds(300).RepeatForever()).Build();
scheduler.ScheduleJob(integrationJob, integrationTrigger);

public class IntegrationJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        //Write code for job
    }
}
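Outside the .NET ecosystem, the same periodic-fetch idea can be sketched with a plain recurring timer. A minimal Python illustration (my own sketch, not from the thread; `fetch_feed` is a made-up placeholder for the real XML download):

```python
import threading

def schedule_repeating(interval_seconds, job):
    """Run job() every interval_seconds on a background thread until cancelled."""
    stop_event = threading.Event()

    def runner():
        # Event.wait returns False on timeout, so the loop fires the job
        # once per interval and exits promptly when stop_event is set.
        while not stop_event.wait(interval_seconds):
            job()

    thread = threading.Thread(target=runner, daemon=True)
    thread.start()
    return stop_event  # call .set() to cancel the schedule

# Hypothetical job: pretend to fetch an XML feed every 2 minutes.
def fetch_feed():
    print("fetching feed...")

# canceller = schedule_repeating(120, fetch_feed)
```

A dedicated scheduler (Quartz, or APScheduler in Python) is still the better choice once you need cron-style schedules or persistence, but for a single fixed interval a timer loop like this is often enough.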
https://codedump.io/share/gmr9VYNlTAX1/1/setting-an-mvc-aspnet-application-to-do-scheduled-tasks---method-to-achieve-it
import "git.devever.net/hlandau/acmeapi/acmeutils"

Package acmeutils provides miscellaneous ACME-related utility functions.

hostname.go keyauth.go load.go

Calculates the base64 thumbprint of a public or private key. Returns an error if the key is of an unknown type.

func CreateTLSSNICertificate(hostname string) (certDER []byte, privateKey crypto.PrivateKey, err error)
Creates a self-signed certificate and matching private key suitable for responding to a TLS-SNI challenge. hostname should be a hostname returned by TLSSNIHostname.

Calculates a key authorization which is then hashed and base64 encoded, as is required for the DNS challenge.

Calculates a key authorization using the given account public or private key and the token to prefix.

Load a PEM CSR from a stream and return it in DER form.

Load one or more certificates from a sequence of PEM-encoded certificates.

func LoadPrivateKey(keyPEMBlock []byte) (crypto.PrivateKey, error)
Load a PEM private key from a stream.

func LoadPrivateKeyDER(der []byte) (crypto.PrivateKey, error)
Parse a DER private key. The key can be RSA or ECDSA. PKCS8 containers are supported.

Normalizes the hostname given. If the hostname is not valid, returns "" and an error.

Writes one or more DER-formatted certificates in PEM format.

Write a private key in PEM form.

Determines the hostname which must appear in a TLS-SNI challenge certificate.

Returns true iff the given string is a valid hostname.

Package acmeutils imports 19 packages (graph) and is imported by 1 package. Updated 2018-08-15.
https://godoc.org/git.devever.net/hlandau/acmeapi/acmeutils
Advanced tutorial: How to write reusable apps¶

This advanced tutorial begins where Tutorial 5 left off. We'll be turning our Web-poll into a standalone Python package you can reuse in new projects and share with other people.

If you haven't recently completed Tutorials 1–5, we encourage you to review them so that your example project matches the one described below. An app is just a Python package that is specifically intended for use in a Django project. An app may also use common Django conventions, such as having a models.py file.

mysite/
    wsgi.py
    polls/
        admin.py
        __init__.py
        models.py
        tests.py
        templates/
            polls/
                detail.html
                index.html
                results.html
        urls.py
        views.py
    templates/
        admin/
            base_site.html

You created mysite/templates in Tutorial 2. We'll be using distribute to build our package. It's a community-maintained fork of the older setuptools project. We'll also be using pip to uninstall it after we're finished. You should install these two packages now. If you need help, you can refer to how to install Django with pip; you can install distribute the same way.

Move the polls directory into the django-polls directory.

Create a file django-polls/README.txt with the following contents:

=====
Polls
=====

Polls is a simple Django app to conduct Web-based polls.

Quick start
-----------

1. Add "polls" to your INSTALLED_APPS setting.

2. Include the polls URLconf in your project urls.py like this::

    url(r'^polls/', include('polls.urls')),

3. Run `python manage.py syncdb` to create the polls models.

4. Start the development server and visit the admin site to create a poll (you'll need the Admin app enabled).

5. Visit the polls page to participate in the poll.

4. Create a django-polls/LICENSE file.

5. Next we'll create a setup.py file which provides details about how to build and install the app. A full explanation of this file is beyond the scope of this tutorial, but the distribute docs have a good explanation.
Create a file django-polls/setup.py with the following contents:

import os
from setuptools import setup

README = open(os.path.join(os.path.dirname(__file__), 'README.txt')).read()

# allow setup.py to be run from any path
os.chdir(os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir)))

setup(
    name = 'django-polls',
    version = '0.1',
    packages = ['polls'],
    include_package_data = True,
    license = 'BSD License',  # example license
    description = 'A simple Django app to conduct Web-based polls.',
    long_description = README,
    url = '',
    author = 'Your Name',
    author_email = 'yourname@example.com',
    classifiers = [
        'Environment :: Web Environment',
        'Framework :: Django',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: BSD License',  # example license
        'Operating System :: OS Independent',
        'Programming Language :: Python',
        'Programming Language :: Python :: 2.6',
        'Programming Language :: Python :: 2.7',
        'Topic :: Internet :: WWW/HTTP',
        'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
    ],
)

I thought you said we were going to use distribute?

Distribute is a drop-in replacement for setuptools. Even though we appear to import from setuptools, since we have distribute installed, it will override the import.

Only Python modules and packages are included in the package by default. To include additional files, we'll need to create a MANIFEST.in file. The distribute docs referred to in the previous step discuss this file in more detail. To include the templates and our LICENSE file, create a file django-polls/MANIFEST.in with the following contents:

include LICENSE
recursive-include polls/templates *

For more detail on packaging, see The Hitchhiker's Guide to Packaging. Python 2.6 added support for user libraries, so if you are using an older version this won't work, but Django 1.5 requires Python 2.6 or newer anyway. Note that per-user installations can still affect the behavior of system tools that run as that user, so virtualenv is a more robust solution (see below).
To build the package, run python setup.py sdist from inside django-polls. This creates a directory called dist and builds your new package, django-polls-0.1.tar.gz.

1. Inside django-polls/dist, untar the new package django-polls-0.1.tar.gz (e.g. tar xzvf django-polls-0.1.tar.gz). If you're using Windows, you can download the command-line tool bsdtar to do this, or you can use a GUI-based tool such as 7-zip.

2. Change into the directory created in step 1 (e.g. cd django-polls-0.1).

3. If you're using GNU/Linux, Mac OS X or some other flavor of Unix, enter the command python setup.py install --user at the shell prompt. If you're using Windows, start up a command shell and run the command setup.py install --user.

With luck, your Django project should now work correctly again. Run the server again to confirm this.

To uninstall the package, use pip (you already installed it, right?):

pip uninstall django-polls

Publishing your app¶

Now that we've packaged and tested django-polls, it's ready to share with the world! If this wasn't just an example, you could now:

- Upload the package on your Web site.
- Post the package on a public repository, such as The Python Package Index (PyPI).

For more information on PyPI, see the Quickstart section of The Hitchhiker's Guide to Packaging. One detail this guide mentions is choosing the license under which your code is distributed.

Installing Python packages with virtualenv

A more flexible way to install packages is virtualenv. This tool allows you to maintain multiple isolated Python environments, each with its own copy of the libraries and package namespace.
https://docs.pythontab.com/django/django1.5/intro/reusable-apps.html
Converting string into datetime

datetime.strptime is the main routine for parsing strings into datetimes. It can handle all sorts of formats, with the format determined by a format string you give it:

from datetime import datetime
datetime.strptime('Jun 1 2005 1:33PM', '%b %d %Y %I:%M%p')
# datetime.datetime(2005, 6, 1, 13, 33)

Use the third party dateutil library:

from dateutil import parser
parser.parse("Aug 28 1999 12:00AM")  # datetime.datetime(1999, 8, 28, 0, 0)

It can handle most date formats, including the one you need to parse. It's more convenient than strptime as it can guess the correct format most of the time. It's very useful for writing tests, where readability is more important than performance. You can install it with:

pip install python-dateutil

Check out strptime in the time module. It is the inverse of strftime.

$ python
>>> import time
>>> my_time = time.strptime('Jun 1 2005 1:33PM', '%b %d %Y %I:%M%p')
>>> my_time
time.struct_time(tm_year=2005, tm_mon=6, tm_mday=1, tm_hour=13, tm_min=33, tm_sec=0, tm_wday=2, tm_yday=152, tm_isdst=-1)
>>> timestamp = time.mktime(my_time)
>>> # convert time object to datetime
>>> from datetime import datetime
>>> my_datetime = datetime.fromtimestamp(timestamp)
>>> # convert time object to date
>>> from datetime import date
>>> my_date = date.fromtimestamp(timestamp)
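A short round-trip ties the two directions together: strptime to parse a string, strftime to format the result back out (my own sketch, not part of the original answers):

```python
from datetime import datetime

# Parse the string into a datetime, then format it back out
# in a different, ISO-like layout.
parsed = datetime.strptime('Jun 1 2005 1:33PM', '%b %d %Y %I:%M%p')
formatted = parsed.strftime('%Y-%m-%d %H:%M')

print(parsed.year, parsed.hour)   # 2005 13
print(formatted)                  # 2005-06-01 13:33
```

Note that %I/%p handle the 12-hour clock on input while %H emits the 24-hour form on output, so 1:33PM round-trips to 13:33.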
https://codehunter.cc/a/python/converting-string-into-datetime
{- (liftM) import Control.Monad.Coroutine import Control.Monad.Coroutine.SuspensionFunctors (EitherFunctor(..)) -- | Run a nested 'Coroutine' that can suspend both itself and the current 'Coroutine'. pogoStickNested :: forall s1 s2 m x. (Functor s1, Functor s2, Monad m) => (s2 (Coroutine (EitherFunctor s1 s2) m x) -> Coroutine (EitherFunctor s1 s2) m x) -> Coroutine (EitherFunctor s1 s2) m x -> Coroutine s1 m x pogoStickNested reveal t = Coroutine{resume= resume t >>= \s-> case s of Right result -> return (Right result) Left (LeftF s') -> return (Left (fmap (pogoStickNested reveal) s')) Left (RightF c) -> resume (pogoStickNested reveal (reveal c))} -- | Much like 'couple', but with two nested coroutines. coupleNested :: forall s0 s1 s2 m x = seesaw' r. CoroutineStepResult (EitherFunctor s0 s) m r -> Either (s (Coroutine (EitherFunctor s0 s) m r)) r local (Left (RightF s)) = Left s local (Left (LeftF _)) = undefined local (Right r) = Right r -- | Class of functors that can contain another functor. class Functor c => ChildFunctor c where type Parent c :: * -> * wrap :: Parent c x -> c x instance (Functor p, Functor s) => ChildFunctor (EitherFunctor p s) where type Parent (EitherFunctor p s) = p wrap = LeftF -- | Class of functors that can be lifted. class (Functor a, Functor d) => AncestorFunctor a d where -- | Convert the ancestor functor into its descendant. The descendant functor typically contains the ancestor. liftFunctor :: a x -> d x instance Functor a => AncestorFunctor a a where liftFunctor = id instance (Functor a, ChildFunctor d, d' ~ Parent d, AncestorFunctor a d') => AncestorFunctor a d where liftFunctor = wrap . (liftFunctor :: a x -> d' x) -- | Converts a coroutine into a child nested coroutine. liftParent :: forall m p c x. (Monad m, Functor p, ChildFunctor c, p ~ Parent c) => Coroutine p m x -> Coroutine c m x liftParent
http://hackage.haskell.org/package/monad-coroutine-0.7/docs/src/Control-Monad-Coroutine-Nested.html
Code:
#include <std_disclaimer.h>
/* ... */

Install guide:

Boot:
Code:
fastboot boot <twrp.img>

Install:
Code:
fastboot flash recovery <twrp.img>

Download: twrp-3.4.0-0-burton-beta1.img

What's working: Everything I've tested seems to be working aside from a few bugs:
- EFS does not appear on backup menu
- Super partition appears twice on backup menu
- Battery percentage does not appear

Source code:
Device tree:
TWRP Source:

Thanks to:
Code:
* vache for providing the racer device tree as a base
* TWRP devs

XDA:DevDB Information
TWRP, Tool/Utility for the Motorola Edge +
Contributors: pixlone, vache
Source Code:

Version Information
Status: Beta
Current Beta Version: 1
Beta Release Date: 2020-09-01

Created 2020-09-01
Last Updated 2020-09-01
https://forum.xda-developers.com/t/recovery-unofficial-twrp-3-4-0-0.4156905/
I would like to turn the representation of a stream back into a stream. If I do this under python 2.2.1 running under Linux I get:

>>> import sys
>>> x = eval(repr(sys.stdout))
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<string>", line 1
    <open file '<stdout>', mode 'w' at 0x8101a40>
    ^
SyntaxError: invalid syntax
>>>

So is this possible? What am I really trying to do here? Well, I want to redirect sys.stdout to be something else and then reset it back later. Only I don't want to have to remember, when later comes around, that it was sys.stdout rather than some other stream. So the question is: is there a way to save a "pointer" to a variable in Python so that it can be set later? In C++ I might do:

------------------------------------------------------------------------
#include <iostream>
using namespace std;

int main()
{
    int foo = 1;
    int *bar = &foo;
    *bar = 2;
    cout << "foo: " << foo << endl;
}
------------------------------------------------------------------------

and foo is now 2. Is there something that can do this in python?
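In modern Python, the closest analogue to "taking the address of sys.stdout" is to record the container object and the attribute name, then write through them later with setattr. A sketch of the idea (my own, not from the thread):

```python
import sys
import io

def make_ref(obj, name):
    """Return getter/setter closures that act like a pointer to obj.name."""
    def get():
        return getattr(obj, name)
    def set(value):
        setattr(obj, name, value)
    return get, set

# "Take the address of" sys.stdout, redirect it, then restore it.
get_out, set_out = make_ref(sys, 'stdout')
original = get_out()

set_out(io.StringIO())      # redirect stdout to an in-memory buffer
print("captured")           # goes into the StringIO buffer, not the console
captured = get_out().getvalue()
set_out(original)           # restore without remembering it was sys.stdout

print(captured.strip())     # captured
```

The (sys, 'stdout') pair can be stored anywhere and passed around, so the restoring code never needs to know which stream it is resetting.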
https://mail.python.org/pipermail/python-list/2002-August/151003.html
I'm using Linux Mint 18.3, Zerynth 2.1, and an Arduino Due. I've attached my full program, and a copy of the output. You can see that the "released" function gets called but the "pressed" function is never called. You can also see a few places where "release" is called when I have pressed the button, and is then called again when I release the button. It is also called twice when I have quickly pressed and released the button. What am I doing wrong?
Ian

onPinFall and onPinRise not doing what they should?

Hi mogplus8, this is a known issue, and we are already working on it. SAM3X devices (Arduino Due and Flip&Click) have a bug in their onPinFall function. Try to build your application, at least for now, with onPinRise while waiting for our updates. Stay tuned :)

Hi mogplus8, the onPinFall and onPinRise functions are now fixed for SAM3X-based devices (Arduino Due and Flip&Click). To test it, you have to create a new virtual machine for your device (don't worry, the number of your available licenses will not decrement) and virtualize it again. Let me know if this fixes your problem

Thanks Matteo. I have been away for the last few weeks and haven't had a chance to test this again. I will have another go at it in the next few days hopefully.

Thanks to nborrell for confirming. While I have your attention I have another query about interrupts. I have 18 pins of my Due connected to switches, and I'd like to have an interrupt on each one. Is it possible to put an interrupt in a loop? i.e.

for swPin in range(22, 39):
    onPinFall(swPin, doSwitchCheck)
    onPinRise(swPin, doSwitchCheck)

Then the function doSwitchCheck has a bunch of if/elif/else statements in it to determine which pin activated the interrupt. Or do I have to code each interrupt separately? Thanks, Ian

Hi Ian, it is definitely possible to set your callbacks in a loop and to have the same function for all of your pins.
Anyway, I suggest you pass the pin as an argument to your callback:

for swPin in range(22, 39):
    onPinFall(swPin, doSwitchCheck, swPin)
    onPinRise(swPin, doSwitchCheck, swPin)

to ease the process of determining which pin activated the interrupt. Let me know if everything works as expected

Particle Photon here, both pinrise and pinfall do not work at all. Verified the pins are reading low and high correctly with digitalRead, it just doesn't kick off the callback functions:

onPinRise(pir_pin, motion_on)
onPinFall(pir_pin, motion_off)

I too can confirm that onPinRise and onPinFall are not working for Nucleo F401RE. I just got one of these and just tried it. Zip, zero, zilch, nada, nothing. The interrupt never calls the callback. Unfortunately I can't check it with my Due as I don't have access to it any more.

EDIT. It works when the input is a button. It doesn't work if the input is a pin triggered by a PWM signal. Maybe it's a timing issue? This is the program I'm running. On my board I have D15 directly connected to D14. Although I believe it is possible to read the value of D15 even though it is set as an output pin. The output prints in the loop are printed once only, as the loop is entered on startup (as sw1 is initialised to True) but is never entered again.

###########
import pwm
import streams

streams.serial()
print("here we go...")

duty = 1000
sw1 = True
PPMout = D15.PWM
PPMtrig = D14
pinMode(PPMout, OUTPUT)
pinMode(PPMtrig, INPUT_PULLUP)

def PPMset():
    sw1 = True

onPinFall(PPMtrig, PPMset)

while True:
    if (sw1 == True):
        sw1 = False
        print("loop entered")
        pwm.write(PPMout, duty, 300, time_unit=MICROS, npulses=1)
        print("after pwm.write")
        duty += 100
        if (duty > 2000):
            duty = 5000
        if (duty > 4900):
            duty = 1000

Ian

I tried the Input Capture Unit example too, it didn't work either. The button interrupt does (that's how I found out, trying this example) but the only print I ever see is from the button interrupt. The ICU interrupt is never executed. Ian

Bump.
Any progress on this issue? I’m in the middle of moving house so unfortunately don’t have much time to test.
https://community.zerynth.com/t/onpinfall-and-onpinrise-not-doing-what-they-should/1073
Hackety Hacking Problem Pyramid NestedLoops
From OLPC
Revision as of 04:59, 6 April 2008 by CharlesMerriam (Talk | contribs)

Problem Statement

Pyramid-shaped structures were built by many ancient civilizations. Here, then, is a problem for you: you have to design a two-dimensional view of the pyramid.

HINT: Use nested for loops.

Sample Output

Enter the height of the pyramid: 4

   *
  * *
 * * *
* * * *

Curriculum Connections

This problem gives students help in understanding the things mentioned below:
- How to use nested loops.

Solution in Ruby

#The following statement takes input from the user
print("Enter the height of the pyramid")
N = gets.to_i
print("\n")
0.upto(N-1) do |i|        # Making a loop running from 0 to N-1 using loop variable i
  0.upto(N-1-i) do |k|    # This is a nested loop
    print " "
  end
  print "*"
  0.upto(i-1) do |j|      # This is a nested loop
    print(" *")
  end
  print("\n")
end                       # End of outer loop

Solution in C

#include <stdio.h>
#include <conio.h>

int main(void)
{
    int i, j, k, n = 0;
    clrscr();
    printf("Enter the height of the pyramid\n");
    scanf("%d", &n);
    for (i = 0; i < n; i++)
    {
        for (k = 0; k < n - 1 - i; k++)
        {
            printf(" ");
        }
        printf("*");
        for (j = 0; j < i; j++)
        {
            printf(" *");
        }
        printf("\n");
    }
    getch();
    return 0;
}
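The same nested-loop idea can be expressed compactly in Python (my addition, not part of the original page): each row i gets n-1-i leading spaces, one star, and then i more " *" pairs, exactly as in the C version.

```python
def pyramid(n):
    """Return the pyramid of height n as a list of lines."""
    lines = []
    for i in range(n):
        # Leading spaces push the row toward the centre,
        # then one star followed by i more " *" pairs.
        lines.append(" " * (n - 1 - i) + "*" + " *" * i)
    return lines

for line in pyramid(4):
    print(line)
```

Here string repetition replaces the two inner loops, but the row structure is identical.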
http://wiki.laptop.org/index.php?title=Hackety_Hacking_Problem_Pyramid_NestedLoops&oldid=123550
Search: Search took 0.03 seconds. - 7 Dec 2008 8:52 AM - Replies - 5 - Views - 2,374 Hello, maybe you allready solved this problem if not, this work for me and can help you ContentPanel cp = new ContentPanel(); cp.setSize("100%", "100%"); //take the all div... - 3 Aug 2008 2:15 PM - Replies - 3 - Views - 1,758 thx man ! - 3 Aug 2008 6:55 AM - Replies - 3 - Views - 1,758 Is it possible to do something like ContentPanel cp = new ContentPanel(); cp.setLayout(new BorderLayout()); LayoutContainer lc = new LayoutContainer(); BorderLayoutData bld = new... - 31 Jul 2008 1:59 PM Not yet sorry, soon I will work on it, next week I hope. - 31 Jul 2008 12:03 AM - Replies - 3 - Views - 1,190 Thanks, maybe you can help me for this too. getItemByItemId(id).getId(); getItemByItemId(id).getDepth(); getItemByItemId(id).toString(); return a null pointer exception but... - 30 Jul 2008 3:42 PM - Replies - 3 - Views - 1,190 Maybe I don't understand the way it works setItemId/getItemById and setId/getItemById return NullPointerException public class Test implements EntryPoint { /** * This is the... - 22 Jul 2008 4:31 AM Thank zaccret and gslender for your help. wrong doctype... I apologize I think I will write my own applicationCreator for gxt. - 21 Jul 2008 2:05 PM - Replies - 2 - Views - 1,041 The same thing appear when I tried on Win 2003 and Firefox 3.... Edit : Sorry wrong doctype, all work fine now - 21 Jul 2008 2:57 AM I tested against GWT 1.5 GXT 1.0 on Win 2003 hosted mode : work ie6 : work firefox 3 : don't work ... same thing with GXT 1.0.1 - 20 Jul 2008 9:01 PM - Replies - 2 - Views - 1,041 I don't tested on windows but it works for gslender look at - 20 Jul 2008 6:49 PM produced this on Ubuntu 8.04 Hosted mode and FF3 - 20 Jul 2008 2:45 PM I tried, nothing changed ps: I'm on linux - 20 Jul 2008 2:22 PM hi, I tried to set a margin between all the 3 FormPanels in GWT 1.5 GXT 1.0.1 hosted and browser mode with this code : Results 1 to 13 of 13
https://www.sencha.com/forum/search.php?s=fb6bc6ba3bdd7a4b8cdb861c11e06b62&searchid=13303322
I want to plot some data. The first column contains the x-data. But matplotlib doesn't plot this. Where is my mistake?

import numpy as np
from numpy import cos
from scipy import *
from pylab import plot, show, ylim, yticks
from matplotlib import *
from pprint import pprint

n1 = 1.0
n2 = 1.5

# alpha, beta, intensity
data = [
    [10, 22, 4.3],
    [20, 42, 4.2],
    [30, 62, 3.6],
    [40, 83, 1.3],
    [45, 102, 2.8],
    [50, 123, 3.0],
    [60, 143, 3.2],
    [70, 163, 3.8],
]

for i in range(len(data)):
    rhotang1 = (n1 * cos(data[i][0]) - n2 * cos(data[i][1]))
    rhotang2 = (n1 * cos(data[i][0]) + n2 * cos(data[i][1]))
    rhotang = rhotang1 / rhotang2
    data[i].append(rhotang)  # append 4th value

pprint(data)

x = data[:][0]
y1 = data[:][2]
y3 = data[:][3]

plot(x, y1, x, y3)
show()

x = data[:][0]
y1 = data[:][2]
y3 = data[:][3]

These lines don't do what you think. First they take a slice of the array which is the whole array (that is, just a copy), then they pull out the 0th, 2nd or 3rd ROW from that array, not a column. You could try x = [row[0] for row in data] etc.
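The corrected column extraction looks like this in full (shown with a small made-up dataset, since the real rows get a computed fourth value appended):

```python
# Each row is [alpha, beta, intensity, rhotang] (illustrative values).
data = [
    [10, 22, 4.3, -0.2],
    [20, 42, 4.2, -0.1],
    [30, 62, 3.6,  0.3],
]

# data[:][0] is just the first ROW (data[:] copies the list, [0] indexes it).
# Pull columns with comprehensions instead:
x  = [row[0] for row in data]
y1 = [row[2] for row in data]
y3 = [row[3] for row in data]

print(x)   # [10, 20, 30]
print(y1)  # [4.3, 4.2, 3.6]
```

An equivalent one-liner is zip(*data), which transposes rows into columns; converting data to a NumPy array would also allow true column slicing with data[:, 0].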
https://codedump.io/share/BjSw4DhpjD0f/1/python-x-y-plot-with-matplotlib
package org.netbeans.lib.java.parser;

/**
 * The class <code>CompilerException</code> is thrown when the compiler
 * aborts during parsing or error checking. Its cause is the original
 * <code>Throwable</code>. The detail messages of this exception and
 * its cause are identical.
 *
 * @author Tom Ball
 */
public class CompilerException extends Exception {

    /**
     * Constructs a new compiler exception with the specified cause.
     * The message for this exception is copied from the cause throwable.
     *
     * @param cause the throwable to be wrapped by this exception. It
     * cannot be null.
     */
    public CompilerException(Throwable cause) {
        super(cause.getMessage(), cause);
    }

}
http://kickjava.com/src/org/netbeans/lib/java/parser/CompilerException.java.htm
hi, I'm creating a Web Service from SE37. After giving the name of the Virtual Interface (ZUser), the name of the Web Service Definition (ZUser) and the package ($tmp), the system asks me to enter a REGISTER OBJECT key for object R3TR CLAS /1BCDWB/WSC0........

Why? Which object is it? Do I need to register once?

Thank you for your help
joseph

Hi Joseph,

What have you called your Function Module? It should be in the customer namespace, beginning with 'Z'. This looks like the message you receive when you are changing a SAP standard object. SAP enforces change control on any SAP standard objects by forcing you to obtain an Object Key from them and enter it whenever you change a SAP standard object.

It looks like you are aware of the above, but I would double-check that everything you are developing is in the customer namespace (Z) and that your Object Class is also in the customer namespace ($tmp should be OK, which is a bit confusing).

Hope this helps - please consider awarding points if it did.

Kind Regards,
Richard
https://answers.sap.com/questions/855622/web-service-with-se37.html