QSsl Configuration in VS2013 strange error

Hi all, I have a Qt 4.8.6 custom build with VS 2013. In my project everything works fine except QSslConfiguration. I have this in my source:

#include <QSslConfiguration>

HTTPDataReader::HTTPDataReader(){
    QSslConfiguration sslconfigure(QSslConfiguration::defaultConfiguration()); // error line
}

This line gives me errors in the VS IDE (Qt 4.8.6 custom build with VS 2013), but the same file with Qt Creator as the IDE and the same compiler (Qt 4.8.6 with VS 2013) works. Any idea how to solve this? It seems VS is missing something. Is there any way to check whether all my includes are found? It shows these errors:

Error 1 error C2027: use of undefined type 'QSslConfiguration'
Error 2 error C3861: 'defaultConfiguration': identifier not found
Error 3 error C2079: 'sslconfigure' uses undefined class 'QSslConfiguration'
(Errors 4 through 8 repeat the same three messages.)

thanks, Ni.Sumi

Did you forget to add the QtNetwork module to the VS project?

You mean in the .pro? If yes, I have added it, and all the other stuff like QNetworkAccessManager manager; QNetworkReply* reply works fine. I have these errors only for QSslConfiguration and QSslError. But currently I don't have a .pro file with my project; I built the project using CMake.

@Ni-Sumi Hello Ms. Sumi, perhaps you could check what was generated for the Makefile? Were the appropriate include paths passed on by CMake? Kind regards.

Hi Mr. @kshegunov, yes, I have checked the file and I can see these lines:

//Path to a file.
QT_QTNETWORK_INCLUDE_DIR:PATH=C:/Qt/qt4.8.6_vs12/include/QtNetwork
//The Qt QTNETWORK library
QT_QTNETWORK_LIBRARY:STRING=optimized;C:/Qt/qt4.8.6_vs12/lib/QtNetwork4.lib;debug;C:/Qt/qt4.8.6_vs12/lib/QtNetworkd4.lib
//Path to a library.
QT_QTNETWORK_LIBRARY_DEBUG:FILEPATH=C:/Qt/qt4.8.6_vs12/lib/QtNetworkd4.lib
//Path to a library.
QT_QTNETWORK_LIBRARY_RELEASE:FILEPATH=C:/Qt/qt4.8.6_vs12/lib/QtNetwork4.lib

@Ni-Sumi It is strange indeed. What about the compile lines from Qt Creator and VS? You mentioned everything works fine with Creator. Can you check the compile line for the offending file in each of the IDEs and post it here? There must be some flag that's missing, or something of this sort.

@Ni.Sumi The .pro file seems to work fine here, as the problem does not arise when calling qmake. The Visual Studio IDE does not use the .pro file, however; you should check that your VS project has $(QTDIR)\include\QtNetwork as part of the "Additional Include Directories" (if you are using the Qt VS add-in, just check the Qt Network checkbox in the Qt configuration of the project).

Yes, it seems this is my problem. I made a test project with QtNetwork checked and one without it at the beginning of the project, and the project without QtNetwork checked did not build. But I created my real project using CMake, which is a different way of creating Qt projects in the VS IDE from Menu/Qt Add-in (nothing like new project, then naming source files, checking Qt Widgets, Network, OpenGL, etc.). Could you please let me know how I can enable QtNetwork this way in the middle of the project? There must be a way to do this.
This is the CMake line which links all the Qt libraries in VS:

target_link_libraries(${PROJECT_NAME} Qt4::QtCore Qt4::QtDeclarative Qt4::QtDesigner Qt4::QtDesignerComponents Qt4::QtGui Qt4::QtOpenGL Qt4::QtNetwork Qt4::QtTest Qt4::QtXml Qt4::QtXmlPatterns)

But it's completely strange to me: all the network-related stuff like QNetworkAccessManager and QNetworkReply worked fine; the problem started only when I started adding #include <QSslConfiguration> and <QSslError>.

@kshegunov, yes, in Qt Creator it works fine with the same compiler (VS 2013 with Qt 4.8.6). And this is the line which is causing the problem in my project:

QSslConfiguration sslconfigure(QSslConfiguration::defaultConfiguration()); // error line

and these are the errors:

error C2027: use of undefined type 'QSslConfiguration'
error C3861: 'defaultConfiguration': identifier not found
error C2079: 'sslconfigure' uses undefined class 'QSslConfiguration'

@Ni.Sumi I mean the compiler invocation, not the error (I saw the error in your first post). It should look something like this:

g++ -c -pipe -g -std=gnu++0x -Wall -W -D_REENTRANT -fPIC -DQT_QML_DEBUG -DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I../../myproject -I. -I../../../qt/qt-5.6/qtbase/include -I../../../qt/qt-5.6/qtbase/include/QtWidgets -I../../../qt/qt-5.6/qtbase/include/QtGui -I../../../qt/qt-5.6/qtbase/include/QtCore -I. -I../../../qt/qt-5.6/qtbase/mkspecs/linux-g++ -o somesource.o ../../myproject/somesource.cpp

(I've taken it from g++, but you get the idea: the compile line, the call that compiles the offending source.) Kind regards.
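For reference, a minimal CMake sketch of what the thread converges on: making sure the generated VS project gets the QtNetwork include directories. Target and file names here are illustrative, not taken from the poster's project:

```cmake
# Hypothetical CMakeLists.txt fragment for a Qt 4.8 project targeting VS 2013.
# find_package(Qt4 ...) locates the Qt build; with QtNetwork listed as a
# component, linking against the imported Qt4::QtNetwork target propagates
# <qtdir>/include/QtNetwork into the generated .vcxproj's
# "Additional Include Directories" -- the piece the QSslConfiguration
# errors suggest was missing.
find_package(Qt4 4.8 REQUIRED QtCore QtGui QtNetwork)
add_executable(myapp main.cpp HTTPDataReader.cpp)
target_link_libraries(myapp Qt4::QtCore Qt4::QtGui Qt4::QtNetwork)
```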
https://forum.qt.io/topic/68134/qssl-configuration-in-vs2013-strange-error/9
CC-MAIN-2020-10
refinedweb
810
54.42
I've been working on this for a while; I'm trying to create a simple tank shooter and I'm stuck at making the cannons movable. It seems like a stupid question, but do you know how I can do this? Try this simple script if you don't understand what I am trying to do.

Code: Select all

import sys, pygame

pygame.init()
SIZE = width, height = 600, 400
screen = pygame.display.set_mode(SIZE)
pos = [300, 200]
horizontal_direction = 1
vertical_direction = -1
clock = pygame.time.Clock()

while 1:
    clock.tick(50)
    screen.fill((255, 100, 150))
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            sys.exit()
    if pos[0] == 300:
        vertical_direction *= -1
    if pos[0] > 300:
        if horizontal_direction == 1 and pos[1] >= height:
            horizontal_direction *= -1
            vertical_direction *= -1
    if pos[0] < 300:
        if horizontal_direction == -1 and pos[1] >= height:
            horizontal_direction *= -1
            vertical_direction *= -1
    pygame.draw.line(screen, (0,0,0), (300, 400), pos, 5)
    pos[0] += horizontal_direction
    pos[1] += vertical_direction
    pygame.display.update()
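A different way to make the cannon movable is to track an angle instead of walking the endpoint back and forth. The aiming math alone can be sketched without pygame; the drawing call in the script above would then become `pygame.draw.line(screen, (0,0,0), pivot, cannon_endpoint(pivot, angle, 100), 5)`. The function name is my own, not from the original script:

```python
import math

def cannon_endpoint(pivot, angle_deg, length):
    """Return the tip of a cannon barrel anchored at pivot, rotated
    angle_deg degrees from horizontal (0 = pointing right, 90 = straight
    up), with the given barrel length. Screen y grows downward, so the
    sine component is subtracted."""
    rad = math.radians(angle_deg)
    x = pivot[0] + length * math.cos(rad)
    y = pivot[1] - length * math.sin(rad)
    return (x, y)

# Sweep the barrel between 30 and 150 degrees, one step per frame:
angle, step = 90, 1
for _ in range(3):
    if angle <= 30 or angle >= 150:
        step = -step  # reverse direction at the sweep limits
    angle += step
```

With this approach the per-frame update is just `angle += step`, and the endpoint bookkeeping (and its edge cases) disappears.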
http://www.python-forum.org/viewtopic.php?f=26&t=2772
CC-MAIN-2016-30
refinedweb
165
50.84
On 2010-03-24, Alan G Isaac wrote:
> On 3/24/2010 5:25 AM, Guenter Milde wrote:
>> The standard Python distutils allows specifying a dependency since
>> version 2.5:
>
> Are you assuming the user has installed easy_install?
> I'm going to guess most Windows users have not.

So they have the choice to either install setuptools and from then on install Python modules and packages in an easy way, or to download and install the roman module from the PyPI URL (which we, of course, will give in the documentation). This is one click and one command with distutils --- something we can expect from the user if it allows us to become norm-conforming.

> And what if the user has some other roman.py?
> (E.g., I think there's one by Jim Walsh floating around.)
> Will distutils be able to handle that properly?

Like our present "solution", distutils will not be able to differentiate between two Python modules with the same name. If both were installed, it depends on their position in sys.path which one is used. If they are incompatible, this is of course bad. (This is why the LaTeX license forbids distributing modified versions under the same name.) However, modules/packages from PyPI will be unique-named; only modules "from the wild" might pose problems here. Additionally, distutils has the feature to specify a minimal version requirement. This would be an advantage over the present workaround in setup.py which *could* help to differentiate between the "official" (i.e. published on PyPI) module and some other equally named file.

> I still do not understand why it is appropriate to invoke
> such a dependency mechanism for a single, small module
> that can easily be packaged.

Because this is the standard way. It is established and the way Python users will expect. Because I don't want to update a module that exists independent from Docutils. Because I don't want multiple instances on my system.
Günter

On 3/24/2010 5:25 AM, Guenter Milde wrote:
> The standard Python distutils allows specifying a dependency since
> version 2.5:

Are you assuming the user has installed easy_install? I'm going to guess most Windows users have not. And what if the user has some other roman.py? (E.g., I think there's one by Jim Walsh floating around.) Will distutils be able to handle that properly? (These are real, not rhetorical, questions.) I still do not understand why it is appropriate to invoke such a dependency mechanism for a single, small module that can easily be packaged. Isn't there some way for it to go in the docutils namespace to avoid potential conflicts? The licensing issue seems manageable: just put it in docutils/notpublicdomain/psf/roman.py

Alan

On Wed, Mar 24, 2010 at 05:25, Guenter Milde <milde@...> wrote:
> On 2010-03-23, David Goodger wrote:

Yes, I think so. I don't see any alternative. -- David Goodger

On 2010-03-23, David Goodger wrote:
> On Tue, Mar 23, 2010 at 10:35, Alan G Isaac <aisaac@...> wrote:
>>> Someone wrote:
>>>> I think docutils should never include roman but require it as a
>>>> dependency.
>> On 3/23/2010 7:41 AM, Guenter Milde wrote:
>>> Agreed. This would also make the setup.py file simpler. ...

Günter

It's my pleasure to announce that I just uploaded rst2pdf 0.14 to the site. Rst2pdf is a program and a library to convert reStructuredText directly into PDF using ReportLab. It supports True Type and Type 1 font embedding, most raster and vector image formats, source code highlighting, arbitrary text frames in a page, cascading stylesheets, the full reStructuredText syntax and much, much more. It also includes a Sphinx extension so you can use it to generate PDFs from documents built with Sphinx. In case of problems, please report them in the issue tracker or the mailing list. This release fixes several bugs and adds some minor features compared to 0.13.2.
Here are some of the changes:

* Fixed Issue 197: Table borders were confusing.
* Fixed Issue 297: styles from default.json leaked onto other syntax highlighting stylesheets.
* Fixed Issue 295: keyword replacement in headers/footers didn't work if ###Page### and others were inside a table.
* New feature: oddeven directive to display alternative content on odd/even pages (good for headers/footers!)
* Switched all stylesheets to the more readable RSON format.
* Fixed Issue 294: Images were deformed when only height was specified.
* Fixed Issue 293: Accept left/center/right as alignments in stylesheets.
* Fixed Issue 292: separate style for line numbers in codeblocks.
* Fixed Issue 291: support class directive for codeblocks.
* Fixed Issue 104: total number of pages in header/footer works in all cases now.
* Fixed Issue 168: linenos and linenothreshold options in Sphinx now work correctly.
* Fixed regression in 0.12 (interaction between rst2pdf and sphinx math).
* Documented extensions in the manual.
* Better styling of bullets/items (Issue 289).
* Fixed Issue 290: don't fail on broken images.
* Better font finding in windows (patch by techtonik, Issue 282).
* Fixed Issue 166: Implemented Sphinx's hlist (horizontal lists).
* Fixed Issue 284: Implemented production lists for sphinx.
* Fixed Issue 165: Definition lists not properly indented inside admonitions or tables.
* SVG images work inline when using the inkscape extension.
* Fixed Issue 268: TOCs shifted to the left on RL 2.4.
* Fixed Issue 281: sphinx test automation was broken.
* Fixed Issue 280: wrong page templates used in sphinx.

Enjoy!
https://sourceforge.net/p/docutils/mailman/docutils-users/?viewmonth=201003&viewday=24
CC-MAIN-2017-04
refinedweb
953
66.64
Hotmail is probably the most popular e-mailing facility found on the web today. Millions of people around the world use it for everyday communication with friends and relatives. The main reason for its popularity is the fact that you can use the service absolutely free of charge. Until now, users were only able to read their e-mails through the web interface. No longer! This article will enable you to build your own client, using a sure and solid way to communicate with Hotmail in the same way as Outlook does. It will be shown how the protocol can be used to your own advantage; it isn't an enormous task to implement. It turned out to be not so difficult at all. After a review of the SourceForge article, a few things are noticeably interesting. The authentication is done via HTTP headers and is described in RFC 2617. To build the client, two components need to be created: an HTTPMail proxy that is able to do HTTP authentication as described in the RFC, and a client on top of it. Instead of building a custom proxy, another method could be used: Microsoft ships a component called XMLHTTP that is able to make XML-based HTTP requests. The use of this class presented several problems, however. A query for a single property such as msgfolderroot returns all the possible responses for that request, so a query to obtain only the msgfolderroot would return information about all the mailboxes. This isn't fatal, of course, but it does show that the class could use some work. The proxy class that is built in this document does not show this problem. Before reading the rest of this article, I assume that you have basic knowledge of HTTP and the HTTPMail protocol. Please note that the code is fully documented in the source files. The proxy will be responsible for making HTTP requests to the Hotmail servers. This will require various things to be implemented, among them the PROPFIND method, used instead of the usual GET and POST. Let's start with building the class framework.
public class HotmailProxy
{
    private CookieContainer ccContainer;

    public HotmailProxy()
    {
        ccContainer = new CookieContainer();
    }
}

OK, not too interesting. Now let's get on with building the really important part: the method that actually sends the client's request to a remote host. Using this example and the webpage provided at the top of this article, it will be easy to implement the rest of an e-mail client (it was for me!). First, let's begin again with the class framework:

public class HotmailClient
{
    private HotmailProxy hHttp = null;

    public HotmailClient() {}
}

Now for the only public method, Connect(). This method will connect to Hotmail using HTTP authentication, and it will parse the response to obtain the URL of the msgfolderroot. Next, this URL will be used to determine some mailbox information. This completes our first method. As you can see, it calls two other methods which we will now construct; these two contain the interesting parts, parsing the response. But first an important helper method needs to be built; without it all XPath queries fail. To parse the response, the XmlNamespaceManager is built, and XPath is used. Besides the webpage stated in the introduction, it's also possible to use a network analyzer for this purpose. Probably you will need to use both. The last method in this example will retrieve information about all mailboxes on the server. Basically it does the same thing as the first two methods combined, so you should already be able to build it yourself.
http://www.codeproject.com/KB/IP/csaccesshotmailext.aspx
crawl-002
refinedweb
650
64.71
Ok, in the Java course that I am taking I am given an outline like this:

Code:

public class sd {
    //declare a private integer instance variable to store count of diamonds;
    //declare a private double instance variable to store percentage;
    //declare a private String instance variable to store a name;

    //constructor begins here...
    public sd() {
        //Initialize all instance variables to zero;
    }

    void Init(String s) {
        //this method initializes the count and percentage to zero and stores the string value in the name variable
    }

    ............

    String GetName() {
        //this method returns the name variable;
    }
}

Ok, so my question is: in Init(String s), how do I store the string value in the name variable? Is it something like s = new String();? Sorry if this is a bit vague; for more info on the actual lab go here. Let me know if you need to see my entire code for the method and object class.
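For what it's worth, here is a sketch of the filled-in outline (the method bodies are mine, following the comments; "" stands in for "zero" on the String field). Storing the string is a plain assignment — s = new String() would just overwrite the parameter, not the field:

```java
public class sd {
    private int count;          // count of diamonds
    private double percentage;
    private String name;

    public sd() {
        count = 0;
        percentage = 0.0;
        name = "";              // Strings have no "zero"; empty is the usual stand-in
    }

    void Init(String s) {
        count = 0;
        percentage = 0.0;
        name = s;               // plain assignment stores the value in the field
    }

    String GetName() {
        return name;            // returns the name variable
    }
}
```

After `new sd().Init("ruby")`, `GetName()` returns "ruby".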
http://www.antionline.com/printthread.php?t=264895&pp=10&page=1
CC-MAIN-2014-15
refinedweb
173
64.04
SPIM_WriteTxData() Usage CY8CKIT-050

mnauss_1472621 Aug 18, 2017 3:22 PM

Greetings; I am using PSoC Creator 3.3 with the CY8CKIT-050. I am trying to communicate with an MPL115A1 barometer via 4-wire SPI. I ran the example project for the SPI (Master, Slave) components and it worked just fine. So I tried to make my own file, using what I gleaned from the example project, to perform SPI communication with the MPL115A1 barometer breakout board. I have been doing hardware design for a while, but am new to embedded code writing, so I am not real strong in that area. I have 4 components in this design (so far): an SPI Master component (ver 2_5) with Tx and Rx buffers set to 4, an LCD component, and two control registers for debugging (because I have not learned the debugger yet). The LCD display works, the debug LEDs work, but I have no data coming out of the MOSI pin, or any other SPI pin for that matter (viewing pins with a DSO). The hardware pins are set to strong drive, initial state low. I am not sure what the problem is, as I am pretty much doing the same thing as in the example code. My code is below:

/* MPL115 */
#include <project.h>
#include <cylib.h>
#include <stdio.h>

//======================//
//        Macros        //
//======================//

//======================//
//   Global Variables   //
//======================//

/* ******************* MPL115A1 Functions **********************/

/*******************************************************************************
* Function Name: ControlReg1_Write
********************************************************************************
* Function enables hardware to display value of register using two LEDs
* Parameters: ledZ
********************************************************************************/
void ControlReg1_Write(uint8 ledZ); // declaring the function
void CyDelay(uint32 milliseconds);

int main()
{
    // CyGlobalIntEnable; /* Enable global interrupts.
    */ // Tried commenting this out as a test (no effect)

    //********************************************************************
    // This section of code is just for debug, remove when not needed
    //********************************************************************
    uint8 ledZ = 0x01; // code debug
    ControlReg1_Write(ledZ);
    CyDelay(250);
    ledZ = 0x00;
    ControlReg1_Write(ledZ);
    CyDelay(250);
    ledZ = 0x01;
    ControlReg1_Write(ledZ);
    CyDelay(250);
    ledZ = 0x00;
    ControlReg1_Write(ledZ);
    CyDelay(250);
    //*******************************************************************

    /* Place your initialization/startup code here (e.g. MyInst_Start()) */

    // Start the LCD and display
    LCD_Start();
    LCD_Position(0u,0u);
    LCD_PrintString("Pressure->");
    LCD_Position(1u,0u);
    LCD_PrintString("Temp->");
    CyDelay(2000);
    ledZ = 0x01;
    ControlReg1_Write(ledZ);
    CyDelay(1000);

    // Start the SPI master component
    SPIM_Start();

    for(;;) // This loops forever, part of design template
    {
        /* Place your application code here. */
        SPIM_WriteTxData(0x24u);
        SPIM_WriteTxData(0x00u);
        SPIM_WriteTxData(0x00u);
        SPIM_WriteTxData(0x00u);
        ledZ = 0x02;
        ControlReg1_Write(ledZ);
        CyDelay(1000);
        ledZ = 0x03;
        ControlReg1_Write(ledZ);
        CyDelay(1000);
    }
} // [] END OF FILE

Well, let me know if anything obvious pops out at you. I appreciate any help offered, and if you know of a good debugger tutorial let me know. Thanks

1. Re: SPIM_WriteTxData() Usage CY8CKIT-050 — bob.marlowe Sep 30, 2015 10:55 PM (in response to mnauss_1472621)

It is always advisable to post your complete project with all of your settings, so that we can have a look at it. To do so, use Creator->File->Create Workspace Bundle (minimal) and attach the resulting file. There is not much to learn with the debugger: just set a breakpoint where you want to stop and inspect your variables (there is some help in Creator). Bob

2. Re: SPIM_WriteTxData() Usage CY8CKIT-050 — mnauss_1472621 Sep 30, 2015 11:17 PM (in response to mnauss_1472621)

Okay, here is my project file.
Thanks for the help.

3. Re: SPIM_WriteTxData() Usage CY8CKIT-050 — bob.marlowe Oct 1, 2015 6:48 AM (in response to mnauss_1472621)

I built a loopback to prove that the system is running as assumed. Small corrections made in source; see attached. Happy coding, Bob

4. Re: SPIM_WriteTxData() Usage CY8CKIT-050 — mnauss_1472621 Oct 1, 2015 9:27 PM (in response to mnauss_1472621)

Thanks a bunch for taking the time to look at my problem. I had just assumed I had a problem with my code, but after you mentioned that the code is working, I had to take a look at my hardware. I had a few issues. When I had my scope on the chip-select line, I had no activity on ch 1 of my scope. Ch 2 was on MOSI and it was just putting out a low. Well, I am somewhat a bonehead for not checking the hardware before posting my question (but as I said, I assumed it was my code). I pulled the little scope clip off my probe and put the probe directly on the MOSI pin, and lo and behold I saw data on my scope; it turns out my clip is very intermittent. I then checked the chip-select pin again with the scope and it was just logic high. I thought it must be the pin, so I moved the pin to P0[7], and I had chip select. Just to be sure, I rerouted back to P0[2] and again no chip select, so this pin appears to be faulty, which is strange because I had used that pin for the last example project and it worked. I must have somehow damaged the pin. When I first got the board I ran some example code and found P0[4] to be faulty, so for some reason these pins are blowing. I do have a rev - CY8CKIT-050; are there any issues with that rev that could cause the pins to fail? Well anyway, thanks for helping me and getting me back on track. One more question: I noticed in your code that you wrote the Tx buffer to a variable MyData, but I do not see how you viewed it? Thank you very much

5.
Re: SPIM_WriteTxData() Usage CY8CKIT-050 — bob.marlowe Oct 2, 2015 2:55 AM (in response to mnauss_1472621)

Viewing "MyData" is quite easy: I set a breakpoint on the line with the "while" (by clicking on the leftmost position in the window), and when the BP is reached, I single-step to read the 4 values from SPI while showing the local variables in the debug window. Bob
https://community.cypress.com/thread/15704
CC-MAIN-2018-39
refinedweb
948
68.3
04 December 2012 16:05 [Source: ICIS news]

TORONTO (ICIS)--PETRONAS and Progress said the project was advancing to its pre front-end engineering design (pre-FEED) phase, following successful completion of a detailed feasibility study. The companies had announced plans for their Pacific Northwest LNG project last year. If realised, the facility will be built on

The project's LNG throughput is currently designed for about 3.8m tonnes/year of LNG. It will include two liquefaction plants, with the possible addition of a third plant at a later stage. If Pacific Northwest LNG proceeds, the estimated investment sum is expected to be up to $11bn (€8.5bn), depending on the final project scope. A final investment decision is expected by late 2014, with LNG exports to begin in 2018. However, the companies added that the project's final throughput capacity and investment sum will depend on the Canadian government's approval of the planned takeover of Progress by PETRONAS.
http://www.icis.com/Articles/2012/12/04/9621092/petronas-partner-advance-on-planned-canada-lng-export-project.html
CC-MAIN-2015-22
refinedweb
163
56.05
VANCOUVER, BRITISH COLUMBIA -- (Marketwired) -- 08/26/14 -- Asanko Gold Inc. ("Asanko" or the "Company") (TSX: AKG)(NYSE MKT: AKG) announces that it has entered into a settlement agreement ("Agreement") with a private Ghanaian company, Goknet Mining Company ("Goknet"), to eliminate Goknet's claim for a 2% net smelter return ("NSR") royalty on Phase 1 of the Company's flagship Asanko Gold Mine Project ("AGM" or the "Project"). The only material royalty now applicable to Phase 1 of the Project is the Government of Ghana's 5% NSR royalty. The financial terms of the Agreement are confidential; however, they are not considered material to Asanko. The settlement involves cash, one million Asanko shares and the transfer to Goknet of two non-material exploration projects, Kubi and Diaso. Under the Agreement, the Company will retain a right to match any future offer made to Goknet with respect to a disposal of the Diaso Project concessions. A map of the Asanko concessions in Ghana is shown in the attached figure or on the Company's website. The Agreement is subject to certain conveyances, which are expected to be completed in due course. The AGM will be developed in two phases. Phase 1 of the project was approved for construction in July 2014. Phase 1 has 2.43Moz(2) of Proven and Probable Mineral Reserves contained in the Nkran, Adubiaso, Abore and Asuadai deposits and is targeting steady-state production of 200,000oz/pa of gold(1) during Q2 2016. Construction is underway and first gold is targeted in Q1 2016. Phase 2 has an additional 2.37Moz(3) of Proven and Probable Mineral Reserves which are located at the Esaase deposit, approximately 30km north of the processing plant site. A scoping study is underway to investigate a Phase 2 expansion and is expected to be published in Q1 2015.
Asanko is managed by highly skilled and successful technical, operational and financial professionals. The Company is strongly committed to the highest standards for environmental management, social responsibility, and health and safety for its employees and neighbouring communities.

Notes:
1. PMI Gold Corporation's Definitive Feasibility Study ("DFS") on the Obotan Project, published in September 2012. See filing on.
2.
3.

Toll-Free (N. America): 1-855-246-7341
Telephone: +44-7932-740-452
Asanko Gold Inc.
Greg McCunn, Chief Financial Officer
Telephone: +1-778-729-0604
Asanko Gold Inc.
General Website:.
http://news.sys-con.com/node/3160232
CC-MAIN-2015-32
refinedweb
415
54.32
I've been recently working on this program; it's supposed to be able to confirm whether or not the guess you input is the same as any of the three answers. The issue is that no matter what I put, it always tells me the guess was wrong. I've checked the output in various places for consistency in the variables, but they all look the way they should. If you could tell me what went wrong, and how to fix it, that'd be much appreciated. Thanks!

import javax.swing.JOptionPane;
import java.util.Scanner;
import java.io.IOException;
import java.io.File;
import static java.lang.System.out;

public class TestDialogBox {
    public static void main(String args[]) throws IOException {
        Scanner riddleBook = new Scanner(new File("RiddleBook.txt")); // Initialize scanner for riddles & answers
        String guess = null;
        String answer1 = riddleBook.nextLine();
        String answer2 = riddleBook.nextLine();
        String answer3 = riddleBook.nextLine();
        guess = JOptionPane.showInputDialog(riddleBook.nextLine());
        if (guess == answer1 || guess == answer2 || guess == answer3) {
            JOptionPane.showMessageDialog(null, "Awesome!");
        } else {
            JOptionPane.showMessageDialog(null, "Bummer. The correct answer could have been: "
                    + answer1 + ", " + answer2 + ", or " + answer3);
            JOptionPane.showMessageDialog(null, "Your guess was: " + guess);
            // JOptionPane results are for demonstrating the answers and your guess didn't change
        }
    }
}

The text file that it scans is as follows:

man
humans
people
What goes on four legs in the morning, two legs in the afternoon, and three legs in the evening?
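One likely culprit, for readers who land here: `guess == answer1` compares object references, while String.equals compares the characters. showInputDialog and Scanner.nextLine both return freshly created String objects, so == is false even when the text is identical. A minimal, self-contained demonstration (the class and method names are mine, not from the original post):

```java
public class GuessCheck {
    // True when the guess matches any accepted answer by content.
    static boolean matches(String guess, String... answers) {
        for (String a : answers) {
            if (guess.equals(a)) {  // compare contents, not references
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String typed = new String("man");        // distinct object, like dialog input
        System.out.println(typed == "man");      // reference comparison: false
        System.out.println(typed.equals("man")); // content comparison: true
        System.out.println(matches(typed, "man", "humans", "people"));
    }
}
```

Replacing the == chain with `guess.equals(answer1) || guess.equals(answer2) || guess.equals(answer3)` would make the program behave as described.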
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/31398-solution-error-y-u-no-work-printingthethread.html
CC-MAIN-2014-15
refinedweb
232
58.79
"Proxy by Name", a new feature in JNBridgePro 7.3

We've added a new feature to JNBridgePro 7.3 that we're calling "proxy by name". Proxy by name automatically maps the names of method and constructor parameters from the underlying Java or .NET code into the generated proxies. It's been in the top-requested-feature list for some time now, but until recently the .NET and Java APIs that were available haven't been good enough for us to do a good job on this feature. We're happy to now be able to make it available. Proxying parameter names is powerful because it means that the native parameter names will appear in tool tips and IntelliSense in Visual Studio, Eclipse, and other IDEs, rather than a meaningless alias. This helps document the APIs, and presents the information right where it's used: in the code editor. Previously, the tool tip and IntelliSense pop-ups contained placeholder names like p1, p2, etc., which provided no documentation value.

How it works: Java calling .NET

Here's how "proxy by name" works. Let's start in the Java-to-.NET direction. Assume we have a C# class:

public class DotNetClass
{
    public DotNetClass(string myStringParam, int myIntParam, string my2ndStringParam) { }

    public static void myMethod(float thisIsAFloatParam, long thisIsALongParam) { }
}

Let's proxy this in the usual way, then start creating a Java project that uses the proxy jar file. Note that the code completion pop-ups now show parameter names in the proxied methods and constructors. This wasn't available in 7.2. After the code is entered, the parameter names in the proxies are still available by hovering the cursor over the method name.

How it works: .NET calling Java

What about .NET-to-Java? Here, things are a little different.
Let's start with a Java class similar to the .NET class we've been using:

public class JavaClass
{
    public JavaClass(String myStringParam, int myIntParam, String my2ndStringParam) { }

    public static void myMethod(float thisIsAFloatParam, long thisIsALongParam) { }
}

You'll need to compile your code using Java 8 or later, and to target Java 8 binaries. You'll also need to make sure that you've told the compiler to save the parameter metadata. If you're using Eclipse, for example, your project should have these compilation settings. Alternatively, if you're using the command line, use the -parameters option:

javac -cp classpath_here -parameters classes_to_compile

Again, generate the proxies the usual way, and reference them in a Visual Studio project. When you enter the names of methods and constructors, or hover the mouse over completed code, IntelliSense and tool tips work just as expected, and include the names of the proxied parameters. We think you'll find this new feature useful, and expect it'll help speed up your development efforts. Let us know what you think!
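The Java 8 API behind this feature is java.lang.reflect.Parameter. A small sketch of what proxy-generation tooling can read back — with -parameters the real names appear; without it the JVM reports arg0, arg1 (this example is mine, not from the JNBridge post):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParamNames {
    public static void myMethod(float thisIsAFloatParam, long thisIsALongParam) { }

    // Collects the declared parameter names of myMethod via reflection.
    public static String[] namesOfMyMethod() throws NoSuchMethodException {
        Method m = ParamNames.class.getMethod("myMethod", float.class, long.class);
        Parameter[] params = m.getParameters();
        String[] names = new String[params.length];
        for (int i = 0; i < params.length; i++) {
            // isNamePresent() is true only when the class was compiled
            // with -parameters; getName() falls back to argN otherwise.
            names[i] = params[i].getName();
        }
        return names;
    }
}
```

This is why the Eclipse "store information about method parameters" setting and the javac -parameters flag are prerequisites: without them, the metadata the reflection call reads simply isn't in the class file.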
https://jnbridge.com/blog/proxy-by-name-a-new-feature-in-jnbridgepro-7-3
CC-MAIN-2018-34
refinedweb
473
63.39
From: Ackley, Paul
Sent: Monday, March 16, 2009 10:31 AM
To: Mailing list for the SAXON XSLT and XQuery processor
Cc: Ackley, Paul
Subject: RE: [saxon] ArrayIndexOutOfBoundsException in Saxon 8.6

Yes, I will try it when the debugging version is released. BTW - I would agree with your JVM bug theory. We've been using this for some time and have not had this problem before moving to BEA/Oracle's JRockit JVM running on Linux. I'm going to see if there have been any updates to the JVM for this. Thanks! Paul

I've put a diagnostic patch into Subversion to print out relevant data structures when this occurs, and this will be in the 9.1.0.6 build when I get that out. I've done some careful desk-checking of the code and (a) I can't find anything wrong, and (b) there aren't even any suspicious circumstances, like use of shared memory areas or the NamePool. I'm increasingly suspecting a JVM bug. I know that sounds like desperation, but they do happen. To give some background, Saxon is compiling a literal result element, and searches the tree representation of the stylesheet to find all in-scope namespaces, each of which is represented by an integer code. It puts these codes in a data structure called IntHashSet, and then later reads the values out into an array of integers. The size of the array is initialized to intHashSet.size(), and the codes are then extracted using an iterator. The iterator appears to return more codes than the size of the array.

Michael Kay

From: Michael Kay [mailto:mi...@saxonica.com]
Sent: 13 March 2009 00:13
To: 'Mailing list for the SAXON XSLT and XQuery processor'
Subject: Re: [saxon] ArrayIndexOutOfBoundsException in Saxon 8.6

OK, thanks for the update. I think I'm going to have to try and produce a diagnostic patch for you to install, if you would be willing. (As it's on an exception path, I may simply build this into 9.1.0.6, which I need to release soon anyway.)

Michael Kay
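The extraction pattern Kay describes can be sketched generically (this is an illustration, not Saxon's actual IntHashSet code): the array length comes from the set's size(), the contents from its iterator. The reported ArrayIndexOutOfBoundsException means the iterator yielded more values than size() promised — which, for a consistent set, should be impossible:

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class NamespaceCodes {
    // Mirrors the extraction described above: out[i++] throws
    // ArrayIndexOutOfBoundsException if the iterator ever outruns size().
    static int[] extract(Set<Integer> codes) {
        int[] out = new int[codes.size()];
        int i = 0;
        for (Iterator<Integer> it = codes.iterator(); it.hasNext(); ) {
            out[i++] = it.next();
        }
        return out;
    }
}
```

Since the two calls can only disagree if the set's internal state is corrupt, the suspicion of a JVM (or memory) problem rather than a logic bug is a reasonable reading of the evidence.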
https://sourceforge.net/p/saxon/mailman/attachment/93C2B4781B9DE040A07DC8385980FB130C87E96AFC@qtdenexmbm20.AD.QINTRA.COM/1/
Wifly TCPSocketServer problems - Help - please!

Asked 8 years, 2 months ago.

The code below is modified from the Echo Server example. The Wifly connects fine, but I cannot get the TCPSocketServer to work. If I try to Telnet to port 2000, I can get the command interface, but there is no response from port 12345. Also, when connecting with Telnet on port 2000, the code below gets executed ("Connection from..."). What am I missing here? I need a socket server on port 12345, not a client. Please help.

#include "mbed.h"
#include "WiflyInterface.h"

Serial pc(USBTX, USBRX);
WiflyInterface wifly(p28, p27, p26, p25, "SSID", "password", WPA); // m3pi

int main (void)
{
    wifly.init();             // use DHCP
    while (!wifly.connect()); // join the network

    TCPSocketServer server;
    server.bind(12345);
    server.listen(1);

    while (true) {
        printf("\nWait for new connection...\r\n");
        TCPSocketConnection client;
        server.accept(client);
        client.set_blocking(false, 10000);
        printf("Connection from: %s\n\r", client.get_address());

        char buffer[256];
        while (true) {
            int n = client.receive(buffer, sizeof(buffer));
            if (n <= 0) break;
            n = client.send_all(buffer, n);
            if (n <= 0) break;
        }
        client.close();
    }
}

1 Answer, 8 years, 2 months ago.

Hi Rolf,

It appears that on some wifly modules, the module has to be rebooted to change the local port (on which the module is listening). I updated the wiflyinterface library. You can even find a program example at the end of the WiflyInterface webpage using TCPSocketServer.

Cheers, Sam

Hi Sam;

Thanks for the quick reply. I actually updated the wiflyinterface lib this morning - wondering why I missed that yesterday... unfortunately my Wifly now does not connect any more (no other changes). Will go back to Wifly_configure and check... and let you know.

Thanks, Rolf (posted 20 Dec 2012)
https://os.mbed.com/questions/170/Wifly-TCPSocketServer-problems-Help-plea/
I've been searching this for a little while now and I'm absolutely unable to figure out what the problem is. This is the code I'm having the issue with:

std::vector<std::string> TexName;
TexName.push_back("mega.png");

and I get...

error: 'TexName' does not name a type

I've included both the vector and string headers, there are no name conflicts, and I've tried using namespace std; but it didn't work. While searching I've seen other people using similar code, with a vector of strings, and it seemed to work for them. Any help plz?

EDIT: I figured it out. I didn't have it in a function. Oops. Thanks for your time.
https://www.daniweb.com/programming/software-development/threads/361760/does-not-name-a-type
Chapter 2. Accessing Red Hat Gluster Storage using Amazon Web Services

Red Hat Gluster Storage for Public Cloud packages glusterFS as an Amazon Machine Image (AMI) for deploying scalable network attached storage (NAS) in the Amazon Web Services (AWS) public cloud. This powerful storage server provides a highly available, scalable, virtualized, and centrally managed pool of storage for Amazon users. Red Hat Gluster Storage for Public Cloud provides highly available storage within AWS. Synchronous n-way replication across AWS Availability Zones provides high availability within an AWS Region. Asynchronous geo-replication provides continuous data replication to ensure high availability across AWS regions. The glusterFS global namespace capability aggregates disk and memory resources into a unified storage volume that is abstracted from the physical hardware.

The following diagram illustrates Amazon Web Services integration with Red Hat Gluster Storage:

Figure 2.1. Amazon Web Services integration Architecture

Important: The following features of Red Hat Gluster Storage Server are not supported on Amazon Web Services:

- Red Hat Gluster Storage Console and Nagios Monitoring
- NFS and CIFS High Availability

Note: For information on obtaining access to AMI, see.

2.1. Launching Red Hat Gluster Storage Instances

This section describes how to launch Red Hat Gluster Storage instances on Amazon Web Services. The supported configuration for two-way and three-way replication is up to 24 Amazon EBS volumes of equal size.

Table 2.1. Supported Configuration on Amazon Web Services

- There is a limit on the total provisioned IOPS per volume, and the limit is 40,000. Hence, while adding 24 PIOPS SSD disks, you must ensure that the total IOPS of all disks does not exceed 40,000.
- Creation of Red Hat Gluster Storage volume snapshots is supported on magnetic, general purpose SSD and PIOPS EBS volumes. You can also browse the snapshot content using USS. See the chapter Managing Snapshots in the Red Hat Gluster Storage 3.1 Administration Guide for information on managing Red Hat Gluster Storage volume snapshots.
- The tiering feature of Red Hat Gluster Storage is supported in the Amazon Web Services environment. You can attach bricks created out of PIOPS or general purpose SSD volumes as a hot tier to an existing or new Red Hat Gluster Storage volume created out of magnetic EBS volumes. See the chapter Managing Tiering in the Red Hat Gluster Storage 3.1 Administration Guide for information on the creation of tiered volumes.

To launch the Red Hat Gluster Storage instance:

- Navigate to the Amazon Web Services home page at. The Amazon Web Services home page appears.
- Login to Amazon Web Services. The Amazon Web Services main screen is displayed.
- Click the Amazon EC2 tab. The Amazon EC2 Console Dashboard is displayed.
- Click Launch Instance. The Step 1: Choose an AMI screen is displayed.
- Click My AMIs and select the shared with me checkbox. Click Select for the corresponding AMI and click Next: Choose an Instance Type. The Step 2: Choose an Instance Type screen is displayed.
- Select Large as the instance type, and click Next: Configure Instance Details. The Step 3: Configure Instance Details screen is displayed.
- Specify the configuration for your instance or continue with the default settings, and click Next: Add Storage. The Step 4: Add Storage screen is displayed.
- In the Add Storage screen, specify the storage details and click Next: Tag Instance. The Step 5: Tag Instance screen is displayed.
- Enter a name for the instance in the Value field for Name, and click Next: Configure Security Group. You can use this name later to verify that the instance is operating correctly. The Step 6: Configure Security Group screen is displayed.
- Select an existing security group or create a new security group and click Review and Launch. You must ensure to open the following TCP port numbers in the selected security group: 22; and 6000, 6001, 6002, 443, and 8080 if Red Hat Gluster Storage for OpenStack Swift is enabled.
- Choose an existing key pair or create a new key pair, and click Launch Instance. The Launch Status screen is displayed, indicating that the instance is launching.
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/deployment_guide_for_public_cloud/chap-documentation-deployment_guide_for_public_cloud-aws
Loader to do Polymorphic IO.

#include <vsl/vsl_binary_io.h>
#include <vsl/vsl_binary_loader_base.h>
#include <vcl_vector.h>

Go to the source code of this file.

Loader to do Polymorphic IO. You should include this file if you want to do polymorphic IO (i.e. save a class by its base-class pointer). Definition in file vsl_binary_loader.h.

Loads an object and sets the base class pointer: determines which derived class the object on bfs belongs to, loads it, and sets b to be a pointer to it. (The class must be one given to the Loader by the append method.) If bfs indicates a NULL pointer, b will be set to NULL. If b is not initially NULL, *b will be deleted. Definition at line 95 of file vsl_binary_loader.h.

Binary file stream output operator for pointer to class. This works correctly even if b is a NULL pointer. Definition at line 106 of file vsl_binary_loader.txx.
http://public.kitware.com/vxl/doc/release/core/vsl/html/vsl__binary__loader_8h.html
From: Alexander Viro <viro@math.psu.edu> To: Linus Torvalds <torvalds@transmeta.com> Subject: [PATCH] one of $BIGNUM devfs races Date: Mon, 6 Aug 2001 07:58:37 -0400 (EDT) Cc: Richard Gooch <rgooch@ras.ucalgary.ca>, Alan Cox <alan@lxorguk.ukuu.org.uk>, linux-kernel@vger.kernel.org OK, folks - that's it. By all reasonable standards a year _is_ sufficient time to fix an obvious race. One in devfs/base.c::create_entry() had been described to Richard more than a year ago. While I respect the "I'll do it myself, don't spoil the fun" stance, it's clearly over the bleedin' top. Patch for that one is in the end of posting. Linus, see if it looks sane for you. Richard, _please_, stop adding features and spend some of the freed time fixing the long-standing security holes. E.g., readlink() on devfs is a big "boys, come and get some" sign on the kernel's arse. _Anyone_ can crash the box with devfs mounted on it as soon as rmmod removes a symlink. Yes, that one had been pointed out to you only a couple of months ago. It's less than a year, but could we please get it fixed in some reasonable time? By the end of December or something... When devfs went into the tree, the word was "at least it will make people look at the code". Well, it did. Veni, vidi, vomere. There are tons of bad races in devfs/base.c. Reported to you many times - just look through your mailbox. Richard, please, either fix the crap yourself or step down and admit that devfs is unmaintained. Saying that you'll fix it yourself is nice, but there's a point when it gets really old. And that point had been crossed _way_ back. 
--- S8-pre4/fs/devfs/base.c	Sun Jul 29 01:54:47 2001
+++ /tmp/base.c	Mon Aug 6 07:14:09 2001
@@ -789,47 +789,70 @@
 static struct devfs_entry *create_entry (struct devfs_entry *parent,
     const char *name, unsigned int namelen)
 {
-    struct devfs_entry *new, **table;
+    struct devfs_entry *new;
 
-    /* First ensure table size is enough */
-    if (fs_info.num_inodes >= fs_info.table_size)
-    {
-        if ( ( table = kmalloc (sizeof *table *
-                                (fs_info.table_size + INODE_TABLE_INC),
-                                GFP_KERNEL) ) == NULL ) return NULL;
-        fs_info.table_size += INODE_TABLE_INC;
+    if (name && namelen<1)
+        namelen = strlen (name);
+
+    new = kmalloc(sizeof(*new) + namelen, GFP_KERNEL);
+
+    if (!new)
+.nlink = 1;
+
+    /* Ensure table size is enough */
+    while (fs_info.num_inodes >= fs_info.table_size) {
+        unsigned new_size = fs_info.table_size + INODE_TABLE_INC;
+        struct devfs_entry **table;
+
+        table = kmalloc(sizeof(*table) * new_size, GFP_KERNEL);
+
+        if (new_size <= fs_info.table_size) {
+            kfree(table);
+            continue;
+        }
+        if (!table) {
+            kfree(new);
+            return NULL;
+        }
+        fs_info.table_size = new_size;
+        if (!fs_info.table) {
+            fs_info.table = table;
+            break;
+        }
+        memcpy(table, fs_info.table, sizeof(*table)*fs_info.num_inodes);
+        kfree (fs_info.table);
+        fs_info.table = table;
 #ifdef CONFIG_DEVFS_DEBUG
-        if (devfs_debug & DEBUG_I_CREATE)
-            printk ("%s: create_entry(): grew inode table to: %u entries\n",
-                DEVFS_NAME, fs_info.table_size);
+        if (devfs_debug & DEBUG_I_CREATE)
+            printk("%s: create_entry(): grew inode table to:"
+                "%u entries\n", DEVFS_NAME, new_size);
 #endif
-        if (fs_info.table)
-        {
-            memcpy (table, fs_info.table, sizeof *table *fs_info.num_inodes);
-            kfree (fs_info.table); }
-        fs_info.table = table;
-    }
-    if ( name && (namelen < 1) ) namelen = strlen (name);
-    if ( ( new = kmalloc (sizeof *new + namelen, GFP_KERNEL) ) == NULL )
-.ino = fs_info.num_inodes + FIRST_INODE;
-    new->inode.nlink = 1;
-    fs_info.table[fs_info.num_inodes] = new;
-    ++fs_info.num_inodes;
-    if (parent == NULL) return new;
-    new->prev = parent->u.dir.last;
-    /* Insert into the parent directory's list of children */
-    if (parent->u.dir.first == NULL) parent->u.dir.first = new;
-    else parent->u.dir.last->next = new;
-    parent->u.dir.last = new;
-    return new;
+
+    new->inode.ino = fs_info.num_inodes + FIRST_INODE;
+    fs_info.table[fs_info.num_inodes++] = new;
+
+    if (parent) {
+        new->prev = parent->u.dir.last;
+        /* Insert into the parent directory's list of children */
+        if (parent->u.dir.first)
+            parent->u.dir.last->next = new;
+        else
+            parent->u.dir.first = new;
+        parent->u.dir.last = new;
+    }
+    return new;
 }   /*  End Function create_entry  */
 
 static void update_devfs_inode_from_entry (struct devfs_entry *de)
http://lwn.net/2001/0809/a/devfs-race-fix.php3
Introduction to Pandas hist()

The Pandas hist() function is used to draw histograms in Python using the pandas library. A histogram is a representation of the distribution of data. The function calls matplotlib.pyplot.hist() on every series in the DataFrame, producing one histogram per column. While exploring a dataset, you will frequently want a quick understanding of the distribution of certain numerical variables within it. A common way of visualizing the distribution of a single numerical variable is a histogram: it divides the values of the variable into "bins" and counts the number of observations that fall into each bin. By visualizing these binned counts in a columnar manner, we get an immediate and intuitive sense of the distribution of values within the variable. Because the plotting is delegated to matplotlib, make sure matplotlib is available. Once your data is wrangled, you are ready to move over to a Python notebook, import the modules you will use, and prepare your data for visualization.

Syntax and Parameters

DataFrame.hist(column=None, by=None, grid=True, xlabelsize=None, xrot=None, ylabelsize=None, yrot=None, ax=None, sharex=False, sharey=False, figsize=None, layout=None, bins=10, **kwargs)

Where:

- bins: the number of histogram bins to use; 10 by default. If an integer is given, bins + 1 bin edges are calculated and returned. If bins is a sequence, it gives the bin edges, including the left edge of the first bin and the right edge of the last bin; in this case, bins is returned unmodified.
- layout: a tuple giving the number of rows and columns for the layout of the histograms.
- figsize: the size of the plotted figure, always given as a tuple.
- sharex: in case subplots are drawn, share the x axis and set some x axis labels to invisible; defaults to True if ax is None, otherwise False if an ax is passed in. Note that passing in both an ax and sharex=True will alter all x axis labels for all subplots in the figure.
- sharey: the same as sharex, but with respect to the y axis; False by default.
- xlabelsize, ylabelsize: the size of the x and y axis labels.
- xrot, yrot: the rotation of the x and y axis labels.
- ax: the matplotlib axes on which to draw the histogram.
- grid: whether to show axis grid lines; a boolean, True by default.
- column: a column name or list of column names limiting the plot to the given columns of the DataFrame.
- by: an object used to form groups, producing a separate histogram per group.
- **kwargs: all other matplotlib keyword arguments, passed through to matplotlib.pyplot.hist().

The method returns the matplotlib axes the histograms were drawn on.

How the dataframe.hist() function works in Pandas

Now let's see a simple example of how the hist() function works.

Example: defining length and width columns in a DataFrame and plotting them with the hist() function.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.DataFrame(
    {'length': [4.2, 4.8, 3.2, 4.9, 6],
     'width': [2.8, 5.3, 0.80, 3.4, 5.2]},
    index=['index1', 'index2', 'index3', 'index4', 'index5'])

hist = df.hist(bins=4)
print(hist)

Output: In the above program, we first import the pandas and NumPy libraries. Since we need to draw histograms, we also import matplotlib. We then define the DataFrame, fill in the length and width values, and set the indices. Once the DataFrame is in place, we call the hist() function to draw the histograms, and the output is a grid of plots in which the binned length and width values are shown.

Conclusion

Finally, I conclude by saying that the pandas hist() method also lets you create separate subplots for different groups of data by passing a column to the by parameter. For example, you can create separate histograms for different customer types by passing the user_type column to the by parameter of the hist() method. Calling the dtypes attribute of a DataFrame returns information about the data types of the individual variables within the DataFrame. In our example, pandas correctly inferred the data types of most variables, but may leave a few as the object data type.
https://www.educba.com/pandas-hist/?source=leftnav
Continuing the research and development for my cloud-based round-trip 2D Revit model editing project, I explained how I use the ExtrusionAnalyzer to create a plan view boundary profile for the furniture and equipment family instances, sort and orient its output curves, and determine their bounding box for visualisation. Today let's look at a simple loop visualisation implementation in a dynamically generated .NET form. The components of interest are:

- Point2dInt – an integer-based 2D point
- JtLoop – a closed polygon boundary loop
- JtLoops – a list of boundary loops
- JtBoundingBox2dInt – a bounding box for 2D integer points
- GeoSnoop – display a collection of curves in a .NET form
- Validation of results – reality check
- Next steps – where to go from here
- Download – try this yourself at home

Point2dInt – an Integer-based 2D Point

I introduced this class in the initial discussion on retrieving plan view room boundary polygon loops. The main idea is to have a robust lightweight data container for passing 2D point information back and forth between my Revit add-in, the cloud and mobile devices. The later development motivated the addition of a couple of convenience methods since the first publication:

  public Point2dInt( int x, int y )
  {
    X = x;
    Y = y;
  }

  /// ); }

  /// <summary>
  /// Comparison with another point, important
  /// for dictionary lookup support.
  /// </summary>
  public int CompareTo( Point2dInt a )
  {
    int d = X - a.X;
    if( 0 == d ) { d = Y - a.Y; }
    return d;
  }

  /// <summary>
  /// Display as a string.
  /// </summary>
  public override string ToString()
  {
    return string.Format( "({0},{1})", X, Y );
  }

  /// <summary>
  /// Add two points, i.e. treat one of
  /// them as a translation vector.
  /// </summary>
  public static Point2dInt operator+( Point2dInt a, Point2dInt b )
  {
    return new Point2dInt( a.X + b.X, a.Y + b.Y );
  }
}

JtLoop – a Closed Polygon Boundary Loop

This class consists of a simple list of 2D integer points representing a closed boundary loop.
When a new point is added to the collection, it is compared to the last and ignored if they evaluate equal. This automatically suppresses too small boundary segment fragments.

/// <summary>
/// A closed polygon boundary loop.
/// </summary>
class JtLoop : List<Point2dInt>
{
  public JtLoop( int capacity ) : base( capacity )
  {
  }

  /// <summary>
  /// Display as a string.
  /// </summary>
  public override string ToString()
  {
    return string.Join( ", ", this );
  }

  /// ); } } }

JtLoops – a List of Boundary Loops

Each room produces a collection of loops, since it may include holes. For the furniture and equipment, I am expecting to manage just one external boundary contour loop each. On the other hand, for the furniture, this class enables me to easily collect all the individual furniture loops together into one single object. The addition operator + is used to unite the room and furniture loops into a single container to pass to the visualisation method. The conversion to a list of Point instances is used to feed the System.Drawing.Drawing2D.GraphicsPath class AddLines method to display the loops in a form.

/// <summary>
/// A list of boundary loops.
/// </summary>
class JtLoops : List<JtLoop>
{
  public JtLoops( int capacity ) : base( capacity )
  {
  }

  /// <summary>
  /// Unite two collections of boundary
  /// loops into one single one.
  /// </summary>
  public static JtLoops operator+( JtLoops a, JtLoops b )
  {
    int na = a.Count;
    int nb = b.Count;
    JtLoops sum = new JtLoops( na + nb );
    sum.AddRange( a );
    sum.AddRange( b );
    return sum;
  }

  /// <summary>
  /// Return suitable input for the .NET
  /// GraphicsPath.AddLines method to display the
  /// loops in a form. Note that a closing segment
  /// to connect the last point back to the first
  /// is added.
  /// </summary>
  public List<Point[]> GetGraphicsPathLines()
  {
    int i, n;
    List<Point[]> loops = new List<Point[]>( Count );
    foreach( JtLoop jloop in this )
    {
      n = jloop.Count;
      Point[] loop = new Point[n + 1];
      i = 0;
      foreach( Point2dInt p in jloop )
      {
        loop[i++] = new Point( p.X, p.Y );
      }
      loop[i] = loop[0];
      loops.Add( loop );
    }
    return loops;
  }
}

JtBoundingBox2dInt – a Bounding Box for 2D Integer Points

I discussed the 2D integer-based bounding box implementation last week. As you can see there, it already includes a handy constructor taking a collection of loops to return their entire bounding box. I now added properties to return the aspect ratio and a System.Drawing.Rectangle to easily define the visualisation target rectangle and coordinate system transformation:

  /// >
  /// Return current width.
  /// </summary>
  public int Width
  {
    get { return xmax - xmin; }
  }

  /// <summary>
  /// Return current height.
  /// </summary>
  public int Height
  {
    get { return ymax - ymin; }
  }

  /// <summary>
  /// Return aspect ratio, i.e. Height/Width.
  /// </summary>
  public double AspectRatio
  {
    get { return (double) Height / (double) Width; }
  }

  /// <summary>
  /// Return a System.Drawing.Rectangle for this.
  /// </summary>
  public Rectangle Rectangle
  {
    get { return new Rectangle( xmin, ymin, Width, Height ); }
  }

  /// ); } } } }

GeoSnoop – Display a Collection of Curves in a .NET Form

Now comes the exciting part: extracting the loop information from my own data structures, setting up an appropriate .NET form and infrastructure, and passing the information across with a minimum of fuss. I had some fiddling to do to set this up optimally, I can tell you. I am very satisfied with the end result, though:

  /// ); graphics.Clear( System.Drawing.Color loops ) { JtBoundingBox2dInt bb = new JtBoundingBox2dInt( loops ); // gr = Graphics.FromImage( bmp ); DrawLoopsOnGraphics( gr, loops.GetGraphicsPathLines(), transform ); ); } }

I bet you expected more than this, didn't you? To quote).
Validation of Results

Actually, this is the really exciting part. I mentioned that I was worried for a moment about the large number of loop vertices in the plan view of the desk. I was initially hoping for only four vertices, to represent a simple rectangle. After all, the plan view of a desk and chair looks like this in Revit:

In my visualisation, the same desk and chair loops are displayed like this instead:

The good news is:

- We have indeed produced closed loops.
- Their shape and location is correct.

Where do all those bumps come from, though? The answer is easy and completely reassuring: the bumps are the desk drawer handles that stick out a little bit beyond the desktop surface. Looking at a 3D view in Revit from the top, the desk looks like this:

My results reproduce this exactly.

Looking at the chairs, I mentioned that some of the chair solids cause extrusion analyser failures, and I skip those. To be precise, I have two failures on each chair. Comparing the chair 3D view from the top in Revit with my results shows that the armrests are the components causing trouble:

The rest matches up perfectly, once again validating my approach. The bumps on the sides of the chairs are the armrest supports.

I also cleaned up the form generation as much as possible. Resizing, zooming and panning are not supported. The form aspect ratio is adjusted up front to adapt to the loops to display:

Once again, here is the same view in Revit:

Next Steps

Actually, the next steps are the really, really exciting part. Now I can turn to the implementation of my data repository and the task of hosting it in the cloud. I already discussed my tentative plans and high hopes for this. Let's see if I can live up to them. Adventure! Who knows what will come, and where this will lead?

Download

To wrap this up for the moment, here is GeoSnoopLoops.zip containing the complete source code, Visual Studio solution and add-in manifest of the current state of this external command.
https://thebuildingcoder.typepad.com/blog/2013/04/geosnoop-net-boundary-curve-loop-visualisation.html
It might make a bit more sense if you talk a bit about your setup/toolchain (cross??) and version of libraries used....

-----Original Message-----
From: linux-mips-bounce@linux-mips.org [mailto:linux-mips-bounce@linux-mips.org] On Behalf Of akshay
Sent: Thursday, August 05, 2004 9:13 PM
To: linux-mips@linux-mips.org
Subject: pthread uClibc

Hi,

I am trying to use pthreads on a MIPS based platform. I have a simple program to just create pthreads, and when I run my program, it goes into an infinite loop and never comes back. Though when I hit enter on the console, I see the following messages on the console:

pt: assertion failed in manager.c:154.
pt: assertion failed in manager.c:193.

Can someone plz help me here. Here is the code for my program.

==============================================================
#include <stdio.h>
#include <pthread.h>

void print_message_function( void *ptr );

pthread_t thread1;
char *message1 = "Thread 1";

main()
{
    int iret1, iret2;

    /* Create independant threads each of which will execute function */
    iret1 = pthread_create( &thread1, NULL, (void*)&print_message_function, (void*) message1);
    printf("threads created ....\n");

    /* Wait till threads are complete before main continues. Unless we */
    /* wait we run the risk of executing an exit which will terminate  */
    /* the process and all threads before the threads have completed.  */
    pthread_join( thread1, NULL);
    printf("Thread 1 returns: %d\n", iret1);
    exit(0);
}

void print_message_function( void *ptr )
{
    char *message;
    message = (char *) ptr;
    printf("%s \n", message);
}

Thanks,
Akshay
http://www.linux-mips.org/archives/linux-mips/2004-08/msg00035.html
Making accessible React Native apps

You will need React Native and Yarn installed on your machine. Some familiarity with React Native will be helpful.

In this tutorial, you're going to learn how to make React Native apps more accessible. Specifically, we're going to cover the following:

- What is accessibility?
- Designing apps with accessibility in mind
- Accessibility in React Native apps
- Accessibility testing tools

Of course, we cannot hope to cover everything about accessibility. It's a pretty big subject and it's a continuous journey. There's always something that you can improve in order to make the experience just a little bit more pleasant for a certain user. Instead, what we hope to achieve in this tutorial is to take that first step into making more accessible apps.

You can view the code used in this tutorial on its GitHub repo. The starter branch contains the not so accessible version of the app, while the a11y branch contains the more accessible version.

Prerequisites

To follow this tutorial, you need to know the basics of creating a React Native app. The React Native development environment should also be set up on your machine. We will be using React Native version 0.56 in this tutorial. We'll also be using Yarn to install packages.

What is accessibility?

Before we proceed, it's important that we all agree on what accessibility is, in the context of a mobile app. Accessibility, or a11y, means making your apps usable to both normal users and users with disabilities. Any person can have one or more forms of disability. That usually includes, but is not limited to, the following:

- Visual impairments - examples include low vision, color-blindness, and total blindness.
- Physical or motor disabilities - cerebral palsy, bone and joint deformities.
- Mental disorders - autism spectrum disorders such as Asperger's syndrome and autistic disorder.
- Hearing impairment - deafness and partial hearing loss.
- Reading disabilities - dyslexia.
Accessibility means designing your apps in such a way that they take all of these disabilities into consideration in order to make the user experience pleasant for everyone.

What you'll be building

We won't actually be building anything from scratch. Instead, we're going to make a pre-built app more accessible. Here's what the starter app looks like:

This won't be how the final output will look, because we'll also be taking design into consideration (though only a little, because I'm not really a designer). If you want to follow along, clone the repo, switch to the starter branch and install the dependencies:

git clone
cd RNa11y
git checkout starter
yarn install
react-native upgrade
react-native link
react-native run-android
react-native run-ios

Designing apps with accessibility in mind

In this section, we'll redesign the app so that it becomes more accessible. We will be using the dos and don'ts on designing for accessibility from the GOV.UK website as a guide. Specifically, we're going to adopt the following dos from their guide:

- Use simple colors
- Make buttons descriptive
- Build simple and consistent layouts
- Follow a linear, logical layout
- Write descriptive links and headings
- Use good contrasts and a readable font size
- Use a combination of color, shapes, and text
- Make large clickable actions

Right off the bat, you can see that the starter app violates some of these rules. The app is already following a few, but we can still improve on it.

Use simple colors

The starter app violates this rule because it's using a dark color for its background. It's not really easy on the eyes, so we need to update the app and card background:

// file: App.js
const styles = {
  container: {
    flex: 10,
    backgroundColor: "#FFF" // update this
  }
};

// src/components/Card.js
const styles = StyleSheet.create({
  card: {
    width: 120,
    height: 140,
    backgroundColor: "#3e3e3e", // update this
  }
});

Also, update the Header component to match.
This is because the items in the status bar aren't really very readable when using a dark background:

// src/components/Header.js
const styles = StyleSheet.create({
  header: {
    paddingTop: 10,
    backgroundColor: "#ccc" // update this
  },
  header_text: {
    fontWeight: "bold",
    color: "#333", // update this
  }
});

Once that's done, the content should now be more readable.

Make large clickable actions

Next, we need to make the buttons larger. This change is specifically useful for people with physical and motor disabilities, as they're often the ones who have difficulty in pressing small buttons. If you inspect the app right now, you'll see that there's not much space we can work with. So even if we make the buttons larger, it will still be difficult to target a specific one because there won't be ample whitespace between them. Though we still have some free space between each card, so we'll make use of that instead.

In your Card component, include the Dimensions module so that we can get the device's width. We'll use it to determine how much width each card can use. In this case, we have two cards in each row, so we'll just divide it by two and add a padding. We're also making the height bigger because we're anticipating the buttons to become bigger:

// src/components/Card.js
import {
  View,
  Text,
  Image,
  StyleSheet,
  Dimensions // add Dimensions
} from "react-native";

const { width } = Dimensions.get("window");
const cardPadding = 20;

const styles = StyleSheet.create({
  card: {
    width: (width / 2) - cardPadding, // update this
    height: 150, // update this
  }
});

Next, we can now proceed with updating the size and padding of the button:

// src/components/IconButton.js
const icon_color = "#586069";
const icon_size = 25; // update this

const styles = StyleSheet.create({
  icon: {
    // update these:
    paddingLeft: 10,
    paddingRight: 10
  }
});

At this point, each button should be huge and visible enough to click on.
### Make buttons descriptive

Unfortunately, this isn't something that can be implemented all the time, because of design constraints. If you check the app now, you'll see that there's not enough space to accommodate labels for each button. There is a solution, but we would end up giving up the current layout (two cards per row) for a one-card-per-row layout. So the only feasible solution is to have a walkthrough for new users. This way, you can teach them what each button is used for. I won't be covering how to do that here, but there's a good component which allows you to implement it easily.

### Use good contrasts and a readable font size

In my opinion, the app already has pretty good contrast. But to be on the safe side, we'll tweak it some more. First, we have to differentiate between each individual card and the app's background. We can do that by applying a darker background color:

```javascript
// src/components/Card.js
const cardPadding = 20;

const styles = StyleSheet.create({
  card: {
    width: width / 2 - cardPadding,
    height: 150,
    backgroundColor: "#e0e0e0", // update this
  }
});
```

Next, we need to differentiate between the card's body and its contents:

```javascript
// src/components/Card.js
const styles = StyleSheet.create({
  name: {
    fontSize: 16,
    color: "#3a3f46", // update this
  }
});
```

```javascript
// src/components/IconButton.js
const icon_color = "#3a3f46"; // update this
const icon_size = 25;
```

Lastly, we need to make the text larger. While there's no general agreement on what font size optimizes accessibility, a few people seem to swear by 16px, so we're going with that as well:

```javascript
const styles = StyleSheet.create({
  name: {
    fontSize: 16, // update this
  }
});
```

We've skipped the following because we're already following them:

- Write descriptive links and headings
- Follow a linear, logical layout
- Use a combination of color, shapes, and text
- Build simple and consistent layouts

Once that's done, the app's design should be pretty accessible.
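Since "good contrast" has a precise definition in WCAG 2.0, here's a small standalone sketch (plain JavaScript, not part of the tutorial app) that computes the WCAG contrast ratio between two 6-digit hex colors, so you can sanity-check pairs like `#3a3f46` on `#e0e0e0` yourself:

```javascript
// Hedged sketch: WCAG 2.0 relative luminance and contrast ratio.
// Expects 6-digit hex colors; not part of the tutorial's codebase.
function luminance(hex) {
  const n = parseInt(hex.replace("#", ""), 16);
  const [r, g, b] = [(n >> 16) & 255, (n >> 8) & 255, n & 255].map(v => {
    const c = v / 255;
    // sRGB channel linearization from the WCAG definition
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio("#000000", "#ffffff")); // ≈ 21, the maximum possible
console.log(contrastRatio("#3a3f46", "#e0e0e0") >= 4.5); // true -- passes the AA threshold for body text
```

WCAG AA asks for at least 4.5:1 for normal text, so the color pair chosen above has a comfortable margin.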
## Accessibility in React Native apps

The previous section dealt mainly with the visual component of accessibility. In this section, we'll look at how to make the app more accessible for people who use screen readers. For those unfamiliar, a screen reader reads out to users whatever they're currently touching on the screen. This technology is mainly used by blind or visually impaired people. When a screen reader is enabled, the user has to double-tap in order to activate the intended action.

In order for a screen reader to be useful, we need to properly label all the relevant components that a user will most likely interact with. In React Native, this is done by adding accessibility props. Here's an example of how we can add these props:

```javascript
// src/components/Header.js
const Header = ({ title }) => {
  return (
    <View
      style={styles.header}
      accessible={true}
      accessibilityLabel={"Main app header"}
      accessibilityRole={"header"}
    >
      <Text style={styles.header_text}>{title}</Text>
    </View>
  );
};
```

Let's go through each of the accessibility props we've added to the Header component:

- `accessible` - accepts a boolean value that marks whether a specific component is an accessible element or not. This means the screen reader will read whatever label you put on it. Be careful with using this, though, as it makes all of the component's children inaccessible. In the `Header` component above, this makes the `Text` component inside the `View` inaccessible, so the screen reader won't actually read the title indicated in the header; it will only read the `accessibilityLabel` you've passed to the `View` instead. It's good practice to only set the `accessible` prop to `true` if you know that the component doesn't have any child that's supposed to be treated as an accessible element.
- `accessibilityLabel` - the text you want the screen reader to read when the user touches the component. A good practice when using this prop is to be as descriptive as possible.
Remember that the user will only rely on what's being read by the screen reader. They have no idea of the context a specific component is in, so it's always useful to repeat it in your labels. For example, each of the buttons in each card should still mention the name of the Pokemon.

- `accessibilityRole` - the general role of the component in this app. Examples include: `button`, `link`, `image`, `text`, and in this case `header`. Note that `header` doesn't only indicate the app's main header; it can also indicate a section header or a list header.

The next component we'll update is the IconButton, because it's important that the user knows that those buttons we've added are actually buttons:

```javascript
// src/components/IconButton.js
const IconButton = ({ icon, onPress, data, label }) => {
  return (
    <TouchableOpacity
      accessible={true}
      accessibilityLabel={label}
      accessibilityTraits={"button"}
      accessibilityComponentType={"button"}
      onPress={() => {
        onPress(data.name);
      }}
    >
      <Icon name={icon} style={styles.icon} size={icon_size} color={icon_color} />
    </TouchableOpacity>
  );
};
```

From the code above, you can see that we're accepting a new `label` prop, which we then use as the value for the `accessibilityLabel`. We've also set the component to be `accessible`, which means that when the user's finger goes over it, the screen reader will read out the `accessibilityLabel`.

But what about `accessibilityTraits` and `accessibilityComponentType`? They are the old way of setting the `accessibilityRole`: `accessibilityTraits` is iOS-only and `accessibilityComponentType` is Android-only. As [mentioned in the docs](), these props will be deprecated soon. We're only using them because `TouchableOpacity` doesn't seem to accept `accessibilityRole`; the trait (button) wouldn't show up when I tested with the Accessibility Inspector. We'll go over this tool in the next section.

Lastly, we update the `Card` component so it passes the correct labels to each of the IconButtons.
We're also making the Pokemon `Image` and `Text` accessible:

```javascript
// src/components/Card.js
const Card = ({ item, viewAction, bookmarkAction, shareAction }) => {
  return (
    <View style={styles.card}>
      <Image
        source={item.pic}
        style={styles.thumbnail}
        accessible={true}
        accessibilityRole={"image"}
        accessibilityLabel={`${item.name} image`}
      />
      <Text style={styles.name} accessibilityRole={"text"}>
        {item.name}
      </Text>
      <View style={styles.icons}>
        <IconButton icon="search" onPress={viewAction} data={item} label={`View Pokemon ${item.name}`} />
        <IconButton icon="bookmark" onPress={bookmarkAction} data={item} label={`Bookmark Pokemon ${item.name}`} />
        <IconButton icon="share" onPress={shareAction} data={item} label={`Share Pokemon ${item.name}`} />
      </View>
    </View>
  );
};
```

In case you're wondering why we didn't add the `accessible` and `accessibilityLabel` props to the Pokemon label, it's because the `Text` component is [accessible by default](). This also means that the screen reader automatically reads the text inside this component.

## Accessibility testing tools

In this section, we'll take a look at four tools you can use to test the accessibility of your React Native app.

### Testing accessibility while developing the app

In iOS, you can use the Accessibility Inspector tool in Xcode. Because it's in Xcode, you have to run the app from Xcode. You can do that by opening the `RNa11y.xcodeproj` or `RNa11y.xcworkspace` file inside your project's `ios` directory, then running the app using the big play button located on the upper left side of the screen. Once the app is running, you can open the Accessibility Inspector tool by going to **Xcode** → **Open Developer Tool** → **Accessibility Inspector**. From there, you can select the running iOS simulator instance:

Once you've selected the simulator, click on the target icon right beside the drop-down. This activates the inspection mode.
You can then hover over the components we updated earlier and verify whether the inspector is reading the labels correctly:

For Android testing, you can use the [Accessibility Scanner]() app. Unlike the Accessibility Inspector in iOS, you have to install it on your emulator or device in order to use it. Once installed, go to **Settings** → **Accessibility** → **Accessibility Scanner** and enable it. Once it's enabled, switch to the app that we're working on and click the floating blue button. This will scan the app for any accessibility issues. Once it's done scanning, you can click on any of the indicated areas to view the suggestion:

The easiest way to solve this issue is by making the card's background color lighter. You can also try increasing the contrast of the image, as suggested. Interestingly, if you remove the accessibility props from the image and scan again, you'll see that it no longer complains about the contrast:

```javascript
// src/components/Card.js
const Card = ({ item, viewAction, bookmarkAction, shareAction }) => {
  return (
    <View style={styles.card}>
      <Image source={item.pic} style={styles.thumbnail} />
      ...
    </View>
  );
};
```

This suggests that the scanner only gets picky when you've marked a component as accessible. To test this assumption, try removing the accessibility props from the IconButton:

```javascript
// src/components/IconButton.js
const IconButton = ({ icon, onPress, data, label }) => {
  return (
    <TouchableOpacity
      onPress={() => {
        onPress(data.name);
      }}
    >
      ...
    </TouchableOpacity>
  );
};
```

If you run the scanner again, you'll see that it actually picks up on the issue:

### Manual accessibility testing

As with anything, it's always important to test things manually so you know the actual experience your users are getting. After all, accessibility is all about improving the user experience your users get when using the app.

#### Testing in iOS

To test things manually in iOS, open Xcode and run the app on your iOS device.
You can also do this from the simulator, but that kind of defeats the purpose of manual testing. You won't really have an accurate "feel" of the experience if you're just testing from a screen.

Once the app is running on your device, go to **Settings** → **Accessibility** → **VoiceOver**. From there, you can select the **Speech** menu to change the voice (I personally prefer Siri Female). You can also adjust the speaking rate; a little past the mid-point should be fast enough for most people. Once you're done adjusting the settings, enable the **VoiceOver** setting, then switch to the app. From there, you can tap on each of the accessibility areas that we've set to verify that it's being read correctly.

#### Testing in Android

To test in Android, run the app on your Android device. Once the app is running, go to **Settings** → **Language** and set it to your preferred language. Next, go to **Accessibility** → **Text-to-speech** options and make sure the **Default language status** is fully supported. If not, you have to go to the language settings again and select a supported language. The equivalent of VoiceOver in Android is TalkBack; you can enable it by going to **Accessibility** → **TalkBack** and enabling the setting. Once enabled, switch to the app and verify that the labels are read correctly as you tap.

## Further reading

Here are some resources to learn more about accessibility:

- [Accessibility by Rob Dodson]()
- [React Native Accessibility: Creating Inclusive Apps in React Native]()
- [React Native: Accessibility]()
- [Accessibility Testing on Android]()
- [iOS Accessibility Tutorial: Getting Started]()

## Conclusion

That's it! In this tutorial, you've learned how to make React Native apps more accessible to people with disabilities. I hope you'll use the knowledge you've gained to make accessibility a part of your development workflow, because all of your users deserve an equal ease of use.
You can view the code used in this tutorial on its [GitHub repo]().

August 13, 2018 by Wern Ancheta
https://pusher.com/tutorials/accessible-react-native
The net package

When you want to deal with TCP and UDP sockets directly, the net package is here for you.

TCP

TCP guarantees that packets arrive eventually, and that they arrive in the order in which they were sent. Usually, on the server side, sockets are bound to a port, and then listen. When clients attempt to connect, the server accepts connections (and can later close them if it so wishes). Accepting a connection via a server socket gives a TCPSocket - so, after a client has connected, the client and the server use the same data structure to communicate.

A Socket, like a TCPSocket, has a reader/writer pair, since sockets are bidirectional communication channels. This means you can write data to the writer, and read data from the reader. For more info on readers and writers, go ahead and read (heh) the documentation on the io package.

ServerSocket

Here's an example usage of ServerSocket serving as a makeshift HTTP server (don't do that, though):

    import net/[ServerSocket]

    socket := ServerSocket new("0.0.0.0", 8000)
    socket listen()
    "Listening..." println()

    while (true) {
        conn := socket accept()
        "Got a connection!" println()

        while (conn in readLine() trim() != "") {
            // read the request
        }

        conn out write("HTTP/1.1 200 OK\r\n")
        conn out write("Content-Type: text/html\r\n")
        conn out write("\r\n")
        conn out write("<html><body>Hello, from the ooc socket world!</body></html>")
        conn out write("\r\n")
        conn close()
    }

Don't forget to call listen() before trying to accept() connections.

TCPSocket

Same as the ServerSocket, but on the client side. Make requests like this (or don't - use a proper HTTP library):

    import net/[TCPSocket]

    socket := TCPSocket new("ooc-lang.org", 80)
    socket connect()

    socket out write("GET / HTTP/1.1\n")
    socket out write("Host: ooc-lang.org\n")
    socket out write("User-Agent: An anonymous admirer\n")
    socket out write("\n\n")

    line := socket in readLine()
    "We got a response! %s" printfln(line)

Seriously. Use a proper HTTP library. But that's an example.
Also, don't forget to call connect() before attempting to use out or in.

UDP

Unlike TCP, UDP is unidirectional - some sockets bind and only get to receive, and some sockets don't bind and can only send. There's also no guarantee that anything sent over UDP ever arrives, and order is not guaranteed either.

UDPSocket

When you create a UDPSocket, always specify a hostname (or an IP) and a port, like this:

    socket := UDPSocket new("localhost", 5000)

If you want to receive datagrams, call bind():

    socket bind()

    while (true) {
        buffer := socket receive(128)
        buffer toString() println()
    }

If you want to send datagrams, just call send():

    socket send("udp is fun")

That's about it for now.
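For readers coming from other languages, the same bind-to-receive / send-without-bind pattern can be sketched with Python's stdlib socket module (an illustration for comparison only; this is not part of the ooc SDK):

```python
# Hedged sketch: the UDP pattern from the ooc examples above, in Python.
# The receiver binds first; the sender just sends, without binding.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # bind() first if you want to receive
port = receiver.getsockname()[1]     # port 0 lets the OS pick a free one

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"udp is fun", ("127.0.0.1", port))  # no bind needed to send

data, addr = receiver.recvfrom(128)  # like `socket receive(128)` in ooc
print(data.decode())                 # udp is fun

sender.close()
receiver.close()
```

On the loopback interface this is reliable in practice, but as the text notes, UDP in general guarantees neither delivery nor ordering.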
https://ooc-lang.org/docs/sdk/net/
I'm not sure how to check for any kind of permission, nor how to do it from a view. I have overridden User.cshtml in my module, and in there I want to do a check to only display the Dashboard link if the user has some kind of permission that should let them enter the dashboard (e.g., editor, admin, super user, etc.). Right now I just have it checking if the username is "admin", but that obviously won't work for lesser roles who still need to get into the dashboard.

    <span class="user-actions">
      @if (WorkContext.CurrentUser.UserName.Equals("admin", StringComparison.OrdinalIgnoreCase)) {
        @Html.ActionLink(T("Dashboard").ToString(), "Index", new { Area = "Dashboard", Controller = "Admin" })
      }
      @Html.ActionLink(T("Sign Out").ToString(), "LogOff", new { Controller = "Account", Area = "Orchard.Users", ReturnUrl = Context.Request.RawUrl })
    </span>

@Authorizer.Authorize(...)

Thanks. Is there a simple way to check for permission to the admin panel, or can I simply check whether the user is a member of a role like admin or editor? That'd be easier than hunting down all the various Permissions across all modules' namespaces.

StandardPermissions.AccessAdminPanel

This also bugged me, so I opened an issue and created a patch :-) (not yet evaluated, though).

randompete wrote: StandardPermissions.AccessAdminPanel

Thanks
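Combining the two answers from the thread, a hedged sketch of what the permission check in User.cshtml might look like (assuming `Authorizer` is available in the view, and that `StandardPermissions` lives in `Orchard.Security` - the exact namespace is an assumption):

```
@* Sketch only: replace the hard-coded username check with the
   "access admin panel" permission suggested in the thread. *@
<span class="user-actions">
  @if (Authorizer.Authorize(Orchard.Security.StandardPermissions.AccessAdminPanel)) {
    @Html.ActionLink(T("Dashboard").ToString(), "Index", new { Area = "Dashboard", Controller = "Admin" })
  }
  @Html.ActionLink(T("Sign Out").ToString(), "LogOff", new { Controller = "Account", Area = "Orchard.Users", ReturnUrl = Context.Request.RawUrl })
</span>
```

This checks the actual admin-panel permission rather than a role name, so editors and other roles granted that permission also see the link.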
https://orchard.codeplex.com/discussions/286072
Is there a way to do 301 redirects?

I am using the Odoo v8 website builder on a clean Ubuntu 14.04 install. I need to be able to redirect website page links, and I do not see any way to do it with an .htaccess file or anything within the Odoo website builder to change web page URLs. Does anyone know of another approach to solving this? It is important for SEO to be able to rename or redirect pages from another website once it is moved to an Odoo installation.

Answer: You would need to create a new controller that returns a redirection. To create a controller, create a class that inherits http.Controller; your redirection method should be decorated with @http.route:

    @http.route(['/mypage'], type='http', auth="public", website=True)
    def redirect(self):
        return werkzeug.utils.redirect("/myNewPage", 301)

You can look at the blog module from Odoo v8 for more examples of the use of redirects.
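For readers unfamiliar with what `werkzeug.utils.redirect(..., 301)` actually produces, here is a minimal stdlib-only sketch (no Odoo or Werkzeug involved; the paths are made up) of a WSGI app that issues the same kind of permanent redirect:

```python
# Minimal WSGI app: any request for /mypage gets a 301 Moved Permanently
# pointing at /myNewPage -- the same shape of response that
# werkzeug.utils.redirect("/myNewPage", 301) builds inside Odoo.
def app(environ, start_response):
    if environ.get("PATH_INFO") == "/mypage":
        start_response("301 Moved Permanently",
                       [("Location", "/myNewPage"),
                        ("Content-Type", "text/plain")])
        return [b"Moved: see /myNewPage"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

# Tiny driver that captures what the app sends, instead of a real server.
def call(path):
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = dict(headers)
    body = b"".join(app({"PATH_INFO": path}, start_response))
    return captured["status"], captured["headers"], body

status, headers, _ = call("/mypage")
print(status)               # 301 Moved Permanently
print(headers["Location"])  # /myNewPage
```

The key pieces for SEO are the `301` status (telling crawlers the move is permanent) and the `Location` header (where the content lives now).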
https://www.odoo.com/forum/help-1/question/is-there-a-way-to-do-301-redirects-76933
I've been working on this code all night and I can't seem to figure out how to do the math. I think it needs a loop of some kind on lines 17 and 18, and it also won't print out the average MPG. Please help.

    def main():
     1     print("This program calculates fuel efficiency over a multi-leg journey.")
     2     print("You should enter the gallons of gas consumed and miles traveled")
     3     print("for each leg. Just hit <Enter> to signal the end of the trip.")
     4     print()
     5
     6
     7     total_distance, total_fuel = 0.0, 0.0
     8     inStr = input("Enter gallons and miles (with a space between): ")
     9     while inStr != "":
    10         gallons, miles = inStr.split()
    11         gallons = eval(gallons)
    12         miles = eval(miles)
    13
    14         # This next line should print just the miles per gallon
    15         # for the leg of the trip you just received from the user
    16         print("MPG for this leg: {0:0.1f}".format(miles/gallons))
    17         total_distance = miles + miles
    18         total_fuel = gallons + gallons
    19         inStr = input("Enter gallons and miles (with a space between): ")
    20
    21
    22     print()
    23     # This section should print the TOTAL distance & TOTAL fuel used during the trip
    24     print("You traveled a total of {0:0.1f} miles on {1:0.1f} gallons."
    25           .format(total_distance, total_fuel))
    26     # This section should print the average based on the TOTAL distance & TOTAL fuel
    27     # used during the trip
    28     print("The fuel efficiency was {0:0.1f} miles per gallon."
    29           .format(total_distance/total_fuel))
    30
    31 if __name__ == '__main__':
    32     main()
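For what it's worth, the likely bug on lines 17-18 is that each leg overwrites the totals: `total_distance = miles + miles` doubles the current leg instead of accumulating it. A hedged, minimal sketch of just the accumulation logic, refactored so it can be tested without `input()` (the function name `trip_totals` is my own invention, not from the post):

```python
def trip_totals(legs):
    """legs is a list of (gallons, miles) pairs; returns totals and average MPG."""
    total_distance = 0.0
    total_fuel = 0.0
    for gallons, miles in legs:
        # Accumulate instead of overwriting -- this replaces
        # `total_distance = miles + miles` from the original post.
        total_distance += miles
        total_fuel += gallons
    return total_distance, total_fuel, total_distance / total_fuel

dist, fuel, mpg = trip_totals([(10.0, 300.0), (5.0, 100.0)])
print(dist, fuel)  # 400.0 15.0
print(round(mpg, 1))  # 26.7
```

In the original loop, the equivalent fix would be `total_distance += miles` and `total_fuel += gallons` on lines 17-18 (or `total_distance = total_distance + miles`, which reads the same).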
https://www.daniweb.com/programming/software-development/threads/416822/i-think-i-need-a-loop
Core expressions

Function call

Call a function with given parameters. The type of the function call expression is the same as the type of the function's return value; that is, if the function's type is 'a -> 'b, then the type of the function call expression is 'b. The value of the whole expression is the return value of the function. ref and out are used to denote a parameter passed by reference.

Assignment

Assign a value to a variable. The left side of the assignment expression must evaluate to a mutable variable. The type of the assignment is always void. In earlier versions there was a <- assignment operator, which is now (as of Nemerle 0.2.x) deprecated.

Match expression

expr is matched sequentially against the patterns in the given match cases. If one of the patterns is consistent with the value of expr, then the corresponding computation branch of that match case is evaluated. Patterns in all the match cases must be of the same type. The expressions forming the computation branches of all the match cases must share a common supertype, which is the type of the entire match. A guarded pattern requires the guard expression expr to be of type bool. An expression e satisfies a guarded pattern only if it matches pattern and the guard expression evaluates to true. An expression e satisfies a match case if and only if it satisfies one of the guarded patterns in that match case.

Throw expression

Throws the given exception. The type of the exception thrown must be a subtype of System.Exception.

Try..catch expression

If the evaluation of expr does not throw any exception, then the result is that of the evaluation of expr. Otherwise, the runtime type of the thrown exception is compared against each type description in handlers. The first matching handler is executed and its value returned. If none of the handlers matches, the exception is propagated.
The type of the whole expression is the same as the type of the guarded expression. The value is the value of the expression, or of the executed handler. Please consult the .NET specification if you want to know more about exceptions. The optional finally clause has the same meaning as below.

Try..finally expression

Evaluates the first expression and, regardless of whether the evaluation finished correctly or some exception was thrown during it, evaluates the second expression. The value (and thus the type) of the whole expression is the value of the first expression.

Unary operator application

Unary operators for numeric types:

- ++ prefix increment, with void return type
- -- prefix decrement, with void return type
- + a no-op
- - negation

Integer types also have ~ (bitwise negation) defined. The boolean type has ! (boolean negation). User-defined types can have other operators defined.

Binary operator application

There are a number of standard operators predefined for arithmetic types (floating point and integer):

- - subtraction
- * multiplication
- / division
- < less-than comparison
- > greater-than comparison
- <= less-than-or-equal
- >= greater-than-or-equal
- == equality
- != inequality
- += addition with assignment
- -= subtraction with assignment
- *= multiplication with assignment
- /= division with assignment

For integer types only (int, uint, short, ushort, long, ulong, byte, sbyte) there are:

- << left bitwise shift
- >> right bitwise shift
- % modulus
- %= modulus with assignment
- <<= left bitwise shift with assignment
- >>= right bitwise shift with assignment
- | bitwise or
- & bitwise and
- ^ bitwise xor
- %|| bitwise `or' returning true iff the result of the `or' is nonzero
- %&& bitwise `and' returning true iff the result of the `and' is nonzero
- %^^ bitwise `xor' returning true iff the result of the `xor' is nonzero
- |= bitwise or with assignment
- &= bitwise and with assignment
- ^= bitwise xor with assignment

Type cast

This expression allows dynamic type coercion.
It is done at runtime, and if it cannot be realized then System.InvalidCastException is thrown. If it succeeds, the type of this expression is equal to the type of type.

Type enforcement

This expression allows static type enforcement. It is checked at compile time, and an error is reported if the type of expr is not a subtype of type. It allows only type widening. If it succeeds, the type of this expression is equal to the type of type.

One-case matching

Equivalent to match (expr) { pattern => true | _ => false }. The usage of matches instead of is is now deprecated.

Dynamic type check

Equivalent to match (expr) { _ is type => true | _ => false }.

checked/unchecked blocks

Turn overflow checking for arithmetic operators on or off. Checks are on by default.

Block expression

Expressions in the sequence are evaluated sequentially, and the value (and thus the type) of the sequence is the value of the last expression in it. The values of all expressions except the last one are ignored, and thus if the type of such an expression is not void, a warning is generated. The ; is optional after } inside a sequence.

The first form is just a standard execution of a sequence of expressions. The value (and type) of this block is the same as that of the last expression in the sequence. Note that a block is always implicitly followed by ;, so to use its value anywhere other than at the end of some expression you'll need to surround it with ( ).

The second form is a shortcut for matching the parameters of a defined function against a given list of patterns. It is equivalent to making a tuple from the parameters of the function and creating a match expression:

    def f (p1, p2, p3) {
      | (1, 3, "a") => 1
      | _ => 2
    }

translates to

    def f (p1, p2, p3) {
      match ((p1, p2, p3)) {
        | (1, 3, "a") => 1
        | _ => 2
      }
    }

It is also worth noting that when a function has only one parameter, the matching is performed on that parameter itself (no one-element tuple is created).

Array constructor

Create an array consisting of the given elements.
All elements must be of the same type. If the elements are of type 'a, then the whole expression is of type array ['a]. The number in [] is the array rank; it defaults to 1. If a rank is specified, rows, columns and so on are specified using nested [], as in:

    array .[2] [[1, 2], [3, 4], [5, 6]]
    array .[3] [[[1, 2], [10, 20]], [[11, 12], [110, 120]]]

Value definition

Defines a binding between the variables in the pattern and the value of the expression expr, which will be known to all subsequent expressions in the current block.

Local function definition

Defines functions which will be known to all subsequent expressions in the current block. The names of all defined functions are put into the symbol space before their bodies are parsed. (Note that this implies that the body of a defined function is a subsequent expression too.)

Mutable value definition

Defines a new variable whose value can be changed at any time using the assignment expression.
http://nemerle.org/Core_expressions_%28ref%29
The Google Maps API for Flash provides a new way to add interactive Google Maps to your website, using Adobe's Flash® plugin to display dynamic map content! This API exists as a fully independent alternative to the existing JavaScript Maps API, and provides many of the features of that API while also adding the ability to mix Flash content with Google Maps. The Google Maps API for Flash is new, so we'd like to hear your feedback. We encourage you to join the Maps API for Flash discussion group to give us feedback.

This documentation is designed for people familiar with Flash, ActionScript® programming, and object-oriented programming concepts. This documentation contains three independent tutorials, covering creation of a "Hello World" application in one of the possible Flash development environments:

All new developers should read the tutorial appropriate for their development environment, which explains how to write your first Google Maps Flash application. In addition, this documentation is organized to cover the following key areas:

Most of the documentation is focused on supporting Flex developers. However, we will strive to keep the documentation usable by all Flash developers.

Providing a Flash version of the Google Maps API allows current Flash developers to easily integrate Google Maps into their existing Flash development environments. As well, the Google Maps API for Flash opens up a whole world of interactive possibilities for displaying and using map content for those developers not currently using Flash.

This developer guide assumes you are familiar with Flash development and ActionScript programming. It does not assume usage of any particular development environment, though we provide tutorials for different development environments. Note: this guide uses UNIX-like command line examples and Macintosh screen shots, though usage should not appreciably change for other developers.
Flash development can take many forms. Some developers/designers author purely within the Flash CS3 application to create and arrange content, and add ActionScript within that framework. Other developers use a full-featured IDE such as Adobe FlexBuilder® to create robust applications with heavy use of ActionScript. Some developers use the freely available Flex SDK® from Adobe and build their applications from the command line. The choice of development environment is up to you. This documentation provides tutorials for all three approaches to get you going. However, the code samples within this documentation set are provided as MXML files, for use within either FlexBuilder or the free Flex SDK. It is relatively straightforward to use the ActionScript code embedded within those files directly within Flash CS3.

The Google Maps API for Flash now directly supports Adobe AIR® applications within the Flex development environment. Check out the tutorial for authoring AIR applications (experimental) within the FlexBuilder Tutorial.

Developing Flash content that integrates Google Maps requires inclusion of a Google Maps API for Flash interface library within your application code. This library consists of a *.swc file within the lib directory of the Maps API for Flash SDK, available at the following URL:

The SDK includes two SWC files: a Flex version for use within FlexBuilder (or with the free Flex SDK), and a non-Flex version for use within Flash CS3. The Flex *.swc is denoted with a _flex suffix in the filename. These SWC files contain interfaces for all public classes in the Google Maps API for Flash development environment. Compiling your application with this library ensures that it can utilize and communicate with all public functionality of the runtime Google Maps API for Flash library, which is retrieved from Google's servers whenever a client loads your application.
Note that the bulk of the code for actually running your Maps Flash application remains within the separate runtime Google Maps API for Flash library. This allows us to make enhancements, bug fixes, and modifications to the core library functionality without requiring you to recompile your application. Note that if you ever wish to utilize new functionality that requires new interfaces, you will need to download an updated SWC file and recompile your application.

The interface library filename contains a suffix identifying its version number. For example, map_flex_1_7.swc identifies version 1.7 of the Flex interface library, while map_1_7.swc identifies version 1.7 of the Flash interface library. Once you've downloaded the interface library, create a development directory and place that file in the root of that directory.

    #
    # Create a development directory
    #
    hostname$ mkdir myflashapp
    hostname$ cd myflashapp

    #
    # Copy the Google Maps API for Flash SDK to the root of your working development directory
    #
    hostname$ cp ~/sdk.zip .

    #
    # Unzip the SDK. The SWC interface library is located within the "lib" directory.
    # Offline ASDoc HTML documentation is available within the "docs" directory.
    #
    hostname$ unzip sdk.zip

Make note of this directory location. You will need it when you link to the proper SWC file during development.

The Google Maps API for Flash, like the Google JavaScript Maps API, requires usage of a freely available developer key. You will need to specify this key within one of three possible locations:

- MXML declaration

Note that the API key is compiled into the SWF file and must match the domain where the SWF file is hosted, which may not necessarily be the location of the hosted HTML file. This document set will show Flex examples defining the API key within the MXML declaration.
The Google Maps API for Flash interface library contains the ActionScript interfaces that allow you to communicate with the actual components provided through Google's runtime library. Occasionally, we will make updates to these components "under the hood." As long as the interfaces do not change, you don't need to do anything; the interface SWC file will automatically pick up the latest changes. If we introduce new functionality and features (and therefore need to update the interfaces), we will also need to update the interface SWC file, which we will provide for download. Such versions will be named according to the version scheme. (For example, the interface library associated with version "1.3" will be named "map_flex_1_3.swc.") You will need to download a new version of this interface library before you can make use of these latest features. Make sure you monitor the Google Maps API group or blog to ensure you receive announcements for new versions of the Google Maps API for Flash.

You may set your application to use a hard-coded version of the Maps API for Flash library by setting the Map's version property to a specific version. Note that this is not recommended, as you will not benefit from any enhancements or bug fixes to the current implementation.

The following tutorials walk you through creating a sample application using the Google Maps API for Flash in your preferred development environment:

You do not need to read all three tutorials to get started, though it may be useful to compare the different development processes.
http://code.google.com/apis/maps/documentation/flash/intro.html
crawl-002
refinedweb
1,160
52.19
Ok I'm trying to make a strncpy prototype, and it's sort of working, but on the first go it eats up the first letter, and it has trouble quitting the loop: you have to enter q twice. I've tried different implementations of the while loop and this is the only one so far to even come close to working, but again it's eating things up, so I'm guessing I'm not supposed to be using getchar as my check. But I just can't figure out what else to do. Thanks again for your help guys.

Code:
#include <stdio.h>
#define MAX 80

char ncpy(char * orig, char * add, int n);

int main(void)
{
    char ins[MAX+1], bee[MAX+1];
    int n, L = 0;

    printf("Input a line [q to quit]\n");
    while (getchar() != 'q')
    {
        gets(bee);
        puts("Choose length to be copied");
        scanf("%d", &n);
        // printf("%s %d", bee, n);
        ins[L] = ncpy(ins + L, bee, n);
        printf("\n%s\n", ins);
        L += n;
        printf("Input a line [q to quit]\n");
    }
    return 0;
}

char ncpy(char * orig, char * add, int n)
{
    int i;
    for (i = 0; i < n; i++)
        orig[i] = add[i];
    return (*orig);
}
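The symptoms described come from mixing getchar() with gets()/scanf(): the getchar() in the loop condition consumes the first character of every line, and the newline left behind by scanf is what the next getchar() sees instead of q. One hedged way to fix both (a sketch with my own helper names, not the poster's final code, and simplified to copy a fixed n characters per line instead of prompting each time): read whole lines with fgets(), and make the copy helper null-terminate its output.

```c
#include <stdio.h>
#include <string.h>

/* Copy at most n characters from src to dst and null-terminate dst,
 * so the result is always a valid C string (the version above never
 * wrote a terminator). Returns dst. */
char *ncpy(char *dst, const char *src, int n)
{
    int i;
    for (i = 0; i < n && src[i] != '\0'; i++)
        dst[i] = src[i];
    dst[i] = '\0';
    return dst;
}

/* Append the first n characters of each line read from `in` to out[],
 * stopping at a line that starts with 'q'. Lines are read with fgets()
 * instead of a getchar() test plus the unsafe gets(), so no character
 * is silently eaten and 'q' quits on the first try.
 * Returns the number of characters stored in out[]. */
int copy_lines(FILE *in, char *out, int n)
{
    char line[81];
    int used = 0;

    out[0] = '\0';
    while (fgets(line, sizeof line, in) && line[0] != 'q') {
        line[strcspn(line, "\n")] = '\0';  /* strip the newline fgets keeps */
        ncpy(out + used, line, n);
        used += (int)strlen(out + used);
    }
    return used;
}
```

Called with stdin as the input stream, this reproduces the intended behaviour of the program above without the double-q problem.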
https://cboard.cprogramming.com/c-programming/92154-strncpy.html
CC-MAIN-2017-34
refinedweb
203
77.1
20 January 2010 04:54 [Source: ICIS news] SINGAPORE (ICIS news)--Taiwanese state-owned cracker operator Chinese Petroleum Corp (CPC) has increased operating rates at its three crackers to 90-95% on robust ethylene margins, an industry source said on Wednesday. Operating rates at the plants were around 85-90% in late December, the source added.

High naphtha demand helped buoy Asian prices, keeping the spread between the first-half March and second-half March contracts at a wide backwardation of $9/tonne, ICIS pricing data showed. "The (naphtha) market is driven up by ethylene, so the crackers are running at higher rates of close to 95%," said the source, who insisted on anonymity.

Asian ethylene spot prices were pegged at a firm $1,280-1,320/tonne (€896-924/tonne) CFR (cost and freight) NE (northeast) Asia.

CPC had floated a tender to buy at least 30,000 tonnes of heavy naphtha for March delivery amid high operating rates. Its biggest cracker, with 500,000 tonne/year capacity, is located in

($1 = €0.70)
http://www.icis.com/Articles/2010/01/20/9327212/taiwans-cpc-runs-crackers-at-95-on-robust-ethylene.html
CC-MAIN-2015-22
refinedweb
176
57.2
DEBSOURCES
sources / nmap / 6.00-0.3+deb7u1 / nmap_tty.cc

/***************************************************************************
 * nmap_tty.cc -- Handles runtime interaction with Nmap, so you can       *
 * increase verbosity/debugging or obtain a status line upon request.     *
 ***************************************************************************/

#ifndef WIN32
#include "nmap_config.h"
#endif

#include "nmap.h"
#include "nmap_tty.h"
#include "utils.h"

#ifdef WIN32
static void tty_flush(void)
{
        static HANDLE stdinput = GetStdHandle(STD_INPUT_HANDLE);

        FlushConsoleInputBuffer(stdinput);
}
#else
static void tty_flush(void)
{
        /* we don't need to test for tty_fd==0 here because
         * this isn't called unless we succeeded
         */
        tcflush(tty_fd, TCIFLUSH);
}
#endif

/*
 * Initializes the terminal for unbuffered non-blocking input. Also
 * registers tty_done() via atexit(). You need to call this before
 * you ever call keyWasPressed().
 */
void tty_init()
{
        struct termios ti;

        if (o.noninteractive)
                return;
        /* ... */
}

bool keyWasPressed()
{
        /* Where we keep the automatic stats printing schedule. */
        static struct timeval stats_time = { 0 };
        int c;

        if (o.noninteractive)
                return false;

        if ((c = tty_getchar()) >= 0) {
                tty_flush(); /* flush input queue */
                /* ... */
        }

        /* Check if we need to print a status update according to the --stats-every option. */
        if (o.stats_interval != 0.0) {
                struct timeval now;

                gettimeofday(&now, NULL);
                if (stats_time.tv_sec == 0) {
                        /* Initialize the scheduled stats time. */
                        stats_time = *o.getStartTime();
                        TIMEVAL_ADD(stats_time, stats_time, (time_t) (o.stats_interval * 1000000));
                }

                if (TIMEVAL_AFTER(now, stats_time)) {
                        /* Advance to the next print time. */
                        TIMEVAL_ADD(stats_time, stats_time, (time_t) (o.stats_interval * 1000000));
                        /* If it's still in the past, catch it up to the present. */
                        if (TIMEVAL_AFTER(now, stats_time))
                                stats_time = now;
                        printStatusMessage();
                        /* Instruct the caller to print status too. */
                        return true;
                }
        }

        return false;
}
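The --stats-every bookkeeping at the end of keyWasPressed() boils down to a small scheduling rule: once the interval has elapsed, advance the next print time by one interval, and if that advanced time is still in the past, snap it to the present. A standalone sketch of just that rule (microsecond integers instead of struct timeval, and names of my own, not Nmap's):

```c
#include <stdbool.h>

/* Times in microseconds, for simplicity. */
typedef long long usec_t;

/* Decide whether a status line is due. *next is the scheduled print
 * time and interval is the --stats-every period. Mirrors the catch-up
 * rule above: advance the schedule by one interval, and if it has
 * fallen behind wall-clock time, snap it to now. */
bool stats_due(usec_t now, usec_t *next, usec_t interval)
{
    if (now <= *next)
        return false;        /* not yet time to print */
    *next += interval;       /* advance to the next print time */
    if (now > *next)
        *next = now;         /* still in the past: catch up to the present */
    return true;
}
```

With this rule a long stall produces a single status line right after the stall, instead of a burst of backlogged ones.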
https://sources.debian.org/src/nmap/6.00-0.3+deb7u1/nmap_tty.cc/
CC-MAIN-2021-04
refinedweb
233
52.87
in reply to Re: Perl Style: Is initializing variables considered taboo?
in thread Perl Style: Is initializing variables considered taboo?

Of course I use strictures in all my code! Why else would I be asking about variable initialization if I did not predeclare them? The question was more on style (hence "Perl Style") and the fact that the pod for my in perlfunc does not specify if it actually initializes the variable to any specific value. Nevertheless, after your response I discovered that the perlsyn pod does, under "Declarations", and quite extensively explains the initialization details. It seems that in a few cases it is wise to initialize, but not to undef, because as you say, it is just plain wasteful. Though IMHO I still think it just looks neater in the code, but I shall refrain from doing so to undef in the future. I have a faint memory that I picked up the practice many years ago from the Camel book (or PBP), and I think it states somewhere that it is good practice to always initialize variables, but I could be wrong... or maybe it's just an old habit from my background in C.

Applications arise especially when passing "positional" lists or assigning to lists with "gaps", i.e.

($a, undef, $b) = @array;     # ignore second parameter
function($a, undef, $b);      # treat second parameter as not supplied

The latter is especially necessary if you are checking arguments within the sub for definedness and changing to default values.

UPDATE: An extension of this case is a hash or array element which exists but is not defined!

> Of course I use strictures in all my code!

it's more about use warnings to be warned about undefined variables.

UPDATE: and please be careful not to return undef to "explicitly" return false.

Cheers Rolf

($a, $b) = @array[1,3];
my ($minutes, $hours) = (localtime())[1,2];

-- TTTATCGGTCGTTATATAGATGTTTGCA

> and please be careful not to return undef to "explicitly" return false.
perl -e 'foreach (@INC){print "$_\n" unless $_ eq q{.}}' | xargs grep -R "return undef" | wc -l
1752

In scalar context, return; evaluates to undef. In list context, it evaluates to the empty list. In both contexts, it evaluates to something which evaluates to false in boolean context. In list context, return undef; evaluates to a single-element list which evaluates to true in boolean context.

maybe a little example will make it clearer

DB<1> sub tst {return undef}
DB<2> if ($a=tst()) {print "TRUE"} else {print "FALSE" }
FALSE
DB<3> if (@a=tst()) {print "TRUE"} else {print "FALSE" }
TRUE
DB<4> sub tst { return (); }
DB<5> if ($a=tst()) {print "TRUE"} else {print "FALSE" }
FALSE
DB<6> if (@a=tst()) {print "TRUE"} else {print "FALSE" }
FALSE

line 4 is of course redundant; return; and return (); do exactly the same thing.

> I have found it wise to always return an explicit value,

OK, if you wanna code more "explicitly", you should better define constants for TRUE, FALSE and FAILED.

DB<9> use constant FAILED => ();
DB<10> use constant FALSE => !1;
DB<11> use constant TRUE => 1;
DB<12> sub t_FALSE { return FALSE; }
DB<13> sub t_TRUE { return TRUE; }
DB<14> sub t_FAILED { return FAILED; }
DB<15> if (@a=t_FAILED) {print "TRUE"} else {print "FALSE" }
FALSE
DB<16> if (@a=t_FALSE) {print "TRUE"} else {print "FALSE" }
TRUE
DB<17> if ($a=t_FALSE) {print "TRUE"} else {print "FALSE" }
FALSE
DB<18> if ($a=t_FAILED) {print "TRUE"} else {print "FALSE" }
FALSE

If you wonder about my definition of FALSE, see Truth and Falsehood in perlsyn:

    Negation of a true value by "!" or "not" returns a special false value. When evaluated as a string it is treated as '', but as a number, it is treated as 0.

UPDATE: please note: FALSE is defined!

DB<27> p !defined (FAILED)
1
DB<28> p defined (FALSE)
1

i.e. FALSE acts like a defined value, as in most other languages. On second thought FAILED should better be named EMPTY or NOTHING.
Failure is just an interpretation of returning nothing.

> Of course I use strictures in all my code!

I did say "if". Not everybody uses strict; and I don't know you from the next guy, so there is no "of course" about it.

> Why else would I be asking about variable initialization if I did not predeclare them?

As for style, that is in the eye of the beholder. I find much perlstyle a total anathema, often supported by nothing more than justifications. As long as you aren't using the very ancient C style where all the variables had to be declared at the beginning of the sub, instead of when they are first used or close to where they are first used as in modern C, I don't see any problem at all. Often I see my @x=(); vs just my @x;, but I don't see any big point to get really riled up about! I seldom see a statement with a scalar like my $x=undef; vs my $x; because something happens with $x in the next line or two. An array or hash declaration happens more often closer to the beginning of the code and can have a wider distance between declaration and use. Of the universe of style issues, this issue probably shouldn't be at the "top of your hit parade".
http://www.perlmonks.org/index.pl?node_id=856467
CC-MAIN-2014-15
refinedweb
938
70.02
Subject: Re: [boost] [python] Setting a pointer into an object From: David Abrahams (dave_at_[hidden]) Date: 2008-09-30 16:14:47 on Tue Sep 30 2008, "Robert Dailey" <rcdailey-AT-gmail.com> wrote: > Hi, > > I am trying to do the following: > > SomeClassOfMine* myInstance; // Assume valid pointer > boost::python::object myObject( myInstance ); > > However, doing so causes an exception to be thrown when I run my > application. The exception thrown is below (Note I'm using MSVC9): > > First-chance exception at 0x7c812aeb in Crusades_Debug.exe: Microsoft > C++ exception: boost::python::error_already_set at memory location > 0x0012f0cc.. > Unhandled exception at 0x7c812aeb in Crusades_Debug.exe: Microsoft C++ > exception: boost::python::error_already_set at memory location > 0x0012f0cc.. > > > How can I assign the pointer above to a boost::python::object? I'm > trying to give my python scripts access to an instance of a class that > I haven't really exposed to Python yet, so I'm not sure of the > consequences of that. I was going to extend the SomeClassOfMine class > to python later on after I verify that I can get this working. Once I > have myObject in a valid state, I will insert it into the __main__'s > __dict__ namespace for a specific module so that it has access to it > as a global python variable. Help is appreciated. Just wrap the class without exposing any of its interesting parts and you should be fine: class_<SomeClassOfMine>("SomeClassOfMine"); -- Dave Abrahams BoostPro Computing Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2008/09/142870.php
CC-MAIN-2021-10
refinedweb
260
56.05
Hi, I rapidly need help!!! I have developed this game (or the classes anyway), and the compiler just throws a bunch of crap at me when I try to compile it.

public class ship
{
    public int fuel = 100;
    public int armour = 100;
    public int shields = 100;
    public int power = 100;
    public bool dead = false;
    public bool radprotect = false;

    public void laserdamage()
    {
        armour -= 30;
        power -= 10;
    }

    public void rocketdamage()
    {
        armour -= 50;
        power -= 20;
    }

    public void nukedamage()
    {
        dead = true;
    }
}

//Somewhere else...
ship player = new ship();
Console.Writeline(player.armour);

And it compiles sometimes, but when it does it gives me 5570268 as the value EVERY TIME!!! Help!!! Also, if it helps, the "Somewhere else" is in another class, in my Main() function.
https://www.daniweb.com/programming/software-development/threads/283770/help-with-c-game-classes
CC-MAIN-2017-34
refinedweb
119
68.81
The presentation of this document has been augmented to identify changes from a previous version. Three kinds of changes are highlighted: new, added text; changed text; and deleted text.

Copyright © 2002 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use, and software licensing rules apply.

NOTICE: This draft is for public discussion.

XML namespaces provide a simple method for qualifying element and attribute names used in Extensible Markup Language documents by associating them with namespaces identified by URI references.

This document is the first draft of a new 1.1 revision of the Namespaces in XML specification. It will incorporate several errata to the 1.0 specification, and one substantive change: the provision of a mechanism to "undeclare" prefixes. This document does not yet incorporate the errata; they will be added in a future draft.

A The Internal Structure of XML Namespaces (Non-Normative)
A.1 The Insufficiency of the Traditional Namespace
A.2 XML Namespace Partitions
A.3 Expanded Element Types and Attribute Names
A.4 Unique Expanded Attribute Names
B Acknowledgements (Non-Normative)
C

Furthermore, the attribute value in the innermost such declaration must not be empty.
http://www.w3.org/TR/2002/WD-xml-names11-20020403/
CC-MAIN-2015-40
refinedweb
193
52.05
Forum Index --- Comment #1 from Jon Degenhardt <jrdemail2000-dlang@yahoo.com> --- (In reply to Jon Degenhardt from comment #0) > Results for the count 9's program, against the 2.7, 14 million line file: > byLine: 8.98 seconds > byChunk: 1.64 seconds Update: The byLine Count 9s program given above is being affected by autodecode. Changing counting line as follows: from: chunkedStream.each!(x => count += x.count('9')); to: chunkedStream.each!(x => count += (cast(ubyte[])x).count('9')); Gives updated times: byLine: 3.13 byChunk: 1.64 This is more consistent. byChunk is faster than byLine in the Count 9s task (read and access data), by not by 5x. The 15x performance deficit of byChunk relative to byLine on the file copy task remains. -- Steven Schveighoffer <schveiguy@yahoo.com> changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |schveiguy@yahoo.com --- Comment #2 from Steven Schveighoffer <schveiguy@yahoo.com> --- The issue is that LockingTextWriter accepts anything but character range. What is happening (I think) is that lockingTextWriter.put(someUbyteArray) is actually putting each ubyte into the stream one at a time after converting each ubyte into a dchar through integer promotion. This can't be the intention. I think instead of the constraint is(ElementType!A : const(dchar)) what we need is something more like: is(Unqual!(ElementType!A) == dchar) This might break code, but I think code that is already broken, such as the example, no? -- --- Comment #3 from Jon Degenhardt <jrdemail2000-dlang@yahoo.com> --- I've confirmed that File.byChunk with lockingTextWriter corrupts utf-8 encoded files. I used the unicode test file: and the example given with the File.byChunk documentation: // Efficient file copy, 1MB at a time. 
import std.algorithm, std.stdio; void main() { stdin.byChunk(1024 * 1024).copy(stdout.lockingTextWriter()); } This file copy program corrupts the unicode characters as described in Steven's comment. This is a quite problematic, both because of character corruption and because it is an example in the documentation. The new method, lockingBinaryWriter, copies the file correctly. It is available starting with 2.073.1. lockingBinaryWriter also copies the file quickly, eliminating the performance issue. It is appears from the PR for lockingBinaryWriter () that there was discussion of the roles of Binary and Text writer. Regardless of availability of the lockingBinaryWriter, the lockingTextWriter certainly looks broken when used with the ubyte data type. Personally, I think it makes sense for lockingTextWriter to assume ubyte arrays are correctly encoded, or perhaps are utf-8 encoded. This would potentially allow newline translation, something that the lockingBinaryWriter would presumably not do. -- Jon Degenhardt <jrdemail2000-dlang@yahoo.com> changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|File.byChunk w/ |File.byChunk (ubyte) w/ |stdout.lockingTextWriter is |stdout.lockingTextWriter |very slow |corrupts utf-8 data (and is | |very slow) --- Comment #4 from Jon Degenhardt <jrdemail2000-dlang@yahoo.com> --- Changing the subject to reflect the more serious problem, corruption of utf-8 encoded data. As described in one of the comments, this corruption occurs in a file copy example in the documentation. -- --- Comment #5 from anonymous4 <dfj1esp02@sneakemail.com> --- static assert(is(ubyte:dchar)); This assert succeeds. Is it intended? It's why LockingTextWriter accepts bytes event though it's a text writer. -- --- Comment #6 from Steven Schveighoffer <schveiguy@yahoo.com> --- (In reply to anonymous4 from comment #5) > static assert(is(ubyte:dchar)); > This assert succeeds. Is it intended? 
It's why LockingTextWriter accepts > bytes event though it's a text writer. Yes, in the compiler, ubyte and dchar are fundamentally unsigned integer types. You can implicitly convert via integer promotion. Technically, char can integer promote to dchar as well. However, foreach(dchar d; someCharArray) is specialized in the compiler to go through auto decoding. The forums contain volumes of battles on auto-decoding and how the choice has affected D, I don't think we need to rehash it here. What we need to do is make it so obviously wrong code is not accepted for this function. I'll try and get a PR together. -- Steven Schveighoffer <schveiguy@yahoo.com> changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |pull Hardware|x86 |All OS|Mac OS X |All Severity|enhancement |major --- Comment #7 from Steven Schveighoffer <schveiguy@yahoo.com> --- PR: -- --- Comment #8 from github-bugzilla@puremagic.com --- Commits pushed to master at Fix issue 17229 - do not use integer promotion to translate ubyte ranges to dchar ranges. Add unittest for Bug 17229 Merge pull request #5229 from schveiguy/fixtextwriter Fix issue 17229 - File.byChunk (ubyte) w/ stdout.lockingTextWriter corrupts utf-8 data (and is very slow) merged-on-behalf-of: Jack Stouffer <jack@jackstouffer.com> -- --- Comment #9 from github-bugzilla@puremagic.com --- Commits pushed to stable at Fix issue 17229 - do not use integer promotion to translate ubyte ranges Add unittest for Bug 17229 Merge pull request #5229 from schveiguy/fixtextwriter --
http://forum.dlang.org/thread/bug-17229-3@https.issues.dlang.org%2F
CC-MAIN-2017-17
refinedweb
795
53.17
iCelPropertyClass Struct Reference

This is a property class for an entity. More...

#include <physicallayer/propclas.h>

Detailed Description

This is a property class for an entity. A property class describes physical attributes of an entity.

Definition at line 79 of file propclas.h.

Member Function Documentation

Activate this property class. This means it will process events again. Property classes are activated by default.

Add a callback which will be fired when a property changes. Not all property class implementations actually have properties. It is safe to call this function with the same callback (nothing will happen in that case and false will be returned). Otherwise this function will IncRef the callback.

Deactivate this property class. This means that events will no longer be processed.

Return true if this property class was modified after the baseline.

Return true if a property is read-only.

Mark the baseline for this property class. This means that the status of this property class as it is now doesn't have to be saved. Only changes to the property class that happen after this baseline have to be saved. A property class doesn't actually have to do this in a granular way. It can simply say that it saves itself completely as soon as it has been modified after the baseline.

Get the associated position information. Can be 0 if this property class doesn't support positional information.

Remove a callback. It is safe to call this function for a callback that is not registered. Nothing will happen in that case (except that false is returned). This function will DecRef the callback if it was present.

Call this function if the property class...

Generated by doxygen 1.6.1
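The callback rules described above (adding an already-registered callback does nothing and returns false; removing an unregistered callback is likewise safe and returns false; IncRef on a successful add, DecRef on a successful remove) can be modelled in a few lines. This is an illustrative C sketch, not CrystalSpace's code: reference counting is reduced to a plain counter and the callback set to a fixed-size array.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_CALLBACKS 8

typedef struct {
    int refcount;               /* stands in for IncRef/DecRef */
} Callback;

typedef struct {
    Callback *cbs[MAX_CALLBACKS];
    size_t count;
} PropertyClass;

/* Add a callback; returns false (and does nothing) if it is already
 * present, mirroring the "safe to call with the same callback" rule. */
bool add_callback(PropertyClass *pc, Callback *cb)
{
    for (size_t i = 0; i < pc->count; i++)
        if (pc->cbs[i] == cb)
            return false;       /* already registered: no IncRef */
    if (pc->count == MAX_CALLBACKS)
        return false;           /* illustrative capacity limit */
    cb->refcount++;             /* IncRef on successful registration */
    pc->cbs[pc->count++] = cb;
    return true;
}

/* Remove a callback; returns false if it was not registered. */
bool remove_callback(PropertyClass *pc, Callback *cb)
{
    for (size_t i = 0; i < pc->count; i++) {
        if (pc->cbs[i] == cb) {
            cb->refcount--;     /* DecRef on removal */
            pc->cbs[i] = pc->cbs[--pc->count];
            return true;
        }
    }
    return false;               /* not registered: nothing happens */
}
```

The idempotent add/remove pair means callers never have to track whether they already registered; double-registration cannot inflate the reference count.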
http://crystalspace3d.org/cel/docs/online/api/structiCelPropertyClass.html
CC-MAIN-2015-40
refinedweb
282
61.63
User-defined method objects may be created when getting an attribute of a class (perhaps via an instance of that class), if that attribute is a user-defined function object, an unbound user-defined method object, or a class method object.

class A(object):
    # func: A user-defined function object
    #
    # Note that func is a function object when it's defined,
    # and an unbound method object when it's retrieved.
    def func(self):
        pass

    # classMethod: A class method
    @classmethod
    def classMethod(self):
        pass

class B(object):
    # unboundMeth: An unbound user-defined method object
    #
    # A.func is an unbound user-defined method object here,
    # because it's retrieved.
    unboundMeth = A.func

a = A()
b = B()

print A.func          # output: <unbound method A.func>
print a.func          # output: <bound method A.func of <__main__.A object at 0x10e9ab910>>
print B.unboundMeth   # output: <unbound method A.func>
print b.unboundMeth   # output: <unbound method A.func>
print A.classMethod   # output: <bound method type.classMethod of <class '__main__.A'>>
print a.classMethod   # output: <bound method type.classMethod of <class '__main__.A'>>

The transformation from a function object to a method object happens each time the attribute is retrieved, so a new method object is created on every access:

# Parent: The class stored in the original method object
class Parent(object):
    # func: The underlying function of original method object
    def func(self):
        pass
    func2 = func

# Child: A derived class of Parent
class Child(Parent):
    func = Parent.func

# AnotherClass: A different class, neither subclassing nor subclassed
class AnotherClass(object):
    func = Parent.func

print Parent.func is Parent.func              # False, new object created
print Parent.func2 is Parent.func2            # False, new object created
print Child.func is Child.func                # False, new object created
print AnotherClass.func is AnotherClass.func  # True, original object used
The following is an example of using a user-defined function that can be called multiple (∞) times in a script with ease.

import turtle, time, random  # tell python we need 3 different modules

turtle.speed(0)        # set draw speed to the fastest
turtle.colormode(255)  # special colormode
turtle.pensize(4)      # size of the lines that will be drawn

def triangle(size):
    # This is our own function; in the parentheses is a variable we have defined
    # that will be used in THIS FUNCTION ONLY. This function creates a right triangle.
    turtle.forward(size)        # to begin this function we go forward, the amount to go forward by is the variable size
    turtle.right(90)            # turn right by 90 degrees
    turtle.forward(size)        # go forward, again with the variable
    turtle.right(135)           # turn right again
    turtle.forward(size * 1.5)  # close the triangle. thanks to the Pythagorean theorem we know that this line must be about 1.5 times longer than the other two (if they are equal)

while(1):  # INFINITE LOOP
    turtle.setpos(random.randint(-200, 200), random.randint(-200, 200))  # set the draw point to a random (x, y) position
    turtle.pencolor(random.randint(1, 255), random.randint(1, 255), random.randint(1, 255))  # randomize the RGB color
    triangle(random.randint(5, 55))  # use our function; because it has only one variable we can simply put a value in the parentheses. The value sent will be random between 5 - 55; in the end it really just changes how big the triangle is.
    turtle.pencolor(random.randint(1, 255), random.randint(1, 255), random.randint(1, 255))  # randomize color again
https://sodocumentation.net/python/topic/3965/user-defined-methods
CC-MAIN-2021-10
refinedweb
536
62.85
NSE Academy is a subsidiary of National Stock Exchange of India. NSE Academy straddles the entire spectrum of financial courses, from students of standard VIII right up to MBA professionals. NSE Academy has tied up with premium educational institutes in order to develop a pool of human resources having the right skills and expertise for the financial market. Guided by our mission of spreading financial literacy for all, NSE Academy has constantly innovated its education template; this has resulted in improving the financial well-being of people at large in society. Our education courses have so far helped more than 41.8 lakh individuals become financially smarter through various initiatives.

NCFM is an online certification programme aimed at upgrading skills and building competency. The programme has a widespread reach, with testing centres present at 154+ locations across the country. Candidates can also register in offline mode by filling up the registration form available on the website (> Education > Certifications > Register for Certification). Once registered, a candidate is allotted a unique NCFM registration number along with an online login id and can avail of facilities like SMS alerts, online payment, checking of test schedules, online enrolment, profile update etc. through their login id.
Contents

CHAPTER 1 : MUTUAL FUNDS .......... 5
1.1 INTRODUCTION .......... 5
1.2 MUTUAL FUNDS : STRUCTURE IN INDIA .......... 7
1.3 WHO MANAGES INVESTORS MONEY? .......... 8
1.4 WHO IS A CUSTODIAN? .......... 8
1.5 WHAT IS THE ROLE OF THE AMC? .......... 9
1.6 WHAT IS AN NFO? .......... 9
1.7 WHAT IS THE ROLE OF A REGISTRAR AND TRANSFER AGENTS? .......... 10
1.8 WHAT IS THE PROCEDURE FOR INVESTING IN AN NFO? .......... 10
1.9 WHAT ARE THE INVESTORS RIGHTS & OBLIGATIONS? .......... 11
1.10 POINTS TO REMEMBER .......... 12
2.16 HOW DOES AUM AFFECT PORTFOLIO TURNOVER? .......... 26
2.17 HOW TO ANALYSE CASH LEVEL IN PORTFOLIOS? .......... 26
2.18 WHAT ARE EXIT LOADS? .......... 26
2.19 POINTS TO REMEMBER .......... 27
CHAPTER 6 : TAXATION .......... 52
6.1 CAPITAL GAINS TAXATION .......... 52
6.2 INDEXATION BENEFIT .......... 53
6.3 DIVIDEND DISTRIBUTION TAX .......... 53
6.4 WHY FMPS ARE POPULAR? .......... 54
6.5 POINTS TO REMEMBER .......... 54
CHAPTER 7 : REGULATIONS .......... 56
7.1 OVERVIEW .......... 56
7.2 WHAT IS THE NAME OF INDUSTRY ASSOCIATION FOR THE MUTUAL FUND INDUSTRY? .......... 57
7.3 WHAT ARE THE OBJECTIVES OF AMFI? .......... 57
7.4 PRODUCT LABELLING IN MUTUAL FUNDS RISKOMETER .......... 57
7.5 ADVANTAGES OF MUTUAL FUNDS .......... 58
7.6 WHAT IS A SYSTEMATIC INVESTMENT PLAN (SIP)? .......... 59
7.7 WHAT IS SYSTEMATIC TRANSFER PLAN (STP)? .......... 60
7.8 WHAT IS SYSTEMATIC WITHDRAWAL PLAN (SWP)? .......... 61
7.9 CHOOSING BETWEEN DIVIDEND PAYOUT, DIVIDEND REINVESTMENT AND GROWTH OPTIONS WHICH ONE IS BETTER FOR THE INVESTOR? .......... 61
7.9.1 Growth Option .......... 61
7.9.2 Dividend Payout Option .......... 61
7.9.3 Dividend Reinvestment Option .......... 62
7.10 POINTS TO REMEMBER .......... 63

Note: Candidates are advised to refer to NSE Academy Ltd. ...

CHAPTER 1 MUTUAL FUNDS

1.1 INTRODUCTION

A mutual fund is a professionally managed type of collective investment scheme that pools money from many investors and invests it in stocks, bonds, short-term money market instruments and other securities.
Mutual funds have a fund manager who invests the money on behalf of the investors by buying / selling stocks, bonds etc.

Scheme Name                          Rs. Cr      Increase / Decrease   Growth (%)
Asset Under Management (AUM) (YoY)   1,609,370   294,839               22.43
Asset Under Management (AUM) (MoM)   1,609,370   169,669               11.79

There are various asset classes in which an investor can invest his savings depending on his risk appetite and time horizon, viz. real estate, bank deposits, post office deposits, shares, debentures, bonds etc. While investing in these asset classes an individual would need to study the risk and reward closely.

Example: Mr. X proposes to invest in shares of M/s. Linked Ltd.

Indian Scenario

In India gold has been the single largest form of savings. Bank deposits, post office schemes and other traditional savings instruments have been extremely popular and continue to be
Investors need to understand the nuances of mutual funds, the workings of various schemes before they invest; since their money is being invested in risky assets like stocks/ bonds (bonds also carry risk). The language of the module is kept simple and the explanation is peppered with concept clarifiers and examples. Let us now try and understand the characteristics of mutual funds in India and the different types of mutual fund schemes available in the market. There is a Sponsor (the First tier), who thinks of starting a mutual fund. The Sponsor approaches the Securities & Exchange Board of India (SEBI), which is the market regulator and also the regulator for mutual funds. The mutual fund industry is governed by the SEBI (mutual fund) Regulations, 1996 and such other notifications that may be issued by the regulator from time to time. The sponsor should have sound track record and general reputation of fairness and integrity in all his business transactions. Sound track record shall mean the sponsor should Be carrying out the business of financial services for not less than five years The net worth in the immediately preceding financial year is more than the capital contribution in the asset management company Has profits after depreciation, interest and tax in three of out the five preceding years including the fifth year The sponsor has contributed / contributes not less than 40% of the net worth of the asset management company 7 Once approved by SEBI,. 8 The custodian also participates in a clearing and settlement system through approved depository companies on behalf of mutual funds, in case of dematerialized securities. In India today, securities (and units of mutual funds) are no longer held in physical form but schemes net assets. An important point to note here is that this fee is included in the overall expenses permitted by SEBI. There is a maximum limit to the amount that can be charged as expense to the scheme, and this fee has to be within that limit. 
Thus regulations ensure that beyond a certain limit, investors' money is not used for meeting expenses.
Once these formalities are complete, the investor has to fill a form, which is available with the distributor or online. In an online system, this entire process is carried out electronically, from filling of forms to online payment to allotment of units in the demat account of the investor.
Fund Constituents
The offer document is a legal document and it is the investor's obligation to read the OD carefully before investing. The OD contains all the material information that the investor would require to make an informed decision. It contains the risk factors, dividend policy, investment objective, expenses expected to be incurred by the proposed scheme, the fund manager's experience, historical performance of other schemes of the fund and a lot of other vital information.
CHAPTER 2 : MUTUAL FUND PRODUCTS AND FEATURES
EQUITY FUNDS
A variety of schemes are offered by mutual funds. It is critical for investors to know the features of these products before money is invested in them. Of the total assets under management of all mutual funds, debt funds are the major contributor, which includes income funds and gilt funds.
... buy-back of fund shares / units, thus offering another avenue for investors to exit the fund. Therefore, regulations drafted in India permit investors in close ended funds to exit even before the term is over.
2.4.1 Introduction
Equity funds account for around 30% of the total AUM managed by mutual funds. A scheme might have an investment objective to invest largely in equity shares and equity-related investments like convertible debentures. The investment objective of such funds is to seek capital appreciation through investment in this growth asset. Such schemes are called equity schemes. Equity funds essentially invest the investors' money in equity shares of companies.
Fund managers try and identify companies with good future prospects and invest in the shares of such companies. The prices of listed securities fluctuate based on liquidity, the international scenario and numerous other factors. Therefore, investment in equity funds carries higher risk. It is necessary for an investor to understand the features of equity investments in terms of risk and return before investing. Equity oriented funds are funds that invest the investors' money in equity and related instruments of companies.
Section 115T of the Income Tax Act, 1961 lays down that an equity oriented fund means such a fund where the investible funds are invested by way of equity shares in domestic companies to the extent of more than 65% of the total proceeds of such fund. In the case of equity funds, investors need not pay long term capital gains tax. Hence it is important that this investment norm is met by the fund.
Example: Equity Long Term is a fund hosted by ABC Mutual Fund. This fund has invested 100% of its funds in international equities. Although this fund is also an equity fund from the investor's asset allocation point of view, the tax laws do not recognise such funds as equity funds, and hence investors have to pay tax on the long term capital gains made from such investments.
Equity funds are of various types and the industry keeps innovating to make products available for all types of investors. Relatively safer types of equity funds include Index Funds and diversified Large Cap Funds, while the riskier varieties are the Sector Funds. There are many varieties, like Index Funds, Infrastructure Funds, Power Sector Funds, Quant Funds, Arbitrage Funds, Natural Resources Funds, etc. These funds are explained later. Equity funds do not guarantee any minimum returns.
In terms of the risk barometer for equity funds, index funds are the least risky as they mirror the index stocks, followed by the diversified large cap funds. Mid cap and sector focused funds are considered more risky.
2.5 WHAT IS AN INDEX FUND?
Index Funds invest in stocks comprising indices, such as the Nifty 50, which is a broad based index comprising 50 stocks. There can be funds on other indices which have a large number of stocks, such as the Nifty Midcap 100 or Nifty 500. Here the investment is spread across a large number of stocks. In India today we find many index funds based on the Nifty 50 index, which comprises 50 large, liquid and blue chip stocks. The objective of a typical Index Fund states: "This Fund will invest in stocks comprising the Nifty 50."
Tracking Error
The difference between the returns generated by the benchmark index and the Index Fund is known as tracking error. By definition, Tracking Error is the variance between the daily returns of the underlying index and the NAV of the scheme over any given period.
2.6 WHAT ARE DIVERSIFIED LARGE CAP FUNDS?
Another category of equity funds is the diversified large cap funds. Cap refers to market capitalisation. Market capitalisation refers to the aggregate valuation of a company based on the current market price and the number of shares issued. Accordingly, companies are classified into:
Large cap companies typically the top 100 to 200 stocks by market capitalisation
Mid cap companies stocks below the large caps, which belong to the mid cap segment
Small cap companies typically stocks with a market capitalisation of less than Rs. 5000 cr.
Large cap funds restrict their stock selection to the large cap stocks. It is generally perceived that large cap stocks are those which have sound businesses, strong management, globally competitive products and are quick to respond to market dynamics. Therefore, diversified large cap funds are considered stable and safe. These stocks command high liquidity. However, since equities as an asset class are risky, there is no return guarantee for any type of fund.
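The tracking error defined above can be sketched in a few lines of code. The text defines it via the variance of daily return differences; the sketch below reports the standard deviation of those differences, a common convention. All return series here are invented numbers, not real fund data.

```python
# Hypothetical illustration of tracking error: the dispersion between an
# index fund's daily NAV returns and the underlying index's daily returns.
# The return series below are made-up numbers, not data from a real fund.

def tracking_error(fund_returns, index_returns):
    """Standard deviation of the daily return differences (fund minus index)."""
    diffs = [f - i for f, i in zip(fund_returns, index_returns)]
    mean = sum(diffs) / len(diffs)
    variance = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return variance ** 0.5

index_daily = [0.010, -0.005, 0.002, 0.007, -0.003]   # hypothetical index returns
fund_daily  = [0.009, -0.005, 0.003, 0.006, -0.003]   # hypothetical fund NAV returns

te = tracking_error(fund_daily, index_daily)
print(f"Daily tracking error: {te:.5f}")
```

A fund that replicated its index perfectly would show a tracking error of zero; the closer to zero, the better the index fund is doing its job.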
Diversified large cap funds are actively managed, unlike index funds, which are passively managed. In an actively managed fund, the fund manager pores over data and information, researches the company and the economy, and analyses market trends before taking investment decisions. Risk increases as one moves from large caps to midcaps and finally to small caps.
2.8 WHAT ARE SECTORAL FUNDS?
Funds that invest in stocks from a single sector or related sectors are called sectoral funds. Examples of such funds are Banking Funds, IT Funds, Pharma Funds, Infrastructure Funds, etc. Regulations do not permit funds to invest over 10% of their Net Asset Value in a single company. This is to ensure that schemes are diversified enough and investors are not subjected to concentration risk. This regulation is relaxed for sectoral funds and index funds.
Example: AAA Mutual Fund has a banking sector fund. The fund objective is to generate continuous returns by actively investing in equity and equity related securities of companies in the banking sector and companies engaged in allied activities related to the banking sector.
Example: XYZ Mutual Fund has recently launched a quant fund. The SID (Scheme Information Document) specifies the use of a quantitative model for aspects like:
Stock price parameters based on periodic moving averages of price and market capitalisation
Financial parameters based on key indicators such as EPS, PE, PAT and EBDIT margins (historical and forecasted)
A growth fund aims to produce capital appreciation by investing in growth stocks. Growth funds focus on industries and specific companies that are in a phase of significant revenue growth, rather than on companies that pay out high dividends. These companies are in the growth phase and hence require a holding period of 5-10 years, and a higher risk tolerance is required. The time horizon for return is medium to long term.
Example: PU Mutual Fund has a Growth Companies fund that has an investment objective to invest in companies / stocks with high growth rates or above average potential. The fund managers will follow an active investment strategy, focusing on rapid growth companies (or sectors). The selection of stocks will be based on growth measures such as Enterprise Value/EBITDA (Earnings before Interest, Taxes, Depreciation and Amortization), forward price/sales, and discounted EPS (Earnings per Share). The primary focus will be to identify high growth companies, especially in sectors witnessing above average growth. A combination of top-down (macro analysis to identify sectors) and bottom-up approach (micro analysis to pick stocks within these sectors) will be employed. Switches between companies and sectors will be based on relative valuations, liquidity and growth potential.
In growth investing, the risk is that the stock's price can go down rather fast, while in value investing, the risk is that the investor may have to wait for a really long time before the market values the investment correctly.
2.9.6 ELSS
Equity Linked Savings Schemes (ELSS) are equity schemes where investors get a tax benefit of up to Rs. 1.5 lakhs under section 80C of the Income Tax Act. These are open ended schemes but have a lock-in period of 3 years. These schemes serve the dual purpose of equity investing as well as tax planning for the investor. However, it must be noted that investors cannot, under any circumstances, get their money back before 3 years from the date of investment.
Fund of funds are funds which do not directly invest in stocks and shares but invest in units of other mutual funds which, in their opinion, will perform well and give high returns. Almost all mutual funds offer fund of funds schemes.
Let us now look at the internal workings of an equity fund and what an investor must know to make an informed decision.
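The growth measures named in the example above (EV/EBITDA, forward price/sales, EPS growth) can be combined into a simple screen. Everything in the sketch below — company names, numbers and cut-off thresholds — is invented purely for illustration; real fund managers use far richer models.

```python
# Hypothetical stock screen combining the growth measures mentioned in the
# example above. All company names, figures and cut-offs are invented.

stocks = [
    # (name, EV/EBITDA, forward price/sales, expected EPS growth %)
    ("AlphaCo", 14.0, 3.2, 28.0),
    ("BetaLtd", 22.0, 6.5, 12.0),
    ("GammaInd", 11.5, 2.1, 35.0),
]

def passes_growth_screen(ev_ebitda, fwd_ps, eps_growth,
                         max_ev_ebitda=18.0, max_fwd_ps=5.0, min_eps_growth=20.0):
    """A stock 'passes' if its valuation is reasonable AND its growth is high."""
    return (ev_ebitda <= max_ev_ebitda
            and fwd_ps <= max_fwd_ps
            and eps_growth >= min_eps_growth)

shortlist = [name for name, ev, ps, g in stocks if passes_growth_screen(ev, ps, g)]
print(shortlist)
```

With these made-up numbers, AlphaCo and GammaInd clear the screen while BetaLtd is rejected on valuation: the bottom-up "pick stocks within sectors" step described in the text is, at heart, a filter of this kind.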
AUM is calculated by multiplying the Net Asset Value (NAV, explained in detail later) of a scheme by the number of units issued by that scheme. A change in AUM can happen either due to redemptions or inflows. In the case of sharp market falls, NAVs are expected to move down. This may lead to redemption pressure and the AUM may come down. Conversely, if the outlook on the country and markets is positive, it may lead to an inflow of funds, leading to an overall increase in the AUM. Also, if the fund is able to produce superior returns as compared to the benchmark (e.g. the Nifty), it may result in inflows into the scheme, leading to an increase in the AUM.
The above documents are prepared by the fund house and vetted by SEBI. Investors can download these documents from the mutual fund's website. Investors should understand and analyse them prior to investing. The particulars of the Scheme have been prepared in accordance with the Securities and Exchange Board of India (Mutual Funds) Regulations, 1996 ...
Concept Clarifier NAV
(A worked example showing Other Current Assets, Other Current Liabilities and Deferred Revenue Expenditure entering the NAV computation.)
Concept Clarifier Fund Fact Sheet
After an investor has entered a scheme, he must monitor his investments regularly. This can be achieved by going through the Fund Fact Sheet. This is a monthly document which all mutual funds have to publish. In a nutshell, the fund fact sheet is the document which investors must read, understand and keep themselves updated with.
SEBI has clearly laid down limits for the expenses that can be charged to a scheme. The limits for schemes other than index schemes are specified in slabs of net assets (Rs. crs.), with separate limits for equity schemes and debt schemes. The above percentages are to be calculated on the average daily net assets of the scheme.
The expense limit (including management fees) for index schemes (including Exchange Traded Funds) is 1.5% of average net assets. In the case of a fund of funds scheme, the total expenses of the scheme, including the weighted average of charges levied by the underlying schemes, shall not exceed 2.50 per cent of the average daily net assets of the scheme.
In addition to the limits specified, the following costs or expenses may be charged to the scheme, namely:
brokerage and transaction costs which are incurred for the purpose of execution of trade and are included in the cost of investment, not exceeding 0.12% in the case of cash market transactions and 0.05% in the case of derivatives transactions
expenses not exceeding 0.30% of daily net assets, if the new inflows from such cities as specified by the Board from time to time are at least (i) 30 per cent of gross new inflows in the scheme, or (ii) 15 per cent of the average assets under management (year to date) of the scheme, whichever is higher; provided that if inflows from such cities are less than the higher of sub-clause (i) or sub-clause (ii), such expenses on daily net assets of the scheme shall be charged on a proportionate basis
additional expenses not exceeding 0.20 per cent of daily net assets of the scheme
Any expenditure in excess of the limits specified above shall be borne by the asset management company or by the trustee or sponsors. Mutual funds/AMCs shall launch new schemes under a single plan and ensure that all new investors are subject to a single expense structure. Investors who have already invested as per earlier expense structures based on the amount of investment will be subject to the single expense structure for all fresh subscriptions.
Investors should compare a scheme's expense ratio with that of its peers. The scheme's expense ratio must be tracked over different time periods. Ideally, as net assets increase, the expense ratio of a scheme should come down. Investors today have the option of investing through direct plans.
Since direct plans do not entail distributor commissions, they may have a lower expense ratio.
2.16 HOW DOES AUM AFFECT PORTFOLIO TURNOVER?
If the scheme's performance is consistently in line with or better than its peers even though the AUM is increasing, then it can be a fair indicator that the increased AUM is not a problem for the fund manager.
Contingent Deferred Sales Charge (CDSC) is nothing but a modified form of Exit Load, wherein the investor has to pay different Exit Loads depending upon his investment period. If the investor stays invested beyond the specified period, he will not have to bear any Exit Load. Earlier there was a difference between the sale price and the NAV, the difference being the entry load; however, SEBI has banned entry loads since May 2009. Further, exit loads / CDSC have to be credited back to the scheme immediately, i.e. they are not available to the AMC to bear selling expenses. Upfront commission to distributors is paid by the investor directly to the distributor, based on his assessment of various factors, including the service rendered by the distributor. Currently, equity funds / bond funds redeemed within 1 year are charged a 1% exit load, while liquid funds and money market funds normally have zero exit loads.
There are other types of funds within these broad categories which the investor must be aware of. They include the following:
Index Funds invest in stocks comprising indices, such as the Nifty 50, which is a broad based index comprising 50 stocks.
Large cap funds restrict their stock selection to the large cap stocks.
Midcap funds invest in stocks belonging to the mid cap segment of the market.
Funds that invest in stocks from a single sector or related sectors are called sectoral funds.
Other equity funds include ELSS and fund of funds.
Investments in new fund offers are made on the basis of the offer documents issued by the mutual funds.
These offer documents have two parts:
Scheme Information Document (SID), which has details of the scheme
Statement of Additional Information (SAI), which has statutory information about the mutual fund that is offering the scheme
The Key Information Memorandum (KIM) is a summary of the SID and SAI. As per SEBI regulations, every application form is to be accompanied by the KIM.
Another important concept to be kept in mind is the NAV of the scheme. The NAV or Net Asset Value of a scheme is the market value of the assets of the scheme less all scheme liabilities. NAV per unit is calculated by dividing the value of Net Assets by the outstanding number of units.
After an investor has entered a scheme, he must monitor his investments regularly. This can be achieved by going through the Fund Fact Sheet.
Initial issue expenses are incurred when the NFO is made; these need to be borne by the AMC.
Expense Ratio is defined as the ratio of expenses incurred by a scheme to its Average Weekly Net Assets. It indicates how much of the investors' money goes towards meeting the scheme's expenses.
Portfolio Turnover is the ratio which helps us find how aggressively the portfolio is being churned.
Exit Loads are paid by investors in the scheme if they exit the scheme before a specified time period. Exit Loads reduce the amount received by the investor. Not all schemes have an Exit Load, and schemes do not all have similar exit loads either.
CHAPTER 3 : EXCHANGE TRADED FUNDS (ETFS)
In India, ETFs are available on indices such as the Nifty 50, Bank Nifty, etc. An index ETF is one where the underlying is an index, say the Nifty 50. For further details please check the NSE website (content/equities/etfs/etfs_launched_on_nse.htm).
Investors' money is getting invested today and, over a longer period of time, the power of compounding will turn this saving into a significant contributor to the investor's returns. Consider two schemes, A and B, each with a term of 25 years: Scheme A's CAGR comes out to be 10.32%, while Scheme B's CAGR stands at 11.16%.
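Several of the quantities defined in this chapter — NAV, AUM and CAGR — are simple arithmetic. The sketch below uses entirely hypothetical figures (no real scheme is implied) just to make the formulas concrete.

```python
# Hypothetical numbers illustrating the NAV, AUM and CAGR arithmetic
# described in the text. None of these figures come from a real scheme.

def nav(assets, liabilities, units):
    """NAV per unit = (market value of scheme assets - liabilities) / units."""
    return (assets - liabilities) / units

def aum(nav_value, units):
    """AUM = NAV multiplied by the number of units issued by the scheme."""
    return nav_value * units

def cagr(initial, final, years):
    """Compound annual growth rate."""
    return (final / initial) ** (1.0 / years) - 1.0

scheme_nav = nav(assets=1050.0, liabilities=50.0, units=100.0)  # Rs. 10 per unit
scheme_aum = aum(scheme_nav, 100.0)                             # Rs. 1000

# A hypothetical investment growing from Rs. 10,000 to Rs. 25,000 over 9 years:
growth = cagr(10_000, 25_000, 9)
print(f"NAV = {scheme_nav}, AUM = {scheme_aum}, CAGR = {growth:.2%}")
```

Note the circularity the text relies on: AUM is NAV times units, and NAV itself is net assets divided by units, so AUM is simply the scheme's net assets.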
Investors should take care that they place their orders completely. They should not tell the broker to buy or sell according to the broker's judgement. Investors should also not keep signed delivery instruction slips with the broker, as there may be a possibility of their misuse. Placing signed delivery instruction slips with the broker is similar to giving blank signed cheques to someone.
3.3 WHAT ARE REITs?
REITs or Real Estate Investment Trusts are similar to mutual funds. They invest in real estate assets and give returns to the investor based on the return from the real estate. Like a mutual fund, REITs collect money from many investors and invest the same in real estate properties like offices, residential apartments, shopping malls, hotels and warehouses. These REITs are listed on stock exchanges, and investors can directly buy and sell units on the stock exchanges. REITs are actually trusts, and hence their assets are in the hands of an independent trustee, held on behalf of the investors. The trustee is bound to ensure compliance with applicable laws and to protect the rights of the unit holders. Income takes the form of rentals and capital gains from property, which is distributed to investors as dividends. Money is raised from unit holders through an IPO (Initial Public Offer).
Traditionally, Indians are known to be big buyers of Gold; it is an age old tradition. Gold as an asset class is considered to be safe. This is because gold prices are difficult to manipulate and therefore enjoy better pricing transparency. When other financial markets are weak, gold gives good returns. It also enjoys the benefit of liquidity in case of any emergency. We buy Gold, among other things, for children's marriages. Gold ETFs can be said to be a new age product, designed to suit our traditional requirements: the investor's units are backed by physical gold held securely somewhere on his behalf. Thus his units are as good as Gold.
Say, for example, 1 G-ETF = 1 gm of 99.5% pure Gold; then buying 1 G-ETF unit every month for 20 years would have given the investor a holding of 240 gm of Gold by the time his child's marriage approaches (240 gm = 1 gm/month * 12 months * 20 years). After 20 years the investor can convert the G-ETFs into 240 gm of physical gold by approaching the mutual fund, or sell the G-ETFs in the market at the current price and buy 240 gm of gold. The units are held in the investor's demat account as well. Lastly, the investor will not have to pay any wealth tax on his holdings. There may be other taxes and expenses to be borne from time to time, which the investor needs to bear in mind while buying / selling G-ETFs.
The Gold which the AP deposits for buying the bundled ETF units is known as the Portfolio Deposit. This Portfolio Deposit has to be deposited with the Custodian. A custodian is someone who handles the physical Gold for the AMC. The AMC signs an agreement with the Custodian.
3.6.1 Product Details of Sovereign Gold Bonds
Tradability: Bonds will be tradable on stock exchanges/NDS-OM from a date to be notified by RBI.
SLR eligibility: The Bonds will be eligible for Statutory Liquidity Ratio purposes.
Units may be bought from APs in the secondary market; the seller at the other end may also be a retail investor who wishes to exit. As explained earlier, the custodian maintains a record of all the Gold that comes into and goes out of the scheme.
The price of Gold is derived from the price of Gold in US $/ounce as decided by the London Bullion Markets Association (LBMA) every morning, the conversion factor for ounce to kg, the prevailing USD/INR exchange rate, customs duty, octroi, sales tax, etc. The creation and redemption mechanism also ensures that prices of the ETF remain largely in sync with those of the underlying. Consider a case where the demand for ETFs has increased due to any reason. A rise in demand will lead to a rise in prices, as many people will rush to buy the units, thereby putting upward pressure on the prices.
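The pricing inputs listed above (LBMA USD/ounce price, ounce-to-gram conversion, USD/INR rate, duties and taxes) combine into a simple per-gram landed price. The sketch below is only an illustration: the $2000/oz price, Rs.83 exchange rate and 10% combined duty are invented, and a real scheme's computation would include further charges. One troy ounce is 31.1035 grams.

```python
# Hypothetical sketch of deriving an indicative INR price per gram of gold
# from the inputs named in the text. All rates below are invented examples.

TROY_OUNCE_GRAMS = 31.1035  # grams in one troy ounce

def landed_price_per_gram(usd_per_oz, usd_inr, duty_pct):
    """Indicative INR price of 1 gram of gold, grossed up for duty/taxes."""
    inr_per_gram = (usd_per_oz / TROY_OUNCE_GRAMS) * usd_inr
    return inr_per_gram * (1 + duty_pct / 100.0)

# Invented inputs: $2000/oz LBMA price, Rs.83 per USD, 10% combined duty/taxes.
price = landed_price_per_gram(2000.0, 83.0, 10.0)
print(f"Indicative price: Rs. {price:.2f} per gram")
```

Because a G-ETF unit represents a fixed quantity of gold (1 gm in the text's example), this landed price is essentially what anchors the unit's NAV, and the creation/redemption mechanism keeps the traded price near it.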
Practically any asset class can be used to create ETFs. Globally there are ETFs on Silver, Gold, Indices (SPDRs, Cubes, etc.) and more. In India, we have ETFs on Gold and on indices such as the Nifty 50, Bank Nifty, etc. An index ETF is one where the underlying is an index, say the Nifty 50. An Exchange Traded Fund (ETF) is essentially a scheme where the investor has to buy / sell units from the market through a broker (just as he/she would buy a share). An investor can approach a trading member of NSE and enter into an agreement with the trading member. Buying and selling ETFs requires the investor to have demat and trading accounts. Gold ETFs (G-ETFs) are a special type of ETF which invests in Gold and Gold related securities.
CHAPTER 4 : DEBT FUNDS
Every debt paper carries a Coupon. The Coupon represents the rate of interest that the borrower will pay on the Face Value. Thus, if the Coupon is 8% on a paper of Face Value Rs.100, it means that the borrower will pay Rs.8 (8/100 * 100) as interest every year. Finally, the question arises: for how long has the borrower taken the loan? This can be understood by looking at the Maturity.
Interest rates can be either fixed or floating. Under fixed interest rates, the interest rate remains fixed throughout the tenure of the loan. Under floating rate loans, the rate of interest is a certain percentage over a benchmark. Example: A Ltd. has borrowed against a debt instrument, the rate being G-Sec plus 3%. Therefore, if the G-Sec yield moves up, the rate of interest moves up, and if the G-Sec yield moves down, the interest rate moves down.
Prima facie, debt instruments look risk free. However, two important questions need to be asked here:
1. What if interest rates rise during the tenure of the loan?
2. What if the borrower fails to pay the interest and/or fails to repay the principal?
In case interest rates rise, the investor can sell existing paper and move toward higher interest rates in newer debt paper.
However, this should be done only when the investor is of the opinion that interest rates will continue to rise in the future; otherwise frequent trading in debt paper will be costly and cumbersome. Alternatively, the interest rate risk can be partly mitigated by investing in floating rate instruments. In this case, in a rising interest rate scenario the rates move up, and in a falling interest rate scenario the rates move down.
AAA: These are the safest among corporate debentures. This rating implies that investors can safely expect to earn interest regularly, and the probability of default on their principal is as good as nil.
D represents default. Such companies are already in default, and only liquidation of assets will result in realisation of principal and/or interest.
The price of an instrument (equity / bond) is nothing but the present value of the future cash flows (for understanding the meaning of present value, please refer to the NCFM module Financial Markets: A Beginner's Module).
A = P * (1 + r)^t
Substituting P = 100, r = 8% and t = 10 years, we get the value of A as Rs. 215.89.
Consider a bond bought on 1-Jan-10 for Rs.950, with the following cash flows:
1-Jan-10 (950)
31-Dec-10 80
31-Dec-11 80
31-Dec-12 1080
YTM 10.01%
In spite of its shortcomings, YTM is an important indicator of the total return from a bond. As mentioned earlier, the price of a bond is the present value of its future cash flows. Thus, if all the present values go down (due to an increase in YTM), then the price of the bond goes down. By using the discounting formula we can find the PVs of all the 3 cash flows of a 3-year, 8% coupon bond of Face Value Rs.100. The investor will get Rs.8 as the interest payment each year, whereas in the final year the investor will also get the Rs.100 principal back (along with the final Rs.8 coupon). Now, if interest rates in the market rise to 9% immediately after the bond is issued, we will have to use 9% as the rate of discounting (investors would like to earn 9% from this bond). In that case, the present values of the cash flows fall, and the price of the bond falls below its face value. Debt schemes such as Fixed Maturity Plans typically communicate an indicative yield to investors; an important point to note here is that indicative yields are pre-tax.
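The bond arithmetic in this section can be reproduced in a few lines. The sketch below uses the figures from the text: Rs.100 compounded at 8% for 10 years, a 3-year 8% coupon bond of Face Value Rs.100 discounted at 8% and then 9%, and the YTM of a bond bought at Rs.950 with cash flows of 80, 80 and 1080.

```python
# Reproducing the bond arithmetic from this section, using the text's figures.

def compound(principal, rate, years):
    """A = P * (1 + r)^t"""
    return principal * (1 + rate) ** years

def bond_price(face, coupon_rate, years, discount_rate):
    """Price = present value of coupons plus redemption of the face value."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + discount_rate) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + discount_rate) ** years

def ytm(price, cash_flows, lo=0.0, hi=1.0, tol=1e-8):
    """Solve (by bisection) for the rate equating the PV of cash flows to price."""
    def pv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if pv(mid) > price:
            lo = mid  # PV too high: the discount rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

a = compound(100, 0.08, 10)                 # ~Rs. 215.89, as in the text
par = bond_price(100, 0.08, 3, 0.08)        # Rs. 100 when discounted at the coupon rate
below_par = bond_price(100, 0.08, 3, 0.09)  # falls below Rs. 100 when rates rise to 9%
y = ytm(950, [80, 80, 1080])                # ~10.01%, as in the text
print(round(a, 2), round(par, 2), round(below_par, 2), f"{y:.2%}")
```

Two relations from the text fall out directly: a bond discounted at its own coupon rate prices exactly at par, and raising the discount rate to 9% pushes the price below Rs.100 — the "interest rates up, bond prices down" rule.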
Investors will get lesser returns after they include the tax liability.
Capital protection oriented funds are close ended funds which invest in debt as well as equity or derivatives. The scheme invests a portion of the investors' money in debt instruments, with the balance going into equity or derivatives (for an understanding of derivatives, please refer to the NCFM modules Financial Markets: A Beginner's Module or Derivatives Markets (Dealers) Module). It is important to note here that although the name suggests Capital Protection, there is no guarantee that at all times the investor's capital will be fully protected.
4.5.5 MIPs
Monthly Income Plans (MIPs) are hybrid funds, i.e. they invest in debt paper as well as equities. Investors who want a regular income stream invest in these schemes. The objective of these schemes is to provide regular income to the investor by paying dividends; however, there is no guarantee that these schemes will pay dividends every month. Investment in the debt portion provides for the monthly income, whereas investment in the equities provides for the extra return which is helpful in minimising the impact of inflation.
4.6 POINTS TO REMEMBER
Debt funds are funds which invest money in debt instruments such as short and long term bonds, government securities, t-bills, corporate paper, commercial paper, call money, etc.
Any debt paper will have Face Value, Coupon and Maturity as its standard characteristics.
Interest rate risk can be reduced by adjusting the maturity of the debt fund portfolio.
Credit Risk or Risk of Default refers to the situation where the borrower fails to honour either one or both of his obligations of paying regular interest and returning the principal on maturity.
The price of an instrument (equity / bond) is nothing but the present value of the future cash flows.
An important factor in bond pricing is the Yield to Maturity (YTM). This is the rate applied to the future cash flows (coupon payments) to arrive at their present value.
An important relation to remember: as interest rates go up, bond prices come down.
Fixed Maturity Plans are essentially close ended debt schemes.
The money received by the scheme is used by the fund managers to buy debt securities with maturities coinciding with the maturity of the scheme.
Capital protection funds are close ended funds which invest in debt as well as equity or derivatives.
Balanced funds invest in debt as well as equity instruments. These are also known as hybrid funds.
Monthly Income Plans (MIPs) are also hybrid funds, i.e. they invest in debt paper as well as equities. Investors who want a regular income stream invest in these schemes.
Child Benefit Plans are debt oriented funds with a very small component invested in equities. The objective here is capital protection and steady appreciation as well.
CHAPTER 5 : LIQUID FUNDS
Liquid mutual funds are schemes that make investments in debt and money market securities with maturities of up to 91 days only. In the case of liquid mutual funds, the cut-off time for receipt of funds is an important consideration. As per SEBI guidelines, the following cut-off timings shall be observed by a mutual fund in respect of purchase of units in liquid fund schemes, and the following NAVs shall be applied for such purchases:
where the application is received up to 2.00 p.m. on a day and funds are available for utilisation before the cut-off time without availing any credit facility, whether intra-day or otherwise the closing NAV of the day immediately preceding the day of receipt of application;
where the application is received after 2.00 p.m. on a day and funds are available for utilisation on the same day without availing any credit facility, whether intra-day or otherwise the closing NAV of the day immediately preceding the next business day;
irrespective of the time of receipt of application, where the funds are not available for utilisation before the cut-off time without availing any credit facility, whether intra-day or otherwise the closing NAV of the day immediately preceding the day on which the funds are available for utilisation.
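The three cut-off cases above amount to a small decision rule. The sketch below encodes them as stated, but is deliberately simplified: it ignores holidays and business-day adjustments and only reports which day's closing NAV applies, so it should be read as an illustration of the rule, not the regulation itself.

```python
# Simplified sketch of the liquid-fund purchase cut-off rules stated above.
# Business-day/holiday handling is ignored; the function only names which
# day's closing NAV applies. The 2.00 p.m. cut-off is taken from the text.

from datetime import time

CUT_OFF = time(14, 0)  # 2.00 p.m.

def applicable_nav(received_at, funds_before_cutoff, funds_same_day):
    """Which closing NAV applies to a liquid fund purchase application."""
    if received_at <= CUT_OFF and funds_before_cutoff:
        return "day preceding receipt of application"
    if received_at > CUT_OFF and funds_same_day:
        return "day preceding the next business day"
    # Irrespective of receipt time, funds not available before the cut-off:
    return "day preceding availability of funds"

print(applicable_nav(time(11, 30), True, True))    # before cut-off, funds in
print(applicable_nav(time(15, 45), False, True))   # after cut-off, funds same day
print(applicable_nav(time(11, 30), False, False))  # funds not yet available
```

The practical point the text goes on to make follows from the first branch: a corporate treasurer who wants that day's preceding NAV must ensure both the application and the funds land before 2.00 p.m.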
This is relevant since corporates park their daily excess cash balances with liquid funds.
5.2 VALUATION OF SECURITIES
1) All money market and debt securities, including floating rate securities, with residual maturity of up to 60 days shall be valued at the weighted average price at which they are traded on the particular valuation day. When such securities are not traded on a particular valuation day, they shall be valued on an amortisation basis.
2) All money market and debt securities, including floating rate securities, with residual maturity of over 60 days ...
3) The approach in the valuation of non traded debt securities is based on the concept of using spreads over the benchmark rate to arrive at the yields for pricing the non traded security:
a. A risk free benchmark yield is built using government securities as the base.
b. A matrix of spreads (based on credit risk) is built for marking up the benchmark yields.
c. The yields as calculated above are marked up / marked down for illiquidity risk.
d. The yields so arrived at are used to price the portfolio.
In the case of floating rate paper, a rise in market interest rates does not cause the price of the paper to fall, as the investor is compensated by getting a higher coupon, in line with the ongoing market interest rates. Investors prefer Floating Rate funds in a rising interest rate scenario.
Just as in equities we have indices for Large caps, Midcaps and Small caps, similarly in bonds we have indices depending upon the maturity profile of the constituent bonds. These indices are published by CRISIL, e.g. the CRISIL long term bond index, the CRISIL liquid fund index, etc.
1 CIR/IMD/DF/03/2015, April 30, 2015
5.6 POINTS TO REMEMBER
Liquid funds occupy an important position as an investment option for individuals and corporates to park their short term liquidity. Liquid mutual funds are schemes that make investments in debt and money market securities with maturities of up to 91 days only. In the case of liquid mutual funds, the cut-off time for receipt of funds is an important consideration.
Floating Rate Schemes are schemes where the debt paper has a Coupon which keeps changing as per the changes in interest rates.
Portfolio churning in liquid schemes happens more often due to the short term nature of the securities invested in.
It is important for mutual funds to ensure that sound risk management practices are applied, so that the portfolio of liquid funds and money market funds is sound and any early warnings can be identified.
CHAPTER 6 : TAXATION
To understand capital gains taxation, the definitions of equity and debt schemes must be understood; similarly, the difference between Long Term and Short Term must also be understood.
1. Equity schemes: As per SEBI Regulations, any scheme which has a minimum of 65% of its average weekly net assets invested in Indian equities is an equity scheme. If the mutual fund units of an equity scheme are sold / redeemed / repurchased after 12 months, the profit is exempt from tax. However, if units are sold before 12 months, it results in a short term capital gain, and the investor has to pay 15% as short term capital gains tax. While exiting the scheme, the investor will have to bear a Securities Transaction Tax (STT) @ 0.001% of the value of the selling price.
2. Mutual fund schemes other than equity, i.e. debt funds, liquid schemes, gold ETFs, short term bond funds, etc.: In case such units are sold within 36 months, the gain is treated as a short term capital gain. The same is added to the income of the tax payer and is taxed as per the applicable tax slab, including applicable surcharge and cess, depending on the status of the tax payer. This is known as taxation at the marginal rate. Long term capital gains arise when the units are sold beyond 36 months. Here the taxation rules are:
o For a resident investor 20% (plus surcharge and cess as applicable) (with indexation)
o For an FII 10% basic tax (plus surcharge and cess as applicable) on long term capital gains (without indexation)
For example, suppose an investor bought units at Rs. 10 and sold them at Rs. 30, and inflation during the holding period was 12%. The indexed cost of acquisition (the cost in today's prices) would be Rs. 10 * (1 + 12%) = Rs. 11.2.
So his profit would not be Rs. 20, but Rs. 30 - Rs. 11.2 = Rs. 18.8. The cost inflation index is notified by the Central Government (from 1981 up to 2015-16). The same is used by the tax payer for calculating long term capital gains.

Example: An investor purchased mutual fund units in January 2006 for Rs.10,000. The same was sold in the previous year for Rs.25,000. The long term capital gains tax applicable is as follows:

FII - Without availing the indexation benefit - pay 10% on Rs.15,000 (Rs.25,000 - Rs.10,000) = Rs.1,500
Resident - Calculate the indexed cost of acquisition (Rs.10,000 X 1081/497) = Rs.21,751; capital gains = Rs.25,000 - Rs.21,751 = Rs.3,249; tax @ 20% on Rs.3,249 = Rs.650

The rates for DDT are as follows:
For individuals and HUF - 25% (plus surcharge and other cess as applicable)
For others - 30% (plus surcharge and other cess as applicable)
On dividend distributed to a non-resident or to a foreign company by an Infrastructure Debt Fund - 5% (plus surcharge and other cess as applicable)

Consider a case where Investor A invests Rs.100,000 in a bank fixed deposit @ 9% for 3 years and Investor B invests Rs.100,000 in a 3-year FMP. The indicative yield of the FMP is also assumed to be 9%. We shall analyze the tax benefit of investing in an FMP. For Investor A, the interest income per annum is Rs.100,000 X 9% = Rs.9,000. Each year the investor would have to pay tax of Rs.2,700 (30%, assuming he is taxed at the maximum marginal rate). Total tax payable in 3 years is Rs.8,100. For Investor B, since the investment is over 36 months, it would qualify as long term capital gains. When the investor entered the fund, the cost inflation index was at 939, and when he exited at maturity the cost inflation index had risen to 1081.
Thus the new indexed cost of acquisition will become Rs.100,000 X 1081/939 = Rs.115,122. Since we have taken the benefit of indexation, the applicable tax rate will be 20% (surcharge / cess excluded for the calculation). The point to be observed here is that the FMP gives a higher return (post tax) as compared to a bank FD. This is true only if the investor is in the 30% tax bracket. However, bank fixed deposits offer a premature withdrawal facility; hence they offer better liquidity as compared to FMPs.

Under section 10(23D) of the Income Tax Act, 1961, income earned by a mutual fund registered with SEBI is exempt from income tax.

…Gains, Securities Transaction Tax (STT) and Dividends point of view. Tax rules differ for equity and debt schemes, and also for individuals, NRIs, OCBs and corporates. Investors also get a benefit under section 80C of the Income Tax Act if they invest in a special type of equity scheme, namely an Equity Linked Savings Scheme. Capital gains tax must be paid on all mutual fund schemes except equity schemes. Indexation is a procedure by which the investor can get benefit from the fact that inflation has eroded his returns. The dividend declared by mutual funds in respect of the various schemes is exempt from tax in the hands of investors. In the case of debt mutual funds, the AMCs are required to pay Dividend Distribution Tax (DDT) out of the distributable income.

CHAPTER 7 : REGULATIONS

7.1 OVERVIEW

Regulations ensure that schemes do not invest beyond a certain percent of their NAVs in a single security.
Some of the guidelines regarding these are given below:

- No scheme can invest more than 10% of its NAV in rated debt instruments of a single issuer; this limit may be extended to 12% of NAV with the prior approval of the Board of Trustees and the Board of the Asset Management Company.2
- No scheme can invest more than 10% of its NAV in unrated paper of a single issuer, and total investment by any scheme in unrated papers cannot exceed 25% of the NAV.
- No mutual fund scheme shall invest more than 30% in money market instruments of a single issuer, provided that such limit shall not be applicable for investments in Government securities, treasury bills and collateralized borrowing and lending obligations.
- No fund, under all its schemes, can hold more than 10% of a company's paid-up capital carrying voting rights.
- No scheme can invest more than 10% of its NAV in equity shares or equity related instruments of a single company, provided that the limit of 10% shall not be applicable for investments in the case of an index fund or a sector or industry specific scheme.
- If a scheme invests in another scheme of the same or a different AMC, no fees will be charged. Aggregate inter-scheme investment cannot exceed 5% of the net asset value of the mutual fund.
- No scheme can invest in unlisted securities of its sponsor or its group entities. Schemes can invest in unlisted securities issued by entities other than the sponsor or the sponsor's group: open ended schemes can invest a maximum of 5% of net assets in such securities, whereas close ended schemes can invest up to 10% of net assets in such securities. Schemes cannot invest in listed entities belonging to the sponsor group beyond 25% of their net assets.
Total exposure of debt schemes of mutual funds in a particular sector (excluding investments in bank CDs, CBLO, G-Secs, T-Bills, short term deposits of scheduled commercial banks and AAA rated securities issued by Public Financial Institutions and Public Sector Banks) shall not exceed 25% of the net assets of the scheme. An additional exposure to the financial services sector not exceeding 5% of the net assets of the scheme shall be allowed, only by way of an increase in exposure to Housing Finance Companies (HFCs) rated AA and above and registered with the National Housing Bank (NHB).

2 SEBI/HO/IMD/DF2/CIR/P/2016/35

Total exposure of debt schemes of mutual funds in a group (excluding investments in securities issued by the Public Sector) shall not exceed 20% of the net assets of the scheme. Such investment limit may be extended to 25% of the net assets of the scheme with the prior approval of the Board of Trustees.3

There are many other mutual fund regulations which are beyond the purview of this module. Candidates are requested to refer to the AMFI-Mutual Fund (Advisors) Module for more information.

There shall be a pictorial depiction of risk, named the riskometer, which shall appropriately depict the level of risk in any scheme.

3 SEBI/HO/IMD/DF2/CIR/P/2016/35, February 15, 2016
4 CIR/IMD/DF/4/2015, April 30, 2015

[Figure: riskometer depicting a scheme having moderate risk]

Mutual funds may product label their schemes on the basis of the best practice guidelines issued by the Association of Mutual Funds in India (AMFI) in this regard.

An investor with limited funds might be able to invest in only one or two stocks / bonds, thus increasing his / her risk. However, a mutual fund will spread its risk by investing in a number of sound stocks or bonds. A fund normally invests in companies across a wide range of industries, so the risk is diversified. Mutual Funds regularly provide investors with information on the value of their investments.
Mutual Funds also provide complete portfolio disclosure of the investments made by various schemes, and also the proportion invested in each asset type. Mutual Funds offer investors a wide variety to choose from: an investor can pick a scheme depending upon his risk / return profile. All Mutual Funds are registered with SEBI and they function within the provisions of strict regulations designed to protect the interests of the investor.

With the in-built mechanism of SIP, the investor's average cost reduces, as can be seen from the chart below.

There is a small section of investors, like domestic staff, drivers and other employees earning low incomes, who may not have PAN cards or other documentation required for investing in mutual funds. They are advised by their employers to invest in SIPs. SEBI, in order to facilitate their investments, has withdrawn the requirement of PAN for SIPs where investments are not over Rs.50,000/- in a financial year. Such instalments are called micro SIPs.

…exit loads. This is known as STP.

7.8 WHAT IS SYSTEMATIC WITHDRAWAL PLAN (SWP)?

SWP stands for Systematic Withdrawal Plan. Here the investor invests a lump sum … the investor's capital goes down, whereas in an MIP the capital is not touched and only the interest is paid to the investor as dividend.

…less due to Dividend Distribution Tax. In the case of the Dividend Reinvestment Option, he will get a slightly lesser number of units, and not exactly 120, to the extent of the Dividend Distribution Tax. In the case of the Dividend Payout option, the investor will lose out on the power of compounding from the second year onwards.

A = P * (1 + r)^t

Where A = amount, P = principal invested, r = rate of return and t = time period.

7.10 POINTS TO REMEMBER

Regulations ensure that schemes do not invest beyond a certain percent of their NAVs in a single security. AMFI (Association of Mutual Funds in India) is the industry association for the mutual fund industry in India, which was incorporated in the year 1995.
The product labeling in mutual funds shall be based on the level of risk, which is represented pictorially. Mutual funds have various advantages like professional management, expert fund managers, investment through small amounts, etc. Systematic investment plans help the investor invest a certain sum of money every month; this helps in regular saving as well as evening out market fluctuations over the period of investment. SEBI, in order to facilitate investments in SIPs by small investors, has withdrawn the requirement of PAN for SIPs where investments are not over Rs.50,000/- in a financial year. Such instalments are called micro SIPs. Transfer of funds from one mutual fund scheme to another at regular intervals is referred to as a systematic transfer plan. In a Systematic Withdrawal Plan the investor invests a lump sum amount and withdraws some money regularly over a period of time. Investors must clearly understand the various options in mutual fund schemes, like dividend payout, dividend reinvestment and growth options, and choose the one that helps them achieve their goal.

CHAPTER 8 : PERFORMANCE EVALUATION

8.1 OVERVIEW

It is important to evaluate the performance of the mutual fund that you have invested in. For this, you need certain benchmarks and performance evaluation methods. Such performance can be quantified with the mathematical calculation of the historical returns. These measure the returns of the funds compared to the risk taken over a period of time. If two funds have the same percentage return over a period of time, the fund with the lesser risk has the higher risk-adjusted return.

Benchmark: This is a standard measurement against which the fund's returns are compared. The benchmark helps to gauge how the fund has performed against the market. Historical returns of the funds will help you determine a relevant benchmark for your fund. For example, returns of index funds are compared with the performance of the index.
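The benchmark comparison and risk-adjusted return idea described above can be sketched as a small calculation. This is an illustrative example only, not part of the module; the monthly return figures for the fund and its benchmark are invented, and the ratio shown is a simple Sharpe-style measure used purely to demonstrate the concept:

```python
from statistics import mean, stdev

def excess_returns(fund, benchmark):
    """Periodic fund returns minus benchmark returns (in %)."""
    return [f - b for f, b in zip(fund, benchmark)]

def risk_adjusted(fund, benchmark):
    """Mean excess return divided by its volatility.

    A Sharpe-style ratio: two funds with the same average
    outperformance differ here if one is more volatile.
    """
    ex = excess_returns(fund, benchmark)
    return mean(ex) / stdev(ex)

# Made-up monthly returns (%) for a fund and its benchmark index
fund      = [1.2, -0.5, 2.0, 0.8, -1.0, 1.5]
benchmark = [1.0, -0.8, 1.6, 0.5, -1.2, 1.1]

print(mean(excess_returns(fund, benchmark)))  # average monthly outperformance
print(risk_adjusted(fund, benchmark))         # outperformance per unit of risk
```

With these made-up numbers the fund beats its benchmark on average every month, and the ratio rewards the fact that the outperformance is steady rather than erratic.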
How do mutual funds of the same category compare? Funds of the same category are compared and their returns judged to see how each fund is performing. If the fund has better quality stocks in its portfolio, it has the ability to get better returns on the capital invested. If returns are better, performance is better. The fund manager is the person who makes the investment decisions and stock selection in the portfolio. The best way to evaluate a fund manager is to track past performance.

Brown colour: Brown indicates a high risk product. All equity funds, such as diversified funds, sectoral funds, index funds, large-cap funds and small-cap funds, will carry a brown colour code since the risk component in such schemes is high. PPFAS Long Term Value Fund falls under this category.

Yellow colour: Yellow indicates medium risk. Hybrid products such as Monthly Income Plans and balanced funds fall under this category.

Blue colour: Blue indicates a low risk investment. Debt products like Fixed Maturity Plans, short term funds, gilt funds and income funds come under this category.

Also, product labelling mandates all mutual funds to label their schemes on certain parameters. For example, PPFAS Mutual Fund has a label stating the investment objective of the scheme as "To seek and generate long-term capital growth from an actively managed portfolio primarily of equity and equity related securities." As per the guidelines, every mutual fund scheme should carry a disclaimer that "Investors should consult their financial advisers if in doubt whether this scheme is suitable for them."

CHAPTER 9 : GROWTH OF ONLINE PLATFORMS FOR MUTUAL FUNDS

9.1 NMF II

SEBI has allowed mutual fund distributors to use Exchange infrastructure for facilitating mutual fund transactions for their clients. For this, NSE has developed an online platform, NMF II.
This is an online platform which facilitates subscription, redemption, Systematic Investment Plan (SIP), Systematic Withdrawal Plan (SWP), Systematic Transfer Plan (STP), switch and other transactions in mutual fund units. NSE had launched the MFSS platform for facilitating mutual fund transactions by its members. Subsequently, in October 2013, SEBI allowed the use of Exchange infrastructure by distributors. NMF II is a platform for facilitating mutual fund transactions by distributors. At present, MFSS and NMF II are different platforms; at a later stage, once all the key features of MFSS are made available in NMF II, the MFSS platform may get merged into the new NMF II. NMF II is a web application and it can be accessed online from anywhere using a standard internet connection.

The platform has a provision to enter details pertaining to non-financial information updations; however, the request for updation of non-financial information, along with the relevant documents, has to be submitted at the service centre for onward submission to the RTAs. The investor login does not have a provision for updation of non-financial information.

9.3 KYC

The investor has to be KYC compliant to be able to transact on the platform. For all new investors with fresh KYC, for whom the KYC status is not verified or submitted (as reflected on the KRA system, which is separate from the MF platform), the distributor shall do the initial due diligence / in-person verification and upload the KYC information and supporting scanned documents on the KRA system directly. The KRA shall process the KYC application, verify the documents and provide the KYC acknowledgement to the investor. The investor will be eligible to invest subsequent to the receipt of the KYC acknowledgement from the KRA. NMF II supports payment of the subscription amount by cheque, demand draft, online payment through RTGS/NEFT, internet banking and debit card.

o Multiple modes to make payment, i.e.
cheque, net banking, RTGS/NEFT, debit card.

[Screenshots: Login Screen; Transaction Slip; Redemption Transaction; Switch Transaction; Systematic Registrations; Portfolio Statements; Transaction Listing; Folio Enquiry]

9.8 CONCLUSION

NMF II is a web based online platform on NSE for the trading of mutual fund units. It has been given to distributors of mutual funds as per SEBI directives, and it has many features and advantages for investors using it.

NCFM MODEL TEST PAPER - MUTUAL FUNDS : A BEGINNERS MODULE

Q:1 For anybody to start a mutual fund, relevant experience in financial services is mandatory. [ 2 Marks ] (a) TRUE (b) FALSE
Q:3 The sponsor registers the mutual fund with SEBI after forming the trust. [ 2 Marks ] (a) FALSE (b) TRUE
Q:9 Investors are mutual, beneficial and proportional owners of the scheme's assets. [ 2 Marks ] (a) TRUE (b) FALSE
Q:10 Investors have a right to be informed about changes in the fundamental attributes of a scheme. [ 2 Marks ] (a) TRUE (b) FALSE
Q:11 A scheme with lower NAV is always better than a scheme with higher NAV. [
Q:16 Offer Document has to be provided by the advisor along with the application form. [ 2 Marks ] (a) TRUE (b) FALSE
Q:18 Fund fact sheet gives comparison of performance of each scheme with its benchmark. [ 2 Marks ] (a) TRUE (b) FALSE
Q:19 Expense Ratio = Expenses / Average Weekly Net Assets. [ 2 Marks ] (a) TRUE (b) FALSE
Q:20 Among equity funds, risk is highest for index funds. [ 2 Marks ] (a) TRUE (b) FALSE
Q:23 A scheme has average weekly net assets of Rs. 324 cr and has annual expenses of Rs. 3.24 cr; its expense ratio is [ 2 Marks ] (a) 1% (b) 10% (c) Can't
Q:27 Gains made from equity funds are not liable for long term capital gains tax. [ 2 Marks ] (a) TRUE (b) FALSE
Q:28 A 100% international equity fund is similar to a debt fund from the taxation viewpoint.
[ 2 Marks ] (a) TRUE (b) FALSE
Q:34 Money Markets refers to that part of the debt market where the maturity is [ 2 Marks ] (a) less than 1 year (b) less than 1 month (c) less than 6 months (d) more than 1 year
Q:35 Long term capital gains will not be charged for international funds with minimum 65% in Indian equities. [ 2 Marks ] (a) FALSE (b) TRUE

NOTE : THIS IS A SAMPLE TEST. THE ACTUAL TEST WILL CONTAIN 60 QUESTIONS.
Talk:Tag:tourism=camp site

Contents
1 Miscellaneous
2 Scout camps (gender distinction)
3 Further specifications
4 Seasonal Opening
5 Further ideas
6 How to tag a non-dedicated campsite?
7 Set impromptu deprecated
8 entrance/reception marking
9 key: rating of campsite (1-5 stars)
10 Private camp sites with restricted access
11 Dev of Other Features Table
12 Camping for overlanders
13 More Amenities -- Bear Wire?

Miscellaneous

On this page, the tag camp_site is applicable to nodes only. On the Key:tourism page, it is applicable to nodes and areas. Which page is right? Osmarender doesn't seem to take the camp_site into account as an area.
- To further this question, it seems that some camp site tags only apply to nodes according to the wiki. For example, I want to add a scout camp which is fairly large in area and encompasses multiple buildings and other facilities. I was going to tag it the same way you tag schools, but according to the wiki the scout=yes tag only applies to nodes, not areas. Is this an error? Can you apply the scout=yes tag to an area? Exiton 22:06, 30 January 2012 (UTC)
- The best way would be to set it up as a site relation --Jan van Bekkum (talk) 14:55, 10 February 2015 (UTC)

How can you add the contact telephone number of the campsite? Robneild 09:09, 14 September 2008 (UTC)

I like the idea of adding information to the camp site node (or area) and differentiating between different kinds. Since there are 4 different icons proposed, the tags should be equivalent to these icons. Why don't we introduce a scale scheme, perhaps like this: type=1/2/3/4? Alternative: shouldn't there be a tag service=yes/no? And what does tents=yes mean? That only tents are permitted? --Geogast 13:30, 17 February 2009 (UTC)
- Maybe some sort of logarithmic scale? scale=1 : up to 10 people, scale=2 : up to 100 people, scale=3 : up to 1000 people?
In the Netherlands we have the so-called Paalkampeerterreinen (pole campsites), where only 3 tents at the same time are allowed (approx. 10 people) and where one camps in a circle around a pole, just somewhere in the wild... Tents=yes means that at least tents are allowed, I guess... GercoKees 18:16, 23 February 2009 (UTC)
- Why not "tents=100" if there is space for about 100 tents? Would be much easier than inventing a new system with scales -- Skunk 17:14, 5 April 2009 (UTC)
- Indeed, why not? tents=yes means tents are allowed. It says only that and implies nothing else. tents=100 means tents=yes and also that there is space for 100 of them. AlaskaDave (talk)
- This was proposed in the past: see Tags for the whole campsite --Jan van Bekkum (talk) 15:00, 10 February 2015 (UTC)

Scout camps (gender distinction)

I like the idea of noting scout camps with "scout=yes", but I think it could be slightly more specific. I propose "scout=boy" for camps which are used primarily by organizations such as the Boy Scouts of America, and "scout=girl" for camps used primarily by organizations such as the Girl Scouts of the USA. (There is one of each variety in my area.) The existing "scout=yes" can still be used for camps which are used by scouts of both genders. Vid the Kid 02:32, 28 June 2009 (UTC)
- Surely scout camp sites (or those operated by other organisations) count as private areas, and therefore wouldn't normally be on a public map? But if it was necessary to note these, surely it would be better to use a standard operator name. Scout=yes is meaningless to those who don't know how your particular country's organisation works (indeed in writing this I'm jumping to assumptions about your boy scout organisation being one which operates private sites for scouts only). Rostranimin 19:31, 14 February 2012 (UTC)

Further specifications

caravan=yes/no if the site allows caravans or not (untagged means no information), cabins=yes/no/number for cabins.
Can probably add even more information, as the level of service on camp sites varies a lot. Some sites offer only a cold shower, while others have hot showers as well as water and electrical power for caravans. --Skippern 03:44, 28 June 2009 (UTC)
- You also need the converse for Tag:tourism=caravan_site to indicate that a caravan site accepts tents. ChrisB 16:39, 20 July 2009 (UTC)
- It is more than that: see camping for overlanders --Jan van Bekkum (talk) 15:03, 10 February 2015 (UTC)

Seasonal Opening

Is there a tag (apart from note) to add the information that a camping site is not open throughout the entire year? --Creando 16:19, 16 August 2009 (UTC)
- Please have a look at key:opening hours --!i! 08:32, 6 December 2009 (UTC)

Further ideas

- Based upon these ideas there is a proposal: Proposed_features/Extend_camp_site

We have a cooperation here in the German state of Mecklenburg-Vorpommern with a camping foundation. They already have a very poor internet map and have allowed us to add camping sites from their catalog. There they attribute every camp with a few icons; that could give ideas for us. A few are very useful for direct tagging; a few are useful but can already be tagged as separate nodes. --!i! 13:01, 29 November 2009 (UTC)

How to tag a non-dedicated campsite?

On hiking routes there are often non-dedicated campsites. These are good places for impromptu camping, but don't have any improvements or facilities. How can we tag the differentiation from a dedicated campsite? --Rudolf 09:10, 28 June 2012 (BST)
- Solved. I had overlooked backcountry=yes.
- Can someone explain the difference between backcountry=yes and impromptu=yes? According to Wikipedia, I think that's the same thing. --Rudolf 10:56, 28 June 2012 (BST)
- Vote for backcountry=yes from my side! This tagging would be consistent with the one described on the hiking page.
--Lpirl (talk) 16:14, 7 August 2013 (UTC)
- I oppose the proposal as it is too restrictive: overlanders also use parkings and police stations as unofficial campings. See camping for overlanders below. --Jan van Bekkum (talk) 15:11, 10 February 2015 (UTC)
- IMHO that tag mostly transforms the original meaning of what a "camp_site" is (just like impromptu=yes). I would find it much easier for data usage to split camp_site into "with facilities" (tourism=camp_site) and "without" (something like tourism=bivouac_site). sletuffe (talk) 01:28, 15 February 2016 (UTC)

Set impromptu deprecated

IMHO the tag impromptu=yes/no is contradictory. Impromptu means: "Prompted by the occasion rather than being planned in advance." If we map a campsite then it is not impromptu anymore, so this tag doesn't make much sense. What do you think? --Rudolf 08:40, 2 July 2012 (BST)
- Right, one can never know if someone planned to go to a campsite or not (even if it's not mapped) --Lpirl (talk) 16:05, 7 August 2013 (UTC)
- The tag backcountry=yes is mainly used by hikers and cyclists. It does not cover the needs of overlanders. Although impromptu=yes is not intuitive, it therefore should not be deprecated. See the section about overlanding below. --Jan van Bekkum (talk) 13:21, 10 February 2015 (UTC)
- I agree that impromptu=yes is a bit confusing, and too far from a camp_site definition. I'd favor moving those sites to a more specific and independent key (i.e. not setting tourism=camp_site). sletuffe (talk) 01:10, 15 February 2016 (UTC)
tourism=bivouac_site and tourism=impromptu

entrance/reception marking

What is the best way to map a very big camp site? I see two options:
1. I put all tags on the area and then map a separate building and mark it as entrance/reception.
2. I put all tags on the node where the entrance/reception is located and map the perimeter/fence/boundary separately.
- In case 1 the entrance can be really unclear, which makes good navigation impossible - a tourist may end up at the fence from a completely different direction. In case 2 the overall shape of the campsite cannot be correctly rendered on the map, because the connection with the campsite is lost/very loose. --Jakubt 14:21, 8 July 2012 (BST)
- If you do know the area you can put an entrance node at the proper place and tag that separately. If you know the building locations you could make a site relation. --Jan van Bekkum (talk) 15:15, 10 February 2015 (UTC)

key: rating of campsite (1-5 stars)

In Europe at least, any camp site has a rating from 1 to 5 stars. It's an important criterion for selecting a camp site. Why not add a key for it?
- Who rates the camp sites? Is this something that is done all over Europe, or only in the EU or any other part? Is this star system comparable to other quality systems? Is it even comparable to itself? If there is a nice objective system for determining the quality of camp sites, it is worth a tag. Most quality systems I have seen are managed by catalogs, and quite a lot of them are not reliable at all. Gnonthgol (talk) 14:01, 7 August 2013 (UTC)
- I think rating camp_sites is better left to end users. It's an opinion and highly subjective. That said, the "stars=*" rating system is already being used in OSM. AlaskaDave (talk)

Private camp sites with restricted access

Sometimes there are private camp sites belonging to an organisation, where only the organisation's members have access to the camp site. Often, the camp site hosts more or less permanently installed caravans. Should these areas be tagged as camp_site at all, maybe with the attribute access=private? Or is it better not to use camp_site? What's the best way to tag these areas? Without further specification, camp_site will be misleading, because you cannot just stay over for a night.
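One possible tagging for such a members-only site, sketched here only as a suggestion rather than established practice (the operator name is a made-up placeholder), would combine the existing access key with camp_site:

```
tourism=camp_site
access=private          (members only, not open to the public)
operator=Example Scouting Association   (placeholder name)
```

With access=private set, data consumers that respect the access key can exclude the site from "where can I camp tonight?" searches while still showing it on the map.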
Dev of Other Features Table

As a follow-up to discussion (Feb 2015) on the tagging list, starting to fill out this table.
- What is the purpose of this table? The tags in the list are existing tags. If the intention is to add these tags as sub-tags under camping it will not work (duplicate keys). How does this section relate to Proposed features/Extend camp site? --Jan van Bekkum (talk) 20:39, 13 February 2015 (UTC)
- This was a draft to replace the second table on the main page. The intention was just to document "best practice", and perhaps highlight those additional values that are needed. However, I stopped when the tagging list started discussing a (much better) proposal to define values for camp_site=*. Sadly, I think that proposal may be fatally bogged down, as so many good proposals are. Sigh ... --Davo (talk) 21:44, 24 March 2015 (UTC)

I do not think we have a clear enough distinction between tags that apply to the overall camp site and ones for the pitch.
- Agreed. To me, a camp_site is for one person, one caravan or one tent, and a campground contains camp_sites. But, seeing as the former term is already being used for the latter and is well entrenched, how will we differentiate them? Please, let's not resort to relations to solve this. AlaskaDave (talk)

Camping for overlanders

--Jan van Bekkum (talk) 14:34, 10 February 2015 (UTC)

This proposal has had two substantial updates, taking the feedback on the previous proposals into account. --Jan van Bekkum (talk) 16:20, 23 February 2015 (UTC)

Introduction

Overlanders are people who travel for a longer period (at least a few months), usually with their own vehicles (mainly 4WDs, trucks and motorcycles), and cover a long distance, often in developing countries. Examples are Amsterdam to Cape Town, Alaska to Tierra del Fuego and London to Delhi. Because many of the countries they pass through have no or few official campings, the requirements for tagging camping opportunities differ from those for "regular" campers.
Overlanders today carefully maintain and exchange lists of suitable camping opportunities. It would meet a clear overlander need if properly tagged camping opportunities were available in OSM. The overview below lists tags important to overlanders. It is a combination of existing tags, earlier proposed ones and some new proposals.

Tagging Method

The current tag tourism=camp_site does not make it possible to describe the amenities that are present at the camping. In order to make as much use as possible of defined tags, a camping needs to be described by multiple nodes or areas, like tourism=camp_site, amenity=toilets, amenity=restaurant etc., within a site relation.

Limitations

Desirable information that is not fully provided for in the OSM structure includes multiple camping images per camping and a rating system for the environment (a bit comparable to the "Route touristique" on Michelin maps) and the camping facilities ("Is it functioning and clean?", comparable to a TripAdvisor rating). This could perhaps be solved by means of a Wikimedia Commons link for multimedia and a Wikidata link for the rating.

Top level values
- tourism=camp_site
- tourism=caravan_site - redundant together with tents=yes etc.; I would prefer to use tourism=camp_site only.

Attributes

Most important attributes:
- camp_site=established (default), camp_site=unofficial, camp_site=wild_camp - Camping is done in three types of accommodation: (1) established campsites - entities that have as their main activity providing camping facilities for a fee; (2) informal campsites - entities whose main activity is not camping, but that provide camping facilities for a fee. Usually these are hotels, hostels or motels. They may have separate facilities for campers (like a separate toilet section), allow campers to use shared facilities (typical in a hostel) or hand a room key out to the camper for use of the toilet and shower; (3) wild camping - locations that do not provide facilities for campers and where campers stay for free.
Such places are desirable because of the nearby presence of public facilities (for example a kite club on the beach with free access to its toilets and beach showers, or a guarded park with public toilets in the neighbourhood), their security (a police station or mission station) or their sheer beauty (an isolated palm beach).
- name=*
- description=*
- opening_hours=*
- payment=*
- fee=*
- tents=yes
- campers=no - to be described in the wiki similar to tents. It may be impossible for campers, trucks and buses to use a camping if the access road is not suitable, if it is in a protected environment or if individual places are too small
- maxheight:physical=* and maxwidth=* - to be used to describe whether trucks and buses can access and use the camping
- cooking=* - in use for building=apartments; to be described in the wiki
- laundry_sink=* - to be defined: is a sink present for hand washing of laundry? Alternatively it could be an extra value for shop=laundry.

Tags to be used in combination
- leisure=firepit
- amenity=toilets
- amenity=shower with optional attribute fee=*
- power_supply=* with new attribute power_supply=intermittent (see new feature proposal [[1]]) and opening_hours=* if electricity is available during a fixed daily period
- internet_access=* with optional attribute fee=*
- dog=* - to be proposed
- amenity=drinking_water
- amenity=parking
- shop=laundry - if a coin operated washing machine and dryer are provided, the attribute self_service=* can be used
- amenity=waste_disposal - see the separate discussion about this topic
- shop=convenience, shop=gas, etc.
- amenity=restaurant

Relation

The logical grouping is by means of a site relation.

Below are the comments on the first version of the proposal. Some comments, suggestions and/or questions were raised at other places. I copy them (with a link to the source) and comment on them here in order to keep the discussion in one place.
--Jan van Bekkum (talk) 10:54, 11 February 2015 (UTC)

- Is it possible to add information on any height limitations for parking / entry to the proposals as well?
- I suppose maxheight=* could be used for this --Jan van Bekkum (talk) 07:45, 11 February 2015 (UTC)
- Some of your suggested tags won't work. Some are manipulations of already existing tags ... just use the existing tags! The people who produce the end maps use the existing tags .. they won't adapt to your suggestions .. particularly when there are already existing ones in use.
- What do you recommend? What I want to achieve is that all attributes of a campsite that belong together (the shower belongs to the camp site) can be tagged to a single node. Trying to tag amenity=toilet and amenity=shower will not work as one overwrites the other. See my comment under Tagging Method. So the options are to create multiple nodes, one for the camp site itself, one for the toilet, one for the shower, etc., but then you can't see they belong together.
- Make the camp_site an area ... that is what it is after all, then a node for each feature? If you don't have the time for that .. well the resolution of OSM is about 50mm .. so a different node for each object 50mm apart would indicate to anyone that they are associated? Or make an artificial campsite area of say 100m square and put the nodes in that .. a representation of what is there.. if someone cares enough they can come along later and make a better representation Warin61 (talk) 22:26, 11 February 2015 (UTC)
- Why not a relation? I don't believe an area is the best approach. My main objection is that it still keeps objects unrelated that are related: if I use a search engine that finds a camp site I still don't know it has a restaurant until I go to the map and see the symbols. Furthermore I want to create POIs while I am on the road with an app like OsmAnd. However, such apps create nodes only.
I will often not be able to map the right area, because I don't know what the real camp site area is, and on the road because I don't have the tools. I strongly oppose the recommendation to create a 100 m square area that has no relation with reality. It is better to enter unapproved tags that are not rendered than knowingly wrong information that is rendered. Almost everything is an area, but nodes are for showing objects of which you cannot give area information. Creating a relation would be a much cleaner solution to address the issue --Jan van Bekkum (talk) 09:50, 12 February 2015 (UTC)

The (probably best) alternative would be to create a site relation with separate nodes for the camp site, amenities etc. However, I don't like to create a site relation as long as no separate buildings (polygons at the correct location) have been defined. What I try to achieve in the proposal is to stick to existing tags in a structure that works in the OSM concept. --Jan van Bekkum (talk) 10:54, 11 February 2015 (UTC)

- I would like to get some of the tags (for example dryer=*) that were already proposed as features in 2009, but rejected because of a lack of voters, on the agenda again. It is one objective of this discussion document. It may be true that general renderers do not display all tags, but more dedicated ones may. For example the site iOverlander shows the kind of information being discussed, but uses a proprietary database now. --Jan van Bekkum (talk) 11:22, 11 February 2015 (UTC)
- I am afraid it is not enough, that is why it is a suggested addition. All cases I am referring to are based on real-life experience from overland traveling in Europe, the Middle East and Africa for about 18 months since 2013. The dryer was really something we have been looking for and that played a role in selecting a camp site. --Jan van Bekkum (talk) 09:50, 12 February 2015 (UTC)
- Some of your suggested tags .. well 'hot shower' does not exist .. it is simply shower ..
I'm presently trying to add the tag 'temperature' .. for use with showers, baths, water_taps etc. That is presently open for comments .. will move to voting and hopefully be voted 'in'. Then there have to be enough people using it and then enough features with that tag for the renderers (the end map makers) to think it is worth adding. Not easy to add new features .. harder still to change existing tags to something else.

- Agree, you will have my vote. shower=hot was proposed earlier for camp sites. --Jan van Bekkum (talk) 10:54, 11 February 2015 (UTC)
- If you have an interest in adding things or changing things in OSM tags then join the tagging talk group .. I'm active there at the moment .. 'temperature', 'reception_desk' and a 'new' grouping of 'waste_collection' ... once those are finished .. well I might move on to 'tents=yes/no/number'? But consider joining ...
- Will do --Jan van Bekkum (talk) 10:54, 11 February 2015 (UTC)
- OSM campsite tags - The page shows the documented tags .. in blue! The tags in red are undocumented .. and may change.... I'm for changing 3 of them, removing one .. and adding another.. but it takes time. If you want to add data (tags) to OSM camp sites .. use the blue tags on that page and they may appear on the OSM maps.. use something else and that reduces any chance of the information appearing on an OSM map. The data is there .. but won't appear.
- Agree, but there are so many unapproved tags in use (as you know there is serious opposition against the voting process). I try to use a renderer that has the option to show raw tags anyhow. Tags "develop" rather than being formally approved. --Jan van Bekkum (talk) 10:54, 11 February 2015 (UTC)
- Tags in frequent use are usually documented in the wiki... if they lack documentation in the wiki people don't really know what they are for, or if they are meant only for nodes or areas or relations. An example is ... a dog bin .. Is this for dead dogs? It is intended for dog excrement...
yet there is another tag for that .. again undocumented .. and yet another tag .. but that is documented .. which one would you use? The first one you came across, or would you keep searching? There needs to be some method to coordinate and come to some consensus on the tags. While there is opposition to the tag talk group .. I don't see a better method.. any ideas.. I have started a thread on it ... time to stir that pot again Warin61 (talk) 22:26, 11 February 2015 (UTC)

- I agree, but it depends on the tag. I would say shower=hot is pretty trivial (although I have seen from the talk that temperature in general is not). For other tags like washing_machine and dryer I would be happy to add a wiki page. --Jan van Bekkum (talk) 09:50, 12 February 2015 (UTC)
- Some of the resistance to introducing new tags comes from rendering issues. But how will you render campsites with 20 possible tags and hundreds of possible combinations? Maybe wild camp and official campsite need a different icon, but it's no use trying to introduce 20 icons for campsites. To make full use of this kind of data, you need an interactive map with pop-ups and/or a decent search function. So the application built for this data can adapt to the data. --joost schouppe 9:30, 13 February 2015

More Amenities -- Bear Wire?

Sorry, but one more amenity(?) idea, though I'd be happy to hear of an alternative. At least in the US, national parks, forests, etc. where there is an active bear population will sometimes place a "bear cable" or "bear wire" at backpacking camp sites. This steel cable generally spans tree-to-tree for 15 feet or so and is about 15-20 feet off the ground. You use a rope to hang food, etc. from the wire to keep it away from bears. Importantly, it obviates the need to bring your own bear-proof canister. Thus, it makes a significant difference to backpackers if there are bear wires at camp sites along their path. Amenity=bear_wire??
--MGH (talk) 03:22, 15 July 2015 (UTC)

Seems to me that that type of thing (along with bear-resistant lockers, which are often found at car camp areas) is camp-ground specific, so perhaps it should not be a value for the amenity tag. I'd rather see something in a camp_site namespace for things likely to only be found in a camp ground. Maybe camp_site:food_storage=bear_locker/bear_cable? --N76 (talk) 03:55, 15 July 2015 (UTC)

It may indeed be appropriate to use something other than "amenity", since you can only have one amenity per node, and many campsites are unlikely to have more than one node. However it appears that the trend (above and in the accepted official page) is to use unique keys (e.g., tents=, backcountry=, power=, awards=, etc.) rather than overloaded keys (e.g.: camp_site:food_storage). I'm not sure about this proliferation of keys, but it appears we're headed in that direction. --MGH (talk) 12:57, 20 July 2015 (UTC)
http://wiki.openstreetmap.org/wiki/Talk:Tag:tourism%3Dcamp_site
On 03/06/2014 at 16:01, xxxxxxxx wrote:

User Information:
Cinema 4D Version: 14
Platform: Mac OSX ;
Language(s) : C++ ;

---------
In my shader parameters I have a link field that can be empty or contain a tag (a custom tag that I created). How, in C++, can I get the content of the link field? I'm using this code:

BaseContainer *data = sh->GetDataInstance();
BaseTag *the_tag = (BaseTag*) data->GetBaseLink(TAG_OBJECT);

But this seems to crash Cinema 4D. Damn! So many castings!!! What am I doing wrong?

On 03/06/2014 at 20:16, xxxxxxxx wrote:

Try it like this.

//This is an example of getting the tag the user has dropped into the link gizmo with the ID of MYLINK
//In this case the tag is a custom tag...So we get it by its plugin ID#
INITRENDERRESULT MyShader::InitRender(BaseShader *sh, const InitRenderStruct &irs)
{
    BaseContainer *data = sh->GetDataInstance();
    BaseTag *myTag = (BaseTag*) data->GetLink(MYLINK, GetActiveDocument(), TAG_PLUGIN_ID);
    if (myTag) GePrint(myTag->GetName());

    return INITRENDERRESULT_OK;
}

-ScottA

On 04/06/2014 at 02:41, xxxxxxxx wrote:

Thank you so much, Scott. Got it working (still lots to do, though). Can't help thinking that this is way more complex than:

def InitRender(self, sh, irs):
    the_tag = sh[TAG_OBJECT]
    if the_tag is None:
        return c4d.INITRENDERRESULT_ASSETMISSING
    print the_tag.GetName()

On 04/06/2014 at 02:55, xxxxxxxx wrote:

BaseContainer *data = sh->GetDataInstance();
BaseTag *the_tag = (BaseTag*) data->GetBaseLink(TAG_OBJECT);

If you would've used a static_cast instead of a C-style cast, you would've seen an error at compilation time. You can write yourself a little print function. You can use IsInstanceOf() to check if you got a tag, object or shader, etc.
-Niklas

On 04/06/2014 at 07:25, xxxxxxxx wrote:

You might want to take a quick look at the second part of my C++ video tutorial, Rui. I explain a few of the fundamentals like GePrint(). C++ requires more typing. But it's not necessarily "harder" than Python. More typing != harder. One of the most common things that is used in the C4D SDK is descriptions. Almost every time you see this in Python:

obj[FOO] = fooValue

It requires this in C++:

obj->SetParameter(DescID(FOO), GeData(fooValue), DESCFLAGS_SET_0);

Python wraps things up into tiny little packages that are short to type. But it's not "easier" than C++. Once you type this stuff a few hundred times, you won't notice it so much. Also, once you have built up enough C++ notes, you'll cut and paste a lot of code instead of typing it. The way I often work is that I play around in Python and try out a few ideas. Then once I have the idea formed into a solid idea, I switch over to C++ and make the actual plugin there. If you can seamlessly jump back and forth between Python & C++, you'll have C4D by the nuts.
https://plugincafe.maxon.net/topic/7925/10293_getting-a-tag-from-a-link-field
stdarg - variable argument lists

#include <stdarg.h>

void va_start(va_list ap, last);
type va_arg(va_list ap, type);
void va_end(va_list ap);

The va_start macro initializes ap for subsequent use by va_arg and va_end and must be called first; last is the name of the last fixed argument before the variable argument list. The va_arg macro expands to an expression that has the type and value of the next argument in the call. Successive invocations return the values of the remaining arguments. The va_end macro handles a normal return from the function whose variable argument list was initialized by va_start. The va_end macro returns no value.

The function foo takes a string of format characters and prints out the argument associated with each format character.

The va_start, va_arg, and va_end macros conform to ANSI C3.159-1989 (``ANSI C''). These macros are not compatible with the historic macros they replace. A backward compatible version can be found in the include file varargs.h.
http://www.linuxsavvy.com/resources/linux/man/man3/va_arg.3.html
Hi! This is a real simple and stupid exercise I thought I knew how to handle. The complete heading is: - Write a function in C whose input is a real number and whose output is the absolute value of the input number. The following is the code I've developed. It compiles and runs but doesn't do what it is supposed to. If anyone can point where the error is I would be very grateful. Thank you in advance. #include <stdio.h> #include <math.h> double outputAbsoluteValue (double *number){ *number = fabs(*number); return *number; } void main(){ double myNumber=0.0; double absValue=0.0; printf("Enter Real Number: "); scanf("%f", &myNumber); absValue=outputAbsoluteValue(&myNumber); printf("\n Its absolute value is %f", absValue); fflush(stdin); getchar(); }
https://www.daniweb.com/programming/software-development/threads/336901/c-exercise-function-to-input-real-number-and-output-absolute-value
I have installed Symantec antivirus software on all our devices; however Spiceworks keeps telling me that 95% of the devices do not have the antivirus. Can anyone tell me why?

4 Replies

Dec 30, 2010 at 10:47 UTC
I suggest following at least #1 - #3 of this how-to: Troubleshooting Spiceworks Inventory Inconsistencies - Spiceworks ... all of the steps of troubleshooting Windows unknowns. It can be found here.

Dec 30, 2010 at 10:49 UTC
Check your firewall on these devices, in a Domain: or Workgroup:

Dec 30, 2010 at 10:50 UTC
And test the Authentication: I hope this can help.

Jan 5, 2011 at 2:58 UTC
After trying some of the suggestions above, some of the missing devices could also be servers. Windows servers don't have the AntiVirusProduct namespace in WMI, so they won't report under the up-to-date count.
https://community.spiceworks.com/topic/123247-how-can-i-get-an-accurate-reading-for-the-antivirus
Asynchronous I/O With Python 3

In this tutorial you'll go through a whirlwind tour of the asynchronous I/O facilities introduced in Python 3.4 and improved further in Python 3.5 and 3.6. Python previously had few great options for asynchronous programming. The new Async I/O support finally brings first-class support that includes both high-level APIs and standard support that aims to unify multiple third-party solutions (Twisted, Gevent, Tornado, asyncore, etc.). It's important to understand that learning Python's async IO is not trivial due to the rapid iteration, the scope, and the need to provide a migration path to existing async frameworks. I'll focus on the latest and greatest to simplify a little. There are many moving parts that interact in interesting ways across thread boundaries, process boundaries, and remote machines. There are platform-specific differences and limitations. Let's jump right in.

Pluggable Event Loops

The core concept of async IO is the event loop. In a program, there may be multiple event loops. Each thread will have at most one active event loop. The event loop provides the following facilities:

- Registering, executing and cancelling delayed calls (with timeouts).
- Creating client and server transports for various kinds of communication.
- Launching subprocesses and the associated transports for communication with an external program.
- Delegating costly function calls to a pool of threads.

Quick Example

Here is a little example that starts two coroutines and schedules a delayed function call.
It shows how to use an event loop to power your program:

import asyncio

async def foo(delay):
    for i in range(10):
        print(i)
        await asyncio.sleep(delay)

def stopper(loop):
    loop.stop()

loop = asyncio.get_event_loop()

# Schedule a call to foo()
loop.create_task(foo(0.5))
loop.create_task(foo(1))
loop.call_later(12, stopper, loop)

# Block until loop.stop() is called
loop.run_forever()
loop.close()

The AbstractEventLoop class provides the basic contract for event loops. There are many things an event loop needs to support:

- Scheduling functions and coroutines for execution
- Creating futures and tasks
- Managing TCP servers
- Handling signals (on Unix)
- Working with pipes and subprocesses

Here are the methods related to running and stopping the event loop as well as scheduling functions and coroutines:

class AbstractEventLoop:
    """Abstract event loop."""

    # Running and stopping the event loop.

    def run_forever(self):
        """Run the event loop until stop() is called."""
        raise NotImplementedError

    def run_until_complete(self, future):
        """Run the event loop until a Future is done.

        Return the Future's result, or raise its exception.
        """
        raise NotImplementedError

    def stop(self):
        """Stop the event loop as soon as reasonable.

        Exactly how soon that is may depend on the implementation, but
        no more I/O callbacks should be scheduled.
        """
        raise NotImplementedError

    def is_running(self):
        """Return whether the event loop is currently running."""
        raise NotImplementedError

    def is_closed(self):
        """Returns True if the event loop was closed."""
        raise NotImplementedError

    def close(self):
        """Close the loop.

        The loop should not be running. This is idempotent and
        irreversible. No other methods should be called after this one.
        """
        raise NotImplementedError

    def shutdown_asyncgens(self):
        """Shutdown all active asynchronous generators."""
        raise NotImplementedError

    # Methods scheduling callbacks. All these return Handles.
    def _timer_handle_cancelled(self, handle):
        """Notification that a TimerHandle has been cancelled."""
        raise NotImplementedError

    def call_soon(self, callback, *args):
        return self.call_later(0, callback, *args)

    def call_later(self, delay, callback, *args):
        raise NotImplementedError

    def call_at(self, when, callback, *args):
        raise NotImplementedError

    def time(self):
        raise NotImplementedError

    def create_future(self):
        raise NotImplementedError

    # Method scheduling a coroutine object: create a task.

    def create_task(self, coro):
        raise NotImplementedError

    # Methods for interacting with threads.

    def call_soon_threadsafe(self, callback, *args):
        raise NotImplementedError

    def run_in_executor(self, executor, func, *args):
        raise NotImplementedError

    def set_default_executor(self, executor):
        raise NotImplementedError

Plugging in a new Event Loop

Asyncio is designed to support multiple implementations of event loops that adhere to its API. The key is the EventLoopPolicy class that configures asyncio and allows the controlling of every aspect of the event loop. Here is an example of a custom event loop called uvloop, based on libuv, which is supposed to be much faster than the alternatives (I haven't benchmarked it myself):

import asyncio
import uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

That's it. Now, whenever you use any asyncio function, it's uvloop under the covers.

Coroutines, Futures, and Tasks

A coroutine is a loaded term. It is both a function that executes asynchronously and an object that needs to be scheduled. You define them by adding the async keyword before the definition:

import asyncio

async def cool_coroutine():
    return "So cool..."

If you call such a function, it doesn't run.
Instead, it returns a coroutine object, and if you don't schedule it for execution then you'll get a warning too:

c = cool_coroutine()
print(c)

Output:

<coroutine object cool_coroutine at 0x108a862b0>
sys:1: RuntimeWarning: coroutine 'cool_coroutine' was never awaited

Process finished with exit code 0

To actually execute the coroutine, we need an event loop:

loop = asyncio.get_event_loop()
r = loop.run_until_complete(c)
loop.close()

print(r)

Output:

So cool...

That's direct scheduling. You can also chain coroutines. Note that you have to use await when invoking coroutines:

import asyncio

async def compute(x, y):
    print("Compute %s + %s ..." % (x, y))
    await asyncio.sleep(1.0)
    return x + y

async def print_sum(x, y):
    result = await compute(x, y)
    print("%s + %s = %s" % (x, y, result))

loop = asyncio.get_event_loop()
loop.run_until_complete(print_sum(1, 2))
loop.close()

The asyncio Future class is similar to the concurrent.futures.Future class. It is not threadsafe and supports the following features:

- adding and removing done callbacks
- cancelling
- setting results and exceptions

Here is how to use a future with the event loop. The take_your_time() coroutine accepts a future and sets its result after sleeping for a second. The ensure_future() function schedules the coroutine, and run_until_complete() waits for the future to be done. Behind the curtain, it adds a done callback to the future.

import asyncio

async def take_your_time(future):
    await asyncio.sleep(1)
    future.set_result(42)

loop = asyncio.get_event_loop()
future = asyncio.Future()
asyncio.ensure_future(take_your_time(future))
loop.run_until_complete(future)
print(future.result())
loop.close()

This is pretty cumbersome. Asyncio provides tasks to make working with futures and coroutines more pleasant. A Task is a subclass of Future that wraps a coroutine and that you can cancel. The coroutine doesn't have to accept an explicit future and set its result or exception.
Here is how to perform the same operations with a task:

import asyncio

async def take_your_time():
    await asyncio.sleep(1)
    return 42

loop = asyncio.get_event_loop()
task = loop.create_task(take_your_time())
loop.run_until_complete(task)
print(task.result())
loop.close()

Transports, Protocols, and Streams

A transport is an abstraction of a communication channel. A transport always supports a particular protocol. Asyncio provides built-in implementations for TCP, UDP, SSL, and subprocess pipes. If you're familiar with socket-based network programming then you'll feel right at home with transports and protocols. With Asyncio, you get asynchronous network programming in a standard way. Let's look at the infamous echo server and client (the "hello world" of networking).

First, the echo client implements a class called EchoClient that is derived from asyncio.Protocol. It keeps its event loop and a message it will send to the server upon connection. In the connection_made() callback, it writes its message to the transport. In the data_received() method, it just prints the server's response, and in the connection_lost() method it stops the event loop. When passing an instance of the EchoClient class to the loop's create_connection() method, the result is a coroutine that the loop runs until it completes.

import asyncio

class EchoClient(asyncio.Protocol):
    def __init__(self, message, loop):
        self.message = message
        self.loop = loop

    def connection_made(self, transport):
        transport.write(self.message.encode())

    def data_received(self, data):
        print('Data received: {!r}'.format(data.decode()))

    def connection_lost(self, exc):
        self.loop.stop()

message = 'Hello World!'
loop = asyncio.get_event_loop()
coro = loop.create_connection(lambda: EchoClient(message, loop),
                              '127.0.0.1', 8888)
loop.run_until_complete(coro)
loop.run_forever()
loop.close()

The server is similar except that it runs forever, waiting for clients to connect. After it sends an echo response, it also closes the connection to the client and is ready for the next client to connect. A new instance of the EchoServer is created for each connection, so even if multiple clients connect at the same time, there will be no problem of conflicts with the transport attribute.
import asyncio

class EchoServer(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        message = data.decode()
        print('Data received: {!r}'.format(message))
        print('Send: {!r}'.format(message))
        self.transport.write(data)
        print('Close the client socket')
        self.transport.close()

loop = asyncio.get_event_loop()
coro = loop.create_server(EchoServer, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)
print('Serving on {}'.format(server.sockets[0].getsockname()))
loop.run_forever()

Here is the output after two clients connected:

Serving on ('127.0.0.1', 8888)
Connection from ('127.0.0.1', 53248)
Data received: 'Hello World!'
Send: 'Hello World!'
Close the client socket
Connection from ('127.0.0.1', 53351)
Data received: 'Hello World!'
Send: 'Hello World!'
Close the client socket

Streams provide a high-level API that is based on coroutines and provides Reader and Writer abstractions. The protocols and the transports are hidden, there is no need to define your own classes, and there are no callbacks. You just await events like connection and data received. The client calls the open_connection() function that returns the reader and writer objects used naturally. To close the connection, it closes the writer.

import asyncio

async def tcp_echo_client(message, loop):
    reader, writer = await asyncio.open_connection(
        '127.0.0.1', 8888, loop=loop)

    print('Send: %r' % message)
    writer.write(message.encode())

    data = await reader.read(100)
    print('Received: %r' % data.decode())

    print('Close the socket')
    writer.close()

message = 'Hello World!'
loop = asyncio.get_event_loop()
loop.run_until_complete(tcp_echo_client(message, loop))
loop.close()

The server is also much simplified.

import asyncio

async def handle_echo(reader, writer):
    data = await reader.read(100)
    message = data.decode()
    addr = writer.get_extra_info('peername')
    print("Received %r from %r" % (message, addr))

    print("Send: %r" % message)
    writer.write(data)
    await writer.drain()

    print("Close the client socket")
    writer.close()

loop = asyncio.get_event_loop()
coro = asyncio.start_server(handle_echo, '127.0.0.1', 8888, loop=loop)
server = loop.run_until_complete(coro)
print('Serving on {}'.format(server.sockets[0].getsockname()))
loop.run_forever()

Working With Sub-Processes

Asyncio covers interactions with sub-processes too.
The following program launches another Python process and executes the code "import this". It is one of Python's famous Easter eggs, and it prints the "Zen of Python". The Python process is launched in the zen() coroutine using the create_subprocess_exec() function, which binds the standard output to a pipe. Then it iterates over the standard output line by line using await to give other processes or coroutines a chance to execute if output is not ready yet. Note that on Windows you have to set the event loop to the ProactorEventLoop because the standard SelectorEventLoop doesn't support pipes.

import asyncio.subprocess
import sys

async def zen():
    code = 'import this'
    create = asyncio.create_subprocess_exec(
        sys.executable, '-c', code,
        stdout=asyncio.subprocess.PIPE)
    proc = await create

    data = await proc.stdout.readline()
    while data:
        line = data.decode('ascii').rstrip()
        print(line)
        data = await proc.stdout.readline()

    await proc.wait()

if sys.platform == "win32":
    loop = asyncio.ProactorEventLoop()
    asyncio.set_event_loop(loop)
else:
    loop = asyncio.get_event_loop()

loop.run_until_complete(zen())

Conclusion

Python's asyncio is a comprehensive framework for asynchronous programming. It has a huge scope and supports both low-level as well as high-level APIs. It is still relatively young and not well understood by the community. I'm confident that over time best practices will emerge, and more examples will surface and make it easier to use this powerful library.

Source: Tuts Plus
http://designncode.in/asynchronous-io-with-python-3/
Apr 14, 2001

--- Ashley Clark <aclark@...> wrote:
> One more question then, should this uri be a real address?

Hi, Ashley!

No, there is no such requirement, but ideally the namespace name should be unique, so usually it's represented by a company address (which is unique) and a path that describes the particular project/application/task (which is unique inside the company). It effectively creates a globally unique identifier. For examples, I'm using. It doesn't need to be http, it could be something else, and it doesn't need to resolve to anything. Check the XML Namespaces FAQ:, an excellent resource.

> That gets me past my stumbling block here. And also, thank you very
> much for providing such a wonderful library.

Glad you like it. Next version should also provide some interesting features. Look for an announcement soon.

Best wishes, Paul.
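To illustrate Paul's point: a namespace name is just a globally unique identifier, so a document may declare one that never resolves to anything. A hypothetical example (the URN below is made up for illustration):

```xml
<quote xmlns:m="urn:example-company:stock-quote:2001">
  <m:symbol>ACME</m:symbol>
  <m:price>42.17</m:price>
</quote>
```

A parser only compares the namespace name character for character; it never fetches it.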
http://groups.yahoo.com/neo/groups/soaplite/conversations/topics/205?o=1&source=1&var=1
Description:
------------
Some PHP extensions expect to have symbols from other PHP extensions available. This is the case for PDO drivers (which use symbols from the PDO extension) and XSL (which uses symbols from the DOM extension). Extensions are loaded with dlopen(..., RTLD_GLOBAL) on platforms with dlopen() (i.e. most unices). On Mac OS X, NSLinkModule() is used instead of dlopen(), but with the 'private' option, which hides symbols of dynamically loaded extensions from other dynamically loaded code. Due to this symbol hiding, PDO and DOM needed to be compiled into the base PHP binary, otherwise the dependent extension couldn't be dynamically loaded since the expected symbols are not found. The PDO configure script even intentionally and silently disables compilation of the dynamically loaded PDO module on MacOSX/Darwin because of this.

The following patch makes it possible to also load PDO and DOM dynamically on Mac OS X, so that PHP extensions all work the same way as on other UNIX systems:

--- Zend/zend_extensions.c.orig 2007-09-11 22:00:50.000000000 +0200
+++ Zend/zend_extensions.c
@@ -243,7 +243,7 @@ void *zend_mh_bundle_load(char* bundle_p
         return NULL;
     }

-    bundle_handle = NSLinkModule(bundle_image, bundle_path, NSLINKMODULE_OPTION_PRIVATE);
+    bundle_handle = NSLinkModule(bundle_image, bundle_path, NSLINKMODULE_OPTION_NONE);

     NSDestroyObjectFileImage(bundle_image);

     /* call the init function of the bundle */

Reproduce code:
---------------
Compile PDO (after the fix to the configure script) as a dynamic extension, as well as pdo_mysql, and add to php.ini the lines:

extension=pdo.so
extension=pdo_mysql.so

then run:

php -m | grep -i pdo

Expected result:
----------------
PDO
pdo_mysql

Actual result:
--------------
dyld: lazy symbol binding failed: Symbol not found: _php_pdo_declare_long_constant
  Referenced from: /Users/Shared/pkg/lib/php/20040412/pdo_mysql.so
  Expected in: flat namespace

dyld: Symbol not found: _php_pdo_declare_long_constant
  Referenced from:
/Users/Shared/pkg/lib/php/20040412/pdo_mysql.so
  Expected in: flat namespace

Trace/BPT trap

When this code was originally written, PDO didn't exist, and as such the choice to use the PRIVATE option was made to keep the namespace as unpolluted as possible. Having not tested the patch, the changes make sense and probably should be applied, provided they don't cause any adverse effects to other files included.

Assigned to our MacOSX expert. :)

See also bug #42630

Unfortunately, due to circumstances outside the project, I cannot take ownership and commit a fix to this bug at this time. I strongly encourage you, Jani, to apply the fix though.

This bug has been fixed in CVS. Snapshots of the sources are packaged every three hours; this change will be in the next snapshot. You can grab the snapshot at. Thank you for the report, and for helping us make PHP better.
https://bugs.php.net/bug.php?id=42629
Custom Widgets Plugins: QT_INSTALL_PLUGIN not working on Windows 7 & Bug in Qt 5.6 Custom Widget Interface template

- ghielmetti

I am quite new to Qt and I would like to build a Custom Widget Plugin for the Qt Designer. I am running Qt 5.6 on Windows 7. I have tried to follow the examples in the Qt tutorials and other tutorials, but I cannot get it working yet. It seems as if the libraries that should be generated with the build are not written into the correct target directory, which I have set in the .pro file:

CONFIG += plugin debug_and_release
TARGET = $$qtLibraryTarget(levelmeterplugin)
TEMPLATE = lib

HEADERS = levelmeterplugin.h
SOURCES = levelmeterplugin.cpp
RESOURCES = icons.qrc
LIBS += -L.

greaterThan(QT_MAJOR_VERSION, 4) {
    QT += designer
} else {
    CONFIG += designer
}

#target.path = C:/Qt/5.6/mingw49_32/plugins/designer
#target.path = C:/Users/chd/Documents/Qt_Software/Qt_Designer_Widgets
target.path = $$[QT_INSTALL_PLUGINS]/designer
INSTALLS += target

I have added my custom library directory to the QT_INSTALL_PLUGINS environment variable and I have also added the path to the plugins directory: C:/Qt/5.6/<compiler_name>/plugins/

I have also tried to build the libraries without using the QT_INSTALL_PLUGIN directory. Although it seems as if the environment variables are set correctly, the libraries get built into the build debug directory of the widget and the widget is not available in Qt Designer. I assume that Qt does not use the target QT_INSTALL_PLUGIN settings, but I am not sure about this. Does anyone know what could cause this issue? How can I get Qt to use the QT_INSTALL_PLUGIN environment variable? Any help or a link to a working, clear, step-by-step tutorial would be really appreciated.
PS: I have also noticed that Qt 5.6 offers a template for the interface for building a Custom Widget Plugin, but building the template interface produces a compiler error due to a wrong include: #include <QtDesigner/QDesignerCustomWidgetInterface> This can be resolved by replacing that include with: #include <QtUiPlugin/QDesignerCustomWidgetInterface> I found this a bit confusing, as I would actually expect a template provided by Qt to work by default. - mrjj Qt Champions 2016 @ghielmetti said: Hi and welcome. mingw49_32 — you cannot make plugins for Designer with MinGW unless you compile Creator yourself with the same compiler. Creator is compiled with Visual Studio, and you need to use that compiler to create plugins for it (as far as I know). It's not Qt's fault; that's just how DLLs work. If you want a plugin just to be able to place a widget visually, you can do that with the promote feature and skip the whole plugin part. But if you want to adjust properties at design time, then you need the plugin. - ghielmetti @mrjj Thank you for your reply. I will compile the plugin with msvc2013 and see if I get it working. I have promoted a widget and this seems to work. However, for some reason Qt sometimes cannot find the paths to my promoted widgets' .h and .cpp files that I set in the .pro file. But it works if I copy the files into the project manually. - mrjj Qt Champions 2016 @ghielmetti Hi. Normally I have the .h file in the project folder and point to that in the Promote dialog. I'm not sure it will take a full path in the dialog, but I never tried :)
https://forum.qt.io/topic/67485/custom-widgets-plugins-qt_install_plugin-not-working-on-windows-7-bug-in-qt-5-6-custom-widget-interface-template
The graph is among the most common data structures in computer science, and it’s unsurprising that a staggeringly large amount of time has been dedicated to developing algorithms on graphs. Indeed, many problems in areas ranging from sociology, linguistics, to chemistry and artificial intelligence can be translated into questions about graphs. It’s no stretch to say that graphs are truly ubiquitous. Even more, common problems often concern the existence and optimality of paths from one vertex to another with certain properties. Of course, in order to find paths with certain properties one must first be able to search through graphs in a structured way. And so we will start our investigation of graph search algorithms with the most basic kind of graph search algorithms: the depth-first and breadth-first search. These kinds of algorithms existed in mathematics long before computers were around. The former was ostensibly invented by a man named Pierre Tremaux, who lived around the same time as the world’s first formal algorithm designer Ada Lovelace. The latter was formally discovered much later by Edward F. Moore in the 50’s. Both were discovered in the context of solving mazes. These two algorithms nudge gently into the realm of artificial intelligence, because at any given step they will decide which path to inspect next, and with minor modifications we can “intelligently” decide which to inspect next. Of course, this primer will expect the reader is familiar with the basic definitions of graph theory, but as usual we provide introductory primers on this blog. In addition, the content of this post will be very similar to our primer on trees, so the familiar reader may benefit from reading that post as well. As usual, all of the code used in this post is available on this blog’s Github page. The Basic Graph Data Structure We present the basic data structure in both mathematical terms and explicitly in Python. 
Definition: A directed graph is a triple G = (V, E, φ), where V is a set of vertices, E is a set of edges, and φ : E → V × V is the adjacency function, which specifies which edges connect which vertices (here edges are ordered pairs, and hence directed). We will often draw pictures of graphs instead of explicitly specifying their adjacency functions. That is, the vertex set is the set of labeled dots in the picture, the edge set (which is unlabeled in the picture) has size 6, and the adjacency function just formalizes which edges connect which vertices. An undirected graph is a graph in which the edges are (for lack of a better word) undirected. There are two ways to realize this rigorously: one can view the codomain of the adjacency function as the set of subsets of size 2 of V as opposed to ordered pairs, or one can enforce that whenever (u, v) is a directed edge then so is (v, u). In our implementations we will stick to directed graphs, and our data structure will extend nicely to use the second definition if undirectedness is needed. For the purpose of finding paths and simplicity in our derivations, we will impose two further conditions. First, the graphs must be simple. That is, no graph may have more than one edge between two vertices or self-adjacent vertices (that is, (v, v) cannot be an edge for any vertex v). Second, the graphs must be connected. That is, all pairs of vertices must have a path connecting them. Our implementation of these algorithms will use a variation of the following Python data structure. This code should be relatively self-explanatory, but the beginning reader should consult our primers on the Python programming language for a more thorough explanation of classes and lists.

class Node:
    def __init__(self, value):
        self.value = value
        self.adjacentNodes = []

The entire graph will be accessible by having a reference to one vertex (this is guaranteed by connectivity). The vertex class is called Node, and it will contain as its data a value of arbitrary type and a list of neighboring vertices.
For now the edges are just implicitly defined (there is an edge from v to w if the latter shows up in the "adjacentNodes" list of the former), but once we need edges with associated values we will have to improve this data structure to one similar to what we used in our post on neural networks. That is, one must update the adjacentNodes attribute of each Node by hand to add edges to the graph. There are other data structures for graphs that allow one to refer to any vertex at any time, but for our purposes the constraint of not being able to do that is more enlightening. The algorithms we investigate will use no more and no less than what we have. At each stage we will inspect the vertex we currently have access to, and pick some action based on that. Enough talk. Let's jump right in!
Aside from keeping track of which nodes have already been visited, the algorithm is equally simple:

def depthFirst(node, soughtValue):
    if node.value == soughtValue:
        return True

    for adjNode in node.adjacentNodes:
        depthFirst(adjNode, soughtValue)

Of course, supposing 6 is not found, any graph which contains a cycle (or any nontrivial, connected, undirected graph) will cause this algorithm to loop infinitely. In addition, and this is a more subtle engineering problem, graphs with a large number of vertices will cause this function to crash by exceeding the maximum number of allowed nested function calls. To avoid the first problem we can add an extra parameter to the function: a Python set type which contains the set of Nodes which have already been visited. Python sets are the computational analogue of mathematical sets, meaning that their contents are unordered and have no duplicates. And functionally Python sets have fast checks for inclusion and addition operations, so this fits the bill quite nicely. The updated code is straightforward:

def depthFirst(node, soughtValue, visitedNodes):
    if node.value == soughtValue:
        return True

    visitedNodes.add(node)
    for adjNode in node.adjacentNodes:
        if adjNode not in visitedNodes:
            if depthFirst(adjNode, soughtValue, visitedNodes):
                return True

    return False

There are a few tricky things going on in this code snippet. First, after checking the current Node for the sought value, we add the current Node to the set of visitedNodes. While subsequently iterating over the adjacent Nodes we check to make sure the Node has not been visited before recursing. Since Python passes these sets by reference, changes made to visitedNodes deep in the recursion persist after the recursive call ends. That is, much the same as lists in Python, these updates mutate the object. Second, this algorithm is not crystal clear on how the return values operate.
Each recursive call returns True or False, but because there are arbitrarily many recursive calls made at each vertex, we can't simply return the result of a recursive call. Instead, we can only know that we're done entirely when a recursive call specifically returns True (hence the test for it in the if statement). Finally, after all recursive calls have terminated (and they're all False), the end of the function defaults to returning False; in this case the sought vertex was never found. Let's try running our fixed code with some simple examples. In the following example, we have stored in the variable G the graph given in the picture in the previous section (passing a fresh empty set for the visitedNodes argument each time):

>>> depthFirst(G, "A", set())
True
>>> depthFirst(G, "E", set())
True
>>> depthFirst(G, "F", set())
True
>>> depthFirst(G, "X", set())
False

Of course, this still doesn't fix the problem with too many recursive calls; a graph which is too large will cause an error because Python limits the number of recursive calls allowed. The next and final step is a standard method of turning a recursive algorithm into a non-recursive one, and it requires the knowledge of a particular data structure called a stack. In the abstract, one might want a structure which stores a collection of items and has the following operations:

- You can quickly add something to the structure.
- You can quickly remove the most recently added item.

Such a data structure is called a stack. By "quickly," we really mean that these operations are required to run in constant time with respect to the size of the list (it shouldn't take longer to add an item to a long list than to a short one). One imagines a stack of pancakes, where you can either add a pancake to the top or remove one from the top; no matter how many times we remove pancakes, the one on top is always the one that was most recently added to the stack. These two operations are called push (to add) and pop (to remove).
As a completely irrelevant aside, this is not the only algorithmic or mathematical concept based on pancakes. In other languages, one might have to implement such a data structure by hand. Luckily for us, Python's lists double as stacks (although in the future we plan some primers on data structure design). Specifically, the append function of a list is the push operation, and Python lists have a special operation uncoincidentally called pop which removes the last element from a list and returns it to the caller. Here is some example code showing this in action:

>>> L = [1, 2, 3]
>>> L.append(9)
>>> L
[1, 2, 3, 9]
>>> L.pop()
9
>>> L
[1, 2, 3]

Note that pop modifies the list and returns a value, while push/append only modifies the list. It turns out that the order in which we visit the vertices in the recursive version of the depth-first search is the same as if we had done the following. At each vertex, push the adjacent vertices onto a stack in the reverse order that you iterate through the list. To choose which vertex to process next, simply pop whatever is on top of the stack and process it (taking the stack with you as you go). Once again, we have to worry about which vertices have already been visited, and that part of the algorithm remains unchanged. For example, say we have the following graph: Starting at vertex 1, which is adjacent to vertices 2, 3 and 4, we push 4 onto the stack, then 3, then 2. Next, we pop vertex 2 and iteratively process 2. At this point the picture looks like this: Since 2 is connected to 4 and also 5, we push 5 and then 4 onto the stack. Note that 4 is in the stack twice (this is okay, since we are maintaining a set of visited vertices). Now the important part is that since we added vertex 5 and 4 after adding vertex 3, those will be processed before vertex 3. That is, the neighbors of more recently visited vertices (vertex 2) have a preference in being processed over the remaining neighbors of earlier ones (vertex 1).
This is precisely the idea of recursion: we don't finish the recursive call until all of the neighbors are processed, and that in turn requires the processing of all of the neighbors' neighbors, and so on. As a quick side note: it should be clear by now that the order in which we visit adjacent nodes is completely arbitrary in both versions of this algorithm. There is no inherent ordering on the edges of a vertex in a graph, and so adding them in reverse order is simply a way for us to mentally convince ourselves that the same preference rules apply with the stack as with recursion. That is, whatever order we visit them in the recursive version, we must push them onto the stack in the opposite order to get an identical algorithm. But in isolation neither algorithm requires a particular order. So henceforth, we will stop adding things in "reverse" order in the stack version. Now the important part is that once we have converted the recursive algorithm into one based on a stack, we can remove the need for recursion entirely. Instead, we use a loop that terminates when the stack is empty:

def depthFirst(startingNode, soughtValue):
    visitedNodes = set()
    stack = [startingNode]

    while len(stack) > 0:
        node = stack.pop()
        if node in visitedNodes:
            continue

        visitedNodes.add(node)
        if node.value == soughtValue:
            return True

        for n in node.adjacentNodes:
            if n not in visitedNodes:
                stack.append(n)

    return False

This author particularly hates the use of "continue" in while loops, but its use here is better than any alternative this author can think of. For those unfamiliar: whenever a Python program encounters the continue statement in a loop, it skips the remainder of the body of the loop and begins the next iteration. One can also combine the last three lines of code into one using the list's extend function in combination with a list comprehension. This should be an easy exercise for the reader.
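For completeness, here is one way that exercise might look: the stack version with the final three-line loop collapsed into a single extend call over a list comprehension. The tiny demo graph at the bottom is invented for illustration and is not the graph from the article's pictures.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.adjacentNodes = []

def depthFirst(startingNode, soughtValue):
    visitedNodes = set()
    stack = [startingNode]

    while len(stack) > 0:
        node = stack.pop()
        if node in visitedNodes:
            continue

        visitedNodes.add(node)
        if node.value == soughtValue:
            return True

        # The three-line neighbor loop, as a single extend call.
        stack.extend([n for n in node.adjacentNodes if n not in visitedNodes])

    return False

# A tiny cyclic demo graph: A -> B -> C -> A, plus C -> D.
a, b, c, d = Node("A"), Node("B"), Node("C"), Node("D")
a.adjacentNodes = [b]
b.adjacentNodes = [c]
c.adjacentNodes = [a, d]  # cycle back to A; the visited set prevents looping

print(depthFirst(a, "D"))  # True
print(depthFirst(a, "X"))  # False
```

The behavior is identical to the explicit loop; extend simply pushes every unvisited neighbor in one call.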
Moreover, note that this version of the algorithm removes the issue with the return values. It is quite easy to tell when we’ve found the required node or determined it is not in the graph: if the loop terminates naturally (that is, without hitting a return statement), then the sought value doesn’t exist. The reliance of this algorithm on a data structure is not an uncommon thing. In fact, the next algorithm we will see cannot be easily represented as a recursive phenomenon; the order of traversal is simply too different. Instead, it will be almost identical to the stack-form of the depth-first search, but substituting a queue for a stack. Breadth-First Search As the name suggests, the breadth-first search operates in the “opposite” way from the depth-first search. Intuitively the breadth-first search prefers to visit the neighbors of earlier visited nodes before the neighbors of more recently visited ones. Let us reexamine the example we used in the depth-first search to see this change in action. Starting again with vertex 1, we add 4, 3, and 2 (in that order) to our data structure, but now we prefer the first thing added to our data structure instead of the last. That is, in the next step we visit vertex 4 instead of vertex 2. Since vertex 4 is adjacent to nobody, the recursion ends and we continue with vertex 3. Now vertex 3 is adjacent to 5, so we add 5 to the data structure. At this point the state of the algorithm can be displayed like this: The “Data Structure” has the most recently added items on top. A red “x” denotes a vertex which has already been visited by the algorithm at this stage. That is, and this is the important bit, we process vertex 2 before we process vertex 5. Notice the pattern here: after processing vertex 1, we processed all of the neighbors of vertex 1 before processing any vertices not immediately adjacent to vertex one. This is where the “breadth” part distinguishes this algorithm from the “depth” part. 
Metaphorically, a breadth-first search algorithm will look all around a vertex before continuing on into the depths of a graph, while the depth-first search will dive straight to the bottom of the ocean before looking at where it is. Perhaps one way to characterize these algorithms is to call breadth-first cautious, and depth-first hasty. Indeed, there are more formal ways to make these words even more fitting that we will discuss in the future. The way that we'll make these rules rigorous is in the data-structure version of the algorithm: instead of using a stack we'll use a queue. Again in the abstract, a queue is a data structure for which we'd like the following properties:

- We can quickly add items to the queue.
- We can quickly remove the least recently added item.

The operations on a queue are usually called enqueue (for additions) and dequeue (for removals). Again, Python's lists have operations that functionally make them queues, but the analogue of the enqueue operation is not efficient (specifically, it costs O(n) time for a list of size n). So instead we will use Python's special deque class (pronounced "deck"). Deques are nice because they allow fast addition and removal from both "ends" of the structure. That is, deques specify a "left" end and a "right" end, and there are constant-time operations to add and remove from both the left and right ends. Hence the enqueue operation we will use for a deque is called "appendleft," and the dequeue operation is (unfortunately) called "pop."

>>> from collections import deque
>>> queue = deque()
>>> queue.appendleft(7)
>>> queue.appendleft(4)
>>> queue
deque([4, 7])
>>> queue.pop()
7
>>> queue
deque([4])

Note that a deque can also operate as a stack (it also has an append function which functions as the push operation).
So in the following code for the breadth-first search, the only modification required to make it a depth-first search is to change the word "appendleft" to "append" (and to update the variable names from "queue" to "stack"). And so the code for the breadth-first search algorithm is essentially identical:

from collections import deque

def breadthFirst(startingNode, soughtValue):
    visitedNodes = set()
    queue = deque([startingNode])

    while len(queue) > 0:
        node = queue.pop()
        if node in visitedNodes:
            continue

        visitedNodes.add(node)
        if node.value == soughtValue:
            return True

        for n in node.adjacentNodes:
            if n not in visitedNodes:
                queue.appendleft(n)

    return False

As in the depth-first search, one can combine the last three lines into one using the deque's extendleft function. We leave it to the reader to try some examples of running this algorithm (we repeated the example for the depth-first search in our code, but omit it for brevity).

Generalizing

After all of this exploration, it is clear that the depth-first search and the breadth-first search are truly the same algorithm. Indeed, the only difference is in the data structure, and this can be abstracted out of the entire procedure. Say that we have some data structure that has three operations: add, remove, and len (the Pythonic function for "query the size"). Then we can make a search algorithm that uses this structure without knowing how it works on the inside. Since words like stack, queue, and heap are already taken for specific data structures, we'll call this arbitrary data structure a pile.
The algorithm might look like the following in Python:

def search(startingNode, soughtValue, pile):
    visitedNodes = set()
    nodePile = pile()
    nodePile.add(startingNode)

    while len(nodePile) > 0:
        node = nodePile.remove()
        if node in visitedNodes:
            continue

        visitedNodes.add(node)
        if node.value == soughtValue:
            return True

        for n in node.adjacentNodes:
            if n not in visitedNodes:
                nodePile.add(n)

    return False

Note that the argument "pile" passed to this function is the constructor for the data type, and one of the first things we do is call it to create a new instance of the data structure for use in the rest of the function. And now, if we wanted, we could recreate the depth-first search and breadth-first search as special cases of this algorithm. Unfortunately this would require us to add new methods to a deque, which is protected from such devious modifications by the Python runtime system. Instead, we can create a wrapper class as follows:

from collections import deque

class MyStack(deque):
    def add(self, item):
        self.append(item)

    def remove(self):
        return self.pop()

depthFirst = lambda node, val: search(node, val, MyStack)

And this clearly replicates the depth-first search algorithm. We leave the replication of the breadth-first algorithm as a trivial exercise (one need only modify two lines of the above code!). It is natural to wonder what other kinds of magical data structures we could plug into this generic search algorithm. As it turns out, in the next post in this series we will investigate algorithms which do just that. The data structure we use will be much more complicated (a priority queue), and it will make use of additional information we assume away for this post. In particular, they will make informed decisions about which vertex to visit next at each step of the algorithm. We will also investigate some applications of these two algorithms next time, and hopefully we will see a good example of how they apply to artificial intelligence used in games. Until then!
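As a sketch of the "trivial exercise" above — swapping the stack for a queue inside the pile framework — one possible solution looks like this. Node and search are repeated so the snippet runs standalone, and the small demo graph is made up for illustration.

```python
from collections import deque

class Node:
    def __init__(self, value):
        self.value = value
        self.adjacentNodes = []

def search(startingNode, soughtValue, pile):
    visitedNodes = set()
    nodePile = pile()
    nodePile.add(startingNode)

    while len(nodePile) > 0:
        node = nodePile.remove()
        if node in visitedNodes:
            continue

        visitedNodes.add(node)
        if node.value == soughtValue:
            return True

        for n in node.adjacentNodes:
            if n not in visitedNodes:
                nodePile.add(n)

    return False

class MyQueue(deque):
    # Enqueue on the left, dequeue from the right: first in, first out.
    def add(self, item):
        self.appendleft(item)

    def remove(self):
        return self.pop()

breadthFirst = lambda node, val: search(node, val, MyQueue)

# Tiny demo graph: 1 -> 2, 1 -> 3, 2 -> 4.
n1, n2, n3, n4 = Node(1), Node(2), Node(3), Node(4)
n1.adjacentNodes = [n2, n3]
n2.adjacentNodes = [n4]

print(breadthFirst(n1, 4))  # True
print(breadthFirst(n1, 9))  # False
```

Only the two method bodies differ from MyStack; everything else, including the visited-set bookkeeping, is shared by both traversals.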
Thank you so much for this tutorial! This has been explained excellently and far better than the tutorials I have seen. Wait... why couldn't the code be changed to: Not knowing Python, I don't know what language features would be preventing this, but this seems like the standard way to deal with returning in the midst of a recursive function stack. That's a good point. I don't know why I overlooked that. Glad to help out :) As long as I'm commenting I might as well also say thank you for your posts; quite a few of them are over my head, but the ones that I can muddle through I find incredibly interesting and well written! An additional potential concern you might deal with when traversing enormous graphs is that, worst case, your map of visited nodes will become O(N) in size; searching through that map to find whether a node has been visited, even with an O(log N) algorithm, is going to lead to an additional O(N log N) overhead in runtime, as well as potentially doubling the requirement on memory (having to save the entire list of nodes before finally hitting your match). You might be more interested in modifying your data structure nodes to have a "visited" flag, and check that. It should lead to a smaller worst-case memory overhead, as well as a performance increase. Yes, precisely. I am aware that the usual algorithms involving depth-first search have one color the nodes various colors to indicate different degrees of processing. I wanted to use a set (despite its inefficiency) simply to make the point that the question of whether a node has been visited or not is a piece of data related to the algorithm, and not to the graph (which, if you "attach" a color attribute to a node, is not entirely clear). And then I promptly forgot to mention it :) You are also assuming single threading. If I have a 6-core CPU, it can and will change the profiles for both depth and breadth.
It is probably beyond the scope of the article, but today, when there are even quad-core smartphones, it deserves a mention. Well, breadth-first search has an obvious parallel algorithm (it's embarrassingly parallel, actually), but I imagine parallel depth-first to be a very difficult algorithm to implement in practice… Could you not lose the continue statement by using not in?

def breadthFirst(startingNode, soughtValue):
    visitedNodes = set()
    queue = deque([startingNode])

    while len(queue) > 0:
        node = queue.pop()
        if node not in visitedNodes:
            visitedNodes.add(node)
            if node.value == soughtValue:
                return True
            for n in node.adjacentNodes:
                if n not in visitedNodes:
                    queue.appendleft(n)

    return False

Yes, you could! I'm a fan of not nesting logic whenever possible, though, so I accept the ugly continue for the sake of upholding my own silly aesthetic values. Can somebody write the code in Java?? Why not you?
http://jeremykun.com/2013/01/22/depth-and-breadth-first-search/
@beamwind/preset-tailwind

beamwind preset mirroring the Tailwind default theme.

Read the docs | API | Change Log

This is only a preset; beamwind provides a ready-to-use `bw` export.

Installation

npm install @beamwind/preset-tailwind

Usage

Please refer to the main documentation for further information.

import { setup } from '@beamwind/core'
import tailwind from '@beamwind/preset-tailwind'

setup(tailwind())

See preset-tailwind/src/theme.ts and core/src/theme.ts for the set values.

Contribute

Thanks for being willing to contribute! This project is free and open-source, so if you think this project can help you or anyone else, you may star it on GitHub. Feel free to open an issue if you have any idea or question, or if you've found a bug. Working on your first Pull Request? You can learn how from this free series: How to Contribute to an Open Source Project on GitHub. We are following the Conventional Commits convention.
https://www.npmjs.com/package/@beamwind/preset-tailwind
Is that really pickle.dumps? It looks like cPickle.dumps.

>>> import pickle, cPickle
>>> pickle.dumps( (2, "Tiger River"), 0 )
"(I2\nS'Tiger River'\np0\ntp1\n."
>>> cPickle.dumps( (2, "Tiger River"), 0 )
"(I2\nS'Tiger River'\np1\ntp2\n."

It appears to be different based on reference counts:

>>> cPickle.dumps( (2, "Tiger River"), 0 )
"(I2\nS'Tiger River'\np1\ntp2\n."
>>> T = (2, "Tiger River")
>>> cPickle.dumps( T, 0)
"(I2\nS'Tiger River'\ntp1\n."

Probably because of this optimization in cPickle.c::put

static int
put(Picklerobject *self, PyObject *ob)
{
    if (ob->ob_refcnt < 2 || self->fast)
        return 0;

    return put2(self, ob);
}

where "put2" is the one which generates the "p1\n" causing the problems.

I get different results between pickle and cPickle:

import pickle, cPickle, dis
tup = (2, 'Tiger River')
pickle.dumps(tup)
"(I2\nS'Tiger River'\np0\ntp1\n."
cPickle.dumps(tup)
"(I2\nS'Tiger River'\ntp1\n."

Try passing the dumps through dis.dis:

dis.dis(pickle.dumps(tup))
    JUMP_IF_TRUE   2608 (to 2630)
 22 LOAD_GLOBAL   12656 (12656)
 25 UNARY_POSITIVE
 26

dis.dis(cPickle.dumps(tup))
    LOAD_GLOBAL   12656 (12656)
 22 UNARY_POSITIVE
 23

Try using marshal.dumps; that should be faster, but as far as I know, neither pickle nor marshal guarantees the same dumps.

pickle and cPickle aren't guaranteed to produce the same output: "Since the pickle data format is actually a tiny stack-oriented programming language, and some freedom is taken in the encodings of certain objects, it is possible that the two modules produce different data streams for the same input objects. However it is guaranteed that they will always be able to read each other's data streams." Also, you may want to read this to further understand pickle:
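The practical upshot of the thread — pickle output is not a stable hash key — is easy to demonstrate in modern Python 3 as well, where a dict's pickle depends on its insertion order. The sketch below uses json with sorted keys as one possible canonical form; that choice is this example's own, not something the original discussion prescribes.

```python
import hashlib
import json
import pickle

d1 = {"a": 1, "b": 2}
d2 = {"b": 2, "a": 1}  # equal dict, different insertion order
assert d1 == d2

# pickle serializes dict items in insertion order, so equal objects
# can still produce different byte streams...
print(pickle.dumps(d1) == pickle.dumps(d2))  # False

# ...whereas a canonical serialization gives a stable hash key.
def stable_key(obj):
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(stable_key(d1) == stable_key(d2))  # True
```

The same caveat applies to marshal: neither module promises byte-identical output for equal inputs, so hashing should go through an explicitly canonical encoding.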
http://www.aminus.org/blogs/index.php/2007/11/03/pickle_dumps_not_suitable_for_hashing?blog=2
Many of us have seen sci-fi movies where the characters come home, walk in the front door, and their lights turn on for them. Perhaps they tell the house to switch on the TV or bring up the video phone with its wall-sized screen to call a friend. Unfortunately, we're not quite "there" yet with regard to commercially-available home automation technology. But you might be surprised at how much can be achieved by the enthusiast looking to advance his home into the 21st century. Let me show you some of the shipping protocols and options. Then, we'll walk through the purchase process and installation to see what it takes to turn a house into a modern-day electronic toy. Before we get ahead of ourselves, let's talk a little about what home automation is, exactly. HA, short for home automation, is a technology class that enables automatic and/or remote control of household electronics. The most commonly connected devices in an HA implementation are usually lamps/over-head lighting, heating/air conditioning, lawn and garden irrigation, and security systems. One of the vendors we looked at, SmartHome, seems fairly biased by their near-exclusive offering of Insteon products. But the company does have some convenient information on HA, including a chart of the available technologies. You can check that out right here. We'll dig into more of the differences, considerations and available options later. Interview with George Hanover First, we wanted to talk to an expert in the field and find out why home automation isn't more popular among computing enthusiasts than it is today. We exchanged emails with George Hanover, a fellow and membership chair of the IEEE Consumer Electronics Society, to find out more. Tom's Hardware: Why isn't home automation more pervasive today? George: Well it is catching on, albeit slowly. "Buying" home automation is not like buying an appliance or even a home theater system. 
A customer can be shown a new refrigerator or TV set, but how does a salesman effectively demonstrate home automation? Also, all of the user devices must be compatible with each other and with the HA system so that they can talk to each other. So, when a customer buys in to a particular system, he/she is really making a long-term commitment. Tom's Hardware: We'd think that a basic home automation setup could be deployed for less than the price of a mid-grade computer. Many households have two or more computers these days. Is it the installation process scaring most folks off? George: Also, there's the matter of retrofitting into the existing housing inventory. Each year, only a tiny percentage of the housing stock is new, which means the biggest market for HA is in existing homes, and some of them have been around for 40+ years. Tom's Hardware: Most people don't install irrigation systems themselves. Instead, they hire a contractor to perform the installation. Are there home automation installers, and are they difficult to find? George: Yes, there are, and no they're not. Check the Custom Electronic Design & Installation Association (CEDIA) Web site at. You will see an installer locator and also find that CEDIA has a certification program for installers and holds an annual expo. Also, I think the level of expertise needed to install a first-rate HA system is much higher than needed to install an irrigation system. It was mentioned on page 3:... Second - The "industry", for all our "talk", does NOT think home automation is even remotely a priority for the "masses". A "techno-geek", here or there, yes, but not the average consumer. Without the second, you will not get the first. As consumers, we need to be told what this will do for us in a way that we can understand and see real benefits from. Not "off the wall" concepts of futuristic homes, but down-to-earth realities that we could feel today... or even tomorrow (short term). 
Not to mention, at a price point which is do-able for the majority. Could any of it be done? I believe so, but I think, at this moment, there is no one company, no group of companies, who will bother. The ROI just is not there. Getting the right hardware/software combination is critical. I have gone from X-10 to the latest "Insteon" products and reliability has improved 90%. I ran X10 for years and had reliability and interference issues. 99% of these are gone with Insteon, the appearance of the devices is much nicer and the programmability is superior - using the ISY99 from Universal Devices. The author should seriously refrain from giving electrical advice: ... White is neutral (hot) and is ... I really hope this was a typo, because it kind of suggests that the author doesn't know the (very basic!) difference between hot (black) and neutral (white). In the name of not having any Tom's readers electrocute themselves, I'd recommend saying nothing more. Sometimes you'll happen upon different colors used for the same thing just because an older standard wasn't specifying a color for certain things at all, or the technician ran out of white and didn't care... Good point, neutral is not hot, it's the return (completes the circuit) but not really a safety issue. People afraid of touching the neutral won't hurt anything, besides, the author already covered making sure that power was removed from the circuit.
http://www.tomshardware.com/reviews/home-automation-insteon,2308.html
Customizing Plot Legends

Plot legends give meaning to a visualization, assigning meaning to the various plot elements. We previously saw how to create a simple legend; here we'll take a look at customizing the placement and aesthetics of the legend in Matplotlib. The simplest legend can be created with the plt.legend() command, which automatically creates a legend for any labeled plot elements:

import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
import numpy as np

x = np.linspace(0, 10, 1000)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), '-b', label='Sine')
ax.plot(x, np.cos(x), '--r', label='Cosine')
ax.axis('equal')
leg = ax.legend();

But there are many ways we might want to customize such a legend. For example, we can specify the location and turn off the frame:

ax.legend(loc='upper left', frameon=False)
fig

We can use the ncol command to specify the number of columns in the legend:

ax.legend(frameon=False, loc='lower center', ncol=2)
fig

We can use a rounded box (fancybox) or add a shadow, change the transparency (alpha value) of the frame, or change the padding around the text:

ax.legend(fancybox=True, framealpha=1, shadow=True, borderpad=1)
fig

For more information on available legend options, see the plt.legend docstring.

Choosing Elements for the Legend¶

As we have already seen, the legend includes all labeled elements by default. If this is not what is desired, we can fine-tune which elements and labels appear in the legend by using the objects returned by plot commands. The plt.plot() command is able to create multiple lines at once, and returns a list of created line instances. Passing any of these to plt.legend() will tell it which to identify, along with the labels we'd like to specify:

y = np.sin(x[:, np.newaxis] + np.pi * np.arange(0, 2, 0.5))
plt.plot(x, y[:, 0], label='first')
plt.plot(x, y[:, 1], label='second')
plt.plot(x, y[:, 2:])
plt.legend(framealpha=1, frameon=True);

Notice that by default, the legend ignores all elements without a label attribute set.
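The label-filtering rule just described can be checked programmatically. Here is a minimal sketch (assuming Matplotlib and NumPy are installed, and using the non-interactive Agg backend so no display is needed):

```python
import matplotlib
matplotlib.use('Agg')   # headless backend; nothing needs to be displayed
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label='Sine')   # labeled: picked up by the legend
ax.plot(x, np.cos(x))                 # no label: ignored by the legend
ax.plot(x, 0 * x, label='_hidden')    # labels starting with '_' are also skipped

# The same filtering that ax.legend() applies:
handles, labels = ax.get_legend_handles_labels()
print(labels)   # ['Sine']
```

Labels beginning with an underscore are treated as private and skipped; only explicitly labeled artists survive the filter, which is also why the empty-list trick later in this section works.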
Legend for Size of Points¶

Sometimes the legend defaults are not sufficient for the given visualization. For example, perhaps you're using the size of points to mark certain features of the data, and want to create a legend reflecting this. Here is an example where we'll use the size of points to indicate populations of California cities. We'd like a legend that specifies the scale of the sizes of the points, and we'll accomplish this by plotting some labeled data with no entries:

import pandas as pd
cities = pd.read_csv('data/california_cities.csv')

# Extract the data we're interested in
lat, lon = cities['latd'], cities['longd']
population, area = cities['population_total'], cities['area_total_km2']

# Scatter the points, using size and color but no label
plt.scatter(lon, lat, label=None,
            c=np.log10(population), cmap='viridis',
            s=area, linewidth=0, alpha=0.5)
plt.axis(aspect='equal')
plt.xlabel('longitude')
plt.ylabel('latitude')
plt.colorbar(label='log$_{10}$(population)')
plt.clim(3, 7)

# Here we create a legend:
# we'll plot empty lists with the desired size and label
for area in [100, 300, 500]:
    plt.scatter([], [], c='k', alpha=0.3, s=area,
                label=str(area) + ' km$^2$')
plt.legend(scatterpoints=1, frameon=False, labelspacing=1, title='City Area')

plt.title('California Cities: Area and Population');

The legend will always reference some object that is on the plot, so if we'd like to display a particular shape we need to plot it. In this case, the objects we want (gray circles) are not on the plot, so we fake them by plotting empty lists. Notice too that the legend only lists plot elements that have a label specified. By plotting empty lists, we create labeled plot objects which are picked up by the legend, and now our legend tells us some useful information. This strategy can be useful for creating more sophisticated visualizations.
Finally, note that for geographic data like this, it would be clearer if we could show state boundaries or other map-specific elements. For this, an excellent choice of tool is Matplotlib's Basemap addon toolkit, which we'll explore in Geographic Data with Basemap.

Multiple Legends¶

Sometimes when designing a plot you'd like to add multiple legends to the same axes. Unfortunately, Matplotlib does not make this easy: via the standard legend interface, it is only possible to create a single legend for the entire plot. If you try to create a second legend using plt.legend() or ax.legend(), it will simply override the first one. We can work around this by creating a new legend artist from scratch, and then using the lower-level ax.add_artist() method to manually add the second artist to the plot:

fig, ax = plt.subplots()
lines = []
styles = ['-', '--', '-.', ':']
x = np.linspace(0, 10, 1000)

for i in range(4):
    lines += ax.plot(x, np.sin(x - i * np.pi / 2),
                     styles[i], color='black')
ax.axis('equal')

# specify the lines and labels of the first legend
ax.legend(lines[:2], ['line A', 'line B'],
          loc='upper right', frameon=False)

# Create the second legend and add the artist manually.
from matplotlib.legend import Legend
leg = Legend(ax, lines[2:], ['line C', 'line D'],
             loc='lower right', frameon=False)
ax.add_artist(leg);

This is a peek into the low-level artist objects that comprise any Matplotlib plot. If you examine the source code of ax.legend() (recall that you can do this within the IPython notebook using ax.legend??) you'll see that the function simply consists of some logic to create a suitable Legend artist, which is then saved in the legend_ attribute and added to the figure when the plot is drawn.
https://jakevdp.github.io/PythonDataScienceHandbook/04.06-customizing-legends.html
Plack::App::DAIA - DAIA Server as Plack application

version 0.471

To quickly hack a DAIA server, create a simple app.psgi:

use Plack::App::DAIA;
Plack::App::DAIA->new( code => sub {
    my $id = shift;
    # ...construct and return DAIA object
} );

However, you should better derive from this class:

package Your::App;
use parent 'Plack::App::DAIA';
sub retrieve {
    my ($self, $id, %parts) = @_;
    # construct DAIA object (you must extend this in your application)
    my $daia = DAIA::Response->new;
    return $daia;
};
1;

Then create an app.psgi that returns an instance of your class:

use Your::App;
Your::App->new;

This module implements a DAIA server as PSGI application. It provides serialization in DAIA/XML and DAIA/JSON and automatically adds some warnings and error messages. The core functionality must be implemented by deriving from this class and implementing the method retrieve. The following serialization formats are supported by default: DAIA/XML format (default), DAIA/JSON format, and DAIA/RDF in RDF/JSON. In addition you get DAIA/RDF in several RDF formats (rdfxml, turtle, and ntriples) if RDF::Trine is installed. If RDF::NS is installed, you also get known namespace prefixes for RDF/Turtle format. Furthermore the output formats svg and dot are supported if RDF::Trine::Exporter::GraphViz is installed to visualize RDF graphs (you may need to make sure that dot is in your $ENV{PATH}).

Creates a new DAIA server. Supported options are:

xslt - Path of a DAIA XSLT client to attach to DAIA/XML responses. Not set by default and set to daia.xsl if option html is set. You still may need to adjust the path if your server rewrites the request path.

html - Enable a HTML client for DAIA/XML via XSLT. The client is returned in form of three files (daia.xsl, daia.css, xmlverbatim.xsl) and DAIA icons, all shipped together with this module. Enabling the HTML client also enables serving the DAIA XML Schema as daia.xsd.

warnings - Enable warnings in the DAIA response (enabled by default).
code - Code reference to the retrieve method if you prefer not to create a module derived from this module.

idformat - Optional regular expression to validate identifiers. Invalid identifiers are set to the empty string before they are passed to the retrieve method. In addition an error message "unknown identifier format" is added to the response, if warnings are enabled.

AUTHOR

Jakob Voss

This software is copyright (c) 2012 by Jakob Voss. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://search.cpan.org/~voj/Plack-App-DAIA-0.471/lib/Plack/App/DAIA.pm
05 April 2012 16:02 [Source: ICIS news] LONDON (ICIS)--The accident on Saturday 31 March resulted in the death of two workers. Prosecutors at Evonik's headquarters said experts are continuing their investigation of the plant's technical work processes. The accident occurred at a plant producing cyclododecatriene (CDT). CDT is used to make laurolactam, which is used as a monomer in polyamide 12 (PA12). Meanwhile, Evonik warned of substantial constraints in supplying certain polyamide 12 and C12 monomer products because of the accident at the plant. Evonik added that it is making every effort to ensure the CDT plant is repaired and returned to full capacity as soon as possible.
http://www.icis.com/Articles/2012/04/05/9548269/evonik-workers-not-at-fault-in-germany-plant-explosion.html
Aaron T. Myers commented on HDFS-2291:
--------------------------------------

Thanks a lot for providing this patch, Todd. What's below are mostly nits. I agree that there could be a few more comments for the new public methods, so I didn't include that in my feedback.

# {{dfs.namenode.standby.checkpoints}} - perhaps include ".ha" in there to make it clear that this option is only applicable in an HA setup?
# Might as well make the members of {{CheckpointConf}} final.
# {{LOG.info("Counted txns in " + file + ": " + val.getNumTransactions());}} - Either should be removed or should not be info level.
# {{prepareStopStandbyServices}} is kind of a weird name. Perhaps "prepareToStopStandbyServices"?
# "// TODO interface audience" in {{TransferFsImage}}
# Does it not seem strange to you that the order of operations when setting a state is "prepareExit -> prepareEnter -> exit -> enter", instead of "prepareExit -> exit -> prepareEnter -> enter"? i.e. I don't think there's a correctness issue here, but if I were designing a system where this set of events is triggered, I'd go with the latter.
# What's the point of the changes in {{EditLogTailer}}?
# "TODO: need to cancel the savenamespace operation if it's in flight" - I think this comment is no longer applicable to this patch, right?
# {{LOG.info("Time for a checkpoint !");}} - while strictly accurate, this doesn't seem to be the most helpful log message.
# Can we make {{CheckpointerThread}} a static inner class?
# {{e.printStackTrace();}} in {{CheckpointerThread}} should probably be tossed.
# Nit: in {{CheckpointerThread#doWork}}: "if(UserGroupInformation.isSecurityEnabled())" - space between "if" and "(", and curly braces around body of "if".
# You use "System.currentTimeMillis" in a bunch of places. How about replacing with "o.a.h.hdfs.server.common.Util#now"?
# Does it make sense to explicitly disallow the SBN from allowing checkpoints to be uploaded to it? I realize the case when both nodes are in standby is already handled by this patch, since you don't allow checkpoints if the node already has a checkpoint for a given txid, but I mean from a principled perspective. It seems kind of odd to me that two nodes both sitting in standby would be doing checkpoint transfers at all.

> HA: Checkpointing in an HA setup
> --------------------------------
>
>                 Key: HDFS-2291
>                 URL:
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ha, name-node
>    Affects Versions: HA branch (HDFS-1623)
>            Reporter: Aaron T. Myers
>            Assignee: Todd Lipcon
>             Fix For: HA branch (HDFS-1623)
>
>         Attachments: hdfs-2291.txt, hdfs-2291.txt
>
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201201.mbox/%3C1669730648.2918.1325638847833.JavaMail.tomcat@hel.zones.apache.org%3E
The Python Flair JSON-NLP package

Project description

Flair JSON-NLP Wrapper (C) 2019 by Damir Cavar, Oren Baldinger, Maanvitha Gongalla, Anurag Kumar, Murali Kammili

Brought to you by the NLP-Lab.org!

Introduction

Flair v4.1 wrapper for JSON-NLP. Flair provides state-of-the-art embeddings and tagging capabilities, in particular POS-tagging, NER, shallow syntax chunking, and semantic frame detection.

FlairPipeline

We provide a FlairPipeline class, with the following parameters for ease of use:

lang: defaults to en. Different languages support different models, see the Flair Docs for details.
use_ontonotes: defaults to False. Whether or not to use 4-class (True) or 12-class (False) NER tagging.
fast: defaults to True. Whether or not to use the smaller, faster, but slightly less accurate versions of the models.
use_embeddings: defaults to ''. Passing default will map to glove, multi-forward, multi-backward, the recommended stacked-embedding configuration.
char_embeddings: defaults to False. Whether or not to include character-level embeddings.
bpe_size: defaults to 0. If you want to include Byte-Pair Encodings, set this value to 50, 100, 200, or 300. See more at Flair Embeddings Docs.
pos: defaults to True. Whether or not to include language-specific part-of-speech tags.
sentinment: defaults to True. Whether or not to include sentiment analysis, if it is available for the given language.

Tagging and embedding models are downloaded automatically the first time they are called. This may take a while depending on your internet connection.

Microservice

The JSON-NLP repository provides a Microservice class, with a pre-built implementation of Flask. To run it, execute:

python flairjsonnlp/server.py

Since server.py extends the Flask app, a WSGI file would contain:

from flairjsonnlp.server import app as application

The microservice exposes the following URIs:

- /expressions
- /token_list

These URIs are shortcuts to disable the other components of the parse.
In all cases, tokenList will be included in the JSON-NLP output. Text is provided to the microservice with the text parameter, via either GET or POST, e.g. ?text=I am a sentence. If you pass url as a parameter, the microservice will scrape that url and process the text of the website. The additional Flair parameters can be passed as parameters as well, for example in a GET call with ?text=Ich bin ein Berliner.
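As a sketch of how a client might assemble such a GET call, the query string can be built with the standard library. The host and port below (Flask's localhost:5000 default) are assumptions, not something the package documentation pins down:

```python
from urllib.parse import urlencode

# Hypothetical base URL; adjust to wherever server.py is actually running.
BASE = 'http://localhost:5000'

def build_url(endpoint, text, **flair_params):
    """Compose a GET URL passing `text` plus optional Flair parameters
    (lang, fast, use_ontonotes, ...) as query arguments."""
    query = urlencode({'text': text, **flair_params})
    return '{}/{}?{}'.format(BASE, endpoint, query)

url = build_url('token_list', 'I am a sentence', lang='en', fast=True)
print(url)  # http://localhost:5000/token_list?text=I+am+a+sentence&lang=en&fast=True
```

The same URL could then be fetched with any HTTP client; urlencode takes care of escaping spaces and special characters in the text.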
https://pypi.org/project/flairjsonnlp/0.0.7/
React Context and Re-Renders: React Take the Wheel

A React context Provider will cause its consumers to re-render whenever the value provided changes.

// first render
<Provider value={1}>

// next render
<Provider value={2}>

// all consumers will re-render
<Consumer>{value => (/*...*/)}</Consumer>

This is no big deal if you're passing primitive values to value. If you pass 2 multiple times in a row, the consumers won't re-render. However, if you're passing objects then you've got to be more careful. The following code will cause consumers to re-render every time the Menu renders, even if nothing in the Menu actually changed (perhaps the component that rendered Menu is changing state).

let MenuContext = React.createContext()

class Menu extends React.Component {
  constructor(props) {
    super(props)
    this.state = { value: this.props.defaultValue }
    this.setValue = this.setValue.bind(this) // bind so consumers can call it
  }

  setValue(newValue) {
    this.setState({ value: newValue })
  }

  render() {
    return (
      <MenuContext.Provider
        value={{
          value: this.state.value,
          setValue: this.setValue
        }}
      >
        {/* other stuff */}
      </MenuContext.Provider>
    )
  }
}

You Have Two Choices

- Handle mutation yourself 🤡
- Let React do it 😎

You should let React do it. The only catch is that you'll be putting something in state that bothers you. It's okay, you'll get over it 😬

- Move setValue into your state object (and then get over it 😋)
- Use this.state as the value to Provider.

class Menu extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      value: this.props.defaultValue,
      setValue: (newValue) => {
        this.setState({ value: newValue })
      }
    }
  }

  render() {
    return (
      <MenuContext.Provider value={this.state}>
        {/* other stuff */}
      </MenuContext.Provider>
    )
  }
}

Now you no longer have to worry about whether you changed the identity of the context you provided between renders. React took the mutation wheel.

In short: if all you ever pass to a Provider is this.state, then you'll always render responsibly!
https://medium.com/@ryanflorence/react-context-and-re-renders-react-take-the-wheel-cd1d20663647
14 Oct

Opening a ThickBox iframe from Flash

If you don't know what ThickBox is, check it out here: That's some nice way to present content on your site, right? Anyway, I've been playing around with ThickBox for quite some time now, and a recent project required me to call up a ThickBox iframe from within Flash itself. As the documentation on the site only provided HTML help on how to implement ThickBox, I had to look for a way to do this myself. A search on Google yielded a few complicated ways to do this, which I did not think were effective since they involved writing new Javascript functions. Then, I found this article which describes how to do almost exactly what I wanted - but it requires writing a new function. And, it is opening an image, and not an iframe. With what I learnt from the article, I went to take a look at ThickBox's code. Thanks to their neat comments, I understood how the functions work after a few minutes, and I implemented what I learnt from the article (which is how to use ExternalInterface) to do exactly what I wanted. The idea is to call the Javascript function that launches an iframe ThickBox from thickbox.js. The function name is actually tb_show, with 3 parameters that you can pass in, namely, the caption, url and imageGroup. Obviously, what I need to pass in here is the URL. Therefore, with the ExternalInterface, this is the code I need to put into Flash:

[code lang="actionscript"]
import flash.external.ExternalInterface;
Button.onRelease = function(){
    ExternalInterface.call("tb_show", null, "page.html?KeepThis=true&TB_iframe=true&height=150&width=150", false);
}
[/code]

If that doesn't make sense to you, you need to see how ThickBox is launched with HTML originally:

[code lang="html"]
[/code]

You can see that it requires the a tag to have a "thickbox" class. Which is why you can't just use a getURL in Flash. I hope this helps people out there.
=)

Posted by Hugo on 14.10.07 at 5:00 pm
Hi there, I tried to implement your code but got no success… nothing happens, is it complete?

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
Hi, Yea, it is complete… Can you show me your page and your code?

Posted by Hugo on 14.10.07 at 5:00 pm
Ok, here is the actionscript (it's a 1 frame movie, just for testing purposes), and the code is in timeline:

import flash.external.ExternalInterface;
Botao.onRelease = function() {
    ExternalInterface.call("tb_show", "teste", "teste.html?KeepThis=true&TB_iframe=true&height=150&width=150", false);
};

Now the html file: teste_thickbox ` @import "Scripts/thickbox.css"; AC_FL_RunContent( 'codebase','','width','400','height','400','id','teste_thickbox','align','middle','src','teste_thickbox','quality','high','bgcolor','#66ccff','name','teste_thickbox','allowscriptaccess','sameDomain','pluginspage','','wmode','transparent','movie','teste_thickbox' ); //end AC code Example 1

Please notice that the html code (… etc) does work! Nothing happens when I click the flash button =( Am I doing anything wrong? Thanks in advance!

Posted by Hugo on 14.10.07 at 5:00 pm
Hmmm… sorry, I think I've messed up your page width

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
It's ok, lol. Try using swfObject to load in your swf. And for the ExternalInterface calling, try to replace your "teste" with null instead. I'm not exactly sure what that argument does, but I didn't need it. It'll also help if you set this up on your server and link me to it, so I can take a look.

Posted by Hugo on 14.10.07 at 5:00 pm
Man… no success here =( I've uploaded it: Thanks in advance!

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
Hey, For me, the HTML link didn't work too. I noticed that the JS files aren't in the directory where you linked them to be. They should be in a folder called "Scripts" where your html page is.
Posted by Hugo on 14.10.07 at 5:00 pm
Oooops, dude, sorry, forgot to upload the .js files… it's ok now, the html works but the flash does not…

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
Hey Hugo, It's actually working for me, you do not have the teste.html file right? That's why it's showing a page not found error within the iframe. Try changing your code to the following to see if Google loads up fine instead. =]

import flash.external.ExternalInterface;
Botao.onRelease = function(){
    ExternalInterface.call("tb_show", null, "? , false);
}

Posted by Hugo on 14.10.07 at 5:00 pm
Dude, I've tested with urls, files, everything… It was not working, I really dunno why. Maybe a typo somewhere? Well, thank you very much, I feel bad about wasting your time, so thank you very much once again. PS: is the project I'm working on, just to let you know =) Thanks!

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
Pretty sleek work there. Don't worry about me wasting my time. =] I'm not too sure if this method of mine works 100% for everyone too, so I wanna know what will make it not work, and what it needs to work. Have you tried to embed your flash with swfObject instead?

Posted by Hugo on 14.10.07 at 5:00 pm
Man, thank you very much once again. I've been working on that project, when it's complete I'll show ya PS: email is gtalk too, in case you want it! Cya!

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
Hey, cool. Did it work in the end? =]

Posted by jason on 14.10.07 at 5:00 pm
Help! The html works but it wont work from flash

Posted by jason on 14.10.07 at 5:00 pm
the button seems to be calling a function that triggers the error: "undefined"

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
Hmm the undefined should be from another part of your script that used a trace command. Are you using Flash 8? Publishing with Flash 8?

Posted by jason on 14.10.07 at 5:00 pm
Can you please help me with the flash file.
I really need this project for Monday.

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
sure, email it to me at darksuiyoken[at]gmail[dot]com

Posted by jason on 14.10.07 at 5:00 pm
I emailed you the flash file. Thank You!

Posted by jack on 14.10.07 at 5:00 pm
Hello, My thickbox wont open. I have a 1 frame long movie and in one layer i have:

import flash.external.ExternalInterface;
but1.onRelease = function(){
    ExternalInterface.call("tb_show", null, "next.html?KeepThis=true&TB_iframe=true&height=150&width=150", false);
}

in the other layer i have a button called but1. what did i do wrong? do u have a online example somewhere? thanks!

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
hi jack, you can get the source files from this post:

Posted by Qbic on 14.10.07 at 5:00 pm
great solution

Posted by Jeff Phillips on 14.10.07 at 5:00 pm
You're awesome buddy! Thanks a million. Works like a charm. I would have worked on this forever maybe if you hadn't taken the few minutes to blog your findings. I googled "call thickbox from flash" and you were #2, (just below the article you mention in your post =). I'll pass on the good karma as best I can!

Posted by jerang on 14.10.07 at 5:00 pm
Thanks for the great article. For those who have flash embedded in HTML and are having trouble with iframe interference, just add wmode=transparent as an attribute and parameter.

Posted by Markc on 14.10.07 at 5:00 pm
I've got this working in IE6, Firefox and Safari but can't get it working in Opera - any ideas?

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
hi mark, I don't really have the time to make it work for Opera right now. If you ever find a solution, do share it here!

Posted by Max on 14.10.07 at 5:00 pm
Hey, you helped me a lot. Thanks!

Posted by Brandon on 14.10.07 at 5:00 pm
Hi, I am actually trying to do the same thing with my website - call thickbox from a flash movie, but I am having no luck.
The HTML thickbox links work great, but I can't get the links to work from the flash movies. Is there anyway you could personally help me (by phone or email), and I would be willing to pay you for your time. If so, please send me an email. Thanks, Brandon

Posted by tristen on 14.10.07 at 5:00 pm
I know this is an older post but it would be nice if your example worked!

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
Did you look at the newer post?

Posted by Chris Homan on 14.10.07 at 5:00 pm
Hey…

Posted by DarkSuiyoken on 14.10.07 at 5:00 pm
Hi Chris, My post is about opening a thickbox frame from flash, but your site is actually purely Thickbox with HTML/CSS. Perhaps you should look on the ThickBox community for help for this.

Posted by George on 14.10.07 at 5:00 pm
Simply Excellent Post

Posted by ricmetal on 14.10.07 at 5:00 pm
thanks for that

Posted by Chris on 14.10.07 at 5:00 pm
Thanks alot for this! worked just fine, the .fla file was real handy! keep it up!

Posted by Paul on 14.10.07 at 5:00 pm
This worked perfectly! Thank you very much.
http://designfission.com/blog/2007/10/14/opening-a-thickbox-iframe-from-flash/
I'm interested in increasing my understanding of some of the more advanced programming language constructs I see discussed here and in other places. Specifically the ones that are common in functional programming (first class everything, continuations, closures, etc.). Now the best way to understand them is to actually use them in a language that implements them. But that requires a good investment in learning those languages (Haskell, Lisp, etc.). What I'm looking for are some papers that describe the various advanced topics in terms that a regular C programmer, like myself, can understand. In other words, pretend that you are extending the C language -- what would the syntax look like for the various constructs? Here's an example explanation for first class functions: In C, you can take the name of a function, and pass it as a parameter to another function, like what is used in the standard library qsort(). You can also assign those function names to variables and array elements (which is useful for creating state machines). However, you have to create those functions somewhere else in the program. This can make it difficult to read the code and figure out what is going on. In the case of qsort(), which requires (among other things) a pointer to a comparison function, that function is probably only used once. So it would make sense to define the function right there where it is used. This would require first class anonymous functions. If C supported this, then the syntax would probably look like this: qsort (my_array, 25, sizeof(char *), lambda(const void *p1, const void *p2) { return strcmp(*(char * const *) p1, *(char * const *) p2); }); In this example, the keyword "lambda" creates a function on the fly, and returns the address of that function as its value.
You can also use the expression: x = lambda(int arg1, int arg2) { /* contents of function */ }; And you can now call the function via: "x(arg1, arg2);". So, if C added the "lambda" keyword with the semantics given above, then it would have full first class anonymous functions. ... As you can see, I think I've got an understanding of first class functions. What I'd like to see are write-ups similar to the above explaining other concepts such as closures, continuations, monads, or anything else that could be expressed in an imperative language such as C. Or, if anything doesn't fit in that context, an explanation of that also. A couple of postings that I've found that come close are Joel Spolsky's article "Can your programming language do this?" (which gives examples in Javascript), and the paper "Functional programming for the rest of us" on defmacro.org. In case you are wondering why I'm looking for this, first of all I kind of understand some of the concepts, but would like to further cement my understanding. And secondly, I'd like something I can send to other people I know that are primarily C (or similar language) programmers (since I don't understand the concepts enough to explain them). Also, I'd like to extend the language I'm developing to support as much as is relevant (yes, I know, why write another language -- I'm doing it mostly for my own education, and possibly to use as a resource for others to learn from). Thanks.

I would suggest not doing this in C, mostly to not bog yourself down. For example, while I agree what you've described is a first-class function, an overwhelmingly key point you did not demonstrate was lexical scoping of open variables in the function body: we call this a closure, and, oversimplifying, it is why Douglas Crockford saw JavaScript as The Next Big Thing. It's a pain to encode this notion in traditional C/C++, yet it is also the starting point for many interesting things.
Considering you seem more interested in novel control structures than building verifiable languages, and want to build up language features, the build-your-own interpreter approach sounds right down your alley. PLAI or HtDP have a great presentation of this, and PLAI is freely available online so you can give it a trial run. While you could probably implement a good chunk of it in something like JavaScript, you'd miss out on being able to experiment with the features you're trying to implement (e.g., continuations and macros done right), and would again get bogged down with things like syntax. Instead, go the traditional (PLT) Scheme road. At my undergrad PL course, students learned all the necessary Scheme to get going in the first night; it ain't bad. It might be worth trying to figure out how to turn your Scheme code into machine code (or at least C code), but there is a lot of noise like dealing with parsing and C's type system that take too much time: you explore a *lot* of language features in this approach, so you don't want to deal with non-essential issues. After, you'd be in a great position to go more theoretical (type theory with TAPL), or more low level (I'd like to recommend a compiler book, but am not really satisfied with any intro level ones for interesting languages). Finally, "Concepts, Techniques and Models of Computer Programming" (CTM) covers a bit more, such as concurrency, but skips other things like syntactic abstraction (macros). I haven't seen it used in a learning environment, so I can't yet vouch for it. All three of these resources are *very* clearly written. I have learned a few things from CTM, but in order to get further I feel like I need to get my head wrapped around the OZ language that the examples are presented in. Maybe I'm just lazy, but am looking for that "ah-ha" moment with a bit lighter reading (although I do agree that CTM is very well written). I'll start looking through the other texts that you mentioned. 
an overwhelmingly key point you did not demonstrate was lexical scoping of open variables in the function body: we call this a closure, Yep, I forgot about scoping issues. What I've done in my language, 2e, is as follows: A function is defined by specifying "@name(expression)", where expression is any amount of code. This not only assigns the expression to the given name, so it can be called like a C function "name(p1, p2, ...)" but the definition also returns a value that can be assigned or called. Now anonymous functions have two forms: "@@(expression)" and "@(expression)". They can both be assigned to a variable or array element, but the second form has its variables in the same scope as the parent function. So in that sense, you can do closures (I think). Like this: addfunc = @($1 + $2); add5 = @(addfunc($1, 5)); result = add5(7) #result should be equal to "12" In this case, the "add5()" function closes over the "addfunc()" function. This is the example that I've seen in other discussions on closures, hopefully I've got it right. If anyone wants to try out some variations, I've uploaded the latest version of my language to lang2e.sourceforge.net -- version 0.9. If I have got closures correct, then I'm going to go for continuations next (or at least get enough in so I can implement co-routines as described by Knuth). BTW, one weakness that I've got is that all variables in my functions are "static". So you can use recursion, but it is a bit tricky with variables getting clobbered (I've got code samples to show how to work around this). And another weakness is that I haven't done anything with tail call optimization yet. I've figured out an easy way to do it, just got to get time to test it is all. Thanks for the reading list, it looks like I've got some (fun) work ahead of me. BTW, one weakness that I've got is that all variables in my functions are "static". 
If all your variables are global, and your syntax has no way to refer to outer function arguments either (which seems to be the case), then you don't have closures. For the same reason, C does not have closures. Closures are an implementation technique for accessing variables (including function arguments) that are local, but to some enclosing scope. So your example does not exhibit closures. Here it is in lambda-calculus notation:

add = \(x,y).x+y
add5 = \x.add(x,5)
result = add5(7)

You witness closures when you are able to express it like this:

add = \x.(\y.x+y)
add5 = add(5) -- closes over x
result = add5(7)

Of course, this is a rather boring example. It becomes more interesting if the add function actually does something before returning the inner function, e.g.:

add = \x.(let y = some_f(x); \z.y+z)

Then, some_f only gets called once, when add5 is defined. With your version you could not avoid calling it each time add5 is used.

What I mean by static variables is that within a function, variables' values don't disappear after the function exits, similar to C when you use the "static" keyword to declare a variable. (This is partly due to laziness on my part in implementing the interpreter.) But there are two types of functions -- one type shares variable scope with whatever function it was defined in (function definitions can be nested), and the other type has variables that are in a private scope (but they are still persistent after the function exits). Only anonymous functions defined with a single "@" symbol have shared variable scope -- ones defined with two "@@" symbols have separate scope, same with named functions. So far I don't have any syntax to get global variables (except for the fact that the names of named functions are global). But I can see now that this fact still doesn't help me with closures. I think I understand why I don't have closures, but will have to experiment a bit to see what it would take syntax-wise to get them.
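A hedged JavaScript transcription of the lambda-calculus snippets above (the names mirror the notation; some_f is a stand-in function), showing that the interesting version calls some_f exactly once, when the closure is created, not on every use:

```javascript
// add = \x.(\y.x+y) -- the inner function closes over x
const add = x => y => x + y;
const add5 = add(5);             // closes over x = 5
console.log(add5(7));            // 12

// add' = \x.(let y = some_f(x); \z.y+z)
let calls = 0;
const some_f = x => { calls += 1; return x * 2; };
const addPrime = x => {
  const y = some_f(x);           // runs once, at closure-creation time
  return z => y + z;
};

const add5b = addPrime(5);
console.log(add5b(1), add5b(2)); // 11 12
console.log(calls);              // 1 -- not once per use of add5b
```

In a language without closures, the work of some_f would have to be redone (or manually cached somewhere global) on every call.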
From what it appears, you have to be able to have a function with local variable scope, but also be able to attach a group of variables to it that are seen within its scope as well as the scope of whatever function attached them to it. Kind of similar to a C++ class definition (but somewhat different)?

Kind of similar to a C++ class definition (but somewhat different)? There is a very strong correspondence between OO-style objects and closures.

I think I understand why I don't have closures, but will have to experiment a bit to see what it would take syntax-wise to get them. You probably don't need syntax. Most languages with closures don't do anything to specially flag variables that can be captured. They typically just follow the normal block scoping conventions for the language. E.g. in the curly-braced syntax of JavaScript* it looks something like this:

function add(x) { return function(y) { return x + y; } }
var add5 = add(5);
alert(add5(7));

At the implementation level there may very well be a difference between captured variables and non-captured variables, but that doesn't mean the syntax needs to show it. Also, one point to clarify from your original post is that anonymous lambdas aren't a prerequisite for first-class functions and closures. Again, some JavaScript:

function add(x) { function addx(y) { return x + y; } return addx; }
var add5 = add(5);
alert(add5(7));

If a language has local first-class named functions, then anonymous lambdas are just a syntactic convenience.

* note that JavaScript doesn't really have block scoping, but the difference isn't that important for this example.

So in my language's context:

f1 = @@(
  fv1 = 123;
  f2 = @( print($1, fv1) );
  ### additional code;
  f2("foo")
)

In this case, f1() has separate namespace from the rest of the program since it is defined with two "@@" symbols. But f2() shares namespace with f1(). Therefore, the print statement when f2 is called prints "foo123".
So in this sense it looks like I have closures (f1 closes over f2). But here is where it falls apart:

some_variable = 123;
f1 = @( print(some_variable) );
f2 = @@( some_variable = 456; $1() );
### now call f2, passing f1 as its argument
f2(f1)

In this case, the variable name "some_variable" that f1() sees is the one that was defined above it, so it prints out "123", even when it is called from within f2(). I think that to make this useful, I should get f1 to access the variable as it is set from within f2, when it is called from there. (Also, to make it useful, I should define a way so that named functions can get either a separate or shared namespace, just like I have for anonymous functions.)

Most languages are lexically scoped by default. That just means that the meaning of a symbol is bound based on its position in the (static) code rather than when the variable is examined during the dynamic execution of the program. Assuming that in your second example you have two different variables named "some_variable" and you aren't just mutating one global variable, then lexical scoping would say that "123" is the right answer. The alternative you propose is called dynamic scoping (again, assuming you aren't just mutating one variable). Before committing one way or the other, be sure to dive into some explanations of the trade-offs involved. There's a very good reason that lexical scoping has been the far more common default for the last several decades: it's usually much easier to reason about. On the other hand, dynamic scoping does have power, and even some very recent languages offer dynamically scoped variables as a non-default alternative. I'm confused by what you are trying to accomplish with namespacing.

Right, the way the interpreter is currently implemented, each variable, when it is found by the parser, is assigned an index number into the variable storage data structure. So the variable name is combined with the function reference i.d.
that it is found in, and this is used to find the variable's index. This index number is what gets loaded into the final compiled code. So lexical scoping is the most natural at the moment. But right now I don't have any implementation for object-oriented features. So as a quick fix, I thought that if a function could access the variables of a wrapper function, then that would be a way to get a form of objects. Now they would act differently than C++ classes, but that also fits in with one of my goals -- by re-inventing the wheel without exactly copying what was done before, I might discover something a bit more powerful (kind of like looking at old ideas with fresh eyes). Another idea was to emulate objects strictly through operator overloading. So far I've got a syntax that works great for using these "objects", but setting them up is a pain and non-intuitive. But I guess that is part of the learning experience.

BTW, I like your idea of defaulting to lexical scoping, and using alternate syntax to get dynamic scoping. I was thinking of something like "namespace::varname" where namespace is the name of some other function, and this will retrieve the variable from that other function (::varname will grab variables from the main function, which is unnamed).

So the variable name is combined with the function reference i.d. that it is found in, and this is used to find the variable's index. This index number is what gets loaded into the final compiled code. Does that mean functions aren't reentrant? Well, I have to say that's...bold :-)

Yes, even though functions have separate namespaces, if a function calls itself then the inner calling can clobber variables that may be needed once the inner loop returns. The way I get around it is to pass a counter parameter to the function, and replace any variable that needs to be preserved with an array -- using that counter parameter as an index. So, foo(var1, 0); @foo( x[$1] = something; ... foo(var1, $1 + 1); bla(x[$1]); ...
) In this example, x is a variable that is used before and after a recursive call to foo(). So it was replaced by an array, and every time foo calls itself it increments the second parameter by one. In a future version, I'll use a built-in internal function, called "varpush(x)", which will push the current value of x onto a stack at the beginning of the function. Then when the function exits, it will pop the old value back off the stack. This way, instead of setting up and tearing down all of a function's variables every time it is entered / exited, only the variables that need to be preserved will get extra handling, thereby keeping the performance up. Yes, bold, I know, and potentially error-prone. But I couldn't bear the overhead of messing with all the variables every time I entered or exited a function (and it makes the interpreter code that much easier to understand).

Based off of this conversation, I really suggest following something like PLAI and using Scheme. You'll write the entire thing in 20 lines of functions and lists and quickly progress well beyond this point. For example, if you properly wrote the interpreter, making such changes should only impact 1-5 lines of code. Recursive functions, lexical scoping, mutable variables, continuations, futures, etc. require minimal but conceptually powerful changes in such an approach. You'll figure out the naive implementation approach, and, more importantly, the basic 'clean' semantic design space and sample usage patterns. After you understand what's interesting to include, then start thinking about what optimizations (local or global) are possible for your restricted set of features: for example, profile-driven inlining of functions and subsequent expression elimination transforms might yield a much bigger win than a funky stack discipline.

In essence, all your variables are mutable as well as global (though not necessarily globally visible).
That's the opposite of lexical scoping - or any scoping at all, for that matter - and a sure recipe for disaster. And it is not only extremely error-prone and precludes recursion, as James points out, it also makes first-class functions practically useless. I feel that the discussion is evidence that your approach of trying to understand advanced PL concepts in terms of C does not fly. You should take the advice already given by others and invest time to actually learn some high-level language and respective implementation techniques.

My variables are mutable (so it's not pure-functional), and internally they are all stored in the same data structure, but from the 2e script a variable "foo" in one function is unique to that function, and the same variable named "foo" in another function (or the main program) is unique there, just like in C or Perl, etc. Isn't that what lexical scoping is? They are just statically allocated at "compile" time instead of dynamically allocated when a function is entered (which I'll change at some point). Now another idea I've toyed with is to have a function inherit variables from its parent function, but it would be copy-on-write. In this case, they would behave like environment variable inheritance in regular OS processes. That may be closer to what some of the functional languages provide? As for mutability, if I wanted to make a pure functional language then a couple lines of code in the assignment operator handler could make sure that variables can only be assigned a value one time. At the moment, however, I don't want to go for that, since I want to keep a regular procedural language that has some functional features (maybe this is a contradiction?). BTW, I've been reading through CTM, and find it extremely interesting, although it is a text that I will have to put a lot of energy into in order to get enough out of it. Also, PLAI appears to be helpful.
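As an aside on the reentrancy point raised earlier (recursive calls clobbering statically allocated variables), here is a hedged JavaScript sketch, not 2e code, of what per-invocation bindings buy: each recursive call gets its own fresh variable, so no counter parameter or array indexing is needed to protect values across the inner call.

```javascript
// Each invocation of sumTo allocates its own binding for `x`,
// so the recursive call below cannot clobber it -- the manual
// "varpush" / array-indexing workaround becomes unnecessary.
function sumTo(n) {
  const x = n;               // fresh binding for this invocation
  if (n === 0) return 0;
  const rest = sumTo(n - 1); // inner calls each get their own `x`
  return x + rest;           // our `x` is still intact here
}

console.log(sumTo(5)); // 15
```

This is the behavior one gets "for free" when locals live on a per-call stack frame rather than in compile-time-assigned slots.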
I just hope it doesn't have a lot of "Use this magic Scheme feature to implement the same feature in our Scheme-like interpreter", since it can be a circular definition (I've seen this too much in other texts). So far through this project, every time I've needed to add a new feature or change an underlying construct, I've managed to do it with only a few lines of tweaking. But to get the data management part straightened out is going to require a bit more re-working. I'll make sure I keep good notes on all the wrong paths I've taken, and the corrective actions, so that anyone else interested can learn from it. a variable "foo" in one function is unique to that function, and the same variable named "foo" in another function (or the main program) is unique there, just like in C or Perl, etc. That's exactly like C and Perl except that it's totally different. In both languages the memory for locally declared variables is freshly allocated on each invocation. That means that a function can call itself (directly or indirectly) without clobbering data set up in a previous invocation. And all that happens without any extra effort on the part of the user. Isn't that what lexical scoping is? Nope. You've got globally scoped variables that are namespaced by their enclosing function. Scoping and namespacing are two different things. Namespacing is just a clever way to let two things appear to have the same simple name because their simple names are in different "spaces"*. But namespacing doesn't really change much - if I can write foo.bar anywhere to access bar in the single globally available foo namespace then that's about the same difference as naming the variable foodotbar and putting it in a global namespace. Whether namespaced or not, it's one binding and it's accessible globally. A binding is a relationship between a symbol and its "meaning." 
For variables you can roughly say that that means the relationship between a variable name and a location in memory (there are more formal ways to describe this but you're an implementation focused guy so "location in memory" will do). In C, if a function foo has a local variable named bar then there will be a new binding for bar each time foo is invoked because a new location on the stack will be allocated with each invocation. Global scoping means that a binding made anywhere in a program holds across the entire program. As I've illustrated above, you've described a language with globally scoped variables. There's only ever one binding for the foo.bar variable and that binding is globally accessible. Lexical scoping means that each time a function is called it makes new bindings for its declared variables and that a binding made in one part of your program is available over a certain chunk of the program that can be determined by the lexical position of the declaration. Often lexical scoping is based on nested blocks. In lexical scoping bindings are not in general available across chained function invocations (except perhaps to lexically nested functions) nor are they global - they aren't available anywhere except in their lexical environment. Dynamic scoping means each function invocation makes new bindings for its declared variables and each binding made in a function is available across the calls (and the calls from calls) that the function makes but the binding is lost when the function returns to its caller. Hope this helps clear up the confusion. Good luck with those books. * Some languages also add access protection schemes to namespaces - at which point namespaces have become simple modules. Those books primarily* use lambda and lists (and maybe numbers, if you feel like including them). CTM's use of Oz makes it the more exotic one to me. *I think they resort to the native macro system when discussing macros, but I never read them that far. Now I see. 
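The lexical-vs-dynamic distinction described above can be checked directly in a lexically scoped language. This JavaScript sketch mirrors the earlier some_variable example (variable names adapted) and prints the "lexical" answer:

```javascript
// f1's someVariable is resolved where f1 is *defined* (lexical scoping),
// not where f1 happens to be *called*.
let someVariable = 123;

const f1 = () => someVariable;  // binds to the outer someVariable

function f2(fn) {
  let someVariable = 456;       // a new, shadowing binding -- f1 never sees it
  return fn();                  // f1 still reads the outer binding
}

console.log(f2(f1)); // 123 -- under dynamic scoping this would be 456
```

The call chain is irrelevant to name resolution here; only the nesting of the source text matters.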
So in my documentation, I'll describe this as a global scope / local namespace implementation, since the variables are bound to a memory location at compile time. If I wanted lexical scoping, I'd have to bind the variables to a memory location at run time, whenever a function (or a separate scope) is entered. More likely, the variables will be bound to an index at compile time, but that index will be an offset to a chunk of memory allocated (at run time) whenever a new scope is entered. And, instead of scoping based strictly on functions, I should have a generalized scoping syntax such as braces, that denote when I'm entering a new scope. Once this is straightened out, I think I'll have a serious language implementation (I originally didn't intend it to go even this far, but I've got most of the features of some of the mainstream languages, plus a few nice extras that they lack -- but that should probably be a separate topic). There's a terminological snafu you keep repeating that suggests a deeper misunderstanding. Functions don't close over other functions (well they can), they close over their free variables. In your first example code, if you had higher order functions, f2 would be closing over fv1. In your second example, f1 closes over some_variable, though if some_variable is viewed as being in a "global" scope, then we don't usually close over them, though it is fine to do/say so but then we should say that f1 closes over print as well (unless print is a primitive construct in your language.) It's a pain to encode this notion in traditional C/C++, yet it is also the starting point for many interesting things. makes me wish for a taxonomy / hierarchy of such things, with examples of what each new "level" allows you to do vs. "lower" "levels". While not complete, my libconcurrency implements coroutines and continuations in C. The copying branch is a simple, naive implementation of continuations by stack copying. 
The other context functions operate via a portable jmp_buf bit-twiddling technique.

Stress test it? If you go massively parallel, or deeply nested, won't you run out of stack space fast? (I'm only asking because I've got major problems with ML running out of stack space.)

No stress testing yet, to be honest, and it's in a state of flux since I discovered that the latest glibc swizzles the jmp_buf in an odd way. This library is not intended for massive fine-grained parallelism, but medium-grained parallelism. The approach is basically the same as Lua coroutines. On a 32-bit machine, you would theoretically max out around 500,000 coroutines with 4 kB stacks. But at that point it would seem you're already incurring too many context switches, so a better algorithm would be preferable.

Derek: I think there is a hazard in your approach. In one sense, C is just a macro-assembler, and just about anything can be expressed in it. In another sense, C is a language with strong history. Users of C develop in their minds a set of idioms, assumptions, list of hazards, and so forth. These inform and constrain how they use the language. Expressing problems in C (or any language) constrains how you think about it. This is equally true when expressing problems in lambda calculus. The problem with asking "What would this look like in C', which is some augmentation of C" is that you may get a handle on the mechanism, but it's unlikely that you'll develop a sense of the idiom or impact of a feature. From an idiom and impact perspective, there are three features of modern languages (in my opinion) that have huge conceptual impact: safety, closures, and GC. These really do change how you look at things in a qualitative way. If your goal is to understand how this stuff actually works at the bottom, then I invite you to come study the BitC compiler or any number of other compilers that accept a modern language and emit C[1]. Look at how they do the transforms and why.
At the risk of bias, I'd especially like you to look at BitC and ask your questions on the bitc-dev list so that we can document the missing information in the code as we respond to your questions. Everybody would benefit that way, and we're at a stage where a "virgin" set of eyes would help us quite a bit.

[1] Notable exception: GHC emits C, but it's a bad example for study, because (as I understand matters) it emits C that exploits incestuous knowledge of the internal operation of GCC. As a result, there is a lot of "hidden" magic that isn't obvious in the C code.

I wrote a few articles on building a very basic Scheme-like interpreter in C++: While it's not an extension of the C language, it's still a kind of explanation of things like closures in terms of C++ primitives. As for continuations, I also made this interpreter in C++: You can use it to view the CPS-transform (just a very simple term transformation) of arbitrary terms. The interpreter is implemented in C++ as well, and uses a "trampolining" method to evaluate the CPS-transformed programs. I can make that source code available as well, if you're interested.

Now that I've had a chance to digest the advice everyone has given here (and after going through some Scheme in SICP), I think I've got a better handle on namespaces vs. lexical scope, closures, and to some extent continuations. I've also found some video lectures going through SICP from UCB. So, let me see if I've got this right: First, having variable names be reusable in different functions without colliding (separate namespaces) isn't enough -- the variable names need to refer to a unique chunk of memory whenever the function is entered. From an implementation standpoint, that means that variables can't be bound to memory locations (or indexes into a data structure) at compile time -- they have to be bound whenever the function is entered at run time.
This is usually done by having the variables point to offsets within the call stack or a heap. Secondly, a closure isn't necessarily a language feature; it is a programming technique that needs certain language features present in order to use the technique easily. Basically, a function definition needs to be able to see certain variables in the parent function (assuming nested functions), and those variables then need to persist after the parent function exits (assuming the parent returns the inner function as a result). So either garbage collection needs to be implemented (assuming the inner function can see the outer function's variables), or function definitions need to include syntax that allows the outer function to specify which variables get passed to the inner function. In this second case, those variables would then get stored with the same data element that holds a pointer to the inner function (possibly passing them as default arguments when the function is called). An example of using a closure is defining a function which returns as its result another function (or function pointer) whose behavior is influenced by arguments passed to the first function. In other words, you have a function-creator function.

Now a continuation is still something I think I get, but I'm probably still missing a few things. From what I understand, a lexical environment is normally set up each time a function is called. So a continuation is a variable that refers to one of these lexical environments (in other words, an "in-flight" function), along with possibly a starting instruction address within that function. And in languages such as C, when f1 calls f2, f1 actually passes a continuation to f2, which is what f2 calls when it executes a return(). So, have I got the right idea this time? (or at least some of it right?) Thanks.

If you describe STM in C, you can show that it works but you can't show that it's safe.
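The intuition above -- "f1 passes a continuation to f2, and return() calls it" -- can be made explicit with continuation-passing style. A JavaScript sketch (illustrative names, not any particular implementation):

```javascript
// Direct style would be: function f2(x) { return x * 2; }
// In CPS, the implicit return-continuation becomes an explicit argument k:
function f2(x, k) {
  k(x * 2);                 // "returning" = calling the continuation
}

function f1(k) {
  f2(21, result => {
    // This callback is literally "the rest of f1" -- the continuation
    // that f1 handed to f2, invoked in place of f2's return.
    k(result);
  });
}

f1(answer => console.log(answer)); // 42
```

Once continuations are reified as values like this, control tricks such as early exit, coroutines, and backtracking amount to choosing which saved k to call.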
http://lambda-the-ultimate.org/node/3171
When I think of maps and location services, the first thing that comes to mind is GPS routing and navigation. Having a map is great for finding places and other points of interest, but navigating between points of interest is awesome. However, how might you tackle such a task when it comes to finding the fastest route between two points? Do you have to create your own graph of points and traverse through them manually? This isn't an issue when taking advantage of the HERE Routing API. We're going to see how to create an Angular application that displays a map and allows users to pick two points and find the fastest traveling path between them while showing address directions along the way. Things you'd expect in your car navigational unit or on your mobile device.

The goal of this project is to create something like the following image:

As you can see in the above image, we have two input fields for a starting and destination position. The map on the screen shows a line which represents the fastest path of travel when using a car. Next to the map we have navigation instructions for traversing the path demonstrated on the map. If we wanted to, we could collect the current position of our device and display it as a marker somewhere on the path to demonstrate progress.

Starting with the Angular CLI

Assuming that you have the Angular CLI installed and configured, we are going to create a new Angular project. If you haven't already, I strongly encourage you to take a look at my previous tutorial titled, Display HERE Maps within your Angular Web Application, as it goes into further detail around creating an actual map. Most of our focus in this tutorial will be around displaying routing information on the map. From the Angular CLI, execute the following:

ng new here-project

The above command will create a new project for us.
Before we get into any actual development, we need to include the HERE JavaScript libraries, mapsjs-core.js and mapsjs-service.js, in our project. These two JavaScript libraries act as a baseline for displaying a map and using the HERE services. Other APIs will have other requirements when it comes to the JavaScript components. Now that we have the dependencies in place, we can focus on the development of our map.

Creating a Map Component with Directions Box

Like I mentioned previously, when it comes to building our map, a lot of the code will come from my previous tutorial on the topic. However, there will be some modifications as we progress. The first step towards displaying a map is to create a dedicated component within Angular. From the CLI, execute the following:

ng g component here-map

The above command will create a component with an HTML view and a TypeScript file for all the logic. Before we get into the UI portion, let's create our baseline logic. Open the project's src/app/here-map/here-map.component.ts file and include the following:

import { Component, OnInit, OnChanges, SimpleChanges, ViewChild, ElementRef, Input } from '@angular/core';

declare var H: any;

@Component({
    selector: 'here-map',
    templateUrl: './here-map.component.html',
    styleUrls: ['./here-map.component.css']
})
export class HereMapComponent implements OnInit, OnChanges {

    @ViewChild("map") public mapElement: ElementRef;

    @Input() public appId: any;
    @Input() public appCode: any;
    @Input() public start: any;
    @Input() public finish: any;
    @Input() public width: any;
    @Input() public height: any;

    public directions: any;
    private platform: any;
    private map: any;
    private router: any;

    public constructor() { }

    public ngOnInit() {
        this.platform = new H.service.Platform({
            "app_id": this.appId,
            "app_code": this.appCode
        });
        this.directions = [];
    }

    public ngAfterViewInit() {
        let defaultLayers = this.platform.createDefaultLayers();
        this.map = new H.Map(
            this.mapElement.nativeElement,
            defaultLayers.normal.map,
            { zoom: 4, center: { lat: "37.0902", lng: "-95.7129" } }
        );
    }

    public ngOnChanges(changes: SimpleChanges) { }

}

The above code is lengthy, but not a whole lot is happening and it is definitely not complete. The first thing to take note of is how we're declaring the following line:

declare var H: any;

Because we don't have any TypeScript definitions, we need to declare the JavaScript library so that the TypeScript compiler will ignore it and not throw any errors. Inside the HereMapComponent class, we have a bunch of variables with an @Input(). These variables represent potential attributes for the HTML component, kind of like the traditional class or id attributes that you'd find on components. Most of our focus will be on the start and finish variables as they will hold our coordinates.

Inside the ngOnInit lifecycle hook, we initialize our platform with the app id and app code for our project. To obtain an app id and app code, you'll need a developer account with HERE. An account can be created for free from the HERE Developer Portal. The directions variable that is initialized in the ngOnInit will eventually have the turn-by-turn directions for our route. Because the map is a UI component, we cannot begin working with it until the ngAfterViewInit method has triggered. Once triggered, we can create a new map and center it on some coordinates.

You'll notice that we have an ngOnChanges method. We'll be using the ngOnChanges hook for when component attribute values change in realtime. This will be useful for when the user enters new coordinate information and we need the map to reflect the change. With the base TypeScript in place, let's take a look at the HTML component for our map.
Open the project’s src/app/here-map/here-map.component.html and include the following: <div #map [style.width]="width" [style.height]="height" style="float: left"></div> <ol style="float: left; background-color: #FFF; width: 35%; min-height: 530px; margin-left: 20px; margin-top: 0"> <li * <p [innerHTML]="direction.instruction"></p> </li> </ol> Ignoring my terrible CSS skills, we have two parts to this component. We have the map itself as referenced by the #map template variable and we have an ordered list which loops through our directions variable. We are using the [innerHTML] attribute because the instructions that the HERE Routing API returns is HTML data. With the core map logic in place, we can focus on routing information and navigation instructions. Using the HERE Routing API for Navigation Remember how I mentioned the ngOnChanges hook? This is going to be a critical function towards our routing data and updating the map. However, before we get there, we need to create another function. Open the project’s src/app/here-map/here-map.component.ts file and include the following route function: public route(start: any, finish: any) { let params = { "mode": "fastest;car", "waypoint0": "geo!" + this.start, "waypoint1": "geo!" 
+ this.finish, "representation": "display" } this.map.removeObjects(this.map.getObjects()); this.router.calculateRoute(params, data => { if(data.response) { this.directions = data.response.route[0].leg[0].maneuver; data = data.response.route[0]; let lineString = new H.geo.LineString(); data.shape.forEach(point => { let parts = point.split(","); lineString.pushLatLngAlt(parts[0], parts[1]); }); let routeLine = new H.map.Polyline(lineString, { style: { strokeColor: "blue", lineWidth: 5 } }); let startMarker = new H.map.Marker({ lat: this.start.split(",")[0], lng: this.start.split(",")[1] }); let finishMarker = new H.map.Marker({ lat: this.finish.split(",")[0], lng: this.finish.split(",")[1] }); this.map.addObjects([routeLine, startMarker, finishMarker]); this.map.setViewBounds(routeLine.getBounds()); } }, error => { console.error(error); }); } Most, not all, of the above code was taken from the official documentation. When calling the route method, we are expecting a source coordinates and destination coordinates to be supplied. These coordinates should be a string and delimited by comma. When we have the coordinate information, we can create our routing parameters. For this particular example we’re going to navigate by car and only have a starting and destination endpoint. After we define our parameters we need to clear the map. Every time we change the data, we don’t want old markers or navigation paths to linger on the map. Once the map is clear, we can calculate the route. Assuming that an actual route exists, we can get the maneuver information for navigation instructions and start constructing our path on the map. A scenario where a route doesn’t exist might be if I want to drive from California to Berlin, it just isn’t going to happen. To create a line with the HERE API, we need to push each of the latitude and longitude coordinates into a LineString variable. With those coordinates we can create a Polyline and give it a color as well as a width. 
While not necessary, we could create markers to represent our starting and ending points on the map. With all of our objects (line, markers, etc.) we can add them to the map and re-center the map to best fit the routing path.

Before we get into the ngOnChanges stuff, let's wire up the route function so it works. First we need to revise the ngOnInit method to the following:

public ngOnInit() {
    this.platform = new H.service.Platform({
        "app_id": this.appId,
        "app_code": this.appCode
    });
    this.directions = [];
    this.router = this.platform.getRoutingService();
}

In the above method we are just initializing the router. Then in the ngAfterViewInit method we need to call the route method:

public ngAfterViewInit() {
    let defaultLayers = this.platform.createDefaultLayers();
    this.map = new H.Map(
        this.mapElement.nativeElement,
        defaultLayers.normal.map,
        {
            zoom: 4,
            center: { lat: "37.0902", lng: "-95.7129" }
        }
    );
    this.route(this.start, this.finish);
}

The start and finish variables that are passed will be whatever data is bound to the component attributes. Finally we can take a look at the ngOnChanges method:

public ngOnChanges(changes: SimpleChanges) {
    if((changes["start"] && !changes["start"].isFirstChange()) ||
       (changes["finish"] && !changes["finish"].isFirstChange())) {
        this.route(this.start, this.finish);
    }
}

Remember, the ngOnChanges hook is called any time one of the component attribute values has changed. So if the start or finish coordinates change, this method will execute. When the method is called, we first need to figure out if one of our start or finish attributes triggered it. If yes, we can call our route function again to clear and update the map.

Developing a Functional Angular Application for Routes and Navigation

If I haven't lost you yet, there are a few more things we need to take care of. While we have a functional map and instructions component, it isn't being used as of now in our application. We need to make use of it.
Because we are going to be working with forms, we need to enable such functionality in the project's application module. The relevant parts look something like the following:

import { FormsModule } from '@angular/forms';

@NgModule({
    ...
    imports: [
        BrowserModule,
        FormsModule
    ],
    ...
})
export class AppModule { }

In the above code you'll notice that we've imported the FormsModule and added it to the imports section of the @NgModule block. It is simple and a little annoying that we have to do it at all, but it is necessary for Angular.

The next step is to add some brief logic for initializing our form variables. Open the project's src/app/app.component.ts file and include the following:

import { Component, OnInit } from '@angular/core';

@Component({
    selector: 'app-root',
    templateUrl: './app.component.html',
    styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {

    public start: string;
    public finish: string;

    public constructor() {
        this.start = "37.7397,-121.4252";
        this.finish = "37.6819,-121.7680";
    }

    public ngOnInit() { }

}

While of no direct relevance to the variable naming in the map component, the start and finish variables will be bound to our form. Open the project's src/app/app.component.html file and include the following HTML markup:

<div style="padding: 10px 0">
    <div>
        <label style="display: inline-block; width: 60px; color: #FFF">Start</label>
        <input type="text" [(ngModel)]="start" />
    </div>
    <div>
        <label style="display: inline-block; width: 60px; color: #FFF">Finish</label>
        <input type="text" [(ngModel)]="finish" />
    </div>
</div>
<here-map [start]="start" [finish]="finish" appId="APP-ID-HERE" appCode="APP-CODE-HERE"></here-map>

There are a few things to note in the above markup, ignoring my lack of CSS skills. Each of the two inputs is bound to our variables and those same variables are bound to the map component via the attributes. As the data in the forms changes, so will the data in the map because of the ngOnChanges method. In this example, I've chosen not to include my app id and app code values. Make sure to update those with your own.

Conclusion

You just saw how to use the HERE Routing API to find the quickest path of travel between two positions on a map using Angular.
In this example we took a map, rendered a line as well as markers on the path, and then printed out the turn by turn directions on how to traverse that path. As previously mentioned, if you were using a device that had GPS, you could add a marker on the map to represent your current position in relation to the starting and ending points.
https://developer.here.com/blog/transportation-routing-and-directions-in-an-angular-application-with-the-here-routing-api
August 2012, Volume 27, Number 08

Microsoft Azure - CyberNanny: Remote Access via Distributed Components

By Angel Hernandez | August 2012

This article is about an application called CyberNanny, which I recently wrote to allow me to remotely see my baby daughter Miranda at home from anywhere at any time. It's written in Visual C++ (MFC) and it comprises different technologies such as Kinect and its SDK, Azure, Web services and Office automation via Outlook. The project is hosted on CodePlex (cybernanny.codeplex.com), where you can check out the code or contribute to it.

Before I get into the nuts and bolts of the application, I'll briefly explain the technologies used to build it.

C++ has been—and still is—the workhorse in many software shops. That said, the new standard C++ 11 takes the language to a new level. Three terms to describe it would be modern, elegant and extremely fast. Also, MFC is still around and Microsoft has been upgrading it with every new release of its Visual C++ compiler.

The Kinect technology is amazing, to say the least; it changes the way we interact with games and computers. And with Microsoft providing developers with an SDK, a new world of opportunities is unveiled for creating software that requires human interaction. Interestingly, though, the Kinect SDK is based on COM (as well as the new programming model in Windows 8, called Windows Runtime, often abbreviated as WinRT). The SDK is also available to Microsoft .NET Framework languages.

Azure is the Microsoft Platform as a Service (PaaS) offering that has been around for a couple of years. It provides a series of services that allow building solutions on top of them (such as Compute and Storage). One of the requirements I had with CyberNanny was the reliable delivery of messages via a highly available queue, and Azure provides that.

The native use and consumption of Web services is possible using the Windows Web Services API (WWSAPI), which was introduced with Windows 7.
I have a blog post (bit.ly/LiygQY) that describes a Windows Presentation Foundation (WPF) application implementing a native component using WWSAPI. It's important to mention that WWSAPI is built in to the OS, so there's no need to download or install anything but the Windows SDK (for header and lib files).

Why reinvent the wheel? One of the requirements for CyberNanny was the ability to send e-mails with attached pictures, so instead of writing my own e-mailing class, I preferred to reuse the functionality provided by Outlook for this task. This allowed me to focus on the main objective: building a distributed application for looking after my baby.

This article is organized in four main sections:

- Overview of the general architectural solution
- Kinect architecture
- Locally deployed components (native)
- Cloud-hosted components (managed)

Overview of the General Architectural Solution

The CyberNanny concept is simple (as shown in Figure 1), but it also has some moving pieces. It can briefly be described as a thick client written in Visual C++, which captures frames via the Kinect sensor. These frames can later be used as a picture that's attached to a new e-mail composed in Outlook through automation. The application is notified about pending requests by spawning a thread triggered from a timer, which polls a queue hosted in Azure. The requests are inserted into the queue via an ASP.NET Web page.

Figure 1 CyberNanny Architecture

Note that in order to run and test the solution you must have:

- Kinect sensor (I used the one on my Xbox 360)
- Azure subscription
- Kinect SDK

Kinect Architecture

Having a good architectural understanding of how things work and how they can be implemented is crucial to development projects, and in this case Kinect is no exception. Microsoft has provided an SDK for managed and native code developers. I'll describe the architecture Kinect is built upon, as shown in Figure 2.
Figure 2 Kinect for Windows Architecture

The circled numbers in Figure 2 correspond to the following:

- Kinect hardware: The hardware components, including the Kinect and the USB hub through which the sensor is connected to the computer.
- Kinect drivers: The Windows drivers for the Kinect, which are installed as part of the SDK setup process as described in this article.
- NUI API: The Kinect Natural User Interface (NUI) for skeleton tracking, audio, color and depth imaging.
- DirectX Media Object (DMO): This is for microphone array beam forming and audio source localization.
- Windows 7 standard APIs: The audio, speech and media APIs in Windows 7, as described in the Windows 7 SDK and the Microsoft Speech SDK.

I'll demonstrate how I used the video component for capturing frames that are then saved as JPEG files for e-mailing purposes. The rendering of the captured frames is done via Direct2D.

The Nui_Core Class

I wrote a class called Nui_Core, which encapsulates the functionality I needed from the Kinect sensor. There's a single instance of this object in the application. The application interacts with the sensor via a member of type INuiSensor that represents the physical device connected to the computer. It's important to remember that the Kinect SDK is COM-based, hence the aforementioned interface—as well as all the other COM interfaces used throughout the application—is managed by smart pointers (for example, CComPtr<INuiSensor> m_pSensor;).

The steps to start capturing frames with the sensor are:

- Check whether there's a sensor available by calling NuiGetSensorCount.
- Create an instance of the Kinect sensor by calling NuiCreateSensorByIndex.
- Create a factory object for the creation of Direct2D resources by calling D2D1CreateFactory.
- Create events for each stream required by the application.
- Open the streams by calling NuiImageStreamOpen.
- Process the captured data (frame).
Once the Nui_Core instance is set up, you can easily take a picture on demand by calling the TakePicture method, as shown in Figure 3.

Figure 3 The TakePicture Method

void Nui_Core::TakePicture(std::shared_ptr<BYTE>& imageBytes, int& bytesCount)
{
    byte *bytes;
    NUI_IMAGE_FRAME imageFrame;
    NUI_LOCKED_RECT LockedRect;
    if (SUCCEEDED(m_pSensor->NuiImageStreamGetNextFrame(m_hVideoStream,
        m_millisecondsToWait, &imageFrame))) {
        auto pTexture = imageFrame.pFrameTexture;
        pTexture->LockRect(0, &LockedRect, NULL, 0);
        if (LockedRect.Pitch != 0) {
            bytes = static_cast<BYTE *>(LockedRect.pBits);
            m_pDrawColor->Draw(bytes, LockedRect.size);
        }
        pTexture->UnlockRect(0);
        imageBytes.reset(new BYTE[LockedRect.size]);
        memcpy(imageBytes.get(), bytes, LockedRect.size);
        bytesCount = LockedRect.size;
        m_pSensor->NuiImageStreamReleaseFrame(m_hVideoStream, &imageFrame);
    }
}

Note that you pass a smart pointer to store the bytes of the image as well as the number of bytes that are copied to it, and then this information is used to handcraft your bitmap. It's important to mention that once you've finished using the sensor, it has to be shut down by calling NuiShutdown, and handles that were used need to be released.

The DrawDevice Class

As previously mentioned, the rendering capabilities are provided by Direct2D; that's why another support class is required for use in conjunction with Nui_Core. This class is responsible for ensuring there are resources available for the captured frame, such as a bitmap in this case. The three main methods are Initialize, Draw and EnsureResources. I'll describe each.

Initialize: This is responsible for setting up three members of type DrawDevice. The application has a tab control with three tabs, so there's a member for each tab (Color, Skeletal and Depth view). Each tab is a window that's responsible for rendering its corresponding frame.
The InitializeColorView shown in the following code is a good example of calling the Initialize method:

bool Nui_Core::InitializeColorView()
{
    auto width = m_rect.Width();
    auto height = m_rect.Height();
    m_pDrawColor = std::shared_ptr<DrawDevice>(new DrawDevice());
    return (m_pDrawColor.get()->Initialize(m_views[TAB_VIEW_1]->m_hWnd,
        m_pD2DFactory.p, 640, 320, NULL));
}

Draw: This renders a frame on the proper tab. It takes as argument a Byte* captured by the sensor. Just as in the movies, the effect of animation comes from the successive rendering of static frames.

EnsureResources: This is responsible for creating a bitmap when requested by the Draw method.

Locally Deployed Components (Native)

The CyberNanny project comprises the following:

- Application: CCyberNannyApp (inherited from CWinApp). The application has a single member of type Nui_Core for interacting with the sensor.
- UI Elements: CCyberNannyDlg (Main Window, inherited from CDialogEx) and CAboutDlg (About Dialog, inherited from CDialogEx).
- Web Service Client: Files auto-generated after executing WSUTIL against a service's Web Services Description Language (WSDL) description. These files contain the messages, structures and methods exposed by the WCF Web service.
- Outlook Object Classes: In order to manipulate some of the Outlook objects, you have to import them into your project by selecting "Add MFC Class" from ActiveX Control Wizard. The objects used in this solution are Application, Attachment, MailItem and Namespace.
- Proxy: This is a custom class that encapsulates the creation of the required objects to interact with WWSAPI.
- Helper Classes: These classes are used to support the functionality of the application, such as converting a bitmap into a JPEG to reduce the file size, providing a wrapper to send e-mails and interact with Outlook, and so on.

When the application starts, the following events occur:

- A new window message is defined by calling RegisterWindowMessage.
This is for adding items to the list of events when a request is processed. This is required because you can't directly modify UI elements from a thread different from the UI thread, or you'll incur an illegal cross-thread call. This is managed by the MFC messaging infrastructure.

- You initialize your Nui_Core member and set up a couple of timers (one for updating the current time on the status bar and another one that kicks off a thread for polling the queue to check whether there's a pending request).
- The Kinect sensor starts capturing frames, but the application doesn't take a picture unless there's a request in the queue.

The ProcessRequest method is responsible for taking a picture, serializing the picture to disk, writing to the event viewer and kicking off the Outlook automation, as shown in Figure 4.

Figure 4 The ProcessRequest Method Call

void CCyberNannyDlg::ProcessRequest(_request request)
{
    if (!request.IsEmpty) {
        auto byteCount = 0;
        ImageFile imageFile;
        std::shared_ptr<BYTE> bytes;
        m_Kinect.TakePicture(bytes, byteCount);
        imageFile.SerializeImage(bytes, byteCount);
        EventLogHelper::LogRequest(request);
        m_emailer.ComposeAndSend(request.EmailRecipient,
            imageFile.ImageFilePath_get());
        imageFile.DeleteFile();
    }
}

The frame originally captured by Kinect is a bitmap that's approximately 1.7MB in size (which isn't convenient for e-mailing and therefore needs to be converted to a JPEG image). It's also upside down, so a 180° rotation is required. This is done by making a couple of calls to GDI+. This functionality is encapsulated in the ImageFile class.

The ImageFile class serves as a wrapper for performing operations with GDI+. The two main methods are:

- SerializeImage: This method takes a shared_ptr<BYTE>, which contains the bytes of the captured frame to be serialized as an image, as well as the count of bytes. The image is also rotated by calling the RotateFlip method.
- GetEncoderClsid: As mentioned, the image file size is too big to use as an attachment—therefore, it needs to be encoded to a format with a smaller footprint (JPEG, for example). GDI+ provides a GetImageEncoders function that lets you find out which encoders are available on the system.

So far I've covered how the application utilizes the Kinect sensor and how the frames captured are used to create a picture for e-mailing. Next, I'll show you how to call the WCF service hosted on Azure.

WWSAPI, introduced in Windows 7, allows native developers to consume Web or WCF services in an easy and convenient way, without worrying about the communication (sockets) details. The first step for consuming a service is to have a WSDL to use with WSUTIL, which in turn produces codegen C code for the service proxies and data structures required by the service. There is an alternative called Casablanca (bit.ly/JLletJ), which supports cloud-based client-server communication in native code, but it wasn't available when I wrote CyberNanny.

It's common to get the WSDL and save it to disk, and then use the WSDL file and related schema files as input for WSUTIL. One aspect to take into account is schemas. They must be downloaded along with the WSDL, otherwise WSUTIL will complain when producing the files. You can easily determine the required schemas by checking the .xsd parameter in the schema section of the WSDL file:

wsutil /wsdl:cybernanny.wsdl /xsd:cybernanny0.xsd cybernanny1.xsd cybernanny2.xsd cybernanny3.xsd /string:WS_STRING

The resulting files can be added to the solution, and then you proceed to call your service via the codegen files. Four main objects are required to use with WWSAPI:

- WS_HEAP
- WS_ERROR
- WS_SERVICE_PROXY
- WS_CHANNEL_PROPERTY

These objects allow the interaction between the client and the service. I put together the functionality to invoke the service in the Proxy class.
Most of the WWSAPI functions return an HRESULT, so debugging errors can be a challenging task. But fear not, because you can enable tracing from the Windows Event Viewer and see exactly why a given function failed. To enable tracing, navigate to Applications and Services Logs | Microsoft | WebServices | Tracing (right-click it to enable it).

That pretty much covers the native components of the solution. For more information, please refer to the source code on the aforementioned CodePlex site. The next section is about the Azure component of the solution.

Cloud-Hosted Components (Managed)

Please note that this is not an extensive tutorial on Azure, but rather a description of the Azure components in CyberNanny. For more in-depth and detailed information, refer to the Azure Web site at windowsazure.com.

The Azure platform (Figure 5) comprises the following services:

- Azure Compute
- Azure Storage
- Azure SQL Database
- Azure AppFabric
- Azure Marketplace
- Azure Virtual Network

Figure 5 Azure Platform Services

CyberNanny only has a Web Role that has allocated two cores to guarantee high availability. If one of the nodes fails, the platform will switch to the healthy node. The Web Role is an ASP.NET application, and it only inserts message items into a queue. These messages are then popped out from CyberNanny. There's also a WCF service, which is part of the Web Role, that's responsible for handling the queue.

Note that an Azure role is an individual component running in the cloud where each instance of a role corresponds to a virtual machine (VM) instance. In CyberNanny's case, then, I've allocated two VMs. CyberNanny has a Web Role that's a Web application (whether it's only ASPX pages or WCF services) running on IIS. It's accessible via HTTP/HTTPS endpoints. There's also another type of role that's called a Worker Role.
It's a background processing application (for example, for financial calculations), and it also has the ability to expose Internet-facing and internal endpoints.

This application also utilizes a queue provided by Azure Storage, which allows reliable storage and delivery of messages. The beauty of the queue is that you don't have to write any specialized code to take advantage of it. Neither are you responsible for setting up the data storage with a certain structure to resemble a queue, because all this functionality is provided out of the box by the platform.

Besides high availability and scalability, one of the benefits provided by the Azure platform is the commonality to do things such as developing, testing and deploying Azure solutions from Visual Studio, as well as having .NET as the lingua franca to build solutions.

There are some other cool features I'd love to add to CyberNanny, such as motion detection and speech recognition. If you want to use this software or contribute to the project, please feel free to do so. The technologies used are available now and even though they look "different," they can interoperate and play nicely with one another. Happy coding!

Angel Hernandez Matos is a manager in the Enterprise Applications team at Avanade Australia. He's based in Sydney, Australia, but is originally from Caracas, Venezuela. He has been a Microsoft MVP award recipient for eight consecutive years and is currently an MVP in Visual C++. He has been writing software since he was 12 years old and considers himself an "existential geek."

Thanks to the following technical experts for reviewing this article: Scott Berry, Diego Dagum, Yonghwi Kwon and Nish Sivakumar
https://docs.microsoft.com/en-us/archive/msdn-magazine/2012/august/microsoft-azure-cybernanny-remote-access-via-distributed-components
I am trying to create an algorithm to read a file with this shape:

+6.590472E-01;+2.771043E+07;+
-5.003500E-02;-8.679890E-02;-

Assuming declarations like:

char s1[2], s2[2], s3[2];
char int1[21], int2[21], frac1[21], frac2[21];
char exp1[6], exp2[6];

and assuming that you read the line with fgets() or getline() into a string variable string, then you can use sscanf() to parse the string in one swoop like this:

if (sscanf(string, "%[-+]%20[0-9].%20[0-9]%*[eE]%5[-+0-9];%[-+]%20[0-9].%20[0-9]%*[eE]%5[-+0-9];%[-+]",
           s1, int1, frac1, exp1, s2, int2, frac2, exp2, s3) != 9)
    …something went wrong — at least we can analyze the string…
else
    …got the information…

Note the use of 20 in the format string but the use of 21 in the variable declarations; this off-by-one is a design decision made in the standard I/O library long ago (circa 1979), well before there was a standard. The %*[eE] allows e or E as the exponent marker, and suppresses the assignment. Note that the exponent term would allow E9-8+7 as the exponent, and won't insist on a sign; there isn't a simple way around that unless you collect the exponent in two parts. You also can't simply tell where the scan finished. You could add a %n conversion specification at the end, and pass &n as an extra argument (with int n; as the variable definition). The %n isn't counted, so the condition is unchanged. You can then inspect buffer[n] to see where the conversion stopped — was it a newline, or end of string, or something bogus?

Note that because the format string uses %[…] scan sets throughout, no spaces are consumed — and any spaces in the input would trigger an error. This requires a fairly comprehensive knowledge of the specification for sscanf().
You'll probably need to read it half a dozen times in the next month or so to begin to get the hang of it, and then reread it another half a dozen times in the next year, and after that you may be able to get away with a yearly revision — it's a complex function (the scanf() family of functions are some of the most complex in standard C).

#include <stdio.h>

int main(void)
{
    char string[] = "+6.590472E-01;+2.771043E+07;+\n";
    char s1[2], s2[2], s3[2];
    char int1[21], int2[21], frac1[21], frac2[21], exp1[6], exp2[6];
    int n;
    int rc;

    if ((rc = sscanf(string, "%[-+]%20[0-9].%20[0-9]%*[eE]%5[-+0-9];%[-+]%20[0-9].%20[0-9]%*[eE]%5[-+0-9];%[-+]%n",
                     s1, int1, frac1, exp1, s2, int2, frac2, exp2, s3, &n)) == 9)
    {
        printf("[%s][%s].[%s]E[%s]\n", s1, int1, frac1, exp1);
        printf("[%s][%s].[%s]E[%s]\n", s2, int2, frac2, exp2);
        printf("[%s] %d (%d = '%c')\n", s3, n, string[n], string[n]);
    }
    else
        printf("Oops (rc = %d)!\n", rc);
    return 0;
}

Output:

[+][6].[590472]E[-01]
[+][2].[771043]E[+07]
[+] 29 (10 = ' ')
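Outside C, the same field-by-field decomposition can be mirrored with a regular expression. A rough Python sketch, added here purely for comparison (it is slightly stricter than the sscanf() format in that it insists on an explicit sign before each exponent):

```python
import re

# One field: sign, up-to-20-digit integer part, '.', up-to-20-digit
# fraction, 'e' or 'E', a signed exponent, then the ';' separator.
FIELD = r"([-+])(\d{1,20})\.(\d{1,20})[eE]([-+]\d{1,5});"
PATTERN = re.compile(FIELD + FIELD + r"([-+])")

def parse_line(line):
    """Return the nine captured fields, or None if the line does not match."""
    m = PATTERN.match(line)
    return m.groups() if m else None

print(parse_line("+6.590472E-01;+2.771043E+07;+\n"))
# ('+', '6', '590472', '-01', '+', '2', '771043', '+07', '+')
```

The nine capture groups correspond one-to-one to the nine assigned conversions in the C format string, so the trailing sign and both sign/integer/fraction/exponent quadruples come back in the same order.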
https://codedump.io/share/pAsVFFSnlw1N/1/pattern-for-fscanf-to-retrieve-number39s-information
Advanced Ruby Programming: 10 Steps to Mastery

What is this course about and who is it for? Watch this video to find out. This is an intermediate-to-advanced level course on Ruby programming. It can be used with any code editor or IDE and any operating system that supports Ruby.

This video provides a quick overview of the course and gives a few tips on how to get the most from it.

This is the 'course text' – a short eBook that summarizes the most important topics from each of the steps of this course. Use this book to revise the subjects. It also contains links to useful resources such as Ruby installers, editors and IDEs.

This contains an archive of all the code to accompany The Book Of Ruby and this course.

Table of Contents: Web sites; Appendix E: Ruby Development Software

An overview of how to test conditions in Ruby and a first look at the alternative Boolean operators: and/&&, or/||, !/not
Here I discuss the importance of encapsulation and information hiding. Information-hiding, returning values from methods and unintentional side-effects Trapping exceptions and recovering from errors in your code. The fundamentals of exception handling techniques in Ruby. Exceptions are objects and they have a class hierarchy which can be useful to you when handling specific types of error How to deal with errors In Ruby blocks are like ‘nameless methods’ that are often used in iterators. This short video explains the basics Blocks, procs, lambdas, block parameters, closures, passing blocks as arguments, yielding anonymous blocks and using blocks as iterators How to pass and execute blocks to methods. Careful! This is quite tricky! Symbols are widely used in Ruby programming. Here we find out what’s special about them This is Chapter 11 of The Book Of Ruby. It explains symbols. Threads and Fibers can help you to write programs that do more than one thing at a time. But they can be tricky things to use effectively! What are Ruby Symbols? And how can threads help your programs multitask? Ruby does not implement ‘multiple inheritance’. So how can a class include features from multiple parent classes? The answer is: it can’t. But it can mix-in multiple modules… You just have to look at a module definition to see how similar it is to a class. But the resemblance is not superficial. Here I explain the relationship between a Class and a Module. Modules as namespaces and as mixins. How to access module constants, module methods and instance methods and how to avoid potential problems Modules, mixins and methods How to use Ruby's File and IO classes to read, write and copy files. How to find file information and traverse directories and subdirectories recursively. How to save and load structured data to and from disk in a human readable format. 
You can use Marshal to read and write byte streams of data to and from disk Opening and closing files, reading and writing data This explains the fundamentals of matching text patterns to find or modify strings using Regular Expressions Some sample programs show how you can match patterns substitute text or create new files – for example, containing extracted documentation – using Regular Expressions Matching and replacing text patterns How to add and remove methods at runtime, deal with method-calls to methods that don’t exist and evaluate strings as Ruby code. In Ruby you don’t have to write a complete program before you run it. You can add new Ruby code to the program that is running. Here’s a simple example. Blurring the boundaries between code and data Huw Collingbourne is the technology director at SapphireSteel Software, developers of programming tools for Microsoft Visual Studio..
https://www.udemy.com/expert-ruby-programming-ten-steps-to-mastery/?affcode=E0UedFhQQHYASxxj
CC-MAIN-2017-22
refinedweb
807
63.19
DBLP Some Lessons Learned

Michael Ley
Universität Trier, Informatik, D Trier, Germany

An early version of this paper was titled dblp.xml A Documentation. Version: June 18, /06: sizeof(dblp.xml) = 532MB, sizeof(gz) = 93MB.

ABSTRACT.

1. INTRODUCTION

In June 2009 the DBLP Computer Science Bibliography 1 from the University of Trier contained more than 1.2 million bibliographic records. For computer science researchers the DBLP web site is a popular tool to trace the work of colleagues and to retrieve bibliographic details when composing the lists of references for new papers. Ranking and profiling of persons, institutions, journals, or conferences is another, sometimes controversial, usage of DBLP.

The DBLP data may be downloaded. The bibliographic records are contained in a huge XML file 2. We are aware of more than 400 publications which mention the usage of these data for an amazing variety of purposes. Many researchers simply need non-toy files to test and evaluate their algorithms. They are interested in XML, but not in the semantics of the data.

Others inspect the DBLP data more closely: It is easy to derive several graphs like the bipartite person-publication graph, the person-journal or person-conference graphs, or the coauthor graph, which is an example of a social network. Methods for analysis and visualization of these medium sized graphs are reported in many papers.
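One classic analysis of such a coauthor graph, finding a shortest coauthor chain between two researchers, is a plain breadth-first search over an adjacency structure. A Python sketch, added here as an illustration (the adjacency dict and the author names in the example are hypothetical; in practice the dict would be built from the bibliographic records):

```python
from collections import deque

def shortest_path(coauthors, start, goal):
    """Shortest chain of coauthors between start and goal, or None.

    coauthors maps an author name to the set of names she/he has
    published with (an adjacency representation of the coauthor graph).
    """
    if start == goal:
        return [start]
    parent = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        person = queue.popleft()
        for neighbor in coauthors.get(person, ()):
            if neighbor not in parent:
                parent[neighbor] = person
                if neighbor == goal:
                    # Walk the parent links back to the start.
                    path = [neighbor]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                queue.append(neighbor)
    return None
```

For example, with a toy graph {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}, shortest_path(graph, "A", "D") returns ["A", "B", "C", "D"].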
A fourth group of papers deals with person name disambiguation, a special aspect of data quality. DBLP is a (very imperfect) authority file [1] for computer science researchers. We try to identify the persons behind the research papers and to treat synonyms and homonyms as precisely as possible. Incomplete and inconsistent information, imperfect software, lack of time, and our own inability are the limiting constraints for this task. The DBLP web server lists all known papers published by a person on her/his person page. This simple mapping becomes tricky as soon as a person has several names (synonyms) or if there are several persons with the same name (homonyms). The main obstacles are the bad habit of abbreviating (given) names beyond recognition, and spelling errors. The main algorithmic idea we use to identify names we should check more precisely is to look at person pairs which have distance two in the coauthor graph and have a similar name[6]. DBLP isn't a well designed project. It grew from a small-scale experimental server which was set up at the end of 1993 to test web technology. In retrospect many ad hoc solutions are poorly designed. Nevertheless, our policy is to keep DBLP as stable as possible. Data formats, URLs etc. are only changed if they prevent important functionality, and not if we simply recognized them as unaesthetic. Section 2 of this paper describes dblp.xml. Beyond the syntactic framework given by the DTD, there are a lot of conventions and micro-syntax rules. We already commented on the special treatment of person names in DBLP. In section 3 you may find more details. In section 4 we sketch the remaining HTML-style part of DBLP. The new DBLP XML services are explained in the online appendix. Our example application is a simple crawler which finds the shortest path between two DBLP authors in the coauthor graph. In addition the appendix lists code to map person names to DBLP URLs.

2.
DBLP RECORDS
The DBLP data set is available for download from the DBLP web site. The file dblp.xml contains all bibliographic records which make up DBLP. It is accompanied by the data type definition file dblp.dtd. You need this auxiliary file to read the XML file with a standard parser[4]. dblp.xml has a simple layout:

<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE dblp SYSTEM "dblp.dtd">
<dblp>
record 1
...
record n
</dblp>

The header line specifies ISO 8859-1 ("Latin-1") as the encoding, but in fact the file only contains characters < 128, i.e. pure ASCII. All non-ASCII characters are represented by symbolic or numeric entities. The symbolic entities like &eacute; for the character é are declared in the DTD. Numeric entities like &#233; should be understood by any XML parser without declaration. In practice there are some obstacles in parsing the large XML file which cost us a lot of time: The SAX parser contained in the Java standard distribution has a limit for handling symbolic entities. When starting the Java virtual machine, the option -DentityExpansionLimit has to be set to a large number. The parser contained in the Java 1.6 distribution is not able to handle large XML files. You should install an alternative SAX Java parser. We now use Xerces-J from the Apache XML project, which reads dblp.xml without any problem. The DBLP FAQ reports more details. The XML root element <dblp> contains a long sequence of bibliographic records. The DTD lists several elements to be used as a bibliographic record:

<!ELEMENT dblp (article | inproceedings | proceedings | book | incollection | phdthesis | mastersthesis | www)*>

These tags correspond to the entry types used in BibTeX[5]. DBLP records may be understood as BibTeX records in XML syntax "+ ε":

<article key="journals/cacm/szalay08" mdate="...">
<author>Alexander S. Szalay</author>
<title>Jim Gray, astronomer.</title>
<pages>58-65</pages>
<volume>51</volume>
<journal>Commun. ACM</journal>
<number>11</number>
<ee>...</ee>
<url>db/journals/cacm/cacm51.html#szalay08</url>
</article>

Record Attributes
This record describes an article from CACM. The enclosing article element has two attributes: key is the unique key of the record. DBLP keys look like slash-separated Unix file names. The most important sub-trees in the key namespace are conf/* for conference or workshop papers and journals/* for articles which are published in journals, transactions, magazines, or newsletters. The second part of a DBLP key typically designates the conference series or the periodical the paper appeared in. The last part of the key may be any sequence of alphanumerical characters; in most cases these IDs are derived from the authors' names and the year of publication, sometimes a letter is appended to make this key part unique. Keys are not changed if we correct misspelled author names or year information. Obviously keys are not functionally dependent on the contents of their records; you should not make any assumption about the last part of a key. The DBLP key namespace layout is coarse and sometimes fuzzy: periodicals or conference series often are renamed, sometimes they are split or joined. The world of publications does not form a hierarchy; the mapping into the DBLP namespace often is a very helpful sub-categorization, but sometimes it may be wrong or controversial. A typical problem is the decision whether to treat a satellite workshop as a self-contained event or as a part of the hosting conference. mdate is the date of the last modification of this record. The format complies with ISO 8601, i.e. YYYY-MM-DD. We have no explicit change log for DBLP, but the mdate attribute makes it easy to load only recent additions into your application. The file dblp.xml contains only the current version of the records. dblph.xml is an extended version of the XML file which additionally contains old versions of the records.
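The record structure above can be consumed with any streaming XML parser. The following sketch shows one way to iterate over dblp-style records in Python with ElementTree's iterparse; the record tags come from the DTD quoted above, while the sample document (and its mdate value) is an illustrative stand-in for the real multi-hundred-MB file. Note that the real dblp.xml declares its symbolic entities in the external dblp.dtd, which the standard-library parser does not read, so for the real file you would need to pre-expand entities or use a DTD-aware parser.

```python
import io
import xml.etree.ElementTree as ET

# The record element names listed in dblp.dtd:
RECORD_TAGS = {"article", "inproceedings", "proceedings", "book",
               "incollection", "phdthesis", "mastersthesis", "www"}

# A tiny inline stand-in for dblp.xml (mdate value is illustrative):
SAMPLE = b"""<?xml version="1.0" encoding="iso-8859-1"?>
<dblp>
<article key="journals/cacm/szalay08" mdate="2009-01-01">
<author>Alexander S. Szalay</author>
<title>Jim Gray, astronomer.</title>
<pages>58-65</pages>
<volume>51</volume>
<journal>Commun. ACM</journal>
<number>11</number>
</article>
</dblp>"""

def iter_records(stream):
    """Yield (key, mdate, fields) for each top-level DBLP record.

    Clearing each record element after use keeps memory flat, which
    matters when the input is the real dblp.xml rather than a sample."""
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag in RECORD_TAGS:
            fields = [(child.tag, (child.text or "").strip()) for child in elem]
            yield elem.get("key"), elem.get("mdate"), fields
            elem.clear()

for key, mdate, fields in iter_records(io.BytesIO(SAMPLE)):
    print(key, mdate)
    print(dict(fields)["journal"])
```

The per-record `elem.clear()` is the crucial detail: without it, iterparse builds the whole tree in memory, which defeats the purpose of streaming.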
This information was extracted from the daily backups of the last years. It may be useful for the analysis of the evolution of DBLP. dblph.xml is only updated occasionally. In our example the elements author, title, pages, year, volume, journal, and number are used similarly to the corresponding BibTeX fields. Author In BibTeX there is at most one author field, which may contain an "and"-separated list of names. In DBLP records there is an author element for each author. The order of the author elements inside a record is significant; it should be the same as on the head of the paper. In BibTeX a name may be entered as "Michael Ley" or as "Ley, Michael" (last name, first name). In DBLP we always use the first form; there should be no comma inside author elements. If name parts are abbreviated, each initial should always be followed by a dot, i.e. we write H. P. Smith and not H P Smith or HP Smith. Behind a dot, there should always be a blank or a hyphen. Names may be composed of a list of name parts. We do not make any statement about the role of the name parts. For western names, the sequence usually starts with given names and ends with family names. But even in Europe there is large diversity in local naming traditions. In many situations person names are outside of this simplistic schema: The name "Luqi" is complete; the categories first/last name simply do not apply. Ingibjörg Sólrún Gísladóttir is a traditional Icelandic name. Gísladóttir means daughter of Gísla; there is/was no family name inherited over many generations. Because of these patronyms the Icelandic telephone directory is sorted by first names. In Peter van der Stok only Peter is the given name. Yi-Ping Phoebe Chen is the transliteration of a Chinese name; Phoebe is an additional western name. Jr. is a postfix often appended to names: Ola R. Snøve Jr. We separate postfixes by a space and not by a comma from the other name parts. Section 3 describes more details of person names in DBLP.
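The name micro-syntax above (no comma, every initial followed by a dot, a blank or hyphen behind every dot) can be captured in a small normalizer. This is a sketch of the stated rules, not DBLP's actual code; the function name is hypothetical.

```python
import re

def normalize_author(name):
    """Apply the DBLP name micro-syntax described in the text (a sketch):
    - "Last, First" becomes "First Last"
    - a bare initial gets a trailing dot: "H P Smith" -> "H. P. Smith"
    - a dot is always followed by a blank (or hyphen)
    """
    name = name.strip()
    # "Ley, Michael" -> "Michael Ley"
    if "," in name:
        last, _, first = name.partition(",")
        name = f"{first.strip()} {last.strip()}"
    # single capital letters not already followed by ., ' or - get a dot
    name = re.sub(r"\b([A-Z])\b(?![.'-])", r"\1.", name)
    # insert a blank after any dot not followed by whitespace or hyphen
    name = re.sub(r"\.(?=[^\s\-])", ". ", name)
    return re.sub(r"\s+", " ", name).strip()
```

Ambiguous forms like "HP Smith" are deliberately left untouched, since it is not decidable from the string alone whether "HP" is two initials or a name part.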
If the author of a publication is unknown, the DBLP record does not contain any author element. Title The only element which has to exist in every DBLP publication record is the title element. It may contain sub elements for subscripts, sup elements for superscripts, i elements for italics, and tt elements for typewriter text style. These elements may be nested. Pages Our preferred style to fill the pages element is from-to (unlike the -- of BibTeX). If the number of the last page is unknown, we write from-. If it is a single-paged paper, we just write the page number without a hyphen. For split articles in magazines we occasionally use a comma-separated list of page numbers or page ranges. In rare cases the pages element may contain any character sequence. Whenever available we prefer the conventional page numbering which is established for paper media. It is a simple addressing scheme inside of volumes and a length specification at the same time. Unfortunately many electronic publications abandon page numbering. In these cases the pages element may contain a paper number or the length of the paper. Years The year element should always contain a four-digit number to be interpreted according to the Gregorian calendar. For journal articles, we assume that there is always a definite date of publication of the issue. For papers published in conference proceedings the year specification may become tricky if the proceedings are not published in the same year as the conference was hosted. In DBLP the year field of conference papers specifies the date when the conference took place; the year field of the enclosing proceedings specifies the publication date.
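The pages micro-syntax just described (from-to, from-, a bare page number, or free-form text) is easy to interpret mechanically. A minimal sketch, with hypothetical helper names:

```python
def parse_pages(pages):
    """Interpret DBLP's pages micro-syntax (a sketch):
    "58-65" -> (58, 65); "7" -> (7, 7); "123-" -> (123, None).
    Returns None for free-form contents (paper numbers etc.)."""
    pages = pages.strip()
    if pages.isdigit():                      # single-paged paper
        n = int(pages)
        return (n, n)
    first, sep, last = pages.partition("-")
    if sep and first.isdigit():
        if last.isdigit():
            return (int(first), int(last))   # regular from-to range
        if last == "":
            return (int(first), None)        # last page unknown: "from-"
    return None                              # anything else stays a string

def page_count(pages):
    """Length of a paper, when the range is complete."""
    span = parse_pages(pages)
    if span and span[1] is not None:
        return span[1] - span[0] + 1
    return None
```

Comma-separated lists for split magazine articles are not handled here; a fuller version would split on commas first and sum the ranges.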
Our example shows the situation for a typical post-proceedings:

<inproceedings key="conf/naa/xiang08" ...>
<title>Numerical Quadrature ...</title>
<booktitle>NAA</booktitle>
<crossref>conf/naa/2008</crossref>
</inproceedings>

<proceedings key="conf/naa/2008" ...>
<title>NAA 2008, Lozenetz, ...</title>
<year>2009</year>
<publisher>Springer</publisher>
</proceedings>

Crossref Like in BibTeX, we use an inproceedings record for the paper and a proceedings record for the volume. The crossref field in the inproceedings record contains the key of the proceedings record. The conference took place in 2008, the proceedings were published in 2009. In rare cases cumulative proceedings with papers from two or more years of a conference series are published. DBLP is not able to model this situation precisely; for an example look at LNCS ... For article records the journal field contains the name of the journal. The volume and the number fields are used to specify the issue the paper appeared in. For inproceedings records the booktitle field gives the short name of the conference or workshop. In many cases conference acronyms are only self-explanatory for researchers who work in the subfield of computer science the conference addresses. The corresponding proceedings record should contain more detailed information about the conference and the proceedings volume. Unfortunately the proceedings records (and the crossref fields) are missing for a lot of legacy inproceedings records. URL and EE A DBLP record may contain up to two URLs, in the fields url and ee. Both URLs may be either global or local. A global URL is a standard internet URL; it always starts with a protocol specification of the form letters followed by a colon (http:, ftp:, ...). If the url or ee contents do not start with a protocol name followed by a colon, it is a local URL pointing inside the DBLP web site. To get a valid URL, you simply have to add a base URL of a DBLP server as a prefix.
The base URLs (DBLPbURL) of the most stable DBLP servers are: .../~ley/db/ for the original DBLP server at the University of Trier, Germany; ... for our alternate server at Trier (the author pages of this server intentionally do not include the Complete Search facility); and ... for the DBLP mirror hosted by ACM SIGMOD. In retrospect, the field names url and ee are misnomers, but there is a simple translation: url = position inside the table of contents. When we started DBLP at the end of 1993, nearly all publications were only available on paper. A citation of a paper, for example on an author's page, should be linked upwards to its formal context, i.e. the table of contents of the proceedings or journal volume where the paper was published. For proceedings entries there should be a link downwards to its table of contents. In both cases the url field contains the location of the table of contents. The url field is available for almost all DBLP records. Usually it is a local URL. ee = position of the electronic edition. During the 1990s electronic versions of most formally published papers became available on the servers of ACM, IEEE, and the large commercial publishers. It was obvious to extend the DBLP bibliography to a portal which makes it easy to find publications on the publishers' servers. ee contains the required link information. Usually the ee fields are global URLs. Local URLs are only used if DBLP has supplementary information about a paper; this facility was used for the ACM SIGMOD Anthology. For some publishers it took several years to learn how to organize their digital libraries. Today most of them provide DOIs to address publications. Unfortunately some of the publishers do not map old URLs they declared as stable to the current ones. The most important cases are the IEEE Computer Society and Springer. DBLP still contains thousands of broken URLs which are the result of reorganizations by publishers which did not take care of backward compatibility.
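The local/global URL convention described above reduces to one rule: a URL is global iff it begins with a scheme followed by a colon; otherwise it is resolved against a DBLP base URL. A sketch (the base URL used here is a made-up placeholder, not one of the real DBLP servers):

```python
import re

# "global" = starts with a scheme followed by a colon (http:, ftp:, ...)
_SCHEME = re.compile(r"^[A-Za-z][A-Za-z0-9+.\-]*:")

def resolve(url, base):
    """Turn a url/ee field value into an absolute URL, following the
    convention described in the text. `base` is any DBLPbURL."""
    if _SCHEME.match(url):
        return url           # already a global URL, use it unchanged
    return base + url        # local URL: points inside the DBLP web site

base = "https://dblp.example.org/"   # hypothetical base URL for illustration
print(resolve("db/conf/er/er2008.html#norrie08", base))
print(resolve("https://doi.org/10.0000/example", base))
```

The self-contained-record design the text defends shows up here: resolving a link needs only the record's own field plus a base URL, with no lookup in auxiliary tables.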
To store the table of contents location in (nearly) every record and to use complete URLs for the external links may seem cumbersome. Definitely it is possible to represent this information more compactly, by reusing the URL information of proceedings records if available and by using auxiliary tables with the publishers' addressing schemes. The advantage of the more redundant representation is its simplicity: DBLP records are self-contained; from each record you may produce a linked citation like on the DBLP author pages without any additional lookup in auxiliary tables. (In)proceedings Our next example shows a record describing a conference paper published in an LNCS volume, and the record of the volume:

<inproceedings key="conf/er/norrie08" mdate="...">
<author>Moira C. Norrie</author>
<title>PIM Meets Web 2.0.</title>
<pages>15-25</pages>
<booktitle>ER</booktitle>
<ee>...</ee>
<crossref>conf/er/2008</crossref>
<url>db/conf/er/er2008.html#norrie08</url>
</inproceedings>

<proceedings key="conf/er/2008" mdate="...">
<editor>Qing Li</editor>
<editor>Stefano Spaccapietra</editor>
<editor>Eric Yu</editor>
<editor>Antoni Olivé</editor>
<title>Conceptual Modeling - ER 2008, 27th International Conference on Conceptual Modeling, Barcelona, Spain, October 20-24, 2008. Proceedings</title>
<volume>5231</volume>
<isbn>...</isbn>
<booktitle>ER</booktitle>
<series href="db/journals/lncs.html">Lecture Notes in Computer Science</series>
<publisher>Springer</publisher>
<url>db/conf/er/er2008.html</url>
</proceedings>

The proceedings record lists the editors of the volume. For conference proceedings, editor is a fuzzy term. The editors listed on most LNCS proceedings are the leading organizers of the conferences: the chairs of the program committee and the general chairs. Additionally the copy editor may be listed, that is the person who did the work to form a consistent book from the manuscripts.
Unfortunately the digital library of the IEEE Computer Society and the Xplore system of the IEEE umbrella organization do not list any editors. On the other hand, ACM now lists the PC chairs and general chairs of most conferences, but does not mention the copy editors. Editors are only the tip of the iceberg: To run a large academic conference requires hard work by hundreds of people. People who volunteer for this kind of service should be credited for their work. Several DBLP users suggested extending our service in this direction. Unfortunately we do not have the human resources to maintain such a service at a reasonable level. If you plan any bibliometric analysis of editorship, you should be aware that this information is very incomplete and that there exists no consensus on who should be listed as the editors of a proceedings volume. The title field of proceedings records requires some comments, too. Most LNCS volumes are ideal: After the main title, they list the exact name of the conference, the location of the event, and the exact date. Whenever possible, we try to imitate this style. Some publishers give only very incomplete information about conferences or workshops; a bad example is set by Xplore. The series field specifies the book series the volume is part of. If there is a series field and a volume field, the volume number is interpreted as the numbering of the series. Sometimes two volume fields with different roles would be required: LNCS volume 5358 is the first proceedings volume of ISVC 2008 and LNCS volume 5359 is the second proceedings volume of this conference. We refrained from inventing a second volume field. We use a simple convention: The local number is appended in braces to the booktitle of the proceedings and the inproceedings records, in our example ISVC (1) / ISVC (2). The optional attribute href in the series field contains the local URL of the main page of the series.
LNCS Journals Our last example of this section looks somewhat exotic:

<article key="journals/jods/hurtadopw08" mdate="...">
<author>Carlos A. Hurtado</author>
<author>Alexandra Poulovassilis</author>
<author>Peter T. Wood</author>
<title>Query Relaxation in RDF.</title>
<pages>31-61</pages>
<volume>10</volume>
<journal>J. Data Semantics</journal>
<ee>...</ee>
<crossref>journals/jods/...</crossref>
<url>db/journals/jods/jods10.html#hurtadopw08</url>
</article>

<proceedings key="journals/jods/..." mdate="...">
<editor>Stefano Spaccapietra</editor>
<title>Journal on Data Semantics X</title>
<booktitle>J. Data Semantics</booktitle>
<series href="db/journals/lncs.html">Lecture Notes in Computer Science</series>
<volume>4900</volume>
<publisher>Springer</publisher>
<isbn>...</isbn>
<url>db/journals/jods/jods10.html</url>
</proceedings>

Springer publishes several journals inside of the LNCS series. Each volume of these journals is an LNCS volume. Rightly, authors of papers from these journals asked us to classify their papers as articles. On the other hand, the volumes of these journals are more self-contained than usual volumes of a journal: They have an ISBN and a series volume number, and the editors change from volume to volume. Our ad hoc solution is to describe the volumes by proceedings records or book records, but this is incorrect and shows the limitations of classic BibTeX records to model the world of scientific publications. More Record Types A future DBLP version should have new record types for journal volumes. In addition it makes sense to make journals as a whole first-class citizens, i.e. to model them explicitly by records of a new type. Information like the ISSN, the publisher of the journal, and the journal home page can be stored in such records. The same holds for conference series like the annual VLDB conference, or book series like LNCS.
The modeling will become complicated because these objects do not form a hierarchy: A proceedings volume of a joint conference is a member of several conference series. The SIGMOD 2000 proceedings volume is a member of the SIGMOD conference stream and it is a number (not a volume) of SIGMOD Record volume 29. We are aware of many shortcomings of the current data model, but nevertheless a fundamental revision has low priority. We fear that a perfect data model becomes very complex and makes data acquisition too expensive. In DBLP we use plain hypertext pages to describe situations not covered by our data model. We discuss this escape mechanism in section 4. To collect citation links is another feature which is very attractive, but which was abandoned because it is too costly to be done manually. The cite fields in the DBLP records were entered for a small subset of the ACM SIGMOD Anthology papers; the contents of these fields are DBLP keys of cited papers. For large-scale citation link information you should look at CiteSeer and the ACM Guide to Computing Literature.

3. PERSONS
Humans are social beings. Language requires naming the persons you want to speak about. As long as life was organized in small groups and there was little exchange with other groups, ad hoc naming of persons worked perfectly. It was sufficient to name a person by her/his function ("Smith") or by the parents ("Gísladóttir"). A large variety of naming conventions developed in different cultures[2]. DBLP is a global person register for researchers of computer science and neighboring sciences. Our collection now contains about ... different person names. On this scale, which is at most medium compared to the set of all living humans, the traditional naming conventions reach their limits. Synonyms and homonyms become a main problem.
Author Pages When we started DBLP in 1993 with the papers of a few hundred persons from the database systems community, we did not take care of any scaling problems or the complexity of naming conventions. Each person should have her/his own DBLP author page. The papers by David Maier are listed on DBLPbURL + indices/a-tree/m/Maier:David.html and the papers by Laura M. Haas on DBLPbURL + indices/a-tree/h/Haas:Laura_M=.html (DBLPbURL = a DBLP base URL, see section 2). All pages for persons whose last name starts with an "m" are stored in the directory /indices/a-tree/m/, all "h" names are stored in /indices/a-tree/h/, etc. The filenames inside these directories are formed as last-name:first-name.html. Blanks are mapped to underscores; all other characters which are not alphanumeric are mapped to "=". This avoids illegal URLs. Today this primitive mapping produces huge directories. For example, there are now more than ... persons with last names starting with "s". A few years ago this was a performance problem. Today there is no good argument to change the URLs of thousands of web pages which are referred to by numerous other web servers and search engines. We regard URL stability as a very important virtue to make a service reputable. There are (at least) two solutions for implementation: We continue to materialize DBLP author files as static HTML files. The files are generated daily. Fortunately a lot of database system technology has been moved into standard file systems. Contemporary file systems have no performance problem handling huge directories, because they use variants of B-trees or hash tables (ext2/3) as access paths. For the file system with the author pages, we switched off the logging into a journal; if the system crashes, we produce new author pages from dblp.xml. The journaling was the main bottleneck when we refreshed all author pages. Today you can choose from a large palette of technologies to implement dynamic web pages.
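The filename mapping just described (last-name:first-name.html, blanks to underscores, other non-alphanumerics to "=") can be sketched as follows. This reimplements the stated convention for illustration; it is not DBLP's own code, and it takes the first/last split as given rather than computing it.

```python
def author_page_path(first, last):
    """Map an already-split (first, last) name to the author-page path,
    following the convention in the text: blanks become underscores,
    any other non-alphanumeric character becomes '='."""
    def encode(part):
        out = []
        for ch in part:
            if ch.isalnum():
                out.append(ch)
            elif ch == " ":
                out.append("_")
            else:
                out.append("=")
        return "".join(out)
    bucket = last[0].lower()     # directory: first letter of the last name
    return f"indices/a-tree/{bucket}/{encode(last)}:{encode(first)}.html"

print(author_page_path("David", "Maier"))
print(author_page_path("Laura M.", "Haas"))
```

Prefixing the result with a DBLPbURL yields the full author-page URL, matching the two examples given above.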
Even the oldest of these technologies, the CGI interface between web servers and programs written in nearly any language, allows the realization of any legal URL by software. For example, the DBLP BibTeX pages look like static HTML files, but they are produced on demand by a little (and still imperfect) C program from the XML records. In retrospect it was a design error to split the person names into first names and last names. A correct name splitting is not computable with feasible costs. To make our algorithm reproducible, we nevertheless document the source code in the appendix. The problem becomes evident if you apply it to a Spanish name like "Juan Antonio Holgado Terriza": The algorithm says that Terriza is the last name, but in fact Holgado Terriza is the correct answer. Certainly it is possible to codify a lot of knowledge about naming into an algorithm, but in practice any algorithm remains incomplete because our knowledge is incomplete. To tag name parts with more detailed markup fails for the same reason. For some users of DBLP the incorrect splitting of their names is annoying, but we no longer claim that the name parts are first/last names. Homonyms and Synonyms David and Maier are popular names, but at the moment there seems to be only one person with this name who publishes computer science papers, the well-known professor at Portland, Oregon. Other names are less unique. In DBLP you may now find papers by at least seven different persons with the name Chen Li. In these cases, we have to add some small mystical number to make the homonym persons distinguishable:

<author>Chen Li</author>
<author>Chen Li 0002</author>
<author>Chen Li 0007</author>

We optionally append a space character and a four-digit number to names. For the naïve name split algorithm the number is a postfix to the last name.
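The homonym convention (an optional blank plus a four-digit number appended to the name) is trivial to parse off again, which applications consuming dblp.xml routinely need to do. A sketch, with a hypothetical function name:

```python
import re

# an optional homonym suffix: a blank plus exactly four digits at the end
_HOMONYM = re.compile(r"^(.*) (\d{4})$")

def split_homonym(name):
    """Split the optional four-digit homonym ID off a DBLP author name:
    "Chen Li 0002" -> ("Chen Li", "0002"); "Chen Li" -> ("Chen Li", None)."""
    m = _HOMONYM.match(name)
    if m:
        return m.group(1), m.group(2)
    return name, None
```

A name whose genuine last part happens to be four digits would be misparsed; within DBLP's conventions the trailing four-digit group is reserved for homonym IDs.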
The number is part of the URL, but not printed with the name: /indices/a-tree/l/Li_0007:Chen.html. The hard problem is to detect homonym persons and to assign the correct ID numbers. We have no solution for this problem, but simple heuristics have proved to be very helpful: The coauthor index at the bottom of the DBLP author pages is colored. If persons have jointly published, they get the same color. If there is no connection between two persons, either directly or transitively via another member of the coauthor list, they are assigned different colors. If a coauthor list is monochrome, we are quite confident that the main name entry represents a single person. There are two main reasons for multicolored coauthor lists: If DBLP only contains a small sample of the publications of a person, our information is simply too incomplete; this often happens to senior researchers working in an area only partially covered by DBLP. The other reason is a "split personality". If the coauthors can be separated into disjoint groups, the main entry may represent several persons, or a person who works in several distant research areas with unconnected colleagues. The reason for this may be a change of affiliation. An open challenge is to develop a clustering algorithm which indicates homonyms with better precision. This algorithm should not only look at the structure of the coauthor graph, but also at the conferences, journals, title key words, publication years etc. At the moment the splitting of DBLP author pages is triggered either by requests of authors who find their publications mixed with other persons' writings, or if we can prove our own strong suspicion that there are several persons behind an entry. In many cases homonyms remain undetected. The problem is not new. The library community has worked on it for a long time.
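The coauthor-list coloring heuristic is a connected-components computation restricted to the members of one coauthor list. A minimal sketch, assuming the coauthor graph is given as a name-to-neighbors mapping:

```python
from collections import deque

def coauthor_colors(coauthors, person):
    """Color the coauthor list of `person` as described in the text:
    two coauthors get the same color iff they are connected, directly
    or transitively via other members of the coauthor list (`person`
    itself is excluded from the paths). Colors are numbered 0, 1, ..."""
    neighbors = coauthors.get(person, set())
    color, colors = 0, {}
    for start in sorted(neighbors):
        if start in colors:
            continue
        colors[start] = color
        queue = deque([start])           # BFS inside the coauthor list
        while queue:
            cur = queue.popleft()
            for nxt in coauthors.get(cur, set()):
                if nxt in neighbors and nxt not in colors:
                    colors[nxt] = color
                    queue.append(nxt)
        color += 1
    return colors

# toy graph: A and B have coauthored, C has not published with either
g = {"P": {"A", "B", "C"},
     "A": {"P", "B"}, "B": {"P", "A"}, "C": {"P"}}
print(coauthor_colors(g, "P"))
```

A monochrome result (all values equal) corresponds to the "quite confident it is a single person" case; C's distinct color flags a possible split personality or missing data.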
An important project is VIAF, the Virtual International Authority File; their web interface is available online. If you enter my name, you get a list of some of my old papers mixed with publications of several unrelated homonyms. I share the year of birth (1959) with at least one of these homonyms. After an entry has been split and hidden numbers have been added to the author and editor fields in the bibliographic records, we may fail to assign the correct name ID to new entries. Again we have no perfect solution, but a heuristic which works in many cases: There is a daily job which looks for homonym persons with different ID numbers and a shared coauthor. Until now there were no homonyms close enough to share a coauthor; all alerts produced by our program were caused by input errors, and we are still waiting for the first false hit. The job to alert for likely incorrect homonym IDs is a special case of a more general framework[6]: In a first step we select a promising subset of the huge product space of DBLP person pairs. It is often senseless (e.g. for the homonyms) or too expensive to consider all person pairs. A blocking function (this notion was introduced by the census community) selects a subset of the product space. Important blockings are (1) all person pairs with distance two in the coauthor graph, (2) all person pairs who have published in the same conference series or journal, (3) all person pairs who have published papers which contain the same rare title (key-)word. The person-pair streams provided by the blockings may be filtered. For example, we may be interested only in persons with overlapping publication years. Other filters look at the mdate attribute or the coauthor colorings. The most important filter applies a string distance to the names of the two persons.
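The blocking-then-filtering pipeline can be sketched for blocking (1): enumerate person pairs at distance two in the coauthor graph (a shared coauthor, no direct edge), then keep pairs with similar names. The generic similarity used here is a stand-in for DBLP's specialised comparators, and the toy graph is invented for illustration.

```python
from itertools import combinations
from difflib import SequenceMatcher

def distance_two_pairs(coauthors):
    """Blocking (1): all person pairs sharing at least one coauthor but
    not connected directly. `coauthors` maps a name to a set of names."""
    pairs = set()
    for middle, neighbors in coauthors.items():
        for a, b in combinations(sorted(neighbors), 2):
            if b not in coauthors.get(a, set()):   # no direct edge
                pairs.add((a, b))
    return pairs

def similar(a, b, threshold=0.8):
    """A generic stand-in for DBLP's many name distance functions."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# toy graph: two near-identical names share the coauthor "Jane Doe"
g = {"Jane Doe":    {"Rene Smith", "Renee Smith"},
     "Rene Smith":  {"Jane Doe"},
     "Renee Smith": {"Jane Doe"}}
candidates = [(a, b) for a, b in distance_two_pairs(g) if similar(a, b)]
print(candidates)
```

The blocking step keeps the pipeline tractable: only pairs that are already close in the graph reach the (comparatively expensive) string-distance filter.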
Typical string distances are the classical edit distance, diacritic/case-insensitive comparisons (René ~ Rene), comparators insensitive to permutations of name parts (Li Chen ~ Chen Li), comparators which are able to expand initials (M. Ley ~ Michael Ley), or the comparator mentioned above which ignores the hidden ID (Chen Li 0003 ~ Chen Li). We use this software to find likely synonyms, e.g. if there are entries Michael J. Carey and M. Carey in the same journal, we should check if they belong to the same person. It is easy to produce long lists of candidates, but fine tuning to get a good precision remains an open problem. We implemented more than 20 distance functions for person names. But there remain many cases which are hard to capture by general rules and require a specialised knowledge base. For an example of a hard case look at A. Kourtis = Anastasios Kourtis = Tasos Kourtis = T. Kourtis. The software sketched above works retrospectively; it helps to find errors in the data base. In addition we experiment with software which tries to avoid input errors. The basic idea is to consider the input of a new multi-authored publication as a graph matching problem. We have to find a good position for the small graph (the new entry) in the huge graph (DBLP). First we look for exact matches of person names, then we use distance functions insensitive to diacritics and initials. If we find some of the names to be entered, we do a local search in the neighborhood of the hits with more expensive distance functions. Again, the fine tuning of this algorithm remains the challenge. Often the human user sees partial hits our algorithms are not able to locate. Person Records Many DBLP authors maintain their own personal home pages on the web. It soon became obvious to add links from the DBLP author pages to personal home pages.
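One of the comparators listed above, the one that can expand initials (M. Ley ~ Michael Ley), can be sketched as a part-by-part comparison. This simple version assumes equally many name parts, so the M. Carey ~ Michael J. Carey case would additionally need dropped-middle-part handling; the function name is hypothetical.

```python
def initials_match(short, full):
    """True if every part of `short` either equals the corresponding
    part of `full` or is an initial of it (and vice versa). A sketch of
    the initials-expanding comparator mentioned in the text."""
    a, b = short.split(), full.split()
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        x, y = x.rstrip("."), y.rstrip(".")
        if x == y:
            continue
        if len(x) == 1 and y.startswith(x):   # initial expands to full part
            continue
        if len(y) == 1 and x.startswith(y):
            continue
        return False
    return True
```

Permutation insensitivity (Li Chen ~ Chen Li) and diacritic folding (René ~ Rene) would be separate comparators in the same family.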
Home pages were modeled as special web publications:

<www key="homepages/m/davidmaier" ...>
<author>David Maier</author>
<title>Home Page</title>
<url>...</url>
</www>

The key always starts with homepages/, the author field specifies the name of the person, the title field always has the value Home Page, and the url field contains the location of the home page. Later it became clear that this modeling was shortsighted: We should be able to store more information about a person; we need person records. To enable a smooth upgrading of running software, we extended the home page records to person records. The most important addition are secondary names for persons:

<www key="homepages/h/alonyhalevy" ...>
<author>Alon Y. Halevy</author>
<author>Alon Y. Levy</author>
<url>...</url>
</www>

Persons may change their names for several reasons; marriage is the most important one. If a person record contains more than one author field, the additional names are interpreted as a list of secondary names for the same person. On the DBLP web server we simply produce redirections from the secondary name URLs to the primary name URL. The primary name person page lists the publications under any spelling variant of the person. Secondary names are not only useful to model name changes; they also enable us to deal with synonyms which are used at the same time:

<www key="homepages/r/cjvanrijsbergen" ...>
<author>C. J. van Rijsbergen</author>
<author>Cornelis Joost van Rijsbergen</author>
<author>Keith van Rijsbergen</author>
<url>...</url>
</www>

This famous IR pioneer is known under three name variants. To identify people, it may be helpful to store additional information like their affiliation or their name in an alternative writing system. In person records there is an optional note field. The contents of this field are printed at the heading of the corresponding person page. Currently the note field of person records is the only place where DBLP extends beyond the Latin-1 character set.
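The secondary-name convention makes the synonym mapping mechanical: the first author field of a homepages/* record is the primary name, all further author fields redirect to it. A sketch of building that redirection table (record values here follow the Halevy example above):

```python
def redirects(person_records):
    """Build the secondary-name -> primary-name mapping described in
    the text. `person_records` is a list of (key, [author names]);
    only homepages/* records with more than one author field matter."""
    mapping = {}
    for key, authors in person_records:
        if not key.startswith("homepages/") or len(authors) < 2:
            continue
        primary = authors[0]
        for secondary in authors[1:]:
            mapping[secondary] = primary   # redirect to the primary page
    return mapping

recs = [("homepages/h/alonyhalevy", ["Alon Y. Halevy", "Alon Y. Levy"])]
print(redirects(recs))
```

This is exactly what the DBLP web server does with these records: requests for a secondary-name URL are answered with a redirection to the primary-name page, which lists the publications under all variants.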
  <www key="homepages/m/atsuyukimorishima">
    <author>Atsuyuki Morishima</author>
    <note>森嶋厚</note>

The last extension of person records is for now still populated by only a few instances. cite fields inside person records are interpreted as biographical citations in a broader sense; we intend to associate Festschrift publications, obituaries, etc. with the honored person.

  <www key="homepages/k/parisckanellakis">
    <author>Paris C. Kanellakis</author>
    <url></url>
    <note>Dec. 3, Dec. 20, 1995</note>
    <cite>conf/birthday/2003pkc</cite>
    <cite>journals/csur/abiteboulkmsv96</cite>
    <cite>conf/pods/abiteboulkpv96</cite>

Person IDs

DBLP is a (very imperfect) authority file for computer scientists. Conference servers, conference management systems, preprint servers, publishers, and universities refer to DBLP. Several of these members of the publication chain asked us to provide a stable mechanism to point to persons in DBLP. Keeping the URLs of author pages stable was a first important step. But this policy is limited by the dependence of the URL on the exact spelling of the person name. For most persons the spelling converges to a stable and hopefully correct state after a while. But sometimes our information about a name remains incomplete for a long period. Nevertheless it is necessary to provide stable person IDs to enable other applications to exchange information with DBLP. A global person ID does not exist and is very controversial. Even for scientists it is not well established yet. Until there is a more general mechanism, we should introduce our own DBLP person IDs. Again we do this by extending an existing mechanism: we simply use the IDs of person records as person IDs. The IDs of the existing person records remain unchanged. For any person in DBLP who has no person record we generate a new record. The new IDs are consecutively assigned integers. A generated person record only contains a mapping from an ID to a name:

  <www key="homepages/45/123">
    <author>C.
Ley</author>

We may change C. Ley to Carola Ley as soon as we get additional information about the name. The person ID remains stable, the URL of the author page changes. Two situations require further explanation: author pages may be joined or split. If there is already an author page and person ID for Carola Ley, we may have assigned two or more IDs to the same person. We should not invalidate a redundant ID, but register that it is equivalent to another one:

  <www key="homepages/45/123">
    <crossref>homepages/55/1002</crossref>

The hard case is the splitting of author pages. On the page of C. Ley we may have collected publications by Carola Ley and Christoph Ley. The ID of C. Ley should become invalid because it is not a single person, but a set of similarly named persons. In practice this strict interpretation of splitting may have absurd side effects: assume C. Ley (alias Carola) has published papers listed in DBLP for several years with her initial only. Now the first paper of Christoph is added to the author page of C. Ley. In this asymmetric situation it makes sense to keep the ID for Carola stable and to assign a new ID to Christoph, even if his publications temporarily were merged with Carola's publications by mistake. There remains a gray zone in the decision when a splitting is asymmetric or symmetric. To make person IDs operational we provide a simple mapping service. The URL redirects to the author page with the specified pid (person record ID without the homepages/ prefix). For example, try the pid 45/123 from the generated record above.

4. ESCAPE TO HYPERTEXT

DBLP was started in 1993 as a small collection of HTML files which were directly entered using a standard text editor. Very soon the bibliographic records were extracted from the HTML files. For the records we used a format very close to XML; later it was adjusted to conform with the standard. We did not envision the details of XML in 1994, but it was a natural choice for lazy programmers.
Our first parser was trivially derived from the parser of the xmosaic browser; using a small customized markup language was very simple to implement. Not all information from the original HTML tables of contents (TOCs) went into the bibliographic records. dblp.xml is sufficient to generate the DBLP person pages, but not the TOC pages and the navigation pages. The most notable information missing for the TOCs are session titles. The source for a typical TOC page now looks like this:

  <cite key="conf/vldb/2006">
  <h2>Keynote Addresses</h2>
  <ul>
  <li><cite key="conf/vldb/jhingran06" style=ee>
  <li><cite key="conf/vldb/sikka06" style=ee>
  </ul>
  <h2>Ten-Year Best Paper Award Talk Session</h2>
  <ul>
  <li><cite key="conf/vldb/halevyro06" style=ee>
  </ul>
  <h2>Research Sessions</h2>
  <h3>Continuous Query Processing</h3>
  <ul>
  <li><cite key="conf/vldb/lictach06" style=ee>
  <footer>

This is HTML with a few additional customized elements: cite includes the bibliographic record specified by the key attribute. The optional attribute style is used to choose from several formatting options. footer produces the standard DBLP footer, ref is used for hyperlinks inside of the DBLP web pages, etc. We named this slightly extended HTML bibliography hypertext (BHT). In 1994 we implemented a primitive program which produces HTML from these BHT files. The program imitated the idea of the C preprocessor; later Server Side Includes and several more advanced mechanisms to compose HTML from scattered building blocks were introduced. Contrary to server side includes, the HTML pages are not produced on demand but are composed daily in advance. We use this archaic mechanism until today because it gives us a lot of flexibility to circumvent the limited modeling power of the bibliographic records. In essence DBLP = bibliographic records + BHT files. Even if we often reach the limits of the bibliographic records, we are very hesitant to extend the model because we try to keep it simple and manageable.
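The cite-include mechanism can be pictured as a simple textual preprocessor. The sketch below is hypothetical (the real DBLP program and its record store are internal); it only illustrates replacing a cite element with the formatted record its key points to:

```python
import re

# Hypothetical in-memory record store keyed by DBLP keys
records = {
    'conf/vldb/halevyro06': 'A. Halevy, A. Rajaraman, J. Ordille: ...',
}

def expand(bht: str) -> str:
    # Replace each <cite key="..."> element with the formatted record
    return re.sub(r'<cite key="([^"]+)"[^>]*>',
                  lambda m: records.get(m.group(1), '[missing record]'),
                  bht)

html = expand('<li><cite key="conf/vldb/halevyro06" style=ee>')
assert html == '<li>A. Halevy, A. Rajaraman, J. Ordille: ...'
```

Because the expansion runs in advance rather than on demand, the generated pages can be served as plain static HTML, which matches the daily-composition approach described above.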
The hypertext part of DBLP gives us the freedom to describe any irregularity in the publication world without inventing a mechanism for a singular case. In a few cases we extended the power of the bibliographic records because an unanticipated phenomenon started to occur more frequently. Homonyms/synonyms and their representation in person records are the most important examples of a DBLP feature which was moved from hypertext to records. For the TOC pages the BHT files often are not more than a skeleton with the enumeration of the papers which constitute the volume. In many cases they are upgraded with session titles. Sometimes we have added editorial comments. TOC BHT files for proceedings may be viewed as appendices to the corresponding proceedings records. The url field is the connecting link. On the level above, for journals or conference streams, the BHT files often become much more irregular. Here the escape mechanism to hypertext may be more justified. The original BHT files are still not consistent with the XML syntax. In /xml/dblp bht.xml you may find a version of them which is XML-ized by a script.

5. REFERENCES

[1] Authority control. Wikipedia.
[2] Personal name. Wikipedia.
[3] A. H. F. Laender, C. J. P. de Lucena, J. C. Maldonado, E. de Souza e Silva, and N. Ziviani. Assessing the research and education quality of the top Brazilian Computer Science graduate programs. SIGCSE Bulletin, 40(2).
[4] T. C. Lam, J. J. Ding, and J.-C. Liu. XML document parsing: Operational and performance characteristics. IEEE Computer, 41(9):30-37, September.
[5] L. Lamport. LaTeX User's Guide and Document Reference Manual. Addison-Wesley.
[6] M. Ley and P. Reuther. Maintaining an online bibliographical database: The problem of data quality.
In EGC 2006, Actes des sixièmes journées Extraction et Gestion des Connaissances, Lille, France, janvier 2006, 2 Volumes, pages 5-10.

APPENDIX

If your software needs only a few facts from DBLP, downloading the entire dblp.xml file may be too costly a burden. The web pages are intended for humans; wrappers are always exposed to the risk of formatting changes. In the online appendix we describe a very basic API for DBLP.
Performing an HTTP Request in Python

Check out DataCamp's Importing Data in Python (Part 2) course that covers making HTTP requests. In this tutorial, we will cover how to download an image, pass an argument to a request, and how to perform a 'post' request to post data to a particular route. Also, you'll learn how to obtain a JSON response to do a more dynamic operation.

- HTTP
- Libraries in Python to make an HTTP Request
- Requests in Python
- Using a GET Request
- Downloading and Saving an Image Using the Requests Module
- Passing Arguments in the Request
- Using a POST Request
- JSON Response

HTTP

HTTP stands for 'HyperText Transfer Protocol', a protocol in which communication happens through a request made by the client and a response returned by the server. For example, you can use the client (browser) to search for a 'dog' image on Google. That sends an HTTP request to the server, i.e., the place where the dog image is hosted, and the response from the server is a status code together with the requested content. This process is known as the request-response cycle. You can also look at the article What is HTTP for a more detailed explanation.

Libraries in Python to make an HTTP Request

There are many libraries to make an HTTP request in Python, such as httplib, urllib, httplib2, treq, etc., but requests is the simplest and best-documented among them all. You'll be using the requests library for this tutorial, and the command for installing it is below:

pip install requests

Requests in Python

According to Wikipedia, "Requests is a Python HTTP library, released under the Apache2 License. The goal of the project is to make HTTP requests simpler and more human-friendly. The current version is 2.22.0."

Using a GET Request

A GET request is the most common method and is used to obtain the requested data from a specific server.
You need to import the required module in your development environment using the following command:

import requests

You can retrieve the data from a specific resource by using requests.get('specific_url'), where 'r' is the response object:

r = requests.get('')

Status Code

According to Wikipedia, "status codes are issued by a server in response to a client's request made to the server". There are lots of other status codes, and detailed explanations can be found in the HTTP Status Code reference. However, the most common status code is explained below:

r.status_code

You can see below, after running the above code, that the status code is '200', which is 'OK', meaning the request was successful.

Headers

You can view the response headers by using '.headers', which returns a dictionary-like object. It contains a lot of additional information, with case insensitive names, about the resource type along with the server name, version, etc., as included with the code shown below:

r.headers

The vital information obtained in the above code is the server name ('Apache'), content type, encoding, etc.

r.headers['Content-Type']

You can see above the content type of the header by using 'Content-Type', which is case insensitive, so 'content-type' would also give the same result.

Response Content

You can get the HTML text of a page by using '.text', where requests automatically decodes the content from the server into a Unicode string, with the code below:

r.text

You can get an entire page of HTML, and parsing can be done with the help of an HTML parser.
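If you want to interpret a numeric status code without looking it up, the standard library's http.HTTPStatus enum maps each code to its reason phrase; this works offline and independently of requests:

```python
from http import HTTPStatus

# 200 means the request succeeded; 404 means the resource was not found
assert HTTPStatus(200).phrase == 'OK'
assert HTTPStatus(404).phrase == 'Not Found'
assert HTTPStatus.OK == 200  # the enum compares equal to the plain int
```

A check such as r.status_code == HTTPStatus.OK reads more clearly than a bare 200.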
'\r\n<!DOCTYPE html>\r\n<html>\r\n<head>\r\n\r\n<link href="" rel="stylesheet" type="text/css" />\r\n<link rel="shortcut icon" href="" type="image/x-icon" />\r\n<meta http-\r\n<link rel="shortcut icon" href="" type="image/x-icon">\r\n<link rel="icon" href=""

Downloading and Saving an Image Using the Requests Module

You need to import the module using the import requests command on your local computer and 'receive' the response object with requests.get along with the URL of the image to be downloaded, as done below:

import requests
receive = requests.get('')
with open(r'C:\Users\Dell\Desktop\comics\image5.png','wb') as f:
    f.write(receive.content)

You can see the 'with' statement above helps manage the file stream using the open function, where the required path is specified. The 'r' prefix converts the normal string to a raw string, i.e., all of the characters remain the same, so the backslashes in the local path C:\Users\Dell\Desktop\comics\image5.png are preserved and not treated as escape characters. The mode for opening is 'wb', which writes the file in binary, and 'f' is the file object whose write function writes the appropriate content, i.e., the downloaded image. The final code for downloading and saving an image using the requests module is the following:

import requests
receive = requests.get('')
with open(r'C:\Users\Dell\Desktop\comics\image5.png','wb') as f:
    f.write(receive.content)

Passing Arguments in the Request

You'll be using httpbin, which is what simple HTTP libraries use for testing. The data would have to be given after 'httpbin.org/get?key=value', but requests provides an easy way: make a dictionary and pass it as an argument using the 'params' keyword.
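Under the hood, requests serializes the params dictionary into the '?key=value' query string for you. The standard library's urllib.parse.urlencode performs the same serialization, which makes it easy to see, offline, what URL a given dictionary produces (httpbin.org/get is used here only as an example base URL):

```python
from urllib.parse import urlencode

ploads = {'things': 2, 'total': 25}
query = urlencode(ploads)  # serialize the dictionary to a query string
url = 'https://httpbin.org/get?' + query
assert url == 'https://httpbin.org/get?things=2&total=25'
```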
You can see in the example below that 'ploads' is the dictionary containing 'things' and 'total' as keys with their respective values 2 and 25, passed as an argument to get with params=ploads:

import requests
ploads = {'things':2,'total':25}
r = requests.get('https://httpbin.org/get',params=ploads)
print(r.text)
print(r.url)

You can also view the URL using the command 'r.url' as follows:

print(r.url)

The output after doing this is the URL containing the dictionary that you've set up in key-value form after the '?' sign.

Using a POST Request

POST is the most common request method used to send data, mostly through a 'form', to the server for creating or updating data on the server. You can 'post' form data to the route using requests.post: make a dictionary called 'pload' and send it as an argument to post with data=pload:

import requests
pload = {'username':'Olivia','password':'123'}
r = requests.post('https://httpbin.org/post',data = pload)
print(r.text)

The result below is the JSON response from the httpbin website; the important data is in 'form', which has both username and password as keys with their values 'Olivia' and '123' respectively.

JSON Response

JSON stands for 'JavaScript Object Notation' and is the most common way of interchanging data; it is easy to read and write and is based on the common language JavaScript. It is a platform-independent format based on objects, where the data is stored in 'key-value' pairs. If you want to know more about JSON, have a look at the JSON Data in Python tutorial.

Converting JSON to a Python Dictionary

You can see below that 'r.json()' creates a Python dictionary from the JSON response given by the httpbin website:

import requests
pload = {'username':'olivia','password':'123'}
r = requests.post('https://httpbin.org/post',data = pload)
print(r.json())

The result obtained from the above code is a Python dictionary, also in 'key-value' form, which is shown below.

Converting JSON to a Python dictionary and storing it in a variable
You can convert the JSON data to a Python dictionary and do more dynamic operations by using 'r.json()' as done below:

import requests
pload = {'username':'olivia','password':'123'}
r = requests.post('https://httpbin.org/post',data = pload)
r_dictionary = r.json()
print(r_dictionary['form'])

The dictionary item can be accessed through r_dictionary['form'], where only the form data is shown below.

Conclusion

Congratulations! You've finished the tutorial and learned about the basics of HTTP. You also learned about the requests library in Python and how to make different types of requests, like 'get' to download an image, passing an argument to a request, and a 'post' request to post data to a particular route. Finally, you learned how to obtain a JSON response to do a more dynamic operation. If you would like to learn more about HTTP requests in Python, check out DataCamp's Importing Data in Python (Part 2) course.
// fis = new FileInputStream(filename); // // Create an excel retrieving of value from excel file - JSP-Servlet to all the employees that are in the excel sheet.So in the previous code i have got...retrieving of value from excel file Dear sir, Thanks for sending a code now i am getting a particular column value i.e EmailId column Excel - JSP-Servlet Excel How to export data from jsp to excel sheet. I am using struts1.2 in my application. Hi friend, Code to data from Jsp to excel sheet. "abc.jsp" Enter XLS file name downloading created xml file - Java Beginners downloading created xml file Hi... I've an application where i need to create an xml file and when a button is clicked it has to prompt me for saving the xml file so i created xml file successfully and whats the problem Browse Excel File Problem - Development process Browse Excel File Problem I am using a web application,in which i have to browse an excel file columns. It is working in my system. But if i access... displayed when i browse the file. What is the problem? Hi Friend Access Excel file through JDBC Access Excel file through JDBC In this section, you will learn how to access excel file through Jdbc and display records in JTable. As you know Excel comes... and the excel file. Here the name of the worksheet is equivalent to any database Open Source Excel VBA Models Combo Set XL-VBA4 1 The Excel VBA Models Open Source Code Combo... Carlo Simulation, option greeks. Excel VBA, Open Source Code, Finance...Open Source Excel Excel Open Source Comparison In addition save excel file with save and open dilog box save excel file with save and open dilog box following is my jsp code it is working correct.. 
i want to save excel file throgh jsp bt not by hard...;% out.println("Your excel file has been generated!");%> </i>< convert excel file data into different locales while converting exclile file to csv file convert excel file data into different locales while converting exclile file to csv file can any one provide the code for how to convert excel file data into different locales while converting exclile file to csv file Selecting Excel Sheet File selecting excel sheet file In this program we are going to select the excel sheet... . The code of the program is given below: <%@  Read Excel file and generate bar graph the data of excel file, we have generated a bar chart. The given code uses Apache...Read Excel file and generate bar graph In this tutorial, you will learn how to read an excel file and generate bar graph. Here is an example that reads get values from Excel to database get values from Excel to database hi i want to insert values from Excel file into database.Whatever field and contents are there in excel file... express 2005. how can i do with java code insert rows from Excel sheet into a database by browsing the file Excel file in java(JSP). I can insert rows using ODBC connetion. But using odbc... it user friendly.i.e user should browse the existing excel file...insert rows from Excel sheet into a database by browsing the file   ODCB-EXCEL - Java Beginners ODCB-EXCEL Hi, i would like to extract data from an excel file. my excel file name is: TEST.xls the worksheet name is : qas i have configure... to extract the data of the excel file that has been stored into the database or you Read Simple Excel(.xls) document using Apache POI will learn how to read Excel file having .xls extension using Apache POI library... POI version 3.7. For downloading the above library The code... to read the // excel file. // fis = new FileInputStream(filename How to Read Excel file Using Java How to Read Excel file In this section,you will learn how to read excel file... 
and every cell and stored the excel file values into the vector.This Vector data is then used to display file values on the console. Here is the code Multiple file upload - Struts Multiple file upload HI all, I m trying to upload multiple files using struts and jsp. I m using enctype="multipart". and the number of files... array of formfile. Can anyone suggest me or send me some sample code to resolve reading excel sheet in java reading excel sheet in java can anyone tell me a proper java code to read excel sheet using poi Here is a java code that reads an excel file using POI api and display the data on the console. import java.io. can you please help me to solve...; <h1></h1> <p>struts-config.xml</p> <p>...;<struts-config> <form-beans> <form-bean name how make excel (filename); hwb.write(fileOut); fileOut.close(); System.out.println("Your excel file...how make excel how make excel spreadsheet IN JAVA. please send this code argently Hello Friend, Try the following code: import struts struts <p>hi here is my code in struts i want to validate my...;!-- This file contains the default Struts Validator pluggable validator... in this file. # Struts Validator Error Messages errors.required={0 Java and excel - JDBC in excel using code. plz i nd help becos it is my project, as a serious student i tried using the following code but it gives me a distorted records in excel... make the assumption that the client has Excel and // the file type .XLS Jsp to Excel an excel file and write data into it using jsp. For this, you have to import... Jsp to Excel  ... type into cell by using this method. Here is the code of excel.jsp namespace in struts.xml file - Struts in struts.xml file Struts Problem Report Struts has detected an unhandled.../struts.properties file. 
this error i got when i run program please help me Hi friend, Plz give full details with source code to solve the problem Retrieve Data from the database and write into excel file Retrieve Data from the database and write into excel file. In this section, we are going to retrieve data from the database and write into the excel file...); fileOut.close(); System.out.println("Data is saved in excel file how to generate pdf file in struts how to generate pdf file in struts I am developing a struts application.I am having one registration form when i am submitting the form the values... be shown as pdf or excel format excel report fro jsp mysql generating excel report form mysql database using jsp code. With the help from your site, I can able to generate excel file for all data types other than blob data. But I need your help for generating a excel file from mysql DB for blob swing application to import a object in a excel to make a swing application where I can import a object in a cell of a excel... to do it. please help. The details of the excel operation which i want to do by one click: in excel its in 'Insert' ribbon. first select a cell (cell how to use Excel Templet to write excel file using java. how to use Excel Templet to write excel file using java. how to use Excel Templet to write excel file using java Problem in downloading files - JSP-Servlet Problem in downloading files Hi, i have uploaded some files using its uploading successfully. but when downloading the same file, its giving exception as below org.apache.jasper.JasperException: getOutputStream() has Stream Result example to downloading contents. It is a custom result type for sending raw data via...;. contentDisposition - It is used for specifying file name. An Example of Stream...;/struts-tags" %> <html> <head> <title>Chain Result Excel sheet image reading issue Excel sheet image reading issue Hello every one.I?m trying to read images from an excel sheet using OleDbDataReader. 
My excel file has 6... with a ?blank? for the sixth column. Here is my demonistration code How to add download option for openwith and saveAs on exporting file to excel in servlet How to add download option for openwith and saveAs on exporting file to excel... for generated .excel file to save at particular position on the system.Now as per...(); System.out.println("Your excel file has been generated!"); } catch how to use Excel Template to write excel file using java how to use Excel Template to write excel file using java How to use Excel template to write data in that file using java jsp excel code - JSP-Servlet jsp excel code hi how to store html form data into excel sheet by using jsp? and repeat this process continuously for entire sheet java to excel connectivity - Swing AWT java to excel connectivity give me source code to stored the value... Excel Driver(*.xls) 5. Select work book or excel file and Create the DSN name (e.g... database on the excel. Hi Friend, Follow these steps: 1. Open Data How to Create New Excel Sheet Using JSP on browser The excel file will be generated into C:\excel folder. The code... sheets in a excel file. To create a excel sheet we can use third party APIs.... The java.io.InputStream class is used to create file. We are creating an excel Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
All this talk about encryption over the past few days has me second-guessing myself. I'm still new at the security game, and, not to flog a dead horse, I'm curious about my methods. I write Perl for web CMS apps and forms processing. I'm using Crypt::CBC to 1) store passwords for database access and 2) handle admin page log-on.

Currently:

1) Every time Perl connects to the MySQL db, the encrypted connect password, stored in the root (outside my public_docs), is decrypted using a key, also stored in a file in the root. This has been addressed, even by myself here and here, and I'm still okay with this scenario, though open to advice. More curious about...

2) A user enters a password in a non-secure (no SSL, which now has me nervous) HTML form. The Perl encrypts the password with a static key (again, stored in the root outside of my public_docs) and inserts it into a MySQL table of users. When they log in, the encrypted password is pulled from the db, decrypted, and compared with the log-in.

3) Future application: I want to encrypt larger amounts of text (several paragraphs of plain text) before storing it in the database.

Questions: Is #1 still sound? What are the hazards of #2, and what would be better? What are some approaches to #3? I found this but would like to know more about Digest::MD5 or others. Sorry for all the questions; I could break them into smaller ones, but that would just drag this all out even further. Thanks!

2) and 3) suffer from the same problem. Note that your scheme does not protect the user or the communications channel in any way. I can't really figure out why you would want to encrypt these files on the server. If you don't trust the people with access to the files, you have much bigger problems, and I would suggest moving to another machine.

"I can't really figure out why you would want to encrypt these files on the server."

It's not a matter of "trust" but of trying to comply with still-somewhat-vague HIPAA guidelines for some patient records.
Thanks for your reply, Joost.

As far as #2 goes:

1. Use SSL.

2. A better method would be to hash the user's password into the DB. Then, instead of decrypting and comparing to the one they entered, you would hash the one they entered and compare it to the hash in the DB. You can verify they actually typed the correct password, since the hash will be unique. Although this doesn't stop brute-forcing if the password hash is compromised, it does protect from "knowing an algorithm" (since you can't reverse the results of a hash).

Footnote: Recent events bring some question to the uniqueness of hashes, but the results they found are for very special cases (so far). It's something we need to keep an eye on, but I don't think it invalidates hash-usage approaches just yet.

"A better method would be to hash the user's password into the DB."

This is the asymmetric solution I've been reading about, right? Would this be a good place for Digest::MD5? Or can you suggest another? Thanks.

For passwords, Digest::MD5 is fine, although for hashing longer texts you might use Digest::SHA1, as it produces a 160-bit digest (vs. MD5's 128-bit).
-b

###########################
# use mapps;
#
# CREATE TABLE users (
#   auid   int(10) unsigned NOT NULL auto_increment,
#   auname varchar(30) default NULL,
#   PRIMARY KEY (auid)
# ) TYPE=MyISAM;
#
# CREATE TABLE secrets (
#   auid   int(10) unsigned NOT NULL auto_increment,
#   passwd char(40) NOT NULL default '',
#   salt   int(11) NOT NULL default '0',
#   PRIMARY KEY (auid)
# ) TYPE=MyISAM DEFAULT;
##########################

package Mapps::Auth;

use Exporter;
use Digest::SHA1;
use DBI;
use warnings;
use strict;
use vars qw($VERSION @ISA @EXPORT);

our $VERSION = 1.00;
our @ISA     = qw(Exporter);
our @EXPORT  = qw(&new &auth);

sub new {
    my $class = shift;
    my $self  = {};
    return bless $self, $class;
}

sub auth {
    my ( $self, $uname, $passwd ) = @_;
    my ( $dbsecret, $salt, $uid );

    my $dbh1 = DBI->connect( 'dbi:mysql:itiv', 'lwriter', "**I can't tell you!" )
        or die "Couldn't connect: $DBI::errstr";

    # Get the secret from the db (a placeholder avoids SQL injection
    # through $uname)
    my $statement = "SELECT admin_users.auid, auname, passwd, salt
                     FROM admin_users, secrets
                     WHERE admin_users.auid = secrets.auid
                       AND auname = ?";
    my $sth = $dbh1->prepare($statement)
        or die "Couldn't prepare statement: " . $dbh1->errstr;
    $sth->execute($uname)
        or die "Couldn't execute statement: " . $dbh1->errstr;

    while ( my $ref = $sth->fetchrow_hashref ) {
        $dbsecret = $ref->{'passwd'};
        $salt     = $ref->{'salt'};
        $uid      = $ref->{'auid'};
    }

    # Hash the supplied password with the stored salt, using the
    # SHA-1 algorithm
    my $secret = Digest::SHA1::sha1_hex( $passwd . $salt );

    # Does the generated secret match the database secret?
    if ( $secret eq $dbsecret ) {
        return ( 1, $uid );
    }
    return ( 0, $uid );
}

1;

Neil Watson
watson-wilson.ca

<toot-horn>Check out my writeup</toot-horn> - especially the second and last paragraphs.

Oh, I dunno. Have you ever read an IRS instruction booklet? That's so encrypted that even they can't decipher it.
--
tbone1, YAPS (Yet Another Perl Schlub)
And remember, if he succeeds, so what. - Chick McGee

------

Getting strong cryptographic algorithms was hard (centuries upon centuries of human study of mathematics). But it's a mostly solved problem now. It also turns out that this was the easy part. The hard part is using these algorithms correctly.

I'm not a cryptographer. My math background isn't strong enough. But that's OK, because a lot of other people are cryptographers, and I can benefit from their studies. Further, people who are cryptographers often have little experience with implementing their algorithms in real applications. So that's where I find my niche: practical application of cryptography.

I'd say that #1 was never really sound, but if you've read about and understand the problems of this approach, then keep it in. (Other posters seem to have mentioned this one, and I don't really have anything more to add.)

For #2, I'd use a cryptographic hash with a salt value (pseudo-code below):

    hash( salt + hash( salt + plaintext_passwd ) )

You store the resulting value and the salt in the database. When you need to authorize a user, you put the password they gave you through the same hashing function and then compare it with the value in the database. The benefit of this approach is that it is computationally infeasible, even for the superuser, to reverse the function to get a plaintext password. The problem is that if a user forgets their password, you'll have to reset it to something instead of giving them the original password, which I don't consider to be a big deal. However, in your case, I'd be more concerned that the user's channel (HTTP) is not encrypted.

For #3, you'll run into the same problem as with #1: how do you keep the master key safe?

The recent collision found in MD5, all on its own, is not a big deal. MD5 wasn't expected to last much longer (indeed, it was expected to fall a lot sooner).
Cryptographers and security people have been discouraging the use of MD5 for years. The big thing about this attack is that it might be extended to other algorithms. SHA-1 might be vulnerable, and we expected that to be good for a while.

"There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

Thanks, hardburn, for that great node. I need to find out more about the cryptographic hash and find some examples. Looks like the way to go.

The only thing I'd like to recommend is the book Applied Cryptography. That book has very good coverage of the families of cryptographic algorithms you're likely to need/encounter. It can be read to get an overview, or, if you have the desire, it also describes each of the algorithms in detail. It even includes discussion of known attacks against each algorithm. What I learned from that book has served me very well.

Kyle
-Yendor
IWeThey HOPE Ride

"Digest::MD5 isn't a suitable encryption technique"

There are ways to use a hash function as a cipher. IIRC, it involves using the digest as an output-feedback stream cipher. The security of the resulting cipher is based on the security of the hash function (in other words, don't use MD5 for this). Also, keep in mind that the creators of a cryptographic hash function generally don't have its use as a cipher in mind.

The only practical use I've been able to find for this is a situation where a government has banned strong encryption software, but not digest functions. In that case a coder can grab an existing digest function and make themselves a strong encryption program. Other than that, you're better off using a real cipher.

"The only practical use I've been able to find for this is a situation where a government has banned strong encryption software, but not digest functions. In which case a coder can grab an existing digest function and make themselves a strong encryption program."
The FBI would rather be able to spy on us than let us have secure internet connections (security article). So you can be pretty sure that the "government" is sniffing all network traffic, and probably harvesting all the passwords it can. So the question is, "Can you trust the government?" I shudder to think about the possibility that people like those from Enron would get control over the internet. Oops, isn't that Microsoft? :-)

Too often, organizations add encryption/authentication and other schemes to their various systems without understanding what it is they are trying to accomplish. Meeting government or corporate rules is often the motivator, but you can meet the rules without accomplishing what the rule was put in place to accomplish. I'm afraid that may be the case here. So why DO we use encryption and authentication? A: to keep the data out of view of everyone except those allowed to see it. We can break this down specifically like this:

* End-user machine (screen, memory, perhaps disk): probably unencrypted and viewable by everyone, but out of your scope.

* Network transmission: unencrypted and viewable in your case. Bad monk, no cookie for you! (USE SSL!!!) This is where your effort is most important. Protect the user's credentials; otherwise all other security is meaningless, because the user can be spoofed. Read and follow jbware's advice above.

* Web server and DB (memory, disk, DB): Note that the root user can read every file, and the contents of memory if she/he desires, so this person must be trusted and the machine must be physically secured (otherwise I can become root by booting with external media). The database, on the other hand, can be set up to hold only encrypted data. Thus encrypting the data in the DB can help prevent other staff, db users, the government, etc. from accessing this data. To do this effectively, you'll want to use the user's passwd hash as the key to encrypt that row of data.
This is not often done, however, because the data is NOT recoverable if the passwd is lost. I do NOT recommend building a web site on this principle. You can use a recoverable key to encrypt the DB data. Some commercial DBs like Oracle have encryption routines built in. You can use these to provide some level of security against DB hacking, but keep in mind this is only as good as the security of the key used to access the data, which would have to be stored on the webserver. This is an easy way to placate the rule makers.

Cheers
-------------------------------------
Nothing is too wonderful to be true -- Michael Faraday

"Note that the root user can read every file, and the contents of memory if she/he desires, so this person must be trusted."

This is really getting off the subject of Perl, but as knowledge is paramount in securing any system, I'm posting it anyway. One should also note that there have been recent developments of security systems which, even in their default configurations, deny even root access to system objects. Such things are sometimes known as Mandatory Access Controls. If security is an issue (and I cannot imagine a situation where it is not), then the administrator should investigate these avenues, to be implemented in tandem with existing methodologies.

I feel obligated to echo the imperatives of other monks to use SSL. If you don't have the money to pay an organisation like VeriSign (not that I would give them a penny after their .com fiasco), you can create a self-signed certificate which, although not signed by an established CA, grants you the intended benefits of SSL, for free[1]. If you're using Apache, it ships with documentation on creating a self-signed certificate. And (at least in 2.x; I haven't checked into SSL on a 1.x server yet) the SSL configuration file is extremely well commented, so you should have no problems setting it up.

----
[1]: Okay, so your users have no reason to believe that it's you who signed it.
But by accessing your site right now without SSL, they're saying how much they trust you, so I think that's a moot point.
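The salted double-hash scheme suggested earlier in the thread, hash(salt + hash(salt + plaintext_passwd)), can be sketched concretely. Python's hashlib is used here purely for illustration; the function names are made up for this sketch, and SHA-1 follows the thread's era, though modern practice favors a slow, salted KDF such as bcrypt or PBKDF2:

```python
import hashlib
import os

def make_password_record(password):
    """Registration: generate a random salt and compute
    hash(salt + hash(salt + password)). Only the salt and the
    final digest go in the database, never the plaintext."""
    salt = os.urandom(16).hex()
    inner = hashlib.sha1((salt + password).encode()).hexdigest()
    outer = hashlib.sha1((salt + inner).encode()).hexdigest()
    return salt, outer

def check_password(attempt, salt, stored_digest):
    """Login: run the attempt through the same double hash and
    compare digests; nothing is ever decrypted."""
    inner = hashlib.sha1((salt + attempt).encode()).hexdigest()
    outer = hashlib.sha1((salt + inner).encode()).hexdigest()
    return outer == stored_digest
```

As noted above, a forgotten password can only be reset, not recovered, which is the intended property of this approach.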
Odoo Help

This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

How to do date subtraction with Python?

Here is the date-of-birth field in my OpenERP model:

    'date_of_birth': fields.date('Date of Birth'),

I need to change its default date to 25 years earlier, because that makes it easier for the user to pick a year. (In OpenERP, the jQuery date picker loads the current 20 years by default, and the user needs some time to select an earlier year.) For example:

    _defaults = {
        'date_of_birth': fields.date.context_today - 25 years   # what I want, as pseudo-code
    }

Please advise me on how to implement this (a Python function seems good for my requirement).

--------------- EDITED ---------------

@Patently: I tried it in my console; it gives an error:

    >>> import datetime
    >>> from datetime import timedelta
    >>> diff = datetime.datetime.now() - datetime.timedelta(years=42)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: 'years' is an invalid keyword argument for this function

I also tried it in OpenERP, in Eclipse:

    def _dob(self, cr, uid, context=None):
        diff = datetime.datetime.now() - datetime.timedelta(years=42)
        return diff

This also gives an error:

    File "/home/bellvantage/Documents/openerp-7.0/openerp-7/openerp/addons/bpl/bpl.py", line 47, in _dob
        diff = datetime.datetime.now() - datetime.timedelta(years=42)
    TypeError: 'years' is an invalid keyword argument for this function

try this: import datetime from datetime import timedelta diff = datetime.datetime.now() - datetime.timedelta(years!
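As the traceback shows, datetime.timedelta has no years argument; it only accepts days and smaller units, because a "year" has no fixed length. One standard-library workaround is to adjust the date's year field directly. The helper name below is made up for this sketch; if the external dateutil package is available, dateutil.relativedelta.relativedelta(years=25) is an alternative:

```python
import datetime

def years_ago(years, today=None):
    """Return the date `years` years before `today` (default: today).
    date.replace() swaps the year; Feb 29 is mapped to Feb 28 when
    the target year is not a leap year."""
    if today is None:
        today = datetime.date.today()
    try:
        return today.replace(year=today.year - years)
    except ValueError:  # Feb 29 in a non-leap target year
        return today.replace(year=today.year - years, day=28)
```

In the OpenERP model, the default could then be a callable along the lines of lambda *a: years_ago(25).strftime('%Y-%m-%d'), assuming the server's usual string date format for defaults.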
Welcome to the Parallax Discussion Forums, sign-up to participate.

/*******************************************************************************
    Program:    REsys, module Regulate, v.0.1, 14 July 2012
    Written by: David Voss, Tree of Life Foundation Renewable Electronics,
                Patagonia, AZ
    Copyright:  2012, David Voss. Released under the GNU GPL; see the bottom
                of the file for terms of use.
    Purpose:    Routines to regulate battery, solar, and wind generator
    Target:     ANSI C
*******************************************************************************/

#include "main.h"   // Sets the part # and includes all the other necessary
                    // include files. See it for details.

/*******************************************************************************
    regulateBattery
*******************************************************************************
    There are 4 ways this controller can regulate the battery: a dump load,
    solar PWM, solar MPPT, and to NOT regulate the battery (i.e., just a
    battery monitor). Depending on the settings in the Makefile, conditional
    compilation chooses how to regulate. At times this makes the code a
    little confusing, but it allows there to be only one copy of the main
    logic, so if it needs to be changed, it only needs to be changed once,
    and it should stay consistent.

    Regulation is set up as a state machine with the battery states below
    (see 'switch (BatteryState)'). In the descriptions, MPPT means that all
    available current is flowing to the battery; PID means that the net
    current is reduced from what is available to maintain a particular
    voltage setpoint.
*******************************************************************************/

void regulateBattery(void)
{
    if (Implemented.WIND) {
        // Calculate RE Buss Voltage Setpoint
        REbussSetpoint = MinREbussSetpoint;
        if ((SolarDutyCycle < SOLAR_DUTY_CYCLE_MARGIN) &&
            (REbussSetpoint < REbussVolts * RE_BUSS_MARGIN))
            REbussSetpoint = REbussVolts * RE_BUSS_MARGIN;
        MinREbussSetpoint = BatteryVolts;
    } // end if (Implemented.WIND)

    switch (BatteryState) {

    case BATTERY_STATE_BULK:
        /* All available current is flowing to the battery, and
           BatteryVolts < AbsorbSetpoint. MPPT. */

        /* Check if we've reached the Absorb Setpoint */
        if (BatteryVolts >= AbsorbSetpoint) {
            BatteryState    = BATTERY_STATE_ABSORB;
            BatterySetpoint = AbsorbSetpoint;
            TimeStamp       = Time;     // Initialize TimeStamp

        /* No change of state */
        } else {
            /* Nothing to be done here */
        }
        break;

    case BATTERY_STATE_ABSORB:
        /* The AbsorbSetpoint voltage has been reached, but the current
           necessary to maintain that setpoint is greater than
           FloatTransitionAmps (or we've only been below FloatTransitionAmps
           for less than FloatTransitionTime). PID. */

        /* Check if we've met the float criteria (only check the timeout;
           the only way the timeout can get large enough is if the current
           is small enough; if the current is too big, the timeout is reset
           each time). */
        if (Time - TimeStamp > TransitionTimeout) {
            BatteryState    = BATTERY_STATE_FLOAT;
            BatterySetpoint = FloatSetpoint;

        /* Check if we can't maintain absorb voltage anymore, and the
           TransitionTimeout (from BATTERY_STATE_REFLOAT) has expired. */
        } else if ((BatteryVolts < AbsorbSetpoint - HysteresisVolts) &&
                   (Time - TimeStamp > TransitionTimeout)) {
            BatteryState = BATTERY_STATE_BULK;

        /* No change of state */
        } else {
            /* Reset the time stamp if the current is still bigger than
               FloatTransitionAmps */
            if (BatteryAmps > FloatTransitionAmps)
                TimeStamp = Time;
        }
        break;

    case BATTERY_STATE_FLOAT:
        /* The FloatTransitionAmps and TransitionTimeout requirements have
           been met, and the voltage has been reduced to FloatSetpoint.
           PID. */

        /* Check if we can't maintain float voltage anymore */
        if (BatteryVolts < FloatSetpoint - HysteresisVolts) {
            BatteryState = BATTERY_STATE_UNDERFLOAT;
            TimeStamp    = Time;

        /* Check if the equalize criteria have been met */
        } else if ((BatteryAhSinceEqualize > EqualizeMaxAh) ||
                   (Time - EqualizeTimeStamp > EqualizeMaxInterval) ||
                   (BatteryMinSoC < EqualizeSoC)) {
            EqualizeAccumulatedTime = 0;
            BatteryState = BATTERY_STATE_UNDEREQUALIZE;
            TimeStamp    = Time;

        /* No change of state */
        } else {
            /* Nothing to be done here */
        }
        break;

    case BATTERY_STATE_UNDERFLOAT:
        /* The available current isn't enough to maintain the FloatSetpoint
           anymore. MPPT. */

        /* Check if we've reached the float voltage again */
        if (BatteryVolts >= FloatSetpoint) {
            BatteryState = BATTERY_STATE_REFLOAT;
            TimeStamp    = Time;

        /* Check if the RefloatTimeout has expired */
        } else if (Time - TimeStamp > RefloatTimeout) {
            BatteryState = BATTERY_STATE_BULK;

        /* No change of state */
        } else {
            /* Nothing to be done here */
        }
        break;

    case BATTERY_STATE_REFLOAT:
        /* The FloatSetpoint has been reached again after having gone below
           it, and we're checking if it can be maintained with less than
           FloatTransitionAmps within FloatTransitionTimeout. PID. */

        /* Check if we've met the criteria to return to float mode */
        if (BatteryAmps < FloatTransitionAmps) {
            BatteryState = BATTERY_STATE_FLOAT;

        /* Check if we can't maintain float voltage anymore */
        } else if (BatteryVolts < FloatSetpoint - HysteresisVolts) {
            BatteryState = BATTERY_STATE_UNDERFLOAT;

        /* Check if the transition timeout has expired */
        } else if (Time - TimeStamp > TransitionTimeout) {
            BatteryState = BATTERY_STATE_BULK;
            TimeStamp    = Time;    // Initialize TimeStamp

        /* No change of state */
        } else {
            /* Nothing to be done here */
        }
        break;

    case BATTERY_STATE_EQUALIZE:
        /* The EqualizeSetpoint has been reached, but
           EqualizeAccumulatedTime < EqualizeTime and EqualizeTimeout hasn't
           expired yet. PID. */

        /* Check if the EqualizeTime has been met */
        if (EqualizeAccumulatedTime >= EqualizeTime) {
            BatteryState           = BATTERY_STATE_FLOAT;
            BatterySetpoint        = FloatSetpoint;
            EqualizeTimeStamp      = Time;
            BatteryAhSinceEqualize = 0.0;
            BatteryMinSoC          = 100;

        /* Check if we can't maintain equalize voltage anymore */
        } else if (BatteryVolts < EqualizeSetpoint - HysteresisVolts) {
            BatteryState = BATTERY_STATE_UNDEREQUALIZE;

        /* No change of state */
        } else {
            EqualizeAccumulatedTime++;
        }
        break;

    case BATTERY_STATE_UNDEREQUALIZE:
        /* All available current => battery in order to reach the equalize
           voltage (either in transition from float or fallen down a bit
           from the EqualizeSetpoint due to lack of net current). MPPT. */

        /* Check if we've reached equalize voltage */
        if (BatteryVolts >= EqualizeSetpoint) {
            BatteryState    = BATTERY_STATE_EQUALIZE;
            BatterySetpoint = EqualizeSetpoint;

        /* Check if the EqualizeTimeout has expired */
        } else if (Time - TimeStamp > EqualizeTimeout) {
            BatteryState = BATTERY_STATE_BULK;

        /* No change of state */
        } else {
            /* Nothing to be done here */
        }
        break;

    } // End switch BatteryState

    /* Now either maximize the power to the battery if we're in a
       non-regulating battery state, or use PID to regulate to the
       BatterySetpoint if we're in a regulating state. */

    if (BatteryState % 2) {     // All of the non-regulating states are
                                // ODD, so (State % 2) would be true (1).
        /* We're in a non-regulating battery state
           (BULK|UNDERFLOAT|UNDEREQUALIZE) and we want to maximize net
           current flowing into the battery. */
        if (Implemented.DUMP_LOAD) {
            dumpSetDutyCycle(0.0);  // Make sure there's no dump load
        }
        if (Implemented.SOLAR) {
            if (Implemented.SOLAR_MPPT) {
                solarMPPT();        // Maximize current
            } else {
                solarPWM();         // Connect if current available
            }
        }
        if (Implemented.WIND) {
            /* Use PID on the REbuss=>Battery buck converter to maintain
               the REbussSetpoint voltage, which should keep the TSR of the
               wind generator where we want it to be, and maximize its
               current. */
            pid(&REbussSetpoint, &REbussVolts, &REbussPID, &REbussDutyCycle);
        }

    } else {    // if (BatteryState % 2)
        /* We're in a regulating battery state
           (ABSORB|[RE]FLOAT|[RE]EQUALIZE) */
        if (Implemented.DUMP_LOAD) {
            /* Note that since increasing the dump duty cycle _decreases_
               the net current into the battery, the setpoint and actual
               values are doubly reversed to give a positive error, i.e.,
               BatteryVolts - *Setpoint instead of *Setpoint - BatteryVolts,
               giving negative feedback. */
            if (Implemented.WIND) {
                /* Use PID on the dump load to maintain the REbuss voltage */
                pid(&REbussVolts, &REbussSetpoint, &DumpPID, &DumpDutyCycle);
                /* Use PID on the REbuss=>Battery buck converter to maintain
                   the Battery voltage */
                pid(&BatterySetpoint, &BatteryVolts, &REbussPID,
                    &REbussDutyCycle);
                REbussSetDutyCycle(REbussDutyCycle);
            } else {    // no Wind Generator
                /* Use PID on the dump load to maintain the Battery
                   voltage */
                pid(&BatteryVolts, &BatterySetpoint, &DumpPID,
                    &DumpDutyCycle);
            } // end if (Implemented.WIND)
            dumpSetDutyCycle(DumpDutyCycle);
        } else {    // no dump load
            /* Use PID on either the solar buck converter (if there's MPPT)
               or the direct PWM (if no MPPT) to maintain the Battery
               voltage.
*/ pid(&BatterySetpoint, &BatteryVolts, &SolarPID, &SolarDutyCycle); } // end if (Implemented.DUMP_LOAD) } // end if (BatteryState % 2) if (Implemented.SOLAR) solarSetDutyCycle(SolarDutyCycle); } // End function 'regulateBattery' /******************************************************************************* monitorBattery ******************************************************************************/ void monitorBattery(void) { if (BatteryVolts > EqualizeSetpoint - HysteresisVolts) { BatteryState = BATTERY_STATE_EQUALIZE; } else if ((BatteryVolts > AbsorbSetpoint - HysteresisVolts) && (BatteryVolts < AbsorbSetpoint + HysteresisVolts)) { BatteryState = BATTERY_STATE_ABSORB; } else if ((BatteryVolts > FloatSetpoint - HysteresisVolts) && (BatteryVolts < FloatSetpoint + HysteresisVolts) && (BatteryAmps < FloatTransitionAmps)) { BatteryState = BATTERY_STATE_FLOAT; } else { BatteryState = BATTERY_STATE_BULK; } } // End function 'monitorBattery' /******************************************************************************* pid ******************************************************************************/ void pid(float *setpoint, float *actual, PID *pid_ptr, unsigned short *duty) { float error, accumulated_error, new_duty; unsigned short duty_increment; error = *setpoint - *actual; // (setpoint - actual) gives negative feedback accumulated_error = pid_ptr->accumulated_error + error; /* Correct accumulated error if it's getting too big from saturation. 
*/ if (accumulated_error > pid_ptr->max_accumulated_error) { accumulated_error = pid_ptr->max_accumulated_error; } else if (accumulated_error < -pid_ptr->max_accumulated_error) { accumulated_error = - pid_ptr->max_accumulated_error; } pid_ptr->accumulated_error = accumulated_error; // Store it for next time duty_increment = (unsigned int) // Compute PID duty cycle increment (error * pid_ptr->proportional_gain + // P accumulated_error * pid_ptr->integral_gain + // I (error - pid_ptr->old_error) * pid_ptr->derivative_gain); // D pid_ptr->old_error = error; // Store current error as the next 'old' error if (duty_increment > pid_ptr->max_duty_increment) { duty_increment = pid_ptr->max_duty_increment; } else if (duty_increment < -pid_ptr->max_duty_increment) { duty_increment = -pid_ptr->max_duty_increment; } new_duty = *duty + duty_increment; if (new_duty > pid_ptr->max_duty_cycle) { *duty = pid_ptr->max_duty_cycle; } else if (new_duty < pid_ptr->min_duty_cycle) { *duty = pid_ptr->min_duty_cycle; } else { *duty = new_duty; } } // End function 'pid' /******************************************************************************* solarMPPT ******************************************************************************* Solar maximum power point tracking is set up as a state machine, with the below states, which can be sequentially moved through. This routine only gets called when we're trying to maximize power from the solar panels; therefore, it does not implement charge controlling (for that, see 'regulateBattery' above). Note that to save on parts and complexity, the high-side MOSFETs of the buck converter may not be able to be set to a duty cycle of 1, but only about 0.95 (SolarPID.max_duty_cycle). Therefore, whenever the solar panels are connected, the current may be passing through the buck converter, so there might not really be a 'BATTERY_STATE_DIRECT'. Even if this is the case, it's still useful to maintain a separate state to reduce the condition checks per pass. 
******************************************************************************/ void solarMPPT(void) { // Since PID and MPPT are never active at the same time, we use some PID // volatile memory for MPPT to save global SRAM. #define old_amps SolarPID.old_error switch (SolarState) { case SOLAR_STATE_OFF: /* Not enough voltage to get any current from panels. Note that even though we lose a bit of current by disconnecting at very low current values (SolarMinDirectCurrent), it's justified in that no diodes (and their attendant voltage drop, which wastes much more power than we are losing here) are necessary. */ /* Check if solar voltage is high enough to connect and charge. */ if (SolarVolts >= SolarMinVolts) { SolarState = SOLAR_STATE_DIRECT; SolarDutyCycle = SolarPID.max_duty_cycle; /* No change of state */ } else { /* Nothing to be done here; remain off */ } break; case SOLAR_STATE_DIRECT: /* We're getting current, but not enough to justify maximum power point tracking (it's less than SolarMinMPPTamps), so the panels are connected directly to the battery (well, not necessarily; see the note in the function header above). */ /* Check if we should disconnect */ if (SolarAmps < SolarMinDirectAmps) { SolarState = SOLAR_STATE_OFF; SolarDutyCycle = SolarPID.min_duty_cycle; /* Check if we should switch to MPPT */ } else if (SolarAmps > SolarMinMPPTamps) { SolarState = SOLAR_STATE_MPPT; old_amps = SolarAmps; /* No change of state */ } else { /* Nothing to be done here; remain direct */ } break; case SOLAR_STATE_MPPT: /* Maximum power point tracking is active. 
*/ /* Check if the current is low enough that we should revert back to a direct connection */ if (SolarAmps < SolarMinMPPTamps) { SolarState = SOLAR_STATE_DIRECT; SolarDutyCycle = SolarPID.max_duty_cycle; #ifdef _SOLAR_MPPT_SWEEP_ /* Check if our sweep timeout has expired and we should sweep */ } else if (Time - SolarSweepTimeStamp > SolarSweepInterval) { /* nada; not implemented yet */ #endif /* No change of state */ } else { /* Implement "Perturb and Observe" to get Maximum Power Point */ if (Flag.PERTURB_DIRECTION == 1) { if (SolarAmps >= old_amps) { SolarDutyCycle++; } else { SolarDutyCycle--; Flag.PERTURB_DIRECTION = 0; } } else { if (SolarAmps >= old_amps) { SolarDutyCycle--; } else { SolarDutyCycle++; Flag.PERTURB_DIRECTION = 1; } } /* Keep duty cycle within bounds */ if (SolarDutyCycle > SolarPID.max_duty_cycle) { SolarDutyCycle = SolarPID.max_duty_cycle; } else if (SolarDutyCycle < SolarPID.min_duty_cycle) { SolarDutyCycle = SolarPID.min_duty_cycle; } old_amps = SolarAmps; } break; #ifdef _SOLAR_MPPT_SWEEP_ case SOLAR_STATE_SWEEP: /* Periodically (SolarMPPTsweepInterval) sweep from SolarMinSweepVoltage to SolarMaxSweepVoltage to find the maximum power point and make sure we're not in a local maximum that is less than the real maximum. */ /* later, dude (currently no way to get to this state) */ break; #endif } // end switch MPPTstatus } // end solarMPPT /******************************************************************************* solarPWM ******************************************************************************* This function gets called if the battery is regulated with a dump load, and there is solar, but no MPPT. 
******************************************************************************/ void solarPWM(void) { switch (SolarState) { case SOLAR_STATE_OFF: if (SolarMinVolts >= BatteryVolts) { SolarState = SOLAR_STATE_DIRECT; SolarDutyCycle = 1.0; } break; case SOLAR_STATE_DIRECT: if (SolarAmps < SolarMinDirectAmps) { SolarState = SOLAR_STATE_OFF; SolarDutyCycle = 0.0; } break; } // end switch (SolarState) } // end solarPWM() /******************************************************************************* ******************************************************************************* * This file is part of REsys, a subset of REcollection. * * * * REsys is free software: you can redistribute it and/or modify it under the * * terms of the GNU General Public License as published by the Free Software * * Foundation, either version 3 of the License, or any later version. * * * * REs REsys (named 'COPYING' in the top-level directory). If not, see * * <>. * ******************************************************************************/ eagletalontim wrote: » I also just found this :. Is % ----- = ------ Of 100 Or... Reading % ------------ = ------ Need 100 eagletalontim wrote: » I have still be researching a bit on N Channel Mosfets and really don't understand the gate charge part. Why / How are they different than transistors? not sure what you mean by "pulse"; you should use PWM, and an ADC to measure the voltage. For a lead-acid battery, you don't need to know the current going into the battery to properly charge it. It should have a 3-stage charging profile as follows: Stage 1 (bulk): direct connection to the battery until it reaches the "absorb" voltage; this should be about 14.1 V for a sealed lead acid battery, but the battery manufacturer knows for sure. 
Stage 2 (absorb): adjust the PWM to maintain the absorb voltage either for a fixed amount of time (usually about 2 hours), or if you have implemented current measurement, until the current goes down to about C/100 to maintain the absorb voltage. C/100 means 0.07 A for a 7 Ah battery.

Stage 3 (float): reduce the PWM to maintain float voltage, usually about 13.2 V, but again, check with the battery manufacturer.

Most cheap battery chargers combine the absorb and float stages, and use a single average voltage (usually about 13.8 V). This will (as you've noticed) take a long time to fill, and will shorten the life somewhat by keeping it at too high a float voltage.

An optional 4th stage is equalization, which is an occasional controlled overcharge to equalize the voltage in the individual cells in the battery; this is usually not done with sealed batteries because it causes them to gas and, since they are sealed, the lost electrolyte can't be replaced. This is one reason why real flooded batteries work better in solar service.

You can set up the stages as a state machine, with the criteria listed above for transitioning between stages (in practice, there are often some intermediate stages that are useful). I'm including a routine I've written for this purpose, but beware that I've not implemented this, so it's not tested. I'm pretty sure it'll work, but use at your own risk. Look in particular at the "switch BATTERY_STATE" segment in the function regulateBattery below. It's way more complicated than you need, since it also does MPPT and a dump load, but it should give you some idea.

David Voss

Stage 1 (bulk): The microcontroller adjusts the output voltage of the switcher to produce maximum current into the battery while monitoring the voltage until it reaches the "absorb" voltage. This provides maximum charging efficiency.

Stage 2 (absorb): The microcontroller adjusts the switcher to maintain the absorb voltage until the current goes down to C/100.
Stage 3 (float): reduce the switcher output to maintain float voltage. Not only does this reduce charging time, but it increases battery life quite a bit. Also, in some ways I find the circuitry involved is actually easier to work with and more flexible than most advanced charging circuitry.

Life is unpredictable. Eat dessert first.

EDIT: Found this switching regulator here : If I need more than 3A, can I "stack" these to handle more?

So for stage 1 this is what I need to do?: Bypass the switching regulator circuit and dump pure solar power into the batteries via a MOSFET transistor using PWM. Between pulses of the PWM the battery voltage will be read(?) so when the voltage is at the desired charged voltage (13.6), stage 2 will kick in. Other calculation / measuring would need to be included to ensure the solar panels can provide the needed voltage so stage 2 does not kick in too quickly.

Stage 2?: The Prop will activate the adjustable switching regulator circuit and then deactivate the direct MOSFET circuit to output a specific voltage to the battery? The PWM frequency will stay the same for the adjustable switching regulator circuit(?) while the current continues to be pumped into the battery at the regulator's current limit. The stronger the regulator, the faster stage 2 will complete? During each pulse the current will be read to ensure it is higher than C/100 (7Ah / 100 = 70mA?). When lower than C/100, go to stage 3.

Stage 3: Unsure what to do here since the current flow will still be the same through the adjustable switching regulator. Is it possible to reduce current on demand?

The charger was a programmable output voltage switching regulator controlled by a microcomputer that monitored the battery voltage and charging current. The software controlled the charging current by increasing or decreasing the output voltage of the switching regulator.
After several years of operation the customer was impressed by how well the system worked and how much longer the batteries lasted. The description altosack posted is pretty well identical to part of the setup and calibration section of the user manual for the charger.

The switching regulator you posted should work well for your solar cell and battery. Your solar cell is rated at 18W, so even if the switcher had 100% efficiency (which it does not; it is ~77%) the MAXIMUM current out at 13.2V would only be ~1.4 Amps.

The regulator would be used for all 3 stages to get maximum charging efficiency, and it should never be bypassed.

So for stage 1 (bulk): The MICROCONTROLLER adjusts the SWITCHING REGULATOR OUTPUT VOLTAGE to produce maximum current into the battery while monitoring the voltage until it reaches the "absorb" voltage. This provides maximum charging efficiency. The only time you would need to limit the current would be if you had a very high output solar cell array charging a small capacity battery.

Stage 2 (absorb): The MICROCONTROLLER adjusts the SWITCHING REGULATOR OUTPUT VOLTAGE to maintain the "absorb" voltage until the current goes down to C/100.

Stage 3 (float): The MICROCONTROLLER adjusts the SWITCHING REGULATOR OUTPUT VOLTAGE to maintain the float voltage.

Life is unpredictable. Eat dessert first.

Or... do I have this wrong?

I'm going to have to respectfully disagree with this statement. Without any regulation / monitoring you can easily overcharge and damage a battery. Damage can include bursting, drying up, leaking, etc. This depends on both the battery under charge, and the capacity of your solar panels. Reading further though I see that this is not a direct connection but a switched power supply to the battery. Still though, the first statement is a little misleading.
There are two common ways for a charge controller to operate: [simple] PWM, which can never deliver more than the short-circuit current of the solar panel into the battery, but is very easy to implement, and MPPT (maximum power point tracking), which uses some form of DC-DC converter (usually a buck converter in practice) to maximize the current into the battery, which can, in certain circumstances, be more than the short circuit current of the solar panel.

MPPT is far more challenging to implement, because you have to place the MOSFET in the positive leg, meaning the gate driver supply has to be 12-15V above that, and must be floating (not at a fixed voltage). This is non-trivial to implement.

To me, "switching regulator" means a packaged device that you connect a choke (coil) and capacitors to provide a more efficient power supply than a linear regulator, usually for signal electronics, not power electronics. However, I assume that kwinn is speaking of a discrete buck converter, because I've never heard of a packaged switching regulator that can put out 2400W! In any case, both a packaged switching regulator and a discrete buck converter are ways of implementing MPPT.

The only way that buck converters can be stacked is if they are out-of-phase with each other and maintain that phase relationship from one cycle to the next. Separate packaged components cannot do this. However, discrete buck converters can be driven by a Prop (one phase per cog, each phase with a primary and a diode MOSFET), and the phase relationship can be maintained by the shared system counter. The only other microcontrollers I've run across that can do this in a DIP package are some 16-bit dsPICs that are limited to 4 phases (not too bad, really, but they are simply not Props!), unless you use multiple chips.

All regulating voltages (including absorb, float, and equalize) are measured across the battery, not in series with anything.
You wouldn't, but in fact you don't care about the voltage output of the regulator, or the input voltage of the solar panels. You measure the current into the battery, which is what you are trying to maximize, and adjust the PWM to that effect. A common way of doing this is with the "perturb and observe" algorithm (a simple version is implemented in the code I gave in my first reply).

To maintain float voltage (or any voltage above the resting voltage), current must be flowing into the battery, although it can be quite small, perhaps between C/200 and C/1000 into a well-charged battery. With MPPT, making the voltage differential between the output of the buck converter and the battery very small will reduce the amount of current flowing to that which you need. The voltage will hunt a certain amount, but this can be minimized by implementing PID on the duty cycle.

As I said in my first reply, you don't need to measure current to implement 3-stage charging (although I like to have this information, anyway) with simple PWM, but you will need to measure it to implement MPPT.

To measure it, you will need a low-resistance precision resistor (often called a shunt); common ones are 0.1 and 1.0 milli-ohms, but you may be able to get away with 10 milli-ohms in your low current application. (Higher resistance gives better accuracy and lower current capacity.) The voltage across the shunt can be calculated by: V = I R (voltage = current x resistance); for example, 2A into a 1 milli-ohm shunt will have a 2 mV differential. This is hard to read with any accuracy with a single-ended ADC; it's best to use a differential ADC with gain (the voltage difference between the two sides of the shunt will be multiplied by the gain, usually between 32 and 128 times).
Two differential ADCs with gain in a DIP are the Nuvoton NAU7802 (~18-bits effective resolution, (2) sigma-delta inputs) and an Atmel ATtiny261A (a cheap microcontroller with 10-bits effective resolution, probably enough, with some oversampling, for your application, and with (11) SAR inputs).

Beau, I don't think the statement I made is at all misleading, since I think it is completely understood that we are monitoring the battery and feeding it with a switched connection (else we would not need a Prop, which is probably not the way that you want us to go!).

My apologies for misusing the term "switching regulator". You are correct, it was a discrete switching power supply with a 50A output current. IIRC the description was "Digitally controlled variable voltage switching power supply".

@eagletalontim I am not sure if the switching regulators can be stacked. No experience with doing that. I don't see any need for more current unless you are planning on a much larger system than the 18W solar panel and 7AH battery in your initial post. On the other hand I don't see why they could not be stacked, although it might require some additional circuitry to balance the current between the regulators.

Life is unpredictable. Eat dessert first.

Not sure if this is the proper way to connect it for what I need, but I can now see how it is possible to get 1000W+ controllers using MOSFETs. Not sure what PID is. Could you explain this? I am still trying to get a grasp on PWM and how you can change output voltage based on the pulse width. For Buck mode (Stage 1) the pulse width will be spaced far out to allow a large amount of current through, but what about voltage? Voltage and Current are changed at the same time right? So this would mean I would have to put more than 13.6V voltage into the battery in stage 1 since my panel outputs 24V?

I have been reviewing this schematic and trying to figure it out: Q3, which is a MOSFET, appears to be a direct short when it is activated.
What is the purpose of it? The MAX4173H appears to be a current sensing chip, but it is surface mount only? Is there an alternative besides using a chip? Maybe an ADC function or something different that I don't have to spend money on right now? I already have some MOSFETs on hand that I am sure I could use for the time being until I am able to order better ones. Problem is, I don't know how to "properly" run them without a MOSFET driver, which I don't have. Is it possible to run them reliably with just the Prop and a few resistors?

As for the operation of the buck switching regulator, the central component of that circuit is the inductor. The inductor stores energy as a magnetic field. Referring to the schematic on the LM2678 data sheet, when the output voltage drops below the chip's internal reference voltage (as sensed by the "feedback" pin) it turns on the transistor that connects the input voltage to the left side of the inductor. The current through the inductor increases (the magnetic field also increases) and charges the output capacitors until the output voltage exceeds the reference voltage. At this point the chip turns off the transistor, the magnetic field around the inductor collapses transferring its stored energy into the output capacitors, and once the output voltage drops below the reference voltage the cycle repeats.

Life is unpredictable. Eat dessert first.

Is PID needed for what I am trying to do? Is there a PID for dummies reference that I can look at that includes a complete schematic?

Here is what the microcontroller software needs to do for each stage of charging.

For stage 1 (bulk): The microcontroller would start the switching regulator at a low voltage and execute a loop that increments the output voltage and measures the charging current to find the peak current. Once the initial peak current has been found it would vary the voltage up and down a small amount to maintain the peak current.
It also monitors the output voltage to see if it has reached the "absorb" voltage at this point.

Stage 2 (absorb): Once the "absorb" voltage has been reached the microcontroller would maintain that output voltage from the switcher and monitor the current until it goes down to C/100.

Stage 3 (float): The microcontroller adjusts the switcher output voltage to maintain the float voltage.

You will need to measure the charging current and the switcher output/battery voltage, which can be done with a 2 channel ADC or 4 Prop pins used to make 2 sigma/delta ADCs. You will also need to control the switcher output voltage, which can be done using a DAC or a Prop pin and PWM.

Life is unpredictable. Eat dessert first.

I hope you don't find this patronizing, but I sincerely recommend that you start with a simple PWM controller, and do not attempt MPPT for the first iteration. Nothing you build will be wasted, and it can all be applied to MPPT if and when you are ready and you find it necessary (it may, in fact, not be necessary). Even though I'm a mechanical engineer with many solar electric installations under my belt, and I've been studying electronics for quite a while and have a good understanding (probably better than many electrical engineers), I'm going to start with simple PWM, also, because it's a prudent and effective path to take.

PID stands for Proportional, Integral, Derivative, and is a (software) method for convergence to a desired output for a controller, and, as such, there is no schematic. You don't really need it as long as you don't mind having the voltage hunt around a bit instead of being a steady-state absorb or float value. This won't significantly shorten the life of your battery.

Definitely, start simple, and if and when you want to add a little finesse to your voltage control, read up on PID and look at the subroutine in my regulate.c file, and we'll help you.
All you need for a simple PWM solar controller is a Prop, an ADC chip, a gate driver, and a MOSFET. A recommended extra is a 16x4 (or so) character LCD so you can output the voltage and whatever else you want to see. Put the MOSFET in the battery negative connection, and use the battery positive for the power supply for the gate driver. Use the ADC chip (I recommend the Nuvoton NAU7802 because it's accurate, easy-to-use, and it can also read the current when you're ready to do that) to read the voltage, and implement the 3 charging stages outlined before in software. I think the "regulate.c" file I included is a good starting point; you can remove all the stuff about the RE buss, wind, dump loads, and PID, if you want to.

Phase 2 would be to implement current measurement; with that, you can do all the things that a state-of-charge meter (such as a TriMetric 2025) can do, if you want to write the software to do it, and you can transition from the absorb stage to the float stage with a more accurate method than simply timing it.

Phase 3 would be to implement PID. Phase 4 would be to implement MPPT.

What I figure this would be useful for is to take a known voltage of 13.8V and set it as the "Need" variable. Then begin taking samples of the voltage that is charging the batteries. If the first sample is say 14.0V, an adjustment is made to the PWM duty cycle to drop the voltage to be closer to the "Need" variable based on a starting preset "step" amount of say +/- 50? Then take another reading and check it against the previous reading. Let's say that reading is 13.5V. This means that the duty cycle was changed too much and needs to be adjusted accordingly.
So to do this, I would take the previous reading of 14.0V and the latest reading of 13.5V and put them through the formula below to find the difference in percentage:

So first reading was 14.0V, which when put through the formula above is (14.0 * 100) / 13.8 = 101.45%.
Then the second reading would be (13.5 * 100) / 13.8 = 97.83%.

So the preset "step" amount of -50 from the duty cycle made a 3.62% change in voltage, which is too much, so the "step" variable needs to be changed to not change the voltage so much in the next adjustment. To do this, there will be another formula which I am working on to get the adjusted step amount. Will post once I figure it out.

Predefined values:
Pulse Period = 1000 ' just an example
"Need" Voltage = 13.8V
Step Increment = 50
First Reading (Stored reading) = 14.0V
Second Reading (Last reading) = 13.5V
Difference = 14.0 - 13.5 = 0.5

So in theory for each value of the Step Increment (50) the voltage change is 0.01V.
So to reach our goal, the Step value should change to (Need Voltage - Last Reading) = 0.3
Difference from last reading to needed voltage / Current Step increment voltage change (0.01) = New Step increment value: 30

So to sum it all up:
New PWM Pulse period = Pulse Period + (Need Voltage - Last Reading) / ((Stored reading - Last Reading) / Last Step Increment Value)

I will have to test this in a program to see if this is anywhere close to what I need, but I don't feel like writing a program right now :p

If you know how to use a P-channel MOSFET, then go for it (in the positive leg). Just make sure you connect the Source to the battery side so the voltage necessary for the gate turn on/off will stay relatively stable and not float with the solar panel voltage whenever it disconnects and the panel side goes to the panel Voc. If/when you want to implement a buck converter for MPPT, I think you will need an N-channel.

For the PWM, I would recommend a 16-bit duty cycle to keep the voltage stable.
If you don't mind a little audible hum, this works fine with an 80 MHz clock (1.2 kHz hum). If you want to keep it out of the audible range (say, at least 40 kHz), you can use software stepping to achieve this. An example is here:-(*.S)-in-propgcc. If you're not using PropGCC, start! If you really, really don't want to, you can implement that easily in Spin/PASM by removing the #ifdefs and including everything in between where it says "#ifdef SOFTWARE_STEPPING" and the "#endif".

Don't try to read the current by using sigma-delta with the Prop; you will get no joy. It's possible with an MCP3208 or MCP3304, but only with a low Vref of about 0.3-0.4V and significant over-sampling (say, 1024x). If you want to read the micro- to milli-volts from a shunt, make your life easier and get a differential with gain ADC, such as the NAU7802.

The gate is insulated from the conducting channel so it is a capacitor. When you turn the MOSFET on or off you have to charge or discharge that capacitor. That is why you have to provide a large (relatively) current for a brief time.

Life is unpredictable. Eat dessert first.

Anyway, it's generally measured in nanoCoulombs. One Coulomb is one Amp-Second. So, if a MOSFET has a gate charge of 1000nC, it'll take 1mA 1mS to turn it on/off. Edit ... forgot link :p

If you don't want to use a packaged gate driver, I think there are ways to do this with discrete components (usually based on a transistor, if I recall correctly), but I have no experience doing it this way, and I don't know if you can get the same performance. Maybe.

However, I want / need to increase the amps and lower the voltage going into my charge controller to something like 14 or 15 Volts while increasing the amps a little to at least 5 amps, using capacitors / a transformer or whatever will be needed. How can I achieve this at the lowest cost possible?
More info you may need: I am using a 30 amp (max) PWM charge controller that isn't getting enough amps from the one panel to charge a 12 volt DC battery (100 Ah). Will adding a 25 volt capacitor across the solar panel input on the PWM charge controller increase the amps? Please note I can't buy another solar panel or an MPPT charge controller as I can't afford that, so if my capacitor idea is a good one, we all could use it to gain more amps. If this is not the case, please let me know :).
https://forums.parallax.com/discussion/143531/solar-charge-controller
tensorflow:: ops:: Fingerprint

#include <array_ops.h>

Generates fingerprint values.

Summary

Generates fingerprint values of data.

Fingerprint op considers the first dimension of data as the batch dimension, and output[i] contains the fingerprint value generated from contents in data[i, ...] for all i.

Fingerprint op writes fingerprint values as byte arrays. For example, the default method farmhash64 generates a 64-bit fingerprint value at a time. This 8-byte value is written out as a uint8 array of size 8, in little-endian order.

For example, suppose that data has data type DT_INT32 and shape (2, 3, 4), and that the fingerprint method is farmhash64. In this case, the output shape is (2, 8), where 2 is the batch dimension size of data, and 8 is the size of each fingerprint value in bytes. output[0, :] is generated from 12 integers in data[0, :, :] and similarly output[1, :] is generated from the other 12 integers in data[1, :, :].

Note that this op fingerprints the raw underlying buffer, and it does not fingerprint Tensor's metadata such as data type and/or shape. For example, the fingerprint values are invariant under reshapes and bitcasts as long as the batch dimension remains the same:
https://www.tensorflow.org/versions/r2.3/api_docs/cc/class/tensorflow/ops/fingerprint?hl=nb-NO
Keycloak Identity Brokering with OpenShift This article discusses how to set up and configure a Keycloak instance to use OpenShift for authentication via Identity Brokering. This allows for Single Sign On between the OpenShift cluster and the Keycloak instance. The Keycloak instance will be running on the OpenShift cluster and leverage a ServiceAccount OAuth Client. Provisioning Keycloak to your OpenShift namespace Use the below command to create the Keycloak resources in your OpenShift project. oc process -f | oc create -f - IMPORTANT: This template is intended for demonstration purposes only. This will create the following resources: - DeploymentConfig – Defines the Keycloak image to use and other Pod & container settings. - Service – Defines a Service in front of the Keycloak Pod. - Route – Exposes the Keycloak Service at an externally available hostname. - ServiceAccount – Defines a constrained form of OAuth Client in our namespace. For more info on the ServiceAccount OAuth Client, see. After provisioning, the Keycloak service will be available at the exposed Route. Use the below command to get the route and sign in to the Administration Console. oc get route keycloak --template "http://{{.spec.host}} " You can sign in initially using the ‘admin’ user and the generated password stored in the ‘KEYCLOAK_PASSWORD’ environment variable. oc env dc/keycloak --list | grep KEYCLOAK_PASSWORD Creating a new Realm The ‘admin’ user will be signed in to the ‘master’ realm. This user has full control over the Keycloak instance. We can create a dedicated realm for our OpenShift project and allow OpenShift users to administer the realm. Only users who can access our OpenShift cluster will be able to sign in to Keycloak. Create a realm, and name it after our OpenShift Project. Configuring Keycloak to use OpenShift for Identity Brokering After creating the realm, the context should switch to the new realm. 
From the ‘Identity Providers’ menu, choose to ‘Add provider…’ and select ‘OpenShift v3’. Fill in the below fields. Client ID This field is the OAuth Client identifier in OpenShift. As we’re using a ServiceAccount OAuth Client, the id will be in the below format: system:serviceaccount::keycloak For example, if our project had an id of ‘myproject’, the Client ID would be: system:serviceaccount:myproject:keycloak Client Secret The secret is stored as a token for the ServiceAccount in OpenShift. To retrieve the secret, execute the following: oc sa get-token keycloak Base URL The Base URL is the OpenShift Master URL e.g.. IMPORTANT: The OpenShift Master URL will need to have a trusted CA-signed certificate for Keycloak to successfully call the OAuth callback endpoint. Default Scopes These are the scopes to send to OpenShift when authorizing the user. As we’re only interested in authentication, and not making modifications to OpenShift on behalf of users, we can just use the ‘user:info’ scope. After filling in all the above fields, the Provider can be created. Giving your OpenShift user a Role in Keycloak If you attempt to sign in to the realm now, any user who successfully signs in will *only* be able to manage their own account in Keycloak. To allow users to manage the realm, they’ll need additional permission. There are two approaches to giving Users extra permissions: - Setting Default Roles/Groups for every user. - Explicitly setting Roles per user. Explicit Roles can be managed from the ‘Users’ menu after Users have signed in at least once. However, we’re going to set up Default Roles so Users have roles on first sign in. To set a Default Role, choose the ‘Default Roles’ tab from the ‘Roles’ menu. Then choose the ‘realm-management’ Client Role and add the ‘manage-realm’ role to the ‘Client Default Roles’. You may want to choose different or more restrictive roles depending on your requirements. 
Trying it out

Try it out by navigating to the Realm Admin Console page in a new browser session or incognito window. The URL for the Realm Admin Console page can be found in the ‘Clients’ menu as the Base URL for the ‘security-admin-console’ Client ID. This will show a login screen with an ‘Openshift v3’ option. Choosing the ‘Openshift v3’ option should open an OpenShift login page. Log in to OpenShift and you should eventually be redirected back to Keycloak and be able to manage the Realm. You may need to fill in some account details on the first login.

Where to go from here?

If you’d like to have more control over the permissions OpenShift users have in your Keycloak instance, you may want to remove the Default Roles. This would be particularly important if you don’t know or trust all OpenShift users. In that case, you could remove the Default Roles and only add specific Roles to users you trust. You could also create a Group with specific Roles to make it more manageable.

The OpenShift template used in this article is available at. The Red Hat OpenShift Container Platform is available for download. For more information about Red Hat OpenShift and other related topics, visit: OpenShift, OpenShift Online.
https://developers.redhat.com/blog/2017/12/06/keycloak-identity-brokering-openshift/
Recently while I was studying a C++ tutorial, I wrote this piece of code:

#include <iostream>
#include <fstream>
#include <string>

using namespace std;

void main()
{
    string text;
    ifstream myFile ("data.txt");
    while (!myFile.eof())
    {
        getline(myFile, text);
        cout << text << endl;
    }
    cin.get();
}

Now the problem is I really don't understand this statement:

while (!myFile.eof())

Please explain it to me! I am just a beginner, so the more lucid the explanation, the better! If you feel there's something I need to learn beforehand, or any enhancement to the code, please convey it to me! I am ready to learn. Also, how do I check if the file already exists or not? Also please explain the ! operator. Thanks

This post has been edited by Jeet.in: 26 June 2011 - 12:54 PM
http://www.dreamincode.net/forums/topic/237104-file-read-fstream-queries/page__p__1372860
Performance slowdown when displaying many small images
faKastner, Aug 11, 2012 2:30 AM

Hey! I'm working on a small Flex Mobile game. It uses a 2D game grid and I am trying to display tiles in the cells. There are 3 different images being used: one for walls, one for floor and one for the character. I display these images in the following way: I have a group with an id (displayGroup). I then create an array of new Image() with the position they should be placed at and the image they should use. I then loop over the array and add the images individually as elements into the group. In code, it looks like this:

var temparray:Array = disp.drawMapGfx(map, hero)
for each (var img:Image in temparray)
{
    displayGroup.addElement(img);
}

Now, the problem appears when I move. What I currently do is the following:

var numImages:int = displayGroup.numElements-1;
var temparray:Array = disp.drawMapGfx(map, hero)
for each (var img:Image in temparray)
{
    displayGroup.addElement(img);
}
for(var i:int = numImages; i >= 1; i--)
{
    displayGroup.removeElementAt(i);
}

I first grab the total number of images in the group. Then I add the updated images that take the new position into account, and then I remove all the outdated images from the group. I go to 1 instead of 0 because element 1 is a rect that creates the background color.

For testing, I've been using "Run as iPhone 3GS". Every image is 50x30, which, taking the GUI into consideration, equates to 11 x 17 images + 1 for the character. So it is displaying up to a maximum of 188 images. Picture for reference:

The problem is that the images flicker as the old images disappear and the new images appear. What can I do to have it redraw the screen faster, to prevent any flickering? Would it make sense to, instead of redrawing every image, simply shift the whole scene and just load/remove images where necessary, rather than reloading all images?
And a small second question: Is it possible to share a mobile app in a single desktop-ready AIR file, so that I can have friends who do not have smartphones help test on their computer?

1. Re: Performance slowdown when displaying many small images
BBlommers, Aug 13, 2012 12:33 AM (in response to faKastner)

How big is the world, in total? Theoretically, you could load the whole world on startup, and use a mask to display only the relevant part of the world. With each move of the character, you move the mask to the next section.

2. Re: Performance slowdown when displaying many small images
faKastner, Aug 13, 2012 2:41 AM (in response to BBlommers)

Hello, Thank you for replying, BBlommers. The world in total is 80 by 24 cells, so not impossibly large, though trying to load all images at once without any mask or anything does spike the memory usage to 20 MB (currently, it sits at around 6 MB). How does this masking that you speak of work? And would embedding the images make sense?

I've played around a bit with re-using the same image containers and setting clearOnLoad to false; the result is better, but when it switches from the invisible black tiles to display, there is still flickering. Currently, I'm thinking that maybe if I expand the view to around 2 cells beyond visible range, as a buffer, and then copy the source of one image into the other (e.g. image[0].source = image[12].bitMapData), I might be able to prevent flickering, but I still have to test that. Any ideas, tips or advice in the meantime would be really helpful.

3. Re: Performance slowdown when displaying many small images
BBlommers, Aug 13, 2012 3:00 AM (in response to faKastner) (1 person found this helpful)

Embedding the images will probably improve the load time (and improve the memory usage as well). By masking, you 'mask' a part of the image. In other words, you only display a specific rectangle of the image. You can think of it as a viewport; only the part that is masked will be visible.
Example code: <fx:Script> public function move():void { moveCanvas.xTo = 100; moveCanvas.play(); } </fx:Script> <fx:Declarations> <s:Move </fx:Declarations> <!-- Surrounding group necessary, since panel1 shouldn't have an HorizontalLayout as parent --> <s:Group <s:Group <s:Rect <s:fill> <s:SolidColor </s:fill> </s:Rect> </s:Group> <s:Group <CellContainingImage width="100" /> <CellContainingImage width="100" left="100" /> </s:Group> </s:Group> Panel1 is the total world view, containing all cells. By calling the move-method, you move this world to the left. The viewport (mask) stays in place, but the world behind this viewport moves to the left. This way, you would see the cell to the right of the current cell. I hope this makes sense.. Let me know if there are any issues. 4. Re: Performance slowdown when displaying many small imagesfaKastner Aug 14, 2012 5:40 AM (in response to BBlommers) Hello, I've tried the embedding and it's helped a lot, but it's not perfect yet, so I will definitely look into the masking you suggested. I figured that other people might have smiliar issues so I wanted to post some stats I've gathered on the different implementations. I've tested 3 different implementations: Implementation 1: Simply using the images location as the source of the image tag: package model { import mx.core.BitmapAsset; import mx.core.FlexGlobals; import spark.components.Group; import spark.components.Image; public class Display { private var floorURI:String = "img/Brown-Block.png"; <); } This resulted in a peak memory usage of 9.6 MB, and had considerable flickering. Next test was a poor implementation of embedding. 
I will post it here in shame, so that other people don't make the same mistake:

package model { import mx.core.BitmapAsset; import mx.core.FlexGlobals; import spark.components.Group; import spark.components.Image; public class Display { [Embed(source="img/Brown-Block.png")] private var floorURI:Class; < = new floorURI() as BitmapAsset; group.addElement(image); }

Needless to say, making a new class for every image did no favors to the memory usage, though the loading time of the images still improved considerably. Peak memory usage of 32.3 MB.

Now, finally, what I assume to be the prim and proper implementation, and the one I should've used from the beginning:

package model { import mx.core.BitmapAsset; import mx.core.FlexGlobals; import spark.components.Group; import spark.components.Image; public class Display { [Embed(source="img/Brown-Block.png")] private var floorClass:Class; private var floorURI:BitmapAsset = new floorClass() as BitmapAsset; <); }

Just a single embedded class turned into a resource, which is then reused by every image as its source (which is OK as we won't be modifying or applying any effects to it). A peak memory usage of 6.2 MB AND fast load times. Now to try the masking!

5. Re: Performance slowdown when displaying many small images
Flex harUI, Aug 14, 2012 2:46 PM (in response to faKastner)

Are you looking for something like this: ash

6. Re: Performance slowdown when displaying many small images
faKastner, Aug 16, 2012 1:44 PM (in response to Flex harUI)

Hello, @Flex harUI: The spritesheet thing unfortunately doesn't seem related to my problem. I don't use a 3D engine, and condensing 3 files into one wouldn't give me a sizable performance increase, but thank you for trying! @BBlommers: Thank you very much for the mask suggestion!
I ended up going with a mix between your suggestion and what I had thought of myself: I use the mask, but instead of loading the whole world, I simply load 1 cell outside of the player's view as a buffer. When the player moves, I move the mask, then load in the next row of images just outside of the mask, and then remove the row of images on the opposite side, so that it maintains a 1 cell buffer on each side. This results in a fairly low memory usage (13 MB peak when moving rapidly, but mostly sitting at 10 MB) and absolutely no flickering, as the images that come into view have been loaded already. I've marked your initial response where you suggested to use a mask as correct. Thank you very much for your assistance!

7. Re: Performance slowdown when displaying many small images
BBlommers, Aug 17, 2012 12:54 AM (in response to faKastner)

Glad I could help.

8. Re: Performance slowdown when displaying many small images
faKastner, Aug 24, 2012 7:58 PM (in response to BBlommers)

Hello again BBlommers, I had 2 follow-up questions to your (awesome!) solution. The major one is the following: I added multiple masked groups to use as different layers for my graphical assets. What I mean is that I have one group solely for the map's terrain, called the mapLayer; one group solely for the units populating the map, called the unitLayer; one group for objects (chests, barrels and what have you) populating the map, called the objectLayer; one group for items, called the itemLayer; and finally a group for text that gets displayed, called the labelLayer.
In code, it looks like this:

<s:Group <s:Group <s:Rect <s:fill> <s:SolidColor </s:fill> </s:Rect> </s:Group> <s:Group </s:Group> <s:Group <s:Group <s:Group <s:Group </s:Group> </s:Group>

Previously, I had it all in a single group, but it became very cumbersome to work with as it was highly unstable - say, because of a logic error, an early element gets removed, which gets the index of what element has what ID in the group mixed up, which then quickly results in out-of-range errors. This new approach is a lot easier and faster to work with, not to mention more stable, but the problem comes when working with depth. As the tiles I am using have a perspective to them, I need some to be at a higher Z-level, or depth, than others. Previously I was able to easily solve this by assigning the depth = the Y position on the coordinate grid of the unit in question, but now that I use individual groups, it seems impossible to align them on the same depth. A picture might be helpful:

This is how I want it to be (how it was with my old implementation). Please note how one of the ladybugs is partially hidden by the wall.

Now with the new method of having multiple groups: The ladybug always appears unobscured, due to being in a different group.

So the question is: Is there a reasonable way to get the desired result? Or is there a better approach, generally speaking, to be able to somewhat separate the different elements?

The second question is much simpler: I get a bunch of warnings about mask not being bindable, but I'm not sure in what way I would bind it to get rid of the error. I don't really intend on ever changing the mask as it's simply the canvas. I've tried to look up information on how to bind spark elements such as the group I am using for a mask, but came up blank. (Or the answers I found just raised way more questions on how it translates to my situation.)

If you'd prefer that I make a new topic then I will gladly do so; for now I put it here as it relates.

9.
Re: Performance slowdown when displaying many small images
BBlommers, Sep 10, 2012 2:42 AM (in response to faKastner)

Apologies for the late response, I was on holiday. Is the problem already solved, or are you still stuck? I don't have a problem at hand, but I can give it some thought if it's still relevant. As for the second question, I encounter the same warnings but have never acted on them. I don't think it is related to the first question, or responsible for any error at all.
https://forums.adobe.com/thread/1047945
In trying to understand ASLR, I built this simple program:

#include <stdio.h>
#include <stdlib.h>

int main()
{
    printf("%p\n", &system);
    return 0;
}

$ cat /proc/sys/kernel/randomize_va_space
2
$ gcc aslrtest.c

Yet it prints the same address for system() every run:

0x400450

Any references to a function from a shared library that's made from a non-position-independent object file in the main program require a PLT entry through which the caller can make the call via a call instruction that's resolved to a fixed address at link time. This is because the object file was not built with special code (PIC) to enable it to support calling a function at a variable address.

Whenever such a PLT entry is used for a function in a library, the address of this PLT entry, not the original address of the function, becomes its "official" address (as in your example, where you printed the address of system). This is necessary because the address of a function must be seen the same way from all parts of a C program; it's not permitted by the language for the address of system to vary based on which part of the program is looking at it, as this would break the rule that two pointers to the same function compare as equal.

If you really want to get the benefits of ASLR against attacks that call a function using a known fixed address, you need to build the main program as PIE.
https://codedump.io/share/eZLNCCxTDIiO/1/why-aren39t-glibc39s-function-addresses-randomized-when-aslr-is-enabled
A simple and small ORM supports postgresql, mysql and sqlite

peewee

Peewee is a simple and small ORM. It has few (but expressive) concepts, making it easy to learn and intuitive to use.

- a small, expressive ORM
- python 2.7+ and 3.4+ (developed with 3.6)
- supports sqlite, mysql, postgresql and cockroachdb
- tons of extensions

Examples

Defining models is similar to Django or SQLAlchemy:

from peewee import *
import datetime

db = SqliteDatabase('my_database.db')

class BaseModel(Model):
    class Meta:
        database = db

class User(BaseModel):
    username = CharField(unique=True)

class Tweet(BaseModel):
    user = ForeignKeyField(User, backref='tweets')
    message = TextField()
    created_date = DateTimeField(default=datetime.datetime.now)
    is_published = BooleanField(default=True)

Querying:

# Get tweets created by one of several users.
usernames = ['charlie', 'huey', 'mickey']
users = User.select().where(User.username.in_(usernames))
tweets = Tweet.select().where(Tweet.user.in_(users))

# We could accomplish the same using a JOIN:
tweets = (Tweet
          .select()
          .join(User)
          .where(User.username.in_(usernames)))

For more examples, check out the example twitter app.
https://pythonawesome.com/a-simple-and-small-orm-supports-postgresql-mysql-and-sqlite/
C++ Question

octave:1> N=54321
N = 54321
octave:2> N/10**4
ans = 5.4321
octave:3> floor(N/10**4)
ans = 5

ekt_bear: Actually, the rounding down seems to be unnecessary. A C++ float divided by an integer, cast into an integer, seems to automatically truncate for you:

#include <cmath>
#include <iostream>
using namespace std;

int main()
{
    float n = 99999; // Some 5 digit number
    long woo = n/pow(10,4);
    cout << "Result is " << woo << endl;
    return 0;
}

mkwayisi: Am I missing something? What's up with this casting and long data types? Ain't it as simple as this:

#include <iostream>

int main()
{
    int num = 12345;
    std::cout << "Left-most digit: " << num / 10000;
    return 0;
}

lordZOUGA:

#include <iostream>
using namespace std;

int main()
{
    long aNumber = 25632; //some number with irrelevant number of digits
    char* aString = static_cast<char*>(aNumber);
    cout << "the first digit is: " << aString[0] << endl;
    //or cast ur number back to int and use it for some stuff
    //int myNumber = static_cast<int>(aString[0]);
    return 0;
}

lordZOUGA: your code wasted memory.

ekt_bear: This doesn't compile under g++:

woo2.cpp: In function ‘int main()’:
woo2.cpp:27:45: error: invalid static_cast from type ‘long int’ to type ‘char*’

Moreover, it seems like a very clunky way of doing it. You are popping off digits from a number. Unnecessary to convert to a string first...

ekt_bear: Finally, compilers are smart. It will optimize this away for me... no need to worry about it when writing code.

lordZOUGA: no, it won't optimize but it will compile. It will set aside space like you specified.

lordZOUGA: oh well, wrote that on my phone... Guess I was wrong in converting it to a pointer.. Seems clunky but for an unknown number of digits...

ekt_bear: Which real-time systems don't allow you to perform division?
ekt_bear: So you can do integer-to-string conversion on a real-time system, but not division by 10? lol
http://www.nairaland.com/948051/c-question
I was developing a small Windows app using C#, and I decided that it would be nice to save the window's position and size when the app exited so it would be the same the next time the app was run. Having come from the MFC world, I searched for something equivalent to GetProfile and WriteProfile methods of the CWinApp class, but found nothing. I knew there was a Registry class in the Microsoft.Win32 namespace, but there was nothing available for writing to INI files. I would need to use the Win32 APIs for those. If I wanted to use an XML file, there was a whole slew of classes available in the System.Xml namespace. I also looked into using App.config files which some people had mentioned in the forums, but those supposedly were not meant to be written to... GetProfile WriteProfile CWinApp Registry Microsoft.Win32 System.Xml All these similar mechanisms, each with their own interfaces... which one to use? I just needed a simple way to persist my window's position and size somewhere to be later retrieved. I set my small Windows app project on the side and decided that it was time to write a new class library; one that would unify all these mechanisms into a common and simple interface: the IProfile interface. This article presents the IProfile interface and the four classes that implement it: Xml, Registry, Ini, and Config. These four classes allow reading and writing values to their respective mediums, via a common and simple interface. Enjoy! IProfile Xml Ini Config So what does this common interface look like? 
Here's a simplified version of it -- see IProfile.cs or AMS.Profile.chm for the complete definition:

interface IProfile
{
    void SetValue(string section, string entry, object value);
    object GetValue(string section, string entry);
    string GetValue(string section, string entry, string defaultValue);
    int GetValue(string section, string entry, int defaultValue);
    double GetValue(string section, string entry, double defaultValue);
    bool GetValue(string section, string entry, bool defaultValue);

    string[] GetEntryNames(string section);
    string[] GetSectionNames();

    void RemoveEntry(string section, string entry);
    void RemoveSection(string section);

    bool ReadOnly { get; set; }

    event ProfileChangingHandler Changing;
    event ProfileChangedHandler Changed;
}

As you can see, it's a simple interface bearing a slight resemblance to the GetProfile and WriteProfile methods of the CWinApp class. It's all based on a simple paradigm: a piece of data is associated with an entry inside of a section. The section may contain multiple entries and there may be multiple sections available. That's it!

A section could be something like "Main Window", holding entries such as "X", "Y", "Width", "Height", each one having their corresponding numeric values. If you've worked with INI files in the past, this concept will be very familiar to you.

Aside from the standard Get and Set methods used to read and write the values, the interface also allows you to retrieve the names of the available entries and sections, and to remove them if desired. It even has a ReadOnly property to prevent any changes to the profile. And to top it off, it contains two events that allow you to be notified when something in the profile (section, entry, or value) is about to change or has already changed. (I may have gone a little overboard :-) )
The Registry was my second choice. Although people may be turning away from it, it's still a very efficient way to read and write data to a centralized location. Lastly, I went with INI files. I realize they're practically extinct these days but some people still use them due to their simplicity. Once I had completed my classes, I decided to add one more implementation to the list: one to handle config files. From the forums and articles here at CP, I noticed several developers needing a way to write to them. So I said, "Why not?". It's somewhat similar to the XML one so it didn't take long to write. The result was four classes, all part of the AMS.Profile namespace: Xml, Registry, Ini, and Config. Their main objective is the implementation of IProfile based on their respective storage mediums. AMS.Profile So how do these classes store their Profile data? Here's a brief synopsis of how each class works: As you probably know, XML is all about storing data inside a text file in pretty much any markup-based format. So which format would I choose to organize the data using the section/entry paradigm? After considering a couple of possibilities, I decided that the format below would be preferable, since it allows section and entry names to contain spaces. It also looks cleaner and more consistent than if I had used the section and entry names themselves to name the elements. <?xml version="1.0" encoding="utf-8"?> <profile> <section name="A Section"> <entry name="An Entry">Some Value</entry> <entry name="Another Entry">Another Value</entry> </section> <section name="Another Section"> <entry name="This is cool">True</entry> </section> </profile> Notice, the root element is called "profile". This is the default root name, which you may change via the class' RootName property. When you check out the class, you'll also notice that the default name of the XML file is based on the type and name of the application -- program.exe.xml or web.xml. 
This, of course, is also customizable. The above data would appear similar to this, when viewed in the Registry editor (regedit.exe):

My Computer
  HKEY_CURRENT_USER
    Software
      AMS
        ProfileDemo
          A Section
            An Entry       Some Value
            Another Entry  Another Value
          Another Section
            This is cool   True

INI files are pretty much self explanatory, format-wise. Here's the above data in INI format:

[A Section]
An Entry=Some Value
Another Entry=Another Value

[Another Section]
This is cool=True

Like the XML file, the default file name will be based on the name and type of the application -- program.exe.ini or web.ini. If you don't like it, it's easy enough to change via the constructor or Name property.

Config files are the most complex of the bunch, format and code wise. Let me begin by illustrating how the above data would look:

<configuration>
  <configSections>
    <sectionGroup name="profile">
      <section name="A_Section"
               type="System.Configuration.NameValueSectionHandler, System, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=null" />
      <section name="Another_Section"
               type="System.Configuration.NameValueSectionHandler, System, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=null" />
    </sectionGroup>
  </configSections>
  <profile>
    <A_Section>
      <add key="An Entry" value="Some Value" />
      <add key="Another Entry" value="Another Value" />
    </A_Section>
    <Another_Section>
      <add key="This is cool" value="True" />
    </Another_Section>
  </profile>
  <appSettings>
    <add key="App Entry" value="App Value" />
  </appSettings>
</configuration>

As you can see, there's a lot more going on here than with the other formats. The profile element contains an element for each section and the values are kept as attributes of add elements. Notice that for each section, the configSections element needs to specify how to read the values contained by it.
This is all standard fare, required for the framework to properly load the values and allow you to retrieve them using the System.Configuration.ConfigurationSettings class, like this: profile add configSections System.Configuration.ConfigurationSettings NameValueCollection section = (NameValueCollection) ConfigurationSettings.GetConfig("profile/A_Section"); string value = section["An Entry"]; Notice, there's also an "appSettings" element, which wasn't part of the other samples. I just threw it in there just to show that it may also be accessed via the Config class. If you're familiar with the System.Configuration namespace, you'll know that appSettings is the default section used for storing application-specific settings that may be read using the ConfigurationSettings.AppSettings property: appSettings System.Configuration ConfigurationSettings.AppSettings string value = ConfigurationSettings.AppSettings["App Entry"]; Well, the Config class also allows you to read (and of course, write) to that section, as follows: Config config = new Config(); config.GroupName = null; // don't use the "profile" group ... string value = config.GetValue("appSettings", "App Entry", null); config.SetValue("appSettings", "Update Date", DateTime.Today); One thing to keep in mind for Windows apps is that .NET caches the config data as it reads it, so any subsequent updates to it on the file will not be seen by the System.Configuration classes. The Config class, however, has no such problem since the data is read from the file every time, unless buffering is active. Like the Xml and Ini classes, the default file name will be based on the name and type of the application -- program.exe.config or web.config. This is the name expected by the .NET framework classes, but you can still change it if you want. Keep in mind that writing to web.config causes the application to end along with all the active Sessions. So now that you've seen the classes, how do you go about using them in your code? 
Well, I packaged them inside their own DLL, called AMS.Profile.dll. This makes them easy to use in your various projects, without having to recompile them everywhere. Simply add the DLL to your project's References inside Visual Studio .NET.

OK, so how do you actually use these classes inside the code? That's the easy part, actually. The hard part may be deciding which one of the four to use. Whereas before you may have based your decision on the amount and complexity of the code involved, that's no longer an issue. Now you just worry about which storage medium is best for the job, and that part is basically up to your program's requirements and/or personal preferences. Here are some of my observations to help you decide:

Still trying to decide? Here's the bottom line: even if you don't pick the proper class from the start, it's easy enough to switch to another one later. OOP rocks!

Now that you have selected the profile class for the job, how do you use it inside your code? Well, most of the time, you'll just declare the class as a field of another class and then call the GetValue and SetValue methods. Here's an example with the Xml class:

Xml profile = new Xml();
...
int width = profile.GetValue("Main Window", "Width", 800);
int height = profile.GetValue("Main Window", "Height", 600);
...
profile.SetValue("Main Window", "Width", this.Size.Width);
profile.SetValue("Main Window", "Height", this.Size.Height);

Keep in mind that the Xml and Config classes allow you to use buffering to improve the performance of reads and writes to their respective files. If you use C#, you can take advantage of the using statement to easily create the buffer, write to it, and then automatically flush it (when it's disposed). Here's an example:

Xml profile = new Xml();
...
using (profile.Buffer()) {
    profile.SetValue("Main Window", "Width", this.Size.Width);
    profile.SetValue("Main Window", "Height", this.Size.Height);
}

As you saw from the IProfile interface above, there are several other methods, properties, and events available. Those you'll find fully documented inside the code, as well as in the downloadable documentation (AMS.Profile.chm, zipped). I recommend you view the help file to get the details on everything that's available from the DLL. Of course, if you have questions or concerns, you may address them to me personally or by posting a message below.

I wrote the demo program to illustrate the functionality of the four Profile classes and, at the same time, to test the classes. I added a Test button that calls a method of the Profile base class to verify that the most important methods and properties work correctly and consistently across all profiles. You may want to check out that code for examples of how to use these classes.

Keep in mind that this is just a demo program, not a utility. Each Profile object works with its default name. For example, for the Xml object, the name of the file will be Demo.exe.xml, and it will be located in the same folder as Demo.exe. There's no provision for changing these names, since, again, it's just a demo.

To use the demo, select a Profile from the top combo box. This will cause the sections in the profile to be placed into the Section combo. Choose a section from there and you'll see all of its entries get added to the Entry combo box. Choose an entry from there and you'll see its value placed into the Value textbox. If you want to add a new Section or Entry, simply type it into the proper combo box. Press the Save button to write the value to the profile via SetValue. This should all be pretty straightforward, I hope. If not, please let me know.
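For readers more comfortable outside C#, the buffering idea behind using (profile.Buffer()) can also be expressed as a Python context manager: writes accumulate in memory and are flushed to disk once, when the block exits. This is a sketch of the pattern only, using a hypothetical JSON-backed profile rather than the real AMS.Profile classes:

```python
import json
from contextlib import contextmanager

class BufferedProfile:
    """Toy profile store: section -> {entry: value}, kept in a JSON file."""

    def __init__(self, path):
        self.path = path
        self._buffer = None  # dict while buffering, None otherwise

    def set_value(self, section, entry, value):
        if self._buffer is not None:
            # Buffering: defer the write, touch memory only.
            self._buffer.setdefault(section, {})[entry] = value
        else:
            # Unbuffered: load, modify, and save on every call.
            data = self._load()
            data.setdefault(section, {})[entry] = value
            self._save(data)

    @contextmanager
    def buffer(self):
        self._buffer = {}
        try:
            yield self
            # Flush once, on normal exit of the with-block.
            data = self._load()
            for section, entries in self._buffer.items():
                data.setdefault(section, {}).update(entries)
            self._save(data)
        finally:
            self._buffer = None

    def _load(self):
        try:
            with open(self.path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def _save(self, data):
        with open(self.path, "w") as f:
            json.dump(data, f)
```

Used as `with profile.buffer(): profile.set_value(...)`, several writes land in a single save, which is the same performance win the article describes for the Xml and Config classes.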
https://www.codeproject.com/Articles/5304/Read-Write-XML-files-Config-files-INI-files-or-the?msg=3664692
This is fine, though in general it should be noted that it is better for the kernel header files to use machine types from machine/stdint.h so as not to create unnecessary namespace pollution by requiring that <sys/types.h> be included. Header files would be able to safely conditionally #include <machine/stdint.h> and friends without polluting the namespace, and thus become more self-contained. (Generally speaking, only sys header files and source should directly include machine/ header files like that.)

A good example of this would be a program that uses old-style varargs.h, and we have several. We would not want any of our header files which use varargs internally to include <stdarg.h>, because it would conflict with programs using the old-style varargs.h, which is why they all use machine/stdarg.h.

-Matt
Matthew Dillon <dillon@xxxxxxxxxxxxx>

:joerg 2004/03/01 03:50:59 PST
:
:DragonFly src repository
:
: Modified files:
:   sys/sys  bus_private.h
: Log:
: Adjust indentation, use uint32_t and line up comments.
:
: Revision  Changes   Path
: 1.4       +62 -62   src/sys/sys/bus_private.h
http://leaf.dragonflybsd.org/mailarchive/commits/2004-03/msg00003.html
Unknown Service name
Anton Hughes, Nov 14, 2014 2:36 PM

Hi

I have what should be a very simple use case: a request is routed to one of two bean services. Both services use the same interface, but different implementations. However, SY can only find one of the services - and continues to say the BService is 'unknown'. Both the Service name and Component name are 'BService'. This is the same pattern as in the BrokerSelectorService.

Can anyone tell me what the problem is?

Thanks
- unknown service.zip 33.1 KB

1. Re: Unknown Service name
Rob Cernich, Nov 14, 2014 2:44 PM (in response to Anton Hughes)

Hey Anton,

I think the issue is with the @Service annotation on BService. I think you need to specify the name "BService" explicitly, e.g. @Service(value=BrokerSelectorService.class, name="BService"). I think that should fix you up.

Hope that helps,
Rob

2. Re: Unknown Service name
Jorge Morales, Nov 14, 2014 2:44 PM (in response to Anton Hughes)

Hi, I haven't looked at the attachment as I'm on my phone, but my guess is that your services, as both have the same interface, need the name value in the @Service annotation to distinguish between both implementations.

3. Re: Unknown Service name
Anton Hughes, Nov 14, 2014 3:06 PM (in response to Jorge Morales)

Jorge Morales wrote:
Hi, I haven't looked at the attachment as I'm on my phone, but my guess is that your services, as both have the same interface, need the name value in the @Service annotation to distinguish between both implementations.

Hi Jorge

I would expect, having set the interface and implementation in the switchyard.xml, that this would be enough to bind the interface and implementation.

Rob

Thanks - this did work - but, as mentioned above, why is this needed after I have already specified it in the xml? Also, your solution isn't really type safe. And type safety is one of the more innovative improvements that CDI brings - this seems like a bit of a step backwards.
Also, it means - as I found out - refactoring is a manual process. So, if specifying the contract and the implementation in the xml, like so:

<sca:component ...>
    <bean:implementation.bean .../>
    <sca:service ...>
        <sca:interface.java .../>
    </sca:service>
</sca:component>

does not actually tell SwitchYard what the implementation for the contract is - then why is this needed? Do we even need the xml?

4. Re: Unknown Service name
Rob Cernich, Nov 14, 2014 3:33 PM (in response to Anton Hughes)

Hey Anton,

Unfortunately, the issue here is that the CDI integration relies on the annotation, while the SwitchYard deployment itself is configured using the switchyard.xml file. Given that, there is an obvious disconnect between the two, and that's probably an area that should be improved (i.e. the CDI integration should rely on the switchyard.xml too). That said, the default name attached to the service is the simple name of the service interface type (e.g. BrokerSelectorService), so in your case, you had two services with the same name, at least as far as the CDI integration was concerned.

switchyard.xml is needed to configure items which can't be configured using annotations (e.g. the application namespace). There are scanners which can be configured on the switchyard-plugin, which will populate portions of the switchyard.xml during the build process (e.g. the transformer scanner). That said, it's a bit tricky getting the editor to display the generated components/services (i.e. your mileage may vary). If you want, you can enable the bean scanner, but you may end up with multiple component definitions if you're creating components through the editor. (I don't think the editor populates the name or component name attributes on @Service, and given the defaults used in the UI, you'll end up with two components: SomeServiceBean (created by the tooling) and SomeService (created by the scanner). If you simply rely on the scanner, you shouldn't have any problem, but YMMV.)

Hope that helps clarify things a bit.
Sorry for the inconvenience.
Rob

5. Re: Unknown Service name
Anton Hughes, Nov 14, 2014 6:02 PM (in response to Rob Cernich)

Hi Rob

Thanks for responding on this. I must say, I am still rather confused. I have read a lot about SwitchYard, and watched all the videos that Keith has put together - and the general idea seems to be that the switchyard.xml is the glue that brings all the elements together, and that it also - with the help of the SY editor - makes rapid application development possible.

You say the xml is needed for deployment and setting the application namespace. If that is the case - why make it a key feature in the UI editor? Why even provide ways to set and change Service Names in the UI if this name is not acknowledged?

Question - is it possible to write a SY application and not need the SY xml, or just have a bare minimum xml file? Now I don't feel I can trust the UI or the xml.

Please have a read of how the homepage for SwitchYard describes itself. As we have seen, services are not maintainable - manual refactoring is needed. The model is not uniform - there is a disconnect between the SY approach and CDI. There is inconsistency between what is named in the SY xml and what SY thinks a service name is. And this results in duplication: there are at least 3 places I have to name a service - in the class, the component, and the service (now I'm not sure what a service is??).

Apologies for having such a rant on a Friday night - but I and my team have invested heavily in this technology. I was under the impression that the SY xml was akin to the blueprint xml or the spring xml. Now I'm not so clear on it.

6. Re: Unknown Service name
Rob Cernich, Nov 14, 2014 6:25 PM (in response to Anton Hughes)

Anton Hughes wrote:

The name is important. In this case the name in the xml did not match the name in the @Service annotation. I think the tooling will highlight this as an error. There is certainly room for improvement, especially where refactoring is concerned.
Please feel free to open a JIRA for this issue.

Anton Hughes wrote:
Question - is it possible to write a SY application, and not need the SY xml, or just have a bare minimum xml file? Now I don't feel I can trust the UI or the xml

You need the switchyard.xml. As I stated earlier, there is only a limited amount of configuration supported through annotations, and it is recommended not to use them in conjunction with the editor (synchronization issues between source and generated models).

The screenshot you provided shows that there were problems with the configuration. If the error messages or problem descriptions did not allow you to easily identify the issues, then that is something that should be addressed in the tooling. Feedback is welcome.

Anton Hughes wrote:
Apologies for having such a rant on a Friday night - but I and my team have invested heavily in this technology.

No worries. Sorry you wasted time on this.

7. Re: Unknown Service name
Keith Babo, Nov 17, 2014 9:01 AM (in response to Rob Cernich)

One thing to recognize here is that SY brings together a lot of independent implementation technologies under a single application model. We try to make it as smooth and uniform as we can, but you are always going to have to deal with the capabilities/restrictions/oddities of a given implementation technology. CDI sets up wiring for injection points based on type. We can't change that in SwitchYard. What we can do is provide a way to affect that wiring by providing additional metadata (in the form of the 'name' annotation element) in the CDI bean which corresponds to configuration in switchyard.xml. Yes, the name has to "sync up" between the two, and it would be nice if this only had to be specified once. Recognizing this could be a problem, the editor provides built-in validation to make sure that potential errors are caught early in the development process.
There's no need to double-check everything when the editor is validating the model and associated implementation artifacts for you. It sounds like the main problem here is that we don't have adequate documentation on how to resolve validation errors that are reported by the tooling. That's definitely something we can improve on.

8. Re: Unknown Service name
Anton Hughes, Nov 17, 2014 10:00 AM (in response to Keith Babo)

Hi Keith

Yes, documentation would help. I was also thinking - and tried to look into this over the weekend - whether the impl of the org.switchyard.component.bean.Service annotation could somehow read the SY xml file.

9. Re: Unknown Service name
Keith Babo, Nov 21, 2014 3:11 PM (in response to Anton Hughes)

Could be possible. I haven't looked at this closely, but I suspect you might be able to wire this in ClientProxyBean from BeanComponentActivator.activateService. Note that the ServiceReference is set here: components/BeanComponentActivator.java at master · jboss-switchyard/components · GitHub
https://developer.jboss.org/thread/250184
pingo 0.1.7

it started (Portuguese skills needed to understand the link). To our English-speaking friends, we like to say that it means "pin, go!" - the main purpose of this package.

Basic usage

Blink.py on a UDOO board:

import pingo
from time import sleep

board = pingo.udoo.Udoo()

led_pin = board.pins[13]
led_pin.set_mode(pingo.OUTPUT)

while True:
    led_pin.high()
    sleep(1)
    led_pin.low()
    sleep(1)

To do the same on an Arduino Yún, just change the line where the board is instantiated, and the pin numbers as needed:

import pingo
from time import sleep

board = pingo.arduino.yun.YunBridge()  # <---

led_pin = board.pins[13]
led_pin.set_mode(pingo.OUTPUT)

while True:
    led_pin.high()
    sleep(1)
    led_pin.low()
    sleep(1)

Drivers

In the examples above, pingo.udoo and pingo.arduino.yun are drivers, and the respective Udoo and YunBridge are classes implementing the pingo.board.Board interface. The following table lists the drivers currently planned or under development.
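To make the Board interface idea concrete without any hardware, here is a minimal fake board in plain Python. The FakeBoard and FakePin classes below are illustrative stand-ins, not part of pingo; the point is that the blink logic only touches the pins mapping, set_mode(), high() and low(), so swapping drivers never changes application code:

```python
OUTPUT, HIGH, LOW = "OUTPUT", "HIGH", "LOW"

class FakePin:
    """Minimal stand-in for a pin: records its mode and state."""
    def __init__(self, number):
        self.number = number
        self.mode = None
        self.state = None

    def set_mode(self, mode):
        self.mode = mode

    def high(self):
        self.state = HIGH

    def low(self):
        self.state = LOW

class FakeBoard:
    """Stand-in for a Board implementation: exposes a pins mapping."""
    def __init__(self, pin_numbers):
        self.pins = {n: FakePin(n) for n in pin_numbers}

# The blink logic from the examples above, minus the loop and sleeps:
board = FakeBoard([13])
led_pin = board.pins[13]
led_pin.set_mode(OUTPUT)
led_pin.high()
assert led_pin.state == HIGH
led_pin.low()
assert led_pin.state == LOW
```

Replacing FakeBoard with a real driver class is the only change needed to run the same logic on actual hardware, which is the portability promise the driver table is about.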
https://pypi.python.org/pypi/pingo/0.1.7
From: joel de guzman (djowel_at_[hidden])
Date: 2001-11-28 19:18:19

----- Original Message -----
From: "joel de guzman"

> > I fear these will engender confusion because they are similar to size_t
> > and ptrdiff_t, types that don't have a generic programming connotation. So
> > someone seeing int_t might think "hmmm, this must be some platform specific
> > integral type..."
>
> I don't know much about loki nor mpl. I do have a metaprogramming library
> that uses functional techniques such as currying and list processing, not
> unlike mpl. I too have an int_t facility. Here's how I bind arguments:
>
> add<int_t<1>, int_t<2> >  --> direct, both args supplied
> add<int_t<2>, _>          --> 2nd arg, curry
> add<_, int_t<2> >         --> 1st arg, curry
> add<_, _>                 --> both args, curry
>
> Now imagine if I use int2type instead.... That's why I tend to
> agree with mpl's choice this time.
>
> --Joel

In fact, in my -yet-another-list-implementation-, I used int_, bool_, char_, opting for the shortest possible name. IMO, in its domain these are the primitive types. I am not at all concerned about confusion since it is in its own namespace, and I assume that clients of the library read the documentation (if any).

--Joel

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/11/20765.php
I:
- Please use the comment functionality on the blog server to suggest anything you think sounds interesting
- Keep in mind my areas of expertise from the tag line of my blog - general setup and deployment issues, WiX, XNA Game Studio, the .NET Framework and Visual Studio. I know about things outside of these areas, but my depth of knowledge may vary widely depending on the topic, and I am not going to pretend to be an expert and post about a topic I am not very familiar with
- I will remove comments as I create posts that address them

I have been keeping a folder on my computer where I drop emails that I send myself with ideas for future…

I expect that this is outside of your area of expertise, but I was inspired by your last blog post, so here goes… Several months ago I was trying to get the Pocket PC Emulator running on my laptop, which has Windows Server 2003 SP1 (which has NoExecute). The VPC Service, which the emulator uses, fails to start, and the driver is flagged by the driver protection service (sorry if I get the name wrong). I found some workarounds that suggested disabling /NoExecute to get it to run. I tried this and it didn't work. Sorry if I am light on details (I gave up trying to get this to work several months ago and haven't tried again since), but the conclusion I came to was that there was something about running an AMD x64 CPU in 32-bit mode that made NoExecute always on. I don't remember the utility I used to verify this, but I found a utility somewhere that told me NoExecute was on even after changing the NoExecute option and rebooting. Am I nuts, or is it possible that NoExecute can't be turned off on 64-bit CPUs running 32-bit OSs?

Is there any advice on why Office Pro 03 would not upgrade, as the Works 8 program is installed on my computer? I have been tossed between MS and Gateway countless times. They are not helping to discover why I can't simply upgrade, instead of buying a full Office version.
Hey Neeley – According to the information I've been able to find so far, Office 2003 Professional Edition should allow you to upgrade from several previous versions of Microsoft Works. Can you let me know exactly what version of Works you have installed on your computer? You can do that by using the steps listed at to gather a list of installed products, and then send it to me at aaronste (at) microsoft (dot) com.

Hey Neeley – can you also search for the file named worksup.dll on your computer, and if you have it installed, can you tell me what version of that file is installed and where it is installed? That is one of the items that Office 2003 setup looks for to determine whether or not Works 8 is installed…

I am a student developer at The Ohio State University, and I am working with a team to attempt to develop a media center application. The only issue is we don't know how to communicate with the outside world (i.e. a database or something of the like). How about a blog tutorial on creating a C# applet or an ActiveX control that could do something along these lines? Thanks!

I am trying to install .NET 2.0 RC1, but I have to uninstall .NET 2.0 beta first. I click on the link and obviously I go into Add/Remove Programs. I go down to .NET 2.0 Beta, but surprise - there is no "remove" link to click on. How the heck am I gonna install RC1 if I can't get Beta removed? Any ideas? Thanks, Gerhard

Hi Gerhard – if you are having trouble removing .NET 2.0 Beta and it is not appearing in Add/Remove Programs, I would suggest that you follow these steps:

1. Download smartmsizap from
2. Run smartmsizap.exe /p {7A1ADD0C-17F3-47B8-B033-A06E189C835D}

This should forcibly remove the .NET Framework 2.0 beta 2 for you. Please contact me at if this doesn't work for you.
Hi, In the .NET Framework 2 documentation, the System.Drawing namespace has the following section/text (titled 'Caution'): 'Classes within the System.Drawing namespace are not supported for use within a Windows or ASP.NET service. Attempting to use these classes from within one of these application types may produce unexpected problems, such as diminished service performance and run-time exceptions.'

Having used this System.Drawing namespace extensively from ASP.NET v1.1 in the past (to do realtime image generation and realtime image conversion for various clients, including browsers/mobile), I am confused as to what Microsoft expects us to use instead of this namespace in ASP.NET 2. While I appreciate that there may be slower performance, the part about 'unexpected problems / runtime exceptions' is a worry. Are you able to elaborate on this matter, and also explain if there are some classes in this System.Drawing namespace that are actually considered safe to use under ASP.NET, or if there are some alternatives? Thanks - and hope you can help out (or even create a blog entry on this topic)!

Hey Aaron, I am having loads of problems with HTTP ports when using the web media player and internet radio. It is very difficult to find out which ports your Windows firewall is blocking and which it is not. As this problem also occurs when using the media player, I think it could be a good topic in the near future. The problem that I have is that sometimes when I turn MCE on from standby, the media player tells me that I am not connected to the internet and it is therefore not possible to play the stream. If I then do a repair on the networking card, it works again???!!!

Hey Aaron. I have a product that is divided into a core and multiple add-ins. The add-ins are released separately from the core. Currently the core and each one of the add-ins have their own installation (MSI) package.
Is there a way to include the add-ins as (yet unimplemented) features of the core, so that installation maintenance would be performed through one MSI package? For example, to enable these features by delivering the add-ins via patches, or to use the "Advertise" MSI mechanism?

Are there any plans to add HD channels to the MsnTV Remote Record feature?

Always a pleasure to read Aaron's blog 🙂 We supply possibly the coolest MCE platforms in the world, and these can truly see the convergence of the 'digital home' on an 'appropriate platform' - the slim, silent and cool-running (<43W typical, <28dBA typical) T2eMP (AVA) PC platform… We are, however, concerned that there is no 'easy / public' window to FAQs re MCE 2005 (i.e. why one of our clients can't get EPG via a shared modem (dial-up)) - of course we all generally assume broadband is everywhere (which really is needed for online services), but sadly BB is not available (nor will be) for some… So how about an easy-to-search, free FAQ web for all MCE users? Possible? For hardware providers like us, and software providers like MS, the public, non-techy, needs real help… How do we do that? Tranquil would back / sponsor and pay for the hosting of such a web, but the content - we only have a small view of what is needed, and the solutions available. … Welcoming any feedback.

Hello: XP Media Edition 2002. I need an immediate and critical response in resolving a FrontPage 2000 Server Extensions / IIS issue. Details: I just purchased a PC that doesn't recognize FrontPage 2000 Server Extensions. IIS v5 can't install the extensions, noting that it is unable to locate Personal Web Server, and that Windows 2000 OS and greater can't install the 2000 extensions. If I attempt to download the 2002 Server Extensions, I receive a message noting that IIS 4+ and greater needs to be installed, and I'm already running IIS v5. I need to install Visual Studio .NET 2003 and it will not install without the extensions. My class ends this week and this is CRITICAL.
2nd inquiry relates to whether XP Media Edition supports dual OSs? Thank you and have a Blessed & Peaceful Day!

I'm at wits' end trying to successfully install (or uninstall!) .NET Framework 2.0. No matter what I do, it always hangs at "Registering System.EnterpriseServices.dll." I've tried everything that looks applicable from this site, including all the steps here: There's no problem with administrative permissions, which looks like the only case resembling what I'm getting. Is there any hope?

I'm going nuts trying to figure out why my MCE 2005 Rollup 2 won't go back to sleep after recording. It will if I reboot it, but if I watch Live TV and use the remote to put it in standby, it will wake up to update the EPG and won't go back to sleep on its own. I can force it using the power button on the remote; it just won't do it on its own. If I close the MCE shell, it will go to sleep on its own. I see, with the shell running, that performance monitor shows a CPU spike of 10% every 8 seconds; if I close the shell, this goes away. Related? I see others have this problem, but have never seen a solution, other than some custom screen saver (which didn't work for me).
For info I have read pretty much everything I can find on the web about this isue but still can’t get it to work on my machine. Every forum has threads running with nearly the same stuff being reinvented but clearly loads of disgruntled users out there. Just as example – I am in uk region and record pal broadcasts – I find 894553 describes one of the problems I get, but I’ve no idea if this is rolled up into RU2 or the Jan Rollup so don’t know if it applies. Some people say there is a sequence that must be followed for sonicencoders, sonic windows components, r/up2, powerdvd etc as they are all dependent. Others say don’t bother and install a third party product. For me the issue is that all I want is a basic function in mce that works…and its supposed to be there anyway. I’m convinced you would help a lot of people…. I have had an issue for some time with Media Center 2005 UR2 and an AverMedia A180 tuner not working with the onboard SATA controller of my machine. It is some kind of hardware/driver issue because my ATI HDTV Wonder works fine. AverMedia has not gotten me any solutions, and newer drivers don’t seem to fix anything. There is a post about it here If there is anything someone smarter than I could do as far as at least looking at or reproducing this problem it would be great. An explanation would at least be nice if there is no fix. Perhaps talk about whether or not there will be "Softsled" in Vista. Information is hard to come by. Recently had to buy a new box (HP Pavilion a1410n) running Media Center. Installed my old HD to keep all my previous files. Old HD (d:/) was running xp Pro, which I still need because it contains IIS web server. Can you suggest a good, reliable tutorial on setting up dual boot between Media Center and xp Pro (running on different drives), or is there an easier way to get IIS functionality into Media Center? Thank you. 
Hi Walter – I haven’t tried this myself, but because Media Center does not allow domain joining, I don’t think you will be able to get IIS configured and running, so dual boot is probably your best bet here. Setting up a dual-boot system with Media Center and some other version of Windows is not any different than setting up dual-boot for any other 2 OS types. There are some really good instructions for configuring the Media Center OS at that I think will be useful to you here. Hi Aaron, thanks for great blog. Question – is it possible to make Media Center (2005) a little more patient with the TV tuner card when resuming from standby to execute a scheduled recording. I’m having a problem with a DVB tuner card which will not work properly after resuming, likely a driver bug. All MCE or indeed any other app shows is "no signal found" I’ve been able to workaround this for interactive using a WMI in a script which waits for a Power Management event, and if it’s of type 7 (resume) I stop ehrecvr, recycle the bad device with devcon & restart ehrecvr. Net result – TV works a few seconds after resuming. But meantime MCE has got in, attempted to start recording, and failed. Grrrr. I’m experimenting with suspending the ehsched process during suspend, resuming it afterwards, but maybe there’s a better way? (other than hassling my vendor to fix their drivers!!!) hello, i am getting error when i run vs2005 beta 180 days…the error is: the application data forlder for visual studio could not be created.. plz help me Hi Shahid – This error means that the folder %APPDATA%MicrosoftVisualStudio8.0 could not be created. There is probably some kind of permission problem for that folder. Can you please try to create this folder manually using Windows Explorer and see if it works for you? Hello Aaron, Let me thank you for writing such an insightful blog. 
Since you cover my two main interests (Visual Studio and Media Center) I hope I can coax you into answering me the following: I’m desperately trying to get Media Center TV to get a signal from a video source (eg. VCR or gaming console) to my TV card from the (S-)video-in connctor. Not the tuner. The MCE Newsgroup tell me, it can’t be done with the current MCE features, as MCE only controls the tuner. Can an add-in provide MCE the ability to get a video from the video-in of one of my cards? My nVidia 6600 GT features a video-in as well as my Hauppauge WinTV Nova S+. Is the Media Center SDK a good tool for this problem? Can it be used with the Express versions of Visual Studio? Hi Arminator – From what I know, getting video signals from non-TV sources is not supported in Media Center. If you know enough about how to write code that could receive input from an alternative video source, you should be able to wrap that as a Media Center add-in. The Media Center SDK can help you figure out how to integrate your application into Media Center. The current version of Media Center (Update Rollup 2 for Media Center 2005) runs on the .NET Framework 1.1, so you will need to use Visual Studio .NET 2003 to create an add-in that will work correctly with Media Center. The Express Editions can be used to create Media Center add-ins for Windows Vista. Aaron, Hi I ran into an issue with winfx beta 2 and the add-ins for vs 2005. For some reason the add-ins try to convert my older sln files to 9.0 files. You cannot cancel the operation and still work with the old project. Now I am forced to remove vs 2005 and winfx beta 2. The uninstall for vs 2005 breaks because it cannot determine the unistall order. Can write about the manual uninstall. Also I have looked on the forums and cannot find a good place to post the issue. I left one message in the "Cider" forum to no avail. Is there a way to get the converter to stop converting projects. 
Otherwise I have to wait until november or january to work with winfx. thanks Aaron, Great blog. Lots of useful and insightful stuff. I’ve already benefitted from your tips on using msiinv and the Windows Installer Cleanup tool, neither of which I was aware. However, the installer problem I just can’t figure out is that MSI installs built using a setup-deployment project in Vis Studio 2005 cause corruption in Add/Remove programs. After running the install, either by right-clicking and choosing install from the IDE, or manually running the MSI (on either the development machine or another pc; the result is the same), Add/Remove Programs displays hundreds of lines of empty space and unusual video artifacts, and the program I just installed does not appear in the list. I can then use msiexec /x {product code} to uninstall the app and Add/Remove Programs returns to normal. I tried creating a new test solution with a similar structure to my app (i.e., a VB Windows app with C# and VB libraries), but I couldn’t duplicate the error. I deleted/recreated the Setup project several times with no effect. Oddly enough, I found that by changing the Product Name in the project’s properties, however, it would sometimes install correctly, but so far it’s only happened when the Product Name is short (<8 chars). At other times, switching to a shorter product name didn’t fix it, but then changing the version & product code got it working!?! Those observations may just be coincidences, but I would really like to to have the full app name display in Add/Remove Programs, and, more importantly, I would like to have some clue as to what’s causing this really strange behavior. Any ideas or helpful links? I’ve googled high and low to no avail. Thanks in advance. – Dan When trying to open media center it immediately gives ehshell.exe application error 0XC0000005 message and closes the media center windows. Tried reconfguring media center but no go. 
Request you to provide your valuable suggestions in this regard. Thanks Rayaprolu Hi Rayaprolu – The log file %windir%\ehome\ehshell.crash should provide more details about why this crash is happening. Depending on what that says, there are different options for how to try to resolve this issue. What kind of reconfiguration have you tried already? Also, what version of Media Center are you running, and what is the latest Media Center hotfix that you installed? Hi Astebner The version of Media Center is Media Center 2005. I am not sure about the latest hotfix downloaded on the system. I performed the following steps to resolve the issue but did not succeed. REGEDIT > hkey_current_user > software > microsoft > windows > currentversion > media center > settings: delete settings folder. hkey_local_machine > software > microsoft > windows > currentversion > media center > settings: delete settings folder. Deleted the Ehshell.config file from the Ehome folder. Hi Rayaprolu – Deleting those settings registry hives/folders generally does not help resolve Media Center crashes (at least in my previous experience). The log files %windir%\ehome\ehshell.crash and %windir%\medctroc.log can be helpful to try to narrow this type of issue down. You can send them to me at Aaron.Stebner (at) microsoft (dot) com and I will try to take a look and see if I can figure anything out. Hi Dhubbard – You can use the steps at to perform a manual uninstall of the various pieces of Visual Studio 2005. Hopefully this helps solve the "unable to determine valid ordering" issue you have been experiencing. My Media Center keeps crashing when I try to open albums and album cover JPGs. The error says I don't have ntdll.dll but I ran a check on my system and I do….I really do. I have tried to point Media Center to the other ntdll's but it is stubborn and will not give up that easily. My question is why can't a simple person, or simpleton like myself, just go and get some DLLs and put them where they have gone MIA.
The error messages tell me where the MIA file is missing from, and it seems only logical that you could just replace that DLL. Anyway, thanx for listening and maybe your readers have a solution. M Hi Mstuffed – I'm not sure how to explain why this is happening on your system. Are you able to view your albums and album art in Windows Media Player? Or does this error only happen in Windows Media Center? You may want to try to repair Windows XP SP2 on your system and see if that helps here. I'm sorry I'm not sure what else to suggest. I am trying to install .NET 2.0 and I receive the following error: unexpected error 2352 when installing .NET 2.0. I have run your cleanup utility to remove all remaining items from .NET 1.0 and 1.1 and I still receive this error. I have googled it and have found nothing, but I have been tearing through your site and the .NET cleanup utility is fantastic; it has worked with other issues I have run into. If there is more information you would like me to add please let me know and I can answer any of the questions. Hi KJRichmond – This error message sometimes can be caused by the file %windir%\system32\cabinet.dll being missing or corrupted. You might want to try to repair the latest OS service pack on your system or copy it from %windir%\system32\dllcache to %windir%\system32 and then try to install the .NET Framework 2.0 again and see if that helps here. I'd like to see the .NET 2.0 Deployment Guide updated with ASP.NET information such as selecting specific IIS web sites during installation; configuring COM+ app pools; configuring a virtual directory for .NET 2.0; and SQL Server 2005 database deployment. Hey can you help me in solving this case? I am having large .vhd files (bigger than 3 GB, up to 9 GB) and I have to create an installer using DVDs. How do I create a multi CD/DVD installer using WiX when individual files are greater than 8 GB? How do I use the media tag? If any example is posted, it would be helpful.
Thx I have just decided to take the plunge and start using Vista Media Center. I have a Nova-T-500 Digital Dual TV tuner card that I have tested before under XP SP2 using Hauppauge WinTV2000 without any issue whatsoever. My experience with Vista is somewhat disappointing so far. I am completely unable to get any TV signal and the EPG does not work. I have to say that I live in Espoo (Finland). Could you please help? Hi Kananga Boy – You may want to try to update your TV tuner drivers and see if that helps resolve this issue. I would suggest checking first on Windows Update, and if that doesn't help, check on the website for your tuner card manufacturer. If these do not help, I suggest posting a question on the Windows Media Center for Windows Vista beta newsgroups and look for further advice there. Aaron, while trying to upgrade a SharePoint 2007 development VM I'm stuck with the problem that the VS extensions for WFX and for WWF need build 4203.2, aka .NET 3.0 RC, and don't accept the published build 4307, also called .NET 3.0 RC. Can you try to clarify and perhaps propose a workaround please? Thanks, Thomas I have seen the VS 2005 setup, and it looks quite nice. I wanted to know what authoring tool was used to develop this setup. I think it's Wise. Can WiX create the same kind of installer? Thanks! In Process Explorer we see msiexec.exe -Embedding <GUID> – this is the custom action server (indicated by the -Embedding switch). What does the GUID here stand for: a component, a product, or something else? Hi Sam.Desilva15 – This GUID is used internally by Windows Installer and it does not represent the component GUID or product code. The value of this GUID is not officially documented and should not be relied on in any way by any process other than Windows Installer itself. How about an iPlayer for Xbox or other catch-up TV services. Hi Aaron, I'm stuck.
My main server in the office is refusing to install any .NET updates/fixes/patches. I have run your netfx_setupverifier and it successfully checks whatever is installed on the system: .NET 1.1 SP1, .NET 2 SP2, .NET 3 SP1, and .NET 3.5. Any attempts to update them fail, whether using Microsoft Update or direct installation. I did some extensive reading on the subject but so far don't know which way to go. Would you be able to help me? Peter Hi PMZ01 – What I usually suggest in this type of scenario is to uninstall the versions of the .NET Framework that are on your computer, re-install them, and try again to apply the .NET Framework updates that are available on Windows Update. You can find a set of steps and a tool to help you do this at blogs.msdn.com/…/8108332.aspx. Hi Aaron, Thank you for your help. I will look at that article. But before I start repairing/removing any .NET components I would like to know if this will affect the functionality of my server. This server is my production Exchange 2003 server and I would not like to find out that Exch2003 requires .NET 2.0 to run. Hi PMZ01 – Uninstalling the .NET Framework could affect the functionality of your server, but that is why I suggested re-installing it afterwards. If this is a production server, you'll need to be careful and schedule some downtime while you do this because there are likely to be things that do not work correctly in between the time that you uninstall and the time that you re-install. Hi Aaron, I'd love to see a topic on how to create unattended network installations for Visual Studio 2008 and 2010 Express which don't require a network file share, but instead work against a local (internal) HTTP server. Hi Chad – I don't believe that Visual Studio setup supports that type of scenario. You might want to ask on the Visual Studio setup forums just in case though – social.msdn.microsoft.com/…/threads.
Hello Aaron, There is very little information about any custom action testing framework we could use in MSIs. Can you talk about it in one of your blog posts sometime? I would be interested in Lux. Regards, Kiran Hegde Hi Kiran Hegde – I don't have any personal experience using Lux or with unit testing custom actions. There is some information about how to use Lux in the WiX documentation at wix.sourceforge.net/…/lux.htm and wix.sourceforge.net/…/lux_xsd_index.htm, so hopefully that'll help you get started. Hi from the UK. Now I don't want to beg, but please, please, please can you help with error code 1935 when trying to install MS Visual Studio C++ versions. My issue started when trying to install a video editing suite from Corel. It tries to install or configure MS Visual 2005-2008, but it fails due to 1935. After weeks of searching logs and asking on forums it seems this issue has been going on for years and is very common, yet no fix other than reinstalling Windows is offered. If you check out this forum post (mine) it will save me adding loads of log details here. social.msdn.microsoft.com/…/unable-to-install-64bit-ms-visual-2008-32bit-installed-ok But this issue so needs your help; you would make a lot of users very happy and have our undying gratitude 🙂 Thanks. Mike Hi Mike Trodd – I've posted a reply to your forum post, and I also replied to the blog comment about this issue that you posted at blogs.msdn.com/…/491653.aspx. Hopefully this helps.
Patent application title: VIEW MATCHING OF MATERIALIZED XML VIEWS Inventors: Per-Ake Larson (Redmond, WA, US) Guido Moerkotte (Schriesheim, DE) Frank W. Tompa (Waterloo, CA) Jingren Zhou (Bellevue, WA, US) Assignees: Microsoft Corporation IPC8 Class: AG06F706FI USPC Class: 707 4 Class name: Database or file accessing query processing (i.e., searching) query formulation, input preparation, or translation Publication date: 2009-12-31 Patent application number: 20090327255 Abstract: A materialized XML view matching system and method for processing of SQLXML queries using view matching of materialized XML views. The view matching process of the embodiments of the system and method uses a multi-path tree (MPT) data structure. Embodiments of the materialized XML view matching system and method construct an MPT data structure for each input query and view expression. View matching is performed on the MPT data structures to generate a set of partial matches, which then are cleaned to generate a set of candidate matches. A valid match definition is generated by testing each candidate match for different forms of compliance. Using the valid match definition, a set of valid matches is identified and extracted. For each valid match, a substitute query expression is constructed that can serve as a replacement for the original query. These substitute queries can be used to evaluate the original query. Claims: 1. A computer-implemented method for matching materialized XML views, comprising: inputting a query expression and a view expression; constructing a multi-path tree (MPT) data structure for the query expression and the view expression; and performing view matching of the MPT data structures to generate a set of partial matches that represent possible matches for the materialized XML view. 2.
The computer-implemented method of claim 1, further comprising cleaning up the set of partial matches by identifying and removing those partial matches that cannot be part of a complete match to generate a set of candidate matches. 3. The computer-implemented method of claim 2, further comprising removing any match in the set of candidate matches that is not parent compliant. 4. The computer-implemented method of claim 2, further comprising defining what constitutes a valid match. 5. The computer-implemented method of claim 4, further comprising defining root compliance, node-test compliance, restriction compliance, logic compliance, and parent compliance as conditions for the valid match. 6. The computer-implemented method of claim 5, further comprising defining a match as valid if the match is root compliant, node-test compliant, restriction compliant, logic compliant, and parent compliant. 7. The computer-implemented method of claim 4, further comprising performing match extraction on the set of candidate matches using the valid match definition to generate a set of valid matches. 8. The computer-implemented method of claim 7, further comprising constructing a substitute query expression for each valid match in the set of valid matches to generate a set of valid substitute query expressions for the query expression for processing by a database management system (DBMS) engine. 9. The computer-implemented method of claim 8, wherein constructing a substitute query expression further comprises constructing residual predicate filters for the MPT data structures. 10. The computer-implemented method of claim 9, wherein constructing a substitute query expression further comprises constructing continuations for the MPT data structures. 11. The computer-implemented method of claim 10, wherein constructing a substitute query expression further comprises constructing correctives for the MPT data structures. 12. 
A computer-implemented method for generating a multi-path tree (MPT) data structure that is readable by a computing device, comprising: inputting a query expression and a view expression; defining a distinguished node, an intermediate node, and a functional node; introducing a distinguished node from the query expression or the view expression as a root node of the MPT data structure; introducing a distinguished node for each structured query language (SQL) column and for each FLWOR variable in the query expression or the view expression; labeling edges with a path expression from a parent node to a child node in the MPT data structure; and generating the MPT data structure by using the labeled edges to connect each of the nodes. 13. The computer-implemented method of claim 12, further comprising labeling each distinguished node with a corresponding column or variable name. 14. The computer-implemented method of claim 12, further comprising encoding optionality into the MPT data structure. 15. The computer-implemented method of claim 12, further comprising expanding the labeled edges in the MPT data structure by applying recursively rules for MPT expansion. 16. The computer-implemented method of claim 12, further comprising normalizing the MPT data structure to generate a normalized MPT data structure in a tree disjunctive normal form (TDNF). 17.
A method for evaluating a structured query language (SQL) query, comprising: inputting the SQL query and a view expression; constructing a multi-path tree (MPT) data structure for both the SQL query and the view expression; view matching the MPT data structures to generate a set of partial matches; cleaning the set of partial matches by removing partial matches that cannot be part of a complete match to generate a set of candidate matches; defining a valid match to generate a valid match definition; performing match extraction on the set of candidate matches using the valid match definition to obtain a set of valid matches; for each valid match in the set of valid matches, constructing a substitute SQL query for use in place of the original SQL query; and processing the substitute query using a SQL engine to obtain results for the original SQL query. 18. The method of claim 17, wherein constructing a multi-path tree (MPT) data structure further comprises: defining three types of nodes: (1) a distinguished node; (2) an intermediate node; and (3) a functional node; introducing as a root node of the MPT data structure a distinguished node obtained from the SQL query; introducing a distinguished node for each SQL column and for each FLWOR variable in the SQL query; labeling each distinguished node with a corresponding column or variable name; labeling edges with a path expression from a parent node to a child node; and connecting nodes into the MPT structure using the labeled edges. 19. The method of claim 18, wherein defining the valid match further comprises: defining root compliance, node-test compliance, restriction compliance, logic compliance, and parent compliance; and defining a match as valid if the match is root compliant, node-test compliant, restriction compliant, logic compliant, and parent compliant. 20.
The method of claim 19, wherein performing match extraction further comprises: identifying each valid match; and extracting each implied mapping from each distinguished query node for each valid match. Description: BACKGROUND [0001]SQL/XML has become the standard means to extend SQL to include XML data types and to query XML columns by XQuery. It is supported by most commercial relational database systems, while others, such as Microsoft® SQL Server, support a dialect called SQLXML. [0002]A table in SQL Server may contain columns of type XML. A column of type XML may store XML documents, which can then be queried using XQuery. To speed up the processing of queries against an XML column, a special type of index called an XML index can be created on the column. To create the index, the documents are completely shredded, pulling out every node in the documents and including them in the index. This makes the index very large compared with the original size of the documents (on the order of 3-5 times larger). [0003]Users often were dissatisfied with the size of an XML index because even when users are only interested in querying just parts of a document, the index includes everything in the document. It is well-known that in relational database systems judicious use of materialized views can speed up query processing by several orders of magnitude. In a relational database system that supports XML data, it is therefore important to extend the materialized view mechanisms to queries and views that also involve XML columns. To exploit materialized views, three problems need to be overcome. First, which views to materialize must be determined. Second, it has to be decided which views, if any, can be used to answer a query and how to rewrite the query to make use of the views. Finally, materialized views should be kept current in the presence of updates.
In order to be able to query just a portion of the XML document and keep the index size manageable, the second problem (called the "view matching problem") should be addressed. [0004]Previous work on the XML view matching problem has been very restrictive in the kinds of views that can be created or what can be included in a view. The reason is that there is no point in allowing views that are more complex than the given matching technique can handle. [0005]Some existing techniques are dependent on knowledge of the actual data that is currently available. This is problematic, however, in a relational database system if the data is changing over time. A match that was valid at one point in time may no longer be valid at a later time. This prevents the reuse of query plans, something which is standard practice in relational database systems. SUMMARY [0007]Embodiments of the materialized XML view matching system and method use view matching of materialized XML views to expedite processing of SQLXML queries. The view matching process of the embodiments of the materialized XML view matching system and method uses a multi-path tree (MPT) data structure. The MPT data structure is capable of capturing the semantics of multiple XPath expressions occurring in an SQLXML expression. Moreover, embodiments of the materialized XML view matching system and method use a view matching technique that correctly handles expressions combining conjunctions, optional components, and arbitrary disjunctions. When a view is matchable, embodiments of the materialized XML view matching system and method also calculate residual predicates, continuations, and correctives. Together, these techniques form an entire query rewrite process. [0008]Embodiments of the materialized XML view matching system and method are an improvement over existing techniques. Matching decisions do not depend on the actual data content at the time of matching.
As long as the data satisfies schema constraints, then found matches and rewrites remain valid. Moreover, view matching is performed on both relational and XML expressions, such that XPath expressions are correctly matched with combinations of conjunctions and disjunctions as well as optional substructures. In addition, embodiments of the system and method define a more generic class of views having additional information, which is less restrictive than current techniques. [0009]Embodiments of the materialized XML view matching system and method take as input a query expression (such as a SQL query) and a view expression. A MPT data structure then is constructed for each of these expressions. View matching then is performed on the MPT data structures and a set of partial matches is generated. [0010]The set of partial matches is cleaned by identifying those that cannot be part of a valid match. These false partial matches then are removed, and a set of candidate matches is obtained. A candidate match is a valid match only if it satisfies a specific set of compliance conditions. Match extraction is performed on the set of candidate matches, in the process testing each candidate against the compliance conditions, and a set of valid matches is obtained. For each valid match, a substitute query expression is constructed that can serve as a replacement for the original query. These substitute query expressions can be used to evaluate the original query. BRIEF DESCRIPTION OF THE DRAWINGS [0013]FIG. 1 is a block diagram illustrating a general overview of embodiments of the materialized XML view matching system and method disclosed herein. [0014]FIG. 2 is a block diagram illustrating details of embodiments of the materialized XML view matching system and method shown in FIG. 1. [0015]FIG. 3 is a flow diagram illustrating the operation of embodiments of the materialized XML view matching system and method shown in FIGS. 1 and 2. [0016]FIG. 4 is a flow diagram illustrating the operation of embodiments of the MPT module shown in FIG. 2. [0017]FIG.
5 is a first example that illustrates a first MPT fragment, T1, which is comparable to T2 in FIG. 6. [0018]FIG. 6 is a second example that illustrates a second MPT fragment, T2, which is comparable to T1 in FIG. 5. [0019]FIG. 7 illustrates an example of normalized MPTs for a query expression and a view expression. [0020]FIG. 8 is a flow diagram illustrating the operation of embodiments of the valid match definition module shown in FIG. 2. [0021]FIG. 9 illustrates an exemplary example of MPT fragments that represent a view and a query. [0022]FIG. 10 is an example of mapping the view of FIG. 9 into the virtual and node implicitly associated with the query node of FIG. 9. [0023]FIG. 11 illustrates an exemplary implementation of the MatchViewQuery function. [0024]FIG. 12 illustrates an exemplary implementation of the Match function. [0025]FIG. 13 illustrates an exemplary implementation of the routine IsLink( ). [0026]FIG. 14 is an exemplary example of sample MPT's of a view expression and a query expression. [0027]FIG. 15 illustrates an exemplary implementation of the CleanUp function. [0028]FIG. 16 illustrates an exemplary example of a sample view and query. [0029]FIG. 17 illustrates an example of a suitable computing system environment in which embodiments of the materialized XML view matching system and method shown in FIGS. 1-16 may be implemented. DETAILED DESCRIPTION [0030]In the following description of embodiments of the materialized XML view matching system and method reference is made to the accompanying drawings, which form a part thereof, and in which is shown by way of illustration a specific example whereby embodiments of the materialized XML view matching system and method may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the claimed subject matter. I. System Overview [0031]FIG. 
1 is a block diagram illustrating a general overview of embodiments of the materialized XML view matching system and method disclosed herein. It should be noted that the implementation shown in FIG. 1 is only one of many implementations that are possible. Referring to FIG. 1, a materialized XML view matching system 100 is shown implemented on a computing device 110. It should be noted that the computing device 110 may include a single processor (such as a desktop or laptop computer) or several processors and computers connected to each other. [0032]In general, embodiments of the materialized XML view matching system 100 input a query expression 120 and a view expression 130. Although only one of each is shown in FIG. 1, there may be more than one query expression 120, more than one view expression 130, or both. The query expression 120 and the view expression 130 are processed by embodiments of the materialized XML view matching system 100. The details of the materialized XML view matching system 100 are set forth below. Embodiments of the materialized XML view matching system 100 output a set of valid substitute query expressions 140. These substitute query expressions can be used in place of the original query expression 120. [0033]Embodiments of the materialized XML view matching system 100 include a number of program modules. FIG. 2 is a block diagram illustrating details of embodiments of the materialized XML view matching system 100 and method shown in FIG. 1. In general, FIG. 2 illustrates the program modules utilized by embodiments of the materialized XML view matching system 100 to process the query expression 120 and the view expression 130 to generate the set of valid substitute query expressions. [0034]As noted above, at least one query expression 120 and at least one view expression 130 are input to the materialized XML view matching system 100.
Embodiments of the system 100 include a multi-path tree (MPT) module 200 that generates MPT data structures. The output of the MPT module 200 is MPT data structures 210 for each expression, including the query expression 120 and the view expression 130. These MPT data structures 210 are input to a view matching module 220. The view matching module 220 processes the MPT data structures 210 and outputs a set of partial view matches 230. [0035]Embodiments of the materialized XML view matching system 100 also include a cleaning module 240. The set of partial matches 230 is processed by the cleaning module 240 to clean up any unnecessary or unneeded partial matches. The output of the cleaning module 240 is a set of candidate matches 250. [0036]Embodiments of the materialized XML view matching system 100 also include a valid match definition module 260 that sets forth a definition of a valid match. A match extraction module 270 uses information from the valid match definition module 260 to process the set of candidate matches 250. The match extraction module 270 outputs a set of valid matches 280. A substitute query construction module 290 processes the set of valid matches 280 and outputs the set of valid substitute query expressions 140. II. Operational Overview [0037]FIG. 3 is a flow diagram illustrating the operation of embodiments of the materialized XML view matching system 100 and method shown in FIGS. 1 and 2. The method begins by inputting a query expression and a view expression (box 300). Next, a multi-path tree (MPT) data structure is constructed for both the query expression and the view expression (box 310). The details of the MPT data structure are described in further detail below.
This generates a set of candidate matches, which are cleaned partial matches. The method then defines what constitutes a valid match (box 340). [0039]Match extraction then is performed on the set of candidate matches using the definition of a valid match to generate a set of valid matches (box 350). For each of the valid matches in the set of valid matches, a substitute query expression is constructed (box 360). This generates a set of valid substitute query expressions that can be used in place of the original query expression. The output is the set of valid substitute query expressions for processing by a database management system (DBMS) engine, such as a SQL engine (box 370). III. Operational Details [0040]The operational details of embodiments of the materialized XML view matching system 100 and method now will be discussed. These embodiments include embodiments of the program modules shown in FIG. 2. The operational details of each of these program modules now will be explained in detail. III.A. SQLXML Views [0041]The discussion of the operational details begins with a discussion of how to define SQLXML views. The SQLXML views can be exploited to answer SQLXML queries. A SQLXML materialized view can be defined using selection, projection, join, and group-by operators in the same way as any relational view, but with the additional feature that sub-components of any XML column can also be defined in the FROM clause and projected using the SELECT clause. Property functions take as a parameter an XQuery expression that is evaluated to extract XML element ids (ordpath), parts of XML documents (nodes), or values from within those parts (value). The value function takes an SQL type as its second parameter and casts the value into that type. Functions ordpath and value return single values, but nodes is a table-valued function that generates a single-column row for each node that the XQuery expression returns.
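To illustrate the two property functions that appear most often below, here is a rough Python sketch (not the patent's or SQL Server's implementation; the document shape, function names, and use of ElementTree are assumptions made for illustration) of how nodes acts as a table-valued function while value extracts and casts a scalar:

```python
# Illustrative sketch only: approximating the nodes() and value()
# property functions over an XML column using the standard library.
import xml.etree.ElementTree as ET

def nodes(doc, path):
    # Table-valued: one single-column "row" per node the path returns.
    return [(n,) for n in doc.findall(path)]

def value(node, sql_type):
    # Scalar: extract the node's text and cast it to the given SQL type
    # (a small illustrative mapping, not SQL Server's full type system).
    casts = {"int": int, "float": float, "varchar": str}
    return casts[sql_type](node.text)

doc = ET.fromstring(
    "<courses>"
    "<course cnum='241'><hours>3</hours></course>"
    "<course cnum='348'><hours>2</hours></course>"
    "</courses>")

rows = nodes(doc, "course")                            # one row per <course>
hours = [value(c.find("hours"), "int") for (c,) in rows]
```

The point is the shape of the result: nodes expands one document into many rows, which is exactly what the apply operator described next consumes.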
[0042]To incorporate the results of a table-valued function, embodiments of the system 100 and method make use of the apply operator available in SQL Server by Microsoft®. The operator takes two operands, both of which are table-valued expressions or functions. Apply is similar to a nested loop join or repeated function call: for each row in the left input, the right operand is evaluated once, producing one or more rows. The right operand can reference columns from the left operand. [0043]The apply operator takes two forms: cross apply and outer apply. If the function used as right operand returns at least one row, both forms output all returned rows extended with the values from the row of its left operand. If the result from the right operand is empty, cross apply outputs nothing whereas outer apply outputs one row consisting of the row value of the left operand padded with NULL values in all columns of the right operand. Thus cross apply and outer apply are similar to inner join and left outer join, respectively, providing the ability to manage optional substructures. [0044]When defining SQLXML materialized views, embodiments of the system 100 and method restrict references to subcomponents of an XML data field to the FROM and SELECT clauses.
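The cross apply / outer apply semantics just described can be mimicked in a few lines of Python (a sketch under assumed dict-shaped rows, not SQL Server's implementation):

```python
# Sketch of the apply operator's two forms. Rows are dicts; the right
# operand is a function of the current left row returning zero or more rows.
def cross_apply(left_rows, fn):
    # Like an inner join / nested loop: left rows whose right result is
    # empty are dropped entirely.
    return [{**l, **r} for l in left_rows for r in fn(l)]

def outer_apply(left_rows, fn, right_cols):
    # Like a left outer join: an empty right result still yields one row,
    # padded with NULL (None) in every column of the right operand.
    out = []
    for l in left_rows:
        rs = fn(l)
        if rs:
            out.extend({**l, **r} for r in rs)
        else:
            out.append({**l, **{c: None for c in right_cols}})
    return out

left = [{"key": 2006, "tags": ["a", "b"]}, {"key": 2007, "tags": []}]
expand = lambda row: [{"tag": t} for t in row["tags"]]

ca = cross_apply(left, expand)             # 2 rows; the 2007 row is dropped
oa = outer_apply(left, expand, ["tag"])    # 3 rows; the 2007 row is padded
```

This mirrors why outer apply is the tool for optional substructures: a document with no matching subcomponent still contributes a (NULL-padded) row to the view.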
The grammar for the SQLXML materialized views is as follows:

TABLE-US-00001
View := Create Select From Where Groupby
Create := create materialized view VName as
Select := select Selector (, Selector)*
Selector := Column | Node | Value
Column := SQL-expr
Node := CName.ordpath() as CName
Value := CName.value('.', 'conversion') as CName | CName.query('.') as CName | CName.exist('.') as CName
From := from SQL-expr (CApply | OApply)*
CApply := cross apply Extract
OApply := outer apply Extract
Extract := CName.nodes('XPath') as TName(CName)
Where := where SQL-expr
Groupby := groupby SQL-column-list

[0045]References to subcomponents of an XML data field are restricted to the FROM and SELECT clauses so that the embodiments of the system 100 and method are able to capture the complete semantics of the view for matching, as described in detail below. In addition, embodiments of the system 100 and method also restrict XPath expressions to those involving predicates and the following axes: self, child, descendant-or-self, and attribute. Embodiments of the system 100 and method also restrict comparisons in XPath expressions to compare node values to constants. Queries are not similarly constrained, but instead it is attempted to extract an appropriate "information need" from the XQuery expression. In this manner, embodiments of the system 100 and method might not recognize a valid match, but will never generate an invalid match. [0046]Consider now the following simple example query:

TABLE-US-00002
select T.key as year, course.value('@cnum', 'int') as cid
from T cross apply doc.nodes('
    for $i in cal/courses/course
    where $i/subject = "CS"
       or ($i/*/meetings/hours < 3 and $i/*/meetings/lab = "req")
    return $i
  ') as x1(course)
where key > 2005

[0047]For each qualified doc, the query result includes a row reporting each qualifying cnum attribute associated with the primary key of the table reflecting the calendar's year.
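To make the example concrete, the XQuery predicate in this query can be approximated in Python over a toy document (the calendar data below is invented purely for illustration; only the element paths come from the query, and the SQL-level filter on key is omitted):

```python
# Illustrative sketch: which <course> nodes satisfy the example query's
# disjunctive predicate? Toy document, assumed structure.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<cal><courses>
  <course cnum="135"><subject>CS</subject></course>
  <course cnum="240"><subject>MATH</subject>
    <sec><meetings><hours>2</hours><lab>req</lab></meetings></sec>
  </course>
  <course cnum="251"><subject>MATH</subject>
    <sec><meetings><hours>3</hours><lab>req</lab></meetings></sec>
  </course>
</courses></cal>""")

def qualifies(course):
    # subject = "CS" ...
    if any(s.text == "CS" for s in course.findall("subject")):
        return True
    # ... or (*/meetings/hours < 3 and */meetings/lab = "req")
    return any(int(m.findtext("hours", "99")) < 3
               and m.findtext("lab") == "req"
               for m in course.findall("*/meetings"))

cids = [int(c.get("cnum"))
        for c in doc.findall("courses/course") if qualifies(c)]
```

Course 135 qualifies through the first disjunct, course 240 through the second, and course 251 fails both (its meetings run 3 hours, not fewer).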
Embodiments of the system 100 and method determine whether the data captured by the materialized view can be used to answer such a query. III.B. Multi-Path Tree (MPT) Module [0048]The multi-path tree (MPT) module 200 inputs query expressions and view expressions and generates MPT data structures for each of these expressions. Embodiments of the module 200 capture the semantics of multiple XPath expressions in an SQLXML expression using the Multi-Path Trees. These MPTs play the same role as tree patterns and XPS trees currently in use, but also include additional information to increase their expressivity. In particular, an MPT can model disjunction, conjunction, optionality, and simple restrictions on arbitrarily many XPath expressions derived from an XML column. It should be noted that throughout this document V is used for a view and Q is used for a query. III.B.1. MPT Core [0049]FIG. 4 is a flow diagram illustrating the operation of embodiments of the MPT module 200 shown in FIG. 2. A Multi-Path Tree (MPT) is an unordered, directed, singly-rooted tree. As shown in FIG. 4, the first step is to define three kinds of nodes for the MPT structure: distinguished nodes, intermediate nodes, and functional nodes (box 400). A distinguished node identifies a named component (which may be filtered by various expressions to define the query and the view) or a component that is projected by the view. Thus a distinguished node corresponds to a column in the FROM clause of the query or view definition or to a variable name in a query's FLWOR expression, or it is a projected node in a view definition. Intermediate nodes are introduced after the initial construction of an MPT, and they represent unnamed components. Functional nodes are similarly introduced and represent Boolean connectors or nil. Embodiments of the module 200 use isDist to test whether a node is distinguished, and such a node is labeled by its corresponding column or variable name. 
Moreover, isFunc is used to test whether a node is a functional node. [0050]An edge is labeled with the path expression indicating the path from the parent node to the child node. During construction of an MPT, an edge label can be an arbitrary path expression, which is possibly preceded by a question mark to indicate optionality. Upon completion, however, only a simple step (axis plus node test) can appear as an edge label. Edges terminating at a functional node must be unlabelled. It should be noted that PE(a, b) is used to refer to the label on edge (a, b). [0051]Mathematically, let x0, . . . , xk be successive nodes on a directed path in an MPT, none of which are functional nodes. Moreover, let the connecting edges be labeled e1, . . . , ek, respectively. This directed path represents the XPath e1/ . . . /ek plus the set of XPaths e1/ . . . /ed[ed+1/ . . . /ek] for each distinguished node xd, d>0. If a (sub)tree is rooted by an or node, disjunctive semantics are applied to all its sub-trees. Otherwise conjunctive semantics are applied. A nil node matches any XPath expression (including the empty one). III.B.2. MPT Construction [0052]Given an SQLXML expression representing a view or a query, embodiments of the MPT module 200 construct its MPT as follows. Referring again to FIG. 4, embodiments of the module 200 introduce as the MPT's root a distinguished node labeled by the fully qualified relational XML column name (box 410). Next, a distinguished node is introduced for each SQL column and for each FLWOR variable named in the SQLXML expression (box 420). The distinguished node is labeled with the corresponding column or variable name (box 430). [0053]The MPT labels edges with a path expression from a parent node to a child node (box 440). The MPT initially connects these nodes into a tree using labeled edges as follows (box 450): [0054]1. For each FROM clause entry cross apply x.nodes(`P`) as T(y), insert an edge (x,y) with PE(x,y)=P. [0055]2.
For each FROM clause entry outer apply x.nodes(`P`) as T(y), insert an edge (x,y) with PE(x,y)=?P. [0056]For view definitions, embodiments of the module 200 also include the following edges: [0057]3. For each SELECT clause entry x.f(`P`) as y with property function f, insert an edge (x,y) with PE(x,y)=?P. Note the inclusion of the optionality marker (?), since these expressions return the empty sequence when no match is found. [0058]Embodiments of the module 200 adopt a restricted view grammar so that MPT construction is straightforward for views. However, since path expressions can occur deeply nested in a FLWOR expression, describing the extraction process for such expressions in queries can be quite lengthy. The following lemma holds for MPTs constructed from views: [0059]Lemma 1: An initially constructed MPT captures exactly the XPath information expressed in a view defined by the grammar given above. III.B.2.a. Encoding Optionality [0060]Given an initial MPT, embodiments of the module 200 first transform optionality markers embedded in edge labels into structural components (box 460). This is achieved as follows. For each edge (x,y) having label PE(x,y)=?P1, an or node u and a nil node v are introduced. Next, edge (x,y) is deleted, and edges (x,u) and (u,v) are inserted with no labels. Moreover, edge (u,y) is inserted with label P1. As a result, every optional node becomes a child of an or node that has a nil node as a second child. III.B.2.b. Recursive MPT Expansion [0061]After encoding optionality, edge labels are still XPath expressions involving multi-step paths with nested predicates. Embodiments of the module 200 expand edges in the MPT by applying the following rules recursively, such that each application simplifies the label on some edge (box 470). [0062]For each edge (x,y) having one or more predicates, embodiments of the module 200 remove the predicates from its label and attach corresponding sub-trees to y for each predicate.
In particular, given PE(x,y)=P1[P2], the label on (x,y) is changed to P1, an intermediate node z is introduced, and edge (y,z) with label P2 is inserted. [0063]Embodiments of the module 200 also expand edges labeled by multi-step XPath expressions into paths composed of additional intermediate nodes and simpler edges. An edge labeled by an XPath expression consisting of several location steps is expanded into a sequence of edges, whereby each is labeled by a single-step expression. Formally, for each edge (x,y) with PE(x,y)=l0/ . . . /lk, edge (x,y) is deleted and intermediate nodes z1, . . . ,zk are added as well as the following edges: (x,z1) with PE(x,z1)=l0, (zi, zi+1) with PE(zi, zi+1)=li for 0<i<k, and (zk,y) with PE(zk,y)=lk. [0064]Embodiments of the module 200 then simplify all edge labels that represent Boolean expressions by introducing Boolean operator trees. By construction, edges with such labels can only result from expanding predicates. This means they can only appear on sub-trees of the MPT and must end at an intermediate node. In particular, if an edge (x,y) has label PE(x,y)=P1 or P2, then the edge is deleted, an or node u and an intermediate node z are introduced, an unlabelled edge from x to u is inserted, and two additional edges are inserted: from u to y with label P1 and from u to z with label P2. Similar processing can be applied to conjunctive expressions. [0065]To handle predicates involving comparisons, embodiments of the module 200 annotate the corresponding nodes of the MPT with the comparisons. These annotations are called restrictions, in order not to confuse them with XPath predicates. [0066]The recursive nature of XPath expressions results in the recursive application of these rules (handling multiple steps and predicates). The procedure terminates when all edge labels reflect simple axis-node-test steps with no predicates.
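The two expansion rules of [0062]-[0063] can be sketched as a small rewriting loop. The representation below (an edge dictionary plus a counter that mints fresh intermediate nodes) is an illustrative assumption, and it handles only a single trailing predicate per step; the patented embodiments operate on full MPTs:

```python
class Mpt:
    """Minimal MPT stand-in: edges keyed by (parent, child) with string labels."""
    def __init__(self):
        self.edges = {}
        self._n = 0

    def add_edge(self, x, y, label):
        self.edges[(x, y)] = label

    def fresh(self):
        """Mint a fresh intermediate node name."""
        self._n += 1
        return f"z{self._n}"

def split_first_step(path):
    """Split off the first '/'-separated step, ignoring '/' inside [...]."""
    depth = 0
    for i, ch in enumerate(path):
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        elif ch == "/" and depth == 0:
            return path[:i], path[i + 1:]
    return path, None

def expand(mpt):
    """Apply the two rules until every edge label is a predicate-free step."""
    changed = True
    while changed:
        changed = False
        for (x, y), label in list(mpt.edges.items()):
            first, rest = split_first_step(label)
            if rest is not None:
                # Multi-step rule: break the path with a fresh intermediate node.
                del mpt.edges[(x, y)]
                z = mpt.fresh()
                mpt.add_edge(x, z, first)
                mpt.add_edge(z, y, rest)
                changed = True
            elif label.endswith("]"):
                # Predicate rule: strip [P2] and attach it as a subtree of y.
                i = label.index("[")
                mpt.edges[(x, y)] = label[:i]
                mpt.add_edge(y, mpt.fresh(), label[i + 1:-1])
                changed = True
```

Expanding an edge labeled a/b[c], for example, terminates with three single-step edges labeled a, b, and c, as the rules above prescribe.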
Embodiments of the module 200 satisfy the following two lemmas: [0067]Lemma 2: Recursive MPT expansion is a finite Church-Rosser replacement system. [0068]Lemma 3: An MPT resulting from encoding optionality and recursive MPT expansion encodes the same semantics as its input MPT. III.B.2.c. MPT Normalization [0069]To avoid matching being influenced by mere syntactic differences, embodiments of the module 200 transform both the view expressions and the query expressions into a normal form called tree disjunctive normal form (TDNF) (box 480). XML attributes are treated as if they were simple children by changing the attribute axis to a child axis in the MPT. Embodiments of the module 200 also combine successive or nodes into a single disjunction and successive and nodes into a single conjunction, and nodes linked by the self axis are merged. [0070]In the relational context, normalization often involves converting Boolean expressions into conjunctive normal form. However, this is not appropriate in the XML context because it is not possible to move conjunctions up or down a path. For example, expressions a[b/c and b/d] and a[b[c and d]] are not equivalent, because a[b[c and d]] requires that both the c node and the d node have the same b node as parent, whereas a[b/c and b/d] does not require that they have the same parent. However, expressions a[b/c or b/d] and a[b[c or d]] are equivalent. This means that embodiments of the module 200 can push disjunctions down over common prefixes on paths during normalization. [0071]A location step, such as a[b][c], involving multiple predicates is equivalent to one having a single conjunctive predicate, that is, a[b and c] for this example. As part of normalization, embodiments of the module 200 may pull disjunctions up with respect to explicit and implicit conjunctions.
Finally, embodiments of the module 200 remove nil conjuncts, collapse multiple nil disjuncts into a single one, and restrict and nodes to appear only as the root of a disjunct and only when more than one conjunct is present in that disjunct. [0072]The normalization process of embodiments of the module 200 thus involves six transformations: (1) changing occurrences of the attribute axis to the child axis; (2) combining successive and nodes, or nodes, and nodes linked by the self axis; (3) pushing or nodes down over common prefixes; (4) pulling or nodes up whenever they have a sibling node; (5) removing nil nodes occurring in conjunctions or with other nil nodes in disjunctions; and (6) ensuring that and nodes appear only under or nodes and only when there are multiple conjuncts. The resulting normal form is called a tree disjunctive normal form (TDNF). [0073]FIGS. 5 and 6 illustrate two comparable MPT fragments, namely T1 and T2. In particular, FIG. 5 is a first example that illustrates a first MPT fragment, T1, which is comparable to T2 in FIG. 6. FIG. 6 is a second example that illustrates a second MPT fragment, T2, which is comparable to T1 in FIG. 5. Moreover, as shown in FIGS. 5 and 6, T2 is the TDNF of T1. FIG. 7 illustrates an example of normalized MPTs for a query expression and a view expression. In FIG. 7, distinguished and intermediate nodes are shown as single circles, the former being those with labels, and functional nodes are enclosed by rectangles. For explanatory purposes, a preorder numbering is used to identify nodes in the MPT. [0074]Embodiments of the module 200 satisfy the following lemma and theorem: [0075]Lemma 4: MPT normalization is a finite Church-Rosser replacement system. [0076]Theorem 1: Normalized MPTs are as expressive for representing a collection of XPath expressions as the view language described in FIG. 1, and their construction preserves the semantics of a view. III.B.3.
Additional Notation [0077]Embodiments of the module 200 use the notation NODES(E) for the set of all nodes in the MPT for expression E, whether E represents a query or a view definition. For x ∈ NODES(E), embodiments of the module 200 denote its (immediate) successors in the MPT by succ(x), that is, succ(x) = {y ∈ NODES(E) | (x,y) is an edge in E}. [0078]Let PE(x, y) be the axis-node-test label of the edge ending at node y. For convenience, embodiments of the module 200 define PE(y)=PE(x,y), AX(y) to be its axis, and NT(y) to be its node-test. Given a node w and another node z, possibly from another MPT, if AX(z)=`/`, then w/PE(z) denotes the children of w that match NT(z). Similarly, if AX(z)=`//`, then w/PE(z) denotes the singleton set {w} if w matches NT(z), union all descendants of w that match NT(z), minus all descendants of any or node that is itself a descendant of w. Thus, in the latter case, w/PE(z) may include w and some of its descendants, but it does not look beyond any or nodes. III.C. Valid Match Definition Module [0079]The valid match definition module 260 defines what constitutes a valid match. A core component of the module 260 is to find a valid mapping from the view's MPT nodes to the query's MPT nodes. If such a mapping exists, embodiments of the module 260 construct the query substitute based on it. In this section, the MPT matching problem is set forth and the notion of valid match is defined. [0080]In order to find a valid match from a view to a query, homomorphisms are typically considered. However, there can be exponentially many homomorphisms between two MPTs. Instead of computing them one after the other, embodiments of the module 260 calculate a cover in polynomial time. From the cover, embodiments of the module 260 subsequently extract all valid matches and generate query substitutes from them.
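The succ(x) and w/PE(z) navigation notation from [0077]-[0078] can be sketched with simple dictionary-based nodes, including the rule that the descendant-or-self axis never looks past an or node. The node representation (a "test" string and a "children" list) and the function names are illustrative assumptions:

```python
def succ(x):
    """Immediate successors of a node."""
    return x["children"]

def matches(w, nt):
    """Node-test check; node() matches anything."""
    return nt == "node()" or w["test"] == nt

def navigate(w, axis, nt):
    """w/PE(z): for the child axis, return matching children; for `//`
    (descendant-or-self), return w itself if it matches plus matching
    descendants, but never descend below an or node."""
    if axis == "/":
        return [c for c in succ(w) if matches(c, nt)]
    out = [w] if matches(w, nt) else []
    for c in succ(w):
        if c["test"] != "or":  # modeling or nodes by a reserved test name
            out.extend(navigate(c, "//", nt))
    return out
```

With this sketch, a c node hidden under an or child is invisible to `//`, matching the "does not look beyond any or nodes" rule above.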
[0081]Another deviation from the traditional homomorphism approach is that optional components and disjunctions force the consideration of partial mappings, whereas homomorphisms are total mappings. The mapping must adhere to the logic of the MPTs for the query and the view, but the mapping can be partial, and it also need not be surjective. III.C.1. Link Sets and Matches [0082]The match cover used by embodiments of the module 260 is a set of links representing all possible homomorphisms using quadratic space in the worst case. Denote the space NODES(V)×NODES(Q)×NODES(V)×NODES(Q) by H_V,Q. A link is defined to be a quadruple of nodes (v,q,v',q') ∈ H_V,Q representing the possibility to map a view node v to a query node q under the condition that a second view node v' is mapped to a second query node q'. [0083]Henceforth, let H be a link set, that is, H ⊆ H_V,Q. Because a link set H is a cover for a collection of node mappings, the properties of subsets of H are of particular interest. Any h ⊆ H is called a match (although not necessarily a valid one), and h(x) is defined as: h(x) = {y | ∃xp ∃yp, (x, y, xp, yp) ∈ h}. III.C.2. Conditions for Valid Matches [0084]The conditions for valid matches include a definition of logic compliance and a definition of root compliance. These conditions are part of tree matching so that embodiments of the module 260 can be embedded in a general relational view matching procedure. III.C.2.a. Root Compliance [0085]FIG. 8 is a flow diagram illustrating the operation of embodiments of the valid match definition module 260 shown in FIG. 2. The process begins by defining root compliance as a condition for a valid match (box 800). The first condition deals with relational view matching and is based on the fact that the root node of an MPT represents the base table column containing XML.
Embodiments of the module 260 use a relational view matching algorithm having the following condition: [0086]Condition 1: Let H be a link set and let h ⊆ H be a match. [0087]Further, let k1, . . . ,kn be the key attributes of the query and the view and assume both project on their keys. (If this is not the case, then modify them accordingly.) Call h root compliant if and only if h maps the root of V to the singleton set containing only the root of Q and, for any database DB, Πk1, . . . ,kn(V(DB)) ⊇ Πk1, . . . ,kn(Q(DB)). III.C.2.b. Node-Test Compliance [0088]Referring again to FIG. 8, embodiments of the module 260 define node-test compliance as a condition for a valid match (box 810). A second condition is that matched nodes must have compatible node tests. To perform these tests, node tests are organized in a lattice. The top element is node() and the bottom element is a new node test fail(), which fails for every node. It can be said that x covers y, written x ≥ y, if the node test x evaluates to true whenever the node test y evaluates to true. Let n serve as a namespace and a as an element name. Then, for example, * ≥ *:a ≥ n:a and * ≥ n:* ≥ n:a, but neither *:a ≥ n:* nor n:* ≥ *:a. [0089]Embodiments of the module 260 have the following condition: [0090]Condition 2: Let H be a link set and h ⊆ H be a match. h is called node-test compliant if ∀(x, y, xp, yp) ∈ h, isFunc(x) ∨ isFunc(y) ∨ NT(x) ≥ NT(y). III.C.2.c. Restriction Compliance [0091]Referring again to FIG. 8, embodiments of the module 260 define restriction compliance as a condition for a valid match (box 820). Nodes in the MPT may be adorned by restrictions, which are Boolean expressions of simple predicates such as comparisons. Any match must obey these restrictions. Let Restrict(x) denote the restriction on node x, which is assumed to be true when there is no restriction specified.
Embodiments of the module 260 use the following condition: [0092]Condition 3: Let H be a link set and h ⊆ H be a match. [0093]h is called restriction compliant if ∀(x, y, xp, yp) ∈ h, isFunc(x) ∨ isFunc(y) ∨ (Restrict(y) ⇒ Restrict(x)). III.C.2.d. Logic Compliance [0094]Referring to FIG. 8, embodiments of the module 260 define logic compliance as a condition for a valid match (box 830). Matching Boolean expressions is surprisingly complex. Embodiments of the module 260 correct problems with matching of Boolean expressions in existing work. Some existing matching algorithms also consider logic compliance but have flaws. First of all, many do not normalize their trees, so that the view a[b][c or d] does not match the query a[(b and c) or (b and d)]. Furthermore, consider the view a[b or c] and the identical query a[b or c]. Despite the query and view being identical, these existing algorithms fail, since they require one of the disjuncts in a view to match the complete query, which is overly restrictive. [0095]It will now be discussed how an or node should be matched against a regular node. FIG. 9 illustrates exemplary MPT fragments that represent a view (on the left side of the figure) and a query (on the right side of the figure). In order to match view node 1 to query node 1, embodiments of the module 260 need to map the or node to something from which the leaves in the view can be mapped to the leaves in the query. One approach is to map it to the virtual and node implicitly associated with query node 1, treating its children as conjuncts. FIG. 10 is an example of mapping the view of FIG. 9 into the virtual and node implicitly associated with the query node of FIG. 9. Thereafter, each disjunct can be mapped as required. [0096]With these points in mind, logic compliance can now be defined. It should be noted that all queries and views are in TDNF, so that and nodes can occur only under or nodes and or nodes cannot have siblings.
Embodiments of the module 260 use the following condition for logic compliance: [0097]Condition 4: Let H be a link set and h ⊆ H be a match. [0098]h is called logic compliant if, ∀(x, y, xp, yp) ∈ h, [0099]the following constraints hold: [0100]1. If x and y are both or nodes, then ∀y' ∈ succ(y), (x, y', x, y) ∈ h. [0101]2. If x is an or node and y is not an or node, [0102]then one of the following two conditions is true: [0103](a) y' is the only successor for y and y' is an or node and (x, y', x, y) ∈ h; [0104](b) ∃x' ∈ succ(x), (x', y, x, y) ∈ h. [0105]3. If y is an or node and x is not an or node, then both of the following two conditions are true: [0106](a) x is not an and node (because of TDNF); [0107](b) ∀y' ∈ succ(y), (x, y', x, y) ∈ h ∨ ∃y'' ∈ y'/PE(x), (x, y'', x, y) ∈ h. [0108]4. If neither x nor y are or nodes, then ∀x' ∈ succ(x), ∃y', (x', y', x, y) ∈ h. III.C.2.e. Parent Compliance [0109]Referring again to FIG. 8, embodiments of the module 260 define parent compliance as a condition for a valid match (box 840). The embodiments of the module 260 also use the final condition: [0110]Condition 5: Let H be a link set and h ⊆ H be a match. [0111]h is called parent compliant if ∀(x, y, xp, yp) ∈ h, [0112]where xp and yp are not the MPT roots, ∃x' ∃y', (xp, yp, x', y') ∈ h. III.C.2.f. Match Validity and Covers [0113]Referring again to FIG. 8, embodiments of the module 260 define a match as valid if it is root compliant, node-test compliant, restriction compliant, logic compliant, and parent compliant (box 850). Formally, each of the above conditions is aggregated using the following definition: [0114]Definition 1: Let H be a link set and h be a match. The partial mapping h is a valid match if it is root compliant, node-test compliant, restriction compliant, logic compliant, and parent compliant.
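Definition 1 aggregates the five conditions into one validity test over a link set. A minimal sketch, with links modeled as (v, q, v', q') tuples; the helper shown implements only Condition 5 (parent compliance), and all names are illustrative assumptions:

```python
def parent_compliant(h, roots):
    """Condition 5: every link conditioned on a non-root pair (vp, qp)
    must be justified by some link in h that maps vp to qp."""
    for (v, q, vp, qp) in h:
        if (vp, qp) == roots:
            continue
        if not any((a, b) == (vp, qp) for (a, b, _, _) in h):
            return False
    return True

def is_valid_match(h, checks):
    """Definition 1: a match is valid iff every compliance predicate holds."""
    return all(check(h) for check in checks)
```

In a full implementation, checks would also contain root, node-test, restriction, and logic compliance predicates evaluated over the same link set.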
[0115]The following definition introduces minimal covers, which are link sets that contain only those links necessary for valid matches. [0116]Definition 2: Given a link set H, H is a cover for (V,Q) if for all valid matches h ⊆ H_V,Q, h ⊆ H. [0117]A cover H is a minimal cover if no proper subset of H is a cover. Clearly, H_V,Q itself is a cover. Although this set contains only |NODES(V)|²|NODES(Q)|² links, it may represent exponentially many valid matches. III.C.3. Roadmap [0118]A query substitute for a query Q is a query where at least one base table occurrence is replaced by the name of a materialized view. Other changes ("compensations") may be applied as well. [0119]Definition 3: Given a query Q, a query substitute Qs is valid if for any database DB, Qs(DB)=Q(DB). [0120]Here are the steps to find query substitutes for XML: [0121]1. Call a function MatchViewQuery, passing the normalized MPTs for the view and the query. This first produces a link set HM, which is a (small) cover for (V,Q). FIG. 11 illustrates an exemplary implementation of the MatchViewQuery function. [0122]2. Before returning from MatchViewQuery, call a function Cleanup on HM. This produces a link set HC ⊆ HM, where HC is a minimal cover for (V,Q). [0123]3. Extract all valid matches from HC and construct corresponding query substitutes Qs, all of which will be valid. [0124]The embodiments of the module 260 also use the following lemma: [0125]Lemma 5: Assume MatchViewQuery produces the link set HM before the call to Cleanup. For every link (x, y, xp, yp) ∈ HM, [0126]there is a link set h ⊆ HM [0127]such that (x, y, xp, yp) ∈ h [0128]and h is node-test compliant, restriction compliant, and logic compliant. [0129]Thus, links in HM pair nodes x and y only when the sub-tree rooted at x in V matches the sub-tree rooted at y in Q.
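Step 2 of the roadmap above (the Cleanup pass) can be sketched as iterated deletion of links that violate Condition 5, rechecking after each round because a removal can strand further links. The function and variable names here are illustrative, not the patented implementation:

```python
def cleanup(links, roots):
    """Delete links whose conditioning pair (vp, qp) is neither the pair of
    MPT roots nor the head (v, q) of some surviving link; repeat to a fixpoint."""
    links = set(links)
    changed = True
    while changed:
        changed = False
        heads = {(v, q) for (v, q, _, _) in links}
        for link in list(links):
            vp, qp = link[2], link[3]
            if (vp, qp) != roots and (vp, qp) not in heads:
                links.discard(link)
                changed = True
    return links
```

The result contains only links whose conditioning pairs are justified, which is what makes the surviving set a minimal cover candidate.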
Although |HM| may be Ω(|NODES(V)|²|NODES(Q)|²) even when HC is empty, it will often be only slightly larger than |HC|. III.D. View Matching Module [0130]The view matching module 220 inputs the MPT data structures and outputs a set of candidate matches. The task of embodiments of the view matching module 220 is to calculate a minimal cover, checking all the conditions summarized in Definition 1. The function MatchViewQuery first checks root compliance and then calls other routines to form a covering link set. Node-test and restriction compliance are checked by Match. Logic compliance is checked by IsLink, which is called from Match, and parent compliance will be enforced afterwards, during cleanup, by deleting links that are not part of valid matches. [0131]After checking root compliance, the function MatchViewQuery calls the function Match, passing the roots of the two MPTs, which are to be treated as if they were simply and nodes. In order to avoid repeated calculations, a Boolean matrix M(x,y) caches whether a view node x can be mapped to a query node y. It is initially undefined on all inputs. The function Cleanup will be described below. [0132]FIG. 12 illustrates an exemplary implementation of the Match function. The function Match checks the two compliances that depend on the pair of nodes only: (1) do the node types and node names match; and (2) do the query restrictions imply the view restrictions? If these two tests succeed, Match then tests each successor in the view to determine whether it can be matched to some successor of the query node (Condition 4.4), repeatedly calling IsLink. It also treats the case where the child x' of the view node x is an or node, by treating y as an and node to try to match the or. Consider again FIG. 9 and a call Match(v1,q1), where v1 represents the view node 1 and q1 represents the query node 1 (the right-side MPT fragment of FIG. 9).
The only child node for v1 is the or node, which is bound to x0 by the outer for-all loop, and the IsLink call (marked with *) is eventually invoked to compare it to q1, but treating q1 as and. [0133]FIG. 13 illustrates an exemplary implementation of the routine IsLink(). The routine IsLink() with four parameters (x,y,x',y') checks whether x can be mapped to y under the condition that x' has been mapped to y'. That is, it checks whether (x,y,x',y') is potentially part of a valid match. If all tests succeed, a link for (x,y,x',y') is normally recorded by calling insertMatch, which also updates M. If any test fails, x cannot be mapped to y, in which case insertMatch updates M only and does not record a link. To make query substitution more efficient, insertMatch also records whether query node y can be mapped to view node x, using the same techniques. [0134]To examine the body of IsLink, an example first is traced, where the view and query trees are given in FIG. 14. In particular, FIG. 14 illustrates exemplary sample MPTs of a view expression and a query expression. Assume MatchViewQuery's test for root compliance succeeds when comparing Vcol to Qcol, and it therefore calls Match(v1.asAnd(), q1.asAnd()). For simplicity, the parameter M will be omitted throughout this example, since it behaves like a global variable consulted and updated by all functions. Since both arguments are passed as if they were and nodes, the types match and their children are compared in the doubly-nested loop. [0135]From the inner loop, IsLink(v2, q2, v1, q1) is called, from which Match(v2, q2) is invoked. Again, both nodes have the same node test and no restrictions are present, so the first test succeeds. Then all the successors x' of view node v2 must match. These are view nodes 3 and 4. For each of them, at least one matching query node y' must exist (see Condition 4.4).
The candidates y' are among the successors of y, which for this example is the single or node 3 in the query. This results in the two calls IsLink(v3, q3, v2, q2) and IsLink(v4, q3, v2, q2), whose results are conjunctively connected (see Condition 4.4). [0136]First, consider IsLink(v3, q3, v2, q2). Using case 2, embodiments of the module 220 test Condition 4.3 by iterating over all successors of the query's or node to call IsLink(v3, q4, v3, q3) and IsLink(v3, q8, v3, q3). The first of these invocations calls IsLink(v3, qi, v3, q3), where qi represents query nodes 5, 6, and 7. These calculate qi/PE(v3), which effectively evaluates qi/descendant-or-self::c, returning q6 in the case of i=6 and no nodes for i ∈ {5,7}. This induces the call Match(v3, q6), which returns true since c>1→c>0. Thus, the link (v3, q6, v3, q3) is recorded and the function returns true, but without recording the link (v3, q4, v3, q3). [0137]Similarly, when processing IsLink(v3, qi, v3, q8) for i ∈ {9,11}, qi/descendant-or-self::c returns q10 in the case of i=9 and no nodes for i=11. The call to Match(v3, q10) again returns true since c>2→c>0. This records link (v3, q10, v3, q3) and causes IsLink(v3, q8, v3, q3) to return true, but without recording it as a link. Back on the stack, the call of IsLink(v3, q3, v2, q2) in case 2 thus has had true returns from both calls, so it too returns true, producing the third link (v3, q3, v2, q2). [0138]The call IsLink(v4, q3, v2, q2) is treated similarly and produces the three links (v4, q7, v4, q3), (v4, q9, v4, q3), and (v4, q3, v2, q2). Finally, Match(v2, q2) can return true, and IsLink records the final link (v2, q2, v1, q1). [0139]Those code fragments that are not covered by the above example will now be discussed. Case 1 of IsLink records a link whenever the view node is nil, so that the disjunction represented by the or node above it is always satisfied.
Case 3 makes sure that if view node x is an or node, one of its successors is matched (representing Condition 4.2 exactly). Finally, case 4 handles situations where the view node is and (Condition 4.4), but records the corresponding link only if the corresponding query node is also an and node. III.E. Cleaning Module [0140]The cleaning module 240 inputs the set of partial matches and outputs a set of candidate matches. The matching procedure does not check for parent compliance. Hence, it may be that links are recorded for which the parents subsequently failed to match. Such links without a corresponding parent should be eliminated to produce a minimal cover. [0141]Deleting a link that causes a violation of parent compliance cannot affect any other compliance, but parent compliance will again have to be checked for the smaller link set. FIG. 15 illustrates an exemplary implementation of the CleanUp function. The CleanUp procedure shown in FIG. 15 tests Condition 5, where `_` can match any value, iteratively removing links that violate the condition. [0142]This leads to the following theorem used by embodiments of the module 260: [0143]Theorem 2: The function MatchViewQuery produces a minimal cover for (V,Q) if and only if such a cover exists. III.F. Match Extraction Module [0144]The match extraction module 270 inputs the set of candidate matches and the definition of valid match and outputs a set of valid matches. To determine whether a relational view can be used to answer a relational query, first it needs to be ensured that the rows in the view cover all the rows required to answer the query. Then it needs to be ensured that each row in the view includes sufficient data to enforce any remaining restrictions; in other words, to eliminate view rows that do not participate in the query answer. If such data is not explicitly present in the view, it must be possible to obtain the needed data through back-joins to base tables.
Finally, it needs to be ensured that there is enough data available in the view (or available through back-joins) to calculate the requested column values specified by the query. Existing algorithms can be used to address these three steps when atomic data values are involved, and MatchViewQuery(V,Q) addresses the existence questions in the context of XML data columns. For convenience, it has been assumed that database keys are present in the view and that the ordpaths of all distinguished nodes are selected by the view, thus guaranteeing that back-joins will always be possible if the appropriate rows exist. What has not yet been addressed is how to extract required data values from any of the XML data columns involved in the materialized view when they are needed to eliminate extraneous rows or to populate a query's answers. That is, it has been determined whether a view is adequate, but it has not yet been described how it could actually be used. [0145]To evaluate the query, a match is isolated to identify a binding in the materialized view for each distinguished node in MPT(Q). Starting with a minimal cover for (V,Q), the match extraction module works in two steps: [0146]1. identify all valid matches, and [0147]2. for each valid match, extract all the implied mappings for the distinguished query nodes. [0148]For step one, one could naively enumerate all valid partial functions from view nodes to query nodes that are embedded in the minimal cover HC produced by MatchViewQuery. Let f be one such partial function and Lf = {(x, f(x), x', y') ∈ HC}. Because Lf ⊆ HC, checking whether Lf is a valid match requires checking only that it is parent compliant and logic compliant (the other compliances cannot be violated by any subset of HC). However, ensuring these conditions can instead be incorporated into the enumeration process that generates potential partial functions. Let F be the resulting set of partial functions, each of which corresponds to a valid match.
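The two extraction steps can be sketched by grouping the cover's links by view node, enumerating the induced partial functions, and then reading off mf from the inverse mapping. This sketch omits the parent- and logic-compliance filtering that [0148] folds into the enumeration; all names are illustrative assumptions:

```python
from itertools import product

def candidate_functions(cover):
    """Enumerate functions f: view node -> query node embedded in the cover."""
    by_view = {}
    for (v, q, _, _) in cover:
        by_view.setdefault(v, set()).add(q)
    views = sorted(by_view)
    for choice in product(*(sorted(by_view[v]) for v in views)):
        yield dict(zip(views, choice))

def extract_match(f, distinguished_q):
    """m_f: map each distinguished query node to a view node, or None (nil)."""
    inverse = {}
    for v, q in f.items():
        inverse.setdefault(q, v)  # keep a single view node per query node
    return {q: inverse.get(q) for q in distinguished_q}
```

A distinguished query node that no view node maps to comes back as None, playing the role of nil in the extracted match (f, mf).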
[0149]For each f ε F, every view node is uniquely mapped to a query node. However, mapping multiple view nodes to a single distinguished query node also needs to be avoided. Extracting partial functions from the inverse mappings makes up the second step. Given f ε F, let f⁻¹(y) = {x | f(x) = y}. Let qi, i ε 1 . . . d, be the distinguished nodes in query Q. An extracted match mf maps each distinguished query node qi to some view node vi such that f(vi) = qi, or to nil if f⁻¹(qi) = ∅. The pair (f, mf) is called an extracted match.

III.G. Substitute Query Construction Module

[0150]The substitute query construction module 290 inputs the set of valid matches and outputs the set of valid substitute query expressions. The general procedure to construct a query substitute from an extracted match (f, mf) will be outlined. Recall from above that the view is a relation that includes an XML column for each of the projected nodes from MPT(V). If the query is not equivalent to the materialized view, compensations must be applied to the data in the view to produce the query answer. Thus, evaluating the query involves filtering the tuples in the materialized view V to preserve only those also selected by the query, applying the map mf to find corresponding nodes in MPT(V), and using both MPTs to determine how to extract data corresponding to the query's distinguished nodes from the view's projected nodes, how to correct for the possible occurrence of duplicates in the view that should not be in the query answer, and whether the query requires additional sorting to produce the final answers. The following sub-sections describe how to use the information in the extracted match (f, mf) and in the corresponding MPTs to derive these compensations.
[0151]For the remainder of this section, it will be assumed that a query substitute is being calculated with respect to a valid match f using the inverse function mf.
For efficient execution, it is assumed that the Boolean matrix M computed by MatchViewQuery also records whether f(vi) would also match vi were the roles of V and Q reversed. In this case, it can be said that query node f(vi) is equivalent to view node vi.

III.G.1 Constructing Residual Predicate Filters

[0152]Let w be some document sub-tree. It can be said that a node in an MPT holds if the corresponding part of w matches that node. Assume it is known that some node v ε MPT(V) holds; in other words, w satisfies the node under consideration in the view. Assume further that it is desired to check whether f(v) holds; in other words, whether w satisfies the corresponding node in MPT(Q).
[0153]Clearly, if v is equivalent to f(v), then without any need for further testing of the instance, it is known that f(v) holds for w. Furthermore, if neither v nor f(v) is a function node, then their types must match (node test compliance). If the restrictions on those nodes are not equivalent, then the query condition must be tested explicitly on w. For example, if the view specifies price < 100 and the query specifies price < 50, the query's test must be applied to the data in the materialized view.
[0154]If neither v nor f(v) is an or node, then their sub-trees are conjunctively connected. Let q be the root of one of the sub-trees of f(v). If q is equivalent to some child of v, then q must hold for the corresponding substructure of w whenever v holds for w. Therefore, the sub-tree rooted at q in MPT(Q) cannot impose the need for any compensations. It can henceforth be ignored. The remaining conjuncts in a query require the generation of code to filter out less restrictive data from the materialized view.
[0155]Similarly, filters must be applied when disjuncts in the view might cause extraneous data to be included in the view. Let orq ε MPT(Q) be an or node, let v be a node in MPT(V) that is mapped to it (f(v) = orq), and let w be a fragment of the materialized view for which v holds.
The following strategy is potentially much more efficient than evaluating orq against w.
[0156]Case 1: v is an or node. Because of logic compliance (Condition 4.1 followed by Condition 4.2b), for every disjunct under orq, there exists a disjunct under v that maps to it. Let xi (1 ≦ i ≦ k) and zj (1 ≦ j ≦ m) be all the successor nodes of v such that for every xi there exists a successor node yi of orq such that yi = f(xi), and none of the zj is mapped via f to a node in MPT(Q) (in other words, f(zj) is undefined).
[0157]Case 2: v is not a function node. Because of Condition 4.3, f maps v to some node within every disjunct under orq. For simplicity, for this situation, k = 1, m = 0, and x1 = v.
[0158]Using the following procedure, a decision tree (or an equivalent Boolean expression to be evaluated lazily) can now be built to test whether orq holds for w. As part of view materialization, a bitmap is associated with every or node in the view recording which successor(s) hold on that part of the data. This bitmap is also very helpful in maintaining the materialized view. To check whether orq holds, first each of the xi associated with v is checked.
[0159]FIG. 16 illustrates an exemplary example of a sample view and query, for which x1 refers to view node 3 and x2 refers to view node 6. Whether or not a document fragment w matched by view node 2 has some child f is irrelevant to determining whether orq holds. View node 8 corresponds to z1, which has no image in the query.
[0160]If w has either an a or a b substructure, an expression Er should be evaluated to determine whether the corresponding child node of orq holds. First, it can be checked whether xi holds by looking in the bitmap for v. If the entry is false, yi cannot hold. If xi holds, the matrix produced by MatchViewQuery can be checked to see whether xi ≡ yi. If this is true, then yi must also hold. If not, yi must be checked explicitly.
Continuing the example, if it is known that view node 2 holds, at most the tests for a/g and b/e/h need be evaluated against the data: a[cd] and b/e can be tested against the bitmap, av ≡ aq and bv ≡ bq can be tested against the matrix, and if x1 holds, then a is already known to have children c and d (sub-trees rooted at query nodes 4 and 5 are ignored because they are conjunctively connected with equivalent nodes in the view).
[0161]Finally, consider a maximal path leading to any node y in the query pattern tree such that on this path (except for y) no residual predicate has been identified as needing to be tested. If the nodes in the path are not equivalent to the mapped-from nodes in the view, then a residual predicate has to be applied to ensure that w matches all nodes along the query path.

III.G.2 Constructing Continuations

[0162]A procedure for constructing continuations is best explained by an example. Let R be a relation with key id and XML attribute r, and let a materialized view be:

select id, r as rv, a.ordpath( ) as av, b.ordpath( ) as bv,
       c.ordpath( ) as cv, e.ordpath( ) as ev
from R
outer apply r.(.//a) as A(a)
outer apply a.(.//b) as B(b)
outer apply b.(.//c) as C(c)
outer apply c.(.//e) as E(e)

[0163]Consider the query:

select id, r.query(
    for $a in ./a, $c in $a//b//c, $d in $c//d,
        $e in $d//e, $g in $e//g
    return E)
from R

where E is an arbitrary XML expression involving the variables {$a, $c, $d, $e, $g}. The nodes in MPT(V) are referred to as {rv, av, bv, cv, ev}, and the nodes in MPT(Q) are referred to as {r, a, c, d, e, g}.
[0164]A distinguished query node y is covered if mf(y) is a projected view node. For the current example, f(rv) = r, f(av) = a, f(cv) = c, f(ev) = e. The query nodes r, a, c, and e are covered, while d and g are not covered.
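The covered/uncovered determination just described is a simple set computation over the mapping f. The sketch below (function and variable names assumed for illustration) reproduces the example's result:

```python
def covered_nodes(f, projected_view_nodes, distinguished_query_nodes):
    """A distinguished query node y is covered when f maps some projected
    view node to it, i.e. m_f(y) is a projected view node."""
    image = {f[v] for v in projected_view_nodes if v in f}
    return {y for y in distinguished_query_nodes if y in image}

# The running example: f(rv)=r, f(av)=a, f(cv)=c, f(ev)=e.
f = {'rv': 'r', 'av': 'a', 'cv': 'c', 'ev': 'e'}
covered = covered_nodes(f,
                        ['rv', 'av', 'bv', 'cv', 'ev'],
                        ['r', 'a', 'c', 'd', 'e', 'g'])
# r, a, c and e are covered; d and g are not.
```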
[0165]Next, a binding table of the following form is filled out: (T, S, P, C). One tuple in this table records the fact that the necessary bindings for the query node (T) can be derived by applying the path expression (P) to the view or query node (S). Additionally, since this may produce a superset, the predicates (C) have to be applied.
[0166]Let y be a distinguished query node and mf(y) be a projected view node. The tuple (y, mf(y), `.`, σy) is added to the binding table, where σy is the residual predicate for y, as explained above. In the current example, this results in entries for r, a, c, and e. After this step, what is possibly left are some query nodes that do not occur in the first column of the binding table. Any query node is bound if it occurs in the first column of the binding table. Otherwise, it is called unbound. Note that the root of the query must be bound for the view to qualify as a match (root compliance).
[0167]Consider a pair of distinguished nodes (y, y') related by a cross or outer apply, with y bound and y' unbound. There are two cases:
[0168]1. If some descendant of y' has an entry in the binding table, let y'' be the closest such descendant and let x be the projected view node such that f(x) = y''.
[0169]Let y . . . y' . . . y'' be the path of edges connecting y to y'' and let yj be all the distinguished nodes along that path (0 ≦ j ≦ n, y0 = y, y1 = y', yn = y'').
[0170]For all j in the range 1 ≦ j < n, add (yj, yj+1, PE(yj, yj+1), σj)
[0171]to the binding table, where PE(yj, yj+1) is the backward path from yj to yj+1 and σj is the residual predicate for yj. The expression y = y'/PE(y, y')
[0172]is also conjoined to the predicates in the entry for y0.
[0173]2. If there is no bound descendant of y', the tuple (y', mf(y), PE(y, y'), σy') can be added to the binding table, where σy' is the residual predicate for y'.
[0174]These steps are applied until all distinguished variables of the query are bound. Consider the current example, where c is bound and d is unbound.
Since the successor node e of d is bound, case 1 can be applied, which results in the entry (d, e, anc::d, c = d/anc::c). The other unbound query node is g, to which case 2 applies. Hence, the term (g, ev, .//g, true) is added.
[0175]Finally, for every projected view node x, if f(x) is a distinguished node that was introduced from an application of cross apply, then the filter "x IS NOT NULL" should be added.

III.G.3 Constructing Correctives

[0176]If there exists any distinguished node in the view that is not mapped to a distinguished query node, the corresponding column from the materialized view is not involved in the query substitute. In this case, a duplicate eliminating projection on the set Yd of the distinguished nodes of the query must be performed. Using the above example, there is an additional distinguished view node bv. Thus, there could be multiple b nodes on the path from a to c in the document instance, each of which is matched by bv. Therefore, duplicates must be eliminated.
[0177]If some distinguished query nodes are not also projected nodes, there will be too much unnesting produced by the compensations described so far. Let Y = {y1 . . . yn} be the distinguished nodes of the query that are bound by for clauses in the query and do not appear in the select clause of the query. If n > 0, a grouping should be added on the complement of Y (Ȳ = Yd \ Y).
[0178]Considering the above example again, the application of apply operators on path expressions leads to an unnested materialized view, where each tuple in R gives rise to many tuples in the materialized view. In contrast, because of the semantics of XML and SQL, the example query returns one tuple for every tuple in R, including a sequence of items found by evaluating E on all possible bindings for $a, $c, $d, $e, $g. For a given tuple in R, this set of bindings can be constructed by grouping the materialized view on the key id.
[0179]Finally, XQuery may impose a certain order on the variable bindings.
If so, an additional sort is required. Since no order is imposed on the tuples in the materialized view, and given the fact that the example query may be run in XQuery's ordered mode, each set of bindings should be sorted; in other words, each group should be sorted on the ordpaths of $a, $c, $d, $e, $g, in this order.

IV. Exemplary Operating Environment

[0180]Embodiments of the materialized XML view matching system 100 and method are designed to operate in a computing environment. The following discussion is intended to provide a brief, general description of a suitable computing environment in which embodiments of the materialized XML view matching system 100 and method may be implemented.
[0181]FIG. 17 illustrates an example of a suitable computing system environment in which embodiments of the materialized XML view matching system 100 and method shown in FIGS. 1-16 may be implemented. The computing system environment 1700 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 1700 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment.
[0182]Embodiments of the materialized XML view matching system 100 and method are operational with numerous other general purpose or special purpose computing system environments or configurations, including well known computing systems, environments, and/or configurations that may be suitable for use with embodiments of the materialized XML view matching system 100 and method.
[0183]Embodiments of the materialized XML view matching system 100 and method may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. With reference to FIG. 17, an exemplary system for embodiments of the materialized XML view matching system 100 and method includes a general-purpose computing device in the form of a computer 1710.
[0184]Components of the computer 1710 may include, but are not limited to, a processing unit 1720 (such as a central processing unit, CPU), a system memory 1730, and a system bus 1721 that couples various system components including the system memory to the processing unit 1720.
[0185]The computer 1710 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by the computer 1710.
[0187]The system memory 1730 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 1731 and random access memory (RAM) 1732. A basic input/output system 1733 (BIOS), containing the basic routines that help to transfer information between elements within the computer 1710, such as during start-up, is typically stored in ROM 1731. RAM 1732 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 1720. By way of example, and not limitation, FIG. 17 illustrates operating system 1734, application programs 1735, other program modules 1736, and program data 1737.
[0188]The computer 1710 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG.
17 illustrates a hard disk drive 1741 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 1751 that reads from or writes to a removable, nonvolatile magnetic disk 1752, and an optical disk drive 1755 that reads from or writes to a removable, nonvolatile optical disk 1756 such as a CD ROM or other optical media.
[0189]The hard disk drive 1741 is typically connected to the system bus 1721 through a non-removable memory interface such as interface 1740, and magnetic disk drive 1751 and optical disk drive 1755 are typically connected to the system bus 1721 by a removable memory interface, such as interface 1750.
[0190]The drives and their associated computer storage media discussed above and illustrated in FIG. 17 provide storage of computer readable instructions, data structures, program modules and other data for the computer 1710. In FIG. 17, for example, hard disk drive 1741 is illustrated as storing operating system 1744, application programs 1745, other program modules 1746, and program data 1747. Note that these components can either be the same as or different from operating system 1734, application programs 1735, other program modules 1736, and program data 1737. Operating system 1744, application programs 1745, other program modules 1746, and program data 1747 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information (or data) into the computer 1710 through input devices such as a keyboard 1762, a pointing device 1761 (commonly referred to as a mouse, trackball or touch pad), and a touch panel or touch screen (not shown).
[0191]Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, radio receiver, or a television or broadcast video receiver, or the like.
These and other input devices are often connected to the processing unit 1720 through a user input interface 1760 that is coupled to the system bus 1721, but may be connected by other interface and bus structures, such as, for example, a parallel port, game port or a universal serial bus (USB). A monitor 1791 or other type of display device is also connected to the system bus 1721 via an interface, such as a video interface 1790. In addition to the monitor, computers may also include other peripheral output devices such as speakers 1797 and printer 1796, which may be connected through an output peripheral interface 1795. [0192]The computer 1710 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 1780. The remote computer 1780 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 1710, although only a memory storage device 1781 has been illustrated in FIG. 17. The logical connections depicted in FIG. 17 include a local area network (LAN) 1771 and a wide area network (WAN) 1773, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. [0193]When used in a LAN networking environment, the computer 1710 is connected to the LAN 1771 through a network interface or adapter 1770. When used in a WAN networking environment, the computer 1710 typically includes a modem 1772 or other means for establishing communications over the WAN 1773, such as the Internet. The modem 1772, which may be internal or external, may be connected to the system bus 1721 via the user input interface 1760, or other appropriate mechanism. 
In a networked environment, program modules depicted relative to the computer 1710, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 17 illustrates remote application programs 1785 as residing on memory device 1781. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
http://www.faqs.org/patents/app/20090327255
I wrote a function which generates 2 coloured image blocks:

    def generate_block():
        x = np.ones((50, 50, 3))
        x[:, :, 0:3] = np.random.uniform(0, 1, (3,))
        show_image(x)

        y = np.ones((50, 50, 3))
        y[:, :, 0:3] = np.random.uniform(0, 1, (3,))
        show_image(y)

Is this what you are looking for?

    def generate_block():
        x = np.ones((50, 50, 3))
        x[:, :, 0:3] = np.random.uniform(0, 1, (3,))
        plt.imshow(x)
        plt.figure()

        y = np.ones((50, 50, 3))
        y[:, :, 0:3] = np.random.uniform(0, 1, (3,))
        plt.imshow(y)
        plt.figure()

        c = np.linspace(0, 1, 50)[:, None, None]
        gradient = x + (y - x) * c
        plt.imshow(gradient)

        return x, y, gradient

To use np.linspace as you suggested, I've used broadcasting, which is a very powerful tool in numpy; read more here. c = np.linspace(0, 1, 50) creates an array of shape (50,) with 50 numbers from 0 to 1, evenly spaced. Adding [:, None, None] makes this array 3D, of shape (50, 1, 1). When using it in (y - x) * c, since y - x is (50, 50, 3), broadcasting happens for the last 2 dimensions: c is treated as an array we'll call d of shape (50, 50, 3), such that for i in range(50), d[i, :, :] is an array of shape (50, 3) filled with c[i].

So the first line of gradient is x[0, :, :] + c[0] * (y[0, :, :] - x[0, :, :]), which is just x[0, :, :]. The second line is x[1, :, :] + c[1] * (y[1, :, :] - x[1, :, :]), and so on. The ith line is the barycenter of x[i] and y[i] with coefficients 1 - c[i] and c[i]. You can do column-wise variation with [None, :, None] in the definition of c.
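The broadcasting claim above can be verified directly: with c of shape (50, 1, 1), row 0 of the gradient equals row 0 of x and the last row equals the last row of y. The seeded generator below is an assumption added for reproducibility, and the plotting calls are omitted:

```python
import numpy as np

# Two solid-colour blocks, as in the answer.
rng = np.random.default_rng(0)
x = np.ones((50, 50, 3)) * rng.uniform(0, 1, 3)
y = np.ones((50, 50, 3)) * rng.uniform(0, 1, 3)

c = np.linspace(0, 1, 50)[:, None, None]   # shape (50, 1, 1)
gradient = x + (y - x) * c                 # broadcasts to (50, 50, 3)

assert gradient.shape == (50, 50, 3)
assert np.allclose(gradient[0], x[0])      # c[0] == 0, so row 0 is x's
assert np.allclose(gradient[-1], y[-1])    # c[-1] == 1, so the last row is y's
```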
https://codedump.io/share/0kv153lQe2CJ/1/generating-colour-image-gradient-using-numpy
VM_MAP_FINDSPACE(9)    MidnightBSD Kernel Developer's Manual    VM_MAP_FINDSPACE(9)

NAME
     vm_map_findspace — find a free region within a map

SYNOPSIS
     #include <sys/param.h>
     #include <vm/vm.h>
     #include <vm/vm_map.h>

     int
     vm_map_findspace(vm_map_t map, vm_offset_t start, vm_size_t length, vm_offset_t *addr);

DESCRIPTION
     The vm_map_findspace() function attempts to find a free region of at least length bytes within the map, beginning the search at address start; the start of the region found is returned in *addr.

IMPLEMENTATION NOTES
     It is the caller's responsibility to obtain a lock on the map using vm_map_lock(9) before calling this function.

     This routine may call pmap_growkernel(9) to grow the kernel's address space, if and only if the mapping is being made within the kernel address space, and if insufficient space remains in the kernel_map.

RETURN VALUES
     The vm_map_findspace() function returns the value 0 if successful, and *addr will contain the first virtual address in the found region; otherwise, the value 1 is returned.

SEE ALSO
     pmap_growkernel(9), vm_map(9), vm_map_entry_resize_free(9), vm_map_lock(9)

AUTHORS
     This manual page was written by Bruce M Simpson 〈bms@spc.org〉.

MidnightBSD 0.3                  July 19, 2003                  MidnightBSD 0.3
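The first-fit search the man page describes can be illustrated with a short sketch. This is only an illustration of the documented semantics, not the kernel's implementation; representing the map as a list of (address, size) allocations is an assumption:

```python
def find_space(allocations, start, length, map_end):
    """Scan the sorted (addr, size) allocations for the first gap of at
    least `length` bytes at or after `start`.  Returns (0, addr) on
    success or (1, None) on failure, mirroring the 0/1 return values."""
    addr = start
    for base, size in sorted(allocations):
        if base + size <= addr:
            continue              # allocation lies entirely below the cursor
        if base - addr >= length:
            break                 # the gap before this allocation is enough
        addr = base + size        # otherwise skip past this allocation
    if addr + length <= map_end:
        return 0, addr
    return 1, None
```

For example, with allocations at [0, 10) and [20, 25) in a map ending at 100, a 5-byte request lands at address 10, while a 15-byte request must go past the second allocation to address 25.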
http://www.midnightbsd.org/documentation/man/vm_map_findspace.9.html
Hello, I'm a Java newbie but I am enjoying the experience so far. I am currently stuck on what seems to be a trivial problem, but so far it has me stumped! I'm reading in lines of text one at a time, and feeding them into a scanner to break them up into individual words. All works fine. I now need to analyse the words emitted from the scanner and check them. My particular application permits comments in the text file, in the following form: ( this is a comment ) So an opening bracket will set a flag, such that no further text processing will occur until a closing bracket is encountered. However, I have got stuck on the detection of the first opening bracket. My code is not detecting it, and I can't see why. Here is the code:

Code :
package com.f3;

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.Scanner;

public class F3Assembler {

    //class level globals

    public F3Assembler(String inputFile) throws FileNotFoundException {
        BufferedReader br=null;
        int lineNumber=1;
        boolean inComment=false;
        String forthWord;
        Scanner sc;

        try {
            String sCurrentLine;
            br=new BufferedReader(new FileReader(inputFile));
            while((sCurrentLine=br.readLine())!=null) {
                System.out.println("Line " + lineNumber + ": " + sCurrentLine);
                sc=new Scanner(sCurrentLine);
                while((sc.hasNext()) && (!inComment)) {
                    forthWord=sc.next();
                    if(forthWord.substring(0, 0)=="(") {
                        inComment=true;
                        System.out.println("Comment detected; ignore rest of line");
                    } else {
                        System.out.println("word: " + forthWord);
                    }
                }
                System.out.println("");
                lineNumber++;
                inComment=false;
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if(br!=null) br.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }
}

For some reason, the line:

if(forthWord.substring(0, 0)=="(") { .... }

is not triggering. I've tried various things, like:

if(forthWord=="(") { ... }

But to no avail.
I've also checked the length of the strings coming in, to make sure (for example) that there isn't a trailing space or something being passed back by the scanner, but the scanner *is* working properly. Any idea what the problem could be? Any help/pointers greatly appreciated. Many thanks. --- Update --- Ah! I think I've found it. The == operator is checking the *objects* to see if they are the same, and, of course they're not. Looks like .equals() is the one for me. Okay, that makes sense! I'm quite enjoying this Java thing!
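For readers hitting the same wall: there are actually two problems in the snippet. Besides the `==` versus `.equals()` issue the poster found, `substring(0, 0)` uses a half-open range and so returns an empty string, meaning the comparison could never match even with `.equals()`; `substring(0, 1)` or `startsWith("(")` is what's needed. The same half-open behaviour is easy to demonstrate with Python slices (shown in Python only as an analogy to the Java calls):

```python
s = "( this is a comment )"

# Half-open ranges: like Java's substring(0, 0), s[0:0] is the EMPTY
# string, so comparing it to "(" can never succeed.
assert s[0:0] == ""

# The one-character prefix is s[0:1] (Java: substring(0, 1)).
assert s[0:1] == "("

# Value comparison (Python ==, Java .equals()) is the right tool here;
# Java's == on objects checks identity, not content.
prefix = "".join(["(", ""])   # a string object built at runtime
assert prefix == "("
```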
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/27354-testing-string-single-character-printingthethread.html
I'm having trouble with a C++ assignment. I need to write a program like this: Write a program that displays five rolls of two dice, where each die is a number from 1 to 6, and shows the total. When run, the program's output should look somewhat like this:

2 4= 6
1 1= 2
6 6= 12
4 3= 7
5 2= 7

OK, I got the program to display the output, but it doesn't randomize differently every time I run it. It has to be in some sort of a loop. Here's what I got so far:

/*dice
2/18/03
*/
#include <iostream.h>
#include <math.h> //Randomizer
int main()
{
int die
for (int roll= 1; roll<= 5; roll++)
double die=1+random(6)
cout<<die<<endl;
return(0);
}
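The symptom described (the same sequence every run) is the classic result of never seeding the generator; in the C++ code, calling srand(time(0)) once before the loop fixes it. The effect itself is easy to demonstrate, shown here in Python for brevity with its seedable random.Random generator:

```python
import random

# Two generators with the same fixed seed produce identical "rolls" --
# the behaviour seen when a program never seeds its PRNG.
a = random.Random(42)
b = random.Random(42)
rolls_a = [a.randint(1, 6) for _ in range(5)]
rolls_b = [b.randint(1, 6) for _ in range(5)]
assert rolls_a == rolls_b

# Seeding from the clock at startup (the srand(time(0)) idiom) makes
# each run start from a different point in the sequence.
```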
http://cboard.cprogramming.com/cplusplus-programming/36706-cplusplus-asignment-printable-thread.html
The basic idea is I put each element of the string in a stack. Each time I find a closing bracket, I traverse back until I find the corresponding opening bracket. I put whatever is between these two brackets in a string str, and then I keep popping the stack until I find how many times I should repeat str. I repeat str and then push it back onto the stack. By the time I reach the end of the original string, the join of all the elements in the stack is the answer.

def decodeString(self, s):
    """
    :type s: str
    :rtype: str
    """
    stack = []
    str = num = ''
    for c in s:
        if c != ']':
            stack.append(c)
        else:
            while stack[-1] != '[':
                str = stack.pop() + str
            stack.pop()
            while stack and stack[-1].isdigit():
                num = stack.pop() + num
            num = int(num)
            str = str * num
            stack.append(str)
            num = str = ''
    return "".join(stack)
https://discuss.leetcode.com/topic/58867/simple-python-solution-using-stack-with-brief-explanation
Arduino Water Thermostat

I am just finishing up a project at work, which requires two water tanks to be maintained at 180F. Each tank has twin 1500 watt, 240vac heating elements, each controlled by a SSR (Solid State Relay). My Arduino Mega 2560 reads two DS18B20 temp sensors (one in each tank), and maintains the temperature within a 5 degree window. I display both tank temperatures on a LCD, and control the color of two RGB LEDs: blue for under temp, green for correct temp, red for over temp. The photos are here. The following is the working code for the project.

#include <OneWire.h>
#include <DallasTemperature.h>

For more detail: Arduino Water Thermostat
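The control loop described above amounts to simple hysteresis plus a colour decision. The sketch below is a hypothetical reconstruction of that logic for illustration: the exact switching thresholds and the interpretation of the 5 degree window are assumptions, and the real project reads DS18B20 sensors and drives SSRs rather than taking a temperature argument:

```python
def thermostat_step(temp_f, heater_on, setpoint=180.0, window=5.0):
    """One control step: hysteresis for the heater, a colour for the LED.

    Assumed policy: switch the element on below (setpoint - window) and
    off once the setpoint is reached; the LED is blue under the window,
    green inside it, and red above it.
    """
    if temp_f < setpoint - window:
        heater_on = True
    elif temp_f >= setpoint:
        heater_on = False

    if temp_f < setpoint - window:
        colour = "blue"
    elif temp_f <= setpoint + window:
        colour = "green"
    else:
        colour = "red"
    return heater_on, colour
```

The hysteresis band is what keeps the SSRs from chattering on and off around the setpoint.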
http://duino4projects.com/arduino-water-thermostat/
Hello, Everyone: My friend and I were doing a C++ program which relates to dice. Here is the problem. Here is the code:

Code:
// Include files
#include <iostream> // used for cin, cout
#include <conio.h>
#include <iomanip>
#include <cstdlib>
#include <ctime>

using namespace std;

// Global Type Declarations

// Function Prototypes
void instruct (void);
void pause ();

//Global Variables - should not be used without good reason.

int main ()
{
    // Declaration section
    const int arraySize = 13;
    int die1, die2, frequency[ arraySize ] = { 0 };

    // Executable section
    instruct ();
    srand( time( 0 ) );

    //Roll Dice 36000 times store result in frequency array
    for ( int roll = 0; roll < 36000; ++roll )
    {
        die1 = 1 + rand() % 6;
        die2 = 1 + rand() % 6;
        ++frequency[ die1 + die2 ];
    }

    //Column Headers
    cout << setw( 10 ) << "Dice Combination" << setw( 17 ) << "Frequency\n\n";

    //Start at Element 2 To Skip Elements 0 and Element 1
    for ( int combo = 2; combo < arraySize; ++combo )
        cout << setw( 9 ) << combo << setw( 18 ) << frequency[ combo ] << endl;

    pause ();

    return 0;
}

void instruct (void)
{
    // Declaration section
    cout << "This program will simulate the rolling of two dice 36000 times. "
         << "It will then\nprint out the results of how many times each "
         << "combination of the two dice\nappeared. You should see in the "
         << "results that the 7 combination will appear\nthe most with the 2 "
         << "combination and the 12 combination appearing the least.\n"
         << "_______________________________________________________________"
         << "_____________" << "\n" << endl;

    // Executable section
}

void pause ()
{
    // Declaration section

    // Executable section
    cout << "\nPress any key to continue...";
    getch();
    cout << "\r";
    cout << "                            ";
    cout << "\r";
}

However, the g++ compiler says that <conio.h> does not exist:
Code:
dice.cpp:3: conio.h: No such file or directory

I understand <conio.h> is used in C, not in C++; however, I need help finding out what <conio.h> means in C++. Please, someone help us figure out the problem and what conio.h means in C++. I will appreciate any help. Thank you.
http://cboard.cprogramming.com/cplusplus-programming/72984-using-conio-h-cplusplus-printable-thread.html
AOL Blocking Open Source IM Clones ... Again 267 jeremie asks: "AOL has been attempting to block access to AIM via Jabber, GAIM, and other open source projects based on libfaim. Both Jabber.org and Jabber.com have issued statements, and are welcoming AOL to work together with the community in creating an open server to server interoperability solution that meets their FCC Conditions." This kind of crap makes me glad that I never completely made the move away from IRC. Of course, this isn't the first time AOL has tried to pull this off, and it seems that the supposed FCC intervention that was supposed to open the AIM protocol has fallen thru. With all of this back and forth on the issue from AOL, do we really need to use their system at all? not tik (Score:1) -- Brian Voils "A university is what a college becomes when the faculty loses interest in students." Everybuddy... (Score:2) Yes... (Score:4) AOL needs to be forced to open this up... the FCC failed, the real question is what should be the next course of action. Interoperability (Score:1) Do we really need to use their system? (Score:2) And some of us are forced to use AIM as a workplace tool. (scary, I know) In short, do we need to use AIM? No. Do we want to use it? For most of us, no. Are we stuck using it for lack of alternative? Unfortunately, yes. That is why this hurts so badly. They are trying to kill off all chance of a reasonable alternative for those of us that loathe AOL but just can't help but like one of the two chat systems they control. aol is "closed" minded. (Score:1) Re:Yes... (Score:1) Fight censors! Re:not tik (Score:2) Maybe it's just my libertarian leanings, but... (Score:3) A unified standard (Score:3) I try to convert them to ICQ, but even I admit ICQ is far worse than it was at the beginning - horribly bloated and adding features no one I know gives a damn about. I have ICQ and MSN Messenger running on my system now since it's what friends of mine use. 
I have two people on MSN, but have to use it to chat with them. If a good friend was using AOL Messenger, I'm sure I'd add that as well. We don't need AOL Messenger opened up as much as we need a new standard for everyone to use. If we had to use different mail programs to write to people we would be up in arms, but we put up with this IM isolation because we have to. I tried alternatives to ICQ a couple of years ago and they were all substandard. What will it take until some standards body moves in and gives us a standard that everyone has to adopt? Maybe it's just a pipe dream, but AOL staying private seems like nothing compared to having different standards in my book.

There are a few issues confused here... (Score:4)

Face the facts (Score:2)
Fact of the matter is, aside from the coding of GAIM, FAIM, and others, these clients all need to connect to AOL's servers, which can cost AOL a fortune. Sure, those who use the clients (FAIM, GAIM, others) will complain about this, but when AOL created their Instant Messenger, they created it with the intention of having AOL subscribers use it. After a while they opened it up to outside sources. Now these outside sources (people who don't use AOL) who download the AOL IM program are subjected to advertisements and other gimmicks which create revenue for AOL. These open source clients bypass all that gooey crap (which IMO is a good thing), so one should see clearly why AOL would want them banned. I'm hoping this was a sarcastic question. NCR Codebreakers [antioffline.com] (Enigma machines)

Slow down folks. (Score:4)
Although in many respects it would be desirable for AIM to open up their protocol, they haven't yet. They don't act out against TOC/OSCAR clients, though, and so that's good enough. TOC/OSCAR does have limitations compared to the full protocol, but it's still more than usable.
And rather than go whining about how a library that was just a reverse-engineering job was broken, reverse-engineer it again, or use the library that isn't broken. Now stop crying and get Everybuddy. Or Netscape 6. Or use AOL's quickbuddy. Or, god no, something other than *nux. -Andrew

Time to replace IM's with OLM's. (Score:2)
A standard to be run by the ISPs in much the same way that e-mail is run. A standard which is open, and indisputable (yeah, right), and which no one body can control. A standard which uses an address convention that is universal to the net, in much the same way email@domain.com is universal. Perhaps an OLM* standard could pop up that operated similar to current software... had built-in anti-spam measures... was open and free... and was operated at the expense of ISPs (a value-added service that would form by itself if the software was available.) Just think... maybe name*domain.com or some-such address. The replacement for e-mail. Someone writes a bunch of open sourced servers for it, someone writes a bunch of open sourced clients, and boom... The OLM's replace the IM's with a free alternative that nobody can control. This is an idea I've been going over in my head for ages -- and I've even considered working on it myself but I'm not sure where to begin. (OLM = On-Line Message. I hate the term IM... it's stupid AOL-induced crap. BBS users remember the term OLM from years back. Even ICQ windows clearly state along the top "online message". It's time we replace the IM's with OLMs I say!) "Everything you know is wrong. (And stupid.)"

Bunch of BS (Score:2)
Perhaps it wouldn't be so bad if AOL didn't control the two most popular IM services. I remember when ICQ was a struggling "beta" service that actually had some quality and usability to it. Now it's laggy, buggy, and filled with security holes beyond belief. And now we have AIM, which is not an altogether powerful system, but it's always worked well for me.
Of course, now it doesn't work at all, simply because I'm not willing to use their "official" client. I use Jabber and GAIM, and now I'm cut off from the people I talk to on there on a regular basis. With the way this is going, I won't be shocked or saddened if I see AOL/Time Warner in an antitrust case by the government, a la Microsoft. In fact, I look forward to it. This BS has gone on long enough, and it's just not acceptable. P.S. If you ask me, we should all use IRC anyway.

Re:Yes... (Score:2)
Nobody is locked into AIM and ICQ. They can use an IRC client (blech), Yahoo! Messenger (for Win32/Mac/Unix/Java), a web-based app, a Java app (often the same thing as the web-based thing), any number of cheesy open source projects, or -- my favorite -- just plain old talk/ntalk. I don't see what's so bad about talk. Seriously. I type and someone else sees it. That's all I need... well, that and e-mail file attachments.

Sure, but how are you going to switch everyone? (Score:2)
And as long as everyone I know is using AIM and not another system, I'm afraid no matter how cool a competing system is, it's just not useful, since the entire point is to talk to the people I know. But if you do want to attempt this, please make an easy-to-use and easy-to-install Windows client (easy to create new accounts, little to no setup required [intelligent defaults], etc.) or else you'll never get a big enough userbase to make it useful...

Re:Everybuddy... (Score:2)
MSN Messenger has supposedly surpassed AIM in number of users, so this is possibly a reaction to that. This really isn't all that surprising to me, though. AOL wants to retain their stranglehold on Instant Messaging, Microsoft poses a threat, and the open source clients happen to get trampled between them. Personally, I think it's a good thing. If AIM and MS keep fracturing the IM population, that gives protocols like Jabber a better chance of exposure.

AIM not even compatible with AOL (Score:2)
The only successful way for me to use AIM on my computer while logged in to AOL is to use AOL version 5.0, and an older version of AIM, with my AOL Privacy Preferences set to block all messages. This is absolutely ridiculous! I also think that it is ridiculous that AIM has ads for AOL, even when you are logged in as a paying AOL customer - why try to market to your customers something they already have? While using AOL 6, and the latest version of AIM - the only one AOL 6 allows to run - the only way to use AIM is to log in to AIM under another screen name - that's my only alternative. So before we begin to worry about AOL opening its messaging networks to other companies and networks, I think that AOL needs to bring unity to their own software. r. ghaffari

Jabber's Advantage (Score:3)
For those of you who haven't used Jabber yet, you should check it out, it really is the most convenient IM system out there.

Re:Yes... (Score:3)

They have the right to do this (Score:3)
With the proliferation of many different messenger systems, all those AIMers are going to be cut off from their friends who use MSN/Yahoo/ICQ. The motivation to use AIM diminishes as other messengers take off. So instead of AOL joining the community at large, they are creating a substantial, yet isolated community. It is a stupid mistake from the standpoint of a greater, diverse internet. A smart move from the standpoint of keeping a captive audience. But in the end, they are just shooting themselves in the foot, because if you are using AOL, you really don't need AIM to communicate with other AOLers, but you will need another messenger to chat with your friends on MSN. AOL just has a large enough ego to think these companies are clamoring to gain access to their herd of people. That may be partially true, but I believe it is more about these other applications trying to give their users as much versatility as possible, something AOL should think about. r.
ghaffari

why should they? (Score:2)
Ok, so perhaps it's a little silly on their part, but they have a right to make their own mistakes. Forcing AOL to allow access is on a par, legally, with Microsoft forcing a rejection of the GPL because they whine hard enough about "needing" to do so. Use AOL's own software, or stop whinging about it. I mean jeez, if it's that important, dual boot or use the PalmOS version. OR JUST USE AOL'S SOFTWARE.

Consession? (Score:2)

Re:FCC really needs to regulate IM (Score:2)

Re:Sure, but how are you going to switch everyone? (Score:2)
Sad that you can make a file available on the web for free, yet no one will download it. Give them a hard copy via junk mail, and they actually give it a shot. Hmmm... maybe you meant that as a joke, but on the chance that it's not: AOL does not distribute AIM via hard copy. And really, only about 10% of the people I interact with on AIM use AOL. The reason AIM is a success over other IM products is that (1) It has a very simple, friendly interface. Nothing keeps people using a product like a pretty interface. Believe it or not, people who aren't geeks like you prefer comfort over functionality. (2) It was almost first to market. And then it bought out the one (ICQ) that actually was. The big thing about any communication protocol is saturating the terminal points with it. If IRC, or even ICQ, met requirement (1) above, either of those would have grown enormously, and would be in the position AIM is now. -Andrew P.S. And don't respond if you can't figure out my point. I'm piss drunk. I don't need to have a point.

Re:Sure, but how are you going to switch everyone? (Score:2)
AOL software was what was mailed out on disks, and while it's popular, it's not as ubiquitous as AIM - even a large portion of the millions of people who don't use AOL have gone out and downloaded AIM, to the point where a huge percentage of internet users use AIM.
A monumental first (Score:4)

It's called IMUnified (Score:2)
You can go to [imunified.org] for some early information on it. The members include AT&T, Excite@home, MSN, Odigo, Phone.com, Prodigy, and Yahoo! AOL's been battling this all the way. As for AOL, I think they should be able to do whatever they want with AIM/ICQ, since it's their product. Knock off the calls for regulation by government members who don't know anything about these technologies. BTW, this doesn't violate anything that they agreed to in order to merge with Time-Warner. The IM thing they agreed to was a very narrow clause about IM and high-speed networks, I believe. Possibly about high-speed wireless, I forget. Really, though, I know that down the road they're going to want badly to interop with the clients above — they're only going to screw their own users once the other ones get popular, especially since MSN now has more people using their IM than are using AIM. The same thing's going to eventually happen to ICQ if they wall themselves off from everyone else. Here's the really ironic thing about this particular situation, though. They were just complaining to the DoJ last Friday about the possibility of them being shut out by Microsoft's HailStorm initiative (which right now is planning to interop with IMUnified -- MS wants traffic through their system more than they care about whose client/OS you access it with, hence the recent talk about .NET stuff on other platforms). After AOL started up with this talk, AOL's blocking of non-AIM/ICQ users was brought up, and lo and behold, by Monday they come out with this hilarious rationalization: "AOL suggested that its efforts to open its instant-messaging system to rivals could be affected by Microsoft's attempt to incorporate the messaging service into its Web-based programs." AOL Executive VP Kenneth B. Lerer even says, "We are working toward interoperability with conviction and expect to be in a position to begin testing this summer."
This latest move sure clears that up, don't it? :) Cheers,

Re:They have the right to do this (Score:2)
-Andrew

GAIM functionality (Score:5)
The problem basically lies with AOL trying to block Jabber. This has been going on for several days -- since Monday, I believe. We just sort of caught a stray bullet this time, so to speak. Good luck to the Jabber guys. I would like to see some communications with AOL as well. Peace, Rob --- Rob Flynn

Re:Slow down folks. (Score:3)
Actually, TOC and Oscar are two very separate protocols. And Gaim can do them both, actually :) Which is good for Gaim; if one of them ever stops working (which happens more than AOL would probably like to admit) you can easily switch to the other one. Gaim's the only client that lets you choose between the two protocols.
> And rather than go whining about how a library that was just a reverse-engineering job was broken, reverse-engineer it again, or use the library that isn't broken.
Actually, the library got it completely right; it's the clients using it that got it wrong. There's a particular string that the client sends which AOL is filtering on.

MSN Messenger (Score:2)
> But if you do want to attempt this, please make an easy-to-use and easy-to-install Windows client
MSN Messenger.
> easy to create new accounts
Just sign up for a Passport at hotmail.com and you have an MSN Messenger account.
> little to no setup required [intelligent defaults], etc.) or else you'll never get a big enough userbase to make it useful...
Jabber's MSN transport still works. Isn't MSN almost beating AIM now in user base?

Re:Already there (Score:2)
Besides, support for the other networks only ensures it will never become the standard. The idea I'm suggesting would work like a hybrid of both current-day e-mail and Instant-Crap software. In a perfect situation...
* Each ISP hosts a server for their customers (and other servers would of course exist, much as it does with e-mail).
* Messaging would of course be peer-to-peer, not requiring the servers after login has taken place. (Login is only there to validate online status and present requesting clients with the last known IP of the user.)
* And it WOULDN'T NEED TO SUPPORT OTHER NETWORKS.
* Anti-spam, encryption, and other such features could be designed in from the ground up, rather than horribly shoehorned in later on.
But unfortunately, I know this vision will never be realized. "Everything you know is wrong. (And stupid.)"

Why doesn't Kit use TOC? (Score:3)
While AOL has gone and changed/broken the TOC standard, it doesn't happen often, and the changes are easily circumvented (since they always keep TiK, TNT, and QuickBuddy working). AOL never even made a pretense of documenting OSCAR, though, so they can break it whenever they want. And when they break it, we don't have the source to their older OSCAR clients for comparison. AOL will do what it wants with AIM. Like it, or start moving to Jabber, as I'm doing.

Er, Why does Kit use TOC? (Score:2)

Re:MSN Messenger (Score:2)
2) Unless the people I interact with are significantly different from the average person, MSN is nowhere close. Two of my "in real life" friends/acquaintances use MSN, while every single one of them uses AIM (even the two who also use MSN). I can talk to everyone on AIM, but very few people on MSN (ICQ is a little better - 6 or 7 of my friends use it...but compare to the 109 who use AIM).

The problem with proprietary clients (Score:2)
How do you know that the 8Mb (or whatever) executable doesn't send back (over its proprietary, no-user-serviceable-parts-inside protocol) information they may be interested in? Like what hardware/software you have, or even what MP3 files are on your system (remember, AOL Time Warner is a big chunk of the recording racket). Or, once UCITA is law in your state, are you so sure that AOLTW's latest client won't take summary action and delete MP3s by Warner artists on your system?
The possibilities are limitless. The key point is that proprietary software doesn't serve you but its creator; you benefit where your interests align with theirs, but where they don't, you know who will prevail.

Why Closed protocols suck (Score:4)
This is good for AOL, but bad for the internet. The problem with the internet at the moment is that it has major applications that do not have simple, open, commodity protocols accepted by the majority of users of that service. What lets email work so well is RFC 821 and RFC 822, defining the transport and the format of email so that clients and servers need not be tied to each other. Now instant messaging in its current state is horrible. We have a disjoint set of non-structured namespaces (BigMan200, anybody?), we have a single centralised server, and the protocol is closed. Of course, most IM issues were solved by email years ago. Unless I'm being very dense, it wouldn't be too hard to make IM ids similar to email addresses (I have a sneaking suspicion that Jabber does this, but I haven't looked at it closely enough). This is the major strength of the internet: the openness, simplicity and strict focus of the protocols employed by most internet clients. For all but a few protocols, the communication can be done by a clueful individual with a telnet client. (I have done this, and it is a lot of fun. EHLO everyone!) If the AIM protocol remains closed and binary, it will stay linked with AOL. We don't need another proprietary protocol polluting pathways with packets parsable by 'proper' programs. Now remember, this is an opinion. Yours may be different, and I like to change mine if I see one that looks good. James,

Which ammendment gave YOU the right to use AIM? (Score:5)
Taco, what the HELL are you talking about!? "This kind of crap"? "tried to pull this off"? IT'S THEIR NETWORK! I use GAIM myself, and I did find myself shut out this afternoon, but I don't blame AOL.
It's their network, and they can do as they please with it. You act like you have some sort of right to use stuff other people maintain, and you expect to have it free. What the hell? Yes, AOL makes money through those banner ads, and they use them to support the service. If the ads aren't showing up on your screen, then they aren't making money off of you, and THEY DON'T OWE YOU ANYTHING! That whole thing was exaggerated. I work at an IM company (none you've ever heard of), and while I'm not clear on the details, I have been told that the FCC thing does not apply to AIM itself, or the OSCAR protocol. You still aren't allowed to use it without AOL's permission. If you don't like their system, don't use it! I don't understand... You complain about something you get for free. YOU HAVE NO RIGHT TO COMPLAIN ABOUT SOMETHING YOU GET FOR FREE! If you don't like it, don't fucking use it. OTOH, you could always do what I did: Switch to TOC. Voila, GAIM works again. You can't check people's away messages, but it works. ------

Re:Consession? (Score:2)

Cliff, not Taco, my bad (Score:2)
------

Left arm doesn't know what right arm is doing ? (Score:2)
Also for more amusement, try loading in Mozilla - it tells you to upgrade to Internet Explorer !!

Re:Already there (Score:3)
The Jabber protocol is open, the server is open source, and the clients can be open source, closed source freeware, commercial, or whatever you want to licence them under. Under development is JabberZilla [mozdev.org], which is going to be a cross-platform Mozilla-based client that will offer similar functionality to the AIM with Netscape 6. Opera currently supports ICQ in their version 5 Windows browser; there are people who want them to change to Jabber support. Voice your views in the opera.wishlist [opera.no] newsgroup (on news.opera.no).

Re:Time to replace IM's with OLM's. (Score:3)
How about piggybacking on an existing system? Say, SMTP/POP? Let's imagine a chat PTP app that runs BESIDES SMTP/POP.
Let's say it uses the NTALK protocol to keep things clean. The problem with dynamic IP is to keep track of your buddies' IP addresses. No sweat: You connect. The chat app E-MAILS all your buddies a short message: "Yo! I'm online at 247.308.133.32 @ 12:33+06". It also LOOKS regularly at your inbox for exactly such messages from your buddies telling you their IP addresses. Now, the chat app updates your buddies list with their "new" (improved?) IP addresses. It also checks each of those addresses to see who is STILL online since they sent their last "yo!" message, and updates the list accordingly. Wanna chat? Just double-click on your buddy's name in the "online" list, and voilà, pops open a window of PTP chat/file exchange/whatever with your buddy. CUCKOO!!! There, it's simple, clean, **STANDARD**, and, most importantly, **ISN'T CENTRALIZED**, so it can't be tapped into nor shut down. Freechat anyone? --

Re:The problem with proprietary clients (Score:5)
Then, all of the most vile and evil things said by users who don't know they are actually being listened to through their microphones are force-fed back into the minds of the general public (via thought-controlling microwaves) and this is what's contributing to the downfall of America's Youth (see recent school shootings), not to mention global warming, the spread of AIDS, Bush as US President, and the California Energy Crisis.
As for AOL.com telling people to upgrade to IE, well I thought it was funny, I hope they do plan to use Mozilla once their contract with MS runs out. If I were AOL I'd sure not trust the future of my browser on MS. It ain't libfaim, folks (Score:2) Congratulations to Jabber.org on reaching this milestone. Dropping AIM not feasible (Score:2) Why do we NEED it at all? (Score:4) Here's the answer. We create such a system. Don't make ANY effort to be compatible with the AOL systems. This is designed to replace, not to coexist with those systems. Create it to be bug free and cross platform, of course. Then... add THE feature. Whatever feature will draw in the 90% of the users for whom it is a challenge locating the start button, like most MCSE's. Now...here's the trick. If this system were to become extremely popular, such that it actually rivaled the other services, they would probably add in support for it (being an open protocol, they certainly could). The trick would be forcing open their system as well in the process, although I don't think the GPL can reach THAT far. Wishful thinking, yes yes I know. -Restil Re:There are a few issues confused here... (Score:2) Instant messaging is currently hobbled by islands of proprietary protocols, and AOL's efforts to stop the tide of interoperability are as pointless as Canute's. Re:Yes... (Score:2) Perhaps AOL might be more amenable to 3rd party companies piggybacking their service if they put some money on the table? TOC is not OSCAR (Score:2) Re:There are a few issues confused here... (Score:2) TOC is *not* a fully-supported protocol. A lot of the features are missing, plus it's only half-implemented. Furthermore, the FCC has *mandated* that AOL open up their servers to any compatible clients. AOL's terms of service are meaningless here...the FCC has told AOL that they must allow any clients, not just their own. That was one of the conditions of their buyout^H^H^H^H^H^Hmerger with Time Warner. 
Question for Taco (Score:2)
Assuming AOL is willing to consider this as a business case, how much are YOU as a user willing to pay for this use of AOL's servers? If your answer is zero, then shut the fuck up.

we must help (Score:2)
We must stop the repression! How dare AOL stop people who aren't AOL subscribers from using their services? What Nazis! Not wanting you to use their servers because you refuse to use their client? What scum! Fight back people! We shall have a revolution! (someone call Katz)

Re:Already there (Score:2)

Re:Which ammendment gave YOU the right to use AIM? (Score:3)
Of course, AIM is free, so a better analogy would be: If company B gives free calls to all other members of company B, and half your family uses company B, but you live in a small town in the middle of nowhere and you cannot get access to company B's services, then you're still out of luck if you have phone services, but they are not company B's. Open is a good thing. Telephone companies are still able to gain profits, while having open standards for communicating with one another. Bren.

What the hell? (Score:2)
The government has NO business intervening here, and no one has a god-given right to their code.

Re:Bunch of BS (Score:2)
Hmm. You should probably stop using AOL, then. I mean, if you're not satisfied with the service that they provide, you should really just stop using their service completely. Then, their zany shenanigans won't even matter to you. Oh, you're under the impression that you're not using AOL when you use the AIM and ICQ networks, aren't you? At the risk of sounding like a total ass, cry me a friggin' river. Don't like the fact that AOL changes things on services that they own and they maintain? Don't use them! How, exactly, did you get it in your head that AOL is somehow duty bound to let you use their free service with whatever software you want? Cut off from the people you talk to?
Well, looks like you have two choices, then: swallow your pride (careful not to choke) and use the evil, nasty, icky AIM client, or convince all your friends to join you on #AOheLl5uX0rZrocks for great justice. My, but I've overlooked a third option: bitch and moan about it and propagate the idea that we need yet another antitrust lawsuit setting yet another Really, Really Bad Precedent in the tech industry. Brilliant idea. Do you like the idea of companies (and OS projects) not being free to make changes to their own systems without first going through some n-month-long review and notice period? Because that's what you're talking about. A successful antitrust case would hurt AOL directly, but it would also set a really, really unpleasant precedent for anyone else looking to create an IM network. The road to victory for Open Source is not paved with lawsuits and anti-trust cases. That struggle will ultimately be won by the side that can pump more money and better lawyers into the legal system. The Open Source movement will win, if it does, by producing better, faster and cheaper software than Closed Source can. Instead of complaining about your inability to run free in a Closed IM network, focus instead on contributing to an Open one. Then you won't ever need to worry about being on the receiving end of cheap tricks.

Re:ICQ Bloated (Score:2)

Re:Yes... (Score:2)
I can sympathize with GAIM users since that's a grassroots kind of thing and perhaps AOL should make allowances for that kind of product. It certainly shouldn't make allowances for money-making ventures such as Jabber.com.

Re:TOC is not OSCAR (Score:2)
AOL does let people use TOC, yes, but that doesn't mean they support it. Over time TOC has steadily drifted away from what the original protocol spec says. Little changes here and there have managed to break nearly every TOC client other than TiK at one point or another. Also, TOC keeps losing features.
It used to have things like toc_dir_search to search by directory info and email address, but this only works in Oscar now. Also, there are things that will *only ever* work in Oscar - such as getting away messages or making File Transfer requests. AOL has stopped developing TOC, and left it in a rather sorry state. So while AOL *lets* us use TOC, they don't *support* TOC.

I still don't understand this... (Score:2)
However, the major point of this post is that I honestly don't understand why everyone gets so worked up about what AOL does with AIM. Let's look at this logically. While we've always had things like talk/ntalk, AOL really pioneered the instant messaging field with AIM and ICQ (yes, I count ICQ as AOL's because they bought it, so any innovations by Mirabilis belong to AOL now). It's their servers, their network, their software, and their innovations. So basically, why can't they do whatever they want with it? You don't like it? Write your own. Lots of people have, and there are open standards projects. If the open standards are good enough and enough people adopt them, then AOL will have to join or fade away. But no one really has a place to tell AOL what to do or not to do with AIM. It's theirs, completely. And if they don't want to let MSN or Yahoo play in their sandbox, it's their decision. And if they want to keep the OSCAR protocol for the "official" clients and only let everyone else use TOC, it's their choice as well. Let's remember, folks, having a monopoly on something is not illegal. It's how you use that monopoly. AOL isn't trying to squash the open standards projects for IMs, and they're not trying to run MSN and Yahoo and the rest of the people who have developed IM clients out of business. They're just running their own IM system as best they can, getting new signups, and trying to enforce their rules about how the system is used. Which is all perfectly legal, and well within their rights. ---

Re:There are a few issues confused here...
(Score:2)
Actually, that's only partly true (look, I previewed this time! Yay!). temas (one of the Jabber developers) and I went through the document stating all the rules and regulations that AOL has to follow as a result of the merger. The part about IM has been greatly misread by a lot of people. AIM only has to make new "advanced, IM-based high speed services ('AIHS')" open. AIHS services include things such as video conferencing (which is pretty much the only example the document gives). So basically, as long as AOL doesn't add video conferencing to AIM, they don't need to tell Jabber squat about how it works, or make any offer of interoperability. However, the moment they have a working implementation of video conferencing, they have to call Jabber and ask where they should send the protocol spec. You can read a copy of the deal at 0 01/fcc01011.pdf [fcc.gov]. Note especially page 3-4, where it talks about the IM service. It's one long sentence so beware.

Re:Why Closed protocols suck (Score:2)
No. We have a bunch of separate namespaces, each with their own centralized server. And each one has its own protocol, which is either open or closed. But it doesn't matter, because they don't talk to each other. Again, no. They should not be allowed to lock up the idea, but they can do whatever they want with the protocol. They wrote it. Don't like it? Write your own protocol. No one's stopping you, least of all AOL. And once again, no. If AOL wants to run a private system, it's their right. How would you like it if you developed a huge intranet with a nice customer database or such, email, and all kinds of other services, gave access to it to a few people (friends and coworkers) and the government came along and said "Oh, this is nice. Now you have to run this service and let everyone in the world access it as much as they like." Cause that's what you're saying they should do to AOL.
I would agree with you wholeheartedly if AOL was trying to prevent everyone else from developing their own IM protocols and systems, but they're not. They're just keeping their own system private. Go work on the open standards protocols. If one gets developed that is so great, and everyone starts adopting it as the standard, then maybe you'll take some AIM customers away from AOL, or force them to be compatible. Think about it like MS Exchange, or Groupwise, or CC:Mail. All of these have their own private protocol that they use to talk to their own kind. However, SMTP is so overwhelmingly popular on the Internet that they all have SMTP gateway applications so that they can talk the standard. But it's not illegal for them to have their own private protocols and not open them up, and if they wanted to not talk SMTP, then they could, but they wouldn't be able to email very many people. And this is not to mention that AOL *does* have a semi-open protocol, TOC. It's not open to changes from the tech community, but it is open so that 3rd party clients can be written and used. -Todd ---

You want AOL to open it up? Then pay for it. (Score:4)
Even if the Jabber team ever comes out with a stable, robust release, it's not going to be able to support even half as many users as Yahoo instant messaging without someone footing the bill for millions of dollars in servers and fiber-channel storage arrays, commercial database software, and tens of thousands of dollars a month in hosting and connectivity services. Do you really think a multi-million concurrent-user instant messaging system can run on one rack of Postgres servers on a T1? Phooey. You want free communication without ads or service charges? Buy a CB radio and talk to your neighbors. That's peer-to-peer.
I'd like to see a show of hands: how many of the people here calling for free access to AIM servers aren't (a) MSN and Yahoo employees or (b) people who have never had a job besides maybe cleaning trays in a dorm cafeteria? MessengerA2Z (Score:2) Re:Bunch of BS (Score:2) oh whatever. You are complaining that ICQ is laggy? What about IRC? Little assholes feeling the need to shutdown EFnet for no reason, hard to find servers to connect to anymore, etc. I use AIM on a regular basis to communicate w/friends.. Most haven't a god damn clue how to use AIM as it is, you really think that they are going to remember (or want to even think about learning IRC?) get real pal, that's why they have AIM. Just my worthless whining Re:Why Closed protocols suck (Score:2) The real concern is, why does AOL have a stranglehold on the IM market? Because no one has written a reasonable open-source client - and GAIM, et. al. do not count - that's just piggybacking off of AOL's protocol. How about writing a good, open-source IM client without a centralized server? Another /. reader and myself have been looking into the possibility of such a thing. Only open-source can make IP a thing of the past, so go out and claim as much for the open-source world as you can! Re:Bunch of BS (Score:2) <katz> The internet is all about communication. People & products that impede communication have no place on the net </katz> -- Re:Why do we NEED it at all? (Score:2) You are NEVER going to be able to replace that. All the open-source in the world can't beat that. If I want to talk to my friends, I have to use AIM protocols (gaim w/ TOC, b.t.w.). I don't have a choice. I (and nobody I personally know) don't have enough clout to suddenly tell 95 of my friends and family that they need to switch to something else just to talk to me. If I were to say "talk to me via jabber; it's a better service anyways", I would get a lot of email. And next to no messages. Re:Slow down folks.
(Score:2) I think it should be: Or, god no, something other than (*n?x | *BSD) But what's the big deal? I purposely don't run GAIM or any of the others so that I can avoid having to provide my mother tech support on a regular basis (unfortunately, she can use AIM). Re:A monumental first (Score:2) Fixing NAT for ICQ (Score:2) First of all make sure you set up all your computers to have static ip addresses, dynamic won't work. You'll need to set the gateway and DNS on each computer yourself since DHCP won't be doing it for you anymore but it's not that hard. Now, say you've given your computers the ip addresses: 192.168.0.10 -> 192.168.0.13 (keep the gateway computer on 192.168.0.1) In your firewall settings simply tell it that any traffic coming in on ports say, 30000-30019 gets automatically sent to 192.168.0.10, 30020-30039 goes to 192.168.0.11 and so on. Now in icq in the connection settings tell it you're not using a proxy, but you are using a firewall and it should use ports 300xx-300yy for incoming events. That should fix file transfers and allow people to read your away/dnd/etc messages again. Hope this helps! Jabber can (Re:Why do we NEED it at all?) (Score:2) Jabber can do this.. (Score:2) This is why halfway through yesterday, Fire (the sole *real* method for i, a Mac OS X user, to connect to AIM and thus contact quite a few people i want to talk to..) and Gaim were still blocked from AOL, but Jabber peoples could connect just fine-- only, though, if they were on the jabber.org server, because that was the only one that had been fixed with the entry hack. That's the good thing about this approach, you have one small client and it can adapt to whatever happens. The problem with this, of course, is that AOL can IP-block the jabber server, meaning everyone is simply screwed.. not sure how to get around that. Re:Time to replace IM's with OLM's.
(Score:2) On the plus side, maybe it would also allow people to create a ISBHL (Instant Spam Black-Hole List) of those addresses that have been marked as SPAMmers, so that you could easily filter the messages out of your mailbox once it's hit the first 2 or 3 people. Might just make the SPAM even less effective. Re:GAIM functionality (Score:2) --- Rob Flynn Re:Yes... (Score:2) It's the same reason windows is still popular and it's the same reason linux on the desktop is going nowhere. AIM makes no money! (Score:3) I've been waiting to hear this "You ungrateful bastards/you must be viewing ads to use AIM" argument...AOL almost certainly loses money on AIM and those ads, mostly because they don't SELL any! All the ads are for AOL services or AOL-owned companies [zdnet.com] !! AOL puts the service out to lure people into subscribing to AOL, not to make money off ads. And no one can say that AOL is hurting for customers right now...they are still the largest ISP in the world. So, yes, we can complain, especially when we depend on features AOL clients don't supply -- interoperability, alternate platforms, logging, etc... Re:Everybuddy... (Score:2) In other words, TOC gives you instant messaging without the bloat. What's the problem? -- Re:AOL loses money on AIM because... (Score:2) Re:Yes... (Score:2) In the meantime AOL is free to block whoever it likes from its service. If you're not a subscriber and you're not using their AIM client, what's in it for them? If Microsoft, Yahoo! and Jabber etc. all gained access it would cost AOL millions. All joking aside... (Score:2) I've brought this example up before, but here goes: say one of your AIM buddies, unknown to you, commits or comes under suspicion of having committed some hot-button computer crime (DoS, whatever). At roughly the same time, you were online, with this user in your buddy list (or vice versa). Now, you're drawn into the investigation. All your electronics are confiscated as potential evidence.
At best, you might get them back in a year. At worst, the investigation of your friend will go to trial, and it could be several years. Or perhaps that copy of Office97 isn't licensed to you, or you've got napster installed, and the feds start pressuring you to testify, using this as leverage. "You were online at the same time, and on his buddy list! What do you mean you don't know anything?! C'mon, just spill it and we'll forget about this whole copyright infringement thing." IMs have their place, but we shouldn't be naive about how these technologies will be (ab)used by authorities. -Isaac Re:Everybuddy... (Score:2) Then am I correct in assuming that GAIM still works properly, despite what was mentioned in the (Note: I just logged onto aolim using GAIM and it seems to work fine. Yay TOC!) Re:Bunch of BS (Score:2) -- Re:Which amendment gave YOU the right to use AIM? (Score:3) There is no support for per-connection billing and accounting, a la X.25, built into IP. When you are on an IP network, anyone else on the network can communicate with you, unless you explicitly firewall them. When you run a service on this public IP network, you accept and agree that anybody can interact with that service, unless you specifically disallow it. Of course, people are allowed to build identification and accounting into their protocols to support anything they want. If they want to charge me for using their AIM service, they have the right to. Simply require authenticated logins (as they already do) and don't give those logins out for free any more. Now, the point: there is a difference between using or complaining about AIM the network service, and using or complaining about AIM the executable program for Windows. AOL gives AIM service for free to everyone else. I paid my bandwidth bill for the month too. I'm sure AOL understands that some people will try to use their network service with non-AOL-provided software.
What AOL has done recently is attempt to tie AOL the network service to AOL the Windows executable. It is within AOL's rights to try to do this. It is within my rights to try to undo this, by publishing software that is compatible with the new version of their network protocol. Would you mind re-explaining what we don't have a right to do again, as it relates to network services and applications being different things? Re:Which amendment gave YOU the right to use AIM? (Score:2) But your perspective is different. AOL owns the network and therefore controls anything that uses it. I don't believe in that. I think that if an entity puts a server out there for public use, it should be publicly usable. That's the thing. You can't own a publicly accessible service. That is, you can't control it. You can try...but that is like violating a social contract. It's implied. That's what Jabber is trying to do in part--provide a service that has no owners. Also...people have the right to complain about anything they please. So quit telling people to shut up. Re:There are a few issues confused here... (Score:2) Wait, they weren't within their rights. The FCC fined them, and aren't AOL and Time Warner the same fucking company now? Re:Which amendment gave YOU the right to use AIM? (Score:2) I don't understand you people. Isn't it intuitively obvious to you that if someone makes something, they should have control over it? Why should they be giving you access to their service when you haven't given them anything? BTW, AOL provides AIM executables for Mac and Linux as well. Why do you keep talking about their "Windows executable"? ------ Re:AIM makes no money! (Score:2) ------ AOL's OpenIM proposal (Score:2) This is AOL's proposal for an open architecture that allows competing IM services to exchange instant messages. If implemented, this would allow AIM users to communicate with users of Yahoo Messenger, MSN Messenger, Jabber et al. -- Re:Time to replace IM's with OLM's.
(Score:2) -- Re:Time to replace IM's with OLM's. (Score:2) The idea is to yield the convenience of ICQ without a central chat server, simply by using the most widespread protocols. -- Re:Which amendment gave YOU the right to use AIM? (Score:2) What part of capitalism don't you understand? ------ I would "pay" if I could... (Score:2) And don't tell me to run Windows. I don't do Windows. Jabber over HTTPS proxy (Score:2) The 'right' way to solve this problem is to put a Jabber server on your corporate extranet, allow only inside users to connect to the server on 5222/5223, and allow outbound connections to other jabber servers (and dialback from them on 5269). Re:There are a few issues confused here... (Score:2) The second paragraph also stated how pointless AOL's restrictions were, but I guess you both replied before reading that.
https://slashdot.org/story/01/03/22/2348244/aol-blocking-open-source-im-clones--again
As websites become more and more interactive the need frequently arises to send data back and forth between web clients and servers using Ajax. ASP.NET combined with jQuery makes this process simple to implement for web developers. For this example, we're going to POST the following JavaScript object to our server, and return a Boolean value indicating if the data was correctly received.

var tommy = {
    name: "Tommy",
    birthday: new Date(1921, 0, 1),
    hobbies: ["Pinball", "Holiday camp"]
};

The server side

The first thing we need to do on the server side is define a data structure corresponding to the JavaScript object we wish to send.

public class Person
{
    public string Name { get; set; }
    public DateTime Birthday { get; set; }
    public string[] Hobbies { get; set; }
}

Notice how the casing of the variable names in the JavaScript and C# objects doesn't have to match. We can declare our JavaScript object properties in camel case and our C# properties in Pascal case and the model binder will work just fine. Now let's write our method. All we need to do is create a standard ASP.NET MVC controller method which takes a single parameter of the Person type, like so.

public bool ProcessData(Person person)
{
    return person != null;
}

As mentioned, I'm returning a Boolean value indicating whether the person object was successfully received. That's all we need to do on the server side. Let's write the client-side code.

The client side

The first thing we'll do is use jQuery to write the Ajax call.

$.ajax({
    url: "@Url.Action("ProcessData", "Home")",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify({ person: tommy }),
    success: function(response) {
        response ? alert("It worked!") : alert("It didn't work.");
    }
});

Here we're invoking the ubiquitous "ajax" method, passing it an object containing all the information it needs to get the job done in the form of field values. Let's take a brief look at each one.
url

The url field contains the dynamically-calculated address of our target method.

type

The type field indicates the type of HTTP request we wish to send. In this case it is a POST request.

contentType

Here we are specifying that the data is of the JSON format. This is important as it provides the server with the knowledge it needs to be able to deserialise our data into a POCO object.

data

The data field contains the actual data that we want to send in the POST request. There are two things of note here. The first is that we have assigned our "tommy" object to a field named "person" in a new anonymous object. Why is this? Well, the JSON object that we send to the server needs to correspond to the signature of the method that will receive the request. As our method looks like this:

public bool ProcessData(Person person)

We need to send it an object that looks like this:

{ person: [our person object] }

The second thing to note is that we invoke the JSON.stringify method in order to serialise the whole thing into a JSON string. This JSON string is what the server will receive and attempt to deserialise when we make the Ajax call from a web browser. We need to do this because "person" is a complex object and not a primitive JavaScript value, such as a string. Were we to send an object to the server consisting entirely of primitive values, this stringify call would not be necessary.

success

This function executes if the request succeeds. While in this case I've used an anonymous function, I could also have externalised this method and set "success" to a function pointer instead.

This is all the client-side code we need. Let's fire up our website and run the code to see what happens.

Making the call

If we insert a breakpoint in our ProcessData method and execute our JavaScript, this is the result we get in Visual Studio. As you can see, the model binder has done a wonderful job.
Not only has it instantiated our object and mapped the simple Name property, but it has also translated the JavaScript Date value into a valid C# DateTime object, and mapped out our string array perfectly. We can now go ahead and do what we like with the data.
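The case-insensitive name matching the article relies on can be illustrated outside of ASP.NET. The following is a minimal Python sketch of the idea only — it is not the actual MVC model binder, and the `Person` and `bind` names here are made up for illustration — showing how camelCase JSON keys can be matched to PascalCase property names:

```python
import json

# Hypothetical sketch: mimic the case-insensitive binding idea by
# matching camelCase JSON keys to PascalCase attribute names.
class Person:
    def __init__(self):
        self.Name = None
        self.Birthday = None
        self.Hobbies = None

def bind(json_text, target):
    """Copy JSON fields onto target, ignoring case differences in names."""
    data = json.loads(json_text)
    # Lookup from lowercased attribute name to the real attribute name.
    attrs = {name.lower(): name for name in vars(target)}
    for key, value in data.items():
        if key.lower() in attrs:
            setattr(target, attrs[key.lower()], value)
    return target

payload = '{"name": "Tommy", "hobbies": ["Pinball", "Holiday camp"]}'
tommy = bind(payload, Person())
print(tommy.Name)     # camelCase "name" bound to PascalCase "Name"
print(tommy.Hobbies)
```

The real binder does considerably more (type coercion, nested objects, arrays of complex types), but the lookup-by-normalized-name step is the reason the casing mismatch in the article is harmless.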
http://www.levibotelho.com/development/posting-javascript-objects-with-ajax-and-asp-net-mvc/
SpikeQueue class¶

(Shortest import: from brian2.synapses.spikequeue import SpikeQueue)

class brian2.synapses.spikequeue.SpikeQueue(source_start, source_end)[source]¶

Data structure saving the spikes and taking care of delays.

Parameters

source_start : int
    The start of the source indices (for subgroups).

source_end : int
    The end of the source indices (for subgroups).

Notes

Data structure: a spike queue is implemented as a 2D array `X`.

Methods

peek()[source]¶
    Returns all the synaptic events corresponding to the current time, as an array of synapse indexes.

prepare(delays, dt, synapse_sources)[source]¶
    Prepare the data structures. This is called every time the network is run. The size of the data structure (number of rows) is adjusted to fit the maximum delay in delays, if necessary. A flag is set if delays are homogeneous, in which case insertion will use a faster method implemented in insert_homogeneous.
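The delay-handling idea behind a spike queue can be sketched independently of Brian2. The following toy Python class is not Brian2's actual implementation — the class and method names `ToySpikeQueue`, `push`, and `advance` are invented for illustration — but it shows how a circular buffer with one slot per delay step supports scheduling events and peeking at the ones due now:

```python
# Toy sketch (not Brian2's actual code) of the core idea behind a spike
# queue: a circular buffer of event lists, indexed by delay in time steps.
class ToySpikeQueue:
    def __init__(self, max_delay_steps):
        # One slot per possible delay, plus one for delay == 0.
        self.slots = [[] for _ in range(max_delay_steps + 1)]
        self.current = 0  # index of the slot for the current time step

    def push(self, synapse_index, delay_steps):
        """Schedule a synaptic event delay_steps time steps in the future."""
        slot = (self.current + delay_steps) % len(self.slots)
        self.slots[slot].append(synapse_index)

    def peek(self):
        """Return the synapse indices due at the current time step."""
        return list(self.slots[self.current])

    def advance(self):
        """Move to the next time step, clearing the slot just consumed."""
        self.slots[self.current] = []
        self.current = (self.current + 1) % len(self.slots)

q = ToySpikeQueue(max_delay_steps=3)
q.push(7, delay_steps=2)   # synapse 7 fires two steps from now
q.push(1, delay_steps=0)   # synapse 1 fires this step
print(q.peek())            # [1]
q.advance(); q.advance()
print(q.peek())            # [7]
```

This also makes it plain why `prepare` resizes the structure to the maximum delay: the buffer needs at least one slot per representable delay step.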
https://brian2.readthedocs.io/en/latest/reference/brian2.synapses.spikequeue.SpikeQueue.html
Blogging the existing archive pages - all that changes is the filter on the query, and the paths they're rendered to. With that in mind, we'll refactor the existing IndexContentGenerator. Rename it to ListingContentGenerator, and add a couple of attributes and a new method to the class:

class ListingContentGenerator(ContentGenerator):
  path = None
  """The path for listing pages."""

  first_page_path = None
  """The path for the first listing page."""

  @classmethod
  def _filter_query(cls, resource, q):
    """Applies filters to the BlogPost query.

    Args:
      resource: The resource being generated.
      q: The query to act on.
    """
    pass

The 'path' and 'first_page_path' attributes allow us to specify the paths to store generated pages to on a class-by-class basis. The _filter_query method will allow us to select only the entities we want for a given listing page. Once again, we need to refactor part of the generate_resource method. Added sections are highlighted in yellow:

def generate_resource(cls, post, resource, pagenum=1, start_ts=None):
  import models
  q = models.BlogPost.all().order('-published')
  if start_ts:
    q.filter('published <=', start_ts)
  cls._filter_query(resource, q)
  posts = q.fetch(config.posts_per_page + 1)
  more_posts = len(posts) > config.posts_per_page

  path_args = {
    'resource': resource,
  }
  path_args['pagenum'] = pagenum - 1
  prev_page = cls.path % path_args
  path_args['pagenum'] = pagenum + 1
  next_page = cls.path % path_args

  template_vals = {
    'posts': posts[:config.posts_per_page],
    'prev_page': prev_page if pagenum > 1 else None,
    'next_page': next_page if more_posts else None,
  }
  rendered = utils.render_template("listing.html", template_vals)

  path_args['pagenum'] = pagenum
  static.set(cls.path % path_args, rendered, config.html_mime_type)
  if pagenum == 1:
    static.set(cls.first_page_path % path_args, rendered, config.html_mime_type)
  if more_posts:
    deferred.defer(cls.generate_resource, None, resource, pagenum + 1,
                   posts[-1].published)

The changes this time are minor: We call
_filter_query at the appropriate time, to give subclasses an opportunity to add filter conditions to the query, and we use the new path and first_page_path attributes with a format string to generate the URI paths to store the pages at. Specifying a subclass to implement the existing archive generation functionality is straightforward:

class IndexContentGenerator(ListingContentGenerator):
  """ContentGenerator for the homepage of the blog and archive pages."""
  path = '/page/%(pagenum)d'
  first_page_path = '/'

  @classmethod
  def get_resource_list(cls, post):
    return ["index"]
generator_list.append(IndexContentGenerator)

And specifying a subclass to implement the tag page generation is nearly as simple:

class TagsContentGenerator(ListingContentGenerator):
  """ContentGenerator for the tags pages."""
  path = '/tags/%(resource)s/%(pagenum)d'
  first_page_path = '/tags/%(resource)s'

  @classmethod
  def get_resource_list(cls, post):
    return post.tags

  @classmethod
  def _filter_query(cls, resource, q):
    q.filter('tags =', resource)

Note that, for the first time, our get_resource_list method returns more than one entry: Here, it returns the list of tags associated with the post. Thanks to the dependency regeneration functionality we defined way back in part 3, this list will be used to regenerate only the resources needed: If we add or remove a tag, that tag's pages are regenerated, while if we modify the post's title or summary, all the tags containing that post are regenerated. The third and final part of adding tag support is to add the list of tags, with links to the tag pages, to the individual blog posts.
Open up post.html, and add this block immediately after the h2 tag containing the post's title: <p> Posted by {{config.author_name}} {% if post.tags %} | Filed under {% for tag in post.tags %} <a href="/tag/{{tag|escape}}">{{tag|escape}}</a>{% if not forloop.last %},{% endif %} {% endfor %} {% endif %} </p> Add the same block of code in listing.html, after the h2 tag for each post and we're done! You can see the latest version of the blog at and view the source here. In the next post, we'll add site search and Disqus comment support to bloggart.Previous Post Next Post
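The path and first_page_path attributes used throughout are plain Python %-format strings keyed by 'resource' and 'pagenum'. This standalone snippet (the tag name and page numbers are made-up sample values) shows how they expand into the URI paths the generators write to:

```python
# Path templates as defined on the generator classes in the post.
index_path = '/page/%(pagenum)d'
tag_path = '/tags/%(resource)s/%(pagenum)d'
tag_first_page_path = '/tags/%(resource)s'

# %-formatting with a dict looks up only the keys the template names,
# so extra keys in path_args are harmless.
path_args = {'resource': 'app-engine', 'pagenum': 2}
print(tag_path % path_args)             # /tags/app-engine/2
print(tag_first_page_path % path_args)  # /tags/app-engine
print(index_path % path_args)           # /page/2
```

This is why a single generate_resource implementation can serve both the archive and the tag listings: only the template and the dict contents differ per subclass.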
http://blog.notdot.net/2009/10/Blogging-on-App-Engine-part-5-Tagging
Ipxroute

Displays and modifies information about the routing tables used by the IPX protocol. Used without parameters, ipxroute displays the default settings for packets that are sent to unknown, broadcast, and multicast addresses.

Syntax

ipxroute servers [/type=x]
ipxroute ripout network
ipxroute resolve {guid | name} {guid | AdapterName}
ipxroute board=n [def] [gbr] [mbr] [remove=xxxxxxxxxxxx]
ipxroute config

Parameters

servers [/type=x] : Displays the Service Access Point (SAP) table for the specified server type. x must be an integer. For example, /type=4 displays all file servers. If you do not specify /type, ipxroute servers displays all types of servers, listing them by server name.

ripout network : Discovers whether network is reachable by consulting the IPX stack's route table and sending out a RIP request if necessary. network is the IPX network segment number.

resolve {guid | name} {guid | AdapterName} : Resolves the guid to its friendly name, or the friendly name to its guid.

board=n : Specifies the network adapter for which to query or set parameters.

def : Sends packets to the ALL ROUTES broadcast. If a packet is transmitted to a unique Media Access Card (MAC) address that is not in the source routing table, ipxroute sends the packet to the SINGLE ROUTES broadcast by default.

gbr : Sends packets to the ALL ROUTES broadcast. If a packet is transmitted to the broadcast address (FFFFFFFFFFFF), ipxroute sends the packet to the SINGLE ROUTES broadcast by default.

mbr : Sends packets to the ALL ROUTES broadcast. If a packet is transmitted to a multicast address (C000xxxxxxxx), ipxroute sends the packet to the SINGLE ROUTES broadcast by default.

remove=xxxxxxxxxxxx : Removes the given node address from the source routing table.

config : Displays information about all of the bindings for which IPX is configured.

/? : Displays help at the command prompt.

Examples

To display the network segments that the workstation is attached to, the workstation node address, and frame type being used, type the following command:

ipxroute config
https://technet.microsoft.com/en-us/library/bb490923.aspx
IRC log of databinding on 2008-03-18 Timestamps are in UTC. 14:58:00 [RRSAgent] RRSAgent has joined #databinding 14:58:00 [RRSAgent] logging to 14:58:02 [trackbot-ng] RRSAgent, make logs public 14:58:02 [Zakim] Zakim has joined #databinding 14:58:04 [pauld] zakim, code? 14:58:04 [Zakim] sorry, pauld, I don't know what conference this is 14:58:04 [trackbot-ng] Zakim, this will be DBWG 14:58:05 [trackbot-ng] Meeting: XML Schema Patterns for Databinding Working Group Teleconference 14:58:05 [trackbot-ng] Date: 18 March 2008 14:58:05 [Zakim] ok, trackbot-ng, I see WS_DBWG()10:00AM already started 14:58:08 [pauld] zakim, code? 14:58:10 [Zakim] the conference code is 3294 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), pauld 14:58:22 [Zakim] + +0791888aaaa 14:58:27 [pauld] chair: pauld 14:58:42 [pauld] zakim, aaaa 14:58:42 [Zakim] I don't understand 'aaaa', pauld 14:58:52 [pauld] zakim, aaaa is me 14:59:04 [Zakim] +pauld; got it 14:59:16 [pauld] zakim, who is on the phone? 14:59:18 [Zakim] On the phone I see George_Cowe, pauld 14:59:38 [Zakim] +Yves 15:01:49 [pauld] Topic: Detection 15:04:11 [pauld] pauld: built annotation 15:04:45 [JonC] JonC has joined #databinding 15:05:01 [pauld] .. see the examples and collection pages 15:05:13 [pauld] gcowe: will look at optionally adding it to the service 15:06:06 [pauld] minutes from 2008-3-11 teleconference 2008-2-19 teleconference approved 15:06:34 [pauld] 15:09:20 [pauld] Topic: ISSUE-2: test suite 15:09:40 [pauld] gcowe: the XBinder guys picked up an old copy of the testsuite and sent results 15:09:44 [pauld] pauld: cool! 15:10:05 [pauld] gcowe: we've added a load more tests, so I sent them a new copy 15:10:17 [Zakim] +JohnC 15:10:17 [pauld] pauld: that's great. thx! 
15:10:56 [pauld] zakim, +johnc is really jonc 15:10:56 [Zakim] sorry, pauld, I do not recognize a party named '+johnc' 15:11:03 [pauld] zakim, johnc is really jonc 15:11:06 [Zakim] +jonc; got it 15:12:00 [pauld] pauld: collection is now checked in with annotation! 15:12:27 [pauld] pauld: what's next for the test suite? 15:13:07 [pauld] gcowe: not a lot, we've run the tools we can, half the toolkits missing, Adrian had the ability to run them 15:13:23 [pauld] pauld: but for basic, how do we stand? 15:13:55 [gcowe] 15:16:03 [pauld] pauld: I can rerun SOAP4R and ZSI, can someone help with WCF 15:16:14 [Yves] I am doing gsoap c and c++ 15:18:14 [pauld] Topic: Charter Renewal? 15:18:59 [pauld] pauld: dependent on publishing Last Call documents 15:19:19 [pauld] yves: we should be able to ask for another six months 15:20:22 [pauld] Topic: Status of Basic Patterns 15:20:48 [pauld] pauld: thanks George for the work on differencing 15:21:48 [pauld] .. status section needs updating further 15:23:14 [pauld] Topic: Last Call comments from Schema WG 15:23:16 [pauld] 15:24:18 [pauld] Topic: lc-xsd-5 15:24:30 [pauld] """ 15:24:32 [pauld] * Schema documents vs. schemas: Following up on the point above, there are 15:24:33 [pauld] schema documents that do not stand on their own in defining a schema 15:24:35 [pauld] that's useful for validation. For example, if a schema document merely 15:24:36 [pauld] defines a complext Type T as being derived by extension from type B with 15:24:38 [pauld] attribute A, then you don't really know what the type is until you find 15:24:39 [pauld] the base type B, and that may well be in a different schema document. 15:24:41 [pauld] Maybe there is element content in effective type T. If there is an 15:24:42 [pauld] element E declared of type T, then what does the requirement to "[expose] 15:24:44 [pauld] all of the [XML 1.0] element node and attribute node content described by 15:24:45 [pauld] the originating [XML Schema 1.0] document" mean? 
The problem is that it's 15:24:47 [pauld] not really schema documents that directly call for or don't call for 15:24:48 [pauld] content in documents to be validated. Schema documents contribute to the 15:24:50 [pauld] construction of a schema (formally defined at [4]), which in turn contains 15:24:51 [pauld] element declarations, etc. that can be used to require or allow content 15:24:53 [pauld] in documents to be validated. >>It seems that some serious thought is 15:24:54 [pauld] needed as to whether it's schema documents or schemas that would conform 15:24:56 [pauld] to the databinding specification.<< In any case, referring to the 15:24:57 [pauld] element or attribute content "described by a schema document" is not just 15:24:59 [pauld] too informal; as suggested above, it's likely that you really want to 15:25:01 [pauld] talk about the element or attribute content allowed by a schema. 15:25:03 [pauld] Conversely, you could more clearly define a set of rules relating to 15:25:05 [pauld] individual schema documents if that's what you really intend. 15:25:07 [pauld] """ 15:25:33 [pauld] pauld: this is related to the infoset (v) document issue. It would be much harder to write test tools for this 15:25:50 [pauld] yves: we're testing for bytes on the wire, not at the infoset level 15:27:18 [pauld] pauld: the only way I could see this working is if they had an XML format for their infoset or even the PSVI 15:27:30 [pauld] pauld: anyone want to support this comment? 
15:27:35 [pauld] *crickets* 15:28:16 [pauld] RESOLUTION: lc-xsd-5 rejected 15:29:04 [pauld] Topic: lc-xsd-6 15:29:06 [pauld] """ 15:29:19 [pauld] * Section 1.4 says that conformance requires that an implementation: "MUST 15:29:21 [pauld] be able to consume any well-formed [XML 1.0] document which satisfies 15:29:22 [pauld] local-schema validity against the originating [XML Schema 1.0] document 15:29:24 [pauld] exposing all of the [XML 1.0] element node and attribute node content in 15:29:25 [pauld] the data model." Again, local-schema validity is not a relation defined 15:29:27 [pauld] on the pair {instance, schema document}, it is (presuming you indicate 15:29:28 [pauld] which type or element declaration to start with) defined on the pair 15:29:30 [pauld] {instance, schema}" 15:29:31 [pauld] """ 15:30:23 [pauld] 15:31:18 [pauld] pauld: anyone feel like they have better words for this assertion? 15:32:21 [pauld] *crickets* 15:33:17 [pauld] gcowe: let's ask them for better text! 15:33:30 [pauld] ACTION: pdowney to ask the Schema WG for advice 15:33:30 [trackbot-ng] Created ACTION-129 - Ask the Schema WG for advice [on Paul Downey - due 2008-03-25]. 15:34:10 [pauld] pauld: so we accept the comment, but don't have the skills to address it to schema WG's satisfaction 15:34:18 [pauld] Topic: lc-xsd-7 15:34:55 [pauld] * Section 2: "The [XPath 2.0] expression is located from an [XML Schema 15:34:56 [pauld] 1.0] element node which may be the document element, or an element 15:34:58 [pauld] contained inside an [XML 1.0] document such as [WSDL 2.0] description." 15:34:59 [pauld] It's not quite clear what is meant in saying that an "[XPath 2.0] 15:35:01 [pauld] expression is located from". Is this trying to establish the "Context 15:35:02 [pauld] Node" for the XPath expression as being the node of the <xsd:schema> 15:35:04 [pauld] element? 
If so, we recommend you say that more clearly, preferably with 15:35:05 [pauld] hyperlinks to the pertinent parts of the XPath Recommendation. Also, the 15:35:07 [pauld] phrase "may not" can be read as prohibiting the case where the element 15:35:08 [pauld] note is the document node. I suspect you meant "need not". Finally, [XML 15:35:10 [pauld] Schema 1.0] element node isn't a term that appears in the XSD 15:35:11 [pauld] Recommendation; did you mean the "root element information item of the 15:35:13 [pauld] schema document"? 15:35:50 [pauld] pauld: accept "need not" change to text 15:36:33 [pauld] pauld: suggest a note to say "this is to establish the Context node for the XPath expression" 15:37:14 [pauld] pauld: seems reasonable to link to the XPath recommedation 15:38:04 [pauld] RESOLUTION: accepted lc-xsd-7 with suggested text changes 15:38:30 [pauld] Topic: lc-xsd-8 15:38:33 [pauld] * Sections 2.x: The phrase "An [XML 1.0] document exibits the XXXXX 15:38:35 [pauld] pattern...." is used repeatedly in these sections and their descendents. 15:38:36 [pauld] See comments about about need to refer to "schema documents", if that's 15:38:38 [pauld] what's intended. 15:38:49 [pauld] pauld: looks like the documents (v) infoset comment again 15:39:22 [pauld] RESOLUTION: rejected as for lc-xsd-5 15:40:50 [pauld] yves: is that the instance document? 15:41:48 [pauld] pauld: we could be clearer that it's a WSDL 1.0, 2.0, Schema, whatever, but balooning the boilerplate isn't desirable 15:42:38 [pauld] pauld: we already have "2.1 Schema Element 15:42:39 [pauld] The xs:schema element MAY be the document element, but MAY also appear within other descriptions such as a [WSDL 2.0] or [WSDL 1.1] document. 
†" 15:43:40 [pauld] yves: text tied up better to the "An [XML 1.0] document exhibits the" 15:44:18 [pauld] s/RESOLUTION: rejected as for lc-xsd-5/RESOLUTION: accepted lc-xsd-5 as requiring clarification/ 15:46:13 [pauld] Topic: lc-xsd-9 15:46:16 [pauld] 15:46:45 [pauld] * Section 2.1.2: talks about qualified local elements, but the sample 15:46:46 [pauld] schema contains no local elements. 15:46:57 [pauld] pauld: we could change the example to include local elements 15:47:19 [pauld] gcowe: what does that mean for the test suite? is this one excluded? 15:47:36 [pauld] pauld: I suspect this is something we've excluded, so it could be safe 15:48:28 [pauld] could risk introducing an advanced pattern 15:48:59 [pauld] RESOLUTION: accepted lc-xsd-9, will expand example 15:50:42 [pauld] example something like: 15:50:43 [pauld] <xs:element 15:50:45 [pauld] <xs:complexType> 15:50:46 [pauld] <xs:sequence> 15:50:48 [pauld] <xs:element 15:50:49 [pauld] </xs:sequence> 15:50:51 [pauld] </xs:complexType> 15:50:52 [pauld] </xs:element 15:51:31 [pauld] gcowe: will update example 15:52:19 [pauld] Topic: lc-xsd-10 15:52:25 [pauld] * Section 2.1.6: BlockDefault. This pattern seems to imply that 15:52:26 [pauld] substitutions and or derivations are blocked if the @blockDefault 15:52:28 [pauld] attribute is provided, but in fact that attribute carries a value that can 15:52:29 [pauld] selectively enable or disable blocking for any combination of extension, 15:52:31 [pauld] restriction, and substitution. It seems unlikely that the rule of 15:52:32 [pauld] interest is really that the attribute is present. Is that what's 15:52:34 [pauld] intended, or did you wish to actually check for certain values of the 15:52:35 [pauld] blockDefault. Note, in particular, that an explicit blockDefault="" has 15:52:37 [pauld] the same semantic as leaving out the attribute entirely. 
15:52:38 [pauld] I regret that I did not have time to review the remainder of the patterns
15:52:40 [pauld] in the draft, but I would assume that the above comments would be
15:52:41 [pauld] representative of what would be found for other patterns.
15:52:43 [pauld] jonc: mea culpa!
15:53:42 [pauld] jonc: pattern needs tightening up,
15:54:54 [pauld] pauld: it's been moved to Advanced anyway
15:55:32 [pauld] ACTION: jcalladi to sort out BlockDefault patterns
15:55:33 [trackbot-ng] Created ACTION-130 - Sort out BlockDefault patterns [on Jonathan Calladine - due 2008-03-25].
15:56:18 [pauld] RESOLUTION: accepted lc-xsd-10, BlockDefault has been moved to Advanced
15:56:34 [pauld] Topic:
15:56:57 [pauld] s/Topic:/Topic: lc-xsd-11 Editorial Concerns/
15:57:19 [pauld] The databinding draft is very long, and a lot of it is devoted to what is
15:57:21 [pauld] ultimately boilerplate. Consider the targetNamespace pattern. It is
15:57:22 [pauld] introduced with nearly 1/2 page of multicolor writeup, but really all it's
15:57:24 [pauld] trying to say seems to be: This pattern requires that the schema
15:57:25 [pauld] document have a targetNamespace attribute with an absolute URI as its
15:57:27 [pauld] value. That could be said much more clearly and concisely. I think the
15:57:28 [pauld] draft would be much more effective if the patterns were introduced in a
15:57:30 [pauld] manner that was as concise and clear as possible. It's not helpful to
15:57:31 [pauld] repeat over and over "An [XML 1.0] document exhibits....", and as noted
15:57:33 [pauld] above, the example schema could be made shorter and clearer. Finally,
15:57:34 [pauld] what would be most helpful for a pattern like this is to explain ">>why<<
15:57:36 [pauld] an absolute URI"? The Schema recommendation points to the XML Namespaces
15:57:37 [pauld] recommendation for the definition of a namespace name, and that in turn
15:57:39 [pauld] requires a URI Reference [5], not an Absolute URI.
So, it would be
15:57:40 [pauld] useful in general if some of the boilerplate were eliminated and the
15:57:42 [pauld] sections made much shorter and easier to read, but conversely it would be
15:57:43 [pauld] useful to say a bit about what makes the pattern interesting. Explain
15:57:45 [pauld] briefly if there's a reason why absolute namespace URIs are interesting,
15:57:46 [pauld] or did you really just mean this pattern to be "a non-absent
15:57:48 [pauld] targetNamespace is available"?
15:57:59 [pauld] pauld: boilerplate?
15:58:29 [pauld] pauld: it's not a very human readable spec!
15:58:48 [pauld] gcowe: it is computer generated
15:59:03 [pauld] jonc: hard to avoid
15:59:37 [pauld] pauld: without a concrete proposal, I'm going to push back. The work is in our testing and detector ..
16:00:21 [pauld] pauld; >>>why<<<
16:00:29 [pauld] s/;/:/
16:01:18 [pauld] jonc: discussion was it's opening the flood gates, and this is for the primer
16:01:46 [pauld] pauld: I know, I'm not keen on specs which justify themselves
16:02:08 [pauld] pauld: we're pretty clear why a pattern is Basic or Advanced
16:02:40 [pauld] pauld: we're not clear on how patterns come about
16:03:13 [pauld] .. sounds like something we could add as editorial text, volunteers?
16:04:05 [pauld] pauld: we've done a lot of work in terms of test tools and suites, and that' the best approach IMO
16:05:05 [pauld] jonc: was in Noah's position, but it's seems best left to additional documents and discussion, on a wiki?
16:05:52 [pauld] pauld: XML was famously wafted by Tim Bray as a small spec, then the first thing he did was publish an "annotated version". You're free to do the same :)
16:07:02 [pauld] pauld: I think its' fair comment to say why a pattern is interesting. Hmm. Will look at that generically in the introduction.
16:08:31 [pauld] RESOULTION: accepted lc-xsd-11 in part, will add more introduction text
16:09:13 [pauld] Topic: Status of Publication
16:09:48 [pauld] pauld: all of the comments accepted are editorial, any objections to incorporating the text and then going ahead to Last Call as planned?
16:09:55 [pauld] None heard
16:11:13 [pauld] pickup again next tuesday
16:12:01 [Zakim] -pauld
16:12:02 [Zakim] -jonc
16:12:04 [Zakim] -Yves
16:12:06 [Zakim] -George_Cowe
16:12:07 [Zakim] WS_DBWG()10:00AM has ended
16:12:09 [Zakim] Attendees were George_Cowe, +0791888aaaa, pauld, Yves, jonc
16:12:16 [pauld] RRSagent, generate minutes
16:12:16 [RRSAgent] I have made the request to generate pauld
16:17:02 [pauld] rrsagent, make logs public
17:36:03 [Zakim] Zakim has left #databinding
http://www.w3.org/2008/03/18-databinding-irc
Debian unstable's got me covered (Score:0, Informative)

Since I use GnuPG to sign my e-mails (not that I believe anyone actually verifies the signatures, nor do I send any e-mails for which it would really matter all that much -- it just seems like good practice), I ran to check my version of GnuPG as soon as I saw the /. blurb. 1.4.2-2 Hmm. The -2 means that this is the second packaging of the 1.4.2 release. So it's been out for a while. Checking the changelog, I see that 1.4.2-1 was released 24 Sep 2005. My system would have gotten the update within a couple of days of that release date, so I got the fix nearly six months *before* the vulnerability announcement. Can't complain about that!

Re:Debian unstable's got me covered. Um NO. (Score:3, Informative)

1.4.2-2 is not equal to 1.4.2.2, and it is older than 1.4.2.2; the -2 is the 2nd Debian modification of 1.4.

software or data flaw? (Score:3, Informative)

The problem is in display. It displays the unencoded preamble and postscript inline with the (properly) verified parts of the email. You then, essentially, have to guess which is which.

actually not (Score:3, Informative)

is the part between the begin and end signature bars. PGP/MIME fixes this problem but MIME creates new ones. PGP Inc sells a fine PGP client that also does a pretty good S/MIME. I have no problem with the PGP protocol or a carefully designed, properly integrated plug in. What I do have a problem with is the idea that effective security can be delivered as an ad-hoc bolt on to be lashed into place with some perl scripts. If you want to do end-to-end security you have to come to terms with the fact that the real end point is the user.

Triple bag it (Score:2, Informative)

From [x5.net] anyway. Does it affect routers or the infrastructure of the Internet? Only insofar as domain registrars never validate change requests properly.
A carefully-crafted attack could use this to append a change-of-IP request to some ISP's routine request to a registrar, which means an attacker could create a phony DNS server for the express purpose of polluting the DNS namespace. If the registrar uses GPG's validation as proof of a legit request (and some are quite happy with a fax with no proof of origin at all) then it could have an impact. Is this a likely scenario? No. The problem with lack of validation has been around for decades and has been used by cybersquatters and porn merchants, but never (as far as I know) for Black Hat activities. The lack of any significant effort has never been due to security. My best guess is that it's due to skript kiddies being clueless. Which is just as well. If demonstrable and simple exploits aren't being used to cause catastrophic levels of mayhem, then I think we're pretty safe against this somewhat more sophisticated vulnerability requiring (as you correctly point out) a MitM.

are partly to blame for adopting the standard). It's just that getting a cert for email is like extracting teeth and the encryption is horribly slow and bloated.

Re:GPG is: (Score:3, Informative)

Back then (early '90s), simple encryption SOFTWARE was considered a munition, similar to if he snuck an atom bomb out of the country. The software was "released" onto the evil internet (perhaps not even by Phil), and as I recall, Phil was arrested or charged, or questioned. My history is based on memory from reading Boardwatch magazine (a GREAT internet publication in the hey-day). So I may not recall 100% correctly.

Re:Enigmail is fine... (Score:2, Informative)

Your version number may not change (Score:3, Informative)

infringement which led to Louis Freeh's FBI persecuting Phil. That is why folk found the idea that Bizdos was behind PGP, he almost had Phil Z.
sent to jail for distributing it (although in fairness to Jim he did not anticipate Freeh pursuing the case in the way he did and his objective was to stop Phil infringing his patent, not send him to jail). The PGP code was rewritten quite a few times for a number of reasons. MIT brought out a legal version that used the non-commercial use license from MIT. The MIT portions were open source but the RSAREF part was encumbered. GPG started as an attempt to develop an entirely unencumbered version of PGP after the Diffie Hellman patent expired in 97. The IDEA algorithm would have been dropped even if it had not been patented as it had been compromised by then. A second implementation was in any case required to get OpenPGP accepted as an IETF standard. Around the same time Phil Z. was starting PGP Inc and wanted to use PGP as the company name. Otherwise the FSF version would have probably been called something like GnuPGP.
https://slashdot.org/story/06/03/09/233227/security-flaw-discovered-in-gpg/informative-comments
Author: Andreas Kupries <[email protected]>
Author: Vince Darley <[email protected]>
State: Draft
Type: Project
Vote: Pending
Created: 02-Nov-2004
Post-History:
Tcl-Version: 9.1
Implementation-URL:

Abstract

This document describes an API which reflects the Filesystem Driver API of the core Virtual Filesystem Layer up into the Tcl level, for the implementation of filesystems in Tcl. It is an independent companion to [219] ('Tcl Channel Reflection API') and [230] ('Tcl Channel Transformation Reflection API'). Just as the latter TIPs bring the ability to write channel drivers and transformations in Tcl itself into the core, so this TIP provides the facilities for the implementation of filesystems in Tcl. This document specifies version 1 of the filesystem reflection API.

Motivation / Rationale

The purpose of this and the other reflection TIPs is to provide all the facilities required for the creation and usage of wrapped files (= virtual filesystems attached to executables and binary libraries) within the core. While it is possible to implement and place all the proposed reflectivity in separate and external packages, this however means that the core itself cannot make use of wrapping technology and virtual filesystems to encapsulate and attach its own data and library files to itself. Something which is desirable as it can make the deployment and embedding of the core easier, due to having fewer files to deal with, and a higher degree of self-containment. One possible application of a completely self-contained core library would be, for example, the Tcl browser plugin. While it is also possible to create a special purpose filesystem and channel driver in the core for this type of thing, it is however my belief that the general purpose framework specified here is a better solution as it will also give users of the core the freedom to experiment with their own ideas, instead of constraining them to what we managed to envision.
Another use for reflected filesystems is as a helper for testing the generic filesystem layer of Tcl, by creating filesystems which forcibly return errors, bogus data, and the like. An implementation of this TIP exists already as a package, TclVfs. This TIP asks to make that mechanism publicly available to script and package authors, with a bit of cleanup regarding the Tcl level API.

Specification of Tcl-Level API

The Tcl level API consists of a single new command, filesystem, and one change to the existing command file. The new command is an ensemble command providing five subcommands. These subcommands are mount, unmount, info, posixerror, and internalerror. (Note that this TIP does not introduce a new C API, but rather exposes an existing C API to Tcl scripts.)

The mount Subcommand

    filesystem mount ?-volume? path cmdprefix

This subcommand creates a new filesystem using the command prefix cmdprefix as its handler. The API this handler has to provide is specified below, in the section "Command Handler API". The new filesystem is immediately mounted at path. After completion of the call any access to a subdirectory of path will be handled by that filesystem, through its handler. The filesystem is represented here by the command prefix which will be executed whenever an operation on a file or directory within path has to be performed. If the option -volume is specified then the new mount point is also registered with Tcl as a new volume and will therefore from then on appear in the output of the command file volumes. This is useful (and actually required for reasonable operation) when mounting paths like ftp://. It should not be used for paths mounted inside the native filesystem. The new filesystem will be immediately accessible in all interpreters executed by the current process. The command returns the empty string as its result. Returning a handle or token is not required despite the fact that the handler command can be used in more than one mount operation.
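Used against the proposed API, mounting and unmounting might look like the sketch below. The handler name ::myfs::handler and the mount points are illustrative, and since the filesystem command is only proposed by this TIP, the code cannot run against a stock Tcl core.

```tcl
# Mount a reflected filesystem over a native directory; the handler
# ::myfs::handler is a command prefix implementing the method API
# described in "Command Handler API" below.
filesystem mount /tmp/archive.mnt {::myfs::handler}

# Mount a non-native path as its own volume, so that it shows up in
# [file volumes]; -volume is required for paths like ftp://.
filesystem mount -volume ftp://ftp.example.org/ {::myfs::handler}

# The mount point doubles as the handle; [filesystem info] maps it
# back to the handler command prefix.
puts [filesystem info /tmp/archive.mnt]

# Tear the filesystems down again.
filesystem unmount /tmp/archive.mnt
filesystem unmount ftp://ftp.example.org/
```

Note that the same handler prefix serves both mounts; the text below explains how the instances are kept apart.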
The different instances can be clearly distinguished through the root argument given to each called method. This root is identical to the path specified here. In other words, the chosen path (= mount point) is the handle as well. We have chosen to use early binding of the handler command. See the section "Early versus late binding of the handler command" for more detailed explanations. Important note: The handler command for the filesystem resides in the interpreter performing the mount operation. This interpreter is the filesystem interpreter mentioned in the section "Interaction with threads and other interpreters".

The unmount Subcommand

    filesystem unmount path

This method unmounts the reflected filesystem which was mounted at path. An error is thrown if no reflected filesystem was mounted at that location. After the completion of the operation the filesystem which was mounted at that location is not visible anymore, and any previous filesystem accessible through this path becomes accessible again. The command returns the empty string as its result.

The info Subcommand

    filesystem info ?path?

This method will return a list of all filesystems mounted in all interpreters, if it was called without arguments. When called with a path the reflected filesystem responsible for that path is examined and the command prefix used to handle all filesystem operations is returned. An error is thrown if no reflected filesystem is mounted for that path. There is currently no facility to determine the filesystem interpreter (nor its thread).

The posixerror Subcommand

    filesystem posixerror error

This command can be called by a handler command during the execution of a filesystem operation to signal the POSIX error code of a failure. This also aborts execution immediately, behaving like return -code -1. The argument error is either the integer number of the POSIX error to signal, or its symbolic name, like "EEXIST", "ENOENT", etc.
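Inside a handler method, the posixerror subcommand just described is how failures are reported. A hypothetical deletefile method of a handler could use it as in this sketch; the in-memory state dict ::myfs::files is an assumption of the example, not part of the proposal.

```tcl
namespace eval ::myfs {
    variable files {}   ;# relative path -> contents, toy state
}

proc ::myfs::deletefile {root relative actualpath} {
    variable files
    if {![dict exists $files $relative]} {
        # Abort the whole filesystem operation with a POSIX error;
        # the symbolic name or the integer errno value both work.
        filesystem posixerror ENOENT
    }
    dict unset files $relative
    # Any result is ignored by the C layer, so return nothing.
}
```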
The internalerror Subcommand

    filesystem internalerror cmdprefix

This method registers the provided command prefix as the command to call when the core has to report internal errors thrown by a handler command for a reflected filesystem. If no such command is registered, then internal errors will stay invisible, as the core currently does not provide a way for reporting them through the regular VFS layer. We have chosen to use early binding of the handler command. See the section "Early versus late binding of the handler command" for more detailed explanations.

Modifications to the file Command

The existing command file is modified. Its method normalize is extended to recognize a new switch, -full. When this switch is specified the method performs a normal expansion of path first, followed by an expansion of any links in the last element of path. It returns the result of the expansion as its own result. The new signature of the method is:

    file normalize ?-full? path

Command Handler API

The Tcl-level handler command for a reflected filesystem has to support the subcommands listed below. Note that the term ensemble is used to generically describe all command (prefixes) which are able to process subcommands. This TIP is not tied to the recently introduced 'namespace ensemble's. There are three arguments whose meaning does not change across the methods. They are explained now, and left out of the specifications of the various methods.

root: This is always the path the filesystem is mounted at, i.e. the handle of the filesystem. In other words, it is the part of the absolute path we are operating upon which is 'outside' of the control of this filesystem.

relative: This is always the full path to the file or directory the operation has to work on, relative to root (s.a.). In other words, it is the part of the absolute path we are operating upon which is 'inside' of the control of the reflected filesystem.
actualpath: This is the exact path which was given to the file command which caused the invocation of the handler command. This path can be absolute or relative. If it is absolute then actualpath is identical to "root/relative". Otherwise it can be a sub- or super-path of relative, depending on the current working directory. And finally the list of methods and their detailed specification.

The initialize Method

    handler initialize root

This method is called first, and then never again (for the given root). Its responsibility is to initialize all parts of the filesystem at the Tcl level. The return value of the method has to be a list containing two elements, the version of the reflection API, and a list containing the names of all methods which are supported by this handler. Any error thrown by the method will prevent the creation of the filesystem and aborts the mount operation which caused the call. The thrown error will appear as error thrown by filesystem mount. The current version is 1.

The finalize Method

    handler finalize root

The method is called when the filesystem was unmounted, and is the last call a handler can receive for a specific root. This happens just before the destruction of the C level data structures. Still, the command handler must not access the filesystem anymore in any way. It is now its responsibility to clean up any internal resources it allocated to this filesystem. The return value of the method is ignored. Any error thrown by the method is returned as the error of the unmount command.

The access Method

    handler access root relative actualpath mode

This method is called to determine the "access" permissions for the file (relative). It has to either return successfully, or signal a POSIX error (see filesystem posixerror). The latter means that the permissions asked for via mode are not compatible with the file. Any result returned by the method is ignored.
Regular errors thrown by the method are reported through the registered handler for internal errors, if there is any. They are ignored if no such handler is present. The argument mode is a list containing any of the strings read, write, and exe, the permissions the file has to have for the request to succeed. write contained in mode implies "writable". read contained in mode implies "readable". exe contained in mode implies "executable".

The createdirectory Method

    handler createdirectory root relative actualpath

This method has to create a directory with the given name (relative). The command can assume that relative does not exist yet, but the directory relative is in does. The C level of the reflection takes care of this. Any result returned by the method is ignored. Errors thrown by the method are reported through the registered handler for internal errors, if there is any. They are ignored if no such handler is present.

The deletefile Method

    handler deletefile root relative actualpath

This method has to delete the file relative. Any result returned by the method is ignored. Errors thrown by the method are reported through the registered handler for internal errors, if there is any. They are ignored if no such handler is present.

The fileattributes Method

handler. Any result returned by the method is ignored for this case. Errors thrown by the method are reported through the registered handler for internal errors, if there is any. They are ignored if no such handler is present.

The matchindirectory Method

    handler matchindirectory root relative actualpath pattern types perm mac

This method has to return the list of files or directories in the path relative which match the glob pattern, are compatible with the specified list of types, and have the given permissions and mac creator/type data. The specified path is always the name of an existing directory.
Note: As the core VFS layer generates requests for directory-only matches from the filesystems involved when performing any type of recursive globbing, this subcommand absolutely has to handle such (and file-only) requests correctly or bad things (TM) will happen. Errors thrown by the method are reported through the registered handler for internal errors, if there is any. They are ignored if no such handler is present. types is a list of strings, interpreted as a set. The strings are the names of the types of files the caller is looking for. Allowed strings are: files, and dirs. The command has to return all files which match at least one of the types. If types is empty then all types are valid. perm is a list of permission strings (i.e. a set), i.e. read, write, and exe. The command has to return all files which have at least all the given permissions. If perm is empty then no permissions are required. mac is a list containing 2 strings, for Macintosh creator and type. If mac is empty then the data is irrelevant.

The open Method

    handler open root relative actualpath mode permissions

This command has to return a list describing the successfully opened file relative, or throw an error describing how the operation failed. The thrown error will appear as error thrown by the open command which caused the invocation of the handler. The list returned upon success contains at least one and at most two elements. The first element is obligatory and is always the handle of the channel which was created to allow access to the contents of the file. If the second element is present it will be interpreted as a callback, i.e. a command prefix. This prefix will always be executed as is, i.e. without additional arguments. Any required arguments have to be returned as part of the result of the call to open. This callback is fully specified in section "The channel close callback". The argument mode specifies if the file is opened for read, write, both, appending, etc.
Its value is a string in the set r, w, a, w+, or a+. The argument permissions determines the native mode the opened file is created with. This is relevant only if the mode actually requests the creation of a non-existing file, i.e. is not r. Note: it is possible to return a channel implemented through reflection here. See also section "The channel close callback" for more.

The removedirectory Method

    handler removedirectory root relative actualpath recursive

This method has to delete the given directory. The argument recursive is a boolean value. The method has to signal the POSIX error "EEXIST" if recursive is false and the directory is not empty. Otherwise it has to attempt to recursively delete the directory and its contents. Any result returned by the method is ignored. Regular errors thrown by the method are reported through the registered handler for internal errors, if there is any. They are ignored if no such handler is present.

The stat Method

    handler stat root relative actualpath

This method has to return a dictionary containing the stat structure for the file relative. Errors thrown by the method are reported through the registered handler for internal errors, if there is any. They are ignored if no such handler is present. The following keys and their values have to be provided by the filesystem:

dev: A long integer number, the device number of the path stat was called for. This number is optional and always overwritten by the C level of the filesystem reflection.

ino: A long integer number, the inode number of the path stat was called for.

mode: An integer number, the encoded access mode of the path. It is this mode which is checked by the access method.

Notes: The stat data is highly Unix-centric, especially device node, inode, and the various ids for file ownership. While the latter are not that important, both device and inode number can be crucial to higher-level algorithms.
An example would be a directory walker using the device/inode information to keep itself out of infinite loops generated by symbolic links referring to each other. Returning non-unique device/inode information will most likely cause such a walker to skip over paths under the wrong assumption of having them seen already. To prevent the various reflected filesystems from stomping over each other with regard to device ids this information will be generated by the common C level of the filesystem reflection. The inode numbers however have to be assigned by the filesystem itself. It is possible to make a higher-level algorithm depending on device/inode data aware of the problem with virtual filesystems (and has actually been done, see the Tcllib directory walker), this however is a kludgey solution and should be avoided.

The utime Method

    handler utime root relative actualpath atime ctime mtime

This method has to set the access and modification times of the file relative. The access time is set to atime, creation time to ctime, and the modification time is set to mtime. The arguments are positive integer numbers, the number of seconds since the epoch. Any result returned by the method is ignored. Errors thrown by the method are reported through the registered handler for internal errors, if there is any. They are ignored if no such handler is present.

The copyfile Method

    handler copyfile root relative_src actualpath_src relative_dst actualpath_dst

This method is optional. It has to create a copy of a file in the filesystem under a different name, in the same filesystem. This method is not for copying a file between different filesystems; that case is always handled by the core itself, which falls back to copying the file through the regular channel operations. The same fallback will happen if the method is available, but signals the POSIX error "EXDEV".

The copydir Method

    handler copydir root relative_src actualpath_src relative_dst actualpath_dst

This method is optional. It has to create a recursive copy of a directory in the filesystem under a different name, in the same filesystem.
This method is not for copying a directory between different filesystems; that case is handled by the core itself, which falls back to copying the directory file by file. The same fallback will happen if the method is available, but signals the POSIX error "EXDEV".

The rename Method

    handler rename root relative_src actualpath_src relative_dst actualpath_dst

This method is optional. It has to rename a file in the filesystem, giving it a different name in the same filesystem. This method is not for renaming a file between different filesystems; that case is handled by the core itself via its fallbacks. The same fallback will happen if the method is available, but signals the POSIX error "EXDEV".

Interaction with Threads and Other Interpreters

Virtual filesystems in Tcl are process global structures. In other words, they are seen and accessible by all interpreters, and all threads in the current process. For filesystems implemented completely at the C-level this is not that big a problem. However a filesystem implemented based on the reflection here will always be associated with a Tcl interpreter, the interpreter executing the requested filesystem operations. This cannot be avoided as only the interpreter containing the handler command also has all the state required by it. The filesystem/interpreter association also implies that any such filesystem is associated with a particular thread, the thread containing that interpreter. Filesystem requests coming from a different interpreter are handled by executing the driver functionality in the filesystem interpreter instead. In the case of requests coming from a different thread the C level part of the reflection will post specialized events to the filesystem thread, essentially forwarding the invocations of the driver. When a thread or interpreter is deleted all filesystems mounted with the filesystem mount command using this thread/interpreter as their computing base will be automatically unmounted and deleted as well. This pulls the rug out from under the other thread(s) and/or interpreter(s); this however cannot be avoided.
Future accesses will either fail because the virtual files are now missing, or will access different files provided by a different filesystem now owning the path.

Interaction with Safe Interpreters

The command filesystem is unsafe and safe interpreters are not allowed to use it. The reason behind this restriction: The ability of mounting filesystems gives a safe interpreter the ability to inject code into a trusted interpreter. The mechanism is as follows: An application using a trusted master interpreter and safe slaves for plugins reads and evaluates a file foo directly in the trusted interpreter. A malicious plugin loaded into one of the safe slaves knows about this file foo, and its actual location. It mounts a virtual filesystem using a driver which is part of its own code, over the directory foo is in. When the trusted interpreter reads foo, it does not go to the native filesystem anymore, but the mounted filesystem. In other words the driver in the slave provides the contents, the code which is executed in the trusted environment. From here on the slave can do anything it wishes in the trusted environment. Access to any other file in the directory can be passed through unchanged to the filesystem originally owning the path.

The Channel Close Callback

The channel close callback is an optional callback which can be set up by the Tcl layer when a file is opened. This is done in the open method, by returning a 2-element list. The first element is the channel handle as usual and the second element the command prefix of the callback. The command prefix is early-bound, i.e. the command will be resolved when the callback is set up. The resolution happens in the current context, and thus can be anywhere in the application. Because of this it is strongly recommended to use a fully-qualified command name in the callback. The callback is executed in the current context of the operation which caused the channel to close.
It is executed just before the channel is closed by the generic filesystem layer. The callback itself must not call close. It will always be executed as is, i.e. without additional arguments. Any required arguments have to be made part of the prefix when it is set up. The channel is still live enough at the time of the call to be read from and have its contents saved. This does assume that the filesystem does not use a reflected channel to access the contents of the virtual file. If a reflected channel is used however, the close callback is not required, as the finalize method of the channel can be used for the same purpose. Under normal circumstances the return code and any errors thrown by the callback itself are ignored. In that case errors have to be signaled asynchronously, for example by calling bgerror. Any result returned by the callback is ignored. Errors thrown by the callback are reported through the registered handler for internal errors, if there is any. They are ignored if no such handler is present. Note that it is possible that the channel we are working with here is implemented through reflection. The order in which the various callbacks are called during closing is this:

1. The channel for the file is closed via close by the VFS.
2. The channel close callback has been set up as a regular close handler, and is called now.
3. The close function of the channel driver is called, reflected into the Tcl level, and cleans it up.
4. The close operation completes.

The important point here is that the channel close callback set up by the filesystem is definitely called before the reflected channel cleans up its Tcl layer, so the assertion above about the channel being live enough to be read and saved from the filesystem Tcl layer holds even if both filesystem and channel are reflected. It also holds if reflected transformations are involved.

Early versus Late Binding of the Handler Command

We have two principal methods for using the handler command. These are called early and late binding.
Early binding means that the command implementation to use is determined at the time of the creation of the channel, i.e. when chan create is executed, before any methods are called. Afterward it cannot change: the result of the command resolution is stored internally and used until the channel is destroyed. Renaming the handler command has no effect on the binding; in other words, the system will automatically keep invoking the same implementation, now reachable under the new name. The destruction of the handler command is intercepted and causes the channel to close as well.

Late binding means that the handler command is stored internally essentially as a string, and this string is mapped to the implementation to use for each and every call to a method of the handler. Renaming the command, or destroying it, means that the next call of a handler method will fail, causing the higher-level channel command to fail as well. Depending on the method, the error message may not be able to explain the reason for that failure. Another problem: with namespaces it becomes necessary to force the usage of a specific, fixed context for the resolution.

Note that moving a different command into place after renaming the original handler allows the Tcl level to change the implementation dynamically at runtime. This however is not really an advantage over early binding, as an early-bound command can be written such that it delegates to the actual implementation, and that can then be changed dynamically as well.

Limitations

For now this section documents the existing limitations of the reflection. The code of the package TclVfs has only a few limitations. One subtlety one has to be aware of is that mixing case-(in)sensitive filesystems and application code may yield unexpected results.
For example, when mounting a case-sensitive virtual filesystem into a case-insensitive system (like the standard Windows or MacOS filesystems) and then using it with code relying on case-insensitivity, problems will appear when accessing the virtual filesystem. Note that application code relying on case-insensitivity will not work under Unix either, i.e. it is inherently non-portable, and should be fixed.

The C APIs for the methods link and lstat are currently not exposed to the Tcl level. This may be done in the future to allow virtual filesystems implemented in Tcl to support the reading and writing of links. Note: exposure of links may require path normalization and native path generation, something the TclVfs implementation does not support. This limitation regarding any type of link, hard or soft, is quite deeply entrenched in the TclVfs code.

The public C-API filesystem function Tcl_FSUtime is Unix-centric; its main data argument is a struct utimbuf *. This structure contains only a single value for both atime and ctime. The method utime of the handler command was nevertheless defined to take separate values for access and creation times, in case this changes in the future.

The Tcl core VFS layer was written very close to the behavior of regular filesystems and has no way to transport regular Tcl error messages through it. This is the reason for the introduction of the internal error callback. This problem cannot be fixed within the 8.5 line, as it requires more extensive changes to the public API. Note that when such changes are done, the reflection API has to change as well, as it then allows the direct passing of errors. At that point the C layer of the reflection will have to support both this and the new version of the API.

Examples of Filesystems

The filesystems provided by TclVfs are all examples.
- webdav
- ftp sites
- http sites
- zip archive
- tar archive
- metakit database
- namespace/procedures as filesystem
- widget fs

Some examples can be found on the Tcler's Wiki; see pages referring to:

- Encryption
- Compression
- Jails
- Quotas

Reference Implementation

The package TclVfs can serve as the basis for a reference implementation. The final reference implementation will be provided at SourceForge, as an entry in the Tcl Patch Tracker. The exact url will be added here when it becomes available.

Comments suggest it might be a good idea to modify the 'file attributes' callback to make it more efficient for vfs writers, especially across a network and when vfs's are stacked. Currently one needs to make multiple calls to accomplish anything.

This document has been placed in the public domain.
https://core.tcl-lang.org/tips/doc/trunk/tip/228.md
This tutorial is an extension to my article on building VC clients for VB ActiveX DLLs. Having read that article, the question of handling events naturally comes to the reader's mind. Here I'm going to show you how to handle custom events generated in VB ActiveX components in a Visual C++ client. First we are going to build an MFC client and then turn our attention to creating an ATL client. Doing all of this is not tough at all, as you shall see. A framework like MFC makes it very easy for the programmer to receive event notifications from an ActiveX code component.

A word before we move on: I assume that the reader is conversant with VB ActiveX technology, Automation, MFC, COM and IDL (Interface Definition Language).

A VB ActiveX component is a unit of code that follows the COM specification for providing objects. It exposes much of its functionality through one or more interfaces. These software components have a high reusability factor. As the component needs to communicate with the client, they can be implemented as in-process (read DLL) components or out-of-process (read EXE) components. In an in-process component (ActiveX DLL), the communication between the server and the client is implemented in the address space of the client application. Though this makes them faster than ActiveX EXE components, which need to be loaded in their own address space, the biggest drawback is that a faulty DLL will crash the client, and in turn, the object. That tends to bring everybody down. :-)

An event can be simply defined as something that occurs during the application's lifetime. Events allow a class to communicate with other classes and programs. A component raises an event to notify the client about the completion of some task. This event can be caught by the client, and the client can respond to the event as it sees fit. Custom events provide event-handling capabilities to classes and components.
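Stripped of all COM machinery, the pattern boils down to this: a component keeps a list of interested parties and calls each of them when it raises an event. A minimal, language-neutral sketch (Python here, purely illustrative — the rest of this article does the same thing with VB and COM connection points):

```python
# Minimal event source/sink sketch -- illustrative only, no COM involved.

class EventSource:
    def __init__(self):
        self._sinks = []

    def advise(self, sink):            # a client registers its interest
        self._sinks.append(sink)

    def fire_event(self, *args):       # the component raises the event
        for sink in self._sinks:
            sink(*args)                # each sink responds as it sees fit

received = []
source = EventSource()
source.advise(lambda msg: received.append(msg))
source.fire_event("task done")
# received == ["task done"]
```

COM's connection-point mechanism adds type libraries, dispatch interfaces and reference counting on top, but the registration-then-notification shape is the same.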
The object that generates the event is known as the event source, and the object that responds to an event is known as the event sink. First, we are going to fire a custom event from a VB ActiveX DLL and handle the notification in an MFC client. In other words, we have to build an event sink in an MFC client that responds to events generated by a VB ActiveX component, which acts as the event source. The code in the event sink is executed when the event is fired.

To declare a custom event called evtTaskDone in VB, use the Event keyword in the General Declarations section of a class module:

```vb
Public Event evtTaskDone()
```

This custom event can then be fired by using the RaiseEvent statement:

```vb
RaiseEvent evtTaskDone
```

With all that in mind, we now roll up our sleeves and dig into some code. First, we are going to build a VB ActiveX DLL that is the source of the event. Fire up VB and in the New Project dialog, choose ActiveX DLL and click Open. VB creates a new DLL component project called Project1, having a single class Class1. Go to Project->Properties and set the Project Name as VBEvents. In the Project Explorer View, right-click on Class1 and choose to remove it from the project.

Note: We could also have chosen to use this class, but then we wouldn't have seen the VB Class Builder.

Now right-click again on the Project Explorer View and add a single Class Module to the project. In the Add Class Module dialog, choose VB Class Builder and click on Open. In Class Builder, go to File->New->Class and add a new class called clsEventSrc. Accept the default values and click on OK. Next, go to File->New->Event and add a single event to this class called evtNotify. Update all the changes to the project and close the Class Builder window.
Next, click on Tools->Add Procedure and add a new procedure called prcFireEvent to the clsEventSrc class you just created:

```vb
Public Sub prcFireEvent()
    RaiseEvent evtNotify
End Sub
```

The procedure simply fires our event. Save everything and go to File->Make to build VBEvents.dll and register the component.

Our MFC client is a plain old AppWizard-generated dialog-based application with additional Automation support. As usual, open VC++ 6.0 and create a new MFC AppWizard EXE project called MFCClient. Hit Build to build the project, and take a break from all that hard work!

The OLE/COM Object Viewer is a nifty little tool that is shipped along with Visual C++ 6.0. It will help us generate the IDL file for the DLL component. Go to Tools->OLE/COM Object Viewer and open this tool. Next, in OLE/COM Object Viewer, click on File->View Typelib and navigate to the VBEvents.dll file that we have previously built. Ready for some magic? Click on Open and open up the ITypeLib Viewer. Can you view the IDL file? Whoa! Save the file through File->Save As, as VBEvents.IDL, and close the tool. We have no need for it at present.

Next, in our VC++ project, add this IDL file to the project. In FileView, right-click on the IDL file and choose Settings. In the MIDL tab, set the Output header file name to VBEvents.h and the UUID filename to VBEvents_i.c. Also deselect the MkTyplib compatible option. Save everything and, in FileView, right-click on the VBEvents.IDL file and choose Compile. This will build the type library and generate the necessary files.

Examine the MIDL-generated VBEvents_i.c file. It contains all the UUID definitions that the client can use to build a sink object. In VBEvents.h, notice the dual interface _clsEventSrc. The component's dispinterface __clsEventSrc is identified by DIID___clsEventSrc. This is the event source for our custom event.
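A dispinterface routes calls by numeric dispatch ID (DISPID) rather than by vtable slot, which is why the sink we build next maps DISPID 1 to a handler. A toy dispatcher makes the idea concrete; this Python sketch is illustrative only (real COM does this through IDispatch::Invoke, and the class and method names below are invented for the example):

```python
# Toy model of dispinterface dispatch: calls arrive as (dispid, args) and
# the sink routes them through a map, much like MFC's DISPATCH map does.
# Illustrative only -- real COM routes through IDispatch::Invoke.

class Sink:
    def __init__(self):
        # DISPID -> handler: the moral equivalent of DISP_FUNCTION_ID entries
        self._dispatch_map = {1: self.evt_notify}
        self.calls = []

    def invoke(self, dispid, *args):
        handler = self._dispatch_map[dispid]   # an unknown DISPID would raise
        return handler(*args)

    def evt_notify(self):
        self.calls.append("evtNotify")

sink = Sink()
sink.invoke(1)        # the source fires the event registered as DISPID 1
# sink.calls == ["evtNotify"]
```

This name-by-number routing is what lets the source fire events without any compile-time knowledge of the sink's layout.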
The next step is to add a sink object to connect to the source event. Fortunately for us, MFC makes the task of building an event sink as easy as 1-2-3 (and, well, 4-5-6). With a couple of MFC macros, you can delegate much of the intricacy involved in building a sink to MFC. First, add a new CCmdTarget-derived class to the project called MFCSink. In the ClassWizard, choose to select the Automation option. This is our sink object with Automation support.

Then import the server's typelib in the client with #import. If you haven't read my previous article, read it here. Otherwise, go right ahead and use code like:

```cpp
#import "VBEvents.dll" rename_namespace("MFCClient")
using namespace MFCClient;
```

There's nothing new to this code. While you are there in stdafx.h, also #include the afxctl.h file.

Next, open MFCSink.cpp and modify the INTERFACE_PART macro so that the second parameter (IID) is the IID of the event source, in our case DIID___clsEventSrc. Your interface map should look like:

```cpp
BEGIN_INTERFACE_MAP(MFCSink, CCmdTarget)
    INTERFACE_PART(MFCSink, DIID___clsEventSrc, Dispatch)
END_INTERFACE_MAP()
```

Next, in the DISPATCH map of the MFCSink class, add a DISP_FUNCTION_ID macro for each of the events defined in the source interface that you want to handle. My DISPATCH map looks like:

```cpp
BEGIN_DISPATCH_MAP(MFCSink, CCmdTarget)
    //{{AFX_DISPATCH_MAP(MFCSink)
    DISP_FUNCTION_ID(MFCSink, "evtNotify", 1, evtNotify, VT_EMPTY, VTS_NONE)
    //}}AFX_DISPATCH_MAP
END_DISPATCH_MAP()
```

In ClassView, right-click on the IMFCSink interface and add a single method, evtNotify(). Notice that, as per our DISPATCH map, this method takes no parameters and returns void.
Our implementation of this method displays a simple MessageBox and looks like:

```cpp
void MFCSink::evtNotify()
{
    // TODO: Add your dispatch handler code here
    AfxMessageBox("Event notification handled in MFC client");
}
```

All that remains for us to do is hook up and terminate the connection appropriately in the client code. MFC makes this job very easy with AfxConnectionAdvise() and, correspondingly, AfxConnectionUnadvise(). If you are not familiar with these functions, now would be a good time to look up their documentation.

Moving on, declare three variables in the dialog class header:

```cpp
_clsEventSrc *m_pSrc;
MFCSink *m_pSink;
DWORD m_dwCookie;
```

The first is a pointer to the interface through which we shall fire the event. The second is a pointer to the sink object. Lastly, the m_dwCookie variable is a cookie that identifies the connection that has been established; we'll need it when we want to disconnect from the event source. In our case, we set this to 1 in the dialog class constructor. Don't forget to #include the VBEvents_i.c file. The code to establish the connection in the dialog's OnInitDialog() member function looks like:

```cpp
CoInitialize(NULL);                 // initialize the COM system
m_pSink = new MFCSink;              // create an instance of the sink object

// create the source object
HRESULT hr = CoCreateInstance(CLSID_clsEventSrc, NULL, CLSCTX_INPROC_SERVER,
                              IID_IDispatch, (void**)&m_pSrc);
if (SUCCEEDED(hr))
{
    LPUNKNOWN pUnk = m_pSink->GetIDispatch(FALSE);
    // establish the connection
    if (AfxConnectionAdvise(m_pSrc, DIID___clsEventSrc, pUnk, FALSE, &m_dwCookie))
        return TRUE;
}
return FALSE;
```

As we have set up a connection, we also need to disconnect when the dialog is destroyed. We can do that in the dialog's OnDestroy(). In ClassView, right-click the dialog class and add a Windows message handler to handle WM_DESTROY messages.
In the handler, add the following code to successfully disconnect:

```cpp
LPUNKNOWN pUnk = m_pSink->GetIDispatch(FALSE);
AfxConnectionUnadvise(m_pSrc, DIID___clsEventSrc, pUnk, FALSE, m_dwCookie);
if (m_pSink != NULL)
{
    delete m_pSink;   // the sink destructor must be public or the compiler will complain
    m_pSink = NULL;
    m_pSrc = NULL;
}
```

With everything in place, we now need to fire the event. Simply call:

```cpp
m_pSrc->prcFireEvent();
```

anywhere in your code where you want to fire the event.

Building a pure ATL client means a little more typing than the MFC client, but it's a lot easier than creating connectable objects in raw C++. Remember that the sink has to support IDispatch, so that means, at a minimum, implementing 7 methods (3 for IUnknown and 4 for IDispatch). To our relief, ATL provides the IDispEventSimpleImpl<> and IDispEventImpl<> template classes that help us quickly create dispinterface sink objects. There is a host of information and code available for creating ATL sinks for dispinterface-based source objects that you might want to look up. Relevant Microsoft KB articles: Q181277, Q181845 and Q194179.

Back to the task at hand: to make our client very efficient, we'll use an IDispEventSimpleImpl-derived class. First create a new ATL/COM AppWizard-generated EXE project called ATLClient. To this add a dialog called ATLClientDlg. The dialog has two buttons, one to set up the connection and the other to fire the event. Next, import the server's typelib with #import as described in the MFC client section above.
Moving on to the sink object, the declaration looks like:

```cpp
#define IDC_SRCOBJ 1

// No parameters: return type VT_EMPTY, parameter count 0
static _ATL_FUNC_INFO OnEventInfo = {CC_STDCALL, VT_EMPTY, 0};

class CSinkObj : public IDispEventSimpleImpl<IDC_SRCOBJ, CSinkObj,
                                             &__uuidof(__clsEventSrc)>
{
public:
    HWND m_hWndList;

    CSinkObj(HWND hWnd = NULL) : m_hWndList(hWnd)
    {
    }

    BEGIN_SINK_MAP(CSinkObj)
        // Make sure the event handlers have the __stdcall calling convention
        SINK_ENTRY_INFO(IDC_SRCOBJ, __uuidof(__clsEventSrc), 1, evtNotify, &OnEventInfo)
    END_SINK_MAP()

    // Event handler
    HRESULT __stdcall evtNotify()
    {
        // output a string to the debug output
        TCHAR buf[80];
        wsprintf(buf, "Sink : Notification Event Received");
        AtlTrace("\n%s", buf);
        return S_OK;
    }
};
```

All I have done is add a sink map to the IDispEventSimpleImpl-derived class and then add a sink entry corresponding to each event of the source interface that I would like to handle. The _ATL_FUNC_INFO structure describes the parameters passed to the event handler. In our event handler, however, we do nothing fancy; a simple debug message will do.

In the dialog class, add the variables:

```cpp
private:
    CSinkObj* m_pSink;
    _clsEventSrc *pEvent;
```

The dialog class's OnConnect() looks like:

```cpp
LRESULT OnConnect(UINT, WORD, HWND hWndCtrl, BOOL& bHandled)
{
    m_pSink = new CSinkObj(hWndCtrl);
    HRESULT hr = CoCreateInstance(CLSID_clsEventSrc, NULL, CLSCTX_INPROC_SERVER,
                                  __uuidof(_clsEventSrc), (void**)&pEvent);
    if (SUCCEEDED(hr))
    {
        m_pSink->DispEventAdvise(pEvent);
    }
    return hr;
}
```

As before, call:

```cpp
pEvent->prcFireEvent();
```

when you want to fire the event. Don't forget to use DispEventUnadvise() to disconnect when the dialog is destroyed.
That's it! We have built both an MFC and an ATL client that respond to events generated by a VB ActiveX DLL code component. The code and project files were built with Visual C++ 6.0 SP3 under Win95. I have included another project, VBTimer, consisting of a VB ActiveX DLL and the respective ATL client project files. This project does something a little more sophisticated than our first VB DLL, which fires an event without any parameters: the ActiveX DLL implements a VB Timer that fires an event with a single parameter (the timer count) after every 1-second interval. This event is caught by the ATL client, which displays the timer count in the output window.

References: NIIT Technical Reference; Microsoft KB Articles Q181845, Q181277 and Q194179.
https://www.codeproject.com/Articles/1100/Handling-VB-ActiveX-Events-in-Visual-C-client?fid=2146&df=90&mpp=10&sort=Position&spc=None&select=865127&tid=1569029
hx2dart

A proof of concept dart target for haxe.

Quick Start Guide

- Download and extract the Dart SDK.
- Open build.hxml and replace "$DART_SDK" with the location of the extracted Dart SDK.
- Compile and run the demo: haxe build.hxml

As long as everything worked, there should be a file named "hxdart.dart" in the bin folder, with output from the file traced to the terminal. Please note this is a very early proof of concept that has not been tested widely, so it may not work on all environments at this stage. To view a sample Dart web app, see samples/helloworld.

More Info

Folder Structure

- src/ - contains files required for Dart generation along with standard Haxe classes. The contents of this folder are intended to end up in the "std" folder of a Haxe install.
- src/haxe/macro/ - contains macro classes required for Custom JS Generation.
- src/dart/_std/ - contains versions of the Haxe standard files required for the Dart target.
- demo/ - contains demo code used to test out features, as I haven't got to writing any tests yet.
- bin/ - where the generated Dart file will be once the project is compiled.

hxjs2dart - another proof of concept intended to be an external haxelib which cross-compiles projects using the haxejs APIs to the equivalent for Dart.

hxdart2js - the opposite of hxjs2dart; hxdart2js is also intended to be an external haxelib which cross-compiles Haxe projects using the Dart API to the equivalent for JavaScript.

Why target dart?

- Performance - Although it hasn't been officially released yet, there should be major performance improvements for web apps running in the Dart VM compared to the equivalent JavaScript project. Apparently the VM has also been architected to allow it to keep improving in performance at a rate which the same engineers believe they will no longer be able to achieve for the V8 JavaScript engine.
- The Dart VM is a language VM (no byte code) and the syntax is similar to Haxe, with bits of other languages already targeted by Haxe mixed in. Most of the code currently being used to generate Dart is directly ported from haxe.macro.Printer, although it will change more as things progress. For these reasons a Dart target should be less complex and a relatively easy win compared to other targets.
- Along with the potential performance improvements achieved by targeting the Dart VM in the browser, the standalone Dart VM has potential to be very useful.
- Why not? Isn't Dart a competitor with Haxe? Not really. Both languages do compile to JavaScript, but that's about the only area they could be considered competitors, in my opinion. I'm not particularly interested in dart2js compilation at the moment because Haxe already has a really good JS target, but I think the Dart VM, once embedded in the Chrome browser, could be a really important target for Haxe in the future.

Why compile using the JS target instead of a new dart target?

- I'm not very good at OCaml yet (although I now understand a lot more about Haxe after reading up on OCaml and hacking around in the Haxe source).
- This will allow anyone with Haxe 3 installed to generate Dart, as well as edit and hopefully contribute to the generators using Haxe macro and pattern-matching syntax, without having to compile a new version of Haxe from source.
- It should provide faster iteration of changes, particularly as Dart hasn't been officially released yet and there may still be breaking changes.
- Dart is also a dynamic language with optional types, and there are no negative performance implications in not declaring types (although declaring them will hopefully be an included option for debugging purposes soon).

Why does the generated dart look weird?

This is just a proof of concept, and at the moment the only thing I'm really concerned with is getting Dart code generated from Haxe to run in the Dart VM.

Why is there only one .dart file?
Dart has a slightly different system for handling the import of code from different modules or libraries while avoiding namespace clashes. The current hx2dart implementation, which joins the parts of a path using "_", ensures there are no namespace clashes for classes, enums or interfaces of the same name, without having to worry about importing from multiple modules. Anyone can change this by editing the DartGenerator.hx code, and it would be great to compare some different solutions to this and potentially offer different options at compile time depending on the intended use of the generated Dart. At the very least, I think any final Dart target should have a custom generator option similar to the JS target's, to allow anyone to customise this type of thing. The dart2js compiler includes the option to merge all Dart source files into one before deployment and can minify the class names anyway, so there is little difference in the final output on this issue if the hx2dart output is also minified.

What will be involved in getting a current haxe project to run in dart?

- First we need stable haxe2dart generation for the core Haxe language features and standard libraries.
- Externs will be needed for the core Dart libraries such as "dart:html".
- Next it will depend on how much platform-specific code the project uses. For current projects targeting JS there is another proof of concept, hxjs2dart, which is intended to abstract any differences between the JS and Dart APIs without needing to manually port any code.
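The "_"-joining scheme described above is easy to sketch. The helper below is hypothetical (hx2dart's actual generator lives in DartGenerator.hx, written in Haxe); it just shows why the scheme avoids clashes for types with the same short name:

```python
# Hypothetical sketch of the "_"-joined naming scheme described above;
# the real implementation is the Haxe code in DartGenerator.hx.

def mangle(haxe_path):
    """Map a dotted Haxe type path to a single flat Dart identifier."""
    return "_".join(haxe_path.split("."))

# Two types sharing the short name "StringMap" no longer collide:
a = mangle("haxe.ds.StringMap")    # "haxe_ds_StringMap"
b = mangle("my.lib.StringMap")     # "my_lib_StringMap"
```

Since each mangled name encodes the full path, no imports are needed and the names stay unique as long as the original paths are.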
https://bitbucket.org/fzzr/hx2dart
Note: This document is a living document and may not represent the current implementation of Flux. Any section that is not currently implemented is commented with a [IMPL#XXX] where XXX is an issue number tracking discussion and progress towards implementation.

An assignment binds an identifier to a variable or function. Every identifier in a program must be assigned. An identifier may not change type via assignment within the same block. An identifier may change value via assignment within the same block.

Flux is lexically scoped using blocks:

- The scope of an option identifier is the options block.
- The scope of a preassigned (non-option) identifier is in the universe block.
- The scope of an identifier denoting a variable or function at the top level (outside any function) is the package block.
- The scope of the name of an imported package is the file block of the file containing the import declaration.
- The scope of an identifier denoting a function argument is the function body.
- The scope of a variable assigned inside a function is the innermost containing block.

An identifier assigned in a block may be reassigned in an inner block, with the exception of option identifiers. While the identifier of the inner assignment is in scope, it denotes the entity assigned by the inner assignment.

Option identifiers have default assignments that are automatically defined in the options block. Because the options block is the top-level block of a Flux program, options are visible and available to any and all other blocks.

The package clause is not an assignment. The package name does not appear in any scope. Its purpose is to identify the files belonging to the same package and to specify the default package name for import declarations.

To be implemented: IMPL#247 Add package/namespace support.

Variable assignment

A variable assignment creates a variable bound to the identifier and gives it a type and value.
When the identifier was previously assigned within the same block, the identifier now holds the new value. An identifier cannot change type within the same block.

```
VarAssignment = identifier "=" Expression
```

Examples of variable assignment:

```
n = 1
n = 2
f = 5.4
r = z()
```
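The shadowing rule above — an inner block may rebind an identifier, and while the inner binding is in scope it hides the outer one without modifying it — behaves like ordinary lexical scoping. An illustrative sketch in Python (Flux-specific rules such as the single options block and the no-type-change-per-block restriction are not modeled here):

```python
# Illustrative lexical-scoping sketch only; Flux's option identifiers and
# its "no type change within a block" rule are not modeled.

n = 1          # outer (package-level) binding

def inner():
    n = 2      # inner block shadows the outer n...
    return n

result = inner()
# result == 2, while the outer n is still 1
```

The inner assignment creates a new binding visible only inside its block; the outer binding is untouched, matching the "denotes the entity assigned by the inner assignment" wording above.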
https://docs.influxdata.com/flux/v0.7/language/assignment-scope/